What's on the table this week.
A "now" page. What I'm reading, thinking, building, watching. Snapshot, not a blog post. Updated when there's something worth saying.
Last updated: 15 May 2026
Building
Mid-build on this site, in fact. Ditched a six-year-old WordPress placeholder for an Eleventy static stack with a working blog pipeline behind it.
Also in the workshop: extending my LLM brand visibility audit methodology to cover Google AI Overviews citation tracking more rigorously. The interesting question right now isn't "does AI know about my client?". It's "when AI cites the answer, who gets the link?". That's where the meaningful traffic of the next 24 months is decided.
Reading
A short, biased list:
- AirOps on "the fan-out effect": what actually happens between a query and a citation. The clearest map I've seen of the gap GEO has to close.
- Steve Toth on ranking in ChatGPT Deep Research. Honestly, everything Steve publishes on SEO Notebook and AI Notebook is gold; this piece on refinements is a good place to start.
- "Why CLIs beat MCP for AI agents", a contrarian, practical take on agent tooling that matches what I keep finding in my own stack.
- "Harness engineering", on how the scaffolding around models now does more of the real work than the models themselves.
- Pedro Matias on LLM SEO hype, the anti-doomerism take I keep coming back to.
Thinking about
How small the gap is, right now, between "I have a working agent stack" and "I have an LLM-visible brand". Most of my consulting hours go on the first half of that journey for clients. The interesting work is recognising they're the same project.
Also: how unusable most "AI strategy" decks are. There's a real opportunity for someone to write the practical, opinionated counter, the one that says what to actually build, in what order, with what budget. The kind of thing a CMO could hand to a CTO without losing them.
Watching
- The shape of Google's AI Overviews monetisation experiments in the US. They're already shifting which CTR cohorts get traffic.
- Perplexity Pages / Comet. Perplexity is doing more interesting product work than it gets credit for, and the page-publishing angle has GEO implications people are sleeping on.
- Anthropic's Skills API, for the agent-tooling side. The next plateau of agent capability is mostly about better composability, not better models.
Where I'm spending too much time
Tuning. Both site-tuning (you should see what the canvas in the hero used to look like) and tuning my agent stack (adjusting prompts, swapping models, watching for drift). Useful, but I notice when a Tuesday afternoon disappears into it.
Open offer
If you've got a brand and you're wondering what AI actually says about you right now, I'll run a selection of tailored, customer-relevant prompts through ChatGPT, Claude, Gemini and Perplexity and send you back the raw output, no commitment. It's quick. It's free. It's a strong indicator of whether we should talk.
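For the curious, that audit loop is small enough to sketch. This is a simplification rather than my actual tooling: the model names and the `ask` callables below are placeholders for real SDK wrappers around each provider.

```python
def run_brand_audit(prompts, models):
    """Run each tailored prompt through each model; collect the raw output.

    `models` maps a model name to a callable that takes a prompt string
    and returns the model's raw text response.
    """
    results = {}
    for name, ask in models.items():
        results[name] = [{"prompt": p, "raw": ask(p)} for p in prompts]
    return results


# Stand-in for a real SDK wrapper (ChatGPT, Claude, Gemini, Perplexity).
def fake_model(prompt):
    return f"[model answer to: {prompt}]"


audit = run_brand_audit(
    ["What does the brand do?", "Who are the brand's competitors?"],
    {"chatgpt": fake_model, "claude": fake_model},
)
```

The point of keeping the output raw, rather than summarised, is that the verbatim answers are the deliverable: you can see exactly what each model says, and who it cites, before anyone editorialises.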