AI Automation · Brighton

An AI automation agency without the agency.

Most "AI automation agencies" are three people in matching hoodies doing an impression of McKinsey. I'm one senior operator with seventeen years of process discipline and an agent stack I run my own business on. Not a demo, an actual dependency. The deliverable isn't "AI-powered transformation". It's hours. Specifically, the hours you currently spend doing things a machine should have quietly handled while you were asleep.

17 yrs

Spent learning which processes are worth automating and which just feel important

10x

Output per operator-hour, against doing it the way you do it now

2 wks

From first proper conversation to something genuinely running

0

Account managers positioned thoughtfully between you and the work

What you'd actually buy

The work, not the deck.

Plenty of agencies will sell you a sixty-slide AI strategy, which is a wonderful thing to own if your goal is to own sixty slides. I'd rather build something that works by the end of week one. These are the engagements I'm taking on right now.

A · Audit

AI automation audit

A two-week paid engagement. I shadow the workflows you quietly suspect are wasting everyone's time, score them on hours saved against complexity, and hand back a prioritised backlog with working prototypes for the top three. You leave with a real plan, even if you decide to stop there and never speak to me again.

  • Process mapping across team, tools and rituals
  • Opportunity scoring (hours saved × confidence)
  • Three working prototypes by the end
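The scoring step above is deliberately simple. As a sketch (the dataclass, workflow names and numbers here are all invented for illustration), it amounts to ranking candidates by expected hours returned, discounted by how confident we are in the estimate:

```python
# Sketch of the audit's opportunity scoring: rank candidate workflows
# by hours saved x confidence. All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    hours_saved_per_month: float  # estimated from shadowing the process
    confidence: float             # 0.0-1.0, how sure we are the estimate holds

def score(w: Workflow) -> float:
    """Expected hours back per month, discounted by uncertainty."""
    return w.hours_saved_per_month * w.confidence

candidates = [
    Workflow("Weekly competitor digest", hours_saved_per_month=8, confidence=0.9),
    Workflow("Monthly reporting assembly", hours_saved_per_month=12, confidence=0.7),
    Workflow("Full sales-call summarisation", hours_saved_per_month=20, confidence=0.3),
]

# Highest expected return first; this ordering is the prioritised backlog.
backlog = sorted(candidates, key=score, reverse=True)
```

Note how the biggest headline number (20 hours) drops to the bottom once confidence is priced in, which is exactly the point of scoring rather than guessing.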
B · Build

Custom agent & workflow build

The part where things actually get built. I implement the automation with whichever LLMs and platforms genuinely fit the job: research agents, content pipelines, reporting bots, internal copilots. You get working software, documentation a human can follow, and a hand-off your team can maintain without a séance.

  • Autonomous research and monitoring agents
  • Multi-step content production pipelines
  • Reporting workflows pulling from GA, GSC, CRM, LLMs
  • Internal "ask the docs" copilots
C · Operate

Light-touch retainer

For teams who'd rather not spend their evenings babysitting an agent. I keep the things alive, watch for the ways they fail, and iterate as your needs move. Capped hours, monthly reporting, and no auto-renewal lying in wait.

  • Monthly performance & failure-mode reporting
  • Iteration on prompts, models and integrations
  • Roadmap reviews on new automation opportunities
D · Train

Team workshops

For teams who want to own this themselves. Half-day or full-day sessions on agent design, prompt engineering, automation hygiene, and the underrated skill of telling a genuine opportunity apart from a shiny distraction. You build things on the day, actual things, rather than watching me build them.

  • Live agent-building sessions
  • Tool selection clinics (n8n, Make, LangGraph, custom)
  • Internal automation discovery facilitation
Four phases

How an engagement actually runs.

No multi-month onboarding. No fifteen kick-off calls to align on the alignment. Discovery starts in week one; something is shipping inside a fortnight.

01 · Discover

Map the real workflow

I sit with the team, read the docs, and watch the work actually happen, which is usually nothing like the work as described. We score the opportunities on hours saved and confidence, then quietly bin anything that only looked good as a bullet point.

02 · Design

Architect the agents

The right model, the right orchestration, the right places to plug in. I sketch the agent's inputs, outputs and, more importantly, its failure modes. Scope gets signed off before a line of code is written, because surprises are only fun in fiction.

03 · Deploy

Ship working software

Build, test, integrate, document. The automation runs in production, rather than in the safe and flattering light of a demo. Your team can watch it work, inspect its decisions, and run it themselves.

04 · Operate

Monitor, iterate, compound

Models change. Processes change. Light-touch support keeps the automations honest and the roadmap moving. No commitment beyond the point where I'm still useful.

Where it pays off

Things I've actually shipped, including for myself.

I run my own business on this stack, which is a fairly useful tell. If I wouldn't trust it in production for my own work, I'm not going to sell it to you with a straight face.

Autonomous content pipelines

Multi-agent chains that take a topic brief, disappear off to do the research, come back with a long-form draft and hero imagery, and drop the lot into the tools your team already has open. A content backlog stops being a guilt pile and starts being a production line.

Scheduled production agents

Pipelines that run on a schedule without being asked, pull the data, do the analysis, and deliver the output into the channels your team genuinely checks. Production-grade work, not a dashboard nobody opens after week two.

LLM brand visibility audits

A repeatable method that runs a brand and its rivals through GPT, Claude, Gemini and Perplexity, scores how accurately each one describes you, and tracks citation share month over month. SEO for the machines quietly replacing the search box.
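The scoring core of that method can be sketched in a few lines. The responses below are invented stand-ins; in a real audit they would come back from each model's API for the same buying-intent prompt, and the brand list would be yours:

```python
# Hedged sketch of tallying citation share across model answers.
# Brands and response text are hypothetical placeholders.

from collections import Counter

BRANDS = ["Acme", "Rivalco", "Widgetly"]  # hypothetical brand set

responses = {
    "gpt": "For this use case most teams pick Acme, though Rivalco is close.",
    "claude": "Acme and Widgetly both fit; Acme has the stronger integrations.",
    "gemini": "Rivalco is the usual default here.",
}

def citation_counts(responses: dict[str, str]) -> Counter:
    """Count brand mentions across every model's answer."""
    counts = Counter()
    for text in responses.values():
        for brand in BRANDS:
            counts[brand] += text.count(brand)
    return counts

def citation_share(responses: dict[str, str]) -> dict[str, float]:
    """Each brand's fraction of all brand mentions (the tracked metric)."""
    counts = citation_counts(responses)
    total = sum(counts.values()) or 1
    return {brand: counts[brand] / total for brand in BRANDS}

shares = citation_share(responses)
```

Run monthly against the same prompt set, the share numbers become a trend line; the accuracy scoring sits on top of this as a separate, model-graded pass.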

Internal "ask the docs" copilots

For teams slowly drowning in Notion. A retrieval agent over your internal docs, channels and meeting notes, so "what did we actually decide about X?" becomes a question with an answer, rather than a thirty-minute archaeological dig through Slack.

Competitor & market monitoring

A scheduled agent that keeps an eye on competitor sites, social profiles and SERPs, notices the changes that matter, and files a weekly digest. It does, uncomplainingly, the thing an analyst used to dread every Monday morning.
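The "notices the changes that matter" part rests on a boring but reliable trick: fingerprint each page and compare against last run. A minimal sketch (the fetcher here is a stub; a real agent would fetch the live page and strip boilerplate before hashing):

```python
# Illustrative core of a change-detecting monitor. fetch_page is a stub;
# a real run would GET each URL on a schedule and clean the HTML first.

import hashlib

def fetch_page(url: str) -> str:
    """Stub standing in for a real HTTP fetch + boilerplate strip."""
    return f"pretend content for {url}"

def content_fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def detect_changes(urls: list[str], state: dict[str, str]) -> list[str]:
    """Return URLs whose content changed since last run; update state in place."""
    changed = []
    for url in urls:
        fp = content_fingerprint(fetch_page(url))
        if state.get(url) != fp:
            if url in state:  # first sighting seeds state, it isn't a "change"
                changed.append(url)
            state[url] = fp
    return changed
```

Everything interesting (deciding which changes matter, writing the digest) happens downstream of this loop, usually with an LLM summarising the diffs.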

Automated reporting

Monthly reports that assemble themselves from analytics, search consoles, ad platforms and LLM audits into something resembling a narrative. Nobody, anywhere, touching a spreadsheet at midnight.

Honest comparison

Why a solo operator beats most "AI automation agencies".

Not because solo is magically superior. For the kind of project most teams actually need, the agency model is mostly overhead wearing a lanyard.

| | Typical AI automation agency | Dog on the Table |
| --- | --- | --- |
| Who actually does the work | Mostly junior implementers, supervised by a senior consultant who appears on calls | A 17-year senior operator. Every line of the build. |
| Time to first shipped automation | 6–10 weeks, once onboarding, kickoffs and alignment sessions are out of the way | Around 2 weeks from first conversation to something running |
| Platform commitment | Locked into the agency's preferred stack, usually the one they happen to know | Platform-agnostic. n8n, LangGraph, custom code. Whatever the job honestly needs. |
| Account managers between you and the work | 2–3 (PM, CSM, occasionally a strategist for emotional support) | Zero. You email me. I reply. Novel, I know. |
| Demonstrated working agent stack | Case studies, beautifully rendered, in slides | A production agent stack I run my own business on, every day. |
| Lock-in / minimum commitment | 6–12 month retainers, ideally signed before you've seen anything | Per-project. Optional light retainer, no auto-renewal trapdoor. |
FAQ

The questions I get asked most.

What does an AI automation agency actually do?

The useful version designs and builds autonomous (or nearly autonomous) workflows that take repetitive operational, content and research tasks off your team's plate: agent chains that go research → analysis → output, reporting pipelines, content production systems, and tidy integrations between the tools you already pay for. Everything beyond that is usually just consulting with an AI sticker on the lid.

How is this different from a traditional automation consultant?

Traditional automation consultants tend to live inside a single platform (Zapier, Make, n8n), chaining triggers together. I'm platform-agnostic and lead with AI-native agents: workflows that read documents, summarise meetings, run research on a schedule, or produce a first draft. The output thinks a little, rather than just firing in sequence.

Often the right answer is a mix. Agents for the cognitive parts, classical automation for the plumbing. Working out which is which is the genuinely interesting bit.

Do I have to be technical to work with you?

Not even slightly. Most engagements start with a discovery session where I learn how your team actually works, as opposed to how the process diagram claims it works, and then I prototype something concrete within a couple of weeks. You see real output before committing to anything more ambitious. The technical part is my problem; the domain knowledge is yours.

What does an engagement cost?

Most engagements open with a fixed-fee automation audit (a couple of weeks, prototypes included) so you know precisely what you're buying before you buy more of it. From there we scope a build, usually project-based, with an optional light retainer if you want ongoing maintenance and iteration. No twelve-month contracts, no early-exit drama.

Are you based in the UK?

Yes. Brighton, specifically. I work with UK, European and US clients remotely, and for UK and EU engagements I'll happily travel for kickoffs and stakeholder sessions where being in the room genuinely changes the outcome.

What if we already have an internal tech team?

Even better. I'll usually act as the senior specialist you don't quite have a reason to hire full-time: designing the agents, training your team to keep them running, then stepping back. The goal is to leave you self-sufficient rather than quietly dependent on me forever, which would be good for my invoices and bad for my conscience.

Less strategy, more working software

Senior AI automation. No agency overhead.

If someone has told you that you need "an AI strategy" when what you actually want is a thing that works in production by month-end, let's talk. Most engagements start with a paid audit, which is a low-drama way to find out whether I'm any good.