It’s still time for free AI love

OpenAI just announced Frontier, their new enterprise platform. Build your agents here, deploy them here, manage them here. “AI coworkers” that accumulate memories and context inside OpenAI’s walls. Microsoft is doing the same with Copilot, which has quietly become an entire operating system for enterprise AI. The message from both is clear: everything you need is here. Safe. Managed. Ours.

Life inside walls is safe but dull. Ask Copilot.

The thing is, we’re at a moment where the technology moves so fast that locking into a single vendor’s AI stack feels reckless. In the last few months we moved most of our work from OpenAI to Anthropic. Not because OpenAI is bad, but because Claude turned out to be better for what we do. In a few months we might move again. The point is that we can.

We can because we own our prompts, our skills, our databases, our memory architecture: they all live in our bar. None of it lives inside OpenAI or Anthropic. When we moved, we rewired the model layer and everything else stayed put. That’s the whole trick, really. If you control the pieces that make your agents smart, switching the engine underneath is just plumbing.
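To make “just plumbing” concrete, here’s a minimal sketch of the kind of seam I mean. The class names and the tiny complete() interface are invented for illustration; the SDK calls are the standard OpenAI and Anthropic Python ones, and the model IDs are placeholders that go stale fast:

```python
# A sketch of the seam, not our actual code. Everything upstream of
# this file (prompts, skills, memory) never hears about the vendor.
from anthropic import Anthropic
from openai import OpenAI


class OpenAIModel:
    def __init__(self, model: str = "gpt-4o"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content


class AnthropicModel:
    def __init__(self, model: str = "claude-sonnet-4-5"):
        self.client = Anthropic()  # reads ANTHROPIC_API_KEY
        self.model = model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            system=system,
            messages=[{"role": "user", "content": user}],
        )
        return resp.content[0].text


# Switching vendors means changing this one assignment.
llm = AnthropicModel()
```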

Any decision maker signing up for a single-vendor AI platform right now is making a bet they don’t need to make. The landscape changes every quarter. The smart play is boring infrastructure that lets you move when you need to.

It’s still time for free AI love. Don’t let anyone build walls around you.

PS: I have to say, I loved Dave Frontier more.

AI as a Communication Tool

TL;DR: LLMs are translation machines, but translation doesn’t just mean languages. It means translating between contexts, skills, perspectives. AI tools could be communication tools between people, not just productivity tools for individuals.

LLMs were created to translate between languages. Instead of mapping one word to another word, they translate words into concepts and concepts back into words in another language.

That’s the architecture. Take language, compress it into meaning, expand it back out. The chatbots and code assistants came later. At the core, these things translate.

Which got me thinking about the idea of “translating” applied not just to different languages but to different cultures, skills, ultimately different people. We all speak our own language in our own heads and need help understanding and being understood.

At work we built something called AIP. The Activate Intelligence Platform (around the office we pronounce it more like “ape”). It’s an MCP server that gives Claude access to our knowledge graph (clients, projects, contacts, all that). But it also connects to Slack, to GitHub commits, to transcripts of our conversations, to the threads from our AI agents. It knows what we’re working on because it’s plugged into where work happens.
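AIP itself is internal, so to give a rough idea of its shape, here’s a toy version of the knowledge-graph tool using the official MCP Python SDK. The tool name and the stubbed query_graph() backend are invented for illustration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aip")


def query_graph(query: str, entity_type: str) -> list[dict]:
    # Stub: the real version would query our graph of clients,
    # projects and contacts.
    return [{"type": "project", "name": "Example", "summary": "A made-up hit."}]


@mcp.tool()
def search_knowledge_graph(query: str, entity_type: str = "any") -> str:
    """Search the company knowledge graph (clients, projects, contacts)."""
    results = query_graph(query, entity_type)
    return "\n".join(f"{r['type']}: {r['name']}. {r['summary']}" for r in results)


if __name__ == "__main__":
    mcp.run()  # stdio transport, which is what Claude expects locally
```

The Slack, GitHub and transcript connections are just more tools on the same server; Claude decides which one to call.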

There’s also a companion skill file. Strategic context. Who we are, how we think, how we position ourselves. The stuff that usually lives in founders’ heads and never makes it into documentation.

Anyway. We’ve been using this daily for a few weeks now. And here’s what I noticed: AIP is becoming a communication layer.

When a colleague wants to know what I’ve been doing, they don’t dig through Slack history. They can ask Claude. Claude draws on AIP and explains. They can ask follow-ups. Dig into decisions. Get context I’d forgotten to mention.

But here’s the thing. I can also ask AIP to explain my work to my mum. In simple words. In Italian. And it does.

The same system that helps an engineer understand technical decisions can translate those decisions for someone who doesn’t know what a knowledge graph is. That’s not two features. It’s one capability: translation between contexts. Technical to non-technical. English to Italian. Expert to novice. Detailed to summary.

The LLM sits in the middle, holding the meaning, rendering it for whoever’s asking.

We’ve been treating AI as personal productivity tools. Write faster, code faster. And fine, they do that. But maybe we’re missing something. AI as a layer that helps groups of people work together. Not replacing communication. Enriching it. Making context available. Translating between perspectives.

The lonely genius with a powerful AI assistant is one model. The more interesting one might be a team where AI handles the friction of knowledge transfer. Where you can always ask “what did we decide about X?” and get a real answer. Or perhaps even “why did we decide that?”

We’re not there yet. AIP is rough. But the glimpse is interesting enough to keep building.

Playing with Claude skills

Using skills we can capture *why* software components are built, not just the *how*, giving developers better context when working with unfamiliar code. This isn’t about replacing developers, it’s about empowering them. We’re now exploring ways to capture skills from conversations, documents, and the natural flow of work.

For the last couple of weeks I have been playing with Claude skills.

It’s a cool way to package knowledge about how to do stuff into a prompt that can be retrieved any time your agent needs it. Skills work with Claude, but you can easily integrate them into any LLM-based process.

My first experiment was building a skill about our company. Read the website, read some internal documents, read technical documentation, distil a context that can be used any time I’m writing or thinking about business stuff. But the most interesting aspect is that everybody on our team can now use the same context for whatever work they are doing.
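For a sense of what that looks like: a skill is essentially a folder with a SKILL.md file in it, where the frontmatter tells Claude when to load it. The frontmatter fields below are the documented ones; the body is an invented stand-in for our real company skill:

```markdown
---
name: company-context
description: Who we are, what we do and how we talk about ourselves.
  Use when writing or thinking about business matters.
---

# Company context

## Positioning
One paragraph on what we sell and to whom.

## Voice
Plain words, short sentences, no hype.
```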

The next experiment was to build a skill to create Spritz agents (there’s a very good skill-building skill on Claude, which helps a lot with the building and packaging of skills). I showed Claude the blueprint for agents that we use every day, then showed some fully developed agents.

Then I tried to build a simple “hello world” agent from scratch with this prompt:

Build a Hello World Spritz agent. It should ask for the user name, then use the Anthropic API to generate a greeting. Deploy the agent on AWS using the CLI and test it. Ask me for an API key when you are ready.

I was able to obtain a working agent in about 10 minutes, but it took a few nudges here and there where the skill didn’t cover details.

At the end of this process I prompted:

Based on the experience of this job, update the skill file so next time we will be able to complete the task without obstacles. Do not include in the skill any specific information about this agent or my development context.

The second time it worked end to end.

I have since tried to build a bunch of different agents, always adding more details and nuances to the skills.

This is not (just) about production

Of course this is not about replacing developers; it’s about empowering them. The agents I build will not be used in a production environment; they are mostly proofs of concept.

Using skills (or some similar prompting technique) we can capture why various software components are built, not just how, giving developers much better context when they have to interact with code they didn’t create, or when they come back to a project after a while.
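Concretely, that can be as little as a dedicated section in the skill file. An invented example of the kind of “why” worth capturing:

```markdown
## Why it is built this way

- Deployment goes through the CLI rather than the web console because
  agents are rebuilt often and the console flow doesn't script.
- API keys are asked for at run time instead of being baked into the
  skill, so the skill stays shareable.
```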

They are an amazing teaching tool to explain to others how things work.

For now we have simply started a GitHub repository with the skills we have built so far. It’s easy to ask Claude, ChatGPT or any other tool to find and retrieve skills from the repo and use them. Now we are figuring out new ways to capture skills from the flow of work we do, from conversations we have, from documents we create.

Yet another step towards an interesting future.

Magic Moments ✨

When AI models start asking each other for help without being told to, something magical happens.

One of the cool aspects of having a dozen different MCP servers connected to Claude is the random serendipitous interactions that emerge.

Yesterday, I was working on a little programming project. Opus 4 was chugging along nicely, reading files on my Mac, deciding how to implement something, checking data against a source I was working on. The usual AI assistant stuff, really.

Then I noticed something unexpected in the logs. Claude had fired a call to the OpenAI MCP server (that little experiment I did to allow Claude to ask questions to OpenAI models). I paused to see what was happening.

Claude had asked GPT-4o how to read the end of a file. Nothing groundbreaking — just a simple technical question. GPT-4o provided an answer, and the process continued seamlessly. If I hadn’t been paying attention to the logs, I would have missed it entirely.
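For context, the bridge itself is genuinely tiny. This isn’t the exact code I run, just a sketch of its shape: one MCP tool that forwards a question to an OpenAI model (the ask_openai name is illustrative):

```python
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")
client = OpenAI()  # reads OPENAI_API_KEY from the environment


@mcp.tool()
def ask_openai(question: str, model: str = "gpt-4o") -> str:
    """Ask an OpenAI model a question and return its answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    mcp.run()
```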

Here’s the thing: I’m fairly certain this information exists in Opus 4’s knowledge base. It’s not like reading the tail of a file is some obscure programming technique. But for some reason, in that moment, Claude decided it wanted a second opinion. Or perhaps it needed some comfort from another model?

It felt a little magic.

The magic of AI search

I just built yet another MCP experiment.

First I created a Python script to process .md files: chunk them, create embeddings, store everything in a PostgreSQL database.
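Roughly, the script boils down to something like this sketch. It assumes the pgvector extension is installed and a table like the one in the comment exists; the real script is fussier about chunk boundaries:

```python
# Assumed schema:
#   CREATE EXTENSION IF NOT EXISTS vector;
#   CREATE TABLE chunks (
#       id SERIAL PRIMARY KEY,
#       path TEXT,
#       content TEXT,
#       embedding VECTOR(1536)   -- text-embedding-3-small is 1536-dim
#   );
from pathlib import Path

import numpy as np
import psycopg2
from openai import OpenAI
from pgvector.psycopg2 import register_vector

client = OpenAI()
conn = psycopg2.connect("dbname=notes")
register_vector(conn)  # lets psycopg2 pass numpy arrays as vectors


def chunk(text: str, max_chars: int = 1500) -> list[str]:
    """Naive chunking: split on blank lines, pack into ~max_chars pieces."""
    pieces, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            pieces.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        pieces.append(current.strip())
    return pieces


with conn, conn.cursor() as cur:
    for path in Path("notes").rglob("*.md"):
        for piece in chunk(path.read_text()):
            emb = client.embeddings.create(
                model="text-embedding-3-small", input=piece
            ).data[0].embedding
            cur.execute(
                "INSERT INTO chunks (path, content, embedding)"
                " VALUES (%s, %s, %s)",
                (str(path), piece, np.array(emb)),
            )
```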

Then I built an MCP server which can search the database both using semantic search (embeddings) and more traditional full text search as a fallback mechanism.
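The two search paths look more or less like this, against the chunks table from the previous sketch (function names are mine). Semantic search first; Postgres full text when that comes up dry:

```python
import numpy as np


def semantic_search(cur, client, query: str, limit: int = 5):
    """Embed the query and rank chunks by cosine distance (pgvector's <=>)."""
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    cur.execute(
        "SELECT path, content FROM chunks"
        " ORDER BY embedding <=> %s LIMIT %s",
        (np.array(emb), limit),
    )
    return cur.fetchall()


def fulltext_search(cur, query: str, limit: int = 5):
    """Plain Postgres full text search, the fallback path."""
    cur.execute(
        "SELECT path, content FROM chunks"
        " WHERE to_tsvector('english', content)"
        "       @@ plainto_tsquery('english', %s)"
        " LIMIT %s",
        (query, limit),
    )
    return cur.fetchall()
```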

I find it absolutely fascinating to watch Claude interact with this tool, because it’s not just converting my request into a query: the brilliant part is the reasoning process it goes through to find what it needs.

Let me show you an example:

Continue reading “The magic of AI search”

Building a WordPress MCP Server for Claude: Automating Blog Posts with AI

Building a custom MCP server to connect Claude directly to WordPress, enabling automated blog post creation with proper formatting and intelligent categorisation.

Third day, third MCP experiment (inspired by a quick conversation with Dave).

This time I connected Claude with this WordPress blog.
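The heart of the server is one tool wrapping the standard WordPress REST API. A sketch under assumptions: the create_draft name and the environment variables are my placeholders, and authentication uses a WordPress application password:

```python
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wordpress")
SITE = os.environ["WP_SITE"]  # e.g. https://myblog.example
AUTH = (os.environ["WP_USER"], os.environ["WP_APP_PASSWORD"])


@mcp.tool()
def create_draft(title: str, content: str,
                 categories: list[int] | None = None,
                 tags: list[int] | None = None) -> str:
    """Create a draft post (HTML content) on the blog and return its URL."""
    resp = requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={
            "title": title,
            "content": content,
            "status": "draft",  # a human still presses publish
            "categories": categories or [],
            "tags": tags or [],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]


if __name__ == "__main__":
    mcp.run()
```

I defaulted the status to draft here; letting Claude publish, categorise and tag in one shot is just a matter of setting status to publish and passing category and tag IDs.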

At the end of the chat that I used for the whole process of writing, building and installing the tool on my Mac, I asked Claude to write a post about the experience.

Of course I wouldn’t normally allow Claude to post unsupervised original stuff to my blog like I just did, but as Dave pointed out, these are our new writing tools; being able to post directly without having to copy and paste just makes sense.

To be honest I would rather do this with ChatGPT, but apparently MCP integration is not available in the UK yet.

Check below to see Claude’s original post.

PS: it also categorised and tagged the post automagically ❤️

Continue reading “Building a WordPress MCP Server for Claude: Automating Blog Posts with AI”

GroceriesGPT

A friend this morning shared a list of vegetables, noting how hard it is to eat 30 different ones in the same week.

I immediately turned to my AI chatbot and asked it to create a list of commonly eaten vegetables, and of course I got a very good one.

At that point I thought that it would be nice to add that list to my next grocery order on Ocado.

And this is where the magic ended.

My chatbot doesn’t talk to the Ocado app. And I actually use more than one bot: sometimes I go with ChatGPT, sometimes I go with Claude. They are both good and continuously improving, and I like to pit them against each other.

ChatGPT has a plug-in architecture which could potentially connect it to other applications through custom GPTs, but so far I haven’t seen any particularly good application. And what would be the idea there? That Ocado would have to build a custom GPT? And what about other chatbots? I don’t really want to be siloed again. I’m happy to pay for services, even Google, but leave me free to connect.

Meanwhile I’m sure that somebody at Ocado is already thinking about how to integrate AI into their app (if you aren’t, call me), and while this will be a nice feature to have, it will be yet another AI agent unable to talk with my other agents.

Maybe the solution is similar to what Rabbit appears to be working on: teach AI to use UIs. Avoid altogether the challenge of getting companies and engineers to agree on open standards and just teach AIs to use the shitty, incompatible interfaces of our apps.

AI interoperability might be one of the most interesting future problems that we will face.

I want the AIs I pay for to collaborate, not to compete.