It’s still time for free AI love

OpenAI just announced Frontier, their new enterprise platform. Build your agents here, deploy them here, manage them here. “AI coworkers” that accumulate memories and context inside OpenAI’s walls. Microsoft is doing the same with Copilot, which has quietly become an entire operating system for enterprise AI. The message from both is clear: everything you need is here. Safe. Managed. Ours.

Life inside walls is safe but dull. Ask Copilot.

The thing is, we’re at a moment where the technology moves so fast that locking into a single vendor’s AI stack feels reckless. In the last few months we moved most of our work from OpenAI to Anthropic. Not because OpenAI is bad, but because Claude turned out to be better for what we do. In a few months we might move again. The point is that we can.

We can because we own our prompts, our skills, our databases, our memory architecture; they all live with us. None of it lives inside OpenAI or Anthropic. When we moved, we rewired the model layer and everything else stayed put. That’s the whole trick, really. If you control the pieces that make your agents smart, switching the engine underneath is just plumbing.
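
To make the plumbing point concrete, here’s roughly what a swappable model layer can look like. This is a sketch, not our actual code; the function and model names are placeholders.

```python
# Sketch of a model layer the rest of the stack calls into.
# Nothing else imports a vendor SDK directly, so switching providers
# means changing one environment variable (or this one file).
import os

PROVIDER = os.environ.get("MODEL_PROVIDER", "anthropic")

def complete(system: str, user: str) -> str:
    """Send one prompt to whichever provider is configured and return the text."""
    if PROVIDER == "anthropic":
        from anthropic import Anthropic
        resp = Anthropic().messages.create(
            model="claude-sonnet-4-5",  # placeholder model name
            max_tokens=1024,
            system=system,
            messages=[{"role": "user", "content": user}],
        )
        return resp.content[0].text
    else:
        from openai import OpenAI
        resp = OpenAI().chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content
```

Prompts, skills and memory sit above this line; the vendor sits below it.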

Any decision maker signing up for a single-vendor AI platform right now is making a bet they don’t need to make. The landscape changes every quarter. The smart play is boring infrastructure that lets you move when you need to.

It’s still time for free AI love. Don’t let anyone build walls around you.

PS: I have to say, I loved Dave Frontier more.

AI as a Communication Tool

TL;DR: LLMs are translation machines, but translation doesn’t just mean languages. It means translating between contexts, skills, perspectives. AI tools could be communication tools between people, not just productivity tools for individuals.

LLMs were created to translate between languages. Instead of mapping word to word, they made it possible to translate words into concepts and concepts back into words in another language.

That’s the architecture. Take language, compress it into meaning, expand it back out. The chatbots and code assistants came later. At the core, these things translate.

Which got me thinking about the idea of “translating” applied not just to different languages but to different cultures, skills, ultimately different people. We all speak our own language in our own heads and need help understanding and being understood.

At work we built something called AIP. The Activate Intelligence Platform (around the office we pronounce it more like “ape”). It’s an MCP server that gives Claude access to our knowledge graph (clients, projects, contacts, all that). But it also connects to Slack, to GitHub commits, to transcripts of our conversations, to the threads from our AI agents. It knows what we’re working on because it’s plugged into where work happens.
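
If you’re curious what that looks like in practice, here’s a toy sketch of an AIP-style MCP server using the official Python MCP SDK. The tool names and the stubbed lookups are made up; the real AIP is a lot bigger.

```python
# Toy sketch of an AIP-style MCP server (FastMCP is from the official
# Python MCP SDK; the tools below are illustrative stubs).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aip")

@mcp.tool()
def search_knowledge_graph(query: str) -> str:
    """Look up clients, projects and contacts matching the query."""
    return f"(stub) knowledge graph results for: {query}"  # would hit the real graph

@mcp.tool()
def recent_activity(person: str) -> str:
    """Summarise recent Slack threads, commits and agent runs for a person."""
    return f"(stub) recent activity for: {person}"

if __name__ == "__main__":
    mcp.run()  # Claude connects over stdio
```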

There’s also a companion skill file. Strategic context. Who we are, how we think, how we position ourselves. The stuff that usually lives in founders’ heads and never makes it into documentation.

Anyway. We’ve been using this daily for a few weeks now. And here’s what I noticed: AIP is becoming a communication layer.

When a colleague wants to know what I’ve been doing, they don’t dig through Slack history. They can ask Claude. Claude draws on AIP and explains. They can ask follow-ups. Dig into decisions. Get context I’d forgotten to mention.

But here’s the thing. I can also ask AIP to explain my work to my mum. In simple words. In Italian. And it does.

The same system that helps an engineer understand technical decisions can translate those decisions for someone who doesn’t know what a knowledge graph is. That’s not two features. It’s one capability: translation between contexts. Technical to non-technical. English to Italian. Expert to novice. Detailed to summary.

The LLM sits in the middle, holding the meaning, rendering it for whoever’s asking.

We’ve been treating AI as a personal productivity tool. Write faster, code faster. And fine, it does that. But maybe we’re missing something. AI as a layer that helps groups of people work together. Not replacing communication. Enriching it. Making context available. Translating between perspectives.

The lonely genius with a powerful AI assistant is one model. The more interesting one might be a team where AI handles the friction of knowledge transfer. Where you can always ask “what did we decide about X?” and get a real answer. Or perhaps even “why did we decide it?”

We’re not there yet. AIP is rough. But the glimpse is interesting enough to keep building.

Playing with Claude skills

Using skills we can capture *why* software components are built, not just the *how*, giving developers better context when working with unfamiliar code. This isn’t about replacing developers, it’s about empowering them. We’re now exploring ways to capture skills from conversations, documents, and the natural flow of work.

For the last couple of weeks I have been playing with Claude skills.

Skills are a cool way to package knowledge about how to do stuff in a prompt that can be retrieved any time your agent needs it. They work with Claude, but you can easily integrate them into any LLM-based process.
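
The bluntest possible integration, if you want to try it yourself: read the skill file and prepend it to the system prompt. A minimal sketch, assuming a local SKILL.md and the Anthropic SDK (any client works the same way):

```python
# Minimal sketch: load a skill and hand it to the model as context.
# The repo layout and model name are assumptions; adapt to your setup.
from pathlib import Path
from anthropic import Anthropic

skill = Path("skills/company-context/SKILL.md").read_text()

resp = Anthropic().messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1024,
    system=f"Follow this skill when it is relevant:\n\n{skill}",
    messages=[{"role": "user", "content": "Draft a short positioning blurb for our new service."}],
)
print(resp.content[0].text)
```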

My first experiment was building a skill about our company. Read the website, read some internal documents, read technical documentation, distil a context that can be used any time I’m writing or thinking about business stuff. But the most interesting aspect is that everybody on our team can now use the same context for whatever work they are doing.

The next experiment was to build a skill to create Spritz agents (there’s a very good skill-building skill on Claude, which helps a lot with the building and packaging of skills). I showed Claude the blueprint for agents that we use every day, then showed some fully developed agents.

Then I tried to build a simple “hello world” agent from scratch with this prompt:

Build a Hello World Spritz agent. It should ask for the user name, then use the Anthropic API to generate a greeting. Deploy the agent on AWS using the CLI and test it. Ask me for an API key when you are ready.

I was able to obtain a working agent in about 10 minutes, but it took a few nudges here and there where the skill didn’t cover details.

At the end of this process I prompted:

Based on the experience of this job, update the skill file so next time we will be able to complete the task without obstacles. Do not include in the skill any specific information about this agent or my development context.

The second time it worked end to end.

I have since tried to build a bunch of different agents, always adding more details and nuances to the skills.

This is not (just) about production

Of course this is not about replacing developers, it’s about empowering them. The agents I build will not be used in a production environment; they are mostly proof of concept.

Using skills (or some similar prompting technique) we can capture the *why* behind various software components, not just the *how*, giving developers much better context when they have to interact with code they didn’t create, or when they come back to a project after a while.

Skills are also an amazing teaching tool for explaining to others how things work.

For now we have simply started a GitHub repository with the skills we have built so far. It’s easy to ask Claude, ChatGPT or any other tool to find and retrieve skills from the repo and use them. Now we are figuring out new ways to capture skills from the flow of work we do, from conversations we have, from documents we create.

Yet another step towards an interesting future.

Magic Moments ✨

When AI models start asking each other for help without being told to, something magical happens.

One of the cool aspects of having a dozen different MCP servers connected to Claude is the random serendipitous interactions that emerge.

Yesterday, I was working on a little programming project. Opus 4 was chugging along nicely, reading files on my Mac, deciding how to implement something, checking data against a source I was working on. The usual AI assistant stuff, really.

Then I noticed something unexpected in the logs. Claude had fired a call to the OpenAI MCP server (that little experiment I did to allow Claude to ask questions to OpenAI models). I paused to see what was happening.

Claude had asked GPT-4o how to read the end of a file. Nothing groundbreaking — just a simple technical question. GPT-4o provided an answer, and the process continued seamlessly. If I hadn’t been paying attention to the logs, I would have missed it entirely.
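
For the curious, the bridge that makes this possible is tiny. Roughly this shape (the tool name and details are made up, but it really is just one tool forwarding a question):

```python
# Rough sketch of the Claude-to-OpenAI bridge: a single MCP tool that
# forwards a question to an OpenAI model and returns the answer.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")
client = OpenAI()

@mcp.tool()
def ask_openai(question: str) -> str:
    """Ask an OpenAI model a question and return its answer."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    mcp.run()
```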

Here’s the thing: I’m fairly certain this information exists in Opus 4’s knowledge base. It’s not like reading the tail of a file is some obscure programming technique. But for some reason, in that moment, Claude decided it wanted a second opinion. Or perhaps it needed some comfort from another model?

It felt a little magic.

If you can explain it, it’s solved.

Yesterday an old friend with many years of software development experience reminded me of the old saying: “if you can explain a problem, it is half solved”.

Chatting about it, we agreed that even with just the current generation of AI tools supporting software development, we are getting closer and closer to “if you can explain it, it is solved!”.

The new challenge is going to be getting the next generation of software developers off the ground. More and more of the jobs performed by junior developers will be taken on by AI agents, making it harder for young people to kickstart their careers.

We need more than ever skilled professionals who can understand the complexity of the world and know how to use AI to solve difficult problems. The apprenticeship model is broken. We are building a new one.