It’s still time for free AI love

OpenAI just announced Frontier, their new enterprise platform. Build your agents here, deploy them here, manage them here. “AI coworkers” that accumulate memories and context inside OpenAI’s walls. Microsoft is doing the same with Copilot, which has quietly become an entire operating system for enterprise AI. The message from both is clear: everything you need is here. Safe. Managed. Ours.

Life inside walls is safe but dull. Ask Copilot.

The thing is, we’re at a moment where the technology moves so fast that locking into a single vendor’s AI stack feels reckless. In the last few months we moved most of our work from OpenAI to Anthropic. Not because OpenAI is bad, but because Claude turned out to be better for what we do. In a few months we might move again. The point is that we can.

We can because we own our prompts, our skills, our databases, our memory architecture. All of it lives on our side of the wall; none of it inside OpenAI or Anthropic. When we moved, we rewired the model layer and everything else stayed put. That’s the whole trick, really. If you control the pieces that make your agents smart, switching the engine underneath is just plumbing.
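What that plumbing can look like, as a toy sketch (this is not our actual code; the class names, model ids, and the `complete` signature are all hypothetical): the agent owns its system prompt, and the vendor lives behind one narrow interface.

```python
from dataclasses import dataclass
from typing import Protocol


class ModelClient(Protocol):
    """The only seam that touches a vendor."""
    def complete(self, system: str, prompt: str) -> str: ...


@dataclass
class OpenAIClient:
    model: str = "gpt-4o"  # placeholder model id

    def complete(self, system: str, prompt: str) -> str:
        # Real version would call the OpenAI API here.
        return f"[{self.model}] reply"


@dataclass
class AnthropicClient:
    model: str = "claude-sonnet"  # placeholder model id

    def complete(self, system: str, prompt: str) -> str:
        # Real version would call the Anthropic API here.
        return f"[{self.model}] reply"


@dataclass
class Agent:
    client: ModelClient
    system_prompt: str  # ours: lives in our repo, not the vendor's platform

    def ask(self, prompt: str) -> str:
        return self.client.complete(self.system_prompt, prompt)
```

Switching vendors means constructing `Agent` with a different client. The prompt, the memory, the skills: untouched.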

Any decision maker signing up for a single-vendor AI platform right now is making a bet they don’t need to make. The landscape changes every quarter. The smart play is boring infrastructure that lets you move when you need to.

It’s still time for free AI love. Don’t let anyone build walls around you.

PS: I have to say, I loved Dave Frontier more.

AI as a Communication Tool

TL;DR: LLMs are translation machines, but translation doesn’t just mean languages. It means translating between contexts, skills, perspectives. AI tools could be communication tools between people, not just productivity tools for individuals.

LLMs were created to translate between languages. Instead of mapping word to word, they translate words into concepts and concepts back into words in another language.

That’s the architecture. Take language, compress it into meaning, expand it back out. The chatbots and code assistants came later. At the core, these things translate.

Which got me thinking about “translating” applied not just to different languages but to different cultures, different skills, ultimately different people. We all speak our own language in our own heads, and we all need help understanding and being understood.

At work we built something called AIP. The Activate Intelligence Platform (around the office we pronounce it more like “ape”). It’s an MCP server that gives Claude access to our knowledge graph (clients, projects, contacts, all that). But it also connects to Slack, to GitHub commits, to transcripts of our conversations, to the threads from our AI agents. It knows what we’re working on because it’s plugged into where work happens.
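A conceptual sketch of how a server like that hangs together (this is a toy registry, not the real MCP SDK and not AIP itself; the tool names and return values are invented): each data source becomes a named tool the model can call, and a dispatcher routes requests to it.

```python
# Toy tool registry: each function is one capability the model can invoke.
TOOLS = {}


def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def search_projects(query: str) -> list[str]:
    # Real version would query the knowledge graph.
    return [f"project matching {query!r}"]


@tool
def recent_commits(repo: str) -> list[str]:
    # Real version would hit the GitHub API.
    return [f"latest commit in {repo}"]


def dispatch(name: str, **kwargs):
    """Route a model's tool call to the registered function."""
    return TOOLS[name](**kwargs)
```

The real protocol adds schemas, capability negotiation, and transport, but the shape is the same: the model asks, the server answers from wherever the work actually lives.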

There’s also a companion skill file. Strategic context. Who we are, how we think, how we position ourselves. The stuff that usually lives in founders’ heads and never makes it into documentation.

Anyway. We’ve been using this daily for a few weeks now. And here’s what I noticed: AIP is becoming a communication layer.

When a colleague wants to know what I’ve been doing, they don’t dig through Slack history. They can ask Claude. Claude draws on AIP and explains. They can ask follow-ups. Dig into decisions. Get context I’d forgotten to mention.

But here’s the thing. I can also ask AIP to explain my work to my mum. In simple words. In Italian. And it does.

The same system that helps an engineer understand technical decisions can translate those decisions for someone who doesn’t know what a knowledge graph is. That’s not two features. It’s one capability: translation between contexts. Technical to non-technical. English to Italian. Expert to novice. Detailed to summary.
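That “one capability” framing fits in a few lines. A hedged sketch (the audience labels and style instructions are made up for illustration; the real thing is a model call, not string formatting): the content stays fixed, only the rendering target changes.

```python
# Hypothetical audience profiles: same meaning, different rendering.
AUDIENCES = {
    "engineer": "Be precise. Keep technical terms.",
    "mum": "Plain words, no jargon. Answer in Italian.",
    "exec": "Three sentences, focus on impact.",
}


def render_for(content: str, audience: str) -> str:
    """Build the prompt that asks the model to re-render
    the same content for a given audience."""
    style = AUDIENCES[audience]
    return f"Explain the following to a {audience}. {style}\n\n{content}"
```

One function, one piece of meaning, N audiences. The LLM does the actual translation; the system just tells it who is asking.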

The LLM sits in the middle, holding the meaning, rendering it for whoever’s asking.

We’ve been treating AI as a personal productivity tool. Write faster, code faster. And fine, it does that. But maybe we’re missing something. AI as a layer that helps groups of people work together. Not replacing communication. Enriching it. Making context available. Translating between perspectives.

The lonely genius with a powerful AI assistant is one model. The more interesting one might be a team where AI handles the friction of knowledge transfer. Where you can always ask “what did we decide about X?” and get a real answer. Or even “why did we decide that?”

We’re not there yet. AIP is rough. But the glimpse is interesting enough to keep building.