AI as a Communication Tool

TL;DR: LLMs are translation machines, but translation doesn’t just mean languages. It means translating between contexts, skills, perspectives. AI tools could be communication tools between people, not just productivity tools for individuals.

LLMs were created to translate between languages. Instead of mapping a word to another word, they translate words into concepts and concepts back into words in another language.

That’s the architecture. Take language, compress it into meaning, expand it back out. The chatbots and code assistants came later. At the core, these things translate.
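If you want to see that compress-and-expand step in miniature, here's a sketch using the Hugging Face transformers library with a small public English-to-Italian model. The model choice is just an illustration, not what powers any particular chatbot:

```python
# A minimal sketch of translation with an encoder-decoder model:
# the encoder compresses the English sentence into an internal
# representation of meaning, the decoder expands it back out as Italian.
from transformers import pipeline

# Any seq2seq translation model illustrates the same idea; this is one
# small, publicly available English->Italian model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")

result = translator("The model compresses language into meaning and expands it back out.")
print(result[0]["translation_text"])
```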

Which got me thinking about the idea of “translating” applied not just to different languages but to different cultures, skills, ultimately different people. We all speak our own language in our own heads and need help understanding and being understood.

At work we built something called AIP. The Activate Intelligence Platform (around the office we pronounce it more like “ape”). It’s an MCP server that gives Claude access to our knowledge graph (clients, projects, contacts, all that). But it also connects to Slack, to GitHub commits, to transcripts of our conversations, to the threads from our AI agents. It knows what we’re working on because it’s plugged into where work happens.
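For a flavour of the mechanics (not AIP's actual code or schema), a minimal MCP tool built on the official Python SDK looks roughly like this; every name below is a placeholder:

```python
# Hypothetical sketch of an MCP server exposing a slice of a knowledge
# graph to Claude. Names and data sources are illustrative, not AIP's.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aip-sketch")

@mcp.tool()
def recent_activity(person: str) -> str:
    """Summarise what a colleague has been working on lately."""
    # A real implementation would query the knowledge graph and the
    # connected sources (Slack threads, commits, meeting transcripts).
    return (
        f"Slack: (threads mentioning {person} would go here)\n"
        f"GitHub: (recent commits by {person} would go here)\n"
        f"Transcripts: (meeting context involving {person} would go here)"
    )

if __name__ == "__main__":
    mcp.run()
```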

There’s also a companion skill file. Strategic context. Who we are, how we think, how we position ourselves. The stuff that usually lives in founders’ heads and never makes it into documentation.

Anyway. We’ve been using this daily for a few weeks now. And here’s what I noticed: AIP is becoming a communication layer.

When a colleague wants to know what I’ve been doing, they don’t dig through Slack history. They can ask Claude. Claude draws on AIP and explains. They can ask follow-ups. Dig into decisions. Get context I’d forgotten to mention.

But here’s the thing. I can also ask AIP to explain my work to my mum. In simple words. In Italian. And it does.

The same system that helps an engineer understand technical decisions can translate those decisions for someone who doesn’t know what a knowledge graph is. That’s not two features. It’s one capability: translation between contexts. Technical to non-technical. English to Italian. Expert to novice. Detailed to summary.

The LLM sits in the middle, holding the meaning, rendering it for whoever’s asking.

We’ve been treating AI as personal productivity tools. Write faster, code faster. And fine, they do that. But maybe we’re missing something. AI as a layer that helps groups of people work together. Not replacing communication. Enriching it. Making context available. Translating between perspectives.

The lonely genius with a powerful AI assistant is one model. The more interesting one might be a team where AI handles the friction of knowledge transfer. Where you can always ask “what did we decide about X?” and get a real answer. Or perhaps even “why did we decide that?”

We’re not there yet. AIP is rough. But the glimpse is interesting enough to keep building.

Magic Moments ✨

When AI models start asking each other for help without being told to, something magical happens.

One of the cool aspects of having a dozen different MCP servers connected to Claude is the way random, serendipitous interactions emerge.

Yesterday, I was working on a little programming project. Opus 4 was chugging along nicely, reading files on my Mac, deciding how to implement something, checking data against a source I was working on. The usual AI assistant stuff, really.

Then I noticed something unexpected in the logs. Claude had fired a call to the OpenAI MCP server (that little experiment I did to allow Claude to ask questions to OpenAI models). I paused to see what was happening.

Claude had asked GPT-4o how to read the end of a file. Nothing groundbreaking — just a simple technical question. GPT-4o provided an answer, and the process continued seamlessly. If I hadn’t been paying attention to the logs, I would have missed it entirely.
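The bridge itself is nothing fancy. A minimal sketch of that kind of tool, assuming the official MCP Python SDK and the OpenAI client (the tool name and wiring here are illustrative, not my actual server):

```python
# Hypothetical sketch of an MCP tool that lets Claude put a question
# to an OpenAI model and read back the answer.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@mcp.tool()
def ask_openai(question: str, model: str = "gpt-4o") -> str:
    """Forward a question to an OpenAI model and return its answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    mcp.run()
```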

Here’s the thing: I’m fairly certain this information exists in Opus 4’s knowledge base. It’s not like reading the tail of a file is some obscure programming technique. But for some reason, in that moment, Claude decided it wanted a second opinion. Or perhaps it needed some comfort from another model?

It felt a little magical.

The magic of AI search

I just built yet another MCP experiment.

First I created a Python script to process .md files: chunk them, create embeddings, store everything in a PostgreSQL database.
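Roughly, and with names simplified for illustration, the indexing script does something like this (assuming the OpenAI embeddings API, psycopg2, and a pgvector column; my actual script differs in the details):

```python
# Sketch of the indexing script: chunk markdown files, embed each chunk,
# store text + embedding in PostgreSQL (a pgvector column is assumed).
from pathlib import Path

import psycopg2
from openai import OpenAI

client = OpenAI()
conn = psycopg2.connect("dbname=notes")  # connection details are placeholders


def chunk(text: str, size: int = 800) -> list[str]:
    """Naive fixed-size chunking; a real script might split on headings."""
    return [text[i:i + size] for i in range(0, len(text), size)]


with conn, conn.cursor() as cur:
    for path in Path("notes").glob("**/*.md"):
        for piece in chunk(path.read_text()):
            emb = client.embeddings.create(
                model="text-embedding-3-small", input=piece
            ).data[0].embedding
            # pgvector accepts the '[x,y,z]' text format for vector columns.
            cur.execute(
                "INSERT INTO chunks (source, content, embedding) VALUES (%s, %s, %s)",
                (str(path), piece, "[" + ",".join(map(str, emb)) + "]"),
            )
```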

Then I built an MCP server that can search the database using both semantic search (embeddings) and, as a fallback, more traditional full-text search.
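The search side, again sketched with illustrative names, pairs a pgvector cosine-distance query with a Postgres full-text query as the fallback:

```python
# Sketch of the search server: one MCP tool that tries semantic search
# first and falls back to Postgres full-text search.
import psycopg2
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("notes-search")
client = OpenAI()
conn = psycopg2.connect("dbname=notes")  # placeholder connection


@mcp.tool()
def search_notes(query: str, limit: int = 5) -> list[str]:
    """Semantic search over the chunk table, with full-text fallback."""
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    vec = "[" + ",".join(map(str, emb)) + "]"
    with conn.cursor() as cur:
        # Semantic pass: cosine distance against the pgvector column.
        cur.execute(
            "SELECT content, embedding <=> %s::vector AS dist "
            "FROM chunks ORDER BY dist LIMIT %s",
            (vec, limit),
        )
        # 0.6 is an arbitrary relevance cut-off for this sketch.
        hits = [content for content, dist in cur.fetchall() if dist < 0.6]
        if hits:
            return hits
        # Fallback: traditional Postgres full-text search.
        cur.execute(
            "SELECT content FROM chunks "
            "WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s) "
            "LIMIT %s",
            (query, limit),
        )
        return [row[0] for row in cur.fetchall()]


if __name__ == "__main__":
    mcp.run()
```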

I find it absolutely fascinating to watch Claude interact with this tool, because it's not just about converting my request into a query: the reasoning process it goes through to find what it needs is what's brilliant.

Let me show you an example:

Continue reading “The magic of AI search”

Building a WordPress MCP Server for Claude: Automating Blog Posts with AI

Building a custom MCP server to connect Claude directly to WordPress, enabling automated blog post creation with proper formatting and intelligent categorisation.

Third day, third MCP experiment (inspired by a quick conversation with Dave).

This time I connected Claude with this WordPress blog.
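The heavy lifting is done by WordPress itself: its REST API already accepts posts, categories and tags, so the MCP server is mostly a thin wrapper around the standard wp/v2 endpoint. A minimal sketch, with the site URL, credentials and tool name as placeholders, and defaulting to drafts so a human stays in the loop:

```python
# Hypothetical sketch of an MCP tool that creates a WordPress post via
# the REST API. Site, credentials, and tool name are illustrative.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wordpress-poster")

SITE = "https://example.com"  # placeholder blog URL
AUTH = ("username", os.environ["WP_APP_PASSWORD"])  # WordPress application password


@mcp.tool()
def create_post(title: str, content_html: str,
                categories: list[int] | None = None,
                tags: list[int] | None = None) -> str:
    """Create a draft post so a human can review before publishing."""
    response = requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={
            "title": title,
            "content": content_html,
            "status": "draft",
            "categories": categories or [],
            "tags": tags or [],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["link"]


if __name__ == "__main__":
    mcp.run()
```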

At the end of the chat that I used for the whole process of writing, building and installing the tool on my Mac, I asked Claude to write a post about the experience.

Of course I wouldn’t normally let Claude post unsupervised, original content to my blog the way I just did, but as Dave pointed out, these are our new writing tools: being able to post directly, without having to copy and paste, just makes sense.

To be honest I would rather do this with ChatGPT, but apparently MCP integration isn’t available in the UK yet.

Check below to see Claude’s original post.

PS: it also categorised and tagged the post automagically ❤️

Continue reading “Building a WordPress MCP Server for Claude: Automating Blog Posts with AI”