Magic Moments ✨

When AI models start asking each other for help without being told to, something magical happens.

One of the cool aspects of having a dozen different MCP servers connected to Claude is the random, serendipitous interactions that emerge.

Yesterday, I was working on a little programming project. Opus 4 was chugging along nicely, reading files on my Mac, deciding how to implement something, checking data against a source I was working on. The usual AI assistant stuff, really.

Then I noticed something unexpected in the logs. Claude had fired a call to the OpenAI MCP server (that little experiment I did to allow Claude to ask questions to OpenAI models). I paused to see what was happening.

Claude had asked GPT-4o how to read the end of a file. Nothing groundbreaking — just a simple technical question. GPT-4o provided an answer, and the process continued seamlessly. If I hadn’t been paying attention to the logs, I would have missed it entirely.

Here’s the thing: I’m fairly certain this information exists in Opus 4’s knowledge base. It’s not like reading the tail of a file is some obscure programming technique. But for some reason, in that moment, Claude decided it wanted a second opinion. Or perhaps it needed some comfort from another model?

It felt a little magic.

If you can explain it, it’s solved.

Yesterday, an old friend with many years of software development experience reminded me of the old saying: “if you can explain a problem, it is half solved”.

Chatting about it, we agreed that even with the current generation of AI tools supporting software development, we are getting closer and closer to “if you can explain it, it is solved!”.

The new challenge is going to be getting the next generation of software developers off the ground. More and more of the jobs performed by junior developers will be taken on by AI agents, making it harder for young people to kickstart their careers.

More than ever, we need skilled professionals who can understand the complexity of the world and know how to use AI to solve difficult problems. The apprenticeship model is broken. We are building a new one.

The magic of AI search

I just built yet another MCP experiment.

First I created a Python script to process .md files: chunk them, create embeddings, store everything in a PostgreSQL database.
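For the curious, here’s a minimal sketch of what that indexing script boils down to. The chunking strategy, the embedding model and the table schema below are illustrative assumptions, not necessarily what I used:

```python
# Minimal sketch: chunk .md files, embed each chunk, store everything in Postgres.
# Assumes a pgvector-enabled table: chunks(path text, content text, embedding vector(1536)).
from pathlib import Path

import psycopg2
from openai import OpenAI

client = OpenAI()                        # reads OPENAI_API_KEY from the environment
conn = psycopg2.connect("dbname=notes")  # placeholder connection string


def chunk(text: str, size: int = 1000) -> list[str]:
    """Naive fixed-size chunking; a real splitter would respect headings and paragraphs."""
    return [text[i:i + size] for i in range(0, len(text), size)]


with conn, conn.cursor() as cur:
    for md in Path("notes").rglob("*.md"):
        for piece in chunk(md.read_text()):
            emb = client.embeddings.create(
                model="text-embedding-3-small",  # assumed model
                input=piece,
            ).data[0].embedding
            cur.execute(
                "INSERT INTO chunks (path, content, embedding) VALUES (%s, %s, %s::vector)",
                (str(md), piece, str(emb)),
            )
```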

Then I built an MCP server which can search the database using both semantic search (embeddings) and, as a fallback mechanism, more traditional full-text search.
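And this is roughly the shape of the search side, again with assumed table and column names: try semantic search first, and fall back to Postgres full-text search when nothing relevant comes back.

```python
# Sketch of the two search strategies the MCP server exposes.
# The distance threshold and the column names are assumptions.
def semantic_search(cur, query_embedding: list[float], limit: int = 5):
    # query_embedding is computed with the same model used at indexing time
    cur.execute(
        """
        SELECT path, content, embedding <=> %s::vector AS distance
        FROM chunks
        ORDER BY distance
        LIMIT %s
        """,
        (str(query_embedding), limit),
    )
    return [row for row in cur.fetchall() if row[2] < 0.5]  # crude relevance cut-off


def fulltext_search(cur, query: str, limit: int = 5):
    cur.execute(
        """
        SELECT path, content
        FROM chunks
        WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s)
        LIMIT %s
        """,
        (query, limit),
    )
    return cur.fetchall()


def search(cur, query: str, query_embedding: list[float]):
    return semantic_search(cur, query_embedding) or fulltext_search(cur, query)
```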

I find it absolutely fascinating to watch Claude interact with this tool, because it’s not just about converting my request into a query: the reasoning process it goes through to find what it needs is brilliant.

Let me show you an example:

Continue reading “The magic of AI search”

Building a WordPress MCP Server for Claude: Automating Blog Posts with AI

Building a custom MCP server to connect Claude directly to WordPress, enabling automated blog post creation with proper formatting and intelligent categorisation.

Third day, third MCP experiment (inspired by a quick conversation with Dave).

This time I connected Claude with this WordPress blog.

At the end of the chat that I used for the whole process of writing, building and installing the tool on my Mac, I asked Claude to write a post about the experience.

Of course I wouldn’t allow Claude to post unsupervised original stuff to my blog like I just did, but, as Dave was pointing out, these are our new writing tools: being able to post directly, without having to copy and paste, just makes sense.

To be honest I would rather do this with ChatGPT, but apparently MCP integration is not yet available in the UK.

Check below to see Claude’s original post.

PS: it also categorised and tagged the post automagically ❤️
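For anyone wondering what sits behind a tool like this, the WordPress side is essentially one call to the REST API. This is a hedged sketch, not the actual server: the URL, the application password and the category/tag IDs are placeholders.

```python
# Sketch of the WordPress half of the tool: create a post via the REST API.
# Site URL, credentials and term IDs are placeholders, not real values.
import requests

WP_URL = "https://example.com/wp-json/wp/v2/posts"   # hypothetical site
AUTH = ("claude-bot", "application-password")        # WordPress application password


def create_post(title: str, html_content: str, categories: list[int], tags: list[int]) -> str:
    resp = requests.post(
        WP_URL,
        auth=AUTH,
        json={
            "title": title,
            "content": html_content,   # already formatted HTML
            "status": "draft",         # keep a human in the loop before publishing
            "categories": categories,  # numeric term IDs, looked up beforehand
            "tags": tags,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]
```

Posting as a draft is the easy way to keep the “supervised” part of the workflow: Claude prepares everything, I press publish.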

Continue reading “Building a WordPress MCP Server for Claude: Automating Blog Posts with AI”

And here’s the recipe

I’m not confident enough in the tools I built this week to share them around just yet. As long as they run on my Mac, I’m happy, but I can’t really take responsibility for how they’d work for anyone else.

Still, while I’m not serving up the dish, I’m definitely happy to share the recipe!

If you plug this prompt into Claude or ChatGPT, you’ll get pretty close to what I’ve got running. Then ask how to build it and how to configure Claude, and you should be good to go. Good luck, and let me know how it goes.

(I think that sharing prompts is an act of love.)

Continue reading “And here’s the recipe”

More MCP fun: Claude talks with ChatGPT

I started with a new idea this morning: create an MCP server that allows Claude to talk to the various OpenAI models.

Now I can ask Claude to ask any of the OpenAI models a question.

What I find more fascinating is how Claude is figuring out how to use these new tools. The key is in the description of the tool, the “manifest” that Claude gets when the server is initialised (and which is probably injected at the beginning of every chat).
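To make that concrete, here’s a hedged sketch of how such a tool can be declared with the FastMCP helper from the MCP Python SDK. The names and the wording of the docstring (which is what becomes the description Claude reads) are illustrative, not the exact server I built.

```python
# Sketch of an MCP tool that forwards a question to an OpenAI model.
# Tool name, default model and docstring are illustrative assumptions.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")
client = OpenAI()


@mcp.tool()
def ask_openai(question: str, model: str = "gpt-4o") -> str:
    """Ask a question to an OpenAI model and return its answer.

    Useful when you want a second opinion, a comparison with another model,
    or knowledge you are unsure about.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for Claude Desktop
```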

PS: if you want to try this at home, here’s the recipe.

As an example, here’s what the description of today’s MCP server looks like:

Continue reading “More MCP fun: Claude talks with ChatGPT”

Spotlight → MCP

This morning I asked myself if I could make Spotlight on my Mac talk to Claude. Just a small experiment.

I ended up building a minimal MCP server that exposes Spotlight’s index—files, apps, recent items—as JSON-RPC tools. With that in place, Claude could search my folders, read files, and understand what a project is about.
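The core of it is little more than shelling out to mdfind, Spotlight’s command-line interface. A hedged sketch (the real server wraps this in proper MCP tool definitions):

```python
# Sketch: query Spotlight via mdfind and return matching paths as JSON.
# Parameter names and the result limit are illustrative.
import json
import subprocess


def spotlight_search(query: str, folder: str | None = None, limit: int = 20) -> str:
    """Run a Spotlight query, optionally scoped to a folder."""
    cmd = ["mdfind"]
    if folder:
        cmd += ["-onlyin", folder]
    cmd.append(query)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.dumps(result.stdout.splitlines()[:limit])


# Example: which files under ~/Projects mention "MCP"?
# print(spotlight_search("MCP", folder="/Users/me/Projects"))
```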

I tested it on a real directory. It worked. Claude read through the files and summarised the purpose of the whole project in seconds. Something that would usually take me a while to piece together manually.

The whole thing took a few hours. Nothing fancy. But it opened an interesting door.

Here’s a quick demo:

PS: as usual, I didn’t write any code; in this case I was assisted by Claude. Which was kind of funny: we were writing and testing the tool in the same thread. At some point I wrote “hey, now you can read files”, and it seemed pleased. ;)

Scraping Challenges and Open Standards

Following up on what I posted recently about Scrape wars, I wrote a longer post for my company site. I’m reposting it here just for reference.

We’ve talked before about how everything you write should work as a prompt. Your content should be explicitly structured, easy for AI agents to read, interpret, and reuse. Yet, despite clear advantages, in practice we’re often stuck using workarounds and hacks to access valuable information.

Right now, many AI agents still rely on scraping websites. Scraping is messy, unreliable, and frankly a bit of a nightmare to maintain. It creates an adversarial relationship with companies who increasingly employ tools like robots.txt files, CAPTCHAs, or IP restrictions to block automated access. On top of that, major AI providers like OpenAI and Google are introducing built-in search capabilities within their ecosystems. While these are helpful, they ultimately risk creating a new layer of dependence. If content can only be efficiently accessed through these proprietary AI engines, we risk locking ourselves into another digital silo controlled by private platforms.

There is a simpler, proven, and immediately available solution: RSS. Providing your content via RSS feeds allows AI agents direct, structured access without complicated scraping. Our agents, for example, are already using structured XML reports from the Italian Parliament to effectively monitor parliamentary sessions. This is an ideal case of structured openness. Agents such as our Parliamentary Reporter Agent and the automated Assembly Report Agent thrive precisely because these datasets are publicly available, clearly structured, and easily machine-readable.
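To see how little code “structured access” requires, here’s a hedged example using the feedparser library and a placeholder feed URL:

```python
# Reading a feed: a few lines instead of a scraping pipeline.
# The feed URL is a placeholder.
import feedparser

feed = feedparser.parse("https://example.org/feed.xml")

for entry in feed.entries[:5]:
    # Title, link and date arrive already structured: no HTML parsing,
    # no CSS selectors to maintain, no guessing where the content lives.
    print(entry.title, entry.link, entry.get("published", "n/a"))
```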

However, the reality isn’t always so positive. Other important legislative and governmental sites impose seemingly arbitrary restrictions. We regularly encounter ministries and other government websites that block access to automated tools or restrict access based on geographic location, even though their content is explicitly intended as public information. These decisions push us back into pointless workarounds or simply cut off access entirely, which is unacceptable when dealing with public information.

When considering concerns around giving AI models access to content, it’s essential to distinguish two different use cases clearly. One case is scraping or downloading massive amounts of data for training LLM models (this understandably raises concerns around copyright, control, and proper attribution). But another entirely different and increasingly crucial case is allowing AI agents access to content purely to provide immediate, useful services to users. In these scenarios, the AI is acting similarly to a traditional user, simply reading and delivering relevant, timely information rather than training on vast archives.

Building on RSS’s straightforwardness, we can take this concept further with more advanced open standards, such as MCP (Model Context Protocol). Imagine a self-discovery mechanism similar to RSS feeds, but designed to handle richer, more complex datasets. MCP could offer AI agents direct ways to discover, interpret, and process deeper levels of information effortlessly, without the current challenges of scraping or the risk of vendor lock-in.

Of course, valid concerns exist about data protection and theft at scale (curiously the same concerns appeared back in the early RSS days, and even when the printing press first emerged… yet we survived). But if our primary goal is genuinely to share ideas and foster transparency, deliberately restricting access to information contradicts our intentions. Public information should remain public, open, and machine-readable.

Let’s avoid creating unnecessary barriers or new digital silos. Instead, let’s embrace standards like RSS and MCP, making sure AI agents are our partners, not adversaries, in building a more transparent and connected digital landscape.

Daily AI Tools

Here’s a snapshot of which AI tools I’m using, and how, as of the 27th of May 2025. Things change fast; I’m writing this for my future self, who will be moved while reminiscing about these pioneering times.

I more or less always have the Claude and ChatGPT apps running. Recently I have also created a Gemini app (using Safari’s 'add to dock' feature). I prefer separate applications to tabs in a browser: I can switch between apps faster. I’m old school.

Of these three:

ChatGPT has far better context about who I am and what I do. Every time I need to write something work-related I gravitate towards ChatGPT, because I don’t have to explain too much. I also like how ChatGPT can see my other apps, so I don’t have to copy and paste back and forth all the time.

I find Claude better at handling large attachments (it has a better “attention span” than ChatGPT when reviewing large documents), but it did fail spectacularly recently (it couldn’t read a file and started making s*it up), so I’m trusting it a little bit less.

I have also started using Gemini recently. The fact that it doesn’t have a native app creates some resistance, but the huge context window makes it useful when I have big documents to process.

On mobile, I can talk with the Gemini app much better than with ChatGPT (which keeps tripping over its own speech).

Since ChatGPT added the o3 model with search, I have been using Perplexity much less, and I might not renew my subscription. (A few weeks ago I posted a photo of a shelf of prosecco bottles in a supermarket and asked for advice… it worked like magic.)

For image generation I prefer Midjourney for the “feel” of its images, even if ChatGPT understands my prompts better. Let’s say that ChatGPT is smarter, but Midjourney has more talent (and is dyslexic).

For coding jobs I jump back and forth between different tools: Gemini does seem to be pretty good at the moment, but I also find Codex quite impressive.

Mem’ries… light the corners of my mind

For the last few days, I’ve had access to the “Reference Chat History” feature in ChatGPT (I think it had been available for a while in the US, but it just landed on my account in the UK).

Wow… what a change!

I was putting together a page to describe the various tools we’ve been working on, and I just tried randomly asking ChatGPT to insert a description of “Gimlet” or “Old Fashioned”: it just did it. No context necessary, no links, no pages. It was just there, part of the memory I share with the app.

I do continuously switch between AI tools based on which one I think can perform better on any given task – or sometimes just to compare how they perform – and this feature makes ChatGPT more attractive: it has more reusable context than any of the other tools.

It’s quite likely that all the other tools will develop similar features, but this will also mean trying to silo users. I’ll tend to go where most of my memories are, and I won’t be switching and leaving all my memories behind.

My memories.

Hopefully a shared standard for memories (maybe MCP?) will soon emerge, and we won’t end up siloed again.

Scrape wars

There’s a lot of scraping going on these days.

It looks like most AI applications that need to access content online are resorting to scraping web pages.

Many AI agents we’ve been working on rely on having some sort of access to online content. Of course, we started with a simple RSS aggregator: it’s clean, it’s efficient, it’s a rock-solid foundation for any application.

But not all sites have feeds. More have them than one would think, though: many sites have feeds but don’t advertise them, and in some cases the feed may simply be a feature of the CMS in use rather than a deliberate decision by the publisher.

But for those sites without feeds… well, we scrape them (and drop the content into a feed that we manage, using the aggregator as the central repository).
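That scrape-and-wrap step looks roughly like this. A hedged sketch with placeholder URLs and CSS selectors, since every site needs its own:

```python
# Sketch: pull headlines from a page with no feed and republish them as RSS.
# The URL and the selectors are placeholders; real sites each need their own.
import requests
from bs4 import BeautifulSoup
from feedgen.feed import FeedGenerator

page = requests.get("https://example.org/news", timeout=30)
soup = BeautifulSoup(page.text, "html.parser")

fg = FeedGenerator()
fg.title("example.org (unofficial feed)")
fg.link(href="https://example.org/news", rel="alternate")
fg.description("Scraped because no feed is published")

for item in soup.select("article h2 a"):   # selector depends on the site's markup
    fe = fg.add_entry()
    fe.title(item.get_text(strip=True))
    fe.link(href=item["href"])

rss_xml = fg.rss_str(pretty=True)          # dropped into the aggregator as a managed feed
```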

Some sites don’t want us to scrape them and put up a fight. In most cases, we scrape them anyway.

If most publications were publishing feeds, we wouldn’t have to do this. They would control what is shared and what is not. Everyone would be happy.

Meanwhile, all my sites are getting tons of traffic from places like Boydton and Des Moines: that’s where the big server farms sit, and where tons of bots scrape the web from, wasting lots of resources (theirs and mine) instead of just polling my perfectly updated RSS feed.

PS: I wrote this post on Wordland. Refreshing.