The magic of AI search

I just built yet another MCP experiment.

First I created a Python script to process .md files: chunk them, create embeddings, store everything in a PostgreSQL database.
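
In rough terms, the indexing step looks something like this hypothetical sketch, assuming the OpenAI embeddings API and a Postgres table with the pgvector extension (the table, database and folder names are made up, and the real script may well differ):

```python
# Hypothetical indexing sketch: chunk .md files, embed the chunks, store them in Postgres.
# Assumes the OpenAI embeddings API, psycopg2 and the pgvector extension; names are placeholders.
from pathlib import Path

import numpy as np
import psycopg2
from openai import OpenAI
from pgvector.psycopg2 import register_vector

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def chunk(text: str, size: int = 800) -> list[str]:
    """Naive fixed-size chunking by characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

conn = psycopg2.connect("dbname=notes")
register_vector(conn)  # lets psycopg2 send numpy arrays to a vector column

with conn, conn.cursor() as cur:
    for path in Path("notes").glob("**/*.md"):
        for piece in chunk(path.read_text(encoding="utf-8")):
            emb = client.embeddings.create(
                model="text-embedding-3-small", input=piece
            ).data[0].embedding
            cur.execute(
                "INSERT INTO chunks (source, content, embedding) VALUES (%s, %s, %s)",
                (str(path), piece, np.array(emb)),
            )
```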

Then I built an MCP server that can search the database using both semantic search (embeddings) and more traditional full-text search as a fallback mechanism.
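
On the search side, the fallback logic can be as simple as this (continuing the same hypothetical setup as the sketch above; the actual server no doubt differs):

```python
# Hypothetical search sketch: semantic search via pgvector, Postgres full-text search as fallback.
# The cursor is assumed to come from the same pgvector-registered connection as above.
import numpy as np
from openai import OpenAI

client = OpenAI()

def semantic_search(cur, query: str, limit: int = 5):
    """Embed the query and rank chunks by cosine distance (pgvector's <=> operator)."""
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    cur.execute(
        "SELECT source, content FROM chunks ORDER BY embedding <=> %s LIMIT %s",
        (np.array(emb), limit),
    )
    return cur.fetchall()

def fulltext_search(cur, query: str, limit: int = 5):
    """Traditional Postgres full-text search over the same chunks."""
    cur.execute(
        """SELECT source, content FROM chunks
           WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s)
           LIMIT %s""",
        (query, limit),
    )
    return cur.fetchall()

def search(cur, query: str):
    """Try semantic search first; fall back to full-text search if it fails or finds nothing."""
    try:
        results = semantic_search(cur, query)
    except Exception:
        results = []
    return results or fulltext_search(cur, query)
```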

I find it absolutely fascinating to watch Claude interact with this tool: it’s not just about converting my request into a query, it’s the reasoning process it goes through to find what it needs that is brilliant.

Let me show you an example:

Continue reading “The magic of AI search”

Building a WordPress MCP Server for Claude: Automating Blog Posts with AI

Building a custom MCP server to connect Claude directly to WordPress, enabling automated blog post creation with proper formatting and intelligent categorisation.

Third day, third MCP experiment (inspired by a quick conversation with Dave).

This time I connected Claude with this WordPress blog.

At the end of the chat that I used for the whole process of writing, building and installing the tool on my Mac, I asked Claude to write a post about the experience.

Of course I wouldn’t allow Claude to post unsupervised original stuff to my blog like I just did, but as Dave pointed out, these are our new writing tools: being able to post directly without having to copy and paste just makes sense.
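
For anyone curious about the plumbing: a tool like this typically ends up wrapping the WordPress REST API. A hypothetical sketch of the posting function (not the code Claude actually wrote for this post; the site URL and credentials are placeholders):

```python
# Hypothetical sketch of the WordPress side of such a tool: create a draft post via the REST API.
# The site URL and the application-password credentials below are placeholders.
import requests

WP_SITE = "https://example.com"        # placeholder blog URL
AUTH = ("username", "app-password")    # placeholder WordPress application password

def create_post(title: str, content: str, categories: list[int], tags: list[int]) -> str:
    """Create a draft post (so nothing goes out unsupervised) and return its URL."""
    resp = requests.post(
        f"{WP_SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={
            "title": title,
            "content": content,        # HTML body of the post
            "status": "draft",
            "categories": categories,  # category IDs
            "tags": tags,              # tag IDs
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]
```

Registered as an MCP tool, a function along these lines is all Claude needs to pick the title, categories and tags itself.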

To be honest I would rather do this with ChatGPT, but apparently MCP integration is not available in the UK yet.

Check below to see Claude’s original post.

PS: it also categorised and tagged the post automagically ❤️

Continue reading “Building a WordPress MCP Server for Claude: Automating Blog Posts with AI”

And here’s the recipe

I’m not confident enough in the tools I built this week to share them around just yet. As long as they run on my Mac, I’m happy, but I can’t really take responsibility for how they’d work for anyone else.

Still, while I’m not serving up the dish, I’m definitely happy to share the recipe!

If you plug this prompt into Claude or ChatGPT, you’ll get pretty close to what I’ve got running. Then ask how to build it and how to configure Claude and you should be good to go. Good luck, and let me know how it goes.

(I think that sharing prompts is an act of love.)

Continue reading “And here’s the recipe”

More MCP fun: Claude talks with ChatGPT

I started with a new idea this morning: create an MCP server that allows Claude to talk to the various OpenAI models.

Now I can ask Claude to pass a question to any of the OpenAI models.

What I find most fascinating is how Claude figures out how to use these new tools. The key is in the description of the tool, the “manifest” that Claude gets when the server is initialised (and is probably injected at the beginning of every chat).
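
To make that concrete, here is a hypothetical sketch (not the actual server from this post) of how a tool and its description are declared with the official MCP Python SDK: the docstring is what ends up in the manifest, and it’s the main instruction Claude gets about when and how to use the tool.

```python
# Hypothetical sketch of an OpenAI-bridge MCP server using the official Python SDK (FastMCP).
# The docstring of the tool function is what gets exposed as its description in the manifest.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")  # made-up server name
client = OpenAI()               # expects OPENAI_API_KEY in the environment

@mcp.tool()
def ask_openai(model: str, prompt: str) -> str:
    """Send a prompt to an OpenAI model (e.g. 'gpt-4o') and return its reply.

    Use this when the user wants a second opinion or an answer
    from ChatGPT / another OpenAI model rather than from Claude.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for Claude Desktop
```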

PS: if you want to try this at home, here’s the recipe.

As an example, here’s what the description of today’s MCP server looks like:

Continue reading “More MCP fun: Claude talks with ChatGPT”

Spotlight → MCP

This morning I asked myself if I could make Spotlight on my Mac talk to Claude. Just a small experiment.

I ended up building a minimal MCP server that exposes Spotlight’s index—files, apps, recent items—as JSON-RPC tools. With that in place, Claude could search my folders, read files, and understand what a project is about.
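
The trick is simpler than it sounds: macOS already exposes the Spotlight index on the command line as `mdfind`, so the search tool can just shell out to it. A hypothetical, stripped-down sketch (the real thing also reads files and lists recent items):

```python
# Hypothetical sketch: expose Spotlight search as an MCP tool by wrapping the mdfind CLI.
# Only the search part is shown; file reading and recent items are left out.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("spotlight")  # made-up server name

@mcp.tool()
def spotlight_search(query: str, folder: str = "/Users") -> list[str]:
    """Search the Spotlight index for files matching the query, limited to one folder."""
    result = subprocess.run(
        ["mdfind", "-onlyin", folder, query],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    mcp.run()
```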

I tested it on a real directory. It worked. Claude read through the files and summarised the purpose of the whole project in seconds. Something that would usually take me a while to piece together manually.

The whole thing took a few hours. Nothing fancy. But it opened an interesting door.

Here’s a quick demo:

PS: as usual, I didn’t write any code. In this case I was assisted by Claude, which was kind of funny: we were writing and testing the tool in the same thread. At some point I wrote “hey, now you can read files”, and it seemed pleased. ;)

Scraping Challenges and Open Standards

Following up on what I posted recently about Scrape wars, I wrote a longer post for my company site. Reposting it here just for reference.

We’ve talked before about how everything you write should work as a prompt. Your content should be explicitly structured, easy for AI agents to read, interpret, and reuse. Yet, despite clear advantages, in practice we’re often stuck using workarounds and hacks to access valuable information.

Right now, many AI agents still rely on scraping websites. Scraping is messy, unreliable, and frankly a bit of a nightmare to maintain. It creates an adversarial relationship with companies who increasingly employ tools like robots.txt files, CAPTCHAs, or IP restrictions to block automated access. On top of that, major AI providers like OpenAI and Google are introducing built-in search capabilities within their ecosystems. While these are helpful, they ultimately risk creating a new layer of dependence. If content can only be efficiently accessed through these proprietary AI engines, we risk locking ourselves into another digital silo controlled by private platforms.

There is a simpler, proven, and immediately available solution: RSS. Providing your content via RSS feeds allows AI agents direct, structured access without complicated scraping. Our agents, for example, are already using structured XML reports from the Italian Parliament to effectively monitor parliamentary sessions. This is an ideal case of structured openness. Agents such as our Parliamentary Reporter Agent and the automated Assembly Report Agent thrive precisely because these datasets are publicly available, clearly structured, and easily machine-readable.
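
To show how low the barrier is on the consuming side, a handful of lines is enough for an agent to read a feed in a structured way (the feed URL below is a placeholder, using the feedparser library):

```python
# Minimal sketch: structured access to content via RSS, no scraping involved.
# The feed URL is a placeholder, not one of the sources mentioned above.
import feedparser

feed = feedparser.parse("https://example.org/feed.xml")
for entry in feed.entries[:10]:
    # Title, link, date and summary arrive already structured; no HTML parsing needed.
    print(entry.title)
    print(entry.link)
    print(entry.get("published", "no date"))
    print(entry.get("summary", "")[:200])
    print("---")
```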

However, the reality isn’t always so positive. Other important legislative and governmental sites impose seemingly arbitrary restrictions. We regularly encounter ministries and other government websites that block automated tools or restrict access based on geographic location, even though their content is explicitly intended as public information. These decisions push us back into pointless workarounds or simply cut off access entirely, which is unacceptable when dealing with public information.

When considering concerns around giving AI models access to content, it’s essential to distinguish two different use cases clearly. One case is scraping or downloading massive amounts of data for training LLM models (this understandably raises concerns around copyright, control, and proper attribution). But another entirely different and increasingly crucial case is allowing AI agents access to content purely to provide immediate, useful services to users. In these scenarios, the AI is acting similarly to a traditional user, simply reading and delivering relevant, timely information rather than training on vast archives.

Building on RSS’s straightforwardness, we can take this concept further with more advanced open standards, such as MCP (Model Context Protocol). Imagine a self-discovery mechanism similar to RSS feeds, but designed to handle richer, more complex datasets. MCP could offer AI agents direct ways to discover, interpret, and process deeper levels of information effortlessly, without the current challenges of scraping or the risk of vendor lock-in.

Of course, valid concerns exist about data protection and theft at scale (curiously the same concerns appeared back in the early RSS days, and even when the printing press first emerged… yet we survived). But if our primary goal is genuinely to share ideas and foster transparency, deliberately restricting access to information contradicts our intentions. Public information should remain public, open, and machine-readable.

Let’s avoid creating unnecessary barriers or new digital silos. Instead, let’s embrace standards like RSS and MCP, making sure AI agents are our partners, not adversaries, in building a more transparent and connected digital landscape.

Daily AI Tools

Here’s a snapshot of which AI tools I’m using and how, on this 27th of May 2025. Things change fast; I’m writing this for my future self, who will be moved while reminiscing about these pioneering times.

I more or less always have the Claude and ChatGPT apps running. Recently I have also created a Gemini app (using Safari’s 'add to dock' feature). I prefer separate applications to tabs in a browser: I can switch between apps faster. I’m old school.

Of these three:

ChatGPT has far better context about who I am and what I do. Every time I need to write something work-related, I gravitate towards ChatGPT because I don’t have to explain too much. I also like how ChatGPT can see my other apps, so I don’t have to copy and paste back and forth all the time.

I find Claude better at handling large attachments (it has a better “attention span” than ChatGPT while reviewing large documents), but it did fail spectacularly recently (it couldn’t read a file and started making s*it up), so I’m trusting it a little bit less.

I have also started using Gemini recently. The fact that it doesn’t have a native app creates some resistance, but the huge context window makes it useful in cases where I have big documents to process.

On mobile, I can talk with the Gemini app much better than with ChatGPT (which keeps tripping over its own speech).

Since ChatGPT added the o3 model with search, I have been using Perplexity much less; I might not renew my subscription. (A few weeks ago I posted a photo of a shelf of prosecco bottles in a supermarket and asked for advice… it worked like magic.)

For image generation I prefer Midjourney for the “feel” of its images, even if ChatGPT understands my prompts better. Let’s say that ChatGPT is smarter, but Midjourney has more talent (and is dyslexic).

For coding jobs I jump back and forth between different tools: Gemini seems to be pretty good at the moment, but I also find Codex quite impressive.

Mem’ries… light the corners of my mind

For the last few days, I’ve had access to the “Reference Chat History” feature in ChatGPT (I think it had been available for a while in the US, but it just landed on my account in the UK).

Wow… what a change!

I was putting together a page to describe the various tools we’ve been working on, and I just tried randomly asking ChatGPT to insert a description of “Gimlet” or “Old Fashioned”: it just did it. No context necessary, no links, no pages. It was just there, part of the memory I share with the app.

I do continuously switch between AI tools based on which one I think can perform better on any given task – or sometimes just to compare how they perform – and this feature makes ChatGPT more attractive: it has more reusable context than any of the other tools.

It’s quite likely that all the other tools will develop similar features, but that will also mean each of them trying to silo its users. I’ll tend to go where most of my memories are, and I won’t be switching and leaving all my memories behind.

My memories.

Hopefully a shared standard for memories (maybe MCP?) will soon emerge, and we won’t end up siloed again.

Scrape wars

There’s a lot of scraping going on these days.

It looks like most AI applications that need to access content online are resorting to scraping web pages.

Many AI agents we’ve been working on rely on having some sort of access to online content. Of course, we started with a simple RSS aggregator: it’s clean, it’s efficient, it’s a rock-solid foundation for any application.

But not all sites have feeds. Actually, more have them than one would think: many sites have feeds but don’t advertise them, and in some cases the feed might simply be a feature of the CMS being used rather than a deliberate decision by the publisher.
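
Finding those unadvertised feeds is cheap to automate: check the page head for the standard autodiscovery tags, then probe the conventional CMS paths. A hypothetical sketch (placeholder URL, using requests and BeautifulSoup):

```python
# Sketch: find a site's feed, first via <link rel="alternate"> autodiscovery in the HTML head,
# then by probing a few conventional paths (common CMS defaults). Placeholder URL.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

FEED_TYPES = ("application/rss+xml", "application/atom+xml")
COMMON_PATHS = ("/feed/", "/rss", "/atom.xml", "/index.xml")

def discover_feeds(url: str) -> list[str]:
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = [
        urljoin(url, link["href"])
        for link in soup.find_all("link")
        if link.get("type") in FEED_TYPES and link.get("href")
    ]
    if feeds:
        return feeds
    # Many sites expose a feed at a conventional path without advertising it anywhere.
    for path in COMMON_PATHS:
        resp = requests.get(url.rstrip("/") + path, timeout=15)
        if resp.ok and "xml" in resp.headers.get("Content-Type", ""):
            feeds.append(resp.url)
    return feeds

print(discover_feeds("https://example.com"))
```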

But for those sites without feeds… well, we scrape them (and drop the content into a feed that we manage, using the aggregator as the central repository).

Some sites don’t want us to scrape them and put up a fight. In most cases, we scrape them anyway.

If most publications were publishing feeds, we wouldn’t have to do this. They would control what is shared and what is not. Everyone would be happy.

Meanwhile, all my sites are getting tons of traffic from places like Boydton and Des Moines, where the big server farms sit and tons of bots scrape the web from, wasting lots of resources (theirs and mine) instead of just polling my perfectly up-to-date RSS feed.

PS: I wrote this post on Wordland. Refreshing.

The “think of a number” fallacy

Some time ago a colleague, commenting on the idea of iterative prompting, suggested asking GPT to “think about something” and then make a decision on what to write or not to write.

The problem with this approach is that a session with an LLM doesn’t really have a memory outside the actual text created in the chat; consequently, it cannot “keep something in mind” while completing other tasks.

But it can pretend it does.

To test this, you can ask an LLM to “think of a number, but don’t tell me”. At the time of writing, most models will respond by confirming that they have thought of a number. Of course they haven’t... but because they are trained to mimic human interactions, they pretend they have.

This is something to always keep in mind while prompting.

For example, it is not effective to prompt a system to “make a list and only show me the part matching a criterion”; instead, ask it to print the full output and then generate a final list (“print the list, then update it with the criterion”).

GroceriesGPT

A friend this morning shared a list of vegetables, noting how hard it is to eat 30 different ones in the same week.

I immediately turned to my AI chatbot and asked it to create a list of commonly eaten vegetables, and of course I got a very good one.

At that point I thought that it would be nice to add that list to my next grocery order on Ocado.

And this is where the magic ended.

My chatbot doesn’t talk to the Ocado app. And I actually use more than one bot: sometimes I go with ChatGPT, sometimes I go with Claude; they are both good and continuously improving, and I like to pit them against each other.

ChatGPT has a plug-in architecture which could potentially allow it to connect with other applications through custom GPTs, but so far I haven’t seen any particularly good application. And what would be the idea there? That Ocado would have to build a custom GPT? And what about other chatbots? I don’t really want to be siloed again. I’m happy to pay for services, even Google, but leave me free to connect.

Meanwhile I’m sure that somebody at Ocado is already thinking about how to integrate an AI in their app (if you aren’t, call me), and while this will be a nice feature to have, it will be yet another AI agent unable to talk with my other agents.

Maybe the solution is similar to what Rabbit appears to be working on: teach AI to use UIs. Avoid altogether the challenge of getting companies and engineers to agree on open standards, and just teach AIs to use the shitty, incompatible interfaces of our apps.

AI interoperability might be one of the most interesting future problems that we will face.

I want the AIs I pay for to collaborate, not to compete.

(Not) too old for this *

By the end of this year it will be 30 years since I registered my first domain name (warning, your browser might throw a hissy fit, I didn’t bother to get a certificate to secure that site, it’s just there for nostalgic reasons, not worth the hassle).

Yesterday I was trying to get a colleague to deploy a simple service that would allow us to save a file on a server and download the file from the server. Apparently it’s much harder in today’s sophisticated cloud environments than it used to be.

Speaking of clouds, I did some house cleaning on various accounts, domains, mailboxes and cloud services today. I got lost multiple times in the complexities of these services (in particular I feel a new and warm form of hate for Google Cloud). When corporate meets software, this is what we get.

I’m not really complaining, every day I’m talking with a chatbot who understands me better than most souls. It’s magic.

Yet there are moments when I miss a world where early technologies were simpler to master. There was some stuff I knew almost everything about.

But at the same time I find it amazing to come to work every morning and invent new things. For the complicated stuff now I just ask ChatGPT ;)

With great responsibilities

Having just started a company that primarily deals with large language models, I occasionally think about the responsibilities we have when we introduce a new AI agent into the digital space.

Besides preservation of the human species, a good rule I think we should give ourselves is “avoid bullshit”, and while this rule certainly applies to any human activity, I think it’s extremely important when you are dealing with the equivalent of BS thermonuclear devices.

I’m still working on my list, this is as far as I got.

Every time one of our AI agents produces an output, we should ask ourselves:

  • does this text improve the life of the intended recipient?
  • will it be delivered only to the intended recipient (and not to a whole bunch of innocent bystanders)?
  • is it as efficient as it can be in the way it uses language and delivers its message?

If these minimum parameters are not met, the agent should be destroyed.

As with everything else, AI is not the cause of this: there was plenty of wasteful content well before a computer could start mimicking the output of a sapiens. And because these LLMs have been trained on a lot of this useless noise, they are extremely good at generating more.

So even before you worry if AI can get the wrong president elected or robots can terminate humanity, just make sure that you are not accidentally leaving the BS tap open when you leave.