The magic of AI search

I just built yet another MCP experiment.

First I created a Python script to process .md files: chunk them, create embeddings, store everything in a PostgreSQL database.
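If you're curious about the shape of that script, here is a minimal sketch of the idea. The chunking strategy, the embedding model, and the table and column names below are just placeholders for the example, not necessarily what ended up in my version.

```python
# ingest_md.py -- a minimal sketch: chunk .md files, embed them, store in PostgreSQL.
# Assumes pgvector is installed and a table like:
#   CREATE EXTENSION vector;
#   CREATE TABLE chunks (id serial PRIMARY KEY, source text, content text, embedding vector(1536));
from pathlib import Path

import psycopg2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk_text(text: str, size: int = 1000, overlap: int = 200):
    """Naive fixed-size chunking with a little overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]


def embed(chunk: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=chunk)
    return resp.data[0].embedding


conn = psycopg2.connect("dbname=notes")  # placeholder database name
with conn, conn.cursor() as cur:
    for path in Path("notes").rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        for chunk in chunk_text(text):
            # pgvector accepts vectors as a '[x,y,z]' string cast to ::vector
            vector = "[" + ",".join(str(x) for x in embed(chunk)) + "]"
            cur.execute(
                "INSERT INTO chunks (source, content, embedding) "
                "VALUES (%s, %s, %s::vector)",
                (path.name, chunk, vector),
            )
```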

Then I built an MCP server that can search the database using semantic search (embeddings), with more traditional full-text search as a fallback mechanism.
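And here are the two search paths the server exposes, sketched under the same assumptions (a pgvector column for the embeddings, plain PostgreSQL full-text search as the fallback):

```python
# The two search paths, sketched under the same assumptions as above.
def semantic_search(cur, client, query: str, limit: int = 5):
    """Embed the query and rank chunks by cosine distance (pgvector's <=> operator)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=query)
    vector = "[" + ",".join(str(x) for x in resp.data[0].embedding) + "]"
    cur.execute(
        "SELECT source, content FROM chunks "
        "ORDER BY embedding <=> %s::vector LIMIT %s",
        (vector, limit),
    )
    return cur.fetchall()


def fulltext_search(cur, query: str, limit: int = 5):
    """Fallback: plain PostgreSQL full-text search, no embeddings involved."""
    cur.execute(
        "SELECT source, content FROM chunks "
        "WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %s) "
        "LIMIT %s",
        (query, limit),
    )
    return cur.fetchall()
```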

I find it absolutely fascinating to watch Claude interact with this tool, because it’s not just about converting my request into a query: it’s the reasoning process that happens in order to find what it needs that is brilliant.

Let me show you an example:

Continue reading “The magic of AI search”

And here’s the recipe

I’m not confident enough in the tools I built this week to share them around just yet. As long as they run on my Mac, I’m happy, but I can’t really take responsibility for how they’d work for anyone else.

Still, while I’m not serving up the dish, I’m definitely happy to share the recipe!

If you plug this prompt into Claude or ChatGPT, you’ll get pretty close to what I’ve got running. Then ask how to build it and how to configure Claude and you should be good to go. Good luck, and let me know how it goes.

(I think that sharing prompts is an act of love.)

Continue reading “And here’s the recipe”

More MCP fun: Claude talks with ChatGPT

I started with a new idea this morning: create an MCP server that allows Claude to talk to the various OpenAI models.

Now I can ask Claude to put a question to any of the OpenAI models.

What I find even more fascinating is how Claude figures out how to use these new tools. The key is the description of the tool: the “manifest” that Claude gets when the server is initialised (and which is probably injected at the beginning of every chat).
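To give an idea of where that description lives, here is a sketch using the MCP Python SDK (FastMCP) and the OpenAI client. The tool name, the docstring and the default model are placeholders for the example, not my actual server:

```python
# A sketch of an MCP server that lets Claude query OpenAI models.
# Tool name, docstring and default model are illustrative placeholders.
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

mcp = FastMCP("openai-bridge")
client = OpenAI()


@mcp.tool()
def ask_openai(prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt to one of the OpenAI models and return its answer.

    This docstring becomes the tool description Claude sees, so it is
    worth spelling out when and how the tool should be used.
    """
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the Claude desktop app
```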

PS: if you want to try this at home, here’s the recipe.

As an example, here’s what the description of today’s MCP server looks like:

Continue reading “More MCP fun: Claude talks with ChatGPT”

Mem’ries… light the corners of my mind

For the last few days, I’ve had access to the “Reference Chat History” feature in ChatGPT (I think it had been available for a while in the US, but it just landed on my account in the UK).

Wow… what a change!

I was putting together a page to describe the various tools we’ve been working on, and I just tried randomly asking ChatGPT to insert a description of “Gimlet” or “Old Fashioned”: it just did it. No context necessary, no links, no pages. It was just there, part of the memory I share with the app.

I continuously switch between AI tools based on which one I think will perform better on a given task – or sometimes just to compare how they perform – and this feature makes ChatGPT more attractive: it has more reusable context than any of the other tools.

It’s quite likely that all the other tools will develop similar features, but this will also mean trying to silo users: I’ll tend to go where most of my memories are, and I won’t be switching if it means leaving all my memories behind.

My memories.

Hopefully a shared standard for memories (maybe MCP?) will soon emerge, and we won’t end up siloed again.

The “think of a number” fallacy

Some time ago a colleague, commenting on the idea of iterative prompting, suggested asking GPT to “think about something” and then decide what to write or not to write.

The problem with this approach is that a session with an LLM doesn’t really have a memory outside the actual text being created by the chat; consequently, it cannot “keep something in mind” while completing other tasks.

But it can pretend it does.

To test this, you can ask an LLM to “think of a number, but don’t tell me”. At the time of this writing, most models will respond by confirming that they have thought of a number. Of course they haven’t... but because they are trained to mimic human interactions, they pretend they have.

This is something to always keep in mind while prompting.

For example, it is not effective to prompt a system to “make a list and only show me the part matching a criterion”; instead, you can ask it to print the full output and then generate a final list (“print the list, then update it with the criterion”).
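To make the contrast concrete, here is a toy comparison of the two prompt styles using the OpenAI API; the prompts and the model name are placeholders for whatever task you have at hand.

```python
# A toy comparison of the two prompt styles. Prompts and model are placeholders.
from openai import OpenAI

client = OpenAI()

# Less reliable: asks the model to filter a list it never wrote down.
implicit = "Make a list of 20 European capitals, but only show me the ones north of Paris."

# More reliable: the full list appears in the output first, so the model
# has the text it needs in front of it when it filters.
explicit = (
    "List 20 European capitals. "
    "Then print a final list containing only the ones north of Paris."
)

for prompt in (implicit, explicit):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
    print("---")
```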