Daily AI Tools

Here’s a snapshot of which AI tools I’m using, and how, on this 27th of May 2025. Things change fast; I’m writing this for my future self, who will be moved while reminiscing about these pioneering times.

I more or less always have the Claude and ChatGPT apps running. Recently I have also created a Gemini app (using Safari’s 'Add to Dock' feature). I prefer separate applications to tabs in a browser: I can switch faster between apps. I’m old school.

Of these three:

ChatGPT has far better context about who I am and what I do. Every time I need to write something work-related, I gravitate towards ChatGPT because I don’t have to explain too much. I also like how ChatGPT can see my other apps, so I don’t have to copy and paste back and forth all the time.

I find Claude better at handling large attachments (it has a better “attention span” than ChatGPT while reviewing large documents), but it did fail spectacularly recently (it couldn’t read a file and started making s*it up), so I trust it a little less.

I have also started using Gemini recently. The fact that it doesn’t have a native app creates some resistance, but the huge context window makes it useful when I have big documents to process.

On mobile, I can talk with the Gemini app much better than with ChatGPT (which keeps tripping over its own speech).

Since ChatGPT added the o3 model with search, I have been using Perplexity much less, and I might not renew my subscription. A few weeks ago I posted a photo of a shelf of prosecco bottles in a supermarket and asked for advice… it worked like magic.

For image generation I prefer Midjourney for the “feel” of its images, even if ChatGPT understands my prompts better. Let’s say that ChatGPT is smarter, but Midjourney has more talent (and is dyslexic).

For coding jobs I jump back and forth between different tools: Gemini seems to be pretty good at the moment, but I also find Codex quite impressive.

Mem’ries… light the corners of my mind

For the last few days, I’ve had access to the “Reference Chat History” feature in ChatGPT (I think it had been available for a while in the US, but it just landed on my account in the UK).

Wow… what a change!

I was putting together a page to describe the various tools we’ve been working on, and I just tried randomly asking ChatGPT to insert a description of “Gimlet” or “Old Fashioned”: it just did it. No context necessary, no links, no pages. It was just there, part of the memory I share with the app.

I do continuously switch between AI tools based on which one I think can perform better on any given task – or sometimes just to compare how they perform – and this feature makes ChatGPT more attractive: it has more reusable context than any of the other tools.

It’s quite likely that all other tools will develop similar features, but this will mean trying to silo users. I’ll tend to go where most of my memories are, and I won’t be switching and leaving all my memories behind.

My memories.

Hopefully a shared standard for memories (maybe MCP?) will soon emerge, and we won’t end up siloed again.

Scrape wars

There’s a lot of scraping going on these days.

It looks like most AI applications that need to access content online are resorting to scraping web pages.

Many AI agents we’ve been working on rely on having some sort of access to online content. Of course, we started with a simple RSS aggregator: it’s clean, it’s efficient, it’s a rock-solid foundation for any application.

But not all sites have feeds. Though more have them than one would think: many sites have feeds but don’t advertise them, and some of these might very well be just a feature of the CMS in use, not a deliberate decision by the publisher.
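
Finding those unadvertised feeds is mostly a matter of checking the page head for autodiscovery links. Here’s a minimal sketch in Python (requests and BeautifulSoup are my choices for the example, not necessarily what our aggregator actually uses):

```python
# Minimal feed autodiscovery sketch: fetch a page and look for
# <link rel="alternate"> tags pointing at RSS/Atom feeds.
# Assumes the requests and beautifulsoup4 packages are installed.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

def discover_feeds(page_url: str) -> list[str]:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        rels = [r.lower() for r in (link.get("rel") or [])]
        if "alternate" in rels and link.get("type", "").lower() in FEED_TYPES:
            if link.get("href"):
                # hrefs are often relative; resolve against the page URL
                feeds.append(urljoin(page_url, link["href"]))
    return feeds

print(discover_feeds("https://example.com/"))
```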

But for those sites without feeds… well, we scrape them (and drop the content into a feed that we manage, using the aggregator as the central repository).
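
Conceptually the pipeline is simple: pull the page, extract what we can, and republish it as an entry in a feed we control. A rough sketch of the idea, assuming the feedgen library (the URLs and selectors are purely illustrative, not our actual setup):

```python
# Sketch: scrape a feedless page and republish it as an item in an
# RSS feed we manage. Assumes requests, beautifulsoup4 and feedgen
# are installed; URLs and selectors are illustrative only.
import requests
from bs4 import BeautifulSoup
from feedgen.feed import FeedGenerator

def add_page_as_entry(fg: FeedGenerator, url: str) -> None:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    entry = fg.add_entry()
    entry.id(url)
    entry.link(href=url)
    entry.title(soup.title.get_text(strip=True) if soup.title else url)
    # Crude content extraction; a real scraper would be site-specific.
    body = soup.find("article") or soup.body or soup
    entry.description(body.get_text(" ", strip=True)[:500])

fg = FeedGenerator()
fg.id("https://aggregator.example/feeds/scraped")
fg.title("Scraped sources")
fg.link(href="https://aggregator.example/feeds/scraped", rel="self")
fg.description("Feedless sites, repackaged as RSS")
add_page_as_entry(fg, "https://example.com/some-article")
print(fg.rss_str(pretty=True).decode())
```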

Some sites don’t want us to scrape them and put up a fight. In most cases, we scrape them anyway.

If most publications were publishing feeds, we wouldn’t have to do this. They would control what is shared and what is not. Everyone would be happy.

Meanwhile, all my sites are getting tons of traffic from places like Boydton and Des Moines: that’s where the big server farms sit, and where tons of bots are scraping the web from. They waste lots of resources (theirs and mine) instead of just polling my perfectly updated RSS feed.
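
For the record, polling a feed politely is almost free: with standard HTTP conditional requests, an unchanged feed costs one bodyless 304 response. A sketch (the feed URL is hypothetical):

```python
# Sketch of polite feed polling with HTTP conditional requests.
# When nothing has changed, the server answers 304 with no body,
# costing almost nothing on either side. The URL is hypothetical.
import requests

FEED_URL = "https://example.com/feed.xml"

def poll(etag: str | None, last_modified: str | None):
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    r = requests.get(FEED_URL, headers=headers, timeout=10)
    if r.status_code == 304:
        return None, etag, last_modified  # nothing new since last poll
    return r.text, r.headers.get("ETag"), r.headers.get("Last-Modified")

body, etag, last_mod = poll(None, None)      # first fetch: full body
body, etag, last_mod = poll(etag, last_mod)  # later polls: 304 if unchanged
```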

PS: I wrote this post on Wordland. Refreshing.