Stop and think

If you run a business, pretty much any kind of business, the one thing you should do about AI is stop and think: what new problems can you find for your organisation?

Because whatever value you’re creating today by solving today’s problems, some form of AI will most likely be able to generate it shortly. Faster, cheaper, and without you. Bolting AI onto your current process buys you a few months of advantage at best. Then everyone has the same tools, and you’re back where you started.

The real opportunity is the other direction. AI lets you solve problems you couldn’t touch before. Problems that were too expensive, too slow, too specialised, or simply unimagined. Those problems are where the next decade of value is going to come from, and almost nobody is looking for them yet.

David Deutsch puts it well in The Beginning of Infinity: problems are inevitable, but problems are soluble. And the reward for solving one is finding a better one. That’s how knowledge grows. If you’re using AI to keep solving the same problem you’ve been solving for a decade, you’re not making progress. You’re just going faster on a road that might not lead anywhere.

This is a philosophical question, not a technical one. And philosophy is uncomfortable because it doesn’t come with an SDK.

I noticed this recently during a conversation with a job candidate. I started asking myself: how good a philosopher is this guy? If I were shut in a room thinking about the future, is he somebody I’d want with me? That’s the test now. Anyone can execute. Fewer people can sit with a hard question long enough to find a better one.

So leaders in any organisation should create space for thinking. Put your domain experts in a room with people who understand where AI is heading, and ask: what couldn’t we do before that we can do now? The easy problems are already solved, or about to be. The interesting ones are still waiting.

Stop. And think. Then build.

A Rash Conclusion

Everyone worries about AI replacing doctors. After 24 hours in the hands of the NHS, I think they’re looking in the wrong direction.

GP, A&E, then other parts of the hospital. Every shift, a new doctor. Every new doctor, the same questions. The same story, retold from the top. Every single one of them then took a picture of my rash with their phone.

The first GP I saw actually had an AI assistant. It recorded our conversation and drafted a letter, which he printed on that grey recyclable paper the NHS uses for everything and which absolutely no one in the chain that followed ever read.

Meanwhile, I had Claude in my pocket. It knew my whole story from the first symptom. It could interpret the blood results before the doctor did, and flag a couple of things worth asking about. By the third doctor, I was essentially a transcription layer between Claude and the NHS.

My whole situation could easily have been baked into a Claude skill, ready for any LLM-based diagnostic software to interpret.
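To make that concrete: a Claude skill is just a folder with a SKILL.md file inside, and a patient-context skill might look something like this. The structure is real; every detail below is invented for illustration.

```markdown
---
name: patient-context
description: Carries one patient's current medical story so any
  clinician's AI can pick it up without the patient retelling it.
---

# Patient context

When a clinician or diagnostic assistant asks about this patient:

1. Give the timeline first: first symptom, onset date, what changed each day.
2. List current medications and known allergies before anything else.
3. Summarise prior encounters in this episode (GP, A&E, ward): what each
   doctor observed, ordered, and concluded.
4. Point to the latest blood results and the rash photos, with dates.
5. Don't diagnose. Present the history and flag the open questions.

Supporting files in this folder: timeline.md, medications.md,
results/, photos/
```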

I’m not going to argue AI should replace doctors. Mine were careful, kind, and correct. But AI replacing the patient? That part is ready now. The doctors get a well-organised history, an accurate timeline, current medications, no retelling. The patient gets to rest.

The hard part in medicine is context. The context of the whole field is enormous and probably beyond what current AI can carry reliably. But the context of one specific patient, with one specific problem, unfolding over a few days? That’s hard, but achievable with today’s tech. Mine was, all things considered, a relatively simple case. Cases at this level of complexity still make up the bulk of what the system handles every day.

Which brings me to the bigger point. The waste I watched from the inside over 24 hours was remarkable: the same story told six times, the same photo taken six times, the same pieces of information gathered again and again because no one had time to read what the previous shift had written. If we can shift even part of that burden onto the patient’s side, with the patient’s own AI carrying their story cleanly from one clinician to the next, it starts to look like a real contribution to the sustainability problem that every public health system is wrestling with.

And once we have that down, the AI on the doctors’ side will get much better too.

Not today. Tomorrow for sure.

“Go on then”

The skill isn’t writing better prompts anymore — it’s knowing when to stop writing them. Sometimes “go on then” beats a page of instructions. Sometimes it falls flat. Telling the two apart is the new craft.

I was telling Mollie this morning about a conversation I’d had with Dan on using AIP as a CRM. AIP is the knowledge graph we’ve been building at Activate Intelligence, a place where clients, projects, people, conversations and updates all live as connected entities rather than rows in a table. My suggestion to Dan was simple: just create a skill. AIP already works as infrastructure and can already store companies and updates, so decide how you want to manage these things, what you want to call them, and let it do the rest. Create an empty folder if you like, and tell Cowork to set it up the way it finds most effective. Let it manage it for you.
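The instruction to Cowork could be as minimal as this. The wording is mine, invented on the spot; the point is how little structure you have to specify:

```
This folder is our CRM: companies, contacts, conversations, follow-ups.
Decide the structure yourself, set it up the way you find most effective,
and keep it current from what's already in AIP.
```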

It reminded me of when Apple decided it was no longer convenient for you to manage your MP3 files by yourself, in folders you understood. iTunes took over. A lot of people were upset. “I want to manage my files. I want my folders. I want control.” And the system was basically saying, well, it’s simply better if you don’t.

I think we’re at one of those points again. A few of us working at the forefront of this are already relinquishing control over prompting. We’re not even writing the prompts anymore. We’re allowing Claude to write its own prompts. We set up the situation. We manage the context. Earlier today Mollie and I built a skill that prepares context before a call, so that aip.app goes into a meeting already knowing who’s in the room and what we last discussed. We set up a small experiment, pointed Claude Code at it, and let it iterate until the skill was working. The prompt it produced was very effective, and we didn’t even read it until afterwards.

There’s something in this about trusting change, or resisting it. The farther away we get from direct control, the more agency we give these tools.

Here’s a small experiment I haven’t run but I’m confident would work. Put a job description and a stack of CVs in a folder. Point Claude at the folder. The most effective prompt, I think, would be three words.

Go on then.

Better, probably, than “analyse these CVs, compare them to the job description, create a table with these columns, apply this weighting, blah, blah, blah”. With a reasoning model, “go on then” means it prompts itself. It works out what it’s doing. If you start with a big pile of instructions, it can get confused. It might not understand you completely, or it might follow your structure when it had a better one in mind.
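If you want to picture the setup, it really is nothing more than this. File names invented, obviously:

```
cv-screening/
├── job-description.pdf
├── cv-ahmed.pdf
├── cv-beatrice.pdf
├── cv-chen.pdf
└── ...
```

Open Claude Code in that folder and type the three words.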

But, and this is the odd thing, it really is a matter of sensibility. I can describe the CV experiment and I’m pretty sure it will work. You’d probably agree. But there’s a whole other class of environments where “go on then” falls flat, where you do need the structured instructions and the specified output.

Knowing which is which is the skill. There are things I’m absolutely confident will work with minimal prompting. There are things I know won’t. And there’s a large middle where you don’t know, and you have to try until you get it to work.

That’s the interesting space right now.