A Rash Conclusion

Everyone worries about AI replacing doctors. After 24 hours in the hands of the NHS, I think they’re looking in the wrong direction.

GP, A&E, then other parts of the hospital. Every shift, a new doctor. Every new doctor, the same questions. The same story, retold from the top. Every single one of them then took a picture of my rash with their phone.

The first GP I saw actually had an AI assistant. It recorded our conversation and drafted a letter, which he printed on that grey recyclable paper the NHS uses for everything. Absolutely no one in the chain that followed ever read it.

Meanwhile, I had Claude in my pocket. It knew my whole story from the first symptom. It could interpret the blood results before the doctor did, and flag a couple of things worth asking about. By the third doctor, I was essentially a transcription layer between Claude and the NHS.

My whole situation could easily have been baked into a Claude skill, ready for any LLM-based diagnostic software to interpret.

I’m not going to argue AI should replace doctors. Mine were careful, kind, and correct. But AI replacing the patient? That part is ready now. The doctors get a well-organised history, an accurate timeline, current medications, no retelling. The patient gets to rest.

The hard part in medicine is context. The context of the whole field is enormous and probably beyond what current AI can carry reliably. But the context of one specific patient, with one specific problem, unfolding over a few days? That’s hard, but achievable with today’s tech. Mine was, all things considered, a relatively simple case. Cases at this level of complexity still make up the bulk of what the system handles every day.

Which brings me to the bigger point. The waste I watched from the inside over 24 hours was remarkable: the same story told six times, the same photo taken six times, the same pieces of information gathered again and again because no one had time to read what the previous shift had written. If we can shift even part of that burden onto the patient’s side, with the patient’s own AI carrying their story cleanly from one clinician to the next, it starts to look like a real contribution to the sustainability problem that every public health system is wrestling with.

And once we have that down, the AI on the doctors' side will get much better too.

Not today. Tomorrow for sure.

Playing with Claude skills


For the last couple of weeks I have been playing with Claude skills.

It’s a cool way to package knowledge about how to do things into a prompt that can be retrieved whenever your agent needs it. Skills are built for Claude, but you can easily integrate them into any LLM-based process.
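For readers who haven’t seen one: a skill is essentially a folder containing a SKILL.md file, with a short YAML header that tells the agent when to load it, followed by free-form markdown instructions. A minimal sketch (the name and contents here are illustrative, not one of our actual skills):

```markdown
---
name: spritz-agent-builder
description: How to scaffold, deploy and test a Spritz agent. Use when the user asks to build or modify a Spritz agent.
---

# Building a Spritz agent

1. Start from the standard agent blueprint (entry point, handler, config).
2. Ask the user for any required API keys before deploying.
3. Deploy with the AWS CLI and run a smoke test before reporting success.
```

The agent only sees the header until the skill is relevant, at which point the full instructions get pulled into context.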

My first experiment was building a skill about our company. Read the website, read some internal documents, read technical documentation, distil a context that can be used any time I’m writing or thinking about business stuff. But the most interesting aspect is that everybody on our team can now use the same context for whatever work they are doing.

The next experiment was to build a skill to create Spritz agents (there’s a very good skill-building skill on Claude, which helps a lot with the building and packaging of skills). I showed Claude the blueprint for agents that we use every day, then showed some fully developed agents.

Then I tried to build a simple “hello world” agent from scratch with this prompt:

Build a Hello World Spritz agent. It should ask for the user name, then use the Anthropic API to generate a greeting. Deploy the agent on AWS using the CLI and test it. Ask me for an API key when you are ready.

I was able to obtain a working agent in about 10 minutes, but it took a few nudges here and there where the skill didn’t cover details.

At the end of this process I prompted:

Based on the experience of this job, update the skill file so next time we will be able to complete the task without obstacles. Do not include in the skill any specific information about this agent or my development context.

The second time it worked end to end.

I have since built a bunch of different agents, each time adding more detail and nuance to the skills.

This is not (just) about production

Of course this is not about replacing developers, it’s about empowering them. The agents I build will not be used in a production environment; they are mostly proof of concept.

Using skills (or some similar prompting technique) we can capture *why* various software components are built, not just the *how*, giving developers much better context when they interact with code they didn’t write, or when they come back to a project after a while.

They are an amazing teaching tool to explain to others how things work.

For now we have simply started a GitHub repository with the skills we have built so far. It’s easy to ask Claude, ChatGPT or any other tool to find and retrieve skills from the repo and use them. Now we are figuring out new ways to capture skills from the flow of work we do, from conversations we have, from documents we create.
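As a rough illustration of how little plumbing this needs, here is a minimal Python sketch that fetches a skill file from a repository and prepends it to a task prompt before calling whatever LLM you use. The URL and skill name are placeholders, not our actual repo:

```python
import urllib.request

# Placeholder URL -- point this at the raw SKILL.md in your own skills repo.
SKILL_URL = "https://raw.githubusercontent.com/your-org/skills/main/spritz-agent/SKILL.md"


def load_skill(url: str) -> str:
    """Fetch the skill file so it can be injected into an LLM prompt."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")


def build_prompt(skill_text: str, task: str) -> str:
    """Prepend the retrieved skill to the user's task."""
    return f"{skill_text}\n\n---\n\nTask: {task}"


# Usage (the network call is commented out so the sketch stays self-contained):
# skill = load_skill(SKILL_URL)
prompt = build_prompt("Skill: greet users by name.", "Build a hello-world agent.")
```

The same few lines work with Claude, ChatGPT, or anything else that takes a text prompt, which is the point: the skill travels with the task, not with the tool.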

Yet another step towards an interesting future.