Playing with Claude skills

Using skills, we can capture the *why* behind software components, not just the *how*, giving developers better context when working with unfamiliar code. This isn’t about replacing developers; it’s about empowering them. We’re now exploring ways to capture skills from conversations, documents, and the natural flow of work.

For the last couple of weeks I have been playing with Claude skills.

Skills are a cool way to package knowledge about how to do stuff in a prompt that can be retrieved any time your agent needs it. They work with Claude, but you can easily integrate them into any LLM-based process.
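
In its simplest form, a skill is just a folder containing a SKILL.md file: a short YAML front matter block (a name and a description, which Claude scans to decide when the skill is relevant), followed by free-form markdown instructions. A minimal sketch, with made-up contents:

```markdown
---
name: writing-release-notes
description: How we write release notes. Use when drafting or reviewing release notes.
---

# Writing release notes

1. Lead with the user-facing change, not the implementation.
2. Group changes into Added / Changed / Fixed.
3. Link each item to its pull request.
```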

My first experiment was building a skill about our company: read the website, read some internal documents, read technical documentation, distil a context that can be used any time I’m writing or thinking about business stuff. But the most interesting aspect is that everybody on our team can now use the same context for whatever work they are doing.

The next experiment was building a skill to create Spritz agents (there’s a very good skill-building skill for Claude, which helps a lot with building and packaging skills). I showed Claude the blueprint for the agents we use every day, then showed it some fully developed agents.

Then I tried to build a simple “hello world” agent from scratch with this prompt:

> Build a Hello World Spritz agent. It should ask for the user name, then use the Anthropic API to generate a greeting. Deploy the agent on AWS using the CLI and test it. Ask me for an API key when you are ready.

I had a working agent in about 10 minutes, but it took a few nudges here and there where the skill didn’t cover the details.
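
Spritz is our internal framework, so the generated code wouldn’t mean much here, but stripped of the framework scaffolding, the heart of such an agent is just a few lines against the Anthropic Python SDK. A rough sketch (model name and prompt are illustrative):

```python
import os
import anthropic

# The API key comes from the environment, which is what Claude asked me for.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def greet(name: str) -> str:
    """Ask the model to generate a short greeting for the given name."""
    message = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=100,
        messages=[{"role": "user", "content": f"Write a short, warm greeting for {name}."}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(greet(input("What is your name? ")))
```

The skill’s real job is everything around this core: the Spritz blueprint, the AWS deployment steps, the testing routine.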

At the end of this process I prompted:

> Based on the experience of this job, update the skill file so next time we will be able to complete the task without obstacles. Do not include in the skill any specific information about this agent or my development context.

The second time it worked end to end.

I have since tried to build a bunch of different agents, each time adding more detail and nuance to the skills.

This is not (just) about production

Of course, this is not about replacing developers; it’s about empowering them. The agents I build will not be used in a production environment; they are mostly proofs of concept.

Using skills (or some similar prompting technique) we can capture *why* various software components are built, not just *how*, giving developers much better context when they need to interact with code they did not create, or when they return to a project after a while.

They are an amazing teaching tool to explain to others how things work.

For now we have simply started a GitHub repository with the skills we have built so far. It’s easy to ask Claude, ChatGPT or any other tool to find and retrieve skills from the repo and use them. Now we are figuring out new ways to capture skills from the flow of work we do, from conversations we have, from documents we create.
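
Because a skill is ultimately just a text file, plugging one into any LLM-based process takes a dozen lines. A minimal sketch, assuming a hypothetical public repo layout (ours is private):

```python
import urllib.request
import anthropic

# Hypothetical raw URL; substitute your own repo and skill path.
SKILL_URL = "https://raw.githubusercontent.com/example-org/skills/main/company-context/SKILL.md"

def load_skill(url: str) -> str:
    """Fetch a SKILL.md file and return its text."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=500,
    system=load_skill(SKILL_URL),  # the skill becomes the system prompt
    messages=[{"role": "user", "content": "Draft a one-paragraph company intro."}],
)
print(reply.content[0].text)
```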

Yet another step towards an interesting future.

Magic Moments ✨

When AI models start asking each other for help without being told to, something magical happens.

One of the cool aspects of having a dozen different MCP servers connected to Claude is the random, serendipitous interactions that emerge.

Yesterday, I was working on a little programming project. Opus 4 was chugging along nicely, reading files on my Mac, deciding how to implement something, checking data against a source I was working on. The usual AI assistant stuff, really.

Then I noticed something unexpected in the logs: Claude had fired off a call to the OpenAI MCP server (that little experiment I did to let Claude ask questions of OpenAI models). I paused to see what was happening.

Claude had asked GPT-4o how to read the end of a file. Nothing groundbreaking — just a simple technical question. GPT-4o provided an answer, and the process continued seamlessly. If I hadn’t been paying attention to the logs, I would have missed it entirely.

Here’s the thing: I’m fairly certain this information exists in Opus 4’s knowledge base. It’s not like reading the tail of a file is some obscure programming technique. But for some reason, in that moment, Claude decided it wanted a second opinion. Or perhaps it needed some comfort from another model?

It felt a little magical.
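
For the curious, there is no deep machinery behind that bridge. My actual server differs in the details, but the idea fits in a minimal sketch using the official MCP Python SDK and the OpenAI client:

```python
from mcp.server.fastmcp import FastMCP
from openai import OpenAI

# Expose a single tool that forwards a question to an OpenAI model.
mcp = FastMCP("openai-bridge")
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

@mcp.tool()
def ask_openai(question: str) -> str:
    """Ask GPT-4o a question and return its answer."""
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    mcp.run()  # serve over stdio so Claude can connect to it
```

Once the server is registered in Claude’s MCP configuration, ask_openai is just another tool on the list; whether and when to call it is entirely Claude’s decision, which is exactly what makes moments like this feel emergent.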

If you can explain it, it’s solved.

Yesterday, an old friend with many years of software development experience reminded me of the old saying: “if you can explain a problem, it is half solved”.

Chatting about it, we agreed that even with just the current generation of AI tools supporting software development, we are getting closer and closer to “if you can explain it, it is solved!”.

The new challenge is going to be getting the next generation of software developers off the ground. More and more of the jobs performed by junior developers will be taken on by AI agents, making it harder for young people to kickstart their careers.

More than ever, we need skilled professionals who understand the complexity of the world and know how to use AI to solve difficult problems. The apprenticeship model is broken. We are building a new one.