Paolo’s Weblog

And here’s the recipe

I’m not confident enough in the tools I built this week to share them around just yet. As long as they run on my Mac, I’m happy, but I can’t really take responsibility for how they’d work for anyone else.

Still, while I’m not serving up the dish, I’m definitely happy to share the recipe!

If you plug this prompt into Claude or ChatGPT, you’ll get pretty close to what I’ve got running. Then ask how to build it and how to configure Claude, and you should be good to go; I’ve put a few rough sketches of what to expect after the prompt below. Good luck, and let me know how it goes.

(I think that sharing prompts is an act of love.)

PROMPT:

I need you to create a complete MCP (Model Context Protocol) server that provides access to OpenAI’s API with conversation threading capabilities. Here are my specific requirements:

Core Functionality:

Technical Requirements:

Tool Parameters: The openai_conversation tool should accept:

Configuration Structure:

Output Requirements:

Integration:

Please generate all necessary files including:

  1. Complete TypeScript source code (src/index.ts) with proper error handling and retry logic
  2. Package configuration (package.json, tsconfig.json, .eslintrc.json, .prettierrc)
  3. Configuration templates (config.yaml, .env.example)
  4. Comprehensive documentation (README.md, CHANGELOG.md)
  5. Development tooling (Dockerfile, .dockerignore, basic GitHub Actions CI)
  6. OpenAPI specification (openapi.yaml) describing the tool interface
  7. Usage examples, troubleshooting guide, and FAQ section

Example Response Formats:

Success:
{"conversation_id": "conv_123", "response_id": "resp_456", "message": "...", "model": "gpt-4o", "usage": {...}, "reasoning": null}

Error:
{"error": {"code": "OPENAI_RATE_LIMIT", "message": "Rate limit exceeded", "retry_after": 20}}

The server should be production-ready with proper error handling, type safety, and user-friendly configuration management.
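
To give you an idea of what you should end up with, the example response formats in the prompt translate pretty directly into a pair of TypeScript interfaces. This is just a sketch of how I read those shapes, not necessarily what the model will generate for you: the field names come from the examples above, and usage is typed loosely because its exact fields depend on which OpenAI endpoint the generated code ends up calling.

```typescript
// Response shapes implied by the examples in the prompt. `usage` is typed
// loosely on purpose; its exact fields depend on the OpenAI endpoint used.
interface ConversationSuccess {
  conversation_id: string;      // e.g. "conv_123", used to thread follow-ups
  response_id: string;          // e.g. "resp_456"
  message: string;              // the assistant's reply text
  model: string;                // e.g. "gpt-4o"
  usage: Record<string, unknown>;
  reasoning: string | null;     // null unless a reasoning model is involved
}

interface ConversationError {
  error: {
    code: string;               // e.g. "OPENAI_RATE_LIMIT"
    message: string;
    retry_after?: number;       // seconds to wait before retrying, when known
  };
}
```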
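The heart of the server is the tool registration. Here’s a minimal sketch of what that part tends to look like, assuming the official @modelcontextprotocol/sdk and openai npm packages and a naive in-memory map for conversation threading. It leaves out the retry logic, config loading, and most of the error handling the prompt asks for, and the names are illustrative rather than anything the model is guaranteed to pick.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import OpenAI from "openai";

type ChatMessage = OpenAI.Chat.Completions.ChatCompletionMessageParam;

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Naive in-memory threading: conversation_id -> message history.
const threads = new Map<string, ChatMessage[]>();

const server = new Server(
  { name: "openai-conversation", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise a single tool; only `message` is required.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "openai_conversation",
      description: "Send a message to OpenAI, optionally continuing a conversation",
      inputSchema: {
        type: "object",
        properties: {
          message: { type: "string" },
          conversation_id: { type: "string" },
          model: { type: "string" },
        },
        required: ["message"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "openai_conversation") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  const { message, conversation_id, model } = request.params.arguments as {
    message: string;
    conversation_id?: string;
    model?: string;
  };

  // Continue the thread if a conversation_id was passed, otherwise start one.
  const id = conversation_id ?? `conv_${Date.now()}`;
  const history = threads.get(id) ?? [];
  history.push({ role: "user", content: message });

  const completion = await openai.chat.completions.create({
    model: model ?? "gpt-4o",
    messages: history,
  });

  const reply = completion.choices[0]?.message?.content ?? "";
  history.push({ role: "assistant", content: reply });
  threads.set(id, history);

  // Return the success shape from the prompt as a JSON string.
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({
          conversation_id: id,
          response_id: completion.id,
          message: reply,
          model: completion.model,
          usage: completion.usage,
          reasoning: null,
        }),
      },
    ],
  };
});

async function main() {
  // Log to stderr only: stdout carries the MCP protocol itself.
  await server.connect(new StdioServerTransport());
  console.error("openai-conversation MCP server running on stdio");
}

main().catch((err) => {
  console.error("Fatal:", err);
  process.exit(1);
});
```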
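And once it builds, wiring it into Claude Desktop is a matter of adding an entry to claude_desktop_config.json. The server name and path below are placeholders for whatever your build produces, and the key is, of course, your own.

```json
{
  "mcpServers": {
    "openai-conversation": {
      "command": "node",
      "args": ["/path/to/your/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```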
