I’m not confident enough in the tools I built this week to share them around just yet. As long as they run on my Mac, I’m happy, but I can’t really take responsibility for how they’d work for anyone else.
Still, while I’m not serving up the dish, I’m definitely happy to share the recipe!
If you plug this prompt into Claude or ChatGPT, you’ll get pretty close to what I’ve got running. Then ask how to build it and how to configure Claude and you should be good to go. Good luck, and let me know how it goes.
(I think that sharing prompts is an act of love.)
PROMPT:
I need you to create a complete MCP (Model Context Protocol) server that provides access to OpenAI’s API with conversation threading capabilities. Here are my specific requirements:
Core Functionality:
- Build a TypeScript-based MCP server using @modelcontextprotocol/sdk (latest stable version)
- Target Node.js 20 LTS minimum, TypeScript 5.4+, use MIT license
- Provide a single tool called openai_conversation that handles all OpenAI interactions
- Maintain threaded conversations using OpenAI’s response IDs with stable conversation tracking
- Include comprehensive parameter validation using Zod schemas with meaningful error messages
- Implement in-memory conversation state management for server lifetime
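To make the in-memory state requirement concrete, here is a minimal sketch of a conversation store. All names (ConversationStore, conv_ IDs) are illustrative, not part of any SDK; the real server would wire this into the tool handler.

```typescript
// Sketch: in-memory conversation tracking for the server's lifetime.
// Maps a stable conversation_id to the most recent OpenAI response ID,
// so follow-up calls can pass previous_response_id automatically.

interface ConversationState {
  lastResponseId: string;
  model: string;
  updatedAt: number;
}

class ConversationStore {
  private conversations = new Map<string, ConversationState>();
  private counter = 0;

  // Start a new thread and return its stable ID.
  create(responseId: string, model: string): string {
    const id = `conv_${++this.counter}`;
    this.conversations.set(id, {
      lastResponseId: responseId,
      model,
      updatedAt: Date.now(),
    });
    return id;
  }

  // Advance an existing thread after each successful API call.
  update(conversationId: string, responseId: string): void {
    const state = this.conversations.get(conversationId);
    if (!state) throw new Error(`Unknown conversation: ${conversationId}`);
    state.lastResponseId = responseId;
    state.updatedAt = Date.now();
  }

  lastResponseId(conversationId: string): string | undefined {
    return this.conversations.get(conversationId)?.lastResponseId;
  }
}
```

Because the map lives in the module scope of a long-running process, the state survives exactly as long as the server does, which is what the prompt asks for.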
Technical Requirements:
- Configuration precedence: environment variables → YAML files → defaults
- Support .env files for local development alongside YAML config
- Support all OpenAI chat models with fallback strategy for unavailable models
- Include reasoning support: when enabled for o-series models, return both answer and reasoning fields
- Implement flexible configuration file loading (current dir, dist dir, parent dir)
- Provide structured error handling with error codes, messages, and retry guidance
- Use ES modules and modern TypeScript practices with proper exit codes on startup failures
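The precedence rule (environment variables → YAML → defaults) can be sketched as a small merge function. YAML parsing itself is omitted here; the parsed file is represented as a plain object, and the field and env-var names besides OPENAI_API_KEY are assumptions for illustration.

```typescript
// Sketch of configuration precedence: env vars override YAML values,
// which override built-in defaults.

interface Config {
  openai_api_key: string;
  model: string;
  temperature: number;
  max_output_tokens: number;
}

const defaults: Omit<Config, "openai_api_key"> = {
  model: "gpt-4o",
  temperature: 1,
  max_output_tokens: 1024,
};

function resolveConfig(
  yaml: Partial<Config>,
  env: Record<string, string | undefined>
): Config {
  // OPENAI_API_KEY from the environment wins over the YAML key.
  const apiKey = env["OPENAI_API_KEY"] ?? yaml.openai_api_key;
  if (!apiKey) {
    // A real server would log this and exit with a nonzero code.
    throw new Error(
      "openai_api_key is required (set OPENAI_API_KEY or add it to config.yaml)"
    );
  }
  return {
    ...defaults,
    ...yaml,
    openai_api_key: apiKey,
  };
}
```

The spread order is the whole trick: later spreads win, so defaults sit at the bottom and the explicitly resolved API key sits on top.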
Tool Parameters: The openai_conversation tool should accept:
- input (required): The message to send
- previous_response_id (optional): For continuing conversations
- instructions (optional): System/developer messages for context
- model (optional): Specific OpenAI model to use
- temperature (optional): Response creativity (0-2)
- max_output_tokens (optional): Response length limit
- store (optional): Whether to store conversation for future reference
- reasoning (optional): Enable reasoning for o-series models
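The prompt specifies Zod for validation; to keep this sketch dependency-free, here are the equivalent checks written by hand, so the parameter shapes and ranges above are concrete. The interface and function names are illustrative.

```typescript
// Dependency-free sketch of the validation a Zod schema would perform
// for the openai_conversation tool's parameters.

interface ToolParams {
  input: string;
  previous_response_id?: string;
  instructions?: string;
  model?: string;
  temperature?: number;
  max_output_tokens?: number;
  store?: boolean;
  reasoning?: boolean;
}

function validateParams(raw: Record<string, unknown>): ToolParams {
  if (typeof raw.input !== "string" || raw.input.length === 0) {
    throw new Error("input is required and must be a non-empty string");
  }
  if (raw.temperature !== undefined) {
    const t = raw.temperature;
    if (typeof t !== "number" || t < 0 || t > 2) {
      throw new Error("temperature must be a number between 0 and 2");
    }
  }
  if (
    raw.max_output_tokens !== undefined &&
    (!Number.isInteger(raw.max_output_tokens) ||
      (raw.max_output_tokens as number) < 1)
  ) {
    throw new Error("max_output_tokens must be a positive integer");
  }
  return raw as unknown as ToolParams;
}
```

Each thrown message doubles as the "meaningful error message" the prompt requires, since MCP clients surface tool errors to the model.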
Configuration Structure:
- YAML-based config with openai_api_key (required, but overridable by OPENAI_API_KEY env var)
- Optional defaults for model, temperature, max tokens, and request timeout
- Include .env file support for local development
- Search multiple paths with clear precedence order documented
- Schema validation for all configuration with helpful error messages
Output Requirements:
- Return structured JSON responses with conversation_id, response_id, message content, model used, token usage, and reasoning summary
- Use consistent error envelope: {"error": {"code": "ERROR_TYPE", "message": "...", "retry_after": seconds}}
- Include proper TypeScript types and interfaces, plus an OpenAPI 3.1 specification
- Provide complete package.json with all dependencies and both CommonJS/ESM builds
- Include build scripts, TypeScript configuration, ESLint, and Prettier setup
- Add comprehensive README with architecture overview, configuration precedence, and examples
- Include structured logging with correlation IDs and appropriate log levels
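The structured-logging requirement can be sketched in a few lines. A real server would likely reach for a library such as pino; this hypothetical makeLogger just shows the log-line shape, and it returns the serialized line rather than printing so the shape is easy to inspect.

```typescript
// Illustrative structured logger: every line is JSON carrying a
// correlation ID, a level, a timestamp, and arbitrary extra fields.

type LogLevel = "debug" | "info" | "warn" | "error";

function makeLogger(correlationId: string) {
  const log = (
    level: LogLevel,
    message: string,
    fields: Record<string, unknown> = {}
  ): string =>
    JSON.stringify({
      ts: new Date().toISOString(),
      level,
      correlation_id: correlationId,
      message,
      ...fields,
    });
  return {
    info: (msg: string, fields?: Record<string, unknown>) => log("info", msg, fields),
    error: (msg: string, fields?: Record<string, unknown>) => log("error", msg, fields),
  };
}
```

Generating one correlation ID per incoming tool call and threading it through every log line is what lets you reconstruct a single request's path later.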
Integration:
- Must work with Claude Desktop MCP configuration
- Include example configuration snippets for both macOS and Windows
- Provide clear installation and setup steps
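For reference, a Claude Desktop entry for a server like this generally looks like the snippet below, added to claude_desktop_config.json. The server name and file path are placeholders you would adapt to your build output.

```json
{
  "mcpServers": {
    "openai-conversation": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```

On macOS the file lives under ~/Library/Application Support/Claude/, and on Windows under %APPDATA%\Claude\.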
Please generate all necessary files including:
- Complete TypeScript source code (src/index.ts) with proper error handling and retry logic
- Package configuration (package.json, tsconfig.json, .eslintrc.json, .prettierrc)
- Configuration templates (config.yaml, .env.example)
- Comprehensive documentation (README.md, CHANGELOG.md)
- Development tooling (Dockerfile, .dockerignore, basic GitHub Actions CI)
- OpenAPI specification (openapi.yaml) describing the tool interface
- Usage examples, troubleshooting guide, and FAQ section
Example Response Formats: Success: {"conversation_id": "conv_123", "response_id": "resp_456", "message": "...", "model": "gpt-4o", "usage": {...}, "reasoning": null}
Error: {"error": {"code": "OPENAI_RATE_LIMIT", "message": "Rate limit exceeded", "retry_after": 20}}
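The two example formats above map naturally onto a TypeScript discriminated union; checking for the error key lets callers narrow the type. This sketch mirrors the example payloads, nothing more.

```typescript
// Types matching the example success and error response formats.

interface SuccessResponse {
  conversation_id: string;
  response_id: string;
  message: string;
  model: string;
  usage: Record<string, number>;
  reasoning: string | null;
}

interface ErrorResponse {
  error: {
    code: string;
    message: string;
    retry_after?: number;
  };
}

type ToolResponse = SuccessResponse | ErrorResponse;

// Type guard: presence of "error" distinguishes the two shapes.
function isError(r: ToolResponse): r is ErrorResponse {
  return "error" in r;
}
```

Keeping the error envelope structurally disjoint from the success shape is what makes the single "error" in r check a safe discriminator.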
The server should be production-ready with proper error handling, type safety, and user-friendly configuration management.