---
title: AI Prompt Optimizer
emoji: 🚀
colorFrom: blue
colorTo: green
sdk: docker
app_port: 8501
tags:
  - streamlit
  - ai
  - llm
  - optimization
  - prompt-engineering
pinned: false
short_description: AI prompt optimizer with cost reduction and LLM support
license: mit
---

# AI Prompt Optimizer

An advanced Streamlit web application that helps reduce the cost of using large language models (LLMs) by intelligently optimizing prompts. The app shortens prompts by removing filler words, simplifying phrases, and applying sophisticated linguistic techniques, thereby reducing the number of tokens sent to APIs.

## Features

- **Advanced Prompt Optimization**: Rule-based optimization using spaCy and linguistic techniques
- **LLM-based Optimization**: 8 specialized personas for context-aware optimization
- **Real-time Cost Analysis**: Accurate token counting and cost-savings calculation
- **Multi-Model Support**: GPT-4, GPT-5, Claude, LLaMA 2, and custom models
- **Professional UI**: Modern, gradient-based interface with responsive design
- **Tavily Search Integration**: Enhanced prompts with real-time web information
- **MCP Server Support**: Model Context Protocol integration for advanced workflows

## Technologies Used

- **Streamlit**: Modern web interface
- **spaCy**: Natural language processing
- **tiktoken**: Accurate token counting
- **LangChain**: LLM integration and metadata tracking
- **Tavily**: Web search API integration

## Usage

1. Enter your prompt text
2. Choose an optimization method (Local or LLM-based)
3. Select a model and persona (for LLM optimization)
4. Click "Optimize" to see results and cost savings

Perfect for developers, content creators, and AI enthusiasts looking to reduce LLM costs while maintaining prompt effectiveness.

## Deployment on Hugging Face Spaces

This app is ready to deploy on Hugging Face Spaces. To deploy:

1. **Fork this repository** or upload the files to your Hugging Face Space
2. **Set up environment variables** in your Space settings:
   - `AIMLAPI_API_KEY`: Get from [AIMLAPI](https://aimlapi.com) (required for LLM-based optimization)
   - `TAVILY_API_KEY`: Get from [Tavily](https://tavily.com) (required for agent functionality)
3. **Configure your Space**:
   - SDK: Docker
   - App port: 8501
   - Hardware: CPU basic (recommended)

The app will automatically detect missing API keys and allow users to enter them manually if needed.

## Local Development

1. Clone the repository
2. Install dependencies: `pip install -r requirements.txt`
3. Copy `.env.example` to `.env` and add your API keys
4. Run: `streamlit run src/app.py`