promptlifter 0.0.1

- **Summary**: LLM-powered contextual expansion using LangGraph
- **Author**: PromptLifter Team
- **License**: MIT
- **Requires Python**: >=3.8
- **Homepage**: https://github.com/Thinkata/promptlifter
- **Keywords**: llm, langgraph, research, ai, machine-learning, context-expansion, search, vector-search
- **Uploaded**: 2025-08-06 00:40:17
# PromptLifter

A LangGraph-powered context extender that prioritizes custom/local LLM endpoints (such as Llama served via Ollama), falls back to commercial models, and orchestrates web search and vector search to produce structured, expert-level answers to complex queries.

[![PyPI version](https://badge.fury.io/py/promptlifter.svg)](https://badge.fury.io/py/promptlifter)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Tests](https://github.com/promptlifter/promptlifter/workflows/CI/badge.svg)](https://github.com/promptlifter/promptlifter/actions)

## ✨ Features

- **Subtask Decomposition**: Automatically breaks complex queries into focused research subtasks using custom/local LLMs (Llama, Ollama, etc.)
- **Parallel Processing**: Executes web search and vector search simultaneously for each subtask
- **Context-Aware Generation**: Combines web results and internal knowledge for comprehensive answers
- **Structured Output**: Produces well-organized, research-quality responses
- **LangGraph Orchestration**: Leverages LangGraph for robust workflow management

## 🚀 Quick Start

### Installation

#### From PyPI (Recommended)
```bash
pip install promptlifter
```

#### From Source
```bash
git clone https://github.com/promptlifter/promptlifter
cd promptlifter
pip install -e .
```

### Basic Usage

```python
import asyncio

from promptlifter import build_graph

async def main():
    # Build the workflow graph
    graph = build_graph()

    # Run a research query (ainvoke is async, so it needs an event loop)
    result = await graph.ainvoke({
        "input": "Research quantum computing trends and applications"
    })

    print(result["final_output"])

asyncio.run(main())
```

### Command Line Usage

```bash
# Interactive mode
promptlifter --interactive

# Single query
promptlifter --query "Research AI in healthcare applications"

# Save results to file
promptlifter --query "Quantum computing research" --save results.json
```

## 🏗️ Architecture

```
promptlifter/
├── promptlifter/          # Main package
│   ├── __init__.py        # Package initialization
│   ├── main.py            # Main application entry point
│   ├── graph.py           # LangGraph workflow definition
│   ├── config.py          # Configuration and environment variables
│   ├── logging_config.py  # Logging configuration
│   └── nodes/             # Individual workflow nodes
│       ├── __init__.py    # Nodes package initialization
│       ├── split_input.py # Query decomposition
│       ├── llm_service.py # LLM service with rate limiting
│       ├── embedding_service.py # Embedding service
│       ├── run_tavily_search.py # Web search integration
│       ├── run_pinecone_search.py # Vector search integration
│       ├── compose_contextual_prompt.py # Prompt composition
│       ├── run_subtask_llm.py # LLM processing
│       ├── subtask_handler.py # Parallel subtask orchestration
│       └── gather_and_compile.py # Final result compilation
├── tests/                 # Test suite
│   ├── __init__.py        # Test package initialization
│   ├── conftest.py        # Pytest fixtures
│   ├── test_config.py     # Configuration tests
│   ├── test_graph.py      # Graph tests
│   └── test_nodes.py      # Node tests
├── setup.py               # Package setup
├── pytest.ini            # Pytest configuration
├── requirements.txt       # Dependencies
├── .env                   # Environment variables
└── README.md             # This file
```

## 🔧 Setup

### 1. Clone the Repository

```bash
git clone https://github.com/promptlifter/promptlifter
cd promptlifter
```

### 2. Install Dependencies

#### Option 1: Install from Source
```bash
pip install -e .
```

#### Option 2: Install with Development Dependencies
```bash
pip install -e ".[dev]"
```

#### Option 3: Install with Test Dependencies
```bash
pip install -e ".[test]"
```

### 3. Configure Environment Variables

Copy the example environment file and configure your API keys:

```bash
cp env.example .env
```

Edit `.env` with your configuration:

```env
# Custom LLM Configuration (Primary - Local Models or OpenAI-Compatible APIs)
CUSTOM_LLM_ENDPOINT=http://localhost:11434
CUSTOM_LLM_MODEL=llama3.1
CUSTOM_LLM_API_KEY=

# LLM Provider Configuration (Choose ONE provider)
LLM_PROVIDER=custom  # custom, openai, anthropic, google

# Embedding Configuration (Choose ONE provider)
EMBEDDING_PROVIDER=custom  # custom, openai, anthropic
EMBEDDING_MODEL=text-embedding-3-small  # Model name for embeddings

# Commercial LLM Configuration (API keys for non-custom providers)
OPENAI_API_KEY=your-openai-api-key-here
ANTHROPIC_API_KEY=your-anthropic-api-key-here
GOOGLE_API_KEY=your-google-api-key-here

# Search and Vector Configuration (Optional)
TAVILY_API_KEY=your-tavily-api-key-here
PINECONE_API_KEY=your-pinecone-api-key-here
PINECONE_INDEX=your-pinecone-index-name-here
PINECONE_NAMESPACE=research

# Pinecone Search Configuration (Optional)
PINECONE_TOP_K=10                    # Number of results (default: 10)
PINECONE_SIMILARITY_THRESHOLD=0.7    # Minimum similarity (0.0-1.0)
PINECONE_INCLUDE_SCORES=true         # Show similarity scores
PINECONE_FILTER_BY_SCORE=true        # Filter by threshold
```

**Note**: PromptLifter uses a simplified configuration approach. You specify exactly which LLM and embedding providers to use, eliminating cascading fallbacks for more predictable behavior.
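For illustration, here is a minimal sketch of how such a single-provider selection might be read from the environment. The helper name is hypothetical, and this is not necessarily how the package's `config.py` does it:

```python
import os

def resolve_llm_provider() -> str:
    """Hypothetical helper: read the one configured LLM provider.

    The simplified-configuration idea: exactly one provider is chosen
    up front, and an unknown value fails fast instead of cascading to
    another provider.
    """
    provider = os.getenv("LLM_PROVIDER", "custom").strip().lower()
    if provider not in {"custom", "openai", "anthropic", "google"}:
        raise ValueError(
            f"Unsupported LLM_PROVIDER: {provider!r}; "
            "expected one of: custom, openai, anthropic, google"
        )
    return provider
```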

### 4. Set Up LLM Providers

#### Option 1: Local LLM (Recommended - No API Keys Needed)

##### Using Ollama (Easiest)
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull Llama 3.1 model
ollama pull llama3.1

# Start Ollama server
ollama serve
```

##### Using Other Local LLM Servers
- **LM Studio**: Run with OpenAI-compatible API
- **vLLM**: Fast inference server
- **Custom endpoints**: Any OpenAI-compatible API

#### Option 2: OpenAI-Compatible APIs (Requires API Keys)

##### Lambda Labs Setup
1. Get API key from https://cloud.lambdalabs.com/
2. Add to `.env`:
```env
CUSTOM_LLM_ENDPOINT=https://api.lambda.ai/v1
CUSTOM_LLM_MODEL=llama-4-maverick-17b-128e-instruct-fp8
CUSTOM_LLM_API_KEY=your-lambda-api-key-here
```

##### Together AI Setup
1. Get API key from https://together.ai/
2. Add to `.env`:
```env
CUSTOM_LLM_ENDPOINT=https://api.together.xyz/v1
CUSTOM_LLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
CUSTOM_LLM_API_KEY=your-together-api-key-here
```

##### Perplexity AI Setup
1. Get API key from https://www.perplexity.ai/
2. Add to `.env`:
```env
CUSTOM_LLM_ENDPOINT=https://api.perplexity.ai
CUSTOM_LLM_MODEL=llama-3.1-8b-instruct
CUSTOM_LLM_API_KEY=your-perplexity-api-key-here
```

#### Option 3: Commercial LLM (Fallback)

##### OpenAI Setup
1. Get API key from https://platform.openai.com/api-keys
2. Add to `.env`:
```env
OPENAI_API_KEY=sk-your-actual-key-here
```

##### Anthropic Setup
1. Get API key from https://console.anthropic.com/
2. Add to `.env`:
```env
ANTHROPIC_API_KEY=sk-ant-your-actual-key-here
```

##### Google Setup
1. Get API key from https://makersuite.google.com/app/apikey
2. Add to `.env`:
```env
GOOGLE_API_KEY=your-actual-key-here
```

### 5. Run the Application

#### Interactive Mode
```bash
promptlifter --interactive
# or
python -m promptlifter.main --interactive
```

#### Single Query Mode
```bash
promptlifter --query "Research quantum computing trends"
# or
python -m promptlifter.main --query "Research quantum computing trends"
```

### 6. Run Tests

```bash
# Run all tests
python run_tests.py

# Run specific tests
python run_tests.py config

# Run with pytest directly
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=promptlifter --cov-report=html
```

#### Save Results
```bash
promptlifter --query "AI in healthcare" --save result.json
```

## 🚀 How It Works

1. **Query Input**: User provides a complex research query
2. **Subtask Decomposition**: The configured custom/local LLM breaks the query into 3-5 focused subtasks
3. **Parallel Research**: For each subtask:
   - Tavily performs web search
   - Pinecone searches internal knowledge base
   - Results are combined into a contextual prompt
4. **LLM Processing**: Custom/local LLMs generate expert-level responses for each subtask
5. **Final Compilation**: All subtask results are compiled into a structured final response (a graph-wiring sketch follows)
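For concreteness, here is a minimal, hypothetical sketch of how such a pipeline could be wired with LangGraph's `StateGraph`. The node names mirror the modules listed under Architecture, but the node bodies are placeholders, not the package's actual implementations:

```python
from typing import List, TypedDict

from langgraph.graph import END, StateGraph

class ResearchState(TypedDict, total=False):
    input: str
    subtasks: List[str]
    subtask_results: List[str]
    final_output: str

def split_input(state: ResearchState) -> dict:
    # Placeholder: the real node asks the configured LLM for 3-5 subtasks.
    return {"subtasks": [state["input"]]}

def subtask_handler(state: ResearchState) -> dict:
    # Placeholder: the real node fans out web + vector search per subtask.
    return {"subtask_results": [f"Findings for: {t}" for t in state["subtasks"]]}

def gather_and_compile(state: ResearchState) -> dict:
    return {"final_output": "\n\n".join(state["subtask_results"])}

workflow = StateGraph(ResearchState)
workflow.add_node("split_input", split_input)
workflow.add_node("subtask_handler", subtask_handler)
workflow.add_node("gather_and_compile", gather_and_compile)
workflow.set_entry_point("split_input")
workflow.add_edge("split_input", "subtask_handler")
workflow.add_edge("subtask_handler", "gather_and_compile")
workflow.add_edge("gather_and_compile", END)
graph = workflow.compile()  # graph.invoke({"input": "..."}) runs the pipeline
```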

## 📊 Example Output

```
# Research Response: Write a research summary on quantum computing trends

## Summary

This research response addresses the query: "Write a research summary on quantum computing trends"

## Detailed Findings

### Current State of Quantum Computing Hardware
[Comprehensive analysis of current quantum hardware developments...]

### Key Research Papers and Breakthroughs
[Analysis of recent quantum computing research papers...]

### Open Questions and Future Directions
[Discussion of remaining challenges and future research directions...]

## Conclusion

The above findings provide a comprehensive analysis addressing the original research query...
```

## 🔍 Node Details

### `split_input.py`
- Uses custom/local LLMs to decompose complex queries into focused subtasks
- Ensures each subtask is specific and researchable

### `run_tavily_search.py`
- Performs web search using the Tavily API (see the sketch below)
- Returns relevant content from recent web sources
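A minimal sketch using the `tavily-python` client; treat it as illustrative rather than the node's actual code:

```python
import os

from tavily import TavilyClient

def tavily_search(query: str) -> str:
    """Illustrative web search returning concatenated result snippets."""
    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    response = client.search(query, max_results=5)
    # Each result carries a title, URL, and an extracted content snippet.
    return "\n\n".join(
        f"{r['title']} ({r['url']})\n{r['content']}"
        for r in response["results"]
    )
```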

### `run_pinecone_search.py`
- Searches the internal knowledge base using Pinecone (see the query sketch below)
- Leverages vector embeddings for semantic search
- **NEW**: Configurable relevance scoring and filtering
- **NEW**: Proper text embedding using multiple providers
- **NEW**: Similarity threshold filtering and score display
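A minimal sketch of the underlying query using the v3+ `pinecone` client. The `text` metadata field and the pre-computed query vector are assumptions; treat this as illustrative, not the node's actual code:

```python
import os
from typing import List

from pinecone import Pinecone

def pinecone_search(query_vector: List[float]) -> List[str]:
    """Illustrative Pinecone query with threshold filtering."""
    pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    index = pc.Index(os.environ["PINECONE_INDEX"])
    response = index.query(
        vector=query_vector,
        top_k=int(os.getenv("PINECONE_TOP_K", "10")),
        namespace=os.getenv("PINECONE_NAMESPACE", "research"),
        include_metadata=True,
    )
    threshold = float(os.getenv("PINECONE_SIMILARITY_THRESHOLD", "0.7"))
    # Keep only matches at or above the similarity threshold, prefixing
    # each with its score (the PINECONE_INCLUDE_SCORES display format).
    return [
        f"[Score: {match.score:.3f}] {match.metadata.get('text', '')}"
        for match in response.matches
        if match.score >= threshold
    ]
```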

### `compose_contextual_prompt.py`
- Combines web and vector search results
- Creates structured prompts for LLM processing (illustrated below)
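As an illustration, a hypothetical composition step might look like this (the exact prompt wording is an assumption):

```python
def compose_contextual_prompt(subtask: str, web_results: str, vector_results: str) -> str:
    """Illustrative prompt assembly combining both context sources."""
    return (
        f"You are an expert researcher. Subtask: {subtask}\n\n"
        f"Web search context:\n{web_results or 'No web results available.'}\n\n"
        f"Internal knowledge context:\n{vector_results or 'No internal results available.'}\n\n"
        "Using the context above, write an expert-level research summary."
    )
```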

### `run_subtask_llm.py`
- Processes contextual prompts with custom/local LLMs
- Generates expert-level research summaries

### `subtask_handler.py`
- Orchestrates parallel processing of all subtasks
- Manages async execution for optimal performance (a minimal fan-out sketch follows)
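A minimal sketch of this fan-out, assuming a hypothetical `process_subtask` coroutine in place of the node's real per-subtask pipeline:

```python
import asyncio
from typing import List

async def process_subtask(subtask: str) -> str:
    # Placeholder: the real pipeline runs web search, vector search,
    # prompt composition, and the LLM call for this subtask.
    return f"Findings for: {subtask}"

async def handle_subtasks(subtasks: List[str]) -> List[str]:
    # Run every subtask concurrently; gather preserves input order.
    return await asyncio.gather(*(process_subtask(t) for t in subtasks))
```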

### `gather_and_compile.py`
- Collects all subtask results
- Compiles into structured final response

## 🎯 Simplified Configuration

PromptLifter uses a simplified configuration approach that eliminates cascading fallbacks:

### **LLM Provider Configuration**
- **`LLM_PROVIDER`**: Choose one provider: `custom`, `openai`, `anthropic`, or `google`
- **`CUSTOM_LLM_ENDPOINT`**: Your custom LLM endpoint (e.g., Lambda Labs, Ollama)
- **`CUSTOM_LLM_MODEL`**: Model name for your custom endpoint

### **Embedding Provider Configuration**
- **`EMBEDDING_PROVIDER`**: Choose one provider: `custom`, `openai`, or `anthropic`
- **`EMBEDDING_MODEL`**: Specific embedding model to use (e.g., `text-embedding-3-small`)

### **Benefits of Simplified Configuration**
- ✅ **No cascading fallbacks**: Uses only the configured provider
- ✅ **Predictable behavior**: No unexpected provider switches
- ✅ **Better error handling**: Clear failure messages for the configured provider
- ✅ **Reduced complexity**: Easier to debug and configure

## 🎯 Relevance Scoring & Configuration

PromptLifter now includes advanced Pinecone relevance scoring with configurable parameters:

### **Similarity Threshold Filtering**
- Set `PINECONE_SIMILARITY_THRESHOLD` (0.0-1.0) to filter out low-relevance results
- Default: 0.7 (70% similarity required)
- Lower values = more results, higher values = higher quality

### **Score Display Options**
- Enable `PINECONE_INCLUDE_SCORES=true` to see similarity scores in results
- Format: `[Score: 0.892] Your search result content...`

### **Result Count Control**
- Configure `PINECONE_TOP_K` to control how many results to retrieve
- Default: 10 results
- Higher values = more comprehensive search

### **Smart Filtering**
- Enable `PINECONE_FILTER_BY_SCORE=true` to automatically filter by threshold
- Provides a summary of filtered vs. returned results

### **Multi-Provider Embeddings**
- Automatic fallback between embedding providers (sketched below):
  1. Custom LLM (Ollama, etc.)
  2. OpenAI Embeddings
  3. Anthropic Embeddings
  4. Hash-based fallback
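A sketch of this chain under assumed provider call signatures (each entry in `providers` is a hypothetical `text -> vector` callable). Note that the hash-based fallback is deterministic but carries no semantic meaning:

```python
import hashlib
from typing import Callable, List

def hash_fallback_embedding(text: str, dim: int = 256) -> List[float]:
    """Deterministic, non-semantic embedding used only as a last resort."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [digest[i % len(digest)] / 255.0 for i in range(dim)]

def embed_with_fallback(
    text: str, providers: List[Callable[[str], List[float]]]
) -> List[float]:
    """Try each provider in order; fall back to the hash embedding."""
    for provider in providers:  # e.g. [custom_embed, openai_embed, anthropic_embed]
        try:
            return provider(text)
        except Exception:
            continue  # this provider failed; try the next one
    return hash_fallback_embedding(text)
```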

### **Embedding Optimization**

For optimal performance, use these recommended settings:

```env
# Embedding Configuration
EMBEDDING_PROVIDER=openai
EMBEDDING_MODEL=text-embedding-3-small

# Pinecone Search Configuration (Optimized)
PINECONE_SIMILARITY_THRESHOLD=0.2    # Lower threshold improves recall
PINECONE_FILTER_BY_SCORE=true        # Keep filtering enabled
PINECONE_INCLUDE_SCORES=true         # Show scores for debugging
```

**Expected Results:**
- ✅ Most Pinecone results will be accepted
- ✅ Better hybrid search (web + vector)
- ✅ More comprehensive research responses
- ✅ Faster processing (no fallback embeddings)

## 🛠️ Configuration

The application uses environment variables for configuration:

**Primary (Custom LLMs):**
- `CUSTOM_LLM_ENDPOINT`: Your local LLM endpoint (default: http://localhost:11434 for Ollama)
- `CUSTOM_LLM_MODEL`: Your local model name (default: llama3.1)
- `CUSTOM_LLM_API_KEY`: Optional API key for custom endpoints

**Fallback (Commercial LLMs):**
- `OPENAI_API_KEY`: Your OpenAI API key
- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `GOOGLE_API_KEY`: Your Google API key

**Search and Vector:**
- `TAVILY_API_KEY`: Your Tavily search API key
- `PINECONE_API_KEY`: Your Pinecone API key
- `PINECONE_INDEX`: Your Pinecone index name
- `PINECONE_NAMESPACE`: Your Pinecone namespace (default: research)

**Pinecone Search Configuration** (a typed-parsing sketch follows this list):
- `PINECONE_TOP_K`: Number of results to retrieve (default: 10)
- `PINECONE_SIMILARITY_THRESHOLD`: Minimum similarity score 0.0-1.0 (default: 0.7)
- `PINECONE_INCLUDE_SCORES`: Include similarity scores in output (true/false)
- `PINECONE_FILTER_BY_SCORE`: Filter results by similarity threshold (true/false)
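As an illustration, these variables might be parsed with typed defaults like so (a sketch, not the package's actual `config.py`):

```python
import os

def env_bool(name: str, default: bool) -> bool:
    """Parse a true/false environment variable with a default."""
    return os.getenv(name, str(default)).strip().lower() in {"1", "true", "yes"}

PINECONE_TOP_K = int(os.getenv("PINECONE_TOP_K", "10"))
PINECONE_SIMILARITY_THRESHOLD = float(os.getenv("PINECONE_SIMILARITY_THRESHOLD", "0.7"))
PINECONE_INCLUDE_SCORES = env_bool("PINECONE_INCLUDE_SCORES", True)
PINECONE_FILTER_BY_SCORE = env_bool("PINECONE_FILTER_BY_SCORE", True)
```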

## 🔧 Troubleshooting

### Search Issues

If you're experiencing search-related problems, use these debugging tools:

#### 1. Run the Debug Script
```bash
python debug_search.py
```
This will check your configuration and test both search services.

#### 2. Interactive Configuration Fixer
```bash
python fix_search_config.py
```
This interactive script helps you set up your API keys and configuration properly.

#### Common Issues and Solutions

**Tavily 401 Error:**
- Get a free API key from [Tavily](https://tavily.com/)
- Add to `.env`: `TAVILY_API_KEY=your-key-here`

**Pinecone Connection Issues:**
- Get a free API key from [Pinecone](https://www.pinecone.io/)
- Create an index in your Pinecone dashboard
- Add to `.env`:
  ```
  PINECONE_API_KEY=your-key-here
  PINECONE_INDEX=your-index-name
  ```

**No Search Results:**
- Lower the similarity threshold: `PINECONE_SIMILARITY_THRESHOLD=0.3`
- Disable filtering: `PINECONE_FILTER_BY_SCORE=false`
- Check if your Pinecone index has data

**Timeout Errors:**
- Increase timeout values in the code
- Check your internet connection
- Verify API service status

### Logging

Enable detailed logging to debug issues:
```python
import logging
logging.basicConfig(level=logging.INFO)  # use logging.DEBUG for more detail
```

## 📦 Development & Release

### For Developers

```bash
# Install development dependencies
pip install -e ".[dev]"

# Run quality checks
tox

# Run tests
pytest tests/ -v

# Format code
black promptlifter tests
```

### For Contributors

See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for detailed development guidelines.

### Release Process

```bash
# Setup PyPI credentials
python scripts/setup_pypi.py

# Test release to TestPyPI
python scripts/release.py test

# Release to PyPI
python scripts/release.py release
```

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for detailed guidelines.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- [LangGraph](https://github.com/langchain-ai/langgraph) for workflow orchestration
- [Ollama](https://ollama.ai/) for local LLM deployment
- [Meta](https://ai.meta.com/) for Llama models
- [Tavily](https://tavily.com/) for web search capabilities
- [Pinecone](https://www.pinecone.io/) for vector search infrastructure 

            
