# LitAI
AI-powered literature review assistant that understands your research questions and automatically finds papers, extracts insights, and synthesizes findings - all through natural conversation.
## Why LitAI?
LitAI accelerates your research by turning hours of paper reading into minutes of focused insights:
- **Find relevant papers fast**: Natural language search across millions of papers
- **Extract key insights**: AI reads papers and pulls out claims with evidence
- **Synthesize findings**: Ask questions across multiple papers and get cited answers
- **Build your collection**: Manage PDFs locally with automatic downloads from ArXiv
Perfect for:
- Literature reviews for research papers
- Understanding a new field quickly
- Finding solutions to technical problems
- Discovering contradictions in existing work
- Building comprehensive reading lists
💡 **Tip**: Use the `/questions` command to see research-unblocking questions organized by phase - from debugging experiments to contextualizing results.
## Installation
### Prerequisites
- Python 3.11 or higher
- OpenAI API key ([Get one here](https://platform.openai.com/api-keys))
First [install uv](https://docs.astral.sh/uv/getting-started/installation/), then:
```bash
# Install litai globally
uv tool install litai-research
# Alternative: using pipx
pipx install litai-research
```
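To check that the install landed on your PATH, a quick generic shell check (nothing LitAI-specific here) is:

```shell
# Verify the litai executable is reachable; prints a hint if not
command -v litai || echo "litai not on PATH; check your uv/pipx bin directory"
```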
### Updates
```bash
# Get latest stable updates
uv tool upgrade litai-research
# Alternative: using pipx
pipx upgrade litai-research
```
### Development/Pre-release
For the latest features (may have bugs):
```bash
# Install pre-release version
uv tool install --prerelease=allow litai-research
# Upgrade to latest pre-release
uv tool upgrade --prerelease=allow litai-research
# Alternative: using pipx (pre-releases are enabled via pip arguments)
pipx install --pip-args=--pre litai-research
pipx upgrade --pip-args=--pre litai-research
```
## Configuration
Set your OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY=sk-...
```
Get your API key from [platform.openai.com/api-keys](https://platform.openai.com/api-keys)
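To persist the key across terminal sessions, you can append the export to your shell profile (zsh shown as an assumption; use `~/.bashrc` for bash, then open a new shell):

```shell
# Append the key to your shell profile so new sessions inherit it
# (zsh shown; use ~/.bashrc for bash)
echo 'export OPENAI_API_KEY=sk-...' >> ~/.zshrc
```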
**Note:** For best results, use the most capable models; they handle complex research questions and tool calling better. LitAI defaults to GPT-5, the most capable model. You can switch to GPT-5-mini for faster, more affordable processing, or use any other model offered by OpenAI.
**💡 Tip:** You may be eligible for complimentary tokens by sharing data with OpenAI for model improvement. [Learn more about the data sharing program](https://help.openai.com/en/articles/10306912-sharing-feedback-evaluation-and-fine-tuning-data-and-api-inputs-and-outputs-with-openai).
<details>
<summary>Advanced Configuration</summary>
Configure LitAI using the `/config` command:
```bash
# Show current configuration
/config show
# Change model (defaults to gpt-5)
/config set llm.model gpt-5-mini # Use the faster, more affordable model
# Reset to defaults
/config reset
```
Configuration is stored in `~/.litai/config.json` and persists across sessions.
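For reference, a `config.json` along these lines might result from the settings above. This is an illustrative sketch inferred from the dotted key names (`llm.model`, `display.list_columns`); the actual schema may differ:

```json
{
  "llm": { "model": "gpt-5" },
  "tool_approval": true,
  "display": { "list_columns": "title,authors,tags,notes" }
}
```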
</details>
## Getting Started
### 1. Launch LitAI
```bash
litai
```
### 2. Set Up Your Research Context (Recommended)
Provide context about your research to get more tailored responses:
```bash
/prompt
```
This opens your default editor with a template where you can describe:
- **Research Context**: Your area of study and current focus
- **Background & Expertise**: Your academic/professional background
- **Specific Interests**: Particular topics, methods, or problems you're investigating
- **Preferences**: How you prefer information to be presented or synthesized
**Example research context:**
```markdown
## Research Context
I'm a PhD student researching efficient transformer architectures for edge deployment. Currently focusing on knowledge distillation and pruning techniques for large language models.
## Background & Expertise
- Strong background in deep learning and PyTorch
- Experience with model compression techniques
- Familiar with transformer architectures and attention mechanisms
## Specific Interests
- Structured pruning methods that maintain model accuracy
- Hardware-aware neural architecture search
- Quantization techniques for transformers
## Preferences
- When synthesizing papers, please highlight actual compression ratios achieved
- I prefer concrete numbers over vague claims
- Interested in both positive and negative results
```
**Why this matters**: This context gets automatically included in every AI conversation, helping LitAI understand your expertise level and tailor responses accordingly. Without it, LitAI treats every user the same way.
### 3. Understanding LitAI's Two Modes
**Normal Mode** - Build your research context:
```bash
normal ▸ "Find papers about attention mechanisms"
normal ▸ "Add the Transformer paper to my collection"
normal ▸ /papers # View your collection
normal ▸ /note 1 # Add personal notes
normal ▸ /tag 1 -a transformers # Organize with tags
```
**Synthesis Mode** - Ask questions and analyze:
```bash
normal ▸ /synthesize # Enter synthesis mode
synthesis ▸ "What are the key findings across my transformer papers?"
synthesis ▸ "How do attention mechanisms work?"
synthesis ▸ "Compare BERT vs GPT architectures"
synthesis ▸ "Go deeper on the mathematical foundations"
synthesis ▸ exit # Return to normal mode
```
**The Workflow:**
1. **Normal Mode**: Search, collect, and organize papers
2. **Synthesis Mode**: Ask research questions and get AI analysis
3. **Switch freely**: `/synthesize` to enter, `exit` to return
### 4. Build Your Research Workflow
**For New Research Areas:**
1. **Normal Mode**: `"Find recent papers about [topic]"` + `"Add the most cited papers"`
2. **Synthesis Mode**: `"What are the main approaches in this field?"` + follow-up questions
**For Literature Reviews:**
1. **Normal Mode**: Build collection, add notes (`/note`), organize with tags (`/tag`)
2. **Synthesis Mode**: `"Compare methodologies across my papers"` + deep analysis questions
**For Keeping Current:**
1. **Normal Mode**: `/questions` → See research-unblocking prompts by phase
2. **Synthesis Mode**: Regular Q&A sessions to connect new papers to existing work
> **Key Insight**: Normal mode = building context, Synthesis mode = asking questions
## Features
### 🔍 Paper Discovery & Management
- **Smart Search**: Natural language queries across millions of papers via Semantic Scholar
- **Intelligent Collection**: Automatic duplicate detection and citation key generation
- **PDF Integration**: Automatic ArXiv downloads with local storage
- **Flexible Organization**: Tags, notes, and configurable paper list views
- **Import Support**: BibTeX file import for existing libraries
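As an illustration of the import format, a standard BibTeX entry (ordinary BibTeX fields, not a LitAI-specific schema) looks like:

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and others},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2017}
}
```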
### 🤖 AI-Powered Analysis
- **Key Point Extraction**: Automatically extract main claims with evidence
- **Deep Synthesis**: Interactive synthesis mode for collaborative exploration
- **Context-Aware**: Multiple context depths (abstracts, notes, key points, full text)
- **Agent Notes**: AI-generated insights and summaries for papers
- **Research Context**: Personal research profile for tailored responses
### 💬 Interactive Experience
- **Natural Language Interface**: Chat naturally about your research
- **Command Autocomplete**: Tab completion for all commands and file paths
- **Vi Mode Support**: Optional vi-style keybindings
- **Session Management**: Persistent conversations with paper selections
- **Research Questions**: Built-in prompts to unblock research at any phase
### ⚙️ Advanced Features
- **Configurable Display**: Customize paper list columns and layout
- **Tool Approval System**: Control AI tool usage in all modes (queries and synthesis)
- **Comprehensive Logging**: Debug and track all operations
- **Multi-LLM Support**: OpenAI and Anthropic models with auto-detection
## Command Reference
### Essential Commands
```bash
/find <query> # Search for papers
/add <numbers> # Add papers from search results
/papers [page] # List your collection (with pagination)
/synthesize # Enter interactive synthesis mode
/note <number> # Manage paper notes
/tag <number> -a <tags> # Add tags to papers
/prompt # Set up your research context (recommended)
/questions # Show research-unblocking prompts
/help # Show all commands
```
### Papers Command Options
```bash
/papers --tags # Show all tags with counts
/papers --notes # Show papers with notes
/papers 2 # Show page 2 of collection
```
### Research Context Commands
```bash
/prompt # Edit your research context (opens in editor)
/prompt view # Display your current research context
/prompt append "text" # Add text to your existing context
/prompt clear # Delete your research context
```
### Configuration
```bash
/config show # Display current settings
/config set llm.model gpt-5-mini # Switch to the faster, more affordable model
/config set tool_approval false # Disable approval prompts (all modes)
/config set display.list_columns title,authors,tags,notes
```
> **Note**: Configuration changes require restarting LitAI to take effect
### Normal Mode vs Synthesis Mode
**Normal Mode** - Context building and management:
```bash
/find <query> # Search for papers
/add <numbers> # Add papers from search results
/papers [page] # List your collection
/note <number> # Add your personal notes
/tag <number> -a <tags> # Add tags to papers
/synthesize # Enter synthesis mode
```
**Synthesis Mode** - Question answering and analysis:
```bash
synthesis ▸ "What are the key insights from paper X?"
synthesis ▸ "How do these approaches compare?"
synthesis ▸ "Go deeper on the methodology"
synthesis ▸ "Add AI notes to paper 1" # Ask AI to generate analysis notes
synthesis ▸ /papers # Show full collection
synthesis ▸ /selected # Show papers in current session
synthesis ▸ /context key_points # Change context depth
synthesis ▸ /clear # Clear session (keep selected papers)
synthesis ▸ exit # Return to normal mode
```
### Notes System
- **Personal Notes** (`/note` in normal mode): Your own thoughts and observations
- **AI Notes** (request in synthesis mode): Ask AI to generate insights and summaries for papers
## Data Storage
LitAI stores all data locally in `~/.litai/`:
- `litai.db` - SQLite database with paper metadata and extractions
- `pdfs/` - Downloaded PDF files
- `logs/litai.log` - Application logs for debugging
- `config.json` - User configuration
- `user_prompt.txt` - Personal research context
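Because everything lives in one directory, it is easy to inspect or back up. A small shell sketch using only the paths listed above (defensive, so it is safe to run even before first launch):

```shell
# Inspect LitAI's local data directory
LITAI_DIR="$HOME/.litai"
ls -la "$LITAI_DIR" 2>/dev/null || echo "no LitAI data yet at $LITAI_DIR"

# List tables in the SQLite database, if it exists and sqlite3 is installed
if [ -f "$LITAI_DIR/litai.db" ] && command -v sqlite3 >/dev/null; then
  sqlite3 "$LITAI_DIR/litai.db" ".tables"
fi
```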
## FAQ
### Why do paper searches sometimes fail?
Semantic Scholar's public API can experience high load, leading to search failures. If you encounter frequent issues:
- Wait a few minutes and try again
- Consider requesting a free API key for higher rate limits: [Semantic Scholar API Key Form](https://www.semanticscholar.org/product/api#api-key-form)
## License
This project is open source and available under the [MIT License](LICENSE).
## Acknowledgments
- Built with [Semantic Scholar API](https://www.semanticscholar.org/product/api)
- Powered by OpenAI/Anthropic language models
## Support
- Report issues: [GitHub Issues](https://github.com/harmonbhasin/litai/issues)
- Logs for debugging: `~/.litai/logs/litai.log`
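When filing an issue, it often helps to attach the tail of the log. A defensive snippet using the log path above (guards against the file not existing yet):

```shell
# Show the most recent log lines for bug reports
LOG="$HOME/.litai/logs/litai.log"
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "no log file yet at $LOG"
fi
```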