| Field | Value |
|-------|-------|
| Name | fastccg |
| Version | 0.2.0.post1 |
| Summary | Fast, minimalist, multi-model terminal-based SDK for building, testing, and interacting with LLMs via cloud APIs. |
| Upload time | 2025-07-21 15:31:55 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | >=3.8 |
| License | MIT |
| Keywords | llm, openai, gemini, claude, mistral, terminal, chatbot, sdk, ai |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# FastCCG (Fast Conversational & Completion Gateway)
[Python 3.8+](https://www.python.org/downloads/) · [MIT License](LICENSE) · [PyPI](https://pypi.org/project/fastccg/) · [Stars](https://github.com/mebaadwaheed/fastccg/stargazers) · [Issues](https://github.com/mebaadwaheed/fastccg/issues) · [Docs](https://github.com/mebaadwaheed/fastccg/tree/main/docs)
**FastCCG** is a simple, powerful, and developer-friendly Python library for interacting with Large Language Models (LLMs). It provides a clean, unified API to work with models from leading providers like OpenAI, Google, Anthropic, and Mistral, making it easy to build, test, and deploy AI-powered applications.
## 🚀 Key Features
- **🔄 Unified API**: Switch between different LLM providers with minimal code changes
- **⚡ Async Support**: Built-in asynchronous operations for high-performance applications
- **🧠 Retrieval-Augmented Generation (RAG)**: Build powerful Q&A systems over your own documents
- **✨ Text Embedding**: Convert text into vector representations for semantic search
- **🌊 Streaming**: Real-time response streaming for interactive experiences
- **💾 Session Management**: Save and restore conversation history
- **🖥️ CLI Interface**: Powerful command-line tools for quick testing and interaction
- **🔧 Easy Configuration**: Chainable methods for clean, readable code
- **🛡️ Error Handling**: Robust error handling with custom exceptions
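The "Unified API" point is the heart of the library: caller code is written against one interface, and each provider plugs in behind it. The sketch below illustrates that adapter pattern in plain Python — the class names are invented for illustration and are not FastCCG's internals:

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Minimal common interface: every backend implements ask()."""

    @abstractmethod
    def ask(self, prompt: str) -> str: ...


class EchoProviderA(Provider):
    """Stand-in for one vendor backend (hypothetical)."""

    def ask(self, prompt: str) -> str:
        return "[provider-a] " + prompt


class EchoProviderB(Provider):
    """Stand-in for another vendor backend (hypothetical)."""

    def ask(self, prompt: str) -> str:
        return "[provider-b] " + prompt


def answer(provider: Provider, prompt: str) -> str:
    # Application code is identical no matter which backend is plugged in;
    # switching providers means changing only the object passed in here.
    return provider.ask(prompt)


print(answer(EchoProviderA(), "hello"))  # [provider-a] hello
print(answer(EchoProviderB(), "hello"))  # [provider-b] hello
```

Swapping `gpt_4o` for a Gemini or Mistral model in FastCCG works the same way: the `ask()` call site stays unchanged.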
## 🏗️ Supported Providers
| Provider | Models | Status |
|----------|--------|--------|
| **OpenAI** | GPT-4o, GPT-3.5 Turbo | ✅ Fully Supported |
| **Google** | Gemini 1.5 Pro, Gemini 1.5 Flash | ✅ Fully Supported |
| **Mistral** | Mistral Tiny, Small, Medium | ✅ Fully Supported |
| **Anthropic** | Claude 3 Sonnet | ✅ Fully Supported |
## 📦 Installation
```bash
pip install fastccg
```
## ⚡ Quick Start
```python
import fastccg
from fastccg.models.gpt import gpt_4o
# Add your API key
api_key = fastccg.add_openai_key("sk-...")
# Initialize the model
model = fastccg.init_model(gpt_4o, api_key=api_key)
# Ask a question
response = model.ask("What is the best thing about Large Language Models?")
print(response.content)
```
## 🖥️ CLI Usage
FastCCG comes with a powerful CLI for quick interactions:
```bash
# List available models
fastccg models
# Ask a single question
fastccg ask "What is the capital of France?" --model gpt_4o
# Start an interactive chat session
fastccg chat --model gpt_4o
```
## 🧠 Retrieval-Augmented Generation (RAG)
Build a powerful question-answering system over your own documents with just a few lines of code. FastCCG handles the complexity of embedding, indexing, and context retrieval for you.
```python
import asyncio
import fastccg
from fastccg.models.gpt import gpt_4o
from fastccg.embedding.openai import text_embedding_3_small
from fastccg.rag import RAGModel
# 1. Set up API keys and models
api_key = fastccg.add_openai_key("sk-...")
llm = fastccg.init_model(gpt_4o, api_key=api_key)
embedder = text_embedding_3_small(api_key=api_key)  # use the imported embedding model

# 2. Create and configure the RAG model
rag = RAGModel(llm=llm, embedder=embedder)

# 3. Index your documents
# (the indexing call below is illustrative; see docs/embedding_and_rag.md
# for the exact method your version exposes)
documents = {
    "doc1": "The sky is blue during a clear day.",
    "doc2": "The grass in the park is typically green."
}
rag.add_documents(documents)

# 4. Ask a question related to your documents
async def main():
    response = await rag.ask_async("What color is the sky?")
    print(response.content)
    # The answer is grounded in the indexed context

asyncio.run(main())

# 5. Save your knowledge base for later use
rag.save("my_knowledge.fcvs", pretty_print=True)
```
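Under the hood, a RAG pipeline like the one above embeds every document, then retrieves the entries most similar to the query before prompting the LLM with that context. Here is a dependency-free sketch of the retrieval step, with a toy character-frequency "embedding" standing in for a real model such as `text_embedding_3_small`:

```python
import math


def embed(text):
    """Toy embedding: a 26-dim character-frequency vector over a-z.
    A real embedder returns dense vectors from a trained model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


docs = {
    "doc1": "The sky is blue during a clear day.",
    "doc2": "The grass in the park is typically green.",
}
# "Index" the corpus: embed each document once, up front.
index = {doc_id: embed(text) for doc_id, text in docs.items()}

# Retrieve: embed the query and pick the most similar document.
qvec = embed("What color is the sky?")
best = max(index, key=lambda doc_id: cosine(qvec, index[doc_id]))
print(best)
```

The retrieved text is then prepended to the prompt so the LLM answers from your documents rather than from its training data alone.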
## 🔄 Advanced Features
### Asynchronous Operations
```python
import asyncio
async def main():
# Run multiple prompts concurrently
task1 = model.ask_async("What is the speed of light?")
task2 = model.ask_async("What is the capital of Australia?")
responses = await asyncio.gather(task1, task2)
for response in responses:
print(response.content)
asyncio.run(main())
```
### Streaming Responses
```python
async def stream_example():
    async for chunk in model.ask_stream("Tell me a story"):
        print(chunk.content, end="", flush=True)

asyncio.run(stream_example())
```
### Session Management
```python
# Save conversation
model.save("my_session.json")
# Load conversation later
loaded_model = fastccg.load_model("my_session.json", api_key=api_key)
```
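Conceptually, a saved session persists the model choice plus the message history — enough to resume the conversation later. The exact file schema is FastCCG's business; the sketch below shows a hypothetical shape using only the standard library:

```python
import json
from pathlib import Path

# Hypothetical session layout: which model was used, and the full
# role/content message history needed to resume the chat.
session = {
    "model": "gpt_4o",
    "history": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ],
}

# Save: serialize the session to disk as JSON.
path = Path("my_session.json")
path.write_text(json.dumps(session, indent=2))

# Load: parse the file and replay the history into a fresh model.
restored = json.loads(path.read_text())
print(restored["history"][-1]["content"])  # Paris.
```

Because the format is plain JSON, saved sessions are easy to inspect, diff, and version-control.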
## 📚 Documentation
Comprehensive documentation is available in the [`docs/`](./docs/) directory:
- **[Quick Start Guide](./docs/quick_start.md)** - Get up and running in minutes
- **[CLI Usage](./docs/cli_usage.md)** - Command-line interface guide
- **[FCVS CLI Tool](./docs/fcvs_cli.md)** - Manage `.fcvs` vector store files
- **[Embedding and RAG](./docs/embedding_and_rag.md)** - Guides for embedding and RAG
- **[API Reference](./docs/api_reference.md)** - Complete API documentation
- **[Supported Models](./docs/supported_models.md)** - All available models and providers
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🌟 Why FastCCG?
- **Developer Experience**: Clean, intuitive API that just works
- **Performance**: Built with async-first architecture for scalable applications
- **Flexibility**: Easy to switch between providers and models
- **Reliability**: Comprehensive error handling and testing
- **Community**: Open source with active development and support
---
**[📖 Read the Full Documentation](./docs/index.md)** | **[🚀 Get Started Now](./docs/quick_start.md)** | **[💬 Join the Discussion](https://github.com/mebaadwaheed/fastccg/discussions)**
Raw data
{
"_id": null,
"home_page": null,
"name": "fastccg",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "llm, openai, gemini, claude, mistral, terminal, chatbot, sdk, ai",
"author": null,
"author_email": "Your Name <you@example.com>",
"download_url": "https://files.pythonhosted.org/packages/37/eb/52203c56ce2bc1ad610ccf1f5873b4a92bf7e9ada51141a2e6ab4ccb4d2c/fastccg-0.2.0.post1.tar.gz",
"platform": null,
    "description": "(omitted: verbatim copy of the README shown above)",
"bugtrack_url": null,
"license": "MIT",
"summary": "Fast, minimalist, multi-model terminal-based SDK for building, testing, and interacting with LLMs via cloud APIs.",
"version": "0.2.0.post1",
"project_urls": null,
"split_keywords": [
"llm",
" openai",
" gemini",
" claude",
" mistral",
" terminal",
" chatbot",
" sdk",
" ai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "1f83d501cd11af9bd0bfe3a9730a5c164015a103da91339dfa149ded2357f77f",
"md5": "b9e3b6ed2488ced0d9dfa8be60c018ec",
"sha256": "992897d6decf4a0aa256dc8d67a1a748460841746696016a5487ce33b11e250b"
},
"downloads": -1,
"filename": "fastccg-0.2.0.post1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b9e3b6ed2488ced0d9dfa8be60c018ec",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 27242,
"upload_time": "2025-07-21T15:31:54",
"upload_time_iso_8601": "2025-07-21T15:31:54.627754Z",
"url": "https://files.pythonhosted.org/packages/1f/83/d501cd11af9bd0bfe3a9730a5c164015a103da91339dfa149ded2357f77f/fastccg-0.2.0.post1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "37eb52203c56ce2bc1ad610ccf1f5873b4a92bf7e9ada51141a2e6ab4ccb4d2c",
"md5": "9ddc45c52232781f51b36c2aa8cfbfac",
"sha256": "f20f5421773f8be7acb86ca68a4c746554325f24c83be161efaa9936ff64a95c"
},
"downloads": -1,
"filename": "fastccg-0.2.0.post1.tar.gz",
"has_sig": false,
"md5_digest": "9ddc45c52232781f51b36c2aa8cfbfac",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 22347,
"upload_time": "2025-07-21T15:31:55",
"upload_time_iso_8601": "2025-07-21T15:31:55.665306Z",
"url": "https://files.pythonhosted.org/packages/37/eb/52203c56ce2bc1ad610ccf1f5873b4a92bf7e9ada51141a2e6ab4ccb4d2c/fastccg-0.2.0.post1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-21 15:31:55",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "fastccg"
}