<p align="center">
    <img src="https://raw.githubusercontent.com/msamsami/clonellm/main/docs/assets/images/logo.png" alt="Logo" width="250" />
</p>
<h1 align="center">
    CloneLLM
</h1>
<p align="center">
    <p align="center">Create an AI clone of yourself using LLMs.</p>
</p>   

<h4 align="center">
    <a href="https://pypi.org/project/clonellm/" target="_blank">
        <img src="https://img.shields.io/badge/release-v0.0.7-green" alt="Latest Release">
    </a>
    <a href="https://pypi.org/project/clonellm/" target="_blank">
        <img src="https://img.shields.io/pypi/v/clonellm.svg" alt="PyPI Version">
    </a>
    <a target="_blank">
        <img src="https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue" alt="Python Versions">
    </a>
    <a target="_blank">
        <img src="https://img.shields.io/pypi/l/clonellm" alt="PyPI License">
    </a>
</h4>

## Introduction
A minimal Python package that enables you to create an AI clone of yourself using LLMs. Built on top of LiteLLM and LangChain, CloneLLM uses Retrieval-Augmented Generation (RAG) to tailor AI responses as if you were answering the questions yourself.

You can input texts and documents about yourself — including personal information, professional experience, educational background, etc. — which are then embedded into a vector space for dynamic retrieval. This AI clone can act as a virtual assistant or digital representation, capable of handling queries and tasks in a manner that reflects your own knowledge, tone, style, and mannerisms.

## Installation

### Prerequisites
Before installing CloneLLM, make sure you have Python 3.9 or newer (but below 3.13) installed on your machine.
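You can quickly check your interpreter version from the command line:
```bash
python --version  # Expect 3.9, 3.10, 3.11, or 3.12
```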

### PyPI
```bash
pip install clonellm
```

### Poetry
```bash
poetry add clonellm
```

### GitHub
```bash
# Clone the repository
git clone https://github.com/msamsami/clonellm.git

# Navigate into the project directory
cd clonellm

# Install the package
pip install .
```

## Usage

### Getting started

**Step 1**. Gather documents that contain relevant information about you. These documents form the base from which your AI clone will learn to mimic your tone, style, and expertise.
```python
from langchain_core.documents import Document

documents = [
    Document(page_content="My name is Mehdi Samsami."),
    open("about_me.txt", "r").read(),  # Plain strings are supported alongside Document objects
]
```

**Step 2**. Initialize an embedding model using CloneLLM's `LiteLLMEmbeddings` or LangChain's embeddings. Then, initialize a clone with your documents, embedding model, and your preferred LLM.
```python
from clonellm import CloneLLM, LiteLLMEmbeddings

embedding = LiteLLMEmbeddings(model="text-embedding-ada-002")
clone = CloneLLM(model="gpt-4-turbo", documents=documents, embedding=embedding)
```

**Step 3**. Configure environment variables to store the API keys for your embedding and LLM providers.
```bash
export OPENAI_API_KEY=sk-...
```
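The exact variables depend on the providers you choose; key names follow LiteLLM's conventions. For example, if you use Cohere or Anthropic models instead:
```bash
export COHERE_API_KEY=...
export ANTHROPIC_API_KEY=...
```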

**Step 4**. Fit the clone to the data (documents).
```python
clone.fit()
```

**Step 5**. Invoke the clone to ask questions.
```python
clone.invoke("What's your name?")

# Response: My name is Mehdi Samsami. How can I help you?
```

### Models
At its core, CloneLLM uses LiteLLM to interact with LLMs, so you can choose from 100+ models offered by many different providers, including Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, SageMaker, Hugging Face, Replicate, and more.
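Since model identifiers follow LiteLLM's naming conventions, switching providers usually just means changing the `model` string. A minimal sketch, reusing the `documents` and `embedding` objects from above (the Ollama identifier assumes a locally running Ollama server):
```python
from clonellm import CloneLLM

# Only the model string changes between providers.
clone_openai = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding)
clone_anthropic = CloneLLM(model="claude-3-opus-20240229", documents=documents, embedding=embedding)
clone_ollama = CloneLLM(model="ollama/llama2", documents=documents, embedding=embedding)
```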

### Document loaders
You can use LangChain's document loaders to seamlessly import data from various sources into `Document` format. Take, for example, text and HTML loaders:
```python
# !pip install unstructured
from langchain_community.document_loaders import TextLoader, UnstructuredHTMLLoader

documents = TextLoader("cv.txt").load() + UnstructuredHTMLLoader("linkedin.html").load()
```

Or JSON loader:
```python
# !pip install jq
from langchain_community.document_loaders import JSONLoader

documents = JSONLoader(
    file_path='chat.json',
    jq_schema='.messages[].content',
    text_content=False
).load()
```

### Embeddings
With `LiteLLMEmbeddings`, CloneLLM allows you to utilize embedding models from a variety of providers supported by LiteLLM. Additionally, you can select any preferred embedding model from LangChain's extensive range. Take, for example, the Hugging Face embedding:
```python
# !pip install --upgrade --quiet sentence_transformers
from langchain_community.embeddings import HuggingFaceEmbeddings
from clonellm import CloneLLM
import os

os.environ["COHERE_API_KEY"] = "cohere-api-key"

embedding = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
clone = CloneLLM(model="command-xlarge-beta", documents=documents, embedding=embedding)
```

Or, the Llama-cpp embedding:
```python
# !pip install --upgrade --quiet llama-cpp-python
from langchain_community.embeddings import LlamaCppEmbeddings
from clonellm import CloneLLM
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

embedding = LlamaCppEmbeddings(model_path="ggml-model-q4_0.bin")
clone = CloneLLM(model="gpt-3.5-turbo", documents=documents, embedding=embedding)
```

### User profile
Create a personalized profile using CloneLLM's `UserProfile`, which allows you to feed detailed personal information into your clone for more customized interactions:
```python
from clonellm import UserProfile

profile = UserProfile(
    first_name="Mehdi",
    last_name="Samsami",
    city="Shiraz",
    country="Iran",
    expertise=["Data Science", "AI/ML", "Data Analytics"],
)
```

Or simply define your profile using Python dictionaries:
```python
profile = {
    "full_name": "Mehdi Samsami",
    "age": 28,
    "location": "Shiraz, Iran",
    "expertise": ["Data Science", "AI/ML", "Data Analytics"],
    "languages": ["English", "Persian"],
    "tone": "Friendly",
}
```

Finally, initialize your clone with the profile:
```python
from clonellm import CloneLLM
import os

os.environ["ANTHROPIC_API_KEY"] = "anthropic-api-key"

clone = CloneLLM(
    model="claude-3-opus-20240229",
    documents=documents,
    embedding=embedding,
    user_profile=profile,
)
```

### Conversation history (memory)
Enable the memory feature to give your clone access to the conversation history. Simply set the `memory` argument to `True` (or `-1`) for unlimited memory, or to an integer greater than zero for a fixed-size memory:
```python
from clonellm import CloneLLM
import os

os.environ["HUGGINGFACE_API_KEY"] = "huggingface-api-key"

clone = CloneLLM(
    model="meta-llama/Llama-2-70b-chat",
    documents=documents,
    embedding=embedding,
    memory=10,  # Enable memory with maximum size of 10
)
```
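For unlimited memory, pass `memory=True` (or, equivalently, `-1`) instead:
```python
clone = CloneLLM(
    model="meta-llama/Llama-2-70b-chat",
    documents=documents,
    embedding=embedding,
    memory=True,  # Unlimited conversation history
)
```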

Use the `memory_size` attribute to get the current length of the conversation history, i.e., the size of the clone's memory:
```python
print(clone.memory_size)
# 6
```

To clear the conversation history, i.e., the clone's memory, at any time, call either the `reset_memory()` or the `clear_memory()` method:
```python
clone.clear_memory()
# clone.reset_memory()
```

### Streaming
CloneLLM supports streaming responses from the LLM, allowing for real-time processing of text as it is being generated, rather than receiving the whole output at once.
```python
from clonellm import CloneLLM, LiteLLMEmbeddings
import os

os.environ["VERTEXAI_PROJECT"] = "hardy-device-28813"
os.environ["VERTEXAI_LOCATION"] = "us-central1"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/your/credentials.json"

embedding = LiteLLMEmbeddings(model="textembedding-gecko@001")
clone = CloneLLM(model="gemini-1.0-pro", documents=documents, embedding=embedding)

for chunk in clone.stream("Describe yourself in 100 words"):
    print(chunk, end="", flush=True)
```

### Async
CloneLLM provides asynchronous counterparts to its core methods (`afit`, `ainvoke`, and `astream`) for improved performance in asynchronous programming contexts.

#### `ainvoke`
```python
import asyncio
from clonellm import CloneLLM, LiteLLMEmbeddings
from langchain_core.documents import Document
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

async def main():
    documents = [...]
    embedding = LiteLLMEmbeddings(model="text-embedding-ada-002")
    clone = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding)
    await clone.afit()
    response = await clone.ainvoke("Tell me about your skills?")
    return response

response = asyncio.run(main())
print(response)
```

#### `astream`
```python
import asyncio
from clonellm import CloneLLM, LiteLLMEmbeddings
from langchain_core.documents import Document
import os

os.environ["OPENAI_API_KEY"] = "openai-api-key"

async def main():
    documents = [...]
    embedding = LiteLLMEmbeddings(model="text-embedding-3-small")
    clone = CloneLLM(model="gpt-4o", documents=documents, embedding=embedding)
    await clone.afit()
    async for chunk in clone.astream("How comfortable are you with remote work?"):
        print(chunk, end="", flush=True)

asyncio.run(main())
```

## Support Us
If you find CloneLLM useful, please consider showing your support in one of the following ways:

- ⭐ **Star our GitHub repository:** This helps increase the visibility of our project.
- 💡 **Contribute:** Submit pull requests to help improve the codebase, whether it's adding new features, fixing bugs, or improving documentation.
- 📰 **Share:** Post about CloneLLM on LinkedIn or other social platforms.

Thank you for your interest in CloneLLM. We look forward to seeing what you'll create with your AI clone!

## TODO
- [x] Add pre-commit configuration file
- [x] Add setup.py script
- [x] Add support for conversation history
- [x] Add support for RAG with no embedding (use a summary of documents as the context)
- [x] Add support for string documents
- [x] Fix mypy errors
- [x] Rename `completion` methods to `invoke`
- [x] Add support for streaming completion
- [x] Make `LiteLLMEmbeddings.all_embedding_models` a property
- [x] Add an attribute to `CloneLLM` to return supported models
- [x] Add initial version of README
- [x] Describe `CloneLLM.clear_memory` method in README
- [x] Add an attribute to `CloneLLM` to return the memory size
- [x] Add support for fixed size memory
- [ ] Add support for custom system prompts
- [ ] Add documents
- [x] Add usage examples
- [x] Add unit tests for non-core modules
- [x] Add unit tests for core module
- [x] Add GitHub workflow to run tests on PR
- [x] Add GitHub workflow to publish to PyPI on release


            
