swarms-memory

Name: swarms-memory
Version: 0.1.1
Home page: https://github.com/kyegomez/swarms-memory
Summary: Swarms Memory - Pytorch
Upload time: 2024-08-28 19:36:06
Maintainer: None
Docs URL: None
Author: Kye Gomez
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
            
<div align="center">
  <a href="https://swarms.world">
    <h1>Swarms Memory</h1>
  </a>
</div>
<p align="center">
  <em>The Enterprise-Grade Production-Ready RAG Framework</em>
</p>

<p align="center">
    <a href="https://pypi.org/project/swarms/" target="_blank">
        <img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" />
        <img alt="Version" src="https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0">
    </a>
</p>
<p align="center">
<a href="https://twitter.com/swarms_corp/">🐦 Twitter</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://discord.gg/agora-999382051935506503">📢 Discord</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://swarms.world/explorer">Swarms Platform</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://docs.swarms.world">📙 Documentation</a>
</p>


[![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/issues) [![GitHub forks](https://img.shields.io/github/forks/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/network) [![GitHub stars](https://img.shields.io/github/stars/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/stargazers) [![GitHub license](https://img.shields.io/github/license/kyegomez/swarms-memory)](https://github.com/kyegomez/swarms-memory/blob/main/LICENSE)[![GitHub star chart](https://img.shields.io/github/stars/kyegomez/swarms-memory?style=social)](https://star-history.com/#kyegomez/swarms)[![Dependency Status](https://img.shields.io/librariesio/github/kyegomez/swarms)](https://libraries.io/github/kyegomez/swarms) [![Downloads](https://static.pepy.tech/badge/swarms-memory/month)](https://pepy.tech/project/swarms-memory)

[![Join the Agora discord](https://img.shields.io/discord/1110910277110743103?label=Discord&logo=discord&logoColor=white&style=plastic&color=d7b023)![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/swarmsmemory)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)

[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)


The table below lists each supported RAG backend, its current status, a brief description, and links to its documentation and website:

| **RAG System** | **Status**  | **Description**                                                                 | **Documentation**                                     | **Website**                      |
|----------------|-------------|---------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------|
| **ChromaDB**   | Available   | An open-source embedding database built for storing and querying vector embeddings in AI applications. | [ChromaDB Documentation](swarms_memory/memory/chromadb.md) | [ChromaDB](https://chromadb.com) |
| **Pinecone**   | Available   | A fully managed vector database that makes it easy to add vector search to your applications. | [Pinecone Documentation](swarms_memory/memory/pinecone.md) | [Pinecone](https://pinecone.io) |
| **Redis**      | Coming Soon | An open-source, in-memory data structure store, used as a database, cache, and message broker. | [Redis Documentation](swarms_memory/memory/redis.md) | [Redis](https://redis.io)       |
| **Faiss**      | Available   | A library for efficient similarity search and clustering of dense vectors, developed by Facebook AI. | [Faiss Documentation](swarms_memory/memory/faiss.md) | [Faiss](https://faiss.ai)       |
| **HNSW**       | Coming Soon | A graph-based algorithm for approximate nearest neighbor search, known for its speed and accuracy. | [HNSW Documentation](swarms_memory/memory/hnsw.md)   | [HNSW](https://github.com/nmslib/hnswlib) |



### Requirements
- Python 3.10 or newer (`<4.0,>=3.10`)
- A `.env` file containing your API keys (e.g. `PINECONE_API_KEY`); `.env.examples` lists the expected variables, and a minimal sketch is shown below.
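
A minimal `.env` sketch, assuming Pinecone is the backend in use (the value is a placeholder; any other variables your setup needs follow the same pattern):

```bash
# .env — placeholder values; replace with your own credentials
PINECONE_API_KEY=your-pinecone-api-key
```

If your environment does not load `.env` files automatically, a loader such as `python-dotenv` can read it at startup (`pip install python-dotenv`, then `from dotenv import load_dotenv; load_dotenv()`).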

## Install
```bash
$ pip install swarms-memory
```
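
A quick smoke test to confirm the package resolved correctly (just an import check):

```bash
$ python -c "import swarms_memory"
```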




## Usage

### Pinecone
```python
from typing import List, Dict, Any
from swarms_memory import PineconeMemory


# Example usage
if __name__ == "__main__":
    from transformers import AutoTokenizer, AutoModel
    import torch

    # Custom embedding function using a HuggingFace model
    def custom_embedding_function(text: str) -> List[float]:
        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModel.from_pretrained("bert-base-uncased")
        inputs = tokenizer(
            text,
            return_tensors="pt",
            padding=True,
            truncation=True,
            max_length=512,
        )
        with torch.no_grad():
            outputs = model(**inputs)
        embeddings = (
            outputs.last_hidden_state.mean(dim=1).squeeze().tolist()
        )
        return embeddings

    # Custom preprocessing function
    def custom_preprocess(text: str) -> str:
        return text.lower().strip()

    # Custom postprocessing function
    def custom_postprocess(
        results: List[Dict[str, Any]],
    ) -> List[Dict[str, Any]]:
        for result in results:
            result["custom_score"] = (
                result["score"] * 2
            )  # Example modification
        return results

    # Initialize the wrapper with custom functions
    wrapper = PineconeMemory(
        api_key="your-api-key",
        environment="your-environment",
        index_name="your-index-name",
        embedding_function=custom_embedding_function,
        preprocess_function=custom_preprocess,
        postprocess_function=custom_postprocess,
        logger_config={
            "handlers": [
                {
                    "sink": "custom_rag_wrapper.log",
                    "rotation": "1 GB",
                },
                {
                    "sink": lambda msg: print(
                        f"Custom log: {msg}", end=""
                    )
                },
            ],
        },
    )

    # Adding documents
    wrapper.add(
        "This is a sample document about artificial intelligence.",
        {"category": "AI"},
    )
    wrapper.add(
        "Python is a popular programming language for data science.",
        {"category": "Programming"},
    )

    # Querying
    results = wrapper.query("What is AI?", filter={"category": "AI"})
    for result in results:
        print(
            f"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}"
        )



```
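
The `custom_embedding_function` above reloads `bert-base-uncased` on every call, which keeps the example self-contained but gets slow once you add or query many documents. Below is a sketch of the same mean-pooled embedding with the tokenizer and model cached at module level; the caching is ordinary `transformers` usage rather than anything the library requires:

```python
from typing import List

import torch
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and model once, instead of on every embedding call.
_TOKENIZER = AutoTokenizer.from_pretrained("bert-base-uncased")
_MODEL = AutoModel.from_pretrained("bert-base-uncased")
_MODEL.eval()


def cached_embedding_function(text: str) -> List[float]:
    """Mean-pooled BERT embedding, equivalent to the example above."""
    inputs = _TOKENIZER(
        text,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=512,
    )
    with torch.no_grad():
        outputs = _MODEL(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze().tolist()
```

Pass it as `embedding_function=cached_embedding_function` to `PineconeMemory` or `FAISSDB`, exactly as in the examples.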


### ChromaDB
```python
from swarms_memory import ChromaDB

chromadb = ChromaDB(
    metric="cosine",
    output_dir="results",
    limit_tokens=1000,
    n_results=2,
    docs_folder="path/to/docs",
    verbose=True,
)

# Add a document
doc_id = chromadb.add("This is a test document.")

# Query the document
result = chromadb.query("This is a test query.")

# Traverse a directory
chromadb.traverse_directory()

# Display the result
print(result)

```
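
In a RAG pipeline, the retrieved result is typically folded into the prompt of a downstream LLM call. A minimal, illustrative sketch continuing the example above (the prompt template is an assumption, not part of the library):

```python
# Continuing the example: assemble a RAG-style prompt from the retrieved context.
question = "This is a test query."
context = chromadb.query(question)

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)
print(prompt)
```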


### Faiss

```python
from typing import List, Dict, Any
from swarms_memory.faiss_wrapper import FAISSDB


from transformers import AutoTokenizer, AutoModel
import torch


# Custom embedding function using a HuggingFace model
def custom_embedding_function(text: str) -> List[float]:
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    inputs = tokenizer(
        text,
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=512,
    )
    with torch.no_grad():
        outputs = model(**inputs)
    embeddings = (
        outputs.last_hidden_state.mean(dim=1).squeeze().tolist()
    )
    return embeddings


# Custom preprocessing function
def custom_preprocess(text: str) -> str:
    return text.lower().strip()


# Custom postprocessing function
def custom_postprocess(
    results: List[Dict[str, Any]],
) -> List[Dict[str, Any]]:
    for result in results:
        result["custom_score"] = (
            result["score"] * 2
        )  # Example modification
    return results


# Initialize the wrapper with custom functions
wrapper = FAISSDB(
    dimension=768,
    index_type="Flat",
    embedding_function=custom_embedding_function,
    preprocess_function=custom_preprocess,
    postprocess_function=custom_postprocess,
    metric="cosine",
    logger_config={
        "handlers": [
            {
                "sink": "custom_faiss_rag_wrapper.log",
                "rotation": "1 GB",
            },
            {"sink": lambda msg: print(f"Custom log: {msg}", end="")},
        ],
    },
)

# Adding documents
wrapper.add(
    "This is a sample document about artificial intelligence.",
    {"category": "AI"},
)
wrapper.add(
    "Python is a popular programming language for data science.",
    {"category": "Programming"},
)

# Querying
results = wrapper.query("What is AI?")
for result in results:
    print(
        f"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}"
    )
```
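
One constraint worth noting: `dimension=768` must match the length of the vectors returned by `embedding_function`, and `bert-base-uncased` produces 768-dimensional hidden states, so the pairing above is consistent. Continuing the example, an illustrative sanity check (not part of the library API):

```python
# Illustrative check: the embedding length must equal the FAISS index dimension.
embedding = custom_embedding_function("dimension check")
assert len(embedding) == 768, f"expected a 768-dim embedding, got {len(embedding)}"
```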


# License
MIT


# Citation
Please cite Swarms in your paper or project if you found it beneficial in any way. We appreciate it!

```bibtex
@misc{swarms,
  author = {Gomez, Kye},
  title = {{Swarms: The Multi-Agent Collaboration Framework}},
  howpublished = {\url{https://github.com/kyegomez/swarms}},
  year = {2023},
  note = {Accessed: Date}
}
```


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/kyegomez/swarms-memory",
    "name": "swarms-memory",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.10",
    "maintainer_email": null,
    "keywords": "artificial intelligence, deep learning, optimizers, Prompt Engineering",
    "author": "Kye Gomez",
    "author_email": "kye@apac.ai",
    "download_url": "https://files.pythonhosted.org/packages/af/e0/f50f813c3c03139c5da55d00b6bea434612be7d91ff1840f50b604e173de/swarms_memory-0.1.1.tar.gz",
    "platform": null,
    "description": "\n<div align=\"center\">\n  <a href=\"https://swarms.world\">\n    <h1>Swarms Memory</h1>\n  </a>\n</div>\n<p align=\"center\">\n  <em>The Enterprise-Grade Production-Ready RAG Framework</em>\n</p>\n\n<p align=\"center\">\n    <a href=\"https://pypi.org/project/swarms/\" target=\"_blank\">\n        <img alt=\"Python\" src=\"https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54\" />\n        <img alt=\"Version\" src=\"https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0\">\n    </a>\n</p>\n<p align=\"center\">\n<a href=\"https://twitter.com/swarms_corp/\">\ud83d\udc26 Twitter</a>\n<span>&nbsp;&nbsp;\u2022&nbsp;&nbsp;</span>\n<a href=\"https://discord.gg/agora-999382051935506503\">\ud83d\udce2 Discord</a>\n<span>&nbsp;&nbsp;\u2022&nbsp;&nbsp;</span>\n<a href=\"https://swarms.world/explorer\">Swarms Platform</a>\n<span>&nbsp;&nbsp;\u2022&nbsp;&nbsp;</span>\n<a href=\"https://docs.swarms.world\">\ud83d\udcd9 Documentation</a>\n</p>\n\n\n[![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/issues) [![GitHub forks](https://img.shields.io/github/forks/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/network) [![GitHub stars](https://img.shields.io/github/stars/kyegomez/swarms)](https://github.com/kyegomez/swarms-memory/stargazers) [![GitHub license](https://img.shields.io/github/license/kyegomez/swarms-memory)](https://github.com/kyegomez/swarms-memory/blob/main/LICENSE)[![GitHub star chart](https://img.shields.io/github/stars/kyegomez/swarms-memory?style=social)](https://star-history.com/#kyegomez/swarms)[![Dependency Status](https://img.shields.io/librariesio/github/kyegomez/swarms)](https://libraries.io/github/kyegomez/swarms) [![Downloads](https://static.pepy.tech/badge/swarms-memory/month)](https://pepy.tech/project/swarms-memory)\n\n[![Join the Agora discord](https://img.shields.io/discord/1110910277110743103?label=Discord&logo=discord&logoColor=white&style=plastic&color=d7b023)![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/swarmsmemory)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)\n\n[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [![Share on 
WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)\n\n\nHere's a more detailed and larger table with descriptions and website links for each RAG system:\n\n| **RAG System** | **Status**  | **Description**                                                                 | **Documentation**                                     | **Website**                      |\n|----------------|-------------|---------------------------------------------------------------------------------|-------------------------------------------------------|----------------------------------|\n| **ChromaDB**   | Available   | A high-performance, distributed database optimized for handling large-scale AI tasks. | [ChromaDB Documentation](swarms_memory/memory/chromadb.md) | [ChromaDB](https://chromadb.com) |\n| **Pinecone**   | Available   | A fully managed vector database that makes it easy to add vector search to your applications. | [Pinecone Documentation](swarms_memory/memory/pinecone.md) | [Pinecone](https://pinecone.io) |\n| **Redis**      | Coming Soon | An open-source, in-memory data structure store, used as a database, cache, and message broker. | [Redis Documentation](swarms_memory/memory/redis.md) | [Redis](https://redis.io)       |\n| **Faiss**      | Coming Soon | A library for efficient similarity search and clustering of dense vectors, developed by Facebook AI. | [Faiss Documentation](swarms_memory/memory/faiss.md) | [Faiss](https://faiss.ai)       |\n| **HNSW**       | Coming Soon | A graph-based algorithm for approximate nearest neighbor search, known for its speed and accuracy. | [HNSW Documentation](swarms_memory/memory/hnsw.md)   | [HNSW](https://github.com/nmslib/hnswlib) |\n\nThis table includes a brief description of each system, their current status, links to their documentation, and their respective websites for further information.\n\n\n### Requirements:\n- `python 3.10` \n- `.env` with your respective keys like `PINECONE_API_KEY` can be found in the `.env.examples`\n\n## Install\n```bash\n$ pip install swarms-memory\n```\n\n\n\n\n## Usage\n\n### Pinecone\n```python\nfrom typing import List, Dict, Any\nfrom swarms_memory import PineconeMemory\n\n\n# Example usage\nif __name__ == \"__main__\":\n    from transformers import AutoTokenizer, AutoModel\n    import torch\n\n    # Custom embedding function using a HuggingFace model\n    def custom_embedding_function(text: str) -> List[float]:\n        tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n        model = AutoModel.from_pretrained(\"bert-base-uncased\")\n        inputs = tokenizer(\n            text,\n            return_tensors=\"pt\",\n            padding=True,\n            truncation=True,\n            max_length=512,\n        )\n        with torch.no_grad():\n            outputs = model(**inputs)\n        embeddings = (\n            outputs.last_hidden_state.mean(dim=1).squeeze().tolist()\n        )\n        return embeddings\n\n    # Custom preprocessing function\n    def custom_preprocess(text: str) -> str:\n        return text.lower().strip()\n\n    # Custom postprocessing function\n    def custom_postprocess(\n        results: List[Dict[str, Any]],\n    ) -> List[Dict[str, Any]]:\n        for result in results:\n            result[\"custom_score\"] = (\n                result[\"score\"] * 2\n            )  # Example modification\n        return 
results\n\n    # Initialize the wrapper with custom functions\n    wrapper = PineconeMemory(\n        api_key=\"your-api-key\",\n        environment=\"your-environment\",\n        index_name=\"your-index-name\",\n        embedding_function=custom_embedding_function,\n        preprocess_function=custom_preprocess,\n        postprocess_function=custom_postprocess,\n        logger_config={\n            \"handlers\": [\n                {\n                    \"sink\": \"custom_rag_wrapper.log\",\n                    \"rotation\": \"1 GB\",\n                },\n                {\n                    \"sink\": lambda msg: print(\n                        f\"Custom log: {msg}\", end=\"\"\n                    )\n                },\n            ],\n        },\n    )\n\n    # Adding documents\n    wrapper.add(\n        \"This is a sample document about artificial intelligence.\",\n        {\"category\": \"AI\"},\n    )\n    wrapper.add(\n        \"Python is a popular programming language for data science.\",\n        {\"category\": \"Programming\"},\n    )\n\n    # Querying\n    results = wrapper.query(\"What is AI?\", filter={\"category\": \"AI\"})\n    for result in results:\n        print(\n            f\"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}\"\n        )\n\n\n\n```\n\n\n### ChromaDB\n```python\nfrom swarms_memory import ChromaDB\n\nchromadb = ChromaDB(\n    metric=\"cosine\",\n    output_dir=\"results\",\n    limit_tokens=1000,\n    n_results=2,\n    docs_folder=\"path/to/docs\",\n    verbose=True,\n)\n\n# Add a document\ndoc_id = chromadb.add(\"This is a test document.\")\n\n# Query the document\nresult = chromadb.query(\"This is a test query.\")\n\n# Traverse a directory\nchromadb.traverse_directory()\n\n# Display the result\nprint(result)\n\n```\n\n\n### Faiss\n\n```python\nfrom typing import List, Dict, Any\nfrom swarms_memory.faiss_wrapper import FAISSDB\n\n\nfrom transformers import AutoTokenizer, AutoModel\nimport torch\n\n\n# Custom embedding function using a HuggingFace model\ndef custom_embedding_function(text: str) -> List[float]:\n    tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n    model = AutoModel.from_pretrained(\"bert-base-uncased\")\n    inputs = tokenizer(\n        text,\n        return_tensors=\"pt\",\n        padding=True,\n        truncation=True,\n        max_length=512,\n    )\n    with torch.no_grad():\n        outputs = model(**inputs)\n    embeddings = (\n        outputs.last_hidden_state.mean(dim=1).squeeze().tolist()\n    )\n    return embeddings\n\n\n# Custom preprocessing function\ndef custom_preprocess(text: str) -> str:\n    return text.lower().strip()\n\n\n# Custom postprocessing function\ndef custom_postprocess(\n    results: List[Dict[str, Any]],\n) -> List[Dict[str, Any]]:\n    for result in results:\n        result[\"custom_score\"] = (\n            result[\"score\"] * 2\n        )  # Example modification\n    return results\n\n\n# Initialize the wrapper with custom functions\nwrapper = FAISSDB(\n    dimension=768,\n    index_type=\"Flat\",\n    embedding_function=custom_embedding_function,\n    preprocess_function=custom_preprocess,\n    postprocess_function=custom_postprocess,\n    metric=\"cosine\",\n    logger_config={\n        \"handlers\": [\n            {\n                \"sink\": \"custom_faiss_rag_wrapper.log\",\n                \"rotation\": \"1 GB\",\n            },\n            {\"sink\": lambda msg: print(f\"Custom log: {msg}\", end=\"\")},\n        ],\n    
},\n)\n\n# Adding documents\nwrapper.add(\n    \"This is a sample document about artificial intelligence.\",\n    {\"category\": \"AI\"},\n)\nwrapper.add(\n    \"Python is a popular programming language for data science.\",\n    {\"category\": \"Programming\"},\n)\n\n# Querying\nresults = wrapper.query(\"What is AI?\")\nfor result in results:\n    print(\n        f\"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}\"\n    )\n```\n\n\n# License\nMIT\n\n\n# Citation\nPlease cite Swarms in your paper or your project if you found it beneficial in any way! Appreciate you.\n\n```bibtex\n@misc{swarms,\n  author = {Gomez, Kye},\n  title = {{Swarms: The Multi-Agent Collaboration Framework}},\n  howpublished = {\\url{https://github.com/kyegomez/swarms}},\n  year = {2023},\n  note = {Accessed: Date}\n}\n```\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Swarms Memory - Pytorch",
    "version": "0.1.1",
    "project_urls": {
        "Documentation": "https://github.com/kyegomez/swarms-memory",
        "Homepage": "https://github.com/kyegomez/swarms-memory",
        "Repository": "https://github.com/kyegomez/swarms-memory"
    },
    "split_keywords": [
        "artificial intelligence",
        " deep learning",
        " optimizers",
        " prompt engineering"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9b647ad4fb0036f470af326a68347df0ad261fb4b02d9b443e079c5f69cf08ea",
                "md5": "08c4e8b377c4e276b43e906f52cbb91d",
                "sha256": "0bec766d34bdc4226456b66c7059bb2f355369d404ea67cbc01978683ff54019"
            },
            "downloads": -1,
            "filename": "swarms_memory-0.1.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "08c4e8b377c4e276b43e906f52cbb91d",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.10",
            "size": 25166,
            "upload_time": "2024-08-28T19:36:04",
            "upload_time_iso_8601": "2024-08-28T19:36:04.687762Z",
            "url": "https://files.pythonhosted.org/packages/9b/64/7ad4fb0036f470af326a68347df0ad261fb4b02d9b443e079c5f69cf08ea/swarms_memory-0.1.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "afe0f50f813c3c03139c5da55d00b6bea434612be7d91ff1840f50b604e173de",
                "md5": "024d7f4c6d2a208d3b35fa886ca332bb",
                "sha256": "3e5d3a44dab00c9e39b4120fc3fb4750cd8b1c0e06867512f4595931a6544983"
            },
            "downloads": -1,
            "filename": "swarms_memory-0.1.1.tar.gz",
            "has_sig": false,
            "md5_digest": "024d7f4c6d2a208d3b35fa886ca332bb",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.10",
            "size": 21304,
            "upload_time": "2024-08-28T19:36:06",
            "upload_time_iso_8601": "2024-08-28T19:36:06.144684Z",
            "url": "https://files.pythonhosted.org/packages/af/e0/f50f813c3c03139c5da55d00b6bea434612be7d91ff1840f50b604e173de/swarms_memory-0.1.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-08-28 19:36:06",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "kyegomez",
    "github_project": "swarms-memory",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "swarms-memory"
}
        