<p align="right">
    <a href="https://github.com/Nayjest/ai-microcore/releases" target="_blank"><img src="https://img.shields.io/github/release/ai-microcore/microcore" alt="Release Notes"></a>
    <a href="https://app.codacy.com/gh/Nayjest/ai-microcore/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade" target="_blank"><img src="https://app.codacy.com/project/badge/Grade/441d03416bc048828c649129530dcbc3" alt="Code Quality"></a>
    <a href="https://github.com/Nayjest/ai-microcore/actions/workflows/pylint.yml" target="_blank"><img src="https://github.com/Nayjest/ai-microcore/actions/workflows/pylint.yml/badge.svg" alt="Pylint"></a>
    <a href="https://github.com/Nayjest/ai-microcore/actions/workflows/tests.yml" target="_blank"><img src="https://github.com/Nayjest/ai-microcore/actions/workflows/tests.yml/badge.svg" alt="Tests"></a>
    <a href="https://github.com/Nayjest/ai-microcore/blob/main/LICENSE" target="_blank"><img src="https://img.shields.io/static/v1?label=license&message=MIT&color=d08aff" alt="License"></a>
</p>


# AI MicroCore: A Minimalistic Foundation for AI Applications

**MicroCore** is a collection of Python adapters for Large Language Models
and semantic search APIs that lets you communicate with these services
conveniently, switch between them easily, and keep your business logic
separate from the implementation details.

It defines interfaces for the features typically used in AI applications,
which lets you keep your application as simple as possible and try various models and services
without changing your application code.

You can even switch between text completion and chat completion models through configuration alone.

A basic usage example:

```python
from microcore import llm

while user_msg := input('Enter message: '):
    print('AI: ' + llm(user_msg))
```

## πŸ”— Links

 -   [API Reference](https://ai-microcore.github.io/api-reference/)
 -   [PyPi Package](https://pypi.org/project/ai-microcore/)
 -   [GitHub Repository](https://github.com/Nayjest/ai-microcore)


## πŸ’» Installation

Install as a PyPI package:
```
pip install ai-microcore
```

Alternatively, you may just copy the `microcore` folder into your project's source root.
```bash
git clone git@github.com:Nayjest/ai-microcore.git && mv ai-microcore/microcore ./ && rm -rf ai-microcore
```


## πŸ“‹ Requirements

Python 3.10 / 3.11 / 3.12

Both v0.28+ and v1.X OpenAI package versions are supported.


## βš™οΈ Configuring

### Minimal Configuration

Having `OPENAI_API_KEY` in OS environment variables is enough for basic usage.

Similarity search features will work out of the box if you have the `chromadb` pip package installed.
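
If it is not installed yet:
```bash
pip install chromadb
```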

### Configuration Methods

There are a few options available for configuring microcore:

-   Use `microcore.configure(**params)`
    <br>πŸ’‘ <small>All configuration options should be available in IDE autocompletion tooltips</small>
-   Create a `.env` file in your project root; examples: [basic.env](https://github.com/Nayjest/ai-microcore/blob/main/.env.example), [Mistral Large.env](https://github.com/Nayjest/ai-microcore/blob/main/.env.mistral.example), [Anthropic Claude 3 Opus.env](https://github.com/Nayjest/ai-microcore/blob/main/.env.anthropic.example), [Gemini on Vertex AI.env](https://github.com/Nayjest/ai-microcore/blob/main/.env.google-vertex-gemini.example), [Gemini on AI Studio.env](https://github.com/Nayjest/ai-microcore/blob/main/.env.gemini.example)
-   Use a custom configuration file: `mc.configure(DOT_ENV_FILE='dev-config.ini')`
-   Define OS environment variables

For the full list of available configuration options, you may also check [`microcore/config.py`](https://github.com/Nayjest/ai-microcore/blob/main/microcore/config.py).
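
As a minimal sketch, programmatic configuration combines the options shown in this README (consult `microcore/config.py` for the full set of option names):

```python
import microcore as mc

# Both options below appear elsewhere in this README;
# see microcore/config.py for everything else.
mc.configure(
    DOT_ENV_FILE='dev-config.ini',                # load settings from a custom file
    PROMPT_TEMPLATES_PATH='my_templates_folder',  # where tpl() looks for templates
)
```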

### Installing vendor-specific packages
For models that are not accessed via the OpenAI API, you may need to install additional packages:
#### Anthropic Claude 3
```bash
pip install anthropic
```
#### Google Gemini via AI Studio
```bash
pip install google-generativeai
```
#### Google Gemini via Vertex AI
```bash
pip install vertexai
```
πŸ“Œ Additionally, to work through [Vertex AI](https://cloud.google.com/vertex-ai) you need to
[install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install)
and [configure the authorization](https://cloud.google.com/sdk/docs/authorizing).

#### Local language models via Hugging Face Transformers

You will need to install `transformers` and a deep learning library of your choice (PyTorch, TensorFlow, Flax, etc.).

See [transformers installation](https://huggingface.co/docs/transformers/installation).

### Priority of Configuration Sources

1.  Configuration options passed as arguments to `microcore.configure()` have the highest priority.
2.  The priority of configuration file options (`.env` by default or the value of `DOT_ENV_FILE`) is higher than OS environment variables.
    <br>πŸ’‘ <small>Setting `USE_DOT_ENV` to `false` disables reading configuration files.</small>
3.  OS environment variables have the lowest priority.
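
A hypothetical illustration of this layering; whether a given option (here `OPENAI_API_KEY`) is accepted at every level is an assumption to verify against `microcore/config.py`:

```python
import os
import microcore as mc

# Hypothetical illustration of precedence; assumes OPENAI_API_KEY may be set
# at each level (verify against microcore/config.py).
os.environ['OPENAI_API_KEY'] = 'key-from-os-env'    # 3. lowest priority
# .env file contains: OPENAI_API_KEY=key-from-dotenv  # 2. overrides the OS env
mc.configure(OPENAI_API_KEY='key-from-configure')   # 1. highest priority
```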


## 🌟 Core Functions

### llm(prompt: str, \*\*kwargs) β†’ str

Performs a request to a large language model (LLM).

Asynchronous variant: `allm(prompt: str, **kwargs)`

```python
from microcore import *

# Will print all requests and responses to console
use_logging()

# Basic usage
ai_response = llm('What is your model name?')

# You may also pass a list of strings as the prompt
# - For chat completion models elements are treated as separate messages
# - For completion LLMs elements are treated as text lines
llm(['1+2', '='])
llm('1+2=', model='gpt-4')

# To specify a message role, you can use a dictionary or message classes
llm(dict(role='system', content='1+2='))
# equivalent
llm(SysMsg('1+2='))

# The returned value is a string
assert '7' == llm([
    SysMsg('You are a calculator'),
    UserMsg('1+2='),
    AssistantMsg('3'),
    UserMsg('3+4='),
]).strip()

# But it contains all fields of the LLM response in additional attributes
for i in llm('1+2=?', n=3, temperature=2).choices:
    print('RESPONSE:', i.message.content)

# To use response streaming, you may specify a callback function:
llm('Hi there', callback=lambda x: print(x, end=''))

# Or multiple callbacks:
output = []
llm('Hi there', callbacks=[
    lambda x: print(x, end=''),
    lambda x: output.append(x),
])
```
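
The asynchronous variant mentioned above works the same way; a minimal sketch:

```python
import asyncio
from microcore import allm

async def main():
    # allm accepts the same arguments as llm
    reply = await allm('What is your model name?')
    print(reply)

asyncio.run(main())
```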

### tpl(file_path, \*\*params) β†’ str
Renders prompt template with params.

Full-featured Jinja2 templates are used by default.

Related configuration options:

```python
from microcore import configure
configure(
    # 'tpl' folder in current working directory by default
    PROMPT_TEMPLATES_PATH = 'my_templates_folder'
)
```
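
A short sketch of combining `tpl()` with `llm()`; the template file name and its `code` variable are hypothetical:

```python
from microcore import tpl, llm

# Assuming tpl/code_review.j2 contains something like:
#   Review the following code:
#   {{ code }}
prompt = tpl('code_review.j2', code='def add(a, b): return a - b')
print(llm(prompt))
```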

### texts.search(collection: str, query: str | list, n_results: int = 5, where: dict = None, \*\*kwargs) β†’ list[str]
Similarity search

### texts.find_one(collection: str, query: str | list) β†’ str | None
Find the most similar text

### texts.get_all(collection: str) β†’ list[str]
Return all texts in the collection

### texts.save(collection: str, text: str, metadata: dict = None)
Store text and related metadata in the embeddings database

### texts.save_many(collection: str, items: list[tuple[str, dict] | str])
Store multiple texts and related metadata in the embeddings database

### texts.clear(collection: str)
Clear collection
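
Taken together, a minimal sketch of the embeddings-database workflow (requires the `chromadb` package; the collection name and texts are illustrative):

```python
from microcore import texts

texts.clear('notes')  # start from an empty collection
texts.save('notes', 'MicroCore wraps LLM APIs.', metadata={'topic': 'ai'})
texts.save_many('notes', [
    ('Jinja2 renders the prompt templates.', {'topic': 'templates'}),
    'Plain strings are stored without metadata.',
])
print(texts.search('notes', 'What renders templates?', n_results=2))
print(texts.find_one('notes', 'prompt templates'))
print(texts.get_all('notes'))
```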

## Supported API providers and models

MicroCore supports all models and API providers that expose an OpenAI-compatible API.

### List of API providers and models tested with LLM Microcore:

| API Provider                                                                             |                                                                                                                                      Models |
|------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------:|
| [OpenAI](https://openai.com)                                                             |                                   All GPT-4 and GPT-3.5-Turbo models<br/>all text completion models (davinci, gpt-3.5-turbo-instruct, etc.) |
| [Microsoft Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) |                                                                                                            All OpenAI models, Mistral Large |
| [Anthropic](https://anthropic.com)                                                       |                                                                                                                             Claude 3 models |
| [MistralAI](https://mistral.ai)                                                          |                                                                                                                          All Mistral models |
| [Google AI Studio](https://aistudio.google.com/)                             |                                                                                                                        Google Gemini models |
| [Google Vertex AI](https://cloud.google.com/vertex-ai?hl=en)                             |                                                   Gemini Pro & [other models](https://cloud.google.com/vertex-ai/docs/start/explore-models) |
| [Deep Infra](https://deepinfra.com)                                                      | deepinfra/airoboros-70b<br/>jondurbin/airoboros-l2-70b-gpt4-1.4.1<br/>meta-llama/Llama-2-70b-chat-hf<br/>and other models having OpenAI API |
| [Anyscale](https://anyscale.com)                                                         |                                         meta-llama/Llama-2-70b-chat-hf<br/>meta-llama/Llama-2-13b-chat-hf<br/>meta-llama/Llama-2-7b-chat-hf |
| [Groq](https://groq.com/)                                                         |                                           LLaMA2 70b<br>Mixtral 8x7b<br>Gemma 7b |
| [Fireworks](https://fireworks.ai)                                                  |                                           [Over 50 open-source language models](https://fireworks.ai/models?show=All) |

## Supported local language model APIs:
- HuggingFace [Transformers](https://huggingface.co/docs/transformers/index) (see configuration examples [here](https://github.com/Nayjest/ai-microcore/blob/main/tests/local/test_transformers.py)).
- Custom local models, by providing your own function for chat / text completion with sync / async inference.

## πŸ–ΌοΈ Examples

#### [Code review tool](https://github.com/llm-microcore/microcore/blob/main/examples/code-review-tool)
Performs an LLM code review of changes in git `.patch` files, in any programming language.

#### [Image analysis](https://colab.research.google.com/drive/1qTJ51wxCv3VlyqLt3M8OZ7183YXPFpic) (Google Colab)
Determines the number of petals and the color of a flower from a photo (gpt-4-turbo).

#### [Benchmark LLMs on math problems](https://www.kaggle.com/code/nayjest/gigabenchmark-llm-accuracy-math-problems) (Kaggle Notebook)
Benchmarks the accuracy of 20+ state-of-the-art models on olympiad math problems, running local language models in parallel via Hugging Face Transformers.
 
#### [Other examples](https://github.com/llm-microcore/microcore/tree/main/examples)

## Python functions as AI tools

@TODO

## πŸ€– AI Modules
**This is an experimental feature.**

Tweaks the Python import system to provide automatic setup of the MicroCore environment
based on metadata in module docstrings.
### Usage:
```python
import microcore.ai_modules
```
### Features:

*   Automatically registers template folders of AI modules in Jinja2 environment


## πŸ› οΈ Contributing

Please see [CONTRIBUTING](https://github.com/Nayjest/ai-microcore/blob/main/CONTRIBUTING.md) for details.


## πŸ“ License

Licensed under the [MIT License](https://github.com/Nayjest/ai-microcore/blob/main/LICENSE)
Β© 2023 [Vitalii Stepanenko](mailto:mail@vitaliy.in)


            
