| Field | Value |
| --- | --- |
| Name | cria |
| Version | 1.6.6 |
| Summary | Run AI locally with as little friction as possible. |
| home_page | None |
| author | leftmove |
| maintainer | None |
| docs_url | None |
| license | MIT |
| requires_python | <4.0,>=3.8 |
| upload_time | 2024-08-21 02:31:44 |
| requirements | No requirements were recorded. |
<p align="center">
<a href="https://github.com/leftmove/cria"><img src="https://i.imgur.com/vjGJOLQ.png" alt="cria"></a>
</p>
<p align="center">
<em>Cria, use Python to run LLMs with as little friction as possible.</em>
</p>
Cria is a library for programmatically running Large Language Models through Python. Cria is built so you need as little configuration as possible — even with more advanced features.
- **Easy**: No configuration is required out of the box. Getting started takes just five lines of code.
- **Concise**: Write less code to save time and avoid duplication.
- **Local**: Free and unobstructed by rate limits; running LLMs requires no internet connection.
- **Efficient**: Use advanced features with your own `ollama` instance, or a subprocess.
<!-- <p align="center">
<em>
Cria uses <a href="https://ollama.com/">ollama</a>.
</em>
</p> -->
## Guide
- [Quick Start](#quickstart)
- [Installation](#installation)
  - [Windows](#windows)
  - [Mac](#mac)
  - [Linux](#linux)
- [Advanced Usage](#advanced-usage)
  - [Custom Models](#custom-models)
  - [Streams](#streams)
  - [Closing](#closing)
  - [Message History](#message-history)
    - [Follow-Up](#follow-up)
    - [Clear Message History](#clear-message-history)
    - [Passing In Custom Context](#passing-in-custom-context)
  - [Interrupting](#interrupting)
    - [With Message History](#with-message-history)
    - [Without Message History](#without-message-history)
  - [Multiple Models and Parallel Conversations](#multiple-models-and-parallel-conversations)
    - [Models](#models)
    - [With](#with-model)
    - [Standalone](#standalone-model)
  - [Generate](#generate)
  - [Running Standalone](#running-standalone)
  - [Formatting](#formatting)
- [Contributing](#contributing)
- [License](#license)
## Quickstart
Running Cria is easy. After installation, you need just five lines of code — no configurations, no manual downloads, no API keys, and no servers to worry about.
```python
import cria

ai = cria.Cria()

prompt = "Who is the CEO of OpenAI?"
for chunk in ai.chat(prompt):
    print(chunk, end="")
```
```
>>> The CEO of OpenAI is Sam Altman!
```
Or, you can run this more configurable example.
```python
import cria

with cria.Model() as ai:
    prompt = "Who is the CEO of OpenAI?"
    response = ai.chat(prompt, stream=False)
    print(response)
```
```
>>> The CEO of OpenAI is Sam Altman!
```
> [!WARNING]
> If no model is configured, Cria automatically installs and runs the default model: `llama3.1:8b` (4.7GB).
## Installation
1. Cria uses [`ollama`](https://ollama.com/). To install it, follow the instructions for your platform below.
### Windows
[Download](https://ollama.com/download/windows)
### Mac
[Download](https://ollama.com/download/mac)
### Linux
```
curl -fsSL https://ollama.com/install.sh | sh
```
2. Install Cria with `pip`.
```
pip install cria
```
## Advanced Usage
### Custom Models
To run other LLMs, pass the model name in when you create your `ai` object.
```python
import cria

ai = cria.Cria("llama2")

prompt = "Who is the CEO of OpenAI?"
for chunk in ai.chat(prompt):
    print(chunk, end="") # The CEO of OpenAI is Sam Altman. He co-founded OpenAI in 2015 with...
```
You can find available models [here](https://ollama.com/library).
### Streams
Streams are used by default in Cria, but you can turn them off by passing `stream=False`.
```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman!
```
### Closing
By default, models are closed when you exit the Python program, but closing them manually is a best practice.
```python
ai.close()
```
You can also use [`with`](#with-model) statements to close models automatically (recommended).
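A minimal sketch of the `with` form (the `Model` class is covered in more detail below):

```python
import cria

# The model is closed automatically when the block exits.
with cria.Model() as ai:
    print(ai.chat("Who is the CEO of OpenAI?", stream=False))
```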
### Message History
#### Follow-Up
Message history is automatically saved in Cria, so asking follow-up questions is easy.
```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.
prompt = "Tell me more about him."
response = ai.chat(prompt, stream=False)
print(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...
```
#### Clear Message History
You can reset message history by running the `clear` method.
```python
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...
ai.clear()
prompt = "Tell me more about him."
response = ai.chat(prompt, stream=False)
print(response) # I apologize, but I don't have any information about "him" because the conversation just started...
```
#### Passing In Custom Context
You can also create a custom message history and pass in your own context.
```python
context = "Our AI system employed a hybrid approach combining reinforcement learning and generative adversarial networks (GANs) to optimize the decision-making..."
messages = [
    {"role": "system", "content": "You are a technical documentation writer"},
    {"role": "user", "content": context},
]

prompt = "Write some documentation using the text I gave you."
for chunk in ai.chat(messages=messages, prompt=prompt):
    print(chunk, end="") # AI System Optimization: Hybrid Approach Combining Reinforcement Learning and...
```
In the example, instructions are given to the LLM as the `system`. Then, extra context is given as the `user`. Finally, the prompt is entered (also as a `user`). You can use any mixture of roles to steer the LLM to your liking.
The available roles for messages are:
- `user` - Pass prompts as the user.
- `system` - Give instructions as the system.
- `assistant` - Act as the AI assistant yourself, and give the LLM lines.
The `prompt` parameter is always appended to `messages` under the `user` role; to override this, pass nothing for `prompt`.
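For example, here is a sketch of driving the conversation entirely through `messages`, with no `prompt` passed in (it assumes, as described above, that `prompt` can simply be omitted):

```python
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Who is the CEO of OpenAI?"},
    {"role": "assistant", "content": "The CEO of OpenAI is Sam Altman."},
    {"role": "user", "content": "Tell me more about him."},
]

# No prompt is passed, so nothing extra is appended under the user role.
for chunk in ai.chat(messages=messages):
    print(chunk, end="")
```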
### Interrupting
#### With Message History
If you are streaming messages with Cria, you can interrupt the response midway.
```python
response = ""
max_token_length = 5
prompt = "Who is the CEO of OpenAI?"
for i, chunk in enumerate(ai.chat(prompt)):
if i >= max_token_length:
ai.stop()
response += chunk
print(response) # The CEO of OpenAI is
```
```python
response = ""
max_token_length = 5
prompt = "Who is the CEO of OpenAI?"
for i, chunk in enumerate(ai.generate(prompt)):
if i >= max_token_length:
ai.stop()
response += chunk
print(response) # The CEO of OpenAI is
```
In the examples, after the AI generates five tokens (units of text that are usually a couple of characters long), text generation is stopped via the `stop` method. After `stop` is called, you can safely `break` out of the `for` loop.
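For instance, here is a minimal sketch that stops generation and then breaks out of the loop, while the partial response is still kept in message history:

```python
response = ""
max_token_length = 5

prompt = "Who is the CEO of OpenAI?"
for i, chunk in enumerate(ai.chat(prompt)):
    if i >= max_token_length:
        ai.stop()
        break  # Safe to break once stop() has been called.
    response += chunk

print(response) # The CEO of OpenAI is
```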
#### Without Message History
By default, Cria automatically saves responses in message history, even if the stream is interrupted. To prevent this behavior, pass in `allow_interruption=False`.
```python
ai = cria.Cria(allow_interruption=False)

response = ""
max_token_length = 5

prompt = "Who is the CEO of OpenAI?"
for i, chunk in enumerate(ai.chat(prompt)):

    if i >= max_token_length:
        ai.stop()
        break

    print(chunk, end="") # The CEO of OpenAI is

prompt = "Tell me more about him."
for chunk in ai.chat(prompt):
    print(chunk, end="") # I apologize, but I don't have any information about "him" because the conversation just started...
```
### Multiple Models and Parallel Conversations
#### Models
If you are running multiple models or parallel conversations, the `Model` class is also available. This is recommended for most use cases.
```python
import cria
ai = cria.Model()
prompt = "Who is the CEO of OpenAI?"
response = ai.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.
```
_All methods that apply to the `Cria` class also apply to `Model`._
#### With Model
Multiple models can be run through a `with` statement. This automatically closes them after use.
```python
import cria

prompt = "Who is the CEO of OpenAI?"

with cria.Model("llama3") as ai:
    response = ai.chat(prompt, stream=False)
    print(response) # OpenAI's CEO is Sam Altman, who also...

with cria.Model("llama2") as ai:
    response = ai.chat(prompt, stream=False)
    print(response) # The CEO of OpenAI is Sam Altman.
```
#### Standalone Model
Or, models can be run without a `with` statement.
```python
import cria
prompt = "Who is the CEO of OpenAI?"
llama3 = cria.Model("llama3")
response = llama3.chat(prompt, stream=False)
print(response) # OpenAI's CEO is Sam Altman, who also...
llama2 = cria.Model("llama2")
response = llama2.chat(prompt, stream=False)
print(response) # The CEO of OpenAI is Sam Altman.
# Not required, but best practice.
llama3.close()
llama2.close()
```
### Generate
Cria also has a `generate` method.
```python
prompt = "Who is the CEO of OpenAI?"
for chunk in ai.generate(prompt):
print(chunk, end="") # The CEO of OpenAI (Open-source Artificial Intelligence) is Sam Altman.
promt = "Tell me more about him."
response = ai.generate(prompt, stream=False)
print(response) # I apologize, but I think there may have been some confusion earlier. As this...
```
### Running Standalone
When you run `cria.Cria()`, an `ollama` instance will start up if one is not already running. When the program exits, this instance will terminate.
However, if you want to save resources by not exiting `ollama`, either run your own `ollama` instance in another terminal, or run a managed subprocess.
#### Running Your Own Ollama Instance
```bash
ollama serve
```
```python
prompt = "Who is the CEO of OpenAI?"
with cria.Model() as ai:
response = ai.generate("Who is the CEO of OpenAI?", stream=False)
print(response)
```
#### Running a Managed Subprocess (Recommended)
```python
# The first time you run this program, ollama starts automatically.
# On subsequent runs, ollama will already be running.

ai = cria.Cria(standalone=True, close_on_exit=False)
prompt = "Who is the CEO of OpenAI?"

with cria.Model("llama2") as llama2:
    response = llama2.generate(prompt, stream=False)
    print(response)

with cria.Model("llama3") as llama3:
    response = llama3.generate(prompt, stream=False)
    print(response)

quit()
# Despite exiting, ollama will keep running, and it will be used the next time this program starts.
```
### Formatting
To format the output of the LLM, pass in the `format` keyword argument.
```python
ai = cria.Cria()
prompt = "Return a JSON array of AI companies."
response = ai.chat(prompt, stream=False, format="json")
print(response) # ["OpenAI", "Anthropic", "Meta", "Google", "Cohere", ...].
```
The current supported formats are:
* JSON
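If the response comes back as a JSON string (as the printed output above suggests), you can parse it with Python's standard `json` module. A minimal sketch, assuming the model returns a valid JSON array as prompted:

```python
import json

import cria

ai = cria.Cria()

prompt = "Return a JSON array of AI companies."
response = ai.chat(prompt, stream=False, format="json")

# Parse the JSON string into a Python list before using it.
companies = json.loads(response)
print(companies[:3])
```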
## Contributing
If you have a feature request, feel free to make an issue!
Contributions are highly appreciated.
## License
[MIT](./LICENSE.md)
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "cria",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.8",
    "maintainer_email": null,
    "keywords": null,
    "author": "leftmove",
    "author_email": null,
    "download_url": "https://files.pythonhosted.org/packages/49/7a/aed55ba6e5bce415ed3921d6232649d49ad357e03c81031b08008a165a26/cria-1.6.6.tar.gz",
    "platform": null,
"description": "<p align=\"center\">\n <a href=\"https://github.com/leftmove/cria\"><img src=\"https://i.imgur.com/vjGJOLQ.png\" alt=\"cria\"></a>\n</p>\n<p align=\"center\">\n <em>Cria, use Python to run LLMs with as little friction as possible.</em>\n</p>\n\nCria is a library for programmatically running Large Language Models through Python. Cria is built so you need as little configuration as possible \u2014 even with more advanced features.\n\n- **Easy**: No configuration is required out of the box. Getting started takes just five lines of code.\n- **Concise**: Write less code to save time and avoid duplication.\n- **Local**: Free and unobstructed by rate limits, running LLMs requires no internet connection.\n- **Efficient**: Use advanced features with your own `ollama` instance, or a subprocess.\n\n<!-- <p align=\"center\">\n <em>\n Cria uses <a href=\"https://ollama.com/\">ollama</a>.\n </em>\n</p> -->\n\n## Guide\n\n- [Quick Start](#quickstart)\n- [Installation](#installation)\n - [Windows](#windows)\n - [Mac](#mac)\n - [Linux](#linux)\n- [Advanced Usage](#advanced-usage)\n - [Custom Models](#custom-models)\n - [Streams](#streams)\n - [Closing](#closing)\n - [Message History](#message-history)\n - [Follow-Up](#follow-up)\n - [Clear Message History](#clear-message-history)\n - [Passing In Custom Context](#passing-in-custom-context)\n - [Interrupting](#interrupting)\n - [With Message History](#with-message-history)\n - [Without Message History](#without-message-history)\n - [Multiple Models and Parallel Conversations](#multiple-models-and-parallel-conversations)\n - [Models](#models)\n - [With](#with-model)\n - [Standalone](#standalone-model)\n - [Running Standalone](#running-standalone)\n - [Formatting](#formatting)\n- [Contributing](#contributing)\n- [License](#license)\n\n## Quickstart\n\nRunning Cria is easy. After installation, you need just five lines of code \u2014 no configurations, no manual downloads, no API keys, and no servers to worry about.\n\n```python\nimport cria\n\nai = cria.Cria()\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.chat(prompt):\n print(chunk, end=\"\")\n```\n\n```\n>>> The CEO of OpenAI is Sam Altman!\n```\n\nor, you can run this more configurable example.\n\n```python\nimport cria\n\nwith cria.Model() as ai:\n prompt = \"Who is the CEO of OpenAI?\"\n response = ai.chat(prompt, stream=False)\n print(response)\n```\n\n```\n>>> The CEO of OpenAI is Sam Altman!\n```\n\n> [!WARNING]\n> If no model is configured, Cria automatically installs and runs the default model: `llama3.1:8b` (4.7GB).\n\n## Installation\n\n1. Cria uses [`ollama`](https://ollama.com/), to install it, run the following.\n\n ### Windows\n\n [Download](https://ollama.com/download/windows)\n\n ### Mac\n\n [Download](https://ollama.com/download/mac)\n\n ### Linux\n\n ```\n curl -fsSL https://ollama.com/install.sh | sh\n ```\n\n2. Install Cria with `pip`.\n\n ```\n pip install cria\n ```\n\n## Advanced Usage\n\n### Custom Models\n\nTo run other LLMs, pass them into your `ai` variable.\n\n```python\nimport cria\n\nai = cria.Cria(\"llama2\")\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.chat(prompt):\n print(chunk, end=\"\") # The CEO of OpenAI is Sam Altman. 
He co-founded OpenAI in 2015 with...\n```\n\nYou can find available models [here](https://ollama.com/library).\n\n### Streams\n\nStreams are used by default in Cria, but you can turn them off by passing in a boolean for the `stream` parameter.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman!\n```\n\n### Closing\n\nBy default, models are closed when you exit the Python program, but closing them manually is a best practice.\n\n```python\nai.close()\n```\n\nYou can also use [`with`](#with-model) statements to close models automatically (recommended).\n\n### Message History\n\n#### Follow-Up\n\nMessage history is automatically saved in Cria, so asking follow-up questions is easy.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n\nprompt = \"Tell me more about him.\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...\n```\n\n#### Clear Message History\n\nYou can reset message history by running the `clear` method.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # Sam Altman is an American entrepreneur and technologist who serves as the CEO of OpenAI...\n\nai.clear()\n\nprompt = \"Tell me more about him.\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # I apologize, but I don't have any information about \"him\" because the conversation just started...\n```\n\n#### Passing In Custom Context\n\nYou can also create a custom message history, and pass in your own context.\n\n```python\ncontext = \"Our AI system employed a hybrid approach combining reinforcement learning and generative adversarial networks (GANs) to optimize the decision-making...\"\nmessages = [\n {\"role\": \"system\", \"content\": \"You are a technical documentation writer\"},\n {\"role\": \"user\", \"content\": context},\n]\n\nprompt = \"Write some documentation using the text I gave you.\"\nfor chunk in ai.chat(messages=messages, prompt=prompt):\n print(chunk, end=\"\") # AI System Optimization: Hybrid Approach Combining Reinforcement Learning and...\n```\n\nIn the example, instructions are given to the LLM as the `system`. Then, extra context is given as the `user`. Finally, the prompt is entered (as a `user`). 
You can use any mixture of roles to specify the LLM to your liking.\n\nThe available roles for messages are:\n\n- `user` - Pass prompts as the user.\n- `system` - Give instructions as the system.\n- `assistant` - Act as the AI assistant yourself, and give the LLM lines.\n\nThe prompt parameter will always be appended to messages under the `user` role, to override this, you can choose to pass in nothing for `prompt`.\n\n### Interrupting\n\n#### With Message History\n\nIf you are streaming messages with Cria, you can interrupt the prompt mid way.\n\n```python\nresponse = \"\"\nmax_token_length = 5\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor i, chunk in enumerate(ai.chat(prompt)):\n if i >= max_token_length:\n ai.stop()\n response += chunk\n\nprint(response) # The CEO of OpenAI is\n```\n\n```python\nresponse = \"\"\nmax_token_length = 5\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor i, chunk in enumerate(ai.generate(prompt)):\n if i >= max_token_length:\n ai.stop()\n response += chunk\n\nprint(response) # The CEO of OpenAI is\n```\n\nIn the examples, after the AI generates five tokens (units of text that are usually a couple of characters long), text generation is stopped via the `stop` method. After `stop` is called, you can safely `break` out of the `for` loop.\n\n#### Without Message History\n\nBy default, Cria automatically saves responses in message history, even if the stream is interrupted. To prevent this behaviour though, you can pass in the `allow_interruption` boolean.\n\n```python\nai = cria.Cria(allow_interruption=False)\n\nresponse = \"\"\nmax_token_length = 5\n\nprompt = \"Who is the CEO of OpenAI?\"\nfor i, chunk in enumerate(ai.chat(prompt)):\n\n if i >= max_token_length:\n ai.stop()\n break\n\n print(chunk, end=\"\") # The CEO of OpenAI is\n\nprompt = \"Tell me more about him.\"\nfor chunk in ai.chat(prompt):\n print(chunk, end=\"\") # I apologize, but I don't have any information about \"him\" because the conversation just started...\n```\n\n### Multiple Models and Parallel Conversations\n\n#### Models\n\nIf you are running multiple models or parallel conversations, the `Model` class is also available. This is recommended for most use cases.\n\n```python\nimport cria\n\nai = cria.Model()\n\nprompt = \"Who is the CEO of OpenAI?\"\nresponse = ai.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n```\n\n_All methods that apply to the `Cria` class also apply to `Model`._\n\n#### With Model\n\nMultiple models can be run through a `with` statement. 
This automatically closes them after use.\n\n```python\nimport cria\n\nprompt = \"Who is the CEO of OpenAI?\"\n\nwith cria.Model(\"llama3\") as ai:\n response = ai.chat(prompt, stream=False)\n print(response) # OpenAI's CEO is Sam Altman, who also...\n\nwith cria.Model(\"llama2\") as ai:\n response = ai.chat(prompt, stream=False)\n print(response) # The CEO of OpenAI is Sam Altman.\n```\n\n#### Standalone Model\n\nOr, models can be run traditionally.\n\n```python\nimport cria\n\n\nprompt = \"Who is the CEO of OpenAI?\"\n\nllama3 = cria.Model(\"llama3\")\nresponse = llama3.chat(prompt, stream=False)\nprint(response) # OpenAI's CEO is Sam Altman, who also...\n\nllama2 = cria.Model(\"llama2\")\nresponse = llama2.chat(prompt, stream=False)\nprint(response) # The CEO of OpenAI is Sam Altman.\n\n# Not required, but best practice.\nllama3.close()\nllama2.close()\n\n```\n\n### Generate\n\nCria also has a `generate` method.\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nfor chunk in ai.generate(prompt):\n print(chunk, end=\"\") # The CEO of OpenAI (Open-source Artificial Intelligence) is Sam Altman.\n\npromt = \"Tell me more about him.\"\nresponse = ai.generate(prompt, stream=False)\nprint(response) # I apologize, but I think there may have been some confusion earlier. As this...\n```\n\n### Running Standalone\n\nWhen you run `cria.Cria()`, an `ollama` instance will start up if one is not already running. When the program exits, this instance will terminate.\n\nHowever, if you want to save resources by not exiting `ollama`, either run your own `ollama` instance in another terminal, or run a managed subprocess.\n\n#### Running Your Own Ollama Instance\n\n```bash\nollama serve\n```\n\n```python\nprompt = \"Who is the CEO of OpenAI?\"\nwith cria.Model() as ai:\n response = ai.generate(\"Who is the CEO of OpenAI?\", stream=False)\n print(response)\n```\n\n#### Running A Managed Subprocess (Reccomended)\n\n```python\n\n# If it is the first time you start the program, ollama will start automatically\n# If it is the second time (or subsequent times) you run the program, ollama will already be running\n\nai = cria.Cria(standalone=True, close_on_exit=False)\nprompt = \"Who is the CEO of OpenAI?\"\n\nwith cria.Model(\"llama2\") as llama2:\n response = llama2.generate(\"Who is the CEO of OpenAI?\", stream=False)\n print(response)\n\nwith cria.Model(\"llama3\") as llama3:\n response = llama3.generate(\"Who is the CEO of OpenAI?\", stream=False)\n print(response)\n\nquit()\n# Despite exiting, olama will keep running, and be used the next time this program starts.\n```\n\n### Formatting\n\nTo format the output of the LLM, pass in the format keyword.\n\n```python\nai = cria.Cria()\n\nprompt = \"Return a JSON array of AI companies.\"\nresponse = ai.chat(prompt, stream=False, format=\"json\")\nprint(response) # [\"OpenAI\", \"Anthropic\", \"Meta\", \"Google\", \"Cohere\", ...].\n```\n\nThe current supported formats are:\n\n* JSON \n\n## Contributing\n\nIf you have a feature request, feel free to make an issue!\n\nContributions are highly appreciated.\n\n## License\n\n[MIT](./LICENSE.md)\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Run AI locally with as little friction as possible.",
"version": "1.6.6",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "fdcc0e3b7e9844ad125797e0fa12add12a8c6b8e3baf7b5d276612602ae8e615",
"md5": "4f7cb068b3cc982e3d102d74af103718",
"sha256": "5e5b6d0d437e4e323963e06fce3a7d7d6ac1520f290ed8d50e0310254449ebf4"
},
"downloads": -1,
"filename": "cria-1.6.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "4f7cb068b3cc982e3d102d74af103718",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.8",
"size": 7646,
"upload_time": "2024-08-21T02:31:42",
"upload_time_iso_8601": "2024-08-21T02:31:42.713929Z",
"url": "https://files.pythonhosted.org/packages/fd/cc/0e3b7e9844ad125797e0fa12add12a8c6b8e3baf7b5d276612602ae8e615/cria-1.6.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "497aaed55ba6e5bce415ed3921d6232649d49ad357e03c81031b08008a165a26",
"md5": "81f5881fd1d3e9d4adcf7816b3463691",
"sha256": "e1d5402eee1894c8586066ead8c08376f3f864ce8e7a0b35dc7c9ae133f01923"
},
"downloads": -1,
"filename": "cria-1.6.6.tar.gz",
"has_sig": false,
"md5_digest": "81f5881fd1d3e9d4adcf7816b3463691",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<4.0,>=3.8",
"size": 7298,
"upload_time": "2024-08-21T02:31:44",
"upload_time_iso_8601": "2024-08-21T02:31:44.222014Z",
"url": "https://files.pythonhosted.org/packages/49/7a/aed55ba6e5bce415ed3921d6232649d49ad357e03c81031b08008a165a26/cria-1.6.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-08-21 02:31:44",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "cria"
}