ramalama 0.0.20

Name: ramalama
Version: 0.0.20
Summary: RamaLama is a command line tool for working with AI LLM models.
Upload time: 2024-10-21 19:51:08
Requires Python: >=3.8
License: MIT License, Copyright (c) 2024 Eric Curtin
Keywords: ramalama, llama, ai

![RAMALAMA logo](logos/PNG/ramalama-logo-full-vertical-added-bg.png)

# RamaLama

The RamaLama project's goal is to make working with AI boring
through the use of OCI containers.

The RamaLama tool facilitates local management and serving of AI Models.

On first run RamaLama inspects your system for GPU support, falling back to CPU support if no GPUs are present.

RamaLama uses container engines like Podman or Docker to pull the appropriate OCI image with all of the software necessary to run an AI Model for your system's setup.

Running in containers eliminates the need for users to configure the host system for AI. After the initialization, RamaLama runs the AI Models within a container based on the OCI image.

RamaLama then pulls AI Models from model registries, starting a chatbot or a REST API service with a single command. Models are treated similarly to how Podman and Docker treat container images.

When both Podman and Docker are installed, RamaLama defaults to Podman. The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither is installed, RamaLama will attempt to run the model with software on the local system.
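
For example, to prefer Docker for the current shell session even when Podman is installed, export the environment variable named above before invoking RamaLama (a minimal sketch):

```
# Tell RamaLama to use Docker as its container engine
export RAMALAMA_CONTAINER_ENGINE=docker
ramalama run granite-code
```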

RamaLama supports multiple AI model registry types, called transports.
The supported transports are listed below.


## TRANSPORTS

| Transports    | Web Site                                            |
| ------------- | --------------------------------------------------- |
| HuggingFace   | [`huggingface.co`](https://www.huggingface.co)      |
| Ollama        | [`ollama.com`](https://www.ollama.com)              |
| OCI Container Registries | [`opencontainers.org`](https://opencontainers.org)|
||Examples: [`quay.io`](https://quay.io),  [`Docker Hub`](https://docker.io), and [`Artifactory`](https://artifactory.com)|

RamaLama uses the Ollama registry transport by default. Use the `RAMALAMA_TRANSPORT` environment variable to change the default. For example, `export RAMALAMA_TRANSPORT=huggingface` makes RamaLama use the Hugging Face transport.

The transport for an individual model can be overridden by specifying the model with a `huggingface://`, `oci://`, or `ollama://` prefix:

`ramalama pull huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf`
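
The same prefix mechanism works for the other transports; for example, the `ollama://` prefix pulls from the Ollama registry regardless of the configured default (a sketch using a model name that also appears in the shortnames example below):

```
ramalama pull ollama://tinyllama
```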

To make it easier for users, RamaLama uses shortname files, which contain
alias names for fully specified AI Models, allowing users to specify shorter
names when referring to models. RamaLama reads shortnames.conf files if they
exist. These files contain a list of name/value pairs that specify the model.
The following table lists the order in which RamaLama reads the files.
Any duplicate names override previously defined shortnames.

| Shortnames type | Path                                            |
| --------------- | ---------------------------------------- |
| Distribution    | /usr/share/ramalama/shortnames.conf      |
| Administrators  | /etc/ramalama/shortnames.conf            |
| Users           | $HOME/.config/ramalama/shortnames.conf   |

```code
$ cat /usr/share/ramalama/shortnames.conf
[shortnames]
  "tiny" = "ollama://tinyllama"
  "granite" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
  "granite:7b" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
  "ibm/granite" = "huggingface://instructlab/granite-7b-lab-GGUF/granite-7b-lab-Q4_K_M.gguf"
  "merlinite" = "huggingface://instructlab/merlinite-7b-lab-GGUF/merlinite-7b-lab-Q4_K_M.gguf"
  "merlinite:7b" = "huggingface://instructlab/merlinite-7b-lab-GGUF/merlinite-7b-lab-Q4_K_M.gguf"
...
```
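
With a shortnames.conf like the one above in place, an alias can be used wherever a full model specification is accepted; a brief sketch based on the `tiny` entry:

```
# "tiny" resolves to ollama://tinyllama via shortnames.conf
ramalama pull tiny
ramalama run tiny
```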

## Install

### Install via PyPI

RamaLama is available on PyPI at [https://pypi.org/project/ramalama](https://pypi.org/project/ramalama):

```
pipx install ramalama
```

### Install by script

Install RamaLama by running this one-liner (on macOS run without sudo):

Linux:

```
curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh | sudo bash
```

macOS:

```
curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh | bash
```

## Hardware Support

| Hardware                           | Enabled |
| ---------------------------------- | ------- |
| CPU                                | :white_check_mark: |
| Apple Silicon GPU (macOS)          | :white_check_mark: |
| Apple Silicon GPU (podman-machine) | :x: |
| Nvidia GPU (cuda)                  | :x: [Containerfile](https://github.com/containers/ramalama/blob/main/container-images/cuda/Containerfile) available but not published to quay.io |
| AMD GPU (rocm)                     | :white_check_mark: |

## COMMANDS

| Command                                                | Description                                                |
| ------------------------------------------------------ | ---------------------------------------------------------- |
| [ramalama(1)](docs/ramalama.1.md)                      | primary RamaLama man page                                  |
| [ramalama-containers(1)](docs/ramalama-containers.1.md)| list all RamaLama containers                               |
| [ramalama-info(1)](docs/ramalama-info.1.md)            | display RamaLama configuration information                 |
| [ramalama-list(1)](docs/ramalama-list.1.md)            | list all downloaded AI Models                              |
| [ramalama-login(1)](docs/ramalama-login.1.md)          | login to remote registry                                   |
| [ramalama-logout(1)](docs/ramalama-logout.1.md)        | logout from remote registry                                |
| [ramalama-pull(1)](docs/ramalama-pull.1.md)            | pull AI Model from Model registry to local storage         |
| [ramalama-push(1)](docs/ramalama-push.1.md)            | push AI Model from local storage to remote registry        |
| [ramalama-rm(1)](docs/ramalama-rm.1.md)                | remove AI Model from local storage                         |
| [ramalama-run(1)](docs/ramalama-run.1.md)              | run specified AI Model as a chatbot                        |
| [ramalama-serve(1)](docs/ramalama-serve.1.md)          | serve REST API on specified AI Model                       |
| [ramalama-stop(1)](docs/ramalama-stop.1.md)            | stop named container that is running AI Model              |
| [ramalama-version(1)](docs/ramalama-version.1.md)      | display version of RamaLama                                |
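
A typical session strings several of these commands together; a brief sketch using only commands from the table above (the model name is illustrative):

```
ramalama pull granite-code   # fetch the model into local storage
ramalama list                # confirm it was downloaded
ramalama run granite-code    # chat with the model
ramalama rm granite-code     # remove it from local storage when done
```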

## Usage

### Running Models

You can `run` a chatbot on a model using the `run` command. By default, it pulls from the ollama registry.

Note: RamaLama will inspect your machine for native GPU support and then
use a container engine like Podman to pull an OCI container image with the
appropriate code and libraries to run the AI Model. This can take a long time to set up, but only on the first run.
```
$ ramalama run instructlab/merlinite-7b-lab
Copying blob 5448ec8c0696 [--------------------------------------] 0.0b / 63.6MiB (skipped: 0.0b = 0.00%)
Copying blob cbd7e392a514 [--------------------------------------] 0.0b / 65.3MiB (skipped: 0.0b = 0.00%)
Copying blob 5d6c72bcd967 done  208.5MiB / 208.5MiB (skipped: 0.0b = 0.00%)
Copying blob 9ccfa45da380 [--------------------------------------] 0.0b / 7.6MiB (skipped: 0.0b = 0.00%)
Copying blob 4472627772b1 [--------------------------------------] 0.0b / 120.0b (skipped: 0.0b = 0.00%)
>
```

After the initial container image has been downloaded, you can interact with
different models using the same container image.
```
$ ramalama run granite-code
> Write a hello world application in python

print("Hello World")
```

In a different terminal window, view the running Podman container.
```
$ podman ps
CONTAINER ID  IMAGE                             COMMAND               CREATED        STATUS        PORTS       NAMES
91df4a39a360  quay.io/ramalama/ramalama:latest  /home/dwalsh/rama...  4 minutes ago  Up 4 minutes              gifted_volhard
```
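
Alternatively, the `ramalama containers` command from the COMMANDS table above lists the containers RamaLama has started without calling Podman directly (the output format may differ from `podman ps`):

```
ramalama containers
```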

### Listing Models

You can `list` all models pulled into local storage.

```
$ ramalama list
NAME                                                                MODIFIED     SIZE
ollama://tiny-llm:latest                                            16 hours ago 5.5M
huggingface://afrideva/Tiny-Vicuna-1B-GGUF/tiny-vicuna-1b.q2_k.gguf 14 hours ago 460M
ollama://granite-code:3b                                            5 days ago   1.9G
ollama://granite-code:latest                                        1 day ago    1.9G
ollama://moondream:latest                                           6 days ago   791M
```
### Pulling Models

You can `pull` a model using the `pull` command. By default, it pulls from the ollama registry.

```
$ ramalama pull granite-code
###################################################                       32.5%
```

### Serving Models

You can `serve` multiple models using the `serve` command. By default, it pulls from the ollama registry.

```
$ ramalama serve --name mylama llama3
```
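
Once the server is up, a client can talk to the REST API. The request below is hypothetical: it assumes the underlying llama.cpp server listens on port 8080 and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; adjust the port and path to whatever `ramalama serve` reports on your system.

```
# Hypothetical request against the served model (port and endpoint are assumptions)
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a hello world application in python"}]}'
```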

### Stopping servers

You can stop a running model if it is running in a container.

```
$ ramalama stop mylama
```

## Diagram

```
+---------------------------+
|                           |
| ramalama run granite-code |
|                           |
+-------+-------------------+
	|
	|
	|                                          +------------------+
	|                                          | Pull model layer |
	+----------------------------------------->| granite-code     |
						   +------------------+
						   | Repo options:    |
						   +-+-------+------+-+
						     |       |      |
						     v       v      v
					     +---------+ +------+ +----------+
					     | Hugging | | quay | | Ollama   |
					     | Face    | |      | | Registry |
					     +-------+-+ +---+--+ +-+--------+
						     |       |      |
						     v       v      v
						   +------------------+
						   | Start with       |
						   | llama.cpp and    |
						   | granite-code     |
						   | model            |
						   +------------------+
```

## In development

Regard this as alpha software: everything is under development, so expect breaking changes. Luckily, it's easy to reset everything and reinstall:

```
rm -rf /var/lib/ramalama # only required if running as root user
rm -rf $HOME/.local/share/ramalama
```

and install again.

## Credit where credit is due

This project wouldn't be possible without the help of other projects like:

- llama.cpp
- whisper.cpp
- vllm
- podman
- omlmd
- huggingface

so if you like this tool, give some of these repos a :star:, and hey, give us a :star: too while you are at it.

## Community

[`Matrix`](https://matrix.to/#/#ramalama:fedoraproject.org)

## Contributors

Open to contributors

<a href="https://github.com/containers/ramalama/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=containers/ramalama" />
</a>

            
