llama-index-llms-groq


Name: llama-index-llms-groq
Version: 0.3.0
Summary: llama-index llms groq integration
Author: Your Name
License: MIT
Requires-Python: <4.0,>=3.9
Upload time: 2024-11-18 01:27:42
# LlamaIndex Llms Integration: Groq

Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single-core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.

Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:

- Achieve uncompromised low latency and performance for real-time AI and HPC inference 🔥
- Know the exact performance and compute time for any given workload 🔮
- Take advantage of our cutting-edge technology to stay ahead of the competition 💪

Want more Groq? Check out our [website](https://groq.com) for more resources and join our [Discord community](https://discord.gg/JvNsBDKeCG) to connect with our developers!
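Once the package is installed (`pip install llama-index-llms-groq`), the integration exposes a `Groq` LLM class for LlamaIndex. The sketch below assumes a valid `GROQ_API_KEY` in your environment; the model name is just an example — pick any model enabled on your Groq account:

```python
import os

from llama_index.llms.groq import Groq

# Reads the key from the environment; raises KeyError if it is unset.
llm = Groq(model="llama3-70b-8192", api_key=os.environ["GROQ_API_KEY"])

response = llm.complete("In one sentence, what is an LPU?")
print(response)
```

This requires a live API key and network access, so it is a usage sketch rather than something the unit tests exercise.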

## Develop

To create a development environment, install Poetry, then run:

```bash
poetry install --with dev
```

## Testing

To test the integration, first enter the Poetry virtual environment:

```bash
poetry shell
```

Tests can then be run with `make`:

```bash
make test
```

### Integration tests

Integration tests will be skipped unless an API key is provided. API keys can be created at the [Groq console](https://console.groq.com/keys).
Once created, store the API key in an environment variable and run the tests:

```bash
export GROQ_API_KEY=<your key here>
make test
```
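The skip behavior described above comes down to checking whether the key is present. A minimal sketch of that guard (the function name here is illustrative, not taken from the test suite):

```python
import os

def integration_tests_enabled(env=os.environ):
    """Return True when GROQ_API_KEY is set, mirroring the skip logic."""
    return bool(env.get("GROQ_API_KEY"))

print("run integration tests:", integration_tests_enabled())
```

Unit tests that make no API calls run regardless of whether the key is set.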

## Linting and Formatting

Linting and code formatting can be executed with `make`:

```bash
make format
make lint
```
