# metricsGPT
Talk to your metrics.
<img src="./demo.png" alt="Demo" width="800" style="max-width: 100%;" />
> [!NOTE]
>
> This is a work in progress with no API guarantees, and the current implementation is not yet optimized for scale.
> Right now it can put significant load on your Prometheus API and may take a while to respond.
## Installation
Ensure you have Python 3.12+ and Node v20+ installed locally.
By default, this tool uses [`llama3`](https://ollama.com/library/llama3) and [`nomic-embed-text`](https://ollama.com/library/nomic-embed-text). Pull them with:
```bash
ollama pull llama3
ollama pull nomic-embed-text
```
Have a Prometheus instance up and running. You can use `make run-prom` to start one in Docker that scrapes itself.
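Before launching the server, it can help to sanity-check that Prometheus is actually reachable at the configured URL. A minimal sketch (a hypothetical helper, not part of metricsGPT) using only the standard library and Prometheus's built-in `/-/healthy` endpoint:

```python
from urllib.request import urlopen
from urllib.error import URLError


def prometheus_ready(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the Prometheus health endpoint answers with HTTP 200."""
    try:
        with urlopen(f"{base_url.rstrip('/')}/-/healthy", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False


# Example: check the default local Prometheus before starting metricsGPT.
# prometheus_ready("http://localhost:9090")
```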
You can install the CLI from https://pypi.org/project/metricsgpt/:
```bash
pip3 install metricsgpt
metricsGPT --server --config=config.yaml
```
If building locally, you can use Poetry:
```bash
poetry install
poetry run metricsGPT --server --config=config.yaml
```
and visit http://localhost:8081!
## Configuration
Edit [config.yaml](./config.yaml) to suit your own models/Prometheus/Thanos setups.
```yaml
# Prometheus Configuration
prometheus_url: "http://localhost:9090"

# prometheus_auth:
#   # Basic authentication
#   basic_auth:
#     username: "your_username"
#     password: "your_password"
#   # Or Bearer token
#   bearer_token: "your_token"
#   # Or custom headers
#   custom_headers:
#     Authorization: "Custom your_auth_header"
#     X-Custom-Header: "custom_value"
#   # TLS/SSL configuration
#   tls:
#     cert_file: "/path/to/cert.pem"
#     key_file: "/path/to/key.pem"
#     skip_verify: false # Set to true to skip certificate verification

prom_external_url: null # Optional external URL for links in the UI
query_lookback_hours: 1.0

# Storage Configuration
vectordb_path: "./data.db"
series_cache_file: "./series_cache.json"

# Server Configuration
refresh_interval: 900 # VectorDB refresh interval in seconds
server_host: "0.0.0.0"
server_port: 8081

# LLM Configuration
llm:
  provider: "ollama"
  model: "llama3.1"

embedding:
  provider: "ollama" # or "openai"
  model: "nomic-embed-text"
  dimension: 768 # optional, defaults to this dimension

# For Azure OpenAI embeddings:
# embedding:
#   provider: "azure"
#   model: "text-embedding-ada-002"
#   deployment_name: "your-embedding-deployment"
#   api_key: "your-api-key"
#   endpoint: "your-azure-endpoint"
#   api_version: "2023-05-15"
#   dimension: "dimensions of model"

# For WatsonX embeddings:
# embedding:
#   provider: "watsonx"
#   api_key: "your-api-key"
#   project_id: "your-project-id"
#   model_id: "google/flan-ul2" # optional, defaults to this model
#   dimension: "dimensions of model"

# For OpenAI embeddings:
# embedding:
#   provider: "openai"
#   model: "text-embedding-ada-002"
#   api_key: "your-api-key"
#   dimension: "dimensions of model"

# Example LLM configurations for different providers:

# For OpenAI:
# llm:
#   provider: "openai"
#   model: "gpt-4"
#   api_key: "your-api-key"

# For Ollama:
# llm:
#   provider: "ollama"
#   model: "metricsGPT"
#   timeout: 120.0

# For Azure:
# llm:
#   provider: "azure"
#   model: "gpt-4"
#   deployment_name: "your-deployment"
#   api_key: "your-api-key"
#   endpoint: "your-azure-endpoint"

# For Gemini:
# llm:
#   provider: "gemini"
#   model: "gemini-pro"
#   api_key: "your-api-key"

# For WatsonX:
# llm:
#   provider: "watsonx"
#   api_key: "your-api-key"
#   project_id: "your-project-id"
#   model_id: "your-model-id"
```
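`query_lookback_hours` bounds how far back the tool looks when querying Prometheus. As an illustration only (a sketch, not metricsGPT's actual code), the start/end timestamps for a corresponding Prometheus range query could be derived like this:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple


def lookback_window(
    lookback_hours: float, now: Optional[datetime] = None
) -> Tuple[float, float]:
    """Return (start, end) Unix timestamps covering the lookback window."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(hours=lookback_hours)
    return start.timestamp(), end.timestamp()


# With the config default of 1.0, the window spans exactly one hour.
start, end = lookback_window(1.0)
```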
## TODOs:
- Much more efficient vectorDB ops
- Use other Prom HTTP APIs for more context
- Range queries
- Visualize
- Embed query results for better analysis
- Process alerts