<p align="center">
  <img src="https://raw.githubusercontent.com/nomadic-ml/nomadic/main/assets/NomadicMLLogo.png" alt="NomadicMLLogo" width="50%">
</p>

<p align="center">
  Nomadic is an enterprise-grade toolkit by <a href="https://www.nomadicml.com/">NomadicML</a> focused on parameter search for ML teams to continuously optimize compound AI systems, from pre to post-production. Rapidly experiment and keep hyperparameters, prompts, and all aspects of your system production-ready. Teams use Nomadic to deeply understand their AI system's best levers to boost performance as it scales.
</p>

<p align="center">
  <img alt="PyPI - Version" src="https://img.shields.io/pypi/v/nomadic?link=https%3A%2F%2Fpypi.org%2Fproject%2Fnomadic%2F">
  <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/nomadic?link=https%3A%2F%2Fpypi.org%2Fproject%2Fnomadic%2F">
  <img alt="Discord" src="https://img.shields.io/discord/1281121359476559996?link=https%3A%2F%2Fdiscord.gg%2FPF869aGM">
  <img alt="Static Badge" src="https://img.shields.io/badge/Pear_X-S24-green">
</p>

<p align="center">
Join our <a href="https://discord.gg/cqnxY4as">Discord</a>!
</p>

# 🗂️  Installation

You can install `nomadic` with pip (Python 3.9–3.12 supported):

```bash
pip install nomadic
```

# 📄  Documentation

Full documentation can be found here: https://docs.nomadicml.com.

Please check it out for the most up-to-date tutorials, cookbooks, SDK references, and other resources!

# Local Development

Follow the instructions below to get started with local development of the Nomadic SDK. Afterwards, select the produced Python `.venv` environment in your IDE of choice.

## macOS

```bash
make setup_dev_environment
source .venv/bin/activate
```

## Linux-Ubuntu

Coming soon!

## Build Nomadic wheel

Run:

```bash
source .venv/bin/activate # If .venv isn't already activated
make build
```

# 💻 Example Usage

## Optimizing a RAG to Boost Accuracy & Retrieval Speed by 40%

For other Quickstarts tailored to your application, including LLM safety, advanced RAGs, transcription/summarization (across fintech, support, and healthcare), and especially compound AI systems (multiple components rather than a monolithic model), check out our [🍴Cookbooks](https://docs.nomadicml.com/get-started/cookbooks).

### 1. Import Nomadic Libraries and Upload OpenAI Key
```python
import os
import json

import pandas as pd

# Import the relevant Nomadic libraries
from nomadic.model import OpenAIModel
from nomadic.tuner import tune
from nomadic.experiment.base import Experiment, retry_with_exponential_backoff
from nomadic.experiment.rag import (
    run_rag_pipeline,
    run_retrieval_pipeline,
    run_inference_pipeline,
    obtain_rag_inputs,
    save_run_results,
    load_run_results,
    get_best_run_result,
    create_retrieval_heatmap,  # used in step 6
    create_inference_heatmap,
)

# Show full cell contents when displaying pandas DataFrames
pd.set_option("display.max_colwidth", None)

# Insert your OPENAI_API_KEY below
os.environ["OPENAI_API_KEY"] = "<YOUR_OPENAI_KEY>"
```

### 2. Define RAG Hyperparameters for the Experiments

Say we want to explore (all of!) the following hyperparameters and search spaces to optimize RAG performance:

| Parameter                     | Search Space                                                   | Pipeline Stage |
|-------------------------|------------------------------------------------------------------------|----------------|
| **chunk_size**          | 128, 256, 512                                                          | Retrieval      |
| **top_k**               | 1, 3, 5                                                                | Retrieval      |
| **overlap**             | 50, 100, 150                                                           | Retrieval      |
| **similarity_threshold**| 0.5, 0.7, 0.9                                                          | Retrieval      |
| **embedding_model**     | "text-embedding-ada-002", "text-embedding-curie-001"                   | Retrieval      |
| **model_name**          | "gpt-3.5-turbo", "gpt-4"                                               | Both           |
| **temperature**         | 0.3, 0.7, 0.9                                                          | Inference      |
| **max_tokens**          | 300, 500, 700                                                          | Inference      |
| **retrieval_strategy**  | "sentence-window", "full-document"                                     | Retrieval      |
| **reranking_model**     | true, false                                                            | Inference      |
| **query_transformation**| "rephrasing", "HyDE", "sub-queries"                                    | Both           |

Then, define the search space for each RAG pipeline hyperparameter you want to experiment with. (For a quick demo run, the spaces below are narrower than the full table above.)

```python
chunk_size = tune.choice([256, 512])
temperature = tune.choice([0.1, 0.9])
overlap = tune.choice([25])
similarity_threshold = tune.choice([50])
top_k = tune.choice([1, 2])
max_tokens = tune.choice([100, 200])
model_name = tune.choice(["gpt-3.5-turbo", "gpt-4o"])
embedding_model = tune.choice(["text-embedding-ada-002", "text-embedding-curie-001"])
retrieval_strategy = tune.choice(["sentence-window", "auto-merging"])
```

### 3. Upload Evaluation Dataset and External Data

```python
eval_json = {
    "queries": {
        "query1": "Describe the architecture of convolutional neural networks.",
        "query2": "What are the ethical implications of AI in healthcare?",
    },
    "responses": {
        "query1": "Convolutional neural networks consist of an input layer, convolutional layers, activation functions, pooling layers, fully connected layers, and an output layer.",
        "query2": "Ethical implications include issues of privacy, autonomy, and the potential for bias, which must be carefully managed to avoid harm.",
    }
}
pdf_url = "https://www.dropbox.com/scl/fi/sbko6nyzsuw00f2nhxa38/CS229_Lecture_Notes.pdf?rlkey=pebhb2qrdh08bnyxtus8qm11v&st=yha4ikm2&dl=1"
```
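
For reference, `obtain_rag_inputs` (used in step 4 below) loads the PDF into `docs` and pairs each evaluation query with its reference response. A hedged sketch of that pairing, assuming queries and responses share keys as in the JSON above:

```python
# Hedged sketch (not the SDK source): how eval_json plausibly becomes the
# parallel lists consumed by the experiments in steps 4 and 5.
eval_qs = list(eval_json["queries"].values())
ref_response_strs = [eval_json["responses"][key] for key in eval_json["queries"]]
assert len(eval_qs) == len(ref_response_strs)
```
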
### Evaluation Metrics for Retrieval and Inference

In this demo, we use evaluation metrics tailored to the retrieval and inference stages of a RAG pipeline.

#### A. Retrieval Evaluation Metrics

- **BM25 Scoring:**
  - BM25 (Best Matching 25) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It combines term frequency (TF), inverse document frequency (IDF), document length, and other factors into a single score. Retrieved documents are ranked by their BM25 scores against the transformed queries, and the best-retrieved documents are those with the highest scores (see the sketch after this list).

- **Average Retrieval Score:**
  - The mean of the BM25 scores of the best retrieval results across queries. This provides an overall measure of retrieval quality.

- **Retrieval Time (in milliseconds):**
  - The total time taken to retrieve the documents. This metric captures the efficiency of the retrieval stage.
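
As a rough illustration of the two scoring metrics above (not the SDK's internal implementation), here is how they could be computed with the open-source `rank_bm25` package, using toy documents and queries:

```python
# Hedged sketch: BM25 scoring with the rank_bm25 package, then the mean of the
# best score per query, matching the "Average Retrieval Score" described above.
from rank_bm25 import BM25Okapi

corpus = [
    "Convolutional neural networks stack convolutional and pooling layers.",
    "AI in healthcare raises privacy, autonomy, and bias concerns.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

queries = [
    "architecture of convolutional neural networks",
    "ethical implications of AI in healthcare",
]
best_scores = []
for query in queries:
    scores = bm25.get_scores(query.lower().split())  # one BM25 score per document
    best_scores.append(max(scores))                  # score of the best-retrieved document

average_retrieval_score = sum(best_scores) / len(best_scores)
print(f"Average retrieval score: {average_retrieval_score:.3f}")
```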

#### B. Inference Evaluation Metric

- **Hallucination Score:**
  - This metric assesses the extent to which the generated response includes information not found in the context. It calculates the proportion of the predicted response tokens that match tokens found in the provided context. The score is computed as:

  `Hallucination Score = 1 - (Matching Tokens / Total Predicted Tokens)`

  - A lower hallucination score indicates that the generated response closely aligns with the provided context, while a higher score suggests the presence of hallucinated (incorrect or fabricated) information.
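
A minimal sketch of this computation, assuming simple whitespace tokenization (the SDK may tokenize differently):

```python
# Hedged sketch: hallucination score via token overlap, assuming whitespace
# tokenization; the SDK's actual tokenizer and matching rules may differ.
def hallucination_score(predicted: str, context: str) -> float:
    pred_tokens = predicted.lower().split()
    if not pred_tokens:
        return 0.0
    context_tokens = set(context.lower().split())
    matching = sum(1 for tok in pred_tokens if tok in context_tokens)
    return 1 - matching / len(pred_tokens)

# A response fully grounded in the context scores 0.0; fabricated tokens raise the score.
print(hallucination_score(
    "pooling layers downsample feature maps",
    "pooling layers downsample feature maps in CNNs",
))  # -> 0.0
```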




### 4. Run the Retrieval Experiment! 🚀
```python
# Obtain RAG inputs
docs, eval_qs, ref_response_strs = obtain_rag_inputs(pdf_url=pdf_url, eval_json=eval_json)

# Set up the retrieval experiment
experiment_retrieval = Experiment(
    param_fn=run_retrieval_pipeline,
    param_dict={
        "top_k": top_k,
        "model_name": model_name,
        "retrieval_strategy": retrieval_strategy,
        "embedding_model": embedding_model
    },
    fixed_param_dict={
        "docs": docs,
        "eval_qs": eval_qs[:10],
        "ref_response_strs": ref_response_strs[:10],
    },
)

# Run the retrieval experiment and save its results
retrieval_results = experiment_retrieval.run(param_dict={
        "top_k": top_k,
        "model_name": model_name,
        "retrieval_strategy": retrieval_strategy,
        "embedding_model": embedding_model
    })
save_run_results(retrieval_results, "run_results.json")
```

### 5. Run the Inference Experiment! 🚀
```python
# Load the saved results and get the best run result
loaded_results = load_run_results("run_results.json")
best_run_result = get_best_run_result(loaded_results)
best_retrieval_results = best_run_result['metadata'].get("best_retrieval_results", [])

# Run inference experiment
experiment_inference = Experiment(
    param_fn=run_inference_pipeline,
    params={"temperature", "model_name", "max_tokens", "reranking_model", "similarity_threshold"},
    fixed_param_dict={
        "best_retrieval_results": best_retrieval_results,
        "ref_response_strs": ref_response_strs[:10],  # Match the number of queries used in retrieval
    },
)

inference_results = experiment_inference.run(param_dict={
    "model_name": model_name,
    "temperature": temperature,
    "max_tokens": max_tokens,
    "reranking_model": "cross-encoder/ms-marco-MiniLM-L-6-v2",
    "similarity_threshold": 0.7,
})
```

### 6. Interpret Results

Now we visualize the retrieval scores (for the best run result) alongside the inference scores for different configurations.

```python
create_retrieval_heatmap(retrieval_results)
```

![Retrieval Results](https://github.com/user-attachments/assets/91e8d760-c301-427b-975c-44520a21e22d)

Here are the results using the best-performing parameter configuration:

```python
create_inference_heatmap(inference_results)
```

![Inference Results](https://github.com/user-attachments/assets/1c9c7cc9-8b1f-4e26-8050-f7e6bd1f96ce)


# 💡 Contributing

Interested in contributing? Contributions to Nomadic, as well as new integrations, are both accepted and highly encouraged! Send questions in our [Discord](https://discord.gg/PF869aGM).


            
