# capibara-ent

- **Version:** 1.2.1
- **Home page:** https://github.com/anacronic-io/capibaraGPT-v2
- **Summary:** A flexible multimodal AI library for advanced contextual understanding and deployment.
- **Author:** Marco Durán
- **Requires Python:** >=3.8
- **Keywords:** ai, nlp, multimodal, machine-learning, deep-learning, language-models, ethics, tpu, training, deployment
- **Uploaded:** 2024-11-05 21:20:49
# CapibaraENT CLI

![Capibara SSBD Model](./capibara_model/src/public/3BSSBD.webp)

CapibaraENT is a command-line tool for training, evaluating, and deploying Capibara-based language models, optimized for TPUs and featuring hyperparameter optimization.

## Features

- Training and evaluation of Capibara models
- Built-in TPU support
- Hyperparameter optimization
- Model deployment
- Performance measurement
- Docker container execution (optional)
- Integration with Weights & Biases for experiment tracking
- **New layers and sub-models**: Support for the latest modeling layers and advanced sub-models.

## Requirements

- Python 3.8+
- JAX (for TPU optimization; see the device check after this list)
- TensorFlow
- Weights & Biases
- Docker (optional, for container execution)
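
If TPU optimization is the goal, a quick sanity check that JAX can actually see its accelerators is the one-liner below; on a TPU host the printed list contains `TpuDevice` entries, otherwise CPU or GPU devices are shown:

```bash
# Print the devices JAX can reach; expect TpuDevice entries on a TPU VM
python -c "import jax; print(jax.devices())"
```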

## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/anachroni-io/capibaraent-cli.git
   cd capibaraent-cli
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up Weights & Biases:

   ```bash
   wandb login
   ```
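
   To confirm the login took effect, `wandb status` prints the active Weights & Biases settings, including the logged-in entity (a standard `wandb` CLI command, not specific to this project):

   ```bash
   # Show the current W&B configuration for this environment
   wandb status
   ```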

## Usage

The CapibaraENT CLI offers various options for working with Capibara models:

```bash
python capibaraent_cli.py [options]
```

### Available options

- `--log-level`: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- `--train`: Train the model
- `--evaluate`: Evaluate the model
- `--optimize`: Perform hyperparameter optimization
- `--use-docker`: Run the model inside Docker (optional; requires a Docker setup)
- `--deploy`: Deploy the model
- `--measure-performance`: Measure the model's performance
- `--model`: Path to the model YAML file (for deserialization)
- `--new-layer`: (optional) Activate new modeling layers
- `--sub-model`: (optional) Specify sub-models to use
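
These flags can typically be combined in a single invocation, as is usual for argparse-style CLIs (which combinations the tool accepts may vary; the values below are illustrative and `capibara.yaml` is a hypothetical model file):

```bash
# Hypothetical combined run: train with verbose logging, new layers enabled,
# and an explicit model file
python capibaraent_cli.py --train --log-level DEBUG --new-layer --model capibara.yaml
```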

### Usage Examples

1. Train a model:

   ```bash
   python capibaraent_cli.py --train
   ```

2. Evaluate a model:

   ```bash
   python capibaraent_cli.py --evaluate
   ```

3. Perform hyperparameter optimization:

   ```bash
   python optimize_hyperparameters.py
   ```

4. Deploy a model:

   ```bash
   python capibaraent_cli.py --deploy
   ```

5. Measure model performance:

   ```bash
   python capibaraent_cli.py --measure-performance
   ```

6. Run a model in Docker (optional, if Docker is set up):

   ```bash
   python capibaraent_cli.py --use-docker
   ```

## Configuration

Model configuration is handled through environment variables and YAML files. Key configuration parameters include:

- `CAPIBARA_LEARNING_RATE`
- `CAPIBARA_BATCH_SIZE`
- `CAPIBARA_MAX_LENGTH`
- `CAPIBARA_USE_TPU`
- `WANDB_PROJECT`
- `WANDB_ENTITY`
- `CAPIBARA_NEW_LAYER` (enable the new modeling layers)
- `CAPIBARA_SUB_MODEL` (select the sub-model to use)

### Example `.env` file

```env
CAPIBARA_LEARNING_RATE=0.001
CAPIBARA_BATCH_SIZE=32
CAPIBARA_MAX_LENGTH=512
CAPIBARA_USE_TPU=True
WANDB_PROJECT=my_project
WANDB_ENTITY=my_entity
CAPIBARA_NEW_LAYER=True
CAPIBARA_SUB_MODEL=my_sub_model
```

For a full list of configuration options, refer to the `.env.example` file.
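
Because these settings are read from the environment, they can also be supplied inline for a one-off run without editing the `.env` file (plain shell behavior, not a project-specific feature; values are illustrative):

```bash
# Override selected settings for a single training run
CAPIBARA_LEARNING_RATE=0.0005 CAPIBARA_BATCH_SIZE=64 python capibaraent_cli.py --train
```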

## Hyperparameter Optimization

To perform hyperparameter optimization:

1. Ensure your Weights & Biases project is set up.
2. Run the optimization script:

   ```bash
   python optimize_hyperparameters.py
   ```

3. View the results in your Weights & Biases dashboard.
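
The contents of `optimize_hyperparameters.py` are not reproduced here; for orientation, a Weights & Biases sweep is typically driven with the standard `wandb` CLI as sketched below (`sweep.yaml` and the project/entity names are assumptions):

```bash
# Register a sweep from a config file; wandb prints the new sweep ID
wandb sweep --project my_project --entity my_entity sweep.yaml
# Start an agent that pulls trial configurations from the sweep
wandb agent my_entity/my_project/<sweep_id>
```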

## Development

To contribute to the project:

1. Fork the repository
2. Create a new branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

Distributed under the MIT License. See `LICENSE` for more information.

## Contact

Marco Durán - <marco@anachroni.co>

Project Link: [https://github.com/anachroni-io/capibaraent-cli](https://github.com/anachroni-io/capibaraent-cli)
