<h1 align="center">
Horizon Takeoff
</h1>

<div align="center">
    <img width="400" height="350" src="/img/rocket.png">
</div>

**Horizon Takeoff** is a Python library that simplifies the cloud deployment of LLMs with TitanML's [Takeoff Server](https://github.com/titanml/takeoff-community) on AWS, with a specific focus on EC2 and SageMaker. An interactive Terminal User Interface (TUI) streamlines the configuration of your cloud environment and drives the deployment process. For a deeper look at the features offered by the Takeoff Server, refer to TitanML's [documentation](https://docs.titanml.co/docs/intro).

With Horizon-Takeoff, you have the flexibility to choose between two distinct workflows:

**1. Terminal User Interface (TUI):** This approach guides you through a step-by-step process within the terminal. It automatically saves your cloud environment settings in a YAML file and handles cloud orchestration tasks such as pushing the Takeoff Server image to AWS's Elastic Container Registry (ECR), launching the instance, and configuring the Takeoff Server for LLM inference.

**2. Python API:** Alternatively, you can manually create the YAML config file according to your specific requirements and execute the orchestration and instance launch in Python. Further details can be found in the `YAML Configuration` section.

## Requirements

**1.** AWS CLI installed and configured on your local machine.  
**2.** Docker installed.  
**3.** An AWS account with the following configuration:

* An instance profile role with the `AmazonEC2ContainerRegistryReadOnly` policy attached. This allows the instance to pull Docker images from ECR.

* A security group allowing inbound traffic on port `8000` (required for the Takeoff Server community edition) and port `3000` (required for the Takeoff Server pro edition). This exposes the appropriate Docker endpoint for API calls, depending on your server edition. A minimal `boto3` sketch of this setup is shown below.
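
If you still need to create these rules, the following is an optional sketch using `boto3` (not part of horizon-takeoff); the security group ID and role name are placeholders for your own resources:

```py
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Open the inbound ports the Takeoff Server listens on
# (8000 for the community edition, 3000 for the pro edition).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your security group ID
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # consider restricting this CIDR
        }
        for port in (8000, 3000)
    ],
)

iam = boto3.client("iam")

# Attach the ECR read-only policy to the role behind your instance profile.
iam.attach_role_policy(
    RoleName="takeoff-instance-role",  # placeholder: your instance role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
)
```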

> Currently, only EC2 deployment with the Community edition server is stable; SageMaker support and the Takeoff Server Pro edition are under development.

# Install <img align="center" width="30" height="29" src="https://media.giphy.com/media/sULKEgDMX8LcI/giphy.gif">
<br>

```bash
pip install horizon-takeoff
```

# TUI Launch <img align="center" width="30" height="29" src="https://media.giphy.com/media/QLcCBdBemDIqpbK6jA/giphy.gif">
<br>

Launch the TUI for configuring an EC2 instance with the community version of the Takeoff Server:


```bash
horizon-takeoff ec2 community
```

The TUI not only features a clean user interface (powered by the `rich` library) but also runs pre-start checks for AWS CLI and Docker installations. It also reads your AWS profile data to speed up the configuration of your cloud environment, listing your available AWS key pairs, security groups, and IAM role ARNs.

<div style="display: flex; justify-content: center;">
  <video muted controls src="https://github.com/InquestGeronimo/horizon-takeoff/assets/79061523/c1eabd23-c064-41b3-9966-414c5e4b5524" class="d-block rounded-bottom-2 border-top width-fit" style="max-height:640px; min-height: 200px"></video>
</div>

# Staging <img align="center" width="30" height="29" src="https://media.giphy.com/media/SmaYvew52UlC9MmB6l/giphy.gif">
<br>

After you've finished the TUI workflow, a YAML configuration file is automatically saved in your working directory. This file triggers the staging process of your deployment, and you will receive a notification in the terminal when your instance launches.

Wait a few minutes while the instance downloads the LLM and starts the Docker container running the Takeoff Server. To follow the progress and read the instance's initialization logs, SSH into the instance:

```bash
ssh -i ~/<pem.key> <user>@<public-ipv4-dns>  # e.g. ssh -i ~/aws.pem ubuntu@ec2-44-205-255-59.compute-1.amazonaws.com
```

On the instance, run the following command to view the logs and confirm when the container is up and running:

```bash
cat /var/log/cloud-init-output.log
```

Once the Uvicorn endpoint URL appears in the logs, the Docker container is operational and you are ready to make API calls to the inference endpoint.
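
Before making API calls, a quick way to confirm that the security group is configured correctly and the server port is reachable from your machine is a plain socket check (the host below is a placeholder for your instance's public DNS):

```py
import socket

HOST = "ec2-44-205-255-59.compute-1.amazonaws.com"  # placeholder: your instance's public DNS
PORT = 8000  # community edition; use 3000 for the pro edition

# Raises an exception if the port cannot be reached within the timeout.
with socket.create_connection((HOST, PORT), timeout=5):
    print(f"Takeoff Server is reachable on {HOST}:{PORT}")
```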

# Calling the Endpoint <img align="center" width="30" height="29" src="https://media.giphy.com/media/l41YvpiA9uMWw5AMU/giphy.gif">
<br>

Once you've initialized the `EC2Endpoint` class, you can invoke your LLM in the cloud with a single line of code.

```py
from horizon import EC2Endpoint

endpoint = EC2Endpoint()
generation = endpoint('List 3 things to do in London.')
print(generation)
```

You can pass generation arguments to `EC2Endpoint()` to shape the model's output and to select the server edition and endpoint type. The defaults are:

```python
pro: bool = False,                  # use the pro edition server instead of community
stream: bool = False,               # use the streaming endpoint
sampling_topk: int = 1,             # sample from the k most likely next tokens
sampling_topp: float = 1.0,         # nucleus (top-p) sampling threshold
sampling_temperature: float = 1.0,  # sampling temperature; higher values are more random
repetition_penalty: int = 1,        # penalty applied to repeated tokens
no_repeat_ngram_size: int = 0,      # block repeated n-grams of this size (0 disables)
```
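
For example, assuming these are accepted as keyword arguments when constructing `EC2Endpoint()` (as the defaults above suggest), a tuned call might look like this; the values are illustrative only:

```py
from horizon import EC2Endpoint

endpoint = EC2Endpoint(
    sampling_topk=40,          # sample from the 40 most likely tokens
    sampling_topp=0.9,         # nucleus sampling threshold
    sampling_temperature=0.7,  # lower temperature for more focused output
    no_repeat_ngram_size=3,    # avoid repeating 3-grams
)

print(endpoint("Write a short product description for a rocket."))
```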

For more information on the available decoding arguments, refer to TitanML's [docs](https://docs.titanml.co/docs/titan-takeoff/experimentation/generation-parameters).

# Deleting Instance <img align="center" width="30" height="29" src="https://media.giphy.com/media/HhTXt43pk1I1W/giphy.gif">
<br>

To delete your working instance via the terminal, run:

```bash
horizon-del
```

# YAML Configuration <img align="center" width="30" height="29" src="https://media.giphy.com/media/mrYOnKZ7MJFCM/giphy.gif">
<br>

If you prefer to bypass the TUI, you can write the YAML configuration manually. Add the following EC2-related variables and save them in an `ec2_config.yaml` file:

```yaml
EC2:
  ami_id: ami-0c7217cdde317cfec             # Set the ID of the Amazon Machine Image (AMI) to use for EC2 instances.
  ecr_repo_name: takeoff                    # Set the name of the ECR repository. If it doesn't exist it will be created.
  hardware: cpu                             # Set the hardware type: 'cpu' or 'gpu'
  hf_model_name: tiiuae/falcon-7b-instruct  # Set the name of the Hugging Face model to use.
  instance_role_arn: arn:aws:iam::^^^:path  # Set the ARN of the IAM instance profile role.
  instance_type: c5.2xlarge                 # Set the EC2 instance type.
  key_name: aws                             # Set the name of the AWS key pair.
  region_name: us-east-1                    # Set the AWS region name.
  security_group_ids:                       # Set the security group ID(s).
    - sg-0fefe7b366b0c0843
  server_edition: community                 # defaults to "community" ("pro" not available yet)                
```
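
As an optional sanity check before launching anything, the file can be loaded with PyYAML (not part of horizon-takeoff) to confirm it parses and every field is filled in; the field list mirrors the example above:

```py
import yaml  # PyYAML

with open("ec2_config.yaml") as f:
    cfg = yaml.safe_load(f)["EC2"]

# Fields taken from the example config above; server_edition defaults to "community".
required = [
    "ami_id", "ecr_repo_name", "hardware", "hf_model_name",
    "instance_role_arn", "instance_type", "key_name",
    "region_name", "security_group_ids",
]
missing = [key for key in required if not cfg.get(key)]
if missing:
    raise ValueError(f"ec2_config.yaml is missing values for: {missing}")

print(cfg)
```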

# Launch in Python <img align="center" width="30" height="29" src="https://media.giphy.com/media/PeaNPlyOVPNMHjqTm7/giphy.gif">
<br>

After configuring the YAML file, instantiate the `DockerHandler` and `TitanEC2` classes to handle the Docker image workflow and the instance launch.

### Docker 

Load the YAML file into the `DockerHandler`. These commands will pull the Takeoff Docker image, tag it, and push it to ECR:

```py
from horizon import DockerHandler, TitanEC2

docker = DockerHandler("ec2_config.yaml")

docker.pull_takeoff_image()
docker.push_takeoff_image()
```

### Create Instance

Launch the EC2 instance:

```py
titan = TitanEC2("ec2_config.yaml")
instance_id, meta_data = titan.create_instance()
print(meta_data)
```
When your instance is created, you will get JSON output of the instance's metadata.

Revisit the `Staging` and `Calling the Endpoint` sections for API handling.

### Delete Instance

Pass your instance ID to the `delete_instance` method:

```py
titan.delete_instance(instance_id)
```
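
Putting the pieces together, here is a minimal end-to-end sketch that only reuses the calls shown above; in practice you'll want to wait for staging to finish (see the `Staging` section) before calling the endpoint:

```py
from horizon import DockerHandler, TitanEC2, EC2Endpoint

CONFIG = "ec2_config.yaml"

# Push the Takeoff image to ECR.
docker = DockerHandler(CONFIG)
docker.pull_takeoff_image()
docker.push_takeoff_image()

# Launch the EC2 instance.
titan = TitanEC2(CONFIG)
instance_id, meta_data = titan.create_instance()
print(meta_data)

# ...once the container is up (check the cloud-init logs), call the endpoint.
endpoint = EC2Endpoint()
print(endpoint("List 3 things to do in London."))

# Clean up when you're done.
titan.delete_instance(instance_id)
```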



            
