word-embeddings-sdk


Name: word-embeddings-sdk
Version: 0.1.5
Home page: https://github.com/Width-ai/embeddings-sdk
Summary: Python SDK to interface with the WordEmbeddings API
Upload time: 2023-10-26 15:43:07
Author: Patrick Hennis
License: MIT
Keywords: embeddings, sdk, wordembeddings, wordembeddings.ai
Requirements: no requirements were recorded
# WordEmbeddings SDK
Python SDK to interface with the Word Embeddings API
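
## Installation

The package is published on PyPI under the name shown above:

```bash
pip install word-embeddings-sdk
```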


## Example Usage
```python
# import the library
from word_embeddings_sdk import WordEmbeddingsSession

# instantiate a session with your credentials
session = WordEmbeddingsSession(customer_id="...", api_key="...")

# list all your models; the list should be empty at this point
session.get_models()

# create a new model
model_info = session.create_model(model_name="testing model")

# now when you list your models you will see the one you just created
session.get_models()

# fetch the full model record from get_models and store it for later use
model_info = session.get_models()['models'][0]

# kick off your first finetuning job; in this example we finetune an embeddings model geared towards foods
# with TripletLoss, each example is read as a triplet: typically an anchor, a similar text, and a dissimilar text
finetune_info = session.finetune(
    model_id=model_info.get("id"),
    datasets=[{
        "loss": "TripletLoss",
        "loss_params": {
            "distance": "cosine",
            "margin": 0.5
        },
        "examples": [
            {"texts": ["Cheeseburger", "Hamburger", "Pizza"]},
            {"texts": ["Sushi", "Maki Roll", "Ice Cream"]},
            {"texts": ["Pancakes", "Waffles", "Salad"]},
            {"texts": ["Steak", "Ribeye", "Hot Dog"]},
            {"texts": ["Chicken Wings", "Buffalo Wings", "French Fries"]},
            {"texts": ["Tacos", "Burritos", "Nachos"]},
            {"texts": ["Spaghetti", "Lasagna", "Garlic Bread"]},
            {"texts": ["Sashimi", "Nigiri", "Tempura"]},
            {"texts": ["Donuts", "Cupcakes", "Muffins"]},
            {"texts": ["Pho", "Ramen", "Spring Rolls"]},
            {"texts": ["Fish and Chips", "Clam Chowder", "Onion Rings"]},
            {"texts": ["Fried Chicken", "Chicken Nuggets", "Mashed Potatoes"]},
            {"texts": ["Sushi Rolls", "California Roll", "Edamame"]},
            {"texts": ["Pasta Carbonara", "Fettuccine Alfredo", "Caesar Salad"]},
            {"texts": ["Gyoza", "Dumplings", "Fried Rice"]},
            {"texts": ["Cheesecake", "Brownies", "Creme Brulee"]},
            {"texts": ["Pad Thai", "Tom Yum Soup", "Thai Curry"]},
            {"texts": ["Fish Tacos", "Shrimp Tacos", "Guacamole"]},
            {"texts": ["Chicken Parmesan", "Meatball Subs", "Garlic Knots"]},
            {"texts": ["Burger and Fries", "Fish Sandwiches", "Onion Rings"]},
            {"texts": ["Tiramisu", "Cannoli", "Gelato"]},
            {"texts": ["Chicken Caesar Wrap", "Greek Salad", "Hummus"]},
            {"texts": ["Beef Stir Fry", "Sweet and Sour Chicken", "Egg Rolls"]},
            {"texts": ["Peking Duck", "Mongolian Beef", "Fried Rice"]},
            {"texts": ["Shrimp Scampi", "Lobster Bisque", "Crab Cakes"]},
            {"texts": ["Chicken Tikka Masala", "Naan Bread", "Samosas"]},
            {"texts": ["potato salad", "mashed potatoes", "sushi rolls"]},
            {"texts": ["cheeseburger", "hamburger", "ice cream"]},
            {"texts": ["steak", "ribeye steak", "salmon"]},
            {"texts": ["fried chicken", "grilled chicken", "lobster"]},
            {"texts": ["cheese", "mozzarella cheese", "chocolate"]},
            {"texts": ["sushi", "sashimi", "tempura"]},
            {"texts": ["fried rice", "steamed rice", "fried noodles"]}
        ]
    }]
)

# check on the finetuning job status
session.monitor_finetuning(finetune_info.get("finetune_id"))
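
# (optional) a rough polling sketch if you would rather block until the job finishes;
# the "status" field name and its values are assumptions about the response payload,
# not documented behavior, so adjust them to whatever monitor_finetuning actually returns
import time
status = session.monitor_finetuning(finetune_info.get("finetune_id"))
while status.get("status") not in ("completed", "failed"):
    time.sleep(30)
    status = session.monitor_finetuning(finetune_info.get("finetune_id"))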

# get all model versions
model_versions = session.get_model_versions()

# once the model is done training, let's get some embeddings
embeddings = session.inference(model_id=model_info.get("id"), model_version_id=finetune_info.get("model_version_id"),
                               input_texts=["oatmeal cookie", "bagel", "fried chicken"])

# if you want to keep the resource hosting your model hot between calls, use this function
session.keep_alive(model_id=model_info.get("id"), model_version_id=finetune_info.get("model_version_id"))

# subsequent inference calls now return faster, since you no longer wait for the underlying resources to spin up
embeddings = session.inference(model_id=model_info.get("id"), model_version_id=finetune_info.get("model_version_id"),
                               input_texts=["shrimp poboy", "candy cane"])

# when you are finished running inferences, make sure to tear down your resources
if input("tear down stack? [Y/n] ") == "Y":
    session.tear_down(model_id=model_info.get("id"), model_version_id=finetune_info.get("model_version_id"))

# if you want to delete a model version
if input("delete model version? [Y/n] ") == "Y":
    session.delete_model_version(model_id=model_info.get("id"), model_version_id=finetune_info.get("model_version_id"))

# and if you want to delete a model
if input("delete model? [Y/n] ") == "Y":
    session.delete_model(model_id=model_info.get("id"))
```
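
For anything beyond a quick test, it is a good idea to keep credentials out of source. A minimal sketch, assuming you export them as environment variables (the variable names below are an illustrative convention, not part of the SDK):

```python
import os

from word_embeddings_sdk import WordEmbeddingsSession

# read the credentials from the environment instead of hard-coding them;
# these variable names are an example convention, not required by the SDK
session = WordEmbeddingsSession(
    customer_id=os.environ["WORD_EMBEDDINGS_CUSTOMER_ID"],
    api_key=os.environ["WORD_EMBEDDINGS_API_KEY"],
)
```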

            
