muxllm

- Name: muxllm
- Version: 1.0.2 (PyPI)
- Summary: muxllm is a Python library designed to be an all-in-one wrapper for various cloud providers as well as local inference.
- Upload time: 2024-11-17 23:02:54
- Requires Python: >=3.9
- License: MIT (Copyright (c) 2024 MannanB)
- Keywords: llm, gpt, wrapper, api


# MUXLLM

  

muxllm is a Python library designed to be an all-in-one wrapper for using LLMs via various cloud providers as well as local inference (WIP). Its main purpose is to provide a unified API that allows hot-swapping between LLMs and between cloud and local inference. It exposes a simple interface with built-in chat and prompting capabilities for easy use.
  

Install via pip
```
pip install muxllm
```

[Docs](https://github.com/MannanB/MUXLLM)

# Documentation

  

Basic Usage
----
```python
from muxllm import LLM, Provider

llm = LLM(Provider.openai, "gpt-4")

response = llm.ask("Translate 'Hola, como estas?' to english")

print(response.message) # Hello, how are you?
```

API keys can be passed via the api_key parameter of the LLM class. Otherwise, muxllm will look for the key in an environment variable following the pattern [PROVIDER_NAME]_API_KEY (e.g. OPENAI_API_KEY).
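For example, a minimal sketch of both options (assuming your OpenAI key lives in the `OPENAI_API_KEY` environment variable):
```python
import os
from muxllm import LLM, Provider

# option 1: pass the key explicitly via the api_key parameter
llm = LLM(Provider.openai, "gpt-4", api_key=os.environ["OPENAI_API_KEY"])

# option 2: rely on the OPENAI_API_KEY environment variable being set
llm = LLM(Provider.openai, "gpt-4")
```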

Adding a system prompt
```python
llm = LLM(Provider.openai, "gpt-4", system_prompt="...")
```
Any additional keyword arguments are passed through to the chat-completion call of whichever provider you chose
```python
llm.ask("...", temperature=1, top_p=.5)
```
Built-in chat functionality with conversation history
```python
response = llm.chat("how are you?")
print(response.content)
# the previous response has automatically been stored
response = llm.chat("what are you doing right now?") 
llm.save_history("./history.json") # save history if you want to continue conversation later
...
llm.load_history("./history.json")
```
Function calling (only for function-calling enabled models)

```python
tools = [...]
llm = LLM(Provider.fireworks, "firefunction-v2")

resp = llm.chat("What is 5*5",
                tools=tools,
                tool_choice={"type": "function"})
                
# muxllm returns a list of ToolCalls
tool_call = resp.tools[0]

# call function with args
tool_response = ...

# the function response is not automatically added to history, so you must manually add it
llm.add_tool_response(tool_call, tool_response)

resp = llm.chat("what did the tool tell you?")
print(resp.message) # 25
```

Each ToolCall contains the name of the tool (```tool_call.name```) and its arguments as a dict (```tool_call.args```). Argument values are always passed as strings, so integers, floats, etc., must be parsed inside the function.
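As an illustration, calling a tool by hand might look like the sketch below. The `multiply` helper and its argument names are hypothetical, not part of muxllm; the point is that argument values arrive as strings and must be parsed.
```python
# hypothetical tool implementation; argument values arrive as strings
def multiply(a: str, b: str) -> str:
    return str(int(a) * int(b))

tool_call = resp.tools[0]                   # e.g. name="multiply", args={"a": "5", "b": "5"}
tool_response = multiply(**tool_call.args)  # parse strings inside the function
llm.add_tool_response(tool_call, tool_response)
```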

In order to use tools, you need to supply a list of tool definitions describing each tool you have available. Check out the [OpenAI guide](https://platform.openai.com/docs/guides/function-calling) to see how to build them.
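For reference, a single entry in that list follows the OpenAI function-calling schema and looks roughly like this (the `multiply` tool here is purely illustrative):
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "Multiply two integers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "string", "description": "First factor"},
                    "b": {"type": "string", "description": "Second factor"},
                },
                "required": ["a", "b"],
            },
        },
    }
]
```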

Backend API usage
----
If you need finer control, you can call the provider's API directly. There are multiple ways to do this (a full call is sketched after the two examples below).
1. Calling the LLM instance directly
```python
from muxllm import LLM, Provider
llm = LLM(Provider.openai, "gpt-4")
response = llm(messages={...})
print(response.message)
```
2. Using the Provider factory
```python
from muxllm.providers.factory import Provider, create_provider
provider = create_provider(Provider.groq)
response = provider.get_response(messages={...}, model="llama3-8b-instruct")
print(response.message)
```
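Both paths take a raw messages argument. As a rough sketch, assuming an OpenAI-style list of role/content dicts (the exact schema muxllm expects is not spelled out here, so treat this as an assumption):
```python
from muxllm import LLM, Provider

llm = LLM(Provider.openai, "gpt-4")

# assumed OpenAI-style message format; muxllm's exact expected schema may differ
messages = [
    {"role": "system", "content": "You are a helpful translator."},
    {"role": "user", "content": "Translate 'Hola, como estas?' to English."},
]
response = llm(messages=messages)
print(response.message)
```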
There may be edge cases or features of a certain provider that muxllm doesn't cover. In that case, you may want to inspect the raw response returned directly from the provider's web API or SDK. This is available via ```LLMResponse.raw_response``` (note that an LLMResponse is what is returned any time you call the LLM, whether through .chat, .ask, or any of the functions above).
```python
from muxllm import LLM, Provider
llm = LLM(Provider.openai, "gpt-4")
response = llm(messages={...})
print(response.raw_response)

assert response.message == response.raw_response.choices[0].message
```

Streaming
----
Streaming support is not yet available, but it is a planned feature.

Async
---
As of right now, async is only available through the provider class
```python
from muxllm.providers.factory import Provider, create_provider
provider = create_provider(Provider.groq)
# in some async function
response = await provider.get_response_async(messages={...}, model="...")
```
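A minimal runnable sketch of the same call, wrapped in asyncio (the llama3-8b-instruct alias comes from the model alias table below; the message payload is left elided as above):
```python
import asyncio
from muxllm.providers.factory import Provider, create_provider

async def main():
    provider = create_provider(Provider.groq)
    # message payload elided, as in the snippet above
    response = await provider.get_response_async(messages={...}, model="llama3-8b-instruct")
    print(response.message)

asyncio.run(main())
```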
Prompting with muxllm
--
muxllm provides a simple way to add Pythonic prompting.
In any plain-text file or string, wrapping a variable name in {{ ... }} lets you fill that variable in by keyword when using the LLM class.

Example Usage
```python
from muxllm import LLM, Provider

llm = LLM(Provider.openai, "gpt-3.5-turbo")
myprompt = "Translate {{spanish}} to english"

response = llm.ask(myprompt, spanish="Hola, como estas?").content
```
Prompts inside txt files
```python
from muxllm import LLM, Provider, Prompt

llm = LLM(Provider.openai, "gpt-3.5-turbo")
# muxllm will look for prompt files in cwd and ./prompts if that folder exists
# You can also provide a direct or relative path to the txt file
response = llm.ask(Prompt("translate_prompt.txt"), spanish="Hola, como estas?").content
```

Single Prompt LLMs: a lot of the time, an LLM instance is used for only one prompt. SinglePromptLLM is a subclass of LLM that is bound to a single prompt, given in the constructor.
```python
from muxllm.llm import SinglePromptLLM
from muxllm import Prompt, Provider

llm = SinglePromptLLM(Provider.openai, "gpt-3.5-turbo", prompt="translate {{spanish}} to english")

# via file
llm = SinglePromptLLM(Provider.openai, "gpt-3.5-turbo", prompt=Prompt("translate_prompt.txt"))

print(llm.ask(spanish="hola, como estas?").content)
```

Tools with muxllm
-- 
muxllm provides a simple way to automatically create the tools dictionary and to call the functions that the LLM requests.
To use it, first create a ```ToolBox``` that contains all of your tools. Then, using the ```tool``` decorator, register the functions you want to expose as tools.
```python
from muxllm.tools import tool, ToolBox, Param

my_tools = ToolBox()

@tool("get_current_weather", my_tools, "Get the current weather", [
    Param("location", "string", "The city and state, e.g. San Francisco, CA"),
    Param("fmt", "string", "The temperature unit to use. Infer this from the users location.")
])
def get_current_weather(location, fmt):
    return f"It is sunny in {location} according to the weather forecast in {fmt}"
```
Note that for the Param class, the second argument is the JSON Schema type of the parameter. The possible types are defined here: https://json-schema.org/understanding-json-schema/reference/type
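For instance, a numeric parameter would use the "integer" type (this `days` parameter is only an illustration, not part of the weather example above):
```python
# illustrative only: a parameter whose value should be a JSON Schema integer
Param("days", "integer", "Number of forecast days to return")
```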

Once you have created each tool, you can easily convert the ToolBox into the tools dictionary and pass it to an LLM.
```python
tools_dict = my_tools.to_dict()
response = llm.chat("What is the weather in San Francisco, CA in fahrenheit", tools=tools_dict)
```
Finally, you can use the ```ToolBox``` to invoke the tool and get a response.
```python
tool_call = response.tools[0]
tool_resp = my_tools.invoke_tool(tool_call)
llm.add_tool_response(tool_call, tool_resp)
```
It's also possible to have multiple ```ToolBox```es and then combine them. This is useful if you want to dynamically add or remove certain tools from the LLM.
```python
coding_tools = ToolBox()
...
writing_tools = ToolBox()
...
research_tools = ToolBox()
...
# When passing the tools to the LLM
all_tools = coding_tools.to_dict() + writing_tools.to_dict() + research_tools.to_dict()
```

Providers
==
Currently the following providers are available: OpenAI, Groq, Fireworks, Google Gemini, and Anthropic.

Local inference with Hugging Face and llama.cpp is planned for the future.
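Because the interface is uniform, switching providers only requires changing the Provider enum and model name. For example, using the llama3-8b-instruct alias from the table below, the same code runs against Fireworks or Groq:
```python
from muxllm import LLM, Provider

for provider in (Provider.fireworks, Provider.groq):
    llm = LLM(provider, "llama3-8b-instruct")  # alias resolves to each provider's own model id
    print(llm.ask("Say hello in one short sentence").message)
```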

Model Alias
---
Fireworks, Groq, and local inference share several common models. For the sake of generalization, these have been given aliases that you can use if you don't want to use the provider-specific model name. This makes models interchangeable between providers without having to change the model name.
| Name                   | Fireworks                                        | Groq               | HuggingFace |
| ---------------------- | ------------------------------------------------ | ------------------ | ----------- |
| llama3-8b-instruct     | accounts/fireworks/models/llama-v3-8b-instruct   | llama3-8b-8192     | WIP         |
| llama3-70b-instruct    | accounts/fireworks/models/llama-v3-70b-instruct  | llama3-70b-8192    | WIP         |
| mixtral-8x7b-instruct  | accounts/fireworks/models/mixtral-8x7b-instruct  | mixtral-8x7b-32768 | WIP         |
| gemma-7b-instruct      | accounts/fireworks/models/gemma-7b-it            | gemma-7b-it        | WIP         |
| gemma2-9b-instruct     | accounts/fireworks/models/gemma-9b-it            | gemma-9b-it        | WIP         |
| firefunction-v2        | accounts/fireworks/models/firefunction-v2        | N/A                | WIP         |
| mixtral-8x22b-instruct | accounts/fireworks/models/mixtral-8x22b-instruct | N/A                | WIP         |

```python
# the following are all equivalent, in terms of what model they use
LLM(Provider.fireworks, "llama3-8b-instruct")
LLM(Provider.groq, "llama3-8b-instruct")
LLM(Provider.fireworks, "accounts/fireworks/models/llama-v3-8b-instruct")
LLM(Provider.groq, "llama3-8b-8192")
```

Future Plans
===

* Adding cost tracking / forecasting (e.g. llm.get_cost(...))
* Support for Local Inference
* Seamless async and streaming support
* Homogenized error handling across SDKs

            
