| Field | Value |
| --- | --- |
| Name | aios-core |
| Version | 0.1.0 |
| Summary | AIOS: LLM Agent Operating System |
| Upload time | 2024-11-03 13:25:03 |
| Requires Python | >=3.9 |
| License | MIT License, Copyright (c) 2024 AGI Research (standard MIT terms; see [LICENSE](https://github.com/agiresearch/AIOS/blob/main/LICENSE)) |
| Keywords | llm, os |
# AIOS: LLM Agent Operating System
<a href='https://arxiv.org/abs/2403.16971'><img src='https://img.shields.io/badge/Paper-PDF-red'></a>
<a href='https://arxiv.org/abs/2312.03815'><img src='https://img.shields.io/badge/Paper-PDF-blue'></a>
<a href='https://aios.readthedocs.io/'><img src='https://img.shields.io/badge/Documentation-AIOS-green'></a>
[![Code License](https://img.shields.io/badge/Code%20License-MIT-orange.svg)](https://github.com/agiresearch/AIOS/blob/main/LICENSE)
<a href='https://discord.gg/B2HFxEgTJX'><img src='https://img.shields.io/badge/Community-Discord-8A2BE2'></a>
<a href="https://trendshift.io/repositories/8908" target="_blank"><img src="https://trendshift.io/api/badge/repositories/8908" alt="agiresearch%2FAIOS | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
The goal of AIOS is to build a large language model (LLM) agent operating system, which embeds the LLM into the operating system as the brain of the OS. AIOS is designed to address problems that arise during the development and deployment of LLM-based agents (e.g., scheduling, context switching, memory management), enabling a better ecosystem for agent developers and users.
## 🏠 Architecture of AIOS
<p align="center">
<img src="docs/assets/aios-figs/AIOS-Architecture.png">
</p>
AIOS provides the LLM kernel as an abstraction on top of the OS kernel. The kernel facilitates the installation, execution, and usage of agents, while the AIOS SDK facilitates agent development and deployment.
## 📰 News
- **[2024-09-01]** 🔥 AIOS supports multiple agent creation frameworks (e.g., ReAct, Reflexion, OpenAGI, AutoGen, Open Interpreter, MetaGPT). Agents created by these frameworks can onboard AIOS. Onboarding guidelines can be found at the [Doc](https://aios.readthedocs.io/).
- **[2024-07-10]** 📖 AIOS documentation template is up: [Code](https://github.com/agiresearch/AIOS/tree/main/docs) and [Website](https://aios.readthedocs.io/).
- **[2024-06-20]** 🔥 Function calling for open-sourced LLMs (native huggingface, vllm, ollama) is supported.
- **[2024-05-20]** 🚀 More agents with ChatGPT-based tool calling are added (i.e., MathAgent, RecAgent, TravelAgent, AcademicAgent and CreationAgent), their profiles and workflows can be found in [OpenAGI](https://github.com/agiresearch/OpenAGI).
- **[2024-05-13]** 🛠️ Local models (diffusion models) as tools from HuggingFace are integrated.
- **[2024-05-01]** 🛠️ The agent creation in AIOS is refactored, which can be found in our [OpenAGI](https://github.com/agiresearch/OpenAGI) package.
- **[2024-04-05]** 🛠️ AIOS currently supports external tool callings (google search, wolframalpha, rapid API, etc).
- **[2024-04-02]** 🤝 AIOS [Discord Community](https://discord.gg/B2HFxEgTJX) is up. Welcome to join the community for discussions, brainstorming, development, or just random chats! For how to contribute to AIOS, please see [CONTRIBUTE](https://github.com/agiresearch/AIOS/blob/main/CONTRIBUTE.md).
- **[2024-03-25]** ✈️ Our paper [AIOS: LLM Agent Operating System](https://arxiv.org/abs/2403.16971) is released!
- **[2023-12-06]** 📋 After several months of working, our perspective paper [LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem](https://arxiv.org/abs/2312.03815) is officially released.
## ✈️ Getting Started
Please see our ongoing [documentation](https://aios.readthedocs.io/en/latest/) for more information.
- [Installation](https://aios.readthedocs.io/en/latest/get_started/installation.html)
- [Quickstart](https://aios.readthedocs.io/en/latest/get_started/quickstart.html)
### Installation
Git clone AIOS
```bash
git clone https://github.com/agiresearch/AIOS.git
cd AIOS
```
Create venv environment (recommended)
```bash
python -m venv venv
source venv/bin/activate
```
or create conda environment
```bash
conda create -n venv python=3.10 # For Python 3.10
conda create -n venv python=3.11 # For Python 3.11
conda activate venv
```
If you have a GPU environment, install the dependencies with
```bash
pip install -r requirements-cuda.txt
```
otherwise, install the dependencies with
```bash
pip install -r requirements.txt
```
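One way to choose between the two requirements files is to check for the NVIDIA driver. A minimal sketch (the helper name is ours, not part of AIOS):

```python
import shutil

def has_cuda_gpu() -> bool:
    """Heuristic check: the NVIDIA driver ships `nvidia-smi`,
    so its presence usually indicates a CUDA-capable environment."""
    return shutil.which("nvidia-smi") is not None

# Pick the matching requirements file for this machine.
requirements = "requirements-cuda.txt" if has_cuda_gpu() else "requirements.txt"
print(f"pip install -r {requirements}")
```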
### Quickstart
> [!TIP]
>
> Configuring LLM endpoints may require setting up multiple API keys.
> We provide `.env.example` to make this easier: copy `.env.example` to `.env` and fill in the keys you need.
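The `.env` loading step can be sketched in code. This is a minimal hand-rolled parser (libraries such as `python-dotenv` are commonly used instead), and the key names are simply the ones mentioned in this README:

```python
import os

def load_env(text: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines; blanks and '#' comments ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Example contents, as if copied from .env.example (values are placeholders).
sample = """# copied from .env.example
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=your-key
"""
keys = load_env(sample)
os.environ.update(keys)  # make the keys visible to the current process
```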
Note: Please use `launch.py` for the WebUI, or `agent_repl.py` for the TUI.
#### Use with OpenAI API
You need to get your OpenAI API key from https://platform.openai.com/api-keys.
Then set up your OpenAI API key as an environment variable
```bash
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
```
Then run `main.py` with a model provided by the OpenAI API
```bash
python main.py --llm_name gpt-3.5-turbo # use gpt-3.5-turbo for example
```
#### Use with Gemini API
You need to get your Gemini API key from https://ai.google.dev/gemini-api
```bash
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
```
Then run `main.py` with a model provided by the Gemini API
```bash
python main.py --llm_name gemini-1.5-flash # use gemini-1.5-flash for example
```
If you want to use **open-source** models from huggingface, there are three options:
* Use with ollama
* Use with native huggingface models
* Use with vllm
#### Use with ollama
You need to download ollama from https://ollama.com/.
Then start the ollama server, either from the ollama app or with the following command in the terminal:
```bash
ollama serve
```
To use models provided by ollama, you need to pull the available models from https://ollama.com/library
```bash
ollama pull llama3:8b # use llama3:8b for example
```
ollama supports CPU-only environments, so even without CUDA you can run AIOS with ollama models:
```bash
python main.py --llm_name ollama/llama3:8b --use_backend ollama # use ollama/llama3:8b for example
```
However, if you have a GPU environment, you can also pass GPU-related parameters to speed up inference:
```bash
python main.py --llm_name ollama/llama3:8b --use_backend ollama --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256
```
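The `--max_gpu_memory` flag takes a JSON string mapping GPU ids to memory caps. A sketch of how such a flag can be parsed (this argparse setup is illustrative, not AIOS's actual CLI code):

```python
import argparse
import json

parser = argparse.ArgumentParser()
# type=json.loads converts the raw string into a dict before it reaches args.
parser.add_argument("--max_gpu_memory", type=json.loads, default=None,
                    help='JSON mapping of GPU id to memory cap, e.g. \'{"0": "24GB"}\'')
parser.add_argument("--eval_device", default="cuda:0")

args = parser.parse_args(["--max_gpu_memory", '{"0": "24GB"}'])
print(args.max_gpu_memory)  # {'0': '24GB'}
```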
#### Use with native huggingface llm models
Some huggingface models require authentication. If you want to use all of
the models, create an authentication token at https://huggingface.co/settings/tokens
and set it as an environment variable using the following command
```bash
export HF_AUTH_TOKENS=<YOUR_TOKEN_ID>
```
Then you can run
```bash
python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256
```
By default, huggingface downloads models into the `~/.cache` directory.
To designate a different download directory, set it using the following command
```bash
export HF_HOME=<YOUR_HF_HOME>
```
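A sketch of how the cache location resolves (assuming the usual Hugging Face fallback of `~/.cache/huggingface` when `HF_HOME` is unset; the helper name is ours):

```python
import os

def hf_cache_dir() -> str:
    """Resolve the model cache directory: prefer HF_HOME if set,
    otherwise fall back to the default under ~/.cache."""
    return os.environ.get(
        "HF_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
    )

os.environ["HF_HOME"] = "/data/hf-cache"  # hypothetical override
print(hf_cache_dir())  # /data/hf-cache
```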
#### Use with vllm
If you want to speed up the inference of huggingface models, you can use vllm as the backend.
> [!NOTE]
>
> vllm currently only supports Linux and GPU-enabled environments. If you do not have such an environment, choose one of the other options.

Because vllm itself does not support passing designated GPU ids, you can either
set the environment variable
```bash
export CUDA_VISIBLE_DEVICES="0" # replace with your designated gpu ids
```
and then run the command
```bash
python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --use_backend vllm --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256
```
or pass `CUDA_VISIBLE_DEVICES` as a prefix
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --use_backend vllm --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256
```
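For intuition: `CUDA_VISIBLE_DEVICES` restricts which physical GPUs the process can see, and frameworks renumber the visible devices starting from 0. A small illustrative sketch (the helper is ours, not part of AIOS):

```python
import os

# Restrict this process to physical GPUs 2 and 3 (hypothetical ids).
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

def visible_gpu_ids() -> list:
    """Physical GPU ids this process is allowed to use (empty string = none)."""
    raw = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(x) for x in raw.split(",") if x.strip()]

print(visible_gpu_ids())  # [2, 3]
```

Inside the process, those two GPUs then appear as `cuda:0` and `cuda:1`.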
### Web Quickstart
#### Requirements
##### Python
- Supported versions: **Python 3.9 - 3.11**
##### Node
- Supported versions: **LTS** versions only
Run `launch.py` to start both the frontend and backend
```bash
python launch.py
```
This should open `https://localhost:3000` (if it doesn't, navigate there in your browser).
Interact with agents by using `@` to tag an agent.
### Supported Agent Frameworks
- [OpenAGI](https://github.com/agiresearch/openagi)
- [AutoGen](https://github.com/microsoft/autogen)
- [Open-Interpreter](https://github.com/OpenInterpreter/open-interpreter)
- [MetaGPT](https://github.com/geekan/MetaGPT?tab=readme-ov-file)
### Supported LLM Endpoints
- [OpenAI API](https://platform.openai.com/api-keys)
- [Gemini API](https://ai.google.dev/gemini-api)
- [ollama](https://ollama.com/)
- [vllm](https://docs.vllm.ai/en/stable/)
- [native huggingface models (locally)](https://huggingface.co/)
## 🖋️ References
```bibtex
@article{mei2024aios,
title={AIOS: LLM Agent Operating System},
author={Mei, Kai and Li, Zelong and Xu, Shuyuan and Ye, Ruosong and Ge, Yingqiang and Zhang, Yongfeng},
journal={arXiv:2403.16971},
year={2024}
}
@article{ge2023llm,
title={LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem},
author={Ge, Yingqiang and Ren, Yujie and Hua, Wenyue and Xu, Shuyuan and Tan, Juntao and Zhang, Yongfeng},
journal={arXiv:2312.03815},
year={2023}
}
```
## 🚀 Contributions
For how to contribute, see [CONTRIBUTE](https://github.com/agiresearch/AIOS/blob/main/CONTRIBUTE.md). If you would like to contribute to the codebase, [issues](https://github.com/agiresearch/AIOS/issues) or [pull requests](https://github.com/agiresearch/AIOS/pulls) are always welcome!
## 🌍 AIOS Contributors
[![AIOS contributors](https://contrib.rocks/image?repo=agiresearch/AIOS&max=300)](https://github.com/agiresearch/AIOS/graphs/contributors)
## 🤝 Discord Channel
If you would like to join the community, ask questions, chat with fellows, learn about or propose new features, and participate in future developments, join our [Discord Community](https://discord.gg/B2HFxEgTJX)!
## 📪 Contact
For issues related to AIOS development, we encourage submitting [issues](https://github.com/agiresearch/AIOS/issues), [pull requests](https://github.com/agiresearch/AIOS/pulls), or initiating discussions in AIOS [Discord Channel](https://discord.gg/B2HFxEgTJX). For other issues please feel free to contact AIOS Foundation ([contact@aios.foundation](mailto:contact@aios.foundation)).