Name | relyonai
Version | 0.1.2
Summary | Python Runtime Inline AI Calls
upload_time | 2023-06-10 23:19:02
home_page |
maintainer |
docs_url | None
author |
requires_python | >=3.8
license |
keywords | ai, gpt3, openai
VCS |
bugtrack_url |
requirements | No requirements were recorded.
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# relyonai
![](showcase.png)
Python Runtime **Inline** AI Calls
- 🤖 AI language extension.
- 📉 Reduces code size.
- 🐍 Can use Python runtime objects.
- 💿 Caches calls, so it's fast on re-running.
- ✅ Fixes mistakes internally.
- 🎁 Saves runtime context for reusing.
- ⭐️ Uses [Few-Shot](https://www.promptingguide.ai/techniques/fewshot) and [COT](https://www.promptingguide.ai/techniques/cot) under the hood.
> **Warning**
> See [Reflections](#reflections).
## Usage
```bash
pip install relyonai
export OPENAI_API_KEY=...
```
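If exporting the key in the shell isn't convenient, it can also be set from Python before the first `ai` call. This is a minimal sketch assuming relyonai (via the OpenAI client) reads `OPENAI_API_KEY` from the environment, as the export above implies:
```python
import os

# assumption: the key is picked up from the environment at call time
os.environ['OPENAI_API_KEY'] = 'sk-...'  # placeholder, not a real key
```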
```python
from relyonai import ai
f = ai('get print hello world func')
f()
```
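The returned object is an ordinary Python callable, so it composes with objects that already exist in the running interpreter, as the feature list claims. A small sketch, assuming only the string-prompt form shown above (the prompt wording and expected output are illustrative):
```python
from relyonai import ai

# the generated function is a plain callable, so it can be applied to
# runtime objects from the current interpreter session
numbers = [3, 1, 4, 1, 5, 9, 2, 6]
sort_desc = ai('get func that sorts a list of ints in descending order')
print(sort_desc(numbers))  # e.g. [9, 6, 5, 4, 3, 2, 1, 1]
```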
```python
from relyonai import config
config.model = 'gpt-4'
config.clear_cache()
```
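The cache mentioned in the feature list is what `config.clear_cache()` resets: the first run of a script pays the API round-trip, and later runs of the same prompt should return almost immediately. A rough way to observe this (cache location and keying are implementation details not documented above):
```python
import time
from relyonai import ai

start = time.perf_counter()
# slow on the first run of this script, near-instant on re-runs thanks to the cache
reverse = ai('get func that reverses a string')
print(f'ai() call took {time.perf_counter() - start:.2f}s')
print(reverse('relyonai'))
```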
See [tests](tests/ai/single) for more.
## Reflections
Although the idea is working to some extent, it remains largely impractical for the time being. It falls short in a few crucial areas:
- **Reliability** and **Consistency**: The generated code is not consistently correct, and minor prompt modifications often lead to significantly different output.
- **Controllability**: GPT-3.5, which was the primary model for the development of **0.1**, practically ignores system messages and often fails to follow few-shot examples, even for simple calls.
- **Privacy** and **Security**: These concerns could be largely mitigated by clever prompting and potentially by runtime sandboxing (for example, patching Python's `open` call; see the sketch below).
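As a rough illustration of the sandboxing idea above (not part of relyonai), patching the built-in `open` can at least confine generated code's file access to one directory:
```python
import builtins
import pathlib

_real_open = builtins.open
SANDBOX = pathlib.Path('/tmp/relyonai-sandbox').resolve()

def _sandboxed_open(file, mode='r', *args, **kwargs):
    # refuse any path that does not live inside the sandbox directory
    path = pathlib.Path(file).resolve()
    if path != SANDBOX and SANDBOX not in path.parents:
        raise PermissionError(f'blocked access outside sandbox: {path}')
    return _real_open(file, mode, *args, **kwargs)

builtins.open = _sandboxed_open  # every later open() call goes through the check
```
This does nothing about `os`, `subprocess`, or network access, so it is a mitigation rather than real isolation.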
GPT-4 is expected to be a major leap along all of these axes, with system messages that are actually honored and better Few-Shot/COT, but it needs to become publicly accessible, approximately 5x cheaper, and roughly 2x faster to be practical for this task.
## 0.2 Plans
> **Note**
> this is a rough draft
- [ ] improve quality
    - [ ] gpt-4: new system prompt and better cot
    - [ ] more tests + add reliability tests
    - [ ] include invocation context (`ai`, ±5 above/below exprs) — see the sketch after this list
    - [ ] include project info / files / pip / hardware / os / time / etc.
    - [ ] final result check
    - [ ] different prompt for different py versions
    - [ ] other runtimes: bash, remote, c++, browser, etc.
    - [ ] bootstrap the package itself (just for the fun of it)
- [ ] better introspection
    - [ ] `rich` dialogue printing (+spinner)
    - [ ] much better error handling
    - [ ] view source code/doc
    - [ ] more granular sessions (trees)
- [ ] advanced caching
    - [ ] redis
    - [ ] limit size (up to ~20mb)
    - [ ] save pre-compiled code
- [ ] async
    - [ ] aai
    - [ ] agpt
    - [ ] aemb
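For the "include invocation context" item above, one plausible shape (not relyonai's actual implementation) is to read the source lines around the call site with `inspect` and `linecache`:
```python
import inspect
import linecache

def invocation_context(radius=5):
    # sketch of the "±5 above/below exprs" idea: called from inside ai(),
    # grab the user's source lines surrounding the ai(...) call site
    caller = inspect.stack()[2]  # [0] = this helper, [1] = ai(), [2] = user code
    start = max(caller.lineno - radius, 1)
    lines = (linecache.getline(caller.filename, n)
             for n in range(start, caller.lineno + radius + 1))
    return ''.join(lines)
```
`linecache.getline` returns an empty string past the end of the file, so no upper bounds check is needed.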
## License
[MIT](LICENSE)
Raw data
{
"_id": null,
"home_page": "",
"name": "relyonai",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": "",
"keywords": "ai,gpt3,openai",
"author": "",
"author_email": "Stanislav Beliaev <its@stanb.me>",
"download_url": "https://files.pythonhosted.org/packages/8c/c2/ef973c9a8f9f9738c80d8bc829df2f91323b4092cb99596ee8def2d1a0ae/relyonai-0.1.2.tar.gz",
"platform": null,
"description": "# relyonai\n\n![](showcase.png)\n\nPython Runtime **Inline** AI Calls\n- \ud83e\udd16 AI language extension.\n- \ud83d\udcc9 Reduces code size.\n- \ud83d\udc0d Can use python runtime objects.\n- \ud83d\udcbf Caches calls, so it's fast on re-runing.\n- \u2705 Fixes mistakes internallly.\n- \ud83c\udf81 Saves runtime context for reusing.\n- \u2b50\ufe0f Uses [Few-Shot](https://www.promptingguide.ai/techniques/fewshot) and [COT](https://www.promptingguide.ai/techniques/cot) under the hood.\n\n> **Warning** \n> See [Reflections](#reflections).\n\n## Usage\n\n```bash\npip install relyonai\nexport OPENAI_API_KEY=...\n```\n\n```python\nfrom relyonai import ai\nf = ai('get print hello world func')\nf()\n```\n\n```python\nfrom relyonai import config\nconfig.model = 'gpt-4'\nconfig.clear_cache()\n```\n\nSee [tests](tests/ai/single) for more.\n\n## Reflections\n\nAlthough the idea is working to some extent, it remains largely impractical for the time being. It falls short in a few crucial areas:\n- **Reliability** and **Consistency**: The code that's being generated is not consistently accurate, with minor prompt modifications often leading to significant changes.\n- **Controllability**: GPT-3.5, which was the primary model for the development of **0.1**, practically ignores system messages and often fails to follow few-shot examples, even for simple calls.\n- **Privacy** and **Security**: This conserns could be largely mitigated by clever prompting and potentially runtime sandboxing (like patching Python's `open` call or something similar).\n\nGPT-4 is expected to be a major leap over all axes with usable clever system messages and better Few-Shot/COT, but it needs to become publicly accessible, approximately 5x cheaper and roughly 2x faster to be practical for this task.\n\n## 0.2 Plans\n\n> **Note** \n> this is a rough draft\n\n- [ ] improve quality\n - [ ] gpt-4: new system prompt and better cot\n - [ ] more tests + add reliability tests\n - [ ] include invocation context (`ai`, \u00b15 above/below exprs)\n - [ ] include project info / files / pip / hardware / os / time / etc.\n - [ ] final result check\n - [ ] different prompt for different py versions\n - [ ] other runtimes: bash, remote, c++, browser, etc.\n - [ ] bootstrap the package itlsef (just for the fun of it)\n- [ ] better introspection\n - [ ] `rich` dialogue printing (+spinner)\n - [ ] much better error handling\n - [ ] view source code/doc\n - [ ] more granular sessions (trees)\n- [ ] advanced caching\n - [ ] redis\n - [ ] limit size (up to ~20mb)\n - [ ] save pre-compiled code\n- [ ] async\n - [ ] aai\n - [ ] agpt\n - [ ] aemb\n\n## License\n\n[MIT](LICENSE)\n",
"bugtrack_url": null,
"license": "",
"summary": "Python Runtime Inline AI Calls",
"version": "0.1.2",
"project_urls": null,
"split_keywords": [
"ai",
"gpt3",
"openai"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "8748b9e2f2b7f80c8def98e340bebcd9a12059b26c233c11b28b254cdf84d451",
"md5": "9f28c766f67cea5a2d46e60b22213ca7",
"sha256": "6e3a3f2667167678c67ec34f3ef744db739094f6a6d9b6692f1a626089353fd0"
},
"downloads": -1,
"filename": "relyonai-0.1.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "9f28c766f67cea5a2d46e60b22213ca7",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 47804,
"upload_time": "2023-06-10T23:18:55",
"upload_time_iso_8601": "2023-06-10T23:18:55.687796Z",
"url": "https://files.pythonhosted.org/packages/87/48/b9e2f2b7f80c8def98e340bebcd9a12059b26c233c11b28b254cdf84d451/relyonai-0.1.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "8cc2ef973c9a8f9f9738c80d8bc829df2f91323b4092cb99596ee8def2d1a0ae",
"md5": "d0d60fe61e8a25eecff0935861a9a71c",
"sha256": "23d79a72f694376598688c51303a618634809327a9d66ef4fc377bfd770a3825"
},
"downloads": -1,
"filename": "relyonai-0.1.2.tar.gz",
"has_sig": false,
"md5_digest": "d0d60fe61e8a25eecff0935861a9a71c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 2127638,
"upload_time": "2023-06-10T23:19:02",
"upload_time_iso_8601": "2023-06-10T23:19:02.485746Z",
"url": "https://files.pythonhosted.org/packages/8c/c2/ef973c9a8f9f9738c80d8bc829df2f91323b4092cb99596ee8def2d1a0ae/relyonai-0.1.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-06-10 23:19:02",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "relyonai"
}