| Field | Value |
|---|---|
| Name | g4f-xn |
| Version | 0.0.6 |
| Summary | library for easy access to GPT models based on g4f |
| home_page | None |
| upload_time | 2024-05-17 10:08:22 |
| author | Xandr0v |
| maintainer | None |
| docs_url | None |
| requires_python | None |
| license | None |
| keywords | python, g4f, gpt |
| requirements | No requirements were recorded. |
### g4f_xn
---
#### A library that allows you to use different GPT models!
It also includes a few extras:
- reading from and writing to files
- sound playback (because why not)
- other convenience functions

The project is not ready for public use; it is more for fun and convenient (in my opinion) access to GPT models.
#### code example:
```python
import g4f_xn as xn
# specify model
xn.settings('gpt-3.5-turbo')
prompt = '''
type numbers from 1 to 10
'''
result = xn.gpt(prompt)
print(result)
```
---
You can also pass a list to `xn.gpt` that represents the history of previous prompts and answers. It is used as context, and the new prompt and response are appended to it.
#### advanced example:
```python
import g4f_xn as xn
# specify model and provider
xn.settings('gpt-4', 'Bing')
p1 = '''
type numbers from 1 to 10
'''
p2 = '''
multiply each number by 2
'''
chat = []
r1 = xn.gpt(p1, chat)
r2 = xn.gpt(p2, chat)
print(r1) # 1, 2, 3, ...
print(r2) # 2, 4, 6, ...
```
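For intuition, the history mechanism described above can be sketched in plain Python. This is a hypothetical mock, not the real `g4f_xn` implementation: each call builds its context from the accumulated history plus the new prompt, then appends both the prompt and the reply to the list.

```python
# Hypothetical mock of the chat-history mechanism; not the real g4f_xn code.
def mock_gpt(prompt, chat=None):
    # Build the context the model would see: all prior turns plus the new prompt.
    context = ''.join(chat or []) + prompt
    response = f'<reply to {len(context)} chars of context>'  # stand-in for a model call
    if chat is not None:
        # Append the prompt and the response so later calls see them as context.
        chat.append(prompt)
        chat.append(response)
    return response

chat = []
mock_gpt('type numbers from 1 to 10\n', chat)
mock_gpt('multiply each number by 2\n', chat)
print(len(chat))  # 4 entries: two prompts and two responses
```

The key design point is that `chat` is mutated in place, so the same list can be threaded through any number of calls.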
---
#### gpt -> chat gpt
You can also save the chat variable to a text file using `xn.write` and `xn.read` <br>
With persistent history, it essentially behaves like ChatGPT.
#### example with local chat history:
```python
import g4f_xn as xn
xn.settings('gpt-3.5-turbo')
history = xn.read(r'path/to/file.txt')
chat = [history]
prompt = '''
type numbers from 1 to 10
'''
result = xn.gpt(prompt, chat)
print(result)
xn.write(''.join(chat), r'path/to/file.txt')
```
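The save/load round-trip that `xn.read` and `xn.write` presumably perform can be illustrated with plain file I/O. This is a sketch under that assumption, with hypothetical stand-in helpers, not the library's actual code:

```python
import os
import tempfile

# Hypothetical plain-Python stand-ins mirroring the xn.read / xn.write pattern above.
def write_history(text, path):
    with open(path, 'w', encoding='utf-8') as f:
        f.write(text)

def read_history(path):
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()

path = os.path.join(tempfile.mkdtemp(), 'chat.txt')
chat = ['user: type numbers from 1 to 10\n', 'assistant: 1, 2, 3, ...\n']

write_history(''.join(chat), path)   # save the whole conversation as one string
restored = [read_history(path)]      # reload it as a one-element history list
print(restored[0] == ''.join(chat))  # True
```

Reloading the file as a single-element list matches the `chat = [history]` pattern in the example above.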
---
Using this library from .ipynb notebooks is highly recommended, as the notebook workflow is a natural fit for iterating on prompts.
Raw data
{
"_id": null,
"home_page": null,
"name": "g4f-xn",
"maintainer": null,
"docs_url": null,
"requires_python": null,
"maintainer_email": null,
"keywords": "python, g4f, gpt",
"author": "Xandr0v",
"author_email": "<olegalexandrov468@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/51/39/1371d649719988e42dfd8b032b5451efe9e53098c204166fbaed13732cc2/g4f_xn-0.0.6.tar.gz",
"platform": null,
"description": "### g4f_xn\r\n\r\n---\r\n#### A library that allows you to use different GPT models!\r\n\r\nAlso has some other stuff like:\r\n- reading and writing to files\r\n- sound playback(i mean why not)\r\n- other functions for convenience\r\n\r\nThe project is not ready for public use, but more for a fun and convinient (in my opinion) use of gpt models \r\n\r\n#### code example:\r\n\r\n```python\r\nimport g4f_xn as xn\r\n\r\n# specify model\r\nxn.settings('gpt-3.5-turbo')\r\n\r\nprompt = '''\r\ntype numbers from 1 to 10\r\n'''\r\n\r\nresult = xn.gpt(prompt)\r\nprint(result)\r\n```\r\n\r\n---\r\nalso you can provide an array to xn.gpt that will represent history of previous prompts and answers. It will use it as context and append prompt with response.\r\n\r\n\r\n#### advanced example:\r\n\r\n```python\r\nimport g4f_xn as xn\r\n\r\n# specify model and provider\r\nxn.settings('gpt-4', 'Bing')\r\n\r\np1 = '''\r\ntype numbers from 1 to 10\r\n'''\r\np2 = '''\r\nmultiply each number by 2\r\n'''\r\n\r\nchat = []\r\n\r\nr1 = xn.gpt(p1, chat)\r\nr2 = xn.gpt(p2, chat)\r\nprint(r1) # 1, 2, 3, ...\r\nprint(r2) # 2, 4, 6, ...\r\n```\r\n---\r\n\r\n#### gpt -> chat gpt\r\nYou can also save chat variable to txt file using xn.write, xn.read <br>\r\nit will be basically chatgpt now\r\n\r\n#### example with local chat history:\r\n\r\n```python\r\nimport g4f_xn as xn\r\n\r\nxn.settings('gpt-3.5-turbo')\r\n\r\nhistory = xn.read(r'path/to/file.txt')\r\nchat = [history]\r\nprompt = '''\r\ntype numbers from 1 to 10\r\n'''\r\n\r\nresult = xn.gpt(prompt, chat)\r\nprint(result)\r\n\r\nxn.write(''.join(chat), r'path/to/file.txt')\r\n```\r\n---\r\nIt is highly recommended to use .ipynb files with this library as it is really flexible for that task\r\n\r\n\r\n",
"bugtrack_url": null,
"license": null,
"summary": "library for easy access to GPT models based on g4f",
"version": "0.0.6",
"project_urls": null,
"split_keywords": [
"python",
" g4f",
" gpt"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "64563aae281c1256413ebaae623f7b6190d05c9a3c02651e7516c1e4df468cce",
"md5": "69a8c04c207f89e6871165bd0c202775",
"sha256": "07e58b6598ef8c7b20e2bcbedbef95039bdc711e59170356099743994c3aae8b"
},
"downloads": -1,
"filename": "g4f_xn-0.0.6-py3-none-any.whl",
"has_sig": false,
"md5_digest": "69a8c04c207f89e6871165bd0c202775",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 621499,
"upload_time": "2024-05-17T10:08:20",
"upload_time_iso_8601": "2024-05-17T10:08:20.733963Z",
"url": "https://files.pythonhosted.org/packages/64/56/3aae281c1256413ebaae623f7b6190d05c9a3c02651e7516c1e4df468cce/g4f_xn-0.0.6-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "51391371d649719988e42dfd8b032b5451efe9e53098c204166fbaed13732cc2",
"md5": "65c2d4d58e4365ea553239da73e021f8",
"sha256": "ddfd003af81fb608a44fa23e06ca80e1b9f6414b0afa75eca2fb5ddecc470683"
},
"downloads": -1,
"filename": "g4f_xn-0.0.6.tar.gz",
"has_sig": false,
"md5_digest": "65c2d4d58e4365ea553239da73e021f8",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 623269,
"upload_time": "2024-05-17T10:08:22",
"upload_time_iso_8601": "2024-05-17T10:08:22.761492Z",
"url": "https://files.pythonhosted.org/packages/51/39/1371d649719988e42dfd8b032b5451efe9e53098c204166fbaed13732cc2/g4f_xn-0.0.6.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-05-17 10:08:22",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "g4f-xn"
}