# generativellm
`generativellm` is a lightweight wrapper around the Google Gemini REST API that uses plain HTTP requests rather than an SDK. It lets you create chat sessions and generate responses.
## Installation
```bash
pip install generativellm
```
## Usage
```python
from generativellm import AIChat

chatbot = AIChat(token="your-gemini-api-key", model="gemini-pro")

# Chat history as a flat list of messages, presumably alternating
# user / model turns:
conversation = [
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
]

response = chatbot.get_response(conversation)
print(response)
```
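Since the package talks to Gemini over plain HTTP, the equivalent raw call can be sketched with `requests` against Google's public `generateContent` endpoint. The payload shape below follows Google's documented REST API; `build_payload` is a hypothetical helper for illustration and is not part of this package, whose internals may differ:

```python
import json

def build_payload(conversation):
    """Convert a flat message list into the Gemini REST "contents" shape.

    Assumes messages alternate user / model turns, starting with the user.
    """
    roles = ["user", "model"]
    return {
        "contents": [
            {"role": roles[i % 2], "parts": [{"text": text}]}
            for i, text in enumerate(conversation)
        ]
    }

payload = build_payload([
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
])
print(json.dumps(payload, indent=2))

# Sending it is then a single POST (requires a valid API key):
# import requests
# url = ("https://generativelanguage.googleapis.com/v1beta/"
#        "models/gemini-pro:generateContent")
# resp = requests.post(url, params={"key": "your-gemini-api-key"}, json=payload)
# print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```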
## Raw data

```json
{
  "_id": null,
  "home_page": "https://github.com/arfjdms1/generativellm",
  "name": "generativellm",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.7",
  "maintainer_email": null,
  "keywords": "gemini, ai, chatbot, google, rest, http, wrapper",
  "author": "Your Name",
  "author_email": "aaravkhemka9@gmail.com",
  "download_url": "https://files.pythonhosted.org/packages/46/54/1f8eca9d2d4196a642fdd012a01413b06e7dcba76ae7edd8896851f72f29/generativellm-0.1.2.tar.gz",
  "platform": null,
  "description": "# generativellm\n\n`generativellm` is a simple wrapper around the Gemini REST API using pure HTTP requests. It lets you create chat sessions and generate responses.\n\n## Installation\n\n```bash\npip install generativellm\n```\n\n## Usage\n\n```python\nfrom generativellm import AIChat\n\nchatbot = AIChat(token=\"your-gemini-api-key\", model=\"gemini-pro\")\n\nconversation = [\n \"Hello!\",\n \"Hi there! How can I help?\",\n \"Can you summarize general relativity?\",\n]\n\nresponse = chatbot.get_response(conversation)\nprint(response)\n```\n",
  "bugtrack_url": null,
  "license": "MIT",
  "summary": "A lightweight Gemini chat wrapper using HTTP requests",
  "version": "0.1.2",
  "project_urls": {
    "Homepage": "https://github.com/arfjdms1/generativellm"
  },
  "split_keywords": [
    "gemini",
    " ai",
    " chatbot",
    " google",
    " rest",
    " http",
    " wrapper"
  ],
  "urls": [
    {
      "comment_text": null,
      "digests": {
        "blake2b_256": "84e4cbcf08de08f75b8aa270efd870de2de371f4896bb4194779c18798aca66c",
        "md5": "e4ed499986b46c533d395169a68e5659",
        "sha256": "ec6708128a3f11b9409474bbebac05dda2673d996a9c6383e7262e74caf1ef7b"
      },
      "downloads": -1,
      "filename": "generativellm-0.1.2-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "e4ed499986b46c533d395169a68e5659",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": ">=3.7",
      "size": 2647,
      "upload_time": "2025-07-23T18:09:49",
      "upload_time_iso_8601": "2025-07-23T18:09:49.305364Z",
      "url": "https://files.pythonhosted.org/packages/84/e4/cbcf08de08f75b8aa270efd870de2de371f4896bb4194779c18798aca66c/generativellm-0.1.2-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": null,
      "digests": {
        "blake2b_256": "46541f8eca9d2d4196a642fdd012a01413b06e7dcba76ae7edd8896851f72f29",
        "md5": "7c4c71d40a1cbcfaf1f1ad60f587034c",
        "sha256": "f718e74324e3d1d431649bba202e0e7c6f980ed8d898e9bab8e3be487d710e8b"
      },
      "downloads": -1,
      "filename": "generativellm-0.1.2.tar.gz",
      "has_sig": false,
      "md5_digest": "7c4c71d40a1cbcfaf1f1ad60f587034c",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": ">=3.7",
      "size": 2397,
      "upload_time": "2025-07-23T18:09:50",
      "upload_time_iso_8601": "2025-07-23T18:09:50.098469Z",
      "url": "https://files.pythonhosted.org/packages/46/54/1f8eca9d2d4196a642fdd012a01413b06e7dcba76ae7edd8896851f72f29/generativellm-0.1.2.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2025-07-23 18:09:50",
  "github": true,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "github_user": "arfjdms1",
  "github_project": "generativellm",
  "github_not_found": true,
  "lcname": "generativellm"
}
```