# gemini_ai
`gemini_ai` is a Python package that provides an interface to interact with Google’s Gemini AI model. This package enables advanced configuration of the generative model, file handling, media uploading, and response streaming, primarily optimized for Google Colab and Jupyter Notebook environments.
## Features
- **Flexible Model Configuration**: Customize model settings like temperature, top-p, and top-k sampling for generation.
- **File Upload and Media Support**: Supports uploading various file types (image, audio, text) to Google Colab and the Gemini API.
- **Chat and Response Management**: Easily manage chat sessions with token counting, response streaming, and history display.
- **Environment-Specific Optimizations**: Automatic detection of Google Colab and Jupyter Notebook environments for optimized performance.
## Installation
To install `gemini_ai` and its dependencies, use:
```bash
pip install gemini_ai
```
This will install the required packages, including `google-generativeai`, `pillow`, `ipywidgets`, and `ipython`.
## Usage
### 1. Initializing GeminiAI
First, you need a Google Gemini API key. Set up your account and get an API key from [Google Cloud](https://cloud.google.com/).
```python
from gemini.gemini import GeminiAI
# Initialize GeminiAI with your API key
gemini = GeminiAI(api_key="YOUR_API_KEY")
```
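Rather than hardcoding the key in a notebook, you can read it from an environment variable. A minimal sketch, assuming the (hypothetical) convention of storing it under `GEMINI_API_KEY`:

```python
import os

def load_api_key(var_name: str = "GEMINI_API_KEY") -> str:
    """Read the Gemini API key from an environment variable so it is
    never committed to a notebook or repository."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first.")
    return key

# gemini = GeminiAI(api_key=load_api_key())
```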
### 2. Configuring the Model
You can configure the model with parameters such as `temperature`, `top_p`, `top_k`, and `max_output_tokens` for tailored response generation.
```python
gemini.config(temp=0.7, top_p=0.9, top_k=50, max_output_tokens=1024)
```
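To build intuition for what `top_k` and `top_p` control, here is a toy illustration of both filters on a made-up token distribution (this is not the package's sampling code, just the general idea):

```python
# Toy next-token distribution. The model samples only from the tokens
# that survive the top-k / top-p cuts below.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "newt": 0.05}

def top_k_filter(probs: dict, k: int) -> dict:
    # Keep only the k most probable tokens.
    return dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k])

def top_p_filter(probs: dict, p: float) -> dict:
    # Keep the smallest set of top tokens whose cumulative probability >= p.
    kept, total = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = pr
        total += pr
        if total >= p:
            break
    return kept

print(top_k_filter(probs, 2))    # {'cat': 0.5, 'dog': 0.3}
print(top_p_filter(probs, 0.9))  # {'cat': 0.5, 'dog': 0.3, 'fish': 0.15}
```

Lower `temperature` sharpens the distribution before these cuts; higher values flatten it, making the tail tokens more likely.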
### 3. Starting a Chat Session
To start a chat session with the AI model, provide an initial instruction. If you’re working in Google Colab, you can also upload files as part of the chat context.
```python
gemini.start_chat(instruction="Tell me about the latest in AI technology.")
```
### 4. Sending Messages and Generating Content
Once a session is started, you can send prompts to the AI model and retrieve responses. The `send_message` function is useful for quick interactions, while `generate` can be used for more complex responses with optional streaming.
```python
# Send a simple message
gemini.send_message("What are the recent advancements in AI?")
# Generate a more elaborate response with optional streaming
gemini.generate(prompt="Can you write a story about space exploration?", stream=True)
```
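The `generate` method takes a `chunk_size` parameter (default 80). As a rough sketch of what chunked display looks like, independent of the package internals:

```python
def iter_chunks(text: str, chunk_size: int = 80):
    """Yield successive fixed-size slices of a response string —
    an illustration of chunked streaming output, not the package's code."""
    for start in range(0, len(text), chunk_size):
        yield text[start:start + chunk_size]

response = "A long model response..." * 10
for chunk in iter_chunks(response, chunk_size=80):
    print(chunk)
```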
### 5. Handling File Uploads (Google Colab Only)
In Google Colab, you can upload files directly to the Colab environment or to the Gemini API.
#### Uploading to Colab
```python
file_path = gemini.upload() # Uploads a file in Colab and returns the file path
```
#### Uploading to Gemini API
```python
file_uri = gemini.upload_to_gemini(path=file_path, mime_type="text/plain")
print(f"File URI: {file_uri}")
```
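If you don't want to hardcode the `mime_type`, the standard-library `mimetypes` module can guess it from the filename before the upload call. A small helper, shown here as an illustrative sketch:

```python
import mimetypes

def guess_mime(path: str, default: str = "text/plain") -> str:
    """Guess a MIME type from the file extension, falling back to a
    default when the extension is unknown."""
    mime, _ = mimetypes.guess_type(path)
    return mime or default

print(guess_mime("notes.txt"))   # text/plain
print(guess_mime("photo.png"))   # image/png
```

Usage: `gemini.upload_to_gemini(path=file_path, mime_type=guess_mime(file_path))`.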
### 6. Managing Chat History and Token Counts
You can display the chat history or count tokens in the chat session to manage usage effectively.
```python
# Display chat history
gemini.history()
# Count tokens in the chat history
gemini._token_counts()
```
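`_token_counts()` asks the API for exact counts. For a quick offline ballpark before sending a long prompt, a common heuristic is roughly four characters per token — an approximation only, not the model's real tokenizer:

```python
def rough_token_estimate(text: str) -> int:
    """Crude offline estimate (~4 characters per token). Use
    _token_counts() when you need the API's exact figure."""
    return max(1, len(text) // 4)
```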
## Environment-Specific Features
`GeminiAI` optimizes certain features based on the runtime environment. Here are some environment-specific details:
- **Google Colab**: Supports file uploads directly to Colab and uses `google.colab` utilities.
- **Jupyter Notebook**: Limits file upload functionality, skipping Colab-specific features gracefully.
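Runtime detection along these lines is commonly done by probing for Colab's module and IPython's shell class. A sketch of the general technique (not necessarily the package's exact implementation):

```python
def detect_environment() -> str:
    """Return 'colab', 'jupyter', or 'other' based on runtime probes."""
    try:
        import google.colab  # noqa: F401 -- only importable inside Colab
        return "colab"
    except ImportError:
        pass
    try:
        # get_ipython() is injected into the namespace by IPython kernels;
        # Jupyter notebooks run a ZMQInteractiveShell.
        if get_ipython().__class__.__name__ == "ZMQInteractiveShell":
            return "jupyter"
    except NameError:
        pass
    return "other"
```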
## Class and Method Overview
### Class: `GeminiAI`
#### `__init__(api_key: str, gemini_model: str = 'gemini-1.5-flash-latest')`
Initializes the `GeminiAI` object with an API key and model name.
- **api_key** (str): Your API key for Gemini AI.
- **gemini_model** (str): Specifies the model version. Default is `'gemini-1.5-flash-latest'`.
#### `config(temp: Optional[int] = 1, top_p: Optional[float] = 0.95, top_k: Optional[int] = 64, max_output_tokens: Optional[int] = 8192, response_mime_type: str = "text/plain", stream: bool = True, silent: bool = True)`
Configures the model settings with adjustable parameters.
#### `start_chat(instruction: str, file_path: Optional[str] = None, meme_type: Optional[str] = "text/plain")`
Starts a new chat session with the AI, with optional file input.
#### `send_message(prompt: str, stream: bool = False)`
Sends a text prompt to the AI and retrieves a response, with optional streaming.
#### `generate(prompt: str, stream: bool = True, chunk_size: int = 80)`
Generates content from a prompt, with support for chunked streaming.
#### `upload() -> str`
Uploads a file in Google Colab and returns the file path. Raises an error if not in Colab.
#### `upload_to_gemini(path, mime_type=None)`
Uploads the specified file directly to the Gemini API.
#### `history()`
Displays the chat session history.
#### `_token_counts()`
Counts tokens in the entire chat session history for API usage management.
## MIME Types Supported
This package supports various MIME types for file uploads:
- **Image**: `image/jpeg`, `image/png`, `image/gif`, `image/webp`, `image/heic`, `image/heif`
- **Audio**: `audio/wav`, `audio/mp3`, `audio/aiff`, `audio/aac`, `audio/ogg`, `audio/flac`
- **Text**: `text/plain`, `text/html`, `text/css`, `text/javascript`, `application/json`, `text/markdown`
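The lists above can be collected into a set for a quick pre-upload check. An illustrative helper (not part of the package API):

```python
# Supported MIME types, taken from the lists documented above.
SUPPORTED_MIME_TYPES = {
    "image/jpeg", "image/png", "image/gif", "image/webp",
    "image/heic", "image/heif",
    "audio/wav", "audio/mp3", "audio/aiff", "audio/aac",
    "audio/ogg", "audio/flac",
    "text/plain", "text/html", "text/css", "text/javascript",
    "application/json", "text/markdown",
}

def is_supported(mime_type: str) -> bool:
    """Check a MIME type against the documented support list."""
    return mime_type in SUPPORTED_MIME_TYPES

print(is_supported("image/png"))  # True
print(is_supported("video/mp4"))  # False
```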
## Running Tests
To test the `gemini_ai` package, use `pytest` with coverage:
```bash
python3.10 -m pytest --cov=gemini --cov-report=term-missing
```
## Example Code
Here’s a complete example demonstrating the initialization, configuration, chat session setup, and file upload:
```python
from gemini.gemini import GeminiAI
# Initialize the AI with your API key
gemini = GeminiAI(api_key="YOUR_API_KEY")
# Configure model settings
gemini.config(temp=0.7, top_p=0.9)
# Start a chat session
gemini.start_chat(instruction="Tell me about recent advancements in AI")
# Send a prompt and generate a response
gemini.send_message("What's the future of AI?")
gemini.generate("Can you explain the role of AI in healthcare?")
# Display chat history
gemini.history()
# Upload a file to Gemini (Colab only)
file_uri = gemini.upload_to_gemini("/path/to/your/file.txt")
print(f"File uploaded to Gemini with URI: {file_uri}")
```
## Contribution
Contributions to `gemini_ai` are welcome! Please feel free to open issues or submit pull requests on the [GitHub repository](https://github.com/vnstock-hq/gemini_ai).
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.