VisionAPI


Name: VisionAPI
Version: 0.1.4
Home page: https://github.com/josebenitezg/VisionAPI
Summary: VisionAPI - a Python library for GPT-Based Vision Models inference
Upload time: 2024-01-03 17:15:06
Author: Jose Benitez
Requires Python: >=3.8,<3.12.0
License: MIT
Requirements: openai, opencv-python, numpy, gradio
# VisionAPI 👓✨ - AI Vision & Language Processing

### Welcome to the Future of AI Vision 🌟

Hello and welcome to VisionAPI, where cutting-edge GPT-based models meet simplicity in a sleek API interface. Our mission is to harness the power of AI for images, video, and audio so you can build apps faster than ever.

### 🚀 Getting Started

#### Prerequisites

Make sure you have a supported version of Python installed (the package requires Python >=3.8 and <3.12) and you're ready to dive into the world of AI.

#### 📦 Installation

To install VisionAPI, simply run the following command in your terminal:

```bash
pip install visionapi
```
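
If you want to confirm that the install picked up the expected release, a quick check with the standard library works; this snippet is just a convenience and not part of VisionAPI itself:

```python
from importlib.metadata import version

# Print the installed release of the VisionAPI distribution (0.1.4 at the time of writing).
print(version("VisionAPI"))
```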
##### 🔑 Authentication
Before you begin, set your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY='your-api-key-here'
```
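
If you prefer to leave your shell configuration untouched, you can also set the variable from Python before creating the inference engine. This is just the standard `os.environ` approach, not a VisionAPI-specific feature:

```python
import os

# Set the key for the current process only; replace the placeholder with your real key
# and avoid committing it to source control.
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
```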
#### 🔩 Usage
##### 🖼️ Image Inference
Empower your applications to understand and describe images with precision.

```python
import visionapi

# Initialize the Inference Engine
inference = visionapi.Inference()

# Provide an image URL or a local path
image = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

# Set your descriptive prompt
prompt = "What is this image about?"

# Get the AI's perspective
response = inference.image(image, prompt)

# Revel in the AI-generated description
print(response.message.content)


```
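
The comment above notes that a local path is accepted as well as a URL. A minimal sketch of that variant, with a hypothetical file path you would swap for your own, might look like this:

```python
import visionapi

# Same engine as before
inference = visionapi.Inference()

# Hypothetical local file; replace with a real image on your machine
local_image = "photos/boardwalk.jpg"

response = inference.image(local_image, "List the main objects in this photo.")
print(response.message.content)
```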
##### 🎥 Video Inference
Narrate the stories unfolding in your videos with our AI-driven descriptions.

```python
import visionapi

# Gear up the Inference Engine
inference = visionapi.Inference()

# Craft a captivating prompt
prompt = "Summarize the key moments in this video."

# Point to your video file
video = "path/to/video.mp4"

# Let the AI weave the narrative
response = inference.video(video, prompt)

# Display the narrative
print(response.message.content)

```

##### 🎨 Image Generation
Watch your words paint pictures with our intuitive image generation capabilities.

```python
import visionapi

# Activate the Inference Engine
inference = visionapi.Inference()

# Describe your vision
prompt = "A tranquil lake at sunset with mountains in the background."

# Bring your vision to life
image_urls = inference.generate_image(prompt, save=True)  # Set `save=True` to store locally

# Behold the AI-crafted imagery
print(image_urls)
```
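
With `save=True` the images are already stored locally, but if you only want to work with the returned URLs, one way to fetch them yourself (assuming `image_urls` is a list of plain URL strings, which the README does not spell out) is:

```python
import urllib.request
from pathlib import Path

out_dir = Path("generated")
out_dir.mkdir(exist_ok=True)

# Download each generated image; assumes image_urls is an iterable of URL strings.
for i, url in enumerate(image_urls):
    urllib.request.urlretrieve(url, str(out_dir / f"image_{i}.png"))
```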

##### 🗣️ TTS (Text to Speech)
Transform your text into natural-sounding speech with just a few lines of code.

```python
import visionapi

# Power up the Inference Engine
inference = visionapi.Inference()

# Specify where to save the audio
save_path = "output/speech.mp3"

# Type out what you need to vocalize
text = "Hey, ready to explore AI-powered speech synthesis?"

# Make the AI speak
inference.TTS(text, save_path)
```
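
The README does not say whether `TTS` creates missing directories for the output path, so a defensive sketch that makes sure `output/` exists first could look like this:

```python
from pathlib import Path
import visionapi

inference = visionapi.Inference()

save_path = Path("output/speech.mp3")
# Create the target directory ourselves rather than relying on TTS() to do it.
save_path.parent.mkdir(parents=True, exist_ok=True)

inference.TTS("Hey, ready to explore AI-powered speech synthesis?", str(save_path))
```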

##### 🎧 STT (Speech to Text)
Convert audio into text with unparalleled clarity, opening up a world of possibilities.

```python
import visionapi

# Initialize the Inference Engine
inference = visionapi.Inference()

# Convert spoken words to written text
text = inference.STT('path/to/audio.mp3')

# Marvel at the transcription
print(text)
```
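
Because both methods live on the same `Inference` object, chaining them is straightforward. Here is a small, hypothetical round trip that transcribes a clip and then reads the transcript back out loud (the file paths are placeholders):

```python
import visionapi

inference = visionapi.Inference()

# Transcribe an existing recording (placeholder path).
transcript = inference.STT('path/to/audio.mp3')
print(transcript)

# Synthesize the transcript back to speech (placeholder output path).
inference.TTS(transcript, "output/transcript_readback.mp3")
```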

## 🌐 Contribute
Want to add something cool? Here's how:

- Fork the repository.
- Extend the capabilities by integrating more models.
- Enhance existing features or add new ones.
- Submit a pull request with your improvements.

Your contributions are what make VisionAPI not just a tool, but a community.


            
