ai4free

Name: ai4free
Version: 0.7
Summary: collection of free AI providers
Author: OEvortex
License: HelpingAI Simplified Universal License
Upload time: 2024-05-20 10:13:17
Home page: none
Requires Python: not specified
Requirements: none recorded
            
<div align="center">
  <img src="https://img.shields.io/badge/Ai4Free-API-blue?style=for-the-badge&logo=huggingface" alt="Ai4Free API Badge">
  <h1>AI4Free - Unofficial Reverse-Engineered API 🚀</h1>
  <p>
    <a href="https://github.com/Devs-Do-Code/ai4free/stargazers">
      <img alt="GitHub stars" src="https://img.shields.io/github/stars/Devs-Do-Code/ai4free?style=social">
    </a>
    <a href="https://github.com/Devs-Do-Code/ai4free/network/members">
      <img alt="GitHub forks" src="https://img.shields.io/github/forks/Devs-Do-Code/ai4free?style=social">
    </a>
    <a href="https://github.com/Devs-Do-Code/ai4free/issues">
      <img alt="GitHub issues" src="https://img.shields.io/github/issues/Devs-Do-Code/ai4free?style=social">
    </a>
  </p>
</div>

<div align="center">
  <a href="https://youtube.com/@devsdocode"><img alt="YouTube" src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white"></a>
  <a href="https://t.me/devsdocode"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
  <a href="https://www.instagram.com/sree.shades_/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a>
  <a href="https://www.linkedin.com/in/developer-sreejan/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a>
  <a href="https://buymeacoffee.com/devsdocode"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a>
</div>


<div align="center">


[![Discord](https://img.shields.io/discord/1161736243340640419?label=Discord&logo=discord&logoColor=5865F2&style=flat-square&color=5865F2)](https://discord.gg/ehwfVtsAts)
[![Twitter Follow](https://img.shields.io/badge/follow-%40anand_sreejan-1DA1F2?style=flat-square&logo=x&logoColor=white)](https://twitter.com/anand_sreejan)

</div>

# AI4Free: A Python Library for Free Access to a Wide Range of Large Language Models

AI4Free is a Python library that provides convenient access to a variety of large language models (LLMs) from different providers, most without requiring any API keys or fees (a few providers, such as Cohere, REKA, and GROQ, need their own API keys). This lets developers and researchers experiment with different LLMs and explore their capabilities without the barrier of cost.

## Crafted with ❤️ by Devs Do Code (Sree)

> **Disclaimer:** This project is not officially associated with any of the API providers it wraps. It is an independent reverse-engineering effort to explore their publicly available APIs.


## Features
- **Multiple LLM Providers:** AI4Free supports a diverse range of LLM providers including:
  - **Open-source LLMs:** KoboldAI, LEO (Brave AI)
  - **Free-tier access LLMs:** YouChat, OpenGPT, Yep
  - **Research/Demo access LLMs:** Phind, Blackbox
- **Conversation Management:** The library maintains conversation history with the LLMs, enabling more natural, context-aware interactions (see the sketch after this list).
- **Prompt Optimization:** AI4Free includes built-in prompt optimization to improve the quality and relevance of generated responses.
- **Streaming Support:** Responses can be streamed in real time, allowing immediate feedback and dynamic interactions.
- **Asynchronous Capabilities:** Async versions of several providers are available for efficiently handling many requests.
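
As a concrete illustration of the conversation-management and streaming points above, here is a minimal sketch. The multi-turn part relies only on reusing a single provider instance (as the Blackbox example below does with `is_conversation=True`); the streaming call is left commented out because the exact flag is an assumption (`stream=True` on `chat()`) and should be checked against the provider class you use.

```python
from ai4free import BLACKBOXAI

# One instance keeps the conversation history, so follow-up prompts
# can refer back to earlier turns.
ai = BLACKBOXAI(is_conversation=True)

print(ai.chat("My favourite language is Python. Remember that."))
print(ai.chat("Which language did I say I prefer?"))  # answered from the stored history

# Streaming (assumption: a stream flag on chat() that yields text chunks;
# verify the parameter name for the provider you use):
# for chunk in ai.chat("Tell me a short story", stream=True):
#     print(chunk, end="", flush=True)
```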

## Installation
```bash
pip install -U ai4free
```

## Usage
The basic usage pattern is to create an instance of the desired LLM provider and then call its `chat()` method to interact with the model. Some providers also expose a lower-level `ask()` method that returns a raw response, paired with `get_message()` to extract the text, as the YouChat, VLM, and DeepInfra examples below show.

## Example Usage of Available Providers (Synchronous)

Here's how to use each of the available providers in AI4Free without asynchronous functions:

## LEO
```python
from ai4free import LEO

leo = LEO()

while True:
    prompt = input("You: ")
    response = leo.chat(prompt)
    print(f"LEO: {response}")
```


## KoboldAI
```python
from ai4free import KOBOLDAI

koboldai = KOBOLDAI()

while True:
    prompt = input("You: ")
    response = koboldai.chat(prompt)
    print(f"KoboldAI: {response}")
```


## Blackbox
```python
from ai4free import BLACKBOXAI

ai = BLACKBOXAI(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model=None # You can specify a model if needed
)

# Start an infinite loop for continuous interaction
while True:
    # Define a prompt to send to the AI
    prompt = input("Enter your prompt: ")
    
    # Check if the user wants to exit the loop
    if prompt.lower() == "exit":
        break
    
    # Use the 'chat' method to send the prompt and receive a response
    r = ai.chat(prompt)
    print(r)
```

## ThinkAnyAI
```python
from ai4free import ThinkAnyAI

thinkany = ThinkAnyAI()

while True:
    prompt = input("Enter your prompt: ")
    response = thinkany.chat(prompt)
    print(response)
```


## Phind
```python
from ai4free import PhindSearch

# Create an instance of the PhindSearch class
ph = PhindSearch()

# Define a prompt to send to the AI
prompt = "Write an essay on Phind"

response = ph.chat(prompt)
print(response)
```


## Yep
```python
from ai4free import YEPCHAT

# Instantiate the YEPCHAT class with default parameters
yep = YEPCHAT()

# Define a prompt to send to the AI
prompt = "What is the capital of France?"

# Use the 'chat' method to get a response from the AI
r = yep.chat(prompt)
print(r)
```


## YouChat
```python
from ai4free import YouChat

ai = YouChat(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "What is the meaning of life?"

response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```

## Cohere
```python
from ai4free import Cohere

# Replace 'YOUR_API_KEY' with your Cohere API key
cohere = Cohere(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = cohere.chat(prompt)
    print(f"Cohere: {response}")
```
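
Hard-coding API keys in source files makes them easy to leak. A common alternative (plain Python, nothing AI4Free-specific) is to read the key from an environment variable; the same pattern applies to the REKA and GROQ examples below. The variable name used here is just a convention.

```python
import os

from ai4free import Cohere

# Expects the key to have been exported beforehand, e.g.:
#   export COHERE_API_KEY="..."
api_key = os.environ.get("COHERE_API_KEY")
if not api_key:
    raise RuntimeError("Set the COHERE_API_KEY environment variable first.")

cohere = Cohere(api_key=api_key)
print(cohere.chat("In one sentence, why is hard-coding secrets risky?"))
```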


## REKA
```python
from ai4free import REKA

# Replace 'YOUR_API_KEY' with your REKA API key
reka = REKA(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = reka.chat(prompt)
    print(f"REKA: {response}")
```


## GROQ
```python
from ai4free import GROQ

# Replace 'YOUR_API_KEY' with your GROQ API key
groq = GROQ(api_key='YOUR_API_KEY')

while True:
    prompt = input("You: ")
    response = groq.chat(prompt)
    print(f"GROQ: {response}")
```
## VLM
```python
from ai4free import VLM


# Initialize the VLM class
vlm = VLM(model="llava-hf/llava-1.5-7b-hf", system_prompt="You are a helpful and informative AI assistant.")

# Path to the image and the user message
image_path = r"C:\Users\hp\Desktop\ai4free\WhatsApp Image 2024-05-19 at 19.01.01_47251a0f.jpg"
user_message = "What is shown in this image?"

# Encode the image to base64
image_base64 = vlm.encode_image_to_base64(image_path)

# Define the prompt with both image and text
prompt = {
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_base64}"}},
        {"type": "text", "text": user_message}
    ]
}

# Get the response
response = vlm.ask(prompt)

# Extract and print the message from the response
message = vlm.get_message(response)
print(message)
```
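
For reference, the standard-library snippet below shows how a base64 data URL like the one above can be built by hand. It is only an assumption about what `encode_image_to_base64()` produces, not the helper's actual implementation; prefer the helper in real use.

```python
import base64
from pathlib import Path


def image_to_data_url(path: str) -> str:
    # Return a data: URL containing the base64-encoded bytes of a JPEG file.
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:image/jpeg;base64,{encoded}"


# Hypothetical usage with a local file:
# print(image_to_data_url("photo.jpg")[:60], "...")
```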
## Deepinfra
```python
from ai4free import DeepInfra

ai = DeepInfra(
    model="meta-llama/Meta-Llama-3-70B-Instruct", # DeepInfra models
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "What is the meaning of life?"

response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```
## Available Providers
- **Cohere:** Provides access to text generation models, including "command-r-plus", with capabilities like summarization, copywriting, and dialogue.
- **REKA:** Offers models such as "reka-core", "reka-flash", and "reka-edge" for question answering, text generation, and summarization.
- **GROQ:** Grants access to models such as "mixtral-8x7b-32768" for text generation, translation, and question answering.
- **LEO:** Provides access to "llama-2-13b-chat" for dialogue, text generation, and question answering.
- **KoboldAI:** Offers various open-source models for text generation and creative writing.
- **OpenAI:** Enables interaction with OpenAI models such as "gpt-3.5-turbo" for text generation, translation, and code generation. Requires an API key.
- **OpenGPT:** Provides access to various models for text generation and creative writing.
- **Blackbox:** Grants access to powerful LLMs for text generation, translation, and question answering.
- **Phind:** Offers access to advanced LLMs with research and demo capabilities for text generation, code generation, and question answering.
- **Yep:** Provides access to models such as "Mixtral-8x7B-Instruct-v0.1" for text generation, translation, and question answering.
- **YouChat:** Offers free-tier access to a powerful LLM for dialogue, text generation, and question answering.
- **ThinkAnyAI:** Offers access to models such as "claude-3-haiku", "llama-3-8b-instruct", "mistral-7b-instruct", "rwkv-v6", "gemini-pro", and "gpt-3.5-turbo" for text generation, question answering, and creative writing.

## Conclusion
AI4Free opens up exciting possibilities for exploring and using large language models at little or no cost. With its easy-to-use interface and support for diverse LLM providers, the library is a valuable tool for developers, researchers, and anyone interested in exploring the cutting edge of AI language technology.


<div align="center">
  <a href="https://youtube.com/@devsdocode"><img alt="YouTube" src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white"></a>
  <a href="https://t.me/devsdocode"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
  <a href="https://www.instagram.com/sree.shades_/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a>
  <a href="https://www.linkedin.com/in/developer-sreejan/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a>
  <a href="https://buymeacoffee.com/devsdocode"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a>
</div>

<div align="center">


[![Discord](https://img.shields.io/discord/1161736243340640419?label=Discord&logo=discord&logoColor=5865F2&style=flat-square&color=5865F2)](https://discord.gg/ehwfVtsAts)
[![Twitter Follow](https://img.shields.io/badge/follow-%40anand_sreejan-1DA1F2?style=flat-square&logo=x&logoColor=white)](https://twitter.com/anand_sreejan)

</div>

            
