# LangChain G4F - Enhanced GPT-4 Free Integration
A comprehensive Python package that integrates G4F (GPT4Free) with LangChain, providing free access to a wide range of AI models with chat, image generation, vision, and media processing capabilities.
## 📑 Table of Contents
- [Features](#-features)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Use Cases](#-use-cases)
- [Authentication](#-authentication)
- [Provider Categories](#-provider-categories)
- [Testing](#-testing)
- [Examples](#-examples)
- [API Reference](#%EF%B8%8F-api-reference)
- [Advanced Usage](#-advanced-usage)
- [Contributing](#-contributing)
- [License](#-license)
- [Support](#-support)
## 🚀 Features
### Core Chat Functionality
- **Multi-Model Support**: Access to GPT-4, GPT-3.5, Claude, Gemini, and more
- **Provider Management**: 80+ providers with authentication categorization
- **Vision Support**: Chat with images using vision-capable models
- **Streaming**: Real-time response streaming
- **Async Support**: Full async/await compatibility
### Image Generation
- **Multiple Models**: Flux, DALL-E, Stable Diffusion, and more
- **Batch Processing**: Generate multiple images asynchronously
- **Format Support**: URL, base64, PIL image objects
- **Style Control**: Size, style, and quality parameters
### Media Processing
- **Audio Generation**: Text-to-speech with multiple voices
- **Audio Transcription**: Speech-to-text conversion
- **Video Generation**: Text-to-video capabilities
- **File Processing**: Upload and analyze various file types
### Provider Intelligence
- **Authentication Categorization**: API key, cookies, HAR files, no-auth
- **Capability Detection**: Text, images, vision, audio support
- **Working Status**: Real-time provider availability
- **Smart Recommendations**: Best providers for specific use cases
## 📦 Installation
```bash
# Install from a local checkout (editable mode)
pip install -e .
# Or install the package and its dependencies from PyPI
pip install g4f langchain-core pillow aiohttp requests langchain-g4f-chat
```
## 🔧 Quick Start
### Basic Chat
```python
from langchain_g4f import ChatG4F
import g4f.Provider as Provider
# Initialize chat model
llm = ChatG4F(
model="gpt-4o-mini",
provider=Provider.Blackbox, # No auth required
temperature=0.7
)
# Chat
messages = [
("system", "You are a helpful assistant."),
("human", "What is machine learning?")
]
response = llm.invoke(messages)
print(response.content)
```
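### Streaming and Async
The feature list also mentions streaming and full async support. Below is a minimal sketch using the `stream` and `ainvoke` methods documented in the API Reference, assuming chunks expose a `content` attribute as LangChain message chunks do:
```python
import asyncio
from langchain_g4f import ChatG4F
import g4f.Provider as Provider

llm = ChatG4F(model="gpt-4o-mini", provider=Provider.Blackbox)

# Stream tokens as they are produced
for chunk in llm.stream([("human", "Explain overfitting in one paragraph.")]):
    print(chunk.content, end="", flush=True)

# Await the same call in async code
async def ask():
    return await llm.ainvoke([("human", "What is machine learning?")])

response = asyncio.run(ask())
print(response.content)
```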
### Image Generation
```python
from langchain_g4f import generate_image
import g4f.Provider as Provider
# Generate image
response = generate_image(
"a beautiful sunset over mountains",
model="flux",
provider=Provider.PollinationsAI
)
print(f"Image URL: {response.url}")
response.save("sunset.png") # Save locally
response.show() # Display with PIL
```
### Vision Chat
```python
# Chat with image
messages = [
("human", [
{"type": "text", "text": "What's in this image?"},
{"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
])
]
response = llm.invoke(messages)
print(response.content)
```
### Provider Discovery
```python
from langchain_g4f import G4FUtils, get_providers
# Get providers by authentication
no_auth_providers = G4FUtils.get_providers_by_auth(needs_auth=False)
print(f"No-auth providers: {len(no_auth_providers)}")
# Get providers by capability
image_providers = G4FUtils.get_providers_by_capability(images=True)
vision_providers = G4FUtils.get_providers_by_capability(vision=True)
# Get working providers
working_providers = get_providers(working=True)
```
## 🎯 Use Cases
### 1. Content Creation Workflow
```python
import asyncio
from langchain_g4f import generate_image, quick_image_analyze, ChatG4F
async def content_workflow():
# Generate image
image = generate_image("modern office space")
# Analyze image
analysis = await quick_image_analyze(
image.url,
"Describe the atmosphere and design elements"
)
# Create content
llm = ChatG4F(model="gpt-4o-mini")
content = llm.invoke([
("human", f"Write a blog post about this office: {analysis}")
])
return image, analysis, content
# Run workflow
image, analysis, content = asyncio.run(content_workflow())
```
### 2. Multi-Modal Analysis
```python
import asyncio
from langchain_g4f import MediaProcessorG4F

processor = MediaProcessorG4F()

# Batch process different media types
items = [
    {"type": "vision", "image": "photo.jpg", "query": "Analyze this image"},
    {"type": "audio", "text": "Create narration", "voice": "alloy"},
    {"type": "video", "prompt": "A spinning globe", "resolution": "720p"}
]

# batch_process_media is async, so run it in an event loop
results = asyncio.run(processor.batch_process_media(items))
```
### 3. Provider Optimization
```python
from langchain_g4f import G4FUtils
# Find best providers for your needs
def get_optimal_provider(use_case):
if use_case == "chat":
providers = G4FUtils.get_providers_by_auth(needs_auth=False)
return [p for p in providers if p.working and p.supports_text][:3]
elif use_case == "images":
providers = G4FUtils.get_providers_by_capability(images=True)
return [p for p in providers if p.working][:3]
elif use_case == "vision":
providers = G4FUtils.get_providers_by_capability(vision=True)
return [p for p in providers if p.working][:3]
# Get recommendations
chat_providers = get_optimal_provider("chat")
image_providers = get_optimal_provider("images")
```
## 🔐 Authentication
Different providers require different authentication methods:
### No Authentication Required
```python
# These providers work without any setup
Provider.Blackbox # Chat, code
Provider.PollinationsAI # Images, audio
Provider.FreeGpt # Basic chat
```
### API Key Authentication
```python
# Set environment variables
import os
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["GEMINI_API_KEY"] = "your-key"
os.environ["HF_TOKEN"] = "your-token"
# Use with providers
ChatG4F(provider=Provider.OpenaiChat, api_key=os.getenv("OPENAI_API_KEY"))
```
### Cookie/Session Authentication
```python
# Some providers need browser cookies
# Check G4F documentation for setup
```
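A hedged sketch of cookie setup, assuming the `set_cookies` helper exposed by `g4f.cookies` (cookie names and domains vary by provider; consult the G4F documentation for the exact values):
```python
# Assumption: g4f exposes set_cookies(domain, cookies) in g4f.cookies.
from g4f.cookies import set_cookies

# Values below are placeholders; copy the real cookies from your browser session.
set_cookies(".example-provider.com", {"session-token": "<cookie-value-from-browser>"})
```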
## 📊 Provider Categories
### By Authentication Type
- **No Auth**: 45+ providers (Blackbox, FreeGpt, PollinationsAI, etc.)
- **API Key**: 20+ providers (OpenAI, Gemini, Anthropic, etc.)
- **Cookies**: 15+ providers (ChatGPT, Bing, Bard, etc.)
- **HAR Files**: 5+ providers (Advanced session management)
### By Capability
- **Text Chat**: 70+ providers
- **Image Generation**: 15+ providers
- **Vision/Image Analysis**: 25+ providers
- **Audio Processing**: 8+ providers
- **File Upload**: 12+ providers
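These counts drift as providers come and go; you can recompute them locally with the discovery helpers documented in the API Reference. A sketch reusing the same calls shown in Provider Discovery above:
```python
from langchain_g4f import G4FUtils, get_providers

# Tally providers by status, authentication, and capability
working = get_providers(working=True)
no_auth = G4FUtils.get_providers_by_auth(needs_auth=False)
image_capable = G4FUtils.get_providers_by_capability(images=True)
vision_capable = G4FUtils.get_providers_by_capability(vision=True)

print(f"Working providers:        {len(working)}")
print(f"No-auth providers:        {len(no_auth)}")
print(f"Image-generation capable: {len(image_capable)}")
print(f"Vision capable:           {len(vision_capable)}")
```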
## 🧪 Testing
Run the comprehensive test suite:
```bash
# Test utilities
python -m pytest langchain_g4f/test_utils.py -v
# Test image generation
python -m pytest langchain_g4f/test_image_generation.py -v
# Test media processing
python -m pytest langchain_g4f/test_media.py -v
# Run all tests
python -m pytest langchain_g4f/ -v
```
## 📖 Examples
### Complete Examples
```bash
# Run comprehensive examples
python langchain_g4f/example_usage.py
```
### Available Examples
1. **Basic Chat** - Simple Q&A with various models
2. **Vision Chat** - Image analysis and description
3. **Image Generation** - Create images from text
4. **Batch Processing** - Multiple operations simultaneously
5. **Audio Generation** - Text-to-speech conversion
6. **Media Processing** - Multi-modal content handling
7. **Provider Discovery** - Find optimal providers
8. **Workflow Automation** - Complete AI pipelines
## 🛠️ API Reference
### Core Classes
#### ChatG4F
```python
class ChatG4F:
def __init__(self, model, provider, temperature=0.7, **kwargs)
def invoke(self, messages) -> AIMessage
async def ainvoke(self, messages) -> AIMessage
def stream(self, messages) -> Iterator[AIMessageChunk]
```
#### ImageG4F
```python
class ImageG4F:
def __init__(self, model, provider, **kwargs)
def generate(self, prompt, **kwargs) -> ImageGenerationResponse
async def agenerate(self, prompt, **kwargs) -> ImageGenerationResponse
@staticmethod
def get_available_models() -> List[str]
```
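A short usage sketch for the class-based interface, assuming it mirrors the `generate_image` convenience function from Quick Start (the `size` keyword is an assumption based on the Style Control feature bullet):
```python
from langchain_g4f import ImageG4F
import g4f.Provider as Provider

print(ImageG4F.get_available_models())  # list supported image models

img = ImageG4F(model="flux", provider=Provider.PollinationsAI)
response = img.generate("a watercolor city skyline", size="1024x1024")
response.save("skyline.png")
```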
#### MediaProcessorG4F
```python
class MediaProcessorG4F:
def __init__(self, provider=None, **kwargs)
async def generate_audio(self, text, voice="alloy") -> Dict
async def transcribe_audio(self, audio_file) -> Dict
async def analyze_image(self, image, query) -> Dict
async def batch_process_media(self, items) -> List[Dict]
```
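The media methods are async and return dictionaries. Here is a minimal sketch of audio generation and transcription against the signatures above; the keys of the returned dictionaries are not documented here, so the results are printed as-is:
```python
import asyncio
from langchain_g4f import MediaProcessorG4F

async def main():
    processor = MediaProcessorG4F()
    narration = await processor.generate_audio("Welcome to the demo.", voice="alloy")
    transcript = await processor.transcribe_audio("meeting.mp3")
    return narration, transcript

narration, transcript = asyncio.run(main())
print(narration)
print(transcript)
```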
#### G4FUtils
```python
class G4FUtils:
@staticmethod
def get_providers_by_auth(needs_auth=None) -> List[ProviderInfo]
@staticmethod
def get_providers_by_capability(**capabilities) -> List[ProviderInfo]
@staticmethod
def get_image_generation_models() -> List[str]
@staticmethod
def get_vision_models() -> List[str]
```
### Convenience Functions
```python
# Image generation
generate_image(prompt, model, provider) -> ImageGenerationResponse
batch_generate_images(prompts, **kwargs) -> List[ImageGenerationResponse]
# Media processing
quick_audio_generate(text, voice, provider) -> bytes
quick_image_analyze(image, query, provider) -> str
# Provider discovery
get_providers(working=None, needs_auth=None) -> List[ProviderInfo]
get_models() -> List[str]
print_summary() -> None
```
## 🔍 Advanced Usage
### Custom Provider Configuration
```python
from langchain_g4f import ChatG4F
import g4f.Provider as Provider
# Configure a provider with specific generation settings
llm = ChatG4F(
model="gpt-4",
provider=Provider.GPTalk,
temperature=0.3,
max_tokens=2000,
top_p=0.9,
frequency_penalty=0.1
)
```
### Error Handling
```python
from langchain_g4f import ChatG4F, G4FUtils
import g4f.Provider as Provider
try:
    # Try the preferred provider first (Provider.Primary is a placeholder name)
    llm = ChatG4F(model="gpt-4", provider=Provider.Primary)
response = llm.invoke(messages)
except Exception:
# Fallback to reliable provider
fallback_providers = G4FUtils.get_providers_by_auth(needs_auth=False)
llm = ChatG4F(model="gpt-3.5-turbo", provider=fallback_providers[0].provider)
response = llm.invoke(messages)
```
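If `ChatG4F` implements LangChain's chat model interface, as the `invoke`/`ainvoke`/`stream` signatures suggest, the same retry logic can be expressed declaratively with `Runnable.with_fallbacks` from `langchain-core`; a sketch under that assumption:
```python
from langchain_g4f import ChatG4F, G4FUtils
import g4f.Provider as Provider

primary = ChatG4F(model="gpt-4o-mini", provider=Provider.Blackbox)

# Build a fallback model from the first no-auth provider reported by G4FUtils
no_auth = G4FUtils.get_providers_by_auth(needs_auth=False)
fallback = ChatG4F(model="gpt-3.5-turbo", provider=no_auth[0].provider)

# with_fallbacks runs the fallback model if the primary raises an exception
llm = primary.with_fallbacks([fallback])
response = llm.invoke([("human", "What is machine learning?")])
print(response.content)
```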
### Performance Optimization
```python
import asyncio
from langchain_g4f import batch_generate_images, G4FUtils
import g4f.Provider as Provider

# Cache provider information to avoid repeated lookups
G4FUtils._cache_timeout = 3600  # Cache for 1 hour

# Batch operations for efficiency; batch_generate_images is async
prompts = ["Generate image 1", "Generate image 2", "Generate image 3"]
images = asyncio.run(batch_generate_images(prompts, provider=Provider.PollinationsAI))
```
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License. See LICENSE file for details.
## 🙏 Acknowledgments
- [G4F (GPT4Free)](https://github.com/xtekky/gpt4free) - Core AI provider integration
- [LangChain](https://github.com/langchain-ai/langchain) - Framework foundation
- All the amazing G4F provider maintainers
## 🆘 Support
- Check the [examples](example_usage.py) for common use cases
- Review [test files](test_*.py) for implementation details
- Open issues for bugs or feature requests
- Join discussions for community support
---
**Made with ❤️ for the open-source AI community**
---
> **Note:** This project is intended for educational and learning purposes only. If you are a copyright holder or have any concerns about the content, please contact us and we will promptly address your request, including removal if necessary. No legal process is required; just reach out and we will cooperate respectfully.