continuous-image-gen

- **Name**: continuous-image-gen
- **Version**: 1.0.0
- **Summary**: Continuous image generation system using Ollama for prompts and Flux for image generation
- **Upload time**: 2025-09-01 13:46:02
- **Requires Python**: >=3.11
- **License**: MIT
- **Keywords**: ai, art, automation, cli, creative, flux, image-generation, ollama
- **Requirements**: No requirements were recorded.

            # 🤖 Do LLMs Dream of Electric Sheep?

A Python application that continuously generates creative images with AI: Ollama writes the prompts and Flux renders the images. Let your machine dream up endless artistic possibilities! ✨

Like electric sheep in the dreams of androids, this project explores the boundaries between human and artificial creativity. What does AI imagine when we let it dream? 🌠

Built by [Agentic Insights](https://agenticinsights.com)

![Do androids dream of electric sheep?](https://host-image.agentic.workers.dev/)

## 🔑 Key Benefits

- **100% Local Processing**: Everything runs locally on your machine - no cloud APIs, no usage limits!
- **Privacy-First**: Your prompts and generated images never leave your computer
- **Internet-Optional**: Only connects to the internet to download model weights
- **Extensible Plugin System**: Enhance your prompts with local or remote data sources
- **No Subscription Fees**: Generate unlimited images without ongoing costs

## 🚀 Quick Start

### Option 1: Install from PyPI (Recommended)

```bash
# Install using pipx (isolated environment)
pipx install continuous-image-gen

# Or install with pip
pip install continuous-image-gen

# Set your environment variables
export HUGGINGFACE_TOKEN=your_token_here

# Start generating!
imagegen generate
```

### Option 2: Development Setup

1. Install prerequisites:
   - uv Python manager (install using [astral](https://astral.sh/uv/install))
   - Ollama (from [ollama.ai](https://ollama.ai))
   - CUDA-capable GPU (8GB+ VRAM recommended) or Apple Silicon Mac (M1/M2/M3/M4)
   - Hugging Face account with access token (for downloading models)

2. Set up the project:
   ```bash
   git clone https://github.com/vaski/continuous-image-gen
   cd continuous-image-gen
   
   # Set your Hugging Face token (required to download models)
   # You must first accept the gated model license (i.e. ungate the dev or schnell models) on Hugging Face
   export HUGGINGFACE_TOKEN=your_token_here
   
   # Install dependencies
   uv sync
   
   # For NVIDIA GPU support (CUDA), install PyTorch separately:
   uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
   
   # Note: uv sync may revert PyTorch to CPU version. After running uv sync,
   # always reinstall PyTorch with CUDA if you have an NVIDIA GPU.
   ```

3. Let the magic happen! ✨
   ```bash
   # Single image
   uv run imagegen generate

   # With interactive prompt feedback
   uv run imagegen generate --interactive

   # Multiple images (perfect for coffee breaks ☕)
   uv run imagegen loop --batch-size 10 --interval 300

   # Force a specific prompt (bypass Ollama)
   uv run imagegen generate -p "your custom prompt here"

   # Run without downloading large models (saves a placeholder image)
   uv run imagegen generate --mock

   # Enable verbose backend logging
   uv run imagegen --debug generate
   ```

4. Launch the modern web interface:
   ```bash
   # Start the FastAPI backend
   uv run uvicorn src.api.server:app --reload --port 8000
   
   # In a new terminal, start the Next.js frontend
   cd web-ui
   npm install
   npm run dev
   
   # Open the web UI in your browser (http://localhost:7860 by default; use the URL printed by npm run dev if it differs)
   ```

## ✨ Features

- **Modern Web Interface**:
  - IDE-style dark theme with VS Code aesthetics
  - Real-time generation with WebSocket updates
  - Plugin management interface
  - Gallery view for browsing generated images
  - Built with Next.js, TypeScript, and Tailwind CSS
- **RESTful API with FastAPI**:
  - Full REST API for programmatic access (see the endpoint-discovery sketch after this list)
  - WebSocket support for real-time updates
  - Batch generation endpoints
  - Plugin management API
- **Powerful Plugin System** for dynamic prompt enhancement:
  - Time of day context (morning/afternoon/evening/night)
  - Holiday detection and theming (because every day is special 🎉)
  - Art style variation (90+ distinct styles)
  - Lora integration (custom model fine-tuning)
  - Extensible plugin architecture (PRs welcome! 🙌)
- AI-powered prompt generation using Ollama (runs 100% locally)
- Image generation using Flux transformers (runs 100% locally)
- Interactive mode for prompt feedback (be the art director!)
- Lora support for custom model fine-tuning
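
Because the backend is built with FastAPI, it publishes an OpenAPI schema at `/openapi.json` by default, so the available REST routes can be discovered programmatically rather than guessed. A minimal sketch, assuming the backend from the Quick Start is running locally on port 8000:

```python
# Sketch: discover the routes exposed by the local FastAPI backend.
# Assumes the server from the Quick Start is running on port 8000 and that
# FastAPI's default OpenAPI schema endpoint has not been disabled.
import requests

schema = requests.get("http://localhost:8000/openapi.json", timeout=10).json()
for path, operations in schema.get("paths", {}).items():
    methods = ", ".join(method.upper() for method in operations)
    print(f"{methods:10} {path}")
```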

## 🔌 Plugin System

The plugin system is the heart of what makes this project special. It dynamically enhances prompts with contextual information to create more creative, relevant, and diverse images.

### How It Works

1. **Modular Architecture**: Each plugin is a standalone Python module that can be enabled/disabled independently
2. **Context Injection**: Plugins provide contextual information that gets seamlessly integrated into prompts (see the sketch below)
3. **Local & Remote Sources**: Plugins can use local data files or connect to remote APIs (while respecting your privacy settings)
4. **Easy Extensibility**: Create your own plugins with minimal code to add custom functionality
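
As a sketch of the context-injection idea (the names here are illustrative, not the project's actual plugin manager), merging plugin contexts into a prompt request can be as simple as:

```python
# Illustration of context injection; the real plugin manager in src/plugins/
# may structure this differently.
from typing import Callable, List, Optional

def build_prompt_request(base_instruction: str,
                         plugins: List[Callable[[], Optional[str]]]) -> str:
    # Collect the non-empty context strings contributed by each enabled plugin.
    contexts = [ctx for plugin in plugins if (ctx := plugin())]
    if not contexts:
        return base_instruction
    return f"{base_instruction} Incorporate the following context: {'; '.join(contexts)}"

# Toy usage with two fake plugins (one contributes context, one opts out).
print(build_prompt_request(
    "Describe a vivid scene for an image generator.",
    [lambda: "early morning light", lambda: None],
))
```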

### Included Plugins

- **Time of Day**: Adapts prompts to morning, afternoon, evening, or night themes
- **Holiday Awareness**: Detects upcoming holidays and incorporates them into prompts
- **Art Style Variation**: Rotates through 90+ distinct art styles to keep generations fresh
- **Lora Integration**: Seamlessly incorporates your custom Lora models as subjects
- **Day of Week**: Adjusts prompts based on the current day of the week

### Creating Custom Plugins

The plugin system follows a simple interface pattern, making it easy to create your own:

```python
from typing import Optional

def get_context() -> Optional[str]:
    """Return contextual information to enhance prompts"""
    return "your custom context here"
```

Place your plugin in the `src/plugins/` directory and update the plugin manager to include it (a hypothetical example is sketched after the list below). Your plugin can:
- Read from local data files
- Connect to APIs (with proper authentication)
- Use system information
- Implement caching for performance
- Maintain state between generations
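
For example, a hypothetical `src/plugins/season.py` that adds a seasonal mood to every prompt (the file name and registration are illustrative; wire it up the same way as the bundled plugins) could look like this:

```python
# src/plugins/season.py -- hypothetical example plugin (Northern-hemisphere seasons)
from datetime import date
from typing import Optional

_SEASONS = {
    12: "winter", 1: "winter", 2: "winter",
    3: "spring", 4: "spring", 5: "spring",
    6: "summer", 7: "summer", 8: "summer",
    9: "autumn", 10: "autumn", 11: "autumn",
}

def get_context() -> Optional[str]:
    """Return contextual information to enhance prompts"""
    return f"a {_SEASONS[date.today().month]} atmosphere"
```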

## 🎮 Command Reference

### Generate Single Image
```bash
uv run imagegen generate [OPTIONS]

Options:
-i, --interactive        Enable interactive mode
-m, --model TEXT         Ollama model (default: phi4:latest)
-f, --flux-model TEXT    Model variant: 'dev' or 'schnell'
-p, --prompt TEXT        Custom prompt (bypass Ollama generation)
--height INT             Image height (128-2048, default: 768)
--width INT              Image width (128-2048, default: 1360)
-s, --steps INT          Inference steps (1-150)
-g, --guidance FLOAT     Guidance scale (1.0-30.0)
--true-cfg FLOAT         True CFG scale (1.0-10.0)
--cpu-only               Force CPU mode (slower but hey, it works! 🐌)
--mps-use-fp16           Use float16 precision on Apple Silicon (may improve performance for some models)
--mock                   Use placeholder image generator (no models required)
```

### Generate Multiple Images
```bash
uv run imagegen loop [OPTIONS]

Options:
-b, --batch-size INT     Number of images (1-100)
-n, --interval INT       Seconds between generations
[+ same options as generate command]
```

### Run System Diagnostics
```bash
uv run imagegen diagnose [OPTIONS]

Options:
-v, --verbose        Show detailed diagnostic information
--check-env/--no-check-env  Check environment variables (default: True)
--fix                Attempt to fix common issues automatically
```

### Launch Web UI
```bash
uv run imagegen web [OPTIONS]

Options:
--mock              Use placeholder image generator (no models required)
```

## 🎭 Model Variants

Flux offers two model variants with different licensing terms:

1. **Dev Model** (`-f dev`)
   ```bash
   uv run imagegen generate -f dev --height 1024 --width 1024
   ```
   - Non-commercial use only
   - High-quality output (for when you're feeling fancy 🎩)
   - 50 inference steps
   - 7.5 guidance scale
   - Best for personal projects and experimentation

2. **Schnell Model** (`-f schnell`)
   ```bash
   uv run imagegen generate -f schnell --steps 4 --guidance 0.0
   ```
   - Commercial-friendly license
   - Optimized for speed (zoom zoom! 🏃‍♂️)
   - 4 inference steps
   - 0.0 guidance scale
   - Suitable for production environments

Choose the appropriate model based on your use case and licensing requirements.

## 🍎 Apple Silicon Support

This project now supports Apple Silicon (M1/M2/M3/M4) Macs using PyTorch's Metal Performance Shaders (MPS) backend. The system will automatically detect Apple Silicon and use the appropriate GPU acceleration.
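
As a simplified sketch, the detection typically boils down to a check like the one below (not necessarily the project's exact code):

```python
# Simplified sketch of backend/precision selection; the project's logic may differ.
import torch

if torch.cuda.is_available():
    device, dtype = "cuda", torch.bfloat16   # NVIDIA GPU (dtype choice illustrative)
elif torch.backends.mps.is_available():
    device, dtype = "mps", torch.float32     # Apple Silicon: float32 by default,
                                             # float16 when --mps-use-fp16 is passed
else:
    device, dtype = "cpu", torch.float32     # CPU fallback (what --cpu-only forces)

print(f"Using device={device}, dtype={dtype}")
```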

### Apple Silicon Tips

- Performance is generally good on Apple Silicon, but may vary depending on model complexity
- By default, the system uses float32 precision on MPS for better compatibility
- You can enable float16 precision with the `--mps-use-fp16` flag for potentially better performance
- Memory management on Apple Silicon is handled automatically through the unified memory architecture
- For best results on Apple Silicon, consider using the Schnell model variant which is optimized for speed

```bash
# Example: Running on Apple Silicon with float16 precision
uv run imagegen generate --mps-use-fp16

# Example: Running the faster Schnell model on Apple Silicon
uv run imagegen generate -f schnell --mps-use-fp16
```

## 🎨 Lora Support

The system supports Lora models for custom fine-tuning. Loras are loaded from subdirectories in your Lora directory, with automatic version selection.

Loras can be used to add specific likenesses (people, characters) or artistic styles to your generated images. The plugin system **automatically integrates Loras into your prompts** when they are enabled, making it seamless to add your favorite characters or styles to generated images.

### Lora Sources
- [Fal.ai](https://fal.ai/) - Offers high-quality Loras for various styles and subjects
- [CivitAI](https://civitai.com/) - Large community library of Loras for characters and styles
- [Hugging Face](https://huggingface.co/) - Many open-source Loras with various licenses

### Configuration
```bash
# Lora Configuration in .env
LORA_DIR=C:/ComfyUI/ComfyUI/models/loras
ENABLED_LORAS=your_lora_name
LORA_APPLICATION_PROBABILITY=0.99
```

### Directory Structure
```
loras/
└── your_lora_name/
    ├── your_lora_name-000004.safetensors
    ├── your_lora_name-000008.safetensors
    └── your_lora_name-000012.safetensors  # Latest version used
```
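
The "latest version used" rule amounts to picking the highest-numbered `.safetensors` file in each Lora's subdirectory. A rough sketch of that selection (the project's actual code may differ):

```python
# Rough sketch of "latest version" selection; the project's code may differ.
from pathlib import Path

def latest_lora_file(lora_dir: str, lora_name: str) -> Path:
    files = sorted(Path(lora_dir, lora_name).glob("*.safetensors"))
    if not files:
        raise FileNotFoundError(f"No .safetensors files found for {lora_name}")
    # Zero-padded version suffixes sort lexicographically, so the last entry
    # (e.g. your_lora_name-000012.safetensors) is the newest.
    return files[-1]
```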

### Using Loras

#### Automatic Integration (Recommended)
The system will automatically:
1. Randomly select from your enabled Loras based on the configured probability
2. Integrate the selected Lora as a central character/subject in the generated prompt
3. Format the Lora keyword properly with single quotes (e.g., 'your_lora_name')

Simply run:
```bash
uv run imagegen generate
# or
uv run imagegen loop --batch-size 10
```

#### Manual Prompt with Lora
If you prefer to craft your own prompt with a specific Lora:
```bash
uv run imagegen generate -p "Evening scene with 'your_lora_name' as the main character walking through a cyberpunk city"
```

> **How it works**: The Lora plugin detects enabled Loras, selects one based on your configuration, and instructs the prompt generator to make the Lora a central subject in the scene. This happens automatically in continuous generation mode.
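
Conceptually, that selection step is a probability check followed by a random choice among the enabled Loras, driven by the `.env` values shown above. A minimal sketch (names are hypothetical, not the plugin's actual code):

```python
# Conceptual sketch of the Lora plugin's selection step; not the actual plugin code.
import os
import random
from typing import Optional

def pick_lora_context() -> Optional[str]:
    # Assumes ENABLED_LORAS is a comma-separated list of Lora names.
    enabled = [name.strip() for name in os.environ.get("ENABLED_LORAS", "").split(",") if name.strip()]
    probability = float(os.environ.get("LORA_APPLICATION_PROBABILITY", "0.99"))
    if not enabled or random.random() > probability:
        return None
    lora = random.choice(enabled)
    # Tell the prompt generator to feature the Lora, quoted with single quotes.
    return f"Feature '{lora}' as the central subject of the scene."
```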

## 🌐 Host-Image Feature

Share your AI-generated masterpieces with the world! This feature publishes your latest generated image to a public endpoint using Cloudflare Workers and R2 storage.

> **Note**: This feature is already in use to serve the image at the top of this README!

### How It Works
1. Your generated images are uploaded to a Cloudflare R2 bucket (see the upload sketch after this list)
2. A Cloudflare Worker serves the latest image via a public URL
3. You can embed this URL anywhere (websites, social media, etc.)
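
R2 exposes an S3-compatible API, so the upload in step 1 can be done from Python with `boto3` pointed at your R2 endpoint. A sketch with placeholder account ID, credentials, bucket name, and file path (the project's actual upload mechanism may differ):

```python
# Sketch: push the newest image to an R2 bucket over the S3-compatible API.
# Account ID, credentials, bucket name, and file path are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<R2_ACCESS_KEY_ID>",
    aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
)
s3.upload_file(
    "output/latest.png", "<BUCKET_NAME>", "latest.png",
    ExtraArgs={"ContentType": "image/png"},
)
```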

### Requirements
- Cloudflare account with Workers and R2 access
- Basic knowledge of Cloudflare Workers deployment

### Setup Instructions
1. Copy the `host-image` directory from this repository
2. Configure your R2 bucket in `wrangler.jsonc`
3. Deploy using `wrangler deploy`

### Usage
Once deployed, your image will be available at your worker's URL:
```
https://host-image.yourdomain.workers.dev/
```

Perfect for embedding in websites, sharing on social media, or creating an always-updating display of your AI art!

## ⚙️ Environment Configuration

Set these environment variables before running:
```bash
# Default values shown
export HUGGINGFACE_TOKEN=your_token_here  # Required for downloading models
export OLLAMA_MODEL=phi4:latest
export OLLAMA_TEMPERATURE=0.7
export FLUX_MODEL=dev  # Must ungate this model on Hugging Face first
export IMAGE_HEIGHT=768
export IMAGE_WIDTH=1360
export NUM_INFERENCE_STEPS=50  # 50 for dev, 4 for schnell
export GUIDANCE_SCALE=7.5      # 7.5 for dev, 0.0 for schnell
export TRUE_CFG_SCALE=1.0
export MAX_SEQUENCE_LENGTH=512

# Lora Configuration
export LORA_DIR=C:/ComfyUI/ComfyUI/models/loras
export ENABLED_LORAS=your_lora_name
export LORA_APPLICATION_PROBABILITY=0.99
```
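
Within the application these settings are typically read from the environment with per-model fallbacks. A simplified illustration using the variable names above (not the project's actual settings module):

```python
# Simplified illustration of reading the configuration; the project's real
# settings code may differ.
import os

FLUX_MODEL = os.environ.get("FLUX_MODEL", "dev")
IMAGE_HEIGHT = int(os.environ.get("IMAGE_HEIGHT", "768"))
IMAGE_WIDTH = int(os.environ.get("IMAGE_WIDTH", "1360"))
NUM_INFERENCE_STEPS = int(os.environ.get("NUM_INFERENCE_STEPS",
                                         "50" if FLUX_MODEL == "dev" else "4"))
GUIDANCE_SCALE = float(os.environ.get("GUIDANCE_SCALE",
                                      "7.5" if FLUX_MODEL == "dev" else "0.0"))
HUGGINGFACE_TOKEN = os.environ["HUGGINGFACE_TOKEN"]  # required: raises KeyError if unset
```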

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "continuous-image-gen",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.11",
    "maintainer_email": null,
    "keywords": "ai, art, automation, cli, creative, flux, image-generation, ollama",
    "author": null,
    "author_email": "Vaski <vaski@example.com>",
    "download_url": "https://files.pythonhosted.org/packages/c4/c8/b1240fb162a20aeb39f2c917b26540d346124d4f85b320df7f260700438b/continuous_image_gen-1.0.0.tar.gz",
    "platform": null,
    "description": "# \ud83e\udd16 Do LLMs Dream of electric sheep?\n\nA Python application that continuously generates creative images using AI. It uses Ollama for generating creative prompts and Flux for image generation. Let your machine dream up endless artistic possibilities! \u2728\n\nLike electric sheep in the dreams of androids, this project explores the boundaries between human and artificial creativity. What does AI imagine when we let it dream? \ud83c\udf20\n\nBuilt by [Agentic Insights](https://agenticinsights.com)\n\n![Do androids dream of electric sheep?](https://host-image.agentic.workers.dev/)\n\n## \ud83d\udd11 Key Benefits\n\n- **100% Local Processing**: Everything runs locally on your machine - no cloud APIs, no usage limits!\n- **Privacy-First**: Your prompts and generated images never leave your computer\n- **Internet-Optional**: Only connects to the internet to download model weights\n- **Extensible Plugin System**: Enhance your prompts with local or remote data sources\n- **No Subscription Fees**: Generate unlimited images without ongoing costs\n\n## \ud83d\ude80 Quick Start\n\n### Option 1: Install from PyPI (Recommended)\n\n```bash\n# Install using pipx (isolated environment)\npipx install continuous-image-gen\n\n# Or install with pip\npip install continuous-image-gen\n\n# Set your environment variables\nexport HUGGINGFACE_TOKEN=your_token_here\n\n# Start generating!\nimagegen generate\n```\n\n### Option 2: Development Setup\n\n1. Install prerequisites:\n   - uv Python manager (install using [astral](https://astral.sh/uv/install))\n   - Ollama (from [ollama.ai](https://ollama.ai))\n   - CUDA-capable GPU (8GB+ VRAM recommended) or Apple Silicon Mac (M1/M2/M3/M4)\n   - Hugging Face account with access token (for downloading models)\n\n2. Set up the project:\n   ```bash\n   git clone https://github.com/vaski/continuous-image-gen\n   cd continuous-image-gen\n   \n   # Set your Hugging Face token (required to download models)\n   # You must ungate the dev or schnell models on Hugging Face first\n   export HUGGINGFACE_TOKEN=your_token_here\n   \n   # Install dependencies\n   uv sync\n   \n   # For NVIDIA GPU support (CUDA), install PyTorch separately:\n   uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124\n   \n   # Note: uv sync may revert PyTorch to CPU version. After running uv sync,\n   # always reinstall PyTorch with CUDA if you have an NVIDIA GPU.\n   ```\n\n3. Let the magic happen! \u2728\n   ```bash\n   # Single image\n   uv run imagegen generate\n\n   # With interactive prompt feedback\n   uv run imagegen generate --interactive\n\n   # Multiple images (perfect for coffee breaks \u2615)\n   uv run imagegen loop --batch-size 10 --interval 300\n\n   # Force a specific prompt (bypass Ollama)\n   uv run imagegen generate -p \"your custom prompt here\"\n\n   # Run without downloading large models (saves a placeholder image)\n   uv run imagegen generate --mock\n\n   # Enable verbose backend logging\n   uv run imagegen --debug generate\n   ```\n\n4. 
Launch the modern web interface:\n   ```bash\n   # Start the FastAPI backend\n   uv run uvicorn src.api.server:app --reload --port 8000\n   \n   # In a new terminal, start the Next.js frontend\n   cd web-ui\n   npm install\n   npm run dev\n   \n   # Open http://localhost:7860 in your browser\n   ```\n\n## \u2728 Features\n\n- **Modern Web Interface**:\n  - IDE-style dark theme with VS Code aesthetics\n  - Real-time generation with WebSocket updates\n  - Plugin management interface\n  - Gallery view for browsing generated images\n  - Built with Next.js, TypeScript, and Tailwind CSS\n- **RESTful API with FastAPI**:\n  - Full REST API for programmatic access\n  - WebSocket support for real-time updates\n  - Batch generation endpoints\n  - Plugin management API\n- **Powerful Plugin System** for dynamic prompt enhancement:\n  - Time of day context (morning/afternoon/evening/night)\n  - Holiday detection and theming (because every day is special \ud83c\udf89)\n  - Art style variation (90+ distinct styles)\n  - Lora integration (custom model fine-tuning)\n  - Extensible plugin architecture (PRs welcome! \ud83d\ude4c)\n- AI-powered prompt generation using Ollama (runs 100% locally)\n- Image generation using Flux transformers (runs 100% locally)\n- Interactive mode for prompt feedback (be the art director!)\n- Lora support for custom model fine-tuning\n\n## \ud83d\udd0c Plugin System\n\nThe plugin system is the heart of what makes this project special. It dynamically enhances prompts with contextual information to create more creative, relevant, and diverse images.\n\n### How It Works\n\n1. **Modular Architecture**: Each plugin is a standalone Python module that can be enabled/disabled independently\n2. **Context Injection**: Plugins provide contextual information that gets seamlessly integrated into prompts\n3. **Local & Remote Sources**: Plugins can use local data files or connect to remote APIs (while respecting your privacy settings)\n4. **Easy Extensibility**: Create your own plugins with minimal code to add custom functionality\n\n### Included Plugins\n\n- **Time of Day**: Adapts prompts to morning, afternoon, evening, or night themes\n- **Holiday Awareness**: Detects upcoming holidays and incorporates them into prompts\n- **Art Style Variation**: Rotates through 90+ distinct art styles to keep generations fresh\n- **Lora Integration**: Seamlessly incorporates your custom Lora models as subjects\n- **Day of Week**: Adjusts prompts based on the current day of the week\n\n### Creating Custom Plugins\n\nThe plugin system follows a simple interface pattern, making it easy to create your own:\n\n```python\ndef get_context() -> Optional[str]:\n    \"\"\"Return contextual information to enhance prompts\"\"\"\n    return \"your custom context here\"\n```\n\nPlace your plugin in the `src/plugins/` directory and update the plugin manager to include it. 
Your plugin can:\n- Read from local data files\n- Connect to APIs (with proper authentication)\n- Use system information\n- Implement caching for performance\n- Maintain state between generations\n\n## \ud83c\udfae Command Reference\n\n### Generate Single Image\n```bash\nuv run imagegen generate [OPTIONS]\n\nOptions:\n-i, --interactive      Enable interactive mode\n-m, --model TEXT      Ollama model (default: phi4:latest)\n-f, --flux-model TEXT Model variant: 'dev' or 'schnell'\n-p, --prompt TEXT     Custom prompt (bypass Ollama generation)\n--height INT         Image height (128-2048, default: 768)\n--width INT          Image width (128-2048, default: 1360)\n-s, --steps INT      Inference steps (1-150)\n-g, --guidance FLOAT Guidance scale (1.0-30.0)\n--true-cfg FLOAT    True CFG scale (1.0-10.0)\n--cpu-only          Force CPU mode (slower but hey, it works! \ud83d\udc0c)\n--mps-use-fp16      Use float16 precision on Apple Silicon (may improve performance for some models)\n--mock              Use placeholder image generator (no models required)\n```\n\n### Generate Multiple Images\n```bash\nuv run imagegen loop [OPTIONS]\n\nOptions:\n-b, --batch-size INT Number of images (1-100)\n-n, --interval INT  Seconds between generations\n[+ same options as generate command]\n```\n\n### Run System Diagnostics\n```bash\nuv run imagegen diagnose [OPTIONS]\n\nOptions:\n-v, --verbose        Show detailed diagnostic information\n--check-env/--no-check-env  Check environment variables (default: True)\n--fix                Attempt to fix common issues automatically\n```\n\n### Launch Web UI\n```bash\nuv run imagegen web [OPTIONS]\n\nOptions:\n--mock              Use placeholder image generator (no models required)\n```\n\n## \ud83c\udfad Model Variants\n\nFlux offers two model variants with different licensing terms:\n\n1. **Dev Model** (`-f dev`)\n   ```bash\n   uv run imagegen generate -f dev --height 1024 --width 1024\n   ```\n   - Non-commercial use only\n   - High-quality output (for when you're feeling fancy \ud83c\udfa9)\n   - 50 inference steps\n   - 7.5 guidance scale\n   - Best for personal projects and experimentation\n\n2. **Schnell Model** (`-f schnell`)\n   ```bash\n   uv run imagegen generate -f schnell --steps 4 --guidance 0.0\n   ```\n   - Commercial-friendly license\n   - Optimized for speed (zoom zoom! \ud83c\udfc3\u200d\u2642\ufe0f)\n   - 4 inference steps\n   - 0.0 guidance scale\n   - Suitable for production environments\n\nChoose the appropriate model based on your use case and licensing requirements.\n\n## \ud83c\udf4e Apple Silicon Support\n\nThis project now supports Apple Silicon (M1/M2/M3/M4) Macs using PyTorch's Metal Performance Shaders (MPS) backend. 
The system will automatically detect Apple Silicon and use the appropriate GPU acceleration.\n\n### Apple Silicon Tips\n\n- Performance is generally good on Apple Silicon, but may vary depending on model complexity\n- By default, the system uses float32 precision on MPS for better compatibility\n- You can enable float16 precision with the `--mps-use-fp16` flag for potentially better performance\n- Memory management on Apple Silicon is handled automatically through the unified memory architecture\n- For best results on Apple Silicon, consider using the Schnell model variant which is optimized for speed\n\n```bash\n# Example: Running on Apple Silicon with float16 precision\nuv run imagegen generate --mps-use-fp16\n\n# Example: Running the faster Schnell model on Apple Silicon\nuv run imagegen generate -f schnell --mps-use-fp16\n```\n\n## \ud83c\udfa8 Lora Support\n\nThe system supports Lora models for custom fine-tuning. Loras are loaded from subdirectories in your Lora directory, with automatic version selection.\n\nLoras can be used to add specific likenesses (people, characters) or artistic styles to your generated images. The plugin system **automatically integrates Loras into your prompts** when they are enabled, making it seamless to add your favorite characters or styles to generated images.\n\n### Lora Sources\n- [Fal.ai](https://fal.ai/) - Offers high-quality Loras for various styles and subjects\n- [CivitAI](https://civitai.com/) - Large community library of Loras for characters and styles\n- [Hugging Face](https://huggingface.co/) - Many open-source Loras with various licenses\n\n### Configuration\n```bash\n# Lora Configuration in .env\nLORA_DIR=C:/ComfyUI/ComfyUI/models/loras\nENABLED_LORAS=your_lora_name\nLORA_APPLICATION_PROBABILITY=0.99\n```\n\n### Directory Structure\n```\nloras/\n\u2514\u2500\u2500 your_lora_name/\n    \u251c\u2500\u2500 your_lora_name-000004.safetensors\n    \u251c\u2500\u2500 your_lora_name-000008.safetensors\n    \u2514\u2500\u2500 your_lora_name-000012.safetensors  # Latest version used\n```\n\n### Using Loras\n\n#### Automatic Integration (Recommended)\nThe system will automatically:\n1. Randomly select from your enabled Loras based on the configured probability\n2. Integrate the selected Lora as a central character/subject in the generated prompt\n3. Format the Lora keyword properly with single quotes (e.g., 'your_lora_name')\n\nSimply run:\n```bash\nuv run imagegen generate\n# or\nuv run imagegen loop --batch-size 10\n```\n\n#### Manual Prompt with Lora\nIf you prefer to craft your own prompt with a specific Lora:\n```bash\nuv run imagegen generate -p \"Evening scene with 'your_lora_name' as the main character walking through a cyberpunk city\"\n```\n\n> **How it works**: The Lora plugin detects enabled Loras, selects one based on your configuration, and instructs the prompt generator to make the Lora a central subject in the scene. This happens automatically in continuous generation mode.\n\n## \ud83c\udf10 Host-Image Feature\n\nShare your AI-generated masterpieces with the world! This feature allows you to have your latest generated image available on a public endpoint using Cloudflare Workers and R2 storage.\n\n> **Note**: This feature is already in use to serve the image at the top of this README!\n\n### How It Works\n1. Your generated images are uploaded to a Cloudflare R2 bucket\n2. A Cloudflare Worker serves the latest image via a public URL\n3. 
You can embed this URL anywhere (websites, social media, etc.)\n\n### Requirements\n- Cloudflare account with Workers and R2 access\n- Basic knowledge of Cloudflare Workers deployment\n\n### Setup Instructions\n1. Clone the host-image directory\n2. Configure your R2 bucket in wrangler.jsonc\n3. Deploy using `wrangler deploy`\n\n### Usage\nOnce deployed, your image will be available at your worker's URL:\n```\nhttps://host-image.yourdomain.workers.dev/\n```\n\nPerfect for embedding in websites, sharing on social media, or creating an always-updating display of your AI art!\n\n## \u2699\ufe0f Environment Configuration\n\nSet these environment variables before running:\n```bash\n# Default values shown\nexport HUGGINGFACE_TOKEN=your_token_here  # Required for downloading models\nexport OLLAMA_MODEL=phi4:latest\nexport OLLAMA_TEMPERATURE=0.7\nexport FLUX_MODEL=dev  # Must ungate this model on Hugging Face first\nexport IMAGE_HEIGHT=768\nexport IMAGE_WIDTH=1360\nexport NUM_INFERENCE_STEPS=50  # 50 for dev, 4 for schnell\nexport GUIDANCE_SCALE=7.5      # 7.5 for dev, 0.0 for schnell\nexport TRUE_CFG_SCALE=1.0\nexport MAX_SEQUENCE_LENGTH=512\n\n# Lora Configuration\nexport LORA_DIR=C:/ComfyUI/ComfyUI/models/loras\nexport ENABLED_LORAS=your_lora_name\nexport LORA_APPLICATION_PROBABILITY=0.99\n```\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Continuous image generation system using Ollama for prompts and Flux for image generation",
    "version": "1.0.0",
    "project_urls": {
        "Documentation": "https://github.com/vaski/continuous-image-gen#readme",
        "Homepage": "https://github.com/vaski/continuous-image-gen",
        "Issues": "https://github.com/vaski/continuous-image-gen/issues",
        "Repository": "https://github.com/vaski/continuous-image-gen"
    },
    "split_keywords": [
        "ai",
        " art",
        " automation",
        " cli",
        " creative",
        " flux",
        " image-generation",
        " ollama"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "b81a8ce7b315dfae201a7807e14f088102401bae899ba6ac39621d59a9443147",
                "md5": "6e797a3a78b253cb4a178cb815e36d0a",
                "sha256": "c1f46478fbbda34a2c5caffd97d286ebad64b95533abc93e6e1550688f7fc1d1"
            },
            "downloads": -1,
            "filename": "continuous_image_gen-1.0.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "6e797a3a78b253cb4a178cb815e36d0a",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.11",
            "size": 55974,
            "upload_time": "2025-09-01T13:46:00",
            "upload_time_iso_8601": "2025-09-01T13:46:00.739776Z",
            "url": "https://files.pythonhosted.org/packages/b8/1a/8ce7b315dfae201a7807e14f088102401bae899ba6ac39621d59a9443147/continuous_image_gen-1.0.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "c4c8b1240fb162a20aeb39f2c917b26540d346124d4f85b320df7f260700438b",
                "md5": "864448f59041eb4883ab4fd2ad2b55b8",
                "sha256": "75ef49aceac05a4a3b04e768442fc31702ed66d824190b65a8d2bdedc4a2afe6"
            },
            "downloads": -1,
            "filename": "continuous_image_gen-1.0.0.tar.gz",
            "has_sig": false,
            "md5_digest": "864448f59041eb4883ab4fd2ad2b55b8",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.11",
            "size": 1011655,
            "upload_time": "2025-09-01T13:46:02",
            "upload_time_iso_8601": "2025-09-01T13:46:02.264856Z",
            "url": "https://files.pythonhosted.org/packages/c4/c8/b1240fb162a20aeb39f2c917b26540d346124d4f85b320df7f260700438b/continuous_image_gen-1.0.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-01 13:46:02",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "vaski",
    "github_project": "continuous-image-gen#readme",
    "github_not_found": true,
    "lcname": "continuous-image-gen"
}
        