flog-otlp

Name: flog-otlp
Version: 0.2.0
Summary: A Python utility that generates realistic log data using flog and sends it to OpenTelemetry Protocol (OTLP) endpoints
Upload time: 2025-09-02 00:31:27
Requires Python: >=3.13
License: MIT License, Copyright (c) 2025 Rick Jury
Keywords: flog, logging, observability, opentelemetry, otlp, testing
Requirements: requests
            # flog-otlp

A Python package that generates realistic log data using [flog](https://github.com/mingrammer/flog) and sends it to OpenTelemetry Protocol (OTLP) endpoints. Perfect for testing log pipelines, observability systems, and OTLP collectors.

**Now available as a proper Python package with pip installation!**

flog_otlp is a Python wrapper that takes STDOUT from [flog](https://github.com/mingrammer/flog), which can generate sample log lines in formats such as Apache and JSON, encodes each line in an OTLP-compliant wrapper, and forwards it to an OTLP endpoint. You can also provide custom attributes and execute complex scenarios with asynchronous timing control. I created this for testing OTLP log ingestion into Sumo Logic, but it should work with any OTLP-compliant receiver.

Mapping for the flog payload:
- The flog event is encoded in a "log" JSON key.
- `otlp-attributes`: resource-level attributes that map to "fields" in Sumo Logic. Fields are posted separately from the log body and stored in the index with the data, but each named field must first be enabled or it is suppressed.
- `telemetry-attributes`: log-level attributes that appear as JSON keys in the log event body in Sumo Logic.

Example standard body as it appears on the Sumo Logic side:

```json
{"log_source":"flog","log_type":"apache_common","log":"41.253.249.79 - rath4856 [27/Aug/2025:16:31:15 +1200] \"HEAD /empower HTTP/2.0\" 501 8873"}
```
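Conceptually, the wrapping works like this. The sketch below is illustrative, not the package's actual code: `build_otlp_payload` is a hypothetical helper, and the body keys follow the example above while the envelope follows the OTLP/HTTP JSON logs encoding.

```python
import json
import time

def build_otlp_payload(flog_line, otlp_attributes=None, telemetry_attributes=None):
    """Wrap one flog line in an OTLP/HTTP JSON logs payload (illustrative)."""
    # telemetry-attributes become extra JSON keys inside the log event body
    body = {"log_source": "flog", "log_type": "apache_common", "log": flog_line}
    body.update(telemetry_attributes or {})
    return {
        "resourceLogs": [{
            "resource": {
                # otlp-attributes become resource-level attributes
                # ("fields" on the Sumo Logic side)
                "attributes": [
                    {"key": k, "value": {"stringValue": v}}
                    for k, v in (otlp_attributes or {}).items()
                ]
            },
            "scopeLogs": [{
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "body": {"stringValue": json.dumps(body)},
                }]
            }],
        }]
    }
```

Posting this dict as JSON to the `/v1/logs` endpoint (e.g. with `requests.post`) is all the forwarding step needs.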

## Installation

### Requirements
- **Python**: 3.13+ required
- **uv**: Modern Python package manager (recommended)
- **flog**: Log generation tool

### Install flog-otlp Package

```bash
# Install from PyPI (when published) - using uv (recommended)
uv add flog-otlp

# Or using pip
pip install flog-otlp

# Development setup with uv (recommended)
git clone <repo-url>
cd flog_otlp
uv sync --group dev

# Alternative: development setup with pip
pip install -e ".[dev]"
```

### Install flog Tool (Required)

```bash
# macOS
brew install mingrammer/flog/flog

# Go install (any platform)
go install github.com/mingrammer/flog@latest
```

## Usage

After installation, use the `flog-otlp` command. Supported log formats: `apache_common`, `apache_combined`, `apache_error`, `rfc3164`, `rfc5424`, `common_log`, `json`

### Single Execution Examples

```bash
# Default: 200 logs over 10 seconds
flog-otlp

# 100 logs over 5 seconds
flog-otlp -n 100 -s 5s

# 50 Apache common format logs
flog-otlp -f apache_common -n 50

# 100 JSON logs, no infinite loop
flog-otlp -f json -n 100 --no-loop

# Custom OTLP endpoint
flog-otlp --otlp-endpoint https://collector:4318/v1/logs

# With custom resource attributes
flog-otlp --otlp-attributes environment=production --otlp-attributes region=us-east-1

# With custom log attributes
flog-otlp --telemetry-attributes app=web-server --telemetry-attributes debug=true

# With authentication headers
flog-otlp --otlp-header "Authorization=Bearer token123" --otlp-header "X-Custom=value"
```

### Development Usage (Without Installation)

```bash
# Clone and run without installing
git clone <repo-url>
cd flog_otlp
python3 scripts/run.py -n 50 -f json
```

## Scenario Mode

**NEW in v0.2.0**: Execute complex log generation scenarios with precise asynchronous timing control using YAML configuration files.

### Quick Start

```bash
# Execute a scenario from YAML file
flog-otlp --scenario scenario.yaml

# With custom endpoint
flog-otlp --scenario scenario.yaml --otlp-endpoint https://collector:4318/v1/logs
```

### YAML Scenario Format

Create a YAML file defining your test scenario:

```yaml
name: "Production Load Test"
description: "Simulates real-world traffic patterns with overlapping log types"

steps:
  # Normal baseline traffic
  - start_time: "0s"
    interval: "30s"
    iterations: 10
    parameters:
      format: "json"
      number: 100
      sleep: "10s"
      telemetry_attributes:
        - "log_level=info"
        - "service=web-frontend"
      otlp_attributes:
        - "environment=production"
        - "region=us-east-1"

  # Error spike after 2 minutes
  - start_time: "2m"
    interval: "15s" 
    iterations: 6
    parameters:
      format: "json"
      number: 200
      sleep: "8s"
      telemetry_attributes:
        - "log_level=error"
        - "service=web-frontend"
        - "incident=auth-failure"
      otlp_attributes:
        - "environment=production"
        - "region=us-east-1"
        - "alert_state=triggered"

  # Recovery phase
  - start_time: "4m"
    interval: "45s"
    iterations: 4
    parameters:
      format: "json"
      number: 80
      sleep: "10s"
      telemetry_attributes:
        - "log_level=warn"
        - "service=web-frontend"
        - "status=recovering"
```

### Scenario Features

- **Asynchronous Execution**: All steps run concurrently with precise timing
- **Flexible Scheduling**: Define when each step starts and how often it repeats
- **Parameter Override**: Each step can customize any flog-otlp parameter
- **Enhanced Logging**: INFO-level logging shows flog commands and parameters
- **Concurrent Patterns**: Steps can overlap, run in parallel, or execute sequentially

### Scenario Parameters

| Parameter | Description | Example | 
|-----------|-------------|---------|
| `start_time` | When to begin this step | `"0s"`, `"2m"`, `"1h"` |
| `interval` | Time between iterations | `"30s"`, `"5m"` |
| `iterations` | Number of times to run | `1`, `10`, `0` (infinite) |
| `parameters` | Any flog-otlp parameters | `format`, `number`, `attributes` |
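A step with the fields from the table above can be checked with a few lines of Python. This is an illustrative sketch; the package's own validation may be stricter.

```python
REQUIRED = {"start_time", "interval", "iterations"}

def validate_step(step):
    """Check one scenario step has the required keys (illustrative)."""
    missing = REQUIRED - step.keys()
    if missing:
        raise ValueError(f"step missing keys: {sorted(missing)}")
    # iterations must be a non-negative integer; 0 means run forever
    if not isinstance(step["iterations"], int) or step["iterations"] < 0:
        raise ValueError("iterations must be a non-negative integer (0 = infinite)")
    return True
```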

### Time Format Support
- **Seconds**: `"30s"`, `"90s"`
- **Minutes**: `"5m"`, `"2.5m"`
- **Hours**: `"1h"`, `"0.5h"`
- **Plain numbers**: `30` (defaults to seconds)
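The time formats above can be normalized to seconds with a small parser. This is an illustrative re-implementation of the rules listed above; the package's own parser may differ.

```python
def parse_duration(value):
    """Convert a scenario time value ("30s", "2.5m", "1h", or 30) to seconds."""
    if isinstance(value, (int, float)):
        return float(value)  # plain numbers default to seconds
    text = value.strip().lower()
    units = {"s": 1, "m": 60, "h": 3600}
    if text and text[-1] in units:
        return float(text[:-1]) * units[text[-1]]
    return float(text)  # bare numeric string, e.g. "30"
```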

### Real-World Scenario Examples

**Gradual Load Increase**:
```yaml
name: "Load Test Ramp-Up"
steps:
  - start_time: "0s"
    interval: "60s"
    iterations: 5
    parameters:
      number: 50   # Light load
  - start_time: "5m"  
    interval: "30s"
    iterations: 10
    parameters:
      number: 100  # Medium load
  - start_time: "10m"
    interval: "15s" 
    iterations: 20
    parameters:
      number: 200  # Heavy load
```

**Multi-Service Simulation**:
```yaml
name: "Microservices Load"
steps:
  - start_time: "0s"
    interval: "20s"
    iterations: 15
    parameters:
      telemetry_attributes:
        - "service=api-gateway"
  - start_time: "10s"
    interval: "25s"
    iterations: 12  
    parameters:
      telemetry_attributes:
        - "service=user-service"
  - start_time: "20s"
    interval: "35s"
    iterations: 10
    parameters:
      telemetry_attributes:
        - "service=order-service"
```

### Scenario Execution Flow

1. **Load & Validate**: YAML file is parsed and validated
2. **Schedule All Steps**: Each step is scheduled in its own thread
3. **Asynchronous Execution**: Steps wait for their start time, then execute iterations
4. **Real-time Logging**: Progress, commands, and parameters logged at INFO level
5. **Graceful Shutdown**: Ctrl+C stops all threads cleanly

Example output:
```
INFO - Starting scenario: Production Load Test
INFO - Estimated total scenario duration: 390.0s
INFO - All 3 steps scheduled. Waiting for completion...
INFO - Executing step 1, iteration 1/10 at 14:30:00 UTC (T+0.0s)
INFO - Step 1.1 flog command: flog -f json -n 100 -s 10s
INFO - Step 1.1 parameters: {'format': 'json', 'number': 100, ...}
INFO - Step 2 scheduled to start in 120.0s
INFO - Step 1.1 completed in 11.2s (100 logs)
INFO - Executing step 1, iteration 2/10 at 14:30:30 UTC (T+30.0s)
```
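The execution flow above can be sketched with standard-library threading. This is an illustrative sketch, not the package's actual implementation: `steps` holds `start_time`/`interval` already converted to seconds, and `execute(step, iteration)` stands in for one flog-otlp run.

```python
import threading
import time

def run_scenario(steps, execute):
    """Schedule each step on its own timer thread (illustrative)."""
    def run_step(step):
        for i in range(step["iterations"]):
            execute(step, i)  # one flog-otlp run per iteration
            if i < step["iterations"] - 1:
                time.sleep(step["interval"])  # wait between iterations

    timers = []
    for step in steps:
        # Each step waits for its start offset, then runs its iterations.
        t = threading.Timer(step["start_time"], run_step, args=(step,))
        t.start()
        timers.append(t)
    for t in timers:
        t.join()  # Timer is a Thread; join waits for run_step to finish
```

Because every step runs on its own timer thread, steps with overlapping schedules naturally execute concurrently, which is what allows the baseline, spike, and recovery phases in the example scenario to interleave.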

## Docker Usage

### Building the Docker Image

```bash
# Build with uv (fastest, recommended)
docker build -f Dockerfile.uv -t flog-otlp:uv .

# Build standard image
docker build -t flog-otlp .

# Alternative build if issues occur
docker build -f Dockerfile.alt -t flog-otlp:alt .
```

### Running with Docker

```bash
# Show help
docker run --rm flog-otlp

# Basic usage (default: 200 logs over 10 seconds to localhost)
docker run --rm flog-otlp -n 100 -s 5s

# Send to external OTLP endpoint
docker run --rm flog-otlp \
  --otlp-endpoint https://your-collector:4318/v1/logs \
  -f json -n 50

# With custom attributes and headers
docker run --rm flog-otlp \
  --otlp-attributes environment=production \
  --otlp-attributes region=us-west-2 \
  --telemetry-attributes app=test-app \
  --otlp-header "Authorization=Bearer your-token" \
  -f apache_combined -n 100

# Recurring execution (run 5 times with 30s intervals)
docker run --rm flog-otlp \
  --wait-time 30 --max-executions 5 \
  -n 200 -f json

# Long-running container (until stopped)
docker run --rm --name flog-generator flog-otlp \
  --wait-time 60 --max-executions 0 \
  --otlp-endpoint http://host.docker.internal:4318/v1/logs
```

### Running with Podman

When using Podman, `localhost` inside the container refers to the container itself, not the host. Use these alternatives to access services on your host:

```bash
# Recommended: Use host.containers.internal (modern Podman)
podman run --rm flog-otlp:uv \
  --otlp-endpoint http://host.containers.internal:4318/v1/logs \
  -f json -n 50

# Alternative 1: Use host networking (container shares host network)
podman run --rm --network=host flog-otlp:uv \
  --otlp-endpoint http://localhost:4318/v1/logs \
  -f json -n 50

# Alternative 2: Use your host's actual IP address
podman run --rm flog-otlp:uv \
  --otlp-endpoint http://YOUR_HOST_IP:4318/v1/logs \
  -f json -n 50

# Alternative 3: For older Podman versions, try host.docker.internal
podman run --rm flog-otlp:uv \
  --otlp-endpoint http://host.docker.internal:4318/v1/logs \
  -f json -n 50
```

### Docker Compose Example

```yaml
version: '3.8'
services:
  flog-otlp:
    build: .
    command: >
      --otlp-endpoint http://otel-collector:4318/v1/logs
      --wait-time 30
      --max-executions 0
      -f json
      -n 100
    environment:
      - LOG_LEVEL=INFO
    depends_on:
      - otel-collector
    
  otel-collector:
    image: otel/opentelemetry-collector:latest
    # ... collector configuration
```

### Docker Image Details

- **Base Image**: `python:3.13-slim` for minimal footprint
- **Multi-stage Build**: Uses Go builder stage to compile flog, then copies binary to final image
- **Security**: Runs as non-root user (`flog-user`)  
- **Size**: Optimized with `.dockerignore` to exclude unnecessary files
- **Dependencies**: Includes both `flog` binary and `flog-otlp` Python package
- **uv Support**: `Dockerfile.uv` provides fastest dependency installation

### Troubleshooting Docker Build

If a Docker build fails, try the images in this order:

1. **Recommended**: the uv-based build (fastest)
2. **Standard**: the main Dockerfile
3. **Fallback**: the alternative Dockerfile

```bash
# Build with uv (recommended - fastest and most reliable)
docker build -f Dockerfile.uv -t flog-otlp:uv .

# Standard build
docker build -t flog-otlp .

# Alternative build (if others fail)
docker build -f Dockerfile.alt -t flog-otlp:alt .
```

## Recurring Executions
Recurring execution enables use cases such as continuous log generation for testing, scheduled batch processing, and long-running observability scenarios.

### Smart Mode Detection
- **Single mode**: when `wait-time=0` and `max-executions=1` (the default)
- **Recurring mode**: when `wait-time>0` or `max-executions≠1`

The wrapper can invoke your flog command and forward logs on a configurable interval.

- `--wait-time` (seconds). Default: `0` (single execution); `> 0`: time to wait between flog executions. Examples: `--wait-time 30`, `--wait-time 120.5`
- `--max-executions` (count). Default: `1` (single execution); `0`: run forever until stopped (Ctrl+C); `> 1`: run the specified number of times. Examples: `--max-executions 10`, `--max-executions 0`

### Graceful Interruption
- Ctrl+C stops gracefully with a summary report
- The current execution completes before stopping
- No data loss during interruption
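The mode detection rule reduces to a few lines of Python. Illustrative sketch only; the package's internal logic may differ.

```python
def execution_mode(wait_time=0.0, max_executions=1):
    """Classify the run mode from the two flags (illustrative)."""
    # Single mode only when both flags keep their defaults.
    if wait_time == 0 and max_executions == 1:
        return "single"
    return "recurring"
```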

```bash
# Run 10 times with 30 second intervals
flog-otlp --wait-time 30 --max-executions 10

# Run forever with 1 minute intervals
flog-otlp --wait-time 60 --max-executions 0

# Generate 100 logs every 2 minutes, run 24 times (48 hours)
flog-otlp -n 100 -s 5s --wait-time 120 --max-executions 24

# High-frequency: 50 logs every 10 seconds, run until stopped
flog-otlp -n 50 -s 2s --wait-time 10 --max-executions 0

# JSON logs with custom attributes, 5 executions
flog-otlp -f json -n 200 \
  --otlp-attributes environment=production \
  --wait-time 45 --max-executions 5
```

### Detailed Logging

```
Execution #3 started at 14:30:15 UTC
Executing: flog -f apache_common -n 100 -s 5s
[... processing logs ...]
Execution #3 completed in 7.2s (100 logs)
Waiting 30s before next execution...
Comprehensive Summary:
EXECUTION SUMMARY:
  Total executions: 5
  Total logs processed: 1,000  
  Total runtime: 187.3s
  Average logs per execution: 200.0
  Started: 2025-09-01 14:25:00 UTC
  Ended: 2025-09-01 14:28:07 UTC
```

## Parameters Reference

### OTLP Configuration
| Parameter | Description | Default | Example |
|-----------|-------------|---------|---------|
| `--otlp-endpoint` | OTLP logs endpoint URL | `http://localhost:4318/v1/logs` | `https://collector:4318/v1/logs` |
| `--service-name` | Service name in resource attributes | `flog-generator` | `web-server` |
| `--otlp-attributes` | Resource-level attributes (repeatable) | None | `--otlp-attributes env=prod` |
| `--telemetry-attributes` | Log-level attributes (repeatable) | None | `--telemetry-attributes app=nginx` |
| `--otlp-header` | Custom HTTP headers (repeatable) | None | `--otlp-header "Auth=Bearer xyz"` |

### Log Generation (flog)
| Parameter | Description | Default | Example |
|-----------|-------------|---------|---------|
| `-f, --format` | Log format | `apache_common` | `json`, `rfc5424`, `apache_combined` |
| `-n, --number` | Number of logs to generate | `200` | `1000` |
| `-s, --sleep` | Duration to generate logs over | `10s` | `5s`, `2m`, `1h` |
| `-r, --rate` | Rate limit (logs/second) | None | `50` |
| `-p, --bytes` | Bytes limit per second | None | `1024` |
| `-d, --delay-flog` | Delay between log generation | None | `100ms` |
| `--no-loop` | Disable infinite loop mode | False | N/A |

### Execution Control
| Parameter | Description | Default | Example |
|-----------|-------------|---------|---------|
| `--scenario` | Path to YAML scenario file | None | `scenario.yaml`, `./tests/load.yaml` |
| `--wait-time` | Seconds between executions | `0` (single) | `30`, `120.5` |
| `--max-executions` | Number of executions (0=infinite) | `1` | `10`, `0` |
| `--delay` | Delay between individual log sends | `0.1` | `0.05`, `0` |
| `--verbose` | Enable verbose output | False | N/A |

### Supported Log Formats
- `apache_common` - Apache Common Log Format
- `apache_combined` - Apache Combined Log Format  
- `apache_error` - Apache Error Log Format
- `rfc3164` - RFC3164 (Legacy Syslog)
- `rfc5424` - RFC5424 (Modern Syslog)
- `common_log` - Common Log Format
- `json` - JSON structured logs

## Development

### Modern Workflow with uv (Recommended)

```bash
# Setup development environment
uv sync --group dev

# Run tests
uv run pytest

# Run tests with coverage  
uv run pytest --cov=flog_otlp --cov-report=term-missing

# Run specific test file
uv run pytest tests/test_sender.py

# Code formatting
uv run --group lint black src/ tests/
uv run --group lint ruff format src/ tests/

# Linting and type checking
uv run --group lint ruff check src/ tests/
uv run --group lint mypy src/

# Run application
uv run flog-otlp --help

# Use Makefile for convenience
make test        # Run tests
make lint        # Run all linting
make format      # Format code  
make check       # Format, lint, and test
```

### Traditional Workflow (pip/pytest)

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=flog_otlp

# Code quality tools
black src/ tests/
ruff check src/ tests/
mypy src/
```

### Project Structure

```
flog_otlp/
├── src/flog_otlp/           # Main package
│   ├── __init__.py          # Package initialization
│   ├── cli.py               # Command-line interface
│   ├── sender.py            # Core OTLP sender logic
│   ├── scenario.py          # Scenario YAML parsing and execution (NEW v0.2.0)
│   ├── parser.py            # Argument parsing utilities
│   └── logging_config.py    # Logging configuration
├── scripts/
│   └── run.py               # Development runner (no install needed)
├── tests/                   # Test suite
├── example_scenario.yaml    # Example scenario file (NEW v0.2.0)
└── pyproject.toml           # Package configuration
```

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests and linting
5. Submit a pull request

## License

See LICENSE file for details.

            

    "bugtrack_url": null,
    "license": "MIT License\n        \n        Copyright (c) 2025 Rick Jury\n        \n        Permission is hereby granted, free of charge, to any person obtaining a copy\n        of this software and associated documentation files (the \"Software\"), to deal\n        in the Software without restriction, including without limitation the rights\n        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n        copies of the Software, and to permit persons to whom the Software is\n        furnished to do so, subject to the following conditions:\n        \n        The above copyright notice and this permission notice shall be included in all\n        copies or substantial portions of the Software.\n        \n        THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n        SOFTWARE.",
    "summary": "A Python utility that generates realistic log data using flog and sends it to OpenTelemetry Protocol (OTLP) endpoints",
    "version": "0.2.0",
    "project_urls": {
        "Homepage": "https://github.com/rjury-sumo/flog_otlp/",
        "Issues": "https://github.com/rjury-sumo/flog_otlp/issues",
        "Repository": "https://github.com/rjury-sumo/flog_otlp.git"
    },
    "split_keywords": [
        "flog",
        " logging",
        " observability",
        " opentelemetry",
        " otlp",
        " testing"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "524b9d06b54da87aa3d0a87b1b07f9efc574c8b28dec30e9033d84c56644c780",
                "md5": "9f989cb0f619ba0bfdefe0155c37ebb6",
                "sha256": "78440820d8d823d4898cfed50c409f12b7ee06567dbc3b6ed69ea5e42978ad7e"
            },
            "downloads": -1,
            "filename": "flog_otlp-0.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9f989cb0f619ba0bfdefe0155c37ebb6",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.13",
            "size": 19831,
            "upload_time": "2025-09-02T00:31:26",
            "upload_time_iso_8601": "2025-09-02T00:31:26.297151Z",
            "url": "https://files.pythonhosted.org/packages/52/4b/9d06b54da87aa3d0a87b1b07f9efc574c8b28dec30e9033d84c56644c780/flog_otlp-0.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "14b5b94fa970ad8beb3593320dcbcec27a1e8f2ec6171927da360131bb9ac310",
                "md5": "1c4db7cf60e15a95f199ca35d3f88e16",
                "sha256": "75ef8d5b9d420bc5d0dc62f67771b1aae8efb70bdae3fcf7edc15accc8b03a7c"
            },
            "downloads": -1,
            "filename": "flog_otlp-0.2.0.tar.gz",
            "has_sig": false,
            "md5_digest": "1c4db7cf60e15a95f199ca35d3f88e16",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.13",
            "size": 67876,
            "upload_time": "2025-09-02T00:31:27",
            "upload_time_iso_8601": "2025-09-02T00:31:27.768574Z",
            "url": "https://files.pythonhosted.org/packages/14/b5/b94fa970ad8beb3593320dcbcec27a1e8f2ec6171927da360131bb9ac310/flog_otlp-0.2.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-09-02 00:31:27",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "rjury-sumo",
    "github_project": "flog_otlp",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "requests",
            "specs": [
                [
                    ">=",
                    "2.25.0"
                ]
            ]
        }
    ],
    "lcname": "flog-otlp"
}