# Structured Logging Documentation
This module provides structured logging capabilities that allow you to output either human-readable
logs or JSON-formatted structured logs. These features are especially useful for debugging in
development and for generating easily parseable logs in production environments.
## Overview
The logging setup supports two main modes:
1. **Human-readable logs**: Provides logs in a colored, properly formatted output for local development.
2. **Structured logs**: Outputs logs as JSON objects, with each log record represented as a single line. This mode is ideal for production environments where logs need to be processed by log aggregation systems.
### Key Functions
- `add_logging_args()`: Adds logging configuration options to an `argparse.ArgumentParser` instance, making it easy to configure logging via command-line arguments.
- `setup()`: Directly configures logging by specifying the logging level and format (structured or human-readable).
- `set_context()`: Assigns a custom context to be logged with each message in structured logging mode.
- `log_multipart()`: Logs large messages by splitting them into chunks and compressing the data.
---
## Usage
### Basic Setup with `setup()`
To initialize logging in your application, call the `setup()` function. You can specify whether to enable structured logging or use the default human-readable format.
```python
from my_logging_module import setup
# Initialize logging
setup(level="INFO", structured=False) # Human-readable format
# Enable structured logging for production
setup(level="INFO", structured=True)
```
#### Parameters for `setup()`
- `level`: The logging level (e.g., "DEBUG", "INFO", "WARNING", "ERROR"). This can be a string or an integer.
- `structured`: A boolean that controls whether structured logging is enabled. Set to `True` for JSON logs.
- `allow_trailing_dot`: By default, log messages must not end with a trailing dot; set this to `True` to allow it.
- `level_from_msg`: An optional function to dynamically change the logging level based on the content of the message.
- `ensure_utf8_streams`: Ensures that `stdout` and `stderr` use UTF-8 encoding.
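To illustrate what `level_from_msg` enables, here is a stdlib-only sketch (not flogging's internals): a callable maps message text to an overriding level, applied through a `logging.Filter`. The `level_from_msg` mapper below is a hypothetical example that demotes noisy health-check messages.

```python
import logging

def level_from_msg(msg: str):
    """Hypothetical mapper: demote noisy health-check messages to DEBUG."""
    if "healthcheck" in msg:
        return logging.DEBUG
    return None  # keep the record's original level

class LevelFromMsgFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        new_level = level_from_msg(record.getMessage())
        if new_level is not None:
            record.levelno = new_level
            record.levelname = logging.getLevelName(new_level)
        return True

logger = logging.getLogger("demo")
logger.addFilter(LevelFromMsgFilter())
```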
### Adding Logging Arguments with `add_logging_args()`
You can easily integrate logging configuration options into your command-line interface using `add_logging_args()`. This function automatically adds command-line flags for setting the logging level and format.
#### Command-Line Flags
- `--log-level`: Set the logging verbosity (e.g., "DEBUG", "INFO", "WARNING").
- `--log-structured`: Enable structured logging (outputs logs in JSON format).
#### Environment Variables
You can also set the logging level and format using environment variables:
- `LOG_LEVEL`: Set the logging level.
- `LOG_STRUCTURED`: Enable structured logging.
```bash
LOG_LEVEL=DEBUG LOG_STRUCTURED=1 python my_app.py
```
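Under the hood, environment-variable configuration amounts to something like the following stdlib sketch (variable names taken from the list above; flogging's exact parsing may differ):

```python
import logging
import os

def config_from_env():
    """Read LOG_LEVEL / LOG_STRUCTURED from the environment with defaults."""
    level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, level_name, logging.INFO)
    structured = os.environ.get("LOG_STRUCTURED", "") not in ("", "0", "false")
    return level, structured

level, structured = config_from_env()
```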
### Structured Logging in Production
To enable structured logging (JSON logs), you can either set the `--log-structured` flag when running your application or configure it programmatically using `setup()`:
```bash
python my_app.py --log-level DEBUG --log-structured
```
In structured logging mode, each log entry is a JSON object with the following fields:
- `level`: The log level (e.g., "info", "error").
- `msg`: The log message.
- `source`: The file and line number where the log occurred.
- `time`: The timestamp of the log event.
- `thread`: The thread ID in a shortened format.
- `name`: The logger name.
Example structured log output:
```json
{
"level": "info",
"msg": "Application started",
"source": "app.py:42",
"time": "2023-09-23T14:22:35.000+00:00",
"thread": "f47c",
"name": "my_app"
}
```
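A minimal stdlib formatter that emits the same field names can be sketched as follows (an illustration only; flogging's actual formatter handles more edge cases):

```python
import json
import logging
import threading
from datetime import datetime, timezone

class StructuredFormatter(logging.Formatter):
    """Emit each record as a single-line JSON object."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "level": record.levelname.lower(),
            "msg": record.getMessage(),
            "source": "%s:%d" % (record.filename, record.lineno),
            "time": datetime.now(timezone.utc).isoformat(),
            "thread": "%04x" % (threading.get_ident() % 0x10000),
            "name": record.name,
        }
        return json.dumps(entry)
```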
### Custom Context with `set_context()`
In structured logging mode, you can attach additional context to each log message by calling `set_context()`. This context is logged alongside the usual fields, allowing you to track custom metadata.
```python
from my_logging_module import set_context
# Set custom context
set_context({"user_id": "12345", "transaction_id": "abcde"})
# The custom context will now appear in each structured log message
```
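Conceptually, the context is merged into every structured record. With the stdlib alone, the same effect can be sketched with a module-level dict and a `logging.Filter` (hypothetical names, not flogging's implementation):

```python
import logging

_context = {}

def set_context(ctx):
    """Replace the global context attached to subsequent records."""
    _context.clear()
    _context.update(ctx)

class ContextFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.context = dict(_context)  # snapshot for the formatter to serialize
        return True
```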
### Handling Large Log Messages with `log_multipart()`
When logging large messages (e.g., serialized data or files), the `log_multipart()` function compresses and splits the message into smaller chunks to prevent issues with log size limits.
```python
import logging

from my_logging_module import log_multipart

# Log a large message
log_multipart(logging.getLogger(), b"Large data to be logged")
```
This function will automatically split the message and log each chunk, ensuring the entire message is captured.
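The chunk-and-compress idea can be sketched with `zlib` and `base64` (an illustration of the approach; flogging's wire format, chunk size, and message layout may differ):

```python
import base64
import logging
import zlib

def log_multipart_sketch(log: logging.Logger, data: bytes, chunk_size: int = 1000) -> None:
    """Compress `data`, base64-encode it, and log it in numbered chunks."""
    encoded = base64.b64encode(zlib.compress(data)).decode("ascii")
    chunks = [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]
    for i, chunk in enumerate(chunks, 1):
        log.info("part %d/%d: %s", i, len(chunks), chunk)
```

A consumer can reassemble the original payload by concatenating the chunks, base64-decoding, and decompressing.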
---
## Customizing the Logging Format
### Human-Readable Logs
By default, when not using structured logging, logs are output in a colored format, with color-coding based on the log level:
- **DEBUG**: Gray
- **INFO**: Cyan
- **WARNING**: Yellow
- **ERROR/CRITICAL**: Red
You can further customize the format by modifying the `AwesomeFormatter` class, which is used for formatting logs in human-readable mode. It also shortens thread IDs for easier readability.
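The color mapping above can be reproduced with ANSI escape codes. A minimal sketch (flogging's `AwesomeFormatter` does more, e.g. thread-ID shortening):

```python
import logging

COLORS = {
    logging.DEBUG: "\033[90m",     # gray
    logging.INFO: "\033[36m",      # cyan
    logging.WARNING: "\033[33m",   # yellow
    logging.ERROR: "\033[31m",     # red
    logging.CRITICAL: "\033[31m",  # red
}
RESET = "\033[0m"

class ColorFormatter(logging.Formatter):
    """Wrap the formatted record in the ANSI color for its level."""

    def format(self, record: logging.LogRecord) -> str:
        color = COLORS.get(record.levelno, "")
        return "%s%s%s" % (color, super().format(record), RESET)
```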
### Enforcing Logging Standards
To enforce standards in your logging messages, such as preventing trailing dots in log messages, the module provides the `check_trailing_dot()` decorator. This can be applied to logging functions to raise an error if a message ends with a dot:
```python
from my_logging_module import check_trailing_dot
@check_trailing_dot
def log_message(record):
    # Your custom logging logic
    pass
```
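A decorator with this behavior can be sketched as follows (an illustration; the real `check_trailing_dot` in flogging may differ in details such as the exception type):

```python
import functools
import logging

def check_trailing_dot(func):
    """Raise AssertionError if the record's message ends with a dot."""
    @functools.wraps(func)
    def wrapper(record: logging.LogRecord):
        msg = record.getMessage()
        if msg.endswith("."):
            raise AssertionError("log message must not end with a dot: %r" % msg)
        return func(record)
    return wrapper

@check_trailing_dot
def log_message(record):
    # Your custom logging logic
    pass
```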
---
## Best Practices
- Use **human-readable logs** in development for easier debugging.
- Switch to **structured logging** in production to enable easier parsing and aggregation by log management tools.
- **Set custom contexts** to include additional metadata in your logs, such as user IDs or request IDs, to improve traceability in production.
- **Use multipart logging** to handle large log messages that might otherwise exceed log size limits.
---
## Example
Here's a full example of how to use structured logging with command-line configuration:
```python
import argparse
import logging
from flogging import add_logging_args, set_context
# Create argument parser
parser = argparse.ArgumentParser(description="My Application")
add_logging_args(parser)
# Parse arguments and setup logging
args = parser.parse_args()
# Set additional context for structured logging
set_context({"request_id": "123abc"})
# Start logging messages
logger = logging.getLogger("my_app")
logger.info("Application started")
```