Name | logging-metrics
Version | 0.1.2
Summary | Advanced logging utilities for robust, standardized logs in Python projects, APIs, data engineering, and more.
upload_time | 2025-08-06 14:24:27
home_page | None
maintainer | None
docs_url | None
author | None
requires_python | >=3.8
license | None
keywords | logging, metrics, python, spark, instrumentation
# logging-metrics - Utilities Library for Logging Configuration and Management
This module provides functions and classes to configure logging for different environments and use cases:
- Colored logs for the terminal
- Rotating log files (by time or size)
- Customizable settings for different verbosity levels
- Text or JSON formatters compatible with external analysis tools
- Utilities for timing operations and collecting custom metrics
- **Utility functions for logging PySpark DataFrames** (e.g., row count, schema, samples, and basic statistics)
Main Components:
----------------
- `ColoredFormatter`: Colorized terminal output for quick identification of log levels
- `JSONFormatter`: JSON-formatted logs for external tool integration
- Functions to create handlers (console, file, rotation by time or size)
- `LogTimer`: Measure execution time of code blocks (context manager or decorator)
- `LogMetrics`: Collect and log custom metrics (counters, timers, values)
- `log_spark_dataframe_info`: Easy, structured logging for PySpark DataFrames
This toolkit is recommended for data pipelines, ETLs, and projects where traceability, auditability, and log performance are critical requirements.
---
This README.md covers:
- Purpose
- Installation
- Main Features
- Best Practices
- Usage Example
- PySpark DataFrame integration
- Dependencies & License
---
# logging-metrics
A library for configuring and managing logs in Python, focused on simplicity and performance.
---
#### ✨ Features
- 🎨 Colored logs for the terminal with different levels
- 📁 Automatic file rotation by time or size
- ⚡ PySpark DataFrame integration
- 📊 JSON format for observability systems
- ⏱️ Timing with LogTimer
- 📈 Metrics monitoring with LogMetrics
- 🔧 Hierarchical logger configuration
- 🚀 Optimized performance for critical applications
---
## 📦 Installation
#### Install via pip:
```bash
pip install logging-metrics
```
#### For development:
```bash
git clone https://github.com/thaissateodoro/logging-metrics.git
cd logging-metrics
pip install -e ".[dev]"
```
---
## 📋 Functions and Classes Overview

#### Main Functions

| Name | Type | Description |
|------|------|-------------|
| `configure_basic_logging` | Function | Configures the root logger for colored console logging. |
| `setup_file_logging` | Function | Configures a logger with rotating file output, optional console output, and optional JSON formatting. |
| `LogTimer` | Class | Context manager and decorator that logs the execution time of code blocks or functions. |
| `log_spark_dataframe_info` | Function | Logs a PySpark DataFrame's schema, row count, sample rows, and basic statistics. |
| `LogMetrics` | Class | Utility for collecting, incrementing, timing, and logging custom processing metrics. |
| `get_logger` | Function | Returns a logger with custom handlers and a caplog-friendly mode for pytest. |
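The table's `get_logger` entry mentions a caplog-friendly mode, but its signature is not documented in this README. The snippet below is therefore only a hedged sketch: it assumes `get_logger(name)` accepts a logger name and returns a standard `logging.Logger` whose records propagate, which is what pytest's `caplog` fixture relies on.

```python
import logging

from logging_metrics import get_logger  # signature assumed; see note above


def test_warning_is_captured(caplog):
    # Assumption: get_logger(name) returns a stdlib logging.Logger whose
    # records propagate, so pytest's caplog fixture can capture them.
    logger = get_logger("my_app.tests")
    with caplog.at_level(logging.WARNING):
        logger.warning("disk usage at 91%")
    assert "disk usage" in caplog.text
```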
---
### Utility Classes
#### LogTimer
- Context manager: `with LogTimer(logger, "operation"):`
- Decorator: `@LogTimer.as_decorator(logger, "function")`
- Manual: `timer.start()` / `timer.stop()`

#### LogMetrics
- Counters: `metrics.increment('counter')`
- Timers: `metrics.start('timer')` / `metrics.stop('timer')`
- Context manager: `with metrics.timer('operation'):`
- Report: `metrics.log_all()` (see the combined sketch below)
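Putting those bullets together, here is a minimal sketch that uses only the calls listed above (manual `start()`/`stop()` on `LogTimer`, plus the `timer()` context manager and counters on `LogMetrics`); the exact wording of the log messages comes from the library itself:

```python
from logging_metrics import LogTimer, LogMetrics, configure_basic_logging

logger = configure_basic_logging()

# Manual LogTimer usage: start/stop around an arbitrary span of work
timer = LogTimer(logger, "manual span")
timer.start()
total = sum(range(1_000_000))  # stand-in for real work
timer.stop()

# LogMetrics: a timer used as a context manager, plus a counter
metrics = LogMetrics(logger)
with metrics.timer('batch'):
    for _ in range(3):
        metrics.increment('items_seen')

metrics.log_all()  # report all counters, values, and completed timers
```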
---
## 🚀 Quick Start
```python
import logging
from logging_metrics import setup_file_logging, LogTimer

# Basic configuration
logger = setup_file_logging(
    logger_name="my_app",
    log_dir="./logs",
    console_level=logging.INFO,  # Less verbose in console
    level=logging.DEBUG          # More detailed in the file
)

# Simple usage
logger.info("Application started!")

# Timing operations
with LogTimer(logger, "Critical operation"):
    # your code here
    pass
```
---
## 📖 Main Features
1. Logging configuration:
```python
from logging_metrics import configure_basic_logging

logger = configure_basic_logging()
logger.debug("Debug message")  # Gray
logger.info("Info")            # Green
logger.warning("Warning")      # Yellow
logger.error("Error")          # Red
logger.critical("Critical")    # Bold red
```
2. Automatic Log Rotation:
```python
from logging_metrics import setup_file_logging

# Size-based rotation
logger = setup_file_logging(
    logger_name="app",
    log_dir="./logs",
    max_bytes=10*1024*1024,  # 10MB
    rotation='size'
)

# Time-based rotation
logger = setup_file_logging(
    logger_name="app",
    log_dir="./logs",
    rotation='time'
)
```
3. Spark/Databricks Integration:
```python
from pyspark.sql import SparkSession
from logging_metrics import configure_basic_logging, log_spark_dataframe_info

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "Ana"), (2, "Bruno")], ["id", "nome"])

logger = configure_basic_logging()

log_spark_dataframe_info(df=df, logger=logger, name="spark_app")

logger.info("Spark processing started")
```
4. ⏱ Timing with LogTimer:
```python
from logging_metrics import LogTimer, configure_basic_logging

logger = configure_basic_logging()

# As a context manager
with LogTimer(logger, "DB query"):
    logger.info("Test")

# As a decorator
@LogTimer.as_decorator(logger, "Data processing")
def process_data(data):
    return data.transform()
```
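For completeness, a hedged usage sketch of the decorated function above; `Dataset` is a hypothetical stand-in, and the wording of the timing message is determined by LogTimer itself:

```python
class Dataset:
    """Hypothetical stand-in providing the .transform() the example expects."""
    def transform(self):
        return self

# Calling the decorated function logs its elapsed time under "Data processing".
result = process_data(Dataset())
```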
5. 📈 Metrics Monitoring:
```python
import time
from logging_metrics import LogMetrics, configure_basic_logging

logger = configure_basic_logging()
metrics = LogMetrics(logger)

items = [10, 5, 80, 60, 'test1', 'test2']

# Start timer for the total operation
metrics.start('total_processing')

for item in items:
    # Increment the processed-records counter
    metrics.increment('records_processed')

    # Count string items as simulated errors
    if isinstance(item, str):
        metrics.increment('errors')

    # Simulate item processing
    time.sleep(0.1)

    # Custom value example
    metrics.set('last_item', item)

# Finalize the total-operation timer
elapsed = metrics.stop('total_processing')

# Log all collected metrics
metrics.log_all()

# Output:
# --- Processing Metrics ---
# Counters:
#   - records_processed: 6
#   - errors: 2
# Values:
#   - last_item: test2
# Completed timers:
#   - total_processing: 0.60 seconds
```
6. Hierarchical Configuration:
```python
import logging
from logging_metrics import setup_file_logging

# Main logger
main_logger = setup_file_logging("my_app", log_dir="./logs")

# Sub-loggers organized hierarchically
db_logger = logging.getLogger("my_app.database")
api_logger = logging.getLogger("my_app.api")
auth_logger = logging.getLogger("my_app.auth")

# Module-specific configuration
db_logger.setLevel(logging.DEBUG)      # More verbose for DB
api_logger.setLevel(logging.INFO)      # Normal for API
auth_logger.setLevel(logging.WARNING)  # Only warnings/errors for auth

db_logger.debug("Querying the database")
db_logger.info("Database query completed successfully")
db_logger.error("Error connecting to database!")

auth_logger.debug("Authenticating user")
auth_logger.info("Authentication completed successfully")
auth_logger.error("Auth error!")

api_logger.debug("Querying the API")
api_logger.info("API query completed successfully")
api_logger.error("Error querying the API")
```
7. 📊 JSON Format for Observability:
```python
from logging_metrics import setup_file_logging

# JSON logs for integration with ELK, Grafana, etc.
logger = setup_file_logging(
    logger_name="microservice",
    log_dir="./logs",
    json_format=True
)

logger.info("User logged in", extra={"user_id": 12345, "action": "login"})

# Example JSON output:
# {
#   "timestamp": "2024-08-05T10:30:00.123Z",
#   "level": "INFO",
#   "name": "microservice",
#   "message": "User logged in",
#   "module": "user-api",
#   "function": "<module>",
#   "line": 160,
#   "taskName": null,
#   "user_id": 12345,
#   "action": "login"
# }
```
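Because each record is emitted as one JSON object per line, downstream tools can read the file with nothing but the standard library. A minimal sketch (the log file name is illustrative; `setup_file_logging` decides the actual name inside `log_dir`):

```python
import json

# Illustrative path; the actual file name is chosen by setup_file_logging.
with open("./logs/microservice.log", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        if record["level"] == "ERROR":
            print(record["timestamp"], record["message"])
```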
---
## 🏆 Best Practices
1. Configure logging once at the start:
```python
# In main.py or __init__.py
logger = setup_file_logging("my_app", log_dir="./logs")
```
2. Use logger hierarchy:
```python
# Organize by modules/features
db_logger = logging.getLogger("app.database")
api_logger = logging.getLogger("app.api")
```
3. Different levels for console and file:
```python
logger = setup_file_logging(
    console_level=logging.WARNING,  # Less verbose in console
    level=logging.DEBUG             # More detailed in the file
)
```
4. Use LogTimer for critical operations:
```python
with LogTimer(logger, "Complex query"):
    result = run_heavy_query()
```
5. Monitor metrics in long processes:
```python
metrics = LogMetrics(logger)
for batch in batches:
    with metrics.timer('batch_processing'):
        process_batch(batch)
```
---
## ❌ Avoid
- Configuring loggers multiple times
- Using print() instead of logger
- Excessive logging in critical loops (see the sketch after this list)
- Exposing sensitive information in logs
- Ignoring log file rotation
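For the items about loop overhead and sensitive data, a minimal sketch using only the standard logging module (nothing here is specific to logging-metrics):

```python
import logging

logger = logging.getLogger("app.worker")
rows = [{"user": "ana", "password": "s3cret"}, {"user": "bruno", "password": "hunter2"}]

for i, row in enumerate(rows):
    # Guard expensive debug work so hot loops pay almost nothing when DEBUG is off.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("row %d payload keys: %s", i, sorted(row))
    # Log identifiers, never secrets such as passwords or tokens.
    logger.info("processed row for user %s", row["user"])
```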
---
## 🔧 Advanced Configuration
Example of full configuration:
```python
import logging
from logging_metrics import setup_file_logging

# Main configuration with all options
logger = setup_file_logging(
    logger_name="my_app",
    log_dir="./logs",
    level=logging.DEBUG,
    console_level=logging.INFO,
    rotation='time',
    log_format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    date_format="%Y-%m-%d %H:%M:%S",
    max_bytes=50*1024*1024,  # 50MB
    backup_count=10,
    add_console=True
)

# Sub-module configuration
modules = ['database', 'api', 'auth', 'cache']
for module in modules:
    module_logger = logging.getLogger(f"my_app.{module}")
    module_logger.setLevel(logging.INFO)
```
---
## 🧪 Complete Example
```python
import logging
from logging_metrics import setup_file_logging, LogTimer, LogMetrics

def main():
    # Initial configuration
    logger = setup_file_logging(
        logger_name="data_processor",
        log_dir="./logs",
        console_level=logging.INFO,
        level=logging.DEBUG
    )

    # Sub-loggers
    db_logger = logging.getLogger("data_processor.database")
    api_logger = logging.getLogger("data_processor.api")

    # Metrics
    metrics = LogMetrics(logger)

    logger.info("Application started")

    try:
        # Main processing with timing
        with LogTimer(logger, "Full processing"):
            metrics.start('total_processing')

            # Simulate processing
            for i in range(1000):
                metrics.increment('records_processed')

                if i % 100 == 0:
                    logger.info(f"Processed {i} records")

                # Simulate occasional error
                if i % 250 == 0:
                    metrics.increment('errors_recovered')
                    logger.warning(f"Recovered error at record {i}")

            metrics.stop('total_processing')
            metrics.log_all()

        logger.info("Processing successfully completed")

    except Exception as e:
        logger.error(f"Error during processing: {e}", exc_info=True)
        raise

if __name__ == "__main__":
    main()
```
---
## 🧪 Tests
The library has a complete test suite to ensure quality and reliability.
#### Running the tests:
```bash
# Install development dependencies
pip install -e ".[dev]"
# Run all tests
make test
# Tests with coverage
make test-cov
# Specific tests
pytest test/test_file_logging.py -v
# Tests with different verbosity levels
pytest test/ -v # Verbose
pytest test/ -s # No output capture
pytest test/ --tb=short # Short traceback
```
#### Test Structure
```
test/
├── conftest.py              # Shared pytest fixtures and test configuration
├── Makefile                 # Automation commands for testing, linting, and build tasks
├── pytest.ini               # Global pytest configuration settings
├── run_tests.py             # Script to run all tests automatically
├── test-requirements.txt    # Development and test dependencies
├── TEST_GUIDE.md            # Quick guide: how to run and interpret tests
└── test_logging_metrics.py  # Automated tests for the logging_metrics library
```
#### Current coverage
```
# Coverage report
Name                               Stmts   Miss  Cover
------------------------------------------------------
src/logging_metrics/__init__.py       12      0   100%
src/logging_metrics/console.py        45      2    96%
src/logging_metrics/file.py           78      3    96%
src/logging_metrics/spark.py          32      1    97%
src/logging_metrics/timer.py          56      2    96%
src/logging_metrics/metrics.py        89      4    96%
------------------------------------------------------
TOTAL                                312     12    96%
```
#### Running tests in different environments
```bash
# Test in multiple Python versions with tox
pip install tox
tox
# Specific configurations
tox -e py38 # Python 3.8
tox -e py39 # Python 3.9
tox -e py310 # Python 3.10
tox -e py311 # Python 3.11
tox -e py312 # Python 3.12
tox -e lint # Only linting
tox -e coverage # Only coverage
```
#### Running tests in CI/CD
Tests are also run automatically in the project's CI/CD pipeline.
---
## 🔧 Requirements
Python: >= 3.8
Dependencies:
- pytz (for timezone handling)
- pyspark
---
## 📝 Changelog
v0.1.2 (Current)
- Initial stable version
- LogTimer and LogMetrics
- Spark integration
- Colored logs
- JSON log support
- Fixed file rotation bug on Windows
- Expanded documentation with more examples
---
## 🤝 Contributing
#### Contributions are welcome!
1. Fork the project
2. Create your feature branch (`git checkout -b feature/logging-metrics`)
3. Commit your changes (`git commit -m 'Add logging-metrics'`)
4. Push to the branch (`git push origin feature/logging-metrics`)
5. Open a Pull Request
---
## License
MIT License. See LICENSE for details.