# PyLoggerX
[![PyPI version](https://badge.fury.io/py/pyloggerx.svg)](https://badge.fury.io/py/pyloggerx)
[![Python versions](https://img.shields.io/pypi/pyversions/pyloggerx.svg)](https://pypi.org/project/pyloggerx/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
**Bibliothèque de logging moderne, colorée et riche en fonctionnalités pour Python avec logging structuré, tracking de performance et logging distant.**
PyLoggerX est un wrapper puissant qui étend le module logging standard de Python avec une sortie console élégante, du logging JSON structuré, une rotation automatique des logs, et du logging distant vers des services populaires comme Elasticsearch, Grafana Loki, Sentry, Datadog, et plus encore. Conçu pour les workflows DevOps modernes et les applications cloud-native.
---
## Table des Matières
- [Fonctionnalités](#fonctionnalités)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Intégration DevOps](#intégration-devops)
- [Docker & Kubernetes](#docker--kubernetes)
- [Pipelines CI/CD](#pipelines-cicd)
- [Stack d'Observabilité](#stack-dobservabilité)
- [Infrastructure as Code](#infrastructure-as-code)
- [Guide d'Utilisation Complet](#guide-dutilisation-complet)
  - [Logging Distant](#logging-distant)
- [Fonctionnalités Avancées](#fonctionnalités-avancées)
- [Référence de Configuration](#référence-de-configuration)
- [Configuration Avancée](#configuration-avancée)
- [Chargement depuis Fichiers](#chargement-depuis-fichiers)
- [Configuration par Variables d'Environnement](#configuration-par-variables-denvironnement)
- [Configuration Multi-Sources](#configuration-multi-sources)
- [Validation de Configuration](#validation-de-configuration)
- [Configurations Prédéfinies](#configurations-prédéfinies)
- [Monitoring et Métriques](#monitoring-et-métriques)
- [Collecteur de Métriques](#collecteur-de-métriques)
- [Gestionnaire d'Alertes](#gestionnaire-dalertes)
- [Monitoring de Santé](#monitoring-de-santé)
- [Dashboard Console](#dashboard-console)
- [Intégrations Monitoring](#intégrations-monitoring)
- [Prometheus](#intégration-prometheus)
- [Grafana](#intégration-grafana)
- [Custom Metrics](#métriques-personnalisées)
- [Exemples Complets](#exemples-complets)
- [Référence Config](#référence-config)
- [Exemples Réels](#exemples-réels)
- [Meilleures Pratiques](#meilleures-pratiques)
- [Référence API](#référence-api)
- [Tests](#tests)
- [Dépannage](#dépannage)
- [Contribution](#contribution)
- [Licence](#licence)
---
## Fonctionnalités
### Fonctionnalités Core
- **Sortie Console Colorée** - Logs console élégants avec indicateurs emoji
- **Logging JSON Structuré** - Export de logs en format JSON structuré
- **Rotation Automatique** - Rotation basée sur la taille et le temps
- **Tracking de Performance** - Chronométrage et monitoring intégrés
- **Zéro Configuration** - Fonctionne immédiatement avec des valeurs par défaut raisonnables
- **Hautement Configurable** - Options de personnalisation étendues
- **Enrichissement de Contexte** - Injection de métadonnées automatique
- **Formats de Sortie Multiples** - Console, JSON, texte
### Fonctionnalités DevOps & Cloud-Native
- **Compatible Conteneurs** - Logging structuré adapté aux conteneurs
- **Compatible Kubernetes** - Sortie JSON pour les log collectors
- **Intégration CI/CD** - Support pour GitHub Actions, GitLab CI, Jenkins
- **Support Correlation ID** - Pour le tracing distribué
- **Logging Health Check** - Monitoring de santé des services
- **Format Prêt pour les Métriques** - Sortie compatible avec Prometheus
- **Configuration par Environnement** - Adaptation automatique selon l'environnement
- **Conforme 12-Factor App** - Suit les meilleures pratiques
### Fonctionnalités Avancées
- **Logging Distant** - Export vers Elasticsearch, Loki, Sentry, Datadog
- **Échantillonnage de Logs** - Gestion efficace des scénarios à haut volume
- **Limitation de Débit** - Prévention de l'inondation de logs
- **Filtrage Avancé** - Filtres par niveau, pattern ou logique personnalisée
- **Traitement par Batch** - Batching efficace pour les exports distants
- **Support Webhook** - Envoi de logs vers des endpoints personnalisés
- **Intégration Slack** - Alertes critiques dans Slack
- **Processing Asynchrone** - Non-bloquant pour les performances
---
## Installation
### Installation Basique
```bash
pip install pyloggerx
```
### Dans requirements.txt
```text
pyloggerx>=1.0.0
```
### Dans pyproject.toml (Poetry)
```toml
[tool.poetry.dependencies]
pyloggerx = "^1.0.0"
```
### Avec Support de Logging Distant
```bash
# Pour Elasticsearch
pip install pyloggerx[elasticsearch]
# Pour Sentry
pip install pyloggerx[sentry]
# Pour tous les services distants
pip install pyloggerx[all]
```
### Installation Développement
```bash
git clone https://github.com/yourusername/pyloggerx.git
cd pyloggerx
pip install -e ".[dev]"
```
---
## Quick Start
### Usage Basique
```python
from pyloggerx import log
# Logging simple
log.info("Application démarrée")
log.warning("Ceci est un avertissement")
log.error("Une erreur s'est produite")
log.debug("Informations de debug")
# Avec contexte
log.info("Utilisateur connecté", user_id=123, ip="192.168.1.1")
```
### Instance de Logger Personnalisée
```python
from pyloggerx import PyLoggerX
logger = PyLoggerX(
name="myapp",
level="INFO",
console=True,
colors=True,
json_file="logs/app.json",
text_file="logs/app.log"
)
logger.info("Logger personnalisé initialisé")
```
### Logger avec Export Distant
```python
logger = PyLoggerX(
name="production-app",
console=True,
json_file="logs/app.json",
# Export vers Elasticsearch
elasticsearch_url="http://localhost:9200",
elasticsearch_index="myapp-logs",
# Alertes Sentry pour les erreurs
sentry_dsn="https://xxx@sentry.io/xxx",
# Notifications Slack pour les critiques
slack_webhook="https://hooks.slack.com/services/xxx"
)
logger.info("Application démarrée")
logger.error("Erreur critique") # Envoyé à tous les services
```
---
## Intégration DevOps
### Docker & Kubernetes
#### Application Conteneurisée
```python
# app.py
import os
from pyloggerx import PyLoggerX
# Configuration pour environnement conteneur
logger = PyLoggerX(
name=os.getenv("APP_NAME", "myapp"),
level=os.getenv("LOG_LEVEL", "INFO"),
console=True, # Logs vers stdout pour les collecteurs
colors=False, # Désactiver les couleurs dans les conteneurs
json_file=None, # Utiliser stdout uniquement
include_caller=True,
enrichment_data={
"environment": os.getenv("ENVIRONMENT", "production"),
"pod_name": os.getenv("POD_NAME", "unknown"),
"namespace": os.getenv("NAMESPACE", "default"),
"version": os.getenv("APP_VERSION", "1.0.0"),
"region": os.getenv("AWS_REGION", "us-east-1")
}
)
logger.info("Application démarrée", port=8080)
```
#### Dockerfile Optimisé
```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Installation des dépendances
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Variables d'environnement pour logging
ENV LOG_LEVEL=INFO \
APP_NAME=myapp \
ENVIRONMENT=production \
PYTHONUNBUFFERED=1
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
CMD python -c "import requests; requests.get('http://localhost:8080/health')"
# Exécution
CMD ["python", "app.py"]
```
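Le `HEALTHCHECK` ci-dessus (comme les probes Kubernetes plus bas) suppose que l'application expose un endpoint `/health` sur le port 8080. Esquisse minimale d'un tel endpoint, loggé avec PyLoggerX et basé uniquement sur la bibliothèque standard (le framework HTTP réel de votre application peut différer):
```python
# health_server.py - esquisse minimale d'un endpoint /health loggé avec PyLoggerX
from http.server import BaseHTTPRequestHandler, HTTPServer
from pyloggerx import PyLoggerX

logger = PyLoggerX(name="health", console=True, colors=False)

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Ici, vérifier les dépendances critiques (base de données, cache, etc.)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'{"status": "ok"}')
            logger.debug("Health check OK", path=self.path)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        # Rediriger les logs du serveur HTTP standard vers PyLoggerX
        logger.debug("HTTP server", detail=fmt % args)

if __name__ == "__main__":
    logger.info("Serveur health check démarré", port=8080)
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```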
#### Déploiement Kubernetes avec Logging
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: production
labels:
app: myapp
version: v1
spec:
replicas: 3
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
version: v1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
spec:
containers:
- name: myapp
image: myapp:1.0.0
env:
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: app-config
key: log-level
- name: APP_NAME
value: "myapp"
- name: ENVIRONMENT
value: "production"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: APP_VERSION
value: "1.0.0"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- containerPort: 8080
name: http
# Probes avec logging
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: production
data:
log-level: "INFO"
```
#### Sortie JSON pour Kubernetes (Fluentd/Filebeat)
```python
# Pour les collecteurs de logs Kubernetes
import os
from pyloggerx import PyLoggerX
logger = PyLoggerX(
name="k8s-app",
console=True,
colors=False, # IMPORTANT: désactiver pour les collecteurs
format_string='{"timestamp":"%(asctime)s","level":"%(levelname)s","logger":"%(name)s","message":"%(message)s"}',
enrichment_data={
"cluster": os.getenv("CLUSTER_NAME", "prod-cluster"),
"pod_ip": os.getenv("POD_IP", "unknown")
}
)
logger.info("Requête traitée",
duration_ms=123,
status_code=200,
endpoint="/api/users")
```
### Pipelines CI/CD
#### GitHub Actions
```yaml
# .github/workflows/test-and-deploy.yml
name: Test, Build & Deploy
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
env:
LOG_LEVEL: DEBUG
APP_NAME: myapp
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.9', '3.10', '3.11']
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Cache dependencies
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
- name: Install dependencies
run: |
pip install -e .
pip install pytest pytest-cov pytest-xdist
- name: Run tests with logging
run: |
python -c "
from pyloggerx import PyLoggerX
import os
logger = PyLoggerX(
name='ci-tests',
level='DEBUG',
console=True,
json_file='test-results/logs.json',
enrichment_data={
'pipeline': 'github-actions',
'commit': '${{ github.sha }}',
'branch': '${{ github.ref_name }}',
'actor': '${{ github.actor }}',
'run_id': '${{ github.run_id }}',
'python_version': '${{ matrix.python-version }}'
}
)
logger.info('Démarrage des tests CI')
"
pytest -n auto --cov=pyloggerx --cov-report=xml --cov-report=html
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
files: ./coverage.xml
- name: Upload test logs
if: always()
uses: actions/upload-artifact@v3
with:
name: test-logs-${{ matrix.python-version }}
path: test-results/
retention-days: 30
build:
needs: test
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
push: true
tags: |
myapp:latest
myapp:${{ github.sha }}
cache-from: type=registry,ref=myapp:buildcache
cache-to: type=registry,ref=myapp:buildcache,mode=max
deploy:
needs: build
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/checkout@v3
- name: Deploy to Kubernetes
run: |
echo "${{ secrets.KUBECONFIG }}" > kubeconfig
export KUBECONFIG=kubeconfig
kubectl set image deployment/myapp myapp=myapp:${{ github.sha }} -n production
kubectl rollout status deployment/myapp -n production
```
#### GitLab CI
```yaml
# .gitlab-ci.yml
variables:
LOG_LEVEL: "DEBUG"
APP_NAME: "myapp"
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
stages:
- test
- build
- deploy
- monitor
# Template pour logging
.logging_template: &logging_setup
before_script:
- |
python -c "
from pyloggerx import PyLoggerX
import os
logger = PyLoggerX(
name='gitlab-ci',
level=os.getenv('LOG_LEVEL', 'INFO'),
console=True,
json_file='logs/ci.json',
enrichment_data={
'pipeline_id': os.getenv('CI_PIPELINE_ID'),
'job_id': os.getenv('CI_JOB_ID'),
'commit_sha': os.getenv('CI_COMMIT_SHA'),
'branch': os.getenv('CI_COMMIT_REF_NAME'),
'runner': os.getenv('CI_RUNNER_DESCRIPTION'),
'project': os.getenv('CI_PROJECT_NAME')
}
)
logger.info('Job GitLab CI démarré')
"
test:unit:
stage: test
image: python:3.11
<<: *logging_setup
script:
- pip install -e .[dev]
- pytest --cov=pyloggerx --cov-report=xml --cov-report=term
coverage: '/TOTAL.*\s+(\d+%)$/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage.xml
paths:
- logs/
- htmlcov/
expire_in: 1 week
only:
- merge_requests
- main
- develop
test:integration:
stage: test
image: python:3.11
services:
- postgres:14
- redis:7
variables:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
REDIS_URL: redis://redis:6379
script:
- pip install -e .[dev]
- pytest tests/integration/ -v
only:
- main
- develop
build:docker:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
- docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $CI_REGISTRY_IMAGE:latest
only:
- main
deploy:production:
stage: deploy
image: bitnami/kubectl:latest
script:
- kubectl config use-context production
- kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n production
- kubectl rollout status deployment/myapp -n production
environment:
name: production
url: https://myapp.example.com
when: manual
only:
- main
monitor:health:
stage: monitor
image: curlimages/curl:latest
script:
- |
for i in 1 2 3 4 5; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://myapp.example.com/health)
if [ "$STATUS" == "200" ]; then
echo "Health check passed"
exit 0
fi
echo "Attempt $i failed, retrying..."
sleep 10
done
exit 1
only:
- main
```
#### Jenkins Pipeline
```groovy
// Jenkinsfile
pipeline {
agent any
environment {
LOG_LEVEL = 'DEBUG'
APP_NAME = 'myapp'
ENVIRONMENT = 'staging'
DOCKER_REGISTRY = 'registry.example.com'
KUBE_NAMESPACE = 'production'
}
options {
buildDiscarder(logRotator(numToKeepStr: '10'))
timeout(time: 1, unit: 'HOURS')
timestamps()
}
stages {
stage('Setup') {
steps {
script {
sh '''
python3 -m venv venv
. venv/bin/activate
pip install -e .[dev]
'''
}
}
}
stage('Test') {
parallel {
stage('Unit Tests') {
steps {
script {
sh '''
. venv/bin/activate
python -c "
from pyloggerx import PyLoggerX
logger = PyLoggerX(
name='jenkins-tests',
json_file='logs/unit-tests.json',
enrichment_data={
'build_number': '${BUILD_NUMBER}',
'job_name': '${JOB_NAME}',
'node_name': '${NODE_NAME}'
}
)
logger.info('Tests unitaires démarrés')
"
pytest tests/unit/ -v --junitxml=results/unit.xml
'''
}
}
}
stage('Integration Tests') {
steps {
script {
sh '''
. venv/bin/activate
pytest tests/integration/ -v --junitxml=results/integration.xml
'''
}
}
}
}
post {
always {
junit 'results/*.xml'
}
}
}
stage('Build') {
steps {
script {
docker.build("${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}")
}
}
}
stage('Push') {
steps {
script {
docker.withRegistry("https://${DOCKER_REGISTRY}", 'docker-credentials') {
docker.image("${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}").push()
docker.image("${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}").push('latest')
}
}
}
}
stage('Deploy') {
when {
branch 'main'
}
steps {
script {
withKubeConfig([credentialsId: 'kube-config']) {
sh """
kubectl set image deployment/${APP_NAME} \
${APP_NAME}=${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER} \
-n ${KUBE_NAMESPACE}
kubectl rollout status deployment/${APP_NAME} -n ${KUBE_NAMESPACE}
"""
}
}
}
}
stage('Smoke Tests') {
when {
branch 'main'
}
steps {
script {
sh '''
for i in 1 2 3 4 5; do
if curl -f https://myapp.example.com/health; then
echo "Smoke test passed"
exit 0
fi
sleep 10
done
exit 1
'''
}
}
}
}
post {
always {
archiveArtifacts artifacts: 'logs/*.json', allowEmptyArchive: true
cleanWs()
}
success {
slackSend(
color: 'good',
message: "Build #${BUILD_NUMBER} succeeded for ${JOB_NAME}"
)
}
failure {
slackSend(
color: 'danger',
message: "Build #${BUILD_NUMBER} failed for ${JOB_NAME}"
)
}
}
}
```
### Stack d'Observabilité
#### ELK Stack (Elasticsearch, Logstash, Kibana)
```python
# Configuration pour ELK
from pyloggerx import PyLoggerX
import socket
import os
logger = PyLoggerX(
name="elk-app",
console=True,
json_file="/var/log/myapp/app.json", # Filebeat surveille ce fichier
colors=False,
# Export direct vers Elasticsearch
elasticsearch_url="http://elasticsearch:9200",
elasticsearch_index="myapp-logs",
elasticsearch_username=os.getenv("ES_USERNAME"),
elasticsearch_password=os.getenv("ES_PASSWORD"),
enrichment_data={
"service": "payment-api",
"environment": os.getenv("ENVIRONMENT", "production"),
"hostname": socket.gethostname(),
"version": os.getenv("APP_VERSION", "1.0.0"),
"datacenter": os.getenv("DATACENTER", "us-east-1")
}
)
# Les logs sont envoyés à Elasticsearch et écrits dans un fichier
logger.info("Paiement traité",
transaction_id="txn_123",
amount=99.99,
currency="USD",
customer_id="cust_456",
payment_method="card")
```
**Configuration Filebeat** (`filebeat.yml`):
```yaml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/myapp/*.json
json.keys_under_root: true
json.add_error_key: true
json.message_key: message
fields:
service: myapp
environment: production
fields_under_root: true
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
output.elasticsearch:
hosts: ["elasticsearch:9200"]
index: "myapp-logs-%{+yyyy.MM.dd}"
username: "${ES_USERNAME}"
password: "${ES_PASSWORD}"
ssl.verification_mode: none
setup.kibana:
host: "kibana:5601"
setup.ilm.enabled: true
setup.ilm.rollover_alias: "myapp-logs"
setup.ilm.pattern: "{now/d}-000001"
logging.level: info
logging.to_files: true
logging.files:
path: /var/log/filebeat
name: filebeat
keepfiles: 7
permissions: 0644
```
#### Prometheus & Grafana
```python
from pyloggerx import PyLoggerX
from prometheus_client import Counter, Histogram, Gauge, start_http_server
import time
import functools
import os
# Métriques Prometheus
request_counter = Counter(
'http_requests_total',
'Total HTTP requests',
['method', 'endpoint', 'status']
)
request_duration = Histogram(
'http_request_duration_seconds',
'HTTP request duration',
['method', 'endpoint']
)
active_requests = Gauge(
'http_requests_active',
'Active HTTP requests'
)
error_counter = Counter(
'application_errors_total',
'Total application errors',
['error_type']
)
logger = PyLoggerX(
name="metrics-app",
json_file="logs/metrics.json",
performance_tracking=True,
# Export vers services de monitoring
datadog_api_key=os.getenv("DATADOG_API_KEY"),
enrichment_data={
"service": "api-gateway",
"version": "2.0.0"
}
)
def monitor_request(func):
"""Décorateur pour monitorer les requêtes"""
@functools.wraps(func)
def wrapper(method, endpoint, *args, **kwargs):
active_requests.inc()
start_time = time.time()
logger.info("Requête reçue",
method=method,
endpoint=endpoint)
try:
result = func(method, endpoint, *args, **kwargs)
duration = time.time() - start_time
# Mettre à jour les métriques
request_counter.labels(
method=method,
endpoint=endpoint,
status=200
).inc()
request_duration.labels(
method=method,
endpoint=endpoint
).observe(duration)
logger.info("Requête complétée",
method=method,
endpoint=endpoint,
status=200,
duration_ms=duration*1000)
return result
except Exception as e:
duration = time.time() - start_time
error_type = type(e).__name__
request_counter.labels(
method=method,
endpoint=endpoint,
status=500
).inc()
error_counter.labels(error_type=error_type).inc()
logger.error("Requête échouée",
method=method,
endpoint=endpoint,
status=500,
duration_ms=duration*1000,
error=str(e),
error_type=error_type)
raise
finally:
active_requests.dec()
return wrapper
@monitor_request
def handle_request(method, endpoint, data=None):
# Logique de traitement
time.sleep(0.1) # Simulation
return {"status": "success"}
# Démarrer le serveur de métriques Prometheus
start_http_server(8000)
logger.info("Serveur de métriques démarré", port=8000)
# Endpoint de métriques custom
def get_performance_metrics():
stats = logger.get_performance_stats()
return {
"logging": {
"total_logs": stats.get("total_operations", 0),
"avg_duration": stats.get("avg_duration", 0),
"max_duration": stats.get("max_duration", 0)
},
"requests": {
"total": request_counter._value.sum(),
"active": active_requests._value.get()
}
}
```
#### OpenTelemetry Integration
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from pyloggerx import PyLoggerX
import os
import time
# Setup OpenTelemetry
resource = Resource.create({
"service.name": "myapp",
"service.version": "1.0.0",
"deployment.environment": os.getenv("ENVIRONMENT", "production")
})
trace.set_tracer_provider(TracerProvider(resource=resource))
tracer = trace.get_tracer(__name__)
# Export vers OTLP collector
otlp_exporter = OTLPSpanExporter(
endpoint="http://otel-collector:4317",
insecure=True
)
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
logger = PyLoggerX(
name="otel-app",
json_file="logs/traces.json",
enrichment_data={
"service": "order-service"
}
)
def process_order(order_id):
"""Traiter une commande avec tracing distribué"""
with tracer.start_as_current_span("process_order") as span:
span.set_attribute("order.id", order_id)
# Récupérer le contexte de trace
ctx = span.get_span_context()
trace_id = format(ctx.trace_id, '032x')
span_id = format(ctx.span_id, '016x')
logger.info("Traitement de la commande",
order_id=order_id,
trace_id=trace_id,
span_id=span_id)
# Étapes de traitement avec spans
validate_order(order_id, trace_id, span_id)
charge_payment(order_id, trace_id, span_id)
ship_order(order_id, trace_id, span_id)
logger.info("Commande complétée",
order_id=order_id,
trace_id=trace_id,
span_id=span_id)
def validate_order(order_id, trace_id, span_id):
with tracer.start_as_current_span("validate_order"):
logger.debug("Validation de la commande",
order_id=order_id,
trace_id=trace_id,
span_id=span_id)
# Logique de validation
time.sleep(0.1)
def charge_payment(order_id, trace_id, span_id):
with tracer.start_as_current_span("charge_payment"):
logger.info("Traitement du paiement",
order_id=order_id,
trace_id=trace_id,
span_id=span_id)
# Logique de paiement
time.sleep(0.2)
def ship_order(order_id, trace_id, span_id):
with tracer.start_as_current_span("ship_order"):
logger.info("Expédition de la commande",
order_id=order_id,
trace_id=trace_id,
span_id=span_id)
# Logique d'expédition
time.sleep(0.15)
```
#### Grafana Loki Integration
```python
from pyloggerx import PyLoggerX
import os
logger = PyLoggerX(
name="loki-app",
console=True,
# Export direct vers Loki
loki_url="http://loki:3100",
loki_labels={
"app": "payment-service",
"environment": os.getenv("ENVIRONMENT", "production"),
"region": os.getenv("AWS_REGION", "us-east-1"),
"version": os.getenv("APP_VERSION", "1.0.0")
},
enrichment_data={
"service": "payment-api",
"instance": os.getenv("HOSTNAME", "unknown")
}
)
# Les logs sont automatiquement envoyés à Loki
logger.info("Paiement initié",
transaction_id="txn_789",
amount=150.00,
currency="EUR")
logger.info("Paiement complété",
transaction_id="txn_789",
status="success",
processing_time_ms=234)
```
**Configuration Promtail** (`promtail-config.yml`):
```yaml
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
__path__: /var/log/*log
- job_name: containers
docker_sd_configs:
- host: unix:///var/run/docker.sock
refresh_interval: 5s
relabel_configs:
- source_labels: ['__meta_docker_container_name']
regex: '/(.*)'
target_label: 'container'
- source_labels: ['__meta_docker_container_log_stream']
target_label: 'logstream'
- source_labels: ['__meta_docker_container_label_logging_jobname']
target_label: 'job'
```
### Infrastructure as Code
#### Terraform avec Logging
```python
# terraform_deploy.py
from pyloggerx import PyLoggerX
import subprocess
import json
import sys
import os
logger = PyLoggerX(
name="terraform",
console=True,
json_file="logs/terraform.json",
performance_tracking=True,
# Notifications Slack pour les déploiements
slack_webhook=os.getenv("SLACK_WEBHOOK"),
enrichment_data={
"tool": "terraform",
"workspace": os.getenv("TF_WORKSPACE", "default")
}
)
def run_terraform_command(command, **kwargs):
"""Exécuter une commande Terraform avec logging"""
logger.info(f"Exécution: terraform {command}", **kwargs)
result = subprocess.run(
["terraform"] + command.split(),
capture_output=True,
text=True
)
if result.returncode == 0:
logger.info(f"Commande réussie: terraform {command}")
else:
logger.error(f"Commande échouée: terraform {command}",
stderr=result.stderr,
returncode=result.returncode)
return result
def terraform_deploy(workspace="production", auto_approve=False):
"""Déploiement Terraform complet avec logging détaillé"""
logger.info("Déploiement Terraform démarré",
workspace=workspace,
auto_approve=auto_approve)
try:
# Init
with logger.timer("Terraform Init"):
result = run_terraform_command("init -upgrade")
if result.returncode != 0:
raise Exception("Terraform init failed")
# Workspace
if workspace != "default":
with logger.timer("Terraform Workspace"):
run_terraform_command(f"workspace select {workspace}")
# Plan
with logger.timer("Terraform Plan"):
result = run_terraform_command("plan -out=tfplan -json")
# Parser la sortie JSON
changes = {"add": 0, "change": 0, "destroy": 0}
for line in result.stdout.split('\n'):
if line.strip():
try:
data = json.loads(line)
if data.get("type") == "change_summary":
changes = data.get("changes", changes)
except json.JSONDecodeError:
pass
logger.info("Plan Terraform terminé",
resources_to_add=changes["add"],
resources_to_change=changes["change"],
resources_to_destroy=changes["destroy"])
# Alerte si destruction de ressources
if changes["destroy"] > 0:
logger.warning("Destruction de ressources détectée",
count=changes["destroy"])
# Apply
apply_cmd = "apply tfplan"
if auto_approve:
apply_cmd += " -auto-approve"
with logger.timer("Terraform Apply"):
result = run_terraform_command(apply_cmd)
if result.returncode == 0:
logger.info("Déploiement Terraform réussi")
else:
logger.error("Déploiement Terraform échoué",
returncode=result.returncode)
sys.exit(1)
# Statistiques finales
stats = logger.get_performance_stats()
logger.info("Déploiement complété",
total_duration=stats["total_duration"],
avg_duration=stats["avg_duration"])
except Exception as e:
logger.exception("Erreur lors du déploiement Terraform",
error=str(e))
sys.exit(1)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--workspace", default="production")
parser.add_argument("--auto-approve", action="store_true")
args = parser.parse_args()
terraform_deploy(args.workspace, args.auto_approve)
```
#### Ansible avec Logging
```python
# ansible_playbook.py
from pyloggerx import PyLoggerX
import subprocess
import json
from datetime import datetime
import os
import sys
logger = PyLoggerX(
name="ansible",
json_file="logs/ansible.json",
# Export vers Elasticsearch pour analyse
elasticsearch_url=os.getenv("ES_URL"),
elasticsearch_index="ansible-logs",
enrichment_data={
"automation": "ansible",
"run_id": datetime.now().strftime("%Y%m%d_%H%M%S"),
"user": os.getenv("USER")
}
)
def run_playbook(playbook_path, inventory="hosts.ini", extra_vars=None, tags=None):
"""Exécuter un playbook Ansible avec logging détaillé"""
logger.info("Playbook Ansible démarré",
playbook=playbook_path,
inventory=inventory,
extra_vars=extra_vars,
tags=tags)
cmd = [
"ansible-playbook",
playbook_path,
"-i", inventory,
"-v" # Verbosité
]
if extra_vars:
cmd.extend(["--extra-vars", json.dumps(extra_vars)])
if tags:
cmd.extend(["--tags", tags])
# Exécution avec capture de sortie
result = subprocess.run(
cmd,
capture_output=True,
text=True
)
# Parser la sortie Ansible
stats = parse_ansible_output(result.stdout)
# Logger les résultats
if stats:
logger.info("Playbook Ansible terminé",
hosts_processed=len(stats),
total_ok=sum(s.get("ok", 0) for s in stats.values()),
total_changed=sum(s.get("changed", 0) for s in stats.values()),
total_failed=sum(s.get("failures", 0) for s in stats.values()),
total_unreachable=sum(s.get("unreachable", 0) for s in stats.values()))
# Logger par hôte
for host, host_stats in stats.items():
logger.debug("Statistiques par hôte",
host=host,
ok=host_stats.get("ok", 0),
changed=host_stats.get("changed", 0),
failed=host_stats.get("failures", 0))
if result.returncode != 0:
logger.error("Playbook Ansible échoué",
returncode=result.returncode,
stderr=result.stderr)
return False
return True
def parse_ansible_output(output):
"""Parser la sortie Ansible"""
stats = {}
in_recap = False
for line in output.split('\n'):
if "PLAY RECAP" in line:
in_recap = True
continue
if in_recap and ":" in line:
parts = line.split(":")
if len(parts) >= 2:
host = parts[0].strip()
stats_str = parts[1].strip()
# Parser les statistiques
host_stats = {}
for stat in stats_str.split():
if "=" in stat:
key, value = stat.split("=")
try:
host_stats[key] = int(value)
except ValueError:
pass
stats[host] = host_stats
return stats
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("playbook", help="Chemin du playbook")
parser.add_argument("-i", "--inventory", default="hosts.ini")
parser.add_argument("-e", "--extra-vars", help="Variables supplémentaires (JSON)")
parser.add_argument("-t", "--tags", help="Tags à exécuter")
args = parser.parse_args()
extra_vars = json.loads(args.extra_vars) if args.extra_vars else None
success = run_playbook(
args.playbook,
inventory=args.inventory,
extra_vars=extra_vars,
tags=args.tags
)
sys.exit(0 if success else 1)
```
#### AWS CDK avec Logging
```python
# cdk_app.py
from aws_cdk import (
App, Stack, Duration, RemovalPolicy,
aws_lambda as lambda_,
aws_apigateway as apigw,
aws_dynamodb as dynamodb,
aws_logs as logs
)
from pyloggerx import PyLoggerX
import os
logger = PyLoggerX(
name="cdk-deploy",
json_file="logs/cdk.json",
enrichment_data={
"tool": "aws-cdk",
"account": os.getenv("CDK_DEFAULT_ACCOUNT"),
"region": os.getenv("CDK_DEFAULT_REGION", "us-east-1")
}
)
class MyApplicationStack(Stack):
def __init__(self, scope, id, **kwargs):
super().__init__(scope, id, **kwargs)
logger.info("Création du stack CDK", stack_name=id)
# DynamoDB Table
logger.info("Création de la table DynamoDB")
table = dynamodb.Table(
self, "DataTable",
partition_key=dynamodb.Attribute(
name="id",
type=dynamodb.AttributeType.STRING
),
billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
removal_policy=RemovalPolicy.DESTROY
)
logger.info("Table DynamoDB créée", table_name=table.table_name)
# Lambda Function
logger.info("Création de la fonction Lambda")
lambda_fn = lambda_.Function(
self, "ApiHandler",
runtime=lambda_.Runtime.PYTHON_3_11,
handler="index.handler",
code=lambda_.Code.from_asset("lambda"),
environment={
"TABLE_NAME": table.table_name,
"LOG_LEVEL": "INFO"
},
timeout=Duration.seconds(30),
memory_size=256,
log_retention=logs.RetentionDays.ONE_WEEK
)
# Permissions
table.grant_read_write_data(lambda_fn)
logger.info("Fonction Lambda créée",
function_name=lambda_fn.function_name)
# API Gateway
logger.info("Création de l'API Gateway")
api = apigw.LambdaRestApi(
self, "ApiGateway",
handler=lambda_fn,
proxy=False,
deploy_options=apigw.StageOptions(
logging_level=apigw.MethodLoggingLevel.INFO,
data_trace_enabled=True,
metrics_enabled=True
)
)
# Endpoints
items = api.root.add_resource("items")
items.add_method("GET")
items.add_method("POST")
item = items.add_resource("{id}")
item.add_method("GET")
item.add_method("PUT")
item.add_method("DELETE")
logger.info("API Gateway créée",
api_id=api.rest_api_id,
api_url=api.url)
def main():
app = App()
logger.info("Synthèse CDK démarrée")
# Créer les stacks
MyApplicationStack(
app, "MyApp-Dev",
env={
"account": os.getenv("CDK_DEFAULT_ACCOUNT"),
"region": "us-east-1"
}
)
MyApplicationStack(
app, "MyApp-Prod",
env={
"account": os.getenv("CDK_DEFAULT_ACCOUNT"),
"region": "us-west-2"
}
)
logger.info("Synthèse CDK complétée")
# Synthétiser l'application
app.synth()
if __name__ == "__main__":
main()
```
---
## Guide d'Utilisation Complet
### 1. Console Logging
```python
from pyloggerx import PyLoggerX
logger = PyLoggerX(
name="console_app",
console=True,
colors=True
)
logger.debug("Message de debug")
logger.info("Message d'info")
logger.warning("Message d'avertissement")
logger.error("Message d'erreur")
logger.critical("Message critique")
```
### 2. Logging vers Fichiers
#### JSON Structuré
```python
logger = PyLoggerX(
name="json_logger",
json_file="logs/app.json",
max_bytes=10 * 1024 * 1024, # 10MB
backup_count=5
)
logger.info("Action utilisateur",
user_id=123,
action="login",
ip="192.168.1.1",
user_agent="Mozilla/5.0"
)
```
**Sortie** (`logs/app.json`):
```json
{
"timestamp": "2025-01-15T10:30:45.123456",
"level": "INFO",
"logger": "json_logger",
"message": "Action utilisateur",
"module": "main",
"function": "login_handler",
"user_id": 123,
"action": "login",
"ip": "192.168.1.1",
"user_agent": "Mozilla/5.0"
}
```
#### Fichier Texte
```python
logger = PyLoggerX(
name="text_logger",
text_file="logs/app.log",
format_string="%(asctime)s - %(levelname)s - %(message)s"
)
```
#### Rotation Basée sur le Temps
```python
logger = PyLoggerX(
name="timed_logger",
text_file="logs/app.log",
rotation_when="midnight", # Rotation à minuit
rotation_interval=1, # Chaque jour
backup_count=7 # Garder 7 jours
)
# Options pour rotation_when:
# "S": Secondes
# "M": Minutes
# "H": Heures
# "D": Jours
# "midnight": À minuit
# "W0"-"W6": Jour de la semaine (0=Lundi)
```
### 3. Tracking de Performance
```python
logger = PyLoggerX(
name="perf_logger",
performance_tracking=True
)
# Utilisation du context manager
with logger.timer("Requête Base de Données"):
result = db.query("SELECT * FROM users WHERE active = true")
# Chronométrage manuel
import time
start = time.time()
process_large_dataset(data)
duration = time.time() - start
logger.info("Traitement complété",
duration_seconds=duration,
records_processed=len(data))
# Récupérer les statistiques
stats = logger.get_performance_stats()
print(f"Moyenne: {stats['avg_duration']:.3f}s")
print(f"Maximum: {stats['max_duration']:.3f}s")
print(f"Total opérations: {stats['total_operations']}")
```
---
## Logging Distant
### Elasticsearch
```python
logger = PyLoggerX(
name="es_logger",
elasticsearch_url="http://elasticsearch:9200",
elasticsearch_index="myapp-logs",
elasticsearch_username="elastic",
elasticsearch_password="changeme",
batch_size=100, # Taille du batch
batch_timeout=5 # Timeout en secondes
)
logger.info("Log envoyé vers Elasticsearch",
service="api",
environment="production",
request_id="req_123")
```
### Grafana Loki
```python
logger = PyLoggerX(
name="loki_logger",
loki_url="http://loki:3100",
loki_labels={
"app": "myapp",
"environment": "production",
"region": "us-east-1",
"tier": "backend"
}
)
logger.info("Log envoyé vers Loki",
endpoint="/api/users",
method="GET",
status_code=200)
```
### Sentry (Error Tracking)
```python
logger = PyLoggerX(
name="sentry_logger",
sentry_dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
sentry_environment="production",
sentry_release="myapp@1.0.0"
)
# Seules les erreurs (ERROR) et les logs critiques (CRITICAL) sont envoyés à Sentry
logger.error("Échec du traitement du paiement",
user_id=123,
amount=99.99,
error_code="PAYMENT_DECLINED",
card_type="visa")
```
### Datadog
```python
logger = PyLoggerX(
name="datadog_logger",
datadog_api_key="your_datadog_api_key",
datadog_site="datadoghq.com", # ou datadoghq.eu
datadog_service="web-api",
datadog_tags=["env:prod", "version:1.0.0"]
)
logger.info("Log Datadog",
service="web-api",
env="prod",
metric="request.duration",
value=234)
```
### Slack Notifications
```python
logger = PyLoggerX(
name="slack_logger",
slack_webhook="https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
slack_channel="#alerts",
slack_username="PyLoggerX Bot"
)
# Seuls les warnings et au-dessus sont envoyés à Slack
logger.warning("Utilisation mémoire élevée",
memory_percent=95,
hostname="server-01")
logger.error("Service indisponible",
service="payment-api",
error="Connection timeout")
```
### Webhook Personnalisé
```python
logger = PyLoggerX(
name="webhook_logger",
webhook_url="https://your-api.com/logs",
webhook_method="POST",
webhook_headers={
"Authorization": "Bearer YOUR_TOKEN",
"Content-Type": "application/json"
}
)
logger.info("Log webhook personnalisé",
custom_field="value")
```
### Configuration Multi-Services
```python
logger = PyLoggerX(
name="multi_logger",
# Console et fichiers locaux
console=True,
json_file="logs/app.json",
# Elasticsearch pour tous les logs
elasticsearch_url="http://elasticsearch:9200",
elasticsearch_index="myapp-logs",
# Loki pour le streaming
loki_url="http://loki:3100",
loki_labels={"app": "myapp", "env": "prod"},
# Sentry pour les erreurs
sentry_dsn="https://xxx@sentry.io/xxx",
# Slack pour les alertes critiques
slack_webhook="https://hooks.slack.com/services/xxx",
# Datadog pour les métriques
datadog_api_key="your_api_key",
# Configuration des batchs
batch_size=100,
batch_timeout=5
)
# Ce log ira partout sauf Slack (niveau trop bas)
logger.info("Application démarrée")
# Ce log ira partout y compris Slack
logger.error("Erreur critique détectée",
component="database",
error="Connection pool exhausted")
```
---
## Fonctionnalités Avancées
### 1. Filtrage Avancé
#### Filtrage par Niveau
```python
from pyloggerx import PyLoggerX
from pyloggerx.filters import LevelFilter
logger = PyLoggerX(name="filtered_logger")
# Garder seulement WARNING et ERROR
level_filter = LevelFilter(min_level="WARNING", max_level="ERROR")
logger.add_filter(level_filter)
logger.debug("Ceci ne sera pas loggé")
logger.warning("Ceci sera loggé")
logger.error("Ceci sera loggé")
logger.critical("Ceci ne sera pas loggé (au-dessus de ERROR)")
```
#### Filtrage par Pattern
```python
from pyloggerx.filters import MessageFilter
# Inclure seulement les messages correspondant au pattern
include_filter = MessageFilter(pattern="user_.*", exclude=False)
logger.add_filter(include_filter)
# Exclure les messages correspondant au pattern
exclude_filter = MessageFilter(pattern="debug_.*", exclude=True)
logger.add_filter(exclude_filter)
```
#### Limitation de Débit (Rate Limiting)
```python
from pyloggerx.filters import RateLimitFilter
# Maximum 100 logs par minute
rate_limiter = RateLimitFilter(max_logs=100, period=60)
logger.add_filter(rate_limiter)
# Utile pour les boucles à haut volume
for i in range(10000):
logger.debug(f"Traitement de l'item {i}")
# Seuls ~100 logs seront émis
```
#### Filtre Personnalisé
```python
import logging
class CustomFilter(logging.Filter):
def filter(self, record):
# Logique personnalisée
# Retourne True pour garder le log, False pour l'ignorer
# Exemple: garder seulement les logs d'un module spécifique
if record.module != "payment_processor":
return False
# Exemple: ignorer les logs contenant des données sensibles
if hasattr(record, 'password') or hasattr(record, 'ssn'):
return False
return True
logger.add_filter(CustomFilter())
```
### 2. Échantillonnage de Logs (Log Sampling)
Pour les applications à fort trafic, l'échantillonnage permet de réduire le volume de logs émis:
```python
logger = PyLoggerX(
name="sampled_logger",
enable_sampling=True,
sampling_rate=0.1 # Garder seulement 10% des logs
)
# Utile pour les logs de debug en production
for i in range(10000):
logger.debug(f"Traitement de l'item {i}")
# Environ 1000 seront loggés
# Les logs ERROR et CRITICAL ne sont jamais échantillonnés
logger.error("Erreur importante") # Toujours loggé
```
#### Échantillonnage Adaptatif
```python
from pyloggerx.sampling import AdaptiveSampler
logger = PyLoggerX(
name="adaptive_logger",
enable_sampling=True,
sampler=AdaptiveSampler(
base_rate=0.1, # Taux de base 10%
error_rate=1.0, # 100% pour les erreurs
spike_threshold=1000, # Détection de pic
spike_rate=0.01 # 1% pendant les pics
)
)
```
### 3. Enrichissement de Contexte
```python
logger = PyLoggerX(name="enriched_logger")
# Ajouter un contexte global
logger.add_enrichment(
app_version="2.0.0",
environment="production",
hostname=socket.gethostname(),
region="us-east-1",
datacenter="dc1"
)
# Tous les logs suivants incluront ces données
logger.info("Utilisateur connecté", user_id=123)
# Output: {..., "app_version": "2.0.0", "environment": "production", ..., "user_id": 123}
# Enrichissement dynamique par requête
with logger.context(request_id="req_789", user_id=456):
logger.info("Traitement de la requête")
# Ce log inclut request_id et user_id
process_request()
logger.info("Requête complétée")
# Ce log inclut aussi request_id et user_id
# Hors du contexte
logger.info("Log suivant")
# Ce log n'inclut plus request_id et user_id
```
### 4. Gestion des Exceptions
```python
try:
result = risky_operation()
except ValueError as e:
logger.exception("Opération risquée échouée",
operation="data_validation",
input_value=user_input,
error_type=type(e).__name__)
# Inclut automatiquement la stack trace complète
except Exception as e:
logger.error("Erreur inattendue",
operation="data_validation",
error=str(e),
exc_info=True)  # Inclut la traceback sans passer par logger.exception()
```
### 5. Niveaux de Log Dynamiques
```python
logger = PyLoggerX(name="dynamic_logger", level="INFO")
# Basé sur l'environnement
import os
if os.getenv("DEBUG_MODE") == "true":
logger.set_level("DEBUG")
# Basé sur une condition
if user.is_admin():
logger.set_level("DEBUG")
else:
logger.set_level("WARNING")
# Changement temporaire
original_level = logger.level
logger.set_level("DEBUG")
debug_sensitive_operation()
logger.set_level(original_level)
```
### 6. Loggers Multiples
```python
# Séparer les logs par composant
api_logger = PyLoggerX(
name="api",
json_file="logs/api.json",
elasticsearch_index="api-logs"
)
database_logger = PyLoggerX(
name="database",
json_file="logs/database.json",
elasticsearch_index="db-logs",
performance_tracking=True
)
worker_logger = PyLoggerX(
name="worker",
json_file="logs/worker.json",
elasticsearch_index="worker-logs"
)
# Utilisation
api_logger.info("Requête API reçue", endpoint="/api/users")
database_logger.info("Requête exécutée", query="SELECT * FROM users")
worker_logger.info("Job traité", job_id="job_123")
```
---
## Référence de Configuration
### Paramètres PyLoggerX
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `name` | str | "PyLoggerX" | Nom du logger |
| `level` | str | "INFO" | Niveau de log (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| `console` | bool | True | Activer la sortie console |
| `colors` | bool | True | Activer les couleurs console |
| `json_file` | str | None | Chemin vers fichier JSON |
| `text_file` | str | None | Chemin vers fichier texte |
| `max_bytes` | int | 10MB | Taille max avant rotation |
| `backup_count` | int | 5 | Nombre de fichiers de sauvegarde |
| `rotation_when` | str | "midnight" | Quand faire la rotation temporelle |
| `rotation_interval` | int | 1 | Intervalle de rotation |
| `format_string` | str | None | Format personnalisé |
| `include_caller` | bool | False | Inclure fichier/ligne dans les logs |
| `performance_tracking` | bool | False | Activer le tracking de performance |
| `enrichment_data` | dict | {} | Données ajoutées à tous les logs |
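Exemple indicatif combinant la plupart des paramètres du tableau ci-dessus (les valeurs sont arbitraires et à adapter à votre contexte):
```python
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="billing-service",            # Nom du logger
    level="DEBUG",                     # Niveau minimum
    console=True, colors=False,        # Sortie console sans couleurs (conteneur)
    json_file="logs/billing.json",     # Fichier JSON structuré
    text_file="logs/billing.log",      # Fichier texte
    max_bytes=20 * 1024 * 1024,        # Rotation à 20 Mo
    backup_count=10,                   # 10 fichiers de sauvegarde
    rotation_when="midnight",          # Rotation temporelle quotidienne
    rotation_interval=1,
    include_caller=True,               # Inclure fichier/ligne d'origine
    performance_tracking=True,         # Chronométrage intégré
    enrichment_data={"service": "billing", "version": "1.2.3"}
)

logger.info("Configuration chargée", env="staging")
```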
### Paramètres de Logging Distant
#### Elasticsearch
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `elasticsearch_url` | str | None | URL du serveur Elasticsearch |
| `elasticsearch_index` | str | "pyloggerx" | Nom de l'index |
| `elasticsearch_username` | str | None | Nom d'utilisateur (optionnel) |
| `elasticsearch_password` | str | None | Mot de passe (optionnel) |
| `elasticsearch_ca_certs` | str | None | Certificats CA pour SSL |
| `elasticsearch_verify_certs` | bool | True | Vérifier les certificats SSL |
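Les paramètres SSL du tableau ci-dessus (`elasticsearch_ca_certs`, `elasticsearch_verify_certs`) n'apparaissent pas dans les exemples précédents; esquisse d'une connexion Elasticsearch sécurisée, avec URL et chemins purement indicatifs:
```python
import os
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="secure-es",
    elasticsearch_url="https://elasticsearch.internal:9200",
    elasticsearch_index="myapp-logs",
    elasticsearch_username=os.getenv("ES_USERNAME"),
    elasticsearch_password=os.getenv("ES_PASSWORD"),
    elasticsearch_ca_certs="/etc/ssl/certs/es-ca.pem",  # Certificat CA interne
    elasticsearch_verify_certs=True                     # Vérification SSL activée
)

logger.info("Connexion Elasticsearch sécurisée configurée")
```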
#### Grafana Loki
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `loki_url` | str | None | URL du serveur Loki |
| `loki_labels` | dict | {} | Labels par défaut |
| `loki_batch_size` | int | 100 | Taille du batch |
| `loki_batch_timeout` | int | 5 | Timeout du batch (secondes) |
#### Sentry
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `sentry_dsn` | str | None | DSN Sentry |
| `sentry_environment` | str | "production" | Nom de l'environnement |
| `sentry_release` | str | None | Version de release |
| `sentry_traces_sample_rate` | float | 0.0 | Taux d'échantillonnage des traces |
#### Datadog
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `datadog_api_key` | str | None | Clé API Datadog |
| `datadog_site` | str | "datadoghq.com" | Site Datadog |
| `datadog_service` | str | None | Nom du service |
| `datadog_tags` | list | [] | Tags par défaut |
#### Slack
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `slack_webhook` | str | None | URL webhook Slack |
| `slack_channel` | str | None | Canal (optionnel) |
| `slack_username` | str | "PyLoggerX" | Nom d'utilisateur du bot |
| `slack_min_level` | str | "WARNING" | Niveau minimum à envoyer |
#### Webhook
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `webhook_url` | str | None | URL du webhook |
| `webhook_method` | str | "POST" | Méthode HTTP |
| `webhook_headers` | dict | {} | En-têtes HTTP |
| `webhook_timeout` | int | 5 | Timeout (secondes) |
### Paramètres Avancés
| Paramètre | Type | Défaut | Description |
|-----------|------|--------|-------------|
| `enable_sampling` | bool | False | Activer l'échantillonnage |
| `sampling_rate` | float | 1.0 | Taux d'échantillonnage (0.0-1.0) |
| `enable_rate_limit` | bool | False | Activer la limitation de débit |
| `rate_limit_messages` | int | 100 | Max messages par période |
| `rate_limit_period` | int | 60 | Période en secondes |
| `batch_size` | int | 100 | Taille du batch pour export distant |
| `batch_timeout` | int | 5 | Timeout du batch (secondes) |
| `async_export` | bool | True | Export asynchrone (non-bloquant) |
| `queue_size` | int | 1000 | Taille de la queue d'export |
| `filters` | list | [] | Liste de filtres |
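Esquisse combinant ces paramètres avancés pour un service à fort volume (valeurs indicatives; Loki est utilisé ici uniquement comme exemple d'export distant):
```python
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="high-volume-api",
    console=True,
    loki_url="http://loki:3100",
    loki_labels={"app": "high-volume-api"},
    enable_sampling=True, sampling_rate=0.2,         # Garder ~20% des logs
    enable_rate_limit=True,
    rate_limit_messages=500, rate_limit_period=60,   # 500 logs/minute max
    batch_size=200, batch_timeout=10,                # Batchs plus gros, moins fréquents
    async_export=True, queue_size=5000               # Export non bloquant
)

for i in range(100_000):
    logger.debug("Item traité", item_id=i)  # Échantillonné et limité en débit
```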
---
## Configuration Avancée
PyLoggerX offre plusieurs méthodes flexibles pour configurer votre logger, permettant de s'adapter à différents environnements et workflows.
### Chargement depuis Fichiers
#### Configuration JSON
La méthode la plus simple et portable pour configurer PyLoggerX.
```python
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config
# Charger la configuration depuis un fichier JSON
config = load_config(config_file="pyloggerx.json")
logger = PyLoggerX(**config)
logger.info("Logger configuré depuis JSON")
```
**Exemple de fichier `pyloggerx.json`:**
```json
{
"name": "myapp",
"level": "INFO",
"console": true,
"colors": true,
"json_file": "logs/app.json",
"text_file": "logs/app.log",
"max_bytes": 10485760,
"backup_count": 5,
"include_caller": true,
"performance_tracking": true,
"elasticsearch_url": "http://elasticsearch:9200",
"elasticsearch_index": "myapp-logs",
"elasticsearch_username": "elastic",
"elasticsearch_password": "changeme",
"loki_url": "http://loki:3100",
"loki_labels": {
"app": "myapp",
"environment": "production",
"region": "us-east-1"
},
"sentry_dsn": "https://xxx@sentry.io/xxx",
"sentry_environment": "production",
"sentry_release": "1.0.0",
"slack_webhook": "https://hooks.slack.com/services/xxx",
"slack_channel": "#alerts",
"slack_username": "PyLoggerX Bot",
"enable_rate_limit": true,
"rate_limit_messages": 100,
"rate_limit_period": 60,
"enable_sampling": false,
"sampling_rate": 1.0,
"batch_size": 100,
"batch_timeout": 5,
"async_export": true,
"enrichment_data": {
"service": "web-api",
"version": "2.0.0",
"datacenter": "dc1"
}
}
```
#### Configuration YAML
Pour ceux qui préfèrent YAML (plus lisible pour les humains).
```python
from pyloggerx.config import load_config
# Installation requise: pip install pyyaml
config = load_config(config_file="pyloggerx.yaml")
logger = PyLoggerX(**config)
```
**Exemple de fichier `pyloggerx.yaml`:**
```yaml
# Configuration générale
name: myapp
level: INFO
console: true
colors: true
# Fichiers de logs
json_file: logs/app.json
text_file: logs/app.log
max_bytes: 10485760 # 10MB
backup_count: 5
# Options
include_caller: true
performance_tracking: true
# Elasticsearch
elasticsearch_url: http://elasticsearch:9200
elasticsearch_index: myapp-logs
elasticsearch_username: elastic
elasticsearch_password: changeme
# Grafana Loki
loki_url: http://loki:3100
loki_labels:
app: myapp
environment: production
region: us-east-1
# Sentry
sentry_dsn: https://xxx@sentry.io/xxx
sentry_environment: production
sentry_release: "1.0.0"
# Slack
slack_webhook: https://hooks.slack.com/services/xxx
slack_channel: "#alerts"
slack_username: PyLoggerX Bot
# Rate limiting
enable_rate_limit: true
rate_limit_messages: 100
rate_limit_period: 60
# Sampling
enable_sampling: false
sampling_rate: 1.0
# Export batch
batch_size: 100
batch_timeout: 5
async_export: true
# Enrichissement
enrichment_data:
service: web-api
version: "2.0.0"
datacenter: dc1
```
#### Détection Automatique du Format
```python
from pyloggerx.config import ConfigLoader
# Détecte automatiquement JSON ou YAML selon l'extension
config = ConfigLoader.from_file("config.json") # JSON
config = ConfigLoader.from_file("config.yaml") # YAML
config = ConfigLoader.from_file("config.yml") # YAML
logger = PyLoggerX(**config)
```
### Configuration par Variables d'Environnement
Idéal pour les applications conteneurisées et les déploiements cloud-native suivant les principes 12-factor.
#### Variables Supportées
```bash
# Configuration de base
export PYLOGGERX_NAME=myapp
export PYLOGGERX_LEVEL=INFO
export PYLOGGERX_CONSOLE=true
export PYLOGGERX_COLORS=false # Désactiver dans les conteneurs
# Fichiers de logs
export PYLOGGERX_JSON_FILE=/var/log/myapp/app.json
export PYLOGGERX_TEXT_FILE=/var/log/myapp/app.log
# Rate limiting
export PYLOGGERX_RATE_LIMIT_ENABLED=true
export PYLOGGERX_RATE_LIMIT_MESSAGES=100
export PYLOGGERX_RATE_LIMIT_PERIOD=60
# Services distants
export PYLOGGERX_ELASTICSEARCH_URL=http://elasticsearch:9200
export PYLOGGERX_LOKI_URL=http://loki:3100
export PYLOGGERX_SENTRY_DSN=https://xxx@sentry.io/xxx
export PYLOGGERX_DATADOG_API_KEY=your_api_key
export PYLOGGERX_SLACK_WEBHOOK=https://hooks.slack.com/services/xxx
```
#### Utilisation des Variables d'Environnement
```python
from pyloggerx.config import load_config
# Charger uniquement depuis les variables d'environnement
config = load_config(from_env=True)
logger = PyLoggerX(**config)
# Ou utiliser directement ConfigLoader
from pyloggerx.config import ConfigLoader
env_config = ConfigLoader.from_env(prefix="PYLOGGERX_")
logger = PyLoggerX(**env_config)
```
#### Exemple Docker Compose
```yaml
version: '3.8'
services:
myapp:
build: .
environment:
PYLOGGERX_LEVEL: INFO
PYLOGGERX_CONSOLE: "true"
PYLOGGERX_COLORS: "false"
PYLOGGERX_JSON_FILE: /var/log/app.json
PYLOGGERX_ELASTICSEARCH_URL: http://elasticsearch:9200
PYLOGGERX_RATE_LIMIT_ENABLED: "true"
PYLOGGERX_RATE_LIMIT_MESSAGES: 100
volumes:
- ./logs:/var/log
depends_on:
- elasticsearch
elasticsearch:
image: elasticsearch:8.11.0
environment:
- discovery.type=single-node
ports:
- "9200:9200"
```
#### Exemple Kubernetes ConfigMap
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: pyloggerx-config
namespace: production
data:
PYLOGGERX_LEVEL: "INFO"
PYLOGGERX_CONSOLE: "true"
PYLOGGERX_COLORS: "false"
PYLOGGERX_ELASTICSEARCH_URL: "http://elasticsearch.logging.svc.cluster.local:9200"
PYLOGGERX_RATE_LIMIT_ENABLED: "true"
PYLOGGERX_RATE_LIMIT_MESSAGES: "100"
---
apiVersion: v1
kind: Secret
metadata:
name: pyloggerx-secrets
namespace: production
type: Opaque
stringData:
PYLOGGERX_SENTRY_DSN: "https://xxx@sentry.io/xxx"
PYLOGGERX_DATADOG_API_KEY: "your_api_key"
PYLOGGERX_SLACK_WEBHOOK: "https://hooks.slack.com/services/xxx"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
template:
spec:
containers:
- name: myapp
image: myapp:1.0.0
envFrom:
- configMapRef:
name: pyloggerx-config
- secretRef:
name: pyloggerx-secrets
```
### Configuration Multi-Sources
Combinez plusieurs sources de configuration avec ordre de priorité.
#### Priorité de Configuration
**Ordre (du plus prioritaire au moins prioritaire):**
1. Variables d'environnement
2. Fichier de configuration
3. Valeurs par défaut
```python
from pyloggerx.config import load_config
# Charger avec priorités
config = load_config(
config_file="config.json", # 2e priorité
from_env=True, # 1re priorité (écrase config_file)
defaults={ # 3e priorité (fallback)
"name": "default-app",
"level": "INFO",
"console": True,
"colors": False
}
)
logger = PyLoggerX(**config)
```
#### Exemple Pratique: Configuration par Environnement
```python
import os
from pyloggerx.config import load_config
# Déterminer le fichier de config selon l'environnement
env = os.getenv("ENVIRONMENT", "development")
config_files = {
"development": "config.dev.json",
"staging": "config.staging.json",
"production": "config.prod.json"
}
# Charger la config appropriée
config = load_config(
config_file=config_files.get(env),
from_env=True, # Permet les overrides par env vars
defaults={"level": "DEBUG" if env == "development" else "INFO"}
)
logger = PyLoggerX(**config)
logger.info(f"Application démarrée en mode {env}")
```
#### Fusion Manuelle de Configurations
```python
import os
from pyloggerx.config import ConfigLoader

# Charger plusieurs configs
environment = os.getenv("ENVIRONMENT", "production")
base_config = ConfigLoader.from_json("config.base.json")
env_config = ConfigLoader.from_json(f"config.{environment}.json")
local_overrides = ConfigLoader.from_json("config.local.json")
env_vars = ConfigLoader.from_env()
# Fusionner dans l'ordre (derniers écrasent les premiers)
merged_config = ConfigLoader.merge_configs(
base_config,
env_config,
local_overrides,
env_vars
)
logger = PyLoggerX(**merged_config)
```
### Validation de Configuration
PyLoggerX valide automatiquement votre configuration.
#### Validation Automatique
```python
from pyloggerx.config import load_config
try:
config = load_config(config_file="config.json")
logger = PyLoggerX(**config)
except ValueError as e:
print(f"Configuration invalide: {e}")
exit(1)
```
#### Validation Manuelle
```python
from pyloggerx.config import ConfigValidator
config = {
"name": "myapp",
"level": "INVALID", # Niveau invalide
"rate_limit_messages": -10 # Valeur négative invalide
}
is_valid, error_message = ConfigValidator.validate(config)
if not is_valid:
print(f"Erreur de configuration: {error_message}")
else:
logger = PyLoggerX(**config)
```
#### Règles de Validation
Le validateur vérifie les règles suivantes (voir l'esquisse après la liste):
1. **Niveau de log**: Doit être DEBUG, INFO, WARNING, ERROR, ou CRITICAL
2. **Rate limiting**:
- `rate_limit_messages` doit être un entier positif
- `rate_limit_period` doit être un nombre positif
3. **Sampling**:
- `sampling_rate` doit être entre 0.0 et 1.0
4. **URLs**:
- Les URLs (Elasticsearch, Loki, webhook) doivent commencer par http/https
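Esquisse illustrant ces règles avec `ConfigValidator.validate` (le texte exact des messages d'erreur peut varier selon la version):
```python
from pyloggerx.config import ConfigValidator

configs = [
    {"level": "VERBOSE"},                               # Niveau invalide
    {"enable_rate_limit": True,
     "rate_limit_messages": 0},                         # Doit être un entier positif
    {"enable_sampling": True, "sampling_rate": 1.5},    # Doit être entre 0.0 et 1.0
    {"elasticsearch_url": "elasticsearch:9200"},        # URL sans http/https
    {"level": "INFO", "sampling_rate": 0.5},            # Configuration valide
]

for config in configs:
    is_valid, error_message = ConfigValidator.validate(config)
    print(f"{config} -> {'OK' if is_valid else error_message}")
```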
### Configurations Prédéfinies
PyLoggerX inclut des templates de configuration prêts à l'emploi.
#### Templates Disponibles
```python
from pyloggerx.config import EXAMPLE_CONFIGS
# Afficher les templates disponibles
print(list(EXAMPLE_CONFIGS.keys()))
# ['basic', 'production', 'development']
# Utiliser un template
logger = PyLoggerX(**EXAMPLE_CONFIGS['production'])
```
#### Template "Basic"
Configuration simple pour démarrer rapidement.
```python
from pyloggerx.config import EXAMPLE_CONFIGS
logger = PyLoggerX(**EXAMPLE_CONFIGS['basic'])
```
**Configuration:**
```json
{
"name": "myapp",
"level": "INFO",
"console": true,
"colors": true,
"json_file": "logs/app.json",
"text_file": "logs/app.txt"
}
```
#### Template "Production"
Configuration optimisée pour environnements de production.
```python
logger = PyLoggerX(**EXAMPLE_CONFIGS['production'])
```
**Configuration:**
```json
{
"name": "myapp",
"level": "WARNING",
"console": false,
"colors": false,
"json_file": "/var/log/myapp/app.json",
"text_file": "/var/log/myapp/app.txt",
"max_bytes": 52428800,
"backup_count": 10,
"enable_rate_limit": true,
"rate_limit_messages": 100,
"rate_limit_period": 60,
"performance_tracking": true
}
```
#### Template "Development"
Configuration détaillée pour développement.
```python
logger = PyLoggerX(**EXAMPLE_CONFIGS['development'])
```
**Configuration:**
```json
{
"name": "myapp-dev",
"level": "DEBUG",
"console": true,
"colors": true,
"include_caller": true,
"json_file": "logs/dev.json",
"enable_rate_limit": false,
"performance_tracking": true
}
```
#### Sauvegarder un Template
```python
from pyloggerx.config import save_example_config, load_config
# Sauvegarder un template dans un fichier
save_example_config("production", "my-config.json")
# Puis charger et personnaliser
config = load_config(config_file="my-config.json")
config['name'] = "my-custom-app"
logger = PyLoggerX(**config)
```
#### Créer un Template Personnalisé
```python
from pyloggerx.config import EXAMPLE_CONFIGS
# Partir d'un template existant
custom_config = EXAMPLE_CONFIGS['production'].copy()
# Personnaliser
custom_config.update({
'name': 'my-microservice',
'elasticsearch_url': 'http://my-es:9200',
'slack_webhook': 'https://hooks.slack.com/xxx',
'enrichment_data': {
'service': 'payment-api',
'team': 'backend',
'region': 'eu-west-1'
}
})
logger = PyLoggerX(**custom_config)
```
---
## Monitoring et Métriques
PyLoggerX intègre un système complet de monitoring pour surveiller la santé, les performances et les métriques de votre système de logging.
### Collecteur de Métriques
Le `MetricsCollector` collecte et agrège automatiquement les métriques de logging.
#### Utilisation Basique
```python
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import MetricsCollector
# Créer un collecteur
collector = MetricsCollector(window_size=300) # Fenêtre de 5 minutes
# Créer le logger à surveiller
logger = PyLoggerX(
    name="monitored_app",
    console=True
)
# Enregistrer des logs dans le collecteur (ici manuellement, à titre d'illustration)
collector.record_log(level="INFO", size=256)
collector.record_log(level="ERROR", size=512)
# Obtenir les métriques
metrics = collector.get_metrics()
print(f"Uptime: {metrics['uptime_seconds']}s")
print(f"Total logs: {metrics['total_logs']}")
print(f"Logs/seconde: {metrics['logs_per_second']}")
print(f"Taille moyenne: {metrics['avg_log_size_bytes']} bytes")
print(f"Par niveau: {metrics['logs_per_level']}")
print(f"Erreurs récentes: {metrics['recent_errors']}")
```
#### Métriques Collectées
Le collecteur suit:
1. **Uptime**: Temps écoulé depuis le démarrage
2. **Total des logs**: Nombre total de logs émis
3. **Logs par niveau**: Compteurs pour DEBUG, INFO, WARNING, ERROR, CRITICAL
4. **Taux de logs**: Logs par seconde (fenêtre glissante)
5. **Taille des logs**: Taille moyenne des logs en bytes
6. **Erreurs**: Historique des erreurs récentes
#### Enregistrement d'Erreurs
```python
try:
risky_operation()
except Exception as e:
collector.record_error(str(e))
logger.exception("Opération échouée")
```
#### Réinitialisation des Métriques
```python
# Réinitialiser toutes les métriques
collector.reset()
```
#### Fenêtre de Temps Personnalisée
```python
# Collecteur avec fenêtre de 10 minutes
collector = MetricsCollector(window_size=600)
# Métriques sur les 10 dernières minutes
metrics = collector.get_metrics()
```
### Gestionnaire d'Alertes
Le `AlertManager` permet de définir des règles d'alerte basées sur les métriques.
#### Configuration des Alertes
```python
from pyloggerx.monitoring import AlertManager
# Créer le gestionnaire
alert_mgr = AlertManager()
# Définir une règle d'alerte
alert_mgr.add_rule(
name="high_error_rate",
condition=lambda m: m['logs_per_level'].get('ERROR', 0) > 100,
cooldown=300, # 5 minutes entre alertes
message="Taux d'erreurs élevé détecté (>100 erreurs)"
)
# Définir un callback
def send_alert(alert_name, message):
print(f"ALERTE [{alert_name}]: {message}")
# Envoyer email, Slack, PagerDuty, etc.
alert_mgr.add_callback(send_alert)
# Vérifier les métriques périodiquement
metrics = collector.get_metrics()
alert_mgr.check_metrics(metrics)
```
#### Règles d'Alerte Prédéfinies
```python
# Taux d'erreurs élevé
alert_mgr.add_rule(
name="high_error_rate",
condition=lambda m: m['logs_per_level'].get('ERROR', 0) > 100,
cooldown=300
)
# Taux de logs excessif
alert_mgr.add_rule(
name="high_log_rate",
condition=lambda m: m['logs_per_second'] > 100,
cooldown=300
)
# Circuit breaker ouvert
alert_mgr.add_rule(
name="exporter_circuit_breaker",
condition=lambda m: any(
exp.get('circuit_breaker_open', False)
for exp in m.get('exporter_metrics', {}).values()
),
cooldown=600
)
# Taille de queue élevée
alert_mgr.add_rule(
name="high_queue_size",
condition=lambda m: any(
exp.get('queue_size', 0) > 1000
for exp in m.get('exporter_metrics', {}).values()
),
cooldown=300
)
# Utilisation mémoire
alert_mgr.add_rule(
name="high_memory",
condition=lambda m: m.get('avg_log_size_bytes', 0) > 10000,
cooldown=600
)
```
#### Callbacks Multiples
```python
def slack_alert(alert_name, message):
requests.post(
slack_webhook,
json={"text": f":warning: {message}"}
)
def email_alert(alert_name, message):
send_email(
to="ops@example.com",
subject=f"Alert: {alert_name}",
body=message
)
def log_alert(alert_name, message):
logger.critical(message, alert=alert_name)
# Ajouter tous les callbacks
alert_mgr.add_callback(slack_alert)
alert_mgr.add_callback(email_alert)
alert_mgr.add_callback(log_alert)
```
#### Cooldown Personnalisé
Le cooldown évite le spam d'alertes:
```python
# Alerte critique avec cooldown court (1 minute)
alert_mgr.add_rule(
name="critical_error",
condition=lambda m: m['logs_per_level'].get('CRITICAL', 0) > 0,
cooldown=60, # 1 minute
message="Erreur critique détectée!"
)
# Alerte warning avec cooldown long (10 minutes)
alert_mgr.add_rule(
name="performance_degradation",
condition=lambda m: m['logs_per_second'] > 50,
cooldown=600, # 10 minutes
message="Performance dégradée détectée"
)
```
### Monitoring de Santé
Le `HealthMonitor` surveille automatiquement la santé du logger en arrière-plan.
#### Configuration et Démarrage
```python
from pyloggerx.monitoring import HealthMonitor
logger = PyLoggerX(name="production_app")
# Créer le monitor
monitor = HealthMonitor(
logger=logger,
check_interval=60 # Vérifier toutes les 60 secondes
)
# Démarrer le monitoring
monitor.start()
# Le monitoring s'exécute en arrière-plan dans un thread séparé
# et vérifie automatiquement la santé toutes les 60 secondes
# ... votre application tourne ...
# Arrêter le monitoring proprement
monitor.stop()
```
#### Obtenir le Statut
```python
# Obtenir le statut complet
status = monitor.get_status()
print(f"Monitoring actif: {status['running']}")
print(f"Métriques: {status['metrics']}")
print(f"Stats du logger: {status['logger_stats']}")
print(f"Santé du logger: {status['logger_health']}")
```
#### Alertes Automatiques
Le `HealthMonitor` inclut des règles d'alerte par défaut:
1. **high_error_rate**: Plus de 100 erreurs
2. **high_log_rate**: Plus de 100 logs/seconde
3. **exporter_circuit_breaker**: Circuit breaker d'un exporter ouvert
```python
# Ajouter un callback pour les alertes
def handle_alert(alert_name, message):
print(f"ALERTE: {message}")
# Envoyer notification
monitor.alert_manager.add_callback(handle_alert)
```
#### Règles Personnalisées
```python
# Ajouter vos propres règles d'alerte
monitor.alert_manager.add_rule(
name="custom_metric",
condition=lambda m: your_custom_check(m),
cooldown=300,
message="Condition personnalisée déclenchée"
)
```
#### Exemple Complet: Application avec Monitoring
```python
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor
import time
import signal
import sys
# Configuration du logger
logger = PyLoggerX(
name="monitored_service",
console=True,
json_file="logs/service.json",
elasticsearch_url="http://elasticsearch:9200",
performance_tracking=True
)
# Configuration du monitoring
monitor = HealthMonitor(logger, check_interval=30)
def alert_callback(alert_name, message):
"""Callback pour les alertes"""
logger.critical(f"ALERTE: {message}", alert=alert_name)
# Ici: envoyer email, Slack, PagerDuty, etc.
monitor.alert_manager.add_callback(alert_callback)
# Ajout de règles personnalisées
monitor.alert_manager.add_rule(
name="service_overload",
condition=lambda m: m['logs_per_second'] > 50,
cooldown=180,
message="Service surchargé: >50 logs/sec"
)
def shutdown_handler(signum, frame):
"""Arrêt propre"""
logger.info("Arrêt du service...")
monitor.stop()
logger.close()
sys.exit(0)
signal.signal(signal.SIGINT, shutdown_handler)
signal.signal(signal.SIGTERM, shutdown_handler)
def main():
# Démarrer le monitoring
monitor.start()
logger.info("Service et monitoring démarrés")
# Votre application
while True:
try:
# Logique métier
logger.info("Traitement en cours...")
time.sleep(10)
except Exception as e:
logger.exception("Erreur dans la boucle principale")
if __name__ == "__main__":
main()
```
### Dashboard Console
Affichez un dashboard de monitoring directement dans la console.
#### Affichage Simple
```python
from pyloggerx.monitoring import print_dashboard
logger = PyLoggerX(name="myapp")
# Afficher le dashboard
print_dashboard(logger, clear_screen=True)
```
#### Sortie du Dashboard
```
============================================================
PyLoggerX Monitoring Dashboard
============================================================
Timestamp: 2025-01-15 10:30:45
📊 General Statistics:
Total Logs: 15423
Exporters: 3
Filters: 2
🚦 Rate Limiting:
Enabled: Yes
Max Messages: 100
Period: 60s
Rejections: 45
🏥 Exporter Health:
Overall Healthy: ✅ Yes
✅ elasticsearch
✅ loki
❌ sentry
📈 Exporter Metrics:
elasticsearch:
Exported: 12450
Failed: 23
Dropped: 0
Queue: 15
⚠️ Circuit Breaker: OPEN (failures: 5)
loki:
Exported: 11890
Failed: 5
Dropped: 0
Queue: 8
sentry:
Exported: 345
Failed: 102
Dropped: 0
Queue: 0
⚠️ Circuit Breaker: OPEN (failures: 10)
============================================================
```
#### Dashboard en Boucle
```python
import time
from pyloggerx.monitoring import print_dashboard
logger = PyLoggerX(name="myapp")
# Rafraîchir le dashboard toutes les 5 secondes
try:
while True:
print_dashboard(logger, clear_screen=True)
time.sleep(5)
except KeyboardInterrupt:
print("\nDashboard arrêté")
```
#### Dashboard Personnalisé
```python
import os
import time
def custom_dashboard(logger):
"""Dashboard personnalisé"""
stats = logger.get_stats()
health = logger.healthcheck()
os.system('cls' if os.name == 'nt' else 'clear')
print("=" * 60)
print("Mon Application - Dashboard")
print("=" * 60)
# Santé globale
status_icon = "✅" if health['healthy'] else "❌"
print(f"\n{status_icon} Statut: {'HEALTHY' if health['healthy'] else 'UNHEALTHY'}")
# Métriques clés
print(f"\n📊 Métriques:")
print(f" Total logs: {stats['total_logs']:,}")
if 'logs_per_level' in stats:
print(f"\n📈 Par niveau:")
for level, count in sorted(stats['logs_per_level'].items()):
print(f" {level}: {count:,}")
# Exporters
if health['exporters']:
print(f"\n🔌 Exporters:")
for name, is_healthy in health['exporters'].items():
icon = "✅" if is_healthy else "❌"
print(f" {icon} {name}")
print("\n" + "=" * 60)
# Utilisation
while True:
custom_dashboard(logger)
time.sleep(5)
```
---
## Intégrations Monitoring
### Intégration Prometheus
Exposez les métriques PyLoggerX à Prometheus pour un monitoring centralisé.
#### Installation
```bash
pip install prometheus-client
```
#### Configuration Basique
```python
from pyloggerx import PyLoggerX
from prometheus_client import Counter, Gauge, Histogram, start_http_server
import time
# Métriques Prometheus
logs_total = Counter(
'pyloggerx_logs_total',
'Total number of logs',
['level', 'logger']
)
logs_per_second = Gauge(
'pyloggerx_logs_per_second',
'Current logs per second',
['logger']
)
export_errors = Counter(
'pyloggerx_export_errors_total',
'Total export errors',
['exporter', 'logger']
)
queue_size = Gauge(
'pyloggerx_queue_size',
'Current queue size',
['exporter', 'logger']
)
log_size_bytes = Histogram(
'pyloggerx_log_size_bytes',
'Log size distribution',
['logger']
)
# Logger
logger = PyLoggerX(
name="prometheus_app",
console=True,
json_file="logs/app.json",
elasticsearch_url="http://elasticsearch:9200"
)
# Dernières valeurs observées, pour n'incrémenter que le delta
# (les Counter Prometheus sont cumulatifs : inc(count) à chaque passage compterait en double)
_last_level_counts = {}
_last_failed_counts = {}

def update_prometheus_metrics():
    """Mettre à jour les métriques Prometheus depuis PyLoggerX"""
    stats = logger.get_stats()
    # Logs par niveau
    if 'logs_per_level' in stats:
        for level, count in stats['logs_per_level'].items():
            delta = count - _last_level_counts.get(level, 0)
            if delta > 0:
                logs_total.labels(level=level, logger=logger.name).inc(delta)
            _last_level_counts[level] = count
    # Métriques d'export
    if 'exporter_metrics' in stats:
        for exporter_name, metrics in stats['exporter_metrics'].items():
            # Erreurs d'export
            failed = metrics.get('failed_logs', 0)
            delta = failed - _last_failed_counts.get(exporter_name, 0)
            if delta > 0:
                export_errors.labels(
                    exporter=exporter_name,
                    logger=logger.name
                ).inc(delta)
            _last_failed_counts[exporter_name] = failed
            # Taille de queue
            queue_size.labels(
                exporter=exporter_name,
                logger=logger.name
            ).set(metrics.get('queue_size', 0))
# Démarrer le serveur de métriques Prometheus
start_http_server(8000)
logger.info("Serveur de métriques Prometheus démarré", port=8000)
# Mettre à jour les métriques périodiquement
while True:
update_prometheus_metrics()
time.sleep(15)
```
#### Configuration Prometheus
**prometheus.yml:**
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'pyloggerx'
static_configs:
- targets: ['localhost:8000']
labels:
app: 'myapp'
environment: 'production'
```
#### Queries Prometheus Utiles
```promql
# Taux de logs par seconde
rate(pyloggerx_logs_total[5m])
# Erreurs par exporter
sum by (exporter) (pyloggerx_export_errors_total)
# Taille de queue par exporter
pyloggerx_queue_size
# Percentile 95 de la taille des logs
histogram_quantile(0.95, pyloggerx_log_size_bytes_bucket)
# Logs par niveau (graphique empilé)
sum by (level) (rate(pyloggerx_logs_total[5m]))
```
#### Alertes Prometheus
**alerts.yml:**
```yaml
groups:
- name: pyloggerx_alerts
interval: 30s
rules:
- alert: HighErrorRate
expr: rate(pyloggerx_logs_total{level="ERROR"}[5m]) > 10
for: 5m
labels:
severity: warning
annotations:
summary: "Taux d'erreurs élevé détecté"
description: "{{ $labels.logger }} a un taux d'erreurs de {{ $value }}/s"
- alert: ExporterDown
expr: pyloggerx_queue_size > 1000
for: 10m
labels:
severity: critical
annotations:
summary: "Exporter surchargé"
description: "{{ $labels.exporter }} a une queue de {{ $value }} messages"
- alert: HighExportFailureRate
expr: rate(pyloggerx_export_errors_total[5m]) > 1
for: 5m
labels:
severity: warning
annotations:
summary: "Échecs d'export fréquents"
description: "{{ $labels.exporter }} échoue à {{ $value }}/s"
```
### Intégration Grafana
Créez des dashboards visuels pour surveiller PyLoggerX.
#### Dashboard JSON pour Grafana
```json
{
"dashboard": {
"title": "PyLoggerX Monitoring",
"panels": [
{
"title": "Logs par Seconde",
"type": "graph",
"targets": [
{
"expr": "rate(pyloggerx_logs_total[5m])"
}
]
},
{
"title": "Logs par Niveau",
"type": "graph",
"targets": [
{
"expr": "sum by (level) (rate(pyloggerx_logs_total[5m]))"
}
],
"stack": true
},
{
"title": "Taille de Queue",
"type": "graph",
"targets": [
{
"expr": "pyloggerx_queue_size"
}
]
},
{
"title": "Erreurs d'Export",
"type": "stat",
"targets": [
{
"expr": "sum(pyloggerx_export_errors_total)"
}
]
}
]
}
}
```
#### Variables de Dashboard
```json
{
"templating": {
"list": [
{
"name": "logger",
"type": "query",
"query": "label_values(pyloggerx_logs_total, logger)"
},
{
"name": "exporter",
"type": "query",
"query": "label_values(pyloggerx_queue_size, exporter)"
}
]
}
}
```
#### Panels Recommandés
1. **Logs par Seconde**: Graph avec `rate(pyloggerx_logs_total[5m])`
2. **Distribution par Niveau**: Stacked graph avec `sum by (level)`
3. **Santé des Exporters**: Stat panel avec `up` metric
4. **Taille de Queue**: Graph avec `pyloggerx_queue_size`
5. **Erreurs d'Export**: Graph avec `rate(pyloggerx_export_errors_total[5m])`
6. **Latence des Logs**: Histogram avec `pyloggerx_log_size_bytes`
### Métriques Personnalisées
Créez vos propres métriques métier.
#### Métriques Applicatives
```python
from pyloggerx import PyLoggerX
from prometheus_client import Counter, Histogram
import time
logger = PyLoggerX(name="business_app")
# Métriques métier
user_logins = Counter('app_user_logins_total', 'Total user logins')
order_value = Histogram('app_order_value_dollars', 'Order values')
api_requests = Counter('app_api_requests_total', 'API requests', ['endpoint', 'status'])
processing_time = Histogram('app_processing_seconds', 'Processing time', ['operation'])
def handle_login(user_id):
"""Gérer une connexion utilisateur"""
start = time.time()
try:
# Logique de connexion
logger.info("Connexion utilisateur", user_id=user_id)
user_logins.inc()
duration = time.time() - start
processing_time.labels(operation='login').observe(duration)
return True
except Exception as e:
logger.error("Échec de connexion", user_id=user_id, error=str(e))
return False
def process_order(order_id, amount):
"""Traiter une commande"""
logger.info("Traitement commande", order_id=order_id, amount=amount)
# Enregistrer la valeur
order_value.observe(amount)
# Logique métier
# ...
def api_endpoint(endpoint):
    """Décorateur (avec argument) pour tracker les appels API"""
    def decorator(func):
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                result = func(*args, **kwargs)
                status = 'success'
                logger.info("API appelée", endpoint=endpoint, status=status)
                return result
            except Exception as e:
                status = 'error'
                logger.error("API échouée", endpoint=endpoint, error=str(e))
                raise
            finally:
                duration = time.time() - start
                api_requests.labels(endpoint=endpoint, status=status).inc()
                processing_time.labels(operation=endpoint).observe(duration)
        return wrapper
    return decorator
@api_endpoint('/api/users')
def get_users():
# Logique API
return {"users": []}
```
#### Métriques Combinées
```python
from prometheus_client import Gauge
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import MetricsCollector

collector = MetricsCollector()
logger = PyLoggerX(name="app")

# Gauges Prometheus alimentées depuis les métriques PyLoggerX
logs_total_gauge = Gauge('pyloggerx_total_logs', "Total des logs émis")
logs_per_second_gauge = Gauge('pyloggerx_logs_rate', "Logs par seconde")
avg_log_size_gauge = Gauge('pyloggerx_avg_log_size', "Taille moyenne des logs (bytes)")
logs_by_level = Gauge('pyloggerx_logs_by_level', "Logs par niveau", ['level'])
error_count = Gauge('pyloggerx_recent_error_count', "Nombre d'erreurs récentes")
# Fonction périodique pour exporter vers Prometheus
def export_pyloggerx_metrics():
"""Exporter les métriques PyLoggerX vers Prometheus"""
metrics = collector.get_metrics()
# Métriques système
logs_total_gauge.set(metrics['total_logs'])
logs_per_second_gauge.set(metrics['logs_per_second'])
avg_log_size_gauge.set(metrics['avg_log_size_bytes'])
# Logs par niveau
for level, count in metrics['logs_per_level'].items():
logs_by_level.labels(level=level).set(count)
# Erreurs récentes
error_count.set(len(metrics['recent_errors']))
# Appeler périodiquement
import threading
import time
def metrics_updater():
while True:
export_pyloggerx_metrics()
time.sleep(15)
metrics_thread = threading.Thread(target=metrics_updater, daemon=True)
metrics_thread.start()
```
---
## Exemples Complets
### Exemple 1: Application Web avec Monitoring Complet
```python
"""
Application FastAPI avec monitoring PyLoggerX complet
"""
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor, print_dashboard
from pyloggerx.config import load_config
from prometheus_client import make_asgi_app, Counter, Histogram
import time
import uuid
import os
# Charger la configuration
config = load_config(
config_file="config.json",
from_env=True,
defaults={"name": "web-api", "level": "INFO"}
)
# Initialiser le logger
logger = PyLoggerX(**config)
# Initialiser le monitoring
monitor = HealthMonitor(logger, check_interval=30)
# Métriques Prometheus
http_requests = Counter(
'http_requests_total',
'Total HTTP requests',
['method', 'endpoint', 'status']
)
http_duration = Histogram(
'http_request_duration_seconds',
'HTTP request duration',
['method', 'endpoint']
)
# Callbacks d'alerte
def alert_to_slack(alert_name, message):
"""Envoyer alerte à Slack"""
if os.getenv('SLACK_WEBHOOK'):
import requests
requests.post(
os.getenv('SLACK_WEBHOOK'),
json={"text": f":warning: {message}"}
)
monitor.alert_manager.add_callback(alert_to_slack)
# Application FastAPI
app = FastAPI(title="API avec Monitoring")
# Monter le endpoint Prometheus
metrics_app = make_asgi_app()
app.mount("/metrics", metrics_app)
@app.on_event("startup")
async def startup():
"""Démarrage de l'application"""
monitor.start()
logger.info("Application et monitoring démarrés")
@app.on_event("shutdown")
async def shutdown():
"""Arrêt de l'application"""
monitor.stop()
logger.info("Application arrêtée")
logger.close()
@app.middleware("http")
async def logging_middleware(request: Request, call_next):
"""Middleware de logging et métriques"""
request_id = str(uuid.uuid4())
start_time = time.time()
# Contexte de logging
with logger.context(request_id=request_id):
logger.info(
"Requête reçue",
method=request.method,
path=request.url.path,
client=request.client.host
)
try:
response = await call_next(request)
duration = time.time() - start_time
# Métriques
http_requests.labels(
method=request.method,
endpoint=request.url.path,
status=response.status_code
).inc()
http_duration.labels(
method=request.method,
endpoint=request.url.path
).observe(duration)
logger.info(
"Requête complétée",
method=request.method,
path=request.url.path,
status=response.status_code,
duration_ms=duration * 1000
)
response.headers["X-Request-ID"] = request_id
return response
except Exception as e:
duration = time.time() - start_time
logger.exception(
"Erreur requête",
method=request.method,
path=request.url.path,
duration_ms=duration * 1000
)
raise
@app.get("/")
async def root():
"""Endpoint racine"""
return {"status": "ok", "service": "web-api"}
@app.get("/health")
async def health():
"""Health check détaillé"""
health_status = logger.healthcheck()
stats = logger.get_stats()
monitor_status = monitor.get_status()
return {
"healthy": health_status['healthy'],
"logger": {
"total_logs": stats['total_logs'],
"exporters": health_status['exporters']
},
"monitor": {
"running": monitor_status['running'],
"metrics": monitor_status['metrics']
}
}
@app.get("/stats")
async def stats():
"""Statistiques détaillées"""
return {
"logger": logger.get_stats(),
"monitor": monitor.get_status()
}
@app.get("/dashboard")
async def dashboard():
"""Dashboard en format texte"""
import io
import sys
# Capturer la sortie du dashboard
old_stdout = sys.stdout
sys.stdout = buffer = io.StringIO()
print_dashboard(logger, clear_screen=False)
sys.stdout = old_stdout
output = buffer.getvalue()
return {"dashboard": output}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Exemple 2: Worker Batch avec Configuration Avancée
```python
"""
Worker de traitement batch avec configuration complète
"""
import os
import time
import sys
from datetime import datetime
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config, save_example_config
from pyloggerx.monitoring import HealthMonitor, MetricsCollector
# Générer une config si elle n'existe pas
config_file = "worker-config.json"
if not os.path.exists(config_file):
save_example_config("production", config_file)
print(f"Configuration créée: {config_file}")
# Charger la configuration
config = load_config(
config_file=config_file,
from_env=True,
defaults={
"name": "batch-worker",
"level": "INFO",
"performance_tracking": True
}
)
# Personnaliser la config
config.update({
"enrichment_data": {
"worker_id": os.getenv("WORKER_ID", "worker-1"),
"datacenter": os.getenv("DATACENTER", "dc1"),
"start_time": datetime.now().isoformat()
}
})
# Initialiser le logger
logger = PyLoggerX(**config)
# Monitoring
collector = MetricsCollector(window_size=600) # 10 minutes
monitor = HealthMonitor(logger, check_interval=60)
# Callbacks d'alerte
def email_alert(alert_name, message):
logger.critical(f"ALERTE: {message}", alert=alert_name)
# Implémenter l'envoi d'email
def metrics_alert(alert_name, message):
"""Logger les alertes pour les métriques"""
collector.record_error(f"{alert_name}: {message}")
monitor.alert_manager.add_callback(email_alert)
monitor.alert_manager.add_callback(metrics_alert)
# Règles d'alerte personnalisées
monitor.alert_manager.add_rule(
name="processing_slow",
condition=lambda m: m.get('avg_duration', 0) > 5.0,
cooldown=300,
message="Traitement lent détecté (>5s en moyenne)"
)
class BatchWorker:
def __init__(self):
self.running = False
self.processed = 0
self.errors = 0
def start(self):
"""Démarrer le worker"""
self.running = True
monitor.start()
logger.info("Worker démarré", config=config)
try:
while self.running:
self.process_batch()
time.sleep(10)
except KeyboardInterrupt:
logger.info("Arrêt demandé")
finally:
self.stop()
def process_batch(self):
"""Traiter un batch"""
with logger.timer("Batch Processing"):
try:
# Récupérer les jobs
jobs = self.fetch_jobs()
if not jobs:
logger.debug("Aucun job à traiter")
return
logger.info("Traitement batch", job_count=len(jobs))
# Traiter chaque job
for job in jobs:
self.process_job(job)
# Enregistrer les métriques
collector.record_log("INFO", size=len(str(jobs)))
except Exception as e:
self.errors += 1
collector.record_error(str(e))
logger.exception("Erreur batch")
def fetch_jobs(self):
"""Récupérer les jobs depuis la queue"""
# Simuler la récupération
import random
return [{"id": i} for i in range(random.randint(0, 10))]
def process_job(self, job):
"""Traiter un job"""
job_id = job["id"]
try:
logger.debug("Traitement job", job_id=job_id)
# Simuler le traitement
time.sleep(0.1)
self.processed += 1
logger.info("Job complété", job_id=job_id)
except Exception as e:
self.errors += 1
logger.error("Job échoué", job_id=job_id, error=str(e))
def stop(self):
"""Arrêter le worker"""
self.running = False
monitor.stop()
# Stats finales
stats = logger.get_performance_stats()
metrics = collector.get_metrics()
logger.info(
"Worker arrêté",
processed=self.processed,
errors=self.errors,
total_duration=stats.get('total_duration', 0),
avg_duration=stats.get('avg_duration', 0),
logs_per_second=metrics.get('logs_per_second', 0)
)
logger.close()
if __name__ == "__main__":
worker = BatchWorker()
worker.start()
```
### Exemple 3: Microservice avec Dashboard Live
```python
"""
Microservice avec dashboard de monitoring en temps réel
"""
import threading
import time
import os
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor, print_dashboard
from pyloggerx.config import load_config
# Configuration
config = load_config(
from_env=True,
defaults={
"name": "microservice",
"level": "INFO",
"console": True,
"json_file": "logs/service.json",
"performance_tracking": True,
"enable_rate_limit": True,
"rate_limit_messages": 100,
"rate_limit_period": 60
}
)
logger = PyLoggerX(**config)
monitor = HealthMonitor(logger, check_interval=30)
# Dashboard en thread séparé
def dashboard_updater():
"""Mettre à jour le dashboard en continu"""
while True:
try:
print_dashboard(logger, clear_screen=True)
time.sleep(5)
except KeyboardInterrupt:
break
# Démarrer le dashboard dans un thread séparé
if os.getenv("SHOW_DASHBOARD", "false").lower() == "true":
dashboard_thread = threading.Thread(target=dashboard_updater, daemon=True)
dashboard_thread.start()
logger.info("Dashboard activé")
# Service principal
monitor.start()
logger.info("Microservice démarré")
try:
# Boucle principale du service
while True:
# Simuler du travail
logger.info("Traitement en cours")
time.sleep(10)
# Simuler des erreurs occasionnelles
import random
if random.random() < 0.1:
logger.error("Erreur simulée", error_code=random.randint(500, 599))
except KeyboardInterrupt:
logger.info("Arrêt du service")
finally:
monitor.stop()
logger.close()
```
---
## Référence Config
### ConfigLoader
Classe pour charger des configurations depuis différentes sources.
```python
class ConfigLoader:
@staticmethod
def from_json(filepath: str) -> Dict[str, Any]
@staticmethod
def from_yaml(filepath: str) -> Dict[str, Any]
@staticmethod
def from_env(prefix: str = "PYLOGGERX_") -> Dict[str, Any]
@staticmethod
def from_file(filepath: str) -> Dict[str, Any]
@staticmethod
def merge_configs(*configs: Dict[str, Any]) -> Dict[str, Any]
```
### ConfigValidator
Classe pour valider les configurations.
```python
class ConfigValidator:
VALID_LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
@staticmethod
def validate(config: Dict[str, Any]) -> tuple[bool, Optional[str]]
```
### load_config
Fonction pour charger une configuration complète.
```python
def load_config(
config_file: Optional[str] = None,
from_env: bool = True,
defaults: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]
```
### MetricsCollector
Collecteur de métriques de logging.
```python
class MetricsCollector:
def __init__(self, window_size: int = 300)
def record_log(self, level: str, size: int = 0) -> None
def record_error(self, error: str) -> None
def get_metrics(self) -> Dict[str, Any]
def reset(self) -> None
```
**Métriques retournées par get_metrics():**
- `uptime_seconds`: Temps écoulé depuis le démarrage
- `logs_per_level`: Dict avec compteurs par niveau
- `logs_per_second`: Taux de logs (fenêtre glissante)
- `avg_log_size_bytes`: Taille moyenne des logs
- `recent_errors`: Liste des 10 dernières erreurs
- `total_logs`: Total de logs émis
### AlertManager
Gestionnaire d'alertes basées sur métriques.
```python
class AlertManager:
def add_rule(
self,
name: str,
condition: Callable[[Dict[str, Any]], bool],
cooldown: int = 300,
message: Optional[str] = None
) -> None
def add_callback(self, callback: Callable[[str, str], None]) -> None
def check_metrics(self, metrics: Dict[str, Any]) -> None
```
### HealthMonitor
Moniteur de santé automatique.
```python
class HealthMonitor:
def __init__(
self,
logger: PyLoggerX,
check_interval: int = 60
)
def start(self) -> None
def stop(self) -> None
def get_status(self) -> Dict[str, Any]
# Propriétés
@property
def metrics_collector: MetricsCollector
@property
def alert_manager: AlertManager
```
**Status retourné par get_status():**
- `running`: Statut du monitoring
- `metrics`: Métriques du collecteur
- `logger_stats`: Statistiques du logger
- `logger_health`: Santé du logger
### print_dashboard
Fonction pour afficher le dashboard console.
```python
def print_dashboard(
logger: PyLoggerX,
clear_screen: bool = True
) -> None
```
---
## Variables d'Environnement Complètes
Liste exhaustive des variables d'environnement supportées:
```bash
# Général
PYLOGGERX_NAME=myapp
PYLOGGERX_LEVEL=INFO
# Sortie
PYLOGGERX_CONSOLE=true
PYLOGGERX_COLORS=false
# Fichiers
PYLOGGERX_JSON_FILE=/var/log/app.json
PYLOGGERX_TEXT_FILE=/var/log/app.log
# Rate Limiting
PYLOGGERX_RATE_LIMIT_ENABLED=true
PYLOGGERX_RATE_LIMIT_MESSAGES=100
PYLOGGERX_RATE_LIMIT_PERIOD=60
# Sampling
PYLOGGERX_SAMPLING_ENABLED=false
PYLOGGERX_SAMPLING_RATE=1.0
# Elasticsearch
PYLOGGERX_ELASTICSEARCH_URL=http://elasticsearch:9200
PYLOGGERX_ELASTICSEARCH_INDEX=logs
PYLOGGERX_ELASTICSEARCH_USERNAME=elastic
PYLOGGERX_ELASTICSEARCH_PASSWORD=changeme
# Loki
PYLOGGERX_LOKI_URL=http://loki:3100
# Sentry
PYLOGGERX_SENTRY_DSN=https://xxx@sentry.io/xxx
PYLOGGERX_SENTRY_ENVIRONMENT=production
PYLOGGERX_SENTRY_RELEASE=1.0.0
# Datadog
PYLOGGERX_DATADOG_API_KEY=your_api_key
PYLOGGERX_DATADOG_SITE=datadoghq.com
PYLOGGERX_DATADOG_SERVICE=myapp
# Slack
PYLOGGERX_SLACK_WEBHOOK=https://hooks.slack.com/services/xxx
PYLOGGERX_SLACK_CHANNEL=#alerts
PYLOGGERX_SLACK_USERNAME=PyLoggerX Bot
# Webhook
PYLOGGERX_WEBHOOK_URL=https://example.com/logs
PYLOGGERX_WEBHOOK_METHOD=POST
# Performance
PYLOGGERX_PERFORMANCE_TRACKING=true
PYLOGGERX_INCLUDE_CALLER=false
# Export
PYLOGGERX_BATCH_SIZE=100
PYLOGGERX_BATCH_TIMEOUT=5
PYLOGGERX_ASYNC_EXPORT=true
```
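Ces variables (préfixe `PYLOGGERX_`) sont lues par `ConfigLoader.from_env()` et donc par `load_config(from_env=True)`. Esquisse rapide, avec des valeurs définies en ligne uniquement pour la démonstration :
```python
import os
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config

# En pratique, ces variables proviennent de l'environnement du conteneur ou du shell
os.environ["PYLOGGERX_NAME"] = "env-demo"
os.environ["PYLOGGERX_LEVEL"] = "WARNING"
os.environ["PYLOGGERX_CONSOLE"] = "true"

config = load_config(from_env=True)  # lit les variables PYLOGGERX_*
logger = PyLoggerX(**config)
logger.warning("Configuration chargée depuis l'environnement")
```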
---
## Meilleures Pratiques
### Configuration
1. **Utiliser des fichiers de config par environnement**
```python
config = load_config(
config_file=f"config.{os.getenv('ENV', 'dev')}.json",
from_env=True
)
```
2. **Ne jamais commiter les secrets**
- Utiliser des variables d'environnement
- Utiliser des outils comme Vault, AWS Secrets Manager
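Par exemple, injecter les secrets au démarrage depuis l'environnement plutôt que de les stocker dans le fichier de configuration (esquisse ; les noms de variables `ES_PASSWORD` et `SENTRY_DSN` sont à adapter) :
```python
import os
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config

config = load_config(config_file="config.json", from_env=True)

# Secrets injectés par Vault, Kubernetes Secrets, CI/CD, etc.
config.update({
    "elasticsearch_password": os.environ["ES_PASSWORD"],
    "sentry_dsn": os.environ.get("SENTRY_DSN"),
})

logger = PyLoggerX(**config)
```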
3. **Valider la configuration au démarrage**
```python
try:
config = load_config(config_file="config.json")
except ValueError as e:
print(f"Config invalide: {e}")
sys.exit(1)
```
4. **Documenter les configurations**
- Créer des exemples de configuration
- Documenter chaque paramètre
### Monitoring
1. **Toujours monitorer en production**
```python
monitor = HealthMonitor(logger, check_interval=60)
monitor.start()
```
2. **Configurer des alertes pertinentes**
- Pas trop d'alertes (fatigue d'alerte)
- Pas trop peu (problèmes non détectés)
3. **Exporter vers un système centralisé**
- Prometheus + Grafana
- Datadog
- CloudWatch
4. **Tester les alertes régulièrement**
```python
# Test mensuel
logger.critical("TEST: Alerte critique", test=True)
```
### Performance
1. **Activer le rate limiting en production**
```python
config['enable_rate_limit'] = True
config['rate_limit_messages'] = 100
```
2. **Utiliser l'export asynchrone**
```python
config['async_export'] = True
```
3. **Ajuster la taille des batchs**
```python
config['batch_size'] = 50 # Plus petit pour latence faible
config['batch_timeout'] = 2 # Timeout court
```
4. **Monitorer les métriques de performance**
```python
stats = logger.get_performance_stats()
if stats['avg_duration'] > 1.0:
logger.warning("Performance dégradée")
```
## Exemples Réels
### 1. Application Web (FastAPI)
```python
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import JSONResponse
from pyloggerx import PyLoggerX
import os
import time
import uuid
app = FastAPI()
# Configuration du logger
logger = PyLoggerX(
name="fastapi_app",
console=True,
json_file="logs/web.json",
# Export distant
elasticsearch_url="http://elasticsearch:9200",
sentry_dsn=os.getenv("SENTRY_DSN"),
enrichment_data={
"service": "web-api",
"version": "2.0.0",
"environment": os.getenv("ENVIRONMENT", "production")
}
)
@app.middleware("http")
async def log_requests(request: Request, call_next):
"""Middleware de logging pour toutes les requêtes"""
request_id = str(uuid.uuid4())
start_time = time.time()
# Ajouter request_id au contexte
with logger.context(request_id=request_id):
logger.info("Requête reçue",
method=request.method,
path=request.url.path,
client_ip=request.client.host,
user_agent=request.headers.get("user-agent")
)
try:
response = await call_next(request)
duration = time.time() - start_time
logger.info("Requête complétée",
method=request.method,
path=request.url.path,
status_code=response.status_code,
duration_ms=duration * 1000
)
# Ajouter request_id au header de réponse
response.headers["X-Request-ID"] = request_id
return response
except Exception as e:
duration = time.time() - start_time
logger.exception("Erreur de requête",
method=request.method,
path=request.url.path,
duration_ms=duration * 1000,
error_type=type(e).__name__
)
raise
@app.exception_handler(HTTPException)
async def http_exception_handler(request: Request, exc: HTTPException):
"""Gestionnaire d'exceptions HTTP"""
logger.warning("Exception HTTP",
status_code=exc.status_code,
detail=exc.detail,
path=request.url.path
)
return JSONResponse(
status_code=exc.status_code,
content={"error": exc.detail}
)
@app.get("/")
def root():
logger.info("Endpoint racine accédé")
return {"status": "ok", "service": "web-api"}
@app.get("/health")
def health_check():
"""Health check avec métriques"""
import psutil
cpu = psutil.cpu_percent()
memory = psutil.virtual_memory().percent
status = "healthy"
if cpu > 80 or memory > 80:
status = "degraded"
logger.warning("Service dégradé",
cpu_percent=cpu,
memory_percent=memory
)
logger.info("Health check",
status=status,
cpu_percent=cpu,
memory_percent=memory
)
return {
"status": status,
"metrics": {
"cpu_percent": cpu,
"memory_percent": memory
}
}
@app.on_event("startup")
async def startup_event():
logger.info("Application démarrée",
workers=os.getenv("WEB_CONCURRENCY", 1))
@app.on_event("shutdown")
async def shutdown_event():
logger.info("Application arrêtée")
logger.flush() # Vider tous les buffers
logger.close()
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8080)
```
### 2. Pipeline de Traitement de Données
```python
from pyloggerx import PyLoggerX
import os
import sys
import pandas as pd
logger = PyLoggerX(
name="data_pipeline",
console=True,
json_file="logs/pipeline.json",
performance_tracking=True,
# Alertes pour les échecs
slack_webhook=os.getenv("SLACK_WEBHOOK"),
enrichment_data={
"pipeline": "data-processing",
"version": "1.0.0"
}
)
class DataPipeline:
def __init__(self, input_file):
self.input_file = input_file
self.df = None
def run(self):
"""Exécuter le pipeline complet"""
logger.info("Pipeline démarré", input_file=self.input_file)
try:
self.load_data()
self.validate_data()
self.clean_data()
self.transform_data()
self.export_data()
# Statistiques finales
stats = logger.get_performance_stats()
logger.info("Pipeline complété avec succès",
total_duration=stats["total_duration"],
operations=stats["total_operations"])
except Exception as e:
logger.exception("Pipeline échoué", error=str(e))
sys.exit(1)
def load_data(self):
"""Charger les données"""
with logger.timer("Chargement des données"):
try:
self.df = pd.read_csv(self.input_file)
logger.info("Données chargées",
rows=len(self.df),
columns=len(self.df.columns),
memory_mb=self.df.memory_usage(deep=True).sum() / 1024**2)
except Exception as e:
logger.error("Échec du chargement",
file=self.input_file,
error=str(e))
raise
def validate_data(self):
"""Valider les données"""
with logger.timer("Validation des données"):
required_columns = ['id', 'timestamp', 'value']
missing_columns = [col for col in required_columns
if col not in self.df.columns]
if missing_columns:
logger.error("Colonnes manquantes",
missing=missing_columns,
found=list(self.df.columns))
raise ValueError(f"Colonnes manquantes: {missing_columns}")
logger.info("Validation réussie")
def clean_data(self):
"""Nettoyer les données"""
with logger.timer("Nettoyage des données"):
initial_rows = len(self.df)
# Supprimer les doublons
duplicates = self.df.duplicated().sum()
self.df = self.df.drop_duplicates()
# Supprimer les valeurs nulles
null_counts = self.df.isnull().sum()
self.df = self.df.dropna()
removed_rows = initial_rows - len(self.df)
logger.info("Données nettoyées",
initial_rows=initial_rows,
removed_rows=removed_rows,
duplicates_removed=duplicates,
remaining_rows=len(self.df),
null_values=null_counts.to_dict())
if removed_rows > initial_rows * 0.5:
logger.warning("Plus de 50% des lignes supprimées",
percent_removed=removed_rows/initial_rows*100)
def transform_data(self):
"""Transformer les données"""
with logger.timer("Transformation des données"):
# Conversion de types
self.df['timestamp'] = pd.to_datetime(self.df['timestamp'])
self.df['value'] = pd.to_numeric(self.df['value'], errors='coerce')
# Ajout de colonnes calculées
self.df['year'] = self.df['timestamp'].dt.year
self.df['month'] = self.df['timestamp'].dt.month
logger.info("Transformation complétée",
new_columns=['year', 'month'])
def export_data(self):
"""Exporter les données"""
output_file = "output/processed_data.csv"
with logger.timer("Export des données"):
self.df.to_csv(output_file, index=False)
file_size_mb = os.path.getsize(output_file) / 1024**2
logger.info("Données exportées",
output_file=output_file,
rows=len(self.df),
file_size_mb=file_size_mb)
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python pipeline.py <input_file>")
sys.exit(1)
pipeline = DataPipeline(sys.argv[1])
pipeline.run()
```
### 3. Microservice avec Monitoring Complet
```python
from pyloggerx import PyLoggerX
from fastapi import FastAPI
import psutil
import time
import os
app = FastAPI()
# Logger principal
logger = PyLoggerX(
name="microservice",
console=True,
json_file="logs/service.json",
# Stack d'observabilité complète
elasticsearch_url=os.getenv("ES_URL"),
elasticsearch_index="microservice-logs",
loki_url=os.getenv("LOKI_URL"),
loki_labels={"service": "payment-processor", "env": "prod"},
sentry_dsn=os.getenv("SENTRY_DSN"),
datadog_api_key=os.getenv("DD_API_KEY"),
slack_webhook=os.getenv("SLACK_WEBHOOK"),
# Configuration avancée
batch_size=50,
enable_sampling=True,
sampling_rate=0.5, # 50% en production
enrichment_data={
"service": "payment-processor",
"version": os.getenv("APP_VERSION", "1.0.0"),
"instance": os.getenv("HOSTNAME")
}
)
@app.get("/health")
def health_check():
"""Health check détaillé"""
cpu = psutil.cpu_percent(interval=1)
memory = psutil.virtual_memory().percent
disk = psutil.disk_usage('/').percent
# Vérifier les dépendances
dependencies = {
"database": check_database(),
"redis": check_redis(),
"external_api": check_external_api()
}
all_healthy = all(dependencies.values())
status = "healthy" if all_healthy and cpu < 80 and memory < 80 else "degraded"
log_level = "info" if status == "healthy" else "warning"
getattr(logger, log_level)("Health check",
status=status,
cpu_percent=cpu,
memory_percent=memory,
disk_percent=disk,
dependencies=dependencies
)
return {
"status": status,
"metrics": {
"cpu_percent": cpu,
"memory_percent": memory,
"disk_percent": disk
},
"dependencies": dependencies
}
@app.get("/metrics")
def get_metrics():
"""Métriques de logging et performance"""
log_stats = logger.get_stats()
perf_stats = logger.get_performance_stats()
return {
"logging": log_stats,
"performance": perf_stats,
"system": {
"cpu_percent": psutil.cpu_percent(),
"memory_percent": psutil.virtual_memory().percent,
"disk_percent": psutil.disk_usage('/').percent
}
}
def check_database():
try:
# Vérifier la connexion DB
# db.execute("SELECT 1")
return True
except:
logger.error("Database health check failed")
return False
def check_redis():
try:
# Vérifier Redis
# redis_client.ping()
return True
except:
logger.error("Redis health check failed")
return False
def check_external_api():
try:
# Vérifier l'API externe
# requests.get("https://api.example.com/health", timeout=2)
return True
except:
logger.error("External API health check failed")
return False
```
### 4. Worker Asynchrone avec Gestion d'Erreurs
```python
from pyloggerx import PyLoggerX
import asyncio
import aiohttp
import os
import random
from typing import List, Dict
logger = PyLoggerX(
name="async_worker",
json_file="logs/worker.json",
performance_tracking=True,
# Alertes
slack_webhook=os.getenv("SLACK_WEBHOOK"),
enrichment_data={
"worker_type": "async-processor",
"version": "1.0.0"
}
)
class AsyncWorker:
def __init__(self, worker_id: str):
self.worker_id = worker_id
self.is_running = False
self.processed_count = 0
self.error_count = 0
async def start(self):
"""Démarrer le worker"""
self.is_running = True
logger.info("Worker démarré", worker_id=self.worker_id)
while self.is_running:
try:
await self.process_batch()
await asyncio.sleep(5)
except Exception as e:
logger.exception("Erreur worker", worker_id=self.worker_id)
await asyncio.sleep(10)
async def process_batch(self):
"""Traiter un batch de jobs"""
with logger.timer(f"Batch-{self.worker_id}"):
jobs = await self.fetch_jobs()
if not jobs:
logger.debug("Aucun job à traiter", worker_id=self.worker_id)
return
logger.info("Batch récupéré",
worker_id=self.worker_id,
job_count=len(jobs))
# Traiter en parallèle
tasks = [self.process_job(job) for job in jobs]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Compter succès/échecs
successes = sum(1 for r in results if not isinstance(r, Exception))
failures = len(results) - successes
self.processed_count += successes
self.error_count += failures
logger.info("Batch traité",
worker_id=self.worker_id,
successes=successes,
failures=failures,
total_processed=self.processed_count,
total_errors=self.error_count)
async def fetch_jobs(self) -> List[Dict]:
"""Récupérer les jobs depuis la queue"""
# Simuler la récupération de jobs
await asyncio.sleep(0.1)
return [{"id": f"job_{i}", "data": random.random()}
for i in range(random.randint(0, 10))]
async def process_job(self, job: Dict):
"""Traiter un job individuel"""
job_id = job["id"]
try:
logger.debug("Traitement job",
worker_id=self.worker_id,
job_id=job_id)
# Simuler le traitement
await asyncio.sleep(random.uniform(0.1, 0.5))
# Simuler des échecs aléatoires (10%)
if random.random() < 0.1:
raise Exception("Job processing failed")
logger.info("Job complété",
worker_id=self.worker_id,
job_id=job_id,
status="success")
except Exception as e:
logger.error("Job échoué",
worker_id=self.worker_id,
job_id=job_id,
error=str(e),
status="failed")
raise
def stop(self):
"""Arrêter le worker"""
self.is_running = False
logger.info("Worker arrêté",
worker_id=self.worker_id,
total_processed=self.processed_count,
total_errors=self.error_count)
async def main():
# Démarrer plusieurs workers
workers = [AsyncWorker(f"worker-{i}") for i in range(3)]
tasks = [worker.start() for worker in workers]
try:
await asyncio.gather(*tasks)
except KeyboardInterrupt:
logger.info("Arrêt demandé")
for worker in workers:
worker.stop()
if __name__ == "__main__":
asyncio.run(main())
```
---
## Meilleures Pratiques
### 1. Structurer les Logs pour le Parsing
Toujours utiliser des paires clé-valeur pour les données structurées:
```python
# BON - structuré
logger.info("Connexion utilisateur",
user_id=123,
username="john",
ip="192.168.1.1",
auth_method="oauth2")
# MAUVAIS - non structuré
logger.info(f"User john (ID: 123) logged in from 192.168.1.1 using OAuth2")
```
### 2. Utiliser les Niveaux de Log Appropriés
```python
# DEBUG - Informations de diagnostic détaillées
logger.debug("Cache hit", key="user:123", ttl=3600)
# INFO - Messages informatifs généraux
logger.info("Service démarré", port=8080, workers=4)
# WARNING - Quelque chose d'inattendu mais pas critique
logger.warning("Utilisation mémoire élevée",
percent=85,
threshold=80)
# ERROR - Une erreur s'est produite mais le service continue
logger.error("Requête échouée",
query="SELECT...",
error=str(e),
retry_count=3)
# CRITICAL - Le service ne peut pas continuer
logger.critical("Connexion base de données perdue",
retries_exhausted=True,
last_error=str(e))
```
### 3. Inclure du Contexte dans les Logs
```python
# Contexte utilisateur
logger.add_enrichment(
user_id=user.id,
session_id=session.id,
request_id=request_id,
ip_address=request.remote_addr
)
# Tous les logs suivants incluront ce contexte
logger.info("Appel API", endpoint="/api/users")
```
### 4. Tracking de Performance
```python
# Utiliser les timers pour les opérations critiques
with logger.timer("Requête Base de Données"):
result = expensive_query()
# Logger les durées pour analyse
start = time.time()
process_data()
duration = time.time() - start
if duration > 1.0: # Seuil de performance
logger.warning("Opération lente",
operation="process_data",
duration_seconds=duration)
```
### 5. Gestion des Exceptions
```python
try:
risky_operation()
except SpecificException as e:
logger.exception("Opération échouée",
operation="data_sync",
error_type=type(e).__name__,
recoverable=True)
# Inclut automatiquement la stack trace
except Exception as e:
logger.critical("Erreur inattendue",
operation="data_sync",
error=str(e))
# Alerte immédiate via Slack/Sentry
```
### 6. Logging Adapté aux Conteneurs
```python
# Pour les applications conteneurisées
logger = PyLoggerX(
name="container-app",
console=True, # Vers stdout/stderr
colors=False, # IMPORTANT pour les collecteurs
json_file=None, # Pas de fichiers en conteneur
format_string='{"time":"%(asctime)s","level":"%(levelname)s","msg":"%(message)s"}'
)
```
### 7. Correlation IDs pour Systèmes Distribués
```python
import uuid
def handle_request(request):
# Propager ou créer un correlation ID
correlation_id = request.headers.get(
'X-Correlation-ID',
str(uuid.uuid4())
)
with logger.context(correlation_id=correlation_id):
logger.info("Requête reçue",
method=request.method,
path=request.path)
# Passer aux services downstream
response = downstream_service.call(
data,
headers={'X-Correlation-ID': correlation_id}
)
logger.info("Requête complétée",
status=response.status_code)
return response
```
### 8. Health Checks et Monitoring
```python
@app.get("/health")
def health():
checks = {
"database": check_db(),
"cache": check_redis(),
"queue": check_queue()
}
all_healthy = all(checks.values())
if not all_healthy:
failed = [k for k, v in checks.items() if not v]
logger.error("Health check échoué",
failed_components=failed)
return {
"status": "healthy" if all_healthy else "unhealthy",
"checks": checks
}
```
### 9. Protection des Données Sensibles
```python
import hashlib
def login(username, password):
# MAUVAIS - Ne jamais logger de données sensibles
# logger.info("Tentative de connexion",
# username=username,
# password=password)
# BON - Hash ou masquage
logger.info("Tentative de connexion",
username=username,
password_hash=hashlib.sha256(password.encode()).hexdigest()[:8])
```
### 10. Rotation des Logs pour Services Long-Running
```python
# Éviter le remplissage du disque
logger = PyLoggerX(
name="long-running-service",
json_file="logs/service.json",
max_bytes=10 * 1024 * 1024, # 10MB
backup_count=5, # Garder 5 fichiers
rotation_when="midnight" # + rotation quotidienne
)
```
---
## Tests
### Tests Unitaires
```python
import pytest
import json
from pathlib import Path
from pyloggerx import PyLoggerX
def test_json_logging(tmp_path):
"""Tester la sortie JSON"""
log_file = tmp_path / "test.json"
logger = PyLoggerX(
name="test_logger",
json_file=str(log_file),
console=False
)
logger.info("Message de test",
test_id=123,
status="success")
assert log_file.exists()
with open(log_file) as f:
log_entry = json.loads(f.readline())
assert log_entry["message"] == "Message de test"
assert log_entry["test_id"] == 123
assert log_entry["status"] == "success"
def test_performance_tracking(tmp_path):
"""Tester le tracking de performance"""
logger = PyLoggerX(
name="perf_test",
performance_tracking=True,
console=False
)
import time
with logger.timer("Opération Test"):
time.sleep(0.1)
stats = logger.get_performance_stats()
assert stats["total_operations"] == 1
assert stats["avg_duration"] >= 0.1
def test_enrichment(tmp_path):
"""Tester l'enrichissement de contexte"""
log_file = tmp_path / "test.json"
logger = PyLoggerX(
name="enrichment_test",
json_file=str(log_file),
console=False,
enrichment_data={
"app_version": "1.0.0",
"environment": "test"
}
)
logger.info("Test avec enrichissement")
with open(log_file) as f:
log_entry = json.loads(f.readline())
assert log_entry["app_version"] == "1.0.0"
assert log_entry["environment"] == "test"
def test_log_levels():
"""Tester les différents niveaux de log"""
logger = PyLoggerX(name="level_test", console=False)
# Ne devrait pas lever d'exception
logger.debug("Debug message")
logger.info("Info message")
logger.warning("Warning message")
logger.error("Error message")
logger.critical("Critical message")
@pytest.fixture
def logger(tmp_path):
"""Fixture pour logger"""
return PyLoggerX(
name="test",
json_file=str(tmp_path / "test.json"),
console=False
)
def test_remote_logging_mock(logger, monkeypatch):
"""Tester l'export distant (avec mock)"""
import requests
# Mock de la requête HTTP
class MockResponse:
status_code = 200
def mock_post(*args, **kwargs):
return MockResponse()
monkeypatch.setattr(requests, "post", mock_post)
# Logger avec webhook
logger_remote = PyLoggerX(
name="remote_test",
webhook_url="http://example.com/logs",
console=False
)
logger_remote.info("Test remote")
```
### Tests d'Intégration
```python
import pytest
import requests
from pyloggerx import PyLoggerX
@pytest.fixture(scope="module")
def app_with_logging():
"""Démarrer l'application avec logging"""
logger = PyLoggerX(
name="integration_test",
json_file="logs/integration.json"
)
logger.info("Tests d'intégration démarrés")
# Démarrer votre app ici
yield app
logger.info("Tests d'intégration terminés")
logger.close()
def test_api_endpoint(app_with_logging):
"""Tester un endpoint API avec logging"""
response = requests.get("http://localhost:8080/api/health")
assert response.status_code == 200
assert response.json()["status"] == "healthy"
def test_error_handling(app_with_logging):
"""Tester la gestion d'erreurs"""
response = requests.get("http://localhost:8080/api/invalid")
assert response.status_code == 404
```
---
## Dépannage
### Problème: Les logs n'apparaissent pas
**Solution:**
```python
# Vérifier le niveau de log
logger.set_level("DEBUG")
# Vérifier les handlers
print(logger.logger.handlers)
# Forcer le flush
logger.flush()
# S'assurer que le répertoire existe
import os
os.makedirs("logs", exist_ok=True)
```
### Problème: Erreurs de permissions fichier
**Solution:**
```python
# Utiliser un chemin absolu
import os
log_path = os.path.join(os.getcwd(), "logs", "app.json")
# S'assurer des permissions d'écriture
os.makedirs(os.path.dirname(log_path), exist_ok=True, mode=0o755)
# Vérifier les permissions
if not os.access(os.path.dirname(log_path), os.W_OK):
raise PermissionError(f"Pas de permission d'écriture: {log_path}")
```
### Problème: Couleurs ne fonctionnent pas dans les conteneurs
**Solution:**
```python
# Désactiver les couleurs pour les conteneurs
logger = PyLoggerX(
name="container-app",
colors=False # Important pour les collecteurs de logs
)
```
### Problème: Fichiers de log trop volumineux
**Solution:**
```python
# Utiliser la rotation
logger = PyLoggerX(
json_file="logs/app.json",
max_bytes=5 * 1024 * 1024, # 5MB
backup_count=3, # Garder 3 backups
rotation_when="midnight" # + rotation quotidienne
)
```
### Problème: Surcharge de performance
**Solution:**
```python
# Augmenter le niveau de log en production
logger.set_level("WARNING")
# Désactiver le tracking de performance
logger = PyLoggerX(performance_tracking=False)
# Activer l'échantillonnage
logger = PyLoggerX(
enable_sampling=True,
sampling_rate=0.1 # Garder 10% des logs
)
# Utiliser l'export asynchrone
logger = PyLoggerX(
async_export=True,
queue_size=1000
)
```
### Problème: Logs distants non envoyés
**Solution:**
```python
# Activer le logging de debug
import logging
logging.basicConfig(level=logging.DEBUG)
# Vérifier la connectivité
import requests
try:
response = requests.get("http://elasticsearch:9200")
print(f"ES Status: {response.status_code}")
except Exception as e:
print(f"Erreur de connexion: {e}")
# Forcer le flush avant la fermeture
logger.flush()
logger.close()
# Vérifier la configuration du batch
logger = PyLoggerX(
elasticsearch_url="http://elasticsearch:9200",
batch_size=10, # Batch plus petit pour test
batch_timeout=1 # Timeout court
)
```
### Problème: Utilisation mémoire élevée avec logging distant
**Solution:**
```python
# Ajuster les paramètres de batch
logger = PyLoggerX(
elasticsearch_url="http://elasticsearch:9200",
batch_size=50, # Batches plus petits
batch_timeout=2, # Timeout plus court
queue_size=500, # Queue plus petite
async_export=True # Export asynchrone
)
# Activer l'échantillonnage pour réduire le volume
logger = PyLoggerX(
enable_sampling=True,
sampling_rate=0.5 # 50% des logs
)
```
---
## Référence API
### Classe PyLoggerX
#### Constructeur
```python
PyLoggerX(
name: str = "PyLoggerX",
level: str = "INFO",
console: bool = True,
colors: bool = True,
json_file: Optional[str] = None,
text_file: Optional[str] = None,
max_bytes: int = 10 * 1024 * 1024,
backup_count: int = 5,
rotation_when: str = "midnight",
rotation_interval: int = 1,
format_string: Optional[str] = None,
include_caller: bool = False,
performance_tracking: bool = False,
enrichment_data: Optional[Dict[str, Any]] = None,
# Elasticsearch
elasticsearch_url: Optional[str] = None,
elasticsearch_index: str = "pyloggerx",
elasticsearch_username: Optional[str] = None,
elasticsearch_password: Optional[str] = None,
# Loki
loki_url: Optional[str] = None,
loki_labels: Optional[Dict[str, str]] = None,
# Sentry
sentry_dsn: Optional[str] = None,
sentry_environment: str = "production",
# Datadog
datadog_api_key: Optional[str] = None,
datadog_site: str = "datadoghq.com",
# Slack
slack_webhook: Optional[str] = None,
slack_channel: Optional[str] = None,
# Webhook
webhook_url: Optional[str] = None,
webhook_method: str = "POST",
# Avancé
enable_sampling: bool = False,
sampling_rate: float = 1.0,
batch_size: int = 100,
batch_timeout: int = 5,
enable_rate_limit: bool = True,
rate_limit_messages: int = 2,
rate_limit_period: int = 10,
async_export: bool = True
)
```
#### Méthodes de Logging
```python
debug(message: str, **kwargs) -> None
info(message: str, **kwargs) -> None
warning(message: str, **kwargs) -> None
error(message: str, **kwargs) -> None
critical(message: str, **kwargs) -> None
exception(message: str, **kwargs) -> None # Inclut la traceback
```
#### Méthodes de Configuration
```python
set_level(level: str) -> None
add_context(**kwargs) -> None
add_enrichment(**kwargs) -> None
add_filter(filter_obj: logging.Filter) -> None
remove_filter(filter_obj: logging.Filter) -> None
```
#### Méthodes de Performance
```python
timer(operation_name: str) -> ContextManager
get_performance_stats() -> Dict[str, Any]
clear_performance_stats() -> None
```
#### Méthodes Utilitaires
```python
get_stats() -> Dict[str, Any]
flush() -> None # Vider tous les buffers
close() -> None # Fermer tous les handlers
context(**kwargs) -> ContextManager # Contexte temporaire
```
### Logger Global
```python
from pyloggerx import log
# Utiliser le logger global par défaut
log.info("Logging rapide sans configuration")
log.error("Erreur", error_code=500)
```
---
## Contribution
Les contributions sont les bienvenues ! Suivez ces étapes :
### Configuration de Développement
```bash
# Cloner le dépôt
git clone https://github.com/yourusername/pyloggerx.git
cd pyloggerx
# Créer un environnement virtuel
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Installer en mode développement
pip install -e ".[dev]"
# Installer les hooks pre-commit
pre-commit install
```
### Exécuter les Tests
```bash
# Tous les tests
pytest
# Avec couverture
pytest --cov=pyloggerx --cov-report=html
# Tests spécifiques
pytest tests/test_core.py -v
# Tests avec sortie
pytest -v -s
```
### Style de Code
```bash
# Formater le code
black pyloggerx/
isort pyloggerx/
# Vérifier le style
flake8 pyloggerx/
pylint pyloggerx/
# Vérification de types
mypy pyloggerx/
```
### Soumettre des Modifications
1. Fork le dépôt
2. Créer une branche: `git checkout -b feature/amazing-feature`
3. Faire vos modifications
4. Ajouter des tests pour les nouvelles fonctionnalités
5. S'assurer que les tests passent: `pytest`
6. Commit: `git commit -m 'Add amazing feature'`
7. Push: `git push origin feature/amazing-feature`
8. Ouvrir une Pull Request
### Directives de Contribution
- Suivre PEP 8 pour le style de code
- Ajouter des docstrings pour toutes les fonctions publiques
- Écrire des tests pour toutes les nouvelles fonctionnalités
- Mettre à jour la documentation si nécessaire
- S'assurer que tous les tests passent avant de soumettre
---
## Roadmap
### Version 3.1.0 (Planifiée - Q2 2025)
- Support de logging asynchrone natif
- Formatters additionnels (Logfmt, GELF)
- Support AWS CloudWatch Logs
- Support Google Cloud Logging
- Métriques intégrées (histogrammes, compteurs)
### Version 3.5.0 (Planifiée - Q3 2025)
- Dashboard de monitoring intégré
- Support Apache Kafka pour logs streaming
- Compression automatique des logs archivés
- Support de chiffrement des logs sensibles
### Version 4.0.0 (Futur)
- Tracing distribué intégré (OpenTelemetry complet)
- Machine learning pour détection d'anomalies
- Alerting avancé avec règles personnalisées
- Support multi-tenant
---
## FAQ
**Q: PyLoggerX est-il prêt pour la production ?**
R: Oui, PyLoggerX suit les meilleures pratiques de logging Python et est utilisé en production par plusieurs entreprises.
**Q: PyLoggerX fonctionne-t-il avec le logging existant ?**
R: Oui, PyLoggerX encapsule le module logging standard de Python et est compatible avec les handlers existants.
**Q: Comment faire une rotation des logs par temps plutôt que par taille ?**
R: Utilisez le paramètre `rotation_when` avec des valeurs comme "midnight", "H" (horaire), ou "D" (quotidien).
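Par exemple, pour une rotation quotidienne à minuit en conservant une semaine d'historique (esquisse) :

```python
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="batch-job",
    text_file="logs/job.log",
    rotation_when="midnight",   # Rotation à minuit
    rotation_interval=1,        # Chaque jour
    backup_count=7              # Conserver 7 fichiers
)
```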
**Q: Puis-je logger vers plusieurs fichiers simultanément ?**
R: Oui, spécifiez à la fois `json_file` et `text_file`.
**Q: PyLoggerX est-il thread-safe ?**
R: Oui, PyLoggerX utilise le module logging de Python qui est thread-safe.
**Q: Comment intégrer avec les outils d'agrégation de logs existants ?**
R: Utilisez le format JSON qui est compatible avec la plupart des outils (ELK, Splunk, Datadog, etc.) ou les exporters directs.
**Q: Quelle est la surcharge de performance ?**
R: L'impact est minimal. Utilisez l'échantillonnage et l'export asynchrone pour les applications à très haut volume.
**Q: Les logs sont-ils envoyés de manière synchrone ou asynchrone ?**
R: Par défaut, les exports distants sont asynchrones (non-bloquants). Vous pouvez désactiver avec `async_export=False`.
**Q: Comment gérer les logs sensibles ?**
R: Ne loggez jamais de données sensibles directement. Utilisez le hashing ou le masquage.
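Esquisse de masquage et de hashing avant logging (les fonctions utilitaires ci-dessous sont hypothétiques, elles ne font pas partie de PyLoggerX) :

```python
import hashlib
from pyloggerx import log

def masquer_email(email: str) -> str:
    """Masque la partie locale de l'email : a***@example.com."""
    local, _, domaine = email.partition("@")
    return f"{local[:1]}***@{domaine}" if domaine else "***"

def hacher_identifiant(valeur: str) -> str:
    """Hash SHA-256 tronqué : permet de corréler sans exposer la valeur."""
    return hashlib.sha256(valeur.encode()).hexdigest()[:12]

log.info("Utilisateur connecté",
         email=masquer_email("alice@example.com"),
         user_ref=hacher_identifiant("user-123"))
# À ne jamais faire : log.info("Connexion", password=password)
```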
**Q: Puis-je utiliser PyLoggerX dans des fonctions AWS Lambda ?**
R: Oui, mais désactivez les fichiers locaux et utilisez console ou export distant uniquement.
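Esquisse d'une configuration adaptée à Lambda (sortie console uniquement, reprise par CloudWatch Logs ; les noms de variables d'environnement AWS sont ceux fournis par le runtime Lambda) :

```python
import os
from pyloggerx import PyLoggerX

# Logger initialisé hors du handler pour être réutilisé entre invocations
logger = PyLoggerX(
    name="lambda-fn",
    level=os.getenv("LOG_LEVEL", "INFO"),
    console=True,     # stdout -> CloudWatch Logs
    colors=False,     # Pas de couleurs dans CloudWatch
    json_file=None,   # Pas de fichier local (filesystem en lecture seule hors /tmp)
    text_file=None,
    enrichment_data={"function": os.getenv("AWS_LAMBDA_FUNCTION_NAME", "unknown")}
)

def handler(event, context):
    logger.info("Invocation reçue", request_id=context.aws_request_id)
    return {"statusCode": 200}
```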
---
## Licence
Ce projet est sous licence MIT :
```
MIT License
Copyright (c) 2025 PyLoggerX Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
---
## Support & Communauté
- **Documentation**: [https://pyloggerx.readthedocs.io](https://pyloggerx.readthedocs.io)
- **GitHub Issues**: [https://github.com/yourusername/pyloggerx/issues](https://github.com/yourusername/pyloggerx/issues)
- **Discussions**: [https://github.com/yourusername/pyloggerx/discussions](https://github.com/yourusername/pyloggerx/discussions)
- **PyPI**: [https://pypi.org/project/pyloggerx/](https://pypi.org/project/pyloggerx/)
- **Stack Overflow**: Tag `pyloggerx`
- **Discord**: [https://discord.gg/pyloggerx](https://discord.gg/pyloggerx)
---
## Remerciements
- Construit sur le module `logging` standard de Python
- Inspiré par des bibliothèques modernes comme structlog et loguru
- Merci à tous les contributeurs et utilisateurs
- Remerciements spéciaux à la communauté DevOps pour les retours
---
## Changelog
### v1.0.0 (2025-09-15)
**Fonctionnalités Majeures**
- Support de logging distant (Elasticsearch, Loki, Sentry, Datadog, Slack)
- Échantillonnage de logs pour applications à haut volume
- Limitation de débit (rate limiting)
- Filtrage avancé (niveau, pattern, personnalisé)
- Traitement par batch pour exports distants
- Support webhook personnalisé
- Export asynchrone non-bloquant
- Enrichissement de contexte amélioré
**Améliorations**
- Performance optimisée pour exports distants
- Meilleure gestion des erreurs d'export
- Documentation étendue avec exemples DevOps
- Support amélioré pour Kubernetes et conteneurs
**Nouvelles Fonctionnalités**
- Tracking de performance avec timers
- Formatters personnalisés
- Rotation des logs (taille et temps)
- Enrichissement de contexte global
- Meilleure gestion des exceptions
**Corrections**
- Correction de fuites mémoire dans certains scénarios
- Amélioration de la gestion des reconnexions
- Correction de problèmes de rotation de fichiers
---
**Fait avec soin pour la communauté Python et DevOps**
Raw data
{
"_id": null,
"home_page": "https://github.com/Moesthetics-code/pyloggerx",
"name": "pyloggerx",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "logging, colorful, json, rotation, modern",
"author": "Mohamed NDIAYE",
"author_email": "Mohamed NDIAYE <mintok2000@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/02/62/de502f7ae5a8252bdb03e25abcae73e0f62a5de6591af925b5beaf56cf97/pyloggerx-1.0.0.tar.gz",
"platform": null,
"description": "# PyLoggerX\r\n\r\n[](https://badge.fury.io/py/pyloggerx)\r\n[](https://pypi.org/project/pyloggerx/)\r\n[](https://opensource.org/licenses/MIT)\r\n\r\n**Biblioth\u00e8que de logging moderne, color\u00e9e et riche en fonctionnalit\u00e9s pour Python avec logging structur\u00e9, tracking de performance et logging distant.**\r\n\r\nPyLoggerX est un wrapper puissant qui \u00e9tend le module logging standard de Python avec une sortie console \u00e9l\u00e9gante, du logging JSON structur\u00e9, une rotation automatique des logs, et du logging distant vers des services populaires comme Elasticsearch, Grafana Loki, Sentry, Datadog, et plus encore. Con\u00e7u pour les workflows DevOps modernes et les applications cloud-native.\r\n\r\n---\r\n\r\n## Table des Mati\u00e8res\r\n\r\n- [Fonctionnalit\u00e9s](#fonctionnalit\u00e9s)\r\n- [Installation](#installation)\r\n- [Quick Start](#quick-start)\r\n- [Int\u00e9gration DevOps](#int\u00e9gration-devops)\r\n - [Docker & Kubernetes](#docker--kubernetes)\r\n - [Pipelines CI/CD](#pipelines-cicd)\r\n - [Stack d'Observabilit\u00e9](#stack-dobservabilit\u00e9)\r\n - [Infrastructure as Code](#infrastructure-as-code)\r\n- [Guide d'Utilisation Complet](#guide-dutilisation-complet)\r\n- [Logging Distant](#logging-distant-v30)\r\n- [Fonctionnalit\u00e9s Avanc\u00e9es](#fonctionnalit\u00e9s-avanc\u00e9es)\r\n- [R\u00e9f\u00e9rence de Configuration](#r\u00e9f\u00e9rence-de-configuration)\r\n- [Configuration Avanc\u00e9e](#configuration-avanc\u00e9e)\r\n - [Chargement depuis Fichiers](#chargement-depuis-fichiers)\r\n - [Configuration par Variables d'Environnement](#configuration-par-variables-denvironnement)\r\n - [Configuration Multi-Sources](#configuration-multi-sources)\r\n - [Validation de Configuration](#validation-de-configuration)\r\n - [Configurations Pr\u00e9d\u00e9finies](#configurations-pr\u00e9d\u00e9finies)\r\n- [Monitoring et M\u00e9triques](#monitoring-et-m\u00e9triques)\r\n - [Collecteur de M\u00e9triques](#collecteur-de-m\u00e9triques)\r\n - [Gestionnaire d'Alertes](#gestionnaire-dalertes)\r\n - [Monitoring de Sant\u00e9](#monitoring-de-sant\u00e9)\r\n - [Dashboard Console](#dashboard-console)\r\n- [Int\u00e9grations Monitoring](#int\u00e9grations-monitoring)\r\n - [Prometheus](#int\u00e9gration-prometheus)\r\n - [Grafana](#int\u00e9gration-grafana)\r\n - [Custom Metrics](#m\u00e9triques-personnalis\u00e9es)\r\n- [Exemples Complets](#exemples-complets)\r\n- [R\u00e9f\u00e9rence Config](#r\u00e9f\u00e9rence-config)\r\n- [Exemples R\u00e9els](#exemples-r\u00e9els)\r\n- [Meilleures Pratiques](#meilleures-pratiques)\r\n- [R\u00e9f\u00e9rence API](#r\u00e9f\u00e9rence-api)\r\n- [Tests](#tests)\r\n- [D\u00e9pannage](#d\u00e9pannage)\r\n- [Contribution](#contribution)\r\n- [Licence](#licence)\r\n\r\n---\r\n\r\n## Fonctionnalit\u00e9s\r\n\r\n### Fonctionnalit\u00e9s Core\r\n- **Sortie Console Color\u00e9e** - Logs console \u00e9l\u00e9gants avec indicateurs emoji\r\n- **Logging JSON Structur\u00e9** - Export de logs en format JSON structur\u00e9\r\n- **Rotation Automatique** - Rotation bas\u00e9e sur la taille et le temps\r\n- **Tracking de Performance** - Chronom\u00e9trage et monitoring int\u00e9gr\u00e9s\r\n- **Z\u00e9ro Configuration** - Fonctionne imm\u00e9diatement avec des valeurs par d\u00e9faut sens\u00e9es\r\n- **Hautement Configurable** - Options de personnalisation \u00e9tendues\r\n- **Enrichissement de Contexte** - Injection de m\u00e9tadonn\u00e9es automatique\r\n- **Formats de Sortie Multiples** - Console, JSON, texte\r\n\r\n### 
Fonctionnalit\u00e9s DevOps & Cloud-Native\r\n- **Compatible Conteneurs** - Logging structur\u00e9 adapt\u00e9 aux conteneurs\r\n- **Compatible Kubernetes** - Sortie JSON pour les log collectors\r\n- **Int\u00e9gration CI/CD** - Support pour GitHub Actions, GitLab CI, Jenkins\r\n- **Support Correlation ID** - Pour le tracing distribu\u00e9\r\n- **Logging Health Check** - Monitoring de sant\u00e9 des services\r\n- **Format Pr\u00eat pour les M\u00e9triques** - Sortie compatible avec Prometheus\r\n- **Configuration par Environnement** - Adaptation automatique selon l'environnement\r\n- **Conforme 12-Factor App** - Suit les meilleures pratiques\r\n\r\n### Fonctionnalit\u00e9s Avanc\u00e9es\r\n- **Logging Distant** - Export vers Elasticsearch, Loki, Sentry, Datadog\r\n- **\u00c9chantillonnage de Logs** - Gestion efficace des sc\u00e9narios \u00e0 haut volume\r\n- **Limitation de D\u00e9bit** - Pr\u00e9vention de l'inondation de logs\r\n- **Filtrage Avanc\u00e9** - Filtres par niveau, pattern ou logique personnalis\u00e9e\r\n- **Traitement par Batch** - Batching efficace pour les exports distants\r\n- **Support Webhook** - Envoi de logs vers des endpoints personnalis\u00e9s\r\n- **Int\u00e9gration Slack** - Alertes critiques dans Slack\r\n- **Processing Asynchrone** - Non-bloquant pour les performances\r\n\r\n---\r\n\r\n## Installation\r\n\r\n### Installation Basique\r\n\r\n```bash\r\npip install pyloggerx\r\n```\r\n\r\n### Dans requirements.txt\r\n\r\n```text\r\npyloggerx>=1.0.0\r\n```\r\n\r\n### Dans pyproject.toml (Poetry)\r\n\r\n```toml\r\n[tool.poetry.dependencies]\r\npyloggerx = \"^1.0.0\"\r\n```\r\n\r\n### Avec Support de Logging Distant\r\n\r\n```bash\r\n# Pour Elasticsearch\r\npip install pyloggerx[elasticsearch]\r\n\r\n# Pour Sentry\r\npip install pyloggerx[sentry]\r\n\r\n# Pour tous les services distants\r\npip install pyloggerx[all]\r\n```\r\n\r\n### Installation D\u00e9veloppement\r\n\r\n```bash\r\ngit clone https://github.com/yourusername/pyloggerx.git\r\ncd pyloggerx\r\npip install -e \".[dev]\"\r\n```\r\n\r\n---\r\n\r\n## Quick Start\r\n\r\n### Usage Basique\r\n\r\n```python\r\nfrom pyloggerx import log\r\n\r\n# Logging simple\r\nlog.info(\"Application d\u00e9marr\u00e9e\")\r\nlog.warning(\"Ceci est un avertissement\")\r\nlog.error(\"Une erreur s'est produite\")\r\nlog.debug(\"Informations de debug\")\r\n\r\n# Avec contexte\r\nlog.info(\"Utilisateur connect\u00e9\", user_id=123, ip=\"192.168.1.1\")\r\n```\r\n\r\n### Instance de Logger Personnalis\u00e9e\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\n\r\nlogger = PyLoggerX(\r\n name=\"myapp\",\r\n level=\"INFO\",\r\n console=True,\r\n colors=True,\r\n json_file=\"logs/app.json\",\r\n text_file=\"logs/app.log\"\r\n)\r\n\r\nlogger.info(\"Logger personnalis\u00e9 initialis\u00e9\")\r\n```\r\n\r\n### Logger avec Export Distant\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"production-app\",\r\n console=True,\r\n json_file=\"logs/app.json\",\r\n \r\n # Export vers Elasticsearch\r\n elasticsearch_url=\"http://localhost:9200\",\r\n elasticsearch_index=\"myapp-logs\",\r\n \r\n # Alertes Sentry pour les erreurs\r\n sentry_dsn=\"https://xxx@sentry.io/xxx\",\r\n \r\n # Notifications Slack pour les critiques\r\n slack_webhook=\"https://hooks.slack.com/services/xxx\"\r\n)\r\n\r\nlogger.info(\"Application d\u00e9marr\u00e9e\")\r\nlogger.error(\"Erreur critique\") # Envoy\u00e9 \u00e0 tous les services\r\n```\r\n\r\n---\r\n\r\n## Int\u00e9gration DevOps\r\n\r\n### Docker & Kubernetes\r\n\r\n#### Application 
Conteneuris\u00e9e\r\n\r\n```python\r\n# app.py\r\nimport os\r\nfrom pyloggerx import PyLoggerX\r\n\r\n# Configuration pour environnement conteneur\r\nlogger = PyLoggerX(\r\n name=os.getenv(\"APP_NAME\", \"myapp\"),\r\n level=os.getenv(\"LOG_LEVEL\", \"INFO\"),\r\n console=True, # Logs vers stdout pour les collecteurs\r\n colors=False, # D\u00e9sactiver les couleurs dans les conteneurs\r\n json_file=None, # Utiliser stdout uniquement\r\n include_caller=True,\r\n enrichment_data={\r\n \"environment\": os.getenv(\"ENVIRONMENT\", \"production\"),\r\n \"pod_name\": os.getenv(\"POD_NAME\", \"unknown\"),\r\n \"namespace\": os.getenv(\"NAMESPACE\", \"default\"),\r\n \"version\": os.getenv(\"APP_VERSION\", \"1.0.0\"),\r\n \"region\": os.getenv(\"AWS_REGION\", \"us-east-1\")\r\n }\r\n)\r\n\r\nlogger.info(\"Application d\u00e9marr\u00e9e\", port=8080)\r\n```\r\n\r\n#### Dockerfile Optimis\u00e9\r\n\r\n```dockerfile\r\nFROM python:3.11-slim\r\n\r\nWORKDIR /app\r\n\r\n# Installation des d\u00e9pendances\r\nCOPY requirements.txt .\r\nRUN pip install --no-cache-dir -r requirements.txt\r\n\r\nCOPY . .\r\n\r\n# Variables d'environnement pour logging\r\nENV LOG_LEVEL=INFO \\\r\n APP_NAME=myapp \\\r\n ENVIRONMENT=production \\\r\n PYTHONUNBUFFERED=1\r\n\r\n# Health check\r\nHEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \\\r\n CMD python -c \"import requests; requests.get('http://localhost:8080/health')\"\r\n\r\n# Ex\u00e9cution\r\nCMD [\"python\", \"app.py\"]\r\n```\r\n\r\n#### D\u00e9ploiement Kubernetes avec Logging\r\n\r\n```yaml\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n name: myapp\r\n namespace: production\r\n labels:\r\n app: myapp\r\n version: v1\r\nspec:\r\n replicas: 3\r\n selector:\r\n matchLabels:\r\n app: myapp\r\n template:\r\n metadata:\r\n labels:\r\n app: myapp\r\n version: v1\r\n annotations:\r\n prometheus.io/scrape: \"true\"\r\n prometheus.io/port: \"8080\"\r\n spec:\r\n containers:\r\n - name: myapp\r\n image: myapp:1.0.0\r\n env:\r\n - name: LOG_LEVEL\r\n valueFrom:\r\n configMapKeyRef:\r\n name: app-config\r\n key: log-level\r\n - name: APP_NAME\r\n value: \"myapp\"\r\n - name: ENVIRONMENT\r\n value: \"production\"\r\n - name: POD_NAME\r\n valueFrom:\r\n fieldRef:\r\n fieldPath: metadata.name\r\n - name: NAMESPACE\r\n valueFrom:\r\n fieldRef:\r\n fieldPath: metadata.namespace\r\n - name: APP_VERSION\r\n value: \"1.0.0\"\r\n - name: NODE_NAME\r\n valueFrom:\r\n fieldRef:\r\n fieldPath: spec.nodeName\r\n ports:\r\n - containerPort: 8080\r\n name: http\r\n # Probes avec logging\r\n livenessProbe:\r\n httpGet:\r\n path: /health\r\n port: 8080\r\n initialDelaySeconds: 30\r\n periodSeconds: 10\r\n timeoutSeconds: 5\r\n failureThreshold: 3\r\n readinessProbe:\r\n httpGet:\r\n path: /ready\r\n port: 8080\r\n initialDelaySeconds: 10\r\n periodSeconds: 5\r\n resources:\r\n requests:\r\n memory: \"256Mi\"\r\n cpu: \"250m\"\r\n limits:\r\n memory: \"512Mi\"\r\n cpu: \"500m\"\r\n---\r\napiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n name: app-config\r\n namespace: production\r\ndata:\r\n log-level: \"INFO\"\r\n```\r\n\r\n#### Sortie JSON pour Kubernetes (Fluentd/Filebeat)\r\n\r\n```python\r\n# Pour les collecteurs de logs Kubernetes\r\nlogger = PyLoggerX(\r\n name=\"k8s-app\",\r\n console=True,\r\n colors=False, # IMPORTANT: d\u00e9sactiver pour les collecteurs\r\n format_string='{\"timestamp\":\"%(asctime)s\",\"level\":\"%(levelname)s\",\"logger\":\"%(name)s\",\"message\":\"%(message)s\"}',\r\n enrichment_data={\r\n \"cluster\": 
os.getenv(\"CLUSTER_NAME\", \"prod-cluster\"),\r\n \"pod_ip\": os.getenv(\"POD_IP\", \"unknown\")\r\n }\r\n)\r\n\r\nlogger.info(\"Requ\u00eate trait\u00e9e\", \r\n duration_ms=123, \r\n status_code=200,\r\n endpoint=\"/api/users\")\r\n```\r\n\r\n### Pipelines CI/CD\r\n\r\n#### GitHub Actions\r\n\r\n```yaml\r\n# .github/workflows/test-and-deploy.yml\r\nname: Test, Build & Deploy\r\n\r\non:\r\n push:\r\n branches: [main, develop]\r\n pull_request:\r\n branches: [main]\r\n\r\nenv:\r\n LOG_LEVEL: DEBUG\r\n APP_NAME: myapp\r\n\r\njobs:\r\n test:\r\n runs-on: ubuntu-latest\r\n strategy:\r\n matrix:\r\n python-version: ['3.9', '3.10', '3.11']\r\n \r\n steps:\r\n - uses: actions/checkout@v3\r\n \r\n - name: Set up Python ${{ matrix.python-version }}\r\n uses: actions/setup-python@v4\r\n with:\r\n python-version: ${{ matrix.python-version }}\r\n \r\n - name: Cache dependencies\r\n uses: actions/cache@v3\r\n with:\r\n path: ~/.cache/pip\r\n key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}\r\n \r\n - name: Install dependencies\r\n run: |\r\n pip install -e .\r\n pip install pytest pytest-cov pytest-xdist\r\n \r\n - name: Run tests with logging\r\n run: |\r\n python -c \"\r\n from pyloggerx import PyLoggerX\r\n import os\r\n \r\n logger = PyLoggerX(\r\n name='ci-tests',\r\n level='DEBUG',\r\n console=True,\r\n json_file='test-results/logs.json',\r\n enrichment_data={\r\n 'pipeline': 'github-actions',\r\n 'commit': '${{ github.sha }}',\r\n 'branch': '${{ github.ref_name }}',\r\n 'actor': '${{ github.actor }}',\r\n 'run_id': '${{ github.run_id }}',\r\n 'python_version': '${{ matrix.python-version }}'\r\n }\r\n )\r\n logger.info('D\u00e9marrage des tests CI')\r\n \"\r\n pytest -n auto --cov=pyloggerx --cov-report=xml --cov-report=html\r\n \r\n - name: Upload coverage\r\n uses: codecov/codecov-action@v3\r\n with:\r\n files: ./coverage.xml\r\n \r\n - name: Upload test logs\r\n if: always()\r\n uses: actions/upload-artifact@v3\r\n with:\r\n name: test-logs-${{ matrix.python-version }}\r\n path: test-results/\r\n retention-days: 30\r\n \r\n build:\r\n needs: test\r\n runs-on: ubuntu-latest\r\n if: github.ref == 'refs/heads/main'\r\n \r\n steps:\r\n - uses: actions/checkout@v3\r\n \r\n - name: Set up Docker Buildx\r\n uses: docker/setup-buildx-action@v2\r\n \r\n - name: Login to DockerHub\r\n uses: docker/login-action@v2\r\n with:\r\n username: ${{ secrets.DOCKERHUB_USERNAME }}\r\n password: ${{ secrets.DOCKERHUB_TOKEN }}\r\n \r\n - name: Build and push\r\n uses: docker/build-push-action@v4\r\n with:\r\n context: .\r\n push: true\r\n tags: |\r\n myapp:latest\r\n myapp:${{ github.sha }}\r\n cache-from: type=registry,ref=myapp:buildcache\r\n cache-to: type=registry,ref=myapp:buildcache,mode=max\r\n \r\n deploy:\r\n needs: build\r\n runs-on: ubuntu-latest\r\n if: github.ref == 'refs/heads/main'\r\n \r\n steps:\r\n - uses: actions/checkout@v3\r\n \r\n - name: Deploy to Kubernetes\r\n run: |\r\n echo \"${{ secrets.KUBECONFIG }}\" > kubeconfig\r\n export KUBECONFIG=kubeconfig\r\n kubectl set image deployment/myapp myapp=myapp:${{ github.sha }} -n production\r\n kubectl rollout status deployment/myapp -n production\r\n```\r\n\r\n#### GitLab CI\r\n\r\n```yaml\r\n# .gitlab-ci.yml\r\nvariables:\r\n LOG_LEVEL: \"DEBUG\"\r\n APP_NAME: \"myapp\"\r\n DOCKER_DRIVER: overlay2\r\n DOCKER_TLS_CERTDIR: \"/certs\"\r\n\r\nstages:\r\n - test\r\n - build\r\n - deploy\r\n - monitor\r\n\r\n# Template pour logging\r\n.logging_template: &logging_setup\r\n before_script:\r\n - |\r\n python -c \"\r\n from pyloggerx 
import PyLoggerX\r\n import os\r\n \r\n logger = PyLoggerX(\r\n name='gitlab-ci',\r\n level=os.getenv('LOG_LEVEL', 'INFO'),\r\n console=True,\r\n json_file='logs/ci.json',\r\n enrichment_data={\r\n 'pipeline_id': os.getenv('CI_PIPELINE_ID'),\r\n 'job_id': os.getenv('CI_JOB_ID'),\r\n 'commit_sha': os.getenv('CI_COMMIT_SHA'),\r\n 'branch': os.getenv('CI_COMMIT_REF_NAME'),\r\n 'runner': os.getenv('CI_RUNNER_DESCRIPTION'),\r\n 'project': os.getenv('CI_PROJECT_NAME')\r\n }\r\n )\r\n logger.info('Job GitLab CI d\u00e9marr\u00e9')\r\n \"\r\n\r\ntest:unit:\r\n stage: test\r\n image: python:3.11\r\n <<: *logging_setup\r\n script:\r\n - pip install -e .[dev]\r\n - pytest --cov=pyloggerx --cov-report=xml --cov-report=term\r\n coverage: '/TOTAL.*\\s+(\\d+%)$/'\r\n artifacts:\r\n reports:\r\n coverage_report:\r\n coverage_format: cobertura\r\n path: coverage.xml\r\n paths:\r\n - logs/\r\n - htmlcov/\r\n expire_in: 1 week\r\n only:\r\n - merge_requests\r\n - main\r\n - develop\r\n\r\ntest:integration:\r\n stage: test\r\n image: python:3.11\r\n services:\r\n - postgres:14\r\n - redis:7\r\n variables:\r\n POSTGRES_DB: testdb\r\n POSTGRES_USER: testuser\r\n POSTGRES_PASSWORD: testpass\r\n REDIS_URL: redis://redis:6379\r\n script:\r\n - pip install -e .[dev]\r\n - pytest tests/integration/ -v\r\n only:\r\n - main\r\n - develop\r\n\r\nbuild:docker:\r\n stage: build\r\n image: docker:latest\r\n services:\r\n - docker:dind\r\n script:\r\n - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY\r\n - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .\r\n - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest\r\n - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA\r\n - docker push $CI_REGISTRY_IMAGE:latest\r\n only:\r\n - main\r\n\r\ndeploy:production:\r\n stage: deploy\r\n image: bitnami/kubectl:latest\r\n script:\r\n - kubectl config use-context production\r\n - kubectl set image deployment/myapp myapp=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n production\r\n - kubectl rollout status deployment/myapp -n production\r\n environment:\r\n name: production\r\n url: https://myapp.example.com\r\n when: manual\r\n only:\r\n - main\r\n\r\nmonitor:health:\r\n stage: monitor\r\n image: curlimages/curl:latest\r\n script:\r\n - |\r\n for i in {1..5}; do\r\n STATUS=$(curl -s -o /dev/null -w \"%{http_code}\" https://myapp.example.com/health)\r\n if [ \"$STATUS\" == \"200\" ]; then\r\n echo \"Health check passed\"\r\n exit 0\r\n fi\r\n echo \"Attempt $i failed, retrying...\"\r\n sleep 10\r\n done\r\n exit 1\r\n only:\r\n - main\r\n```\r\n\r\n#### Jenkins Pipeline\r\n\r\n```groovy\r\n// Jenkinsfile\r\npipeline {\r\n agent any\r\n \r\n environment {\r\n LOG_LEVEL = 'DEBUG'\r\n APP_NAME = 'myapp'\r\n ENVIRONMENT = 'staging'\r\n DOCKER_REGISTRY = 'registry.example.com'\r\n KUBE_NAMESPACE = 'production'\r\n }\r\n \r\n options {\r\n buildDiscarder(logRotator(numToKeepStr: '10'))\r\n timeout(time: 1, unit: 'HOURS')\r\n timestamps()\r\n }\r\n \r\n stages {\r\n stage('Setup') {\r\n steps {\r\n script {\r\n sh '''\r\n python3 -m venv venv\r\n . venv/bin/activate\r\n pip install -e .[dev]\r\n '''\r\n }\r\n }\r\n }\r\n \r\n stage('Test') {\r\n parallel {\r\n stage('Unit Tests') {\r\n steps {\r\n script {\r\n sh '''\r\n . 
venv/bin/activate\r\n python -c \"\r\nfrom pyloggerx import PyLoggerX\r\nlogger = PyLoggerX(\r\n name='jenkins-tests',\r\n json_file='logs/unit-tests.json',\r\n enrichment_data={\r\n 'build_number': '${BUILD_NUMBER}',\r\n 'job_name': '${JOB_NAME}',\r\n 'node_name': '${NODE_NAME}'\r\n }\r\n)\r\nlogger.info('Tests unitaires d\u00e9marr\u00e9s')\r\n \"\r\n pytest tests/unit/ -v --junitxml=results/unit.xml\r\n '''\r\n }\r\n }\r\n }\r\n \r\n stage('Integration Tests') {\r\n steps {\r\n script {\r\n sh '''\r\n . venv/bin/activate\r\n pytest tests/integration/ -v --junitxml=results/integration.xml\r\n '''\r\n }\r\n }\r\n }\r\n }\r\n post {\r\n always {\r\n junit 'results/*.xml'\r\n }\r\n }\r\n }\r\n \r\n stage('Build') {\r\n steps {\r\n script {\r\n docker.build(\"${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}\")\r\n }\r\n }\r\n }\r\n \r\n stage('Push') {\r\n steps {\r\n script {\r\n docker.withRegistry(\"https://${DOCKER_REGISTRY}\", 'docker-credentials') {\r\n docker.image(\"${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}\").push()\r\n docker.image(\"${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}\").push('latest')\r\n }\r\n }\r\n }\r\n }\r\n \r\n stage('Deploy') {\r\n when {\r\n branch 'main'\r\n }\r\n steps {\r\n script {\r\n withKubeConfig([credentialsId: 'kube-config']) {\r\n sh \"\"\"\r\n kubectl set image deployment/${APP_NAME} \\\r\n ${APP_NAME}=${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER} \\\r\n -n ${KUBE_NAMESPACE}\r\n kubectl rollout status deployment/${APP_NAME} -n ${KUBE_NAMESPACE}\r\n \"\"\"\r\n }\r\n }\r\n }\r\n }\r\n \r\n stage('Smoke Tests') {\r\n when {\r\n branch 'main'\r\n }\r\n steps {\r\n script {\r\n sh '''\r\n for i in {1..5}; do\r\n if curl -f https://myapp.example.com/health; then\r\n echo \"Smoke test passed\"\r\n exit 0\r\n fi\r\n sleep 10\r\n done\r\n exit 1\r\n '''\r\n }\r\n }\r\n }\r\n }\r\n \r\n post {\r\n always {\r\n archiveArtifacts artifacts: 'logs/*.json', allowEmptyArchive: true\r\n cleanWs()\r\n }\r\n success {\r\n slackSend(\r\n color: 'good',\r\n message: \"Build #${BUILD_NUMBER} succeeded for ${JOB_NAME}\"\r\n )\r\n }\r\n failure {\r\n slackSend(\r\n color: 'danger',\r\n message: \"Build #${BUILD_NUMBER} failed for ${JOB_NAME}\"\r\n )\r\n }\r\n }\r\n}\r\n```\r\n\r\n### Stack d'Observabilit\u00e9\r\n\r\n#### ELK Stack (Elasticsearch, Logstash, Kibana)\r\n\r\n```python\r\n# Configuration pour ELK\r\nfrom pyloggerx import PyLoggerX\r\nimport socket\r\nimport os\r\n\r\nlogger = PyLoggerX(\r\n name=\"elk-app\",\r\n console=True,\r\n json_file=\"/var/log/myapp/app.json\", # Filebeat surveille ce fichier\r\n colors=False,\r\n \r\n # Export direct vers Elasticsearch\r\n elasticsearch_url=\"http://elasticsearch:9200\",\r\n elasticsearch_index=\"myapp-logs\",\r\n elasticsearch_username=os.getenv(\"ES_USERNAME\"),\r\n elasticsearch_password=os.getenv(\"ES_PASSWORD\"),\r\n \r\n enrichment_data={\r\n \"service\": \"payment-api\",\r\n \"environment\": os.getenv(\"ENVIRONMENT\", \"production\"),\r\n \"hostname\": socket.gethostname(),\r\n \"version\": os.getenv(\"APP_VERSION\", \"1.0.0\"),\r\n \"datacenter\": os.getenv(\"DATACENTER\", \"us-east-1\")\r\n }\r\n)\r\n\r\n# Les logs sont envoy\u00e9s \u00e0 Elasticsearch et \u00e9crits dans un fichier\r\nlogger.info(\"Paiement trait\u00e9\", \r\n transaction_id=\"txn_123\",\r\n amount=99.99,\r\n currency=\"USD\",\r\n customer_id=\"cust_456\",\r\n payment_method=\"card\")\r\n```\r\n\r\n**Configuration Filebeat** (`filebeat.yml`):\r\n\r\n```yaml\r\nfilebeat.inputs:\r\n- type: log\r\n enabled: true\r\n paths:\r\n - 
/var/log/myapp/*.json\r\n json.keys_under_root: true\r\n json.add_error_key: true\r\n json.message_key: message\r\n fields:\r\n service: myapp\r\n environment: production\r\n fields_under_root: true\r\n\r\nprocessors:\r\n - add_host_metadata:\r\n when.not.contains.tags: forwarded\r\n - add_cloud_metadata: ~\r\n - add_docker_metadata: ~\r\n - add_kubernetes_metadata: ~\r\n\r\noutput.elasticsearch:\r\n hosts: [\"elasticsearch:9200\"]\r\n index: \"myapp-logs-%{+yyyy.MM.dd}\"\r\n username: \"${ES_USERNAME}\"\r\n password: \"${ES_PASSWORD}\"\r\n ssl.verification_mode: none\r\n\r\nsetup.kibana:\r\n host: \"kibana:5601\"\r\n\r\nsetup.ilm.enabled: true\r\nsetup.ilm.rollover_alias: \"myapp-logs\"\r\nsetup.ilm.pattern: \"{now/d}-000001\"\r\n\r\nlogging.level: info\r\nlogging.to_files: true\r\nlogging.files:\r\n path: /var/log/filebeat\r\n name: filebeat\r\n keepfiles: 7\r\n permissions: 0644\r\n```\r\n\r\n#### Prometheus & Grafana\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\nfrom prometheus_client import Counter, Histogram, Gauge, start_http_server\r\nimport time\r\nimport functools\r\n\r\n# M\u00e9triques Prometheus\r\nrequest_counter = Counter(\r\n 'http_requests_total', \r\n 'Total HTTP requests', \r\n ['method', 'endpoint', 'status']\r\n)\r\nrequest_duration = Histogram(\r\n 'http_request_duration_seconds', \r\n 'HTTP request duration',\r\n ['method', 'endpoint']\r\n)\r\nactive_requests = Gauge(\r\n 'http_requests_active',\r\n 'Active HTTP requests'\r\n)\r\nerror_counter = Counter(\r\n 'application_errors_total',\r\n 'Total application errors',\r\n ['error_type']\r\n)\r\n\r\nlogger = PyLoggerX(\r\n name=\"metrics-app\",\r\n json_file=\"logs/metrics.json\",\r\n performance_tracking=True,\r\n \r\n # Export vers services de monitoring\r\n datadog_api_key=os.getenv(\"DATADOG_API_KEY\"),\r\n \r\n enrichment_data={\r\n \"service\": \"api-gateway\",\r\n \"version\": \"2.0.0\"\r\n }\r\n)\r\n\r\ndef monitor_request(func):\r\n \"\"\"D\u00e9corateur pour monitorer les requ\u00eates\"\"\"\r\n @functools.wraps(func)\r\n def wrapper(method, endpoint, *args, **kwargs):\r\n active_requests.inc()\r\n start_time = time.time()\r\n \r\n logger.info(\"Requ\u00eate re\u00e7ue\", \r\n method=method, \r\n endpoint=endpoint)\r\n \r\n try:\r\n result = func(method, endpoint, *args, **kwargs)\r\n duration = time.time() - start_time\r\n \r\n # Mettre \u00e0 jour les m\u00e9triques\r\n request_counter.labels(\r\n method=method,\r\n endpoint=endpoint,\r\n status=200\r\n ).inc()\r\n request_duration.labels(\r\n method=method,\r\n endpoint=endpoint\r\n ).observe(duration)\r\n \r\n logger.info(\"Requ\u00eate compl\u00e9t\u00e9e\",\r\n method=method,\r\n endpoint=endpoint,\r\n status=200,\r\n duration_ms=duration*1000)\r\n \r\n return result\r\n \r\n except Exception as e:\r\n duration = time.time() - start_time\r\n error_type = type(e).__name__\r\n \r\n request_counter.labels(\r\n method=method,\r\n endpoint=endpoint,\r\n status=500\r\n ).inc()\r\n error_counter.labels(error_type=error_type).inc()\r\n \r\n logger.error(\"Requ\u00eate \u00e9chou\u00e9e\",\r\n method=method,\r\n endpoint=endpoint,\r\n status=500,\r\n duration_ms=duration*1000,\r\n error=str(e),\r\n error_type=error_type)\r\n raise\r\n finally:\r\n active_requests.dec()\r\n \r\n return wrapper\r\n\r\n@monitor_request\r\ndef handle_request(method, endpoint, data=None):\r\n # Logique de traitement\r\n time.sleep(0.1) # Simulation\r\n return {\"status\": \"success\"}\r\n\r\n# D\u00e9marrer le serveur de m\u00e9triques 
Prometheus\r\nstart_http_server(8000)\r\nlogger.info(\"Serveur de m\u00e9triques d\u00e9marr\u00e9\", port=8000)\r\n\r\n# Endpoint de m\u00e9triques custom\r\ndef get_performance_metrics():\r\n stats = logger.get_performance_stats()\r\n return {\r\n \"logging\": {\r\n \"total_logs\": stats.get(\"total_operations\", 0),\r\n \"avg_duration\": stats.get(\"avg_duration\", 0),\r\n \"max_duration\": stats.get(\"max_duration\", 0)\r\n },\r\n \"requests\": {\r\n \"total\": request_counter._value.sum(),\r\n \"active\": active_requests._value.get()\r\n }\r\n }\r\n```\r\n\r\n#### OpenTelemetry Integration\r\n\r\n```python\r\nfrom opentelemetry import trace\r\nfrom opentelemetry.sdk.trace import TracerProvider\r\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter\r\nfrom opentelemetry.sdk.resources import Resource\r\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter\r\nfrom pyloggerx import PyLoggerX\r\nimport os\r\n\r\n# Setup OpenTelemetry\r\nresource = Resource.create({\r\n \"service.name\": \"myapp\",\r\n \"service.version\": \"1.0.0\",\r\n \"deployment.environment\": os.getenv(\"ENVIRONMENT\", \"production\")\r\n})\r\n\r\ntrace.set_tracer_provider(TracerProvider(resource=resource))\r\ntracer = trace.get_tracer(__name__)\r\n\r\n# Export vers OTLP collector\r\notlp_exporter = OTLPSpanExporter(\r\n endpoint=\"http://otel-collector:4317\",\r\n insecure=True\r\n)\r\nspan_processor = BatchSpanProcessor(otlp_exporter)\r\ntrace.get_tracer_provider().add_span_processor(span_processor)\r\n\r\nlogger = PyLoggerX(\r\n name=\"otel-app\",\r\n json_file=\"logs/traces.json\",\r\n enrichment_data={\r\n \"service\": \"order-service\"\r\n }\r\n)\r\n\r\ndef process_order(order_id):\r\n \"\"\"Traiter une commande avec tracing distribu\u00e9\"\"\"\r\n with tracer.start_as_current_span(\"process_order\") as span:\r\n span.set_attribute(\"order.id\", order_id)\r\n \r\n # R\u00e9cup\u00e9rer le contexte de trace\r\n ctx = span.get_span_context()\r\n trace_id = format(ctx.trace_id, '032x')\r\n span_id = format(ctx.span_id, '016x')\r\n \r\n logger.info(\"Traitement de la commande\",\r\n order_id=order_id,\r\n trace_id=trace_id,\r\n span_id=span_id)\r\n \r\n # \u00c9tapes de traitement avec spans\r\n validate_order(order_id, trace_id, span_id)\r\n charge_payment(order_id, trace_id, span_id)\r\n ship_order(order_id, trace_id, span_id)\r\n \r\n logger.info(\"Commande compl\u00e9t\u00e9e\",\r\n order_id=order_id,\r\n trace_id=trace_id,\r\n span_id=span_id)\r\n\r\ndef validate_order(order_id, trace_id, span_id):\r\n with tracer.start_as_current_span(\"validate_order\"):\r\n logger.debug(\"Validation de la commande\",\r\n order_id=order_id,\r\n trace_id=trace_id,\r\n span_id=span_id)\r\n # Logique de validation\r\n time.sleep(0.1)\r\n\r\ndef charge_payment(order_id, trace_id, span_id):\r\n with tracer.start_as_current_span(\"charge_payment\"):\r\n logger.info(\"Traitement du paiement\",\r\n order_id=order_id,\r\n trace_id=trace_id,\r\n span_id=span_id)\r\n # Logique de paiement\r\n time.sleep(0.2)\r\n\r\ndef ship_order(order_id, trace_id, span_id):\r\n with tracer.start_as_current_span(\"ship_order\"):\r\n logger.info(\"Exp\u00e9dition de la commande\",\r\n order_id=order_id,\r\n trace_id=trace_id,\r\n span_id=span_id)\r\n # Logique d'exp\u00e9dition\r\n time.sleep(0.15)\r\n```\r\n\r\n#### Grafana Loki Integration\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\nimport os\r\n\r\nlogger = PyLoggerX(\r\n name=\"loki-app\",\r\n console=True,\r\n \r\n # Export 
direct vers Loki\r\n loki_url=\"http://loki:3100\",\r\n loki_labels={\r\n \"app\": \"payment-service\",\r\n \"environment\": os.getenv(\"ENVIRONMENT\", \"production\"),\r\n \"region\": os.getenv(\"AWS_REGION\", \"us-east-1\"),\r\n \"version\": os.getenv(\"APP_VERSION\", \"1.0.0\")\r\n },\r\n \r\n enrichment_data={\r\n \"service\": \"payment-api\",\r\n \"instance\": os.getenv(\"HOSTNAME\", \"unknown\")\r\n }\r\n)\r\n\r\n# Les logs sont automatiquement envoy\u00e9s \u00e0 Loki\r\nlogger.info(\"Paiement initi\u00e9\", \r\n transaction_id=\"txn_789\",\r\n amount=150.00,\r\n currency=\"EUR\")\r\n\r\nlogger.info(\"Paiement compl\u00e9t\u00e9\",\r\n transaction_id=\"txn_789\",\r\n status=\"success\",\r\n processing_time_ms=234)\r\n```\r\n\r\n**Configuration Promtail** (`promtail-config.yml`):\r\n\r\n```yaml\r\nserver:\r\n http_listen_port: 9080\r\n grpc_listen_port: 0\r\n\r\npositions:\r\n filename: /tmp/positions.yaml\r\n\r\nclients:\r\n - url: http://loki:3100/loki/api/v1/push\r\n\r\nscrape_configs:\r\n - job_name: system\r\n static_configs:\r\n - targets:\r\n - localhost\r\n labels:\r\n job: varlogs\r\n __path__: /var/log/*log\r\n\r\n - job_name: containers\r\n docker_sd_configs:\r\n - host: unix:///var/run/docker.sock\r\n refresh_interval: 5s\r\n relabel_configs:\r\n - source_labels: ['__meta_docker_container_name']\r\n regex: '/(.*)'\r\n target_label: 'container'\r\n - source_labels: ['__meta_docker_container_log_stream']\r\n target_label: 'logstream'\r\n - source_labels: ['__meta_docker_container_label_logging_jobname']\r\n target_label: 'job'\r\n```\r\n\r\n### Infrastructure as Code\r\n\r\n#### Terraform avec Logging\r\n\r\n```python\r\n# terraform_deploy.py\r\nfrom pyloggerx import PyLoggerX\r\nimport subprocess\r\nimport json\r\nimport sys\r\n\r\nlogger = PyLoggerX(\r\n name=\"terraform\",\r\n console=True,\r\n json_file=\"logs/terraform.json\",\r\n performance_tracking=True,\r\n \r\n # Notifications Slack pour les d\u00e9ploiements\r\n slack_webhook=os.getenv(\"SLACK_WEBHOOK\"),\r\n \r\n enrichment_data={\r\n \"tool\": \"terraform\",\r\n \"workspace\": os.getenv(\"TF_WORKSPACE\", \"default\")\r\n }\r\n)\r\n\r\ndef run_terraform_command(command, **kwargs):\r\n \"\"\"Ex\u00e9cuter une commande Terraform avec logging\"\"\"\r\n logger.info(f\"Ex\u00e9cution: terraform {command}\", **kwargs)\r\n \r\n result = subprocess.run(\r\n [\"terraform\"] + command.split(),\r\n capture_output=True,\r\n text=True\r\n )\r\n \r\n if result.returncode == 0:\r\n logger.info(f\"Commande r\u00e9ussie: terraform {command}\")\r\n else:\r\n logger.error(f\"Commande \u00e9chou\u00e9e: terraform {command}\",\r\n stderr=result.stderr,\r\n returncode=result.returncode)\r\n \r\n return result\r\n\r\ndef terraform_deploy(workspace=\"production\", auto_approve=False):\r\n \"\"\"D\u00e9ploiement Terraform complet avec logging d\u00e9taill\u00e9\"\"\"\r\n logger.info(\"D\u00e9ploiement Terraform d\u00e9marr\u00e9\", \r\n workspace=workspace,\r\n auto_approve=auto_approve)\r\n \r\n try:\r\n # Init\r\n with logger.timer(\"Terraform Init\"):\r\n result = run_terraform_command(\"init -upgrade\")\r\n if result.returncode != 0:\r\n raise Exception(\"Terraform init failed\")\r\n \r\n # Workspace\r\n if workspace != \"default\":\r\n with logger.timer(\"Terraform Workspace\"):\r\n run_terraform_command(f\"workspace select {workspace}\")\r\n \r\n # Plan\r\n with logger.timer(\"Terraform Plan\"):\r\n result = run_terraform_command(\"plan -out=tfplan -json\")\r\n \r\n # Parser la sortie JSON\r\n changes = {\"add\": 0, \"change\": 0, 
\"destroy\": 0}\r\n for line in result.stdout.split('\\n'):\r\n if line.strip():\r\n try:\r\n data = json.loads(line)\r\n if data.get(\"type\") == \"change_summary\":\r\n changes = data.get(\"changes\", changes)\r\n except:\r\n pass\r\n \r\n logger.info(\"Plan Terraform termin\u00e9\",\r\n resources_to_add=changes[\"add\"],\r\n resources_to_change=changes[\"change\"],\r\n resources_to_destroy=changes[\"destroy\"])\r\n \r\n # Alerte si destruction de ressources\r\n if changes[\"destroy\"] > 0:\r\n logger.warning(\"Destruction de ressources d\u00e9tect\u00e9e\",\r\n count=changes[\"destroy\"])\r\n \r\n # Apply\r\n apply_cmd = \"apply tfplan\"\r\n if auto_approve:\r\n apply_cmd += \" -auto-approve\"\r\n \r\n with logger.timer(\"Terraform Apply\"):\r\n result = run_terraform_command(apply_cmd)\r\n \r\n if result.returncode == 0:\r\n logger.info(\"D\u00e9ploiement Terraform r\u00e9ussi\")\r\n else:\r\n logger.error(\"D\u00e9ploiement Terraform \u00e9chou\u00e9\",\r\n returncode=result.returncode)\r\n sys.exit(1)\r\n \r\n # Statistiques finales\r\n stats = logger.get_performance_stats()\r\n logger.info(\"D\u00e9ploiement compl\u00e9t\u00e9\",\r\n total_duration=stats[\"total_duration\"],\r\n avg_duration=stats[\"avg_duration\"])\r\n \r\n except Exception as e:\r\n logger.exception(\"Erreur lors du d\u00e9ploiement Terraform\",\r\n error=str(e))\r\n sys.exit(1)\r\n\r\nif __name__ == \"__main__\":\r\n import argparse\r\n \r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"--workspace\", default=\"production\")\r\n parser.add_argument(\"--auto-approve\", action=\"store_true\")\r\n args = parser.parse_args()\r\n \r\n terraform_deploy(args.workspace, args.auto_approve)\r\n```\r\n\r\n#### Ansible avec Logging\r\n\r\n```python\r\n# ansible_playbook.py\r\nfrom pyloggerx import PyLoggerX\r\nimport subprocess\r\nimport json\r\nfrom datetime import datetime\r\nimport os\r\n\r\nlogger = PyLoggerX(\r\n name=\"ansible\",\r\n json_file=\"logs/ansible.json\",\r\n \r\n # Export vers Elasticsearch pour analyse\r\n elasticsearch_url=os.getenv(\"ES_URL\"),\r\n elasticsearch_index=\"ansible-logs\",\r\n \r\n enrichment_data={\r\n \"automation\": \"ansible\",\r\n \"run_id\": datetime.now().strftime(\"%Y%m%d_%H%M%S\"),\r\n \"user\": os.getenv(\"USER\")\r\n }\r\n)\r\n\r\ndef run_playbook(playbook_path, inventory=\"hosts.ini\", extra_vars=None, tags=None):\r\n \"\"\"Ex\u00e9cuter un playbook Ansible avec logging d\u00e9taill\u00e9\"\"\"\r\n logger.info(\"Playbook Ansible d\u00e9marr\u00e9\",\r\n playbook=playbook_path,\r\n inventory=inventory,\r\n extra_vars=extra_vars,\r\n tags=tags)\r\n \r\n cmd = [\r\n \"ansible-playbook\",\r\n playbook_path,\r\n \"-i\", inventory,\r\n \"-v\" # Verbosit\u00e9\r\n ]\r\n \r\n if extra_vars:\r\n cmd.extend([\"--extra-vars\", json.dumps(extra_vars)])\r\n \r\n if tags:\r\n cmd.extend([\"--tags\", tags])\r\n \r\n # Ex\u00e9cution avec capture de sortie\r\n result = subprocess.run(\r\n cmd,\r\n capture_output=True,\r\n text=True\r\n )\r\n \r\n # Parser la sortie Ansible\r\n stats = parse_ansible_output(result.stdout)\r\n \r\n # Logger les r\u00e9sultats\r\n if stats:\r\n logger.info(\"Playbook Ansible termin\u00e9\",\r\n hosts_processed=len(stats),\r\n total_ok=sum(s.get(\"ok\", 0) for s in stats.values()),\r\n total_changed=sum(s.get(\"changed\", 0) for s in stats.values()),\r\n total_failed=sum(s.get(\"failures\", 0) for s in stats.values()),\r\n total_unreachable=sum(s.get(\"unreachable\", 0) for s in stats.values()))\r\n \r\n # Logger par h\u00f4te\r\n for host, host_stats 
in stats.items():\r\n logger.debug(\"Statistiques par h\u00f4te\",\r\n host=host,\r\n ok=host_stats.get(\"ok\", 0),\r\n changed=host_stats.get(\"changed\", 0),\r\n failed=host_stats.get(\"failures\", 0))\r\n \r\n if result.returncode != 0:\r\n logger.error(\"Playbook Ansible \u00e9chou\u00e9\",\r\n returncode=result.returncode,\r\n stderr=result.stderr)\r\n return False\r\n \r\n return True\r\n\r\ndef parse_ansible_output(output):\r\n \"\"\"Parser la sortie Ansible\"\"\"\r\n stats = {}\r\n in_recap = False\r\n \r\n for line in output.split('\\n'):\r\n if \"PLAY RECAP\" in line:\r\n in_recap = True\r\n continue\r\n \r\n if in_recap and \":\" in line:\r\n parts = line.split(\":\")\r\n if len(parts) >= 2:\r\n host = parts[0].strip()\r\n stats_str = parts[1].strip()\r\n \r\n # Parser les statistiques\r\n host_stats = {}\r\n for stat in stats_str.split():\r\n if \"=\" in stat:\r\n key, value = stat.split(\"=\")\r\n try:\r\n host_stats[key] = int(value)\r\n except:\r\n pass\r\n \r\n stats[host] = host_stats\r\n \r\n return stats\r\n\r\nif __name__ == \"__main__\":\r\n import argparse\r\n \r\n parser = argparse.ArgumentParser()\r\n parser.add_argument(\"playbook\", help=\"Chemin du playbook\")\r\n parser.add_argument(\"-i\", \"--inventory\", default=\"hosts.ini\")\r\n parser.add_argument(\"-e\", \"--extra-vars\", help=\"Variables suppl\u00e9mentaires (JSON)\")\r\n parser.add_argument(\"-t\", \"--tags\", help=\"Tags \u00e0 ex\u00e9cuter\")\r\n args = parser.parse_args()\r\n \r\n extra_vars = json.loads(args.extra_vars) if args.extra_vars else None\r\n \r\n success = run_playbook(\r\n args.playbook,\r\n inventory=args.inventory,\r\n extra_vars=extra_vars,\r\n tags=args.tags\r\n )\r\n \r\n sys.exit(0 if success else 1)\r\n```\r\n\r\n#### AWS CDK avec Logging\r\n\r\n```python\r\n# cdk_app.py\r\nfrom aws_cdk import (\r\n App, Stack, Duration,\r\n aws_lambda as lambda_,\r\n aws_apigateway as apigw,\r\n aws_dynamodb as dynamodb,\r\n aws_logs as logs\r\n)\r\nfrom pyloggerx import PyLoggerX\r\nimport os\r\n\r\nlogger = PyLoggerX(\r\n name=\"cdk-deploy\",\r\n json_file=\"logs/cdk.json\",\r\n \r\n enrichment_data={\r\n \"tool\": \"aws-cdk\",\r\n \"account\": os.getenv(\"CDK_DEFAULT_ACCOUNT\"),\r\n \"region\": os.getenv(\"CDK_DEFAULT_REGION\", \"us-east-1\")\r\n }\r\n)\r\n\r\nclass MyApplicationStack(Stack):\r\n def __init__(self, scope, id, **kwargs):\r\n super().__init__(scope, id, **kwargs)\r\n \r\n logger.info(\"Cr\u00e9ation du stack CDK\", stack_name=id)\r\n \r\n # DynamoDB Table\r\n logger.info(\"Cr\u00e9ation de la table DynamoDB\")\r\n table = dynamodb.Table(\r\n self, \"DataTable\",\r\n partition_key=dynamodb.Attribute(\r\n name=\"id\",\r\n type=dynamodb.AttributeType.STRING\r\n ),\r\n billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,\r\n removal_policy=RemovalPolicy.DESTROY\r\n )\r\n logger.info(\"Table DynamoDB cr\u00e9\u00e9e\", table_name=table.table_name)\r\n \r\n # Lambda Function\r\n logger.info(\"Cr\u00e9ation de la fonction Lambda\")\r\n lambda_fn = lambda_.Function(\r\n self, \"ApiHandler\",\r\n runtime=lambda_.Runtime.PYTHON_3_11,\r\n handler=\"index.handler\",\r\n code=lambda_.Code.from_asset(\"lambda\"),\r\n environment={\r\n \"TABLE_NAME\": table.table_name,\r\n \"LOG_LEVEL\": \"INFO\"\r\n },\r\n timeout=Duration.seconds(30),\r\n memory_size=256,\r\n log_retention=logs.RetentionDays.ONE_WEEK\r\n )\r\n \r\n # Permissions\r\n table.grant_read_write_data(lambda_fn)\r\n \r\n logger.info(\"Fonction Lambda cr\u00e9\u00e9e\",\r\n function_name=lambda_fn.function_name)\r\n \r\n # API 
Gateway\r\n logger.info(\"Cr\u00e9ation de l'API Gateway\")\r\n api = apigw.LambdaRestApi(\r\n self, \"ApiGateway\",\r\n handler=lambda_fn,\r\n proxy=False,\r\n deploy_options=apigw.StageOptions(\r\n logging_level=apigw.MethodLoggingLevel.INFO,\r\n data_trace_enabled=True,\r\n metrics_enabled=True\r\n )\r\n )\r\n \r\n # Endpoints\r\n items = api.root.add_resource(\"items\")\r\n items.add_method(\"GET\")\r\n items.add_method(\"POST\")\r\n \r\n item = items.add_resource(\"{id}\")\r\n item.add_method(\"GET\")\r\n item.add_method(\"PUT\")\r\n item.add_method(\"DELETE\")\r\n \r\n logger.info(\"API Gateway cr\u00e9\u00e9e\",\r\n api_id=api.rest_api_id,\r\n api_url=api.url)\r\n\r\ndef main():\r\n app = App()\r\n \r\n logger.info(\"Synth\u00e8se CDK d\u00e9marr\u00e9e\")\r\n \r\n # Cr\u00e9er les stacks\r\n MyApplicationStack(\r\n app, \"MyApp-Dev\",\r\n env={\r\n \"account\": os.getenv(\"CDK_DEFAULT_ACCOUNT\"),\r\n \"region\": \"us-east-1\"\r\n }\r\n )\r\n \r\n MyApplicationStack(\r\n app, \"MyApp-Prod\",\r\n env={\r\n \"account\": os.getenv(\"CDK_DEFAULT_ACCOUNT\"),\r\n \"region\": \"us-west-2\"\r\n }\r\n )\r\n \r\n logger.info(\"Synth\u00e8se CDK compl\u00e9t\u00e9e\")\r\n \r\n # Synth\u00e9tiser l'application\r\n app.synth()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n---\r\n\r\n## Guide d'Utilisation Complet\r\n\r\n### 1. Console Logging\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\n\r\nlogger = PyLoggerX(\r\n name=\"console_app\",\r\n console=True,\r\n colors=True\r\n)\r\n\r\nlogger.debug(\"Message de debug\") \r\nlogger.info(\"Message d'info\") \r\nlogger.warning(\"Message d'avertissement\") \r\nlogger.error(\"Message d'erreur\") \r\nlogger.critical(\"Message critique\") \r\n```\r\n\r\n### 2. Logging vers Fichiers\r\n\r\n#### JSON Structur\u00e9\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"json_logger\",\r\n json_file=\"logs/app.json\",\r\n max_bytes=10 * 1024 * 1024, # 10MB\r\n backup_count=5\r\n)\r\n\r\nlogger.info(\"Action utilisateur\",\r\n user_id=123,\r\n action=\"login\",\r\n ip=\"192.168.1.1\",\r\n user_agent=\"Mozilla/5.0\"\r\n)\r\n```\r\n\r\n**Sortie** (`logs/app.json`):\r\n```json\r\n{\r\n \"timestamp\": \"2025-01-15T10:30:45.123456\",\r\n \"level\": \"INFO\",\r\n \"logger\": \"json_logger\",\r\n \"message\": \"Action utilisateur\",\r\n \"module\": \"main\",\r\n \"function\": \"login_handler\",\r\n \"user_id\": 123,\r\n \"action\": \"login\",\r\n \"ip\": \"192.168.1.1\",\r\n \"user_agent\": \"Mozilla/5.0\"\r\n}\r\n```\r\n\r\n#### Fichier Texte\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"text_logger\",\r\n text_file=\"logs/app.log\",\r\n format_string=\"%(asctime)s - %(levelname)s - %(message)s\"\r\n)\r\n```\r\n\r\n#### Rotation Bas\u00e9e sur le Temps\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"timed_logger\",\r\n text_file=\"logs/app.log\",\r\n rotation_when=\"midnight\", # Rotation \u00e0 minuit\r\n rotation_interval=1, # Chaque jour\r\n backup_count=7 # Garder 7 jours\r\n)\r\n\r\n# Options pour rotation_when:\r\n# \"S\": Secondes\r\n# \"M\": Minutes\r\n# \"H\": Heures\r\n# \"D\": Jours\r\n# \"midnight\": \u00c0 minuit\r\n# \"W0\"-\"W6\": Jour de la semaine (0=Lundi)\r\n```\r\n\r\n### 3. 
Tracking de Performance\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"perf_logger\",\r\n performance_tracking=True\r\n)\r\n\r\n# Utilisation du context manager\r\nwith logger.timer(\"Requ\u00eate Base de Donn\u00e9es\"):\r\n result = db.query(\"SELECT * FROM users WHERE active = true\")\r\n\r\n# Chronom\u00e9trage manuel\r\nimport time\r\nstart = time.time()\r\nprocess_large_dataset(data)\r\nduration = time.time() - start\r\n\r\nlogger.info(\"Traitement compl\u00e9t\u00e9\",\r\n duration_seconds=duration,\r\n records_processed=len(data))\r\n\r\n# R\u00e9cup\u00e9rer les statistiques\r\nstats = logger.get_performance_stats()\r\nprint(f\"Moyenne: {stats['avg_duration']:.3f}s\")\r\nprint(f\"Maximum: {stats['max_duration']:.3f}s\")\r\nprint(f\"Total op\u00e9rations: {stats['total_operations']}\")\r\n```\r\n\r\n---\r\n\r\n## Logging Distant\r\n\r\n### Elasticsearch\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"es_logger\",\r\n elasticsearch_url=\"http://elasticsearch:9200\",\r\n elasticsearch_index=\"myapp-logs\",\r\n elasticsearch_username=\"elastic\",\r\n elasticsearch_password=\"changeme\",\r\n batch_size=100, # Taille du batch\r\n batch_timeout=5 # Timeout en secondes\r\n)\r\n\r\nlogger.info(\"Log envoy\u00e9 vers Elasticsearch\",\r\n service=\"api\",\r\n environment=\"production\",\r\n request_id=\"req_123\")\r\n```\r\n\r\n### Grafana Loki\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"loki_logger\",\r\n loki_url=\"http://loki:3100\",\r\n loki_labels={\r\n \"app\": \"myapp\",\r\n \"environment\": \"production\",\r\n \"region\": \"us-east-1\",\r\n \"tier\": \"backend\"\r\n }\r\n)\r\n\r\nlogger.info(\"Log envoy\u00e9 vers Loki\",\r\n endpoint=\"/api/users\",\r\n method=\"GET\",\r\n status_code=200)\r\n```\r\n\r\n### Sentry (Error Tracking)\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"sentry_logger\",\r\n sentry_dsn=\"https://examplePublicKey@o0.ingest.sentry.io/0\",\r\n sentry_environment=\"production\",\r\n sentry_release=\"myapp@1.0.0\"\r\n)\r\n\r\n# Seuls les erreurs et critiques sont envoy\u00e9s \u00e0 Sentry\r\nlogger.error(\"\u00c9chec du traitement du paiement\",\r\n user_id=123,\r\n amount=99.99,\r\n error_code=\"PAYMENT_DECLINED\",\r\n card_type=\"visa\")\r\n```\r\n\r\n### Datadog\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"datadog_logger\",\r\n datadog_api_key=\"your_datadog_api_key\",\r\n datadog_site=\"datadoghq.com\", # ou datadoghq.eu\r\n datadog_service=\"web-api\",\r\n datadog_tags=[\"env:prod\", \"version:1.0.0\"]\r\n)\r\n\r\nlogger.info(\"Log Datadog\",\r\n service=\"web-api\",\r\n env=\"prod\",\r\n metric=\"request.duration\",\r\n value=234)\r\n```\r\n\r\n### Slack Notifications\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"slack_logger\",\r\n slack_webhook=\"https://hooks.slack.com/services/YOUR/WEBHOOK/URL\",\r\n slack_channel=\"#alerts\",\r\n slack_username=\"PyLoggerX Bot\"\r\n)\r\n\r\n# Seuls les warnings et au-dessus sont envoy\u00e9s \u00e0 Slack\r\nlogger.warning(\"Utilisation m\u00e9moire \u00e9lev\u00e9e\",\r\n memory_percent=95,\r\n hostname=\"server-01\")\r\n\r\nlogger.error(\"Service indisponible\",\r\n service=\"payment-api\",\r\n error=\"Connection timeout\")\r\n```\r\n\r\n### Webhook Personnalis\u00e9\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"webhook_logger\",\r\n webhook_url=\"https://your-api.com/logs\",\r\n webhook_method=\"POST\",\r\n webhook_headers={\r\n \"Authorization\": \"Bearer YOUR_TOKEN\",\r\n \"Content-Type\": \"application/json\"\r\n }\r\n)\r\n\r\nlogger.info(\"Log webhook personnalis\u00e9\",\r\n 
custom_field=\"value\")\r\n```\r\n\r\n### Configuration Multi-Services\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"multi_logger\",\r\n \r\n # Console et fichiers locaux\r\n console=True,\r\n json_file=\"logs/app.json\",\r\n \r\n # Elasticsearch pour tous les logs\r\n elasticsearch_url=\"http://elasticsearch:9200\",\r\n elasticsearch_index=\"myapp-logs\",\r\n \r\n # Loki pour le streaming\r\n loki_url=\"http://loki:3100\",\r\n loki_labels={\"app\": \"myapp\", \"env\": \"prod\"},\r\n \r\n # Sentry pour les erreurs\r\n sentry_dsn=\"https://xxx@sentry.io/xxx\",\r\n \r\n # Slack pour les alertes critiques\r\n slack_webhook=\"https://hooks.slack.com/services/xxx\",\r\n \r\n # Datadog pour les m\u00e9triques\r\n datadog_api_key=\"your_api_key\",\r\n \r\n # Configuration des batchs\r\n batch_size=100,\r\n batch_timeout=5\r\n)\r\n\r\n# Ce log ira partout sauf Slack (niveau trop bas)\r\nlogger.info(\"Application d\u00e9marr\u00e9e\")\r\n\r\n# Ce log ira partout y compris Slack\r\nlogger.error(\"Erreur critique d\u00e9tect\u00e9e\", \r\n component=\"database\",\r\n error=\"Connection pool exhausted\")\r\n```\r\n\r\n---\r\n\r\n## Fonctionnalit\u00e9s Avanc\u00e9es\r\n\r\n### 1. Filtrage Avanc\u00e9\r\n\r\n#### Filtrage par Niveau\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\nfrom pyloggerx.filters import LevelFilter\r\n\r\nlogger = PyLoggerX(name=\"filtered_logger\")\r\n\r\n# Garder seulement WARNING et ERROR\r\nlevel_filter = LevelFilter(min_level=\"WARNING\", max_level=\"ERROR\")\r\nlogger.add_filter(level_filter)\r\n\r\nlogger.debug(\"Ceci ne sera pas logg\u00e9\")\r\nlogger.warning(\"Ceci sera logg\u00e9\")\r\nlogger.error(\"Ceci sera logg\u00e9\")\r\nlogger.critical(\"Ceci ne sera pas logg\u00e9 (au-dessus de ERROR)\")\r\n```\r\n\r\n#### Filtrage par Pattern\r\n\r\n```python\r\nfrom pyloggerx.filters import MessageFilter\r\n\r\n# Inclure seulement les messages correspondant au pattern\r\ninclude_filter = MessageFilter(pattern=\"user_.*\", exclude=False)\r\nlogger.add_filter(include_filter)\r\n\r\n# Exclure les messages correspondant au pattern\r\nexclude_filter = MessageFilter(pattern=\"debug_.*\", exclude=True)\r\nlogger.add_filter(exclude_filter)\r\n```\r\n\r\n#### Limitation de D\u00e9bit (Rate Limiting)\r\n\r\n```python\r\nfrom pyloggerx.filters import RateLimitFilter\r\n\r\n# Maximum 100 logs par minute\r\nrate_limiter = RateLimitFilter(max_logs=100, period=60)\r\nlogger.add_filter(rate_limiter)\r\n\r\n# Utile pour les boucles \u00e0 haut volume\r\nfor i in range(10000):\r\n logger.debug(f\"Traitement de l'item {i}\")\r\n # Seuls ~100 logs seront \u00e9mis\r\n```\r\n\r\n#### Filtre Personnalis\u00e9\r\n\r\n```python\r\nimport logging\r\n\r\nclass CustomFilter(logging.Filter):\r\n def filter(self, record):\r\n # Logique personnalis\u00e9e\r\n # Retourne True pour garder le log, False pour l'ignorer\r\n \r\n # Exemple: garder seulement les logs d'un module sp\u00e9cifique\r\n if record.module != \"payment_processor\":\r\n return False\r\n \r\n # Exemple: ignorer les logs contenant des donn\u00e9es sensibles\r\n if hasattr(record, 'password') or hasattr(record, 'ssn'):\r\n return False\r\n \r\n return True\r\n\r\nlogger.add_filter(CustomFilter())\r\n```\r\n\r\n### 2. 
\u00c9chantillonnage de Logs (Log Sampling)\r\n\r\nPour les applications \u00e0 haut volume, l'\u00e9chantillonnage r\u00e9duit le volume de logs:\r\n\r\n```python\r\nlogger = PyLoggerX(\r\n name=\"sampled_logger\",\r\n enable_sampling=True,\r\n sampling_rate=0.1 # Garder seulement 10% des logs\r\n)\r\n\r\n# Utile pour les logs de debug en production\r\nfor i in range(10000):\r\n logger.debug(f\"Traitement de l'item {i}\")\r\n # Environ 1000 seront logg\u00e9s\r\n\r\n# Les logs ERROR et CRITICAL ne sont jamais \u00e9chantillonn\u00e9s\r\nlogger.error(\"Erreur importante\") # Toujours logg\u00e9\r\n```\r\n\r\n#### \u00c9chantillonnage Adaptatif\r\n\r\n```python\r\nfrom pyloggerx.sampling import AdaptiveSampler\r\n\r\nlogger = PyLoggerX(\r\n name=\"adaptive_logger\",\r\n enable_sampling=True,\r\n sampler=AdaptiveSampler(\r\n base_rate=0.1, # Taux de base 10%\r\n error_rate=1.0, # 100% pour les erreurs\r\n spike_threshold=1000, # D\u00e9tection de pic\r\n spike_rate=0.01 # 1% pendant les pics\r\n )\r\n)\r\n```\r\n\r\n### 3. Enrichissement de Contexte\r\n\r\n```python\r\nlogger = PyLoggerX(name=\"enriched_logger\")\r\n\r\n# Ajouter un contexte global\r\nlogger.add_enrichment(\r\n app_version=\"2.0.0\",\r\n environment=\"production\",\r\n hostname=socket.gethostname(),\r\n region=\"us-east-1\",\r\n datacenter=\"dc1\"\r\n)\r\n\r\n# Tous les logs suivants incluront ces donn\u00e9es\r\nlogger.info(\"Utilisateur connect\u00e9\", user_id=123)\r\n# Output: {..., \"app_version\": \"2.0.0\", \"environment\": \"production\", ..., \"user_id\": 123}\r\n\r\n# Enrichissement dynamique par requ\u00eate\r\nwith logger.context(request_id=\"req_789\", user_id=456):\r\n logger.info(\"Traitement de la requ\u00eate\")\r\n # Ce log inclut request_id et user_id\r\n \r\n process_request()\r\n \r\n logger.info(\"Requ\u00eate compl\u00e9t\u00e9e\")\r\n # Ce log inclut aussi request_id et user_id\r\n\r\n# Hors du contexte\r\nlogger.info(\"Log suivant\")\r\n# Ce log n'inclut plus request_id et user_id\r\n```\r\n\r\n### 4. Gestion des Exceptions\r\n\r\n```python\r\ntry:\r\n result = risky_operation()\r\nexcept ValueError as e:\r\n logger.exception(\"Op\u00e9ration risqu\u00e9e \u00e9chou\u00e9e\",\r\n operation=\"data_validation\",\r\n input_value=user_input,\r\n error_type=type(e).__name__)\r\n # Inclut automatiquement la stack trace compl\u00e8te\r\n\r\nexcept Exception as e:\r\n logger.error(\"Erreur inattendue\",\r\n operation=\"data_validation\",\r\n error=str(e),\r\n exc_info=True) # Inclut la traceback sans exception()\r\n```\r\n\r\n### 5. Niveaux de Log Dynamiques\r\n\r\n```python\r\nlogger = PyLoggerX(name=\"dynamic_logger\", level=\"INFO\")\r\n\r\n# Bas\u00e9 sur l'environnement\r\nimport os\r\nif os.getenv(\"DEBUG_MODE\") == \"true\":\r\n logger.set_level(\"DEBUG\")\r\n\r\n# Bas\u00e9 sur une condition\r\nif user.is_admin():\r\n logger.set_level(\"DEBUG\")\r\nelse:\r\n logger.set_level(\"WARNING\")\r\n\r\n# Changement temporaire\r\noriginal_level = logger.level\r\nlogger.set_level(\"DEBUG\")\r\ndebug_sensitive_operation()\r\nlogger.set_level(original_level)\r\n```\r\n\r\n### 6. 
### 6. Multiple Loggers

```python
# Separate logs by component
api_logger = PyLoggerX(
    name="api",
    json_file="logs/api.json",
    elasticsearch_index="api-logs"
)

database_logger = PyLoggerX(
    name="database",
    json_file="logs/database.json",
    elasticsearch_index="db-logs",
    performance_tracking=True
)

worker_logger = PyLoggerX(
    name="worker",
    json_file="logs/worker.json",
    elasticsearch_index="worker-logs"
)

# Usage
api_logger.info("API request received", endpoint="/api/users")
database_logger.info("Query executed", query="SELECT * FROM users")
worker_logger.info("Job processed", job_id="job_123")
```

---

## Configuration Reference

### PyLoggerX Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | str | "PyLoggerX" | Logger name |
| `level` | str | "INFO" | Log level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| `console` | bool | True | Enable console output |
| `colors` | bool | True | Enable console colors |
| `json_file` | str | None | Path to the JSON log file |
| `text_file` | str | None | Path to the text log file |
| `max_bytes` | int | 10MB | Maximum size before rotation |
| `backup_count` | int | 5 | Number of backup files |
| `rotation_when` | str | "midnight" | When to perform time-based rotation |
| `rotation_interval` | int | 1 | Rotation interval |
| `format_string` | str | None | Custom format string |
| `include_caller` | bool | False | Include file/line in log records |
| `performance_tracking` | bool | False | Enable performance tracking |
| `enrichment_data` | dict | {} | Data added to every log |
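As an illustration of the file and rotation parameters above, a minimal sketch combining size-based and time-based settings (the values are illustrative, not recommendations):

```python
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="rotating_app",
    json_file="logs/app.json",
    text_file="logs/app.log",
    max_bytes=50 * 1024 * 1024,   # rotate a file once it exceeds 50MB
    backup_count=10,              # keep 10 rotated files
    rotation_when="midnight",     # time-based rotation point
    rotation_interval=1,          # rotate every 1 interval of rotation_when
    include_caller=True           # add file/line information to each record
)

logger.info("Rotation configured")
```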
### Remote Logging Parameters

#### Elasticsearch
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `elasticsearch_url` | str | None | Elasticsearch server URL |
| `elasticsearch_index` | str | "pyloggerx" | Index name |
| `elasticsearch_username` | str | None | Username (optional) |
| `elasticsearch_password` | str | None | Password (optional) |
| `elasticsearch_ca_certs` | str | None | CA certificates for SSL |
| `elasticsearch_verify_certs` | bool | True | Verify SSL certificates |

#### Grafana Loki
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `loki_url` | str | None | Loki server URL |
| `loki_labels` | dict | {} | Default labels |
| `loki_batch_size` | int | 100 | Batch size |
| `loki_batch_timeout` | int | 5 | Batch timeout (seconds) |

#### Sentry
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `sentry_dsn` | str | None | Sentry DSN |
| `sentry_environment` | str | "production" | Environment name |
| `sentry_release` | str | None | Release version |
| `sentry_traces_sample_rate` | float | 0.0 | Trace sampling rate |

#### Datadog
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `datadog_api_key` | str | None | Datadog API key |
| `datadog_site` | str | "datadoghq.com" | Datadog site |
| `datadog_service` | str | None | Service name |
| `datadog_tags` | list | [] | Default tags |

#### Slack
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `slack_webhook` | str | None | Slack webhook URL |
| `slack_channel` | str | None | Channel (optional) |
| `slack_username` | str | "PyLoggerX" | Bot username |
| `slack_min_level` | str | "WARNING" | Minimum level to send |

#### Webhook
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `webhook_url` | str | None | Webhook URL |
| `webhook_method` | str | "POST" | HTTP method |
| `webhook_headers` | dict | {} | HTTP headers |
| `webhook_timeout` | int | 5 | Timeout (seconds) |

### Advanced Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `enable_sampling` | bool | False | Enable sampling |
| `sampling_rate` | float | 1.0 | Sampling rate (0.0-1.0) |
| `enable_rate_limit` | bool | False | Enable rate limiting |
| `rate_limit_messages` | int | 100 | Max messages per period |
| `rate_limit_period` | int | 60 | Period in seconds |
| `batch_size` | int | 100 | Batch size for remote export |
| `batch_timeout` | int | 5 | Batch timeout (seconds) |
| `async_export` | bool | True | Asynchronous (non-blocking) export |
| `queue_size` | int | 1000 | Export queue size |
| `filters` | list | [] | List of filters |

---

# Advanced Configuration and Monitoring - PyLoggerX

## Advanced Configuration

PyLoggerX offers several flexible ways to configure your logger, adapting to different environments and workflows.

### Loading from Files

#### JSON Configuration

The simplest and most portable way to configure PyLoggerX.

```python
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config

# Load the configuration from a JSON file
config = load_config(config_file="pyloggerx.json")
logger = PyLoggerX(**config)

logger.info("Logger configured from JSON")
```

**Example `pyloggerx.json` file:**

```json
{
  "name": "myapp",
  "level": "INFO",
  "console": true,
  "colors": true,
  "json_file": "logs/app.json",
  "text_file": "logs/app.log",
  "max_bytes": 10485760,
  "backup_count": 5,
  "include_caller": true,
  "performance_tracking": true,

  "elasticsearch_url": "http://elasticsearch:9200",
  "elasticsearch_index": "myapp-logs",
  "elasticsearch_username": "elastic",
  "elasticsearch_password": "changeme",

  "loki_url": "http://loki:3100",
  "loki_labels": {
    "app": "myapp",
    "environment": "production",
    "region": "us-east-1"
  },

  "sentry_dsn": "https://xxx@sentry.io/xxx",
  "sentry_environment": "production",
  "sentry_release": "1.0.0",

  "slack_webhook": "https://hooks.slack.com/services/xxx",
  "slack_channel": "#alerts",
  "slack_username": "PyLoggerX Bot",

  "enable_rate_limit": true,
  "rate_limit_messages": 100,
  "rate_limit_period": 60,

  "enable_sampling": false,
\"sampling_rate\": 1.0,\r\n \r\n \"batch_size\": 100,\r\n \"batch_timeout\": 5,\r\n \"async_export\": true,\r\n \r\n \"enrichment_data\": {\r\n \"service\": \"web-api\",\r\n \"version\": \"2.0.0\",\r\n \"datacenter\": \"dc1\"\r\n }\r\n}\r\n```\r\n\r\n#### Configuration YAML\r\n\r\nPour ceux qui pr\u00e9f\u00e8rent YAML (plus lisible pour les humains).\r\n\r\n```python\r\nfrom pyloggerx.config import load_config\r\n\r\n# Installation requise: pip install pyyaml\r\nconfig = load_config(config_file=\"pyloggerx.yaml\")\r\nlogger = PyLoggerX(**config)\r\n```\r\n\r\n**Exemple de fichier `pyloggerx.yaml`:**\r\n\r\n```yaml\r\n# Configuration g\u00e9n\u00e9rale\r\nname: myapp\r\nlevel: INFO\r\nconsole: true\r\ncolors: true\r\n\r\n# Fichiers de logs\r\njson_file: logs/app.json\r\ntext_file: logs/app.log\r\nmax_bytes: 10485760 # 10MB\r\nbackup_count: 5\r\n\r\n# Options\r\ninclude_caller: true\r\nperformance_tracking: true\r\n\r\n# Elasticsearch\r\nelasticsearch_url: http://elasticsearch:9200\r\nelasticsearch_index: myapp-logs\r\nelasticsearch_username: elastic\r\nelasticsearch_password: changeme\r\n\r\n# Grafana Loki\r\nloki_url: http://loki:3100\r\nloki_labels:\r\n app: myapp\r\n environment: production\r\n region: us-east-1\r\n\r\n# Sentry\r\nsentry_dsn: https://xxx@sentry.io/xxx\r\nsentry_environment: production\r\nsentry_release: \"1.0.0\"\r\n\r\n# Slack\r\nslack_webhook: https://hooks.slack.com/services/xxx\r\nslack_channel: \"#alerts\"\r\nslack_username: PyLoggerX Bot\r\n\r\n# Rate limiting\r\nenable_rate_limit: true\r\nrate_limit_messages: 100\r\nrate_limit_period: 60\r\n\r\n# Sampling\r\nenable_sampling: false\r\nsampling_rate: 1.0\r\n\r\n# Export batch\r\nbatch_size: 100\r\nbatch_timeout: 5\r\nasync_export: true\r\n\r\n# Enrichissement\r\nenrichment_data:\r\n service: web-api\r\n version: \"2.0.0\"\r\n datacenter: dc1\r\n```\r\n\r\n#### D\u00e9tection Automatique du Format\r\n\r\n```python\r\nfrom pyloggerx.config import ConfigLoader\r\n\r\n# D\u00e9tecte automatiquement JSON ou YAML selon l'extension\r\nconfig = ConfigLoader.from_file(\"config.json\") # JSON\r\nconfig = ConfigLoader.from_file(\"config.yaml\") # YAML\r\nconfig = ConfigLoader.from_file(\"config.yml\") # YAML\r\n\r\nlogger = PyLoggerX(**config)\r\n```\r\n\r\n### Configuration par Variables d'Environnement\r\n\r\nId\u00e9al pour les applications conteneuris\u00e9es et les d\u00e9ploiements cloud-native suivant les principes 12-factor.\r\n\r\n#### Variables Support\u00e9es\r\n\r\n```bash\r\n# Configuration de base\r\nexport PYLOGGERX_NAME=myapp\r\nexport PYLOGGERX_LEVEL=INFO\r\nexport PYLOGGERX_CONSOLE=true\r\nexport PYLOGGERX_COLORS=false # D\u00e9sactiver dans les conteneurs\r\n\r\n# Fichiers de logs\r\nexport PYLOGGERX_JSON_FILE=/var/log/myapp/app.json\r\nexport PYLOGGERX_TEXT_FILE=/var/log/myapp/app.log\r\n\r\n# Rate limiting\r\nexport PYLOGGERX_RATE_LIMIT_ENABLED=true\r\nexport PYLOGGERX_RATE_LIMIT_MESSAGES=100\r\nexport PYLOGGERX_RATE_LIMIT_PERIOD=60\r\n\r\n# Services distants\r\nexport PYLOGGERX_ELASTICSEARCH_URL=http://elasticsearch:9200\r\nexport PYLOGGERX_LOKI_URL=http://loki:3100\r\nexport PYLOGGERX_SENTRY_DSN=https://xxx@sentry.io/xxx\r\nexport PYLOGGERX_DATADOG_API_KEY=your_api_key\r\nexport PYLOGGERX_SLACK_WEBHOOK=https://hooks.slack.com/services/xxx\r\n```\r\n\r\n#### Utilisation des Variables d'Environnement\r\n\r\n```python\r\nfrom pyloggerx.config import load_config\r\n\r\n# Charger uniquement depuis les variables d'environnement\r\nconfig = load_config(from_env=True)\r\nlogger = PyLoggerX(**config)\r\n\r\n# Ou 
# Or use ConfigLoader directly
from pyloggerx.config import ConfigLoader

env_config = ConfigLoader.from_env(prefix="PYLOGGERX_")
logger = PyLoggerX(**env_config)
```

#### Docker Compose Example

```yaml
version: '3.8'

services:
  myapp:
    build: .
    environment:
      PYLOGGERX_LEVEL: INFO
      PYLOGGERX_CONSOLE: "true"
      PYLOGGERX_COLORS: "false"
      PYLOGGERX_JSON_FILE: /var/log/app.json
      PYLOGGERX_ELASTICSEARCH_URL: http://elasticsearch:9200
      PYLOGGERX_RATE_LIMIT_ENABLED: "true"
      PYLOGGERX_RATE_LIMIT_MESSAGES: 100
    volumes:
      - ./logs:/var/log
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:8.11.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
```

#### Kubernetes ConfigMap Example

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pyloggerx-config
  namespace: production
data:
  PYLOGGERX_LEVEL: "INFO"
  PYLOGGERX_CONSOLE: "true"
  PYLOGGERX_COLORS: "false"
  PYLOGGERX_ELASTICSEARCH_URL: "http://elasticsearch.logging.svc.cluster.local:9200"
  PYLOGGERX_RATE_LIMIT_ENABLED: "true"
  PYLOGGERX_RATE_LIMIT_MESSAGES: "100"

---
apiVersion: v1
kind: Secret
metadata:
  name: pyloggerx-secrets
  namespace: production
type: Opaque
stringData:
  PYLOGGERX_SENTRY_DSN: "https://xxx@sentry.io/xxx"
  PYLOGGERX_DATADOG_API_KEY: "your_api_key"
  PYLOGGERX_SLACK_WEBHOOK: "https://hooks.slack.com/services/xxx"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.0
          envFrom:
            - configMapRef:
                name: pyloggerx-config
            - secretRef:
                name: pyloggerx-secrets
```

### Multi-Source Configuration

Combine several configuration sources with a priority order.

#### Configuration Priority

**Order (highest to lowest priority):**
1. Environment variables
2. Configuration file
3. Default values

```python
from pyloggerx.config import load_config

# Load with priorities
config = load_config(
    config_file="config.json",   # 2nd priority
    from_env=True,               # 1st priority (overrides config_file)
    defaults={                   # 3rd priority (fallback)
        "name": "default-app",
        "level": "INFO",
        "console": True,
        "colors": False
    }
)

logger = PyLoggerX(**config)
```

#### Practical Example: Per-Environment Configuration

```python
import os
from pyloggerx.config import load_config

# Pick the config file based on the environment
env = os.getenv("ENVIRONMENT", "development")
config_files = {
    "development": "config.dev.json",
    "staging": "config.staging.json",
    "production": "config.prod.json"
}

# Load the appropriate config
config = load_config(
    config_file=config_files.get(env),
    from_env=True,  # allow overrides via env vars
    defaults={"level": "DEBUG" if env == "development" else "INFO"}
)

logger = PyLoggerX(**config)
logger.info(f"Application started in {env} mode")
```

#### Manual Configuration Merging

```python
import os
from pyloggerx.config import ConfigLoader

environment = os.getenv("ENVIRONMENT", "production")

# Load several configs
base_config = ConfigLoader.from_json("config.base.json")
env_config = ConfigLoader.from_json(f"config.{environment}.json")
local_overrides = ConfigLoader.from_json("config.local.json")
env_vars = ConfigLoader.from_env()

# Merge in order (later configs override earlier ones)
merged_config = ConfigLoader.merge_configs(
    base_config,
    env_config,
    local_overrides,
    env_vars
)

logger = PyLoggerX(**merged_config)
```

### Configuration Validation

PyLoggerX automatically validates your configuration.

#### Automatic Validation

```python
from pyloggerx.config import load_config

try:
    config = load_config(config_file="config.json")
    logger = PyLoggerX(**config)
except ValueError as e:
    print(f"Invalid configuration: {e}")
    exit(1)
```

#### Manual Validation

```python
from pyloggerx.config import ConfigValidator

config = {
    "name": "myapp",
    "level": "INVALID",          # invalid level
    "rate_limit_messages": -10   # invalid negative value
}

is_valid, error_message = ConfigValidator.validate(config)

if not is_valid:
    print(f"Configuration error: {error_message}")
else:
    logger = PyLoggerX(**config)
```

#### Validation Rules

The validator checks:

1. **Log level**: must be DEBUG, INFO, WARNING, ERROR, or CRITICAL
2. **Rate limiting**:
   - `rate_limit_messages` must be a positive integer
   - `rate_limit_period` must be a positive number
3. **Sampling**:
   - `sampling_rate` must be between 0.0 and 1.0
4. **URLs**:
   - URLs (Elasticsearch, Loki, webhook) must start with http/https
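A short, illustrative check of these rules written as pytest-style assertions, assuming `ConfigValidator.validate` returns the `(is_valid, error_message)` tuple described in the Config Reference below:

```python
from pyloggerx.config import ConfigValidator

def test_invalid_level_is_rejected():
    ok, error = ConfigValidator.validate({"level": "VERBOSE"})
    assert not ok and error  # invalid level -> (False, <message>)

def test_sampling_rate_out_of_bounds_is_rejected():
    ok, _ = ConfigValidator.validate({"sampling_rate": 1.5})
    assert not ok  # must be between 0.0 and 1.0

def test_minimal_valid_config_passes():
    ok, _ = ConfigValidator.validate({"name": "myapp", "level": "INFO"})
    assert ok
```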
### Predefined Configurations

PyLoggerX ships with ready-to-use configuration templates.

#### Available Templates

```python
from pyloggerx.config import EXAMPLE_CONFIGS

# Show the available templates
print(list(EXAMPLE_CONFIGS.keys()))
# ['basic', 'production', 'development']

# Use a template
logger = PyLoggerX(**EXAMPLE_CONFIGS['production'])
```

#### "Basic" Template

A simple configuration to get started quickly.

```python
from pyloggerx.config import EXAMPLE_CONFIGS

logger = PyLoggerX(**EXAMPLE_CONFIGS['basic'])
```

**Configuration:**
```json
{
  "name": "myapp",
  "level": "INFO",
  "console": true,
  "colors": true,
  "json_file": "logs/app.json",
  "text_file": "logs/app.txt"
}
```

#### "Production" Template

A configuration tuned for production environments.

```python
logger = PyLoggerX(**EXAMPLE_CONFIGS['production'])
```

**Configuration:**
```json
{
  "name": "myapp",
  "level": "WARNING",
  "console": false,
  "colors": false,
  "json_file": "/var/log/myapp/app.json",
  "text_file": "/var/log/myapp/app.txt",
  "max_bytes": 52428800,
  "backup_count": 10,
  "enable_rate_limit": true,
  "rate_limit_messages": 100,
  "rate_limit_period": 60,
  "performance_tracking": true
}
```

#### "Development" Template

A verbose configuration for development.

```python
logger = PyLoggerX(**EXAMPLE_CONFIGS['development'])
```

**Configuration:**
```json
{
  "name": "myapp-dev",
  "level": "DEBUG",
  "console": true,
  "colors": true,
  "include_caller": true,
  "json_file": "logs/dev.json",
  "enable_rate_limit": false,
  "performance_tracking": true
}
```

#### Saving a Template

```python
from pyloggerx.config import load_config, save_example_config

# Save a template to a file
save_example_config("production", "my-config.json")

# Then load and customize it
config = load_config(config_file="my-config.json")
config['name'] = "my-custom-app"
logger = PyLoggerX(**config)
```

#### Creating a Custom Template

```python
from pyloggerx.config import EXAMPLE_CONFIGS

# Start from an existing template
custom_config = EXAMPLE_CONFIGS['production'].copy()

# Customize
custom_config.update({
    'name': 'my-microservice',
    'elasticsearch_url': 'http://my-es:9200',
    'slack_webhook': 'https://hooks.slack.com/xxx',
    'enrichment_data': {
        'service': 'payment-api',
        'team': 'backend',
        'region': 'eu-west-1'
    }
})

logger = PyLoggerX(**custom_config)
```
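Templates also combine naturally with the other configuration sources described earlier. A minimal sketch, assuming `EXAMPLE_CONFIGS` and `ConfigLoader` behave as documented above:

```python
from pyloggerx import PyLoggerX
from pyloggerx.config import EXAMPLE_CONFIGS, ConfigLoader

# Start from the production template and let environment variables win
base = EXAMPLE_CONFIGS['production'].copy()
env_overrides = ConfigLoader.from_env(prefix="PYLOGGERX_")
config = ConfigLoader.merge_configs(base, env_overrides)

logger = PyLoggerX(**config)
logger.info("Logger built from a template plus environment overrides")
```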
---

## Monitoring and Metrics

PyLoggerX includes a complete monitoring system to track the health, performance, and metrics of your logging pipeline.

### Metrics Collector

The `MetricsCollector` automatically collects and aggregates logging metrics.

#### Basic Usage

```python
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import MetricsCollector

# Create a collector
collector = MetricsCollector(window_size=300)  # 5-minute window

# Create the logger being monitored
logger = PyLoggerX(
    name="monitored_app",
    console=True
)

# Record logs manually (optional, done automatically)
collector.record_log(level="INFO", size=256)
collector.record_log(level="ERROR", size=512)

# Get the metrics
metrics = collector.get_metrics()
print(f"Uptime: {metrics['uptime_seconds']}s")
print(f"Total logs: {metrics['total_logs']}")
print(f"Logs/second: {metrics['logs_per_second']}")
print(f"Average size: {metrics['avg_log_size_bytes']} bytes")
print(f"Per level: {metrics['logs_per_level']}")
print(f"Recent errors: {metrics['recent_errors']}")
```

#### Collected Metrics

The collector tracks:

1. **Uptime**: time elapsed since startup
2. **Total logs**: total number of logs emitted
3. **Logs per level**: counters for DEBUG, INFO, WARNING, ERROR, CRITICAL
4. **Log rate**: logs per second (sliding window)
5. **Log size**: average log size in bytes
6. **Errors**: history of recent errors

#### Recording Errors

```python
try:
    risky_operation()
except Exception as e:
    collector.record_error(str(e))
    logger.exception("Operation failed")
```

#### Resetting Metrics

```python
# Reset all metrics
collector.reset()
```

#### Custom Time Window

```python
# Collector with a 10-minute window
collector = MetricsCollector(window_size=600)

# Metrics over the last 10 minutes
metrics = collector.get_metrics()
```

### Alert Manager

The `AlertManager` lets you define alert rules based on metrics.

#### Configuring Alerts

```python
from pyloggerx.monitoring import AlertManager

# Create the manager
alert_mgr = AlertManager()

# Define an alert rule
alert_mgr.add_rule(
    name="high_error_rate",
    condition=lambda m: m['logs_per_level'].get('ERROR', 0) > 100,
    cooldown=300,  # 5 minutes between alerts
    message="High error rate detected (>100 errors)"
)

# Define a callback
def send_alert(alert_name, message):
    print(f"ALERT [{alert_name}]: {message}")
    # Send email, Slack, PagerDuty, etc.

alert_mgr.add_callback(send_alert)

# Check the metrics periodically
metrics = collector.get_metrics()
alert_mgr.check_metrics(metrics)
```
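The check above runs once; in practice you would evaluate the rules on a schedule. A minimal sketch of a background loop tying a collector and an alert manager together (the interval and thread handling are illustrative):

```python
import threading
import time

from pyloggerx.monitoring import AlertManager, MetricsCollector

collector = MetricsCollector(window_size=300)
alert_mgr = AlertManager()
alert_mgr.add_rule(
    name="high_error_rate",
    condition=lambda m: m['logs_per_level'].get('ERROR', 0) > 100,
    cooldown=300,
)

def alert_check_loop(interval=30):
    """Periodically feed the collector's metrics to the alert manager."""
    while True:
        alert_mgr.check_metrics(collector.get_metrics())
        time.sleep(interval)

threading.Thread(target=alert_check_loop, daemon=True).start()
```

The `HealthMonitor` described below automates exactly this kind of loop.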
#### Predefined Alert Rules

```python
# High error rate
alert_mgr.add_rule(
    name="high_error_rate",
    condition=lambda m: m['logs_per_level'].get('ERROR', 0) > 100,
    cooldown=300
)

# Excessive log rate
alert_mgr.add_rule(
    name="high_log_rate",
    condition=lambda m: m['logs_per_second'] > 100,
    cooldown=300
)

# Open circuit breaker
alert_mgr.add_rule(
    name="exporter_circuit_breaker",
    condition=lambda m: any(
        exp.get('circuit_breaker_open', False)
        for exp in m.get('exporter_metrics', {}).values()
    ),
    cooldown=600
)

# High queue size
alert_mgr.add_rule(
    name="high_queue_size",
    condition=lambda m: any(
        exp.get('queue_size', 0) > 1000
        for exp in m.get('exporter_metrics', {}).values()
    ),
    cooldown=300
)

# Memory usage
alert_mgr.add_rule(
    name="high_memory",
    condition=lambda m: m.get('avg_log_size_bytes', 0) > 10000,
    cooldown=600
)
```

#### Multiple Callbacks

```python
import requests

def slack_alert(alert_name, message):
    # slack_webhook is your webhook URL
    requests.post(
        slack_webhook,
        json={"text": f":warning: {message}"}
    )

def email_alert(alert_name, message):
    # send_email is your own helper
    send_email(
        to="ops@example.com",
        subject=f"Alert: {alert_name}",
        body=message
    )

def log_alert(alert_name, message):
    logger.critical(message, alert=alert_name)

# Add all callbacks
alert_mgr.add_callback(slack_alert)
alert_mgr.add_callback(email_alert)
alert_mgr.add_callback(log_alert)
```

#### Custom Cooldown

The cooldown prevents alert spam:

```python
# Critical alert with a short cooldown (1 minute)
alert_mgr.add_rule(
    name="critical_error",
    condition=lambda m: m['logs_per_level'].get('CRITICAL', 0) > 0,
    cooldown=60,  # 1 minute
    message="Critical error detected!"
)

# Warning alert with a long cooldown (10 minutes)
alert_mgr.add_rule(
    name="performance_degradation",
    condition=lambda m: m['logs_per_second'] > 50,
    cooldown=600,  # 10 minutes
    message="Performance degradation detected"
)
```

### Health Monitoring

The `HealthMonitor` automatically watches the logger's health in the background.

#### Setup and Start

```python
from pyloggerx.monitoring import HealthMonitor

logger = PyLoggerX(name="production_app")

# Create the monitor
monitor = HealthMonitor(
    logger=logger,
    check_interval=60  # check every 60 seconds
)

# Start monitoring
monitor.start()

# Monitoring runs in a separate background thread
# and checks health automatically every 60 seconds

# ... your application runs ...

# Stop monitoring cleanly
monitor.stop()
```

#### Getting the Status

```python
# Get the full status
status = monitor.get_status()

print(f"Monitoring running: {status['running']}")
print(f"Metrics: {status['metrics']}")
print(f"Logger stats: {status['logger_stats']}")
print(f"Logger health: {status['logger_health']}")
```

#### Automatic Alerts

The `HealthMonitor` ships with default alert rules:

1. **high_error_rate**: more than 100 errors
2. **high_log_rate**: more than 100 logs per second
3. **exporter_circuit_breaker**: an exporter's circuit breaker is open

```python
# Add a callback for alerts
def handle_alert(alert_name, message):
    print(f"ALERT: {message}")
    # Send a notification

monitor.alert_manager.add_callback(handle_alert)
```

#### Custom Rules

```python
# Add your own alert rules
monitor.alert_manager.add_rule(
    name="custom_metric",
    condition=lambda m: your_custom_check(m),
    cooldown=300,
    message="Custom condition triggered"
)
```

#### Complete Example: Application with Monitoring

```python
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor
import time
import signal
import sys

# Logger configuration
logger = PyLoggerX(
    name="monitored_service",
    console=True,
    json_file="logs/service.json",
    elasticsearch_url="http://elasticsearch:9200",
    performance_tracking=True
)

# Monitoring configuration
monitor = HealthMonitor(logger, check_interval=30)

def alert_callback(alert_name, message):
    """Alert callback"""
    logger.critical(f"ALERT: {message}", alert=alert_name)
    # Here: send email, Slack, PagerDuty, etc.

monitor.alert_manager.add_callback(alert_callback)

# Add custom rules
monitor.alert_manager.add_rule(
    name="service_overload",
    condition=lambda m: m['logs_per_second'] > 50,
    cooldown=180,
    message="Service overloaded: >50 logs/sec"
)

def shutdown_handler(signum, frame):
    """Clean shutdown"""
    logger.info("Stopping service...")
    monitor.stop()
    logger.close()
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown_handler)
signal.signal(signal.SIGTERM, shutdown_handler)

def main():
    # Start monitoring
    monitor.start()
    logger.info("Service and monitoring started")

    # Your application
    while True:
        try:
            # Business logic
            logger.info("Processing...")
            time.sleep(10)

        except Exception:
            logger.exception("Error in the main loop")

if __name__ == "__main__":
    main()
```

### Console Dashboard

Display a monitoring dashboard directly in the console.

#### Simple Display

```python
from pyloggerx.monitoring import print_dashboard

logger = PyLoggerX(name="myapp")

# Print the dashboard
print_dashboard(logger, clear_screen=True)
```

#### Dashboard Output

```
============================================================
PyLoggerX Monitoring Dashboard
============================================================
Timestamp: 2025-01-15 10:30:45

📊 General Statistics:
  Total Logs: 15423
  Exporters: 3
  Filters: 2

🚦 Rate Limiting:
  Enabled: Yes
  Max Messages: 100
  Period: 60s
  Rejections: 45

🏥 Exporter Health:
  Overall Healthy: ✅ Yes
  ✅ elasticsearch
  ✅ loki
  ❌ sentry

📈 Exporter Metrics:

  elasticsearch:
    Exported:  12450
    Failed:    23
    Dropped:   0
    Queue:     15
    ⚠️ Circuit Breaker: OPEN (failures: 5)

  loki:
    Exported:  11890
    Failed:    5
    Dropped:   0
    Queue:     8

  sentry:
    Exported:  345
    Failed:    102
    Dropped:   0
    Queue:     0
    ⚠️ Circuit Breaker: OPEN (failures: 10)

============================================================
```

#### Dashboard Loop

```python
import time
from pyloggerx.monitoring import print_dashboard

logger = PyLoggerX(name="myapp")

# Refresh the dashboard every 5 seconds
try:
    while True:
        print_dashboard(logger, clear_screen=True)
        time.sleep(5)
except KeyboardInterrupt:
    print("\nDashboard stopped")
```

#### Custom Dashboard

```python
import os
import time

def custom_dashboard(logger):
    """Custom dashboard"""
    stats = logger.get_stats()
    health = logger.healthcheck()

    os.system('cls' if os.name == 'nt' else 'clear')

    print("=" * 60)
    print("My Application - Dashboard")
    print("=" * 60)

    # Overall health
    status_icon = "✅" if health['healthy'] else "❌"
    print(f"\n{status_icon} Status: {'HEALTHY' if health['healthy'] else 'UNHEALTHY'}")

    # Key metrics
    print(f"\n📊 Metrics:")
    print(f"  Total logs: {stats['total_logs']:,}")

    if 'logs_per_level' in stats:
        print(f"\n📈 By level:")
        for level, count in sorted(stats['logs_per_level'].items()):
            print(f"  {level}: {count:,}")

    # Exporters
    if health['exporters']:
        print(f"\n🔌 Exporters:")
        for name, is_healthy in health['exporters'].items():
            icon = "✅" if is_healthy else "❌"
            print(f"  {icon} {name}")

    print("\n" + "=" * 60)

# Usage
while True:
    custom_dashboard(logger)
    time.sleep(5)
```

---

## Monitoring Integrations

### Prometheus Integration

Expose PyLoggerX metrics to Prometheus for centralized monitoring.

#### Installation

```bash
pip install prometheus-client
```

#### Basic Setup

```python
from pyloggerx import PyLoggerX
from prometheus_client import Counter, Gauge, Histogram, start_http_server
import time

# Prometheus metrics
logs_total = Counter(
    'pyloggerx_logs_total',
    'Total number of logs',
    ['level', 'logger']
)

logs_per_second = Gauge(
    'pyloggerx_logs_per_second',
    'Current logs per second',
    ['logger']
)

export_errors = Counter(
    'pyloggerx_export_errors_total',
    'Total export errors',
    ['exporter', 'logger']
)

queue_size = Gauge(
    'pyloggerx_queue_size',
    'Current queue size',
    ['exporter', 'logger']
)

log_size_bytes = Histogram(
    'pyloggerx_log_size_bytes',
    'Log size distribution',
    ['logger']
)

# Logger
logger = PyLoggerX(
    name="prometheus_app",
    console=True,
    json_file="logs/app.json",
    elasticsearch_url="http://elasticsearch:9200"
)

def update_prometheus_metrics():
    """Update the Prometheus metrics from PyLoggerX"""
    stats = logger.get_stats()

    # Logs per level
    if 'logs_per_level' in stats:
        for level, count in stats['logs_per_level'].items():
            logs_total.labels(level=level, logger=logger.name).inc(count)

    # Export metrics
    if 'exporter_metrics' in stats:
        for exporter_name, metrics in stats['exporter_metrics'].items():
            # Export errors
            export_errors.labels(
                exporter=exporter_name,
                logger=logger.name
            ).inc(metrics.get('failed_logs', 0))

            # Queue size
            queue_size.labels(
                exporter=exporter_name,
                logger=logger.name
            ).set(metrics.get('queue_size', 0))

# Start the Prometheus metrics server
start_http_server(8000)
logger.info("Prometheus metrics server started", port=8000)

# Update the metrics periodically
while True:
    update_prometheus_metrics()
    time.sleep(15)
```

#### Prometheus Configuration

**prometheus.yml:**

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'pyloggerx'
    static_configs:
      - targets: ['localhost:8000']
        labels:
          app: 'myapp'
          environment: 'production'
```

#### Useful Prometheus Queries

```promql
# Log rate per second
rate(pyloggerx_logs_total[5m])

# Errors per exporter
sum by (exporter) (pyloggerx_export_errors_total)

# Queue size per exporter
pyloggerx_queue_size

# 95th percentile of log size
histogram_quantile(0.95, pyloggerx_log_size_bytes_bucket)

# Logs per level (stacked graph)
sum by (level) (rate(pyloggerx_logs_total[5m]))
```

#### Prometheus Alerts

**alerts.yml:**

```yaml
groups:
  - name: pyloggerx_alerts
    interval: 30s
    rules:
      - alert: HighErrorRate
        expr: rate(pyloggerx_logs_total{level="ERROR"}[5m]) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High error rate detected"
          description: "{{ $labels.logger }} has an error rate of {{ $value }}/s"

      - alert: ExporterDown
        expr: pyloggerx_queue_size > 1000
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Exporter overloaded"
          description: "{{ $labels.exporter }} has a queue of {{ $value }} messages"

      - alert: HighExportFailureRate
        expr: rate(pyloggerx_export_errors_total[5m]) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Frequent export failures"
          description: "{{ $labels.exporter }} is failing at {{ $value }}/s"
```

### Grafana Integration

Build visual dashboards to watch PyLoggerX.

#### Grafana Dashboard JSON

```json
{
  "dashboard": {
    "title": "PyLoggerX Monitoring",
    "panels": [
      {
        "title": "Logs per Second",
        "type": "graph",
        "targets": [
          { "expr": "rate(pyloggerx_logs_total[5m])" }
        ]
      },
      {
        "title": "Logs per Level",
        "type": "graph",
        "targets": [
          { "expr": "sum by (level) (rate(pyloggerx_logs_total[5m]))" }
        ],
        "stack": true
      },
      {
        "title": "Queue Size",
        "type": "graph",
        "targets": [
          { "expr": "pyloggerx_queue_size" }
        ]
      },
      {
        "title": "Export Errors",
        "type": "stat",
        "targets": [
          { "expr": "sum(pyloggerx_export_errors_total)" }
        ]
      }
    ]
  }
}
```

#### Dashboard Variables

```json
{
  "templating": {
    "list": [
      {
        "name": "logger",
        "type": "query",
        "query": "label_values(pyloggerx_logs_total, logger)"
      },
      {
\"exporter\",\r\n \"type\": \"query\",\r\n \"query\": \"label_values(pyloggerx_queue_size, exporter)\"\r\n }\r\n ]\r\n }\r\n}\r\n```\r\n\r\n#### Panels Recommand\u00e9s\r\n\r\n1. **Logs par Seconde**: Graph avec `rate(pyloggerx_logs_total[5m])`\r\n2. **Distribution par Niveau**: Stacked graph avec `sum by (level)`\r\n3. **Sant\u00e9 des Exporters**: Stat panel avec `up` metric\r\n4. **Taille de Queue**: Graph avec `pyloggerx_queue_size`\r\n5. **Erreurs d'Export**: Graph avec `rate(pyloggerx_export_errors_total[5m])`\r\n6. **Latence des Logs**: Histogram avec `pyloggerx_log_size_bytes`\r\n\r\n### M\u00e9triques Personnalis\u00e9es\r\n\r\nCr\u00e9ez vos propres m\u00e9triques m\u00e9tier.\r\n\r\n#### M\u00e9triques Applicatives\r\n\r\n```python\r\nfrom pyloggerx import PyLoggerX\r\nfrom prometheus_client import Counter, Histogram\r\nimport time\r\n\r\nlogger = PyLoggerX(name=\"business_app\")\r\n\r\n# M\u00e9triques m\u00e9tier\r\nuser_logins = Counter('app_user_logins_total', 'Total user logins')\r\norder_value = Histogram('app_order_value_dollars', 'Order values')\r\napi_requests = Counter('app_api_requests_total', 'API requests', ['endpoint', 'status'])\r\nprocessing_time = Histogram('app_processing_seconds', 'Processing time', ['operation'])\r\n\r\ndef handle_login(user_id):\r\n \"\"\"G\u00e9rer une connexion utilisateur\"\"\"\r\n start = time.time()\r\n \r\n try:\r\n # Logique de connexion\r\n logger.info(\"Connexion utilisateur\", user_id=user_id)\r\n user_logins.inc()\r\n \r\n duration = time.time() - start\r\n processing_time.labels(operation='login').observe(duration)\r\n \r\n return True\r\n except Exception as e:\r\n logger.error(\"\u00c9chec de connexion\", user_id=user_id, error=str(e))\r\n return False\r\n\r\ndef process_order(order_id, amount):\r\n \"\"\"Traiter une commande\"\"\"\r\n logger.info(\"Traitement commande\", order_id=order_id, amount=amount)\r\n \r\n # Enregistrer la valeur\r\n order_value.observe(amount)\r\n \r\n # Logique m\u00e9tier\r\n # ...\r\n\r\ndef api_endpoint(endpoint, func):\r\n \"\"\"D\u00e9corateur pour tracker les appels API\"\"\"\r\n def wrapper(*args, **kwargs):\r\n start = time.time()\r\n \r\n try:\r\n result = func(*args, **kwargs)\r\n status = 'success'\r\n logger.info(\"API appel\u00e9e\", endpoint=endpoint, status=status)\r\n return result\r\n except Exception as e:\r\n status = 'error'\r\n logger.error(\"API \u00e9chou\u00e9e\", endpoint=endpoint, error=str(e))\r\n raise\r\n finally:\r\n duration = time.time() - start\r\n api_requests.labels(endpoint=endpoint, status=status).inc()\r\n processing_time.labels(operation=endpoint).observe(duration)\r\n \r\n return wrapper\r\n\r\n@api_endpoint('/api/users')\r\ndef get_users():\r\n # Logique API\r\n return {\"users\": []}\r\n```\r\n\r\n#### M\u00e9triques Combin\u00e9es\r\n\r\n```python\r\nfrom pyloggerx.monitoring import MetricsCollector\r\n\r\ncollector = MetricsCollector()\r\nlogger = PyLoggerX(name=\"app\")\r\n\r\n# Fonction p\u00e9riodique pour exporter vers Prometheus\r\ndef export_pyloggerx_metrics():\r\n \"\"\"Exporter les m\u00e9triques PyLoggerX vers Prometheus\"\"\"\r\n metrics = collector.get_metrics()\r\n \r\n # M\u00e9triques syst\u00e8me\r\n logs_total_gauge.set(metrics['total_logs'])\r\n logs_per_second_gauge.set(metrics['logs_per_second'])\r\n avg_log_size_gauge.set(metrics['avg_log_size_bytes'])\r\n \r\n # Logs par niveau\r\n for level, count in metrics['logs_per_level'].items():\r\n logs_by_level.labels(level=level).set(count)\r\n \r\n # Erreurs r\u00e9centes\r\n 
    error_count.set(len(metrics['recent_errors']))

# Call periodically
import threading
import time

def metrics_updater():
    while True:
        export_pyloggerx_metrics()
        time.sleep(15)

metrics_thread = threading.Thread(target=metrics_updater, daemon=True)
metrics_thread.start()
```

---

## Complete Examples

### Example 1: Web Application with Full Monitoring

```python
"""
FastAPI application with full PyLoggerX monitoring
"""
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor, print_dashboard
from pyloggerx.config import load_config
from prometheus_client import make_asgi_app, Counter, Histogram
import time
import uuid
import os

# Load the configuration
config = load_config(
    config_file="config.json",
    from_env=True,
    defaults={"name": "web-api", "level": "INFO"}
)

# Initialize the logger
logger = PyLoggerX(**config)

# Initialize monitoring
monitor = HealthMonitor(logger, check_interval=30)

# Prometheus metrics
http_requests = Counter(
    'http_requests_total',
    'Total HTTP requests',
    ['method', 'endpoint', 'status']
)
http_duration = Histogram(
    'http_request_duration_seconds',
    'HTTP request duration',
    ['method', 'endpoint']
)

# Alert callbacks
def alert_to_slack(alert_name, message):
    """Send an alert to Slack"""
    if os.getenv('SLACK_WEBHOOK'):
        import requests
        requests.post(
            os.getenv('SLACK_WEBHOOK'),
            json={"text": f":warning: {message}"}
        )

monitor.alert_manager.add_callback(alert_to_slack)

# FastAPI application
app = FastAPI(title="API with Monitoring")

# Mount the Prometheus endpoint
metrics_app = make_asgi_app()
app.mount("/metrics", metrics_app)

@app.on_event("startup")
async def startup():
    """Application startup"""
    monitor.start()
    logger.info("Application and monitoring started")

@app.on_event("shutdown")
async def shutdown():
    """Application shutdown"""
    monitor.stop()
    logger.info("Application stopped")
    logger.close()

@app.middleware("http")
async def logging_middleware(request: Request, call_next):
    """Logging and metrics middleware"""
    request_id = str(uuid.uuid4())
    start_time = time.time()

    # Logging context
    with logger.context(request_id=request_id):
        logger.info(
            "Request received",
            method=request.method,
            path=request.url.path,
            client=request.client.host
        )

        try:
            response = await call_next(request)
            duration = time.time() - start_time

            # Metrics
            http_requests.labels(
                method=request.method,
                endpoint=request.url.path,
                status=response.status_code
            ).inc()
            http_duration.labels(
                method=request.method,
                endpoint=request.url.path
            ).observe(duration)

            logger.info(
                "Request completed",
                method=request.method,
                path=request.url.path,
                status=response.status_code,
                duration_ms=duration * 1000
            )

            response.headers["X-Request-ID"] = request_id
            return response

        except Exception:
            duration = time.time() - start_time
            logger.exception(
requ\u00eate\",\r\n method=request.method,\r\n path=request.url.path,\r\n duration_ms=duration * 1000\r\n )\r\n raise\r\n\r\n@app.get(\"/\")\r\nasync def root():\r\n \"\"\"Endpoint racine\"\"\"\r\n return {\"status\": \"ok\", \"service\": \"web-api\"}\r\n\r\n@app.get(\"/health\")\r\nasync def health():\r\n \"\"\"Health check d\u00e9taill\u00e9\"\"\"\r\n health_status = logger.healthcheck()\r\n stats = logger.get_stats()\r\n monitor_status = monitor.get_status()\r\n \r\n return {\r\n \"healthy\": health_status['healthy'],\r\n \"logger\": {\r\n \"total_logs\": stats['total_logs'],\r\n \"exporters\": health_status['exporters']\r\n },\r\n \"monitor\": {\r\n \"running\": monitor_status['running'],\r\n \"metrics\": monitor_status['metrics']\r\n }\r\n }\r\n\r\n@app.get(\"/stats\")\r\nasync def stats():\r\n \"\"\"Statistiques d\u00e9taill\u00e9es\"\"\"\r\n return {\r\n \"logger\": logger.get_stats(),\r\n \"monitor\": monitor.get_status()\r\n }\r\n\r\n@app.get(\"/dashboard\")\r\nasync def dashboard():\r\n \"\"\"Dashboard en format texte\"\"\"\r\n import io\r\n import sys\r\n \r\n # Capturer la sortie du dashboard\r\n old_stdout = sys.stdout\r\n sys.stdout = buffer = io.StringIO()\r\n \r\n print_dashboard(logger, clear_screen=False)\r\n \r\n sys.stdout = old_stdout\r\n output = buffer.getvalue()\r\n \r\n return {\"dashboard\": output}\r\n\r\nif __name__ == \"__main__\":\r\n import uvicorn\r\n uvicorn.run(app, host=\"0.0.0.0\", port=8000)\r\n```\r\n\r\n### Exemple 2: Worker Batch avec Configuration Avanc\u00e9e\r\n\r\n```python\r\n\"\"\"\r\nWorker de traitement batch avec configuration compl\u00e8te\r\n\"\"\"\r\nimport time\r\nimport sys\r\nfrom datetime import datetime\r\nfrom pyloggerx import PyLoggerX\r\nfrom pyloggerx.config import load_config, save_example_config\r\nfrom pyloggerx.monitoring import HealthMonitor, MetricsCollector\r\n\r\n# G\u00e9n\u00e9rer une config si elle n'existe pas\r\nconfig_file = \"worker-config.json\"\r\nif not os.path.exists(config_file):\r\n save_example_config(\"production\", config_file)\r\n print(f\"Configuration cr\u00e9\u00e9e: {config_file}\")\r\n\r\n# Charger la configuration\r\nconfig = load_config(\r\n config_file=config_file,\r\n from_env=True,\r\n defaults={\r\n \"name\": \"batch-worker\",\r\n \"level\": \"INFO\",\r\n \"performance_tracking\": True\r\n }\r\n)\r\n\r\n# Personnaliser la config\r\nconfig.update({\r\n \"enrichment_data\": {\r\n \"worker_id\": os.getenv(\"WORKER_ID\", \"worker-1\"),\r\n \"datacenter\": os.getenv(\"DATACENTER\", \"dc1\"),\r\n \"start_time\": datetime.now().isoformat()\r\n }\r\n})\r\n\r\n# Initialiser le logger\r\nlogger = PyLoggerX(**config)\r\n\r\n# Monitoring\r\ncollector = MetricsCollector(window_size=600) # 10 minutes\r\nmonitor = HealthMonitor(logger, check_interval=60)\r\n\r\n# Callbacks d'alerte\r\ndef email_alert(alert_name, message):\r\n logger.critical(f\"ALERTE: {message}\", alert=alert_name)\r\n # Impl\u00e9menter l'envoi d'email\r\n\r\ndef metrics_alert(alert_name, message):\r\n \"\"\"Logger les alertes pour les m\u00e9triques\"\"\"\r\n collector.record_error(f\"{alert_name}: {message}\")\r\n\r\nmonitor.alert_manager.add_callback(email_alert)\r\nmonitor.alert_manager.add_callback(metrics_alert)\r\n\r\n# R\u00e8gles d'alerte personnalis\u00e9es\r\nmonitor.alert_manager.add_rule(\r\n name=\"processing_slow\",\r\n condition=lambda m: m.get('avg_duration', 0) > 5.0,\r\n cooldown=300,\r\n message=\"Traitement lent d\u00e9tect\u00e9 (>5s en moyenne)\"\r\n)\r\n\r\nclass BatchWorker:\r\n def __init__(self):\r\n self.running 
        self.running = False
        self.processed = 0
        self.errors = 0

    def start(self):
        """Start the worker"""
        self.running = True
        monitor.start()

        logger.info("Worker started", config=config)

        try:
            while self.running:
                self.process_batch()
                time.sleep(10)
        except KeyboardInterrupt:
            logger.info("Shutdown requested")
        finally:
            self.stop()

    def process_batch(self):
        """Process one batch"""
        with logger.timer("Batch Processing"):
            try:
                # Fetch the jobs
                jobs = self.fetch_jobs()

                if not jobs:
                    logger.debug("No job to process")
                    return

                logger.info("Processing batch", job_count=len(jobs))

                # Process each job
                for job in jobs:
                    self.process_job(job)

                # Record metrics
                collector.record_log("INFO", size=len(str(jobs)))

            except Exception as e:
                self.errors += 1
                collector.record_error(str(e))
                logger.exception("Batch error")

    def fetch_jobs(self):
        """Fetch jobs from the queue"""
        # Simulate fetching
        import random
        return [{"id": i} for i in range(random.randint(0, 10))]

    def process_job(self, job):
        """Process one job"""
        job_id = job["id"]

        try:
            logger.debug("Processing job", job_id=job_id)

            # Simulate processing
            time.sleep(0.1)

            self.processed += 1
            logger.info("Job completed", job_id=job_id)

        except Exception as e:
            self.errors += 1
            logger.error("Job failed", job_id=job_id, error=str(e))

    def stop(self):
        """Stop the worker"""
        self.running = False
        monitor.stop()

        # Final stats
        stats = logger.get_performance_stats()
        metrics = collector.get_metrics()

        logger.info(
            "Worker stopped",
            processed=self.processed,
            errors=self.errors,
            total_duration=stats.get('total_duration', 0),
            avg_duration=stats.get('avg_duration', 0),
            logs_per_second=metrics.get('logs_per_second', 0)
        )

        logger.close()

if __name__ == "__main__":
    worker = BatchWorker()
    worker.start()
```

### Example 3: Microservice with Live Dashboard

```python
"""
Microservice with a real-time monitoring dashboard
"""
import threading
import time
import os
from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor, print_dashboard
from pyloggerx.config import load_config

# Configuration
config = load_config(
    from_env=True,
    defaults={
        "name": "microservice",
        "level": "INFO",
        "console": True,
        "json_file": "logs/service.json",
        "performance_tracking": True,
        "enable_rate_limit": True,
        "rate_limit_messages": 100,
        "rate_limit_period": 60
    }
)

logger = PyLoggerX(**config)
monitor = HealthMonitor(logger, check_interval=30)

# Dashboard in a separate thread
def dashboard_updater():
    """Continuously refresh the dashboard"""
    while True:
        try:
            print_dashboard(logger, clear_screen=True)
            time.sleep(5)
        except KeyboardInterrupt:
            break

# Start the dashboard in a separate thread
if os.getenv("SHOW_DASHBOARD", "false").lower() == "true":
    dashboard_thread = threading.Thread(target=dashboard_updater, daemon=True)
    dashboard_thread.start()
    logger.info("Dashboard enabled")

# Main service
monitor.start()
logger.info("Microservice started")

try:
    # Main service loop
    while True:
        # Simulate work
        logger.info("Processing")
        time.sleep(10)

        # Simulate occasional errors
        import random
        if random.random() < 0.1:
            logger.error("Simulated error", error_code=random.randint(500, 599))

except KeyboardInterrupt:
    logger.info("Stopping service")
finally:
    monitor.stop()
    logger.close()
```

---

## Config Reference

### ConfigLoader

Class for loading configurations from different sources.

```python
class ConfigLoader:
    @staticmethod
    def from_json(filepath: str) -> Dict[str, Any]

    @staticmethod
    def from_yaml(filepath: str) -> Dict[str, Any]

    @staticmethod
    def from_env(prefix: str = "PYLOGGERX_") -> Dict[str, Any]

    @staticmethod
    def from_file(filepath: str) -> Dict[str, Any]

    @staticmethod
    def merge_configs(*configs: Dict[str, Any]) -> Dict[str, Any]
```

### ConfigValidator

Class for validating configurations.

```python
class ConfigValidator:
    VALID_LEVELS = ['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']

    @staticmethod
    def validate(config: Dict[str, Any]) -> tuple[bool, Optional[str]]
```

### load_config

Function for loading a complete configuration.

```python
def load_config(
    config_file: Optional[str] = None,
    from_env: bool = True,
    defaults: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]
```

### MetricsCollector

Logging metrics collector.

```python
class MetricsCollector:
    def __init__(self, window_size: int = 300)

    def record_log(self, level: str, size: int = 0) -> None

    def record_error(self, error: str) -> None

    def get_metrics(self) -> Dict[str, Any]

    def reset(self) -> None
```

**Metrics returned by get_metrics():**
- `uptime_seconds`: time elapsed since startup
- `logs_per_level`: dict of counters per level
- `logs_per_second`: log rate (sliding window)
- `avg_log_size_bytes`: average log size
- `recent_errors`: list of the 10 most recent errors
- `total_logs`: total number of logs emitted

### AlertManager

Metrics-based alert manager.

```python
class AlertManager:
    def add_rule(
        self,
        name: str,
        condition: Callable[[Dict[str, Any]], bool],
        cooldown: int = 300,
        message: Optional[str] = None
    ) -> None

    def add_callback(self, callback: Callable[[str, str], None]) -> None

    def check_metrics(self, metrics: Dict[str, Any]) -> None
```

### HealthMonitor

Automatic health monitor.

```python
class HealthMonitor:
    def __init__(
        self,
        logger: PyLoggerX,
        check_interval: int = 60
    )

    def start(self) -> None

    def stop(self) -> None

    def get_status(self) -> Dict[str, Any]

    # Properties
    @property
    def metrics_collector(self) -> MetricsCollector

    @property
    def alert_manager(self) -> AlertManager
```

**Status returned by get_status():**
- `running`: monitoring status
- `metrics`: collector metrics
- `logger_stats`: logger statistics
- `logger_health`: logger health

### print_dashboard

Function for printing the console dashboard.

```python
def print_dashboard(
    logger: PyLoggerX,
    clear_screen: bool = True
) -> None
```

---

## Complete Environment Variables

Exhaustive list of supported environment variables:

```bash
# General
PYLOGGERX_NAME=myapp
PYLOGGERX_LEVEL=INFO

# Output
PYLOGGERX_CONSOLE=true
PYLOGGERX_COLORS=false

# Files
PYLOGGERX_JSON_FILE=/var/log/app.json
PYLOGGERX_TEXT_FILE=/var/log/app.log

# Rate Limiting
PYLOGGERX_RATE_LIMIT_ENABLED=true
PYLOGGERX_RATE_LIMIT_MESSAGES=100
PYLOGGERX_RATE_LIMIT_PERIOD=60

# Sampling
PYLOGGERX_SAMPLING_ENABLED=false
PYLOGGERX_SAMPLING_RATE=1.0

# Elasticsearch
PYLOGGERX_ELASTICSEARCH_URL=http://elasticsearch:9200
PYLOGGERX_ELASTICSEARCH_INDEX=logs
PYLOGGERX_ELASTICSEARCH_USERNAME=elastic
PYLOGGERX_ELASTICSEARCH_PASSWORD=changeme

# Loki
PYLOGGERX_LOKI_URL=http://loki:3100

# Sentry
PYLOGGERX_SENTRY_DSN=https://xxx@sentry.io/xxx
PYLOGGERX_SENTRY_ENVIRONMENT=production
PYLOGGERX_SENTRY_RELEASE=1.0.0

# Datadog
PYLOGGERX_DATADOG_API_KEY=your_api_key
PYLOGGERX_DATADOG_SITE=datadoghq.com
PYLOGGERX_DATADOG_SERVICE=myapp

# Slack
PYLOGGERX_SLACK_WEBHOOK=https://hooks.slack.com/services/xxx
PYLOGGERX_SLACK_CHANNEL=#alerts
PYLOGGERX_SLACK_USERNAME=PyLoggerX Bot

# Webhook
PYLOGGERX_WEBHOOK_URL=https://example.com/logs
PYLOGGERX_WEBHOOK_METHOD=POST

# Performance
PYLOGGERX_PERFORMANCE_TRACKING=true
PYLOGGERX_INCLUDE_CALLER=false

# Export
PYLOGGERX_BATCH_SIZE=100
PYLOGGERX_BATCH_TIMEOUT=5
PYLOGGERX_ASYNC_EXPORT=true
```

---

## Best Practices

### Configuration

1. **Use one config file per environment**
   ```python
   config = load_config(
       config_file=f"config.{os.getenv('ENV', 'dev')}.json",
       from_env=True
   )
   ```

2. **Never commit secrets** (see the sketch after this list)
   - Use environment variables
   - Use tools such as Vault or AWS Secrets Manager

3. **Validate the configuration at startup**
   ```python
   try:
       config = load_config(config_file="config.json")
   except ValueError as e:
       print(f"Invalid config: {e}")
       sys.exit(1)
   ```

4. **Document your configurations**
   - Provide example configuration files
   - Document each parameter
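A minimal sketch of point 2, assuming the secrets are injected into the process environment (for example by a Kubernetes Secret or a Vault agent); the variable names are illustrative:

```python
import os
from pyloggerx import PyLoggerX
from pyloggerx.config import load_config

# Keep secrets out of config files: read them from the environment at startup
config = load_config(config_file="config.json", from_env=True)
config["sentry_dsn"] = os.environ.get("SENTRY_DSN")
config["datadog_api_key"] = os.environ.get("DATADOG_API_KEY")

logger = PyLoggerX(**config)
```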
### AlertManager

Metrics-based alert manager.

```python
class AlertManager:
    def add_rule(
        self,
        name: str,
        condition: Callable[[Dict[str, Any]], bool],
        cooldown: int = 300,
        message: Optional[str] = None
    ) -> None

    def add_callback(self, callback: Callable[[str, str], None]) -> None

    def check_metrics(self, metrics: Dict[str, Any]) -> None
```
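For illustration, here is a sketch of registering a rule and a notification callback; the error-count threshold is arbitrary, the callback is assumed to receive the rule name and the alert message, and the `pyloggerx.monitoring` import path is an assumption:

```python
from pyloggerx.monitoring import AlertManager, MetricsCollector  # import path assumed

collector = MetricsCollector()
alerts = AlertManager()

# Fire when more than 10 ERROR logs have been counted
alerts.add_rule(
    name="high_error_count",
    condition=lambda m: m.get("logs_per_level", {}).get("ERROR", 0) > 10,
    cooldown=300,
    message="Error count is above threshold",
)

# Callback assumed to receive (rule_name, message)
def notify(rule_name: str, message: str) -> None:
    print(f"[ALERT] {rule_name}: {message}")

alerts.add_callback(notify)

# Evaluate the rules against the current metrics
alerts.check_metrics(collector.get_metrics())
```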
### HealthMonitor

Automatic health monitor.

```python
class HealthMonitor:
    def __init__(
        self,
        logger: PyLoggerX,
        check_interval: int = 60
    )

    def start(self) -> None

    def stop(self) -> None

    def get_status(self) -> Dict[str, Any]

    # Properties
    @property
    def metrics_collector(self) -> MetricsCollector

    @property
    def alert_manager(self) -> AlertManager
```

**Status returned by `get_status()`:**
- `running`: monitoring state
- `metrics`: metrics from the collector
- `logger_stats`: logger statistics
- `logger_health`: logger health

### print_dashboard

Function that prints the console dashboard.

```python
def print_dashboard(
    logger: PyLoggerX,
    clear_screen: bool = True
) -> None
```
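A minimal monitoring loop tying the two together might look like the sketch below; the `pyloggerx.monitoring` import path is an assumption and the refresh interval is arbitrary:

```python
import time

from pyloggerx import PyLoggerX
from pyloggerx.monitoring import HealthMonitor, print_dashboard  # import path assumed

logger = PyLoggerX(name="myapp")
monitor = HealthMonitor(logger, check_interval=30)
monitor.start()

try:
    while True:
        # Redraw the console dashboard and inspect the monitor state
        print_dashboard(logger, clear_screen=True)
        status = monitor.get_status()
        if not status["running"]:
            logger.warning("Health monitor stopped unexpectedly")
        time.sleep(30)
except KeyboardInterrupt:
    monitor.stop()
    logger.close()
```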
---

## Complete Environment Variables

Exhaustive list of supported environment variables:

```bash
# General
PYLOGGERX_NAME=myapp
PYLOGGERX_LEVEL=INFO

# Output
PYLOGGERX_CONSOLE=true
PYLOGGERX_COLORS=false

# Files
PYLOGGERX_JSON_FILE=/var/log/app.json
PYLOGGERX_TEXT_FILE=/var/log/app.log

# Rate limiting
PYLOGGERX_RATE_LIMIT_ENABLED=true
PYLOGGERX_RATE_LIMIT_MESSAGES=100
PYLOGGERX_RATE_LIMIT_PERIOD=60

# Sampling
PYLOGGERX_SAMPLING_ENABLED=false
PYLOGGERX_SAMPLING_RATE=1.0

# Elasticsearch
PYLOGGERX_ELASTICSEARCH_URL=http://elasticsearch:9200
PYLOGGERX_ELASTICSEARCH_INDEX=logs
PYLOGGERX_ELASTICSEARCH_USERNAME=elastic
PYLOGGERX_ELASTICSEARCH_PASSWORD=changeme

# Loki
PYLOGGERX_LOKI_URL=http://loki:3100

# Sentry
PYLOGGERX_SENTRY_DSN=https://xxx@sentry.io/xxx
PYLOGGERX_SENTRY_ENVIRONMENT=production
PYLOGGERX_SENTRY_RELEASE=1.0.0

# Datadog
PYLOGGERX_DATADOG_API_KEY=your_api_key
PYLOGGERX_DATADOG_SITE=datadoghq.com
PYLOGGERX_DATADOG_SERVICE=myapp

# Slack
PYLOGGERX_SLACK_WEBHOOK=https://hooks.slack.com/services/xxx
PYLOGGERX_SLACK_CHANNEL=#alerts
PYLOGGERX_SLACK_USERNAME=PyLoggerX Bot

# Webhook
PYLOGGERX_WEBHOOK_URL=https://example.com/logs
PYLOGGERX_WEBHOOK_METHOD=POST

# Performance
PYLOGGERX_PERFORMANCE_TRACKING=true
PYLOGGERX_INCLUDE_CALLER=false

# Export
PYLOGGERX_BATCH_SIZE=100
PYLOGGERX_BATCH_TIMEOUT=5
PYLOGGERX_ASYNC_EXPORT=true
```
---

## Operational Best Practices

### Configuration

1. **Use one config file per environment**
   ```python
   config = load_config(
       config_file=f"config.{os.getenv('ENV', 'dev')}.json",
       from_env=True
   )
   ```

2. **Never commit secrets**
   - Use environment variables
   - Use tools such as Vault or AWS Secrets Manager

3. **Validate the configuration at startup**
   ```python
   try:
       config = load_config(config_file="config.json")
   except ValueError as e:
       print(f"Invalid config: {e}")
       sys.exit(1)
   ```

4. **Document your configurations**
   - Provide example configuration files
   - Document every parameter

### Monitoring

1. **Always monitor in production**
   ```python
   monitor = HealthMonitor(logger, check_interval=60)
   monitor.start()
   ```

2. **Configure meaningful alerts**
   - Not too many (alert fatigue)
   - Not too few (undetected problems)

3. **Export to a centralized system**
   - Prometheus + Grafana
   - Datadog
   - CloudWatch

4. **Test your alerts regularly**
   ```python
   # Monthly test
   logger.critical("TEST: Critical alert", test=True)
   ```

### Performance

1. **Enable rate limiting in production**
   ```python
   config['enable_rate_limit'] = True
   config['rate_limit_messages'] = 100
   ```

2. **Use asynchronous export**
   ```python
   config['async_export'] = True
   ```

3. **Tune batch sizes**
   ```python
   config['batch_size'] = 50    # Smaller for lower latency
   config['batch_timeout'] = 2  # Short timeout
   ```

4. **Monitor performance metrics**
   ```python
   stats = logger.get_performance_stats()
   if stats['avg_duration'] > 1.0:
       logger.warning("Degraded performance")
   ```
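Put together, a production logger following these recommendations might be configured as in the sketch below; the thresholds are illustrative, and every parameter shown is documented in the API reference further down:

```python
from pyloggerx import PyLoggerX

logger = PyLoggerX(
    name="myapp",
    level="INFO",
    console=True,
    colors=False,              # container-friendly output

    # Keep log volume under control
    enable_rate_limit=True,
    rate_limit_messages=100,
    rate_limit_period=60,
    enable_sampling=True,
    sampling_rate=0.5,

    # Non-blocking remote export in small batches
    async_export=True,
    batch_size=50,
    batch_timeout=2,
)
```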
---

## Real-World Examples

### 1. Web Application (FastAPI)

```python
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import JSONResponse
from pyloggerx import PyLoggerX
import os
import time
import uuid

app = FastAPI()

# Logger configuration
logger = PyLoggerX(
    name="fastapi_app",
    console=True,
    json_file="logs/web.json",

    # Remote export
    elasticsearch_url="http://elasticsearch:9200",
    sentry_dsn=os.getenv("SENTRY_DSN"),

    enrichment_data={
        "service": "web-api",
        "version": "2.0.0",
        "environment": os.getenv("ENVIRONMENT", "production")
    }
)

@app.middleware("http")
async def log_requests(request: Request, call_next):
    """Logging middleware for all requests"""
    request_id = str(uuid.uuid4())
    start_time = time.time()

    # Add the request_id to the logging context
    with logger.context(request_id=request_id):
        logger.info("Request received",
                    method=request.method,
                    path=request.url.path,
                    client_ip=request.client.host,
                    user_agent=request.headers.get("user-agent"))

        try:
            response = await call_next(request)
            duration = time.time() - start_time

            logger.info("Request completed",
                        method=request.method,
                        path=request.url.path,
                        status_code=response.status_code,
                        duration_ms=duration * 1000)

            # Return the request_id in a response header
            response.headers["X-Request-ID"] = request_id
            return response

        except Exception as e:
            duration = time.time() - start_time
            logger.exception("Request failed",
                             method=request.method,
                             path=request.url.path,
                             duration_ms=duration * 1000,
                             error_type=type(e).__name__)
            raise

@app.exception_handler(HTTPException)
async def http_exception_handler(request: Request, exc: HTTPException):
    """HTTP exception handler"""
    logger.warning("HTTP exception",
                   status_code=exc.status_code,
                   detail=exc.detail,
                   path=request.url.path)
    return JSONResponse(
        status_code=exc.status_code,
        content={"error": exc.detail}
    )

@app.get("/")
def root():
    logger.info("Root endpoint accessed")
    return {"status": "ok", "service": "web-api"}

@app.get("/health")
def health_check():
    """Health check with metrics"""
    import psutil

    cpu = psutil.cpu_percent()
    memory = psutil.virtual_memory().percent

    status = "healthy"
    if cpu > 80 or memory > 80:
        status = "degraded"
        logger.warning("Service degraded",
                       cpu_percent=cpu,
                       memory_percent=memory)

    logger.info("Health check",
                status=status,
                cpu_percent=cpu,
                memory_percent=memory)

    return {
        "status": status,
        "metrics": {
            "cpu_percent": cpu,
            "memory_percent": memory
        }
    }

@app.on_event("startup")
async def startup_event():
    logger.info("Application started",
                workers=os.getenv("WEB_CONCURRENCY", 1))

@app.on_event("shutdown")
async def shutdown_event():
    logger.info("Application stopped")
    logger.flush()  # Flush all buffers
    logger.close()

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8080)
```
### 2. Data Processing Pipeline

```python
from pyloggerx import PyLoggerX
import pandas as pd
import os
import sys

logger = PyLoggerX(
    name="data_pipeline",
    console=True,
    json_file="logs/pipeline.json",
    performance_tracking=True,

    # Alerts on failures
    slack_webhook=os.getenv("SLACK_WEBHOOK"),

    enrichment_data={
        "pipeline": "data-processing",
        "version": "1.0.0"
    }
)

class DataPipeline:
    def __init__(self, input_file):
        self.input_file = input_file
        self.df = None

    def run(self):
        """Run the full pipeline"""
        logger.info("Pipeline started", input_file=self.input_file)

        try:
            self.load_data()
            self.validate_data()
            self.clean_data()
            self.transform_data()
            self.export_data()

            # Final statistics
            stats = logger.get_performance_stats()
            logger.info("Pipeline completed successfully",
                        total_duration=stats["total_duration"],
                        operations=stats["total_operations"])

        except Exception as e:
            logger.exception("Pipeline failed", error=str(e))
            sys.exit(1)

    def load_data(self):
        """Load the data"""
        with logger.timer("Data loading"):
            try:
                self.df = pd.read_csv(self.input_file)
                logger.info("Data loaded",
                            rows=len(self.df),
                            columns=len(self.df.columns),
                            memory_mb=self.df.memory_usage(deep=True).sum() / 1024**2)
            except Exception as e:
                logger.error("Loading failed",
                             file=self.input_file,
                             error=str(e))
                raise

    def validate_data(self):
        """Validate the data"""
        with logger.timer("Data validation"):
            required_columns = ['id', 'timestamp', 'value']
            missing_columns = [col for col in required_columns
                               if col not in self.df.columns]

            if missing_columns:
                logger.error("Missing columns",
                             missing=missing_columns,
                             found=list(self.df.columns))
                raise ValueError(f"Missing columns: {missing_columns}")

            logger.info("Validation succeeded")

    def clean_data(self):
        """Clean the data"""
        with logger.timer("Data cleaning"):
            initial_rows = len(self.df)

            # Drop duplicates
            duplicates = self.df.duplicated().sum()
            self.df = self.df.drop_duplicates()

            # Drop null values
            null_counts = self.df.isnull().sum()
            self.df = self.df.dropna()

            removed_rows = initial_rows - len(self.df)

            logger.info("Data cleaned",
                        initial_rows=initial_rows,
                        removed_rows=removed_rows,
                        duplicates_removed=duplicates,
                        remaining_rows=len(self.df),
                        null_values=null_counts.to_dict())

            if removed_rows > initial_rows * 0.5:
                logger.warning("More than 50% of rows removed",
                               percent_removed=removed_rows / initial_rows * 100)

    def transform_data(self):
        """Transform the data"""
        with logger.timer("Data transformation"):
            # Type conversions
            self.df['timestamp'] = pd.to_datetime(self.df['timestamp'])
            self.df['value'] = pd.to_numeric(self.df['value'], errors='coerce')

            # Computed columns
            self.df['year'] = self.df['timestamp'].dt.year
            self.df['month'] = self.df['timestamp'].dt.month

            logger.info("Transformation completed",
                        new_columns=['year', 'month'])

    def export_data(self):
        """Export the data"""
        output_file = "output/processed_data.csv"

        with logger.timer("Data export"):
            self.df.to_csv(output_file, index=False)

        file_size_mb = os.path.getsize(output_file) / 1024**2

        logger.info("Data exported",
                    output_file=output_file,
                    rows=len(self.df),
                    file_size_mb=file_size_mb)

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python pipeline.py <input_file>")
        sys.exit(1)

    pipeline = DataPipeline(sys.argv[1])
    pipeline.run()
```
### 3. Microservice with Full Monitoring

```python
from pyloggerx import PyLoggerX
from fastapi import FastAPI
import psutil
import os

app = FastAPI()

# Main logger
logger = PyLoggerX(
    name="microservice",
    console=True,
    json_file="logs/service.json",

    # Full observability stack
    elasticsearch_url=os.getenv("ES_URL"),
    elasticsearch_index="microservice-logs",
    loki_url=os.getenv("LOKI_URL"),
    loki_labels={"service": "payment-processor", "env": "prod"},
    sentry_dsn=os.getenv("SENTRY_DSN"),
    datadog_api_key=os.getenv("DD_API_KEY"),
    slack_webhook=os.getenv("SLACK_WEBHOOK"),

    # Advanced configuration
    batch_size=50,
    enable_sampling=True,
    sampling_rate=0.5,  # 50% in production

    enrichment_data={
        "service": "payment-processor",
        "version": os.getenv("APP_VERSION", "1.0.0"),
        "instance": os.getenv("HOSTNAME")
    }
)

@app.get("/health")
def health_check():
    """Detailed health check"""
    cpu = psutil.cpu_percent(interval=1)
    memory = psutil.virtual_memory().percent
    disk = psutil.disk_usage('/').percent

    # Check dependencies
    dependencies = {
        "database": check_database(),
        "redis": check_redis(),
        "external_api": check_external_api()
    }

    all_healthy = all(dependencies.values())
    status = "healthy" if all_healthy and cpu < 80 and memory < 80 else "degraded"

    log_level = "info" if status == "healthy" else "warning"
    getattr(logger, log_level)("Health check",
                               status=status,
                               cpu_percent=cpu,
                               memory_percent=memory,
                               disk_percent=disk,
                               dependencies=dependencies)

    return {
        "status": status,
        "metrics": {
            "cpu_percent": cpu,
            "memory_percent": memory,
            "disk_percent": disk
        },
        "dependencies": dependencies
    }

@app.get("/metrics")
def get_metrics():
    """Logging and performance metrics"""
    log_stats = logger.get_stats()
    perf_stats = logger.get_performance_stats()

    return {
        "logging": log_stats,
        "performance": perf_stats,
        "system": {
            "cpu_percent": psutil.cpu_percent(),
            "memory_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage('/').percent
        }
    }

def check_database():
    try:
        # Check the database connection
        # db.execute("SELECT 1")
        return True
    except Exception:
        logger.error("Database health check failed")
        return False

def check_redis():
    try:
        # Check Redis
        # redis_client.ping()
        return True
    except Exception:
        logger.error("Redis health check failed")
        return False

def check_external_api():
    try:
        # Check the external API
        # requests.get("https://api.example.com/health", timeout=2)
        return True
    except Exception:
        logger.error("External API health check failed")
        return False
```
### 4. Asynchronous Worker with Error Handling

```python
from pyloggerx import PyLoggerX
import asyncio
import os
import random
from typing import List, Dict

logger = PyLoggerX(
    name="async_worker",
    json_file="logs/worker.json",
    performance_tracking=True,

    # Alerts
    slack_webhook=os.getenv("SLACK_WEBHOOK"),

    enrichment_data={
        "worker_type": "async-processor",
        "version": "1.0.0"
    }
)

class AsyncWorker:
    def __init__(self, worker_id: str):
        self.worker_id = worker_id
        self.is_running = False
        self.processed_count = 0
        self.error_count = 0

    async def start(self):
        """Start the worker"""
        self.is_running = True
        logger.info("Worker started", worker_id=self.worker_id)

        while self.is_running:
            try:
                await self.process_batch()
                await asyncio.sleep(5)
            except Exception:
                logger.exception("Worker error", worker_id=self.worker_id)
                await asyncio.sleep(10)

    async def process_batch(self):
        """Process a batch of jobs"""
        with logger.timer(f"Batch-{self.worker_id}"):
            jobs = await self.fetch_jobs()

            if not jobs:
                logger.debug("No jobs to process", worker_id=self.worker_id)
                return

            logger.info("Batch fetched",
                        worker_id=self.worker_id,
                        job_count=len(jobs))

            # Process in parallel
            tasks = [self.process_job(job) for job in jobs]
            results = await asyncio.gather(*tasks, return_exceptions=True)

            # Count successes and failures
            successes = sum(1 for r in results if not isinstance(r, Exception))
            failures = len(results) - successes

            self.processed_count += successes
            self.error_count += failures

            logger.info("Batch processed",
                        worker_id=self.worker_id,
                        successes=successes,
                        failures=failures,
                        total_processed=self.processed_count,
                        total_errors=self.error_count)

    async def fetch_jobs(self) -> List[Dict]:
        """Fetch jobs from the queue"""
        # Simulate fetching jobs
        await asyncio.sleep(0.1)
        return [{"id": f"job_{i}", "data": random.random()}
                for i in range(random.randint(0, 10))]

    async def process_job(self, job: Dict):
        """Process a single job"""
        job_id = job["id"]

        try:
            logger.debug("Processing job",
                         worker_id=self.worker_id,
                         job_id=job_id)

            # Simulate processing
            await asyncio.sleep(random.uniform(0.1, 0.5))

            # Simulate random failures (10%)
            if random.random() < 0.1:
                raise Exception("Job processing failed")

            logger.info("Job completed",
                        worker_id=self.worker_id,
                        job_id=job_id,
                        status="success")

        except Exception as e:
            logger.error("Job failed",
                         worker_id=self.worker_id,
                         job_id=job_id,
                         error=str(e),
                         status="failed")
            raise

    def stop(self):
        """Stop the worker"""
        self.is_running = False
        logger.info("Worker stopped",
                    worker_id=self.worker_id,
                    total_processed=self.processed_count,
                    total_errors=self.error_count)

async def main():
    # Start several workers
    workers = [AsyncWorker(f"worker-{i}") for i in range(3)]

    tasks = [worker.start() for worker in workers]

    try:
        await asyncio.gather(*tasks)
    except KeyboardInterrupt:
        logger.info("Shutdown requested")
        for worker in workers:
            worker.stop()

if __name__ == "__main__":
    asyncio.run(main())
```
---

## Logging Best Practices

### 1. Structure Logs for Parsing

Always use key-value pairs for structured data:

```python
# GOOD - structured
logger.info("User login",
            user_id=123,
            username="john",
            ip="192.168.1.1",
            auth_method="oauth2")

# BAD - unstructured
logger.info(f"User john (ID: 123) logged in from 192.168.1.1 using OAuth2")
```

### 2. Use the Appropriate Log Levels

```python
# DEBUG - detailed diagnostic information
logger.debug("Cache hit", key="user:123", ttl=3600)

# INFO - general informational messages
logger.info("Service started", port=8080, workers=4)

# WARNING - something unexpected but not critical
logger.warning("High memory usage",
               percent=85,
               threshold=80)

# ERROR - an error occurred but the service keeps running
logger.error("Query failed",
             query="SELECT...",
             error=str(e),
             retry_count=3)

# CRITICAL - the service cannot continue
logger.critical("Database connection lost",
                retries_exhausted=True,
                last_error=str(e))
```

### 3. Include Context in Logs

```python
# User context
logger.add_enrichment(
    user_id=user.id,
    session_id=session.id,
    request_id=request_id,
    ip_address=request.remote_addr
)

# All subsequent logs include this context
logger.info("API call", endpoint="/api/users")
```

### 4. Track Performance

```python
# Use timers for critical operations
with logger.timer("Database query"):
    result = expensive_query()

# Log durations for later analysis
start = time.time()
process_data()
duration = time.time() - start

if duration > 1.0:  # Performance threshold
    logger.warning("Slow operation",
                   operation="process_data",
                   duration_seconds=duration)
```

### 5. Handle Exceptions

```python
try:
    risky_operation()
except SpecificException as e:
    logger.exception("Operation failed",
                     operation="data_sync",
                     error_type=type(e).__name__,
                     recoverable=True)
    # Automatically includes the stack trace

except Exception as e:
    logger.critical("Unexpected error",
                    operation="data_sync",
                    error=str(e))
    # Immediate alert via Slack/Sentry
```

### 6. Container-Friendly Logging

```python
# For containerized applications
logger = PyLoggerX(
    name="container-app",
    console=True,    # To stdout/stderr
    colors=False,    # IMPORTANT for log collectors
    json_file=None,  # No local files in containers
    format_string='{"time":"%(asctime)s","level":"%(levelname)s","msg":"%(message)s"}'
)
```

### 7. Correlation IDs for Distributed Systems

```python
import uuid

def handle_request(request):
    # Propagate or create a correlation ID
    correlation_id = request.headers.get(
        'X-Correlation-ID',
        str(uuid.uuid4())
    )

    with logger.context(correlation_id=correlation_id):
        logger.info("Request received",
                    method=request.method,
                    path=request.path)

        # Pass it on to downstream services
        response = downstream_service.call(
            data,
            headers={'X-Correlation-ID': correlation_id}
        )

        logger.info("Request completed",
                    status=response.status_code)

        return response
```

### 8. Health Checks and Monitoring

```python
@app.get("/health")
def health():
    checks = {
        "database": check_db(),
        "cache": check_redis(),
        "queue": check_queue()
    }

    all_healthy = all(checks.values())

    if not all_healthy:
        failed = [k for k, v in checks.items() if not v]
        logger.error("Health check failed",
                     failed_components=failed)

    return {
        "status": "healthy" if all_healthy else "unhealthy",
        "checks": checks
    }
```

### 9. Protect Sensitive Data

```python
import hashlib

def login(username, password):
    # BAD - never log sensitive data
    # logger.info("Login attempt",
    #             username=username,
    #             password=password)

    # GOOD - hash or mask it
    logger.info("Login attempt",
                username=username,
                password_hash=hashlib.sha256(password.encode()).hexdigest()[:8])
```

### 10. Log Rotation for Long-Running Services

```python
# Avoid filling up the disk
logger = PyLoggerX(
    name="long-running-service",
    json_file="logs/service.json",
    max_bytes=10 * 1024 * 1024,  # 10MB
    backup_count=5,              # Keep 5 files
    rotation_when="midnight"     # plus daily rotation
)
```
---

## Tests

### Unit Tests

```python
import pytest
import json
from pathlib import Path
from pyloggerx import PyLoggerX

def test_json_logging(tmp_path):
    """Test JSON output"""
    log_file = tmp_path / "test.json"

    logger = PyLoggerX(
        name="test_logger",
        json_file=str(log_file),
        console=False
    )

    logger.info("Test message",
                test_id=123,
                status="success")

    assert log_file.exists()

    with open(log_file) as f:
        log_entry = json.loads(f.readline())
        assert log_entry["message"] == "Test message"
        assert log_entry["test_id"] == 123
        assert log_entry["status"] == "success"

def test_performance_tracking(tmp_path):
    """Test performance tracking"""
    logger = PyLoggerX(
        name="perf_test",
        performance_tracking=True,
        console=False
    )

    import time
    with logger.timer("Test operation"):
        time.sleep(0.1)

    stats = logger.get_performance_stats()
    assert stats["total_operations"] == 1
    assert stats["avg_duration"] >= 0.1

def test_enrichment(tmp_path):
    """Test context enrichment"""
    log_file = tmp_path / "test.json"

    logger = PyLoggerX(
        name="enrichment_test",
        json_file=str(log_file),
        console=False,
        enrichment_data={
            "app_version": "1.0.0",
            "environment": "test"
        }
    )

    logger.info("Test with enrichment")

    with open(log_file) as f:
        log_entry = json.loads(f.readline())
        assert log_entry["app_version"] == "1.0.0"
        assert log_entry["environment"] == "test"

def test_log_levels():
    """Test the different log levels"""
    logger = PyLoggerX(name="level_test", console=False)

    # Should not raise
    logger.debug("Debug message")
    logger.info("Info message")
    logger.warning("Warning message")
    logger.error("Error message")
    logger.critical("Critical message")

@pytest.fixture
def logger(tmp_path):
    """Logger fixture"""
    return PyLoggerX(
        name="test",
        json_file=str(tmp_path / "test.json"),
        console=False
    )

def test_remote_logging_mock(logger, monkeypatch):
    """Test remote export (mocked)"""
    import requests

    # Mock the HTTP request
    class MockResponse:
        status_code = 200

    def mock_post(*args, **kwargs):
        return MockResponse()

    monkeypatch.setattr(requests, "post", mock_post)

    # Logger with a webhook
    logger_remote = PyLoggerX(
        name="remote_test",
        webhook_url="http://example.com/logs",
        console=False
    )

    logger_remote.info("Test remote")
```

### Integration Tests

```python
import pytest
import requests
from pyloggerx import PyLoggerX

@pytest.fixture(scope="module")
def app_with_logging():
    """Start the application with logging"""
    logger = PyLoggerX(
        name="integration_test",
        json_file="logs/integration.json"
    )

    logger.info("Integration tests started")

    # Start your app here and yield the application instance
    yield app

    logger.info("Integration tests finished")
    logger.close()

def test_api_endpoint(app_with_logging):
    """Test an API endpoint with logging"""
    response = requests.get("http://localhost:8080/api/health")
    assert response.status_code == 200
    assert response.json()["status"] == "healthy"

def test_error_handling(app_with_logging):
    """Test error handling"""
    response = requests.get("http://localhost:8080/api/invalid")
    assert response.status_code == 404
```

---

## Troubleshooting

### Problem: Logs do not appear

**Solution:**
```python
# Check the log level
logger.set_level("DEBUG")

# Check the handlers
print(logger.logger.handlers)

# Force a flush
logger.flush()

# Make sure the directory exists
import os
os.makedirs("logs", exist_ok=True)
```

### Problem: File permission errors

**Solution:**
```python
# Use an absolute path
import os
log_path = os.path.join(os.getcwd(), "logs", "app.json")

# Make sure the directory is writable
os.makedirs(os.path.dirname(log_path), exist_ok=True, mode=0o755)

# Check permissions
if not os.access(os.path.dirname(log_path), os.W_OK):
    raise PermissionError(f"No write permission: {log_path}")
```

### Problem: Colors do not work in containers

**Solution:**
```python
# Disable colors for containers
logger = PyLoggerX(
    name="container-app",
    colors=False  # Important for log collectors
)
```

### Problem: Log files are too large

**Solution:**
```python
# Use rotation
logger = PyLoggerX(
    json_file="logs/app.json",
    max_bytes=5 * 1024 * 1024,  # 5MB
    backup_count=3,             # Keep 3 backups
    rotation_when="midnight"    # plus daily rotation
)
```

### Problem: Performance overhead

**Solution:**
```python
# Raise the log level in production
logger.set_level("WARNING")

# Disable performance tracking
logger = PyLoggerX(performance_tracking=False)

# Enable sampling
logger = PyLoggerX(
    enable_sampling=True,
    sampling_rate=0.1  # Keep 10% of logs
)

# Use asynchronous export
logger = PyLoggerX(
    async_export=True,
    queue_size=1000
)
```

### Problem: Remote logs are not sent

**Solution:**
```python
# Enable debug logging
import logging
logging.basicConfig(level=logging.DEBUG)

# Check connectivity
import requests
try:
    response = requests.get("http://elasticsearch:9200")
    print(f"ES Status: {response.status_code}")
except Exception as e:
    print(f"Connection error: {e}")

# Force a flush before shutting down
logger.flush()
logger.close()

# Check the batch configuration
logger = PyLoggerX(
    elasticsearch_url="http://elasticsearch:9200",
    batch_size=10,   # Smaller batch for testing
    batch_timeout=1  # Short timeout
)
```

### Problem: High memory usage with remote logging

**Solution:**
```python
# Tune the batch parameters
logger = PyLoggerX(
    elasticsearch_url="http://elasticsearch:9200",
    batch_size=50,     # Smaller batches
    batch_timeout=2,   # Shorter timeout
    queue_size=500,    # Smaller queue
    async_export=True  # Asynchronous export
)

# Enable sampling to reduce volume
logger = PyLoggerX(
    enable_sampling=True,
    sampling_rate=0.5  # 50% of logs
)
```

---
## API Reference

### PyLoggerX Class

#### Constructor

```python
PyLoggerX(
    name: str = "PyLoggerX",
    level: str = "INFO",
    console: bool = True,
    colors: bool = True,
    json_file: Optional[str] = None,
    text_file: Optional[str] = None,
    max_bytes: int = 10 * 1024 * 1024,
    backup_count: int = 5,
    rotation_when: str = "midnight",
    rotation_interval: int = 1,
    format_string: Optional[str] = None,
    include_caller: bool = False,
    performance_tracking: bool = False,
    enrichment_data: Optional[Dict[str, Any]] = None,
    # Elasticsearch
    elasticsearch_url: Optional[str] = None,
    elasticsearch_index: str = "pyloggerx",
    elasticsearch_username: Optional[str] = None,
    elasticsearch_password: Optional[str] = None,
    # Loki
    loki_url: Optional[str] = None,
    loki_labels: Optional[Dict[str, str]] = None,
    # Sentry
    sentry_dsn: Optional[str] = None,
    sentry_environment: str = "production",
    # Datadog
    datadog_api_key: Optional[str] = None,
    datadog_site: str = "datadoghq.com",
    # Slack
    slack_webhook: Optional[str] = None,
    slack_channel: Optional[str] = None,
    # Webhook
    webhook_url: Optional[str] = None,
    webhook_method: str = "POST",
    # Advanced
    enable_sampling: bool = False,
    sampling_rate: float = 1.0,
    batch_size: int = 100,
    batch_timeout: int = 5,
    enable_rate_limit: bool = True,
    rate_limit_messages: int = 2,
    rate_limit_period: int = 10,
    async_export: bool = True
)
```

#### Logging Methods

```python
debug(message: str, **kwargs) -> None
info(message: str, **kwargs) -> None
warning(message: str, **kwargs) -> None
error(message: str, **kwargs) -> None
critical(message: str, **kwargs) -> None
exception(message: str, **kwargs) -> None  # Includes the traceback
```

#### Configuration Methods

```python
set_level(level: str) -> None
add_context(**kwargs) -> None
add_enrichment(**kwargs) -> None
add_filter(filter_obj: logging.Filter) -> None
remove_filter(filter_obj: logging.Filter) -> None
```

#### Performance Methods

```python
timer(operation_name: str) -> ContextManager
get_performance_stats() -> Dict[str, Any]
clear_performance_stats() -> None
```

#### Utility Methods

```python
get_stats() -> Dict[str, Any]
flush() -> None  # Flush all buffers
close() -> None  # Close all handlers
context(**kwargs) -> ContextManager  # Temporary context
```

### Global Logger

```python
from pyloggerx import log

# Use the default global logger
log.info("Quick logging without configuration")
log.error("Error", error_code=500)
```
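For convenience, the sketch below exercises the timer, temporary-context and flush helpers listed above in one place; the operation names and field values are purely illustrative:

```python
import time

from pyloggerx import PyLoggerX

logger = PyLoggerX(name="api-demo", performance_tracking=True)

# Temporary context: the field applies only inside the with-block
with logger.context(request_id="req-42"):
    # Timer: records the duration of the wrapped operation
    with logger.timer("Slow operation"):
        time.sleep(0.2)
    logger.info("Operation finished")

print(logger.get_performance_stats())  # aggregated timer statistics
print(logger.get_stats())              # general logger statistics

logger.flush()  # flush buffers before shutdown
logger.close()
```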
---

## Contributing

Contributions are welcome! Follow these steps:

### Development Setup

```bash
# Clone the repository
git clone https://github.com/yourusername/pyloggerx.git
cd pyloggerx

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"

# Install the pre-commit hooks
pre-commit install
```

### Running the Tests

```bash
# All tests
pytest

# With coverage
pytest --cov=pyloggerx --cov-report=html

# Specific tests
pytest tests/test_core.py -v

# Tests with output
pytest -v -s
```

### Code Style

```bash
# Format the code
black pyloggerx/
isort pyloggerx/

# Check the style
flake8 pyloggerx/
pylint pyloggerx/

# Type checking
mypy pyloggerx/
```

### Submitting Changes

1. Fork the repository
2. Create a branch: `git checkout -b feature/amazing-feature`
3. Make your changes
4. Add tests for new features
5. Make sure the tests pass: `pytest`
6. Commit: `git commit -m 'Add amazing feature'`
7. Push: `git push origin feature/amazing-feature`
8. Open a Pull Request

### Contribution Guidelines

- Follow PEP 8 for code style
- Add docstrings to all public functions
- Write tests for all new features
- Update the documentation when needed
- Make sure all tests pass before submitting

---

## Roadmap

### Version 3.1.0 (Planned - Q2 2025)
- Native asynchronous logging support
- Additional formatters (Logfmt, GELF)
- AWS CloudWatch Logs support
- Google Cloud Logging support
- Built-in metrics (histograms, counters)

### Version 3.5.0 (Planned - Q3 2025)
- Built-in monitoring dashboard
- Apache Kafka support for log streaming
- Automatic compression of archived logs
- Encryption support for sensitive logs

### Version 4.0.0 (Future)
- Built-in distributed tracing (full OpenTelemetry)
- Machine-learning-based anomaly detection
- Advanced alerting with custom rules
- Multi-tenant support

---

## FAQ

**Q: Is PyLoggerX production-ready?**
A: Yes. PyLoggerX follows Python logging best practices and is used in production by several companies.

**Q: Does PyLoggerX work with existing logging code?**
A: Yes. PyLoggerX wraps Python's standard logging module and is compatible with existing handlers.

**Q: How do I rotate logs by time rather than by size?**
A: Use the `rotation_when` parameter with values such as "midnight", "H" (hourly), or "D" (daily).

**Q: Can I log to several files at the same time?**
A: Yes, specify both `json_file` and `text_file`.

**Q: Is PyLoggerX thread-safe?**
A: Yes. PyLoggerX relies on Python's logging module, which is thread-safe.

**Q: How do I integrate with existing log aggregation tools?**
A: Use the JSON format, which is compatible with most tools (ELK, Splunk, Datadog, etc.), or the direct exporters.

**Q: What is the performance overhead?**
A: The impact is minimal. Use sampling and asynchronous export for very high-volume applications.

**Q: Are logs sent synchronously or asynchronously?**
A: By default, remote exports are asynchronous (non-blocking). You can disable this with `async_export=False`.

**Q: How should I handle sensitive logs?**
A: Never log sensitive data directly. Use hashing or masking.

**Q: Can I use PyLoggerX in AWS Lambda functions?**
A: Yes, but disable local files and use console output or remote export only.

---

## License

This project is released under the MIT license:

```
MIT License

Copyright (c) 2025 PyLoggerX Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

---

## Support & Community

- **Documentation**: [https://pyloggerx.readthedocs.io](https://pyloggerx.readthedocs.io)
- **GitHub Issues**: [https://github.com/yourusername/pyloggerx/issues](https://github.com/yourusername/pyloggerx/issues)
- **Discussions**: [https://github.com/yourusername/pyloggerx/discussions](https://github.com/yourusername/pyloggerx/discussions)
- **PyPI**: [https://pypi.org/project/pyloggerx/](https://pypi.org/project/pyloggerx/)
- **Stack Overflow**: tag `pyloggerx`
- **Discord**: [https://discord.gg/pyloggerx](https://discord.gg/pyloggerx)

---

## Acknowledgments

- Built on Python's standard `logging` module
- Inspired by modern libraries such as structlog and loguru
- Thanks to all contributors and users
- Special thanks to the DevOps community for the feedback

---

## Changelog

### v1.0.0 (2025-09-15)

**Major Features**
- Remote logging support (Elasticsearch, Loki, Sentry, Datadog, Slack)
- Log sampling for high-volume applications
- Rate limiting
- Advanced filtering (level, pattern, custom)
- Batch processing for remote exports
- Custom webhook support
- Non-blocking asynchronous export
- Improved context enrichment

**Improvements**
- Optimized performance for remote exports
- Better handling of export errors
- Extended documentation with DevOps examples
- Improved support for Kubernetes and containers

**New Features**
- Performance tracking with timers
- Custom formatters
- Log rotation (size- and time-based)
- Global context enrichment
- Better exception handling

**Fixes**
- Fixed memory leaks in some scenarios
- Improved reconnection handling
- Fixed file rotation issues

---

**Made with care for the Python and DevOps community**
"bugtrack_url": null,
"license": "MIT",
"summary": "Modern, colorful and simple logging for Python",
"version": "1.0.0",
"project_urls": {
"Bug Reports": "https://github.com/Moesthetics-code/pyloggerx/issues",
"Documentation": "https://pyloggerx.readthedocs.io",
"Homepage": "https://github.com/Moesthetics-code/pyloggerx",
"Repository": "https://github.com/Moesthetics-code/pyloggerx"
},
"split_keywords": [
"logging",
" colorful",
" json",
" rotation",
" modern"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "43bc0e02c8c9f9d416ef91714442fc33d7fb966853fc34ddbef38c2f7265370b",
"md5": "d83a9a8af0f03dec7c4f2d54e733977b",
"sha256": "44286ff308890fcd23f6a67f21e0aa93d29f26fe7d0e4d79ca5d3f7a05f78e68"
},
"downloads": -1,
"filename": "pyloggerx-1.0.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d83a9a8af0f03dec7c4f2d54e733977b",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 61050,
"upload_time": "2025-10-06T18:10:34",
"upload_time_iso_8601": "2025-10-06T18:10:34.071912Z",
"url": "https://files.pythonhosted.org/packages/43/bc/0e02c8c9f9d416ef91714442fc33d7fb966853fc34ddbef38c2f7265370b/pyloggerx-1.0.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "0262de502f7ae5a8252bdb03e25abcae73e0f62a5de6591af925b5beaf56cf97",
"md5": "4dea46b1cd8c8fa40434029140986adb",
"sha256": "a5dd66b13c27d37986e29a45e0b63503d16c5308148c6bb26729c79a3f7400ba"
},
"downloads": -1,
"filename": "pyloggerx-1.0.0.tar.gz",
"has_sig": false,
"md5_digest": "4dea46b1cd8c8fa40434029140986adb",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 143145,
"upload_time": "2025-10-06T18:10:36",
"upload_time_iso_8601": "2025-10-06T18:10:36.263401Z",
"url": "https://files.pythonhosted.org/packages/02/62/de502f7ae5a8252bdb03e25abcae73e0f62a5de6591af925b5beaf56cf97/pyloggerx-1.0.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-06 18:10:36",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "Moesthetics-code",
"github_project": "pyloggerx",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [
{
"name": "pytest",
"specs": [
[
">=",
"6.0"
]
]
},
{
"name": "pytest-cov",
"specs": [
[
">=",
"2.0"
]
]
},
{
"name": "black",
"specs": [
[
">=",
"21.0"
]
]
},
{
"name": "flake8",
"specs": [
[
">=",
"3.9"
]
]
},
{
"name": "mypy",
"specs": [
[
">=",
"0.900"
]
]
},
{
"name": "build",
"specs": [
[
">=",
"0.7.0"
]
]
},
{
"name": "twine",
"specs": [
[
">=",
"3.4.0"
]
]
},
{
"name": "sphinx",
"specs": [
[
">=",
"4.0"
]
]
},
{
"name": "sphinx-rtd-theme",
"specs": [
[
">=",
"1.0"
]
]
},
{
"name": "sentry_sdk",
"specs": []
},
{
"name": "PYyaml",
"specs": []
}
],
"lcname": "pyloggerx"
}