plumelog-loguru

Name: plumelog-loguru
Version: 0.1.1
Summary: Plumelog integration library that provides Redis log transport for Loguru
Homepage: https://github.com/CodingOX/plumelog-loguru
Author: Alistar Max <codingox@gmail.com>
Upload time: 2025-08-15 14:48:47
Requires Python: >=3.10
License: Apache-2.0
Keywords: async, logging, loguru, plumelog, redis
# Plumelog-Loguru

A modern Python library that integrates Loguru with the Plumelog system, providing asynchronous log transport over Redis.

## ✨ Features

- 🚀 **Asynchronous processing**: high-performance async log transport built on `asyncio`; business code is never blocked.
- 📦 **Smart batching**: dual-condition triggering sends a batch as soon as either the count or the time threshold is reached, cutting latency by 50-98%.
- 🔒 **Type safety**: complete Python 3.10+ type hints for modern IDE completion and static checking.
- 🔄 **Smart retries**: a built-in retry limit with exponential backoff prevents unbounded retries while minimizing log loss.
- 🏊 **Connection pooling**: Redis connections are managed efficiently, improving performance and stability under high concurrency.
- ⚙️ **Flexible configuration**: a Pydantic-based settings model with environment-variable support makes deployment across environments easy.
- 🧵 **Thread safety**: designed for multi-threaded environments and safe to run inside complex applications.

## 📦 Installation

With `uv` (recommended):

```bash
uv add plumelog-loguru
```

With `pip`:

```bash
pip install plumelog-loguru
```

## 🚀 Quick Start

### Basic Usage

```python
from loguru import logger
from plumelog_loguru import create_redis_sink

# Add a Redis sink with the default configuration.
# The type: ignore comment keeps static type checkers happy.
logger.add(create_redis_sink())  # type: ignore[arg-type]

# Start logging; these records are sent to Redis asynchronously.
logger.info("Hello, Plumelog!")
logger.error("This is an error log that carries stack information.")
```

### Custom Configuration

```python
from loguru import logger
from plumelog_loguru import create_redis_sink, PlumelogSettings

# Create a custom settings instance
config = PlumelogSettings(
    app_name="my_awesome_app",
    env="production",
    redis_host="redis.example.com",
    redis_port=6379,
    redis_password="your_secret_password",
    batch_size=200,                # larger batches
    batch_interval_seconds=1.0,    # shorter flush interval
)

# Create the sink from the custom settings
redis_sink = create_redis_sink(config)
logger.add(redis_sink)  # type: ignore[arg-type]

logger.info("日志已配置为生产环境。")
```

### Usage in an Async Context

In an `asyncio` application, the async context manager ensures that all buffered logs are fully sent before the program exits.

```python
import asyncio
from loguru import logger
from plumelog_loguru import RedisSink, PlumelogSettings

async def main():
    config = PlumelogSettings(app_name="async_app")

    # Use async with so the sink shuts down gracefully on exit
    async with RedisSink(config) as sink:
        logger.add(sink)  # type: ignore[arg-type]
        logger.info("This log record was written in an async context.")
        # Meanwhile, records are being sent in the background
        await asyncio.sleep(0.5)
        logger.warning("The application is about to exit.")

    # When the context closes, RedisSink flushes and sends all remaining logs
    print("All logs have been flushed.")

asyncio.run(main())
```

## ⚙️ Configuration Options

Every option can also be set through an environment variable with the `PLUMELOG_` prefix (see the example after the table below).

| Option | Environment Variable | Default | Description |
|--------|----------|--------|------|
| `app_name` | `PLUMELOG_APP_NAME` | `"default"` | Application name, used to group logs |
| `env` | `PLUMELOG_ENV` | `"dev"` | Runtime environment (e.g. dev, test, prod) |
| `redis_host` | `PLUMELOG_REDIS_HOST` | `"localhost"` | Redis host address |
| `redis_port` | `PLUMELOG_REDIS_PORT` | `6379` | Redis port |
| `redis_db` | `PLUMELOG_REDIS_DB` | `0` | Redis database number |
| `redis_password` | `PLUMELOG_REDIS_PASSWORD` | `None` | Redis password |
| `redis_key` | `PLUMELOG_REDIS_KEY` | `"plume_log_list"` | Redis list key where logs are stored |
| `batch_size` | `PLUMELOG_BATCH_SIZE` | `100` | Record count that triggers a batch send |
| `batch_interval_seconds` | `PLUMELOG_BATCH_INTERVAL_SECONDS` | `2.0` | Time interval (seconds) that triggers a batch send |
| `queue_max_size` | `PLUMELOG_QUEUE_MAX_SIZE` | `10000` | Maximum capacity of the in-memory staging queue |
| `retry_count` | `PLUMELOG_RETRY_COUNT` | `3` | Maximum retries when a Redis operation fails |
| `max_connections` | `PLUMELOG_MAX_CONNECTIONS` | `10` | Maximum number of connections in the Redis pool |
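
For example, a deployment can be configured entirely from the environment. The snippet below is a minimal sketch that assumes the Pydantic settings model reads `PLUMELOG_`-prefixed variables at instantiation, as the table describes:

```python
import os

# Simulate variables that would normally come from the deployment environment
os.environ["PLUMELOG_APP_NAME"] = "my_awesome_app"
os.environ["PLUMELOG_REDIS_HOST"] = "redis.example.com"
os.environ["PLUMELOG_BATCH_SIZE"] = "200"

from plumelog_loguru import PlumelogSettings

# Any field not present in the environment falls back to the defaults above
config = PlumelogSettings()
```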

## 🚀 Batch Processing Performance

### Dual-Condition Trigger

The current version uses a **dual-condition trigger** batching strategy that significantly improves responsiveness (a minimal sketch of the loop follows this list):

- **Count trigger**: as soon as `batch_size` records have accumulated, they are sent immediately
- **Time trigger**: once more than `batch_interval_seconds` have passed since the last send, whatever has accumulated is sent
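
The sketch below illustrates the idea; it is not the library's internal code, and `send` is a hypothetical async callback:

```python
import asyncio
from typing import Any, Awaitable, Callable

async def consumer_loop(
    queue: asyncio.Queue,
    batch_size: int,
    interval: float,
    send: Callable[[list[Any]], Awaitable[None]],
) -> None:
    """Flush a batch when EITHER the count OR the time threshold is hit."""
    loop = asyncio.get_running_loop()
    while True:
        batch: list[Any] = []
        deadline = loop.time() + interval
        while len(batch) < batch_size:
            remaining = deadline - loop.time()
            if remaining <= 0:
                break  # time trigger fired
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout=remaining))
            except asyncio.TimeoutError:
                break  # time trigger fired while waiting
        if batch:  # the count trigger falls through here as well
            await send(batch)
```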

### Latency Improvements

| Scenario | Latency Reduction | Suggested Configuration |
|-----|---------|----------|
| **High-frequency logging** (>1000 records/s) | **98% ⬇️** | `batch_size=100, interval=2s` |
| **Medium-frequency logging** (100-1000 records/s) | **90% ⬇️** | `batch_size=50, interval=1s` |
| **Low-frequency logging** (<100 records/s) | **50% ⬇️** | `batch_size=10, interval=0.5s` |
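
As a rough intuition rather than a benchmark: at 1,000 records/s with `batch_size=100`, the count trigger fires about every 100 ms, so a record waits roughly 50 ms on average instead of up to the full 2 s that a purely timer-based flush would allow.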

### Suggested Configurations by Scenario

```python
# 🚨 Real-time monitoring (latency-sensitive)
config = PlumelogSettings(
    batch_size=10,              # small batches, fast response
    batch_interval_seconds=0.5  # at most 500 ms of delay
)

# 📊 Data analytics (throughput-first)
config = PlumelogSettings(
    batch_size=1000,            # large batches, efficient transport
    batch_interval_seconds=10   # longer waits are acceptable
)

# ⚖️ General business workloads (balanced)
config = PlumelogSettings(
    batch_size=100,             # medium batches
    batch_interval_seconds=2    # medium delay
)
```


---

## 🏗️ Architecture and Async Internals

### Core Design Principles

The library follows the design principles of modern high-performance applications and aims to be a reliable logging solution that stays out of the way of business code.

-   **Producer-consumer model**: application code (the producer) and the log-sending task (the consumer) are fully decoupled through an async queue.
-   **Non-blocking I/O**: all network operations (Redis communication) are built on `asyncio` and never block the main business thread.
-   **Batching**: multiple log records are aggregated into a single network request, significantly reducing I/O pressure and Redis OPS.
-   **Graceful degradation**: built-in retry and fault-tolerance mechanisms keep the system stable and minimize data loss when external dependencies (such as Redis) become unstable.

### System Architecture

The diagram below shows the complete data flow from log creation to final storage, along with the core components.

```mermaid
sequenceDiagram
    participant UA as User Application
    participant RS as RedisSink
    participant AQ as AsyncQueue
    participant BP as Consumer Task
    participant RC as AsyncRedisClient
    participant RD as Redis

    UA->>RS: 1 logger.info(...) synchronous (non-blocking)
    RS->>AQ: 2 enqueue (async)
    note right of AQ: in-memory queue decouples producer/consumer
    BP->>AQ: 3 pull a batch of records
    BP->>RC: 4 batch send
    RC->>RD: 5 LPUSH plume_log_list
    RD-->>RC: OK
    RC-->>BP: result

```

**Walking through the flow**:
1.  **Synchronous call**: your business code calls `logger.info()`. This is a **nearly zero-cost** operation, because `RedisSink` only has to place the record into an in-memory queue.
2.  **Async enqueue**: the record is quickly placed into an `asyncio.Queue`. The operation is non-blocking; the main thread returns immediately and continues with business logic.
3.  **Background processing**: a dedicated background task (the consumer task) continuously monitors the queue.
4.  **Batch send**: once the queued records reach `batch_size`, or `batch_interval_seconds` have elapsed, the background task packs them into a batch.
5.  **Persistent storage**: the batch is written to Redis in one go with an `LPUSH` command, over a pooled connection held by `AsyncRedisClient`. A network error triggers the **smart retry** mechanism.
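
Step 5 relies on `LPUSH` accepting multiple values in a single command. As an illustration (not the library's internal client), a batched send using `redis.asyncio` from redis-py might look like:

```python
import json

import redis.asyncio as redis

async def send_batch(client: redis.Redis, key: str, records: list[dict]) -> None:
    # One LPUSH with many values: the whole batch costs a single round trip
    payloads = [json.dumps(record, ensure_ascii=False) for record in records]
    await client.lpush(key, *payloads)
```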

### The Async Pipeline in Detail

The diagram below breaks down the interaction between the synchronous and asynchronous regions.

```mermaid
sequenceDiagram
    autonumber
    participant User as User Code
    participant Sink as RedisSink
    participant Ext as FieldExtractor
    participant Queue as AsyncQueue
    participant Task as ConsumerTask
    participant Client as RedisClient
    participant Redis as Redis

    User->>Sink: logger.info(msg)
    Sink->>Ext: extract runtime fields
    Ext-->>Sink: field data
    Sink->>Queue: put_nowait(LogRecord)
    Sink-->>User: return (takes <1ms)

    rect rgb(245,245,245)
    note over Task,Client: background async loop
    Task->>Queue: fetch a batch (count/timeout/pressure)
    Task->>Task: JSON serialization
    Task->>Client: send(batch)
    Client->>Redis: batched LPUSH
    Redis-->>Client: OK
    Client-->>Task: success
    end

```
**Key advantages**:
-   **Main-thread protection**: regardless of Redis network conditions or log volume, `logger.info()` consistently completes in microseconds and **never slows down business responses**.
-   **Resource efficiency**: batching greatly reduces network round trips and system calls, lowering CPU and network usage, while the connection pool avoids the cost of repeatedly creating and tearing down connections.
-   **Data reliability**: the in-memory queue buffers records through network latency or brief Redis outages, and the exponential-backoff retry mechanism gradually increases the wait between attempts during network jitter, substantially raising the delivery success rate.

### Core Class Relationships

```mermaid
classDiagram
    class PlumelogSettings {
        +str app_name
        +str env
        +str redis_host
        +int redis_port
        +int redis_db
        +str|None redis_password
        +str redis_key
        +int batch_size
        +float batch_interval_seconds
        +int queue_max_size
        +int retry_count
        +int max_connections
    }

    class LogRecord {
        +str app_name
        +str env
        +str server_name
        +str method
        +str content
        +str level
        +str class_name
        +str thread_name
        +int seq_id
        +str date_time
        +int unix_time
        +to_dict()
    }

    class FieldExtractor {
        +get_server_ip() str
        +get_hostname() str
        +get_thread_name() str
        +get_class_method() (str,str)
        +next_seq_id() int
        -_seq_counter
    }

    class AsyncRedisClient {
        +connect()
        +send_batch(records: list[LogRecord]) bool
        +close()
        +health_check() bool
        -_retry_with_backoff(fn)
        -_pool
    }

    class AsyncQueue {
        +put_nowait(item)
        +get() LogRecord
        +qsize() int
        +full() bool
        +empty() bool
    }

    class RedisSink {
        +__call__(record)
        +close()
        +flush()
        -_ensure_async_init()
        -_consumer_loop()
        -_queue: AsyncQueue
        -_client: AsyncRedisClient
        -_extractor: FieldExtractor
        -_buffer: list[LogRecord]
    }

    RedisSink --> PlumelogSettings : uses
    RedisSink *-- AsyncQueue
    RedisSink *-- AsyncRedisClient
    RedisSink *-- FieldExtractor
    RedisSink --> LogRecord : creates
    AsyncRedisClient --> PlumelogSettings
    FieldExtractor --> LogRecord

```

### Fault Tolerance and System Impact

#### Resource Consumption
-   **Memory**: dominated by the async queue (`queue_max_size`). The default of `10000` records occupies roughly 10-20 MB, depending on record size.
-   **CPU**: very low. Most of the time is spent waiting on I/O; the only computation is log formatting and serialization.
-   **Network**: the pool keeps a small number (`max_connections`) of long-lived connections, and batching saves a great deal of bandwidth.
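
(The memory estimate corresponds to roughly 1-2 KB per buffered record: 10,000 × 1.5 KB ≈ 15 MB.)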

#### Failure Handling
-   **Redis connection lost** (a sketch of the backoff strategy follows this list):
    1.  `AsyncRedisClient` catches the connection error.
    2.  The **exponential-backoff retry** mechanism kicks in (for example, waiting 1s, 2s, 4s... between attempts).
    3.  Meanwhile, new records keep buffering in the in-memory queue.
    4.  If Redis recovers during the retries, the backlog is sent.
    5.  If all retries fail, that batch is dropped and an error is written to standard output, preventing unbounded retries from exhausting resources.
-   **In-memory queue full**:
    -   If records are produced faster than they are consumed for a sustained period (for example, Redis is down for a long time), the queue can fill up.
    -   `put_nowait()` raises `asyncio.QueueFull`; `RedisSink` catches it, silently drops the current record, and prints a warning. This is a **backpressure** mechanism that protects application memory and prevents a logging problem from crashing the main business.
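
A minimal sketch of both behaviors, assuming a hypothetical `send` coroutine; this illustrates the strategy, not the library's exact code:

```python
import asyncio
import random

async def retry_with_backoff(send, retries: int = 3, base_delay: float = 1.0):
    """Retry an async send with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(retries):
        try:
            return await send()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # give up; the caller drops the batch and logs an error
            # a little jitter avoids synchronized retry storms
            await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

def enqueue(queue: asyncio.Queue, record) -> None:
    """Backpressure: drop the record rather than block the caller."""
    try:
        queue.put_nowait(record)
    except asyncio.QueueFull:
        print("plumelog-loguru: queue full, dropping a log record")
```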

---

## 🔧 Development and Contributing

### Environment Setup

```bash
# Clone the project
git clone <repository-url>
cd plumelog-loguru

# Install dev dependencies with uv (recommended) or pip
uv sync --dev

# Run the tests
uv run pytest

# Format the code
uv run format

# Type checking
uv run lint
```

### Project Layout

```text
src/plumelog_loguru/
├── __init__.py          # public API exports
├── config.py            # Pydantic settings model
├── models.py            # log data models
├── extractor.py         # runtime-context field extractor
├── redis_client.py      # async Redis client (connection pool and retries)
└── redis_sink.py        # Loguru sink implementation (queue and consumer task)
```

## 📝 License

This project is licensed under the [Apache License 2.0](LICENSE).

## 🤝 Contributing

Contributions via Issues and Pull Requests are welcome!
