aiosyslogd

Name: aiosyslogd
Version: 1.0.4
Home page: https://sr.ht/~cwt/aiosyslogd/
Summary: Asynchronous Syslog server using asyncio, with an optional uvloop integration and SQLite backend.
Author: Chaiwat Suttipongsakul
Requires Python: <4.0,>=3.11
License: MIT
Upload time: 2025-08-07 18:21:50

# **aiosyslogd**

[![PyPI Version](https://img.shields.io/pypi/v/aiosyslogd.svg)](https://pypi.org/project/aiosyslogd/)
[![Quay.io Build Status](https://quay.io/repository/cwt/aiosyslogd/status "Quay.io Build Status")](https://quay.io/repository/cwt/aiosyslogd)

**aiosyslogd** is a high-performance, asynchronous Syslog server built with Python's asyncio. It is designed for efficiently receiving, parsing, and storing a large volume of syslog messages.

It features an optional integration with uvloop for a significant performance boost and can write messages to a SQLite database or Meilisearch, automatically creating monthly tables/indexes and maintaining a Full-Text Search (FTS) index for fast queries.

## **Key Features**

* **Asynchronous:** Built on asyncio to handle thousands of concurrent messages with minimal overhead.
* **Fast:** Supports uvloop for a C-based event loop implementation, making it one of the fastest ways to run asyncio.
* **Flexible Database Backends:**
  * **SQLite Backend:** Writes all incoming messages to a SQLite database. For easier maintenance and backup, it creates a separate database file for each month (e.g., syslog_YYYYMM.sqlite3). Each file contains a SystemEvents table and a corresponding SystemEvents_FTS virtual table using FTS5 for powerful full-text search.
  * **Meilisearch Backend:** Optionally stores messages in Meilisearch, a fast and lightweight search engine, with automatic monthly indexes and advanced search capabilities like filtering, sorting, and proximity precision.
* **Automatic Table/Index Management:** Creates new database files (SQLite) or indexes (Meilisearch) for each month to keep the database organized and fast.
* **Full-Text Search:** Automatically maintains an FTS5 virtual table (SystemEvents_FTS) for SQLite, or a fully indexed Meilisearch backend, for powerful and fast message searching (see the query sketch after this list).
* **RFC5424 Conversion:** Includes a utility to convert older *RFC3164*-formatted messages to the modern *RFC5424* format.
* **Flexible Configuration:** Configure the server via a simple `aiosyslogd.toml` file.
* **Web UI:** A simple web interface for monitoring and searching logs.
* **Container Support:** Pre-built Docker/Podman images for easy deployment.
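
For the SQLite backend, the monthly databases can be searched with nothing more than Python's standard library. The sketch below is illustrative and not part of aiosyslogd itself: the `syslog_YYYYMM.sqlite3` file naming and the `SystemEvents_FTS` table come from this README, while the column layout is left unspecified (`SELECT *`) because it is not documented here.

```python
# Minimal sketch: full-text search against one monthly SQLite database.
# Table and file names follow this README; column names are not assumed.
import sqlite3

def search_month(db_path: str, query: str, limit: int = 20) -> list[dict]:
    """Return up to `limit` rows whose indexed text matches an FTS5 query."""
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        cur = conn.execute(
            "SELECT * FROM SystemEvents_FTS "
            "WHERE SystemEvents_FTS MATCH ? ORDER BY rank LIMIT ?",
            (query, limit),
        )
        return [dict(row) for row in cur.fetchall()]

if __name__ == "__main__":
    # FTS5 query syntax supports boolean operators and quoted phrases.
    for row in search_month("syslog_202506.sqlite3", "error AND ssh"):
        print(row)
```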

## **Running with Containers (Docker / Podman)**

The most convenient way to run **aiosyslogd** is by using the pre-built container images available on [Quay.io](https://quay.io/repository/cwt/aiosyslogd).

### **Image Tags**

The container images are automatically built from the GitHub repository:

* Pushes to the `main` branch will build the `quay.io/cwt/aiosyslogd:latest` image.
* New version tags (e.g., `v0.2.5`) will automatically build a corresponding image (`quay.io/cwt/aiosyslogd:v0.2.5`).

### **Quick Start with Containers**

**1. Pull the Image**

You can pull the latest image using Docker or Podman:

```bash
# Using Docker
docker pull quay.io/cwt/aiosyslogd:latest

# Using Podman
podman pull quay.io/cwt/aiosyslogd:latest
```

**2. Run the Server**

To run the server, you must mount a volume to the `/data` directory inside the container. This is critical for persisting your configuration and log data.

```bash
# Run the server using Docker
docker run -d \
  --name aiosyslogd-server \
  -p 5140:5140/udp \
  -v /path/to/your/data:/data \
  quay.io/cwt/aiosyslogd:latest

# Run the web UI using Docker
docker run -d \
  --name aiosyslogd-web \
  -p 5141:5141/tcp \
  -v /path/to/your/data:/data:ro \
  quay.io/cwt/aiosyslogd:latest \
    aiosyslogd-web
```

**Note:** Be sure to replace `/path/to/your/data` with a real path on your host machine (e.g., `~/.aiosyslogd/data`).

**Explanation of the command:**

- `-d`: Runs the container in detached mode (in the background).
- `--name aiosyslogd-server` (or `aiosyslogd-web` for the web UI): Assigns a convenient name to your container.
- `-p 5140:5140/udp`: Maps the syslog server port.
- `-p 5141:5141/tcp`: Maps the web server port.
- `-v /path/to/your/data:/data`: (IMPORTANT) Mounts a host directory as the container's data directory; append `:ro` (as in the web UI example above) to mount it read-only.

On the first run, the server will not find a configuration file in the mounted `/data` volume and will create a default `aiosyslogd.toml` for you there. You can then edit this file on your host machine to reconfigure the server and simply restart the container for the changes to take effect.

## **Installation**

You can install the package directly from its source repository or via pip.

**Standard Installation:**

    $ pip install aiosyslogd

**For Maximum Performance (with uvloop/winloop):**

To include the performance enhancements, install the speed extra:

    $ pip install 'aiosyslogd[speed]'
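
If you want to confirm that the optional accelerator actually got installed, a quick import check is enough. This is a generic snippet, not an aiosyslogd command:

```python
# Sanity check for the optional event-loop accelerator installed by the
# "speed" extra (uvloop on POSIX; winloop is the Windows counterpart).
try:
    import uvloop  # noqa: F401
    print("uvloop is available; the accelerated event loop can be used.")
except ImportError:
    print("uvloop not found; the default asyncio event loop will be used.")
```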

## **Quick Start: Running the Server**

The package installs a command-line script called `aiosyslogd`. You can run it directly from your terminal.

    $ aiosyslogd

On the first run, if an `aiosyslogd.toml` file is not found in the current directory, the server will create one with default settings and then start.

The server will begin listening on `0.0.0.0:5140` and, if enabled in the configuration, create a `syslog.sqlite3` file (SQLite) in the current directory or connect to Meilisearch.
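
To verify the server end to end, you can send a test message from another terminal using Python's standard library. This is a plain client-side snippet (not an aiosyslogd API); the address and port assume the defaults above:

```python
# Send one test syslog message over UDP to the default aiosyslogd port.
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("aiosyslogd-test")
logger.setLevel(logging.INFO)
# A (host, port) tuple makes SysLogHandler use UDP by default.
logger.addHandler(SysLogHandler(address=("127.0.0.1", 5140)))
logger.info("hello from the quick-start test client")
```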

## **Configuration**

The server is configured using a TOML file. By default, it looks for `aiosyslogd.toml` in the current working directory.

#### **Default aiosyslogd.toml**

If a configuration file is not found, this default version will be created:

```toml
[server]
bind_ip = "0.0.0.0"
bind_port = 5140
debug = false
log_dump = false

[database]
driver = "sqlite"
batch_size = 100
batch_timeout = 5
sql_dump = false

[web_server]
bind_ip = "0.0.0.0"
bind_port = 5141
debug = false
redact = false

[database.sqlite]
database = "syslog.sqlite3"

[database.meilisearch]
url = "http://127.0.0.1:7700"
api_key = ""
```

#### **Custom Configuration Path**

You can specify a custom path for the configuration file by setting the `AIOSYSLOGD_CONFIG` environment variable.

    $ export AIOSYSLOGD_CONFIG="/etc/aiosyslogd/config.toml"
    $ aiosyslogd

When a custom path is provided, the server will **not** create a default file if it's missing and will exit with an error instead.

### **Configuration Options**

| Section              | Key           | Description                                                              | Default                 |
| :------------------- | :------------ | :----------------------------------------------------------------------- | :---------------------- |
| server               | bind_ip       | The IP address the server should bind to.                                | "0.0.0.0"               |
| server               | bind_port     | The UDP port to listen on.                                               | 5140                    |
| server               | debug         | Set to true to enable verbose logging for parsing and database errors.   | false                   |
| server               | log_dump      | Set to true to print every received message to the console.              | false                   |
| database             | driver        | The database backend to use ("sqlite" or "meilisearch").                 | "sqlite"                |
| database             | batch_size    | The number of messages to batch together before writing to the database. | 100                     |
| database             | batch_timeout | The maximum time in seconds to wait before writing an incomplete batch.  | 5                       |
| database             | sql_dump      | Set to true to print the SQLite command and parameters before execution. | false                   |
| database.sqlite      | database      | The path to the SQLite database file.                                    | "syslog.sqlite3"        |
| database.meilisearch | url           | The URL of the Meilisearch instance.                                     | "http://127.0.0.1:7700" |
| database.meilisearch | api_key       | The API key for Meilisearch (optional).                                  | ""                      |
| web_server           | bind_ip       | The IP address the web server should bind to.                            | "0.0.0.0"               |
| web_server           | bind_port     | The TCP port the web server should listen on.                            | 5141                    |
| web_server           | debug         | Set to true to enable verbose logging for the web server.                | false                   |
| web_server           | redact        | Set to true to redact sensitive information (user, IP, MAC addresses).   | false                   |

**Note:** When `sql_dump` is enabled, `log_dump` will be disabled.

### **Performance Tuning: Finding the Optimal batch_size**

The `database.batch_size` setting is critical for performance. It controls how many log messages are grouped together before being written to the database.

* A **larger batch_size** can be more efficient for the database but increases the risk of dropping logs under heavy load. If the database write takes too long, the server's incoming network buffer can overflow, causing the operating system to discard new messages.
* A **smaller batch_size** results in quicker but more frequent database writes, reducing the risk of buffer overflow but potentially increasing I/O overhead.

The optimal `batch_size` depends heavily on your hardware (CPU, disk speed like NVMe vs. HDD) and network conditions. A `batch_size` of 100 is set as a safe default, but you can likely increase this for better performance.
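
As a rough back-of-envelope check (an illustration, not a formula from aiosyslogd), you can estimate how many bytes may pile up in the kernel's UDP receive buffer while one batch write is in flight; if that estimate exceeds your socket buffer, messages will be dropped:

```python
# Back-of-envelope sizing: bytes arriving while one batch write is in flight.
# All inputs are assumptions; substitute rates and latencies you measure.
incoming_rate = 20_000       # messages per second hitting the server (assumed)
avg_message_size = 300       # bytes per syslog message (assumed)
batch_write_latency = 0.05   # seconds one database write might take (assumed)

bytes_queued = incoming_rate * avg_message_size * batch_write_latency
print(f"~{bytes_queued / 1024:.0f} KiB may queue during a single write")
# Compare this against the socket receive buffer (e.g. net.core.rmem_max on
# Linux). If the estimate is larger, lower batch_size or raise the buffer.
```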

#### **Using the Log Generation Tool**

A benchmarking script is included at `scripts/loggen.py` to help you find the best setting for your system (a minimal stand-in generator is sketched after the steps below).

1. Set `server.debug = true`, then start `aiosyslogd`:

       $ aiosyslogd

2. Run the benchmark:
   Open another terminal and run `scripts/loggen.py` to send a large number of messages. For example, to send 100,000 logs:

       $ python scripts/loggen.py -n 100000

   The script will output the number of messages sent.

3. Check the Server Logs:
   When the **aiosyslogd** server writes a batch, it logs the number of messages written.

       [2025-06-18 19:30:00] [12345] [DEBUG] Successfully wrote 100 logs to 'syslog_202506.sqlite3'.

   Sum up the number of logs written by the server and compare it to the number sent by `loggen.py`. If they match, your `batch_size` is good.

4. Tune and Repeat:
   If you see dropped logs (server received < 100,000), your `batch_size` is too high. If all logs are received, you can try increasing the `batch_size` in your `aiosyslogd.toml` file and run the test again. This allows you to find the highest value your specific hardware can handle without dropping packets.
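
If `scripts/loggen.py` is not available in your checkout, a minimal stand-in looks roughly like the sketch below. It sends plain RFC3164-style lines over UDP and assumes the default listener address; it is not the bundled benchmark script:

```python
# Minimal stand-in for scripts/loggen.py: send N RFC3164-style messages via UDP.
import socket
import sys

def send_logs(count: int, host: str = "127.0.0.1", port: int = 5140) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        # <134> = facility local0 (16 * 8) + severity informational (6).
        msg = f"<134>Jun 18 19:30:00 benchhost loggen: test message {i}"
        sock.sendto(msg.encode(), (host, port))
    sock.close()
    print(f"Sent {count} messages to {host}:{port}")

if __name__ == "__main__":
    send_logs(int(sys.argv[1]) if len(sys.argv) > 1 else 100_000)
```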

## **Integrating with rsyslog**

You can use **rsyslog** as a robust, battle-tested frontend for **aiosyslogd**. This is useful for receiving logs on the standard privileged port (514) and then forwarding them to **aiosyslogd** running as a non-privileged user on a different port.

Here are two common configurations:

### **1. Forwarding from an Existing rsyslog Instance**

If you already have an **rsyslog** server running and simply want to forward all logs to **aiosyslogd**, add the following lines to a new file in `/etc/rsyslog.d/`, such as `99-forward-to-aiosyslogd.conf`. This configuration includes queueing to prevent log loss if **aiosyslogd** is temporarily unavailable.

**File: /etc/rsyslog.d/99-forward-to-aiosyslogd.conf**

```
# This forwards all logs (*) to the server running on localhost:5140
# with queueing enabled for reliability.
$ActionQueueFileName fwdRule1
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1
*.* @127.0.0.1:5140
```

### **2. Using rsyslog as a Dedicated Forwarder**

If you want **rsyslog** to listen on the standard syslog port (514/udp) and do nothing but forward to **aiosyslogd**, you can use a minimal configuration like this. This is a common pattern for privilege separation, allowing aiosyslogd to run as a non-root user.

**File: /etc/rsyslog.conf (Minimal Example)**

```
# Minimal rsyslog.conf to listen on port 514 and forward to aiosyslogd

# --- Global Settings ---
$WorkDirectory /var/lib/rsyslog
$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022

# --- Modules ---
# Unload modules we don't need
module(load="immark" mode="off")
module(load="imuxsock" mode="off")
# Load the UDP input module
module(load="imudp")
input(
    type="imudp"
    port="514"
)

# --- Forwarding Rule ---
# Forward all received messages to aiosyslogd
$ActionQueueFileName fwdToAiosyslogd
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1
*.* @127.0.0.1:5140
```

## **Using as a Library**

You can also import and use `SyslogUDPServer` in your own asyncio application.

```python
import asyncio
from aiosyslogd.server import SyslogUDPServer

async def main():
    # The server is configured via aiosyslogd.toml by default.
    # To configure programmatically, you would need to modify the
    # server class or bypass the config-loading mechanism.
    server = await SyslogUDPServer.create(host="0.0.0.0", port=5141)

    loop = asyncio.get_running_loop()

    # Define the protocol factory as a simple function
    def server_protocol_factory():
        return server

    # Start the UDP server endpoint
    transport, protocol = await loop.create_datagram_endpoint(
        server_protocol_factory,
        local_addr=(server.host, server.port)
    )

    print("Custom server running. Press Ctrl+C to stop.")
    try:
        await asyncio.Event().wait()
    except (KeyboardInterrupt, asyncio.CancelledError):
        pass
    finally:
        print("Shutting down custom server.")
        transport.close()
        await server.shutdown()

if __name__ == "__main__":
    asyncio.run(main())
```

## **Contributing**

Contributions are welcome! If you find a bug or have a feature request, please open an issue on the project's repository.

## **License**

This project is licensed under the [**MIT License**](https://hg.sr.ht/~cwt/aiosyslogd/browse/LICENSE?rev=tip).

            
