drain3-ankcorn

Name: drain3-ankcorn
Version: 0.9.14
Summary: Persistent & streaming log template miner
Author: IBM Research Haifa
Maintainer: Yihao Chen (Superskyyy)
License: MIT
Requires Python: >=3.7,<4.0
Keywords: drain, log, parser, IBM, template, logs, miner
Upload time: 2024-02-26 17:51:56
# Drain3

## Important Update

Drain3 has moved to the `logpai` GitHub organization (which is also the home of the original Drain implementation). We always welcome more contributors and maintainers to join us and push the project forward, and we welcome contributions and implementation variants if you find practical enhancements to the algorithm in production scenarios.

## Introduction

Drain3 is an online log template miner that can extract templates (clusters) from a stream of log messages in a timely
manner. It employs a parse tree with fixed depth to guide the log group search process, which effectively avoids
constructing a very deep and unbalanced tree.

Drain3 continuously learns on-the-fly and extracts log templates from raw log entries.

#### Example:

For the input:

```
connected to 10.0.0.1
connected to 192.168.0.1
Hex number 0xDEADBEAF
user davidoh logged in
user eranr logged in
```

Drain3 extracts the following templates:

```
ID=1     : size=2         : connected to <:IP:>
ID=2     : size=1         : Hex number <:HEX:>
ID=3     : size=2         : user <:*:> logged in
```

Full sample program output:

```
Starting Drain3 template miner
Checking for saved state
Saved state not found
Drain3 started with 'FILE' persistence
Starting training mode. Reading from std-in ('q' to finish)
> connected to 10.0.0.1
Saving state of 1 clusters with 1 messages, 528 bytes, reason: cluster_created (1)
{"change_type": "cluster_created", "cluster_id": 1, "cluster_size": 1, "template_mined": "connected to <:IP:>", "cluster_count": 1}
Parameters: [ExtractedParameter(value='10.0.0.1', mask_name='IP')]
> connected to 192.168.0.1
{"change_type": "none", "cluster_id": 1, "cluster_size": 2, "template_mined": "connected to <:IP:>", "cluster_count": 1}
Parameters: [ExtractedParameter(value='192.168.0.1', mask_name='IP')]
> Hex number 0xDEADBEAF
Saving state of 2 clusters with 3 messages, 584 bytes, reason: cluster_created (2)
{"change_type": "cluster_created", "cluster_id": 2, "cluster_size": 1, "template_mined": "Hex number <:HEX:>", "cluster_count": 2}
Parameters: [ExtractedParameter(value='0xDEADBEAF', mask_name='HEX')]
> user davidoh logged in
Saving state of 3 clusters with 4 messages, 648 bytes, reason: cluster_created (3)
{"change_type": "cluster_created", "cluster_id": 3, "cluster_size": 1, "template_mined": "user davidoh logged in", "cluster_count": 3}
Parameters: []
> user eranr logged in
Saving state of 3 clusters with 5 messages, 644 bytes, reason: cluster_template_changed (3)
{"change_type": "cluster_template_changed", "cluster_id": 3, "cluster_size": 2, "template_mined": "user <:*:> logged in", "cluster_count": 3}
Parameters: [ExtractedParameter(value='eranr', mask_name='*')]
> q
Training done. Mined clusters:
ID=1     : size=2         : connected to <:IP:>
ID=2     : size=1         : Hex number <:HEX:>
ID=3     : size=2         : user <:*:> logged in
```

This project is an upgrade of the original [Drain](https://github.com/logpai/logparser/blob/master/logparser/Drain)
project by LogPAI from Python 2.7 to Python 3.6 or later with additional features and bug-fixes.

Read more information about Drain from the following paper:

- Pinjia He, Jieming Zhu, Zibin Zheng, and Michael R.
  Lyu. [Drain: An Online Log Parsing Approach with Fixed Depth Tree](http://jiemingzhu.github.io/pub/pjhe_icws2017.pdf),
  Proceedings of the 24th International Conference on Web Services (ICWS), 2017.

A Drain3 use case is presented in this blog
post: [Use open source Drain3 log-template mining project to monitor for network outages](https://developer.ibm.com/blogs/how-mining-log-templates-can-help-ai-ops-in-cloud-scale-data-centers).

#### New features

- [**Persistence**](#persistence). Save and load Drain state into an [Apache Kafka](https://kafka.apache.org)
  topic, [Redis](https://redis.io/) or a file.
- **Streaming**. Support for feeding Drain with messages one-by-one.
- [**Masking**](#masking). Replace some message parts (e.g. numbers, IPs, emails) with wildcards. This improves the
  accuracy of template mining.
- [**Packaging**](#installation). As a pip package.
- [**Configuration**](#configuration). Support for configuring Drain3 using an `.ini` file or a configuration object. 
- [**Memory efficiency**](#memory-efficiency). Decrease the memory footprint of internal data structures and introduce
  a cache to control the max memory consumed (thanks to @StanislawSwierc).
- [**Inference mode**](#training-vs-inference-modes). In case you want to separate the training and inference phases, Drain3
  provides a function for *fast* matching against already-learned clusters (templates) only, without the use of
  regular expressions.
- [**Parameter extraction**](#parameter-extraction). Accurate extraction of the variable parts from a log message as an
  ordered list, based on its mined template and the defined masking instructions (thanks to @Impelon).

#### Expected Input and Output

Although Drain3 can ingest full raw log messages, template mining accuracy improves if you feed it
only the unstructured free-text portion of each log message, by first removing structured parts like timestamp, hostname,
severity, etc.

The output is a dictionary with the following fields:

- `change_type` - indicates whether a new template was identified (`cluster_created`), an existing template was changed
  (`cluster_template_changed`), or the message was added to an existing cluster without change (`none`).
- `cluster_id` - sequential ID of the cluster that the log belongs to.
- `cluster_size` - the size (message count) of the cluster that the log belongs to.
- `cluster_count` - the number of clusters seen so far.
- `template_mined` - the current template of the above cluster.
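For example, a minimal round trip through the miner looks like the sketch below (the printed values are illustrative;
the exact template depends on your masking configuration):

```python
from drain3 import TemplateMiner

template_miner = TemplateMiner()  # no persistence handler: state is kept in memory only

result = template_miner.add_log_message("connected to 10.0.0.1")
print(result["change_type"])     # e.g. "cluster_created" for a first-seen template
print(result["cluster_id"])      # sequential cluster ID, starting at 1
print(result["template_mined"])  # the cluster's current template
```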

## Configuration

Drain3 is configured using [configparser](https://docs.python.org/3.4/library/configparser.html). By default, the config
filename is `drain3.ini` in the working directory. Drain3 can also be configured by passing
a [TemplateMinerConfig](drain3/template_miner_config.py) object to the [TemplateMiner](drain3/template_miner.py)
constructor, as shown in the sketch after the parameter list below.

Primary configuration parameters:

- `[DRAIN]/sim_th` - similarity threshold: if the percentage of similar tokens for a log message is below this number, a new
  log cluster will be created (default 0.4).
- `[DRAIN]/depth` - max depth levels of log clusters; minimum is 3 (default 4).
- `[DRAIN]/max_children` - max number of children of an internal node (default 100).
- `[DRAIN]/max_clusters` - max number of tracked clusters (unlimited by default). When this number is reached, the model
  starts replacing old clusters with new ones according to the LRU cache eviction policy.
- `[DRAIN]/extra_delimiters` - delimiters to apply when splitting a log message into words, in addition to whitespace
  (default none). Format is a Python list, e.g. `['_', ':']`.
- `[MASKING]/masking` - masking instructions, in JSON format (default "").
- `[MASKING]/mask_prefix` & `[MASKING]/mask_suffix` - the wrapping of identified parameters in templates. By default, these
  are `<` and `>` respectively.
- `[SNAPSHOT]/snapshot_interval_minutes` - time interval between periodic snapshots (default 1).
- `[SNAPSHOT]/compress_state` - whether to compress the state before saving it. This can be useful when using Kafka
  persistence.
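A minimal sketch of the programmatic route (the `drain_`-prefixed attribute names are assumptions about this package's
`TemplateMinerConfig`; verify them against [template_miner_config.py](drain3/template_miner_config.py)):

```python
from drain3 import TemplateMiner
from drain3.template_miner_config import TemplateMinerConfig

config = TemplateMinerConfig()
config.load("drain3.ini")         # optional: read settings from an .ini file
config.drain_sim_th = 0.4         # similarity threshold (assumed attribute name)
config.drain_max_clusters = 1024  # cap tracked clusters, evicting by LRU (assumed name)

template_miner = TemplateMiner(config=config)
```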

## Masking

This feature allows masking of specific variable parts in a log message with keywords, prior to passing it to Drain. A
well-defined masking can improve template mining accuracy.

Template parameters that do not match any custom mask in the preliminary masking phase are replaced with `<*>` by Drain
core.

To set custom masking, use a list of regular expressions in the configuration file, where each entry has a
`regex_pattern` and a `mask_with` keyword.

For example, the following masking instructions in `drain3.ini` will mask IP addresses and integers:

```
[MASKING]
masking = [
          {"regex_pattern":"((?<=[^A-Za-z0-9])|^)(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})((?=[^A-Za-z0-9])|$)", "mask_with": "IP"},
          {"regex_pattern":"((?<=[^A-Za-z0-9])|^)([\\-\\+]?\\d+)((?=[^A-Za-z0-9])|$)", "mask_with": "NUM"}
          ]
```
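Masking instructions can also be added programmatically; a minimal sketch assuming the `MaskingInstruction` class in
`drain3.masking` and a `masking_instructions` list on the config (check those modules for the exact signatures):

```python
from drain3 import TemplateMiner
from drain3.masking import MaskingInstruction
from drain3.template_miner_config import TemplateMinerConfig

config = TemplateMinerConfig()
# Same IP pattern as the .ini example above, masked as <IP> in templates.
config.masking_instructions.append(MaskingInstruction(
    r"((?<=[^A-Za-z0-9])|^)(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})((?=[^A-Za-z0-9])|$)",
    "IP"))
template_miner = TemplateMiner(config=config)
```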

## Persistence

The persistence feature saves and loads a snapshot of the Drain3 state in a (compressed) JSON format. This feature adds
restart resiliency to Drain, allowing it to continue operating and maintain learned knowledge across restarts.

Drain3 state includes the search tree and all the clusters that were identified up until snapshot time.

The snapshot also persists the number of log messages matched by each cluster, and its `cluster_id`.

An example of a snapshot:

```json
{
  "clusters": [
    {
      "cluster_id": 1,
      "log_template_tokens": [
        "aa",
        "aa",
        "<*>"
      ],
      "py/object": "drain3_core.LogCluster",
      "size": 2
    },
    {
      "cluster_id": 2,
      "log_template_tokens": [
        "My",
        "IP",
        "is",
        "<IP>"
      ],
      "py/object": "drain3_core.LogCluster",
      "size": 1
    }
  ]
}
```

This example snapshot persists two clusters with the templates:

`["aa", "aa", "<*>"]` - occurs twice

`["My", "IP", "is", "<IP>"]` - occurs once

Snapshots are created in the following events:

- `cluster_created` - on any new template
- `cluster_template_changed` - on any update of a template
- `periodic` - n minutes after the last snapshot. This is intended to save cluster sizes even if no new template
  was identified.

Drain3 currently supports the following persistence modes:

- **Kafka** - The snapshot is saved in a dedicated topic used only for snapshots; the last message in this topic is the
  last snapshot that will be loaded after restart. For Kafka persistence you need to provide `topic_name`. You may
  also provide other `kwargs` that are supported by `kafka.KafkaConsumer` and `kafka.Producer`, e.g. `bootstrap_servers`
  to change the Kafka endpoint (default is `localhost:9092`).

- **Redis** - The snapshot is saved to a key in a Redis database (contributed by @matabares).

- **File** - The snapshot is saved to a file.

- **Memory** - The snapshot is saved to an in-memory object.

- **None** - No persistence.

Drain3 persistence can easily be extended to another medium or database by inheriting
the [PersistenceHandler](drain3/persistence_handler.py) class, as sketched below.
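For instance, file persistence (used by the stdin demo) can be wired in as follows; `FilePersistence` is part of this
package, while the custom handler below is a hypothetical illustration of the `PersistenceHandler` interface:

```python
from drain3 import TemplateMiner
from drain3.file_persistence import FilePersistence
from drain3.persistence_handler import PersistenceHandler

# Built-in file persistence: snapshots are written to the given path.
template_miner = TemplateMiner(FilePersistence("drain3_state.bin"))

# A custom medium only needs to implement the two PersistenceHandler methods.
class DictPersistence(PersistenceHandler):  # hypothetical illustration
    def __init__(self):
        self.state = None

    def save_state(self, state):
        self.state = state   # state is an opaque (possibly compressed) JSON payload

    def load_state(self):
        return self.state    # return None when no snapshot exists yet
```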

## Training vs. Inference modes

In some use cases, it is required to separate the training and inference phases.

In the training phase, call `template_miner.add_log_message(log_line)`. This matches the log line against an
existing cluster (if similarity is above the threshold) or creates a new cluster. It may also change the template of an
existing cluster.

In inference mode, call `template_miner.match(log_line)`. This matches the log line against previously learned
clusters only. No new clusters are created and templates of existing clusters are not changed. The match to an existing
cluster has to be perfect, otherwise `None` is returned. You can use the persistence option to load previously trained
clusters before inference.
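A minimal inference sketch (`cluster_id` and `get_template()` are the `LogCluster` members used in the bundled demos):

```python
cluster = template_miner.match("connected to 10.0.0.1")
if cluster is None:
    print("no template matches this message")
else:
    print(f"matched cluster {cluster.cluster_id}: {cluster.get_template()}")
```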

## Memory efficiency

This feature limits the max memory used by the model. It is particularly important for large and possibly unbounded log
streams. It is controlled by the `max_clusters` parameter, which sets the max number of clusters/templates
tracked by the model. When the limit is reached, new templates start to replace the old ones according to the Least
Recently Used (LRU) eviction policy. This makes the model adapt quickly to the most recent templates in the log stream.
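In `drain3.ini` terms, this is a single key under the `[DRAIN]` section (the value here is an arbitrary illustration):

```
[DRAIN]
max_clusters = 1024
```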

## Parameter Extraction

Drain3 supports retrieving an ordered list of variables in a log message, after its template was mined. Each parameter
is accompanied by the name of the mask that was matched, or `*` for the catch-all mask.

Parameter extraction is performed by generating a regular expression that matches the template and then applying it to
the log message. When `exact_matching` is enabled (the default), the generated regex includes the regular expressions
defined in the relevant masking instructions. If there are multiple masking instructions with the same name, either match
can satisfy the regex. It is possible to disable exact matching so that every variable is matched against a
non-whitespace character sequence. This may improve performance at the expense of accuracy.

Parameter extraction regexes generated per template are cached by default, to improve performance. You can control the
cache size with the `[MASKING]/parameter_extraction_cache_capacity` configuration parameter.

Sample usage:

```python
result = template_miner.add_log_message(log_line)
params = template_miner.extract_parameters(
    result["template_mined"], log_line, exact_matching=True)
```

For the input `"user johndoe logged in 11 minutes ago"`, the template would be:

```
"user <:*:> logged in <:NUM:> minuts ago"
```

... and the extracted parameters:

```
[
  ExtractedParameter(value='johndoe', mask_name='*'), 
  ExtractedParameter(value='11', mask_name='NUM')
]
```

## Installation

Drain3 is available from [PyPI](https://pypi.org/project/drain3). To install, use `pip`:

```
pip3 install drain3
```

Note: if you decide to use Kafka or Redis persistence, you should install the relevant client library explicitly, since
it is declared as an extra (optional) dependency, by either:

```
pip3 install kafka-python
```

-- or --

```
pip3 install redis
```

## Examples

In order to run the examples directly from the repository, you need to install dependencies. You can do that using
*pipenv* by executing the following command (assuming pipenv is already installed):

```shell
python3 -m pipenv sync
```

#### Example 1 - `drain_stdin_demo`

Run [examples/drain_stdin_demo.py](examples/drain_stdin_demo.py) from the root folder of the repository by:

```
python3 -m pipenv run python -m examples.drain_stdin_demo
```

This example uses Drain3 on input from stdin and persists state to Kafka, to a file, or not at all.

Change the `persistence_type` variable in the example to change the persistence mode.

Enter several log lines using the command line. Press `q` to end online learn-and-match mode.

Next, the demo switches to match (inference) only mode, in which no new clusters are trained and input is matched against
previously trained clusters only. Press `q` again to finish execution.

#### Example 2 - `drain_bigfile_demo`

Run [examples/drain_bigfile_demo](examples/drain_bigfile_demo.py) from the root folder of the repository by:

```
python3 -m pipenv run python -m examples.drain_bigfile_demo
```

This example downloads a real-world log file (of an SSH server) and processes all lines, then prints the resulting
clusters, the prefix tree, and performance statistics.

#### Sample config file

An example `drain3.ini` file with masking instructions can be found in the [examples](examples) folder as well.

## Contributing

Our project welcomes external contributions. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for further details.

## Change Log

##### v0.9.11

* Fixed possible DivideByZero error when the profiler is enabled - [Issue #65](https://github.com/IBM/Drain3/issues/65). 

##### v0.9.10

* Fixed compatibility issue with Python 3.10 caused by removal of `KeysView`.

##### v0.9.9

* Added support for accurate log message parameter extraction in a new function - `extract_parameters()`. The
  function `get_parameter_list()` is deprecated (Thanks to *@Impelon*).
* Refactored `AbstractMaskingInstruction` as a base class for `RegexMaskingInstruction`, allowing to introduce other
  types of masking mechanisms.

##### v0.9.8

* Added a `full_search_strategy` option in `TemplateMiner.match()` and `Drain.match()`. See more info at
  Issue [#48](https://github.com/IBM/Drain3/issues/48).
* Added an option to disable parameterization of tokens that contain digits in
  configuration: `TemplateMinerConfig.parametrize_numeric_tokens`.
* Loading a Drain snapshot now only restores cluster state and not configuration parameters. This improves backward
  compatibility when introducing new Drain configuration parameters.

##### v0.9.7

* Fixed bug in original Drain: log clusters were created multiple times for log messages with fewer tokens
  than `max_node_depth`.
* Changed the `depth` property name to a more descriptive `max_node_depth`, as Drain always subtracts 2 from the `depth`
  argument value. Also added a `log_cluster_depth` property to reflect the original value of the depth argument (breaking change).
* Restricted `depth` param to minimum sensible value of 3.
* Added log cluster count to nodes in `Drain.print_tree()`
* Added optional log cluster details to `Drain.print_tree()`

##### v0.9.6

* Fix issue https://github.com/IBM/Drain3/issues/38: unnecessary update of the LRU cache in case `max_clusters` is used
  (thanks *@StanislawSwierc*).

##### v0.9.5

* Added: `TemplateMiner.match()` function for fast matching against existing clusters only.

##### v0.9.4

* Added: `TemplateMiner.get_parameter_list()` function to extract template parameters for a raw log message (thanks to
  *@cwyalpha*).
* Added an option to customize the mask wrapper - instead of the default `<*>`, `<NUM>` etc., you can select any wrapper
  prefix or suffix by overriding `TemplateMinerConfig.mask_prefix` and `TemplateMinerConfig.mask_suffix`.
* Fixed: config `.ini` file is always read from the same folder as the source file in demos and tests (thanks *@RobinMaas95*).

##### v0.9.3

* Fixed: comparison of type int with type str in function `add_seq_to_prefix_tree` #28 (bug introduced in v0.9.1).

##### v0.9.2

* Updated jsonpickle version
* Keys of the `id_to_cluster` dict are now persisted by jsonpickle as `int` instead of `str`, to avoid key type conversion
  on snapshot load, which caused some issues.
* Added cachetools dependency to `setup.py`.

##### v0.9.1

* Added option to configure `TemplateMiner` using a configuration object (without `.ini` file).
* Support for `print_tree()` to a file/stream.
* Added `MemoryBufferPersistence`
* Added unit tests for state save/load.
* Bug fix: missing type-conversion in state loading, introduced in v0.9.0
* Refactor: Drain prefix tree keys are now of type `str` also for 1st level
  (was `int` before), for type consistency.

##### v0.9.0

* Decrease memory footprint of the main data structures.
* Added `max_clusters` option to limit the number of tracked clusters.
* Changed cluster identifier type from str to int
* Added more unit tests and CI

##### v0.8.6

* Added `extra_delimiters` configuration option to Drain

##### v0.8.5

* Profiler improvements

##### v0.8.4

* Masking speed improvement

##### v0.8.3

* Fix: profiler state after load from snapshot

##### v0.8.2

* Fixed snapshot backward compatibility to v0.7.9

##### v0.8.1

* Bugfix in profiling configuration read

##### v0.8.0

* Added time profiling support (disabled by default)
* Added cluster ID to snapshot reason log (credit: @boernd)
* Minor Readability and documentation improvements in Drain

##### v0.7.9

* Fix: `KafkaPersistence` now also accepts `bootstrap_servers` as a kwarg.

##### v0.7.8

* Using `kafka-python` package instead of `kafka` (newer).
* Added support for specifying additional configuration as `kwargs` in Kafka persistence handler.

##### v0.7.7

* Corrected default Drain config values.

##### v0.7.6

* Improvement in config file handling (Note: new sections were added instead of `DEFAULT` section)

##### v0.7.5

* Made Kafka and Redis optional requirements
 


            
