arithmetic-compressor

- **Name**: arithmetic-compressor
- **Version**: 0.2
- **Home page**: https://github.com/kodejuice/arithmetic-compressor
- **Summary**: An implementation of the Arithmetic Coding algorithm in Python, along with advanced models like PPM (Prediction by Partial Matching), Context Mixing and Simple Adaptive models
- **Author**: Biereagu Sochima
- **Requires Python**: >=3.7
- **Keywords**: arithmetic, coding, ppm, encoding, encoder, prediction, context mixing, adaptive models
- **Upload time**: 2023-01-29 19:44:52
# Arithmetic Coding Library

[![Run tests](https://github.com/kodejuice/arithmetic-compressor/actions/workflows/tests.yml/badge.svg)](https://github.com/kodejuice/arithmetic-compressor/actions/workflows/tests.yml)

This library is an implementation of the [Arithmetic Coding](https://en.wikipedia.org/wiki/Arithmetic_coding) algorithm in Python, along with adaptive statistical data compression models like [PPM (Prediction by Partial Matching)](https://en.wikipedia.org/wiki/Prediction_by_partial_matching), [Context Mixing](https://en.wikipedia.org/wiki/Context_mixing) and Simple Adaptive models.

## Installation

To install the library, you can use pip:

```bash
pip install arithmetic_compressor
```

## Usage

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import StaticModel

# create the model
model = StaticModel({'A': 0.5, 'B': 0.25, 'C': 0.25})

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = "AAAAAABBBCCC"
N = len(data)
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1]
```

And here's an example of how to decode the encoded data:

```python
decoded = coder.decompress(compressed, N)

print(decoded) # -> ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']
```
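
Since decoding returns the original symbols as a list, a quick round-trip check looks like this:

```python
# the decoded symbols, joined back together, should reproduce the input string
assert "".join(decoded) == data
```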

## API Reference

### Arithmetic Compression

- [**`AECompressor`**](./arithmetic_compressor/compress.py):
  - **`compress(data: list|str, model: Model) -> List[str]`**: Takes in a string or list representing the data, encodes it using arithmetic coding, and returns the encoded data as a list of bits.

  - **`decompress(encoded_data: List[str], length: int) -> List`**: Takes in the encoded bits and the length of the original data, and returns the decoded list of symbols.

### Models

In addition to the arithmetic coding algorithm, this library also includes adaptive statistical models that can be used to improve compression.

- [**`StaticModel`**](./arithmetic_compressor/models/static_model.py): A class which implements a static model that doesn't adapt to input data or statistics.
- [**`BaseBinaryModel`**](./arithmetic_compressor/models/base_adaptive_model.py): A class which implements a simple adaptive compression algorithm for binary symbols (0 and 1)
- [**`BaseFrequencyTable`**](./arithmetic_compressor/models/base_adaptive_model.py): This implements a basic adaptive frequency table that incrementally adapts to input data.
- [**`SimpleAdaptiveModel`**](./arithmetic_compressor/models/base_adaptive_model.py): A class that implements a simple adaptive compression algorithm.
- [**`PPMModel`**](./arithmetic_compressor/models/ppm.py): A class that implements the PPM compression algorithm.
- [**`MultiPPM`**](./arithmetic_compressor/models/ppm.py): A class which uses weighted averaging to combine several PPM Models of different orders to make predictions.
- [**`BinaryPPM`**](./arithmetic_compressor/models/binary_ppm.py): A class that implements the PPM compression algorithm for binary symbols (0 and 1).
- [**`MultiBinaryPPM`**](./arithmetic_compressor/models/binary_ppm.py): A class which uses weighted averaging to combine several BinaryPPM models of different orders to make predictions.
- [**`ContextMixing_Linear`**](./arithmetic_compressor/models/context_mixing_linear.py): A class which implements the Linear Evidence Mixing variant of the Context Mixing compression algorithm.
- [**`ContextMixing_Logistic`**](./arithmetic_compressor/models/context_mixing_logistic.py): A class which implements the Neural network (Logistic) Mixing variant of the Context Mixing compression algorithm.

All models implement these common methods:

- **`update(symbol)`**: Updates the model's statistics given a symbol
- **`probability()`**: Returns the probability of the next symbol
- **`cdf()`**: Returns a cumulative distribution of the next-symbol probabilities
- **`test_model()`**: Tests how well the model predicts symbols
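
As a rough sketch of this shared interface, a model can also be driven directly, outside the compressor. This assumes the dict-based symbol probabilities used in the examples below; the exact return shapes of `probability()` and `cdf()` may differ.

```python
from arithmetic_compressor.models import BaseFrequencyTable

# drive a model directly through the shared interface
model = BaseFrequencyTable({'A': 0.5, 'B': 0.25, 'C': 0.25})

for symbol in "AAB":
    model.update(symbol)      # adapt the statistics to each observed symbol

print(model.probability())    # current next-symbol probabilities
print(model.cdf())            # cumulative distribution over the symbols
```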

## Models Usage

A closer look at all the models.

### **Simple Models**

- `BaseFrequencyTable(symbol_probabilities: dict)`
- `SimpleAdaptiveModel(symbol_probabilities: dict, adaptation_rate: float)`

The Simple Adaptive models adapt each symbol's probability based on how often the symbol has occurred in the data so far.

Here's an example of how to use the Simple Adaptive models included in the library:

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import\
   BaseFrequencyTable,\
   SimpleAdaptiveModel

# create the model
# model = SimpleAdaptiveModel({'A': 0.5, 'B': 0.25, 'C': 0.25})
model = BaseFrequencyTable({'A': 0.5, 'B': 0.25, 'C': 0.25})

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = "AAAAAABBBCCC"
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1]
```

The `BaseFrequencyTable` adapts incrementally to the statistics of the input data, while the `SimpleAdaptiveModel` is essentially an [exponential moving average](https://en.wikipedia.org/wiki/Moving_average) whose responsiveness to new data is controlled by the `adaptation_rate`.
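
As a rough illustration of that idea (not the library's internal code; the `ema_update` helper here is purely hypothetical), an exponential-moving-average probability update could look like this:

```python
# Illustrative only: the exponential-moving-average idea behind SimpleAdaptiveModel.
def ema_update(probs, observed, rate):
    """Move the observed symbol's probability toward 1 and decay the others."""
    return {
        s: (1 - rate) * p + (rate if s == observed else 0.0)
        for s, p in probs.items()
    }

probs = {'A': 0.5, 'B': 0.25, 'C': 0.25}
probs = ema_update(probs, 'A', rate=0.1)
print(probs)  # {'A': 0.55, 'B': 0.225, 'C': 0.225}
```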

### **PPM models**

> <https://en.wikipedia.org/wiki/Prediction_by_partial_matching>

- `PPMModel(symbols: list, context_size: int)`
- `MultiPPM(symbols: list, models: int)`
- `BinaryPPM(context_size: int)`
- `MultiBinaryPPM(models: int)`

PPM (Prediction by Partial Matching) models are context models that use the preceding symbols to predict the probability of the next symbol.
Here's an example of how to use the PPM models included in the library:

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import\
   PPMModel,\
   MultiPPM

# create the model
model = PPMModel(['A', 'B', 'C'], k = 3) # no need to pass in probabilities, only symbols

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = "AAAAAABBBCCC"
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1]
```

The **`MultiPPM`** model uses [weighted averaging](https://en.wikipedia.org/wiki/Weighted_arithmetic_mean) to combine predictions from several PPM models of different orders into a single prediction, which generally gives better compression on larger inputs.

```python
# create the model
model = MultiPPM(['A', 'B', 'C'], models = 4) # will combine PPM models with context sizes of 0 to 4

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = "AAAAAABBBCCC"
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1]
```

#### **Binary version**

The Binary PPM models **`BinaryPPM`** and **`MultiBinaryPPM`** behave just like normal PPM models demonstrated above, except that they only work for binary symbols `0` and `1`.

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import\
   BinaryPPM,\
   MultiBinaryPPM

# create the model
model = BinaryPPM(k = 3)

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1]
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1]
```

Likewise, **`MultiBinaryPPM`** combines several binary PPM models of different orders using weighted averaging to make predictions.
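
A brief sketch of that usage, mirroring the `MultiPPM` example above and assuming the `MultiBinaryPPM(models: int)` constructor listed earlier:

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import MultiBinaryPPM

# combine binary PPM models of several orders via weighted averaging
model = MultiBinaryPPM(models=4)
coder = AECompressor(model)

data = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1]
compressed = coder.compress(data)
print(compressed)
```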

### **Context Mixing models**

- `ContextMix_Linear(models: List)`
- `ContextMix_Logistic(learning_rate: float)`

Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions.

Two general approaches are used: linear and logistic mixing. Linear mixing computes a weighted average of the predictions, with each weight based on the evidence accumulated by that model. Logistic (or neural network) mixing first transforms the predictions into the logistic domain, log(p/(1-p)), before averaging them.
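
As a rough illustration of logistic mixing (not the library's internal code), two bit predictions can be combined in the logistic domain like this:

```python
# Illustrative only: combining two bit predictions in the logistic domain.
import math

def logit(p):
    return math.log(p / (1 - p))

def squash(x):                      # inverse of logit (the logistic function)
    return 1 / (1 + math.exp(-x))

p1, p2 = 0.9, 0.6                   # two models' probabilities that the next bit is 1
w1, w2 = 0.5, 0.5                   # mixing weights
mixed = squash(w1 * logit(p1) + w2 * logit(p2))
print(round(mixed, 3))              # ~0.786, pulled toward the more confident prediction
```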

The library contains a minimal implementation of the algorithm: only the core mixing algorithm is implemented, and it doesn't include as many contexts and models as [PAQ](https://en.wikipedia.org/wiki/PAQ).

> _Note: They only work for binary symbols (**0** and **1**)._

#### **Linear Mixing**

> <https://en.wikipedia.org/wiki/Context_mixing#Linear_Mixing>

The mixer computes a probability as a weighted sum of the N models' predictions.

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import ContextMix_Linear

# create the model
model = ContextMix_Linear()

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
```

The Linear Mixing model lets you combine other models:

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import ContextMix_Linear,\
   SimpleAdaptiveModel,\
   PPMModel,\
   BaseFrequencyTable

# create the model
model = ContextMix_Linear([
  SimpleAdaptiveModel({0: 0.5, 1: 0.5}),
  BaseFrequencyTable({0: 0.5, 1: 0.5}),
  PPMModel([0, 1], k = 10)
])

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

#### **Logistic (Neural Network) Mixing**

> <https://en.wikipedia.org/wiki/PAQ#Neural-network_mixing>

A neural network is used to combine models.

```python
from arithmetic_compressor import AECompressor
from arithmetic_compressor.models import ContextMix_Logistic

# create the model
model = ContextMix_Logistic()

# create an arithmetic coder
coder = AECompressor(model)

# encode some data
data = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
compressed = coder.compress(data)

# print the compressed data
print(compressed) # => [0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```

> _**Note**: This library is intended for learning and educational purposes only. The implementation may not be optimized for performance or memory usage and may not be suitable for use in production environments._
> _Please consider the performance and security issues before using it in production._
> _Please also note that you should thoroughly test the library and its models with real-world data and use cases before deploying it in production._

## More Examples

You can find more detailed examples in the [`/examples`](./examples/) folder in the repository. These examples demonstrate the capabilities of the library and show how to use the different models.

## Contribution

Contributions to the library are very welcome. If you have an idea for a new feature or have found a bug, please submit an issue or a pull request.

## License

This library is distributed under the MIT License. See the [LICENSE](./LICENSE) file for more information.



            
