| Field | Value |
|---|---|
| Name | aprxc |
| Version | 2.0.0 |
| Summary | A command-line tool to estimate the number of distinct lines in a file/stream using Chakraborty/Vinodchandran/Meel’s approximation algorithm. |
| Author | None (author email: Fabian Neumann, dev@fabianneumann.de) |
| Maintainer | None |
| home_page | None |
| docs_url | None |
| Requires Python | >=3.11 |
| License | None |
| Keywords | algorithm, cli, computer-science, math |
| Project URLs | [Codeberg](https://codeberg.org/fa81/aprxc), [GitHub](https://github.com/hellp/aprxc) |
| Upload time | 2024-11-02 14:14:03 |
| Requirements | No requirements were recorded. |
# aprxc
A command-line tool (and Python class) to approximately count the number of
distinct elements in files (or a stream/pipe) using the “simple, intuitive,
sampling-based space-efficient algorithm” by S. Chakraborty, N. V. Vinodchandran
and K. S. Meel, as described in their 2023 paper [Distinct Elements in Streams:
An Algorithm for the (Text) Book](https://arxiv.org/pdf/2301.10191#section.2).
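For intuition, here is a minimal sketch of the paper's algorithm in Python. It
is not this package's actual implementation; `cvm_estimate` and its parameters
are illustrative, with `m` an assumed upper bound on the stream length. The
idea: keep a fixed-size uniform sample of the distinct elements seen so far,
halve the sampling probability whenever the buffer fills up, and scale the
final buffer size by the inverse probability.

```python
import random
from math import ceil, log2

def cvm_estimate(stream, epsilon=0.1, delta=0.1, m=2**63):
    """Sketch of the CVM distinct-elements estimator."""
    # Buffer size per the paper: ceil((12/eps^2) * log2(8m/delta)).
    thresh = ceil(12 / epsilon**2 * log2(8 * m / delta))
    p = 1.0       # current sampling probability
    buf = set()   # each distinct element seen so far survives here w.p. p
    for x in stream:
        buf.discard(x)            # drop any earlier decision about x
        if random.random() < p:   # re-admit x with the current probability
            buf.add(x)
        if len(buf) == thresh:    # buffer full: evict each element w.p. 1/2
            buf = {y for y in buf if random.random() < 0.5}
            p /= 2
    return round(len(buf) / p)    # scale up by the inverse probability
```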
**Motivation:** Easier to remember, always faster, and (much) less
memory-intensive than `sort | uniq | wc -l` or `awk '!a[$0]++' | wc -l`. In this
implementation’s default configuration, results are exact up to ~83k unique
values (on 64-bit CPUs), with deviations of typically 0.4–1% beyond that.
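For example, all three of the following count (or estimate) the distinct lines
of a file; the file name is a placeholder:

```shell
sort access.log | uniq | wc -l     # exact, but sorts the whole file
awk '!a[$0]++' access.log | wc -l  # exact, keeps every unique line in memory
aprxc access.log                   # approximate, with bounded memory
```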
## Installation
Choose your preferred way to install it from [PyPI](https://pypi.org/project/aprxc/):
```shell
pip install aprxc
uv tool install aprxc
```
Alternatively, run it in an isolated environment, using [pipx
run](https://pipx.pypa.io/) or [uvx](https://docs.astral.sh/uv/concepts/tools/):
```shell
pipx run aprxc --help
uvx aprxc --help
```
Lastly, as `aprxc.py` has no dependencies besides Python 3.11+, you can simply
download the script, run it, put it in your PATH, vendor it, etc.
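For example (the URL is a placeholder; fetch `aprxc.py` from the project's
repository):

```shell
curl -LO https://example.invalid/aprxc.py  # placeholder URL
python3 aprxc.py --help
```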
## Features and shortcomings
* Easier to remember than the pipe constructs.
* 20–60% faster than sort/uniq.
* 30–99% less memory-intensive than sort/uniq (for mid/high-cardinality data).
* (Roughly double these numbers when compared against awk.)
* Memory usage has a (configurable) upper bound; see the example below.
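Both of the following knobs shrink the internal set at the price of accuracy
(the file name is a placeholder):

```shell
aprxc --epsilon 0.5 big.log   # larger epsilon => smaller internal set
aprxc --size 1000000 big.log  # hint the expected total number of items
```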
Now let's address the elephant in the room: these advantages come at the cost
of **an inaccuracy in the reported results**. But how inaccurate?
### About inaccuracy
In its default configuration the algorithm has a **mean inaccuracy of about
0.4%**, with **outliers around 1%**. For example, if the script encounters 10M
(`10_000_000`) actual unique values, the reported count is typically ~40k off
(e.g. `10_038_680`), sometimes ~100k (e.g. `9_897_071`).
**However:** If the number of actual unique elements encountered is smaller than
the size of the internally used set data structure, then the reported counts are
**exact**; only once this limit is reached does the approximation algorithm
'kick in' and the result become an approximation.
Here's an overview (highly unscientific!) of how the algorithm parameters 𝜀 and
𝛿 (`--epsilon` and `--delta` on the command line) affect the inaccuracy. The
default of `0.1` for both values seems to strike a good balance (and a memorable
inaccuracy of ~1%). Epsilon is the 'main manipulation knob', and you can see
quite clearly how its value affects the maximum inaccuracy in particular.
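For reference, the default set size of 83_187 (see the table below) is
consistent with the paper's buffer-size formula; this reconstruction assumes
the stream-length bound is `sys.maxsize`:

```python
from math import ceil, log2
import sys

epsilon = delta = 0.1
m = sys.maxsize  # 2**63 - 1 on 64-bit CPUs

# Buffer size from the paper: ceil((12/eps^2) * log2(8m/delta))
print(ceil(12 / epsilon**2 * log2(8 * m / delta)))  # 83187
```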
For this first table I counted 10 million unique 32-character strings and, at
each step, compared the reported count to the actual number of unique items.
_Mean inacc._ is the mean inaccuracy across all 10M steps; _max inacc._ is the
highest deviation encountered; _memory usage_ is the _maxresident_ reported by
the Linux `time` tool; _time usage_ is wall time.
| tool (w/ options) | memory [MiB]| time [s]|mean inacc.|max inacc.| set size|
|-----------------------|------------:|-----------:|--------:|--------:|----------:|
|`sort \| uniq \| wc -l`| 1541 (100%) | 5.5 (100%)| 0% | 0% | —|
|`sort --parallel=16` | 1848 (120%) | 5.2 ( 95%)| 0% | 0% | —|
|`sort --parallel=1` | 780 ( 51%) | 18.1 (328%)| 0% | 0% | —|
|`aprxc --epsilon=0.001`| 1044 ( 68%) | 4.1 ( 75%)| 0% | 0% |831_863_138|
|`aprxc --epsilon=0.01` | 1137 ( 74%) | 5.5 (100%)| 0.001% | 0.02% | 8_318_632|
|`aprxc --epsilon=0.05` | 78 ( 5%) | 2.2 ( 40%)| 0.080% | 0.42% | 332_746|
|`aprxc` (Python 3.13) | 35 ( 2%) | 1.8 ( 32%)| 0.400% | 1.00% | 83_187|
|`aprxc` (Python 3.12) | 28 ( 2%) | 2.0 ( 36%)| 0.400% | 1.00% | 83_187|
|`aprxc --epsilon=0.2` | 26 ( 2%) | 1.8 ( 32%)| 0.700% | 2.10% | 20_797|
|`aprxc --epsilon=0.5` | 23 ( 1%) | 1.7 ( 31%)| 1.700% | 5.40% | 3_216|
|`awk '!a[$0]++'\|wc -l`| 3094 (201%) | 9.3 (169%)| 0% | 0% | —|
#### Other time/memory consumption benchmarks
For `linux-6.11.6.tar`, a medium-cardinality (total: 39_361_138, 43.3% unique)
input file:
| tool (w/ options) | memory [MiB]| time [s]|
|-----------------------|------------:|-----------:|
|`sort \| uniq \| wc -l`| 6277 (100%) | 41.4 (100%)|
|`sort --parallel=16` | 7477 (119%) | 36.7 ( 89%)|
|`sort --parallel=1` | 3275 ( 52%) |158.6 (383%)|
|`aprxc --epsilon=0.001`| 2081 ( 33%) | 13.1 ( 32%)|
|`aprxc --epsilon=0.01` | 1364 ( 22%) | 15.3 ( 37%)|
|`aprxc --epsilon=0.05` | 105 ( 2%) | 8.2 ( 20%)|
|`aprxc` (Python 3.13) | 39 ( 1%) | 7.2 ( 17%)|
|`aprxc` (Python 3.12) | 35 ( 1%) | 8.1 ( 20%)|
|`aprxc --epsilon=0.2` | 27 ( 0%) | 7.2 ( 17%)|
|`aprxc --epsilon=0.5` | 23 ( 0%) | 7.2 ( 17%)|
|`awk '!a[$0]++'\|wc -l`| 5638 ( 90%) | 24.8 ( 60%)|
For `cut -f 1 clickstream-enwiki-2024-04.tsv`, a low-cardinality input (total:
34_399_603, unique: 6.4%), once via temporary file¹, once via pipe²:
| tool (w/ options) | ¹mem [MiB]| ¹time [s]| ²mem [MiB]| ²time [s]|
|-----------------------|------------:|-----------:|------------:|-----------:|
|`sort \| uniq \| wc -l`| 4823 (100%) | 11.6 (100%)| 14 (100%) | 50.3 (100%)|
|`sort --parallel=16` | 5871 (122%) | 11.9 (103%)| 14 (100%) | 48.7 ( 97%)|
|`sort --parallel=1` | 2198 ( 46%) | 50.5 (436%)| 10 ( 71%) | 48.7 ( 97%)|
|`aprxc --epsilon=0.001`| 214 ( 4%) | 10.8 ( 93%)| 215 (1532%)| 10.7 ( 21%)|
|`aprxc --epsilon=0.01` | 215 ( 4%) | 10.5 ( 91%)| 215 (1534%)| 10.6 ( 21%)|
|`aprxc --epsilon=0.05` | 73 ( 2%) | 8.2 ( 71%)| 73 (524%) | 7.7 ( 15%)|
|`aprxc` (Python 3.13) | 35 ( 1%) | 6.4 ( 55%)| 36 (254%) | 6.5 ( 13%)|
|`aprxc` (Python 3.12) | 29 ( 1%) | 7.4 ( 64%)| 29 (204%) | 7.2 ( 14%)|
|`aprxc --epsilon=0.2` | 27 ( 1%) | 5.8 ( 50%)| 27 (189%) | 5.8 ( 11%)|
|`aprxc --epsilon=0.5` | 23 ( 0%) | 6.0 ( 52%)| 23 (164%) | 6.1 ( 12%)|
|`awk '!a[$0]++'\|wc -l`| 666 ( 14%) | 15.7 (136%)| 666 (4748%)| 14.4 ( 29%)|
### Is it useful?
You have to accept the inaccuracies, obviously. But if you are doing
exploratory work and don't care about exact numbers, or plan to round or throw
them away anyway; or if you are in a memory-constrained situation and need to
deal with large input files or streaming data; or if you just cannot remember
the multi-command pipe alternatives, then this might be a tool for you.
### The experimental 'top most common' feature
I've added a couple of lines of code to support a 'top most common' items
feature. It is an alternative to the `sort | uniq -c | sort -rn | head`
pipeline or to [Tim Bray's nice `topfew` tool (written in
Go)](https://github.com/timbray/topfew/).
It kinda works, but…
- The counts are good, even surprisingly good, just as for the base algorithm,
  but definitely worse and not as reliable as the nice ~1% mean inaccuracy of
  the total-count case.
- I lack the mathematical expertise to prove or disprove anything about that
feature.
- If you ask for a top 10 (`-t10` or `--top 10`), you mostly get what you
  expect, but if the counts are close the lower ranks become 'unstable'; even
  ranks 1 and 2 sometimes switch places.
- Compared with `topfew` (I wondered if this approximation algorithm could be
_an optional_ flag for `topfew`), this Python code is impressively close to
the Go performance, especially if reading a lot of data from a pipe.
Unfortunately, I fear that this algorithm is not parallelizable. But I leave
that, and the re-implementation in Go or Rust, as an exercise for the reader
:)
- Just try it! (See the example below.)
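For a quick comparison (the file name is a placeholder):

```shell
sort access.log | uniq -c | sort -rn | head  # classic exact pipeline
aprxc --top 10 access.log                    # approximate top 10
```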
## Command-line interface
```shell
usage: aprxc [-h] [--top [X]] [--size SIZE] [--epsilon EPSILON]
[--delta DELTA] [--cheat] [--count-total] [--verbose] [--version]
[--debug]
[path ...]
Approximately count the number of distinct lines in a file or pipe.
positional arguments:
path Input file path(s) and/or '-' for stdin (default:
stdin)
options:
-h, --help show this help message and exit
--top [X], -t [X] EXPERIMENTAL: Show X most common values. Off by
default. If enabled, X defaults to 10.
--size SIZE, -s SIZE Expected (estimated) total number of items. Reduces
memory usage, increases inaccuracy.
--epsilon EPSILON, -E EPSILON
--delta DELTA, -D DELTA
--cheat Improve accuracy by tracking 'total seen' and use it
as upper bound for result.
--count-total, -T Count number of total seen values.
--verbose, -v
--version, -V show program's version number and exit
--debug Track, calculate, and display various internal
statistics.
```
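A typical stdin invocation, reusing the benchmark pipeline from above:

```shell
cut -f 1 clickstream-enwiki-2024-04.tsv | aprxc -
```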