<div align="center">
<a href="https://github.com/elastic/eland">
<img src="https://raw.githubusercontent.com/elastic/eland/main/docs/sphinx/logo/eland.png" width="30%"
alt="Eland" />
</a>
</div>
<br />
<div align="center">
<a href="https://pypi.org/project/eland"><img src="https://img.shields.io/pypi/v/eland.svg" alt="PyPI Version"></a>
<a href="https://anaconda.org/conda-forge/eland"><img src="https://img.shields.io/conda/vn/conda-forge/eland"
alt="Conda Version"></a>
<a href="https://pepy.tech/project/eland"><img src="https://pepy.tech/badge/eland" alt="Downloads"></a>
<a href="https://pypi.org/project/eland"><img src="https://img.shields.io/pypi/status/eland.svg"
alt="Package Status"></a>
<a href="https://clients-ci.elastic.co/job/elastic+eland+main"><img
src="https://clients-ci.elastic.co/buildStatus/icon?job=elastic%2Beland%2Bmain" alt="Build Status"></a>
<a href="https://github.com/elastic/eland/blob/main/LICENSE.txt"><img src="https://img.shields.io/pypi/l/eland.svg"
alt="License"></a>
<a href="https://eland.readthedocs.io"><img
src="https://readthedocs.org/projects/eland/badge/?version=latest" alt="Documentation Status"></a>
</div>
## About
Eland is a Python Elasticsearch client for exploring and analyzing data in Elasticsearch with a familiar
Pandas-compatible API.
Where possible, the package uses existing Python APIs and data structures to make it easy to switch from numpy,
pandas, or scikit-learn to their Elasticsearch-powered equivalents. In general, the data resides in Elasticsearch and
not in memory, which allows Eland to access large datasets stored in Elasticsearch.
Eland also provides tools to upload trained machine learning models from common libraries like
[scikit-learn](https://scikit-learn.org), [XGBoost](https://xgboost.readthedocs.io), and
[LightGBM](https://lightgbm.readthedocs.io) into Elasticsearch.
## Getting Started
Eland can be installed from [PyPI](https://pypi.org/project/eland) with Pip:
```bash
$ python -m pip install eland
```
Eland can also be installed from [Conda Forge](https://anaconda.org/conda-forge/eland) with Conda:
```bash
$ conda install -c conda-forge eland
```
### Compatibility
- Supports Python 3.8, 3.9, 3.10 and Pandas 1.5
- Supports Elasticsearch clusters that are 7.11+; 8.3 or later is recommended for all features to work.
  If you are using the NLP with PyTorch feature, make sure your Eland minor version matches the minor
  version of your Elasticsearch cluster. For all other features it is sufficient for the major versions
  to match.
- You need to use PyTorch `1.13.1` or earlier to import an NLP model.
  Run `pip install torch==1.13.1` to install the appropriate version of PyTorch.
### Prerequisites
Users installing Eland on Debian-based distributions may need to install prerequisite packages for the transitive
dependencies of Eland:
```bash
$ sudo apt-get install -y \
  build-essential pkg-config cmake \
  python3-dev libzip-dev libjpeg-dev
```
Note that other distributions (CentOS, RedHat, Arch, etc.) may require a different package manager and
different package names.
### Docker
Users who want to run the available scripts without installing Eland can build the Docker
container:
```bash
$ docker build -t elastic/eland .
```
The container can now be used interactively:
```bash
$ docker run -it --rm --network host elastic/eland
```
Running installed scripts is also possible without an interactive shell, e.g.:
```bash
$ docker run -it --rm --network host \
  elastic/eland \
  eland_import_hub_model \
  --url http://host.docker.internal:9200/ \
  --hub-model-id elastic/distilbert-base-cased-finetuned-conll03-english \
  --task-type ner
```
### Connecting to Elasticsearch
Eland uses the [Elasticsearch low-level client](https://elasticsearch-py.readthedocs.io) to connect to Elasticsearch.
This client supports a range of [connection and authentication options](https://elasticsearch-py.readthedocs.io/en/stable/api.html#elasticsearch).
You can pass either an instance of `elasticsearch.Elasticsearch` to Eland APIs
or a string containing the host to connect to:
```python
import eland as ed
# Connecting to an Elasticsearch instance running on 'localhost:9200'
df = ed.DataFrame("localhost:9200", es_index_pattern="flights")
# Connecting to an Elastic Cloud instance
from elasticsearch import Elasticsearch
es = Elasticsearch(
    cloud_id="cluster-name:...",
    http_auth=("elastic", "<password>")
)
df = ed.DataFrame(es, es_index_pattern="flights")
```
## DataFrames in Eland
`eland.DataFrame` wraps an Elasticsearch index in a Pandas-like API
and defers all processing and filtering of data to Elasticsearch
instead of your local machine. This means you can process large
amounts of data within Elasticsearch from a Jupyter Notebook
without overloading your machine.
➤ [Eland DataFrame API documentation](https://eland.readthedocs.io/en/latest/reference/dataframe.html)
➤ [Advanced examples in a Jupyter Notebook](https://eland.readthedocs.io/en/latest/examples/demo_notebook.html)
```python
>>> import eland as ed
>>> # Connect to 'flights' index via localhost Elasticsearch node
>>> df = ed.DataFrame('localhost:9200', 'flights')
# eland.DataFrame instance has the same API as pandas.DataFrame
# except all data is in Elasticsearch. See .info() memory usage.
>>> df.head()
AvgTicketPrice Cancelled ... dayOfWeek timestamp
0 841.265642 False ... 0 2018-01-01 00:00:00
1 882.982662 False ... 0 2018-01-01 18:27:00
2 190.636904 False ... 0 2018-01-01 17:11:14
3 181.694216 True ... 0 2018-01-01 10:33:28
4 730.041778 False ... 0 2018-01-01 05:13:00
[5 rows x 27 columns]
>>> df.info()
<class 'eland.dataframe.DataFrame'>
Index: 13059 entries, 0 to 13058
Data columns (total 27 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AvgTicketPrice 13059 non-null float64
1 Cancelled 13059 non-null bool
2 Carrier 13059 non-null object
...
24 OriginWeather 13059 non-null object
25 dayOfWeek 13059 non-null int64
26 timestamp 13059 non-null datetime64[ns]
dtypes: bool(2), datetime64[ns](1), float64(5), int64(2), object(17)
memory usage: 80.0 bytes
Elasticsearch storage usage: 5.043 MB
# Filtering of rows using comparisons
>>> df[(df.Carrier=="Kibana Airlines") & (df.AvgTicketPrice > 900.0) & (df.Cancelled == True)].head()
AvgTicketPrice Cancelled ... dayOfWeek timestamp
8 960.869736 True ... 0 2018-01-01 12:09:35
26 975.812632 True ... 0 2018-01-01 15:38:32
311 946.358410 True ... 0 2018-01-01 11:51:12
651 975.383864 True ... 2 2018-01-03 21:13:17
950 907.836523 True ... 2 2018-01-03 05:14:51
[5 rows x 27 columns]
# Running aggregations across an index
>>> df[['DistanceKilometers', 'AvgTicketPrice']].aggregate(['sum', 'min', 'std'])
DistanceKilometers AvgTicketPrice
sum 9.261629e+07 8.204365e+06
min 0.000000e+00 1.000205e+02
std 4.578263e+03 2.663867e+02
```
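The same call shape works on a plain pandas DataFrame, which is the point of the API compatibility. A small self-contained comparison on synthetic data (values are illustrative only); here pandas computes locally, while eland would push the `sum`, `min`, and `std` aggregations down to Elasticsearch:

```python
import pandas as pd

# Synthetic stand-in for the 'flights' columns used above.
df = pd.DataFrame({
    "DistanceKilometers": [0.0, 100.5, 250.0],
    "AvgTicketPrice": [100.02, 850.3, 960.9],
})

# Identical call shape to the eland example; returns a 3x2 frame
# indexed by the aggregation names.
result = df[["DistanceKilometers", "AvgTicketPrice"]].aggregate(["sum", "min", "std"])
```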
## Machine Learning in Eland
### Regression and classification
Eland allows trained regression and classification models from the scikit-learn, XGBoost, and LightGBM
libraries to be serialized and used as inference models in Elasticsearch.
➤ [Eland Machine Learning API documentation](https://eland.readthedocs.io/en/latest/reference/ml.html)
➤ [Read more about Machine Learning in Elasticsearch](https://www.elastic.co/guide/en/machine-learning/current/ml-getting-started.html)
```python
>>> from xgboost import XGBClassifier
>>> from eland.ml import MLModel
# Train and exercise an XGBoost ML model locally
>>> xgb_model = XGBClassifier(booster="gbtree")
>>> xgb_model.fit(training_data[0], training_data[1])
>>> xgb_model.predict(training_data[0])
[0 1 1 0 1 0 0 0 1 0]
# Import the model into Elasticsearch
>>> es_model = MLModel.import_model(
    es_client="localhost:9200",
    model_id="xgb-classifier",
    model=xgb_model,
    feature_names=["f0", "f1", "f2", "f3", "f4"],
)
# Exercise the ML model in Elasticsearch with the training data
>>> es_model.predict(training_data[0])
[0 1 1 0 1 0 0 0 1 0]
```
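The `training_data` used above is not defined in the snippet; a minimal stand-in with five numeric features and binary labels can be built with NumPy (shapes and values chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10 samples with 5 features -- matching the five feature_names above --
# plus one binary label per sample.
X = rng.normal(size=(10, 5))
y = rng.integers(0, 2, size=10)
training_data = (X, y)
```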
### NLP with PyTorch
For NLP tasks, Eland allows importing PyTorch-trained BERT models into Elasticsearch. Models can be either plain PyTorch
models or supported [transformers](https://huggingface.co/transformers) models from the
[Hugging Face model hub](https://huggingface.co/models).
```bash
$ eland_import_hub_model \
  --url http://localhost:9200/ \
  --hub-model-id elastic/distilbert-base-cased-finetuned-conll03-english \
  --task-type ner \
  --start
```
The example above will automatically start a model deployment. This is a
good shortcut for initial experimentation, but for anything that needs
good throughput you should omit the `--start` argument from the Eland
command line and instead start the model using the ML UI in Kibana.
The `--start` argument will deploy the model with one allocation and one
thread per allocation, which will not offer good performance. When starting
the model deployment using the ML UI in Kibana or the Elasticsearch
[API](https://www.elastic.co/guide/en/elasticsearch/reference/current/start-trained-model-deployment.html)
you will be able to set the threading options to make the best use of your
hardware.
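As a sketch, starting the same deployment through the Elasticsearch API with explicit threading options might look like the following. The model ID and the numbers are illustrative, and you should consult the API documentation for the parameters supported by your Elasticsearch version:

```bash
$ curl -X POST "http://localhost:9200/_ml/trained_models/elastic__distilbert-base-cased-finetuned-conll03-english/deployment/_start?number_of_allocations=2&threads_per_allocation=4&pretty"
```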
```python
>>> import elasticsearch
>>> from pathlib import Path
>>> from eland.common import es_version
>>> from eland.ml.pytorch import PyTorchModel
>>> from eland.ml.pytorch.transformers import TransformerModel
>>> es = elasticsearch.Elasticsearch("http://elastic:mlqa_admin@localhost:9200")
>>> es_cluster_version = es_version(es)
# Load a Hugging Face transformers model directly from the model hub
>>> tm = TransformerModel(model_id="elastic/distilbert-base-cased-finetuned-conll03-english", task_type="ner", es_version=es_cluster_version)
Downloading: 100%|██████████| 257/257 [00:00<00:00, 108kB/s]
Downloading: 100%|██████████| 954/954 [00:00<00:00, 372kB/s]
Downloading: 100%|██████████| 208k/208k [00:00<00:00, 668kB/s]
Downloading: 100%|██████████| 112/112 [00:00<00:00, 43.9kB/s]
Downloading: 100%|██████████| 249M/249M [00:23<00:00, 11.2MB/s]
# Export the model in the TorchScript representation that Elasticsearch uses
>>> tmp_path = "models"
>>> Path(tmp_path).mkdir(parents=True, exist_ok=True)
>>> model_path, config, vocab_path = tm.save(tmp_path)
# Import model into Elasticsearch
>>> ptm = PyTorchModel(es, tm.elasticsearch_model_id())
>>> ptm.import_model(model_path=model_path, config_path=None, vocab_path=vocab_path, config=config)
100%|██████████| 63/63 [00:12<00:00, 5.02it/s]
```
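After the model is deployed, it can also be exercised directly through the Elasticsearch infer trained model API; a hedged sketch (the model ID is illustrative and the exact request shape may vary by version):

```bash
$ curl -X POST "http://localhost:9200/_ml/trained_models/elastic__distilbert-base-cased-finetuned-conll03-english/_infer" \
  -H "Content-Type: application/json" \
  -d '{"docs": [{"text_field": "Elastic is headquartered in Mountain View"}]}'
```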