# csvsdataset
`csvsdataset` is a Python library designed to simplify the process of working with multiple CSV files as a single dataset. The primary functionality is provided by the `CsvsDataset` class in the `csvsdataset.py` module.
This was written by ChatGPT4 as mentioned [here](https://www.linkedin.com/posts/petercotton_chatgpt4-opensource-python-activity-7047184874163597312-JTr3?utm_source=share&utm_medium=member_desktop). Issues will be cut and pasted into a session. It is an experiment in semi-autonomous code maintenance.
## Installation
To install the `csvsdataset` library, simply run:
```bash
pip install csvsdataset
```
## Usage
```python
from csvsdataset.csvsdataset import CsvsDataset

# Initialize the CsvsDataset instance
dataset = CsvsDataset(folder_path="path/to/your/csv/folder",
                      file_pattern="*.csv",
                      x_columns=["column1", "column2"],
                      y_column="target_column")

# Iterate over the dataset
for x_data, y_data in dataset:
    # Your processing code here
    pass

# Access a specific item in the dataset
x_data, y_data = dataset[42]
```
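For a quick experiment, you can generate a folder of small CSV files with the standard library before pointing `CsvsDataset` at it. The helper below is not part of the library; it simply writes files whose column names match the usage snippet above:

```python
import csv
import tempfile
from pathlib import Path

def make_demo_csvs(folder: Path, n_files: int = 3, rows_per_file: int = 5):
    """Write small CSV files with the columns used in the usage example."""
    paths = []
    for i in range(n_files):
        path = folder / f"data_{i}.csv"
        with path.open("w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["column1", "column2", "target_column"]
            )
            writer.writeheader()
            for j in range(rows_per_file):
                writer.writerow(
                    {"column1": i, "column2": j, "target_column": i + j}
                )
        paths.append(path)
    return paths

# Create a throwaway folder of demo CSVs, suitable as folder_path above.
folder = Path(tempfile.mkdtemp())
paths = make_demo_csvs(folder)
print([p.name for p in paths])
```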
### Memory frugality
Only the data from a small number of CSV files is kept in memory at any one time;
the rest is discarded on an LRU (least-recently-used) basis. The class is intended
for situations where there are far too many data files to load into memory at once.
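The library's internal cache is not shown here, but the LRU idea can be sketched with the standard library alone. In this hypothetical stand-in, `functools.lru_cache` keeps the rows of at most a few files resident, re-reading older files from disk when they are requested again:

```python
import csv
from functools import lru_cache

@lru_cache(maxsize=4)  # keep at most 4 parsed files resident in memory
def load_rows(path: str) -> tuple:
    """Read one CSV file; repeated calls for recently used paths hit the cache."""
    with open(path, newline="") as f:
        # Tuples (immutable) so cached results cannot be mutated by callers.
        return tuple(tuple(row.values()) for row in csv.DictReader(f))
```

When a fifth distinct path is requested, the least-recently-used file's rows are evicted and would be re-read on the next access; `load_rows.cache_info()` reports hits and misses.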
## Raw data

```json
{
  "_id": null,
  "home_page": "https://github.com/microprediction/csvsdataset",
  "name": "csvsdataset",
  "maintainer": "",
  "docs_url": null,
  "requires_python": "",
  "maintainer_email": "",
  "keywords": "",
  "author": "microprediction",
  "author_email": "peter.cotton@microprediction.com",
  "download_url": "https://files.pythonhosted.org/packages/1c/ef/7259452de864117bed0e0ec17ffc07117b901fe54ad5f5e51f0d8adf85b5/csvsdataset-0.0.7.tar.gz",
  "platform": null,
  "description": "# csvsdataset\n\n`csvsdataset` is a Python library designed to simplify the process of working with multiple CSV files as a single dataset. The primary functionality is provided by the `CsvsDataset` class in the `csvsdataset.py` module.\n\nThis was written by ChatGPT4 as mentioned [here](https://www.linkedin.com/posts/petercotton_chatgpt4-opensource-python-activity-7047184874163597312-JTr3?utm_source=share&utm_medium=member_desktop). Issues will be cut and paste into a session. It is an experiment in semi-autonomous code maintenance.\n\n## Installation\n\nTo install the `csvsdataset` library, simply run:\n\n```bash\npip install csvsdataset\n```\n\n## Usage\n\n from csvsdataset.csvsdataset import CsvsDataset\n \n # Initialize the CsvsDataset instance\n dataset = CsvsDataset(folder_path=\"path/to/your/csv/folder\",\n file_pattern=\"*.csv\",\n x_columns=[\"column1\", \"column2\"],\n y_column=\"target_column\")\n \n # Iterate over the dataset\n for x_data, y_data in dataset:\n # Your processing code here\n pass\n \n # Access a specific item in the dataset\n x_data, y_data = dataset[42]\n\n### Memory frugality\nOnly data from a small number of csv files are maintained in memory. The\nrest is discarded on a LRU basis. This class is intended for use\nwhen a very large number of data files exist which cannot be loaded into\nmemory conveniently. \n",
  "bugtrack_url": null,
  "license": "MIT",
  "summary": "Memory frugal torch dataset from a csv collection",
  "version": "0.0.7",
  "project_urls": {
    "Homepage": "https://github.com/microprediction/csvsdataset"
  },
  "split_keywords": [],
  "urls": [
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "5e34610d9451ec9ad9100ea571d6d201e12b8a6594705507c688df053b7d0634",
        "md5": "6e6c1b815810df06ec270efa43f25cfb",
        "sha256": "151e992427bc6969f52a5f93966b59d32c70fd71166f7b4e48f5b8c39704bcba"
      },
      "downloads": -1,
      "filename": "csvsdataset-0.0.7-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "6e6c1b815810df06ec270efa43f25cfb",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": null,
      "size": 35282251,
      "upload_time": "2023-05-14T13:39:17",
      "upload_time_iso_8601": "2023-05-14T13:39:17.935509Z",
      "url": "https://files.pythonhosted.org/packages/5e/34/610d9451ec9ad9100ea571d6d201e12b8a6594705507c688df053b7d0634/csvsdataset-0.0.7-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "1cef7259452de864117bed0e0ec17ffc07117b901fe54ad5f5e51f0d8adf85b5",
        "md5": "9f1f2706473b41c8e2d7c6fbd92b8afc",
        "sha256": "edbd1b5640a4a904014ed9476ea7ee3f551994a9d125f48d5d64500070c9161d"
      },
      "downloads": -1,
      "filename": "csvsdataset-0.0.7.tar.gz",
      "has_sig": false,
      "md5_digest": "9f1f2706473b41c8e2d7c6fbd92b8afc",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": null,
      "size": 35038540,
      "upload_time": "2023-05-14T13:39:24",
      "upload_time_iso_8601": "2023-05-14T13:39:24.453661Z",
      "url": "https://files.pythonhosted.org/packages/1c/ef/7259452de864117bed0e0ec17ffc07117b901fe54ad5f5e51f0d8adf85b5/csvsdataset-0.0.7.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2023-05-14 13:39:24",
  "github": true,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "github_user": "microprediction",
  "github_project": "csvsdataset",
  "travis_ci": false,
  "coveralls": false,
  "github_actions": true,
  "requirements": [
    {
      "name": "pandas",
      "specs": []
    },
    {
      "name": "torch",
      "specs": []
    }
  ],
  "lcname": "csvsdataset"
}
```