## Introduction
The matrice-datasets-sdk is a Python package that provides APIs for performing dataset-related operations on the matrice.ai platform.
### Installation (to be updated once this repo is released as a Python package):
```
git clone https://github.com/matrice-ai/matrice-dataset-sdk.git
cd matrice-dataset-sdk
```
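Once the package is published, installation should reduce to a single pip command. The distribution name `matrice` below is taken from the project metadata and may change at release:
```
pip install matrice
```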
## File structure
```
python_sdk/
├── docs/
│ ├── Be-Annotation.md
│ ├── Be-Dataset.md
│ ├── Be-Deployment.md
│ ├── Be-Model.md
│ ├── Be-Project.md
│ ├── Be-Resources.py
│ ├── Be-User.py
│ └── Inference-Optimization.md
├── src/
│ ├── annotation.py
│ ├── dataset.py
│ ├── deployment.py
│ ├── inference_optim.py
│ ├── models.py
│ ├── resources.py
│ ├── rpc.py
│ └── token_auth.py
├── test/
│ ├── annotation.ipynb
│ ├── config.py
│ ├── dataset.ipynb
│ ├── deployment.ipynb
│ ├── inference_optim.ipynb
│ ├── projects.ipynb
│ ├── models.ipynb
│ ├── resources.ipynb
│ └── user.ipynb
├── setup.py
└── README.md
```
### Usage:
#### Class dataset.Dataset [[Source]](https://github.com/matrice-ai/matrice-dataset-sdk/blob/main/src/dataset.py)
All operations on a dataset are performed via the methods of the `Dataset` class, so before performing any operation on a dataset, we need to instantiate an object of the `Dataset` class.
```python
class dataset.Dataset(project_id, dataset_id=None, email="", password="")
```
This constructor should be used to instantiate an object of the `Dataset` class.
Parameters in the constructor:
* `project_id`: string, the id of the project in which you want to create the dataset
* `dataset_id`: string, the id of the dataset on which you want to perform operations
* `email`: string, email associated with the matrice.ai platform account
* `password`: string, password for your account corresponding to `email` for the matrice.ai platform
If you want to create a new dataset and perform some operations on it, you can instantiate an object of the `Dataset` class without setting the value of `dataset_id`.
```python3
# import Dataset class
from src.dataset import Dataset
# set the value of my_email to the email associated with the matrice.ai account
my_email = "example@gmail.com"
# set the value of my_password to the password corresponding to `my_email` for matrice.ai platform
my_password = "password"
# set the value of project_id to the id of the project in which you want to create the dataset
project_id = "abcc123facc"
# instantiate an object of the Dataset class
d1 = Dataset(project_id=project_id, email=my_email, password=my_password)
```
If you want to perform operations on an already created dataset, set `dataset_id` to the id of that dataset when instantiating the object of the `Dataset` class.
```python3
# set the value of my_email to the email associated with the matrice.ai account
my_email = "example@gmail.com"
# set the value of my_password to the password corresponding to `my_email` for matrice.ai platform
my_password = "password"
# set the value of project_id to the id of the project in which the dataset you want to access exists
project_id = "abcc123facc"
# set the value of dataset_id to the id of the dataset on which you want to perform operations
dataset_id = "addfc123facc"
# instantiate an object of the Dataset class
d1 = Dataset(project_id=project_id, dataset_id=dataset_id, email=my_email, password=my_password)
```
Note:
All the methods defined in `Dataset` return `response`, `error`, and `message`. The `response` is the response given by the matrice.ai backend, `error` contains the error message if an error occurred (otherwise `None`), and `message` is a short text message describing the status of the requested task.
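Because every method returns this `(response, error, message)` triple, a small helper can keep call sites clean. The `unwrap` function below is a hypothetical convenience wrapper, not part of the SDK:
```python3
def unwrap(result):
    """Return the response from a (response, error, message) triple,
    raising if the backend reported an error."""
    response, error, message = result
    if error is not None:
        raise RuntimeError(f"matrice.ai request failed: {error} ({message})")
    return response

# example with a simulated successful result
resp = unwrap(({"datasetId": "addfc123facc"}, None, "Dataset fetched"))
print(resp["datasetId"])  # addfc123facc
```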
#### Create a dataset:
To create a dataset on the matrice.ai platform using the `matrice-datasets-sdk`, we can use the `create_dataset()` method defined in the `dataset.Dataset` class. Before calling `create_dataset()`, we must instantiate an object of the `Dataset` class.
```
dataset.Dataset.create_dataset(self, name, type, is_unlabeled, source, source_url, dataset_description="", version_description="")
```
Parameters in the method:
* `name`: string, name of the dataset to be created
* `type`: string, type of the task of the dataset (for now we support `"classification"` and `"detection"` types)
* `is_unlabeled`: boolean (True/False), True if the dataset is unlabeled
* `source`: string, Source for dataset files (currently supported only "url")
* `source_url` : string, URL of the dataset zipped file
* `dataset_description`: string, Optional field that is used to store some information about the dataset we are creating
* `version_description`: string, Optional field that is used to store some information about the `v1.0` of the dataset
In the matrice.ai platform, we can create a new dataset either by uploading local `zip` files or by using a URL of the dataset files stored somewhere remotely.
To create a dataset using the URL of the dataset files, pass `source="url"` and the URL of the files as the `source_url` argument.
```python3
resp, err, msg = d1.create_dataset(name="dataset-name", type="detection", is_unlabeled=False, source="url", source_url="https://example.com/sample2_data.tar.gz", dataset_description="Dataset created using matricesdk", version_description="Initial Version")
```
#### Add new samples to an existing dataset:
To add new samples to an existing dataset, we can use the `add_new_samples()` method defined in the `dataset.Dataset` class.
```
dataset.Dataset.add_new_samples(self, old_version, new_version, is_unlabeled, source, source_url, dataset_description="", version_description="")
```
Parameters in the method:
* `old_version`: string, the version of the dataset to which you want to add samples
* `new_version`: string, the new version of the dataset with added samples (set new_version equal to old_version argument if you don't want to create a new version)
* `is_unlabeled`: boolean (True/False), True if the dataset is unlabeled
* `source`: string, Source for dataset files (currently supported only "url")
* `source_url`: string, URL of the dataset zipped file
* `dataset_description`: string, Optional field if set to a string updates existing description of the dataset
* `version_description`: string, Optional field. Used to set the description of a version if we are creating a new version or used to update the description of a version if we are updating an existing version
In the matrice.ai platform, we can add new samples to an existing dataset either by uploading local `zip` files or by using a URL of the dataset files stored somewhere remotely.
To add new samples to an existing dataset using the URL of the dataset files, pass `source="url"` and the URL of the files as the `source_url` argument.
```python3
resp, err, msg = d1.add_new_samples(old_version="v1.0", new_version="v1.0", is_unlabeled=False, source="url", source_url="https://myexample.com/sample2_data.tar.gz")
```
The Python statement above adds new samples to version `v1.0` of the `d1` dataset using the files at `source_url`. It will not create a new version in this case.
```python3
resp, err, msg = d1.add_new_samples(old_version="v1.0", new_version="v1.1", is_unlabeled=False, source="url", source_url="https://s3.us-west-2.amazonaws.com/temp.matrice/sample2_data.tar.gz", version_description="Adding samples")
```
The Python statement above adds new samples to version `v1.0` of the `d1` dataset using the files at `source_url` and stores the result as a new version, `v1.1`.
#### Create dataset splits:
To move samples from one split to another split or create new random splits in the existing dataset, one can use the `split_dataset()` method defined in the `dataset.Dataset` class.
```
dataset.Dataset.split_dataset(self, old_version, new_version, is_random_split, train_num=0, val_num=0, test_num=0, unassigned_num=0, dataset_description="", version_description="")
```
Parameters in the method:
* `old_version`: string, the version of the dataset in which you want to move samples from one split to another split or create new random splits
* `new_version`: string, the new version of the dataset with the updated splits (set new_version equal to the old_version argument if you don't want to create a new version)
* `is_random_split`: boolean (True/False), set to True if you want to create fresh new random splits
* `train_num`: int, Number of samples you want to be in the training set
* `val_num`: int, Number of samples you want to be in the validation set
* `test_num`: int, Number of samples you want to be in the test set
* `unassigned_num`: int, Number of samples you want to be in the unassigned set
* `dataset_description`: string, Optional field if set to a string updates existing description of the dataset
* `version_description`: string, Optional field. Used to set the description of a version if we are creating a new version or used to update the description of a version if we are updating an existing version
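No example is given for this method, so here is a sketch of how a random 70/15/15 split might be set up. The total of 1000 samples is hypothetical; in practice the real count would come from `get_version_summary()`:
```python3
# hypothetical total sample count; obtain the real value from get_version_summary()
total = 1000
train_num = total * 70 // 100            # 700 samples for training
val_num = total * 15 // 100              # 150 samples for validation
test_num = total - train_num - val_num   # remaining 150 samples for testing

# With d1 instantiated as shown earlier, the call would look like:
# resp, err, msg = d1.split_dataset(
#     old_version="v1.0", new_version="v1.1", is_random_split=True,
#     train_num=train_num, val_num=val_num, test_num=test_num,
#     unassigned_num=0, version_description="Random 70/15/15 split")
print(train_num, val_num, test_num)  # 700 150 150
```
Integer arithmetic is used so the three split sizes always sum exactly to `total`.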
#### Delete a specific version of a dataset:
To delete a specific version of a dataset, we can use the `delete_version()` method defined in the `dataset.Dataset` class.
```
delete_version(self, version)
```
Parameters in the method:
* `version`: string, version of the dataset to be deleted
```python3
resp, err, msg = d1.delete_version("v1.2")
```
The Python statement above will delete version `v1.2` of the `d1` dataset.
#### Delete a dataset:
To delete a dataset, we can use the `delete_dataset()` method defined in the `dataset.Dataset` class.
```
delete_dataset(self)
```
```python3
resp, err, msg = d1.delete_dataset()
```
The Python statement above will delete the `d1` dataset.
#### Rename a dataset:
To rename a dataset, one can use the `rename_dataset()` method defined in the `dataset.Dataset` class.
```
rename_dataset(self, updated_name)
```
Parameters in the method:
* `updated_name`: string, the new name for the dataset
```python3
resp, err, message = d1.rename_dataset(updated_name="my_dataset")
```
The Python statement above will rename the `d1` dataset to `my_dataset`.
#### Get a list of all the datasets:
To get a list of all the datasets inside a project, one can use the `get_all_datasets()` method defined in the `dataset.Dataset` class.
```
get_all_datasets(self)
```
```python3
resp, err, message = d1.get_all_datasets()
```
The Python statement above will list all the datasets inside the `project_id` of `d1`.
#### Get information (details) about a dataset:
To get information about a particular dataset inside a project, we can use the `get_dataset_info()` method defined in the `dataset.Dataset` class.
```
get_dataset_info(self)
```
```python3
resp, err, message = d1.get_dataset_info()
```
The Python statement above will provide us with the information about `d1`.
#### Get a summary of a particular version of a dataset:
To get a summary of a particular version of a dataset, one can use the `get_version_summary()` method defined in the `dataset.Dataset` class.
```
get_version_summary(self, version)
```
Parameters in the method:
* `version`: string, version of the dataset whose summary you want to get
```python3
resp, err, msg = d1.get_version_summary("v1.0")
```
The Python statement above will provide us with the summary of version `v1.0` of dataset `d1`.
#### List all versions of a dataset:
To list all the versions of a dataset, one can use the `list_versions()` method defined in the `dataset.Dataset` class.
```
list_versions(self)
```
```python3
resp, err, msg = d1.list_versions()
```
The Python statement above will provide us with the latest version and the list of all versions of the dataset `d1`.
#### List all the label categories of a dataset:
To list all the label categories of a dataset, one can use the `list_categories()` method defined in the `dataset.Dataset` class.
```
list_categories(self)
```
```python3
resp, err, msg = d1.list_categories()
```
The Python statement above will provide us with all the label categories of the dataset `d1`.
#### List all the logs for a dataset:
To list all the logs for a dataset, one can use the `get_logs()` method defined in the `dataset.Dataset` class.
```
get_logs(self)
```
```python3
resp, err, msg = d1.get_logs()
```
The Python statement above will give us a list of all the logs for the dataset `d1`.
#### List all the dataset items in a dataset:
To list all the dataset items in a dataset, one can use the `list_all_dataset_items()` method defined in the `dataset.Dataset` class.
```
list_all_dataset_items(self, version)
```
Parameters in the method:
* `version`: string, version of the dataset whose items you want to get
```python3
resp, err, msg = d1.list_all_dataset_items(version="v1.0")
```
The Python statement above will give us a list of all the dataset items in version `v1.0` of the dataset `d1`.