datadotworld
============

:Version: 2.0.0
:Home page: http://github.com/datadotworld/data.world-py
:Summary: Python library for data.world
:Author: data.world
:Requires Python: >=3.9
:License: Apache 2.0
:Keywords: data.world, dataset
:Upload time: 2024-04-19 12:52:35

=============
data.world-py
=============

A python library for working with data.world datasets.

This library makes it easy for data.world users to pull and work with data stored on data.world.
Additionally, the library provides convenient wrappers for data.world APIs, allowing users to create and update
datasets, add and modify files, and even build entire applications on top of data.world.


Quick start
===========

Install
-------

You can install it using ``pip`` directly from PyPI::

    pip install datadotworld

Optionally, you can install the library including pandas support::

    pip install datadotworld[pandas]

If you use ``conda`` to manage your Python distribution, you can install from the community-maintained `conda-forge <https://conda-forge.github.io/>`_ channel::

    conda install -c conda-forge datadotworld-py


Configure
---------

This library requires a data.world API authentication token to work.

Your authentication token can be obtained on data.world once you enable Python under
`Integrations > Python <https://data.world/integrations/python>`_.

To configure the library, run the following command::

    dw configure


Alternatively, tokens can be provided via the ``DW_AUTH_TOKEN`` environment variable.
On macOS or Unix machines, run the following (replacing ``<YOUR_TOKEN>`` below with the token obtained earlier)::

    export DW_AUTH_TOKEN=<YOUR_TOKEN>
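
If you prefer to set the token from within Python, a minimal sketch (assuming the
variable is set before the library first reads its configuration) is:

.. code-block:: python

    import os

    # Assumed: must be set before datadotworld reads its configuration
    os.environ['DW_AUTH_TOKEN'] = '<YOUR_TOKEN>'

    import datadotworld as dw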

Load a dataset
--------------

The ``load_dataset()`` function facilitates maintaining copies of datasets on the local filesystem.
It will download a given dataset's `datapackage <http://specs.frictionlessdata.io/data-package/>`_
and store it under ``~/.dw/cache``. When used subsequently, ``load_dataset()`` will use the copy stored on disk and will
work offline, unless it's called with ``force_update=True`` or ``auto_update=True``. ``force_update=True`` will overwrite your local copy unconditionally. ``auto_update=True`` will only overwrite your local copy if a newer version of the dataset is available on data.world.
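
For instance, a short sketch of the three update behaviors (using the sample dataset
from the examples below):

.. code-block:: python

    import datadotworld as dw

    # First call downloads the dataset and caches it under ~/.dw/cache
    dataset = dw.load_dataset('jonloyens/an-intro-to-dataworld-dataset')

    # Subsequent calls work offline from the cached copy; auto_update=True
    # re-downloads only if a newer version is available on data.world
    dataset = dw.load_dataset('jonloyens/an-intro-to-dataworld-dataset',
                              auto_update=True)

    # force_update=True overwrites the local copy unconditionally
    dataset = dw.load_dataset('jonloyens/an-intro-to-dataworld-dataset',
                              force_update=True)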

Once loaded, a dataset (data and metadata) can be conveniently accessed via the object returned by ``load_dataset()``.

Start by importing the ``datadotworld`` module:

.. code-block:: python

    import datadotworld as dw

Then, invoke the ``load_dataset()`` function to download a dataset and work with it locally.
For example:

.. code-block:: python

    intro_dataset = dw.load_dataset('jonloyens/an-intro-to-dataworld-dataset')

Dataset objects allow access to data via three properties: ``raw_data``, ``tables``, and ``dataframes``.
Each of these properties is a mapping (dict) whose values are of type ``bytes``, ``list``, and ``pandas.DataFrame``,
respectively. Values are lazily loaded and cached once loaded. Their keys are the names of the files
contained in the dataset.

For example:

.. code-block:: python

    >>> intro_dataset.dataframes
    LazyLoadedDict({
        'changelog': LazyLoadedValue(<pandas.DataFrame>),
        'datadotworldbballstats': LazyLoadedValue(<pandas.DataFrame>),
        'datadotworldbballteam': LazyLoadedValue(<pandas.DataFrame>)})
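
For tabular files, the same key works across all three mappings, so a file can be
read at whatever level of abstraction suits the task. A short sketch, using a key
from the listing above:

.. code-block:: python

    raw_bytes = intro_dataset.raw_data['changelog']   # bytes
    rows = intro_dataset.tables['changelog']          # list of row mappings
    df = intro_dataset.dataframes['changelog']        # pandas.DataFrame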

**IMPORTANT**: Not all files in a dataset are tabular; non-tabular files are exposed via ``raw_data`` only.

Tables are lists of rows, each represented by a mapping (dict) of column names to their respective values.

For example:

.. code-block:: python

    >>> stats_table = intro_dataset.tables['datadotworldbballstats']
    >>> stats_table[0]
    OrderedDict([('Name', 'Jon'),
                 ('PointsPerGame', Decimal('20.4')),
                 ('AssistsPerGame', Decimal('1.3'))])
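
Because each row behaves like a dict, tables can be iterated and filtered without
pandas. For example, using the columns shown above:

.. code-block:: python

    # Print players averaging more than 15 points per game
    for row in stats_table:
        if row['PointsPerGame'] > 15:
            print(row['Name'], row['PointsPerGame'])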

You can also review the metadata associated with a file or with the entire dataset using the ``describe()`` function.
For example:

.. code-block:: python

    >>> intro_dataset.describe()
    {'homepage': 'https://data.world/jonloyens/an-intro-to-dataworld-dataset',
     'name': 'jonloyens_an-intro-to-dataworld-dataset',
     'resources': [{'format': 'csv',
       'name': 'changelog',
       'path': 'data/ChangeLog.csv'},
      {'format': 'csv',
       'name': 'datadotworldbballstats',
       'path': 'data/DataDotWorldBBallStats.csv'},
      {'format': 'csv',
       'name': 'datadotworldbballteam',
       'path': 'data/DataDotWorldBBallTeam.csv'}]}
    >>> intro_dataset.describe('datadotworldbballstats')
    {'format': 'csv',
     'name': 'datadotworldbballstats',
     'path': 'data/DataDotWorldBBallStats.csv',
     'schema': {'fields': [{'name': 'Name', 'title': 'Name', 'type': 'string'},
                           {'name': 'PointsPerGame',
                            'title': 'PointsPerGame',
                            'type': 'number'},
                           {'name': 'AssistsPerGame',
                            'title': 'AssistsPerGame',
                            'type': 'number'}]}}
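
The returned metadata is a plain dict, so it can also be used programmatically. For
instance, to list each resource and its path (keys taken from the output above):

.. code-block:: python

    for resource in intro_dataset.describe()['resources']:
        print(resource['name'], resource['path'])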

Query a dataset
---------------

The ``query()`` function allows datasets to be queried live using the ``SQL`` or ``SPARQL`` query languages.

To query a dataset, invoke the ``query()`` function.
For example:

.. code-block:: python

    results = dw.query('jonloyens/an-intro-to-dataworld-dataset', 'SELECT * FROM DataDotWorldBBallStats')

Query result objects allow access to the data via ``raw_data``, ``table`` and ``dataframe`` properties, of type
``json``, ``list`` and ``pandas.DataFrame``, respectively.

For example:

.. code-block:: python

    >>> results.dataframe
          Name  PointsPerGame  AssistsPerGame
    0      Jon           20.4             1.3
    1      Rob           15.5             8.0
    2   Sharon           30.1            11.2
    3     Alex            8.2             0.5
    4  Rebecca           12.3            17.0
    5   Ariane           18.1             3.0
    6    Bryon           16.0             8.5
    7     Matt           13.0             2.1


Tables are lists of rows, each represented by a mapping (dict) of column names to their respective values.
For example:

.. code-block:: python

    >>> results.table[0]
    OrderedDict([('Name', 'Jon'),
                 ('PointsPerGame', Decimal('20.4')),
                 ('AssistsPerGame', Decimal('1.3'))])

To query using ``SPARQL``, invoke ``query()`` with ``query_type='sparql'``; otherwise, the query is assumed
to be a ``SQL`` query.
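
For example, a minimal ``SPARQL`` sketch (the query itself is only illustrative):

.. code-block:: python

    results = dw.query(
        'jonloyens/an-intro-to-dataworld-dataset',
        'SELECT * WHERE { ?s ?p ?o. } LIMIT 10',
        query_type='sparql')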

Just like in the dataset case, you can view the metadata associated with a query result using the ``describe()``
function.

For example:

.. code-block:: python

    >>> results.describe()
    {'fields': [{'name': 'Name', 'type': 'string'},
                {'name': 'PointsPerGame', 'type': 'number'},
                {'name': 'AssistsPerGame', 'type': 'number'}]}
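
The field metadata can drive downstream logic, e.g. building a simple name-to-type
map (keys taken from the output above):

.. code-block:: python

    >>> {f['name']: f['type'] for f in results.describe()['fields']}
    {'Name': 'string', 'PointsPerGame': 'number', 'AssistsPerGame': 'number'}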

Work with files
---------------

The ``open_remote_file()`` function allows you to write data to or read data from a file in a
data.world dataset.

Writing files
.............

The object that is returned from the ``open_remote_file()`` call is similar to a file handle that
would be used to write to a local file - it has a ``write()`` method, and contents sent to that
method will be written to the file remotely.

.. code-block:: python

        >>> import datadotworld as dw
        >>>
        >>> with dw.open_remote_file('username/test-dataset', 'test.txt') as w:
        ...   w.write("this is a test.")
        >>>

Of course, writing a text file isn't the primary use case for data.world - you want to write your
data!  The return object from ``open_remote_file()`` should be usable anywhere you could normally
use a local file handle in write mode - so you can use it to serialize the contents of a pandas
``DataFrame`` to a CSV file...

.. code-block:: python

        >>> import pandas as pd
        >>> df = pd.DataFrame({'foo':[1,2,3,4],'bar':['a','b','c','d']})
        >>> with dw.open_remote_file('username/test-dataset', 'dataframe.csv') as w:
        ...   df.to_csv(w, index=False)

Or, to write a series of ``dict`` objects as a JSON Lines file...

.. code-block:: python

        >>> import json
        >>> with dw.open_remote_file('username/test-dataset', 'test.jsonl') as w:
        ...   json.dump({'foo':42, 'bar':"A"}, w)
        ...   w.write('\n')  # JSON Lines records are newline-delimited
        ...   json.dump({'foo':13, 'bar':"B"}, w)
        ...   w.write('\n')
        >>>

Or to write a series of ``dict`` objects as a CSV...

.. code-block:: python

        >>> import csv
        >>> with dw.open_remote_file('username/test-dataset', 'test.csv') as w:
        ...   csvw = csv.DictWriter(w, fieldnames=['foo', 'bar'])
        ...   csvw.writeheader()
        ...   csvw.writerow({'foo':42, 'bar':"A"})
        ...   csvw.writerow({'foo':13, 'bar':"B"})
        >>>

And finally, you can write binary data by streaming ``bytes`` or ``bytearray`` objects, if you open the
file in binary mode...

.. code-block:: python

        >>> with dw.open_remote_file('username/test-dataset', 'test.txt', mode='wb') as w:
        ...   w.write(bytes([100,97,116,97,46,119,111,114,108,100]))  # the bytes spell "data.world"

Reading files
.............

You can also read data from a file in a similar fashion:

.. code-block:: python

        >>> with dw.open_remote_file('username/test-dataset', 'test.txt', mode='r') as r:
        ...   print(r.read())


Reading from the file into common parsing libraries works naturally, too - when opened in 'r' mode, the
file object acts as an iterator over the lines in the file:

.. code-block:: python

        >>> import csv
        >>> with dw.open_remote_file('username/test-dataset', 'test.txt', mode='r') as r:
        ...   csvr = csv.DictReader(r)
        ...   for row in csvr:
        ...      print(row['column a'], row['column b'])


Reading binary files works naturally, too - when opened in 'rb' mode, ``read()`` returns the contents of
the file as a byte array, and the file object acts as an iterator of bytes:

.. code-block:: python

        >>> with dw.open_remote_file('username/test-dataset', 'test', mode='rb') as r:
        ...   data = r.read()
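
Because the handle behaves like a regular file object, it can also be handed directly
to libraries that accept file-like objects - for instance, reading the CSV written in
the earlier example back into a ``DataFrame`` (a sketch, assuming that file exists):

.. code-block:: python

        >>> import pandas as pd
        >>> with dw.open_remote_file('username/test-dataset', 'dataframe.csv', mode='r') as r:
        ...   df = pd.read_csv(r)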


Additional API Features
-----------------------

For a complete list of available API operations, see the
`official documentation <https://docs.data.world/documentation/api/>`_.

Python wrappers are implemented by the ``ApiClient`` class. To obtain an instance, simply call ``api_client()``.
For example:

.. code-block:: python

    client = dw.api_client()

The client currently implements the following functions:

* ``create_dataset``
* ``update_dataset``
* ``replace_dataset``
* ``get_dataset``
* ``delete_dataset``
* ``add_files_via_url``
* ``append_records``
* ``upload_files``
* ``upload_file``
* ``delete_files``
* ``sync_files``
* ``download_dataset``
* ``download_file``
* ``get_user_data``
* ``fetch_contributing_datasets``
* ``fetch_liked_datasets``
* ``fetch_datasets``
* ``fetch_contributing_projects``
* ``fetch_liked_projects``
* ``fetch_projects``
* ``get_project``
* ``create_project``
* ``update_project``
* ``replace_project``
* ``add_linked_dataset``
* ``remove_linked_dataset``
* ``delete_project``
* ``get_insight``
* ``get_insights_for_project``
* ``create_insight``
* ``replace_insight``
* ``update_insight``
* ``delete_insight``
* ``search_resources``
* ``create_new_tables``
* ``create_new_connections``

For a few examples of what the ``ApiClient`` can be used for, see below.
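
As a first sketch, here is how dataset metadata might be fetched (assuming
``get_dataset()`` accepts a dataset key in ``owner/dataset`` form, matching the
other calls below):

.. code-block:: python

    >>> client = dw.api_client()
    >>> metadata = client.get_dataset('username/test-dataset')  # hypothetical dataset key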

Add files from URL
..................

The ``add_files_via_url()`` function can be used to add files to a dataset from a URL.
This can be done by specifying ``files`` as a dictionary where each key is the desired file name and each value is an object containing ``url``, ``description``, and ``labels``.

For example:

.. code-block:: python

    >>> client = dw.api_client()
    >>> client.add_files_via_url(
    ...     'username/test-dataset',
    ...     files={'sample.xls': {'url': 'http://www.sample.com/sample.xls',
    ...                           'description': 'sample doc',
    ...                           'labels': ['raw data']}})

Append records to stream
........................

The ``append_records()`` function allows you to append JSON data to a data stream associated with a dataset. Streams do not need to be created in advance: a stream is created automatically the first time a ``streamId`` is used in an append operation.

For example:

.. code-block:: python

    >>> client = dw.api_client()
    >>> client.append_records('username/test-dataset', 'streamId', {'data': 'data'})

Contents of a stream will appear as part of the respective dataset as a ``.jsonl`` file.

You can find out more about these functions using ``help(client)``.


            
