thinknum 1.96

Home page: https://www.thinknum.com/demo/
Author: Thinknum
License: MIT
Keywords: thinknum, alternative data, data
Upload time: 2023-08-16 07:38:54

# Thinknum
As companies move their business operations to the Internet, new data trails are being created that can provide unique insights on these companies. Thinknum Alternative Data indexes all of these data trails in one platform, providing investors with critical data points that others miss.

## Installation
```
pip install thinknum
```

## Query

Import the library:

```python
from thinknum import Query
```

To authenticate, you must first obtain a client_id and client_secret from your assigned Thinknum account manager. Your client_secret must not be shared or exposed via publicly accessible resources (such as browser client-side scripting).

```python
q = Query(
    client_id='Your client id',
    client_secret='Your client secret'
)
```

The default timeout is 180 seconds. If you need a different timeout, configure it with the `timeout` argument (in seconds).

```python
q = Query(
    client_id='Your client id',
    client_secret='Your client secret',
    timeout=300
)
```

No timeout is applied if you set it to `None`.
```python
q = Query(
    client_id='Your client id',
    client_secret='Your client secret',
    timeout=None
)
```

If you need to use a proxy, you can configure it with the proxies argument.

```python
proxies = {
  "http": "http://10.10.1.10:3128",
  "https": "http://10.10.1.10:1080",
}

q = Query(
    client_id='Your client id',
    client_secret='Your client secret',
    proxies=proxies
)
```

Requests will skip SSL certificate verification if you set `verify` to `False`. By default, `verify` is set to `True`.

```python
q = Query(
    client_id='Your client id',
    client_secret='Your client secret',
    verify=False
)
```

You can get a list of datasets, each of which includes the dataset id and its `display_name`:
```python
q.get_dataset_list()
```
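For example, you could print each dataset's id next to its display name. This is a minimal sketch that assumes each item in the returned list is a dictionary exposing `id` and `display_name` keys, as described above; adjust to the actual response shape.

```python
# Sketch: iterate over the dataset list. The 'id' and 'display_name'
# keys are assumed from the description above, not guaranteed names.
for dataset in q.get_dataset_list():
    print(dataset['id'], '-', dataset['display_name'])
```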

You can get a dataset's metadata:
```python
q.get_dataset_metadata(dataset_id='job_listings')
```

It's possible to limit the dataset list to a specific ticker by specifying a "ticker" query parameter. For example, to get all datasets available for Apple Inc:

```python
q.get_ticker_dataset_list(query='nasdaq:aapl')
```

You can search for tickers.
```python
q.get_ticker_list(query="tesla")
```

You can also search for tickers within a particular dataset.
```python
q.get_ticker_list(query="tesla", dataset_id='job_listings')
```

You can retrieve data for a specific dataset and tickers with various filters. For example, to retrieve lulu's job listings in 2020:

```python
q.add_ticker('nasdaq:lulu') # Add ticker
q.add_filter(
    column='as_of_date',
    type='>=',
    value=["2020-01-01"]
)
q.add_filter(
    column='as_of_date',
    type='<=',
    value=["2020-12-31"]
)
q.add_sort(
    column='as_of_date',
    order='asc'
)   # Add Sort
q.get_data(dataset_id='job_listings')    # Retrieve data
```

You can retrieve data with OR filters. For example, to retrieve lulu's 2020 job listings whose title or description contains `sales`:
```python
q.add_ticker('nasdaq:lulu') # Add ticker
q.add_filter(
    column='as_of_date',
    type='>=',
    value=["2020-01-01"]
)
q.add_filter(
    column='as_of_date',
    type='<=',
    value=["2020-12-31"]
)
root_condition = q.add_condition(
    match='any'
)
q.add_filter(
    column='title',
    type='...',
    value='sales',
    condition=root_condition
)
q.add_filter(
    column='description',
    type='...',
    value='sales',
    condition=root_condition
)
q.get_data(dataset_id='job_listings')    # Retrieve data
```

You can retrieve data with more complicated filters. For example, to retrieve lulu's sales jobs in 2020 or marketing jobs in 2021:
```python
q.add_ticker('nasdaq:lulu') # Add ticker
root_condition = q.add_condition(
    match='any',
)
c1 = q.add_condition(
    match='all',
    condition=root_condition
)
q.add_filter(
    column='title',
    type='...',
    value='sales',
    condition=c1
)
q.add_filter(
    column='as_of_date',
    type='>=',
    value=["2020-01-01"],
    condition=c1
)
q.add_filter(
    column='as_of_date',
    type='<=',
    value=["2020-12-31"],
    condition=c1
)

c2 = q.add_condition(
    match='all',
    condition=root_condition
)
q.add_filter(
    column='title',
    type='...',
    value='marketing',
    condition=c2
)
q.add_filter(
    column='as_of_date',
    type='>=',
    value=["2021-01-01"],
    condition=c2
)
q.add_filter(
    column='as_of_date',
    type='<=',
    value=["2021-12-31"],
    condition=c2
)
q.get_data(dataset_id='job_listings')    # Retrieve data
```

Please note that the maximum nesting depth for conditions is two.

You can also specify `start` and `limit`. The default values are `1` and `100000`, respectively.
```python
q.get_data(
    dataset_id='job_listings', 
    start=1,
    limit=1000
)
```
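If a result set is larger than one page, you can walk through it by advancing `start` by `limit` until a page comes back short. The sketch below assumes the response is a dictionary carrying its records under a `rows` key; that key name is a hypothetical placeholder, so check the actual response for the correct field.

```python
# Sketch: page through results with start/limit.
# 'rows' is an assumed key, not a documented field name.
page_size = 1000
start = 1
all_rows = []
while True:
    result = q.get_data(
        dataset_id='job_listings',
        start=start,
        limit=page_size
    )
    rows = result.get('rows', [])
    all_rows.extend(rows)
    if len(rows) < page_size:  # last (short) page reached
        break
    start += page_size
```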

Sometimes you only need aggregated results for a dataset. In such cases you can retrieve them with the `add_group` and `add_aggregation` methods.

```python
q.add_ticker('nasdaq:lulu') # Add ticker
q.add_group(column='as_of_date') # Add group
q.add_aggregation(
    column='dataset__entity__entity_ticker__ticker__ticker',
    type='count'
)   # Add aggregation
q.add_sort(
    column='as_of_date',
    order='asc'
)   # Add sort
q.get_data(dataset_id='job_listings')
```
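If you work with pandas, the aggregated rows can be loaded into a DataFrame for further analysis. As in the pagination sketch above, the `rows` key is an assumption about the response shape, not a documented field.

```python
import pandas as pd

result = q.get_data(dataset_id='job_listings')
# 'rows' is an assumed key; inspect the response to confirm its structure.
df = pd.DataFrame(result.get('rows', []))
print(df.head())
```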

There are a few functions that you can apply to queries to gather even more insight into the data. You can retrieve a listing of the functions available for a dataset with the `get_dataset_metadata` method. For example, there is a `nearby` function for the `store` dataset.

```python
q.add_ticker('nasdaq:lulu')
q.add_function(
    function='nearby',
    parameters={
        "dataset_type": "dataset",
        "dataset": "store",
        "tickers":["nyse:ua"],
        "entities": [],
        "distance": 5,
        "is_include_closed": False
    }
)
q.get_data(dataset_id='store')
```

You can also apply the `nearest` function to the `store` dataset:
```python
q.add_ticker('nasdaq:lulu')
q.add_function(
    function='nearest',
    parameters={
        "dataset_type": "dataset",
        "dataset": "store",
        "tickers":["nyse:ua"],
        "entities": [],
        "ranks": [1],
        "is_include_closed": False
    }
)
q.get_data(dataset_id='store')
```

You can also apply the `sales` function to the `Car Inventory` dataset:

```python
q.add_ticker('nyse:kmx')
q.add_function(
    function='sales',
    parameters={
        "lookahead_day_count": 2,
        "start_date": "2020-01-01",
        "end_date": "2020-01-07"
    }
)
q.get_data(dataset_id='car_inventory')
```

You can also reset the entire query, or reset individual parts of it:
```python
q.reset_query()          # Reset the entire query
q.reset_tickers()        # Reset tickers only
q.reset_filters()        # Reset filters only
q.reset_functions()      # Reset functions only
q.reset_groups()         # Reset groups only
q.reset_aggregations()   # Reset aggregations only
q.reset_sorts()          # Reset sorts only
```
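A common pattern is to run one query, reset only part of the state, and reuse the rest. For example, assuming query state persists between `get_data` calls (which the reset methods suggest), you could fetch lulu's 2020 listings and then swap in 2021 filters while keeping the ticker:

```python
# First query: lulu's job listings in 2020
q.add_ticker('nasdaq:lulu')
q.add_filter(column='as_of_date', type='>=', value=["2020-01-01"])
q.add_filter(column='as_of_date', type='<=', value=["2020-12-31"])
data_2020 = q.get_data(dataset_id='job_listings')

# Reset only the filters, keep the ticker, and query 2021
q.reset_filters()
q.add_filter(column='as_of_date', type='>=', value=["2021-01-01"])
q.add_filter(column='as_of_date', type='<=', value=["2021-12-31"])
data_2021 = q.get_data(dataset_id='job_listings')
```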

## History

Import the library:

```python
from thinknum import History
```

As with `Query`, you must authenticate to use the `History` class.

```python
h = History(
    client_id='Your client id',
    client_secret='Your client secret'
)
```

If you need to use a proxy, you can configure it with the proxies argument.

```python
proxies = {
  "http": "http://10.10.1.10:3128",
  "https": "http://10.10.1.10:1080",
}

h = History(
    client_id='Your client id',
    client_secret='Your client secret',
    proxies=proxies
)
```

Requests will skip SSL certificate verification if you set `verify` to `False`. By default, `verify` is set to `True`.

```python
h = History(
    client_id='Your client id',
    client_secret='Your client secret',
    verify=False
)
```

To retrieve a list of the available historical files for a dataset:

```python
h.get_history_list(dataset_id='store')
```

You can view the metadata for the historical file:

```python
h.get_history_metadata(
    dataset_id='store',
    history_date='2020-03-09'
)
```

To download a CSV of the historical data:

```python
h.download(
    dataset_id='store',
    history_date='2020-03-09'
)
```

You can specify a download path:

```python
h.download(
    dataset_id='store',
    history_date='2020-03-09', 
    download_path='/Users/sangwonseo/Downloads'
)
```
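To pull every available historical file for a dataset, you could combine the two calls above. This sketch assumes each entry returned by `get_history_list` exposes a date string under a `history_date` key; that key name is an assumption, so inspect the list to confirm.

```python
# Sketch: download all historical files for the 'store' dataset.
# The 'history_date' key on each entry is an assumed field name.
for entry in h.get_history_list(dataset_id='store'):
    h.download(
        dataset_id='store',
        history_date=entry['history_date'],
        download_path='/tmp/thinknum_history'
    )
```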


## Upload

Import the library:

```python
from thinknum import Upload
```

As with `Query`, you must authenticate to use the `Upload` class.

```python
u = Upload(
    client_id='Your client id',
    client_secret='Your client secret'
)
```

You can upload and import a CSV file to Thinknum.

```python
result = u.upload_csv('/Users/sangwonseo/Downloads/data.csv')
```

You can check the status of the upload.

```python
u.check_upload_status(upload_id=result['id'])
```
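Since the upload status is exposed as a separate call, the import presumably completes asynchronously, so you may want to poll until it finishes. This is a sketch only: the `status` field and its values are assumed names, not documented ones.

```python
import time

# Sketch: poll until the upload is processed.
# The 'status' key and the 'complete'/'failed' values are assumptions.
while True:
    status = u.check_upload_status(upload_id=result['id'])
    print(status)
    if status.get('status') in ('complete', 'failed'):
        break
    time.sleep(10)
```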


## For more details about the library or API
Please visit https://docs.thinknum.com/docs

## If you are interested in Thinknum
Please request a demo at https://www.thinknum.com/demo/

## If you have any questions
Please email customersuccess@thinknum.com

## License

MIT



            
