# fds.sdk.StandardDatafeed

- **Name**: fds.sdk.StandardDatafeed
- **Version**: 0.40.1
- **Home page**: https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2
- **Summary**: Standard Datafeed client library for Python
- **Upload time**: 2024-04-08 14:52:07
- **Author**: FactSet Research Systems
- **Requires Python**: >=3.7
- **License**: Apache License, Version 2.0
- **Keywords**: factset, api, sdk
- **Requirements**: none recorded
            [![FactSet](https://raw.githubusercontent.com/factset/enterprise-sdk/main/docs/images/factset-logo.svg)](https://www.factset.com)

# Standard Datafeed client library for Python

[![PyPi](https://img.shields.io/pypi/v/fds.sdk.StandardDatafeed)](https://pypi.org/project/fds.sdk.StandardDatafeed/)
[![Apache-2 license](https://img.shields.io/badge/license-Apache2-brightgreen.svg)](https://www.apache.org/licenses/LICENSE-2.0)

[The Standard DataFeed (SDF) Download API](https://www.factset.com/marketplace/catalog/product/factset-standard-datafeed-download-api) 
provides an alternative method for users to request and retrieve SDF packages (schemas & bundles). 
This service is not a direct replacement for the Loader Application and does not have 100% feature parity with it.
It offers an alternative for users who are unable to utilize the Loader application for the following reasons:

* Inability to install third-party executables due to corporate security policies.

* Inability to utilize the Loader application due to limitations or restrictions with the environment used to consume Standard Datafeed.

* Clients using existing delivery methods like FTP who want a more secure and modern solution.

This API allows users to retrieve both SDF and QFL (Quant Factor Library: Factor Family & Factor Groups) packages they have subscriptions for, with data available since January 1, 1995.

Additional parameters are available to filter requests to get the exact files users are looking for.
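Among those filter parameters, the relative-date variants (`startDateRelative`, `endDateRelative`) take integer day offsets: '0' for today, '-1' for yesterday, and so on. A standalone sketch (plain Python, not part of the SDK) of how such an offset maps to an absolute `YYYY-MM-DD` date:

```python
from datetime import date, timedelta

def relative_to_absolute(offset: int, today: date) -> str:
    """Map a relative-date integer ('0' today, '-1' yesterday, ...) to YYYY-MM-DD."""
    return (today + timedelta(days=offset)).strftime("%Y-%m-%d")

print(relative_to_absolute(-1, date(2023, 1, 28)))  # 2023-01-27
```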

QFL data is delivered through two channels: the Content API and the Bulk Data API (the SDF API).

* Content API: Provides direct access to FactSet-hosted QFL data. Suitable for interactive, ad hoc QFL requests. Constraints on large extracts. Costs are based on consumption, i.e. more calls can result in more costs.

* Bulk Data API: Provides access to download locations of zip files for client download. Suitable for production processes within a client environment. Cost is based on the use case and fixed unless scope changes (same as other SDFs).
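Since the Bulk Data API returns download locations rather than file contents, the client still has to fetch each zip itself. A minimal standard-library sketch, assuming each response item exposes a download URL (the actual field names come from the response models, e.g. `ListFileObjectObject`):

```python
from pathlib import Path
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_from_url(url: str) -> str:
    """Derive a local file name from a download URL, with a fallback."""
    return Path(urlparse(url).path).name or "package.zip"

def download_package(url: str, dest_dir: str = ".") -> Path:
    """Fetch a package zip from a download URL into dest_dir (makes a network call)."""
    target = Path(dest_dir) / filename_from_url(url)
    urlretrieve(url, target)
    return target
```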

Please find all the content-related comprehensive documentation [here](https://my.apps.factset.com/oa/pages/15222).


This Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:

- API version: 2.0.0
- Package version: 0.40.1
- Build package: org.openapitools.codegen.languages.PythonClientCodegen

For more information, please visit [https://developer.factset.com/contact](https://developer.factset.com/contact)

## Requirements

* Python >= 3.7

## Installation

### Poetry

```shell
poetry add fds.sdk.utils fds.sdk.StandardDatafeed==0.40.1
```

### pip

```shell
pip install fds.sdk.utils fds.sdk.StandardDatafeed==0.40.1
```

## Usage

1. [Generate authentication credentials](../../../../README.md#authentication).
2. Set up a Python environment.
   1. Install and activate Python 3.7+. If you're using [pyenv](https://github.com/pyenv/pyenv):

      ```sh
      pyenv install 3.9.7
      pyenv shell 3.9.7
      ```

   2. (optional) [Install poetry](https://python-poetry.org/docs/#installation).
3. [Install dependencies](#installation).
4. Run the following:

> [!IMPORTANT]
> The parameter variables defined below are examples and may contain invalid values. Please replace them with valid values.

### Example Code

```python
from fds.sdk.utils.authentication import ConfidentialClient

import fds.sdk.StandardDatafeed
from fds.sdk.StandardDatafeed.api import sdf_and_qfl_content_library_api
from fds.sdk.StandardDatafeed.models import *
from dateutil.parser import parse as dateutil_parser
from pprint import pprint

# See configuration.py for a list of all supported configuration parameters.

# Examples for each supported authentication method are below,
# choose one that satisfies your use case.

# (Preferred) OAuth 2.0: FactSetOAuth2
# See https://github.com/FactSet/enterprise-sdk#oauth-20
# for information on how to create the app-config.json file
#
# The confidential client instance should be reused in production environments.
# See https://github.com/FactSet/enterprise-sdk-utils-python#authentication
# for more information on using the ConfidentialClient class
configuration = fds.sdk.StandardDatafeed.Configuration(
    fds_oauth_client=ConfidentialClient('/path/to/app-config.json')
)

# Basic authentication: FactSetApiKey
# See https://github.com/FactSet/enterprise-sdk#api-key
# for information on how to create an API key
# configuration = fds.sdk.StandardDatafeed.Configuration(
#     username='USERNAME-SERIAL',
#     password='API-KEY'
# )

# Enter a context with an instance of the API client
with fds.sdk.StandardDatafeed.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = sdf_and_qfl_content_library_api.SDFAndQFLContentLibraryApi(api_client)
    schema = "fgp_v1" # str | Name of the schema. (optional)
    bundle = "fgp_global_prices_am_v1" # str | Name of the bundle. (optional)
    type = "delta" # str | Type of the file.  Note: - Full files are snapshots of the bundle capturing the most recent version of the bundle generated every weekend. When requesting 'Full' files, ensure that the date range includes weekend dates. - Delta files include the incremental changes (inserts, updates, deletes), since the last file and have incremental sequence numbers.    (optional) if omitted the server will use the default value of "delta"
    start_date = "2023-01-01" # str | The earliest date of the feed file the API should fetch based on the file timestamp. Consider the following points:   - Dates provided in `startDate` and `endDate` along with `schema` parameter: The returned dataset is limited to a maximum of latest 30 days' worth of records. - Format: Should be absolute (YYYY-MM-DD).  (optional)
    start_date_relative = 1 # int | The earliest date of the feed file the API should fetch based on the file timestamp. Consider the following points:  - Dates provided in `startDate` and `endDate` along with `schema` parameter: The returned dataset is limited to a maximum of latest 30 days' worth of records. - Format: Specify the date using a relative term as an integer: '0' for today, '-1' for yesterday, '-2' for two days ago, and so forth. Negative values are used to represent past dates.  *Note:* - *Either `startDate` or `startDateRelative` should be used, but not both.* - *If both `startDate` and `startDateRelative` are provided in the same request, the API will return an error.* - *If users provide future dates in requests for `startDate` or `startDateRelative`, the API will not return any data.*  (optional)
    end_date = "2023-01-28T00:00:00.000Z" # str | The latest date of the feed file the API should fetch for based on the file timestamp.  - Format: Should be absolute - YYYY-MM-DD.  (optional)
    end_date_relative = 1 # int | The latest date of the feed file the API should fetch for based on the file timestamp.  - Format: Specify the date using a relative term as an integer: '0' for today, '-1' for yesterday, '-2' for two days ago, and so forth. Negative values are used to represent past dates.  *Note:* - *Either `endDate` or `endDateRelative` should be used, but not both.* - *If both `endDate` and `endDateRelative` are provided in the same request, the API will return an error.* - *If users provide future dates in requests for `endDate` or `endDateRelative`, the API will not return any data.*  (optional)
    pagination_limit = 20 # int | Specifies the number of results to return per page. (optional) if omitted the server will use the default value of 20
    pagination_offset = 0 # int | Specifies the starting point for pagination. This parameter is used to identify the beginning of next set of results. (optional) if omitted the server will use the default value of 0
    sort = ["-startDate"] # [str] | Enables sorting data in ascending or descending chronological order based on startDate.  (optional) if omitted the server will use the default value of ["-startDate"]

    try:
        # Returns delta & full files for the schemas.
        # example passing only required values which don't have defaults set
        # and optional values
        api_response = api_instance.get_list_files(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)

        pprint(api_response)
    except fds.sdk.StandardDatafeed.ApiException as e:
        print("Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\n" % e)

    # # Get response, http status code and response headers
    # try:
    #     # Returns delta & full files for the schemas.
    #     api_response, http_status_code, response_headers = api_instance.get_list_files_with_http_info(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)


    #     pprint(api_response)
    #     pprint(http_status_code)
    #     pprint(response_headers)
    # except fds.sdk.StandardDatafeed.ApiException as e:
    #     print("Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\n" % e)

    # # Get response asynchronous
    # try:
    #     # Returns delta & full files for the schemas.
    #     async_result = api_instance.get_list_files_async(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)
    #     api_response = async_result.get()


    #     pprint(api_response)
    # except fds.sdk.StandardDatafeed.ApiException as e:
    #     print("Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\n" % e)

    # # Get response, http status code and response headers asynchronous
    # try:
    #     # Returns delta & full files for the schemas.
    #     async_result = api_instance.get_list_files_with_http_info_async(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)
    #     api_response, http_status_code, response_headers = async_result.get()


    #     pprint(api_response)
    #     pprint(http_status_code)
    #     pprint(response_headers)
    # except fds.sdk.StandardDatafeed.ApiException as e:
    #     print("Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\n" % e)

```
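`get_list_files` is paginated through `pagination_limit` and `pagination_offset`. One way to walk all pages is to advance the offset until a page comes back short; a sketch using a generic `fetch_page` callable (a hypothetical stand-in for a wrapper around the SDK call):

```python
def iter_all_files(fetch_page, limit=20):
    """Yield every item by advancing the offset until a short page appears."""
    offset = 0
    while True:
        page = fetch_page(offset=offset, limit=limit)
        yield from page
        if len(page) < limit:  # a short (or empty) page means we are done
            break
        offset += limit

# With the SDK, fetch_page could call api_instance.get_list_files(
#     pagination_offset=offset, pagination_limit=limit, ...)
# and return api_response.to_dict()["data"].
```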

### Using Pandas

To convert an API response to a Pandas DataFrame, first convert it to a dictionary:
```python
import pandas as pd

response_dict = api_response.to_dict()['data']

simple_json_response = pd.DataFrame(response_dict)
nested_json_response = pd.json_normalize(response_dict)
```
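For instance, with a hypothetical payload containing nested objects (the field names below are illustrative, not the actual schema), `json_normalize` flattens the nesting into dotted column names:

```python
import pandas as pd

# Illustrative sample shaped like api_response.to_dict()["data"];
# real field names come from the ListFile model.
sample = [
    {"id": "f1", "startDate": "2023-01-02", "schema": {"name": "fgp_v1"}},
    {"id": "f2", "startDate": "2023-01-03", "schema": {"name": "fgp_v1"}},
]

flat = pd.json_normalize(sample)
print(sorted(flat.columns))  # ['id', 'schema.name', 'startDate']
```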

### Debugging

The SDK uses the standard library [`logging`](https://docs.python.org/3/library/logging.html#module-logging) module.

Setting `debug` to `True` on an instance of the `Configuration` class sets the log level of related packages to `DEBUG`
and enables additional logging in Python's [HTTP client](https://docs.python.org/3/library/http.client.html).

**Note**: This prints out sensitive information (e.g. the full request and response). Use with care.

```python
import logging
import fds.sdk.StandardDatafeed

logging.basicConfig(level=logging.DEBUG)

configuration = fds.sdk.StandardDatafeed.Configuration(...)
configuration.debug = True
```

### Configure a Proxy

You can pass proxy settings to the Configuration class:

* `proxy`: the URL of the proxy to use.
* `proxy_headers`: a dictionary of additional headers to pass to the proxy (e.g. `Proxy-Authorization`).

```python
import fds.sdk.StandardDatafeed

configuration = fds.sdk.StandardDatafeed.Configuration(
    # ...
    proxy="http://secret:password@localhost:5050",
    proxy_headers={
        "Custom-Proxy-Header": "Custom-Proxy-Header-Value"
    }
)
```

### Custom SSL Certificate

TLS/SSL certificate verification can be configured with the following Configuration parameters:

* `ssl_ca_cert`: a path to the certificate to use for verification in `PEM` format.
* `verify_ssl`: setting this to `False` disables the verification of certificates.
  Disabling the verification is not recommended, but it might be useful during
  local development or testing.

```python
import fds.sdk.StandardDatafeed

configuration = fds.sdk.StandardDatafeed.Configuration(
    # ...
    ssl_ca_cert='/path/to/ca.pem'
)
```


## Documentation for API Endpoints

All URIs are relative to *https://api.factset.com/bulk-documents/sdf/v2*

Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*SDFAndQFLContentLibraryApi* | [**get_list_files**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SDFAndQFLContentLibraryApi.md#get_list_files) | **GET** /list-files | Returns delta & full files for the schemas.
*SDFAndQFLContentLibraryApi* | [**gethistorical_files**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SDFAndQFLContentLibraryApi.md#gethistorical_files) | **GET** /historical-files | Returns full historic data of specified schema and bundle.
*SchemaApi* | [**get_list_schemaswithoutwithoout_required_parameters**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaApi.md#get_list_schemaswithoutwithoout_required_parameters) | **GET** /list-schemas | List of Standard DataFeed (SDF) schemas.
*SchemaApi* | [**get_schema_details**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaApi.md#get_schema_details) | **GET** /schema-details | Schema Details.


## Documentation For Models

 - [ErrorExample](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ErrorExample.md)
 - [ErrorResponse](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ErrorResponse.md)
 - [HistoricalFile](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFile.md)
 - [HistoricalFileObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFileObject.md)
 - [HistoricalFileObjectObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFileObjectObject.md)
 - [ListFile](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFile.md)
 - [ListFileObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFileObject.md)
 - [ListFileObjectObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFileObjectObject.md)
 - [ListSchema](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListSchema.md)
 - [ListSchemaObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListSchemaObject.md)
 - [Meta](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/Meta.md)
 - [Pagination](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/Pagination.md)
 - [SchemaDetail](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaDetail.md)
 - [SchemaDetailObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaDetailObject.md)


## Documentation For Authorization


## FactSetApiKey

- **Type**: HTTP basic authentication


## FactSetOAuth2

- **Type**: OAuth
- **Flow**: application
- **Authorization URL**: 
- **Scopes**: N/A


## Notes for Large OpenAPI documents
If the OpenAPI document is large, imports in `fds.sdk.StandardDatafeed.apis` and `fds.sdk.StandardDatafeed.models` may fail with a
`RecursionError` indicating the maximum recursion limit has been exceeded. In that case, there are a couple of solutions:

Solution 1:
Use specific imports for apis and models like:
- `from fds.sdk.StandardDatafeed.api.default_api import DefaultApi`
- `from fds.sdk.StandardDatafeed.model.pet import Pet`

Solution 2:
Before importing the package, adjust the maximum recursion limit as shown below:
```python
import sys
sys.setrecursionlimit(1500)
import fds.sdk.StandardDatafeed
from fds.sdk.StandardDatafeed.apis import *
from fds.sdk.StandardDatafeed.models import *
```

## Contributing

Please refer to the [contributing guide](../../../../CONTRIBUTING.md).

## Copyright

Copyright 2022 FactSet Research Systems Inc

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.




            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2",
    "name": "fds.sdk.StandardDatafeed",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "FactSet, API, SDK",
    "author": "FactSet Research Systems",
    "author_email": null,
    "download_url": null,
    "platform": null,
    "description": "[![FactSet](https://raw.githubusercontent.com/factset/enterprise-sdk/main/docs/images/factset-logo.svg)](https://www.factset.com)\n\n# Standard Datafeed client library for Python\n\n[![PyPi](https://img.shields.io/pypi/v/fds.sdk.StandardDatafeed)](https://pypi.org/project/fds.sdk.StandardDatafeed/)\n[![Apache-2 license](https://img.shields.io/badge/license-Apache2-brightgreen.svg)](https://www.apache.org/licenses/LICENSE-2.0)\n\n[The Standard DataFeed (SDF) Download API](https://www.factset.com/marketplace/catalog/product/factset-standard-datafeed-download-api) \nprovides an alternative method for users to request and retrieve SDF packages (schemas & bundles). \nThis service is not a direct replacement and does not have 100% feature parity with the Loader Application. \nThis API provides an alternative for users who are unable to utilize the Loader application due to the following reasons:\n\n* Inability to install 3rd party executables due to Corporate Security policies.\n\n* Inability to utilize the Loader application due to limitations or restrictions with the environment used to consume Standard Datafeed.\n\n* Clients who are utilizing existing delivery methods like FTP, who may want to use a more secure & modern solution.\n\nThis API allows users to retrieve\n\n* Both SDF and the QFL (Quant Factor Library (Factor Family & Factor Groups)) packages they have subscriptions for, with data available since January 1, 1995.\n\nAdditional parameters are available to filter requests to get the exact files users are looking for.\n\nQFL data is delivered through Content API & Bulk Data API (SDF API).\n\n* Content API: Provides direct access to FactSet-hosted QFL data. Suitable for interactive, ad hoc QFL requests. Constraints on large extracts. Costs are based on consumption, i.e. more calls can result in more costs.\n\n* Bulk Data API: Provides access to download locations of zip files for client download. 
Suitable for production processes within a client environment. Cost is based on the use case and fixed unless scope changes (same as other SDFs).\n\nPlease find all the content-related comprehensive documentation [here](https://my.apps.factset.com/oa/pages/15222).\n\n\nThis Python package is automatically generated by the [OpenAPI Generator](https://openapi-generator.tech) project:\n\n- API version: 2.0.0\n- Package version: 0.40.1\n- Build package: org.openapitools.codegen.languages.PythonClientCodegen\n\nFor more information, please visit [https://developer.factset.com/contact](https://developer.factset.com/contact)\n\n## Requirements\n\n* Python >= 3.7\n\n## Installation\n\n### Poetry\n\n```shell\npoetry add fds.sdk.utils fds.sdk.StandardDatafeed==0.40.1\n```\n\n### pip\n\n```shell\npip install fds.sdk.utils fds.sdk.StandardDatafeed==0.40.1\n```\n\n## Usage\n\n1. [Generate authentication credentials](../../../../README.md#authentication).\n2. Setup Python environment.\n   1. Install and activate python 3.7+. If you're using [pyenv](https://github.com/pyenv/pyenv):\n\n      ```sh\n      pyenv install 3.9.7\n      pyenv shell 3.9.7\n      ```\n\n   2. (optional) [Install poetry](https://python-poetry.org/docs/#installation).\n3. [Install dependencies](#installation).\n4. Run the following:\n\n> [!IMPORTANT]\n> The parameter variables defined below are just examples and may potentially contain non valid values. 
Please replace them with valid values.\n\n### Example Code\n\n```python\nfrom fds.sdk.utils.authentication import ConfidentialClient\n\nimport fds.sdk.StandardDatafeed\nfrom fds.sdk.StandardDatafeed.api import sdf_and_qfl_content_library_api\nfrom fds.sdk.StandardDatafeed.models import *\nfrom dateutil.parser import parse as dateutil_parser\nfrom pprint import pprint\n\n# See configuration.py for a list of all supported configuration parameters.\n\n# Examples for each supported authentication method are below,\n# choose one that satisfies your use case.\n\n# (Preferred) OAuth 2.0: FactSetOAuth2\n# See https://github.com/FactSet/enterprise-sdk#oauth-20\n# for information on how to create the app-config.json file\n#\n# The confidential client instance should be reused in production environments.\n# See https://github.com/FactSet/enterprise-sdk-utils-python#authentication\n# for more information on using the ConfidentialClient class\nconfiguration = fds.sdk.StandardDatafeed.Configuration(\n    fds_oauth_client=ConfidentialClient('/path/to/app-config.json')\n)\n\n# Basic authentication: FactSetApiKey\n# See https://github.com/FactSet/enterprise-sdk#api-key\n# for information how to create an API key\n# configuration = fds.sdk.StandardDatafeed.Configuration(\n#     username='USERNAME-SERIAL',\n#     password='API-KEY'\n# )\n\n# Enter a context with an instance of the API client\nwith fds.sdk.StandardDatafeed.ApiClient(configuration) as api_client:\n    # Create an instance of the API class\n    api_instance = sdf_and_qfl_content_library_api.SDFAndQFLContentLibraryApi(api_client)\n    schema = \"fgp_v1\" # str | Name of the schema. (optional)\n    bundle = \"fgp_global_prices_am_v1\" # str | Name of the bundle. (optional)\n    type = \"delta\" # str | Type of the file.  Note: - Full files are snapshots of the bundle capturing the most recent version of the bundle generated every weekend. When requesting 'Full' files, ensure that the date range includes weekend dates. 
- Delta files include the incremental changes (inserts, updates, deletes), since the last file and have incremental sequence numbers.    (optional) if omitted the server will use the default value of \"delta\"\n    start_date = \"2023-01-01\" # str | The earliest date of the feed file the API should fetch based on the file timestamp. Consider the following points:   - Dates provided in `startDate` and `endDate` along with `schema` parameter: The returned dataset is limited to a maximum of latest 30 days' worth of records. - Format: Should be absolute (YYYY-MM-DD).  (optional)\n    start_date_relative = 1 # int | The earliest date of the feed file the API should fetch based on the file timestamp. Consider the following points:  - Dates provided in `startDate` and `endDate` along with `schema` parameter: The returned dataset is limited to a maximum of latest 30 days' worth of records. - Format: Specify the date using a relative term as an integer: '0' for today, '-1' for yesterday, '-2' for two days ago, and so forth. Negative values are used to represent past dates.  *Note:* - *Either `startDate` or `startDateRelative` should be used, but not both.* - *If both `startDate` and `startDateRelative` are provided in the same request, the API will return an error.* - *If users provide future dates in requests for `startDate` or `startDateRelative`, the API will not return any data.*  (optional)\n    end_date = \"2023-01-28T00:00:00.000Z\" # str | The latest date of the feed file the API should fetch for based on the file timestamp.  - Format: Should be absolute - YYYY-MM-DD.  (optional)\n    end_date_relative = 1 # int | The latest date of the feed file the API should fetch for based on the file timestamp.  - Format: Specify the date using a relative term as an integer: '0' for today, '-1' for yesterday, '-2' for two days ago, and so forth. Negative values are used to represent past dates.  
*Note:* - *Either `endDate` or `endDateRelative` should be used, but not both.* - *If both `endDate` and `endDateRelative` are provided in the same request, the API will return an error.* - *If users provide future dates in requests for `endDate` or `endDateRelative`, the API will not return any data.*  (optional)\n    pagination_limit = 20 # int | Specifies the number of results to return per page. (optional) if omitted the server will use the default value of 20\n    pagination_offset = 0 # int | Specifies the starting point for pagination. This parameter is used to identify the beginning of next set of results. (optional) if omitted the server will use the default value of 0\n    sort = [\"-startDate\"] # [str] | Enables sorting data in ascending or descending chronological order based on startDate.  (optional) if omitted the server will use the default value of [\"-startDate\"]\n\n    try:\n        # Returns delta & full files for the schemas.\n        # example passing only required values which don't have defaults set\n        # and optional values\n        api_response = api_instance.get_list_files(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)\n\n        pprint(api_response)\n    except fds.sdk.StandardDatafeed.ApiException as e:\n        print(\"Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\\n\" % e)\n\n    # # Get response, http status code and response headers\n    # try:\n    #     # Returns delta & full files for the schemas.\n    #     api_response, http_status_code, response_headers = api_instance.get_list_files_with_http_info(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, 
pagination_offset=pagination_offset, sort=sort)\n\n\n    #     pprint(api_response)\n    #     pprint(http_status_code)\n    #     pprint(response_headers)\n    # except fds.sdk.StandardDatafeed.ApiException as e:\n    #     print(\"Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\\n\" % e)\n\n    # # Get response asynchronous\n    # try:\n    #     # Returns delta & full files for the schemas.\n    #     async_result = api_instance.get_list_files_async(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)\n    #     api_response = async_result.get()\n\n\n    #     pprint(api_response)\n    # except fds.sdk.StandardDatafeed.ApiException as e:\n    #     print(\"Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\\n\" % e)\n\n    # # Get response, http status code and response headers asynchronous\n    # try:\n    #     # Returns delta & full files for the schemas.\n    #     async_result = api_instance.get_list_files_with_http_info_async(schema=schema, bundle=bundle, type=type, start_date=start_date, start_date_relative=start_date_relative, end_date=end_date, end_date_relative=end_date_relative, pagination_limit=pagination_limit, pagination_offset=pagination_offset, sort=sort)\n    #     api_response, http_status_code, response_headers = async_result.get()\n\n\n    #     pprint(api_response)\n    #     pprint(http_status_code)\n    #     pprint(response_headers)\n    # except fds.sdk.StandardDatafeed.ApiException as e:\n    #     print(\"Exception when calling SDFAndQFLContentLibraryApi->get_list_files: %s\\n\" % e)\n\n```\n\n### Using Pandas\n\nTo convert an API response to a Pandas DataFrame, it is necessary to transform it first to a dictionary.\n```python\nimport pandas as pd\n\nresponse_dict = 
api_response.to_dict()['data']

simple_json_response = pd.DataFrame(response_dict)
nested_json_response = pd.json_normalize(response_dict)
```

### Debugging

The SDK uses the standard library [`logging`](https://docs.python.org/3/library/logging.html#module-logging) module.

Setting `debug` to `True` on an instance of the `Configuration` class sets the log level of related packages to `DEBUG`
and enables additional logging in Python's [HTTP client](https://docs.python.org/3/library/http.client.html).

**Note**: This prints out sensitive information (e.g. the full request and response). Use with care.

```python
import logging
import fds.sdk.StandardDatafeed

logging.basicConfig(level=logging.DEBUG)

configuration = fds.sdk.StandardDatafeed.Configuration(...)
configuration.debug = True
```

### Configure a Proxy

You can pass proxy settings to the `Configuration` class:

* `proxy`: the URL of the proxy to use.
* `proxy_headers`: a dictionary of additional headers to pass to the proxy (e.g. `Proxy-Authorization`).

```python
import fds.sdk.StandardDatafeed

configuration = fds.sdk.StandardDatafeed.Configuration(
    # ...
    proxy="http://secret:password@localhost:5050",
    proxy_headers={
        "Custom-Proxy-Header": "Custom-Proxy-Header-Value"
    }
)
```

### Custom SSL Certificate

TLS/SSL certificate verification can be configured with the following `Configuration` parameters:

* `ssl_ca_cert`: a path to the certificate to use for verification, in `PEM` format.
* `verify_ssl`: setting this to `False` disables the verification of certificates.
  Disabling the verification is not recommended, but it might be useful during
  local development or testing.

```python
import fds.sdk.StandardDatafeed

configuration = fds.sdk.StandardDatafeed.Configuration(
    # ...
    ssl_ca_cert='/path/to/ca.pem'
)
```

## Documentation for API Endpoints

All URIs are relative to *https://api.factset.com/bulk-documents/sdf/v2*

Class | Method | HTTP request | Description
------------ | ------------- | ------------- | -------------
*SDFAndQFLContentLibraryApi* | [**get_list_files**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SDFAndQFLContentLibraryApi.md#get_list_files) | **GET** /list-files | Returns delta & full files for the schemas.
*SDFAndQFLContentLibraryApi* | [**gethistorical_files**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SDFAndQFLContentLibraryApi.md#gethistorical_files) | **GET** /historical-files | Returns full historical data for the specified schema and bundle.
*SchemaApi* | [**get_list_schemaswithoutwithoout_required_parameters**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaApi.md#get_list_schemaswithoutwithoout_required_parameters) | **GET** /list-schemas | List of Standard DataFeed (SDF) schemas.
*SchemaApi* | [**get_schema_details**](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaApi.md#get_schema_details) | **GET** /schema-details | Schema details.

## Documentation For Models

 - [ErrorExample](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ErrorExample.md)
 - [ErrorResponse](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ErrorResponse.md)
 - [HistoricalFile](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFile.md)
 - [HistoricalFileObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFileObject.md)
 - [HistoricalFileObjectObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/HistoricalFileObjectObject.md)
 - [ListFile](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFile.md)
 - [ListFileObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFileObject.md)
 - [ListFileObjectObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListFileObjectObject.md)
 - [ListSchema](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListSchema.md)
 - [ListSchemaObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/ListSchemaObject.md)
 - [Meta](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/Meta.md)
 - [Pagination](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/Pagination.md)
 - [SchemaDetail](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaDetail.md)
 - [SchemaDetailObject](https://github.com/FactSet/enterprise-sdk/tree/main/code/python/StandardDatafeed/v2/docs/SchemaDetailObject.md)

## Documentation For Authorization

### FactSetApiKey

- **Type**: HTTP basic authentication

### FactSetOAuth2

- **Type**: OAuth
- **Flow**: application
- **Authorization URL**: 
- **Scopes**: N/A

## Notes for Large OpenAPI Documents

If the OpenAPI document is large, imports in `fds.sdk.StandardDatafeed.apis` and `fds.sdk.StandardDatafeed.models` may fail with a
`RecursionError` indicating the maximum recursion limit has been exceeded. In that case, there are two solutions:

Solution 1: Use specific imports for apis and models, for example:

- `from fds.sdk.StandardDatafeed.api.default_api import DefaultApi`
- `from fds.sdk.StandardDatafeed.model.pet import Pet`

Solution 2: Before importing the package, raise the maximum recursion limit as shown below:

```python
import sys
sys.setrecursionlimit(1500)
import fds.sdk.StandardDatafeed
from fds.sdk.StandardDatafeed.apis import *
from fds.sdk.StandardDatafeed.models import *
```

## Contributing

Please refer to the [contributing guide](../../../../CONTRIBUTING.md).

## Copyright

Copyright 2022 FactSet Research Systems Inc

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
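As a sketch of the difference between the `pd.DataFrame` and `pd.json_normalize` conversions shown above: `pd.DataFrame` keeps nested dicts as objects in a single column, while `json_normalize` flattens them into dotted columns. The field names below are illustrative, not the exact SDF response schema:

```python
import pandas as pd

# Illustrative response shape; the real /list-files payload fields may differ.
response_dict = [
    {"id": "doc1", "file": {"name": "sdf_v1_full.zip", "sizeBytes": 1024}},
    {"id": "doc2", "file": {"name": "sdf_v1_delta.zip", "sizeBytes": 256}},
]

# Nested dicts stay as Python objects in the 'file' column.
simple = pd.DataFrame(response_dict)
print(list(simple.columns))   # ['id', 'file']

# Nested dicts are flattened into dotted column names.
nested = pd.json_normalize(response_dict)
print(list(nested.columns))   # ['id', 'file.name', 'file.sizeBytes']
```

Use the flattened form when you want to filter or sort on nested attributes directly.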
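The recursion-limit workaround in "Notes for Large OpenAPI Documents" changes interpreter-wide state. One way to make it safer is to restore the previous limit once the imports are done; this is a generic Python sketch, not part of this SDK's API:

```python
import sys
from contextlib import contextmanager

@contextmanager
def recursion_limit(limit):
    """Temporarily raise the interpreter's recursion limit."""
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(limit)
    try:
        yield
    finally:
        # Restore the original limit even if the import fails.
        sys.setrecursionlimit(old)

# The heavy imports go inside the block:
with recursion_limit(1500):
    pass  # e.g. `from fds.sdk.StandardDatafeed.apis import *`
```

Note that the imported modules remain cached in `sys.modules`, so the higher limit is only needed for the first import.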