pyhasura

Name: pyhasura
Version: 1.0.22
Summary: A Python library to simplify Hasura, GraphQL and Machine Learning
Author: Kenneth Stott <ken@kenstott.com>
Repository: https://github.com/kenstott/pyhasura
Upload time: 2024-04-28 15:43:54
Requires Python: >=3.8
Keywords: graphql, hasura, ml, ai, machine learning, arrow
Requirements: python-dotenv, gql, pyhasura, pandas, pyarrow, scikit-learn, setuptools, aiohttp
# PyHasura

A library for conveniently working with Hasura, GraphQL, File Formats, and some basic Machine Learning.

## Getting Started

### HasuraClient

```python
import os
from pprint import pprint

from dotenv import load_dotenv
from pyhasura import HasuraClient, ExportFormat

load_dotenv()  # Load environment variables from .env

# Create the Hasura client from environment configuration
hasura_client = HasuraClient(
    uri=os.environ.get("HASURA_URI"),
    admin_secret=os.environ.get("HASURA_ADMIN_SECRET"),
)
```

### Query for a Result

```python
result = hasura_client.execute("""
        query findCarts {
            carts {
                is_complete
                cart_items {
                    quantity
                    product {
                        price
                    }
                }
            }
            cart_items {
                id
            }
        }
    """)

pprint(result)
```
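The result is the GraphQL response data: a dictionary keyed by the query's root fields, each holding a list of rows. A minimal sketch of walking such a result (the sample data below is hypothetical, shaped like the query above):

```python
# Hypothetical response shaped like the findCarts query
result = {
    "carts": [
        {"is_complete": True,
         "cart_items": [{"quantity": 2, "product": {"price": 9.99}}]},
        {"is_complete": False,
         "cart_items": [{"quantity": 1, "product": {"price": 4.50}}]},
    ],
    "cart_items": [{"id": "a1"}, {"id": "b2"}],
}

# Total value of completed carts only
total = sum(
    item["quantity"] * item["product"]["price"]
    for cart in result["carts"] if cart["is_complete"]
    for item in cart["cart_items"]
)
print(total)  # 19.98
```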

### Convert Results to a Dictionary of Alternate Formats

```python
result = hasura_client.convert_output_format(ExportFormat.ARROW)
pprint(result)
result = hasura_client.convert_output_format(ExportFormat.CSV)
pprint(result)
result = hasura_client.convert_output_format(ExportFormat.PARQUET)
pprint(result)
result = hasura_client.convert_output_format(ExportFormat.DATAFRAME)
pprint(result)
result = hasura_client.convert_output_format(ExportFormat.FLAT)
pprint(result)
```
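Each call returns a dictionary with one entry per root field of the last query, converted to the requested format. The DATAFRAME output, for example, can go straight into ordinary pandas operations; the sketch below fakes the client's output with a hand-built dictionary (the dataset name and columns come from the sample query):

```python
import pandas as pd

# Stand-in for hasura_client.convert_output_format(ExportFormat.DATAFRAME)
frames = {"cart_items": pd.DataFrame([{"id": "a1"}, {"id": "b2"}])}

# Normal pandas from here on
print(frames["cart_items"].shape)  # (2, 1)
```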

### Write Results

One file is written for each root entry in the query.

```python
result = hasura_client.write_to_file(output_format=ExportFormat.ARROW)
pprint(result)
result = hasura_client.write_to_file(output_format=ExportFormat.CSV)
pprint(result)
result = hasura_client.write_to_file(output_format=ExportFormat.PARQUET)
pprint(result)
result = hasura_client.write_to_file(output_format=ExportFormat.FLAT)
pprint(result)
result = hasura_client.write_to_file(output_format=ExportFormat.NATURAL)
pprint(result)
```

### Detect Anomalies

Uses scikit-learn's DictVectorizer. Text values are assumed to be categorical (enumerations).
To do: allow an alternate vectorizer (e.g. Word2Vec) to bring more semantic meaning into anomaly detection.

```python
result = hasura_client.anomalies()
pprint(result)
result = hasura_client.anomalies(threshold=0.03)
pprint(result)
```
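The vectorize-then-score idea can be sketched standalone with scikit-learn's DictVectorizer: string values become one-hot features, numbers pass through, and an outlier score is computed on the resulting matrix. This illustrates the technique, not pyhasura's exact internals; a deliberately simple distance-from-the-mean score stands in for the real detector:

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer

rows = [
    {"status": "complete", "quantity": 2},
    {"status": "complete", "quantity": 3},
    {"status": "complete", "quantity": 2},
    {"status": "open", "quantity": 500},  # the oddball
]

# One-hot encode the categorical 'status', keep 'quantity' numeric
X = DictVectorizer(sparse=False).fit_transform(rows)

# Toy outlier score: Euclidean distance from the column means
dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
outlier = int(dist.argmax())
print(outlier)  # 3
```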

### Train and Serialize then Re-Use for Anomaly Detection

Typically, you train on a historical dataset and then search for anomalies
in a different (often the current) dataset.
```python
result = hasura_client.anomalies_training()
pprint(result)
result = hasura_client.anomalies(training_files=result, threshold=0)
pprint(result)
```
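The train-once, score-later pattern can be sketched with plain pickle: fit something on historical data, serialize it, then reload it and apply it to new data. Everything below (the file name, the toy "model") is made up for illustration:

```python
import os
import pickle
import tempfile

import numpy as np

# "Training": remember the column means of a historical dataset
historical = np.array([[2.0], [3.0], [2.0]])
model = {"mean": historical.mean(axis=0)}

path = os.path.join(tempfile.mkdtemp(), "cart_items.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)  # serialize after training

with open(path, "rb") as f:
    reloaded = pickle.load(f)  # reuse later, possibly in another process

# Score a different (current) dataset against the trained model
current = np.array([[2.0], [400.0]])
scores = np.abs(current - reloaded["mean"]).ravel()
print(int(scores.argmax()))  # 1
```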

### Clustering

Uses KMedoids clustering. You always work on a dictionary of datasets, so the number of
clusters for each dataset is supplied in a corresponding input dictionary. You can
auto-generate the optimal number of clusters per dataset and use that result as the input.

result = hasura_client.optimal_number_of_clusters(1, 8)
pprint(result)
result = hasura_client.clusters(result)
pprint(result)
```
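The pick-the-best-k idea can be sketched with core scikit-learn: cluster for each candidate k and keep the k with the highest silhouette score. (KMedoids itself lives in scikit-learn-extra, so KMeans stands in here; the data is synthetic.)

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Two well-separated synthetic blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])

# Try k = 2..5, keep the k with the highest silhouette score
best_k = max(
    range(2, 6),
    key=lambda k: silhouette_score(
        X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    ),
)
print(best_k)  # 2
```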

            
