cannula

Name: cannula
Version: 0.11.0
Summary: Async GraphQL Helper Library
Upload time: 2024-01-20 21:01:32
Requires Python: >=3.8
Keywords: graphql

# Cannula

[![CircleCI](https://circleci.com/gh/rmyers/cannula.svg?style=shield)](https://circleci.com/gh/rmyers/cannula)
[![Documentation Status](https://readthedocs.org/projects/cannula/badge/?version=main)](https://cannula.readthedocs.io/en/main/?badge=main)

> GraphQL for people who like Python!

* [Why Cannula](#why)
* [Installation](#install)
* [Quick Start](#start)
* [Performance](#performance)
* [Examples](#examples)
* [Documentation](https://cannula.readthedocs.io/)

<h2 id="why">Why Cannula?</h2>

We wanted to make the world a better place, but we are programmers, so we settled
on making the web fun again. Too much attention has been given to JavaScript
client libraries. They all seem to compete on size, speed, and features, but
most of them do not solve the actual problems you have. So while the todo
application is quick and easy to follow, the hard parts still take a long time
to complete.

Nowadays, if you want a fancy single-page application, you need to invest a
good week or so planning out all the tools you will need to assemble your site.
Every decision is full of sorrow and doubt as you google the latest trends,
how to set up unit tests, or where to find a bootstrapped version of the
library you like.

Using GraphQL, you can simplify your web application stack and reduce
dependencies while achieving the same customer experience without regret. By
using a schema to define your application, you can auto-generate much of the
code you need to interact with it.

Our Philosophy:
1. Make your site easy to maintain.
2. Document your code.
3. Don't lock yourself into a framework.
4. Be happy!

<h2 id="install">Installation</h2>

Requires Python 3.8 or greater! The only dependency is
[graphql-core-next](https://graphql-core-next.readthedocs.io/en/latest/).

```bash
pip3 install cannula
```

<h2 id="start">Quick Start</h2>

Here is a small [hello world example](examples/hello.py):

```python
import typing
import sys

import cannula

SCHEMA = """
    type Query {
        hello(who: String!): String
    }
"""

# Basic API setup with the schema we defined
api = cannula.API(schema=SCHEMA)


# The query resolver takes `source` and `info` objects
# plus any arguments defined by the schema. Here we
# only accept a single argument, `who`.
@api.query()
async def hello(
    source: typing.Any,
    info: cannula.ResolveInfo,
    who: str,
) -> str:
    # Here the field_name is 'hello' so we'll
    # return 'hello {who}!'
    return f"{info.field_name} {who}!"


# Pre-parse your query to speed up your requests.
SAMPLE_QUERY = cannula.gql(
    """
    query HelloWorld ($who: String!) {
        hello(who: $who)
    }
"""
)


def run_hello(who: str = "world"):
    return api.call_sync(SAMPLE_QUERY, variables={"who": who})


if __name__ == "__main__":
    who = "world"
    if len(sys.argv) > 1:
        who = sys.argv[1]

    print(run_hello(who))

```

You should see results like this when you run the sample on the command line:

```
$ python3 examples/hello.py
ExecutionResult(
  data={'hello': 'hello world!'},
  errors=None
)

$ python3 examples/hello.py Bob
ExecutionResult(
  data={'hello': 'hello Bob!'},
  errors=None
)
```
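
The `ExecutionResult` printed above exposes `data` and `errors` attributes, so you can also inspect the result in code rather than printing the whole object. A minimal sketch building on `run_hello` from the example above:

```python
# Inspect the ExecutionResult returned by run_hello() instead of printing
# the whole object.
result = run_hello("Bob")

if result.errors:
    for error in result.errors:
        print(f"GraphQL error: {error}")
else:
    print(result.data["hello"])  # -> hello Bob!
```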

<h2 id="performance">Performance</h2>

We try to make sure Cannula is as fast as possible. While real-world benchmarks are always difficult, we have a simple test that attempts to show how Cannula performs against other setups.

You can view the tests in [performance](performance/test_performance.py). We have a simple function that returns data, then compare the time it takes to return those results from a plain FastAPI app versus a GraphQL request. The same GraphQL request is then run with both Cannula and Ariadne. Here is a sample of the output:

```
1000 iterations (lower is better)

test_performance.py::test_performance
performance test results:
fastapi: 0.41961031800019555
ariadne results: 1.8639117470011115
cannula results: 0.5465521310106851
PASSED
test_performance.py::test_performance_invalid_request
performance test results:
fastapi: 0.375848950992804
ariadne results: 0.8494849189883098
cannula results: 0.4427280649833847
PASSED
test_performance.py::test_performance_invalid_query
performance test results:
fastapi: 0.37241295698913746
ariadne results: 2.1828249279933516
cannula results: 0.4591125229781028
PASSED
```
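
If you want a rough number of your own, a minimal sketch in the same spirit is to time `run_hello` from the Quick Start with `timeit` (this assumes that example is saved as `hello.py` next to the script; it is not the actual `performance/test_performance.py` harness):

```python
# Rough timing sketch, not the real performance/test_performance.py harness.
# Assumes the Quick Start example above is saved as hello.py alongside this.
import timeit

from hello import run_hello

iterations = 1000  # lower total time is better
elapsed = timeit.timeit(lambda: run_hello("world"), number=iterations)
print(f"cannula, {iterations} iterations: {elapsed:.4f}s")
```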

Overall, Cannula is close to the raw performance of FastAPI. Granted, real-world results might differ, since the way Cannula achieves its speed is by caching query validation results. This works best if you have a relatively fixed set of queries, such as a UI that you or another team manages. If the requests are completely ad hoc, like a public API, the results will not be as great.
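
The caching itself is a general technique rather than anything exotic. A minimal sketch of the idea using plain graphql-core (this is an illustration, not Cannula's actual internals):

```python
# Sketch of the general idea, not Cannula's internals: parse and validate
# each unique query string once, then reuse the cached document.
from functools import lru_cache

from graphql import build_schema, parse, validate

schema = build_schema("""
    type Query {
        hello(who: String!): String
    }
""")

@lru_cache(maxsize=128)
def validated_document(query: str):
    document = parse(query)
    errors = validate(schema, document)
    if errors:
        raise errors[0]
    return document
```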

<h2 id="examples">Examples and Documentation</h2>

* [hello world](examples/hello.py)
* [using mocks](examples/mocks.py)

[Documentation](https://cannula.readthedocs.io/)

            
