velocem

- **Name:** velocem
- **Version:** 0.0.12
- **Summary:** Hyperspeed Python Web Framework
- **Upload time:** 2024-10-07 20:11:41
- **Requires Python:** >=3.12
- **Keywords:** wsgi
- **License:** Copyright (c) Vito Gamberini and Contributors Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
            [![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/nickelpro/velocem/test.yaml?style=for-the-badge)](https://github.com/nickelpro/velocem/actions/workflows/test.yaml)
[![Gunicorn](https://img.shields.io/badge/Faster%20Than%20Gunicorn-1516%25-08824c?style=for-the-badge)](https://github.com/benoitc/gunicorn)
[![FastWSGI](https://img.shields.io/badge/Faster%20Than%20FastWSGI-12.59%25-lightblue?style=for-the-badge)](https://github.com/jamesroberts/fastwsgi)

# Velocem

This is a Work-In-Progress experimental Python extension. It aims to provide
a stupid-fast, pointlessly low-latency web development framework.

If you use this code in production you get what you deserve.

## Motivation

All of the fast Python server-providing web frameworks work off the same basic
theory: take fast native IO and HTTP components and bind them to Python. They
use slightly different optimization tricks and shortcuts along the way, but
that's the shape of things.

A quick survey of the field is presented below.

| Framework | IO Library | HTTP Library |              Notes                 |
| --------- | ---------- | ------------ | -----------------------------------|
| [Bjoern](https://github.com/jonashaag/bjoern) | [libev](https://github.com/enki/libev) | [http_parser](https://github.com/nodejs/http-parser)  | The OG, surpassed by modern frameworks, pioneered most of the micro-optimizations used by later entrants. Notable especially for being the first to take whatever Node.js was doing at the time and applying it to Python.  |
| [Japronto!](https://github.com/squeaky-pl/japronto/tree/master) | [libuv](https://github.com/libuv/libuv) via [uvloop](https://github.com/MagicStack/uvloop) | [picohttpparser](https://github.com/h2o/picohttpparser) | An early example of saying "Screw WSGI!" Every developer working on fast Python web frameworks quickly realizes WSGI is a dead end, to be supported only for compatibility. |
| [uWSGI](https://github.com/unbit/uwsgi) | None | None | Never very fast, but written in C and very popular. Worth mentioning because it pioneered the uwsgi _protocol_ as a replacement for HTTP/1.1 for communicating between load balancers and application servers. |
| [Socketify.py](https://github.com/cirospaciari/socketify.py) | [µWebSockets](https://github.com/uNetworking/uWebSockets) | [µWebSockets](https://github.com/uNetworking/uWebSockets) | The current industry standard for speed. Spends a lot of implementation effort to maximize compatibility with PyPy, under which it runs 2-3x faster. Still very fast under CPython, but is the undisputed king for PyPy-based stacks. |
| [emmett](https://github.com/emmett-framework/emmett) / [granian](https://github.com/emmett-framework/granian) | [tokio](https://github.com/tokio-rs/tokio) | [hyper](https://github.com/hyperium/hyper) | Rust! You knew one of these was going to be in Rust. Here it is. The one that's in Rust. |
| [FastWSGI](https://github.com/jamesroberts/fastwsgi) | [libuv](https://github.com/libuv/libuv) | [llhttp](https://github.com/nodejs/llhttp) | The modern successor to Bjoern, King of the "Do What Node Does" Throne. Screaming fast. Real thorn in the side of projects trying to claim to be the fastest WSGI server. Demolishes micro-benchmarks. All server, no framework. |
| Velocem | [asio](https://github.com/chriskohlhoff/asio) ([Docs](https://think-async.com/Asio/)) | [llhttp](https://github.com/nodejs/llhttp) | The one you're looking at right now. Notable for being the only framework on this list primarily written in C++. Will cross any lines and break any rules of software engineering to shave microseconds off the WSGI "Hello World" app. |

**What about FastAPI/Starlette/Blacksheep/apidaora/Falcon/My Favorite Web Framework?**

These don't provide a server, or provide only a development server. Benchmarks
where they run screaming fast rely on a server-providing framework such as the
ones above, so they don't get credit.

**What about CherryPy/Gunicorn/Uvicorn/Tornado/Twisted?**

Painfully slow: 10x slower or more than the frameworks above. Most of these
rely heavily on pure-Python components, which means their latency skyrockets
regardless of what IO strategy they implement. Higher latency, lower req/sec,
and higher resource usage mean you need more application servers to handle a
given load.

**What exactly is the motivation?**

To be the fastest, at everything. WSGI, ASGI, Routing, Velocem's own custom
interface, maybe more.

The goal is to be the anchor statistic, the 1x against which everything else
is measured. And, by demonstrating what is possible, hopefully motivating
other, more commercial, production-oriented frameworks to adopt more
performant techniques as well.

## What Works

* **WSGI**: A complete PEP 3333 implementation, with only the lightest of
  shortcuts taken. Marginally more conforming than FastWSGI, and ahead by
  a fraction of a microsecond on benchmarks. Blows everything else out of the
  water.
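
  For reference, this is the kind of PEP 3333 app those benchmarks measure.
  The callable is server-agnostic; the stdlib's `wsgiref` appears here purely
  for illustration, it is not Velocem's server:

  ```python
  # Minimal PEP 3333 "Hello World" application, the standard WSGI
  # benchmark target. Any compliant server (Velocem, FastWSGI,
  # wsgiref, ...) can host this same callable unchanged.
  def app(environ, start_response):
      body = b"Hello, World!"
      start_response("200 OK", [
          ("Content-Type", "text/plain"),
          ("Content-Length", str(len(body))),
      ])
      return [body]

  # Demo only, with the (slow) stdlib reference server:
  #   from wsgiref.simple_server import make_server
  #   make_server("127.0.0.1", 8000, app).serve_forever()
  ```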

* **HTTP/1.1**: We parse it and return valid responses. Yippee.

* **Router**: Routing is what massacres most benchmarks. The "Hello World"
  Flask app is 5x slower than the raw WSGI equivalent. A fast router is
  essential to a fast, low latency application.

  Velocem has a _very basic_ router, but it should still be faster than most
  other implementations. This needs a lot of expansion.
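
  To see why routing is the hot path, consider the cheapest possible
  dispatch, a single dict lookup on the request path. This sketch is *not*
  Velocem's router, just the baseline that parameter capture and wildcard
  matching inevitably make more expensive:

  ```python
  # Illustrative exact-match router: one dict lookup per request.
  # Anything fancier (path parameters, wildcards, regex segments)
  # adds work on every single request, which is where frameworks
  # like Flask lose their 5x against raw WSGI.
  class Router:
      def __init__(self):
          self._routes = {}

      def route(self, path):
          def register(handler):
              self._routes[path] = handler
              return handler
          return register

      def dispatch(self, path):
          handler = self._routes.get(path)
          if handler is None:
              return "404 Not Found", b""
          return "200 OK", handler()

  router = Router()

  @router.route("/")
  def index():
      return b"Hello, World!"
  ```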

* **Tests**: There are a couple of tests. The current testing strategy is "when
  she segfaults, write a test so the same segfault doesn't happen again".
  Obviously more work to come here.

* **PyPI**: Velocem is available via PyPI, but as this is pre-alpha software and
  features and fixes make their way in with every single commit, building from
  source is recommended for those who want to play with this.

## What's On Deck

* **Docs**: Yes, obviously.

* **Benchmarks**: Need a more complete suite of benchmarks than "Ctrl-R for the
  last `wrk` command we ran".

* **sendfile**: The single most obvious WSGI optimization, but needs special
  handling code.
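
  PEP 3333's optional `wsgi.file_wrapper` hook is what makes this possible: a
  server that provides it can recognize the returned wrapper and hand the file
  descriptor to `os.sendfile()` instead of copying chunks through Python. A
  sketch of an app using the hook (the file path is hypothetical; the stdlib
  `FileWrapper` is only the non-sendfile fallback):

  ```python
  # Sketch of the PEP 3333 file_wrapper hook. If the server supplies
  # environ["wsgi.wrapper"]... no: environ["wsgi.file_wrapper"], it can
  # short-circuit to sendfile(); otherwise we fall back to the stdlib
  # iterator, which reads the file in blocksize chunks through Python.
  from wsgiref.util import FileWrapper

  def file_app(environ, start_response):
      f = open("static/blob.bin", "rb")  # hypothetical static file
      start_response("200 OK",
                     [("Content-Type", "application/octet-stream")])
      wrapper = environ.get("wsgi.file_wrapper", FileWrapper)
      return wrapper(f, 8192)  # 8 KiB block size for the fallback path
  ```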

* **ASGI**: Will likely only support the latest standard. Going to need to
  implement our own asyncio loop for this to have any shot of being fast.

* **uwsgi Protocol**: HTTP/1.1 is non-ideal as a lingua franca for web
  application backends; uwsgi is an improvement worth exploring.
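
  The appeal is the framing: a uwsgi vars packet is a fixed 4-byte header
  (modifier1, little-endian u16 payload size, modifier2) followed by flat
  `<u16 length><bytes>` key/value pairs, so there's no header parsing or
  chunked framing to speak of. A rough round-trip sketch (modifier values
  assumed to be 0, the WSGI-vars packet type):

  ```python
  import struct

  # Encode a WSGI environ-style dict as a uwsgi vars packet:
  # header <BHB> = (modifier1=0, payload size, modifier2=0),
  # payload = alternating length-prefixed keys and values.
  def encode_uwsgi_vars(env):
      payload = b""
      for key, val in env.items():
          k, v = key.encode(), val.encode()
          payload += struct.pack("<H", len(k)) + k
          payload += struct.pack("<H", len(v)) + v
      return struct.pack("<BHB", 0, len(payload), 0) + payload

  # Decode the packet back into a dict by walking the payload.
  def decode_uwsgi_vars(packet):
      _mod1, size, _mod2 = struct.unpack_from("<BHB", packet)
      body, pos, out = packet[4:4 + size], 0, {}
      while pos < len(body):
          klen, = struct.unpack_from("<H", body, pos); pos += 2
          key = body[pos:pos + klen].decode(); pos += klen
          vlen, = struct.unpack_from("<H", body, pos); pos += 2
          out[key] = body[pos:pos + vlen].decode(); pos += vlen
      return out
  ```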

--------------

## License

The Velocem source code (and associated build/test/utility files) is released
into the public domain via CC0-1.0, see `License` for details.

I (Vito Gamberini) hold no patents and have no knowledge of any patented
techniques used by Velocem. However, some organizations refuse to incorporate
or distribute public domain code due to patent concerns; for this reason,
Velocem is additionally licensed for use under MIT-0, see `UsageLicense` for
details.

The purpose of using these well-known legal texts is that they are widely
trusted and understood by the open source community. A bespoke legal text
might limit usage by particularly skittish developers or organizations.
However, I want to make my intentions clear in plain English as well:

**Do what you want with this stuff. There are zero restrictions on the use of
this code in any context.**

            
