streamerate

Name: streamerate
Version: 1.1.0
Home page: https://github.com/asuiu/streamerate
Summary: streamerate: a fluent and expressive Python library for chainable iterable processing, inspired by Java 8 streams.
Upload time: 2024-04-24 09:34:52
Author: Andrei Suiu
Requires Python: <4.0.0,>=3.8
License: MIT
Requirements: eventlet, pydantic, tblib, throttlex, tqdm
            # streamerate
[![Build Status](https://travis-ci.org/asuiu/streamerate.svg?branch=master)](https://travis-ci.org/asuiu/streamerate)
[![Coverage Status](https://coveralls.io/repos/asuiu/streamerate/badge.svg?branch=master&service=github)](https://coveralls.io/github/asuiu/streamerate?branch=master)

__[streamerate](https://github.com/asuiu/streamerate)__  is a powerful pure-Python library inspired by **[Fluent Interface pattern](https://en.wikipedia.org/wiki/Fluent_interface)** (used by Java 8 streams), providing a chainable and expressive approach to processing iterable data.


By leveraging the **[Fluent Interface pattern](https://en.wikipedia.org/wiki/Fluent_interface)**, [streamerate](https://github.com/asuiu/streamerate) enables you to chain together multiple operations, such as filtering, mapping, and reducing, to create complex data processing pipelines with ease. With streamerate, you can write elegant and readable code that efficiently operates on streams of data, facilitating the development of clean and expressive Python applications.


__[streamerate](https://github.com/asuiu/streamerate)__ empowers you to write elegant and functional code, unlocking the full potential of your iterable data processing pipelines.

The library is distributed under the permissive [MIT license](https://opensource.org/license/mit/), allowing you to freely use, modify, and distribute it in both open-source and commercial projects.

*Note:* __[streamerate](https://github.com/asuiu/streamerate)__ originated as part of the [pyxtension](https://github.com/asuiu/pyxtension) project but has since been migrated as a standalone library.


## Installation
```
pip install streamerate
```
or from GitHub:
```
git clone https://github.com/asuiu/streamerate.git
cd streamerate
pip install .
```
or
```
git submodule add https://github.com/asuiu/streamerate.git
```

## Modules overview

### streams.py
#### stream
`stream` subclasses `collections.abc.Iterable`: it behaves like an ordinary Python iterable, but adds methods suited to multithreaded and multiprocess processing.
It is used to build stream-processing pipelines similar to those in [Scala](http://www.scala-lang.org/) and the [MapReduce](https://en.wikipedia.org/wiki/MapReduce) programming model.
Those who have used [Apache Spark](http://spark.apache.org/) [RDD](http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations) functions will find this processing model very easy to use.

### [streams](https://github.com/asuiu/streamerate/blob/master/streams.py)
**Never again will you have to write code like this**:
```python
>>> from functools import reduce
>>> lst = range(1, 6)
>>> reduce(lambda x, y: x * y, map(lambda _: _ * _, filter(lambda _: _ % 2 == 0, lst)))
64
```
From now on, you may simply write the following lines:
```python
>>> the_stream = stream(range(1, 6))
>>> the_stream.\
...     filter(lambda _: _ % 2 == 0).\
...     map(lambda _: _ * _).\
...     reduce(lambda x, y: x * y)
64
```
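The chaining above works because each operation returns a new stream wrapping a lazy iterator. A minimal sketch of the idea (`ministream` is a hypothetical name; this is illustrative, not the library's actual implementation):

```python
from functools import reduce as _reduce

class ministream:
    """Toy fluent wrapper over an iterable; illustrative only."""
    def __init__(self, iterable):
        self._it = iterable

    def __iter__(self):
        return iter(self._it)

    def map(self, f):
        # each operation wraps a lazy iterator and returns a new stream
        return ministream(map(f, self._it))

    def filter(self, predicate):
        return ministream(filter(predicate, self._it))

    def reduce(self, f):
        # terminal operation: consumes the iterator
        return _reduce(f, self._it)

# the same pipeline as above
result = (ministream(range(1, 6))
          .filter(lambda x: x % 2 == 0)
          .map(lambda x: x * x)
          .reduce(lambda x, y: x * y))
# result == 64  (2*2 * 4*4)
```

Nothing is computed until the terminal `reduce` consumes the chain, which is what makes chaining on large or infinite iterables cheap.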

#### A Word Count [Map-Reduce](https://en.wikipedia.org/wiki/MapReduce) naive example using multiprocessing map
```python
corpus = [
    "MapReduce is a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster.",
    "At Google, MapReduce was used to completely regenerate Google's index of the World Wide Web",
    "Conceptually similar approaches have been very well known since 1995 with the Message Passing Interface standard having reduce and scatter operations."]

def reduceMaps(m1, m2):
    for k, v in m2.items():
        m1[k] = m1.get(k, 0) + v
    return m1

word_counts = stream(corpus).\
    mpmap(lambda line: stream(line.lower().split(' ')).countByValue()).\
    reduce(reduceMaps)
```
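For comparison, the same word count can be written with the standard library alone (no parallelism): `Counter` supports `+`, which replaces the hand-written `reduceMaps`. A toy corpus is used here to keep the sketch short:

```python
from collections import Counter
from functools import reduce

corpus = ["MapReduce is great", "MapReduce is a model"]

# Counter(line.split()) plays the role of countByValue();
# Counter addition merges per-line counts, like reduceMaps
word_counts = reduce(
    lambda c1, c2: c1 + c2,
    (Counter(line.lower().split(' ')) for line in corpus),
)
# word_counts == Counter({'mapreduce': 2, 'is': 2, 'great': 1, 'a': 1, 'model': 1})
```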

#### Basic methods
##### **map(f)**
Identical to the built-in `map`, but returns a `stream`


##### **mpmap(self, f: Callable[[_K], _V], poolSize: int = cpu_count(), bufferSize: Optional[int] = None)**
Parallel ordered map using `multiprocessing.Pool.imap()`.

It can replace `map` when you need to split computation across multiple cores and the order of results matters.

It spawns at most `poolSize` processes and applies the function `f`.

It won't take more than `bufferSize` elements from the input unless they are already required for the output, so you can use it with `takeWhile` on infinite streams without fear that it keeps working in the background.

The elements of the result stream appear in the same order as in the initial iterable.

```
:type f: (T) -> V
:rtype: `stream`
```
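The ordered semantics can be illustrated with the standard library directly: `Pool.imap` returns results in input order even when workers finish out of order. The sketch below uses the thread-backed `multiprocessing.dummy.Pool` (same API as `multiprocessing.Pool`, but portable in a snippet); it is illustrative and not the library's implementation:

```python
import random
import time
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool

def slow_square(x):
    time.sleep(random.uniform(0, 0.01))  # simulate uneven per-item latency
    return x * x

with Pool(4) as pool:
    # imap yields lazily and preserves input order, like mpmap
    results = list(pool.imap(slow_square, range(5)))
# results == [0, 1, 4, 9, 16] regardless of which worker finished first
```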


##### **mpfastmap(self, f: Callable[[_K], _V], poolSize: int = cpu_count(), bufferSize: Optional[int] = None)**
Parallel unordered map using `multiprocessing.Pool.imap_unordered()`.

It can replace `map` when the order of results doesn't matter.

It spawns at most `poolSize` processes and applies the function `f`.

It won't take more than `bufferSize` elements from the input unless they are already required for the output, so you can use it with `takeWhile` on infinite streams without fear that it keeps working in the background.

The elements of the result stream appear in an unpredictable order.

```
:type f: (T) -> V
:rtype: `stream`
```


##### **fastmap(self, f: Callable[[_K], _V], poolSize: int = cpu_count(), bufferSize: Optional[int] = None)**
Parallel unordered map using a multithreaded pool.
It can replace `map` when the order of results doesn't matter.

It spawns at most `poolSize` threads and applies the function `f`.

The elements of the result stream appear in an **unpredictable** order.

It won't take more than `bufferSize` elements from the input unless they are already required for the output, so you can use it with `takeWhile` on infinite streams without fear that it keeps working in the background.

Because of the CPython [GIL](https://wiki.python.org/moin/GlobalInterpreterLock), it's most useful for I/O-bound work or for CPU-intensive native functions that release the GIL, or on the Jython or IronPython interpreters.

:type f: (T) -> V

:rtype: `stream`

##### **mtmap(self, f: Callable[[_K], _V], poolSize: int = cpu_count(), bufferSize: Optional[int] = None)**
Parallel ordered map using a multithreaded pool.
It can replace `map`; the order of the output stream will be the same as that of the input.

It spawns at most `poolSize` threads and applies the function `f`.

The elements of the result stream appear in the **same** order as the input.

It won't take more than `bufferSize` elements from the input unless they are already required for the output, so you can use it with `takeWhile` on infinite streams without fear that it keeps working in the background.

Because of the CPython [GIL](https://wiki.python.org/moin/GlobalInterpreterLock), it's most useful for I/O-bound work or for CPU-intensive native functions that release the GIL, or on the Jython or IronPython interpreters.

:type f: (T) -> V

:rtype: `stream`
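The ordered-vs-unordered distinction between `mtmap` and `fastmap` maps directly onto the stdlib `concurrent.futures` API: `Executor.map` preserves input order, while `as_completed` yields results as they finish. An illustrative sketch, not the library's implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as ex:
    # mtmap-like: results come back in input order
    ordered = list(ex.map(square, range(5)))

    # fastmap-like: results come back in completion order
    futures = [ex.submit(square, x) for x in range(5)]
    unordered = [f.result() for f in as_completed(futures)]

# ordered == [0, 1, 4, 9, 16]; unordered contains the same values in some order
```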

##### **gtmap(self, f: Callable[[_K], _V], poolSize: int = cpu_count())**

##### **flatMap(predicate=_IDENTITY_FUNC)**
:param predicate: a function that receives elements of this stream and returns an iterable

By default, predicate is the identity function

:type predicate: (V) -> collections.abc.Iterable[T]

:return: a stream of the elements of the iterables returned by predicate()

Example:
```python
stream([[1, 2], [3, 4], [4, 5]]).flatMap().toList() == [1, 2, 3, 4, 4, 5]
```


##### **filter(predicate)**
Identical to the built-in `filter`, but returns a `stream`


##### **reversed()**
Returns a reversed stream


##### **exists(predicate)**
Tests whether a predicate holds for some of the elements of this sequence.

:rtype: bool

Example:
```python
stream([1, 2, 3]).exists(lambda e: e == 0) -> False
stream([1, 2, 3]).exists(lambda e: e == 1) -> True
```


##### **keyBy(keyfunc = _IDENTITY_FUNC)**
Transforms a stream of values into a stream of (key, value) tuples

:param keyfunc: function mapping values to keys

:type keyfunc: (V) -> T

:return: stream of (key, value) pairs

:rtype: stream[( T, V )]

Example:
```python
stream([1, 2, 3, 4]).keyBy(lambda _:_ % 2) -> [(1, 1), (0, 2), (1, 3), (0, 4)]
```

##### **groupBy()**
groupBy([keyfunc]) -> Makes an iterator that returns keys and their groups from the iterable.

The iterable does not need to be sorted by the key function, but the key function must return hashable objects.

:param keyfunc: [Optional] function computing a key value for each element.

:type keyfunc: (T) -> (V)

:return: (key, sub-list) pairs, grouped by each value of keyfunc(value).

:rtype: stream[ ( V, slist[T] ) ]

Example:
```python
stream([1, 2, 3, 4]).groupBy(lambda _: _ % 2) -> [(0, [2, 4]), (1, [1, 3])]
```
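The same grouping can be reproduced with the standard library by sorting first, since `itertools.groupby` only groups consecutive equal keys (illustrative equivalent, not the library's code):

```python
from itertools import groupby

def parity(x):
    return x % 2

# groupby only groups consecutive equal keys, so sort by the key first
data = sorted([1, 2, 3, 4], key=parity)
groups = [(k, list(g)) for k, g in groupby(data, key=parity)]
# groups == [(0, [2, 4]), (1, [1, 3])]
```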

##### **countByValue()**
Returns a `collections.Counter` of the values

Example
```python
stream(['a', 'b', 'a', 'b', 'c', 'd']).countByValue() == {'a': 2, 'b': 2, 'c': 1, 'd': 1}
```

##### **distinct()**
Returns stream of distinct values. Values must be hashable.
```python
stream(['a', 'b', 'a', 'b', 'c', 'd']).distinct() == {'a', 'b', 'c', 'd'}
```
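The stdlib equivalent of an order-preserving deduplication is `dict.fromkeys`, which keeps the first occurrence of each hashable value (illustrative only, not the library's implementation):

```python
items = ['a', 'b', 'a', 'b', 'c', 'd']
# dict keys are unique and insertion-ordered, so this deduplicates
# while preserving first-occurrence order
deduped = list(dict.fromkeys(items))
# deduped == ['a', 'b', 'c', 'd']
```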


##### **reduce(f, init=None)**
Same arguments as the built-in `functools.reduce()` function

##### **throttle(max_req: int, interval: float) -> "stream[_K]"**
Throttles the stream.

:param max_req: maximum number of items per interval
:param interval: interval length, in seconds
:return: throttled stream

Example:
```py
>>> s = stream(range(100))
>>> throttled_stream = s.throttle(10, 1.5)
>>> for item in throttled_stream:
...     print(item)
```
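The idea behind such throttling can be sketched with a sliding window of emission timestamps: emit freely until `max_req` items have gone out within `interval` seconds, then sleep until the oldest emission leaves the window. The `throttled` helper below is a hypothetical name of mine, illustrative and not the library's implementation:

```python
import time
from collections import deque

def throttled(iterable, max_req, interval):
    """Yield items from iterable, at most max_req items per interval seconds."""
    sent = deque()  # monotonic timestamps of recent emissions
    for item in iterable:
        now = time.monotonic()
        # discard timestamps that have fallen out of the window
        while sent and now - sent[0] >= interval:
            sent.popleft()
        if len(sent) >= max_req:
            # window is full: wait until the oldest emission expires
            time.sleep(interval - (now - sent[0]))
            sent.popleft()
        sent.append(time.monotonic())
        yield item
```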

##### **toSet()**
Returns an `sset` instance


##### **toList()**
Returns an `slist` instance


##### **toMap()**
Returns an `sdict` instance


##### **sorted(key=None, cmp=None, reverse=False)**
Same arguments as the built-in `sorted()`


##### **size()**
Returns the number of elements in the stream. Use carefully on infinite streams.


##### **join(f)**
Returns a string joined by f. Provides the same functionality as the built-in `str.join()` method.

If `f` is a string, it is used to join the stream; otherwise `f` should be a callable that returns the string to be used for joining


##### **mkString(f)**
Identical to `join(f)`


##### **take(n)**
Returns the first n elements of the stream


##### **head()**
Returns the first element of the stream


##### **zip()**
Same behavior as the built-in `zip()` (`itertools.izip()` in Python 2)

##### **unique(predicate=_IDENTITY_FUNC)**
Returns a stream of unique (according to predicate) elements, in the same order as in the original stream

The items returned by predicate should be hashable and comparable.


#### Statistics related methods
##### **entropy()**
Calculates the Shannon entropy of the values in the stream
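Shannon entropy is H = -Σ pᵢ log₂ pᵢ over the relative frequencies pᵢ of the distinct values. A stdlib sketch of the computation (illustrative; the library's log base or edge-case handling may differ):

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the value distribution."""
    counts = Counter(values)
    total = sum(counts.values())
    # H = -sum(p * log2(p)) over each distinct value's relative frequency p
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# four equally likely values -> exactly 2 bits of entropy
```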


##### **pstddev()**
Calculates the population standard deviation.


##### **mean()**
Returns the arithmetic mean of the values


##### **sum()**
Returns the sum of the elements in the stream


##### **min(key=_IDENTITY_FUNC)**
Same functionality as the built-in `min()` function


##### **min_default(default, key=_IDENTITY_FUNC)**
Same functionality as `min()`, but returns `default` when called on empty streams


##### **max()**
Same functionality as the built-in `max()`


##### **maxes(key=_IDENTITY_FUNC)**
Returns a stream of the elements attaining the maximum (according to `key`)


##### **mins(key=_IDENTITY_FUNC)**
Returns a stream of the elements attaining the minimum (according to `key`)


### Other classes
##### slist
Inherits `streams.stream` and the built-in `list` class, keeping the whole list in memory and allowing fast index access
##### sset
Inherits `streams.stream` and the built-in `set` class, keeping the whole set of values in memory
##### sdict
Inherits `streams.stream` and the built-in `dict` class, keeping the dict object in memory
##### defaultstreamdict
Inherits `streams.sdict` and adds the functionality of `collections.defaultdict` from the stdlib


### License
streamerate is released under the MIT license.

### Alternatives
There are other libraries offering fluent-interface streams as alternatives to streamerate, though they are considerably less feature-rich:
- https://pypi.org/project/lazy-streams/
- https://pypi.org/project/pystreams/
- https://pypi.org/project/fluentpy/
- https://github.com/matthagy/scalaps
- https://pypi.org/project/infixpy/ mentioned [here](https://stackoverflow.com/questions/49001986/left-to-right-application-of-operations-on-a-list-in-python3/62585964?noredirect=1#comment111806251_62585964)
- https://github.com/sspipe/sspipe


And, somewhat apart from the fluent pattern, there are libraries built around pipe-style composition: https://github.com/sspipe/sspipe and https://github.com/JulienPalard/Pipe


            
