---

author: gk
version: 20230212

---


# pycond: Lightweight Declarative Condition Expressions

[![Build Status](https://travis-ci.org/axiros/pycond.svg?branch=master)](https://travis-ci.org/axiros/pycond) [![codecov](https://codecov.io/gh/axiros/pycond/branch/master/graph/badge.svg)](https://codecov.io/gh/axiros/pycond)[![PyPI    version][pypisvg]][pypi] [![][blacksvg]][black]

[blacksvg]: https://img.shields.io/badge/code%20style-black-000000.svg
[black]: https://github.com/ambv/black
[pypisvg]: https://img.shields.io/pypi/v/pycond.svg
[pypi]: https://badge.fury.io/py/pycond

<!-- badges: http://thomas-cokelaer.info/blog/2014/08/1013/ -->


<!-- TOC -->

# Table Of Contents

- <a name="toc1"></a>[What](#what)
- <a name="toc2"></a>[Why](#why)
    - <a name="toc3"></a>[Alternatives](#alternatives)
- <a name="toc4"></a>[Mechanics](#mechanics)
    - <a name="toc5"></a>[Parsing](#parsing)
    - <a name="toc6"></a>[Building](#building)
    - <a name="toc7"></a>[Structured Conditions](#structured-conditions)
    - <a name="toc8"></a>[Evaluation](#evaluation)
    - <a name="toc9"></a>[Default Lookup](#default-lookup)
    - <a name="toc10"></a>[Passing State](#passing-state)
    - <a name="toc11"></a>[Deep Lookup / Nested State / Lists](#deep-lookup-nested-state-lists)
        - <a name="toc12"></a>[Lookup Performance: Prebuilt Deep Getters](#lookup-performance-prebuilt-deep-getters)
        - <a name="toc13"></a>[Best Practices](#best-practices)
    - <a name="toc14"></a>[Prefixed Data](#prefixed-data)
    - <a name="toc15"></a>[Attributes Access](#attributes-access)
    - <a name="toc16"></a>[Custom Lookup And Value Passing](#custom-lookup-and-value-passing)
    - <a name="toc17"></a>[Lazy Evaluation](#lazy-evaluation)
    - <a name="toc18"></a>[Condition Operators (Comparators)](#condition-operators-comparators)
        - <a name="toc19"></a>[Using Symbolic Operators](#using-symbolic-operators)
        - <a name="toc20"></a>[Extending Condition Operators](#extending-condition-operators)
        - <a name="toc21"></a>[Negation `not`](#negation-not)
        - <a name="toc22"></a>[Reversal `rev`](#reversal-rev)
        - <a name="toc23"></a>[Wrapping Condition Operators](#wrapping-condition-operators)
            - <a name="toc24"></a>[Global Wrapping](#global-wrapping)
        - <a name="toc25"></a>[Condition Local Wrapping](#condition-local-wrapping)
        - <a name="toc26"></a>[Combining Operations](#combining-operations)
    - <a name="toc27"></a>[Details](#details)
        - <a name="toc28"></a>[Debugging Lookups](#debugging-lookups)
        - <a name="toc29"></a>[Enabling/Disabling of Branches](#enabling-disabling-of-branches)
        - <a name="toc30"></a>[Building Conditions From Text](#building-conditions-from-text)
            - <a name="toc31"></a>[Grammar](#grammar)
            - <a name="toc32"></a>[Atomic Conditions](#atomic-conditions)
            - <a name="toc33"></a>[Nesting](#nesting)
        - <a name="toc34"></a>[Tokenizing Details](#tokenizing-details)
            - <a name="toc35"></a>[Functioning](#functioning)
            - <a name="toc36"></a>[Separator `sep`](#separator-sep)
            - <a name="toc37"></a>[Apostrophes](#apostrophes)
            - <a name="toc38"></a>[Escaping](#escaping)
        - <a name="toc39"></a>[Building](#building)
            - <a name="toc40"></a>[Autoconv: Casting of values into python simple types](#autoconv-casting-of-values-into-python-simple-types)
    - <a name="toc41"></a>[Context On Demand](#context-on-demand)
    - <a name="toc42"></a>[Lookup Providers](#lookup-providers)
        - <a name="toc43"></a>[Accepted Signatures](#accepted-signatures)
        - <a name="toc44"></a>[Parametrized Lookup Functions](#parametrized-lookup-functions)
        - <a name="toc45"></a>[Namespace](#namespace)
    - <a name="toc46"></a>[Caching](#caching)
    - <a name="toc47"></a>[Extensions](#extensions)
    - <a name="toc48"></a>[Named Conditions: Qualification](#named-conditions-qualification)
    - <a name="toc49"></a>[Options](#options)
    - <a name="toc50"></a>[Partial Evaluation](#partial-evaluation)
    - <a name="toc51"></a>[Streaming Data](#streaming-data)
        - <a name="toc52"></a>[Filtering](#filtering)
        - <a name="toc53"></a>[Streaming Classification](#streaming-classification)
        - <a name="toc54"></a>[Selective Classification](#selective-classification)
    - <a name="toc55"></a>[Treating of Booleans (Conditions, Not Names)](#treating-of-booleans-conditions-not-names)
    - <a name="toc56"></a>[Asyncronous Operations](#asyncronous-operations)
        - <a name="toc57"></a>[Asyncronous Filter](#asyncronous-filter)

<!-- TOC -->


# <a href="#toc1">What</a>

You have a bunch of data, possibly streaming...

```csv
id,first_name,last_name,email,gender,ip_address
1,Rufe,Morstatt,rmorstatt0@newsvine.de,Male,216.70.69.120
2,Kaela,Scott,scott@opera.com,Female,73.248.145.44,2
(...)
```

... and you need to filter. For now let's say we already have them as a list of dicts.

You can do it imperatively:

```python
foo_users = [
    u
    for u in users
    if (u['gender'] == 'Male' or u['last_name'] == 'Scott') and '@' in u['email']
]
```

or you have this module assemble a condition function from a declaration like:

```python
from pycond import make_filter
cond = 'email contains .de and gender eq Male or last_name eq Scott'
is_foo = make_filter(cond) # the built filter function is first
```

and then apply as often as you need, against varying state / facts / models (...):

```python
foo_users = filter(is_foo, users)
```

with roughly the same performance (within a factor of 2-3) as the handcrafted Python.

> In real life, performance is often **better** than with the imperative code, due to
`pycond's` [lazy evaluation](#context-on-demand-and-lazy-evaluation) feature.

# <a href="#toc2">Why</a>

When the developer can decide upon the filters to apply on data, he'll certainly
use Python's excellent expressive possibilities directly, e.g. as shown above
through list comprehensions.   
But what if the filtering conditions are based on decisions outside of the program's
control? E.g. from an end user, hitting the program via the network, in some serialized form, which is rarely directly evaluatable Python.

This is the main use case for this module.  

## <a href="#toc3">Alternatives</a>

But why yet another tool for such a standard job?  

There is a list of great tools and frameworks where condition parsing is a (small) part of them, e.g. [pyke](http://pyke.sourceforge.net/) or [durable](https://pypi.python.org/pypi/durable_rules) and many in the django world or from SQL statement parsers.


`1.` I just needed a very **slim** tool for only the parsing into functions - but one that is pretty transparent and customizable.

pycond allows you to customize
- the list of condition operators
- the list of combination operators
- the general behavior of condition operators via global or condition local wrappers
- their names
- the tokenizer
- the value lookup function

and ships as zero dependency single module.

All evaluation is done via [partials](https://stackoverflow.com/a/3252425/4583360) and not lambdas, i.e. operations can be introspected and debugged very simply, through breakpoints or custom logging operator or lookup wrappers.

`2.` Simplicity of the grammar: Easy to type directly, readable by non
programmers but also synthesisable from structured data, e.g. from a web framework.


`3.` Performance: Good enough to have "pyconditions" used within [stream filters](https://github.com/ReactiveX/RxPY).
With the current feature set we are sometimes a factor 2-3 worse but (due to lazy eval) often better,
compared with handcrafted list comprehensions.


# <a href="#toc4">Mechanics</a>

<!-- md_links_for: github -->
<!-- autogen tutorial -->

## <a href="#toc5">Parsing</a>

pycond parses the condition expressions according to a set of constraints given to the parser in the `tokenizer` function.

The result of the tokenizer is given to the builder.



```python
import pycond as pc

expr = '[a eq b and [c lt 42 or foo eq bar]]'
cond = pc.to_struct(pc.tokenize(expr, sep=' ', brkts='[]'))
print('filter:', cond)
# test:
data = [
    {'a': 'b', 'c': 1, 'foo': 42},
    {'a': 'not b', 'c': 1},
]
filtered = list(filter(pc.make_filter(expr), data))
print('matching:', filtered)
return cond, len(filtered)
```
Output:

```
filter: [['a', 'eq', 'b', 'and', ['c', 'lt', '42', 'or', 'foo', 'eq', 'bar']]]
matching: [{'a': 'b', 'c': 1, 'foo': 42}]
```


## <a href="#toc6">Building</a>

After parsing, the builder is assembling a nested set of operator functions,
combined via combining operators. The functions are partials, i.e. not yet
evaluated - but information about the necessary keys is already available:



```python
f, meta = pc.parse_cond('foo eq bar')
assert meta['keys'] == ['foo']
assert f(state={'foo': 'bar'}) == True
```



Note: The `make_filter` function is actually a convenience wrapper around
`parse_cond`: it ignores the meta information and calls the built condition with
`state=<the item being filtered>`.
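
To illustrate the relation, a minimal sketch (reusing the `foo eq bar` condition from above):

```python
is_foo = pc.make_filter('foo eq bar')
f, meta = pc.parse_cond('foo eq bar')
item = {'foo': 'bar'}
# make_filter's function takes the item directly, parse_cond's wants it as state:
assert is_foo(item) == f(state=item) == True
```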


## <a href="#toc7">Structured Conditions</a>

Other processes may deliver condition structures via serializable formats (e.g.
json). If you pass such already tokenized constructs to the `pycond` function,
then the tokenizer is bypassed:



```python
cond = [['a', 'eq', 'b'], 'or', ['c', 'in', ['foo', 'bar']]]
assert pc.pycond(cond)(state={'a': 'b'}) == True
# json support is built in:
cond_as_json = json.dumps(cond)
assert pc.pycond(cond_as_json)(state={'a': 'b'}) == True
```



## <a href="#toc8">Evaluation</a>

The result of the builder is a 'pycondition', i.e. a function which can be run many times against varying state of the system.
How state is evaluated is customizable at build and run time.

## <a href="#toc9">Default Lookup</a>

"Lookup" denotes the process of deriving the actual values to evaluate, from a given state. Can be simple gets, getattrs, walks into the structure - or arbitrary, via custom lookup functions.

The default is to *get* lookup keys within expressions from an initially empty `State` dict within the module. This is *not* thread safe, i.e. not to be used in async or non cooperative multitasking environments.



```python
f = pc.pycond('foo eq bar')
assert f() == False
pc.State['foo'] = 'bar'  # not thread safe!
assert f() == True
```


(`pycond` is a shortcut for `parse_cond`, when meta infos are not required).
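
A quick sketch of that equivalence (reusing the `State` entry set just above):

```python
# pc.State['foo'] is still 'bar' from the example above:
f1 = pc.pycond('foo eq bar')
f2, meta = pc.parse_cond('foo eq bar')
assert f1() == f2() == True
```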

## <a href="#toc10">Passing State</a>

Using a state argument at evaluation *is* thread safe:  


```python
assert pc.pycond('a gt 2')(state={'a': 42}) == True
assert pc.pycond('a gt 2')(state={'a': -2}) == False
```

## <a href="#toc11">Deep Lookup / Nested State / Lists</a>

You may supply a path separator for diving into nested structures like so:  


```python
m = {'a': {'b': [{'c': 1}]}}
assert pc.pycond('a.b.0.c', deep='.')(state=m) == True
assert pc.pycond('a.b.1.c', deep='.')(state=m) == False
assert pc.pycond('a.b.0.c eq 1', deep='.')(state=m) == True
# convenience argument for string conditions:
assert pc.pycond('deep: a.b.0.c')(state=m) == True

# This is how you express deep access via structured conditions:
assert pc.pycond([('a', 'b', 0, 'c'), 'eq', 1])(state=m) == True

# Since tuples are not transferable in JSON, we also allow deep paths as lists:
# We apply heuristics to exclude expressions or conditions:
c = [[['a', 'b', 0, 'c'], 'eq', 1], 'and', 'a']
f, nfos = pc.parse_cond(c)
# sorting order for keys: tuples at end, sorted by len, rest default py sorted:
assert f(state=m) == True and nfos['keys'] == ['a', ('a', 'b', 0, 'c')]
```


- The structure may also contain objects; then we use getattribute to get to the next value.

- `deep="."` is actually just convenience notation for supplying the following "lookup function" (see below):



```python
m = {'a': {'b': [{'c': 1}]}}
assert pc.pycond('a.b.0.c', lookup=pc.state_get_deep)(state=m) == True
```


### <a href="#toc12">Lookup Performance: Prebuilt Deep Getters</a>

The value lookup within nested structures can be stored into item and attribute getters (or, alternatively, an evaluated synthesized lookup function), built when the first item has a matching structure.

- Upside: [Performance](./test/test_getter_perf.py) is a few times better compared to exploring the structure of each item anew, as the 'deep' parameter does.
- Downside: The lookup remains as built for the first structurally matching item. Schematic changes, like from a key within a dict to an attribute, will not raise but will always deliver False for the
  actual condition value match.

- `pycond.Getters.state_get_deep2`: A list of item and attribute getters is built at the first successful lookup evaluation.
- `pycond.Getters.state_get_evl`: An expression like `lambda state: state['a'].b[0]['c']` is built and evaluated, then applied to the items. 
   - Fastest way to get to the values at evaluation time.   
   - Security: Round brackets within key names are forbidden and always deliver False - but an eval is an eval, i.e. potentially evil.

These two additional "deep" lookup functions are conveniently made accessible by supplying a `deep2` or `deep3` argument:



```python
m = {'a': {'b': [{'c': 1}]}}
# 3 times faster than deep. Safe.
assert pc.pycond('a.b.0.c', deep2='.')(state=m) == True
# 4 times faster than deep. Eval involved.
assert pc.pycond('a.b.0.c', deep3='.')(state=m) == True
```

The evaluation results for the keys are cached. The cache is cleared after 1 million entries, but can be cleared manually via `pc.clear_caches()` at any time before that.
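
A minimal sketch of such a manual reset (same structure as in the `deep2` example above):

```python
m = {'a': {'b': [{'c': 1}]}}
assert pc.pycond('a.b.0.c eq 1', deep2='.')(state=m) == True
pc.clear_caches()  # drop the cached getters / key evaluation results
```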

### <a href="#toc13">Best Practices</a>

- Lookup keys change all the time, and not many items are checked for a specific key: use `deep`
- Many items are checked with the same keys, input from untrusted users: use `deep2`
- Many items are checked with the same keys, input from trusted users: use `deep3`



## <a href="#toc14">Prefixed Data</a>

When data is passed through processing pipelines, it is often wrapped in headers. So it may be useful to pass a global prefix to access the payload, like so:



```python
m = {'payload': {'b': [{'c': 1}], 'id': 123}}
assert pc.pycond('b.0.c', deep='.', prefix='payload')(state=m) == True
```

## <a href="#toc15">Attributes Access</a>

Since version 20210221 we try attributes when objects are not dicts:



```python
class MyObj:
    val = {'a': 'b'}

m = {'payload': {'obj': MyObj()}}
cond = [['obj.val.a', 'eq', 'b']]
assert pc.pycond(cond, deep='.', prefix='payload')(state=m) == True
```


## <a href="#toc16">Custom Lookup And Value Passing</a>

You can supply your own function for value acquisition.

- Signature: See example.
- Returns: The value for the key from the current state plus the
  compare value for the operator function.  


```python
# must return a (current value, compare value) tuple:
model = {'eve': {'last_host': 'somehost'}}

def my_lu(k, v, req, user, model=model):
    print('user check. locals:', dict(locals()))
    return (model.get(user) or {}).get(k), req[v]

f = pc.pycond('last_host eq host', lookup=my_lu)

req = {'host': 'somehost'}
assert f(req=req, user='joe') == False
assert f(req=req, user='eve') == True
```
Output:

```
user check. locals: {'k': 'last_host', 'v': 'host', 'req': {'host': 'somehost'}, 'user': 'joe', 'model': {'eve': {'last_host': 'somehost'}}}
user check. locals: {'k': 'last_host', 'v': 'host', 'req': {'host': 'somehost'}, 'user': 'eve', 'model': {'eve': {'last_host': 'somehost'}}}
```

> As you can see in the example, the state parameter is just a convention
for `pycond's` default lookup function (`state_get` in `pycond.py`).

## <a href="#toc17">Lazy Evaluation</a>

This avoids unnecessary calculations in many cases:

When an evaluation branch contains an "and" or "and_not" combinator, then
at runtime we evaluate the first expression - and stop if it is already
False.
The same happens when the first expression is True, followed by "or" or "or_not".

That way expensive deep branch evaluations are omitted or, when
the lookup is done lazily, the values won't even be fetched:



```python
evaluated = []

def myget(key, val, cfg, state=None, **kw):
    evaluated.append(key)
    return pc.state_get(key, val, cfg, state, **kw)

f = pc.pycond('[a eq b] or foo eq bar and baz eq bar', lookup=myget)
assert f(state={'foo': 42}) == False
# the value for "baz" is not even fetched and the whole (possibly
# deep) branch after the last and is ignored:
assert evaluated == ['a', 'foo']
print(evaluated)
evaluated.clear()

f = pc.pycond('[[a eq b] or foo eq bar] and baz eq bar', lookup=myget)
assert f(state={'a': 'b', 'baz': 'bar'}) == True
# the value for "baz" is not even fetched and the whole (possibly
# deep) branch after the last and is ignored:
assert evaluated == ['a', 'baz']
print(evaluated)
```
Output:

```
['a', 'foo']
['a', 'baz']
```

Remember that all keys occurring in a condition (which may be provided by the user at runtime) are returned by the condition parser. That means building of evaluation contexts [can be done](#context-on-demand-and-lazy-evaluation) based on the data actually needed, and not more.

## <a href="#toc18">Condition Operators (Comparators)</a>

All boolean [standard library operators](https://docs.python.org/2/library/operator.html)
are available by default:



```python
from pytest2md import html_table as tbl  # just a table gen.
from pycond import get_ops

for k in 'nr', 'str':
    s = 'Default supported ' + k + ' operators...(click to extend)'
    print(tbl(get_ops()[k], [k + ' operator', 'alias'], summary=s))
```


<details><summary>Default supported nr operators...(click to extend)</summary>

<table>
<tr><td>nr operator</td><td>alias</td></tr>
<tr><td>add</td><td>+</td></tr>
<tr><td>and_</td><td>&</td></tr>
<tr><td>eq</td><td>==</td></tr>
<tr><td>floordiv</td><td>//</td></tr>
<tr><td>ge</td><td>>=</td></tr>
<tr><td>gt</td><td>></td></tr>
<tr><td>iadd</td><td>+=</td></tr>
<tr><td>iand</td><td>&=</td></tr>
<tr><td>ifloordiv</td><td>//=</td></tr>
<tr><td>ilshift</td><td><<=</td></tr>
<tr><td>imod</td><td>%=</td></tr>
<tr><td>imul</td><td>*=</td></tr>
<tr><td>ior</td><td>|=</td></tr>
<tr><td>ipow</td><td>**=</td></tr>
<tr><td>irshift</td><td>>>=</td></tr>
<tr><td>is_</td><td>is</td></tr>
<tr><td>is_not</td><td>is</td></tr>
<tr><td>isub</td><td>-=</td></tr>
<tr><td>itruediv</td><td>/=</td></tr>
<tr><td>ixor</td><td>^=</td></tr>
<tr><td>le</td><td><=</td></tr>
<tr><td>lshift</td><td><<</td></tr>
<tr><td>lt</td><td><</td></tr>
<tr><td>mod</td><td>%</td></tr>
<tr><td>mul</td><td>*</td></tr>
<tr><td>ne</td><td>!=</td></tr>
<tr><td>or_</td><td>|</td></tr>
<tr><td>pow</td><td>**</td></tr>
<tr><td>rshift</td><td>>></td></tr>
<tr><td>sub</td><td>-</td></tr>
<tr><td>truediv</td><td>/</td></tr>
<tr><td>xor</td><td>^</td></tr>
<tr><td>itemgetter</td><td></td></tr>
<tr><td>length_hint</td><td></td></tr>
</table>
</details>




<details><summary>Default supported str operators...(click to extend)</summary>

<table>
<tr><td>str operator</td><td>alias</td></tr>
<tr><td>attrgetter</td><td></td></tr>
<tr><td>concat</td><td>+</td></tr>
<tr><td>contains</td><td></td></tr>
<tr><td>countOf</td><td></td></tr>
<tr><td>iconcat</td><td>+=</td></tr>
<tr><td>indexOf</td><td></td></tr>
<tr><td>methodcaller</td><td></td></tr>
</table>
</details>




### <a href="#toc19">Using Symbolic Operators</a>

By default pycond uses text style operators.

- `ops_use_symbolic` switches process-wide to symbolic style only.
- `ops_use_symbolic_and_txt` switches process-wide to allowing both notations.



```python
pc.ops_use_symbolic()
pc.State['foo'] = 'bar'
assert pc.pycond('foo == bar')() == True
try:
    # this raises now, text ops not known anymore:
    pc.pycond('foo eq bar')
except:
    pc.ops_use_symbolic_and_txt(allow_single_eq=True)
    assert pc.pycond('foo = bar')() == True
    assert pc.pycond('foo == bar')() == True
    assert pc.pycond('foo eq bar')() == True
    assert pc.pycond('foo != baz')() == True
```


> Operator namespace(s) should be assigned at process start, they are global.

### <a href="#toc20">Extending Condition Operators</a>



```python
pc.OPS['maybe'] = lambda a, b: int(time.time()) % 2
# valid expression now:
assert pc.pycond('a maybe b')() in (True, False)
```


### <a href="#toc21">Negation `not`</a>

Negates the result of the condition operator:



```python
pc.State['foo'] = 'abc'
assert pc.pycond('foo eq abc')() == True
assert pc.pycond('foo not eq abc')() == False
```


### <a href="#toc22">Reversal `rev`</a>

Reverses the arguments before calling the operator  


```python
pc.State['foo'] = 'abc'
assert pc.pycond('foo contains a')() == True
assert pc.pycond('foo rev contains abc')() == True
```


> `rev` and `not` can be combined in any order.
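
For example, a small sketch (both orderings negate the reversed `contains`):

```python
pc.State['foo'] = 'abc'
assert pc.pycond('foo not rev contains abc')() == False
assert pc.pycond('foo rev not contains abc')() == False
```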

### <a href="#toc23">Wrapping Condition Operators</a>

#### <a href="#toc24">Global Wrapping</a>

You may globally wrap all evaluation time condition operations through a custom function:



```python
l = []

def hk(f_op, a, b, l=l):
    l.append((getattr(f_op, '__name__', ''), a, b))
    return f_op(a, b)

pc.run_all_ops_thru(hk)  # globally wrap the operators

pc.State.update({'a': 1, 'b': 2, 'c': 3})
f = pc.pycond('a gt 0 and b lt 3 and not c gt 4')
assert l == []
f()
expected_log = [('gt', 1, 0.0), ('lt', 2, 3.0), ('gt', 3, 4.0)]
assert l == expected_log
pc.ops_use_symbolic_and_txt()
```


You may compose such wrappers via repeated application of the `run_all_ops_thru` API function.
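
A sketch of such a composition, assuming that a second `run_all_ops_thru` call wraps the already wrapped operators (as the sentence above implies):

```python
seen_ops, seen_args = [], []

def wrap1(f_op, a, b):
    # first wrapper: remember which operator ran
    seen_ops.append(getattr(f_op, '__name__', ''))
    return f_op(a, b)

def wrap2(f_op, a, b):
    # second wrapper: remember the compared values
    seen_args.append((a, b))
    return f_op(a, b)

pc.run_all_ops_thru(wrap1)
pc.run_all_ops_thru(wrap2)  # composed on top of wrap1

pc.State['a'] = 1
assert pc.pycond('a gt 0')() == True
print('wrap1 saw:', seen_ops, '| wrap2 saw:', seen_args)
pc.ops_use_symbolic_and_txt()  # restore plain operators, like the example above did
```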

### <a href="#toc25">Condition Local Wrapping</a>

This is done through the `ops_thru` parameter as shown:



```python
def myhk(f_op, a, b):
    return True

pc.State['a'] = 1
f = pc.pycond('a eq 2')
assert f() == False
f = pc.pycond('a eq 2', ops_thru=myhk)
assert f() == True
```


> Using `ops_thru` is a good way to debug unexpected results, since you
> can add breakpoints or loggers there.

### <a href="#toc26">Combining Operations</a>

You can combine single conditions with

- `and`
- `and not`
- `or`
- `or not`
- `xor` by default.

The combining functions are stored in `pycond.COMB_OPS` dict and may be extended.

> Do not use spaces in the names of combining operators. The user may use them, but they are replaced before tokenizing, like `and not` -> `and_not`.
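
A small sanity check of those names (a sketch; the exact set may differ by version):

```python
assert 'and_not' in pc.COMB_OPS and 'xor' in pc.COMB_OPS
assert pc.pycond('a eq 1 and not b')(state={'a': 1}) == True
```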

## <a href="#toc27">Details</a>

### <a href="#toc28">Debugging Lookups</a>

pycond provides a key getter which prints out every lookup.  


```python
f = pc.pycond('[[a eq b] or foo eq bar] or [baz eq bar]', lookup=pc.dbg_get)
assert f(state={'foo': 'bar'}) == True
```
Output:

```
Lookup: a b -> None
Lookup: foo bar -> bar
```

### <a href="#toc29">Enabling/Disabling of Branches</a>

Insert booleans as shown:  


```python
f = pc.pycond(['foo', 'and', ['bar', 'eq', 1]])
assert f(state={'foo': 1}) == False
f = pc.pycond(['foo', 'and', [True, 'or', ['bar', 'eq', 1]]])
assert f(state={'foo': 1}) == True
```

### <a href="#toc30">Building Conditions From Text</a>

Condition functions are created internally from structured expressions -
but those are [hard to type](#lazy-dynamic-context-assembly),
involving many apostrophes.

The text based condition syntax is intended for situations when end users
type them into text boxes directly.

#### <a href="#toc31">Grammar</a>

Combine atomic conditions with boolean operators and nesting brackets like:

```
[<atom1> <and|or|and not|...> <atom2>] <and|or|...> [ [ <atom3> ....
```

#### <a href="#toc32">Atomic Conditions</a>

```
[not] <lookup_key> [ [rev] [not] <condition operator (co)> <value> ]
```
- When just `lookup_key` is given, then `co` is set to the `truthy` function:
```python
def truthy(key, val=None):
    return operator.truth(key)
```

so such an expression is valid and True:



```python
pc.State.update({'foo': 1, 'bar': 'a', 'baz': []})
assert pc.pycond('[ foo and bar and not baz]')() == True
```

- When `not lookup_key` is given, then `co` is set to the `falsy`
  function:



```python
m = {'x': 'y', 'falsy_val': {}}
# normal way
assert pc.pycond(['foo', 'eq', None])(state=m) == True
# using "not" as prefix:
assert pc.pycond('not foo')(state=m) == True
assert pc.pycond(['not', 'foo'])(state=m) == True
assert pc.pycond('not falsy_val')(state=m) == True
assert pc.pycond('x and not foo')(state=m) == True
assert pc.pycond('y and not falsy_val')(state=m) == False
```


#### <a href="#toc33">Nesting</a>

Combined conditions may be arbitrarily nested using brackets "[" and "]".

> Via the `brkts` config parameter you may change those to other separators at build time.
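
For instance, a sketch using round brackets via the tokenizer path shown in the Parsing section (the bracket choice here is arbitrary):

```python
expr = '((a eq x) and (b eq y))'
struct = pc.to_struct(pc.tokenize(expr, sep=' ', brkts='()'))
assert pc.pycond(struct)(state={'a': 'x', 'b': 'y'}) == True
```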

### <a href="#toc34">Tokenizing Details</a>

> Brackets appear as strings in this flat list form, e.g. `['[', 'a', 'and', 'b', ']', ...]`

#### <a href="#toc35">Functioning</a>

The tokenizer's job is to take apart expression strings for the builder.

#### <a href="#toc36">Separator `sep`</a>

Separates the different parts of an expression. Default is ' '.



```python
pc.State['a'] = 42
assert pc.pycond('a.eq.42', sep='.')() == True
```

> `sep` can be any single character, including binary.

Bracket characters do not need to be separated; the tokenizer will handle that:



```python
# equal:
assert (
    pc.pycond('[[a eq 42] and b]')() == pc.pycond('[ [ a eq 42 ] and b ]')()
)
```

> The condition functions themselves do not compare equal - they
> were assembled twice.

#### <a href="#toc37">Apostrophes</a>

By putting strings into apostrophes you can tell the tokenizer to not further inspect them, e.g. for the separator:



```python
pc.State['a'] = 'Hello World'
assert pc.pycond('a eq "Hello World"')() == True
```


#### <a href="#toc38">Escaping</a>

Tell the tokenizer to not interpret the next character:



```python
pc.State['b'] = 'Hello World'
assert pc.pycond('b eq Hello\ World')() == True
```


### <a href="#toc39">Building</a>

#### <a href="#toc40">Autoconv: Casting of values into python simple types</a>

Expression string values are automatically cast into bools and numbers via the public `pycond.py_type` function.

This can be prevented by setting the `autoconv` parameter to `False` or by using apostrophes:



```python
pc.State['a'] = '42'
assert pc.pycond('a eq 42')() == False
# compared as string now
assert pc.pycond('a eq "42"')() == True
# compared as string now
assert pc.pycond('a eq 42', autoconv=False)() == True
```


If you do not want to provide a custom lookup function (where you can do what you want)
but want the looked up keys autoconverted, then use:



```python
for id in '1', 1:
    pc.State['id'] = id
    assert pc.pycond('id lt 42', autoconv_lookups=True)() == True
```


## <a href="#toc41">Context On Demand</a>

Often the conditions are in user space, applied on data streams under
the developer's control only at development time.

The end user might pick only a few keys from many offered within an API.

pycond's `ctx_builder` allows to calculate, at runtime, only those keys
the user decided to base conditions upon:
At condition build time, hand over a namespace for *all* functions which
are available to build the ctx.

`pycond` will return a context builder function for you, calling only those functions
which the condition actually requires.



```python
pc.ops_use_symbolic_and_txt(allow_single_eq=True)

# Condition the end user configured, e.g. at program run time:
cond = [
    ['group_type', 'in', ['lab', 'first1k', 'friendly', 'auto']],
    'and',
    [
        [
            [
                [
                    ['cur_q', '<', 0.5],
                    'and',
                    ['delta_q', '>=', 0.15],
                ],
                'and',
                ['dt_last_enforce', '>', 28800],
            ],
            'and',
            ['cur_hour', 'in', [3, 4, 5]],
        ],
        'or',
        [
            [
                [
                    ['cur_q', '<', 0.5],
                    'and',
                    ['delta_q', '>=', 0.15],
                ],
                'and',
                ['dt_last_enforce', '>', 28800],
            ],
            'and',
            ['clients', '=', 0],
        ],
    ],
]

# Getters for API keys offered to the user, involving potentially
# expensive to fetch context delivery functions:
# Signature must provide minimum a positional for the current
# state:
class ApiCtxFuncs:
    def expensive_but_not_needed_here(ctx):
        raise Exception("Won't run with cond. from above")

    def cur_q(ctx):
        print('Calculating cur_q')
        return 0.1

    def cur_hour(ctx):
        print('Calculating cur_hour')
        return 4

    def dt_last_enforce(ctx):
        print('Calculating dt_last_enforce')
        return 10000000

    def delta_q(ctx):
        print('Calculating (expensive) delta_q')
        time.sleep(0.1)
        return 1

    def clients(ctx):
        print('Calculating clients')
        return 0

if sys.version_info[0] < 3:
    # we don't think it is a good idea to make the getter API stateful ;-)
    p2m.convert_to_staticmethods(ApiCtxFuncs)

f, nfos = pc.parse_cond(cond, ctx_provider=ApiCtxFuncs)

# now we create (incomplete) data..
data1 = {'group_type': 'xxx'}, False
data2 = {'group_type': 'lab'}, True

# this key stores a context builder function, calculating the complete data:
make_ctx = nfos['complete_ctx']

t0 = time.time()
for event, expected in data1, data2:
    assert f(state=make_ctx(event)) == expected

print('Calc.Time (delta_q was called twice):', round(time.time() - t0, 4)),
return cond, ApiCtxFuncs
```
Output:

```
Calculating clients
Calculating cur_hour
Calculating cur_q
Calculating (expensive) delta_q
Calculating dt_last_enforce
Calculating clients
Calculating cur_hour
Calculating cur_q
Calculating (expensive) delta_q
Calculating dt_last_enforce
Calc.Time (delta_q was called twice): 0.2006
```


## <a href="#toc42">Lookup Providers</a>

ContextBuilders are interesting, but we can do better.

We still calculated values for keys which might (depending on the data) not be needed in dead ends of a lazily evaluated condition.

Let's avoid calculating these values, remembering the [custom lookup function](#custom-lookup-and-value-passing) feature.

This is where lookup providers come in, providing namespaces for functions to be called conditionally.

pycond [treats the condition keys as function names][pycond.py#614] within that namespace and calls them when needed.

### <a href="#toc43">Accepted Signatures</a>

Lookup provider functions may have the following signatures:



```python
class F:
    # simple data passing
    def f1(data):
        """simple return a value being compared, getting passed the state/data"""
        return data['a']

    # simple, with ctx
    def f2(data, **kw):
        """
        simple return a value being compared, getting passed the state/data
        All context information within kw, compare value not modifiable
        """
        return data['b']

    # full pycond compliant signature,
    def f3(key, val, cfg, data, **kw):
        """
        full pycond signature.
        val is the value as defined by the condition, and which you could return modified
        kw holds the cache, cfg holds the setup
        v has to be returned:
        """
        return data['c'], 100  # not 45!

    # applies always when varargs are involved
    def f4(*a, **kw):
        """
        Full variant(always when varargs are involved)
        """
        return a[3]['d'], 'foo'

_ = 'and'
f = pc.pycond(
    [
        [':f1', 'eq', 42],
        _,
        [':f2', 'eq', 43, _, ':f3', 'eq', 45],
        _,
        [':f4', 'eq', 'foo'],
    ],
    lookup_provider=F,
)
assert f(state={'a': 42, 'b': 43, 'c': 100, 'd': 'foo'}) == True
```

### <a href="#toc44">Parametrized Lookup Functions</a>

Via the 'params' parameter you may supply keyword args to lookup functions:  


```python
class F:
    def hello(k, v, cfg, data, count, **kw):
        return data['foo'] == count, 0

m = pc.pycond([':hello'], lookup_provider=F, params={'hello': {'count': 2}})(
    state={'foo': 2}
)
assert m == True
```


### <a href="#toc45">Namespace</a>

- Lookup functions can be found in nested class hierarchies or dicts. The separator is a colon (':').
- As shown above, even if they are flat within a toplevel class or dict, you should still prefix them with ':', in order to get a build time exception (`MissingLookupFunction`) when they are not present.
- You can switch that behaviour off per condition build via a config arg, as shown below.
- You can switch that behaviour off globally via `pc.prefixed_lookup_funcs=False`.

Warning: This is a breaking API change compared with pre-20200610 versions, where the prefix was not required to find functions in the (back then only flat) namespaces. Use the global switch after import to get the old behaviour.



```python
class F:
    def a(data):
        return data['foo']

    class inner:
        def b(data):
            return data['bar']

m = {'c': {'d': {'func': lambda data: data['baz']}}}

# for the inner lookup the first prefix may be omitted:
_ = 'and'
cond = [
    [':a', 'eq', 'foo1'],
    _,
    ['inner:b', 'eq', 'bar1'],
    _,
    [
        'c:d',
        'eq',
        'baz1',
    ],
]
c = pc.pycond(cond, lookup_provider=F, lookup_provider_dict=m)
assert c(state={'foo': 'foo1', 'bar': 'bar1', 'baz': 'baz1'}) == True

# Prefix checking on / off:
try:
    pc.pycond([':xx', 'and', cond])
    i = 9 / 0  # above will raise this:
except pc.MissingLookupFunction:
    pass
try:
    pc.pycond([':xx', 'and', cond], prefixed_lookup_funcs=False)
    i = 9 / 0  # above will raise this:
except pc.MissingLookupFunction:
    pass
cond[0] = 'a'  # remove prefix, will still be found
c = pc.pycond(
    ['xx', 'or', cond],
    lookup_provider=F,
    lookup_provider_dict=m,
    prefixed_lookup_funcs=False,
)
assert c(state={'foo': 'foo1', 'bar': 'bar1', 'baz': 'baz1'}) == True
```

You can switch off the need for prefixes - pycond will then check the state for key presence:



```python
# we let pycond generate the lookup function (we use the simple signature type):
f = pc.pycond(cond, lookup_provider=ApiCtxFuncs, prefixed_lookup_funcs=False)
# Same events as above:
data1 = {'group_type': 'xxx'}, False
data2 = {'group_type': 'lab'}, True

t0 = time.time()
for event, expected in data1, data2:
    # we will lookup only once:
    assert f(state=event) == expected

print(
    'Calc.Time (delta_q was called just once):',
    round(time.time() - t0, 4),
)

# The deep switch keeps working:
cond2 = [cond, 'or', ['a-0-b', 'eq', 42]]
f = pc.pycond(
    cond2,
    lookup_provider=ApiCtxFuncs,
    deep='-',
    prefixed_lookup_funcs=False,
)
data2[0]['a'] = [{'b': 42}]
print('sample:', data2[0])
assert f(state=data2[0]) == True
```
Output:

```
Calculating cur_q
Calculating (expensive) delta_q
Calculating dt_last_enforce
Calculating cur_hour
Calc.Time (delta_q was called just once): 0.1004
sample: {'group_type': 'lab', 'a': [{'b': 42}]}
Calculating cur_q
Calculating (expensive) delta_q
Calculating dt_last_enforce
Calculating cur_hour
```


The output demonstrates that we did not even call the value provider functions for the dead branches of the condition.

NOTE: Instead of providing a class tree you may also provide a dict of functions via the `lookup_provider_dict` argument; see the `qualify` examples below.

## <a href="#toc46">Caching</a>

Note: Currently you cannot override these defaults. Drop an issue if you need to.

- Builtin state lookups: not cached
- Custom `lookup` functions: not cached (you can implement caching within those functions)
- Lookup provider return values: cached, i.e. called only once per data set
- Named condition sets (see below): cached
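
A small sketch of the lookup provider caching, using the `lookup_provider_dict` form from the Partial Evaluation section below (the function and condition names here are made up):

```python
calls = []

def hit(k, v, cfg, data, **kw):
    calls.append(data)
    return data.get('a'), v

q = pc.qualify(
    {'one': [':hit', 'eq', 1], 'two': [':hit', 'eq', 1]},
    lookup_provider_dict={'hit': {'func': hit}},
)
assert q({'a': 1}) == {'one': True, 'two': True}
assert len(calls) == 1  # the provider result was cached across both named conditions
```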

## <a href="#toc47">Extensions</a>

We deliver a few lookup function [extensions][pycond.py#711]:

- for time checks
- for os.environ checks (re-evaluated at runtime)



```python
from datetime import datetime as dt
from os import environ as env

this_sec = dt.now().second
this_utc_hour = dt.utcnow().hour
f = pc.pycond(
    [
        ['env:foo', 'eq', 'bar'],
        'and',
        # not breaking the build when the sec just jumps:
        ['dt:second', 'in', [this_sec, this_sec + 1, 0]],
        'and',
        ['utc:hour', 'eq', this_utc_hour],
    ]
)
env['foo'] = 'bar'
assert f(state={'a': 1}) == True
```



## <a href="#toc48">Named Conditions: Qualification</a>

Instead of just delivering booleans, pycond can be used to determine a whole set of
information about data declaratively, like so:  


```python
# We accept different forms of delivery.
# The first full text is restricted to simple flat dicts only:
for c in [
    'one: a gt 10, two: a gt 10 or foo eq bar',
    {'one': 'a gt 10', 'two': 'a gt 10 or foo eq bar'},
    {
        'one': ['a', 'gt', 10],
        'two': ['a', 'gt', 10, 'or', 'foo', 'eq', 'bar'],
    },
]:
    f = pc.qualify(c)
    r = f({'foo': 'bar', 'a': 0})
    assert r == {'one': False, 'two': True}
```


We may refer to results of other named conditions and also can pass named condition sets as lists instead of dicts:  


```python
def run(q):
    print('Running', q)

    class F:
        def custom(data):
            return data.get('a')

    f = pc.qualify(q, lookup_provider=F)

    assert f({'a': 'b'}) == {
        'first': True,
        'listed': [False, False],
        'thrd': True,
        'zero': True,
        'last': True,
    }
    res = f({'c': 'foo', 'x': 1})
    assert res == {
        'first': False,
        'listed': [False, True],
        'thrd': False,
        'zero': True,
        'last': True,
    }

q = {
    'thrd': ['k', 'or', ':first'],
    'listed': [['foo'], ['c', 'eq', 'foo']],
    'zero': [['x', 'eq', 1], 'or', ':thrd'],
    'first': [':custom', 'eq', 'b'],
    'last': True,  # you might want to do this to always get at least one matcher, e.g. for data streaming
}
# as list of conditions:
run(q)

# as dict:
q = dict([[k, v] for k, v in q.items()])
run(q)
```
Output:

```
Running {'thrd': ['k', 'or', ':first'], 'listed': [['foo'], ['c', 'eq', 'foo']], 'zero': [['x', 'eq', 1], 'or', ':thrd'], 'first': [':custom', 'eq', 'b'], 'last': True}
Running {'thrd': ['k', 'or', ':first'], 'listed': [['foo'], ['c', 'eq', 'foo']], 'zero': [['x', 'eq', 1], 'or', ':thrd'], 'first': [':custom', 'eq', 'b'], 'last': True}
```

WARNING: For performance reasons there is no built in circular reference check. You'll run into python's built in recursion checker!

## <a href="#toc49">Options</a>

- into: Put the matched named conditions into the original data
- prefix: Work from a prefix nested in the root
- add_cached: Return also the data from function result cache

Here are a few variants to parametrize the behaviour, by example:  


```python
conds = {
    0: ['foo'],
    1: ['bar'],
    2: ['func'],
    3: ['n'],
    'n': ['bar'],
}

class F:
    def func(*a, **kw):
        return True, 0

q = lambda d, **kw: pc.qualify(
    conds, lookup_provider=F, prefixed_lookup_funcs=False, **kw
)(d)

m = q({'bar': 1})
assert m == {0: False, 1: True, 2: True, 3: True, 'n': True}

# return data, with matched conds in:
m = q({'bar': 1}, into='conds')
assert m == {
    'bar': 1,
    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True},
}

def msg():
    return {'bar': 1, 'pl': {'a': 1}}

# add_cached == True -> it's put into the cond results:
m = q(msg(), into='conds', add_cached=True)
assert m == {
    'bar': 1,
    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True, 'func': True},
    'pl': {'a': 1},
}

m = q(msg(), into='conds', add_cached='pl')
assert m == {
    'bar': 1,
    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True},
    # n had been put into the cache, was not evaled twice:
    'pl': {'a': 1, 'func': True, 'n': True},
}

m = q({'bar': 1}, add_cached='pl')
assert m == {0: False, 1: True, 2: True, 3: True, 'n': True, 'func': True}

# prefix -> Nr 1, bar,  should NOT be True, since not in pl now:
m = q(
    msg(),
    prefix='pl',
    into='conds',
    add_cached='pl',
)
assert m == {
    'bar': 1,
    'conds': {0: False, 1: False, 2: True, 3: False, 'n': False},
    'pl': {'a': 1, 'func': True, 'n': False},
}
```



## <a href="#toc50">Partial Evaluation</a>

If you either supply a key called 'root' OR supply it as an argument to `qualify`, pycond will only evaluate the named conditions required to calculate the root key:



```python
called = []

def expensive_func(k, v, cfg, data, **kw):
    called.append(data)
    return 1, v

def xx(k, v, cfg, data, **kw):
    called.append(data)
    return data.get('a'), v

funcs = {'exp': {'func': expensive_func}, 'xx': {'func': xx}}
q = {
    'root': ['foo', 'and', ':bar'],
    'bar': [
        ['somecond'],
        'or',
        [[':exp', 'eq', 1], 'and', ':baz'],
    ],
    'x': [':xx'],
    'baz': [':exp', 'lt', 10],
}
qualifier = pc.qualify(q, lookup_provider_dict=funcs, add_cached=True)

d = {'foo': 1}
r = qualifier(d)

# root, bar, baz had been calculated, not x
assert r == {'root': True, 'bar': True, 'baz': True, 'exp': 1}
# expensive_func result, which was cached, is also returned.
# expensive_func was only called once, although its result was evaluated for bar and baz:
assert len(called) == 1

called.clear()
f = pc.qualify(q, lookup_provider_dict=funcs, root='x', add_cached=True)
assert f({'a': 1}) == {'x': True, 'xx': 1}
assert f({'b': 1}) == {'x': False, 'xx': None}
assert called == [{'a': 1}, {'b': 1}]
```

This means pycond can be used as a lightweight declarative function dispatching framework.
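
For instance, a minimal dispatch sketch (the condition names and handlers here are hypothetical):

```python
handlers = {'small': lambda d: 'handle small', 'big': lambda d: 'handle big'}
q = pc.qualify({'small': ['n', 'lt', 10], 'big': ['n', 'ge', 10]})

def dispatch(data):
    # run all matchers, then call the handlers of the matching ones:
    matched = q(data)
    return [handlers[k](data) for k, ok in matched.items() if ok]

assert dispatch({'n': 3}) == ['handle small']
assert dispatch({'n': 30}) == ['handle big']
```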


## <a href="#toc51">Streaming Data</a>

Since version 20200601 and for Python 3.x, pycond can deliver [ReactiveX](https://github.com/ReactiveX/RxPY) compliant stream operators.

Let's first set up a test data stream, by defining a function `rx_setup` like so:



```python
# effectively: `import rx as Rx` and `rx = rx.operators`:
# import pycond as pc, like always:
Rx, rx, GS = pc.import_rx('GS')

def push_through(*test_pipe, items=4):
    """
    Function which takes a set of operators and runs an 'rx.interval' stream, until `items` messages went through
    """

    # stream sink result holder plus a stream completer:
    l, compl = [], rx.take(items)
    l.clear()  # clear any previous results

    def next_(x):
        # simply remember what went through in a list:
        l.append(x)

    def err(*a):
        # should never happen:
        print('exception', a)

    stream = Rx.interval(0.01)  # numbers, each on its own thread

    # turns the ints into dicts: {'i': 1}, then {'i': 2} and so on:
    # (we start from 1, the first 0 we filter out)
    stream = stream.pipe(
        rx.filter(lambda i: i > 0), rx.map(lambda i: {'i': i})
    )

    # defines the stream through the tested operators:
    test_pipe = test_pipe + (compl,)
    s = stream.pipe(*test_pipe)

    # runs the stream:
    d = s.subscribe(
        on_error=err,
        on_next=next_,
        on_completed=lambda: l.append('completed'),
    )

    # blocks until completed:
    while not (l and l[-1] == 'completed'):
        time.sleep(0.001)
    l.pop()  # removes completed indicator

    return l  # returns all processed messages

return Rx, rx, push_through
```

Let's test the setup by having some messages streamed through:



```python
Rx, rx, push_through = rx_setup()
# test test setup:
r = push_through(items=3)
assert r == [{'i': 1}, {'i': 2}, {'i': 3}]
```

-> test setup works.

### <a href="#toc52">Filtering</a>

This is the most simple operation: A simple stream filter.



```python
Rx, rx, push_through = rx_setup()

# ask pycond for a stream filter based on a condition:
pcfilter = partial(pc.rxop, ['i', 'mod', 2])

r = push_through(pcfilter())
odds = [{'i': 1}, {'i': 3}, {'i': 5}, {'i': 7}]
assert r == odds

# try the stream filter with message headered data:
pl = 'payload'
r = push_through(rx.map(lambda i: {pl: i}), pcfilter(prefix=pl))
print('Full messages passed:', r)
r = [m[pl] for m in r]
assert len(r) == 4
assert r == odds
```
Output:

```
Full messages passed: [{'payload': {'i': 1}}, {'payload': {'i': 3}}, {'payload': {'i': 5}}, {'payload': {'i': 7}}]
```

### <a href="#toc53">Streaming Classification</a>

Using named condition dicts we can classify data, i.e. tag it, in order to process it subsequently:



```python
Rx, rx, push_through = rx_setup()

# generate a set of classifiers:
conds = [['i', 'mod', i] for i in range(2, 4)]

def run(offs=0):

    # and get a classifying operator from pycond, adding the results in place, at key 'mod':
    r = push_through(pc.rxop(conds, into='mod'))
    i, j = 0 + offs, 1 + offs
    assert r == [
        {'i': 1, 'mod': {i: 1, j: 1}},
        {'i': 2, 'mod': {i: 0, j: 2}},
        {'i': 3, 'mod': {i: 1, j: 0}},
        {'i': 4, 'mod': {i: 0, j: 1}},
    ]

# this will automatically number the classifiers, from 0:
run()

# we can also provide the names of the classifiers by passing a dict:
# here we pass 2 and 3 as those names:
conds = dict([(i, ['i', 'mod', i]) for i in range(2, 4)])
run(2)
```

Normally the data has headers, so that's a good place to keep the classification tags.

### <a href="#toc54">Selective Classification</a>

We fall back to an alternative condition evaluation (which could be a function call) *only* when a previous condition evaluation returns something falsy - by providing a *root condition*.
When that one is evaluated, possibly requiring evaluation of other conditions, we return:  


```python
Rx, rx, push_through = rx_setup()

# using the list style:
conds = [[i, [['i', 'mod', i], 'or', ':alt']] for i in range(2, 4)]
conds.append(['alt', ['i', 'gt', 1]])

# provide the root condition. Only when it evals falsy, the named "alt" condition will be evaluated:
r = push_through(pc.rxop(conds, into='mod', root=2, add_cached=True))

assert r == [
    # evaluation of alt was not required:
    {'i': 1, 'mod': {2: True}},
    # evaluation of alt was required:
    {'i': 2, 'mod': {2: True, 'alt': True}},
    {'i': 3, 'mod': {2: True}},
    {'i': 4, 'mod': {2: True, 'alt': True}},
]
```

## <a href="#toc55">Treating of Booleans (Conditions, Not Names)</a>

For the special case of booleans in a condition list we do not treat them as names.  


```python
# 2 unnamed conditions -> keys will be positional
qs = pc.qualify([True, False])
res = qs({'a': 1})
assert res == {0: True, 1: False}  # and not {True: False}
# 2 named conds
qs = pc.qualify([[1, ['a', 'eq', 1]], [2, ['b', 'eq', 42]]])
res = qs({'a': 1})
assert res == {1: True, 2: False}
```

## <a href="#toc56">Asyncronous Operations</a>

WARNING: Early version. Only for the gevent platform.

Selective classification allows calling condition functions only when other criteria are met.
That makes it possible to read e.g. from a database only when the data is really required - and not always, "just in case".

pycond allows you to define that blocking operations should be run *async* within the stream, possibly giving up order.

### <a href="#toc57">Asyncronous Filter</a>

First a simple filter, which gives up order but does not block:



```python
Rx, rx, push_through = rx_setup()

class F:
    def check(k, v, cfg, data, t0=[], **kw):
        # will be on different thread:
        i, pointer = data['i'], ''
        if not t0:
            t0.append(now())
        if i == 1:
            # ints are fired at 0.01, i.e. the 1 will land 4 after 1:
            time.sleep(0.048)
            pointer = '   <----- not in order, blocked'
        # demonstrate that item 1 is not blocking anything - just order is disturbed:
        print('item %s: %.3fs %s' % (i, now() - t0[0], pointer))
        return i % 2, v

# have the operator built for us - with a single condition filter:
rxop = pc.rxop(
    [':check'],
    into='mod',
    lookup_provider=F,
    asyn=['check'],
)
r = push_through(rxop, items=5)
assert [m['i'] for m in r] == [3, 5, 1, 7, 9]
```
Output:

```
item 2: 0.011s 
item 3: 0.022s 
item 4: 0.034s 
item 5: 0.045s 
item 1: 0.049s    <----- not in order, blocked
item 6: 0.057s 
item 7: 0.068s 
item 8: 0.079s 
item 9: 0.090s
```

Finally, asynchronous classification, i.e. evaluation of multiple conditions:



```python
def _thn(msg, data):
    return print('thread:', cur_thread().name, msg, data)

# push_through just runs a stream of {'i': <nr>} through a given operator:
Rx, rx, push_through = rx_setup()

# Defining a simple 'set' of classifiers, here as list, with one single key: 42:
conds = [
    [
        42,
        [
            ['i', 'lt', 100],
            'and',
            [[':odd', 'eq', 1], 'or', ['i', 'eq', 2]],
            'and_not',
            [':blocking', 'eq', 3],
        ],
    ]
]

class F:
    """
    Namespace for condition lookup functions.
    You may also pass a dict(lookup_provider_dict)

    We provide the functions for 'odd' and 'blocking'.
    """

    def odd(k, v, cfg, data, **kw):
        # just print the threadname.
        # will go up, interval stream has each nr on its own thread:
        _thn('odd', data)
        # fulfill condition only for odd numbers
        # -> even nrs won't even run func 'blocking':
        return data['i'] % 2, v

    def blocking(k, v, cfg, data, **kw):
        i = data['i']
        # will be on different thread:
        _thn('blocking', data)
        if i == 1:
            # two others will overtake the i=1 item,
            # since the interval stream is firing every 0.01 secs:
            time.sleep(0.028)
        elif i == 2:
            # Exceptions, incl. timeouts, will simply be forwarded to cfg['err_handler']
            # i.e. also timeout mgmt have to be done here, in the custom functions themselves.

            # Rationale for not providing a timeout monitoring within pycond itself:
            # Async ops are done with libs, which ship with their own timeout params.
            # No need to re-invent / overlay with our own monitoring of that.

            # In the err handler, then further arrangements can be done.
            raise TimeoutError('ups')
        elif i == 5:
            1 / 0
        return data['i'], v

errors = []

def handle_err(item, cfg, ctx, exc, t=errors, **kw):
    # args are: [item, cfg]
    if 'ups' in str(exc):
        assert item['i'] == 2
        assert exc.__class__ == TimeoutError
        t.append(item)
    else:
        assert item['i'] == 5
        assert exc.__class__ == ZeroDivisionError
        t.append(item)

# have the operator built for us:
rxop = pc.rxop(
    conds,
    into='mod',
    lookup_provider=F,
    err_handler=handle_err,
    asyn=['blocking'],
)
r = push_through(rxop, items=5)
assert [m['i'] for m in r] == [3, 1, 4, 6, 7]
assert [m['mod'][42] for m in r] == [False, True, False, False, True]
# item 2 caused a timeout:
assert [t['i'] for t in errors] == [2, 5]
```
Output:

```
thread: Thread-55 odd {'i': 1}
thread: Dummy-57 blocking {'i': 1}
thread: Thread-56 odd {'i': 2}
thread: Dummy-59 blocking {'i': 2}
thread: Thread-58 odd {'i': 3}
thread: Dummy-61 blocking {'i': 3}
thread: Thread-60 odd {'i': 4}
thread: Thread-62 odd {'i': 5}
thread: Dummy-64 blocking {'i': 5}
thread: Thread-63 odd {'i': 6}
thread: Thread-65 odd {'i': 7}
thread: Dummy-67 blocking {'i': 7}
```


*Auto generated by [pytest2md](https://github.com/axiros/pytest2md), running [./tests/test_tutorial.py](./tests/test_tutorial.py)*

<!-- autogen tutorial -->


<!-- autogenlinks -->
[pycond.py#614]: https://github.com/axiros/pycond/blob/54fea4c6b0cdc62bcf0b984214649344b1a77230/pycond.py#L614
[pycond.py#711]: https://github.com/axiros/pycond/blob/54fea4c6b0cdc62bcf0b984214649344b1a77230/pycond.py#L711


            

Eval involved.\nassert pc.pycond('a.b.0.c', deep3='.')(state=m) == True\n```\n\nThe evaluation results for the keys are cached. The cache is cleared after 1Mio entries but can be cleared manually via `pc.clear_caches()` any time before that.\n\n### <a href=\"#toc13\">Best Practices</a>\n\n- Lookup keys change all the time, not many items checked for specific key: Use `deep`\n- Many items to be checked with same keys, input from untrusted users: Use `deep2`\n- Many items to be checked with same keys, input from trusted users: Use `deep3`\n\n\n\n## <a href=\"#toc14\">Prefixed Data</a>\n\nWhen data is passed through processing pipelines, it often is passed with headers. So it may be useful to pass a global prefix to access the payload like so:\n\n\n\n```python\nm = {'payload': {'b': [{'c': 1}], 'id': 123}}\nassert pc.pycond('b.0.c', deep='.', prefix='payload')(state=m) == True\n```\n\n## <a href=\"#toc15\">Attributes Access</a>\n\nSince version 20210221 we try attributes when objects are not dicts:\n\n\n\n```python\nclass MyObj:\n    val = {'a': 'b'}\n\nm = {'payload': {'obj': MyObj()}}\ncond = [['obj.val.a', 'eq', 'b']]\nassert pc.pycond(cond, deep='.', prefix='payload')(state=m) == True\n```\n\n\n## <a href=\"#toc16\">Custom Lookup And Value Passing</a>\n\nYou can supply your own function for value acquisition.\n\n- Signature: See example.\n- Returns: The value for the key from the current state plus the\n  compare value for the operator function.  \n\n\n```python\n# must return a (key, value) tuple:\nmodel = {'eve': {'last_host': 'somehost'}}\n\ndef my_lu(k, v, req, user, model=model):\n    print('user check. locals:', dict(locals()))\n    return (model.get(user) or {}).get(k), req[v]\n\nf = pc.pycond('last_host eq host', lookup=my_lu)\n\nreq = {'host': 'somehost'}\nassert f(req=req, user='joe') == False\nassert f(req=req, user='eve') == True\n```\nOutput:\n\n```\nuser check. locals: {'k': 'last_host', 'v': 'host', 'req': {'host': 'somehost'}, 'user': 'joe', 'model': {'eve': {'last_host': 'somehost'}}}\nuser check. 
locals: {'k': 'last_host', 'v': 'host', 'req': {'host': 'somehost'}, 'user': 'eve', 'model': {'eve': {'last_host': 'somehost'}}}\n```\n\n> as you can see in the example, the state parameter is just a convention\nfor `pyconds'` [title: default lookup function, fmatch:pycond.py, lmatch:def state_get] < SRC > .\n\n## <a href=\"#toc17\">Lazy Evaluation</a>\n\nThis is avoiding unnecessary calculations in many cases:\n\nWhen an evaluation branch contains an \"and\" or \"and_not\" combinator, then\nat runtime we evaluate the first expression - and stop if it is already\nFalse.\nSame when first expression is True, followed by \"or\" or \"or_not\".\n\nThat way expensive deep branch evaluations are omitted or, when\nthe lookup is done lazy, the values won't be even fetched:\n\n\n\n```python\nevaluated = []\n\ndef myget(key, val, cfg, state=None, **kw):\n    evaluated.append(key)\n    return pc.state_get(key, val, cfg, state, **kw)\n\nf = pc.pycond('[a eq b] or foo eq bar and baz eq bar', lookup=myget)\nassert f(state={'foo': 42}) == False\n# the value for \"baz\" is not even fetched and the whole (possibly\n# deep) branch after the last and is ignored:\nassert evaluated == ['a', 'foo']\nprint(evaluated)\nevaluated.clear()\n\nf = pc.pycond('[[a eq b] or foo eq bar] and baz eq bar', lookup=myget)\nassert f(state={'a': 'b', 'baz': 'bar'}) == True\n# the value for \"baz\" is not even fetched and the whole (possibly\n# deep) branch after the last and is ignored:\nassert evaluated == ['a', 'baz']\nprint(evaluated)\n```\nOutput:\n\n```\n['a', 'foo']\n['a', 'baz']\n```\n\nRemember that all keys occurring in a condition(which may be provided by the user at runtime) are returned by the condition parser. Means that building of evaluation contexts[can be done](  # context-on-demand-and-lazy-evaluation), based on the data actually needed and not more.\n\n## <a href=\"#toc18\">Condition Operators (Comparators)</a>\n\nAll boolean[standardlib operators](https://docs.python.org/2/library/operator.html)\nare available by default:\n\n\n\n```python\nfrom pytest2md import html_table as tbl  # just a table gen.\nfrom pycond import get_ops\n\nfor k in 'nr', 'str':\n    s = 'Default supported ' + k + ' operators...(click to extend)'\n    print(tbl(get_ops()[k], [k + ' operator', 'alias'], summary=s))\n```\n\n\n<details><summary>Default supported nr operators...(click to extend)</summary>\n\n<table>\n<tr><td>nr 
operator</td><td>alias</td></tr>\n<tr><td>add</td><td>+</td></tr>\n<tr><td>and_</td><td>&</td></tr>\n<tr><td>eq</td><td>==</td></tr>\n<tr><td>floordiv</td><td>//</td></tr>\n<tr><td>ge</td><td>>=</td></tr>\n<tr><td>gt</td><td>></td></tr>\n<tr><td>iadd</td><td>+=</td></tr>\n<tr><td>iand</td><td>&=</td></tr>\n<tr><td>ifloordiv</td><td>//=</td></tr>\n<tr><td>ilshift</td><td><<=</td></tr>\n<tr><td>imod</td><td>%=</td></tr>\n<tr><td>imul</td><td>*=</td></tr>\n<tr><td>ior</td><td>|=</td></tr>\n<tr><td>ipow</td><td>**=</td></tr>\n<tr><td>irshift</td><td>>>=</td></tr>\n<tr><td>is_</td><td>is</td></tr>\n<tr><td>is_not</td><td>is</td></tr>\n<tr><td>isub</td><td>-=</td></tr>\n<tr><td>itruediv</td><td>/=</td></tr>\n<tr><td>ixor</td><td>^=</td></tr>\n<tr><td>le</td><td><=</td></tr>\n<tr><td>lshift</td><td><<</td></tr>\n<tr><td>lt</td><td><</td></tr>\n<tr><td>mod</td><td>%</td></tr>\n<tr><td>mul</td><td>*</td></tr>\n<tr><td>ne</td><td>!=</td></tr>\n<tr><td>or_</td><td>|</td></tr>\n<tr><td>pow</td><td>**</td></tr>\n<tr><td>rshift</td><td>>></td></tr>\n<tr><td>sub</td><td>-</td></tr>\n<tr><td>truediv</td><td>/</td></tr>\n<tr><td>xor</td><td>^</td></tr>\n<tr><td>itemgetter</td><td></td></tr>\n<tr><td>length_hint</td><td></td></tr>\n</table>\n</details>\n\n\n\n\n<details><summary>Default supported str operators...(click to extend)</summary>\n\n<table>\n<tr><td>str operator</td><td>alias</td></tr>\n<tr><td>attrgetter</td><td></td></tr>\n<tr><td>concat</td><td>+</td></tr>\n<tr><td>contains</td><td></td></tr>\n<tr><td>countOf</td><td></td></tr>\n<tr><td>iconcat</td><td>+=</td></tr>\n<tr><td>indexOf</td><td></td></tr>\n<tr><td>methodcaller</td><td></td></tr>\n</table>\n</details>\n\n\n\n\n### <a href=\"#toc19\">Using Symbolic Operators</a>\n\nBy default pycond uses text style operators.\n\n- `ops_use_symbolic` switches processwide to symbolic style only.\n- `ops_use_symbolic_and_txt` switches processwide to both notations allowed.\n\n\n\n```python\npc.ops_use_symbolic()\npc.State['foo'] = 'bar'\nassert pc.pycond('foo == bar')() == True\ntry:\n    # this raises now, text ops not known anymore:\n    pc.pycond('foo eq bar')\nexcept:\n    pc.ops_use_symbolic_and_txt(allow_single_eq=True)\n    assert pc.pycond('foo = bar')() == True\n    assert pc.pycond('foo == bar')() == True\n    assert pc.pycond('foo eq bar')() == True\n    assert pc.pycond('foo != baz')() == True\n```\n\n\n> Operator namespace(s) should be assigned at process start, they are global.\n\n### <a href=\"#toc20\">Extending Condition Operators</a>\n\n\n\n```python\npc.OPS['maybe'] = lambda a, b: int(time.time()) % 2\n# valid expression now:\nassert pc.pycond('a maybe b')() in (True, False)\n```\n\n\n### <a href=\"#toc21\">Negation `not`</a>\n\nNegates the result of the condition operator:\n\n\n\n```python\npc.State['foo'] = 'abc'\nassert pc.pycond('foo eq abc')() == True\nassert pc.pycond('foo not eq abc')() == False\n```\n\n\n### <a href=\"#toc22\">Reversal `rev`</a>\n\nReverses the arguments before calling the operator  \n\n\n```python\npc.State['foo'] = 'abc'\nassert pc.pycond('foo contains a')() == True\nassert pc.pycond('foo rev contains abc')() == True\n```\n\n\n> `rev` and `not` can be combined in any order.\n\n### <a href=\"#toc23\">Wrapping Condition Operators</a>\n\n#### <a href=\"#toc24\">Global Wrapping</a>\n\nYou may globally wrap all evaluation time condition operations through a custom function:\n\n\n\n```python\nl = []\n\ndef hk(f_op, a, b, l=l):\n    l.append((getattr(f_op, '__name__', ''), a, b))\n    return f_op(a, 
b)\n\npc.run_all_ops_thru(hk)  # globally wrap the operators\n\npc.State.update({'a': 1, 'b': 2, 'c': 3})\nf = pc.pycond('a gt 0 and b lt 3 and not c gt 4')\nassert l == []\nf()\nexpected_log = [('gt', 1, 0.0), ('lt', 2, 3.0), ('gt', 3, 4.0)]\nassert l == expected_log\npc.ops_use_symbolic_and_txt()\n```\n\n\nYou may compose such wrappers via repeated application of the `run_all_ops_thru` API function.\n\n### <a href=\"#toc25\">Condition Local Wrapping</a>\n\nThis is done through the `ops_thru` parameter as shown:\n\n\n\n```python\ndef myhk(f_op, a, b):\n    return True\n\npc.State['a'] = 1\nf = pc.pycond('a eq 2')\nassert f() == False\nf = pc.pycond('a eq 2', ops_thru=myhk)\nassert f() == True\n```\n\n\n> Using `ops_thru` is a good way to debug unexpected results, since you\n> can add breakpoints or loggers there.\n\n### <a href=\"#toc26\">Combining Operations</a>\n\nYou can combine single conditions with\n\n- `and`\n- `and not`\n- `or`\n- `or not`\n- `xor` by default.\n\nThe combining functions are stored in `pycond.COMB_OPS` dict and may be extended.\n\n> Do not use spaces for the names of combining operators. The user may use them but they are replaced at before tokenizing time, like `and not` -> `and_not`.\n\n## <a href=\"#toc27\">Details</a>\n\n### <a href=\"#toc28\">Debugging Lookups</a>\n\npycond provides a key getter which prints out every lookup.  \n\n\n```python\nf = pc.pycond('[[a eq b] or foo eq bar] or [baz eq bar]', lookup=pc.dbg_get)\nassert f(state={'foo': 'bar'}) == True\n```\nOutput:\n\n```\nLookup: a b -> None\nLookup: foo bar -> bar\n```\n\n### <a href=\"#toc29\">Enabling/Disabling of Branches</a>\n\nInsert booleans like shown:  \n\n\n```python\nf = pc.pycond(['foo', 'and', ['bar', 'eq', 1]])\nassert f(state={'foo': 1}) == False\nf = pc.pycond(['foo', 'and', [True, 'or', ['bar', 'eq', 1]]])\nassert f(state={'foo': 1}) == True\n```\n\n### <a href=\"#toc30\">Building Conditions From Text</a>\n\nCondition functions are created internally from structured expressions -\nbut those are[hard to type](  # lazy-dynamic-context-assembly),\ninvolving many apostropies.\n\nThe text based condition syntax is intended for situations when end users\ntype them into text boxes directly.\n\n#### <a href=\"#toc31\">Grammar</a>\n\nCombine atomic conditions with boolean operators and nesting brackets like:\n\n```\n[< atom1 > < and | or | and not|... > <atom2 > ] < and|or... 
> [ [ < atom3 > ....\n```\n\n#### <a href=\"#toc32\">Atomic Conditions</a>\n\n```\n[not] < lookup_key > [[rev] [not] < condition operator (co) > <value > ]\n```\n- When just `lookup_key` is given, then `co` is set to the `truthy` function:\n```python\ndef truthy(key, val=None):\n    return operatur.truth(k)\n```\n\nso such an expression is valid and True:\n\n\n\n```python\npc.State.update({'foo': 1, 'bar': 'a', 'baz': []})\nassert pc.pycond('[ foo and bar and not baz]')() == True\n```\n\n- When `not lookup_key` is given, then `co` is set to the `falsy`\n  function:\n\n\n\n```python\nm = {'x': 'y', 'falsy_val': {}}\n# normal way\nassert pc.pycond(['foo', 'eq', None])(state=m) == True\n# using \"not\" as prefix:\nassert pc.pycond('not foo')(state=m) == True\nassert pc.pycond(['not', 'foo'])(state=m) == True\nassert pc.pycond('not falsy_val')(state=m) == True\nassert pc.pycond('x and not foo')(state=m) == True\nassert pc.pycond('y and not falsy_val')(state=m) == False\n```\n\n\n#### <a href=\"#toc33\">Nesting</a>\n\nCombined conditions may be arbitrarily nested using brackets \"[\" and \"]\".\n\n> Via the `brkts` config parameter you may change those to other separators at build time.\n\n### <a href=\"#toc34\">Tokenizing Details</a>\n\n> Brackets as strings in this flat list form, e.g. `['[', 'a', 'and' 'b', ']'...]`\n\n#### <a href=\"#toc35\">Functioning</a>\n\nThe tokenizers job is to take apart expression strings for the builder.\n\n#### <a href=\"#toc36\">Separator `sep`</a>\n\nSeparates the different parts of an expression. Default is ' '.\n\n\n\n```python\npc.State['a'] = 42\nassert pc.pycond('a.eq.42', sep='.')() == True\n```\n\n> sep can be a any single character including binary.\n\nBracket characters do not need to be separated, the tokenizer will do:\n\n\n\n```python\n# equal:\nassert (\n    pc.pycond('[[a eq 42] and b]')() == pc.pycond('[ [ a eq 42 ] and b ]')()\n)\n```\n\n> The condition functions themselves do not evaluate equal - those\n> had been assembled two times.\n\n#### <a href=\"#toc37\">Apostrophes</a>\n\nBy putting strings into Apostrophes you can tell the tokenizer to not further inspect them, e.g. 
for the seperator:\n\n\n\n```python\npc.State['a'] = 'Hello World'\nassert pc.pycond('a eq \"Hello World\"')() == True\n```\n\n\n#### <a href=\"#toc38\">Escaping</a>\n\nTell the tokenizer to not interpret the next character:\n\n\n\n```python\npc.State['b'] = 'Hello World'\nassert pc.pycond('b eq Hello\\ World')() == True\n```\n\n\n### <a href=\"#toc39\">Building</a>\n\n#### <a href=\"#toc40\">Autoconv: Casting of values into python simple types</a>\n\nExpression string values are automatically cast into bools and numbers via the public `pycond.py_type` function.\n\nThis can be prevented by setting the `autoconv` parameter to `False` or by using Apostrophes:\n\n\n\n```python\npc.State['a'] = '42'\nassert pc.pycond('a eq 42')() == False\n# compared as string now\nassert pc.pycond('a eq \"42\"')() == True\n# compared as string now\nassert pc.pycond('a eq 42', autoconv=False)() == True\n```\n\n\nIf you do not want to provide a custom lookup function(where you can do what you want)\nbut want to have looked up keys autoconverted then use:\n\n\n\n```python\nfor id in '1', 1:\n    pc.State['id'] = id\n    assert pc.pycond('id lt 42', autoconv_lookups=True)\n```\n\n\n## <a href=\"#toc41\">Context On Demand</a>\n\nOften the conditions are in user space, applied on data streams under\nthe developer's control only at development time.\n\nThe end user might pick only a few keys from many offered within an API.\n\npycond's `ctx_builder` allows to only calculate those keys at runtime,\nthe user decided to base conditions upon:\nAt condition build time hand over a namespace for *all * functions which\nare available to build the ctx.\n\n`pycon` will return a context builder function for you, calling only those functions\nwhich the condition actually requires.\n\n\n\n```python\npc.ops_use_symbolic_and_txt(allow_single_eq=True)\n\n# Condition the end user configured, e.g. at program run time:\ncond = [\n    ['group_type', 'in', ['lab', 'first1k', 'friendly', 'auto']],\n    'and',\n    [\n        [\n            [\n                [\n                    ['cur_q', '<', 0.5],\n                    'and',\n                    ['delta_q', '>=', 0.15],\n                ],\n                'and',\n                ['dt_last_enforce', '>', 28800],\n            ],\n            'and',\n            ['cur_hour', 'in', [3, 4, 5]],\n        ],\n        'or',\n        [\n            [\n                [\n                    ['cur_q', '<', 0.5],\n                    'and',\n                    ['delta_q', '>=', 0.15],\n                ],\n                'and',\n                ['dt_last_enforce', '>', 28800],\n            ],\n            'and',\n            ['clients', '=', 0],\n        ],\n    ],\n]\n\n# Getters for API keys offered to the user, involving potentially\n# expensive to fetch context delivery functions:\n# Signature must provide minimum a positional for the current\n# state:\nclass ApiCtxFuncs:\n    def expensive_but_not_needed_here(ctx):\n        raise Exception(\"Won't run with cond. 
from above\")\n\n    def cur_q(ctx):\n        print('Calculating cur_q')\n        return 0.1\n\n    def cur_hour(ctx):\n        print('Calculating cur_hour')\n        return 4\n\n    def dt_last_enforce(ctx):\n        print('Calculating dt_last_enforce')\n        return 10000000\n\n    def delta_q(ctx):\n        print('Calculating (expensive) delta_q')\n        time.sleep(0.1)\n        return 1\n\n    def clients(ctx):\n        print('Calculating clients')\n        return 0\n\nif sys.version_info[0] < 3:\n    # we don't think it is a good idea to make the getter API stateful ;-)\n    p2m.convert_to_staticmethods(ApiCtxFuncs)\n\nf, nfos = pc.parse_cond(cond, ctx_provider=ApiCtxFuncs)\n\n# now we create (incomplete) data..\ndata1 = {'group_type': 'xxx'}, False\ndata2 = {'group_type': 'lab'}, True\n\n# this key stores a context builder function, calculating the complete data:\nmake_ctx = nfos['complete_ctx']\n\nt0 = time.time()\nfor event, expected in data1, data2:\n    assert f(state=make_ctx(event)) == expected\n\nprint('Calc.Time (delta_q was called twice):', round(time.time() - t0, 4)),\nreturn cond, ApiCtxFuncs\n```\nOutput:\n\n```\nCalculating clients\nCalculating cur_hour\nCalculating cur_q\nCalculating (expensive) delta_q\nCalculating dt_last_enforce\nCalculating clients\nCalculating cur_hour\nCalculating cur_q\nCalculating (expensive) delta_q\nCalculating dt_last_enforce\nCalc.Time (delta_q was called twice): 0.2006\n```\n\n\n## <a href=\"#toc42\">Lookup Providers</a>\n\nContextBuilders are interesting but we can do better.\n\nWe still calculated values for keys which might(dependent on the data) be not needed in dead ends of a lazily evaluated condition.\n\nLets avoid calculating these values, remembering the [custom lookup function](#custom-lookup-and-value-passing) feature.\n\nThis is where lookup providers come in, providing namespaces for functions to be called conditionally.\n\nPycond [treats the condition keys as function names][pycond.py#614] within that namespace and calls them, when needed.\n\n### <a href=\"#toc43\">Accepted Signatures</a>\n\nLookup provider functions may have the following signatures:\n\n\n\n```python\nclass F:\n    # simple data passing\n    def f1(data):\n        \"\"\"simple return a value being compared, getting passed the state/data\"\"\"\n        return data['a']\n\n    # simple, with ctx\n    def f2(data, **kw):\n        \"\"\"\n        simple return a value being compared, getting passed the state/data\n        All context information within kw, compare value not modifiable\n        \"\"\"\n        return data['b']\n\n    # full pycond compliant signature,\n    def f3(key, val, cfg, data, **kw):\n        \"\"\"\n        full pycond signature.\n        val is the value as defined by the condition, and which you could return modified\n        kw holds the cache, cfg holds the setup\n        v has to be returned:\n        \"\"\"\n        return data['c'], 100  # not 45!\n\n    # applied al\n    def f4(*a, **kw):\n        \"\"\"\n        Full variant(always when varargs are involved)\n        \"\"\"\n        return a[3]['d'], 'foo'\n\n_ = 'and'\nf = pc.pycond(\n    [\n        [':f1', 'eq', 42],\n        _,\n        [':f2', 'eq', 43, _, ':f3', 'eq', 45],\n        _,\n        [':f4', 'eq', 'foo'],\n    ],\n    lookup_provider=F,\n)\nassert f(state={'a': 42, 'b': 43, 'c': 100, 'd': 'foo'}) == True\n```\n\n### <a href=\"#toc44\">Parametrized Lookup Functions</a>\n\nVia the 'params' parameter you may supply keyword args to lookup functions:  
\n\n\n```python\nclass F:\n    def hello(k, v, cfg, data, count, **kw):\n        return data['foo'] == count, 0\n\nm = pc.pycond([':hello'], lookup_provider=F, params={'hello': {'count': 2}})(\n    state={'foo': 2}\n)\nassert m == True\n```\n\n\n### <a href=\"#toc45\">Namespace</a>\n\n- Lookup functions can be found in nested class hirarchies or dicts. Separator is colon(':')\n- As shown above, if they are flat within a toplevel class or dict you should still prefix with ':', to get build time exception(MissingLookupFunction) when not present\n- You can switch that behaviour off per condition build as config arg, as shown below\n- You can switch that behaviour off globally via `pc.prefixed_lookup_funcs=False`\n\nWarning: This is a breaking API change with pre-20200610 versions, where the prefix was not required to find functions in, back then, only flat namespaces. Use the global switch after import to get the old behaviour.\n\n\n\n```python\nclass F:\n    def a(data):\n        return data['foo']\n\n    class inner:\n        def b(data):\n            return data['bar']\n\nm = {'c': {'d': {'func': lambda data: data['baz']}}}\n\n# for the inner lookup the first prefix may be omitted:\n_ = 'and'\ncond = [\n    [':a', 'eq', 'foo1'],\n    _,\n    ['inner:b', 'eq', 'bar1'],\n    _,\n    [\n        'c:d',\n        'eq',\n        'baz1',\n    ],\n]\nc = pc.pycond(cond, lookup_provider=F, lookup_provider_dict=m)\nassert c(state={'foo': 'foo1', 'bar': 'bar1', 'baz': 'baz1'}) == True\n\n# Prefix checking on / off:\ntry:\n    pc.pycond([':xx', 'and', cond])\n    i = 9 / 0  # above will raise this:\nexcept pc.MissingLookupFunction:\n    pass\ntry:\n    pc.pycond([':xx', 'and', cond], prefixed_lookup_funcs=False)\n    i = 9 / 0  # above will raise this:\nexcept pc.MissingLookupFunction:\n    pass\ncond[0] = 'a'  # remove prefix, will still be found\nc = pc.pycond(\n    ['xx', 'or', cond],\n    lookup_provider=F,\n    lookup_provider_dict=m,\n    prefixed_lookup_funcs=False,\n)\nassert c(state={'foo': 'foo1', 'bar': 'bar1', 'baz': 'baz1'}) == True\n```\n\nYou can switch that prefix needs off - and pycond will then check the state for key presence:\n\n\n\n```python\n# we let pycond generate the lookup function (we use the simple signature type):\nf = pc.pycond(cond, lookup_provider=ApiCtxFuncs, prefixed_lookup_funcs=False)\n# Same events as above:\ndata1 = {'group_type': 'xxx'}, False\ndata2 = {'group_type': 'lab'}, True\n\nt0 = time.time()\nfor event, expected in data1, data2:\n    # we will lookup only once:\n    assert f(state=event) == expected\n\nprint(\n    'Calc.Time (delta_q was called just once):',\n    round(time.time() - t0, 4),\n)\n\n# The deep switch keeps working:\ncond2 = [cond, 'or', ['a-0-b', 'eq', 42]]\nf = pc.pycond(\n    cond2,\n    lookup_provider=ApiCtxFuncs,\n    deep='-',\n    prefixed_lookup_funcs=False,\n)\ndata2[0]['a'] = [{'b': 42}]\nprint('sample:', data2[0])\nassert f(state=data2[0]) == True\n```\nOutput:\n\n```\nCalculating cur_q\nCalculating (expensive) delta_q\nCalculating dt_last_enforce\nCalculating cur_hour\nCalc.Time (delta_q was called just once): 0.1004\nsample: {'group_type': 'lab', 'a': [{'b': 42}]}\nCalculating cur_q\nCalculating (expensive) delta_q\nCalculating dt_last_enforce\nCalculating cur_hour\n```\n\n\nThe output demonstrates that we did not even call the value provider functions for the dead branches of the condition.\n\nNOTE: Instead of providing a class tree you may also provide a dict of functions as `lookup_provider_dict` argument, see `qualify` 
examples below.\n\n## <a href=\"#toc46\">Caching</a>\n\nNote: Currently you cannot override these defaults. Drop an issue if you need to.\n\n- Builtin state lookups: Not cached\n- Custom `lookup` functions: Not cached(you can implement caching within those functions)\n- Lookup provider return values: Cached, i.e. called only once, per data set\n- Named condition sets(see below): Cached\n\n## <a href=\"#toc47\">Extensions</a>\n\nWe deliver a few lookup function [extensions][pycond.py#711]\n\n- for time checks\n- for os.environ checks(re-evaluated at runtime)\n\n\n\n```python\nfrom datetime import datetime as dt\nfrom os import environ as env\n\nthis_sec = dt.now().second\nthis_utc_hour = dt.utcnow().hour\nf = pc.pycond(\n    [\n        ['env:foo', 'eq', 'bar'],\n        'and',\n        # not breaking the build when the sec just jumps:\n        ['dt:second', 'in', [this_sec, this_sec + 1, 0]],\n        'and',\n        ['utc:hour', 'eq', this_utc_hour],\n    ]\n)\nenv['foo'] = 'bar'\nassert f(state={'a': 1}) == True\n```\n\n\n\n## <a href=\"#toc48\">Named Conditions: Qualification</a>\n\nInstead of just delivering booleans, pycond can be used to determine a whole set of\ninformation about data declaratively, like so:  \n\n\n```python\n# We accept different forms of delivery.\n# The first full text is restricted to simple flat dicts only:\nfor c in [\n    'one: a gt 10, two: a gt 10 or foo eq bar',\n    {'one': 'a gt 10', 'two': 'a gt 10 or foo eq bar'},\n    {\n        'one': ['a', 'gt', 10],\n        'two': ['a', 'gt', 10, 'or', 'foo', 'eq', 'bar'],\n    },\n]:\n    f = pc.qualify(c)\n    r = f({'foo': 'bar', 'a': 0})\n    assert r == {'one': False, 'two': True}\n```\n\n\nWe may refer to results of other named conditions and also can pass named condition sets as lists instead of dicts:  \n\n\n```python\ndef run(q):\n    print('Running', q)\n\n    class F:\n        def custom(data):\n            return data.get('a')\n\n    f = pc.qualify(q, lookup_provider=F)\n\n    assert f({'a': 'b'}) == {\n        'first': True,\n        'listed': [False, False],\n        'thrd': True,\n        'zero': True,\n        'last': True,\n    }\n    res = f({'c': 'foo', 'x': 1})\n    assert res == {\n        'first': False,\n        'listed': [False, True],\n        'thrd': False,\n        'zero': True,\n        'last': True,\n    }\n\nq = {\n    'thrd': ['k', 'or', ':first'],\n    'listed': [['foo'], ['c', 'eq', 'foo']],\n    'zero': [['x', 'eq', 1], 'or', ':thrd'],\n    'first': [':custom', 'eq', 'b'],\n    'last': True,  # you might want to do this to always get at least one matcher, e.g. for data streaming\n}\n# as list of conditions:\nrun(q)\n\n# as dict:\nq = dict([[k, v] for k, v in q.items()])\nrun(q)\n```\nOutput:\n\n```\nRunning {'thrd': ['k', 'or', ':first'], 'listed': [['foo'], ['c', 'eq', 'foo']], 'zero': [['x', 'eq', 1], 'or', ':thrd'], 'first': [':custom', 'eq', 'b'], 'last': True}\nRunning {'thrd': ['k', 'or', ':first'], 'listed': [['foo'], ['c', 'eq', 'foo']], 'zero': [['x', 'eq', 1], 'or', ':thrd'], 'first': [':custom', 'eq', 'b'], 'last': True}\n```\n\nWARNING: For performance reasons there is no built in circular reference check. 
You'll run into python's built in recursion checker!\n\n## <a href=\"#toc49\">Options</a>\n\n- into: Put the matched named conditions into the original data\n- prefix: Work from a prefix nested in the root\n- add_cached: Return also the data from function result cache\n\nHere a few variants to parametrize behaviour, by example:  \n\n\n```python\nconds = {\n    0: ['foo'],\n    1: ['bar'],\n    2: ['func'],\n    3: ['n'],\n    'n': ['bar'],\n}\n\nclass F:\n    def func(*a, **kw):\n        return True, 0\n\nq = lambda d, **kw: pc.qualify(\n    conds, lookup_provider=F, prefixed_lookup_funcs=False, **kw\n)(d)\n\nm = q({'bar': 1})\nassert m == {0: False, 1: True, 2: True, 3: True, 'n': True}\n\n# return data, with matched conds in:\nm = q({'bar': 1}, into='conds')\nassert m == {\n    'bar': 1,\n    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True},\n}\n\ndef msg():\n    return {'bar': 1, 'pl': {'a': 1}}\n\n# add_cached == True -> it's put into the cond results:\nm = q(msg(), into='conds', add_cached=True)\nassert m == {\n    'bar': 1,\n    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True, 'func': True},\n    'pl': {'a': 1},\n}\n\nm = q(msg(), into='conds', add_cached='pl')\nassert m == {\n    'bar': 1,\n    'conds': {0: False, 1: True, 2: True, 3: True, 'n': True},\n    # n had been put into the cache, was not evaled twice:\n    'pl': {'a': 1, 'func': True, 'n': True},\n}\n\nm = q({'bar': 1}, add_cached='pl')\nassert m == {0: False, 1: True, 2: True, 3: True, 'n': True, 'func': True}\n\n# prefix -> Nr 1, bar,  should NOT be True, since not in pl now:\nm = q(\n    msg(),\n    prefix='pl',\n    into='conds',\n    add_cached='pl',\n)\nassert m == {\n    'bar': 1,\n    'conds': {0: False, 1: False, 2: True, 3: False, 'n': False},\n    'pl': {'a': 1, 'func': True, 'n': False},\n}\n```\n\n\n\n## <a href=\"#toc50\">Partial Evaluation</a>\n\nIf you either supply a key called 'root' OR supply it as argument to `qualify`, pycond will only evaluate named conditions required to calculate the root key:\n\n\n\n```python\ncalled = []\n\ndef expensive_func(k, v, cfg, data, **kw):\n    called.append(data)\n    return 1, v\n\ndef xx(k, v, cfg, data, **kw):\n    called.append(data)\n    return data.get('a'), v\n\nfuncs = {'exp': {'func': expensive_func}, 'xx': {'func': xx}}\nq = {\n    'root': ['foo', 'and', ':bar'],\n    'bar': [\n        ['somecond'],\n        'or',\n        [[':exp', 'eq', 1], 'and', ':baz'],\n    ],\n    'x': [':xx'],\n    'baz': [':exp', 'lt', 10],\n}\nqualifier = pc.qualify(q, lookup_provider_dict=funcs, add_cached=True)\n\nd = {'foo': 1}\nr = qualifier(d)\n\n# root, bar, baz had been calculated, not x\nassert r == {'root': True, 'bar': True, 'baz': True, 'exp': 1}\n# expensive_func result, which was cached, is also returned.\n# expensive_func only called once allthough result evaluated for bar and baz:\nassert len(called) == 1\n\ncalled.clear()\nf = pc.qualify(q, lookup_provider_dict=funcs, root='x', add_cached=True)\nassert f({'a': 1}) == {'x': True, 'xx': 1}\nassert f({'b': 1}) == {'x': False, 'xx': None}\nassert called == [{'a': 1}, {'b': 1}]\n```\n\nThis means pycond can be used as a lightweight declarative function dispatching framework.\n\n\n## <a href=\"#toc51\">Streaming Data</a>\n\nSince version 20200601 and Python 3.x versions, pycond can deliver[ReactiveX](https://github.com/ReactiveX/RxPY) compliant stream operators.\n\nLets first set up a test data stream, by defining a function `rx_setup` like so:\n\n\n\n```python\n# simply `import rx as Rx and rx = 
rx.operators`:\n# import pycond as pc, like always:\nRx, rx, GS = pc.import_rx('GS')\n\ndef push_through(*test_pipe, items=4):\n    \"\"\"\n    Function which takes a set of operators and runs an 'rx.interval' stream, until count items are through\n    \"\"\"\n\n    # stream sink result holder plus a stream completer:\n    l, compl = [], rx.take(items)\n    l.clear()  # clear any previous results\n\n    def next_(x):\n        # simply remember what went through in a list:\n        l.append(x)\n\n    def err(*a):\n        # should never happen:\n        print('exception', a)\n\n    stream = Rx.interval(0.01)  # numbers, each on its own thread\n\n    # turns the ints into dicts: {'i': 1}, then {'i': 2} and so on:\n    # (we start from 1, the first 0 we filter out)\n    stream = stream.pipe(\n        rx.filter(lambda i: i > 0), rx.map(lambda i: {'i': i})\n    )\n\n    # defines the stream through the tested operators:\n    test_pipe = test_pipe + (compl,)\n    s = stream.pipe(*test_pipe)\n\n    # runs the stream:\n    d = s.subscribe(\n        on_error=err,\n        on_next=next_,\n        on_completed=lambda: l.append('completed'),\n    )\n\n    # blocks until completed:\n    while not (l and l[-1] == 'completed'):\n        time.sleep(0.001)\n    l.pop()  # removes completed indicator\n\n    return l  # returns all processed messages\n\nreturn Rx, rx, push_through\n```\n\nLets test the setup by having some messages streamed through:\n\n\n\n```python\nRx, rx, push_through = rx_setup()\n# test test setup:\nr = push_through(items=3)\nassert r == [{'i': 1}, {'i': 2}, {'i': 3}]\n```\n\n-> test setup works.\n\n### <a href=\"#toc52\">Filtering</a>\n\nThis is the most simple operation: A simple stream filter.\n\n\n\n```python\nRx, rx, push_through = rx_setup()\n\n# ask pycond for a stream filter based on a condition:\npcfilter = partial(pc.rxop, ['i', 'mod', 2])\n\nr = push_through(pcfilter())\nodds = [{'i': 1}, {'i': 3}, {'i': 5}, {'i': 7}]\nassert r == odds\n\n# try the stream filter with message headered data:\npl = 'payload'\nr = push_through(rx.map(lambda i: {pl: i}), pcfilter(prefix=pl))\nprint('Full messages passed:', r)\nr = [m[pl] for m in r]\nassert len(r) == 4\nassert r == odds\n```\nOutput:\n\n```\nFull messages passed: [{'payload': {'i': 1}}, {'payload': {'i': 3}}, {'payload': {'i': 5}}, {'payload': {'i': 7}}]\n```\n\n### <a href=\"#toc53\">Streaming Classification</a>\n\nUsing named condition dicts we can classify data, i.e. 
tag it, in order to process subsequently:\n\n\n\n```python\nRx, rx, push_through = rx_setup()\n\n# generate a set of classifiers:\nconds = [['i', 'mod', i] for i in range(2, 4)]\n\ndef run(offs=0):\n\n    # and get a classifying operator from pycond, adding the results in place, at key 'mod':\n    r = push_through(pc.rxop(conds, into='mod'))\n    i, j = 0 + offs, 1 + offs\n    assert r == [\n        {'i': 1, 'mod': {i: 1, j: 1}},\n        {'i': 2, 'mod': {i: 0, j: 2}},\n        {'i': 3, 'mod': {i: 1, j: 0}},\n        {'i': 4, 'mod': {i: 0, j: 1}},\n    ]\n\n# this will automatically number the classifiers, from 0:\nrun()\n\n# we can also provide the names of the classifiers by passing a dict:\n# here we pass 2 and 3 as those names:\nconds = dict([(i, ['i', 'mod', i]) for i in range(2, 4)])\nrun(2)\n```\n\nNormally the data has headers, so thats a good place to keep the classification tags.\n\n### <a href=\"#toc54\">Selective Classification</a>\n\nWe fall back to an alternative condition evaluation(which could be a function call) * only * when a previous condition evaluation returns something falsy - by providing a * root condition*.\nWhen it evaluated, possibly requiring evaluation of other conditions, we return:  \n\n\n```python\nRx, rx, push_through = rx_setup()\n\n# using the list style:\nconds = [[i, [['i', 'mod', i], 'or', ':alt']] for i in range(2, 4)]\nconds.append(['alt', ['i', 'gt', 1]])\n\n# provide the root condition. Only when it evals falsy, the named \"alt\" condiction will be evaluated:\nr = push_through(pc.rxop(conds, into='mod', root=2, add_cached=True))\n\nassert r == [\n    # evaluation of alt was not required:\n    {'i': 1, 'mod': {2: True}},\n    # evaluation of alt was required:\n    {'i': 2, 'mod': {2: True, 'alt': True}},\n    {'i': 3, 'mod': {2: True}},\n    {'i': 4, 'mod': {2: True, 'alt': True}},\n]\n```\n\n## <a href=\"#toc55\">Treating of Booleans (Conditions, Not Names)</a>\n\nFor the special case of booleans in a condition list we do not treat them as names.  \n\n\n```python\n# 2 unnamed conditions -> keys will be positional\nqs = pc.qualify([True, False])\nres = qs({'a': 1})\nassert res == {0: True, 1: False}  # and not {True: False}\n# 2 named conds\nqs = pc.qualify([[1, ['a', 'eq', 1]], [2, ['b', 'eq', 42]]])\nres = qs({'a': 1})\nassert res == {1: True, 2: False}\n```\n\n## <a href=\"#toc56\">Asyncronous Operations</a>\n\nWARNING: Early Version. Only for the gevent platform.\n\nSelective classification allows to call condition functions only when other criteria are met.\nThat makes it possible to read e.g. from a database only when data is really required - and not always, \"just in case\".\n\npycond allows to define, that blocking operations should be run * async* within the stream, possibly giving up order.\n\n### <a href=\"#toc57\">Asyncronous Filter</a>\n\nFirst a simple filter, which gives up order but does not block:\n\n\n\n```python\nRx, rx, push_through = rx_setup()\n\nclass F:\n    def check(k, v, cfg, data, t0=[], **kw):\n        # will be on different thread:\n        i, pointer = data['i'], ''\n        if not t0:\n            t0.append(now())\n        if i == 1:\n            # ints are fired at 0.01, i.e. 
the 1 will land 4 after 1:\n            time.sleep(0.048)\n            pointer = '   <----- not in order, blocked'\n        # demonstrate that item 1 is not blocking anything - just order is disturbed:\n        print('item %s: %.3fs %s' % (i, now() - t0[0], pointer))\n        return i % 2, v\n\n# have the operator built for us - with a single condition filter:\nrxop = pc.rxop(\n    [':check'],\n    into='mod',\n    lookup_provider=F,\n    asyn=['check'],\n)\nr = push_through(rxop, items=5)\nassert [m['i'] for m in r] == [3, 5, 1, 7, 9]\n```\nOutput:\n\n```\nitem 2: 0.011s \nitem 3: 0.022s \nitem 4: 0.034s \nitem 5: 0.045s \nitem 1: 0.049s    <----- not in order, blocked\nitem 6: 0.057s \nitem 7: 0.068s \nitem 8: 0.079s \nitem 9: 0.090s\n```\n\nFinally asyncronous classification, i.e. evaluation of multiple conditions:\n\n\n\n```python\ndef _thn(msg, data):\n    return print('thread:', cur_thread().name, msg, data)\n\n# push_through just runs a stream of {'i': <nr>} through a given operator:\nRx, rx, push_through = rx_setup()\n\n# Defining a simple 'set' of classifiers, here as list, with one single key: 42:\nconds = [\n    [\n        42,\n        [\n            ['i', 'lt', 100],\n            'and',\n            [[':odd', 'eq', 1], 'or', ['i', 'eq', 2]],\n            'and_not',\n            [':blocking', 'eq', 3],\n        ],\n    ]\n]\n\nclass F:\n    \"\"\"\n    Namespace for condition lookup functions.\n    You may also pass a dict(lookup_provider_dict)\n\n    We provide the functions for 'odd' and 'blocking'.\n    \"\"\"\n\n    def odd(k, v, cfg, data, **kw):\n        # just print the threadname.\n        # will go up, interval stream has each nr on its own thread:\n        _thn('odd', data)\n        # fullfill condition only for odd numbers\n        # -> even nrs won't even run func 'blocking':\n        return data['i'] % 2, v\n\n    def blocking(k, v, cfg, data, **kw):\n        i = data['i']\n        # will be on different thread:\n        _thn('blocking', data)\n        if i == 1:\n            # two others will \"overtake the i=1 item,\n            # since the interval stream is firing every 0.01 secs:\n            time.sleep(0.028)\n        elif i == 2:\n            # Exceptions, incl. timeouts, will simply be forwarded to cfg['err_handler']\n            # i.e. 
also timeout mgmt have to be done here, in the custom functions themselves.\n\n            # Rationale for not providing a timeout monitoring within pycond itself:\n            # Async ops are done with libs, which ship with their own timeout params.\n            # No need to re-invent / overlay with our own monitoring of that.\n\n            # In the err handler, then further arrangements can be done.\n            raise TimeoutError('ups')\n        elif i == 5:\n            1 / 0\n        return data['i'], v\n\nerrors = []\n\ndef handle_err(item, cfg, ctx, exc, t=errors, **kw):\n    # args are: [item, cfg]\n    if 'ups' in str(exc):\n        assert item['i'] == 2\n        assert exc.__class__ == TimeoutError\n        t.append(item)\n    else:\n        assert item['i'] == 5\n        assert exc.__class__ == ZeroDivisionError\n        t.append(item)\n\n# have the operator built for us:\nrxop = pc.rxop(\n    conds,\n    into='mod',\n    lookup_provider=F,\n    err_handler=handle_err,\n    asyn=['blocking'],\n)\nr = push_through(rxop, items=5)\nassert [m['i'] for m in r] == [3, 1, 4, 6, 7]\nassert [m['mod'][42] for m in r] == [False, True, False, False, True]\n# item 2 caused a timeout:\nassert [t['i'] for t in errors] == [2, 5]\n```\nOutput:\n\n```\nthread: Thread-55 odd {'i': 1}\nthread: Dummy-57 blocking {'i': 1}\nthread: Thread-56 odd {'i': 2}\nthread: Dummy-59 blocking {'i': 2}\nthread: Thread-58 odd {'i': 3}\nthread: Dummy-61 blocking {'i': 3}\nthread: Thread-60 odd {'i': 4}\nthread: Thread-62 odd {'i': 5}\nthread: Dummy-64 blocking {'i': 5}\nthread: Thread-63 odd {'i': 6}\nthread: Thread-65 odd {'i': 7}\nthread: Dummy-67 blocking {'i': 7}\n```\n\n\n*Auto generated by [pytest2md](https://github.com/axiros/pytest2md), running [./tests/test_tutorial.py](./tests/test_tutorial.py)\n\n<!-- autogen tutorial -->\n\n\n<!-- autogenlinks -->\n[pycond.py#614]: https://github.com/axiros/pycond/blob/54fea4c6b0cdc62bcf0b984214649344b1a77230/pycond.py#L614\n[pycond.py#711]: https://github.com/axiros/pycond/blob/54fea4c6b0cdc62bcf0b984214649344b1a77230/pycond.py#L711\n\n",
    "bugtrack_url": null,
    "license": "BSD",
    "summary": "Lightweight Condition Parsing and Building of Evaluation Expressions",
    "version": "20230212",
    "split_keywords": [
        "conditions",
        "expression",
        "async",
        "serialization",
        "rxpy",
        "reactivex"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "4e37d2a8b1f3740724b7919a13dc0487d831de53ed7753e1e8c23c327fc03b35",
                "md5": "94e31ae97dd7672c1852f25c4f0f6168",
                "sha256": "a0a9da689cb6c071fc8d551a2231270ec6672feb7d0107551bd1fa59673a0d46"
            },
            "downloads": -1,
            "filename": "pycond-20230212-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "94e31ae97dd7672c1852f25c4f0f6168",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 31582,
            "upload_time": "2023-02-10T22:54:27",
            "upload_time_iso_8601": "2023-02-10T22:54:27.288150Z",
            "url": "https://files.pythonhosted.org/packages/4e/37/d2a8b1f3740724b7919a13dc0487d831de53ed7753e1e8c23c327fc03b35/pycond-20230212-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9af2af7216e749edc5a30f945c3cf106bbf6327f82d72b74dce1cb813c67b2fc",
                "md5": "febee61408f13c0e0169b7dbb7b42428",
                "sha256": "eb38daa6fe6903cd8e578bb48d976bc7177dcc7087fef3e616ea39f92e01e7c1"
            },
            "downloads": -1,
            "filename": "pycond-20230212.tar.gz",
            "has_sig": false,
            "md5_digest": "febee61408f13c0e0169b7dbb7b42428",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 88693,
            "upload_time": "2023-02-10T22:54:29",
            "upload_time_iso_8601": "2023-02-10T22:54:29.857524Z",
            "url": "https://files.pythonhosted.org/packages/9a/f2/af7216e749edc5a30f945c3cf106bbf6327f82d72b74dce1cb813c67b2fc/pycond-20230212.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-02-10 22:54:29",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "axiros",
    "github_project": "pycond",
    "travis_ci": true,
    "coveralls": false,
    "github_actions": false,
    "tox": true,
    "lcname": "pycond"
}
        