nbmodular


Name: nbmodular
Version: 0.0.22
Home page: https://github.com/JaumeAmoresDS/nbmodular
Summary: Convert notebooks to modular code
Upload time: 2023-12-06 09:53:17
Author: Jaume Amores
Requires Python: >=3.7
License: Apache Software License 2.0
Keywords: nbdev, jupyter, notebook, python
            nbmodular
================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

Convert data science notebooks with poor modularity to fully modular
notebooks that are automatically exported as python modules.

## Motivation

In data science, it is common to develop experimentally and quickly in
notebooks, with little regard for software engineering practices or
modularity. It can be challenging to start working on someone else’s
notebooks when they have no modularity in terms of separate functions,
and a great deal of code duplicated across notebooks. This makes it
difficult to understand the logic in terms of semantically separate
units, to see the commonalities and differences between the notebooks,
and to extend, generalize, and configure the current solution.

## Objectives

`nbmodular` is a library conceived with the objective of helping
convert the cells of a notebook into separate functions with clear
dependencies in terms of inputs and outputs. This is done through a
combination of tools that semi-automatically understand the data flow
in the code, based on mild assumptions about its structure. It also
helps test the current logic and compare it against a modularized
solution, to make sure that the refactored code is equivalent to the
original one.

## Features

- [x] Convert cells to functions.
- [x] The logic of a single function can be written across multiple
  cells.
- [x] Functions can be either regular functions or unit test functions.
- [x] Functions and tests are exported to separate python modules.
- [ ] TODO: use nbdev to sync the exported python module with the
  notebook code, so that changes to the module are reflected back in the
  notebook.
- [x] Processed cells can continue to operate as cells or be only used
  as functions.
- [x] A pipeline function is automatically created and updated. This
  pipeline provides the data-flow from the first to the last function
  call in the notebook.
- [x] Functions act as nodes in a dependency graph. These nodes can
  optionally hold the values of local variables for inspection outside
  of the function. This is similar to having a single global scope,
  which is the original situation. Since this is memory-consuming,
  storing local variables is optional.
- [x] Local variables are persisted on disk, so that we may decide to
  reuse previous results without running the whole notebook.
- [ ] TODO: Once we are able to construct a graph, we may be able to
  draw it or show it in text form, and pass it to DAG processors that
  can run functions sequentially or in parallel.
- [ ] TODO: if we have the dependency graph and persisted inputs /
  outputs, we may decide to only run those cells that are predecessors
  of the current one, i.e., the ones that provide the inputs needed by
  the current cell.
- [ ] TODO: if we associate a hash code with the input data, we may only
  run the cells when the input data changes. Similarly, if we associate
  a hash code with the AST-converted function code, we may only run
  those cells whose code has been updated.
- [ ] TODO: the output of a test cell can be used for assertions, where
  we require that the current output is the same as the original one.
- [ ] TODO: Compare the result of the pipeline with the result of
  running the original notebook.
- [ ] TODO: Currently, AST processing is used to assess whether
  variables are modified in the cell or just read. This only gives an
  estimate. We may want to compare the values of existing variables
  before and after running the code in the cell. We may also use a type
  checker such as mypy to assess whether a variable is immutable in the
  cell (e.g., mark the variable as Final and see if mypy complains).
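The AST-based estimate mentioned in the last item can be sketched with Python’s stdlib `ast` module. This is an illustrative approximation, not nbmodular’s actual implementation: it classifies each name by whether it appears in a store (assignment) or load (read) context.

``` python
import ast

def classify_names(code: str):
    """Classify names in a snippet: created (assigned) vs. read (loaded)."""
    created, read = set(), set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                created.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    return created, read

created, read = classify_names("c = a + b\nprint(a + b)")
print(created)       # {'c'}
print(sorted(read))  # ['a', 'b', 'print']
```

As the TODO notes, this is only an estimate: a name read and mutated in place (e.g., `lst.append(1)`) shows up as a load even though the object changes.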

## Install

``` sh
pip install nbmodular
```

## Usage

Load the ipython extension:
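A typical way to do this in a notebook cell is shown below; the exact extension path is an assumption, so check the project documentation for the module to load:

``` python
%load_ext nbmodular
```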

This allows us to use the following magic commands, among others:

- `function <name_of_function_to_define>`
- `print <name_of_previous_function>`
- `function_info <name_of_previous_function>`
- `print_pipeline`

Let’s go one by one.

### function

The magic command `function` allows us to:

- Run the code in the cell normally, and at the same time detect its
  input and output dependencies and define a function with this input
  and output:

``` python
a = 2
b = 3
c = a+b
print (a+b)
```

    5

The code in the previous cell runs as it normally would, and at the
same time defines a function named `get_initial_values`, which we can
show with the magic command `print`:

``` python
%print get_initial_values
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)

This function is defined in the notebook space, so we can invoke it:

``` python
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)

The inputs and outputs of the function change dynamically every time we
add a new function cell. For example, if we add a new function `get_d`:

``` python
d = 10
```

``` python
%print get_d
```

    def get_d():
        d = 10

And then a function `add_all` that depends on the previous two functions:

``` python
a = a + d
b = b + d
c = c + d
```

``` python
f = %function_info add_all
```

``` python
print(f.code)
```

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d

``` python
```

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d

``` python
```


    from sklearn.utils import Bunch
    from pathlib import Path
    import joblib
    import pandas as pd
    import numpy as np

    def test_index_pipeline (test=True, prev_result=None, result_file_name="index_pipeline"):
        result = index_pipeline (test=test, load=True, save=True, result_file_name=result_file_name)
        if prev_result is None:
            prev_result = index_pipeline (test=test, load=True, save=True, result_file_name=f"test_{result_file_name}")
        for k in prev_result:
            assert k in result
            if type(prev_result[k]) is pd.DataFrame:    
                pd.testing.assert_frame_equal (result[k], prev_result[k])
            elif type(prev_result[k]) is np.array:
                np.testing.assert_array_equal (result[k], prev_result[k])
            else:
                assert result[k]==prev_result[k]

``` python
```


    def index_pipeline (test=False, load=True, save=True, result_file_name="index_pipeline"):

        # load result
        result_file_name += '.pk'
        path_variables = Path ("index") / result_file_name
        if load and path_variables.exists():
            result = joblib.load (path_variables)
            return result

        b, c, a = get_initial_values (test=test)
        d = get_d ()
        add_all (d, b, c, a)

        # save result
        result = Bunch (b=b,c=c,a=a,d=d)
        if save:    
            path_variables.parent.mkdir (parents=True, exist_ok=True)
            joblib.dump (result, path_variables)
        return result

``` python
```

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d

We can see that the outputs from `get_initial_values` and `get_d` change
as needed. We can look at all the functions defined so far by using
`print all`:

``` python
%print all
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)
        return b,c,a

    def get_d():
        d = 10
        return d

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d

Similarly, the outputs from the last function `add_all` change after we
add other functions that depend on it:

``` python
print (a, b, c, d)
```

    12 13 15 10

### print

We can see each of the defined functions with `print my_function`, and
list all of them with `print all`

``` python
%print all
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)
        return b,c,a

    def get_d():
        d = 10
        return d

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d
        return b,c,a

    def print_all(b, d, a, c):
        print (a, b, c, d)

### print_pipeline

As we add functions to the notebook, a pipeline function is defined. We
can print this pipeline with the magic `print_pipeline`

``` python
%print_pipeline
```


    def index_pipeline (test=False, load=True, save=True, result_file_name="index_pipeline"):

        # load result
        result_file_name += '.pk'
        path_variables = Path ("index") / result_file_name
        if load and path_variables.exists():
            result = joblib.load (path_variables)
            return result

        b, c, a = get_initial_values (test=test)
        d = get_d ()
        b, c, a = add_all (d, b, c, a)
        print_all (b, d, a, c)

        # save result
        result = Bunch (b=b,d=d,c=c,a=a)
        if save:    
            path_variables.parent.mkdir (parents=True, exist_ok=True)
            joblib.dump (result, path_variables)
        return result

This shows the data flow in terms of inputs and outputs.

We can also inspect the functions managed by the cell processor:

``` python
self = %cell_processor
```

``` python
self.function_list
```

    [FunctionProcessor with name get_initial_values, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
         Arguments: []
         Output: ['b', 'c', 'a']
         Locals: dict_keys(['a', 'b', 'c']),
     FunctionProcessor with name get_d, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
         Arguments: []
         Output: ['d']
         Locals: dict_keys(['d']),
     FunctionProcessor with name add_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
         Arguments: ['d', 'b', 'c', 'a']
         Output: ['b', 'c', 'a']
         Locals: dict_keys(['a', 'b', 'c']),
     FunctionProcessor with name print_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
         Arguments: ['b', 'd', 'a', 'c']
         Output: []
         Locals: dict_keys([])]

``` python
%print all
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)
        return b,c,a

    def get_d():
        d = 10
        return d

    def add_all(d, b, c, a):
        a = a + d
        b = b + d
        c = c + d
        return b,c,a

    def print_all(b, d, a, c):
        print (a, b, c, d)

``` python
index_pipeline()
```

    {'d': 10, 'b': 13, 'a': 12, 'c': 15}
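The load/save logic that the generated pipeline wraps around its function calls is a simple file-based cache. Below is a minimal standalone sketch of the same pattern, using stdlib `pickle` in place of `joblib` and a toy computation in place of the notebook functions (both substitutions are ours, for illustration):

``` python
import pickle
from pathlib import Path

def cached_pipeline(load=True, save=True, result_file_name="demo", folder="cache"):
    # load a previously persisted result, if any
    path_variables = Path(folder) / (result_file_name + ".pk")
    if load and path_variables.exists():
        with open(path_variables, "rb") as f:
            return pickle.load(f)

    result = {"a": 12, "b": 13}  # stand-in for the chained function calls

    # persist the result for future runs
    if save:
        path_variables.parent.mkdir(parents=True, exist_ok=True)
        with open(path_variables, "wb") as f:
            pickle.dump(result, f)
    return result
```

The second call with the same `result_file_name` returns the persisted result without recomputing, which is what lets us skip re-running the whole notebook.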

### function_info

We can get access to many of the details of each of the defined
functions by calling `function_info` on a given function name:

``` python
get_initial_values_info = %function_info get_initial_values
```

This allows us to see:

- The name and value (at the time of running) of the local variables,
  arguments and results from the function:

``` python
get_initial_values_info.arguments
```

    []

``` python
get_initial_values_info.current_values
```

    {'a': 2, 'b': 3, 'c': 5}

``` python
get_initial_values_info.return_values
```

    ['b', 'c', 'a']

We can also inspect the original code written in the cell…

``` python
print (get_initial_values_info.original_code)
```

    a = 2
    b = 3
    c = a+b
    print (a+b)

the code of the defined function:

``` python
print (get_initial_values_info.code)
```

    def get_initial_values(test=False):
        a = 2
        b = 3
        c = a+b
        print (a+b)
        return b,c,a

… and the AST trees:

``` python
print (get_initial_values_info.get_ast (code=get_initial_values_info.original_code))
```

    Module(
      body=[
        Assign(
          targets=[
            Name(id='a', ctx=Store())],
          value=Constant(value=2)),
        Assign(
          targets=[
            Name(id='b', ctx=Store())],
          value=Constant(value=3)),
        Assign(
          targets=[
            Name(id='c', ctx=Store())],
          value=BinOp(
            left=Name(id='a', ctx=Load()),
            op=Add(),
            right=Name(id='b', ctx=Load()))),
        Expr(
          value=Call(
            func=Name(id='print', ctx=Load()),
            args=[
              BinOp(
                left=Name(id='a', ctx=Load()),
                op=Add(),
                right=Name(id='b', ctx=Load()))],
            keywords=[]))],
      type_ignores=[])
    None

``` python
print (get_initial_values_info.get_ast (code=get_initial_values_info.code))
```

    Module(
      body=[
        FunctionDef(
          name='get_initial_values',
          args=arguments(
            posonlyargs=[],
            args=[
              arg(arg='test')],
            kwonlyargs=[],
            kw_defaults=[],
            defaults=[
              Constant(value=False)]),
          body=[
            Assign(
              targets=[
                Name(id='a', ctx=Store())],
              value=Constant(value=2)),
            Assign(
              targets=[
                Name(id='b', ctx=Store())],
              value=Constant(value=3)),
            Assign(
              targets=[
                Name(id='c', ctx=Store())],
              value=BinOp(
                left=Name(id='a', ctx=Load()),
                op=Add(),
                right=Name(id='b', ctx=Load()))),
            Expr(
              value=Call(
                func=Name(id='print', ctx=Load()),
                args=[
                  BinOp(
                    left=Name(id='a', ctx=Load()),
                    op=Add(),
                    right=Name(id='b', ctx=Load()))],
                keywords=[])),
            Return(
              value=Tuple(
                elts=[
                  Name(id='b', ctx=Load()),
                  Name(id='c', ctx=Load()),
                  Name(id='a', ctx=Load())],
                ctx=Load()))],
          decorator_list=[])],
      type_ignores=[])
    None
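These dumps match the pretty-printed output of Python’s stdlib `ast` module; presumably `get_ast` wraps something like the following (an assumption about its internals):

``` python
import ast

code = "a = 2\nb = 3\nc = a+b\nprint (a+b)"
tree = ast.parse(code)
# the indent= pretty-printing argument requires Python 3.9+
print(ast.dump(tree, indent=2))
```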

Now, we can define another function in a cell that uses variables from
the previous function.

### cell_processor

This magic allows us to get access to the CellProcessor class that
manages the logic for running the above magic commands, which can come
in handy:

``` python
cell_processor = %cell_processor
```

## Merging function cells

In order to explore intermediate results, it is convenient to split the
code of a function across several cells. This can be done by passing
the flag `--merge True`:

``` python
x = [1, 2, 3]
y = [100, 200, 300]
z = [u+v for u,v in zip(x,y)]
```

``` python
z
```

    [101, 202, 303]

``` python
```

    def analyze():
        x = [1, 2, 3]
        y = [100, 200, 300]
        z = [u+v for u,v in zip(x,y)]

``` python
product = [u*v for u, v in zip(x,y)]
```

``` python
```

    def analyze():
        x = [1, 2, 3]
        y = [100, 200, 300]
        z = [u+v for u,v in zip(x,y)]
        product = [u*v for u, v in zip(x,y)]

# Test functions

By passing the flag `--test` we can indicate that the logic in the cell
is dedicated to testing other functions in the notebook. Test functions
are defined with the well-known `pytest` library in mind as the test
engine.

This has the following consequences:

- The analysis of dependencies is not associated with variables found in
  other cells.
- Test functions do not appear in the overall pipeline.
- The data variables used by the test function can be defined in
  separate test data cells, which are in turn converted to functions.
  These functions are called at the beginning of the test cell.

Let’s see an example

``` python
a = 5
b = 3
c = 6
d = 7
```

``` python
add_all(d, a, b, c)
```

    (12, 10, 13)

``` python
# test function add_all
assert add_all(d, a, b, c)==(12, 10, 13)
```

``` python
```

    def test_add_all():
        b,c,a,d = test_input_add_all()
        # test function add_all
        assert add_all(d, a, b, c)==(12, 10, 13)

``` python
```

    def test_input_add_all(test=False):
        a = 5
        b = 3
        c = 6
        d = 7
        return b,c,a,d

Test functions are written to a separate test module, with the prefix `test_`:

``` python
!ls ../tests
```

    index.ipynb  test_example.py

# Imports

In order to include libraries in our python module, we can use the
`imports` magic. These imports will be written at the beginning of the module:

``` python
import pandas as pd
```

Imports can be indicated separately for the test module by passing the
flag `--test`:

``` python
import matplotlib.pyplot as plt
```

# Defined functions

Functions can also be included already defined, with signature and
return values. The only caveat is that, if we want the function to be
executed, the variables in the argument list need to be created outside
of the function. Otherwise, we need to pass the flag `--norun` to avoid
errors:

``` python
def myfunc (x, y, a=1, b=3):
    print ('hello', a, b)
    c = a+b
    return c
```

Although the internal code of the function is not executed, it is still
parsed into an AST. This makes it possible to provide very tentative
*warnings* about names not found in the argument list:

``` python
def other_func (x, y):
    print ('hello', a, b)
    c = a+b
    return c
```

    Detected the following previous variables that are not in the argument list: ['b', 'a']

Let’s do the same but running the function:

``` python
a=1
b=3
```

``` python
def myfunc (x, y, a=1, b=3):
    print ('hello', a, b)
    c = a+b
    return c
```

    hello 1 3

``` python
myfunc (10, 20)
```

    hello 1 3

    4

``` python
myfunc_info = %function_info myfunc
```

``` python
myfunc_info
```

    FunctionProcessor with name myfunc, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])
        Arguments: ['x', 'y', 'a', 'b']
        Output: ['c']
        Locals: dict_keys(['c'])

``` python
myfunc_info.c
```

    4

# Storing local variables in memory

By default, when we run a cell function its local variables are stored
in a dictionary called `current_values`:

``` python
my_new_local = 3
my_other_new_local = 4
```

The stored variables can be accessed by calling the magic
`function_info`:

``` python
my_new_function_info = %function_info my_new_function
```

``` python
my_new_function_info.current_values
```

    {'my_new_local': 3, 'my_other_new_local': 4}

This default behaviour can be overridden by passing the flag
`--not-store`:

``` python
my_second_variable = 100
my_second_other_variable = 200
```

``` python
my_second_new_function_info = %function_info my_second_new_function
```

``` python
my_second_new_function_info.current_values
```

    {}

# (Un)packing Bunch I/O

``` python
from sklearn.utils import Bunch
```

``` python
x = Bunch (a=1, b=2)
```

``` python
c = 3
a = 4
```

``` python
```

    def bunch_processor(x, day):
        a = x["a"]
        b = x["b"]
        c = 3
        a = 4
        x["a"] = a
        x["c"] = c
        x["day"] = day
        return x
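The generated function above unpacks the requested Bunch fields into local variables and packs the modified ones back before returning. The same pattern can be run standalone with a minimal dict-based stand-in for `sklearn.utils.Bunch` (our own sketch, so the example does not require scikit-learn):

``` python
class Bunch(dict):
    """Minimal stand-in for sklearn.utils.Bunch: a dict with attribute access."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as e:
            raise AttributeError(key) from e
    def __setattr__(self, key, value):
        self[key] = value

def bunch_processor(x, day):
    # unpack the Bunch fields into locals
    a = x["a"]
    b = x["b"]
    # cell logic
    c = 3
    a = 4
    # pack created / modified variables back into the Bunch
    x["a"] = a
    x["c"] = c
    x["day"] = day
    return x

x = bunch_processor(Bunch(a=1, b=2), day="Mon")
print(x.a, x.c, x.day)  # 4 3 Mon
```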

# Function’s info object holding local variables

``` python
df = pd.DataFrame (dict(Year=[1,2,3], Month=[1,2,3], Day=[1,2,3]))
fy = '2023'
```

``` python
def days (df, fy, x=1, /, y=3, *, n=4):
    df_group = df.groupby(['Year','Month']).agg({'Day': lambda x: len (x)})
    df_group = df.reset_index()
    print ('other args: fy', fy, 'x', x, 'y', y)
    return df_group
```

    other args: fy 2023 x 1 y 3
    Stored the following local variables in the days current_values dictionary: ['df_group']
    Detected the following previous variables that are not in the argument list: ['x', 'df', 'fy']

An info object named `<function_name>_info` is created in memory, and
can be used to get access to local variables:

``` python
days_info.df_group
```

           index  Year  Month  Day
        0      0     1      1    1
        1      1     2      2    2
        2      2     3      3    3

There is more information in this object: previous variables, code, etc.

``` python
days_info.current_values
```

    {'df_group':    index  Year  Month  Day
     0      0     1      1    1
     1      1     2      2    2
     2      2     3      3    3}

``` python
days_info
```

    FunctionProcessor with name days, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'not_run', 'previous_values', 'current_values', 'returns_dict', 'returns_bunch', 'unpack_bunch', 'include_input', 'exclude_input', 'include_output', 'exclude_output', 'store_locals_in_disk', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx'])
        Arguments: ['df', 'fy', 'x', 'y']
        Output: ['df_group']
        Locals: dict_keys(['df_group'])

The function can also be called directly:

``` python
days (df*100, 100, x=4)
```

    other args: fy 100 x 4 y 3

           index  Year  Month  Day
        0      0   100    100  100
        1      1   200    200  200
        2      2   300    300  300

            

        Output: ['d']\n         Locals: dict_keys(['d']),\n     FunctionProcessor with name add_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])\n         Arguments: ['d', 'b', 'c', 'a']\n         Output: ['b', 'c', 'a']\n         Locals: dict_keys(['a', 'b', 'c']),\n     FunctionProcessor with name print_all, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])\n         Arguments: ['b', 'd', 'a', 'c']\n         Output: []\n         Locals: dict_keys([])]\n\n``` python\n```\n\n    def get_initial_values(test=False):\n        a = 2\n        b = 3\n        c = a+b\n        print (a+b)\n        return b,c,a\n\n    def get_d():\n        d = 10\n        return d\n\n    def add_all(d, b, c, a):\n        a = a + d\n        b = b + d\n        c = c + d\n        return b,c,a\n\n    def print_all(b, d, a, c):\n        print (a, b, c, d)\n\n``` python\nindex_pipeline()\n```\n\n    {'d': 10, 'b': 13, 'a': 12, 'c': 15}\n\n### function_info\n\nWe can get access to many of the details of each of the defined\nfunctions by calling `function_info` on a given function name:\n\n``` python\nget_initial_values_info = %function_info get_initial_values\n```\n\nThis allows us to see:\n\n- The name and value (at the time of running) of the local variables,\n  arguments and results from 
the function:\n\n``` python\nget_initial_values_info.arguments\n```\n\n    []\n\n``` python\nget_initial_values_info.current_values\n```\n\n    {'a': 2, 'b': 3, 'c': 5}\n\n``` python\nget_initial_values_info.return_values\n```\n\n    ['b', 'c', 'a']\n\nWe can also inspect the original code written in the cell\u2026\n\n``` python\nprint (get_initial_values_info.original_code)\n```\n\n    a = 2\n    b = 3\n    c = a+b\n    print (a+b)\n\nthe code of the defined function:\n\n``` python\nprint (get_initial_values_info.code)\n```\n\n    def get_initial_values(test=False):\n        a = 2\n        b = 3\n        c = a+b\n        print (a+b)\n        return b,c,a\n\n.. and the AST trees:\n\n``` python\nprint (get_initial_values_info.get_ast (code=get_initial_values_info.original_code))\n```\n\n    Module(\n      body=[\n        Assign(\n          targets=[\n            Name(id='a', ctx=Store())],\n          value=Constant(value=2)),\n        Assign(\n          targets=[\n            Name(id='b', ctx=Store())],\n          value=Constant(value=3)),\n        Assign(\n          targets=[\n            Name(id='c', ctx=Store())],\n          value=BinOp(\n            left=Name(id='a', ctx=Load()),\n            op=Add(),\n            right=Name(id='b', ctx=Load()))),\n        Expr(\n          value=Call(\n            func=Name(id='print', ctx=Load()),\n            args=[\n              BinOp(\n                left=Name(id='a', ctx=Load()),\n                op=Add(),\n                right=Name(id='b', ctx=Load()))],\n            keywords=[]))],\n      type_ignores=[])\n    None\n\n``` python\nprint (get_initial_values_info.get_ast (code=get_initial_values_info.code))\n```\n\n    Module(\n      body=[\n        FunctionDef(\n          name='get_initial_values',\n          args=arguments(\n            posonlyargs=[],\n            args=[\n              arg(arg='test')],\n            kwonlyargs=[],\n            kw_defaults=[],\n            defaults=[\n              
Constant(value=False)]),\n          body=[\n            Assign(\n              targets=[\n                Name(id='a', ctx=Store())],\n              value=Constant(value=2)),\n            Assign(\n              targets=[\n                Name(id='b', ctx=Store())],\n              value=Constant(value=3)),\n            Assign(\n              targets=[\n                Name(id='c', ctx=Store())],\n              value=BinOp(\n                left=Name(id='a', ctx=Load()),\n                op=Add(),\n                right=Name(id='b', ctx=Load()))),\n            Expr(\n              value=Call(\n                func=Name(id='print', ctx=Load()),\n                args=[\n                  BinOp(\n                    left=Name(id='a', ctx=Load()),\n                    op=Add(),\n                    right=Name(id='b', ctx=Load()))],\n                keywords=[])),\n            Return(\n              value=Tuple(\n                elts=[\n                  Name(id='b', ctx=Load()),\n                  Name(id='c', ctx=Load()),\n                  Name(id='a', ctx=Load())],\n                ctx=Load()))],\n          decorator_list=[])],\n      type_ignores=[])\n    None\n\nNow, we can define another function in a cell that uses variables from\nthe previous function.\n\n### cell_processor\n\nThis magic allows us to get access to the CellProcessor class managing\nthe logic for running the above magic commands, which can become handy:\n\n``` python\ncell_processor = %cell_processor\n```\n\n## Merging function cells\n\nIn order to explore intermediate results, it is convenient to split the\ncode in a function among different cells. 
This can be done by passing\nthe flag `--merge True`:\n\n``` python\nx = [1, 2, 3]\ny = [100, 200, 300]\nz = [u+v for u,v in zip(x,y)]\n```\n\n``` python\nz\n```\n\n    [101, 202, 303]\n\n``` python\n```\n\n    def analyze():\n        x = [1, 2, 3]\n        y = [100, 200, 300]\n        z = [u+v for u,v in zip(x,y)]\n\n``` python\nproduct = [u*v for u, v in zip(x,y)]\n```\n\n``` python\n```\n\n    def analyze():\n        x = [1, 2, 3]\n        y = [100, 200, 300]\n        z = [u+v for u,v in zip(x,y)]\n        product = [u*v for u, v in zip(x,y)]\n\n# Test functions\n\nBy passing the flag `--test` we can indicate that the logic in the cell\nis dedicated to testing other functions in the notebook. The test\nfunction is defined with the well-known `pytest` library in mind as the\ntest engine.\n\nThis has the following consequences:\n\n- The analysis of dependencies is not associated with variables found in\nother cells.\n- Test functions do not appear in the overall pipeline.\n- The data variables used by the test function can be defined in\nseparate test data cells, which in turn are converted to functions.\nThese functions are called at the beginning of the test cell.\n\nLet\u2019s see an example:\n\n``` python\na = 5\nb = 3\nc = 6\nd = 7\n```\n\n``` python\nadd_all(d, a, b, c)\n```\n\n    (12, 10, 13)\n\n``` python\n# test function add_all\nassert add_all(d, a, b, c)==(12, 10, 13)\n```\n\n``` python\n```\n\n    def test_add_all():\n        b,c,a,d = test_input_add_all()\n        # test function add_all\n        assert add_all(d, a, b, c)==(12, 10, 13)\n\n``` python\n```\n\n    def test_input_add_all(test=False):\n        a = 5\n        b = 3\n        c = 6\n        d = 7\n        return b,c,a,d\n\nTest functions are written to a separate test module, with the prefix\n`test_`:\n\n``` python\n!ls ../tests\n```\n\n    index.ipynb  test_example.py\n\n# Imports\n\nIn order to include libraries in our python module, we can use the magic\n`imports`. 
Those will be written at the beginning of the module:\n\n``` python\nimport pandas as pd\n```\n\nImports can be indicated separately for the test module by passing the\nflag `--test`:\n\n``` python\nimport matplotlib.pyplot as plt\n```\n\n# Defined functions\n\nFunctions can also be included already defined, with a signature and\nreturn values. The only caveat is that, if we want the function to be\nexecuted, the variables in the argument list need to be created outside\nof the function. Otherwise, we need to pass the flag `--norun` to avoid\nerrors:\n\n``` python\ndef myfunc (x, y, a=1, b=3):\n    print ('hello', a, b)\n    c = a+b\n    return c\n```\n\nAlthough the internal code of the function is not executed, it is still\nparsed into an AST. This makes it possible to provide very tentative\n*warnings* regarding names not found in the argument list:\n\n``` python\ndef other_func (x, y):\n    print ('hello', a, b)\n    c = a+b\n    return c\n```\n\n    Detected the following previous variables that are not in the argument list: ['b', 'a']\n\nLet\u2019s do the same, but this time running the function:\n\n``` python\na=1\nb=3\n```\n\n``` python\ndef myfunc (x, y, a=1, b=3):\n    print ('hello', a, b)\n    c = a+b\n    return c\n```\n\n    hello 1 3\n\n``` python\nmyfunc (10, 20)\n```\n\n    hello 1 3\n\n    4\n\n``` python\nmyfunc_info = %function_info myfunc\n```\n\n``` python\nmyfunc_info\n```\n\n    FunctionProcessor with name myfunc, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'norun', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx', 'previous_values', 'current_values', 'all_values', 'code'])\n        Arguments: ['x', 'y', 'a', 'b']\n        Output: ['c']\n        Locals: dict_keys(['c'])\n\n``` python\nmyfunc_info.c\n```\n\n    4\n\n# Storing 
local variables in memory\n\nBy default, when we run a cell function, its local variables are stored\nin a dictionary called `current_values`:\n\n``` python\nmy_new_local = 3\nmy_other_new_local = 4\n```\n\nThe stored variables can be accessed by calling the magic\n`function_info`:\n\n``` python\nmy_new_function_info = %function_info my_new_function\n```\n\n``` python\nmy_new_function_info.current_values\n```\n\n    {'my_new_local': 3, 'my_other_new_local': 4}\n\nThis default behaviour can be overridden by passing the flag\n`--not-store`:\n\n``` python\nmy_second_variable = 100\nmy_second_other_variable = 200\n```\n\n``` python\nmy_second_new_function_info = %function_info my_second_new_function\n```\n\n``` python\nmy_second_new_function_info.current_values\n```\n\n    {}\n\n# (Un)packing Bunch I/O\n\n``` python\nfrom sklearn.utils import Bunch\n```\n\n``` python\nx = Bunch (a=1, b=2)\n```\n\n``` python\nc = 3\na = 4\n```\n\n``` python\n```\n\n    def bunch_processor(x, day):\n        a = x[\"a\"]\n        b = x[\"b\"]\n        c = 3\n        a = 4\n        x[\"a\"] = a\n        x[\"c\"] = c\n        x[\"day\"] = day\n        return x\n\n# Function\u2019s info object holding local variables\n\n``` python\ndf = pd.DataFrame (dict(Year=[1,2,3], Month=[1,2,3], Day=[1,2,3]))\nfy = '2023'\n```\n\n``` python\ndef days (df, fy, x=1, /, y=3, *, n=4):\n    df_group = df.groupby(['Year','Month']).agg({'Day': lambda x: len (x)})\n    df_group = df.reset_index()\n    print ('other args: fy', fy, 'x', x, 'y', y)\n    return df_group\n```\n\n    other args: fy 2023 x 1 y 3\n    Stored the following local variables in the days current_values dictionary: ['df_group']\n    Detected the following previous variables that are not in the argument list: ['x', 'df', 'fy']\n\nAn info object with name <function_name>\\_info is created in memory, and\ncan be used to access local variables:\n\n``` python\ndays_info.df_group\n```\n\n<div>\n<style scoped>\n    .dataframe tbody tr 
th:only-of-type {\n        vertical-align: middle;\n    }\n\n    .dataframe tbody tr th {\n        vertical-align: top;\n    }\n\n    .dataframe thead th {\n        text-align: right;\n    }\n</style>\n<table border=\"1\" class=\"dataframe\">\n  <thead>\n    <tr style=\"text-align: right;\">\n      <th></th>\n      <th>index</th>\n      <th>Year</th>\n      <th>Month</th>\n      <th>Day</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>0</td>\n      <td>1</td>\n      <td>1</td>\n      <td>1</td>\n    </tr>\n    <tr>\n      <th>1</th>\n      <td>1</td>\n      <td>2</td>\n      <td>2</td>\n      <td>2</td>\n    </tr>\n    <tr>\n      <th>2</th>\n      <td>2</td>\n      <td>3</td>\n      <td>3</td>\n      <td>3</td>\n    </tr>\n  </tbody>\n</table>\n</div>\n\nThere is more information in this object: previous variables, code, etc.\n\n``` python\ndays_info.current_values\n```\n\n    {'df_group':    index  Year  Month  Day\n     0      0     1      1    1\n     1      1     2      2    2\n     2      2     3      3    3}\n\n``` python\ndays_info\n```\n\n    FunctionProcessor with name days, and fields: dict_keys(['original_code', 'name', 'call', 'tab_size', 'arguments', 'return_values', 'unknown_input', 'unknown_output', 'test', 'data', 'defined', 'permanent', 'signature', 'not_run', 'previous_values', 'current_values', 'returns_dict', 'returns_bunch', 'unpack_bunch', 'include_input', 'exclude_input', 'include_output', 'exclude_output', 'store_locals_in_disk', 'created_variables', 'loaded_names', 'previous_variables', 'argument_variables', 'read_only_variables', 'posterior_variables', 'all_variables', 'idx'])\n        Arguments: ['df', 'fy', 'x', 'y']\n        Output: ['df_group']\n        Locals: dict_keys(['df_group'])\n\nThe function can also be called directly:\n\n``` python\ndays (df*100, 100, x=4)\n```\n\n    other args: fy 100 x 4 y 3\n\n<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n 
   }\n\n    .dataframe tbody tr th {\n        vertical-align: top;\n    }\n\n    .dataframe thead th {\n        text-align: right;\n    }\n</style>\n<table border=\"1\" class=\"dataframe\">\n  <thead>\n    <tr style=\"text-align: right;\">\n      <th></th>\n      <th>index</th>\n      <th>Year</th>\n      <th>Month</th>\n      <th>Day</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>0</td>\n      <td>100</td>\n      <td>100</td>\n      <td>100</td>\n    </tr>\n    <tr>\n      <th>1</th>\n      <td>1</td>\n      <td>200</td>\n      <td>200</td>\n      <td>200</td>\n    </tr>\n    <tr>\n      <th>2</th>\n      <td>2</td>\n      <td>300</td>\n      <td>300</td>\n      <td>300</td>\n    </tr>\n  </tbody>\n</table>\n</div>\n",
    "bugtrack_url": null,
    "license": "Apache Software License 2.0",
    "summary": "Convert notebooks to modular code",
    "version": "0.0.22",
    "project_urls": {
        "Homepage": "https://github.com/JaumeAmoresDS/nbmodular"
    },
    "split_keywords": [
        "nbdev",
        "jupyter",
        "notebook",
        "python"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f2565e37adb4b424045f89272b14cbf34f93ddedbf759c85fc99f4c19bbb1b97",
                "md5": "9087516e084440a873ea6d3b4bfa7d71",
                "sha256": "6f0e8bec6c2e3221539bcbcd06dd7e872375573ac0e8eb284e02cab3c6d71af5"
            },
            "downloads": -1,
            "filename": "nbmodular-0.0.22-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "9087516e084440a873ea6d3b4bfa7d71",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 34885,
            "upload_time": "2023-12-06T09:53:15",
            "upload_time_iso_8601": "2023-12-06T09:53:15.642763Z",
            "url": "https://files.pythonhosted.org/packages/f2/56/5e37adb4b424045f89272b14cbf34f93ddedbf759c85fc99f4c19bbb1b97/nbmodular-0.0.22-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9844ec6eef83ad4afff9ba7fb5d6cbb05fde9ec39ea3e24e6adaf755fee37ad8",
                "md5": "826a879359840392267e28e4739b5488",
                "sha256": "ceae176b63c0c440c07e952c1e691f67cc252a46ba4b3d0ef85d330b985b1747"
            },
            "downloads": -1,
            "filename": "nbmodular-0.0.22.tar.gz",
            "has_sig": false,
            "md5_digest": "826a879359840392267e28e4739b5488",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 40768,
            "upload_time": "2023-12-06T09:53:17",
            "upload_time_iso_8601": "2023-12-06T09:53:17.525797Z",
            "url": "https://files.pythonhosted.org/packages/98/44/ec6eef83ad4afff9ba7fb5d6cbb05fde9ec39ea3e24e6adaf755fee37ad8/nbmodular-0.0.22.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-06 09:53:17",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "JaumeAmoresDS",
    "github_project": "nbmodular",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "nbmodular"
}
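The "(Un)packing Bunch I/O" example in the description above relies on `sklearn.utils.Bunch` behaving as a dict whose keys are also attributes. A minimal standalone sketch of that behavior (independent of nbmodular; the variable names are illustrative):

``` python
from sklearn.utils import Bunch

# Bunch is a dict subclass whose keys are also readable as attributes,
# which is why a generated function like bunch_processor can read
# x["a"] while surrounding notebook code keeps using x.a.
x = Bunch(a=1, b=2)
assert x["a"] == x.a == 1

# Adding a key makes it available as an attribute too.
x["c"] = 3
assert x.c == 3
```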
        
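The generated `test_index_pipeline` shown in the description compares a fresh pipeline result against a stored one, dispatching on value type so that DataFrames and arrays use the proper comparison helpers. A condensed sketch of that comparison pattern (`assert_results_equal` is an illustrative name, not part of nbmodular):

``` python
import numpy as np
import pandas as pd

def assert_results_equal(result, prev_result):
    # DataFrames and arrays need dedicated testing helpers, since
    # plain == on them is elementwise; everything else compares directly.
    for k in prev_result:
        assert k in result
        if isinstance(prev_result[k], pd.DataFrame):
            pd.testing.assert_frame_equal(result[k], prev_result[k])
        elif isinstance(prev_result[k], np.ndarray):
            np.testing.assert_array_equal(result[k], prev_result[k])
        else:
            assert result[k] == prev_result[k]

result = {"df": pd.DataFrame({"x": [1, 2]}), "arr": np.array([1, 2]), "n": 5}
assert_results_equal(result, result)  # identical results pass silently
```

Using `isinstance` rather than `type(...) is ...` keeps the dispatch robust for subclasses of `DataFrame` or `ndarray`.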