| Field | Value |
|---|---|
| Name | zenkai |
| Version | 0.0.9 |
| Summary | A framework for flexibly developing beyond backpropagation. |
| Home page | https://github.com/short-greg/zenkai |
| Author | Greg Short |
| Maintainer | None |
| Requires Python | <4.0,>=3.8 |
| License | LICENSE |
| Upload time | 2024-09-11 06:31:25 |
# Zenkai
Zenkai is a framework built on PyTorch for deep learning researchers
- to explore a wider variety of machine architectures
- to explore learning algorithms that do not rely on gradient descent
It is fundamentally based on the concept of target propagation. In target propagation, targets are propagated to each layer of the network by inverting, or approximating an inversion of, each layer's operation. Thus, each layer has its own target. While Zenkai allows for more than target propagation, it is built on the idea of each layer having its own target.
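To make the idea concrete, here is a minimal, self-contained NumPy sketch of target propagation for a single linear layer (an illustration of the concept, not zenkai's API), assuming the layer is inverted with a pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "layer": y = x @ W
W = rng.normal(size=(4, 3))
x = rng.normal(size=(2, 4))
y = x @ W

# Suppose a downstream target t for this layer's output.
t = y + 0.1 * rng.normal(size=y.shape)

# Target propagation: map the output target back through an
# (approximate) inverse of the layer to obtain a target for the input.
W_pinv = np.linalg.pinv(W)
x_target = t @ W_pinv

# Pushing the input target forward again recovers t
# (exactly here, because W has full column rank).
print(np.allclose(x_target @ W, t))  # → True
```

Each layer below would repeat the same move on `x_target`, so every layer ends up with its own target rather than a gradient.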
## Installation
```bash
pip install zenkai
```
## Brief Overview
Zenkai consists of several packages for defining and training deep learning machines more flexibly than is easy to do with PyTorch alone.
- **zenkai**: The core package. It contains all modules necessary for defining a learning machine.
- **zenkai.tansaku**: Package for adding more exploration to learning. Contains a framework for defining and creating population-based optimizers.
- **zenkai.ensemble**: Package used to create ensemble models within the Zenkai framework.
- **zenkai.targetprop**: Package used for creating systems that use target propagation.
- **zenkai.feedback**: Package for performing various kinds of feedback alignment.
- **zenkai.scikit**: Package wrapping scikit-learn to create Zenkai LearningMachines from scikit-learn modules.
- **zenkai.utils**: A variety of utility functions used by the rest of the library, for example for getting and setting parameters or gradients.
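As an illustration of the kind of parameter utilities the last package covers, flattening and restoring a module's parameters can be done with plain PyTorch helpers (shown here with torch's own `parameters_to_vector`/`vector_to_parameters`, not zenkai.utils' actual API):

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = nn.Linear(4, 2)

# Flatten all parameters into a single 1-D vector:
# weight (2x4) + bias (2) -> 10 elements.
vec = parameters_to_vector(model.parameters())
print(vec.shape)  # → torch.Size([10])

# Write a modified vector back into the module's parameters.
vector_to_parameters(vec * 0.0, model.parameters())
print(model.bias)
```

Utilities like this are what make it practical for non-gradient optimizers (e.g. population-based ones) to read and write a network's parameters as flat vectors.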
Further documentation is available at https://zenkai.readthedocs.io
## Usage
Zenkai's primary feature is the `LearningMachine`, which aims to make defining learning machines flexible. The design is similar to Torch's legacy nn interface, in that there is a forward method, a parameter update method similar to `accGradParameters()`, and a backpropagation method similar to `updateGradInputs()`. The primary usage is to implement these methods.
Here is a (non-working) example:
```python
class MyLearner(zenkai.LearningMachine):
    """A LearningMachine couples the learning mechanics for the machine with its internal mechanics."""

    def __init__(
        self, module: nn.Module, step_theta: zenkai.StepTheta,
        step_x: StepX, loss: zenkai.Loss
    ):
        super().__init__()
        self.module = module
        # step_theta is used to update the parameters of the module
        self._step_theta = step_theta
        # step_x is used to update the inputs to the module
        self._step_x = step_x
        self.loss = loss

    def step(
        self, x: IO, t: IO, state: State, **kwargs
    ):
        # used to update the parameters of the machine
        # x (IO): the input to update with
        # t (IO): the target to update toward
        return self._step_theta(x, t)

    def step_x(
        self, x: IO, t: IO, state: State, **kwargs
    ) -> IO:
        # used to update the target for the incoming machine;
        # step_x is analogous to updateGradInputs in Torch except
        # it calculates "new targets" for the incoming layer
        return self._step_x(x, t)

    def forward_nn(self, x: zenkai.IO, state: State) -> zenkai.IO:
        return self.module(x.f)


my_learner = MyLearner(...)

# set the "learning mode" so that backward() performs a step
zenkai.set_lmode(my_learner, zenkai.LMode.WithStep)

for x, t in dataloader:
    loss(my_learner(x), t).backward()
```
Learning machines can be stacked by making use of step_x in the training process.
Note: since Zenkai hooks into Torch's backpropagation machinery, these updates are triggered by calling backward().
```python
class MyMultilayerLearner(LearningMachine):
    """A LearningMachine couples the learning mechanics for the machine with its internal mechanics."""

    def __init__(
        self, layer1: LearningMachine, layer2: LearningMachine
    ):
        super().__init__()
        self.layer1 = layer1
        self.layer2 = layer2

        # use these hooks to indicate a dependency on another method
        self.add_step(StepXDep(self, 't1'))
        self.add_step_x(ForwardDep(self, 'y1'))

    def step(
        self, x: IO, t: IO, state: State, **kwargs
    ):
        # used to update the parameters of the machine
        # x (IO): the input to update with
        # t (IO): the target to update toward
        self.layer2.step(state._y1, t, state.sub('layer2'))
        self.layer1.step(x, state._t1, state.sub('layer1'))

    def step_x(
        self, x: IO, t: IO, state: State, **kwargs
    ) -> IO:
        # used to update the target for the incoming machine;
        # it calculates "new targets" for the incoming layer
        t1 = state._t1 = self.layer2.step_x(state._y1, t, state.sub('layer2'))
        return self.layer1.step_x(x, t1, state.sub('layer1'))

    def forward_nn(self, x: zenkai.IO, state: State) -> zenkai.IO:
        # store the intermediate outputs on the state for this (self, input) pair
        x = state._y1 = self.layer1(x, state.sub('layer1'))
        x = state._y2 = self.layer2(x, state.sub('layer2'))
        return x


my_learner = MyMultilayerLearner(...)
zenkai.set_lmode(my_learner, zenkai.LMode.WithStep)

for x, t in dataloader:
    assessment = loss(my_learner(x), t)
    assessment.backward()
```
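To make the control flow of stacking concrete, here is a simplified, framework-free sketch (the `ToyLayer` class and its method bodies are hypothetical illustrations, not zenkai's implementation): each layer's step_x produces the target that the layer below trains toward.

```python
# A toy, framework-free illustration of stacked target propagation.
class ToyLayer:
    def __init__(self, name):
        self.name = name
        self.log = []  # record of update calls, for inspection

    def forward(self, x):
        return f"{self.name}({x})"

    def step(self, x, t):
        # "train" this layer so forward(x) moves toward t
        self.log.append(("step", x, t))

    def step_x(self, x, t):
        # produce a new target for the layer below
        self.log.append(("step_x", x, t))
        return f"t_for_{self.name}_input"


layer1, layer2 = ToyLayer("l1"), ToyLayer("l2")

# forward pass, caching intermediate outputs
y1 = layer1.forward("x")
y2 = layer2.forward(y1)

# backward-like pass: targets flow down through step_x
t2 = "t"                    # target for the whole stack
layer2.step(y1, t2)         # update layer2 toward t2
t1 = layer2.step_x(y1, t2)  # compute layer1's own target
layer1.step("x", t1)        # update layer1 toward its own target

print(t1)  # → t_for_l2_input
```

Note how no gradients flow between the layers: the only thing passed downward is a target, which is what allows each layer to be trained by any mechanism, gradient-based or not.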
## Build documentation
```bash
cd docs
make clean
sphinx-autogen source/api.rst
make html
```
## Tutorials
Tutorials are being made available at https://github.com/short-greg/zenkai_tutorials.
## Contributing
To contribute to the project:
1. Fork the project
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Open a pull request
## License
This project is licensed under the MIT License - see the LICENSE.md file for details.
## Citing this Software
If you use this software in your research, we request that you cite it; a `CITATION.cff` file is provided in the root of the repository for this purpose.