miniHMM
=======
Summary
-------
This is a toy library that implements first- through Nth-order hidden Markov
models.

At present, `miniHMM` offers several benefits that are hard to find in other
HMM libraries:

- Its algorithms are numerically stable
- It is able to compute high-order hidden Markov models, in which states
  depend on the previous N states rather than only on the immediately
  preceding state.

  Concretely, high-order models are implemented via a translation layer
  that converts high-order models of arbitrary order into mathematically
  equivalent first-order models over a virtual state space. This allows
  all algorithms developed for first-order models to be applied to
  higher-order ones. See `minihmm.represent` for further detail.

- Emissions may be univariate or multivariate, continuous or discrete. See
  `minihmm.factors` for examples of distributions that can be built
  out-of-the-box, and for hints on designing new ones.

- Multiple distinct estimators are available for probability distributions,
  enabling e.g. the addition of model noise or pseudocounts during model
  training. See `minihmm.estimators` for details.

- HMMs of all sorts can be trained via a Baum-Welch implementation with some
  bells & whistles, such as noise scheduling, parallelization, and parameter
  tying via estimator classes.

- In addition to the Viterbi algorithm (the maximum-likelihood solution for a
  complete sequence of states), states may be inferred by:

  - Probabilistically sampling valid state sequences from their posterior
    distribution, given a sequence of emissions. This makes it possible to
    draw non-deterministic samples and to estimate the robustness of a
    decoding.

  - Labeling each position with its highest-posterior state (even though
    this does not guarantee a valid path).

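The high-order-to-first-order translation mentioned above can be sketched
generically. This is illustrative only and does not reflect the actual API of
`minihmm.represent` (the names `lift_to_first_order` and `allowed_transition`
are hypothetical): each virtual state of the first-order model is a tuple of
the last N base states, and a transition between tuples is legal only when the
destination is the source shifted by one state.

```python
from itertools import product

def lift_to_first_order(states, order):
    """Enumerate the virtual state space: all tuples of `order` base states."""
    virtual_states = list(product(states, repeat=order))
    index = {vs: i for i, vs in enumerate(virtual_states)}
    return virtual_states, index

def allowed_transition(src, dst):
    """In the lifted model, (s1, s2) may only move to (s2, s3): the
    destination tuple must be the source tuple shifted left by one state."""
    return src[1:] == dst[:-1]

states = ["A", "B"]
virtual, index = lift_to_first_order(states, order=2)
# virtual == [('A', 'A'), ('A', 'B'), ('B', 'A'), ('B', 'B')]
```

With two base states and order 2, the virtual space has four states, and
``('A', 'B')`` may move only to ``('B', 'A')`` or ``('B', 'B')``, which is how
the first-order machinery preserves the high-order dependence.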
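As a rough illustration of what a pseudocount estimator does during training
(hypothetical code, not the `minihmm.estimators` API): observed transition
counts are augmented with a constant before normalization, so transitions
never seen in training keep a small nonzero probability.

```python
import numpy as np

def estimate_transitions(counts, pseudocount=1.0):
    """Row-normalize observed transition counts, adding a pseudocount so
    that transitions never observed in training keep nonzero probability."""
    counts = np.asarray(counts, dtype=float) + pseudocount
    return counts / counts.sum(axis=1, keepdims=True)

# transition 0 -> 1 was never observed, but still gets probability 1/11
probs = estimate_transitions([[9, 0], [3, 3]], pseudocount=1.0)
```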
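For contrast with Viterbi decoding, per-state posterior labeling can be
sketched with a plain forward-backward pass. This is an unscaled toy version
for a two-state model; `miniHMM`'s numerically stable implementation will
differ, and the function name here is hypothetical.

```python
import numpy as np

def posterior_marginals(pi, A, B, obs):
    """Forward-backward: P(state at time t | all observations), unscaled."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                      # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

pi = np.array([0.6, 0.4])                      # initial state probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])         # transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])         # emission matrix
post = posterior_marginals(pi, A, B, [0, 0, 1])
labels = post.argmax(axis=1)  # per-position argmax; may not be a valid path
```

The final ``argmax`` picks the most probable state at each position
independently, which is exactly why the resulting label sequence is not
guaranteed to be a valid path through the transition matrix.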
Running the tests
-----------------

Tests are currently written to run under `nose`, separately under Python 3.6
and 3.9, with the following virtual environments configured via `tox`:

- `*-pinned` : run tests using the dependency versions pinned in
  ``requirements.txt``

- `*-latest` : run tests using the latest available version of each
  dependency. This helps catch breaking changes early.

By default, running ``tox`` from the shell will run all tests in all
environments. To choose which environment(s) or test(s) to run, you can use
standard `tox` or `nose` arguments (see their respective documentation
for more details)::

    # run tests only under Python 3.6, with pinned requirements
    $ tox -e py36-pinned

    # run tests under all environments, but only for the estimator suite
    $ tox minihmm.test.test_estimators

    # run tests only for the estimator suite, passing verbose mode to nose
    # note: nose args go after the double dash ('--')
    $ tox minihmm.test.test_estimators -- -v --nocapture

As these environments assume you have Python 3.6 and 3.9 installed, we have
defined a Dockerfile that contains both. This is the preferred environment
for testing. Build the image with the following syntax::

    # build image from inside the miniHMM folder
    $ docker build --pull -t minihmm .

    # start a container
    $ docker run -it --rm minihmm

    # alternative if you are developing: mount your dev folder inside
    # the container, then run tox inside the container
    $ docker run -it --rm -v $(pwd):/usr/src/minihmm minihmm

Building the documentation
--------------------------
Documentation may be built via Sphinx, either inside or outside the container.
To build the docs, you must first install the package as well as the
documentation dependencies. In the project folder::

    # install package
    $ pip install --user -e .

    # install doc dependencies
    $ pip install -r docs/requirements.txt

    # build docs & open in browser
    $ make -C docs html
    $ firefox docs/build/html/index.html

Notes
-----
This library is in beta, and breaking changes are not uncommon. We try to be
polite by announcing these in the changelog.