Name | pykda |

Version | 0.9.2 |

download | |

home_page | https://github.com/JoostBerkhout/PyKDA |

Summary | Python package for the Kemeny Decomposition Algorithm (KDA) together with some Markov chain tooling. |

upload_time | 2024-03-17 14:31:46 |

maintainer | Joost Berkhout |

docs_url | None |

author | Joost Berkhout |

requires_python | >=3.10,<4.0 |

license | MIT |

keywords | Markov chain, Kemeny Decomposition Algorithm, KDA, Kemeny constant |

VCS | |

bugtrack_url | |

requirements | No requirements were recorded. |

Travis-CI | No Travis. |

coveralls test coverage | No coveralls. |

<img src="https://github.com/JoostBerkhout/PyKDA/blob/main/docs/images/logo.JPG" width="400">

[![PyPI version](https://badge.fury.io/py/pykda.svg)](https://badge.fury.io/py/pykda)
[![ALNS](https://github.com/JoostBerkhout/PyKDA/actions/workflows/PyKDA.yml/badge.svg)](https://github.com/JoostBerkhout/PyKDA/actions/workflows/PyKDA.yml)
[![codecov](https://codecov.io/gh/JoostBerkhout/PyKDA/graph/badge.svg?token=M4WF9A5ZML)](https://codecov.io/gh/JoostBerkhout/PyKDA)

`pykda` is a Python package for the Kemeny Decomposition Algorithm (KDA), which allows one to decompose a Markov chain into clusters of states, where states within a cluster are relatively more connected to each other than to states outside the cluster. This is useful for analyzing influence graphs, such as social networks and internet networks. KDA was developed in the paper by [Berkhout and Heidergott (2019)](https://research.vu.nl/ws/portalfiles/portal/104470560/Analysis_of_Markov_influence_graphs.pdf) and uses the Kemeny constant as a connectivity measure.

### Installing `pykda`

Package `pykda` depends on `numpy`, `tarjan`, and `pyvis`. Use the package manager [pip](https://pip.pypa.io/en/stable/) to install PyKDA:

```bash
pip install pykda
```

### Getting started

The first step is to load a Markov chain as a `MarkovChain` object using a transition matrix `P`:

```python
from pykda.Markov_chain import MarkovChain

P = [[0, 0.3, 0.7, 0, 0],
     [0.7, 0.2, 0.1, 0, 0],
     [0.5, 0.25, 0.25, 0, 0],
     [0, 0, 0, 0.5, 0.5],
     [0, 0, 0, 0.75, 0.25]]  # artificial transition matrix
MC = MarkovChain(P)
```

We can study some properties of the Markov chain, such as the stationary distribution:

```python
print(MC.stationary_distribution.flatten())
```

This gives `[0.226 0.156 0.23 0.232 0.156]`.
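As a quick sanity check outside of `pykda`, a stationary distribution can be verified directly with NumPy: by definition it is a fixed point of the chain, so it must satisfy `pi @ P == pi` (up to the rounding of the printed values). A minimal sketch using the example matrix and output above:

```python
import numpy as np

P = np.array([[0, 0.3, 0.7, 0, 0],
              [0.7, 0.2, 0.1, 0, 0],
              [0.5, 0.25, 0.25, 0, 0],
              [0, 0, 0, 0.5, 0.5],
              [0, 0, 0, 0.75, 0.25]])

# The (rounded) stationary distribution reported above.
pi = np.array([0.226, 0.156, 0.23, 0.232, 0.156])

# A stationary distribution is a fixed point of the chain: pi @ P == pi.
print(np.allclose(pi @ P, pi, atol=1e-2))  # True (up to rounding)
print(np.isclose(pi.sum(), 1, atol=1e-2))  # True: probabilities sum to 1
```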
We can also plot the Markov chain:

```python
MC.plot(file_name="An artificial Markov chain")
```

<img src="https://github.com/JoostBerkhout/PyKDA/blob/main/docs/images/plot_readme_example.JPG" width="280">

Now, let us decompose the Markov chain into clusters using KDA. We start by initializing a `KDA` object using the Markov chain and the KDA settings (such as the number of clusters). For more details about the setting choices, see the [KDA documentation](https://joostberkhout.github.io/PyKDA/references/KDA/) or [Berkhout and Heidergott (2019)](https://research.vu.nl/ws/portalfiles/portal/104470560/Analysis_of_Markov_influence_graphs.pdf). Here, we apply the default settings, which cut all edges with a negative Kemeny constant derivative and normalize the transition matrix afterward.

```python
from pykda.KDA import KDA  # import path assumed from the package layout

kda = KDA(
    original_MC=MC, CO_A="CO_A_1(1)", CO_B="CO_B_3(0)", symmetric_cut=False
)
```

Now, let us run the KDA algorithm and visualize the results:

```python
kda.run()
kda.plot(file_name="An artificial Markov chain after KDA_A1_1_B3_0")
```

<img src="https://github.com/JoostBerkhout/PyKDA/blob/main/docs/images/plot_readme_example_after_KDA_A1_1_B3_0.JPG" width="280">

We can study the resulting Markov chain in more detail via the current Markov chain attribute `MC` of the `KDA` object:

```python
print(kda.MC)
```

This gives the following output:

```
MC with 5 states.
Ergodic classes: [[2, 0], [3]].
Transient classes: [[1], [4]].
```

So KDA led to a Markov multi-chain with two ergodic classes and two transient classes. We can also study the edges that KDA cut via the `log` attribute of the `KDA` object:

```python
print(kda.log['edges cut'])
```

This gives the following output:

```
[[None], [(4, 0), (1, 4), (2, 1), (0, 1), (3, 4)]]
```

We can also study the Markov chains that KDA found in each (outer) iteration via `kda.log['Markov chains']`.

As another KDA application example, let us apply KDA until we find two ergodic classes explicitly.
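For intuition about the ergodic/transient classification that `print(kda.MC)` reports, the communicating classes of any transition matrix can be recovered by hand: two states communicate when each is reachable from the other, and a class is ergodic (closed) exactly when no probability mass leaves it. A standalone NumPy sketch (not part of `pykda`'s API), applied to the original example matrix `P`, which consists of two closed classes and no transient states:

```python
import numpy as np

P = np.array([[0, 0.3, 0.7, 0, 0],
              [0.7, 0.2, 0.1, 0, 0],
              [0.5, 0.25, 0.25, 0, 0],
              [0, 0, 0, 0.5, 0.5],
              [0, 0, 0, 0.75, 0.25]])
n = len(P)

# Reachability closure: R[i, j] is True if state j is reachable from i.
R = (P > 0) | np.eye(n, dtype=bool)
for _ in range(n):
    R = (R.astype(int) @ R.astype(int)) > 0

# Communicating classes: groups of states that can all reach each other.
classes, seen = [], set()
for i in range(n):
    if i not in seen:
        cls = [j for j in range(n) if R[i, j] and R[j, i]]
        seen.update(cls)
        classes.append(cls)

# A class is ergodic (closed) if no edge leaves it; otherwise transient.
ergodic = [c for c in classes
           if not any(P[j, k] > 0 for j in c for k in range(n) if k not in c)]
transient = [c for c in classes if c not in ergodic]
print("Ergodic classes:", ergodic)      # [[0, 1, 2], [3, 4]]
print("Transient classes:", transient)  # []
```

`pykda` itself uses the `tarjan` package for this kind of strongly-connected-component computation; the boolean-matrix closure above is just the simplest self-contained way to show the idea.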
We will also ensure that the Kemeny constant derivatives are recalculated after each cut (and normalize the cut transition matrix to ensure it is a stochastic matrix again). To that end, we use:

```python
kda2 = KDA(
    original_MC=MC, CO_A="CO_A_2(2)", CO_B="CO_B_1(1)", symmetric_cut=False
)
kda2.run()
kda2.plot(file_name="An artificial Markov chain after KDA_A2_2_B1_1")
```

which gives (edges (4, 0) and (1, 4) are cut in two iterations):

<img src="https://github.com/JoostBerkhout/PyKDA/blob/main/docs/images/plot_readme_example_after_KDA_A2_2_B1_1.JPG" width="280">

### How to learn more about `pykda`?

To learn more about `pykda`, have a look at the [documentation](https://joostberkhout.github.io/PyKDA/). There, you can also find links to interactive Google Colab notebooks in the [examples](https://joostberkhout.github.io/PyKDA/examples/). If you have any questions, feel free to open an issue on [GitHub Issues](https://github.com/JoostBerkhout/PyKDA/issues).

### How to cite `pykda`?

If you use `pykda` in your research, please consider citing the following paper:

> Joost Berkhout, Bernd F. Heidergott (2019).
> Analysis of Markov influence graphs.
> _Operations Research_, 67(3):892-904.
> https://doi.org/10.1287/opre.2018.1813

Or use the following BibTeX entry:

```bibtex
@article{Berkhout_Heidergott_2019,
    title = {Analysis of {Markov} influence graphs},
    volume = {67},
    number = {3},
    journal = {Operations Research},
    author = {Berkhout, J. and Heidergott, B. F.},
    year = {2019},
    pages = {892--904},
}
```
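As a closing side note, the Kemeny constant that KDA uses as a connectivity measure can be computed for any ergodic chain with plain NumPy via the standard eigenvalue formula `K = sum over non-unit eigenvalues lambda_i of 1 / (1 - lambda_i)`. The sketch below is standalone and not part of `pykda`'s API; it uses the first communicating class of the example chain, which is ergodic on its own:

```python
import numpy as np

# An ergodic chain: the first communicating class of the example above.
P = np.array([[0, 0.3, 0.7],
              [0.7, 0.2, 0.1],
              [0.5, 0.25, 0.25]])
n = len(P)

# Eigenvalue formula: K = sum of 1 / (1 - lambda_i) over non-unit eigenvalues.
eigvals = np.linalg.eigvals(P)
K = np.sum(1 / (1 - eigvals[~np.isclose(eigvals, 1)])).real

# Cross-check via the fundamental matrix Z = (I - P + 1 pi)^(-1),
# whose trace equals 1 + K.
pi = np.linalg.solve((np.eye(n) - P + np.ones((n, n))).T, np.ones(n))
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
print(np.isclose(K, np.trace(Z) - 1))  # True
print(round(K, 3))
```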

