![PyPI](https://img.shields.io/pypi/v/DyGyS)  [![License:GPLv3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) ![Python Version](https://img.shields.io/badge/python-3.8%20%7C%203.9-blue) [![PRR](https://img.shields.io/badge/PRR-4.033105-orange)](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.4.033105) [![CSF](https://img.shields.io/badge/CSF-166.112958-orange)](https://doi.org/10.1016/j.chaos.2022.112958)

# DyGyS: DYadic GravitY regression models with Soft constraints

DyGyS is a Python 3 package for Maximum Entropy regression models with gravity specification for undirected and directed network data.

DyGyS provides numerous models, described in their undirected formulation in articles [1](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.4.033105#) and [2](https://doi.org/10.1016/j.chaos.2022.112958), comprising both econometric and statistical-physics-inspired models.
The use of soft constraints enables the user to explicitly constrain network properties such as the number of links, the degree sequence (degree centrality for undirected networks, out-degree/in-degree centralities for directed networks), and the total weight (for a small number of viable models).

Furthermore, it is possible not only to solve the models and extract their parameters, but also to generate the graph ensemble, compute a number of network statistics, compute model selection measures such as AIC and BIC, and quantify the reproduction accuracy of:
* Topology, using measures such as True Positive Rate, Specificity, Precision, Accuracy, Balanced Accuracy and F1 score;
* Weights, measuring the fraction of empirical weights inside the percentile CI extracted from the ensemble of graphs;
* Network statistics, measuring the fraction of nodes for which the network statistics fall inside the chosen percentile CI extracted from the ensemble of graphs.

To explore Maximum-Entropy modeling on networks, check out the [Maximum Entropy Hub](https://meh.imtlucca.it/).

When using the module for your scientific research, please consider citing:
```
    @article{PhysRevResearch.4.033105,
      title = {Gravity models of networks: Integrating maximum-entropy and econometric approaches},
      author = {Di Vece, Marzio and Garlaschelli, Diego and Squartini, Tiziano},
      journal = {Phys. Rev. Research},
      volume = {4},
      issue = {3},
      pages = {033105},
      numpages = {19},
      year = {2022},
      month = {Aug},
      publisher = {American Physical Society},
      doi = {10.1103/PhysRevResearch.4.033105},
      url = {https://link.aps.org/doi/10.1103/PhysRevResearch.4.033105}
    }

```
and
```
    @article{DIVECE2023112958,
    title = {Reconciling econometrics with continuous maximum-entropy network models},
    journal = {Chaos, Solitons & Fractals},
    volume = {166},
    pages = {112958},
    year = {2023},
    issn = {0960-0779},
    doi = {https://doi.org/10.1016/j.chaos.2022.112958},
    url = {https://www.sciencedirect.com/science/article/pii/S0960077922011377},
    author = {Marzio {Di Vece} and Diego Garlaschelli and Tiziano Squartini},
    keywords = {Shannon entropy, Network reconstruction, Econophysics, Econometrics, Trade, Gravity},
    }

```
#### Contents
- [DyGyS: DYadic GravitY regression models with Soft constraints](#dygys-dyadic-gravity-regression-models-with-soft-constraints)
      - [Contents](#contents)
  - [Currently Available Models](#currently-available-models)
  - [Installation](#installation)
  - [Dependencies](#dependencies)
  - [How-to Guidelines](#how-to-guidelines)
    - [Class Instance and Empirical Network Statistics](#class-instance-and-empirical-network-statistics)
    - [Solving the models](#solving-the-models)
    - [Generating the network ensemble](#generating-the-network-ensemble)
    - [Computing relevant measures](#computing-relevant-measures)
  - [Documentation](#documentation)
  - [Credits](#credits)

## Currently Available Models
DyGyS contains models for network data with both continuous- and discrete-valued non-negative weights.
The available models for discrete count data are described in [1](https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.4.033105#) and consist of:
* **POIS** *Poisson Model* 
* **ZIP** *Zero-Inflated Poisson Model* 
* **NB2** *Negative Binomial Model* 
* **ZINB** *Zero-Inflated Negative Binomial Model* 
* **L-CGeom** *L-constrained Conditional Geometric Model*, denoted as TSF in the paper.
* **k-CGeom** *k-constrained Conditional Geometric Model*, denoted as TS in the paper.
* **L-IGeom** *L-constrained Integrated Geometric Model*, denoted as H(1) in the paper.
* **k-IGeom** *k-constrained Integrated Geometric Model*, denoted as H(2) in the paper.

The analogous models for continuous-valued data are described in [2](https://arxiv.org/abs/2210.01179) and consist of:
* **L-CExp** *L-constrained Conditional Exponential Model*, the L-constrained variant of C-Exp in the paper.
* **k-CExp** *k-constrained Conditional Exponential Model*, denoted as CExp in the paper.
* **L-IExp** *L-constrained Integrated Exponential Model*, the L-constrained variant of I-Exp in the paper.
* **k-IExp** *k-constrained Integrated Exponential Model*, denoted as IExp in the paper.
* **L-CGamma** *L-constrained Conditional Gamma Model*, the L-constrained variant of C-Gamma in the paper.
* **k-CGamma** *k-constrained Conditional Gamma Model*, denoted as CGamma in the paper.
* **L-CPareto** *L-constrained Conditional Pareto Model*, the L-constrained variant of C-Pareto in the paper.
* **k-CPareto** *k-constrained Conditional Pareto Model*, denoted as CPareto in the paper.
* **L-CLognormal** *L-constrained Conditional Lognormal Model*, the L-constrained variant of C-Lognormal in the paper.
* **k-CLognormal** *k-constrained Conditional Lognormal Model*, denoted as CLognormal in the paper.

Please refer to the papers for further details.

## Installation
DyGyS can be installed via pip from your terminal:
```
    $ pip install DyGyS
```
If you have already installed the package and want to upgrade it,
type from your terminal:

```
    $ pip install DyGyS --upgrade
```

## Dependencies
DyGyS uses the following dependencies:
* **scipy** for optimization and root solving;
* **numba** for fast computation of network statistics and criterion functions;
* **numba-scipy** for fast computation of special functions such as gammaincinv and erfinv.

They can be easily installed via pip by typing:

    $ pip install scipy
    $ pip install numba
    $ pip install numba-scipy


## How-to Guidelines
The module contains two classes, namely UndirectedGraph and DirectedGraph.
An undirected graph is defined as a network with symmetric weights, i.e., $w_{ij} = w_{ji}$, where $w_{ij}$ is the network weight from node $i$ to node $j$.
If weights are not symmetric, please use the DirectedGraph class.

### Class Instance and Empirical Network Statistics
To initialize an UndirectedGraph or DirectedGraph instance you can type:

    G = UndirectedGraph(adjacency=Wij)

or

    G = DirectedGraph(adjacency=Wij)

where Wij is the weighted adjacency matrix, supplied as a dense 1-D or 2-D numpy array.

After initializing you can already explore core network statistics such as (out-)degree, in-degree, average neighbor degree, binary clustering coefficient, (out-)strength, in-strength, average neighbor strength and weighted clustering coefficient.
These are available using the respective codewords:

    G.degree, G.degree_in, G.annd, G.clust, G.strength, G.strength_in, G.anns, G.clust_w
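
As a minimal sketch (the import path of the two classes is an assumption; adapt it to the installed package layout), you can build a small synthetic symmetric weight matrix and inspect the empirical statistics listed above:

```
import numpy as np
# Import path assumed for illustration; check the DyGyS documentation
# for the exact module layout.
from DyGyS import UndirectedGraph

# Build a small symmetric weight matrix with non-negative integer weights.
rng = np.random.default_rng(seed=0)
N = 20
W = rng.integers(0, 10, size=(N, N)).astype(float)
Wij = np.triu(W, k=1)
Wij = Wij + Wij.T              # enforce w_ij = w_ji (undirected case)

G = UndirectedGraph(adjacency=Wij)

print(G.degree)     # degree sequence
print(G.strength)   # strength sequence
print(G.clust_w)    # weighted clustering coefficient
```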

### Solving the models
You can explore the currently available models using
    
    G.implemented_models
Use the model names exactly as they appear in this list to avoid error messages.

In order to solve the models you need to define a *regressor matrix* $X_w$ of dimension $N_{obs} \times k$ where $N_{obs} = N^2$ is the number of observations (equivalent to the square of the number of nodes), and $k$ is the number of exogenous variables introduced in the Gravity Specification. 
For L-constrained conditional models and zero-inflated models, you must also define a regressor matrix $X_t$ for the first-stage (or topological) optimization, and you can choose to fix some of the first-stage parameters.

When ready, you can choose one of the aforementioned models and solve for its parameters using
    
    G.solve(model=<chosen model>, exogenous_variables=X_w, selection_variables=X_t,
            fixed_selection_parameters=<chosen fixed selection parameters>)

Once you have solved the model, various other attributes become visible and measures depending solely on criterion functions are computed. These include the log-likelihood, the Jacobian, the infinity norm of the Jacobian, AIC, binary AIC and BIC, available using the codewords:

    G.ll, G.jacobian, G.norm, G.aic, G.aic_binary, G.bic

For further details on the .solve method, please see the documentation.
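
As a concrete sketch, the snippet below continues the UndirectedGraph example from the previous subsection: it builds a synthetic regressor matrix and fits the Poisson model (keyword "POIS" in G.implemented_models). The regressors here are placeholders, not real gravity covariates.

```
# Continuing the sketch above (G, N and rng already defined).
N_obs = N * N                                    # one observation per ordered node pair
X_w = np.column_stack([np.ones(N_obs),           # intercept
                       rng.normal(size=N_obs)])  # placeholder exogenous variable

print(G.implemented_models)                      # valid model keywords

# POIS should not require selection variables; L-constrained conditional and
# zero-inflated models would also need selection_variables=X_t.
G.solve(model="POIS", exogenous_variables=X_w)

print(G.ll, G.aic, G.bic)                        # criterion-based measures
```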



### Generating the network ensemble 

Generating the network ensemble is straightforward: it is enough to type
    
    G.gen_ensemble(n_ensemble=<wanted number of graphs>)
The graphs are produced using numpy's "default_rng" generator for discrete-valued models or via inverse transform sampling for continuous-valued models.

This method returns

    G.w_ensemble_matrix
which is an $N_{obs} \times N_{ensemble}$ matrix containing all of the $N_{ensemble}$ (flattened) adjacency matrices of the ensemble.
This method behaves well for networks of up to $N = 200$ nodes with $10^{4}$ ensemble graphs; no test has been done for larger networks, where G.w_ensemble_matrix could be limited by the available RAM.
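
For instance, continuing the sketch above, one can sample $10^{3}$ graphs and inspect the shape of the returned matrix:

```
# Continuing the sketch above: sample graphs from the fitted model.
G.gen_ensemble(n_ensemble=1000)

# One column per sampled graph, each column a flattened adjacency matrix.
print(G.w_ensemble_matrix.shape)   # expected: (N * N, 1000)
```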


### Computing relevant measures

Let's start by showing how to compute topology-related measures. 
You can type:
    
    G.classification_measures(n_ensemble=<wanted number of graphs>, percentiles=(inf_p, sup_p), stats=[<list of wanted statistics>])
This method does not need G.w_ensemble_matrix, so you can use it without generating the ensemble of weighted networks.
The statistics you can compute are listed in G.implemented_classifier_statistics. Once you define the number of networks, the ensemble percentiles and the statistics of interest, it returns

    G.avg_*, G.std_*, G.percentiles_*, G.array_*
where "avg" stands for ensemble average, "std" for ensemble standard deviation, "array" stands for the entire measures on each ensemble graph, "percentiles" is a tuple containing the inf_p-percentile (default 2.5) and sup_p-percentile (default 97.5) in the ensemble and * is the statistic of interest, written as in G.implemented_classifier_statistics.


To compute network statistics you can type:

    G.netstats_measures(percentiles=(inf_p, sup_p), stats=[<list of wanted statistics>])
This method requires G.w_ensemble_matrix to have been computed beforehand.
It computes the average, standard deviation, percentiles and ensemble arrays of the network statistics of interest, which are listed in G.implemented_network_statistics.
It returns:

    G.avg_*, G.std_*, G.percentiles_*, G.array_*
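
A usage sketch, continuing the example above (it requires G.w_ensemble_matrix from the previous subsection; the statistic keywords are those listed in G.implemented_network_statistics):

```
# Continuing the sketch above; G.gen_ensemble must have been called already.
print(G.implemented_network_statistics)

G.netstats_measures(
    percentiles=(2.5, 97.5),
    stats=list(G.implemented_network_statistics),  # assumed to accept a list of names
)

# For a statistic keyword <stat>, the ensemble summaries are exposed as
# G.avg_<stat>, G.std_<stat>, G.percentiles_<stat> and G.array_<stat>.
```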

To compute the reproduction accuracy for the network statistics (introduced in [2](https://arxiv.org/abs/2210.01179)) you can type:
    
    G.reproduction_accuracy_s(percentiles=(inf_p, sup_p), stats=[<list of wanted statistics>])
This method requires G.w_ensemble_matrix to have been computed beforehand.
It computes the fraction of nodes for which the network statistic falls inside the percentile CI extracted from the graph ensemble.
It returns
    
    G.RA_s
i.e., a list of reproduction accuracies, one for each of the network statistics passed via the stats list, arranged in the same order.
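
A usage sketch, continuing the example above (it assumes the same statistic keywords used for G.netstats_measures):

```
# Continuing the sketch above; requires G.w_ensemble_matrix.
G.reproduction_accuracy_s(
    percentiles=(2.5, 97.5),
    stats=list(G.implemented_network_statistics),  # assumed statistic keywords
)
print(G.RA_s)   # one value per requested statistic, in the same order
```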

Finally, you can compute the reproduction accuracy for the weights (introduced in [2](https://arxiv.org/abs/2210.01179)) using:

    G.reproduction_accuracy_w(percentiles=(inf_p, sup_p))
This method requires G.w_ensemble_matrix to have been computed beforehand.
It computes the fraction of empirical weights that fall inside the percentile CI delimited by the inf_p-percentile and the sup_p-percentile, extracted from the graph ensemble, and returns it as the attribute

    G.RA_w
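
A final usage sketch, continuing the example above; with percentiles=(2.5, 97.5) the CI is the central 95% ensemble interval:

```
# Continuing the sketch above; requires G.w_ensemble_matrix.
G.reproduction_accuracy_w(percentiles=(2.5, 97.5))
print(G.RA_w)   # fraction of empirical weights inside the 95% ensemble CI
```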

## Documentation
You can find the complete documentation of the DyGyS library at [dygys.readthedocs.io](https://dygys.readthedocs.io/en/latest/index.html).

## Credits

*Author*:

[Marzio Di Vece](https://www.imtlucca.it/it/marzio.divece) (a.k.a. [MarsMDK](https://github.com/MarsMDK))

*Acknowledgments*:
The module was developed under the supervision of [Diego Garlaschelli](https://www.imtlucca.it/en/diego.garlaschelli) and [Tiziano Squartini](https://www.imtlucca.it/en/tiziano.squartini).
It was developed at [IMT School for Advanced Studies Lucca](https://www.imtlucca.it/en) and
supported by the Italian ‘Programma di Attività Integrata’ (PAI) project ‘Prosociality, Cognition and Peer Effects’ (Pro.Co.P.E.), funded by IMT School for Advanced Studies.

            
