# Model Confidence Set
The `model-confidence-set` package provides a Python implementation of the Model Confidence Set (MCS) procedure [(Hansen, Lunde, and Nason, 2011)](https://www.jstor.org/stable/41057463), a statistical method for comparing and selecting models based on their performance. It allows users to identify the set of models that are statistically indistinguishable from the best model at a given confidence level.
This package
- supports both stationary and block bootstrap methods.
- implements two methods for p-value computation: *relative* and *sequential*.
- optionally displays progress during computation.
## Installation
To install `model-confidence-set`, simply use pip:
```bash
pip install model-confidence-set
```
## Usage
To use the Model Confidence Set in your Python code, follow the example below:
```python
import numpy as np
from model_confidence_set import ModelConfidenceSet

# Example losses matrix where rows are observations and columns are models
losses = np.random.rand(100, 5)  # 100 observations for 5 models

# Initialize the MCS procedure (5,000 bootstrap replications, 5% significance level)
mcs = ModelConfidenceSet(losses, n_boot=5000, alpha=0.05, show_progress=True)
# Compute the MCS
mcs.compute()
# Retrieve the results as a pandas DataFrame (use as_dataframe=False for a dict)
results = mcs.results()
print(results)
```
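In practice, the losses matrix comes from comparing each model's forecasts against realized values under a chosen loss function. A minimal sketch using squared-error loss (the names `realized` and `forecasts` are illustrative, not part of the package API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Realized values and forecasts from 3 hypothetical models with
# increasing forecast-error variance (model 0 is the most accurate)
realized = rng.normal(size=100)
forecasts = realized[:, None] + rng.normal(scale=[0.5, 1.0, 1.5], size=(100, 3))

# Squared-error losses: one row per observation, one column per model
losses = (forecasts - realized[:, None]) ** 2
print(losses.shape)  # (100, 3)
```

The resulting `losses` array can be passed directly to `ModelConfidenceSet` in place of the random matrix above.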
## Parameters
- `losses`: A 2D `numpy.ndarray` or `pandas.DataFrame` containing loss values of models. Rows correspond to observations, and columns correspond to different models.
- `n_boot`: Number of bootstrap replications for computing p-values. Default is `5000`.
- `alpha`: Significance level for determining the model confidence set. Default is `0.05`.
- `block_len`: The length of blocks for the block bootstrap. If `None`, it defaults to the square root of the number of observations.
- `bootstrap_variant`: Specifies the bootstrap variant to use. Options are `'stationary'` or `'block'`. Default is `'stationary'`.
- `method`: The method used for p-value calculation. Options are `'R'` for *relative* or `'SQ'` for *sequential*. Default is `'R'`.
- `show_progress`: Whether to show a progress bar during bootstrap computations. Default is `False`.
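The default `bootstrap_variant='stationary'` refers to the stationary bootstrap of Politis and Romano, which resamples blocks of random, geometrically distributed length so that the resampled series remains stationary. A rough sketch of how such bootstrap indices can be generated (this is illustrative only, not the package's internal code; `avg_block_len` plays the role of `block_len`):

```python
import numpy as np

def stationary_bootstrap_indices(n, avg_block_len, rng):
    """Sample n time indices via the stationary bootstrap: each block
    starts at a uniform random point, and at every step a new block
    begins with probability 1/avg_block_len (wrapping around at n)."""
    p = 1.0 / avg_block_len
    idx = np.empty(n, dtype=int)
    idx[0] = rng.integers(n)
    for t in range(1, n):
        if rng.random() < p:              # start a new block
            idx[t] = rng.integers(n)
        else:                             # continue the current block
            idx[t] = (idx[t - 1] + 1) % n
    return idx

rng = np.random.default_rng(42)
idx = stationary_bootstrap_indices(100, avg_block_len=10, rng=rng)
```

Resampling the rows of `losses` with such indices preserves short-range serial dependence, which is why block-style bootstraps are preferred over i.i.d. resampling for time-series loss comparisons.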
## Acknowledgments
This package draws inspiration from
- the [Matlab implementation by Kevin Sheppard](https://www.kevinsheppard.com/code/matlab/mfe-toolbox/)
- the [Python implementation by Michael Gong](https://michael-gong.com/blogs/model-confidence-set/).