scipy-maxentropy 1.0

Summary: Maximum entropy modelling code previously available as scipy.maxentropy
Author: Ed Schofield
License: BSD
Requires-Python: >=3.6,<4.0
Uploaded: 2024-03-16 01:39:54
# scipy-maxentropy: maximum entropy models

This is the former `scipy.maxentropy` package that was available in SciPy up to
version 0.10.1. It was under-maintained and later removed in SciPy 0.11. It is
now available as this separate package for backward compatibility.

For new projects, consider the
[maxentropy](https://github.com/PythonCharmers/maxentropy) package instead,
which offers a more modern scikit-learn compatible API.

## Purpose

This package fits "exponential family" models, including models of maximum
entropy and minimum KL divergence to other models, subject to linear constraints
on the expectations of arbitrary feature statistics. Applications include
language models for natural language processing and understanding, machine
translation, environmental species modelling, image reconstruction, and others.

## Quickstart

Here is a quick usage example based on the trivial machine translation example
from the paper 'A maximum entropy approach to natural language processing' by
Berger et al., Computational Linguistics, 1996.
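If you don't already have the package, it can be installed from PyPI with
`pip install scipy-maxentropy`.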

Consider the translation of the English word 'in' into French. Suppose we
notice the following facts in a corpus of parallel texts:

    (1)    p(dans) + p(en) + p(à) + p(au cours de) + p(pendant) = 1
    (2)    p(dans) + p(en) = 3/10
    (3)    p(dans) + p(à)  = 1/2

This code finds the probability distribution with maximal entropy subject to
these constraints.

```python
from scipy_maxentropy import Model    # previously scipy.maxentropy

samplespace = ['dans', 'en', 'à', 'au cours de', 'pendant']

def f0(x):
    return x in samplespace

def f1(x):
    return x == 'dans' or x == 'en'

def f2(x):
    return x == 'dans' or x == 'à'

f = [f0, f1, f2]

model = Model(f, samplespace)

# Now set the desired feature expectations
b = [1.0, 0.3, 0.5]

model.verbose = False    # set to True to show optimization progress

# Fit the model
model.fit(b)

# Output the distribution
print()
print("Fitted model parameters are:\n" + str(model.params))
print()
print("Fitted distribution is:")
p = model.probdist()
for j in range(len(model.samplespace)):
    x = model.samplespace[j]
    print(f"    x = {x + ':':15s} p(x) = {p[j]:.3f}")

# Now show how well the constraints are satisfied:
print()
print("Desired constraints:")
print("    sum(p(x))           = 1.0")
print("    p['dans'] + p['en'] = 0.3")
print("    p['dans'] + p['à']  = 0.5")
print()
print("Actual expectations under the fitted model:")
print(f"    sum(p(x))           = {p.sum():.3f}")
print(f"    p['dans'] + p['en'] = {p[0] + p[1]:.3f}")
print(f"    p['dans'] + p['à']  = {p[0] + p[2]:.3f}")
```
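With the constraint targets above, the fitted distribution should come out close to
p(dans) ≈ 0.186, p(en) ≈ 0.114, p(à) ≈ 0.314, and p(au cours de) ≈ p(pendant) ≈ 0.193;
these values follow analytically from the three constraints, so they make a convenient
sanity check on the optimizer's output.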

## Models available

These model classes are available:
- `scipy_maxentropy.Model`: for models on discrete, enumerable sample spaces
- `scipy_maxentropy.ConditionalModel`: for conditional models on discrete, enumerable sample spaces
- `scipy_maxentropy.BigModel`: for models on sample spaces that are either
  continuous (and perhaps high-dimensional) or discrete but too large to
  enumerate, like all possible sentences in a natural language. This model uses
  conditional Monte Carlo methods (primarily importance sampling); a generic
  sketch of the importance-sampling idea follows below.
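
`BigModel`'s interface is not reproduced here, but the core idea it relies on can be
sketched independently of the package: when the sample space cannot be enumerated,
feature expectations under a model known only up to its normalization constant are
estimated by self-normalized importance sampling, i.e. by drawing from an easier
auxiliary distribution q and reweighting each draw by the (unnormalized) density ratio.
The sketch below is a minimal, generic illustration of that idea; all names and the toy
target/proposal are assumptions made for the example, not part of the `scipy_maxentropy` API.

```python
import numpy as np

rng = np.random.default_rng(0)

def snis_feature_expectation(logp_unnorm, feature, q_sample, q_logpdf, n=100_000):
    """Self-normalized importance-sampling estimate of E_p[f(X)].

    logp_unnorm : log of the unnormalized target density (normalizer unknown)
    feature     : the statistic f whose expectation is wanted
    q_sample    : draws n samples from the auxiliary distribution q
    q_logpdf    : log-density of q
    """
    x = q_sample(n)
    logw = logp_unnorm(x) - q_logpdf(x)          # log importance weights
    w = np.exp(logw - logw.max())                # stabilize before exponentiating
    return np.sum(w * feature(x)) / np.sum(w)    # weights self-normalize, so Z cancels

# Toy check: target p(x) proportional to exp(-|x|) (a Laplace density),
# proposal q = Normal(0, 2^2), feature f(x) = x^2.  The exact E_p[x^2] is 2.
estimate = snis_feature_expectation(
    logp_unnorm=lambda x: -np.abs(x),
    feature=lambda x: x ** 2,
    q_sample=lambda n: rng.normal(0.0, 2.0, size=n),
    q_logpdf=lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi)),
)
print(f"Estimated E[x^2] = {estimate:.2f}")
```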

## Background

This package fits probabilistic models of the following exponential form:

$$
   p(x) = p_0(x) \exp(\theta^T f(x)) / Z(\theta; p_0)
$$

with a real parameter vector $\theta$ of the same length $n$ as the feature
statistics $f(x) = \left(f_1(x), ..., f_n(x)\right)$.

This is the "closest" model (in the sense of minimizing KL divergence or
"relative entropy") to the prior model $p_0$ subject to the following additional
constraints on the expectations of the features:

```
    E f_1(X) = b_1
    ...
    E f_n(X) = b_n
```

for some constants $b_i$, such as statistics estimated from a dataset.
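
For instance, with a hypothetical corpus of observed translations (the data below are
made up purely for illustration), the targets $b_i$ could be taken as empirical feature
means:

```python
import numpy as np

# Hypothetical corpus of observed translations of 'in' (illustrative data only)
corpus = ['dans', 'à', 'en', 'dans', 'pendant', 'à', 'dans', 'au cours de']

# The same indicator features as in the Quickstart
# (f0 is identically 1 on observed data, giving the normalization constraint)
features = [lambda x: True,
            lambda x: x in ('dans', 'en'),
            lambda x: x in ('dans', 'à')]

# Empirical expectations b_i = mean of f_i(x) over the corpus
b = [float(np.mean([f(x) for x in corpus])) for f in features]
print(b)   # [1.0, 0.5, 0.625] for this corpus
```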

In the special case where $p_0$ is the uniform distribution, this is the
"flattest" model subject to the constraints, in the sense of having **maximum
entropy**.
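
As a package-independent illustration of this exponential form, the following sketch
evaluates $p(x) \propto p_0(x) \exp(\theta^T f(x))$ over the small sample space from the
Quickstart for an arbitrary, made-up parameter vector $\theta$ and reports the resulting
feature expectations $E f_i(X)$. It simply spells the formula out in NumPy; it is not
the package's internal implementation.

```python
import numpy as np

# Sample space and indicator features from the Quickstart example
samplespace = ['dans', 'en', 'à', 'au cours de', 'pendant']
features = [lambda x: x in samplespace,          # normalization constraint
            lambda x: x in ('dans', 'en'),
            lambda x: x in ('dans', 'à')]

def exponential_family_pmf(theta, prior=None):
    """Return p(x) = p0(x) exp(theta . f(x)) / Z(theta; p0) over the sample space,
    together with the |X| x n matrix of feature values."""
    F = np.array([[f(x) for f in features] for x in samplespace], dtype=float)
    p0 = np.ones(len(samplespace)) if prior is None else np.asarray(prior, dtype=float)
    unnormalized = p0 * np.exp(F @ np.asarray(theta, dtype=float))
    return unnormalized / unnormalized.sum(), F   # division by Z(theta; p0)

# Made-up (not fitted) parameters, purely to illustrate the formula
theta = [0.0, -0.5, 0.5]
p, F = exponential_family_pmf(theta)
print("p(x):                    ", dict(zip(samplespace, p.round(3))))
print("Feature expectations E f:", (F.T @ p).round(3))
```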

For more background, see, for example, Cover and Thomas (1991), *Elements of
Information Theory*.
