            |  |activity| |doc| |version|
|  |py-versions| |downloads|
|  |license| |packages| |zenodo|

-----

aggregate: a powerful actuarial modeling library
==================================================

Purpose
-----------

``aggregate`` builds approximations to compound (aggregate) probability distributions quickly and accurately.
It can be used to solve insurance, risk management, and actuarial problems using realistic models that reflect
underlying frequency and severity. It delivers the speed and accuracy of parametric distributions to situations
that usually require simulation, making it as easy to work with an aggregate (compound) probability distribution
as the lognormal. ``aggregate`` includes an expressive language called DecL to describe aggregate distributions
and is implemented in Python under an open-source BSD license.

White Paper (new July 2023)
----------------------------

The `White Paper <https://github.com/mynl/aggregate/blob/master/cheat-sheets/Aggregate_white_paper.pdf>`_ describes
the purpose, implementation, and use of the class ``aggregate.Aggregate`` that
handles the creation and manipulation of compound frequency-severity distributions.

Documentation
-------------

https://aggregate.readthedocs.io/


Where to get it
---------------

https://github.com/mynl/aggregate


Installation
------------

To install into a new ``Python>=3.10`` virtual environment::

    python -m venv path/to/your/venv
    cd path/to/your/venv

followed by::

    \path\to\env\Scripts\activate

on Windows, or::

    source /path/to/env/bin/activate

on Linux/Unix or MacOS. Finally, install the package::

    pip install "aggregate[dev]"

All the code examples have been tested in such a virtual environment, and the documentation builds in it.
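
To check that the install worked (assuming the package exposes ``__version__``, as most
packages do)::

    python -c "import aggregate; print(aggregate.__version__)"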


Version History
-----------------

0.22.0
~~~~~~~~~~

* Created version 0.22.0, convolution

0.21.4
~~~~~~~~

* Updated requirements using ``pipreqs`` recommendations
* Color graphics in documentation
* Added ``expected_shift_reduce = 16  # Set this to the number of expected shift/reduce conflicts`` to ``parser.py``
  to avoid warnings. The conflicts are resolved in the correct way for the grammar to work.
* Issues: there is a difference between ``dfreq[1]`` and ``1 claim ... fixed``, e.g.,
  when using spliced severities. These differences should not occur; a sketch of the
  two specifications follows.
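
For context, a hedged sketch of the two specifications that should agree (plain
``dsev`` severity, no splice)::

    from aggregate import build, qd

    # both objects specify exactly one claim from a die-roll severity
    a1 = build('agg A dfreq [1] dsev [1:6]')
    a2 = build('agg B 1 claim dsev [1:6] fixed')
    qd(a1)
    qd(a2)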


0.21.3
~~~~~~~~

* Risk progression defaults to linear allocation.
* Added ``g_insurance_statistics`` to ``extensions`` to plot insurance statistics from a distortion ``g``.
* Added ``g_risk_appetite`` to ``extensions`` to plot risk appetite from a distortion ``g`` (value, loss ratio,
  return on capital, VaR and TVaR weights).
* Corrected Wang distortion derivative.
* Vectorized ``Distortion.g_prime`` calculation for proportional hazard
* Added ``tvar_weights`` function to ``spectral`` to compute the TVaR weights of a distortion. (Work in progress)
* Updated dependencies in pyproject.toml file.

0.21.2
~~~~~~~~

* Misc documentation updates.
* Experimental magic functions, e.g., allowing ``%agg [spec]`` to create an
  aggregate object in one line.
* 0.21.1 yanked from PyPI due to an error in pyproject.toml.

0.21.0
~~~~~~~~~

* Moved ``sly`` into the project for better control. ``sly`` is a Python implementation of lex and yacc parsing tools.
  It is written by Dave Beazley. Per the sly repo on GitHub:

  The SLY project is no longer making package-installable releases. It's fully functional, but if you choose to use it,
  you should vendor the code into your application. SLY has zero-dependencies. Although I am semi-retiring the project,
  I will respond to bug reports and still may decide to make future changes to it depending on my mood.
  I'd like to thank everyone who has contributed to it over the years. --Dave

* Experimenting with a line/cell DecL magic interpreter in Jupyter Lab to obviate the
  need for ``build``.

0.20.2
~~~~~~~~~

* Risk progression logic adjusted to exclude values with zero probability; graphs
  updated to use step drawstyle.

0.20.1
~~~~~~~

* Bug fix in parser interpretation of arrays with step size
* Added figures for AAS paper to extensions.ft and extensions.figures
* Validation "not unreasonable" flag set to 0
* Added aggregate_white_paper.pdf
* Colors in risk_progression

0.20.0
~~~~~~~

* ``sev_attachment``: changed default to ``None``; in that case gross losses equal
  ground-up losses, with no adjustment. But if the layer is 10 xs 0, losses
  become conditional on X > 0. That results in different behaviour, e.g.,
  when using ``dsev[0:3]`` (see the sketch after this list). Ripple-through
  effects in Aggregate (change default) and Severity (change default, and change
  moment calculation; need to track the "attachment" of zero and the fact that
  it came from None, to track the probability of attaching)
* dsev: check if any elements are < 0 and set them to zero before computing moments
  in dhistogram
* Same for dfreq; implemented in ``validate_discrete_distribution`` in the
  distributions module
* Default ``recommend_p=0.99999`` set in the constants module.
* ``interpreter_test_suite`` renamed to ``run_test_suite``; includes tests that
  count and report errors.
* Reason codes for failing validation; ``Aggregate.qt`` becomes ``Aggregate.explain_validation``
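
A hedged sketch of the behaviour change (assuming a layer clause is valid alongside
``dfreq``/``dsev``; names are illustrative)::

    from aggregate import build, qd

    # ground-up: outcomes 0, 1, 2, 3 equally likely; mean 1.5
    a = build('agg GroundUp dfreq [1] dsev [0:3]')
    # with a 10 xs 0 layer the severity is conditional on X > 0,
    # so outcomes are 1, 2, 3 and the mean is 2
    b = build('agg Layered dfreq [1] 10 xs 0 dsev [0:3]')
    qd(a)
    qd(b)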

0.19.0
~~~~~~~

* Fixed reinsurance description formatting
* Improved splice parsing to allow explicit entry of lb and ub; needed to
  model mixtures of mixtures (Albrecher et al. 2017)

0.18.0 (major update)
~~~~~~~~~~~~~~~~~~~~~~~

* Added ability to specify occ reinsurance after a built-in agg; this
  allows you to alter a gross aggregate more easily.
* ``Underwriter.safe_lookup`` uses deepcopy rather than copy to avoid
  problems with array elements.
* Clean up and improved Parser and grammar

    - atom -> term is much cleaner (removed power, factor; now
      managed with precedence and associativity)
    - EXP and EXPONENT are right-associative; division is not
      associative, so 1/2/3 gives an error.
    - Still an SR conflict from ``dfreq [ ] [  ]`` because it could be the
      probabilities clause or the start of a vectorized limit clause
    - Remaining SR conflicts are from NUMBER, which is used in many
      places. This is a problem with the grammar, not the parser.
    - Added more tests to the parser test suite
    - Severity weights clause must come after locations (more natural)
    - Added ability for unconditional dsev.
    - Support for splicing (see below)

* Cleanup of ``Aggregate`` class, concurrent with creating a cheat sheet

    - many documentation updates
    - ``plot_old`` deleted
    - deleted ``delbaen_haezendonck_density``; not used and not doing anything
      that isn't easy by hand. Also removed ``dh_sev_density`` and ``dh_agg_density``.
    - deleted ``fit`` as alternative name for ``approximate``
    - deleted unused fields

* Cleanup of ``Portfolio`` class, concurrent with creating a cheat sheet

    - deleted ``fit`` as alternative name for ``approximate``
    - deleted ``q_old_0_12_0`` (old quantile), ``q_temp``, ``tvar_old_0_12_0``
    - deleted ``plot_old``, ``last_a``, ``_(inverse)_tail_var(_2)``
    - deleted ``def get_stat(self, line='total', stat='EmpMean'): return self.audit_df.loc[line, stat]``
    - deleted ``resample``, was an alias for sample

* Management of knowledge in ``Underwriter`` changed to support loading
  a database after creation. Databases not loaded until needed - alas
  that includes printing the object. TODO: Consider a change?
* Frequency ``mfg`` renamed to ``freq_pgf`` to match other Frequency class methods and
  to accurately describe the function as a probability generating function
  rather than a moment generating function.
* Added ``introspect`` function to Utilities. Used to create a cheat sheet
  for Aggregate.
* Added cheat sheets, completed for Aggregate
* Severity can now be conditional on being in a layer (see splice); managed
  adjustments to underlying frozen rv using decorators. No overhead if not
  used.
* Added "splice" option for Severity (see Albrecher et. al ch XX) and Aggregate,
  new arguments ``sev_lb`` and ``sev_ub``, each lists.
* ``Underwriter.build`` defaults update argument to None, which uses the object default.
* Pretty printing now returns a value (no tacit mode); added an ``_html`` version,
  run through pygments, that looks good in Jupyter Lab.

0.17.1
~~~~~~~~

* Adjusted pyproject.toml
* pygments lexer tweaks
* Simplified grammar: % and inf now handled as part of resolving NUMBER; still 16 = 5 * 3 + 1 SR conflicts
* Reading databases on demand in Underwriter, resulting in faster object creation
* Creating and testing existence of subdirectories in Underwriter on demand using properties
* Creating directories moved into Extensions __init__.py
* lexer and parser as properties for Underwriter object creation
* Default ``recommend_p`` changed from 0.999 to 0.99999.
* ``recommend_bucket`` now uses ``p=max(p, 1-1e-8)`` if severity is unlimited.


0.17.0 (July 2023)
~~~~~~~~~~~~~~~~~~~~

* ``more`` added as a proper method
* Fixed debugfile in parser.py, which stops installation if not None (need to
  ensure the directory exists)
* Fixed build and MANIFEST to remove build warning
* Parser: semicolon no longer mapped to newline; it is now used to separate
  calculation hints in notes
* ``recommend_bucket`` uses ``p=max(p, 1-1e-8)`` if ``limit=inf``. Default increased from 0.999
  to 0.99999 based on examples; works well for limited severity but not for unlimited severity.
* Implemented calculation hints in note strings. The format is ``k=v;`` pairs;
  the keys ``bs``, ``log2``, ``padding``, ``recommend_p``, and ``normalize`` are
  recognized. If present, they are used when no arguments are passed explicitly
  to ``build`` (see the sketch after this list).
* Added ``interpreter_test_suite()`` to ``Underwriter`` to run the test suite
* Added ``test_suite_file`` to ``Underwriter`` to return the ``Path`` to the ``test_suite.agg`` file
* Layers, attachments, and the reinsurance tower can now be ranges, ``[s:f:j]`` syntax
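
A hedged sketch of the hint mechanism (assuming the DecL ``note{...}`` clause carries
the hints; the spec itself is illustrative)::

    from aggregate import build

    # bs and log2 hints in the note are picked up by build when no
    # explicit arguments are passed
    a = build('agg Hinted 10 claims sev lognorm 50 cv 2 poisson '
              'note{bs=1/64; log2=16}')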

0.16.1 (July 2023)
~~~~~~~~~~~~~~~~~~~~

* IDs can now include dashes: ``Line-A`` is a legitimate ID
* Include templates and test-cases.agg file in the distribution
* Fixed mixed severity / limit profile interaction. Mixtures now work with
  exposure defined by losses and premium (as opposed to just claim count) and
  correctly account for excess layers (which requires re-weighting the
  mixture components). This involves fixing the ground-up severity and using it
  to adjust weights first. Then, by layer, figure the severity and convert
  exposure to claim count if necessary. Cases where there is no loss in the
  layer (a high layer from a low mean / low volatility component) are replaced
  by zero. Use logging level 20 for more details. A sketch of the re-weighting
  idea follows this list.
* Added ``more`` function to ``Portfolio``, ``Aggregate`` and ``Underwriter`` classes.
  Given a regex it returns all methods and attributes matching. It tries to call a method
  with no arguments and reports the answer. ``more`` is defined in utilities
  and can be applied to any object.
* Moved work of ``qt`` from utilities into ``Aggregate`` (where it belongs).
  Retained ``qt`` for backwards compatibility.
* Parser: changed ``power <- atom ** factor`` to ``power <- factor ** factor`` to allow ``(1/2)**(3/4)``
* ``random`` module renamed ``random_agg`` to avoid conflict with Python ``random``
* Implemented exact moments for exponential (special case of gamma) because
  MED is a common distribution and computing analytic moments is very time
  consuming for large mixtures.
* Added ZM and ZT examples to test_cases.agg; adjusted Portfolio examples to
  be on one line so they run through interpreter_file tests.
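
A hedged sketch of the re-weighting idea from the mixture fix above (illustrative
parameters; not library code): the weight of component ``i`` in layer ``l xs a`` is
proportional to its ground-up weight times its expected layer loss::

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    a, l = 50.0, 200.0                        # layer: l xs a
    comps = [stats.lognorm(1.0, scale=20), stats.lognorm(1.5, scale=40)]
    w = np.array([0.7, 0.3])                  # ground-up mixture weights

    # expected layer loss E[min(X, a+l) - min(X, a)] = integral of S(x) over the layer
    lev = np.array([quad(fz.sf, a, a + l)[0] for fz in comps])
    w_layer = w * lev / (w * lev).sum()       # re-weighted mixture in the layer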

0.16.0 (June 2023)
~~~~~~~~~~~~~~~~~~~~

* Implemented ZM and ZT distributions using decorators!
* Added ``panjer_ab`` to ``Frequency``; reports the a and b values in
  ``p_k = (a + b/k) p_{k-1}``. These values can be tested by computing implied
  a and b values from ``r_k = k p_k / p_{k-1} = a k + b``: the first difference
  of ``r_k`` gives a, and b follows easily (see the sketch after this list).
* Added ``freq_dist(log2)`` option to ``Freq`` to return the frequency distribution stand-alone
* Added negbin frequency, where ``freq_a`` equals the variance multiplier
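
A hedged numerical check of the a, b recovery described above (a plain ``scipy``
sketch, not the library's ``panjer_ab``); for a Poisson, a = 0 and b equals the mean::

    import numpy as np
    from scipy.stats import poisson

    lam = 3.0
    k = np.arange(1, 11)
    r = k * poisson.pmf(k, lam) / poisson.pmf(k - 1, lam)  # r_k = a k + b
    a = np.diff(r).mean()        # slope: ~0 for Poisson
    b = (r - a * k).mean()       # intercept: ~lam for Poisson
    print(a, b)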


0.15.0 (June 2023)
~~~~~~~~~~~~~~~~~~~~

* Added pygments lexer for decl (called agg, aggregate, dec, or decl)
* Added to the documentation
* Using pygments style in ``pprint_ex`` HTML mode
* Removed old setup scripts and files, and stack.md

0.14.1 (June 2023)
~~~~~~~~~~~~~~~~~~~~

* Added scripts.py for entry points
* Updated .readthedocs.yaml to build from toml not requirements.txt
* Fixes to documentation
* ``Portfolio.tvar_threshold`` updated to use ``scipy.optimize.bisect``
* Added ``kaplan_meier`` to ``utilities`` to compute the product-limit estimator of the
  survival function from censored data. This applies to a loss listing with open
  (censored) and closed claims; a sketch of the idea follows this list.
* Renamed ``doc`` directory to ``docs``
* Enhanced ``make_var_tvar`` for cases where all probabilities are equal, using linspace rather
  than cumsum.
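
A hedged sketch of the product-limit idea (illustrative only; not the library's
``kaplan_meier`` signature)::

    import numpy as np

    def km_survival(values, closed):
        # Product-limit survival from a loss listing; closed is 1 for
        # closed (uncensored) claims and 0 for open (censored) claims.
        values = np.asarray(values, float)
        closed = np.asarray(closed, bool)
        order = np.argsort(values)
        values, closed = values[order], closed[order]
        at_risk = len(values) - np.arange(len(values))   # claims >= current value
        s = np.cumprod(np.where(closed, 1 - 1 / at_risk, 1.0))
        return values, s

    t, s = km_survival([5, 7, 7, 10, 12], [1, 1, 0, 1, 0])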

0.13.0 (June 4, 2023)
~~~~~~~~~~~~~~~~~~~~~~~

* Updated ``Portfolio.price`` to implement ``allocation='linear'`` and
  allow a dictionary of distortions
* ``ordered='strict'`` default for ``Portfolio.calibrate_distortions``
* ``Pentagon`` can return a namedtuple, and ``solve`` does not return a dataframe (it has no return value)
* Added random.py module to hold random state. Incorporated into

    - Utilities: Iman Conover (``ic_noise`` permutation) and rearrangement
      algorithms (see the sketch after this list)
    - ``Portfolio`` sample
    - ``Aggregate`` sample
    - Spectral ``bagged_distortion``

* ``Portfolio`` added ``n_units`` property
* ``Portfolio`` simplified ``__repr__``
* Added ``block_iman_conover`` to ``utilities``. Note tester code in the documentation. Very Nice! 😁😁😁
* New VaR, quantile and TVaR functions: 1000x speedup and more accurate. Builder function in ``utilities``.
* pyproject.toml project specification, updated build process, now creates whl file rather than egg file.
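
A hedged sketch of the Iman-Conover reordering idea behind ``ic_noise`` and
``block_iman_conover`` (illustrative; not the library's implementation)::

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 10_000

    # independent marginals to be coupled
    x = np.column_stack([rng.lognormal(0, 1, n), rng.gamma(2, 1, n)])

    # reference sample with the target correlation structure
    target = np.array([[1.0, 0.6], [0.6, 1.0]])
    m = rng.multivariate_normal([0, 0], target, size=n)

    # reorder each marginal to match the ranks of the reference column
    out = np.empty_like(x)
    for j in range(x.shape[1]):
        ranks = m[:, j].argsort().argsort()
        out[:, j] = np.sort(x[:, j])[ranks]

    print(stats.spearmanr(out).statistic)  # close to the target rank correlation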

0.12.0 (May 2023)
~~~~~~~~~~~~~~~~~~~

* ``add_exa_sample`` becomes method of ``Portfolio``
* Added ``create_from_sample`` method to ``Portfolio``
* Added ``bodoff`` method to compute layer capital allocation to ``Portfolio``
* Improved validation error reporting
* ``extensions.samples`` module deleted
* Added ``spectral.approx_ccoc`` to create a ct approx to the CCoC distortion
* ``qdp`` moved to ``utilities`` (describe plus some quantiles)
* Added ``Pentagon`` class in ``extensions``
* Added example use of the Pollaczek-Khinchine formula, reproducing examples from
  the ``actuar`` risk vignette, in Ch 5 of the documentation; the formula is recalled below.
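
For reference, the Pollaczek-Khinchine formula writes the ruin probability in the
classical Cramér-Lundberg model as a compound geometric tail:

.. math::

    \psi(u) = (1 - \rho) \sum_{n=0}^{\infty} \rho^n \left(1 - F_I^{*n}(u)\right),
    \qquad \rho = \frac{\lambda \mu}{c},

where :math:`F_I` is the integrated-tail (equilibrium) severity distribution,
:math:`\lambda` the Poisson claim rate, :math:`\mu` the mean severity, and
:math:`c` the premium rate.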

Earlier versions
~~~~~~~~~~~~~~~~~~

See GitHub commit notes.

Version numbers follow semantic versioning, MAJOR.MINOR.PATCH:

* MAJOR version changes with incompatible API changes.
* MINOR version changes with added functionality in a backwards compatible manner.
* PATCH version changes with backwards compatible bug fixes.

Issues and Todo
-----------------

* Treatment of a zero lower bound (lb) is not consistent with an attachment of zero.
* Flag attempts to use fixed frequency with a non-integer expected value.
* Flag attempts to use mixing with an inconsistent frequency distribution.

Getting started
---------------

To get started, import ``build``. It provides easy access to all functionality.

Here is a model of the sum of three dice rolls. The DataFrame ``describe`` compares exact mean, CV and skewness with the ``aggregate`` computation for the frequency, severity, and aggregate components. Common statistical functions like the cdf and quantile function are built-in. The whole probability distribution is available in ``a.density_df``.

::

  from aggregate import build, qd
  a = build('agg Dice dfreq [3] dsev [1:6]')
  qd(a)

>>>        E[X] Est E[X]    Err E[X]   CV(X) Est CV(X)   Err CV(X) Skew(X) Est Skew(X)
>>>  X
>>>  Freq     3                            0
>>>  Sev    3.5      3.5           0 0.48795   0.48795 -3.3307e-16       0  2.8529e-15
>>>  Agg   10.5     10.5 -3.3307e-16 0.28172   0.28172 -8.6597e-15       0 -1.5813e-13

::

  print(f'\nProbability sum < 12 = {a.cdf(12):.3f}\nMedian = {a.q(0.5):.0f}')

>>>  Probability sum < 12 = 0.741
>>>  Median = 10


``aggregate`` can use any ``scipy.stats`` continuous random variable as a severity and
supports all common frequency distributions. Here is a compound Poisson with lognormal
severity, mean 50 and CV 2.

::

  a = build('agg Example 10 claims sev lognorm 50 cv 2 poisson')
  qd(a)

>>>       E[X] Est E[X]   Err E[X]   CV(X) Est CV(X) Err CV(X)  Skew(X) Est Skew(X)
>>> X
>>> Freq    10                     0.31623                      0.31623
>>> Sev     50   49.888 -0.0022464       2    1.9314 -0.034314       14      9.1099
>>> Agg    500   498.27 -0.0034695 0.70711   0.68235 -0.035007   3.5355      2.2421

::

  # cdf and quantiles
  print(f'Pr(X<=500)={a.cdf(500):.3f}\n0.99 quantile={a.q(0.99)}')

>>> Pr(X<=500)=0.611
>>> 0.99 quantile=1727.125

See the documentation for more examples.

Dependencies
------------

See requirements.txt.

Install from source
--------------------
::

    git clone --no-single-branch --depth 50 https://github.com/mynl/aggregate.git .

    git checkout --force origin/master

    git clean -d -f -f

    python -m virtualenv ./venv

    # ./venv/Scripts on Windows
    ./venv/bin/python -m pip install --exists-action=w --no-cache-dir -r requirements.txt

    # to create help files
    ./venv/bin/python -m pip install --upgrade --no-cache-dir pip "setuptools<58.3.0"

    ./venv/bin/python -m pip install --upgrade --no-cache-dir pillow "mock==1.0.1" "alabaster>=0.7,<0.8,!=0.7.5" "commonmark==0.9.1" "recommonmark==0.5.0" "sphinx<2" "sphinx-rtd-theme<0.5" "readthedocs-sphinx-ext<2.3" "jinja2<3.1.0"

Note: these options come from the readthedocs.org build script.

License
-------

BSD 3-clause license.

Help and contributions
-------------------------

Limited help available. Email me at help@aggregate.capital.

All contributions, bug reports, bug fixes, documentation improvements,
enhancements and ideas are welcome. Create a pull request on GitHub and/or
email me.

Social media: https://www.reddit.com/r/AggregateDistribution/.


.. substitutions

.. |downloads| image:: https://img.shields.io/pypi/dm/aggregate.svg
    :target: https://pepy.tech/project/aggregate
    :alt: Downloads

.. |stars| image:: https://img.shields.io/github/stars/mynl/aggregate.svg
    :target: https://github.com/mynl/aggregate/stargazers
    :alt: Github stars

.. |forks| image:: https://img.shields.io/github/forks/mynl/aggregate.svg
    :target: https://github.com/mynl/aggregate/network/members
    :alt: Github forks

.. |contributors| image:: https://img.shields.io/github/contributors/mynl/aggregate.svg
    :target: https://github.com/mynl/aggregate/graphs/contributors
    :alt: Contributors

.. |version| image:: https://img.shields.io/pypi/v/aggregate.svg?label=pypi
    :target: https://pypi.org/project/aggregate
    :alt: Latest version

.. |activity| image:: https://img.shields.io/github/commit-activity/m/mynl/aggregate
   :target: https://github.com/mynl/aggregate
   :alt: Commit activity

.. |py-versions| image:: https://img.shields.io/pypi/pyversions/aggregate.svg
    :alt: Supported Python versions

.. |license| image:: https://img.shields.io/pypi/l/aggregate.svg
    :target: https://github.com/mynl/aggregate/blob/master/LICENSE
    :alt: License

.. |packages| image:: https://repology.org/badge/tiny-repos/python:aggregate.svg
    :target: https://repology.org/metapackage/python:aggregate/versions
    :alt: Binary packages

.. |doc| image:: https://readthedocs.org/projects/aggregate/badge/?version=latest
    :target: https://aggregate.readthedocs.io/en/latest/
    :alt: Documentation Status

.. |zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.10557199.svg
    :target: https://zenodo.org/records/10557199
    :alt: Zenodo DOI

            
