ratemaking

Name: ratemaking
Version: 0.3.0
Home page: https://github.com/little-croissant/ratemaking
Summary: A comprehensive Python library for P&C actuarial ratemaking
Author: Hugo Latendresse
Requires Python: >=3.8
Upload time: 2025-10-26 21:01:59
Keywords: actuarial, ratemaking, credibility, complements, trending, insurance, P&C, casualty, property
Requirements: pytest>=7.0.0, numpy>=1.20.0, pandas>=1.3.0, pyperclip>=1.8.0, pyautogui>=0.9.0, watchdog>=2.0.0
# Ratemaking

A comprehensive Python library for Property & Casualty actuarial ratemaking, providing tools for credibility analysis, trending, exposure calculations, and data processing.

[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Features

### Currently Available
- **Credibility Analysis**: Classical, Bühlmann, and Bayesian credibility methods
- **Complement Calculations**: First-Dollar methods from Werner & Modlin Chapter 12
- **Comprehensive Testing**: Test suite with actuarial validation

## Installation

```bash
pip install ratemaking
```

## Quick Start

### Classical Credibility

```python
from ratemaking import (
    classical_full_credibility_frequency,
    classical_partial_credibility
)

# Calculate full credibility standard
n_full = classical_full_credibility_frequency(p=0.95, k=0.05)

# Calculate credibility factor for the observed claim count
observed_claims = 800  # illustrative claim count
z = classical_partial_credibility(n=observed_claims, n_full=n_full)

# Apply credibility blend of the observed rate with a complement of credibility
observed_rate, complement_rate = 0.125, 0.110  # illustrative rates
estimate = z * observed_rate + (1 - z) * complement_rate
```
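
For context, the arithmetic behind these two calls follows the standard limited-fluctuation formulas: the full credibility standard is (z / k)^2 expected claims, where z is the two-sided standard normal quantile for probability p, and partial credibility uses the square-root rule Z = min(1, sqrt(n / n_full)). The sketch below reproduces that by hand; it assumes the library follows these textbook formulas, and the values are illustrative only:

```python
from statistics import NormalDist

# Limited-fluctuation arithmetic by hand (textbook formulas; the library's
# internals may differ in detail).
p, k = 0.95, 0.05
z_quantile = NormalDist().inv_cdf((1 + p) / 2)   # two-sided normal quantile, ≈ 1.960
n_full_by_hand = (z_quantile / k) ** 2           # ≈ 1536.6 expected claims

# Square-root rule for partial credibility, capped at 1.
observed_claims = 800                            # illustrative claim count
Z = min(1.0, (observed_claims / n_full_by_hand) ** 0.5)
print(f"n_full ≈ {n_full_by_hand:.1f}, Z ≈ {Z:.3f}")   # ≈ 1536.6 and ≈ 0.722
```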

### Bühlmann Credibility

```python
from ratemaking import BuhlmannInputs, buhlmann

data = {"risk_1": [1.2, 1.5], "risk_2": [2.1, 1.9]}
result = buhlmann(BuhlmannInputs(data=data))
print(f"Credibility weights: {result.Z_by_risk}")
```
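
Under the hood, Bühlmann credibility weights each risk by Z = n / (n + K), where K is the expected process variance (EPV) divided by the variance of hypothetical means (VHM). Below is a minimal sketch of the standard nonparametric estimator on the same toy data, written independently of the library (its estimator may differ in detail, e.g. Bühlmann-Straub with varying exposures):

```python
import statistics as st

data = {"risk_1": [1.2, 1.5], "risk_2": [2.1, 1.9]}
n = len(data["risk_1"])                       # observations per risk (2)

risk_means = [st.mean(obs) for obs in data.values()]

# EPV: average of the within-risk sample variances.
epv = st.mean(st.variance(obs) for obs in data.values())   # 0.0325
# VHM: between-risk variance of the means, less the EPV/n correction.
vhm = st.variance(risk_means) - epv / n                    # 0.195

K = epv / vhm                                 # Bühlmann K ≈ 0.167
Z = n / (n + K)                               # credibility per risk ≈ 0.923
print(f"K ≈ {K:.3f}, Z ≈ {Z:.3f}")
```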

### Bayesian Credibility

```python
from ratemaking import bayes_poisson_gamma

# Poisson-Gamma conjugate updating
posterior = bayes_poisson_gamma(
    prior_alpha=2.0, prior_beta=100.0,
    total_counts=15, total_exposure=120
)
print(f"Posterior mean: {posterior.mean}")
print(f"Credibility weight: {posterior.credibility_Z}")
```
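
The Poisson-Gamma update is easy to verify by hand: with a Gamma(alpha, beta) prior in the rate parameterization (prior mean alpha / beta), the posterior is Gamma(alpha + counts, beta + exposure), and its mean can be rewritten as a credibility blend with Z = exposure / (exposure + beta). A sketch with the same numbers, assuming that parameterization (check the library's docstrings to confirm):

```python
# Conjugate Poisson-Gamma update by hand (rate parameterization assumed).
prior_alpha, prior_beta = 2.0, 100.0
total_counts, total_exposure = 15, 120

post_alpha = prior_alpha + total_counts          # 17.0
post_beta = prior_beta + total_exposure          # 220.0
post_mean = post_alpha / post_beta               # ≈ 0.0773 claims per unit of exposure

# Equivalent credibility-weighted form of the same posterior mean.
Z = total_exposure / (total_exposure + prior_beta)               # ≈ 0.545
blend = Z * (total_counts / total_exposure) + (1 - Z) * (prior_alpha / prior_beta)
print(f"posterior mean ≈ {post_mean:.4f}, Z ≈ {Z:.3f}, blend ≈ {blend:.4f}")
```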

### Complement Calculations

```python
from ratemaking.complements import (
    trended_present_rates_loss_cost,
    trended_present_rates_rate_change_factor,
    larger_group_applied_rate_change_to_present_rate,
    HarwayneInputs,
    harwayne_complement,
)

# Trended present rates method
complement = trended_present_rates_loss_cost(
    present_rate=100.0,
    prior_indicated_factor=1.10,  # 10% indicated
    prior_implemented_factor=1.06,  # 6% implemented 
    loss_trend_annual=0.05,  # 5% annual loss trend
    trend_years=2.0
)

# Harwayne's method for multi-state complements
# (target_exposures, related_pp_data, and related_exposure_data are
# user-supplied exposure and pure-premium data for the target and related states)
inputs = HarwayneInputs(
    target_class_exposures=target_exposures,
    target_avg_pure_premium=120.0,
    related_state_class_pp=related_pp_data,
    related_state_class_exposures=related_exposure_data,
    class_of_interest='ClassA'
)
complement = harwayne_complement(inputs)
```
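
For the trended present rates call above, the underlying arithmetic in the usual Werner & Modlin formulation is: take the present rate, restore the portion of the prior indication that was not implemented, and trend it forward. Here is a hand calculation with the same inputs; it is a sketch of that textbook formula, and the library's implementation may handle details such as premium trend offsets differently:

```python
# Trended present rates arithmetic by hand (textbook formulation; details may
# differ from the library's implementation).
present_rate = 100.0
prior_indicated_factor = 1.10
prior_implemented_factor = 1.06
loss_trend_annual = 0.05
trend_years = 2.0

trend_factor = (1 + loss_trend_annual) ** trend_years                     # ≈ 1.1025
residual_indication = prior_indicated_factor / prior_implemented_factor   # ≈ 1.0377
complement_by_hand = present_rate * trend_factor * residual_indication    # ≈ 114.41
print(f"complement ≈ {complement_by_hand:.2f}")
```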

## Package Structure

```
ratemaking/
├── credibility/           # Credibility analysis tools
│   ├── classical.py      # Classical (Limited Fluctuation) credibility
│   ├── buhlmann.py       # Bühlmann & Bühlmann-Straub credibility
│   └── bayesian.py       # Bayesian credibility with conjugate priors
├── complements/          # Complement calculation methods
│   └── first_dollar.py   # First-Dollar methods (Werner & Modlin Ch.12)
├── trending/             # Trending analysis tools (coming soon)
├── exposure/             # Exposure calculation tools (coming soon)
└── utils/                # Data processing utilities (coming soon)
```

## Modular Usage

To keep imports organized, use the submodules:

```python
# Organized by functionality
from ratemaking.credibility import classical, buhlmann, bayesian
from ratemaking.complements import first_dollar

# Use specific functions
n_full = classical.classical_full_credibility_frequency(p=0.95, k=0.05)
complement = first_dollar.trended_present_rates_loss_cost(...)
```

## Testing

Run the test suite:

```bash
pytest tests/ -v
```

## Development

### Setting up for development:

```bash
git clone https://github.com/little-croissant/ratemaking.git
cd ratemaking
pip install -e .
pip install -e ".[test]"
```

## License

MIT License 

            
