pytmpinv

Name: pytmpinv
Version: 1.0.0
Home page: https://github.com/econcz/pytmpinv
Bug tracker: https://github.com/econcz/pytmpinv/issues
Summary: Tabular Matrix Problems via Pseudoinverse Estimation
Author email: The Economist <29724411+econcz@users.noreply.github.com>
Upload time: 2025-08-23 11:21:43
Requires Python: >=3.10
License: MIT
Keywords: tabular-matrix-problems, convex-optimization, least-squares, generalized-inverse, regularization
# Tabular Matrix Problems via Pseudoinverse Estimation

**Tabular Matrix Problems via Pseudoinverse Estimation (TMPinv)** is a two-stage estimation method that reformulates structured table-based systems — such as allocation problems, transaction matrices, and input–output tables — as structured least-squares problems. Built on the [Convex Least Squares Programming (CLSP)](https://pypi.org/project/pyclsp/ "Convex Least Squares Programming") framework, TMPinv solves systems with row and column constraints, block structure, and optionally reduced dimensionality in two stages: (1) a canonical constraint form is constructed and solved via a pseudoinverse-based projection, and (2) a convex-programming refinement improves fit, coherence, and regularization (e.g., via Lasso, Ridge, or Elastic Net).
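
As a sketch of the first stage (notation illustrative, not pyclsp's exact internals), flattening the *m×p* table *X* row-wise into a vector turns the marginal and known-cell constraints into a single linear system:

```latex
x = \operatorname{vec}(X) \in \mathbb{R}^{mp}, \qquad
A x = b, \qquad
A = \begin{pmatrix}
  I_m \otimes \mathbf{1}_p^{\top} \\[2pt]
  \mathbf{1}_m^{\top} \otimes I_p \\[2pt]
  M
\end{pmatrix}, \qquad
b = \begin{pmatrix} b_{\text{row}} \\ b_{\text{col}} \\ b_{\text{val}} \end{pmatrix}
```

Here the first block of *A* sums each row of *X*, the second sums each column, and *M* picks out known cells; stage one then projects onto the solution set via a generalized inverse of *A* before the convex refinement of stage two.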

## Installation

```bash
pip install pytmpinv
```

## Quick Example

```python
import numpy as np
from tmpinv import tmpinv

# Define a 10×10 matrix with consistent row and column sums
X_true = np.array([
    [ 0,  4,  6,  8,  7, 10,  9,  5,  6, 13],
    [ 6,  0,  9,  6,  5,  4, 13, 10,  8,  8],
    [ 7, 10,  0, 12, 11,  5,  4,  3, 10, 10],
    [11, 12,  5,  0, 12, 13,  4,  2,  4, 10],
    [13,  5, 11, 11,  0,  3, 11,  6, 12,  5],
    [ 4, 12, 13,  4, 11,  0,  3,  3,  4, 10],
    [ 6,  9,  4, 13,  4, 13,  0, 12,  3, 11],
    [10,  6, 11,  5, 12,  4, 10,  0, 13,  4],
    [12, 13,  5, 14,  4, 11,  4, 13,  0,  9],
    [ 5,  6, 12, 12, 10,  3, 13,  4, 11,  0]
], dtype=np.float64)

# Get row and column sums
b_row = X_true.sum(axis=1)
b_col = X_true.sum(axis=0)

# Get known values
M = np.eye(100)[[1, 2, 3, 4, 5]]
b_val = X_true[0, [1, 2, 3, 4, 5]]

# Run bounded tmpinv
result = tmpinv(
    M=M,
    b_row=b_row,
    b_col=b_col,
    b_val=b_val,
    zero_diagonal=True,
    bounds=(0, 15),
    replace_value=0
)

# Display the estimated matrix and check the marginal totals
print("Estimated matrix:\n", np.round(result.x, 2))
print("\nRow sums:   ", np.round(result.x.sum(axis=1), 2))
print("Column sums:", np.round(result.x.sum(axis=0), 2))
```

## User Reference

For comprehensive information on the estimator’s capabilities, advanced configuration options, and implementation details, please refer to the [pyclsp module](https://pypi.org/project/pyclsp/ "Convex Least Squares Programming"), on which TMPinv is based.

**TMPINV Parameters:**  

`S` : *array_like* of shape *(m + p, m + p)*, optional  
A diagonal sign slack (surplus) matrix with entries in *{0, ±1}*:
- *0* enforces equality (== `b_row` or `b_col`),
- *1* enforces a less-than-or-equal (≤) condition,
- *–1* enforces a greater-than-or-equal (≥) condition.

The first `m` diagonal entries correspond to row constraints, and the remaining `p` to column constraints. Please note that, in the reduced model, `S` is ignored: slack behavior is derived implicitly from block-wise marginal totals.
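
For illustration (dimensions hypothetical), an `S` matrix for a 3×2 table that fixes all row totals exactly, caps the first column total, and floors the second could be built as:

```python
import numpy as np

m, p = 3, 2                        # hypothetical table dimensions
# One sign per constraint: the first m entries govern row totals,
# the last p govern column totals.
signs = [0, 0, 0,                  # row totals: equality
         1,                        # column 1 total: <= b_col[0]
         -1]                       # column 2 total: >= b_col[1]
S = np.diag(signs).astype(float)   # shape (m + p, m + p)
```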

`M` : *array_like* of shape *(k, m * p)*, optional  
A model matrix with entries in *{0, 1}*. Each row defines a linear restriction on the flattened solution matrix. The corresponding right-hand side values must be provided in `b_val`. This block is used to encode known cell values. Please note that, in the reduced model, `M` must be a row subset of an identity matrix (i.e., diagonal-only). Arbitrary or non-diagonal model matrices cannot be mapped to reduced blocks, making the model infeasible.
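
A minimal sketch of building `M` for known cells, assuming the row-major flattening used in the Quick Example above (cell *(r, c)* maps to flat index *r · p + c*):

```python
import numpy as np

m, p = 4, 5                          # hypothetical table shape
known = [(0, 2), (3, 1)]             # (row, col) positions of known cells
M = np.zeros((len(known), m * p))
for r, (i, c) in enumerate(known):
    M[r, i * p + c] = 1.0            # one restriction per known cell
# b_val would then hold the known cell values in the same order
```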

`b_row` : *array_like* of shape *(m,)*  
Right-hand side vector of row totals. Please note that both `b_row` and `b_col` must be provided.

`b_col` : *array_like* of shape *(p,)*  
Right-hand side vector of column totals. Please note that both `b_row` and `b_col` must be provided.

`b_val` : *array_like* of shape *(k,)*  
Right-hand side vector of known cell values.

`i` : *int*, default = *1*  
Number of row groups.

`j` : *int*, default = *1*  
Number of column groups.

`zero_diagonal` : *bool*, default = *False*  
If *True*, constrains all diagonal cells of the solution matrix to zero.

`reduced` : *tuple* of *(int, int)*, optional  
Dimensions of the reduced problem. If specified, the problem is estimated as a set of reduced problems constructed from contiguous submatrices of the original table. For example, `reduced` = *(6, 6)* implies *5×5* data blocks with *1* slack row and *1* slack column each (edge blocks may be smaller).
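
One way to picture the partitioning (an illustrative sketch, not the package's internal code): with `reduced` = *(6, 6)*, each block carries 5 data rows/columns plus one slack row/column, so a 10×10 table splits into four 5×5 data blocks:

```python
def block_ranges(n, block_dim):
    """Contiguous data-block boundaries along one axis; each block of
    size block_dim reserves one slack row/column (illustrative)."""
    step = block_dim - 1                          # data rows/cols per block
    return [(s, min(s + step, n)) for s in range(0, n, step)]

row_blocks = block_ranges(10, 6)                  # [(0, 5), (5, 10)]
col_blocks = block_ranges(10, 6)
```

Edge blocks may be smaller, e.g. `block_ranges(7, 6)` ends with a 2-wide block.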

`bounds` : *sequence* of *(low, high)*, optional  
Bounds on cell values. If a single tuple *(low, high)* is given, it is applied to all `m` * `p` cells. Example: *(0, None)*.

`replace_value` : *float* or *None*, default = *np.nan*  
Final replacement value for any cell in the solution matrix that violates the specified bounds by more than the given tolerance.

`tolerance` : *float*, default = *square root of machine epsilon*  
Tolerance for detecting bound violations in the refinement loop.

`iteration_limit` : *int*, default = *50*  
Maximum number of iterations allowed in the refinement loop.

**CLSP Parameters:**  

`r` : *int*, default = *1*  
Number of refinement iterations for the pseudoinverse-based estimator.

`Z` : *np.ndarray* or *None*  
A symmetric idempotent matrix (projector) defining the subspace for Bott–Duffin pseudoinversion. If *None*, the identity matrix is used, reducing the Bott–Duffin inverse to the Moore–Penrose case.

`final` : *bool*, default = *True*  
If *True*, a convex programming problem is solved to refine `zhat`. The resulting solution `z` minimizes a weighted L1/L2 norm around `zhat` subject to `Az = b`.

`alpha` : *float*, default = *1.0*  
Regularization parameter (weight) in the final convex program:  
- `α = 0`: Lasso (L1 norm)  
- `α = 1`: Tikhonov Regularization/Ridge (L2 norm)  
- `0 < α < 1`: Elastic Net

`*args`, `**kwargs` : optional  
CVXPY arguments passed to the CVXPY solver.

**Returns:**  
*TMPinvResult*

`TMPinvResult.full` : *bool*  
Indicates if this result comes from the full (non-reduced) model.

`TMPinvResult.model` : *CLSP* or *list* of *CLSP*  
A single CLSP object in the full model, or a list of CLSP objects for each reduced block in the reduced model.

`TMPinvResult.x` : *np.ndarray*  
Final estimated solution matrix of shape *(m, p)*.

## Bibliography

To be added.

## License

MIT License — see the [LICENSE](LICENSE) file.

            
