# Linear Programming via Pseudoinverse Estimation
**Linear Programming via Pseudoinverse Estimation (LPPinv)** is a two-stage estimation method that reformulates linear programs as structured least-squares problems. Built on the [Convex Least Squares Programming (CLSP)](https://pypi.org/project/pyclsp/ "Convex Least Squares Programming") framework, LPPinv handles linear inequality, equality, and bound constraints by (1) constructing a canonical constraint system and computing a pseudoinverse projection, followed by (2) a convex-programming correction stage that refines the solution under additional regularization (e.g., Lasso, Ridge, or Elastic Net).
LPPinv is intended for **underdetermined** and **ill-posed** linear problems, where standard LP solvers may fail or return unstable solutions.
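To see the idea behind the first stage, consider a toy underdetermined system: among the infinitely many solutions, the Moore-Penrose pseudoinverse selects the minimum-norm one. This is a minimal NumPy sketch of the principle, not the package internals:

```python
import numpy as np

# Underdetermined system: one equation, two unknowns (x1 + x2 = 5).
A = np.array([[1.0, 1.0]])
b = np.array([5.0])

# Stage-1 idea: the pseudoinverse yields the minimum-norm solution
# among all vectors that satisfy A @ x = b exactly.
x = np.linalg.pinv(A) @ b
print(x)  # [2.5 2.5]
```

The correction stage then adjusts such an estimate under the chosen regularization while keeping the constraints satisfied.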
## Installation
```bash
pip install pylppinv
```
## Quick Example
```python
from lppinv import lppinv
# Define inequality constraints A_ub @ x <= b_ub
A_ub = [
    [1, 1],
    [2, 1]
]
b_ub = [5, 8]
# Define equality constraints A_eq @ x = b_eq
A_eq = [
    [1, -1]
]
b_eq = [1]
# Define bounds for x1 and x2
bounds = [(0, 5), (0, None)]
# Run the LP via CLSP
result = lppinv(
    c=[1, 1],  # not used in CLSP but included for compatibility
    A_ub=A_ub,
    b_ub=b_ub,
    A_eq=A_eq,
    b_eq=b_eq,
    bounds=bounds
)
# Output solution
print("Solution vector (x):")
print(result.x.flatten())
```
## User Reference
For comprehensive information on the estimator’s capabilities, advanced configuration options, and implementation details, please refer to the [pyclsp module](https://pypi.org/project/pyclsp/ "Convex Least Squares Programming"), on which LPPinv is based.
**LPPINV Parameters:**
`c` : *array_like* of shape *(p,)*, optional
Objective function coefficients. Accepted for API parity; not used by CLSP.
`A_ub` : *array_like* of shape *(i, p)*, optional
Matrix for inequality constraints `A_ub @ x <= b_ub`.
`b_ub` : *array_like* of shape *(i,)*, optional
Right-hand side vector for inequality constraints.
`A_eq` : *array_like* of shape *(j, p)*, optional
Matrix for equality constraints `A_eq @ x = b_eq`.
`b_eq` : *array_like* of shape *(j,)*, optional
Right-hand side vector for equality constraints.
`bounds` : *sequence* of *(low, high)*, optional
Bounds on variables. If a single tuple **(low, high)** is given, it is applied to all variables. If None, defaults to *(0, None)* for each variable (non-negativity).
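As a rough illustration of the broadcasting rule described above, a normalization step might look like the following (`normalize_bounds` is a hypothetical name for illustration, not part of the lppinv API):

```python
def normalize_bounds(bounds, p):
    """Expand a bounds argument to one (low, high) pair per variable."""
    if bounds is None:
        return [(0, None)] * p  # default: non-negativity for all variables
    if isinstance(bounds, tuple):
        return [bounds] * p     # broadcast a single pair to all variables
    return list(bounds)         # already one pair per variable

print(normalize_bounds(None, 3))    # [(0, None), (0, None), (0, None)]
print(normalize_bounds((0, 5), 2))  # [(0, 5), (0, 5)]
```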
Note that at least one constraint pair must be provided: either `A_ub` with `b_ub`, or `A_eq` with `b_eq`.
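For intuition, the canonical system that stage (1) works with can be sketched by appending slack columns to the inequality rows and solving the stacked system with a pseudoinverse. This is an assumed form for illustration only; sign constraints on the slacks are left to the correction stage:

```python
import numpy as np

# Inequality rows gain slack columns so that [A_ub | I] @ z = b_ub;
# equality rows carry zero slack columns (illustrative construction).
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([5.0, 8.0])
A_eq = np.array([[1.0, -1.0]])
b_eq = np.array([1.0])

i, p = A_ub.shape
C = np.block([[A_ub, np.eye(i)],
              [A_eq, np.zeros((A_eq.shape[0], i))]])
b = np.concatenate([b_ub, b_eq])

z = np.linalg.pinv(C) @ b   # minimum-norm estimate of [x; slacks]
x, y = z[:p], z[p:]         # variable and slack components
```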
**CLSP Parameters:**
`r` : *int*, default = *1*
Number of refinement iterations for the pseudoinverse-based estimator.
`Z` : *np.ndarray* or *None*
A symmetric idempotent matrix (projector) defining the subspace for Bott–Duffin pseudoinversion. If *None*, the identity matrix is used, reducing the Bott–Duffin inverse to the Moore–Penrose case.
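The reduction to the Moore-Penrose case can be checked numerically with the classical Bott-Duffin formula, which constrains the inverse to the range of a projector `Z` via `Z @ inv(A @ Z + I - Z)`. This is an illustrative sketch of the general concept, not the CLSP implementation:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # invertible for a clean comparison
Z = np.eye(2)                # identity projector: whole space

# Classical Bott-Duffin constrained inverse with respect to Z.
A_bd = Z @ np.linalg.inv(A @ Z + np.eye(2) - Z)

# With Z = I the formula collapses to the ordinary inverse,
# which coincides with the Moore-Penrose pseudoinverse here.
print(np.allclose(A_bd, np.linalg.pinv(A)))  # True
```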
`tolerance` : *float*, default = *square root of machine epsilon*
Convergence tolerance for NRMSE change between refinement iterations.
`iteration_limit` : *int*, default = *50*
Maximum number of iterations allowed in the refinement loop.
`final` : *bool*, default = *True*
If *True*, a convex programming problem is solved to refine `zhat`. The resulting solution `z` minimizes a weighted L1/L2 norm around `zhat` subject to `Az = b`.
`alpha` : *float*, default = *1.0*
Regularization parameter (weight) in the final convex program:
- `α = 0`: Lasso (L1 norm)
- `α = 1`: Tikhonov Regularization/Ridge (L2 norm)
- `0 < α < 1`: Elastic Net
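Putting `final` and `alpha` together, the second-stage program can be written in an assumed elastic-net form (see the pyclsp documentation for the exact objective used by CLSP):

```latex
\min_{z} \; \alpha \,\lVert z - \hat{z} \rVert_2^2
        + (1 - \alpha)\,\lVert z - \hat{z} \rVert_1
\quad \text{subject to} \quad A z = b
```

Setting `α = 0` keeps only the L1 term (Lasso), `α = 1` keeps only the L2 term (Ridge), and intermediate values mix the two (Elastic Net), consistent with the list above.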
`*args`, `**kwargs` : optional
CVXPY arguments passed to the CVXPY solver.
**Returns:**
*self*
`self.A` : *np.ndarray*
Design matrix `A` = [`C` | `S`; `M` | `Q`], where `Q` is either a zero matrix or *S_residual*.
`self.b` : *np.ndarray*
Vector of the right-hand side.
`self.zhat` : *np.ndarray*
Vector of the first-step estimate.
`self.r` : *int*
Number of refinement iterations performed in the first step.
`self.z` : *np.ndarray*
Vector of the final solution. If the second step is disabled, it equals `self.zhat`.
`self.x` : *np.ndarray*
`m × p` matrix or vector containing the variable component of `z`.
`self.y` : *np.ndarray*
Vector containing the slack component of `z`.
`self.kappaC` : *float*
Spectral condition number κ of *C_canon*.
`self.kappaB` : *float*
Spectral condition number κ of *B* = *C_canon^+ A*.
`self.kappaA` : *float*
Spectral condition number κ of `A`.
`self.rmsa` : *float*
Total root mean square alignment (RMSA).
`self.r2_partial` : *float*
R² for the `M` block in `A`.
`self.nrmse` : *float*
Root-mean-square error computed from `A`, normalized by the standard deviation (NRMSE).
`self.nrmse_partial` : *float*
Root-mean-square error computed from the `M` block in `A`, normalized by the standard deviation (NRMSE).
`self.z_lower` : *np.ndarray*
Lower bound of the diagnostic interval (confidence band) for `z`, based on κ(`A`).
`self.z_upper` : *np.ndarray*
Upper bound of the diagnostic interval (confidence band) for `z`, based on κ(`A`).
`self.x_lower` : *np.ndarray*
Lower bound of the diagnostic interval (confidence band) for `x`, based on κ(`A`).
`self.x_upper` : *np.ndarray*
Upper bound of the diagnostic interval (confidence band) for `x`, based on κ(`A`).
`self.y_lower` : *np.ndarray*
Lower bound of the diagnostic interval (confidence band) for `y`, based on κ(`A`).
`self.y_upper` : *np.ndarray*
Upper bound of the diagnostic interval (confidence band) for `y`, based on κ(`A`).
## Bibliography
To be added.
## License
MIT License — see the [LICENSE](LICENSE) file.