# Econometrust 🚀
High-performance econometric regression library written in Rust with Python bindings, delivering blazing-fast OLS, GLS, WLS, and IV implementations with comprehensive diagnostic statistics.
## ✨ Features
- **🏎️ Blazing Fast**: Rust-powered performance with zero-copy numpy integration
- **📊 Comprehensive Diagnostics**: Full suite of econometric tests and statistics
- **🔧 Easy to Use**: Familiar scikit-learn-style API
- **📈 Professional Output**: Detailed regression summaries with interpretive notes
- **🛡️ Robust**: Handles edge cases gracefully with memory-efficient operations
### Supported Models
- **OLS (Ordinary Least Squares)**: Classic linear regression with optional robust standard errors
- **GLS (Generalized Least Squares)**: Handles heteroskedasticity and autocorrelation
- **WLS (Weighted Least Squares)**: Handles heteroskedastic errors with known variance structure
- **IV (Instrumental Variables)**: Addresses endogeneity using instrumental variables (exactly identified models only)
### Diagnostic Statistics
- **Durbin-Watson Test**: Detect autocorrelation in residuals (textbook formulas for this and the Jarque-Bera test are sketched after this list)
- **Jarque-Bera Test**: Test for normality of residuals
- **Omnibus Test**: Alternative normality test
- **Skewness & Kurtosis**: Distribution shape measures
- **Condition Number**: Multicollinearity detection
- **R-squared & Adjusted R-squared**: Model fit measures
- **F-statistic**: Overall model significance
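For reference, the Durbin-Watson and Jarque-Bera statistics reported in the summaries follow their standard textbook definitions. The sketch below recomputes them from a residual vector with plain NumPy; it is purely illustrative and independent of the library's internal implementation, which may differ in minor details.

```python
import numpy as np

def durbin_watson(residuals: np.ndarray) -> float:
    """Textbook Durbin-Watson statistic: sum of squared first differences
    of the residuals divided by the residual sum of squares."""
    diffs = np.diff(residuals)
    return float(np.sum(diffs**2) / np.sum(residuals**2))

def jarque_bera(residuals: np.ndarray) -> float:
    """Textbook Jarque-Bera statistic based on sample skewness and kurtosis."""
    n = residuals.size
    centered = residuals - residuals.mean()
    std = centered.std()                  # population standard deviation (ddof=0)
    skew = np.mean(centered**3) / std**3
    kurt = np.mean(centered**4) / std**4  # non-excess kurtosis (normal = 3)
    return float(n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0))
```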
## 🚀 Quick Start
### Installation
```bash
pip install econometrust
```
### Basic Usage
```python
import numpy as np
from econometrust import OLS, GLS, WLS, IV
# Generate sample data
X = np.random.randn(100, 3)
y = X @ [2.0, 1.5, -0.8] + np.random.normal(0, 0.5, 100)
# Fit OLS model
model = OLS(fit_intercept=True, robust=False)
model.fit(X, y)
# Get comprehensive summary
print(model.summary())
# Access individual results
print(f"Coefficients: {model.coefficients}")
print(f"R-squared: {model.r_squared}")
print(f"Durbin-Watson: {model.durbin_watson}")
```
### Advanced Example with Diagnostics
```python
import numpy as np
from econometrust import OLS
# Create data with potential issues
np.random.seed(42)
n = 100
X = np.random.randn(n, 3)
# Add some autocorrelation to demonstrate diagnostics
errors = np.zeros(n)
errors[0] = np.random.normal(0, 0.5)
for i in range(1, n):
    errors[i] = 0.6 * errors[i-1] + np.random.normal(0, 0.3)
y = X @ [1.5, -2.0, 0.8] + errors
# Fit model
model = OLS(fit_intercept=True, robust=True)
model.fit(X, y)
# Get detailed diagnostic summary
summary = model.summary()
print(summary)
# The summary includes:
# - Coefficient estimates with standard errors and t-statistics
# - Model fit statistics (R², Adjusted R², F-statistic)
# - Diagnostic tests (Durbin-Watson, Jarque-Bera, Omnibus)
# - Distribution measures (Skewness, Kurtosis)
# - Multicollinearity indicator (Condition Number)
# - Interpretive notes for significant diagnostic findings
```
## 📊 Sample Output
```
====================================
OLS Regression Results
====================================
Dependent Variable: y              No. Observations:        100
Model: OLS                         Degrees of Freedom:       96
Method: Least Squares              F-statistic:          156.42
Date: 2024-01-15 10:30:00          Prob (F-statistic): 1.23e-45
                                   R-squared:             0.830
                                   Adj. R-squared:        0.825
====================================
Coefficients
====================================
Variable        Coef   Std Err    t-stat    P>|t|     [0.025    0.975]
--------------------------------------------------------------------
const         0.0234    0.0891     0.262    0.794    -0.1536    0.2004
x1            1.4987    0.0934    16.046    0.000     1.3131    1.6843
x2           -1.9876    0.0912   -21.786    0.000    -2.1688   -1.8064
x3            0.7899    0.0888     8.896    0.000     0.6135    0.9663
====================================
Diagnostic Statistics
====================================
Durbin-Watson: 1.234 (Positive autocorrelation detected)
Jarque-Bera: 5.678 (p=0.058)
Omnibus: 4.321 (p=0.115)
Skewness: 0.456
Kurtosis: 3.234
Condition Number: 12.34
====================================
Notes
====================================
- Positive autocorrelation detected (DW = 1.234 < 1.5)
- Consider using robust standard errors or GLS
```
## 🔧 API Reference
### OLS Class
```python
class OLS:
    def __init__(self, fit_intercept=True, robust=False)
    def fit(self, X, y)
    def predict(self, X)
    def summary(self)

    # Properties
    .coefficients         # Coefficient estimates
    .intercept            # Intercept term
    .residuals            # Model residuals
    .fitted_values        # Predicted values
    .r_squared            # R-squared value
    .adjusted_r_squared   # Adjusted R-squared
    .f_statistic          # F-statistic
    .durbin_watson        # Durbin-Watson statistic
    .jarque_bera          # Jarque-Bera test statistic
    .omnibus              # Omnibus test statistic
    .skewness             # Residual skewness
    .kurtosis             # Residual kurtosis
    .condition_number     # Design matrix condition number
```
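As a quick follow-up to the Quick Start example, `predict` evaluates a fitted model on new data with the same number of columns as the training matrix, and the in-sample fit is exposed through properties (the `X_new` array below is purely illustrative):

```python
import numpy as np
from econometrust import OLS

# Fit on synthetic data (same setup as the Quick Start example)
X = np.random.randn(100, 3)
y = X @ [2.0, 1.5, -0.8] + np.random.normal(0, 0.5, 100)

model = OLS(fit_intercept=True)
model.fit(X, y)

# Predict on fresh observations
X_new = np.random.randn(5, 3)
print(model.predict(X_new))

# In-sample fitted values and residuals
print(model.fitted_values[:5])
print(model.residuals[:5])
```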
### GLS Class
```python
class GLS:
    def __init__(self, fit_intercept=True)
    def fit(self, X, y, sigma)   # sigma: error covariance matrix
    def predict(self, X)
    def summary(self)

    # Same properties as OLS
```
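GLS needs the error covariance matrix `sigma` at fit time; constructing it is up to the caller. A minimal sketch, assuming a known AR(1) error correlation `rho` (the covariance construction and simulated data are illustrative, not part of the library):

```python
import numpy as np
from econometrust import GLS

np.random.seed(0)
n = 100
X = np.random.randn(n, 2)

# Assumed AR(1) error correlation: corr(e_i, e_j) = rho ** |i - j|
rho = 0.6
idx = np.arange(n)
sigma = rho ** np.abs(idx[:, None] - idx[None, :])

# Simulate errors consistent with sigma and fit
errors = np.random.multivariate_normal(np.zeros(n), sigma)
y = 1.0 + X @ [1.5, -0.8] + errors

gls = GLS(fit_intercept=True)
gls.fit(X, y, sigma)
print(gls.summary())
```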
### WLS Class
```python
class WLS:
    def __init__(self, fit_intercept=True)
    def fit(self, X, y, weights)   # weights: sample weights
    def predict(self, X)
    def summary(self)

    # Same properties as OLS, plus:
    .weights              # Sample weights used for fitting
```
### IV Class
```python
class IV:
    def __init__(self, fit_intercept=True)
    def fit(self, instruments, regressors, targets)   # exactly identified only
    def predict(self, regressors)
    def summary(self)

    # Properties
    .coefficients             # Coefficient estimates
    .intercept                # Intercept term
    .residuals                # Model residuals
    .r_squared                # R-squared value (can be negative for IV)
    .mse                      # Mean squared error
    .instruments              # Instrumental variables used
    .regressors               # Regressors used for fitting
    .covariance_matrix        # Coefficient covariance matrix

    # Methods
    .standard_errors()        # Standard errors of coefficients
    .t_statistics()           # T-statistics
    .p_values()               # P-values
    .confidence_intervals()   # Confidence intervals
```
### WLS Example
```python
import numpy as np
from econometrust import WLS, OLS
# Generate data with heteroscedastic errors
np.random.seed(42)
n_samples = 100
X = np.random.randn(n_samples, 2)
# Error variance increases with X[:, 0]
error_variance = 0.5 + 2 * np.abs(X[:, 0])
errors = np.random.normal(0, np.sqrt(error_variance))
y = 2.0 + X @ [1.5, -0.8] + errors
# Create weights (inverse of variance for optimal efficiency)
weights = 1.0 / error_variance
# Compare OLS (inefficient) vs WLS (efficient)
ols = OLS(fit_intercept=True)
ols.fit(X, y)
wls = WLS(fit_intercept=True)
wls.fit(X, y, weights)
print(f"OLS R²: {ols.r_squared:.4f}")
print(f"WLS R²: {wls.r_squared:.4f}") # Should be higher
print(f"OLS MSE: {ols.mse:.4f}")
print(f"WLS MSE: {wls.mse:.4f}") # Should be lower
# WLS provides more accurate coefficient estimates
# when heteroscedasticity is properly modeled
```
### IV (Instrumental Variables) Example
```python
import numpy as np
from econometrust import IV, OLS
# Generate data with endogeneity
np.random.seed(42)
n_samples = 200
# Create instruments (must be uncorrelated with error term)
z1 = np.random.randn(n_samples) # First instrument
z2 = np.random.randn(n_samples) # Second instrument
instruments = np.column_stack([z1, z2])
# Create error term
error = np.random.randn(n_samples) * 0.5
# Create endogenous regressors (correlated with error)
x1 = z1 + 0.3 * error + np.random.randn(n_samples) * 0.3 # Endogenous
x2 = z2 + 0.1 * np.random.randn(n_samples) # Exogenous instrument-like
regressors = np.column_stack([x1, x2])
# True coefficients
true_beta = [1.5, -0.8]
y = 2.0 + regressors @ true_beta + error
# Compare OLS (biased due to endogeneity) vs IV (consistent)
ols = OLS(fit_intercept=True)
ols.fit(regressors, y)
# IV estimator (exactly identified: 2 instruments for 2 regressors)
iv = IV(fit_intercept=True)
iv.fit(instruments, regressors, y)
print("=== OLS Results (Biased due to endogeneity) ===")
print(f"Coefficients: {ols.coefficients}")
print(f"R-squared: {ols.r_squared:.4f}")
print("\n=== IV Results (Consistent estimates) ===")
print(f"Coefficients: {iv.coefficients}")
print(f"R-squared: {iv.r_squared:.4f}")
print(f"True coefficients: {true_beta}")
# IV handles endogeneity but requires:
# 1. Strong instruments (correlated with regressors)
# 2. Valid instruments (uncorrelated with error term)
# 3. Exact identification (# instruments = # regressors)
print(iv.summary())
```
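Standard errors, t-statistics, p-values, and confidence intervals for the IV estimates are available through the accessor methods listed in the API reference. A short continuation of the example above, reusing the fitted `iv` object:

```python
# Inference on the IV estimates (continuing from the fitted `iv` above)
se = iv.standard_errors()
t_stats = iv.t_statistics()
p_vals = iv.p_values()
ci = iv.confidence_intervals()

print(f"Standard errors:      {se}")
print(f"t-statistics:         {t_stats}")
print(f"p-values:             {p_vals}")
print(f"Confidence intervals:\n{ci}")
```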
**Important Notes about IV Estimation:**
- **Exactly Identified Only**: This IV implementation requires the number of instruments to equal the number of regressors
- **Instrument Validity**: Instruments must be correlated with the endogenous regressors but uncorrelated with the error term (a quick relevance check is sketched after this list)
- **Higher Variance**: IV estimates typically have higher variance than OLS, requiring larger sample sizes
- **For Overidentified Models**: Use Two-Stage Least Squares (TSLS) when you have more instruments than regressors (planned for future release)
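Instrument strength is not something the library can verify for you, but a rough first-stage relevance check is easy to run yourself: regress each endogenous regressor on the instruments and inspect the fit. A minimal sketch using the library's own OLS, reusing the `instruments` and `regressors` arrays from the IV example above:

```python
import numpy as np
from econometrust import OLS

# First-stage relevance check: each endogenous regressor should be
# well explained by the instruments; a very low R² signals weak instruments.
for j in range(regressors.shape[1]):
    first_stage = OLS(fit_intercept=True)
    first_stage.fit(instruments, np.ascontiguousarray(regressors[:, j]))
    print(f"First-stage R² for regressor {j}: {first_stage.r_squared:.3f}")
```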
## 🏆 Performance
Econometrust leverages Rust's performance while maintaining Python's ease of use:
- **Memory Efficient**: Zero-copy operations with numpy arrays
- **Fast Computations**: Optimized linear algebra operations
- **Parallel Processing**: Multi-threaded where beneficial
- **Robust Numerics**: Handles edge cases gracefully
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📄 License
This project is dual-licensed under either the MIT or the Apache-2.0 license, at your option.
## 🔗 Links
- **Repository**: [https://github.com/wdeligt/econometrust](https://github.com/wdeligt/econometrust)
- **Documentation**: [https://github.com/wdeligt/econometrust#readme](https://github.com/wdeligt/econometrust#readme)
- **Issues**: [https://github.com/wdeligt/econometrust/issues](https://github.com/wdeligt/econometrust/issues)
---
*Built with ❤️ using Rust and PyO3*