# PyELW
PyELW is a Python library for local Whittle (LW) and exact local Whittle (ELW)
estimation of the memory parameter of fractionally integrated time series.
## Installation
```shell
pip install pyelw
```
PyELW requires:
- Python (>= 3.13)
- NumPy (tested with 2.3.2)
You can check out the latest sources with:
```shell
git clone https://github.com/jrblevin/pyelw.git
```
### Quick Start Examples
```python
from pyelw import LW, ELW, TwoStepELW
series = load_data() # Replace with your data loading code
n = len(series) # Length of time series
m = int(n**0.65) # Number of frequencies to use
# Local Whittle (Robinson, 1995)
lw = LW()
result = lw.estimate(series, m=m)
print(f"d_LW = {result['d_hat']}")
# Local Whittle with Hurvich-Chen taper
hc = LW(taper='hc')
result = hc.estimate(series, m=m)
print(f"d_HC = {result['d_hat']}")
# Exact local Whittle (Shimotsu and Phillips, 2005)
elw = ELW()
result = elw.estimate(series, m=m)
print(f"d_ELW = {result['d_hat']}")
# Two-step ELW (Shimotsu, 2010)
elw2s = TwoStepELW()
result = elw2s.estimate(series, m=m, detrend_order=1)
print(f"d_2ELW = {result['d_hat']}")
```
## Citing the Package and Methods
The recommended practice is to cite both the specific method used and the PyELW
package. For example:
> We use the exact local Whittle estimator of Shimotsu and Phillips (2005)
> implemented in the PyELW package (Blevins, 2025).
See the references section below for full citations for each of the methods and
the PyELW package. Here is a BibTeX entry for the PyELW paper:
```bibtex
@TechReport{pyelw,
  title       = {{PyELW}: Exact Local {Whittle} Estimation for Long Memory Time Series in Python},
  author      = {Jason R. Blevins},
  institution = {The Ohio State University},
  year        = 2025,
  type        = {Working Paper}
}
```
## Methods Implemented
- `LW` - Untapered and tapered local Whittle estimators
  - Untapered local Whittle estimator of Robinson (1995)
    (`taper='none'`, default)
  - Tapered local Whittle estimators of Velasco (1999)
    (`taper='kolmogorov'`, `taper='cosine'`, or `taper='bartlett'`)
  - Complex tapered local Whittle estimator of Hurvich and Chen (2000)
    (`taper='hc'`)
- `ELW` - Exact local Whittle estimator of Shimotsu and Phillips (2005).
- `TwoStepELW` - Two-step exact local Whittle estimator of Shimotsu (2010).
Each of these classes provides an `estimate()` method which takes
the data (a NumPy ndarray) and, optionally, the number of frequencies to use.
See the PyELW paper or the examples below for details.
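For intuition, the untapered local Whittle objective of Robinson (1995) can be
sketched in a few lines of NumPy. This is a simplified reference implementation
for illustration only (it assumes SciPy for the one-dimensional minimization and
is not PyELW's actual code; `lw_sketch` is our own name):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lw_sketch(x, m):
    """Minimal untapered local Whittle estimate of d (illustration only)."""
    n = len(x)
    w = np.fft.fft(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # Fourier frequencies
    I = np.abs(w[1:m + 1]) ** 2 / (2 * np.pi * n)  # periodogram ordinates

    def objective(d):
        # Robinson's concentrated objective R(d)
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    return minimize_scalar(objective, bounds=(-0.5, 1.0), method='bounded').x

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)         # white noise, so true d = 0
print(lw_sketch(x, int(2000**0.65)))  # estimate should be near 0
```

The estimator simply minimizes the concentrated objective over d, using the
periodogram at the first m Fourier frequencies.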
### LW Taper Options
By default the `LW` estimator implements the standard (untapered) estimator of
Robinson (1995). However, it also supports several taper options.
You can specify the taper at initialization:
```python
from pyelw import LW
import numpy as np
# Sample data
x = np.random.randn(500)
# Standard untapered local Whittle (Robinson, 1995) - default
lw = LW()
result = lw.estimate(x)
# Kolmogorov taper (Velasco, 1999)
lw_kol = LW(taper='kolmogorov')
result = lw_kol.estimate(x)
# Cosine bell taper (Velasco, 1999)
lw_cos = LW(taper='cosine')
result = lw_cos.estimate(x)
# Triangular Bartlett window taper (Velasco, 1999)
lw_bart = LW(taper='bartlett')
result = lw_bart.estimate(x)
# Hurvich-Chen complex taper (Hurvich and Chen, 2000)
# Note: diff parameter specifies number of times to difference the data
lw_hc = LW(taper='hc')
result = lw_hc.estimate(x, diff=1)
```
You can also override the taper on a per-call basis:
```python
# LW instance with default taper
lw = LW(taper='hc')
# Use the default taper
result = lw.estimate(x)
# Override with a different taper for this call
result = lw.estimate(x, taper='none')
```
### Helper Functions
The library also includes the following helper functions which may be useful:
- `fracdiff` - Fast O(n log n) fractional differencing, following Jensen and Nielsen (2014).
- `arfima` - Simulation of ARFIMA(1,d,0) processes, including ARFIMA(0,d,0) as a special case.
#### Fractional Differencing
```python
from pyelw.fracdiff import fracdiff
import numpy as np
# Generate sample data
x = np.random.randn(100)
# Apply fractional differencing with d=0.3
dx = fracdiff(x, 0.3)
```
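For intuition, the truncated expansion behind fractional differencing can be
written directly with the binomial weight recursion. The O(n^2) function below
(`fracdiff_naive`, our own name for this sketch) is a reference version of the
idea, not PyELW's FFT-based implementation:

```python
import numpy as np

def fracdiff_naive(x, d):
    """Truncated fractional difference (1-L)^d x via the pi-weight recursion."""
    n = len(x)
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k  # pi_k = (-1)^k C(d, k)
    return np.array([np.dot(pi[:t + 1], x[t::-1]) for t in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
dx = fracdiff_naive(x, 0.3)        # difference with d = 0.3
x_back = fracdiff_naive(dx, -0.3)  # integrate with d = -0.3
print(np.allclose(x, x_back))      # True: differencing with -d inverts d
```

Because the pi weights of orders d and -d convolve to the identity, the
truncated round trip recovers the original series up to floating-point error.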
#### ARFIMA Simulation
```python
from pyelw.simulate import arfima
# Simulate ARFIMA(1,0.4,0) with phi=0.5
data = arfima(n=1000, d=0.4, phi=0.5, sigma=1.0, seed=123)
```
## Examples
### Example 1: Nile River Level Data
The following example uses Pandas to load a CSV dataset containing
observations on the level of the Nile river and estimates d via LW and ELW.
```python
import pandas as pd
from pyelw import LW, ELW
# Load time series from 'nile' column of data/nile.csv
df = pd.read_csv('data/nile.csv')
nile = pd.to_numeric(df['nile']).values
print(f"Loaded {len(nile)} observations")
# Estimate d using local Whittle estimator
# Use default number of frequencies m
lw = LW()
result = lw.estimate(nile)
print(f"LW estimate of d: {result['d_hat']}")
# Estimate d using exact local Whittle estimator
# Use default number of frequencies m
elw = ELW()
result = elw.estimate(nile)
# Print results
print(f"ELW estimate of d: {result['d_hat']}")
```
Output:
```
Loaded 663 observations
LW estimate of d: 0.40904431255796164
ELW estimate of d: 0.8859171843580865
```
### Example 2: ARFIMA(0,d,0) Process
Here we simulate an ARFIMA(0, 0.3, 0) process and use the simulated data to
estimate d via ELW.
```python
from pyelw import ELW
from pyelw.simulate import arfima
# Set simulation parameters
n = 5000 # Sample size
d_true = 0.3 # True memory parameter
sigma = 1.0 # Innovation standard deviation
seed = 42 # Random seed
# Simulate ARFIMA(0,d,0) process
print(f"Simulating ARFIMA(0,{d_true},0) with n={n} observations...")
x = arfima(n, d_true, sigma=sigma, seed=seed)
# Initialize ELW estimator
elw = ELW()
# Estimate the memory parameter
# Use m = n^0.65 frequencies
m = int(n**0.65)
result = elw.estimate(x, m=m)
# Display results
print(f"True d: {d_true}")
print(f"Estimated d: {result['d_hat']:.4f}")
print(f"Standard error: {result['se']:.4f}")
print(f"Estimation error: {abs(result['d_hat'] - d_true):.4f}")
# 95% confidence interval
ci_lower = result['d_hat'] - 1.96 * result['se']
ci_upper = result['d_hat'] + 1.96 * result['se']
print(f"95% CI: [{ci_lower:.4f}, {ci_upper:.4f}]")
```
Output:
```
Simulating ARFIMA(0,0.3,0) with n=5000 observations...
True d: 0.3
Estimated d: 0.3315
Standard error: 0.0318
Estimation error: 0.0315
95% CI: [0.2692, 0.3939]
```
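The reported standard error lines up with the usual first-order asymptotics for
local Whittle estimators, under which sqrt(m) (d_hat - d) is approximately
N(0, 1/4), so se is roughly 1/(2 sqrt(m)). A quick back-of-the-envelope check
(the small gap from the reported 0.0318 is presumably due to implementation
details we have not verified):

```python
import numpy as np

n = 5000
m = int(n**0.65)               # 253 frequencies
se_asym = 1 / (2 * np.sqrt(m))
print(m, round(se_asym, 4))    # 253 0.0314
```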
### Example 3: Real GDP Data from FRED
Here we download real GDP data from FRED using `pandas_datareader` and
estimate d via two-step ELW:
```python
import numpy as np
import pandas_datareader as pdr
from pyelw import TwoStepELW
# Download real GDP from FRED
print("Downloading real GDP data from FRED...")
series = pdr.get_data_fred('GDPC1', start='1950-01-01', end='2024-12-31')
gdp_data = series.dropna()
gdp = gdp_data.values.flatten()
print(f"Downloaded {len(gdp)} observations")
# Take natural logarithm for growth rate interpretation
log_gdp = np.log(gdp)
print("Using log(real GDP) for analysis")
# Initialize Two-Step ELW estimator
estimator = TwoStepELW()
# Choose bandwidth/number of frequencies
n = len(log_gdp)
m = int(n**0.65)
# Estimate d via Two-Step ELW
print("\nEstimating long memory parameter...")
print(f"Sample size: {n}")
print(f"Number of frequencies: {m}")
result = estimator.estimate(log_gdp, m=m, detrend_order=1, verbose=True)
# Display results
print("\nTwo-Step ELW Results:")
print(f"Estimated d: {result['d_hat']:.4f}")
print(f"Standard error: {result['se']:.4f}")
ci_lower = result['d_hat'] - 1.96 * result['se']
ci_upper = result['d_hat'] + 1.96 * result['se']
print(f"95% CI: [{ci_lower:.4f}, {ci_upper:.4f}]")
```
Output:
```
Downloading real GDP data from FRED...
Downloaded 300 observations
Using log(real GDP) for analysis
Estimating long memory parameter...
Sample size: 300
Number of frequencies: 40
Detrending with polynomial order 1
Using 40 frequencies for both stages
Stage 1: hc tapered LW estimation
Stage 1 estimate: d = 1.0427
Stage 2: Exact local whittle estimation
Starting from Stage 1: d = 1.042677
Final estimate: d = 1.0096
Two-Step ELW Results:
Estimated d: 1.0096
Standard error: 0.0791
95% CI: [0.8547, 1.1646]
```
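Note that the 95% CI includes d = 1. Using the reported point estimate and
standard error, a simple Wald check (our own arithmetic, not part of PyELW)
fails to reject a unit root in log real GDP:

```python
d_hat, se = 1.0096, 0.0791  # values reported above
z = (d_hat - 1.0) / se      # Wald statistic for H0: d = 1
print(round(z, 2))          # 0.12, far below 1.96
```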
## Summary of Included Replications
| Filename | Paper | Reference | Estimators | Description |
|-------------------------------------|------------------------------|--------------|-------------------------|--------------------------------------------------------|
| `hurvich_chen_table_1.py` | Hurvich and Chen (2000) | Table I | `LW('hc')` | Monte Carlo with simulated ARFIMA(1,d,0) data. |
| `hurvich_chen_table_1.R` | Hurvich and Chen (2000) | Table I | `LW('hc')` | R version of above, demonstrating corrected code. |
| `hurvich_chen_table_3.py` | Hurvich and Chen (2000) | Table III | `LW('hc')` | Application to IMF International Financial Statistics. |
| `shimotsu_phillips_2005_table_1.py` | Shimotsu and Phillips (2005) | Table 1 | `LW`, `ELW` | Monte Carlo with LW and ELW using ARFIMA(1,d,0) data. |
| `shimotsu_phillips_2005_table_2.py` | Shimotsu and Phillips (2005) | Table 2 | `LW('hc', 'bartlett')` | Monte Carlo with tapered LW estimators. |
| `shimotsu_2010_table_2.py` | Shimotsu (2010) | Table 2 | `TwoStepELW` | ELW Monte Carlo with ARFIMA(1,d,0) data. |
| `shimotsu_2010_table_8.py` | Shimotsu (2010) | Table 8 | `TwoStepELW` | Application to extended Nelson and Plosser data. |
| `baum_hurn_lindsay.py` | Baum, Hurn, and Lindsay (2020) | pp. 576-579 | `LW`, `ELW` | Application to Nile river and sea level data. |
## Unit Tests
A comprehensive `pytest` unit test suite with over 2,400 parametrized tests is
included. To run the tests, first install the additional test dependencies,
then run `pytest`:
```shell
pip install -r requirements-test.txt
pytest
```
## References
* Blevins, J.R. (2025).
[PyELW: Exact Local Whittle Estimation for Long Memory Time Series in Python](https://jblevins.org/research/pyelw).
Working Paper, The Ohio State University.
* Hurvich, C. M., and W. W. Chen (2000). An Efficient Taper for Potentially
Overdifferenced Long-Memory Time Series. _Journal of Time Series Analysis_
21, 155--180.
* Jensen, A. N., and M. Ø. Nielsen (2014). A Fast Fractional Difference
  Algorithm. _Journal of Time Series Analysis_ 35, 428--436.
* Robinson, P. M. (1995). Gaussian Semiparametric Estimation of Long
Range Dependence. _Annals of Statistics_ 23, 1630--1661.
* Shimotsu, K. (2010). Exact Local Whittle Estimation of Fractional
Integration with Unknown Mean and Time Trend. _Econometric Theory_ 26,
501--540.
* Shimotsu, K. and Phillips, P.C.B. (2005). Exact Local Whittle Estimation
of Fractional Integration. _Annals of Statistics_ 33, 1890--1933.
* Velasco, C. (1999). Gaussian Semiparametric Estimation for Non-Stationary
Time Series. _Journal of Time Series Analysis_ 20, 87--126.