ctx-bandits-mcmc

- Name: ctx-bandits-mcmc
- Version: 1.0.1 (PyPI)
- Home page: https://github.com/SarahLiaw/ctx-bandits-mcmc-showdown
- Summary: Feel-Good Thompson Sampling for Contextual Bandits: a Markov Chain Monte Carlo Showdown
- Upload time: 2025-10-23 20:41:22
- Author: Emile Anand, Sarah Liaw
- Requires Python: >=3.8
- License: MIT
- Keywords: thompson-sampling, contextual-bandits, mcmc, reinforcement-learning, bayesian
- Requirements: annotated-types, certifi, charset-normalizer, click, docker-pycreds, filelock, fsspec, gitdb, GitPython, idna, Jinja2, MarkupSafe, mpmath, networkx, numpy, matplotlib, pandas, pytest, scipy, platformdirs, protobuf, psutil, pydantic, pydantic_core, PyYAML, requests, scikit-learn, sentry-sdk, setproctitle, six, smmap, sympy, torch, tqdm, typing-inspection, typing_extensions, urllib3, wandb, yfinance, xlrd
- CI / coverage: no Travis CI; no Coveralls
# Feel-Good Thompson Sampling for Contextual Bandits: a Markov Chain Monte Carlo Showdown

This repository implements various MCMC-based contextual bandit algorithms.

## Features

- **Algorithms**:
  - Langevin Monte Carlo (LMC)
  - Underdamped Langevin Monte Carlo (ULMC)
  - Metropolis-Adjusted Langevin Algorithm (MALA)
  - Hamiltonian Monte Carlo (HMC)
  - Epsilon-Greedy
  - Upper-Confidence-Bound (UCB)
  - Neural Thompson Sampling (NTS)
  - Linear Thompson Sampling (LTS)
  - Neural Upper-Confidence-Bound (NUCB)
  - Neural Greedy (NG)
  - And numerous variants with Feel-Good and smoothed Feel-Good exploration terms

- **Environments**:
  - Linear bandits
  - Logistic bandits
  - Wheel bandit problem
  - Neural bandits

## Installation

### Option 1: Install from PyPI (Recommended)

```bash
pip install ctx-bandits-mcmc
```

With optional dependencies:
```bash
# For development tools
pip install ctx-bandits-mcmc[dev]

# For neural bandit experiments  
pip install ctx-bandits-mcmc[neural]

# All optional dependencies
pip install ctx-bandits-mcmc[dev,neural]
```

### Option 2: Install from Source

```bash
# Clone the repository
git clone https://github.com/SarahLiaw/ctx-bandits-mcmc-showdown.git
cd ctx-bandits-mcmc-showdown

# Install the package
pip install .

# Or install in editable mode for development
pip install -e .[dev]
```

### Option 3: Install from GitHub

```bash
pip install git+https://github.com/SarahLiaw/ctx-bandits-mcmc-showdown.git
```

For detailed installation instructions, platform-specific notes, and troubleshooting, see [INSTALL.md](INSTALL.md).

## Quick Start

### Running Linear Bandit Experiments

To run a linear bandit experiment with the LMC-TS agent:

```bash
python3 run.py --config_path config/linear/lmcts.json
```

### Running Wheel Bandit Experiments

To run the wheel bandit experiment with the ULMC agent:

```bash
python3 run_all_wheel_agents.py --agents ulmc --num_trials 1
```

### Batch Running Multiple Experiments

To run multiple experiments with different seeds:

```bash
python3 run_linear_batch.py --n_seeds 5
```

## Configuration

Configuration files are stored in the `config/` directory, organized by environment type (linear, logistic, wheel, neural). Each agent has its own configuration file with hyperparameters.
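The configuration files are plain JSON. A minimal sketch of loading one the way a driver script might, using hypothetical keys (the actual fields in `config/linear/lmcts.json` may differ):

```python
import json
from pathlib import Path

# Hypothetical example config; the real files may use different keys.
example_config = {
    "agent": "LMCTS",
    "num_arms": 10,
    "context_dim": 20,
    "step_size": 0.01,
    "num_mcmc_steps": 50,
    "T": 2000,
}

def load_config(path):
    """Read a JSON config file and return it as a dict of hyperparameters."""
    with open(path) as f:
        return json.load(f)

# Round-trip through disk to mimic passing --config_path to run.py.
Path("example_lmcts.json").write_text(json.dumps(example_config, indent=2))
cfg = load_config("example_lmcts.json")
print(cfg["agent"])
```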

## Results

Results are saved in the `results/` directory by default. The directory structure is:

```
results/
  ├── linear/
  ├── logistic/
  ├── wheel/
  └── neural/
```

## Posterior Distribution Quality Analysis

### Overview

The `posterior_analysis.py` script provides a controlled comparison of MCMC algorithm posterior approximations against the true analytical Bayesian posterior. This analysis isolates **approximation quality** from **exploration strategy** by running all algorithms on identical data.

**Key insight:** Thompson Sampling theory assumes sampling from the true posterior π*_t. This tool quantifies how well MCMC approximations π̃_t match π*_t.

### Quick Start

Run posterior analysis with default algorithms:

```bash
python posterior_analysis.py
```

This will:
- Generate fixed synthetic data (6 arms, 20 dimensions, 2000 timesteps)
- Run LinTS, LMCTS, FGLMCTS, MALATS, and PLMCTS on identical data
- Compute true analytical posteriors for each arm
- Create 2D scatter plots comparing true (green) vs. algorithm (red) posteriors
- Calculate Wasserstein distances quantifying approximation quality
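As a rough illustration of the last step (not necessarily the script's exact metric), a scalar distance between two posterior sample clouds can be formed by averaging the 1-D Wasserstein distance over coordinates with SciPy:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the true vs. MCMC posterior samples (1500 x d).
true_samples = rng.normal(loc=0.0, scale=1.0, size=(1500, 2))
mcmc_samples = rng.normal(loc=0.3, scale=1.0, size=(1500, 2))

# Average the per-coordinate 1-D Wasserstein distances into one scalar.
per_dim = [
    wasserstein_distance(true_samples[:, j], mcmc_samples[:, j])
    for j in range(true_samples.shape[1])
]
mean_w = float(np.mean(per_dim))
print(round(mean_w, 3))
```

With a pure mean shift of 0.3 and matched scales, the averaged distance should land near 0.3.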

### Customization

**Select specific algorithms:**
```bash
python posterior_analysis.py --algorithms LinTS LMCTS MALATS
```

**Change random seed:**
```bash
python posterior_analysis.py --seed 42
```

**Available algorithms:**
- `LinTS` - Analytical Thompson Sampling (baseline)
- `LMCTS` - Langevin Monte Carlo TS
- `FGLMCTS` - Feel-Good LMC-TS
- `SFGLMCTS` - Smoothed Feel-Good LMC-TS
- `MALATS` - Metropolis-Adjusted Langevin
- `FGMALATS` - Feel-Good MALA-TS
- `SFGMALATS` - Smoothed Feel-Good MALA-TS
- `PLMCTS` - Preconditioned LMC-TS
- `PFGLMCTS` - Preconditioned Feel-Good LMC-TS
- `PSFGLMCTS` - Preconditioned Smoothed Feel-Good LMC-TS
- `HMCTS` - Hamiltonian Monte Carlo TS
- `FGHMCTS`, `SFGHMCTS`, `PHMCTS`, `PFGHMCTS`, `PSFGHMCTS` - HMC variants

### Configuration

Edit parameters in `posterior_analysis.py` (lines 31-38):

```python
K_ARMS = 6                      # Number of arms
D_DIM = 20                      # Context dimension
LAMBDA_PRIOR = 1.0              # Prior precision
SIGMA_REWARD = 0.5              # Reward noise
T_HORIZON = 2000                # Time horizon
N_POSTERIOR_SAMPLES = 1500      # Samples for visualization
ETA = 1.0                       # Inverse temperature
CORRELATED_CONTEXTS = True      # True: elliptical, False: circular posteriors
```

### Output Structure

Results are saved in `posterior_analysis_YYYYMMDD_HHMMSS/`:

```
posterior_analysis_20251023_100530/
├── synthetic_data.pt                    # Fixed data used by all algorithms
├── results.json                         # Wasserstein distances and play counts
├── LinTS_posterior_comparison.png       # 1×6 grid: true (green) vs. alg (red)
├── LMCTS_posterior_comparison.png
├── MALATS_posterior_comparison.png
└── ...
```

### Interpreting Results

**Visualization (PNG files):**
- **1×6 grid**: One subplot per arm showing β₁ vs β₂ projection
- **Green scatter**: 1500 samples from true analytical posterior
- **Red scatter**: 1500 samples from algorithm's posterior
- **Good approximation**: Red and green overlap
- **Poor approximation**: Red shifted or wrong shape
- **Under-exploration**: Sparse or missing red samples

**Metrics (results.json):**
```json
{
  "LMCTS": {
    "wasserstein_distances": [0.23, 0.34, 0.18, 0.56, 0.29, 1.23],
    "mean_wasserstein": 0.468,
    "num_plays_per_arm": [423, 312, 589, 245, 389, 42]
  }
}
```

- **Wasserstein distance < 0.3**: Excellent approximation
- **0.3 < W < 0.7**: Good approximation
- **W > 1.0**: Poor approximation
- **W = NaN**: Arm never played
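When summarizing `results.json` programmatically, never-played arms (NaN entries) should be filtered out before averaging. A hypothetical snippet mirroring the structure shown above:

```python
import math

# Hypothetical results entry with one never-played arm (NaN distance).
entry = {"wasserstein_distances": [0.23, float("nan"), 0.18]}

# Keep only finite distances, i.e. arms that were actually played.
finite = [w for w in entry["wasserstein_distances"] if math.isfinite(w)]
mean_w = sum(finite) / len(finite)
print(round(mean_w, 3))  # averages only the two played arms
```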

### Testing

Run unit tests to verify core functionality:

```bash
python -m pytest test_posterior_analysis.py -v
```

Or with unittest:

```bash
python test_posterior_analysis.py
```

Tests cover:
- Data generation (correlated/uncorrelated)
- True posterior computation
- Feature map correctness
- Sampling procedures
- Wasserstein distance calculation
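A self-contained example of the kind of property such a test might assert (a hypothetical check, not copied from `test_posterior_analysis.py`): the Wasserstein distance between a sample set and itself must be zero.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def test_wasserstein_self_distance_is_zero():
    # The 1-D Wasserstein distance of a distribution to itself is exactly 0.
    samples = np.array([0.1, 0.5, -0.3, 1.2])
    assert wasserstein_distance(samples, samples) == 0.0

test_wasserstein_self_distance_is_zero()
print("ok")
```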

### Use Cases

1. **Algorithm Development**: Verify new MCMC variants maintain accurate posteriors
2. **Hyperparameter Tuning**: Check if step sizes / burn-in periods affect approximation quality
3. **Failure Mode Diagnosis**: Distinguish under-exploration from poor MCMC convergence
4. **Computational Trade-offs**: Evaluate if preconditioned methods justify extra cost

### Mathematical Background

For linear bandits with reward model r_t = X_t^T β_i + ε_t, the posterior is:

```
Prior:      β_i ~ N(0, λ^-1 I)
Posterior:  β_i | D_t ~ N(μ_post, Σ_post)

Σ_post^-1 = λI + (η/σ²) Σ_{s≤t} X_s X_s^T
μ_post    = Σ_post · (η/σ²) · Σ_{s≤t} X_s r_s
```

This closed-form solution serves as ground truth. MCMC algorithms approximate this through sampling procedures (Langevin, MALA, HMC).
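The closed-form posterior above can be computed directly in NumPy. A minimal sketch on synthetic data for a single arm, where `lam`, `sigma`, and `eta` mirror `LAMBDA_PRIOR`, `SIGMA_REWARD`, and `ETA`:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 500
lam, sigma, eta = 1.0, 0.5, 1.0

beta_true = rng.normal(size=d)                   # ground-truth parameter
X = rng.normal(size=(n, d))                      # contexts X_s
r = X @ beta_true + sigma * rng.normal(size=n)   # rewards r_s

# Closed-form Gaussian posterior from the equations above.
precision = lam * np.eye(d) + (eta / sigma**2) * X.T @ X   # Σ_post^-1
Sigma_post = np.linalg.inv(precision)
mu_post = Sigma_post @ ((eta / sigma**2) * X.T @ r)

# With n = 500 observations the posterior concentrates near beta_true.
print(bool(np.allclose(mu_post, beta_true, atol=0.2)))
```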

## Weights & Biases Integration

The code is integrated with Weights & Biases for experiment tracking. To use it:

1. Install wandb: `pip install wandb`
2. Log in: `wandb login`
3. Run your experiments; results will be logged to your W&B account

## Testing

### Running Tests

We provide comprehensive unit tests for the posterior analysis functionality:

```bash
# Quick test run
make test

# Verbose output
make test-verbose

# Skip slow tests
make test-quick
```

Or directly with pytest:
```bash
pytest test_posterior_analysis.py -v
```

### Test Coverage

Tests validate:
- ✅ Data generation (correlated/uncorrelated contexts)
- ✅ True posterior computation (Bayesian linear regression)
- ✅ Block diagonal feature maps
- ✅ Posterior sampling procedures
- ✅ Wasserstein distance calculation
- ✅ Complete pipeline integration

See [TESTING.md](TESTING.md) for detailed testing documentation.

## Development Workflow

### Using Make Commands

```bash
make install              # Install dependencies
make test                 # Run tests
make posterior-analysis   # Run posterior analysis
make build                # Build distribution packages
make upload-test          # Upload to TestPyPI
make update-version       # Update package version (interactive)
make clean                # Clean generated files
```

### Updating Package Version

To release a new version:

```bash
# Automated (recommended)
make update-version

# Or manual: update version in setup.py, pyproject.toml, src/__init__.py
# Then: make clean-all && make build && make upload
```

See [VERSION_UPDATE_GUIDE.md](VERSION_UPDATE_GUIDE.md) for complete instructions.

### Adding New Agents

To add a new agent:

1. Create a new class in `src/MCMC.py` inheriting from the base agent class
2. Implement the required methods (`choose_arm`, `update`, etc.)
3. Add the agent to the `format_agent` function in `run.py`
4. Create a configuration file in the appropriate `config/` subdirectory
5. Add tests if implementing new sampling mechanisms

## Citation

If you use this code in your research, please consider citing our paper:

```bibtex
@article{anand2025feelgoodthompsonsamplingcontextual,
  title={Feel-Good Thompson Sampling for Contextual Bandits: a Markov Chain Monte Carlo Showdown},
  author={Emile Anand and Sarah Liaw},
  year={2025},
  eprint={2507.15290},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.15290},
}
```

            
