evorl

Name: evorl
Version: 2.0.0
Home page: https://github.com/zhangalex1/evorl
Summary: An evolutionary reinforcement learning framework
Author: Alex Zhang
Maintainer: None
Requires Python: >=3.8
License: None
Upload time: 2025-01-09 11:39:43
Keywords: reinforcement-learning, evolutionary-algorithms, deep-learning, pytorch, rl

# EvoRL

A framework for evolutionary reinforcement learning, combining evolutionary algorithms with deep RL to train populations of agents.

[![Website](https://img.shields.io/badge/Website-evorl.ai-blue)](https://evorl.ai)
[![Twitter](https://img.shields.io/badge/Twitter-@ReinforceEvo-blue)](https://x.com/ReinforceEvo)
[![PyPI version](https://badge.fury.io/py/evorl.svg)](https://badge.fury.io/py/evorl)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Features

- 🧬 Evolutionary optimization of RL agents
- 🤖 Multiple agent types (DQN, PPO)
- 🔄 Various evolution strategies (CEM, PGPE, NES)
- 📊 Environment normalization and preprocessing
- 🚀 Easy to extend and customize

## Installation

```bash
pip install evorl
```

For development installation with additional tools:
```bash
pip install "evorl[dev]"
```
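
To verify the installation (a minimal check; `__version__` is an assumption and may not be exported by the package):
```python
# The import should succeed after `pip install evorl`.
# getattr guards against the package not defining __version__.
import evorl

print(getattr(evorl, "__version__", "version attribute not exported"))
```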

## Quick Start

### Basic Usage
```python
from evorl import DQNAgent, NormalizedEnv
import gymnasium as gym

# Create environment
env = NormalizedEnv(gym.make("CartPole-v1"))

# Create and train a single agent
agent = DQNAgent(
    state_dim=env.observation_space.shape[0],
    action_dim=env.action_space.n
)

# Training loop
episodes = 100
for episode in range(episodes):
    obs, _ = env.reset()
    done = False
    total_reward = 0

    while not done:
        action = agent.select_action(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        agent.update((obs, action, reward, next_obs, done))
        total_reward += reward
        obs = next_obs

    print(f"Episode {episode}: Reward = {total_reward}")
```
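
Per-episode rewards on CartPole are noisy, so a moving average makes progress easier to read. Here is a variant of the loop above that tracks a 10-episode rolling mean using only the standard library (the `agent` and `env` APIs are exactly those shown):
```python
from collections import deque

recent = deque(maxlen=10)  # rolling window of the last 10 episode rewards

episodes = 100
for episode in range(episodes):
    obs, _ = env.reset()
    done = False
    total_reward = 0

    while not done:
        action = agent.select_action(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        agent.update((obs, action, reward, next_obs, done))
        total_reward += reward
        obs = next_obs

    recent.append(total_reward)
    print(f"Episode {episode}: Reward = {total_reward}, "
          f"10-episode mean = {sum(recent) / len(recent):.1f}")
```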

### Evolutionary Training
```python
from evorl import DQNAgent, Population, CEM

# Create a population of agents (reuses `env` from the Basic Usage example)
population = Population(
    agent_class=DQNAgent,
    state_dim=env.observation_space.shape[0],
    action_dim=env.action_space.n,
    population_size=10
)

# Create evolution strategy
strategy = CEM(elite_frac=0.2)

# Evolution loop
generations = 20
for generation in range(generations):
    # Evaluate population
    metrics = population.evaluate(env, n_episodes=3)
    print(f"Generation {generation}: Mean Fitness = {metrics['mean_fitness']:.2f}")

    # Create next generation
    updates = strategy.compute_updates(population.population, population.fitness_scores)
    population.apply_updates(updates)
```
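
After the loop finishes, you typically want the fittest individual. Assuming `population.population` and `population.fitness_scores` are index-aligned (as the `compute_updates` call above suggests), a minimal sketch:
```python
# Assumption: fitness_scores[i] is the score of population[i].
best_idx = max(range(len(population.fitness_scores)),
               key=lambda i: population.fitness_scores[i])
best_agent = population.population[best_idx]
print(f"Best fitness: {population.fitness_scores[best_idx]:.2f}")
```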

## Documentation

For detailed documentation, visit [evorl.ai](https://evorl.ai).

## Available Components

### Agents
- `DQNAgent`: Deep Q-Network implementation
- `PPOAgent`: Proximal Policy Optimization implementation (see the sketch below)
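
A hypothetical `PPOAgent` construction, assuming its constructor mirrors `DQNAgent`'s (the actual signature may differ):
```python
# Assumption: PPOAgent takes the same state_dim/action_dim arguments as DQNAgent.
from evorl import PPOAgent

agent = PPOAgent(
    state_dim=env.observation_space.shape[0],
    action_dim=env.action_space.n,
)
```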

### Evolution Strategies
- `CEM`: Cross-Entropy Method
- `PGPE`: Policy Gradients with Parameter Exploration
- `NES`: Natural Evolution Strategies (see the swap-in sketch below)
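
The strategies appear to be drop-in replacements in the evolution loop above. A sketch assuming each constructor accepts defaults and shares `CEM`'s interface (real signatures may require arguments):
```python
# Assumption: PGPE and NES can be constructed with defaults and share
# CEM's compute_updates/apply_updates interface; actual signatures may differ.
from evorl import PGPE, NES

strategy = PGPE()  # or NES()
updates = strategy.compute_updates(population.population, population.fitness_scores)
population.apply_updates(updates)
```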

### Environment Wrappers
- `NormalizedEnv`: Observation and reward normalization

## Development

```bash
# Clone the repository
git clone https://github.com/zhangalex1/evorl.git
cd evorl

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest tests/

# Run with coverage
pytest tests/ --cov=evorl
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

MIT License - see [LICENSE](LICENSE) for details

            
