# novel-swarms

- Name: novel-swarms
- Version: 0.1.9a0
- Summary: A Swarm Simulation Package
- Author: Connor Mattson <c.mattson@utah.edu>
- Homepage: https://github.com/Connor-Mattson/RobotSwarmSimulator
- Requires Python: >=3.7
- Upload time: 2023-06-13 18:44:31
# Novel Behavior Discovery in Computation-Free Swarms
Contributors: Connor Mattson, Jeremy Clark, Daniel S. Brown

## Required Software
- Python & Pip
- External Python Packages as defined in [requirements.txt](requirements.txt) 

## Setup
Install Python Packages
    
    pip install -r requirements.txt

Test Simulation

    python -m demo.simulation.cyclic_pursuit

Test Evolution (Novelty Search); this will take a long time to run.

    python -m demo.evolution.novelty_search

You're good to go!

[//]: # (## Use as Package)

[//]: # (You can now install this repository as a pip package under the name novel_swarms==0.0.1)

[//]: # ()
[//]: # (    pip install --upgrade git+ssh://git@github.com/Connor-Mattson/NovelSwarmBehavior.git@master)

## Demos

### Simulation

All 6 emergent behaviors defined in Brown et al. are available for simulation from the command line.

    python -m demo.simulation.cyclic_pursuit
    python -m demo.simulation.aggregation
    python -m demo.simulation.dispersal
    python -m demo.simulation.milling
    python -m demo.simulation.wall_following
    python -m demo.simulation.random

To alter world, agent, and sensor settings, modify the configurations in the [Simulation Playground](demo/simulation/playground.py).

    # Edit /demo/simulation/playground.py first
    python -m demo.simulation.playground
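
For instance, an edit might swap in a smaller world and swarm. The following is a minimal sketch built from the configuration interfaces shown in the Configuration section below, not the literal contents of playground.py:

    from src.config.defaults import ConfigurationDefaults
    from src.config.WorldConfig import RectangularWorldConfig

    # Start from the default differential-drive agent and behavior vector.
    agent_config = ConfigurationDefaults.DIFF_DRIVE_AGENT
    phenotype = ConfigurationDefaults.BEHAVIOR_VECTOR

    # Shrink the world and the swarm before running the playground.
    world_config = RectangularWorldConfig(
        size=(400, 400),
        n_agents=12,
        behavior=phenotype,
        agentConfig=agent_config,
        padding=15
    )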

### Evolution

Use the following command to replicate the results shown in Brown et al.

    python -m demo.evolution.novelty_search

If you want to modify the parameters for evolution, use the [Evolution Playground](demo/evolution/playground.py).

    # Edit /demo/evolution/playground.py first
    python -m demo.evolution.playground

Evolving behaviors takes a long time, especially as the number of agents and the simulation lifespan increase.
To save results to the [Output Folder](out/), set `save_archive=True` on the `GeneticEvolutionConfig` instantiated in the evolution playground.

    GeneticEvolutionConfig(
        ...
        save_archive=True
    )

The resulting genotype (controller archive) and phenotype (behavior vector archive) files are 
saved to the output folder with the names `geno_g{genome_length}_gen{n_generations}_pop{population_size}_{timestamp}.csv` and `pheno_g{genome_length}_gen{n_generations}_pop{population_size}_{timestamp}.csv`.
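
Because the file names encode the run parameters, you can locate a run's archives programmatically. Below is a minimal sketch using only the Python standard library, assuming archives were saved to /out with the naming scheme above (lexicographic sorting treats the trailing timestamps as newest-last):

    import glob
    import os

    # Find the newest genotype/phenotype archive pair in /out, relying on
    # the geno_*/pheno_* naming scheme described above.
    geno_files = sorted(glob.glob(os.path.join("out", "geno_g*_gen*_pop*_*.csv")))
    pheno_files = sorted(glob.glob(os.path.join("out", "pheno_g*_gen*_pop*_*.csv")))
    if geno_files and pheno_files:
        print("Latest genotype archive: ", geno_files[-1])
        print("Latest phenotype archive:", pheno_files[-1])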

### Results
If you have results saved to /out (see the section above), modify [/demo/results/results_from_file.py](/demo/results/results_from_file.py) with the paths to your files (relative to /out).

    archive = NoveltyArchive(
        pheno_file="PHENOTYPE_FILE",    # Replace with your file
        geno_file="GENOTYPE_FILE"       # Replace with your file
    )

Then run

    python -m demo.results.results_from_file

This lets you explore the reduced behavior space generated by an earlier evolution run.
You can also use your pheno and geno files to plot behaviors/controllers over time, since all entries are saved to the archives in order; a sketch follows below.
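
For example, you can plot a single behavior dimension across the archive. This sketch assumes the pheno file holds one comma-separated behavior vector per row; what each column means depends on your behavior configuration:

    import csv
    import matplotlib.pyplot as plt

    # Read the phenotype archive: one behavior vector per row, stored in
    # the order the entries were added during evolution.
    with open("out/PHENOTYPE_FILE") as f:   # Replace with your file
        rows = [[float(x) for x in line] for line in csv.reader(f) if line]

    # Plot the first behavior dimension over the course of the run.
    plt.plot([row[0] for row in rows])
    plt.xlabel("Archive entry (insertion order)")
    plt.ylabel("Behavior dimension 0")
    plt.show()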

## Configuration
As part of our goal of making a framework that is easy to tweak and extend, many of the black-box details are hidden behind the scenes (in the /src folder).
Use the common configuration interfaces to modify common parameters without needing deep knowledge of the codebase.

    from src.novelty.GeneRule import GeneRuleContinuous
    from src.novelty.evolve import main as evolve
    from src.results.results import main as report
    from src.config.WorldConfig import RectangularWorldConfig
    from src.config.defaults import ConfigurationDefaults
    from src.config.EvolutionaryConfig import GeneticEvolutionConfig

    # Use the default Differential Drive Agent, initialized with a single sensor and normal physics
    agent_config = ConfigurationDefaults.DIFF_DRIVE_AGENT

    # Create a Genotype Ruleset that matches the size and boundaries of your robot controller. _max and _min represent
    # the maximum and minimum acceptable values for that index in the genome, and mutation_step specifies the largest
    # possible step in any direction that the genome can experience during mutation.
    # (An illustrative sketch of these parameters follows after this configuration example.)
    genotype = [
        GeneRuleContinuous(_max=1.0, _min=-1.0, mutation_step=0.4, round_digits=4),
        GeneRuleContinuous(_max=1.0, _min=-1.0, mutation_step=0.4, round_digits=4),
        GeneRuleContinuous(_max=1.0, _min=-1.0, mutation_step=0.4, round_digits=4),
        GeneRuleContinuous(_max=1.0, _min=-1.0, mutation_step=0.4, round_digits=4),
    ]

    # Use the default Behavior Vector (from Brown et al.) to measure the collective swarm behaviors
    phenotype = ConfigurationDefaults.BEHAVIOR_VECTOR

    # Define an empty Rectangular World with size (w, h) and n agents.
    world_config = RectangularWorldConfig(
        size=(500, 500),
        n_agents=30,
        behavior=phenotype,
        agentConfig=agent_config,
        padding=15
    )

    # Define the breadth and depth of novelty search with n_generations and n_population.
    # Modify k_nn to change the number of nearest neighbors used in calculating novelty.
    # Increase simulation_lifespan to allow agents to interact with each other for longer.
    # Set save_archive to True to save the resulting archive to /out.
    novelty_config = GeneticEvolutionConfig(
        gene_rules=genotype,
        phenotype_config=phenotype,
        n_generations=100,
        n_population=100,
        crossover_rate=0.7,
        mutation_rate=0.15,
        world_config=world_config,
        k_nn=15,
        simulation_lifespan=300,
        display_novelty=False,
        save_archive=False,
    )

    # Novelty Search through Genetic Evolution
    archive = evolve(config=novelty_config)

    results_config = ConfigurationDefaults.RESULTS
    results_config.world = world_config
    results_config.archive = archive

    # Take the results from evolution, reduce dimensionality, and present the user with clusters.
    report(config=results_config)
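
To make the gene-rule parameters concrete, here is one plausible reading of `_min`, `_max`, `mutation_step`, and `round_digits`. The actual mutation operator lives in /src and may differ; this sketch is purely illustrative:

    import random

    def mutate_gene(value, _min=-1.0, _max=1.0, mutation_step=0.4, round_digits=4):
        """Perturb a gene by at most mutation_step, clamp to [_min, _max], round."""
        value += random.uniform(-mutation_step, mutation_step)
        value = max(_min, min(_max, value))
        return round(value, round_digits)

    print(mutate_gene(0.9))   # Result always stays within [-1.0, 1.0]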

## Augmentation
We have explored augmenting this framework further to allow more complex world, sensor, controller, and actuator spaces.
Much of the backbone needed to support these augmentations is present in this codebase, but it lacks testing and robustness.

We invite you to augment cautiously and carefully test output validity.

            
