monkeyble 1.3.0

- Summary: End-to-end testing framework for Ansible
- Home page: https://hewlettpackard.github.io/monkeyble/
- Author: Nicolas Marcq
- Requires Python: >=3.8,<4.0
- License: GNU General Public License v3 (GPLv3)
- Keywords: test, ansible, end2end
- Upload time: 2023-02-02 14:32:42
<p align="center">
    <img src="docs/images/monkeyble_logo.png">
</p>

<h3 align="center">End-to-end testing framework for Ansible</h3>

<p align="center">
<a href="https://hewlettpackard.github.io/monkeyble"><img alt="Doc" src="https://img.shields.io/badge/read-documentation-1abc9c?style=flat-square"></a>
<a href="https://makeapullrequest.com"><img alt="PR" src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square"></a>
</p>

# Monkeyble

Monkeyble is a callback plugin for Ansible that lets you run end-to-end tests on Ansible playbooks with a
Pythonic testing approach. 🧐

At the task level, Monkeyble lets you:

- 🐵 Check that a module has been called with expected argument values
- 🙊 Check that a module returned the expected result dictionary
- 🙈 Check the task state (changed, skipped, failed)
- 🙉 Mock a module and return a defined dictionary as result

Monkeyble is designed to be executed by a CI/CD pipeline in order to detect regressions when updating an Ansible code base. 🚀

The complete documentation is available [here](https://hewlettpackard.github.io/monkeyble).

## Hello Monkeyble

Let's consider this simple playbook:
```yaml
- name: "Hello Monkeyble"
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  vars:
    who: "Monkeyble"

  tasks:
    - name: "First task"
      set_fact:
        hello_to_who: "Hello {{ who }}"

    - name: "Second task"
      debug:
        msg: "{{ hello_to_who }}"

    - when: "who != 'Monkeyble'"
      name: "Should be skipped task"
      debug:
        msg: "You said hello to somebody else"

    - name: "Push Monkeyble to a fake API"
      uri:
        url: "example.domain/monkeyble"
        method: POST
        body:
          who: "{{ who }}"
        body_format: json
```

We prepare a YAML file that contains a test scenario:
```yaml
# monkeyble_scenarios.yml
monkeyble_scenarios:
  validate_hello_monkey:
    name: "Monkeyble hello world"
    tasks_to_test:

      - task: "First task"
        test_output:
          - assert_equal:
              result_key: result.ansible_facts.hello_to_who
              expected: "Hello Monkeyble"

      - task: "Second task"
        test_input:
          - assert_equal:
              arg_name: msg
              expected: "Hello Monkeyble"

      - task: "Should be skipped task"
        should_be_skipped: true

      - task: "Push Monkeyble to a fake API"
        mock:
          config:
            monkeyble_module:
              consider_changed: true
              result_dict:
                json:
                  id: 10
                  message: "monkey added"
```

We execute the playbook by passing:
- the dedicated Ansible config that loads Monkeyble (see the install doc)
- the extra-vars file that contains our scenarios
- one extra var selecting the scenario to validate, `validate_hello_monkey`

```bash
ANSIBLE_CONFIG="ansible.cfg" ansible-playbook -v  \
tests/test_playbook.yml \
-e "@tests/monkeyble_scenarios.yml" \
-e "monkeyble_scenario=validate_hello_monkey"
```
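
The `ansible.cfg` in question is what wires the Monkeyble callback plugin into the run. A minimal sketch, assuming the plugin directory and callback name below; the install documentation has the authoritative values:

```ini
[defaults]
# Hypothetical path: directory containing Monkeyble's callback plugin
callback_plugins = path/to/monkeyble/plugins/callback
# Hypothetical plugin name: enable the Monkeyble callback
callbacks_enabled = monkeyble_callback
```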

Here is the output:
```
PLAY [Hello Monkeyble] *********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
🐵 Starting Monkeyble callback
monkeyble_scenario: validate_hello_monkey
Monkeyble scenario: Monkeyble hello world

TASK [First task] **************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {"ansible_facts": {"hello_to_who": "Hello Monkeyble"}, "changed": false}
🙊 Monkeyble test output passed ✔
{"task": "First task", "monkeyble_passed_test": [{"test_name": "assert_equal", "tested_value": "Hello Monkeyble", "expected": "Hello Monkeyble"}], "monkeyble_failed_test": []}

TASK [Second task] *************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
🙈 Monkeyble test input passed ✔
{"monkeyble_passed_test": [{"test_name": "assert_equal", "tested_value": "Hello Monkeyble", "expected": "Hello Monkeyble"}], "monkeyble_failed_test": []}
ok: [localhost] => {
    "msg": "Hello Monkeyble"
}

TASK [Should be skipped task] **************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
skipping: [localhost] => {}
🐵 Monkeyble - Task 'Should be skipped task' - expected 'should_be_skipped': True. actual state: True

TASK [Push Monkeyble to a fake API] ********************************************************************************************************************************************************************************************************************************************************************************************************************************************************
🙉 Monkeyble mock module - Before: 'uri' Now: 'monkeyble_module'
changed: [localhost] => {"changed": true, "json": {"id": 10, "message": "monkey added"}, "msg": "Monkeyble Mock module called. Original module: uri"}

PLAY RECAP *********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   

🐵 Monkeyble - ALL TESTS PASSED ✔ - scenario: Monkeyble hello world
```

All tests have passed, and the return code is **0**.

Now let's make the test fail. We update the variable `who` at the beginning of the playbook:
```yaml
who: "Dog"
```

We execute the playbook the same way. The result is now the following:
```
ok: [localhost] => {"ansible_facts": {"hello_to_who": "Hello Dog"}, "changed": false}
🙊 Monkeyble failed scenario ❌: Monkeyble hello world
{"task": "First task", "monkeyble_passed_test": [], "monkeyble_failed_test": [{"test_name": "assert_equal", "tested_value": "Hello Dog", "expected": "Hello Monkeyble"}]}
```

This time the test has failed and the return code is **1**. Your CI/CD pipeline would have warned you that something changed.

## Quick tour

### Test input

Monkeyble lets you check each instantiated argument value when the task is called:

```yml
  - task: "my_task_name"
    test_input:
      - assert_equal:
          arg_name: module_argument
          expected: "my_value"
```

Monkeyble supports multiple test methods; a combined sketch follows the list:

- assert_equal
- assert_not_equal
- assert_in
- assert_not_in
- assert_true
- assert_false
- assert_is_none
- assert_is_not_none
- assert_list_equal
- assert_dict_equal
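
Several assertions can be attached to the same task. A minimal sketch, assuming hypothetical module arguments `state` and `url`:

```yml
  - task: "my_task_name"
    test_input:
      - assert_not_equal:
          arg_name: state        # hypothetical module argument
          expected: "absent"
      - assert_is_not_none:
          arg_name: url          # hypothetical module argument
```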

### Test output

Monkeyble lets you check the output result dictionary of a task:

```yml
  - task: "my_task_name"
    test_output:
      - assert_dict_equal:
          dict_key: "result.key.name"
          expected: 
            key1: "my_value"
            key2: "my_other_value"
```

The same methods as for `test_input` are supported.
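
For methods other than `assert_dict_equal`, a single key of the result can be targeted with `result_key`, as in the hello-world scenario above. A minimal sketch, assuming hypothetical `result.changed` and `result.msg` key paths:

```yml
  - task: "my_task_name"
    test_output:
      - assert_equal:
          result_key: "result.changed"   # hypothetical key path
          expected: false
      - assert_is_not_none:
          result_key: "result.msg"       # hypothetical key path
```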

### Test task states

Monkeyble lets you check the state of a task:

```yml
  - task: "my_task_name"
    should_be_skipped: false
    should_be_changed: true
    should_fail: false
```
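
Note that for `should_fail` to be observable, the play has to survive the failing task; in plain Ansible that typically means marking the task with `ignore_errors`. A hypothetical illustration:

```yml
    - name: "my_task_name"
      command: /bin/false    # fails on purpose
      ignore_errors: true    # lets the play continue so the state can be checked
```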

### Monkey patching

Monkey patching is a technique that allows you to intercept what a function would normally do, substituting its full execution with a return value of your own specification. 
In the case of Ansible, the function is actually a module and the returned value is the "result" dictionary.

Consider a scenario where you are working with a public cloud API or an infrastructure module.
In the context of testing, you do not want to create a real instance of an object in the cloud, like a VM or a container orchestrator.
But you still eventually need the returned dictionary so that the playbook can be executed entirely.

Monkeyble lets you mock a task and return a specific value:
```yml
- task: "my_task_name"
  mock:
    config:
      monkeyble_module:
        consider_changed: true
        result_dict:
          my_key: "mock value"
```
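
Downstream tasks then consume the mocked result as if the real module had run. A minimal sketch based on the hello-world example above, where the mocked `uri` task returns `json.id` (the register name `api_result` is illustrative):

```yml
    - name: "Push Monkeyble to a fake API"
      uri:
        url: "example.domain/monkeyble"
        method: POST
      register: api_result

    - name: "Use the mocked response"
      debug:
        msg: "Created object {{ api_result.json.id }}"
```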

### CLI

Monkeyble comes with a CLI that lets you execute all tests with a single command and returns a summary of the test executions.
```bash
monkeyble test

Playbook   | Scenario        | Test passed
-----------+-----------------+-------------
 play1.yml | validate_test_1 | ✅
 play1.yml | validate_test_2 | ✅
 play2.yml | validate_this   | ✅
 play2.yml | validate_that   | ✅
 
 🐵 Monkeyble test result - Tests passed: 4 of 4 tests
```

## Do I need Monkeyble?

The common testing strategy when using Ansible is to deploy to a staging environment that simulates production.
When a role or a playbook is updated, we usually run an integration test battery against staging before pushing to production.
Every update of the code base then requires a new execution on the staging environment before the production one, and so on.

But when our playbooks are exposed in an [Ansible Controller/AWX](https://www.ansible.com/products/controller) (formerly Tower)
or available as a service in a catalog like [Squest](https://github.com/HewlettPackard/squest), we need to be sure that we don't introduce any regressions
when updating the code base, especially when modifying a role used by multiple playbooks. Manually testing each playbook would be costly, so we commonly hand this kind of task to a CI/CD pipeline.

Furthermore, Ansible resources are models of desired state. Ansible modules have their own unit tests and guarantee their correct functioning.
As such, it's not necessary to test that services are started, packages are installed, or other such things. 
Ansible is the system that will ensure these things are declaratively true.

So finally, what do we need to test? An Ansible playbook is commonly a chain of data manipulations before a module performs a particular action.
For example, we get data from an API endpoint or from the result of a module, register a variable, then use a filter to transform the data (combining two dictionaries,
converting to a list, changing the type, extracting a specific value, and so on) before finally calling another module in a new task with the transformed data.
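
A minimal sketch of such a chain, assuming a hypothetical play that merges two dictionaries with the `combine` filter, plus the scenario entry that pins down the merged value:

```yaml
# In the playbook: a typical data-manipulation task.
- name: "Merge defaults with user config"
  set_fact:
    final_config: "{{ default_config | combine(user_config) }}"

# In the Monkeyble scenario: assert the transformed data.
- task: "Merge defaults with user config"
  test_output:
    - assert_equal:
        result_key: result.ansible_facts.final_config.port  # hypothetical key
        expected: 8443
```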

Given a defined list of variables as input, we want to be sure that a particular task:

- is executed at all (the playbook could have failed before reaching it)
- is called with the expected instantiated arguments
- produces the exact expected result
- has been skipped, changed, or has failed as expected

Monkeyble is a tool that can help you enhance the quality of your Ansible code base, and it can be combined
with the [official best practices](https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html).
Placed in a CI/CD pipeline, it is in charge of validating that the existing code keeps working as expected.

## Contribute

Feel free to file an issue containing feature requests, or (even better) to send a pull request; we would be happy to collaborate with you.

> If you like the project, star it ⭐, it motivates us a lot 🙂

            
