# Summary
This package provides a utility that translates [test plans](https://learn.microsoft.com/en-us/azure/devops/test/overview?view=azure-devops), suites, and cases in Azure DevOps (ADO) into validated Gherkin feature files, and then uses `pytest-bdd generate` to create the runners for those tests.
After that, it can validate that the test directory contains all of the fixtures (defined via pytest-bdd's given/when/then decorators) needed to run pytest.
It leverages ADO's notion of [Shared Steps](https://learn.microsoft.com/en-us/azure/devops/test/share-steps-between-test-cases?view=azure-devops) to reduce duplication when authoring features and scenarios. This lets given/when/then clauses be written once and used many times.
It can also leverage parameters on a test case (both "shared" and "non-shared") as `Examples`, producing a `Scenario Outline` instead of a standard `Scenario`.
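For illustration, a feature file emitted for a parameterized test case might look like the following minimal sketch (the feature name, steps, and example values are invented here; actual content comes from the ADO test plan):
```gherkin
Feature: Login
  Scenario Outline: Log in with valid credentials
    Given the application is running
    When the user logs in as "<username>" with password "<password>"
    Then the dashboard is shown

    Examples:
      | username | password |
      | alice    | secret1  |
      | bob      | secret2  |
```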
# Installation
This package is written in pure Python and can be installed with:
```bash
$ pip install adotestplan-to-pytestbdd
```
# Usage
```python
from adotestplan_to_pytestbdd import ADOTestPlan
url = 'https://dev.azure.com/[ORGANIZATION_HERE]'
pat = '[PAT_HERE]'
project = '[PROJECT_HERE]'
out_dir = 'output'
tp = ADOTestPlan(organization_url=url, pat=pat, project=project, out_dir=out_dir)
```
The above example hasn't actually "done anything" yet, and is equivalent to the following:
```python
from adotestplan_to_pytestbdd import ADOTestPlan
tp = ADOTestPlan()
tp.url = 'https://dev.azure.com/[ORGANIZATION_HERE]'
tp.pat = '[PAT_HERE]'
tp.project = '[PROJECT_HERE]'
tp.out_dir = 'output'
```
Put differently - until one of the built-in methods is invoked, properties can be set either through `__init__` or via `property` access.
To populate the internal memory structures from ADO:
```python
tp.populate()
```
Next, to write feature files to disk from the populated structures:
```python
tp.write_feature_files()
```
At this point, the ADO test plan has been synchronized to feature files on disk. It's possible that this is a sufficient stopping point.
Here begins the pytest-bdd integration.
First, use this method:
```python
tp.write_pytestbdd_runners()
```
to create test_xyz.py files on disk corresponding to the feature files generated above. This is a wrapper around `pytest-bdd generate` (see [ado_test_plan.py](adotestplan_to_pytestbdd/ado_test_plan.py#:~:text=_generate_pytestbdd_for_feature)).
One reason this is useful is that it avoids "checking in" boilerplate/generated code - the test methods created here are _basically_ stubs; the majority of the test logic lives in the given/when/then fixtures. With this approach, the test_xyz.py files can be just as ephemeral as the .feature files they are generated from - the one piece that is persistent/checked in is the fixtures, where the actual test implementation occurs.
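For reference, a generated runner is a thin binding of a feature's scenarios to pytest. A minimal sketch of what one might look like (the file and scenario names are invented; real files come from `pytest-bdd generate`):
```python
# test_login.py - hypothetical sketch of a generated runner
from pytest_bdd import scenario


@scenario("login.feature", "Log in with valid credentials")
def test_log_in_with_valid_credentials():
    """Intentionally empty - the real work happens in the step fixtures."""
```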
Finally, call:
```python
tp.validate_pytestbdd_runners_against_feature_files()
```
This final call uses pytest utilities to collect all fixtures in the specified test directory and compares them against the needed fixtures, as determined during the `populate()` phase. It prints informative messages and, in the end, raises an exception if any fixtures are not found.
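The fixtures themselves are ordinary pytest-bdd step implementations, typically kept in a `conftest.py`. A minimal sketch matching the invented feature above (the step text and function names are assumptions for illustration, not output of this package):
```python
# conftest.py - hypothetical step implementations for the sketched feature
from pytest_bdd import given, parsers, then, when


@given("the application is running")
def application_running():
    ...  # start or connect to the system under test


@when(parsers.parse('the user logs in as "{username}" with password "{password}"'))
def log_in(username, password):
    ...  # drive the login flow with the example values


@then("the dashboard is shown")
def dashboard_shown():
    ...  # assert on the resulting state
```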
# Testing
Please see [TESTING.md](TESTING.md) for notes on running the tests associated with this package. Note that this refers to the unit tests for validating the package itself, not the tests generated by running this package; the latter can be run after code generation via a normal call to `pytest`.
# Possible Enhancements
- Split the 2 basic pieces of functionality into separate packages (1 being ADO-to-feature-file translation, 2 being feature-file-to-fixture "pool" checking)
- Use `pytest --fixtures` to collect available fixtures instead of raw searching through files. This is likely a more robust way of making sure fixtures aren't being missed. (This was looked into briefly, and it appears to be much less performant than raw searching - on the order of 7s compared to 30ms - so it was tabled for now.)
- Document how "tags" can be used to filter test cases if one wanted to extend this utility for their own workflows.
- A lot of the functionality in the [tasks.py](tasks.py) methods `delete_test_work_items` and `generate_test_work_items` may be relevant for "round-tripping" this utility - going from .feature files _into_ ADO - which has some utility in and of itself, for instance migrating from a plain-text file approach to an ADO Test Plan backed approach.