| Field | Value |
| --- | --- |
| Name | micropytest |
| Version | 0.17.1 |
| Summary | A micro test runner |
| Upload time | 2025-07-11 10:32:46 |
| Requires Python | >=3.9 |
| Keywords | pytest, micro, test-runner |
| Requirements | No requirements were recorded. |
[PyPI](https://pypi.org/project/micropytest/)
[CI](https://github.com/BeamNG/micropytest/actions/workflows/test_examples.yml)
# microPyTest
**microPyTest** is a minimal, pure-Python test runner that you can use directly from code.


## Key Points
- **Code-first approach**: Import and run tests from your own scripts.
- **Artifact tracking**: Each test can record artifacts (files or data) via a built-in **test context**.
- **Command execution**: Run and interact with external processes with real-time output processing.
- **Test filtering**: Run only the tests you need by specifying patterns.
- **Test arguments**: Pass command-line arguments directly to your tests.
- **Lightweight**: Just Python. No special config or advanced fixtures.
- **Optional CLI**: You can also run tests via the `micropytest` command, but **embedding** in your own code is the primary focus.
## Installation
Install directly from [PyPI](https://pypi.org/project/micropytest/):
```bash
pip install micropytest
```
## Usage in Code
Suppose you have some test files under `my_tests/`:
```python
# my_tests/test_example.py
def test_basic():
    assert 1 + 1 == 2

def test_with_context(ctx):
    ctx.debug("Starting test_with_context")
    assert 2 + 2 == 4
    ctx.add_artifact("numbers", {"lhs": 2, "rhs": 2})
```
You can **run** them from a Python script:
```python
import micropytest.core
results = micropytest.core.run_tests(tests_path="my_tests")
passed = sum(r.status == "pass" for r in results)
total = len(results)
print(f"Test run complete: {passed}/{total} passed")
```
- Each test that accepts a `ctx` parameter gets a **TestContext** object with `.debug()`, `.warn()`, `.add_artifact()`, etc.
- Results include logs, artifacts, pass/fail/skip status, and **duration**.
## Command Execution
microPyTest includes a `Command` class for running and interacting with external processes:
```python
from micropytest.command import Command
import sys
def test_interactive_command(ctx):
    # Run a Python interpreter interactively
    with Command([sys.executable, "-i"]) as cmd:
        # Send a command
        cmd.write("print('Hello, world!')\n")

        # Check the output
        stdout = cmd.get_stdout()

        # Exit the interpreter
        cmd.write("exit()\n")

    # Verify the output
    assert any("Hello, world!" in line for line in cmd.get_stdout())
```
Key features:
- Run commands with callbacks for real-time output processing
- Interact with processes via stdin
- Access stdout/stderr at any point during execution
- Set custom environment variables and working directories
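The interactive example above uses micropytest's `Command` wrapper. The underlying pattern it wraps — write to a child process's stdin, read its stdout — can be sketched with the standard library alone (this is an illustration of the idea, not micropytest's implementation):

```python
import subprocess
import sys

# Illustration only: the same "write to stdin, read stdout" pattern that
# micropytest's Command wraps, done directly with the standard library.
result = subprocess.run(
    [sys.executable],                     # python reads the script from stdin
    input="print('Hello, world!')\n",
    capture_output=True,
    text=True,
    timeout=30,
)
print(result.stdout)                      # contains "Hello, world!"
```

What `Command` adds on top of this is real-time output callbacks and incremental stdin interaction while the process is still running, which `subprocess.run` alone cannot do.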
## Test Filtering
You can run a subset of tests by specifying a filter pattern:
```python
# Run only tests with "artifact" in their name
results = micropytest.core.run_tests(tests_path="my_tests", test_filter="artifact")
```
This is especially useful when you're focusing on a specific area of your codebase.
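Conceptually, the filter is a match against test names. A minimal sketch (substring matching is an assumption here; micropytest's exact matching rules are not specified in this document):

```python
# Hypothetical sketch of name-based filtering, not micropytest's actual code.
discovered = ["test_basic", "test_artifact_upload", "test_artifact_delete"]

def matches(name, test_filter):
    # Assumed semantics: the filter is a substring of the test name.
    return test_filter in name

selected = [t for t in discovered if matches(t, "artifact")]
print(selected)  # ['test_artifact_upload', 'test_artifact_delete']
```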
## Passing Arguments to Tests
Tests can accept and parse command-line arguments using standard Python's `argparse`:
```python
def test_with_args(ctx):
    import argparse

    # Create parser
    parser = argparse.ArgumentParser(description="Test with arguments")
    parser.add_argument("--string", "-s", default="default string", help="Input string")
    parser.add_argument("--number", "-n", type=int, default=0, help="Input number")

    # Parse arguments (ignoring unknown args)
    args, _ = parser.parse_known_args()

    # Log the parsed arguments
    ctx.debug("Parsed arguments:")
    for key, value in vars(args).items():
        ctx.debug(f"  {key}: {value}")

    # Use the arguments in your test
    assert args.string != "", "String should not be empty"
    assert args.number >= 0, "Number should be non-negative"
```
When running from the command line, you can pass these arguments directly:
```bash
micropytest --test test_with_args --string="Hello World" --number=42
```
The arguments after your test filter will be passed to your test functions, allowing for flexible test parameterization.
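Passing an explicit argument list to `parse_known_args` shows why this pattern works: options the parser recognizes are consumed, and everything else — including the runner's own flags, simulated here — lands in the extras list instead of raising an error:

```python
import argparse

# Demonstrates the parse_known_args behavior the test above relies on.
parser = argparse.ArgumentParser()
parser.add_argument("--string", "-s", default="default string")
parser.add_argument("--number", "-n", type=int, default=0)

# Simulated command line: runner flags mixed with the test's own arguments.
argv = ["--test", "test_with_args", "--string", "Hello World", "--number", "42"]
args, unknown = parser.parse_known_args(argv)

print(args.string, args.number)   # Hello World 42
print(unknown)                    # ['--test', 'test_with_args']
```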
## Differences from pytest
- **Code-first**: You typically call `run_tests(...)` from Python scripts; the CLI is optional.
- **Artifact handling is built-in**: `ctx.add_artifact("some_key", value)` can store data for later review. No extra plugin required.
- **Command execution built-in**: No need for external plugins to run and interact with processes.
- **No fixtures or plugins**: microPyTest is intentionally minimal. Tests can still share state by passing a custom context class if needed.
- **No configuration**: There's no `pytest.ini` or `conftest.py`. Just put your test functions in `test_*.py` or `*_test.py`.
- **Time estimates for each test**: durations are recorded per test, giving runtime estimates for subsequent runs.
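The `test_*.py` / `*_test.py` naming convention mentioned above can be sketched as a small predicate (micropytest's real discovery logic may differ in detail):

```python
import pathlib

# Hypothetical sketch of the file-naming convention, not micropytest's code:
# a file qualifies if it matches test_*.py or *_test.py.
def is_test_file(path):
    name = pathlib.PurePath(path).name
    return name.endswith(".py") and (
        name.startswith("test_") or name.endswith("_test.py")
    )

print(is_test_file("my_tests/test_example.py"))  # True
print(is_test_file("my_tests/example_test.py"))  # True
print(is_test_file("my_tests/helpers.py"))       # False
```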
## Quickstart
See the `examples` subfolder in the repository.
## Optional CLI
If you prefer a command-line flow:
```bash
micropytest -p tests/
```
- `--verbose`: Show all debug logs & artifacts.
- `--quiet`: Only prints a final summary.
- `--test`: Run only tests matching the specified pattern.
- `--dry-run`: Show which tests would run without actually running them (they are assumed to pass).
Examples:
```bash
# Run all tests in my_tests directory
micropytest -v my_tests
# Run only tests with "artifact" in their name
micropytest -t artifact my_tests
# Run a specific test and pass arguments to it
micropytest -t test_cmdline_parser --string="Hello" --number=42
```
## Development
To develop with microPyTest, install the required dependencies and run the tests in the project:
```bash
pip install rich
python -m micropytest .
```
### Uploading to PyPI
To build and upload a new version to PyPI:
```bash
# Install build tools
pip install build twine
# Build the distribution packages
python -m build
# Upload to PyPI
python -m twine upload dist/micropytest-xxx.tar.gz
```
Make sure to update the version number in `__init__.py` and `pyproject.toml` before building a new release.
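A small helper can guard against the two version strings drifting apart before a release. This is a hypothetical utility, not part of micropytest, and it assumes `__init__.py` exposes a `__version__` string:

```python
import re

# Hypothetical release check: verify that the version in __init__.py and
# pyproject.toml agree (assumes __init__.py defines __version__).
def extract_versions(init_text, pyproject_text):
    init_ver = re.search(
        r'__version__\s*=\s*["\']([^"\']+)["\']', init_text
    ).group(1)
    toml_ver = re.search(
        r'^version\s*=\s*["\']([^"\']+)["\']', pyproject_text, re.M
    ).group(1)
    return init_ver, toml_ver

init_src = '__version__ = "0.17.1"\n'
toml_src = '[project]\nname = "micropytest"\nversion = "0.17.1"\n'
a, b = extract_versions(init_src, toml_src)
assert a == b == "0.17.1"
```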
## Changelog
- **v0.17.1** – Bugfix in SVN helper
- **v0.17** – Improve SVN helper
- **v0.16** – Improve test store terminology
- **v0.15** – Add support for environment variables for SVN credentials
- **v0.14** – Add support for storing test results on a remote server using REST API, make core functions sync
- **v0.13** – Make ctrl+c responsively abort tests, fix bugs, add VCS helper functions
- **v0.12** – Improved type safety, allow detecting changes using VCS
- **v0.11.1** – Add dry-run mode
- **v0.11** – Add support for parameterized tests
- **v0.10** – Refactoring, make VCSHelper extensible; breaking: you need to create an instance of VCSHelper to use its functions
- **v0.9.2** – Fix dependency issues and require Python 3.9 or newer
- **v0.9.1** – Make pyproject.toml compatible with Python older than 3.9 (attempt did not work)
- **v0.9** – Several fixes
- **v0.8** – Several fixes and minor improvements
- **v0.7** – Added asyncio support
- **v0.6** – Added rich display support, tag filtering, improved warnings display, VCS helper, and improved command execution
- **v0.5** – Added test filtering and argument passing capabilities
- **v0.4** – Added Command class for process execution and interaction
- **v0.3.1** – Fixed screenshot on PyPI
- **v0.3** – Added ability to skip tests
- **v0.2** – Added support for custom context classes
- **v0.1** – Initial release
Enjoy your **micro** yet **mighty** test runner!