# Pipeline Reader
## About
`pipeline-reader` is a package designed to make it easier to use Jenkinsfile-like pipeline files in Python projects. While JSON and YAML are generally sufficient for configuration, there may be times when you want to execute code, which JSON and YAML are not well-suited for. In that case, `pipeline-reader` allows you to define stages containing Python code which are then run sequentially.
## Examples
### Writing a Pipeline
A basic pipeline consists of a `pipeline` block containing stages as follows:
```
pipeline {
    stages {
        stage('foo') {
            print('do something')
        }
    }
}
```
You can then add an `options` block like so:
```
pipeline {
    options {
        "foo1": "bar1"
        "foo2": "bar2"
    }
    stages {
        stage('foo') {
            print('do something')
        }
    }
}
```
These options will then be stored in a dictionary and made available via the Pipeline object that `pipeline-reader` returns (this object is also available to the pipeline itself via the `_pipeline` variable).
You can then access these options inside your pipeline script like so:
```
pipeline {
    options {
        "foo1": "bar1"
        "foo2": "bar2"
    }
    stages {
        stage('foo') {
            print(_pipeline.options)
        }
    }
}
```
There is also a group of "protected" options, in the sense that they affect how the pipeline runs. The list of protected options is shown below:
- `exit_on_failure`
  - boolean
  - If `True`, the pipeline exits on a failed stage; otherwise the pipeline will continue running
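For example, a pipeline that should keep running even when a stage fails might set this option in its `options` block (a sketch based on the option syntax above; the exact boolean literal the parser expects is an assumption):
```
pipeline {
    options {
        "exit_on_failure": False
    }
    stages {
        stage('foo') {
            raise Exception('this stage fails')
        }
        stage('bar') {
            # still runs because exit_on_failure is disabled
            print('do something')
        }
    }
}
```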
In addition, the code context carries over between stages, so running
```
pipeline {
    stages {
        stage('foo') {
            variable = True
        }
        stage('bar') {
            print(variable)
        }
    }
}
```
will output `True`.
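This behavior is consistent with each stage's body being executed against a single shared namespace. A minimal Python sketch of that mechanism (an illustration of the idea, not the library's actual implementation):

```python
# Each stage's code is exec'd against the same namespace dict,
# so names defined in one stage are visible to later stages.
namespace = {}
exec("variable = True", namespace)    # stage 'foo'
exec("result = variable", namespace)  # stage 'bar' can read it
print(namespace["result"])  # True
```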
You can also get a little bit fancy with the stages by utilizing a `when` block to control whether a stage runs:
```
pipeline {
    options {
        "foo": "bar"
    }
    stages {
        stage('foo') {
            when {
                _pipeline.options["foo"] == "bar"
            }
            # This will run only if the "foo" option is set to "bar"
        }
    }
}
```
If you need to run specific code before your stage executes, you can use a `pre` block to make it more readable:
```
pipeline {
    stages {
        stage('foo') {
            pre {
                message = 'hello world'
            }
            # this will print "hello world"
            print(message)
        }
    }
}
```
You can then control what happens after a stage through the use of a `post` block. `post` blocks support `always`, `success`, and `failure` sub-blocks, with `always` being run first:
```
pipeline {
    stages {
        stage('foo') {
            # if this succeeds then it will run the `always` block followed by the `success` block
            post {
                success {
                    print('Success!')
                }
                failure {
                    print('failure!')
                }
                always {
                    print('Always!')
                }
            }
        }
    }
}
```
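The ordering described above can be summarized with a small sketch (a hypothetical helper for illustration, not part of `pipeline-reader`'s API):

```python
def post_block_order(post_blocks, succeeded):
    """Return the order in which post sub-blocks run:
    'always' first, then 'success' or 'failure' by outcome."""
    order = ["always", "success" if succeeded else "failure"]
    return [name for name in order if name in post_blocks]

print(post_block_order({"always", "success", "failure"}, succeeded=True))
# ['always', 'success']
```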
To help with processes that fail often, you can set a `retry` directive in your stage to tell the pipeline how many times you want that stage to be retried:
```
pipeline {
    stages {
        stage('foo') {
            retry {5}
            # this will retry 5 times after the initial try
        }
    }
}
```
You can also jump to another stage from any of the `post` blocks with the `goto` directive:
```
pipeline {
    stages {
        stage('foo') {
            # this stage will jump to another depending on success or failure
            post {
                success {
                    goto {Success Stage}
                }
                failure {
                    goto {Failure Stage}
                }
            }
        }
        stage('Success Stage') {
            # this will be run on a success of the `foo` stage
        }
        stage('Failure Stage') {
            # this will be run on a failure of the `foo` stage
        }
    }
}
```
Finally, Python-style and C-style comments are both supported inside pipeline files:
```
# this will be ignored when the file is loaded in
pipeline {
    stages {
        // this will also be ignored
        stage('foo') {
            # even comments inside of stages are stripped out
            print('do something')
        }
    }
}
```
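Conceptually, stripping these comments can be as simple as dropping everything after a `#` or `//` marker on each line. The sketch below is an assumption about the approach, not the library's actual parser (notably, it would also strip markers that appear inside string literals):

```python
import re

def strip_comments(text):
    # Remove Python-style (#) and C-style (//) line comments.
    return re.sub(r"(#|//).*", "", text)

print(strip_comments("stage('foo') {  # a comment"))
```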
### Using `pipeline-reader` in an Application
To utilize `pipeline-reader`, you'll want to use something like the code shown below:
```Python
import pipeline_reader

# load your pipeline
with open('filename.pipeline') as f:
    pipeline = pipeline_reader.load(f)

# execute your pipeline
pipeline_reader.run(pipeline, globals(), locals())
```
## To Do
- [x] Parse pipeline file
- [x] Run loaded pipelines
- [x] Pre-stage
- [x] Post-stage
- [x] When-condition
- [x] Catch stage success
- [x] Catch stage failure
- [x] Goto directive
- [x] Retry directive
- [ ] Allow loading of arbitrary block types as plugins
## Contact
If you have any questions or concerns, please reach out to me (John Carter) at jfcarter2358@gmail.com