kubernetes-watch

- Name: kubernetes-watch
- Version: 0.1.9
- Author: bmotevalli
- Requires-Python: <3.14,>=3.10
- Uploaded: 2025-09-18 12:38:21
# kube_watch

# To set up the project

- `poetry install`
- `poetry shell`

# To install the package to your environment locally

- `python setup.py install`

# To publish

- `poetry config pypi-token.pypi your-api-token`
- `poetry build`
- `poetry publish`


# Description
The kube_watch library is built on top of <a href='https://docs.prefect.io/latest/'>Prefect</a>. The library is designed to define workflows in a declarative and flexible fashion. Originally, workflows in Prefect are defined via decorators such as `@flow` and `@task`. In kube_watch, workflows can instead be defined declaratively via YAML files. The library is mainly focused on running scheduled workflows in a Kubernetes environment; however, it can easily be extended for any purpose requiring a workflow. The workflow manifest has the following generic structure:

```yaml
workflow:
  name: Dummy Workflow
  runner: concurrent
  tasks:
    - name: Task_A
      module: <module_path>
      task: <func_name>
      inputsArgType: arg
      inputs:
        parameters:
          - name: x1
            value: y1
          - name: x2
            value: y2
          - name: x3
            type: env
            value: Y3

    - name: Task_B
      module: <module_path>
      task: <func_name>
      inputsArgType: arg
      inputs:
        parameters:
          - name: xx1
            value: yy1
          - name: xx2
            value: yy2
      dependency:
        - taskName: Task_A
          inputParamName: xx3

    - name: Task_C
      module: <module_path>
      task: <func_name>
      inputsArgType: arg
      conditional:
        tasks: ["Task_B"]
```


**runner**: `concurrent` | `sequential`: if `concurrent` is selected, tasks are run concurrently.

**module**: all modules are located in the 'modules' directory of kube_watch. This is where you can extend the library and add new tasks / modules. Under modules, there are submodules such as providers, clusters, and logic. Within each of these submodules, specific modules are defined. For example, providers.aws contains a series of tasks related to AWS; in this case, `<module_path>` = providers.aws. To add new tasks, add a new module following a similar pattern and reference its path in your task block.
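
A minimal sketch of such a module (the file path, module name, and function are hypothetical, purely to illustrate the layout):

```python
# Hypothetical file: kube_watch/modules/providers/my_provider.py
# With this layout, <module_path> = providers.my_provider in the manifest,
# and each top-level function becomes usable as a task.

def list_buckets(prefix):
    """Illustrative task: return bucket names starting with prefix."""
    # A real task would call a cloud SDK here; static data keeps the sketch runnable.
    buckets = ["app-logs", "app-backups", "tmp-scratch"]
    return [b for b in buckets if b.startswith(prefix)]
```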

**task**: the name of the function defined in `<module_path>`. In other words, as soon as you define a function in a module, you can start using it in your manifests.

**inputsArgType**: `arg` | `dict` | `list`: if the task function accepts a known, fixed number of parameters, use `arg`.

**dependency**: this block defines the dependency of a child task on its parent. If **inputParamName** is defined, the OUTPUT of the parent task is passed to the child under the argument name given by inputParamName.

**IMPORTANT NOTE**: A strict assumption is that task functions return a single output. If a task has multiple outputs, wrap them into a dictionary and unwrap them in the child task.
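
For example, a parent with several results can bundle them into one dictionary, which the child then unwraps (the function and key names here are illustrative, not part of the library):

```python
def parent_task():
    # Wrap multiple results into a single dictionary output,
    # satisfying the single-output assumption.
    return {"status": "ok", "count": 3}

def child_task(result):
    # The dictionary arrives as one argument (named via inputParamName);
    # unwrap the individual values here.
    status = result["status"]
    count = result["count"]
    return f"{status}:{count}"
```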

**conditional**: blocks that define when a task runs, depending on the outcome of its parent tasks. The parent task should return True or False.
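
A parent task used in a conditional block is therefore just a function returning a boolean; a minimal sketch (function name and parameters are hypothetical):

```python
def should_run(threshold, value):
    # Parent task for a conditional block: the child task listed under
    # conditional.tasks runs only when this returns True.
    return value > threshold
```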


Parameters also have a `type` entry: `env` | `static`. `static` is the default. If the type is `env`, the parameter value is loaded from environment variables; in this case, `value` should be the name of the corresponding environment variable.

In the above examples:

```python
def Task_A(x1, x2, x3):
    # do something
    return output_A

def Task_B(xx1, xx2, xx3):
    # do something else
    return output_B

def Task_C():
    # do another thing
    return output_C
```



# Batch workflows
kube_watch can also run workflows in batch. A separate manifest of the following form is required:

```yaml
batchFlows:
  runner: sequential
  items:
    - path: path_to_flow_A.yaml
    - path: path_to_flow_B.yaml
    - path: path_to_flow_C.yaml
```

# cron_app
The cron_app folder contains an example use case of the kube_watch library. The cron_app can be used to deploy a CronJob in a Kubernetes environment. The app assumes the manifests are located in a separate repository; it clones the repo, reads the manifests, and runs the workflows.

# Connect to a server
## Start Server
`prefect server start`
## To Connect
To connect to a server, set the `PREFECT_API_URL` environment variable to the server's API address.
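
For example, assuming a server started locally with `prefect server start` on Prefect's default port (the URL below is a placeholder; substitute your server's address):

```shell
# Point the Prefect client at a running server.
export PREFECT_API_URL="http://127.0.0.1:4200/api"
```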


            
