openshift-client

Name: openshift-client
Version: 2.0.4
Summary: OpenShift python client
Author email: Justin Pierce <jupierce@redhat.com>
Maintainer email: Brad Williams <brawilli@redhat.com>
Upload time: 2024-03-27 12:21:31
Requires Python: >=2.7
Download URL: https://files.pythonhosted.org/packages/15/a0/52cfd08e7ba81c03a70400e173448e600f3ae01c70954423594d3ea12aa8/openshift-client-2.0.4.tar.gz
License: Apache License, Version 2.0 (Copyright 2020 Red Hat, Inc.)
Keywords: openshift
Requirements: No requirements were recorded.
# Openshift Python Client
<!-- Install doctoc with `npm install -g doctoc`  then `doctoc README.md --github` -->

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents**  *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Overview](#overview)
- [Reader Prerequisites](#reader-prerequisites)
- [Setup](#setup)
  - [Prerequisites](#prerequisites)
  - [Installation Instructions](#installation-instructions)
    - [Using PIP](#using-pip)
    - [For development](#for-development)
- [Usage](#usage)
  - [Quickstart](#quickstart)
  - [Selectors](#selectors)
  - [APIObjects](#apiobjects)
  - [Making changes to APIObjects](#making-changes-to-apiobjects)
  - [Running within a Pod](#running-within-a-pod)
  - [Tracking oc invocations](#tracking-oc-invocations)
  - [Time limits](#time-limits)
  - [Advanced contexts](#advanced-contexts)
  - [Something missing?](#something-missing)
  - [Running oc on a bastion host](#running-oc-on-a-bastion-host)
  - [Gathering reports and logs with selectors](#gathering-reports-and-logs-with-selectors)
  - [Advanced verbs:](#advanced-verbs)
- [Examples](#examples)
- [Environment Variables](#environment-variables)
  - [Defaults when invoking `oc`](#defaults-when-invoking-oc)
  - [Master timeout](#master-timeout)
  - [SSH Client Host](#ssh-client-host)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Overview
The [openshift-client-python](https://www.github.com/openshift/openshift-client-python) library aims to provide a readable, concise, comprehensive, and fluent
API for rich interactions with an [OpenShift](https://www.openshift.com) cluster. Unlike other clients, this library exclusively uses the command
line tool (oc) to achieve the interactions. This approach comes with important benefits and disadvantages when compared
to other client libraries.

Pros:
- No additional software needs to be installed on the cluster. If a system with python support can (1) invoke `oc`
locally OR (2) ssh to a host and invoke `oc`, you can use the library.
- Portable. If you have python and `oc` working, you don't need to worry about OpenShift versions or machine architectures.
- Custom resources are supported and treated just like any other resource. There is no need to generate code to support them.
- Quick to learn. If you understand the `oc` command line interface, you can use this library.

Cons:
- This API is not intended to implement something as complex as a controller. For example, it does not implement
watch functionality. If you can't imagine accomplishing your use case through CLI interactions, this API is probably 
not the right starting point for it. 
- If you care about whether a REST API returns a particular error code, this API is probably not for you. Since it
is based on the CLI, high level return codes are used to determine success or failure.

## Reader Prerequisites
* Familiarity with OpenShift [command line interface](https://docs.openshift.org/latest/cli_reference/basic_cli_operations.html)
is highly encouraged before exploring the API's features. The API leverages the [oc](https://docs.openshift.org/latest/cli_reference/index.html)
binary and, in many cases, passes method arguments directly on to the command line. This document cannot, therefore,
provide a complete description of all possible OpenShift interactions -- the user may need to reference
the CLI documentation to find the pass-through arguments a given interaction requires.

* A familiarity with Python is assumed.

## Setup
### Prerequisites
1. Download and install the OpenShift [command-line Tools](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) needed to access your OpenShift cluster.

### Installation Instructions

#### Using PIP
1. Install the `openshift-client` module from PyPI.
    ```bash
    sudo pip install openshift-client
    ```

#### For development
1. Git clone https://github.com/openshift/openshift-client-python.git (or your fork).
2. Install the required libraries:
    ```bash
    sudo pip install -r requirements.txt
    ```
3. Append `./packages` to your `PYTHONPATH` environment variable (e.g. `export PYTHONPATH=$(pwd)/packages:$PYTHONPATH`).
4. Write and run your python script!

## Usage

### Quickstart
Any standard Python application should be able to use the API if it imports the `openshift_client` package. The simplest
possible way to begin using the API is to log in to your target cluster before running your first application.

Can you run `oc project` successfully from the command line? Then write your app!

```python
#!/usr/bin/python
import openshift_client as oc

print('OpenShift client version: {}'.format(oc.get_client_version()))
print('OpenShift server version: {}'.format(oc.get_server_version()))

# Set a project context for all inner `oc` invocations and limit execution to 10 minutes
with oc.project('openshift-infra'), oc.timeout(10*60):
    # Print the list of qualified pod names (e.g. ['pod/xyz', 'pod/abc', ...]) in the current project
    print('Found the following pods in {}: {}'.format(oc.get_project_name(), oc.selector('pods').qnames()))
    
    # Read in the current state of the pod resources and represent them as python objects
    for pod_obj in oc.selector('pods').objects():
        
        # The APIObject class exposes several convenience methods for interacting with objects
        print('Analyzing pod: {}'.format(pod_obj.name()))
        pod_obj.print_logs(timestamps=True, tail=15)
    
        # If you need access to the underlying resource definition, get a Model instance for the resource
        pod_model = pod_obj.model
        
        # Model objects enable dot notation and allow you to navigate through resources
        # to an arbitrary depth without checking if any ancestor elements exist.
        # In the following example, there is no need for boilerplate like:
        #    `if .... 'ownerReferences' in pod_model['metadata'] ....`
        # Fields that do not resolve will always return oc.Missing which 
        # is a singleton and can also be treated as an empty dict.
        for owner in pod_model.metadata.ownerReferences:  # ownerReferences == oc.Missing if not present in resource
            # elements of a Model are also instances of Model or ListModel
            if owner.kind is not oc.Missing:  # Compare as singleton
                print('  pod owned by a {}'.format(owner.kind))  # e.g. pod was created by a StatefulSet

```

### Selectors
Selectors are a central concept used by the API to interact with collections
of OpenShift resources. As the name implies, a "selector" selects zero or
more resources on a server which satisfy user-specified criteria. An apt
metaphor for a selector might be a prepared SQL statement which can be
used again and again to select rows from a database.

```python
# Create a selector which selects all projects.
project_selector = oc.selector("projects")

# Print the qualified name (i.e. "kind/name") of each resource selected.
print("Project names: " + project_selector.qnames())

# Count the number of projects on the server.
print("Number of projects: " + project_selector.count_existing())

# Selectors can also be created with a list of names.
sa_selector = oc.selector(["serviceaccount/deployer", "serviceaccount/builder"])

# Performing an operation will act on all selected resources. In this case,
# both serviceaccounts are labeled.
sa_selector.label({"mylabel" : "myvalue"})

# Selectors can also select based on kind and labels.
sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})

# We should find the service accounts we just labeled.
print("Found labeled serviceaccounts: " + sa_label_selector.names())

# Create a selector for a set of kinds.
print(oc.selector(['dc', 'daemonset']).describe())
```

The output should look something like this:

```
Project names: [u'projects/default', u'projects/kube-system', u'projects/myproject', u'projects/openshift', u'projects/openshift-infra', u'projects/temp-1495937701365', u'projects/temp-1495937860505', u'projects/temp-1495937908009']
Number of projects: 8
Found labeled serviceaccounts: [u'serviceaccounts/builder', u'serviceaccounts/deployer']
```

### APIObjects

Selectors allow you to perform "verb" level operations on a set of objects, but
what if you want to interact with objects at a schema level?

```python
projects_sel = oc.selector("projects")

# .objects() will perform the selection and return a list of APIObjects
# which model the selected resources.
projects = projects_sel.objects()

print("Selected " + len(projects) + " projects")

# Let's store one of the project APIObjects for easy access.
project = projects[0]

# The APIObject exposes methods providing simple access to metadata and common operations.
print('The project is: {}/{}'.format(project.kind(), project.name()))
project.label({ 'mylabel': 'myvalue' })

# And the APIObject allows you to interact with an object's data via the 'model' attribute.
# The Model is similar to a standard dict, but also allows dot notation to access elements
# of the structured data.
print('Annotations:\n{}\n'.format(project.model.metadata.annotations))

# There is no need to perform the verbose 'in' checking you may be familiar with when
# exploring a Model object. Accessing Model attributes will always return a value. If
# any component of a path into the object does not exist in the underlying model, the
# singleton 'Missing' will be returned.

if project.model.metadata.annotations.myannotation is oc.Missing:
    print("This object has not been annotated yet")

# If a field in the model contains special characters, use standard Python notation
# to access the key instead of dot notation.
if project.model.metadata.annotations['my-annotation'] is oc.Missing:
    print("This object has not been annotated yet")

# For debugging, you can always see the state of the underlying model by printing the
# APIObject as JSON.
print('{}'.format(project.as_json()))

# Or get a deep copy as a dict. Changes made to this dict will not affect the APIObject.
d = project.as_dict()

# Model objects also simplify looking through kubernetes style lists. For example, can_match
# returns True if the modeled list contains an object with the subset of attributes specified.
# In this example, we are checking whether a node's kubelet is reporting Ready:
oc.selector('node/alpha').object().model.status.conditions.can_match(
    {
        'type': 'Ready',
        'status': "True",
    }
)

# can_match can also ensure nested objects and lists are present within a resource. Several
# of these types of checks are already implemented in the openshift.status module.
def is_route_admitted(apiobj):
    return apiobj.model.status.can_match({
        'ingress': [
            {
                'conditions': [
                    {
                        'type': 'Admitted',
                        'status': 'True',
                    }
                ]
            }
        ]
    })
```
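
A helper like `is_route_admitted` can then be applied to any selected route. A minimal sketch, assuming
a route named `console` exists in the current namespace (substitute any route you have):

```python
# 'route/console' is a hypothetical name; pick a route present in your namespace.
route = oc.selector('route/console').object()
if is_route_admitted(route):
    print('Route {} has been admitted'.format(route.name()))
```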


### Making changes to APIObjects
```python
# APIObject exposes simple interfaces to delete and patch the resource it represents.
# But, more interestingly, you can make detailed changes to the model and apply those
# changes to the API.

project.model.metadata.labels['my_label'] = 'myvalue'
project.apply()

# If modifying the underlying API resources could be contentious, use the more robust
# modify_and_apply method which can retry the operation multiple times -- refreshing
# with the current object state between failures.

# First, define a function that will make changes to the model.
def make_model_change(apiobj):
    apiobj.model.data['somefile.yaml'] = 'wyxz'
    return True

# modify_and_apply will call the function and attempt to apply the modified model to
# the API if the function returns True. If the apply is rejected by the API,
# modify_and_apply will pull the latest object content, call make_model_change again,
# and try the apply again up to the specified retry count.
# (`configmap` here is assumed to be an APIObject for a ConfigMap, e.g. oc.selector('configmap/mycfg').object().)
configmap.modify_and_apply(make_model_change, retries=5)


# For best results, ensure the function passed to modify_and_apply is idempotent:

def set_unmanaged_in_cvo(apiobj):
    desired_entry = {
        'group': 'config.openshift.io/v1',
        'kind': 'ClusterOperator',
        'name': 'openshift-samples',
        'unmanaged': True,
    }

    if apiobj.model.spec.overrides.can_match(desired_entry):
        # No change required
        return False

    if not apiobj.model.spec.overrides:
        apiobj.model.spec.overrides = []

    # `context` is assumed to be a progress/reporting helper provided by the surrounding application.
    context.progress('Attempting to disable CVO interest in openshift-samples operator')
    apiobj.model.spec.overrides.append(desired_entry)
    return True

result, changed = oc.selector('clusterversion.config.openshift.io/version').object().modify_and_apply(set_unmanaged_in_cvo)
if changed:
    context.report_change('Instructed CVO to ignore openshift-samples operator')

```


### Running within a Pod
It is simple to use the API within a Pod. The `oc` binary detects that it is running within a container
and automatically uses the Pod's serviceaccount token/cacert.
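
For example, a script baked into a container image needs no login or kubeconfig handling at all. A
minimal sketch, assuming the Pod's serviceaccount is permitted to list pods in its namespace:

```python
#!/usr/bin/python
import openshift_client as oc

# Inside the Pod, `oc` picks up the mounted serviceaccount token/cacert automatically.
print('Running as: {}'.format(oc.whoami()))
print('Pods in this namespace: {}'.format(oc.selector('pods').qnames()))
```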

### Tracking oc invocations
It is good practice to set up at least one tracking context within your application so that
you will be able to easily analyze what `oc` invocations were made on your behalf and the result
of those operations. *Note that details about all `oc` invocations performed within the context will
be stored within the tracker. Therefore, do not use a single tracker for a continuously running
process -- it will consume memory for every oc invocation.*

```python
#!/usr/bin/python
import openshift_client as oc

with oc.tracking() as tracker:
    try:
        print('Current user: {}'.format(oc.whoami()))
    except:
        print('Error acquiring current username')
    
    # Print out details about the invocations made within this context.
    print(tracker.get_result())
```

In this case, the tracking output would look something like:
```json
{
    "status": 0, 
    "operation": "tracking", 
    "actions": [
        {
            "status": 0, 
            "verb": "project", 
            "references": {}, 
            "in": null, 
            "out": "aos-cd\n", 
            "err": "", 
            "cmd": [
                "oc", 
                "project", 
                "-q"
            ], 
            "elapsed_time": 0.15344810485839844, 
            "internal": false, 
            "timeout": false, 
            "last_attempt": true
        }, 
        {
            "status": 0, 
            "verb": "whoami", 
            "references": {}, 
            "in": null, 
            "out": "aos-ci-jenkins\n", 
            "err": "", 
            "cmd": [
                "oc", 
                "whoami"
            ], 
            "elapsed_time": 0.6328380107879639, 
            "internal": false, 
            "timeout": false, 
            "last_attempt": true
        }
    ]
}
```

Alternatively, you can record actions yourself by passing an action_handler to the tracking
context manager. Your action handler will be invoked each time an `oc` invocation completes.

```python
def print_action(action):
    print('Performed: {} - status={}'.format(action.cmd, action.status))

with oc.tracking(action_handler=print_action):
    try:
        print('Current project: {}'.format(oc.get_project_name()))
        print('Current user: {}'.format(oc.whoami()))
    except:
        print('Error acquiring details about project/user')

```

### Time limits
Have a script you want to ensure succeeds or fails within a specific period of time? Use
a `timeout` context. Timeout contexts can be nested - if any timeout context expires, 
the current oc invocation will be killed. 

```python
#!/usr/bin/python
import openshift_client as oc

def node_is_ready(node):
    ready = node.model.status.conditions.can_match({
        'type': 'Ready',
        'status': 'True',
    })
    return ready


print("Waiting for up to 15 minutes for at least 6 nodes to be ready...")
with oc.timeout(15 * 60):
    oc.selector('nodes').until_all(6, success_func=node_is_ready)
    print("All detected nodes are reporting ready")
```        

You will be able to see in `tracking` context results that a timeout occurred for an affected
invocation. The `timeout` field will be set to `True`.
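
Because timeout contexts nest, you can also give an individual step a tighter budget than the overall
script. A sketch, reusing `node_is_ready` from the example above:

```python
with oc.timeout(15 * 60):  # upper bound for everything in this block
    print('Current user: {}'.format(oc.whoami()))
    with oc.timeout(2 * 60):  # tighter limit for just the readiness wait
        # whichever of the two timeouts expires first kills the oc invocation in progress
        oc.selector('nodes').until_all(1, success_func=node_is_ready)
```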

### Advanced contexts
If you are unable to use a KUBECONFIG environment variable or need fine-grained control over the
server/credentials you communicate with for each invocation, use openshift-client-python contexts. 
Contexts can be nested and cause oc invocations within them to use the most recently established 
context information.

```python
with oc.api_server('https:///....'):  # use the specified api server for nested oc invocations.
    
    with oc.token('abc..'):  # --server=... --token=abc... will be included in inner oc invocations.
        print("Current project: " + oc.get_project_name())
    
    with oc.token('def..'):  # --server=... --token=def... will be included in inner oc invocations.
        print("Current project: " + oc.get_project_name())
```

You can control the loglevel specified for `oc` invocations.
```python
with oc.loglevel(6):
   # all oc invocations within this context will be invoked with --loglevel=6
    oc...   
```

You can ask `oc` to skip TLS verification if necessary.
```python
with oc.tls_verify(enable=False):
   # all oc invocations within this context will be invoked with --insecure-skip-tls-verify
    oc...   
```

### Something missing?
Most common API interactions have abstractions, but if there is no openshift-client-python API
exposing the `oc` function you want to run, you can always use `oc.invoke` to directly pass arguments to 
an `oc` invocation on your host.

```python
# oc adm policy add-scc-to-user privileged -z my-sa-name
oc.invoke('adm', ['policy', 'add-scc-to-user', 'privileged', '-z', 'my-sa-name'])
```
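
If you need the command's output, capture the return value. A sketch, assuming `oc.invoke` returns the
same kind of `Result` object whose `out()` method appears in the exec example later in this document:

```python
# Capture the raw output of an arbitrary verb (here, `oc version`).
result = oc.invoke('version')
print(result.out())
```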

### Running oc on a bastion host

Is your oc binary on a remote host? No problem. You can run all CLI interactions over SSH using the client_host
context. Before running code like the following, you will need to load your ssh agent with a key
appropriate for the target client host.

```python
with oc.client_host(hostname="my.cluster.com", username="root", auto_add_host=True):
    # oc invocations will take place on my.cluster.com host as the root user.
    print("Current project: " + oc.get_project_name())
```

Using this model, your Python script will run exactly where you launch it, but all oc invocations will
occur on the remote host.

### Gathering reports and logs with selectors

Various objects within OpenShift have logs associated with them:
- pods
- deployments
- daemonsets
- statefulsets
- builds
- etc.

A selector can gather logs from the pods associated with each of these kinds of objects (and from each container
within those pods). Each log will be a separate entry in the dictionary returned.

```python
# Print logs for all pods associated with all daemonsets & deployments in openshift-monitoring namespace.
with oc.project('openshift-monitoring'):
    for k, v in oc.selector(['daemonset', 'deployment']).logs(tail=500).items():
        print('Container: {}\n{}\n\n'.format(k, v))
```

The above example would output something like:
```
Container: openshift-monitoring:pod/node-exporter-hw5r5(node-exporter)
time="2018-10-22T21:07:36Z" level=info msg="Starting node_exporter (version=0.16.0, branch=, revision=)" source="node_exporter.go:82"
time="2018-10-22T21:07:36Z" level=info msg="Enabled collectors:" source="node_exporter.go:90"
time="2018-10-22T21:07:36Z" level=info msg=" - arp" source="node_exporter.go:97"
...
```

Note that these logs are held in memory. Use tail or other available method parameters to ensure 
predictable and efficient results.

To simplify even further, you can ask the library to pretty-print the logs for you:
```python
oc.selector(['daemonset', 'deployment']).print_logs()
```

And to quickly pull together significant diagnostic data on selected objects, use `report()` or `print_report()`. 
A report includes the following information for each selected object, if available:
- `object` - The current state of the object.
- `describe` - The output of describe on the object.
- `logs` - If applicable, a map of logs -- one for each container associated with the object.

```python
# Pretty-print a detailed set of data about all deploymentconfigs, builds, and configmaps in the
# current namespace context.
oc.selector(['dc', 'build', 'configmap']).print_report()
```
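
If you would rather inspect the data programmatically than print it, `report()` is available. A sketch,
assuming it returns a dictionary keyed by qualified name with the per-object entries described above:

```python
# Shape of the returned data is assumed from the description above.
for qname, info in oc.selector('dc').report().items():
    print('{} has {} log stream(s)'.format(qname, len(info.get('logs', {}))))
```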

### Advanced verbs:

Running oc exec on a pod.
```python
result = oc.selector('pod/alertmanager-main-0').object().execute(
    ['cat'],
    container_name='alertmanager',
    stdin='stdin for cat',
)
print(result.out())
```

Finding all pods running on a node:
```python
with oc.client_host():
    for node_name in oc.selector('nodes').qnames():
        print('Pods running on node: {}'.format(node_name))
        for pod_obj in oc.get_pods_by_node(node_name):
            print('  {}'.format(pod_obj.fqname()))
```

Example output:
```
...
Pods running on node: node/ip-172-31-18-183.ca-central-1.compute.internal
  72-sus:pod/sus-1-vgnmx
  ameen-blog:pod/ameen-blog-2-t68qn
  appejemplo:pod/ejemplo-1-txdt7
  axxx:pod/mysql-5-lx2bc
...
```

## Examples

- [Some unit tests](examples/cluster_tests.py)

## Environment Variables
To allow openshift-client-python applications to be portable between environments without needing to be modified, 
you can specify many default contexts in the environment. 

### Defaults when invoking `oc`
Establishing explicit contexts within an application will override these environment defaults.
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH` - default path to use when invoking `oc`
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH` - default `--kubeconfig` argument
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_API_SERVER` - default `--server` argument
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_CA_CERT_PATH` - default `--cacert` argument
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_PROJECT` - default `--namespace` argument
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_LOGLEVEL` - default `--loglevel` argument
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SKIP_TLS_VERIFY` - default `--insecure-skip-tls-verify`
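
These variables are ordinarily exported in the shell before launching your application. As a sketch, the
equivalent from Python, setting two of them (the values are hypothetical) before importing the library:

```python
import os

# Hypothetical defaults; exporting these in the shell before launch is equivalent.
os.environ['OPENSHIFT_CLIENT_PYTHON_DEFAULT_PROJECT'] = 'openshift-monitoring'
os.environ['OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH'] = '/usr/local/bin/oc'

import openshift_client as oc

# Unless overridden by an explicit context (e.g. oc.project(...)), invocations now
# default to --namespace=openshift-monitoring and use the specified oc binary.
print(oc.selector('pods').qnames())
```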

### Master timeout
Defines an implicit outer timeout(..) context for the entire application. This allows you to ensure
that an application terminates within a reasonable time, even if the author of the application has
not included explicit timeout contexts. Like any `timeout` context, this value is not overridden
by subsequent `timeout` contexts within the application. It provides an upper bound for the entire
application's oc interactions.

- `OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT` 
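
A sketch, assuming the value is interpreted in seconds like the `timeout` contexts above:

```python
import os

# Cap all oc interactions made by this process at 30 minutes, even where the code
# contains no explicit oc.timeout(...) contexts (value assumed to be in seconds).
os.environ['OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT'] = str(30 * 60)

import openshift_client as oc
```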

### SSH Client Host
In some cases, it is desirable to run an openshift-client-python application using a local `oc` binary and 
in other cases, the `oc` binary resides on a remote client. Encoding this decision in the application
itself is unnecessary.

Simply wrap your application in a `client_host` context without arguments. This will try to pull
client host information from environment variables if they are present. If they are not present,
`oc` invocations will execute on the local host.

For example, the following application will ssh to `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` if it is defined
in the environment. Otherwise, `oc` interactions will be executed on the host running the python application.

```python
with oc.client_host():  # if OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME is not defined in the environment, this is a no-op
    print('Found nodes: {}'.format(oc.selector('nodes').qnames()))
```

- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` - The hostname on which the `oc` binary resides
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_USERNAME` - Username to use for the ssh connection (optional)
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_PORT` - SSH port to use (optional; defaults to 22)
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_AUTO_ADD` - Defaults to `false`. If set to `true`, unknown hosts will automatically be trusted.
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_LOAD_SYSTEM_HOST_KEYS` - Defaults to `true`. If true, the local known hosts information will be used.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "openshift-client",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=2.7",
    "maintainer_email": "Brad Williams <brawilli@redhat.com>",
    "keywords": "OpenShift",
    "author": null,
    "author_email": "Justin Pierce <jupierce@redhat.com>",
    "download_url": "https://files.pythonhosted.org/packages/15/a0/52cfd08e7ba81c03a70400e173448e600f3ae01c70954423594d3ea12aa8/openshift-client-2.0.4.tar.gz",
    "platform": null,
    "description": "# Openshift Python Client\n<!-- Install doctoc with `npm install -g doctoc`  then `doctoc README.md --github` -->\n\n<!-- START doctoc generated TOC please keep comment here to allow auto update -->\n<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->\n**Table of Contents**  *generated with [DocToc](https://github.com/thlorenz/doctoc)*\n\n- [Overview](#overview)\n- [Reader Prerequisites](#reader-prerequisites)\n- [Setup](#setup)\n  - [Prerequisites](#prerequisites)\n  - [Installation Instructions](#installation-instructions)\n    - [Using PIP](#using-pip)\n    - [For development](#for-development)\n- [Usage](#usage)\n  - [Quickstart](#quickstart)\n  - [Selectors](#selectors)\n  - [APIObjects](#apiobjects)\n  - [Making changes to APIObjects](#making-changes-to-apiobjects)\n  - [Running within a Pod](#running-within-a-pod)\n  - [Tracking oc invocations](#tracking-oc-invocations)\n  - [Time limits](#time-limits)\n  - [Advanced contexts](#advanced-contexts)\n  - [Something missing?](#something-missing)\n  - [Running oc on a bastion host](#running-oc-on-a-bastion-host)\n  - [Gathering reports and logs with selectors](#gathering-reports-and-logs-with-selectors)\n  - [Advanced verbs:](#advanced-verbs)\n- [Examples](#examples)\n- [Environment Variables](#environment-variables)\n  - [Defaults when invoking `oc`](#defaults-when-invoking-oc)\n  - [Master timeout](#master-timeout)\n  - [SSH Client Host](#ssh-client-host)\n\n<!-- END doctoc generated TOC please keep comment here to allow auto update -->\n\n## Overview\nThe [openshift-client-python](https://www.github.com/openshift/openshift-client-python) library aims to provide a readable, concise, comprehensive, and fluent\nAPI for rich interactions with an [OpenShift](https://www.openshift.com) cluster. Unlike other clients, this library exclusively uses the command\nline tool (oc) to achieve the interactions. This approach comes with important benefits and disadvantages when compared\nto other client libraries.\n\nPros:\n- No additional software needs to be installed on the cluster. If a system with python support can (1) invoke `oc`\nlocally OR (2) ssh to a host and invoke `oc`, you can use the library.\n- Portable. If you have python and `oc` working, you don't need to worry about OpenShift versions or machine architectures.\n- Custom resources are supported and treated just like any other resource. There is no need to generate code to support them.\n- Quick to learn. If you understand the `oc` command line interface, you can use this library.\n\nCons:\n- This API is not intended to implement something as complex as a controller. For example, it does not implement\nwatch functionality. If you can't imagine accomplishing your use case through CLI interactions, this API is probably \nnot the right starting point for it. \n- If you care about whether a REST API returns a particular error code, this API is probably not for you. Since it\nis based on the CLI, high level return codes are used to determine success or failure.\n\n## Reader Prerequisites\n* Familiarity with OpenShift [command line interface](https://docs.openshift.org/latest/cli_reference/basic_cli_operations.html)\nis highly encouraged before exploring the API's features. The API leverages the [oc](https://docs.openshift.org/latest/cli_reference/index.html)\nbinary and, in many cases, passes method arguments directly on to the command line. 
This document cannot, therefore,\nprovide a complete description of all possible OpenShift interactions -- the user may need to reference\nthe CLI documentation to find the pass-through arguments a given interaction requires.\n\n* A familiarity with Python is assumed.\n\n## Setup\n### Prerequisites\n1. Download and install the OpenShift [command-line Tools](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) needed to access your OpenShift cluster.\n\n### Installation Instructions\n\n#### Using PIP\n1. Install the `openshift-client` module from PyPI.\n    ```bash\n    sudo pip install openshift-client\n    ```\n\n#### For development\n1. Git clone https://github.com/openshift/openshift-client-python.git (or your fork).\n2. Add required libraries\n    ```bash\n    sudo pip install -r requirements.txt\n    ```\n3. Append ./packages to your PYTHONPATH environment variable (e.g. export PYTHONPATH=$(pwd)/packages:$PYTHONPATH).\n4. Write and run your python script!\n\n## Usage\n\n### Quickstart\nAny standard Python application should be able to use the API if it imports the openshift package. The simplest\npossible way to begin using the API is login to your target cluster before running your first application.\n\nCan you run `oc project` successfully from the command line? Then write your app!\n\n```python\n#!/usr/bin/python\nimport openshift_client as oc\n\nprint('OpenShift client version: {}'.format(oc.get_client_version()))\nprint('OpenShift server version: {}'.format(oc.get_server_version()))\n\n# Set a project context for all inner `oc` invocations and limit execution to 10 minutes\nwith oc.project('openshift-infra'), oc.timeout(10*60):\n    # Print the list of qualified pod names (e.g. ['pod/xyz', 'pod/abc', ...]  in the current project\n    print('Found the following pods in {}: {}'.format(oc.get_project_name(), oc.selector('pods').qnames()))\n    \n    # Read in the current state of the pod resources and represent them as python objects\n    for pod_obj in oc.selector('pods').objects():\n        \n        # The APIObject class exposes several convenience methods for interacting with objects\n        print('Analyzing pod: {}'.format(pod_obj.name()))\n        pod_obj.print_logs(timestamps=True, tail=15)\n    \n        # If you need access to the underlying resource definition, get a Model instance for the resource\n        pod_model = pod_obj.model\n        \n        # Model objects enable dot notation and allow you to navigate through resources\n        # to an arbitrary depth without checking if any ancestor elements exist.\n        # In the following example, there is no need for boilerplate like:\n        #    `if .... 'ownerReferences' in pod_model['metadata'] ....`\n        # Fields that do not resolve will always return oc.Missing which \n        # is a singleton and can also be treated as an empty dict.\n        for owner in pod_model.metadata.ownerReferences:  # ownerReferences == oc.Missing if not present in resource\n            # elements of a Model are also instances of Model or ListModel\n            if owner.kind is not oc.Missing:  # Compare as singleton\n                print('  pod owned by a {}'.format(owner.kind))  # e.g. pod was created by a StatefulSet\n\n```\n\n### Selectors\nSelectors are a central concept used by the API to interact with collections\nof OpenShift resources. As the name implies, a \"selector\" selects zero or\nmore resources on a server which satisfy user specified criteria. 
An apt\nmetaphor for a selector might be a prepared SQL statement which can be\nused again and again to select rows from a database.\n\n```python\n# Create a selector which selects all projects.\nproject_selector = oc.selector(\"projects\")\n\n# Print the qualified name (i.e. \"kind/name\") of each resource selected.\nprint(\"Project names: \" + project_selector.qnames())\n\n# Count the number of projects on the server.\nprint(\"Number of projects: \" + project_selector.count_existing())\n\n# Selectors can also be created with a list of names.\nsa_selector = oc.selector([\"serviceaccount/deployer\", \"serviceaccount/builder\"])\n\n# Performing an operation will act on all selected resources. In this case,\n# both serviceaccounts are labeled.\nsa_selector.label({\"mylabel\" : \"myvalue\"})\n\n# Selectors can also select based on kind and labels.\nsa_label_selector = oc.selector(\"sa\", labels={\"mylabel\":\"myvalue\"})\n\n# We should find the service accounts we just labeled.\nprint(\"Found labeled serviceaccounts: \" + sa_label_selector.names())\n\n# Create a selector for a set of kinds.\nprint(oc.selector(['dc', 'daemonset']).describe())\n```\n\nThe output should look something like this:\n\n```\nProject names: [u'projects/default', u'projects/kube-system', u'projects/myproject', u'projects/openshift', u'projects/openshift-infra', u'projects/temp-1495937701365', u'projects/temp-1495937860505', u'projects/temp-1495937908009']\nNumber of projects: 8\nFound labeled serviceaccounts: [u'serviceaccounts/builder', u'serviceaccounts/deployer']\n```\n\n### APIObjects\n\nSelectors allow you to perform \"verb\" level operations on a set of objects, but\nwhat if you want to interact objects at a schema level?\n\n```python\nprojects_sel = oc.selector(\"projects\")\n\n# .objects() will perform the selection and return a list of APIObjects\n# which model the selected resources.\nprojects = projects_sel.objects()\n\nprint(\"Selected \" + len(projects) + \" projects\")\n\n# Let's store one of the project APIObjects for easy access.\nproject = projects[0]\n\n# The APIObject exposes methods providing simple access to metadata and common operations.\nprint('The project is: {}/{}'.format(project.kind(), project.name()))\nproject.label({ 'mylabel': 'myvalue' })\n\n# And the APIObject allow you to interact with an object's data via the 'model' attribute.\n# The Model is similar to a standard dict, but also allows dot notation to access elements\n# of the structured data.\nprint('Annotations:\\n{}\\n'.format(project.model.metadata.annotations))\n\n# There is no need to perform the verbose 'in' checking you may be familiar with when\n# exploring a Model object. Accessing Model attributes will always return a value. If the\n# any component of a path into the object does not exist in the underlying model, the\n# singleton 'Missing' will be returned.\n\nif project.model.metadata.annotations.myannotation is oc.Missing:\n    print(\"This object has not been annotated yet\")\n\n# If a field in the model contains special characters, use standard Python notation\n# to access the key instead of dot notation.\nif project.model.metadata.annotations['my-annotation'] is oc.Missing:\n    print(\"This object has not been annotated yet\")\n\n# For debugging, you can always see the state of the underlying model by printing the\n# APIObject as JSON.\nprint('{}'.format(project.as_json()))\n\n# Or getting deep copy dict. 
Changes made to this dict will not affect the APIObject.\nd = project.as_dict()\n\n# Model objects also simplify looking through kubernetes style lists. For example, can_match\n# returns True if the modeled list contains an object with the subset of attributes specified.\n# If this example, we are checking if the a node's kubelet is reporting Ready:\noc.selector('node/alpha').object().model.status.conditions.can_match(\n    {\n        'type': 'Ready',\n        'status': \"True\",\n    }\n)\n\n# can_match can also ensure nest objects and list are present within a resource. Several\n# of these types of checks are already implemented in the openshift.status module.\ndef is_route_admitted(apiobj):\n    return apiobj.model.status.can_match({\n        'ingress': [\n            {\n                'conditions': [\n                    {\n                        'type': 'Admitted',\n                        'status': 'True',\n                    }\n                ]\n            }\n        ]\n    })\n```\n\n\n### Making changes to APIObjects\n```python\n# APIObject exposes simple interfaces to delete and patch the resource it represents.\n# But, more interestingly, you can make detailed changes to the model and apply those\n# changes to the API.\n\nproject.model.metadata.labels['my_label'] = 'myvalue'\nproject.apply()\n\n# If modifying the underlying API resources could be contentious, use the more robust\n# modify_and_apply method which can retry the operation multiple times -- refreshing\n# with the current object state between failures.\n\n# First, define a function that will make changes to the model.\ndef make_model_change(apiobj):\n    apiobj.model.data['somefile.yaml'] = 'wyxz'\n    return True\n\n# modify_and_apply will call the function and attempt to apply its changes to the model\n# if it returns True. If the apply is rejected by the API, the function will pull\n# the latest object content, call make_model_change again, and try the apply again\n# up to the specified retry account.\nconfigmap.modify_and_apply(make_model_change, retries=5)\n\n\n# For best results, ensure the function passed to modify_and_apply is idempotent:\n\ndef set_unmanaged_in_cvo(apiobj):\n    desired_entry = {\n        'group': 'config.openshift.io/v1',\n        'kind': 'ClusterOperator',\n        'name': 'openshift-samples',\n        'unmanaged': True,\n    }\n\n    if apiobj.model.spec.overrides.can_match(desired_entry):\n        # No change required\n        return False\n\n    if not apiobj.model.spec.overrides:\n        apiobj.model.spec.overrides = []\n\n    context.progress('Attempting to disable CVO interest in openshift-samples operator')\n    apiobj.model.spec.overrides.append(desired_entry)\n    return True\n\nresult, changed = oc.selector('clusterversion.config.openshift.io/version').object().modify_and_apply(set_unmanaged_in_cvo)\nif changed:\n    context.report_change('Instructed CVO to ignore openshift-samples operator')\n\n```\n\n\n### Running within a Pod\nIt is simple to use the API within a Pod. The `oc` binary automatically\ndetects it is running within a container and automatically uses the Pod's serviceaccount token/cacert.\n\n### Tracking oc invocations\nIt is good practice to setup at least one tracking context within your application so that\nyou will be able to easily analyze what `oc` invocations were made on your behalf and the result\nof those operations. *Note that details about all `oc` invocations performed within the context will\nbe stored within the tracker. 
Therefore, do not use a single tracker for a continuously running\nprocess -- it will consume memory for every oc invocation.*\n\n```python\n#!/usr/bin/python\nimport openshift_client as oc\n\nwith oc.tracking() as tracker:\n    try:\n        print('Current user: {}'.format(oc.whoami()))\n    except:\n        print('Error acquiring current username')\n    \n    # Print out details about the invocations made within this context.\n    print(tracker.get_result())\n```\n\nIn this case, the tracking output would look something like:\n```json\n{\n    \"status\": 0, \n    \"operation\": \"tracking\", \n    \"actions\": [\n        {\n            \"status\": 0, \n            \"verb\": \"project\", \n            \"references\": {}, \n            \"in\": null, \n            \"out\": \"aos-cd\\n\", \n            \"err\": \"\", \n            \"cmd\": [\n                \"oc\", \n                \"project\", \n                \"-q\"\n            ], \n            \"elapsed_time\": 0.15344810485839844, \n            \"internal\": false, \n            \"timeout\": false, \n            \"last_attempt\": true\n        }, \n        {\n            \"status\": 0, \n            \"verb\": \"whoami\", \n            \"references\": {}, \n            \"in\": null, \n            \"out\": \"aos-ci-jenkins\\n\", \n            \"err\": \"\", \n            \"cmd\": [\n                \"oc\", \n                \"whoami\"\n            ], \n            \"elapsed_time\": 0.6328380107879639, \n            \"internal\": false, \n            \"timeout\": false, \n            \"last_attempt\": true\n        }\n    ]\n}\n```\n\nAlternatively, you can record actions yourself by passing an action_handler to the tracking \ncontextmanager. Your action handler will be invoked each time an `oc` invocation completes.\n\n```python\ndef print_action(action):\n    print('Performed: {} - status={}'.format(action.cmd, action.status))\n\nwith oc.tracking(action_handler=print_action):\n    try:\n        print('Current project: {}'.format(oc.get_project_name()))\n        print('Current user: {}'.format(oc.whoami()))\n    except:\n        print('Error acquiring details about project/user')\n\n```\n\n### Time limits\nHave a script you want to ensure succeeds or fails within a specific period of time? Use\na `timeout` context. Timeout contexts can be nested - if any timeout context expires, \nthe current oc invocation will be killed. \n\n```python\n#!/usr/bin/python\nimport openshift_client as oc\n\ndef node_is_ready(node):\n    ready = node.model.status.conditions.can_match({\n        'type': 'Ready',\n        'status': 'True',\n    })\n    return ready\n\n\nprint(\"Waiting for up to 15 minutes for at least 6 nodes to be ready...\")\nwith oc.timeout(15 * 60):\n    oc.selector('nodes').until_all(6, success_func=node_is_ready)\n    print(\"All detected nodes are reporting ready\")\n```        \n\nYou will be able to see in `tracking` context results that a timeout occurred for an affected\ninvocation. The `timeout` field will be set to `True`.\n\n### Advanced contexts\nIf you are unable to use a KUBECONFIG environment variable or need fine grained control over the \nserver/credentials you communicate with for each invocation, use openshift-client-python contexts. \nContexts can be nested and cause oc invocations within them to use the most recently established \ncontext information.\n\n```python\nwith oc.api_server('https:///....'):  # use the specified api server for nested oc invocations.\n    \n    with oc.token('abc..'):  # --server=... 
--token=abc... will be included in inner oc invocations.\n        print(\"Current project: \" + oc.get_project_name())\n    \n    with oc.token('def..'):  # --server=... --token=def... will be included in inner oc invocations.\n        print(\"Current project: \" + oc.get_project_name())\n```\n\nYou can control the loglevel specified  for `oc` invocations.\n```python\nwith oc.loglevel(6):\n   # all oc invocations within this context will be invoked with --loglevel=6\n    oc...   \n```\n\nYou ask `oc` to skip TLS verification if necessary.\n```python\nwith oc.tls_verify(enable=False):\n   # all oc invocations within this context will be invoked with --insecure-skip-tls-verify\n    oc...   \n```\n\n### Something missing?\nMost common API iterations have abstractions, but if there is no openshift-client-python API \nexposing the `oc` function you want to run, you can always use `oc.invoke` to directly pass arguments to \nan `oc` invocation on your host.\n\n```python\n# oc adm policy add-scc-to-user privileged -z my-sa-name\noc.invoke('adm', ['policy', 'add-scc-to-user', 'privileged', '-z', 'my-sa-name'])\n```\n\n### Running oc on a bastion host\n\nIs your oc binary on a remote host? No problem. Easily remote all CLI interactions over SSH using the client_host\ncontext. Before running this command, you will need to load your ssh agent up with a key\nappropriate to the target client host.\n\n```python\nwith openshift_client.client_host(hostname=\"my.cluster.com\", username=\"root\", auto_add_host=True):\n    # oc invocations will take place on my.cluster.com host as the root user.\n    print(\"Current project: \" + oc.get_project_name())\n```\n\nUsing this model, your Python script will run exactly where you launch it, but all oc invocations will\noccur on the remote host.\n\n### Gathering reports and logs with selectors\n\nVarious objects within OpenShift have logs associated with them:\n- pods\n- deployments\n- daemonsets\n- statefulsets\n- builds\n- etc..\n\nA selector can gather logs from pods associated with each (and for each container within those pods). Each\nlog will be a unique value in the dictionary returned.\n\n```python\n# Print logs for all pods associated with all daemonsets & deployments in openshift-monitoring namespace.\nwith oc.project('openshift-monitoring'):\n    for k, v in oc.selector(['daemonset', 'deployment']).logs(tail=500).iteritems():\n        print('Container: {}\\n{}\\n\\n'.format(k, v))\n```\n\nThe above example would output something like:\n```\nContainer: openshift-monitoring:pod/node-exporter-hw5r5(node-exporter)\ntime=\"2018-10-22T21:07:36Z\" level=info msg=\"Starting node_exporter (version=0.16.0, branch=, revision=)\" source=\"node_exporter.go:82\"\ntime=\"2018-10-22T21:07:36Z\" level=info msg=\"Enabled collectors:\" source=\"node_exporter.go:90\"\ntime=\"2018-10-22T21:07:36Z\" level=info msg=\" - arp\" source=\"node_exporter.go:97\"\n...\n```\n\nNote that these logs are held in memory. Use tail or other available method parameters to ensure \npredictable and efficient results.\n\nTo simplify even further, you can ask the library to pretty-print the logs for you:\n```python\noc.selector(['daemonset', 'deployment']).print_logs()\n```\n\nAnd to quickly pull together significant diagnostic data on selected objects, use `report()` or `print_report()`. 
\nA report includes the following information for each selected object, if available:\n- `object` - The current state of the object.\n- `describe` - The output of describe on the object.\n- `logs` - If applicable, a map of logs -- one of each container associated with the object. \n\n```python\n# Pretty-print a detail set of data about all deploymentconfigs, builds, and configmaps in the \n# current namespace context.\noc.selector(['dc', 'build', 'configmap']).print_report()\n```\n\n### Advanced verbs:\n\nRunning oc exec on a pod.\n```python\n    result = oc.selector('pod/alertmanager-main-0').object().execute(['cat'],\n                                                                     container_name='alertmanager',\n                                                                     stdin='stdin for cat')\n    print(result.out())\n```\n\nFinding all pods running on a node:\n```python\nwith oc.client_host():\n    for node_name in oc.selector('nodes').qnames():\n        print('Pods running on node: {}'.format(node_name))\n            for pod_obj in oc.get_pods_by_node(node_name):\n                print('  {}'.format(pod_obj.fqname()))\n```\n\nExample output:\n```\n...\nPods running on node: node/ip-172-31-18-183.ca-central-1.compute.internal\n  72-sus:pod/sus-1-vgnmx\n  ameen-blog:pod/ameen-blog-2-t68qn\n  appejemplo:pod/ejemplo-1-txdt7\n  axxx:pod/mysql-5-lx2bc\n...\n```\n\n## Examples\n\n- [Some unit tests](examples/cluster_tests.py)\n\n## Environment Variables\nTo allow openshift-client-python applications to be portable between environments without needing to be modified, \nyou can specify many default contexts in the environment. \n\n### Defaults when invoking `oc`\nEstablishing explicit contexts within an application will override these environment defaults.\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH` - default path to use when invoking `oc`\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH` - default `--kubeconfig` argument\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_API_SERVER` - default `--server` argument\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_CA_CERT_PATH` - default `--cacert` argument\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_PROJECT` - default `--namespace` argument\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_LOGLEVEL` - default `--loglevel` argument\n- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SKIP_TLS_VERIFY` - default `--insecure-skip-tls-verify`\n\n### Master timeout\nDefines an implicit outer timeout(..) context for the entire application. This allows you to ensure\nthat an application terminates within a reasonable time, even if the author of the application has\nnot included explicit timeout contexts. Like any `timeout` context, this value is not overridden\nby subsequent `timeout` contexts within the application. It provides an upper bound for the entire\napplication's oc interactions.\n\n- `OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT` \n\n### SSH Client Host\nIn some cases, it is desirable to run an openshift-client-python application using a local `oc` binary and \nin other cases, the `oc` binary resides on a remote client. Encoding this decision in the application\nitself is unnecessary.\n\nSimply wrap you application in a `client_host` context without arguments. This will try to pull \nclient host information from environment variables if they are present. If they are not present,\nthe application will execute on the local host.\n\nFor example, the following application will ssh to `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` if it is defined\nin the environment. 
### SSH Client Host
In some cases, it is desirable to run an openshift-client-python application using a local `oc` binary, and
in other cases the `oc` binary resides on a remote host. Encoding this decision in the application itself
is unnecessary.

Simply wrap your application in a `client_host` context without arguments. This will try to pull client host
information from environment variables if they are present. If they are not present, the application will
execute `oc` on the local host.

For example, the following application will ssh to `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` if it is defined
in the environment. Otherwise, `oc` interactions will be executed on the host running the Python application.

```python
with oc.client_host():  # if OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME is not defined in the environment, this is a no-op
    print('Found nodes: {}'.format(oc.selector('nodes').qnames()))
```

- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME` - The hostname on which the `oc` binary resides
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_USERNAME` - Username to use for the ssh connection (optional)
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_PORT` - SSH port to use (optional; defaults to 22)
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_AUTO_ADD` - Defaults to `false`. If set to `true`, unknown hosts will automatically be trusted.
- `OPENSHIFT_CLIENT_PYTHON_DEFAULT_LOAD_SYSTEM_HOST_KEYS` - Defaults to `true`. If `true`, the local known-hosts information will be used.
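Tying these variables together: the same script can run unchanged against a local `oc` binary or through an
SSH bastion, depending only on what is exported before launch. A minimal sketch; the hostname below is a
placeholder, and the import alias follows the earlier examples:

```python
# Hypothetical environment for a remote run; with none of these exported,
# the same code simply invokes oc on the local host:
#
#   export OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME=bastion.example.com   # placeholder hostname
#   export OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_USERNAME=root
#   export OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_AUTO_ADD=true

import openshift_client as oc

with oc.client_host():  # remote over SSH if the variables above are set, local otherwise
    print('Current project: {}'.format(oc.get_project_name()))
```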
    "bugtrack_url": null,
    "license": " Apache License Version 2.0, January 2004 http://www.apache.org/licenses/  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION  1. Definitions.  \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.  \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.  \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.  \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.  \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.  \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.  \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).  \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.  \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"  \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.  2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.  3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:  (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and  (b) You must cause any modified files to carry prominent notices stating that You changed the files; and  (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and  (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.  You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.  5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.  7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.  END OF TERMS AND CONDITIONS  Copyright 2020 Red Hat, Inc.  Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at  http://www.apache.org/licenses/LICENSE-2.0  Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ",
    "summary": "OpenShift python client",
    "version": "2.0.4",
    "project_urls": {
        "Homepage": "https://github.com/openshift/openshift-client-python",
        "Issues": "https://github.com/openshift/openshift-client-python/issues"
    },
    "split_keywords": [
        "openshift"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5bdc8683adfd63fecaadd32d2df466df9a7cd3bf36b2496246c7f27c3f23b4e9",
                "md5": "e5507582180c8508befeb24b867893ff",
                "sha256": "c69d30e40752b468d4440d058d43dfba7a06f6c7c8ca630debab46879ed9d065"
            },
            "downloads": -1,
            "filename": "openshift_client-2.0.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e5507582180c8508befeb24b867893ff",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=2.7",
            "size": 78928,
            "upload_time": "2024-03-27T12:21:29",
            "upload_time_iso_8601": "2024-03-27T12:21:29.732841Z",
            "url": "https://files.pythonhosted.org/packages/5b/dc/8683adfd63fecaadd32d2df466df9a7cd3bf36b2496246c7f27c3f23b4e9/openshift_client-2.0.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "15a052cfd08e7ba81c03a70400e173448e600f3ae01c70954423594d3ea12aa8",
                "md5": "56bddb6e95e211c66c63d91940f4c600",
                "sha256": "3fac20a093699f7a60fe79a1ba98dfb4f6e7fff09ffcb299b68439428e1e69c0"
            },
            "downloads": -1,
            "filename": "openshift-client-2.0.4.tar.gz",
            "has_sig": false,
            "md5_digest": "56bddb6e95e211c66c63d91940f4c600",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=2.7",
            "size": 82834,
            "upload_time": "2024-03-27T12:21:31",
            "upload_time_iso_8601": "2024-03-27T12:21:31.558662Z",
            "url": "https://files.pythonhosted.org/packages/15/a0/52cfd08e7ba81c03a70400e173448e600f3ae01c70954423594d3ea12aa8/openshift-client-2.0.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-03-27 12:21:31",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "openshift",
    "github_project": "openshift-client-python",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [],
    "lcname": "openshift-client"
}
        