
Building the OKD payload

· 15 min read

Over the last couple of months, we've been busy building a new OKD release on CentOS Stream CoreOS (SCOS), and we were able to present it at OpenShift Commons Detroit 2022.

While some of us created a Tekton pipeline that can build SCOS on a Kind cluster, others were tediously building the OKD payload with Prow, while also creating a Tekton pipeline for building that payload on any OpenShift or OKD cluster.

The goal of this effort is to enable and facilitate community collaboration and contributions, giving anybody the ability to do their own payload builds and run tests themselves.

This process has been difficult because OpenShift's Prow CI instance is not open to the public, so changes cannot easily be tested before a PR is submitted. Even after opening a PR, a non-Red Hatter needs a Red Hat engineer to add the /ok-to-test label in order to start Prow testing.

With the new Tekton pipelines, we are now providing a straightforward way for anybody to build and test their own changes first (or even create their own Stream entirely), and then present the results to the OKD Working Group, which will then expedite the review process on the PR.

In this article, I will shed some light on the building blocks of the OKD-on-SCOS payload and how it is built, both the Prow way and the Tekton way.

What's the payload?

Until now, the OKD payload, like the OpenShift payload, has been built by the ReleaseController in Prow.

The release-controller automatically builds OpenShift release images when new images are created for a given OpenShift release. It detects changes to an image stream, launches a job to build and push the release payload image using oc adm release new, and then runs zero or more ProwJobs against the artifacts generated by the payload.

A release image is nothing more than a ClusterVersionOperator (CVO) image with an extra layer containing the release-manifests folder. This folder contains the following (a sketch for inspecting it yourself follows the list):

  • image-references: a list of all known images with their SHA digests
  • YAML manifest files for each operator controlled by the CVO
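
If you want to inspect this layer yourself, the oc adm release extract command can dump the release-manifests folder to a local directory. A minimal sketch, reusing the example release pullspec shown further below (the target directory is arbitrary):

# Extract the release-manifests layer of a release image to a local directory.
RELEASE_IMAGE=quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740
oc adm release extract --from=${RELEASE_IMAGE} --to=./release-manifests

# image-references lists every payload image with its SHA digest
head ./release-manifests/image-references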

The list of images that is included in the release-manifests is calculated from the release image stream, taking:

  • all images with the label io.openshift.release.operator=true in that image stream,
  • plus any images referenced in the /manifests/image-references file within each of the images carrying this label (a sketch for inspecting this file follows the list).
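
As a hedged illustration, you can check which operand images a single operator contributes by extracting its /manifests/image-references file with oc image extract. The component name below is just an example, and not every payload image ships this file:

# Resolve a component image from the release, then extract its
# /manifests/image-references file (if present) to see the operands it references.
COMPONENT_IMAGE=$(oc adm release info --image-for=cluster-network-operator \
  quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740)

mkdir -p ./cno-manifests
oc image extract ${COMPONENT_IMAGE} --path /manifests/image-references:./cno-manifests/
cat ./cno-manifests/image-references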

As you can imagine, the list of images in a release can change from one release to the next, depending on:

  • new operators being delivered within the OpenShift release
  • existing operators adding or removing an operand image
  • operators previously included in the payload being removed so they can be delivered independently, through OLM instead.

In order to list the images contained in a release payload, run this command:

oc adm release info ${RELEASE_IMAGE_URL}

For example:

oc adm release info quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740 
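
If you also want the full pull spec (registry, repository and digest) of each component image, the --pullspecs flag can be added:

oc adm release info --pullspecs quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740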

Now that we've established what needs to be built, let's take a deeper look at how the OKD on SCOS payload is built.

Building OKD/SCOS the Prow way

The obvious way to build OKD on SCOS is to use Prow, THE Kubernetes-based CI/CD system that already builds OCP and OKD on FCOS today, and that upstream Kubernetes uses as well. :shrug:

For a new OKD release to land on the releases page, a whole bunch of Prow jobs have to run. Hang on! It's a long story...

ImageStreams

Let's start from the end 😉, and prepare a new ImageStream for OKD-on-SCOS images. This ImageStream (IS) is a placeholder for all images that form the OKD/SCOS payload.

For OKD on Fedora CoreOS (OKD/FCOS) it is named okd. For OKD/SCOS, this ImageStream is named okd-scos.

This ImageStream includes all payload images contained in the specific OKD release based on CentOS Stream CoreOS (SCOS).

Among these payload images, we distinguish:

  • Images that can be shared between OCP and OKD. These are built in Prow and mirrored into the okd-scos ImageStream.
  • Images that have to be specifically built for OKD/SCOS, which are directly tagged into the okd-scos ImageStream. This is the case for images that are specific to the underlying operating system, or contain RHEL packages. These are: the installer images, the machine-config-operator image, the machine-os-content that includes the base operating system OSTree, as well as the ironic image for provisioning bare-metal nodes, and a few other images.

Triggers for building most payload images

Now that we've got the recipient Image Stream for the OKD payload images, let's start building some payloads!

Take the Cluster Network Operator for example:
For this operator, the same image can be used in OCP CI and OKD releases. Most payload images fit into this case.

For such an image, the build is pretty straight forward. When a PR is filed for a GitHub repository that is part of a release payload:

  • The pre-submit jobs run. They essentially build the image and store it in an ImageStream in an ephemeral namespace, in order to run tests against several platforms (AWS, GCP, bare metal, Azure, etc.)

  • Once the tests are green and the PR is approved and merged, the post-submit jobs run. They essentially promote the built image to the appropriate release-specific ImageStream:

    • If the PR is for master, images are pushed to the ${next-release} ImageStream
    • If the PR is for release-${MAJOR}.${MINOR}, images are pushed to the ${MAJOR}.${MINOR} ImageStream

Next, the OCP release controller, which runs at every change to the ImageStream, mirrors all images from the ${MAJOR}.${MINOR} ImageStream to the scos-${MAJOR}.${MINOR} ImageStream.

As mentioned before, some of the images are not mirrored, and that brings us to the next section, on building those images that have content (whether code or manifests) specific to OKD.

Trigger for building the OKD-specific payload images

For the OKD-specific images, the CI process is a bit different: the image is built in the post-submit job and then directly promoted to the okd-scos IS, without going through the OCP-CI-to-OKD mirroring step. This is called a variant configuration. You can see this for the MachineConfigOperator, for example.

The built images land directly in the scos-${MAJOR}.${MINOR} ImageStream.

That is why there is no need for OCP's CI release controller to mirror these images from the CI ImageStream: during the post-submit phase, images are already built in parallel for OCP, OKD/FCOS and OKD/SCOS, and pushed respectively to ocp/$MAJOR.$MINOR, origin/$MAJOR.$MINOR and origin/scos-$MAJOR.$MINOR.

OKD release builds

Now the ImageStream scos-$MAJOR.$MINOR is being populated with payload images. With every new image tag, the release controller for OKD/SCOS will build a release image.

The ReleaseController ensures that OpenShift update payload images (aka release images) are created whenever an ImageStream representing the images in a release is updated.

Thanks to the release.openshift.io/config annotation on the scos-${MAJOR}.${MINOR} ImageStream, the controller will:

  1. Create a tag in the scos-${MAJOR}.${MINOR} ImageStream that uses the release name plus the current timestamp.
  2. Mirror all of the tags in the input ImageStream so that they can't be pruned.
  3. Launch a job in the job namespace to invoke oc adm release new from the mirror, pointing to the release tag created in step 1 (a hedged sketch of this command follows the list).
  4. If the job succeeds in pushing the tag, it sets the annotation release.openshift.io/phase = "Ready" on that tag, indicating that the release can be used by other steps. And that's how a new release appears at https://origin-release.ci.openshift.org/#4.13.0-0.okd-scos.
  5. The release state switches to "Verified" when the verification end-to-end test job succeeds.
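
Outside of the release controller, roughly the same step can be reproduced by hand with oc adm release new. The sketch below is only illustrative: the namespace, ImageStream name, release name and destination repository are assumptions, not the exact values used by the CI cluster.

# Roughly what the launched job does: assemble a release image from the tags
# of an input ImageStream and push it to a repository.
oc adm release new \
  --from-image-stream=scos-4.13 \
  -n origin \
  --name=4.13.0-0.okd-scos-example \
  --to-image=quay.io/my-org/okd-scos-release:4.13.0-0.okd-scos-example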

Building the Tekton way

Building with Prow has the advantage of being driven by new code being pushed to payload components, thus building fresh releases as the code of github.com/openshift evolves.

The problem is that Prow, along with all the clusters involved with it, the ImageStreams, etc., is not accessible to the OKD community outside of Red Hat. Also, users might be interested in building a custom OKD payload in their own environment, for example to experiment with exchanging components.

To remove this impediment, the OKD team has been working on the OKD Payload pipeline based on Tekton.

Building OKD payloads with Tekton can be done by cloning the okd-payload-pipeline repository. One extra advantage of this repository is the ability to see the list of components that form the OKD payload: in fact, the list under buildconfigs corresponds to the images in the final OKD payload. This list is currently manually synced with the list of OCP images on each release.

The pipeline is fairly simple. Take the build-from-scratch.yaml for example. It has 3 main tasks:

  • Build the base image and the builder image, with which all the payload images will be built
    • The builder image is a CentOS Stream 9 container image that includes all the dependencies needed to build payload components and is used as the build environment for them
    • The built binaries are then layered onto a CentOS Stream 9 base image, creating a payload component image.
    • The base image is shared across all the images in the release payload
  • Build payload images in batches (starting with the ones that don't have any dependencies)
  • Finally, once all OKD payload component images are in the image stream, the OKD release image is built in turn, using the oc adm release new command.

Triggers

For the moment, this pipeline has no triggers. It can be executed manually when needed. We are planning to automatically trigger the pipeline on a daily cadence.
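
As an illustration only, a manual run with the Tekton CLI could look like the sketch below; the pipeline name, namespace and parameter are hypothetical, so check the okd-payload-pipeline repository for the actual definitions:

# Hypothetical manual PipelineRun using the Tekton CLI (tkn).
tkn pipeline start okd-payload-build \
  --namespace okd-payload \
  --param release-name=4.13.0-0.okd-scos-example \
  --showlog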

Batch Build Task

With the set of BuildConfigs passed in its parameters, this task relies on an OpenShift client (oc) image, loops over the list of BuildConfigs calling oc start-build, and waits for all the builds to complete.
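
In shell terms, the task boils down to something like the following minimal sketch; the BuildConfig names are placeholders:

# Start each build in the batch and block until every build has finished.
# BuildConfig names are placeholders.
BUILD_CONFIGS="machine-config-operator cluster-network-operator"

for bc in ${BUILD_CONFIGS}; do
  oc start-build ${bc} --wait &
done
wait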

New Release Task

This task simply uses an OpenShift client image to call oc adm release new, which creates the release image from the release ImageStream (on the OKD/OpenShift cluster where this Tekton pipeline is running) and mirrors the release image, along with all the payload component images, to a registry configured in the task's parameters.
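
A rough shell equivalent of what the task invokes is sketched below; the namespace, release name and destination repositories are illustrative, and the real values come from the pipeline's parameters:

# Build a release image from the local "release" ImageStream, push it to an
# external registry, and mirror all payload component images alongside it.
oc adm release new \
  --from-image-stream=release \
  -n okd-payload \
  --to-image=quay.io/my-org/okd-scos-release:4.13.0-0.okd-scos-example \
  --mirror=quay.io/my-org/okd-scos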

BuildConfigs

As explained above, the OKD payload Tekton pipeline relies heavily on the buildconfigs folder. It contains one BuildConfig YAML file for each image included in the release payload.

Each BuildConfig simply uses the builder image to build the operator binary, invoking the correct Dockerfile in the operator's repository. Then, the binary is copied as a layer on top of an OKD base image, which is built in the preparatory task of the pipeline.

This process currently uses the OpenShift Builds API. We are planning to move these builds to the Shipwright Builds API in order to enable builds outside of OCP or OKD clusters.

Updating build configs

Upon deploying the Tekton OKD payload pipeline on an OKD (or OpenShift) cluster, Kustomize is used to do the following (a deployment sketch follows the list):

  • patch the BuildConfig files, adding TAGS to the build arguments according to the type of payload we want to build (based on FCOS, SCOS or any other custom stream)
  • patch the BuildConfig files, replacing references to the non-public registry.ci.openshift.org/ocp/builder builder images in the payload components' Dockerfiles with the builder image from the local ImageStream
  • set resource requests and limits if needed
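
Applying the kustomized manifests to the cluster is then a single command; the overlay path below is illustrative and depends on the repository's layout:

# Apply the kustomized BuildConfigs and pipeline manifests to the current cluster.
oc apply -k overlays/okd-scos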

Preparing for a new release

The procedure to prepare a new release is still a work in progress at the time of writing.

To build a new release, each BuildConfig file should be updated with the Git branch corresponding to that release.
In the future, the branch could be passed along via a kustomization, or in the parameters of the pipeline.

The list of images from a new OCP release (obtained through oc adm release info) must then be synced with the BuildConfigs present here (a hedged comparison sketch follows the list):

  • For any new image, a new BuildConfig file must be added
  • For any image removed from the OCP release, the corresponding BuildConfig file must be removed.
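
One way to spot these differences is to compare the image names reported by oc adm release info with the BuildConfig files in the repository. The sketch below assumes one <image-name>.yaml file per image under buildconfigs/, which may not match the repository's exact naming convention:

# List the image names in an OCP release and diff them against the BuildConfig
# file names. The awk filter and file naming convention are assumptions.
oc adm release info quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64 \
  | awk '/^  [a-z0-9-]+ +/{print $1}' | sort > /tmp/release-images.txt
ls buildconfigs/ | sed 's/\.yaml$//' | sort > /tmp/buildconfig-images.txt
diff /tmp/release-images.txt /tmp/buildconfig-images.txt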

Takeaways

What are our next steps?

In the coming weeks and months, you can expect lots of changes, especially as the OKD community is picking up usage of OKD/SCOS, and doing their own Tekton Pipeline runs:

  • Work to automate the OKD release procedure is in progress: automatically verifying payload image signatures, signing the release, and tagging it on GitHub.

The goal is to deliver a new OKD/SCOS release on a sprint (3-week) basis, and to provide both the OCP teams and the OKD community with a fresh release to test much earlier than was previously possible with the OCP release cadence.

  • For the moment, OKD/SCOS releases are only verified on AWS. To gain more confidence in our release payloads, we will expand the test matrix to other platforms such as GCP, vSphere and bare metal.
  • Enable GitOps on the Tekton pipeline repository, so that changes to the pipeline are automatically deployed on OperateFirst for the community to use the latest and greatest.
  • The OKD Working Group will be collaborating with the Mass Open Cloud to allow for deployments of test clusters on their bare-metal infrastructure.
  • The OKD Working Group will be publishing the Tekton Tasks and Pipelines used to build the SCOS operating system, as well as the OKD payload, to Tekton Hub and Artifact Hub.
  • The OKD operators Tekton pipeline will be used for community builds of optional OLM operators. A first OKD operator has already been built with it, and other operators are to follow, starting with the Pipelines operator, which has long been an ask from the community.
  • Additionally, we are working on multi-arch releases for both OKD/SCOS and OKD/FCOS.

New perspectives

Although the OKD team will still rely on Prow to build the payload images in the near future, the Tekton pipeline will start being used to finalize the release.

In addition, this Tekton pipeline has opened up new perspectives, even for OCP teams.

One such example is the OpenShift API team, who would like to use the Tekton pipeline to test API changes by building all components that depend on the OpenShift API from a given PR, creating an OKD release and testing it, thus getting very quick feedback on the impact of API changes on OKD (and later OCP) releases.

Another example is the possibility of building images on platforms other than OpenShift or OKD, replacing BuildConfigs with Shipwright builds, or why not plain docker build...

Whatever your favorite flavor is, we are looking forward to seeing the pipelines in action, increasing collaboration and improving our community feedback loop.

OKD Streams - Building the Next Generation of OKD together

· 9 min read

OKD is the community distribution of Kubernetes that powers Red Hat OpenShift. The OKD community has created reusable Tekton build pipelines on a shared Kubernetes cluster so that it can manage the build and release processes for OKD in the open.

With operate-first.cloud, hosted at massopen.cloud, the OKD community has launched a fully open source release pipeline that the community can participate in to help support and manage the release cycle itself. The OKD community is now able to build and release stable builds of OKD 4.12 on both Fedora CoreOS and the newly introduced CentOS Stream CoreOS. We are calling it OKD Streams.

New Patterns, New CI/CD Pipelines and a new CoreOS

Today we invite you into our OKD Streams initiative. An OKD Stream refers to a build, test, and release pipeline for any configuration of OKD, the open source Kubernetes distribution that powers OpenShift. The OKD Working Group is pleased to announce the availability of tooling and processes that will enable building and testing many configurations, or "streams". The OKD Working Group and Red Hat Engineering are now testing one such stream that runs an upstream version of RHEL9 via CentOS Stream CoreOS (‘SCOS’ for short) to improve our RHEL9 readiness signal for Red Hat OpenShift. It is the first of many OKD Streams that will enable developers inside and outside of Red Hat to easily experiment with and explore cloud native technologies. You can check out our MVP OKD-on-SCOS release here.

With this initiative, the OKD Working Group has embraced new patterns and built new partnerships. We have leveraged the concepts in the open source managed service ‘Operate First’ pattern, worked with the CentOS and CoreOS communities to build a pipeline for building SCOS, and applied new CI/CD technology (Tekton) to build a new OKD release build pipeline service. The MVP of OKD Streams, for example, is an SCOS-backed version of OKD, built with a Tekton pipeline managed by the OKD Working Group, that runs on AWS infrastructure managed by Operate First. Together we are unlocking innovations to get better (and earlier) release signals for Kubernetes, OCP and RHEL, and to enable the OKD community to get more deeply involved with the OKD build processes.

The OKD Working group wanted to make participation in all of these activities easier for all Cloud Native developers and this has been the motivating force behind the OKD Streams initiative.

From the ‘One Size Fits All’ to ‘Built to Order’

There are three main problems that both the OKD Working Group and Red Hat Engineering teams spend a lot of time thinking about:

  1. How do we improve our release signals for OpenShift, RHEL and CoreOS?
  2. How do we get features into the hands of our customers and partners faster?
  3. How do we enable engineers to experiment and innovate?

Previously, what we referred to as an ‘OKD’ release was the most recent release of OKD running on the latest stable release of Fedora CoreOS (FCOS for short). In actuality, we had a singular release pipeline that built a release of OKD with a bespoke version of FCOS. These releases of OKD gave us early signals for the impact of new operating system features that would eventually be landing in RHEL, where they surface in RHEL CoreOS (RHCOS). It was (and still is) a very good way for developers to experiment with OKD and explore its functionality.

The OKD community wanted to empower wider use of OKD for experimentation in more use cases, some requiring layering on additional resources, and others reducing the footprint for edge and local deployments. OKD has been stable enough for some to run production deployments. CERN’s OKD deployment on OpenStack, for example, is assembled with custom OKD build pipelines. The feedback from these OKD builds has been a source of inspiration for the OKD Streams initiative to enable more such use cases.

The OKD Streams initiative invites more community input and feedback into the project quickly, without interrupting the productized builds for OpenShift and OpenShift customers. We can experiment with new features that can then get pushed upstream into Kubernetes or downstream into the OpenShift product. We can reuse the Tekton build pipelines for building streams specific to HPC, OpenStack, bare metal, or whatever the payload customization needs to be for an organization.

Our goal is to make it simple for others to experiment.

We are experimenting too. The first OKD Streams ‘experiment’, built with the new Tekton build pipeline running on an Operate First AWS cluster, is OKD running on SCOS: a future version of OpenShift running on a near-future version of RHEL, leveraging CentOS Stream CoreOS. This will improve our RHEL9 readiness signal for OCP. Improved RHEL9 readiness signals with input from the community will showcase our work as we explore what the new OKD build service is going to mean for all of us.

Tekton Pipelines as the Building Blocks

Our new OKD Streams are built using Tekton pipelines, which makes it easier for us to explore building many different kinds of pipelines.

Tekton is a continuous integration and delivery (CI/CD) system that enables us to run tasks and pipelines in a composable and flexible manner. This fits in nicely with our OKD Streams initiative, where the focus is less on the artifacts that are produced than on the pipelines that build them.

While OKD as a payload remains the core focus of the OKD Working Group, we are also collaborating with the Operate First community to ensure that anyone is able to take the work we have done and lift and shift it to any cloud, enabling OKD to run on any Kubernetes-based infrastructure anywhere. Now anybody can experiment and build their own ‘stream’ of OKD with the Tekton pipeline.

This new pipeline approach enables builds that can be customized via parameters; even the tasks within the pipeline can be exchanged or moved around. Add your own tasks. They are reusable templates for creating your own testable stream of OKD. Run the pipelines on any infrastructure, including locally using Podman, for example, or on a vanilla Kubernetes cluster. We are enabling access to the Operate First managed OKD Build Service to deploy more of these builds and pipelines, to get some ideas that we have at Red Hat out into the community for early feedback AND to let other community members test their ideas.

As an open source community, we’re always evolving and learning together. Our goal is to make OKD the go-to place to experiment and innovate for the entire OpenShift ecosystem and beyond, to showcase new features and functionalities, and to fail fast and often without impacting product releases or incurring more technical debt.

THE ASK

Help drive faster innovation into OCP, OKD, Kubernetes and RHEL, along with the multitude of other cloud native open source projects that are part of the OpenShift and cloud native ecosystems.

  • Download the MVP OKD/SCOS build and deploy it!
  • Review our Tekton OKD Build pipelines. Try running them on your own Kubernetes cluster with Tekton - help us make our pipelines more efficient and easier to re-use.
  • Review our pipeline documentation and help us make it better.
  • Fork our pipelines and add your own tasks and resources and let us know how it goes.
  • Come to an OKD Working Group meeting and share your OKD use cases with the rest of the community. We’ll help you connect with like-minded collaborators!

This project is a game changer for lots of open source communities internally and externally. We know there are folks out there in the OKD working group and in the periphery that haven't spoken up and we'd love to hear from you, especially if you are currently doing bespoke OKD builds. Will this unblock your innovation the way we think it will?

Additional Resources

Kudos and Thank you

Operate First’s Infrastructure Team: Thorsten Schwesig, Humair Khan, Tom Coufal, Marcel Hild
Red Hat’s CFE Team: Luigi Zuccarelli, Sherine Khoury
OKD Working Group: Vadim Rutkovsky, Alessandro Di Stefano, Jaime Magiera, Brian Innes
CentOS Cloud and HPC SIGs: Amy Marrich, Christian Glombek, Neal Gompa

OKD at KubeCon + CloudNativeCon North America 2022

· 2 min read

Are you heading to KubeCon + CloudNativeCon North America 2022 in Detroit, October 24-28, 2022?

If so, here's where you'll find the members of the OKD Working Group and Red Hat engineers working on delivering the latest releases of OKD at KubeCon!

October 25th

At the OpenShift Commons Gathering on Tuesday, October 25, 2022 | 9:00 a.m. - 6:00 p.m. EDT, we're hosting an in-person OKD Working Group Lunch & Learn Meetup from 12 noon to 3 pm, led by co-chairs Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) and Diane Mueller (Red Hat), with special guests including Michael McCune (Red Hat), in Break-out Room D at the Westin Book Cadillac, a 10-minute walk from the conference venue. This will be followed by a Lightning Talk: OKD Working Group Update & Road Map on the OpenShift Commons main stage at 3:45 pm. The main stage event will be live streamed via Hopin, so if you are NOT attending in person, you'll be able to join us online.

Registration for OpenShift Commons Gathering is FREE and OPEN to ALL for both in-person and virtual attendance - https://commons.openshift.org/gatherings/kubecon-22-oct-25/

October 27th

At 11:30 am EDT, the OKD Working Group will hold a KubeCon Virtual Office Hour on the OKD Streams initiative and the latest release, led by OKD Working Group members Vadim Rutkovsky, Luigi Mario Zuccarelli, Christian Glombek and Michelle Krejci!

Registration for the virtual KubeCon/NA event is required to join the KubeCon Virtual Office Hour.

If you're attending in person and just want to grab a cup of coffee and have a chat with us, please ping either of the OKD Working Group co-chairs, Jaime Magiera (ICPSR at University of Michigan Institute for Social Research) or Diane Mueller (Red Hat).

Come connect with us to discuss the OKD Road Map, OKD Streams initiative, MVP Release of OKD on CentOS Streams and the latest use cases for OKD, and talk all things open with our team.

An introduction

· 3 min read

by Denis Moiseev and Michael McCune

During the course of installing, operating, and maintaining an OKD cluster it is natural for users to come across strange behaviors and failures that are difficult to understand. As Red Hat engineers working on OpenShift, we have many tools at our disposal to research cluster failures and to report our findings to our colleagues. We would like to share some of our experiences, techniques, and tools with the wider OKD community in the hopes of inspiring others to investigate these areas.

As part of our daily activities we spend a significant amount of time investigating bugs, and also failures in our release images and testing systems. As you might imagine, to accomplish this task we use many tools and pieces of tribal knowledge to understand not only the failures themselves, but the complexity of the build and testing infrastructures. As Kubernetes and OpenShift have grown, there has always been an organic growth of tooling and testing that helps to support and drive the development process forward. To fully understand the depths of these processes is to be actively following what is happening with the development cycle. This is not always easy for users who are also focused on delivering high quality service through their clusters.

On 2 September, 2022, we had the opportunity to record a video of ourselves diving into the OKD release artifacts to show how we investigate failures in the continuous integration release pipeline. In this video we walk through the process of finding a failing release test, examining the Prow console, and then exploring the results that we find. We explain what these artifacts mean, how to further research failures that are found, and share some other web-based tools that you can use to find similar failures, understand the testing workflow, and ultimately share your findings through a bug report.

To accompany the video, here are some of the links that we explore and related content:

Finally, if you do find bugs or would like to report strange behavior in your clusters, remember to visit issues.redhat.com and use the OCPBUGS project.

Recap OKD Testing and Deployment Workshop - Videos and Additional Resources

· 3 min read

On March 20th, the OKD Working Group hosted a day-long event to bring together people from the OKD and related open source project communities to collaborate on testing and documentation of the OKD 4 install and upgrade processes for the various platforms that people are deploying OKD 4 on, as well as to identify any issues with the current documentation for these processes and triage them together.

The OKD Working Group held a virtual community-hosted workshop on testing and deploying OKD4 on March 20th.

The day started with all attendees together in the ‘main stage’ area for 2 hours, where community members gave a short welcome along with the following four presentations:

Attendees then broke into track sessions specific to the deployment target platforms for deep-dive demos with live Q&A, answered as many questions as possible about each deployment target's configurations, attempted to identify any missing pieces in the documentation, and triaged the documentation as we went along.

The four track break-out rooms were set up for 2.5 hours of deployment walkthroughs and Q&A with session leads:

Our goal was to triage our existing community documentation, identify any shortcomings, and encourage participation in the OKD Working Group's testing of the installation and upgrade processes for each OKD release.

Resources:

Please avoid using FCOS 33.20210301.3.1 for new OKD installs

· One min read

Due to several issues ([1] and [2]), fresh installations using FCOS 33.20210301.3.1 will fail. The fix is coming in Podman 3.1.0.

Please use an older stable release, 33.20210217.3.0, as a starting point instead. See download links at https://builds.coreos.fedoraproject.org/browser?stream=stable (might need some scrolling).

Note that only fresh installs are affected. Also, you won't be left with outdated packages, as OKD updates itself to the latest stable FCOS content during installation/update.

  1. https://bugzilla.redhat.com/show_bug.cgi?id=1936927
  2. https://github.com/openshift/okd/issues/566

-- Cheers, Vadim

OKD Testing and Deployment Workshop

· 3 min read

The OKD Working Group is hosting a virtual workshop on testing and deploying OKD4

On March 20th, the OKD Working Group is hosting a one-day event to bring together people from the OKD and related open source project communities to collaborate on testing and documentation of the OKD 4 install and upgrade processes for the various platforms that people are deploying OKD 4 on, as well as to identify any issues with the current documentation for these processes and triage them together.

The day will start with all attendees together in the ‘main stage’ area for 2 hours, where we will give a short welcome and describe the logistics for the day, give a brief introduction to OKD4 itself, then walk through an install deployment to vSphere using the UPI approach, along with a few other more universal best practices (such as DNS/DHCP server configuration) that apply to all deployment targets.

Then we will break into tracks specific to the deployment target platforms for deep dive demos with Q/A, try and answer any questions you have about your specific deployment target's configurations, identify any missing pieces in the documentation and triage the documentation as we go.

There will be 4 track break-out rooms set up for 3 hours of deployment walkthroughs and Q&A with session leads:

  • vSphere/UPI - led by Jaime Magiera (UMich) and Josef Meier (Rohde & Schwarz)
  • Bare Metal/UPI - led by Andrew Sullivan (Red Hat) and Jason Pittman (Red Hat)
  • Single Node Cluster - led by Charro Gruver (Red Hat) and Bruce Link (BCIT)
  • Home Lab Setup - led by Craig Robinson (Red Hat) and Sri Ramanujam (Datto)

Our goal is to triage our existing community documentation, identify any shortcomings, and encourage your participation in the OKD Working Group's testing of the installation and upgrade processes for each OKD release.

This is a community event, NOT meant as a substitute for Red Hat technical support.

There is no admission or ticket charge for OKD Working Group events. However, you are required to complete a free hopin.to platform registration, and to watch the Hopin site for registration and schedule updates.

We are committed to fostering an open and welcoming environment at our working group meetings and events. We set expectations for inclusive behavior through our code of conduct and media policies, and are prepared to enforce these.

You can Register for the workshop here:

https://hopin.com/events/okd-testing-and-deployment-workshop