
OKD 4.19 Release Notes

· 16 min read

Release Notes: 4.19.0-okd-scos.0

This release includes updates across many components, introducing new features, promoting and managing feature gates, and resolving numerous bugs to improve stability and functionality. The details below are drawn from the 4.19.0-okd-scos.0 release payload.

New Features

Several new capabilities and improvements have been introduced in this release:

  • Support for ServiceAccountTokenNodeBinding has been enabled via a feature gate.
  • The OLMv1 Single/OwnNamespace feature is now available behind a feature flag.
  • MachineConfigNodes (MCN) API has been updated to V1 with corresponding CRDs deployed.
  • The CPMSMachineNamePrefix feature gate has been promoted to the default feature set.
  • The GatewayAPIController feature gate has been enabled in the Default feature set, and its implementation includes a Validating Admission Policy for Gateway API CRDs. gRPC conformance tests have also been added for Gateway API. This feature is NOT supported for OKD because the OpenShift Service Mesh operator, which this feature depends on, is not available as a community operator.
  • MAPI to CAPI migration has been added as a TechPreview feature.
  • DualReplica minimum counts have been added, and the feature has been dropped to DevPreview to enable separation of conflicting enum values.
  • The RouteExternalCertificate feature gate has been promoted to the default feature set with added E2E tests.
  • The feature gate for the ConsolePlugin ContentSecurityPolicy API has been lifted.
  • MetricsCollectionProfiles has reached GA status.
  • Configuration for external OIDC now supports adding uid and extra claim mappings.
  • The OnClusterBuild featuregate has been promoted to GA.
  • Support for SEV_SNP and TDX confidential instance type selection on GCP has been added.
  • SELinuxMount and SELinuxChangePolicy have been added to DevPreview.
  • The infrastructure object now includes service endpoints and a feature flag.
  • An annotation for validated SCC type has been added.
  • Configuration for vSphere multi disk thinProvisioned has been added.
  • API Updates for GCP Custom API Endpoints have been added.
  • The MarketType field has been added to AwsMachineProviderConfig and validation for this field has been added.
  • UserDefinedNetworks (UDN) has been graduated to GA with associated test improvements.
  • The ClusterVersionOperator API and manifests have been added, including a controller.
  • The HighlyAvailableArbiter control plane topology has been added as a feature for techpreview, with support for changing the minimum for arbiter HA deployments.
  • The KMSEncryptionProvider Feature Gate has been introduced, with support for KMSv2 encryption for ARO HCP using MIv3 and related configuration options.
  • The additionalRoutingCapabilities gate has been promoted in the ClusterNetworkOperator API.
  • Support for vSphere host and VM group based zonal deployments has been added.
  • A MachineNamePrefix field for CPMS has been feature-gated with its feature gate also added.
  • vSphere multi disk support has been added, including provisioning mode for data disks.
  • An initial Monitoring CRD API has been added.
  • The Insights runtime extractor feature has been moved to GA.
  • A new config option for storing Insights archives to persistent volume has been introduced.
  • Insights Operator entitlements for multi-arch clusters have been enabled.
  • A liveness probe has been added to the Insights extractor container.
  • The LokiStack gatherer has been added to Insights.
  • CNI subdirectory chaining for composable CNI chaining is available.
  • The nodeslicecontroller has been added to the Dockerfile for multus-whereabouts-ipam-cni.
  • The console has added numerous UI/UX improvements including PatternFly 6 updates, features like deleting IDPs, improved helm form in admin perspective, adding a default storage class action, guided tours in admin perspective, add-card item alignment fixes, conversion of HTML elements to PatternFly components, adding dark theme feedback graphic, adding a Getting started section to the project overview page, adding support for extensibility in SnapshotClass and StorageClass pages, adding a favoriting page in the Admin perspective, exposing Topology components to the dynamic plugin SDK, adding support for a Virtualization Engine subscription filter on OperatorHub, adding dev perspective nav options to the admin perspective, adding conditional CSP headers support, adding a Dynamic Plugins nav item, adding telemetry for OLS Import to Console, and adding a customData field to the HorizontalNav component.
  • The monitoring-plugin has been updated with PF-6 migration, improved metrics typeahead, label typeahead, plugin proxy for Perses, and the ability to embed Perses Dashboards.
  • Etcd now has a configurable option for hardware-related timeout delay.
  • GCP PD CSI Driver includes an Attach Limit for Hyperdisk + Gen4 VMs and has been rebased to upstream v1.17.4.
  • The GCP PD CSI Driver Operator can enable VolumeAttributesClass and add custom endpoint args from infrastructure.
  • HyperShift now supports adding a control plane pull secret reference, adding proxy trustedCA to ignition config, testing Azure KMS, capacity reservation in NodePool API, passing featuregates to ocm/oapi, enabling MIv3 for Ingress, configuring KAS goaway-chance, overriding the karpenter image, consuming the KubeAPIServerDNSName API, enabling ppc64le builds, syncing the OpenStack CA cert, limiting CAPI CRD installation on HO, annotating AWSEndpointServices, setting default AWS expirationDate tag, running the kas-bootstrap binary for cpov2, disabling the cluster capabilities flag, enabling MIv3 for Azure file CSI driver, enabling MIv3 for CAPZ, adding e2e tests for image registry capability, adding the konnectivity-proxy sidecar to openshift-oauth-apiserver, checking individual catalog image availability, handling multiple mirror entries, rolling out cpov2 workloads on configmap/secret changes, enabling MIv3 for CNO/CNCC on managed Azure, leveraging ORC to manage the release image on OpenStack, rootless containerized builds, enabling linters, allowing autonode to run upstream karpenter core e2e tests, adding a flag for etcd storage size, auto-approving Karpenter serving CSRs, and providing AWS permission documentation.
  • Machine API Operator supports updating GCP CredentialsRequest, e2e tests for vSphere multi network and Data Disk features, AMD SEV_SNP and TDX confidential computing machines on GCP, adding image/read permissions, adding vSphere check for max networks, adding Azure permissions.
  • vSphere Problem Detector supports host groups.
  • Various tests have been updated or added to support new features and platforms, including OLMv1 preflight permissions checks, MCN V1 API tests, OLMv1 catalogd API endpoint tests, Gateway API tests, testing ratcheting validations, detecting concurrent installer/static pods, platform type external support, and tests for the ImageStreamImportMode feature gate.

Feature Gates

  • CPMSMachineNamePrefix has been promoted to the default feature set.
  • GatewayAPIController has been enabled in the Default featureset. Its implementation includes Validating Admission Policy and is tied to the cluster-ingress-operator. (NOT applicable for OKD)
  • DualReplica minimum count has been added, separation of conflicting enum values enabled, and the feature dropped to DevPreview.
  • RouteExternalCertificate has been promoted to the default feature set.
  • ConsolePlugin ContentSecurityPolicy API feature gate has been lifted.
  • OnClusterBuild has been promoted to GA.
  • GatewayAPI has been re-enabled in the Default featureset and promoted to Tech Preview.
  • VSphereStaticIPs feature gate has been removed.
  • NewOLMPreflightPermissionCheck feature flag has been added and is watched by the cluster-olm-operator.
  • VSphereControlPlaneMachineSet feature gate has been removed.
  • KMS encryption is feature-gated, and the KMSEncryptionProvider feature gate has been added.
  • DualReplica featuregate has been added.
  • SELinuxMount and SELinuxChangePolicy have been added to DevPreview.
  • The catalogd metas web API is behind a feature gate.
  • A Feature Gate AND on NetworkLoadBalancer CEL has been added.
  • HighlyAvailableArbiter control plane topology is a feature for techpreview.
  • The Persistent IPs feature gate has graduated to GA.
  • MachineNamePrefix field for CPMS is feature-gated with its feature gate also added.
  • CSIDriverSharedResource feature gate has been removed.
  • The ShortCertRotation feature gate has been added and is used to issue short lived certificates in the cluster-kube-apiserver-operator and service-ca-operator.
  • The UserDefinedNetworks feature gate has graduated to GA.
  • The additionalRoutingCapabilities gate has been promoted.
  • The ImageRegistryCapability has been introduced behind a feature gate in HyperShift and tested.
  • The Dynamic Configuration Manager feature gate has follow-up work to be enabled.
  • The cluster-olm-operator watches for the APIV1MetasHandler feature gate.
  • The cluster-olm-operator watches for permissions preflight feature gate.
  • The service-ca-operator does not check featuregates on the operand.

Other Feature Gates Enabled by Default:

  • ConsolePluginContentSecurityPolicy: Status is Enabled in the Default set. The featuregate was lifted for this API. This gate was added to the console-operator.
  • OpenShiftPodSecurityAdmission: Status is Enabled in the Default set.
  • ClusterVersionOperatorConfiguration: Status is Enabled (New) in the Default set.
  • DyanmicServiceEndpointIBMCloud: Status is Enabled (New) in the Default set.
  • GCPCustomAPIEndpoints: Status is Enabled (New) in the Default set. There were API updates for GCP Custom API Endpoints.
  • NewOLMCatalogdAPIV1Metas: Status is Enabled (New) in the Default set. The featuregate for catalogd metas web API was added and is watched for.
  • NewOLMOwnSingleNamespace: Status is Enabled (New) in the Default set. A feature flag was added for OLMv1 Single/OwnNamespace.
  • NewOLMPreflightPermissionChecks: Status is Enabled (New) in the Default set. A feature flag for this was added and is watched for.
  • SigstoreImageVerificationPKI: Status is Enabled (New) in the Default set. A PKI field was added to the image API.
  • VSphereConfigurableMaxAllowedBlockVolumesPerNode: Status is Enabled (New) in the Default set. The MaxAllowedBlockVolumesPerNode field was added to the VSphereCSIDriverConfigSpec.
  • VSphereMultiDisk: Status is Enabled (New) in the Default set. Support for vSphere multi disk was added.
  • ClusterAPIInstallIBMCloud: Status changed from Disabled to Enabled in this set. This feature flag was added to Tech Preview.
  • MachineAPIMigration: Status changed from Disabled to Enabled in this set. MAPI to CAPI migration was added to TechPreview.
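
To check which of the gates listed above are active on a running cluster, the cluster-scoped FeatureGate resource reports the resolved set in its status. A minimal sketch, assuming cluster-admin access (the exact output layout varies slightly between versions):

# Show the resolved list of enabled/disabled gates under .status.featureGates
oc get featuregate cluster -o yaml

# Opt a cluster into a non-default feature set such as TechPreviewNoUpgrade
# (note: this change cannot be reverted)
oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'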

Bug Fixes

Numerous bugs have been addressed in this release across various components:

  • Validation for the marketType field in aws-cluster-api-controllers has been added.
  • Fixed issues using 127.0.0.1 for healthz http-endpoints, corrected the ASH driver inject env config, and fixed the PodDisruptionBudget name for openstack-manila.
  • Azure Stack Hub volume detach failure has been fixed.
  • Panic issues in Azure Stack related to GetZoneByNodeName and when the informer receives cache.DeletedFinalStateUnknown have been fixed.
  • GovCloud Config has been fixed.
  • Cross-subscription snapshot deletion is now allowed in azure-file-csi-driver. CVEs related to golang.org/x/crypto and golang.org/x/net have been addressed.
  • Fixes in the CLI include addressing rpmdiff permissions, using ProxyFromEnvironment for HTTP transport, adjusting the impact summary for Failing=Unknown, populating RESTConfig, bumping glog and golang.org/x/net/crypto dependencies for fixes, ensuring monitor doesn't exit for temp API disconnect, fixing the oc adm node-image create --pxe command, parsing node logs with HTML headers, and obfuscating sensitive data in Proxy resource inspection.
  • Logo alignment in Webkit has been fixed in cluster-authentication-operator. Duplicate OAuth client creation is avoided. An issue updating the starter path for mom integration has been fixed. Etcd readiness checks are excluded from /readyz.
  • Broken ControlPlaneMachineSet integration tests have been fixed. A spelling error in the FeatureGate NewOLMCatalogdAPIV1Metas has been fixed. A typo in insightsDataGather has been fixed. A race in tests using CRD patches has been fixed. Handling of validations requiring multiple feature gates has been fixed. Missing CSP directives have been added. StaticPodOperatorStatus validation for downgrades and concurrent node rollouts has been fixed. Insights types duration validation has been fixed. An example format validation has been added. Unused MAPO fields have been deprecated. Reverted Disable ResilientWatchCacheInitialization.
  • IBM Public Cloud DNS Provider Update Logic has been fixed, along with IBMCloud DNS Propagation Issues in E2E tests. A test is skipped when a specific feature gate is enabled. Single Watch on GWAPI CRD issue has been fixed.
  • Dev cert rotation has been reverted in cluster-kube-apiserver-operator. Etcd endpoints are now checked by targetconfigcontroller. Metrics burn rate calculations and selectors have been adjusted or fixed. Skipping cert generation when networkConfig.status.ServiceNetwork is nil has been fixed. Reverted Disable ResilientWatchCacheInitialization.
  • The graceful shutdown of the KSVM pod has been fixed.
  • Error handling on port collision in CVO has been improved. A few tests failing on Non-AMD64 machines have been fixed. Unknown USC insights are dropped after a grace period. The preconditions code has been simplified.
  • Numerous console UI/UX and functional bugs have been fixed, including list header wrapping, http context/client handling, quick create button data-quickstart-id, critical alerts section collapsing, runtime errors on MachineConfigPools, switch animation regressions, ACM hiding switcher, favorites button name, listpageheader rendering, tab underline missing, notification drawer spacing, withHandlePromise HOC deprecation, quick start action spacing, operator appearing twice, breadcrumb spacing, web terminal initialize form style, quickstart highlighting, base CSS removal/conversion, VirtualizedTable and ListPageFilter deprecation, OLM CSV empty state link, helpText usage, add card item alignment, ErrorBoundary modal link, DualReplica validation hack, fetching taskRuns by UID, catalog view cleanup, PF6 bug fixes, deployment editing from private git, co-resource-icon clipping, notification drawer keyboard navigation, flaking update-modal tests, orphaned CSS class removal, PDB example YAML missing field, Error state component groups, Developer Catalog renaming, Favorites e2e tests, secret form base64 decoding, typo on tour page, helm chart repository name, SnapshotClass/StorageClass extensibility, plugin type-only warnings, react-helmet/react-measure migration, pipeline ci tests disabling, plugin-api-changed label, getting started alert, perspective merge tests, react-modal/react-tagsinput updates, init containers readiness count, notification drawer overlap, static plugin barrel file references, CaptureTelemetry hooks, flaky Loading tests, admin perspective guided tour disabling, Access review table sort, types/react update, getting started resources content, Node Logs toolbar layout, Loading replacement, favorites icon hover effect, LogViewer theme setting, namespace persistence on perspective switch, secret form drag and drop, logoutOpenShift call removal, NodeLogs Selects closing, missing patternfly styles, monaco theming/sidebar logic, Banner replacement, ODC Project details breadcrumbs, resource list page name filter alignment, VolumeSnapshots not displayed, ResourceLog checkbox replacement, ts-ignore removal, Checkbox filter replacement, monitoring topic update, original path retention on perspective detection, monaco/YAML language server update, subscription values display, Jobs createdTime, CLI links sorting, bottom notifications alignment, notification drawer close button error, Timestamp component, unused static plugin modules, edit resource limit margins, CSRs not loading without permissions, async package upgrade, bold text/link underline issues, dropdown menu overflow, contextId for plugin tabs, OLM operator uninstall message linkify, Observe section display, textarea horizontal expansion, Topology sidebar alert storage, Demo Plugin tab URL, Command Line Terminal tab background color, basic authentication secret type, runtime errors for completed version, QueryBrowser tooltip styles, edit upstream config layout, deployment pod update on imageStream change, Bootstrap radio/checkbox alignment, QuickStart layout, guided tour popover overlap, Edit button bolding, cypress config update, bridge flag for CSP features, CSV details plugin name, Pipeline Repository overview page close button, Topology component exposure, catalog card label alignment, YAMLs directory case sensitivity, Search filter dropdown label i18n, broken codeRefs, CSP headers refresh popover, dev-console cypress test update, plugin name parsing variable, dependency assets copying, ns dropdown UI with web terminal, 
SourceSecretForm/BasicAuthSubform tech debt, create a Project button, GQL query payload size, non-General User Preference navigation, openshift Authenticate func user token, catalog operator installation parameters, telemetry events OpenShift release version preference, web terminal test failures, errors appending via string, external link icons, BuildSpec details heading font size, capitalization fix for Lightspeed, i18n upload/download, Font Awesome icon alignment, Serverless function test no response, Post TypeScript upgrade changes, helm CI failures, TypeScript upgrade, GQL introspection disabling, code removal, axe-core/cypress-axe upgrade, search tool error, PopupKebabMenu/ClusterConfigurationDropdownField removal, operator installation with + in version name, missing PDB violated translation, Number input focus layout, AlertsRulesDetailPage usage, guessModuleFilePath warnings, channel/version dropdown collapse, webpack 5 upgrade, check-resolution parallel run, Init:0/1 pod status, window.windowErrors saving, ConsolePlugins list display, backend service details runtime error, Function Import error, default StorageClass for ServerlessFunction pipelineVolumeClaimTemplate, Save button enablement in Console plugin enablement, ImagePullSecret duplication, Shipwright build empty params filtering.
  • The managed-by-label populated with an invalid value has been fixed in external-provisioner. CVEs related to golang.org/x/net/crypto have been addressed.
  • Etcd now ensures the cluster ID changes during force-new-cluster, and a compaction-induced latency issue has been fixed.
  • Volume unpublish and attachment through reboots has been ensured for kubevirt-csi-driver.
  • A temporary pin on the FRR version has been applied in metallb-frr to a known working rpm.
  • Monitoring plugin fixes include updates to avoid overriding console routes, table scroll/column alignment, performance improvements for incidents page, resetting orthogonal selections, not breaking if cluster doesn’t exist, filtering by cluster name, showing column headings, fixing states filter in aggregated row, clearing old queries, fixing silence alerts data form, re-adding CSV button, allowing refresh interval to be off, removing deleted image dependency, Export as CSV, not showing metrics links in acm perspective, updating datasource on csrf token changes, adding mui/material dependency, fixing typo in predefined metrics, fixing virtualization perses extension point, filter dropdowns, alerts timestamps cutoff, incidents page filters, incidents page loading state, net/http vulnerability, tooltip in row details, fixing incidents filter issues with severities and long standing, incidents dark theme, syncing alert chart to main filter, hotfix for filter requirements, alerting refactor, virtualization perspective routes, potentially undefined variable access, incident chart colors, incidents filter logic/sync, syncing alerts chart/incidents table with days filter, sorting chart bars, reverting reset all filters button, fixing gap in incident charts, using pf v5 variables/table, fixing dev perspective alert URL namespace, incidents page date style, hideshow graph button update, incidents page reset filters, fixing admin console alert detail graph, fixing button spacing on silence form, fixing bounds on bar chart, fixing inverted dropdown toggle, allowing editing of the until field on the silence edit page, fixing feature flagged DX, fixing expanded row rendering, upgrading incidents dropdown, updating incidents charts cursor, removing extra copy.
  • Issues writing network status annotation on CNI ADD have been tolerated in multus-cni. Empty CNI result is properly structured. Getpodcontext cache miss has been fixed.
  • Entrypoint issues have been fixed for multus-whereabouts-ipam-cni, including for new SCOS builds.
  • An error event has been added for failed ingress to route conversion in route-controller-manager.
  • Telemeter now drops nil metrics during the elide transform (capturing a metric when it does), and a nil-metric check has been added to the elide label handling.
  • Numerous test fixes have been implemented, including increasing timeouts, bumping limits, skipping tests, fixing node selection in MCN tests, fixing MCN tests for two-node clusters, preventing tests using unschedulable nodes, fixing default cert issuer name in RouteExternalCertificate tests, ensuring Git Clone does not run privileged, fixing failed arbiter tests, removing skipped annotation for metal ipv6, adding limit exceptions for Istio, adding cleanup to MCN test, removing CRD schema check, fixing broken intervals charts, fixing egress firewall tests URLs, fixing CBOR data decoding in etcd tests, fixing IPsec tests, validating binary extraction, failing test when operator degrades, using payload pullspec for image info, using non-fake boot image, relying on unstructured for update status, checking load balancer healthcheck port/path, allowing overriding extension binary, re-enabling AWS for router HTTP/2 test, displaying etcd bootstrap event, fixing network name change compatibility, increasing timeouts for live migration, addressing malformed configmap post-test, increasing UDN probe timeouts, adding exceptions outside upgraded window, adding Readiness Probe to Router Status Tests, adding error check for failed cleanup, fixing live migration tests detecting dualstack, extending kubeconfig tests, fixing IPv6 handling in router tests, fixing live migration tests, UDN tests waiting for SCC annotation, fixing auditLogAnalyzer flake error, fixing nmstate deployment failures, showing resources updated too often in auditloganalyzer, skipping OperatorHubSourceError metric checking, adding test case for checking EgressFirewall DNS names, fixing network segmentation eventual consistency, increasing KAPI server timeout, using max time for netpol pods curl requests, moving initialization of OC.
  • Datastore check messages have been improved in vsphere-problem-detector.

OKD 4.19 stable and 4.20 ec have been released

· One min read

We’re excited to announce that OKD 4.19.0-scos.0 has been officially promoted to the stable release channel!

You can view the release payload here: 4.19.0-okd-scos.0 and compare the differences with the last stable 4.18 release.

A few significant highlights of this release include:

  • Bootimages and node images are now based on CentOS Stream CoreOS (SCOS)
  • Bootimages are available publicly at: https://cloud.centos.org/centos/scos/9/prod/streams/
  • Bare metal, assisted, and agent-based installs work seamlessly now that bootimages have been transitioned to SCOS
  • Upgrade edges have been added from the previous stable release to the new release

Alongside this stable release, we’re also publishing a development preview of the next version: 4.20.0-okd-scos.ec.0 – now available on the 4-scos-next channel for early testing and feedback.
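
If you want to try the early-candidate build on a disposable cluster, a minimal sketch of switching channels and checking the available update path (illustrative only; review the upgrade graph before applying anything to a cluster you care about):

oc adm upgrade channel 4-scos-next
oc adm upgrade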

We encourage users and contributors to test the new releases and share feedback via the OKD community channels. Stay tuned for more updates!

Say Hello @ KubeCon EU 2025

· One min read
Zed Spencer-Milnes
Co-chair, OKD Working Group

Members of the OKD Working Group are attending KubeCon EU!

We are looking forward to meeting existing and future users of OKD and talking to other members of the ecosystem about OKD.

OKD Meetup

  • When: 3:30pm - 4:30pm - Tuesday, April 1st 2025
  • Where: Crowne Plaza London Docklands
  • What: No set agenda, just a room to talk all things OKD and meet fellow community members!
  • Who: No preregistration required! Users and contributors of OKD are encouraged to attend

This follows immediately after Red Hat OpenShift Commons, which you can find out more about here. You do not need to attend Red Hat OpenShift Commons to join the OKD Meetup.

Hotel Address: Crowne Plaza London Docklands, Royal Victoria Dock, Western Gateway, London, E16 1AL, United Kingdom

OKD 4.17 and 4.16 releases

· 3 min read
Zed Spencer-Milnes
Co-chair, OKD Working Group

We are pleased to announce the release of OKD 4.17, alongside OKD 4.16 to allow upgrades for existing 4.15 clusters.

warning

4.16 is intended only as a pass-through for existing 4.15 clusters. Upgrading existing 4.15 clusters will require manual intervention and special care due to major changes in how OKD is built and assembled, which have introduced various side effects.

You're late, why?

Yes, we are. OKD builds became polluted with RHEL content that was included in "payload components" (e.g. the cluster infrastructure operators, images, etc. that make up OKD). This was highlighted in Summer 2023, and heading into 2024 all OKD releases were stopped until the issue was addressed.

After significant work from a few engineers at Red Hat, all components that make up OKD should now be free of RHEL artifacts. This required significant work on build infrastructure and processes, and chasing down issues related to discrepancies between CentOS and RHEL. Most OKD components are now based on CentOS Stream as the base image layer (the license-free upstream of RHEL).

I want to install a new cluster

New cluster installations can follow the normal process. Downloads of client tools with the latest versions of OKD 4.17 embedded can be found here.

I want to upgrade an existing cluster

We recommend attempting upgrades from the latest released version of OKD FCOS 4.15 (4.15.0-0.okd-2024-03-10-010116).

Upgrading existing 4.15 clusters will require manual intervention and special care due to major changes in how OKD is built and assembled, which have introduced various side effects.

There is a new area for upgrade notes covering the 4.15 through 4.17 upgrade path.
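
Before starting, it is worth confirming the cluster's current version and the update edges the cluster version operator currently recommends; for example:

oc get clusterversion
oc adm upgrade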

Node operating systems are now based off CentOS Stream CoreOS (SCOS)

As part of this work we have also changed the node operating system to be based on CentOS Stream CoreOS (SCOS) rather than Fedora CoreOS (FCOS). It's worth noting that this work was not part of the OKD Streams project (where we produced concurrent releases for FCOS and SCOS), which has for now been suspended.

The build process for SCOS and its assembly into OKD in versions 4.16 and later is vastly different from how it happened as part of OKD Streams in version 4.15 and below.

warning

There are known issues and regressions related to the move from FCOS to SCOS that may affect new and existing clusters. Please refer to OKD Upgrade Notes: From 4.15.

Special thanks

The OKD Working Group would like to thank Prashanth Sundararaman of Red Hat for their work.

OKD Pre-Release Testing July 2024

· One min read
Jaime Magiera
Co-chair, OKD Working Group

Last month, we announced the transition of all development efforts to OKD on SCOS as part of a plan to ensure OKD's longevity. As of a few weeks ago, nightly builds of OKD SCOS have begun to appear on the OpenShift CI system. We're encouraging the community to test these nightlies in non-production environments. Please note that these nightly pre-release builds are not guaranteed an upgrade path to final releases. These are only for testing purposes.

Additionally, please note that the OKD SCOS nightly builds from January-April 2024 should not be installed. These were just tests of the CI/CD process itself. Only the builds from July 2024 onward should be installed and tested.

You can find more information about our testing needs and how to report your results on the Community Testing page.

Please reach out to us with any questions.

Many thanks,

The OKD Working Group Co-Chairs

OKD Working Group Statement

· 3 min read
Jaime Magiera
Co-chair, OKD Working Group

We would like to take a moment to outline what's been happening the past few months in terms of OKD releases and what the future holds for the project.

In Summer of 2023, it came to the attention of Red Hat that licensed content was inadvertently being included in OKD releases. This necessitated a change to the OKD release materials. At the same time, the Working Group has been striving to increase the community's direct involvement in the build and release process. To address these concerns, Red Hat and the Working Group have been collaborating over the past few months on defining a path forward. This work involves moving OKD builds to a new system, changing the underlying OS, and exposing the new build and release process to community members.

After careful consideration, we've settled on using CentOS Stream CoreOS (SCOS) as the underlying operating system for the new builds. We've been working with SCOS since it was first announced at KubeCon U.S. 2022. There's a great opportunity with SCOS for the larger Open Source community to participate in improving OKD and further delineating it from other Kubernetes distributions. The builds will be for x86_64 only while we get our bearings. Given rpm-ostree is the foundation of all modern OKD releases, many existing installations will be able to switch to the SCOS distribution in-place. We're working to outline that procedure in our documentation and identify any edge cases that may require more work to transition.

The payload for OKD on SCOS is now successfully building. There are still end-to-end tests which need to complete successfully and other housekeeping tasks before pre-release nightly builds can spin up an active cluster. We anticipate this happening within the next few weeks. At that point, members of the community will be able to download these nightly builds for testing and exploration purposes.

On the community involvement and engagement side of things, we'll be relaunching our website to align with the first official release of OKD on SCOS. That site will feature much clearer paths to the information users want to get their clusters up and running. We're redoubling our efforts to help homelabs, single-node, and other similar use cases get off the ground. Likewise, the new website will provide much clearer information on how community members can contribute to the project.

We appreciate everyone's patience over the past few months while we solidified the path forward. We wanted to be confident the pieces would fit together and bring about the desired results before releasing an official statement. From here on out, there will be regular updates on our website.

We understand that there will be lots of questions as this process moves forward. Please post those questions on this discussion thread. We will organize them into this Frequently Asked Questions page.

Many thanks,

The OKD Working Group Co-Chairs

State of affairs in OKD CI/CD

· 6 min read
Jakob Meng
Red Hat

OKD is a community distribution of Kubernetes which is built from Red Hat OpenShift components on top of Fedora CoreOS (FCOS) and recently also CentOS Stream CoreOS (SCOS). The OKD variant based on Fedora CoreOS is called OKD or OKD/FCOS. The SCOS variant is often referred to as OKD/SCOS.

The previous blog posts introduced OKD Streams and its new Tekton pipelines for building OKD/FCOS and OKD/SCOS releases. This blog post gives an overview of the current build and release processes for FCOS, SCOS and OKD. It outlines OKD's dependency on OpenShift, a remnant of the past when its Origin predecessor was a downstream rebuild of OpenShift 3, and concludes with an outlook on how OKD Streams will help users, developers and partners to experiment with future OpenShift.

Fedora CoreOS and CentOS Stream CoreOS

Fedora CoreOS is built with a Jenkins pipeline running in Fedora's infrastructure and is being maintained by the Fedora CoreOS team.

CentOS Stream CoreOS is built with a Tekton pipeline running in an OpenShift cluster on MOC's infrastructure and pushed to quay.io/okd/centos-stream-coreos-9. The SCOS build pipeline is owned and maintained by the OpenShift OKD Streams team, and SCOS builds are imported from quay.io into OpenShift CI as ImageStreams.

OpenShift payload components

At the time of writing, most payload components for OKD/FCOS and OKD/SCOS get mirrored from OCP CI releases. OpenShift CI (Prow and ci-operator) periodically builds OCP images, e.g. for OVN-Kubernetes. OpenShift's release-controller detects changes to image streams, caused by recently built images, then builds and tests an OCP release image. When such a release image passes all non-optional tests (also see release gating docs), the release image and other payload components are mirrored to origin namespaces on quay.io (release gating is subject to change). For example, at most every 3 hours an OCP 4.14 release image will be deployed (and upgraded) on AWS and GCP and afterwards tested with OpenShift's conformance test suite. When it passes the non-optional tests, the release image and its dependencies will be mirrored to quay.io/origin (except for rhel-coreos*, *-installer and some other images). These OCP CI releases are listed with a ci tag at amd64.ocp.releases.ci.openshift.org. Builds and promotions of nightly and stable OCP releases are handled differently (i.e. outside of Prow) by the Automated Release Tooling (ART) team.

OKD payload components

A few payload components are built specifically for OKD though, for example OKD/FCOS' okd-machine-os. Unlike RHCOS and SCOS, okd-machine-os, the operating system running on OKD/FCOS nodes, is layered on top of FCOS (also see CoreOS Layering, OpenShift Layered CoreOS).

Note that some payload components have OKD-specific configuration in OpenShift CI although the resulting images are not incorporated into OKD release images. For example, OVN-Kubernetes images are built and tested in OpenShift CI to ensure OVN changes do not break OKD.

OKD releases

When OpenShift's release-controller detects changes to OKD related image streams, either due to updates of FCOS/SCOS, an OKD payload component, or due to OCP payload components being mirrored after an OCP CI release promotion, it builds and tests a new OKD release image. When such an OKD release image passes all non-optional tests, the image is tagged as registry.ci.openshift.org/origin/release:4.14 etc. This CI release process is similar for OKD/FCOS and OKD/SCOS, e.g. compare these examples for OKD/FCOS 4.14 and OKD/SCOS 4.14. OKD/FCOS's and OKD/SCOS's CI releases are listed at amd64.origin.releases.ci.openshift.org.
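
The contents of such a CI release image can be inspected with the standard release tooling (pulling from registry.ci.openshift.org may require a CI pull secret); for example:

oc adm release info registry.ci.openshift.org/origin/release:4.14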

Promotions for OKD/FCOS to quay.io/openshift/okd (published at github.com/okd-project/okd) and for OKD/SCOS to quay.io/okd/scos-release (published at github.com/okd-project/okd-scos) are done roughly every 2 to 3 weeks. For OKD/SCOS, OKD's release pipeline is triggered manually once a sprint to promote CI releases to 4-scos-{next,stable}.

OKD Streams and customizable Tekton pipelines

However, the OKD project is currently shifting its focus from doing downstream rebuilds of OCP to OKD Streams. As part of this strategic repositioning, OKD offers Argo CD workflows and Tekton pipelines to build CentOS Stream CoreOS (SCOS) (with okd-coreos-pipeline), to build OKD/SCOS (with okd-payload-pipeline) and to build operators (with okd-operator-pipeline). The OKD Streams pipelines have been created to improve the RHEL9 readiness signal for Red Hat OpenShift. They allow developers to build and compose different tasks and pipelines to easily experiment with OpenShift and related technologies. Both okd-coreos-pipeline and okd-operator-pipeline are already used in OKD's CI/CD, and in the future okd-payload-pipeline might supersede OCP CI for building OKD payload components and mirroring OCP payload components.

Building the OKD payload

· 15 min read

Over the last couple of months, we've been busy building a new OKD release on CentOS Stream CoreOS (SCOS), and were able to present it at OpenShift Commons Detroit 2022.

While some of us created a Tekton pipeline that could build SCOS on a Kind cluster, others were tediously building the OKD payload with Prow, but also creating a Tekton pipeline for building that payload on any OpenShift or OKD cluster.

The goal of this effort is to enable and facilitate community collaboration and contributions, giving anybody the ability to do their own payload builds and run tests themselves.

This process has been difficult because OpenShift's Prow CI instance is not open to the public, and changes could thus not easily be tested before PR submission. Even after opening a PR, a non-Red Hatter will require a Red Hat engineer to add the /ok-to-test label in order to start Prow testing.

With the new Tekton pipelines, we are now providing a straightforward way for anybody to build and test their own changes first (or even create their own Stream entirely), and then present the results to the OKD Working Group, which will then expedite the review process on the PR.

In this article, I will shed some light on the building blocks of the OKD on SCOS payload and how it is built, both the Prow way and the Tekton way:

What's the payload?

Until now, the OKD payload, like the OpenShift payload, was built by the ReleaseController in Prow.

The release-controller automatically builds OpenShift release images when new images are created for a given OpenShift release. It detects changes to an image stream, launches a job to build and push the release payload image using oc adm release new, and then runs zero or more ProwJobs against the artifacts generated by the payload.

A release image is nothing more than a ClusterVersionOperator (CVO) image, with an extra layer containing the release-manifests folder. This folder contains:

  • image-references: a list of all known images with their SHA digest,
  • yaml manifest files for each operator controlled by the CVO.

The list of images that is included in the release-manifests is calculated from the release image stream, taking:

  • all images with label io.openshift.release.operator=true in that image stream
  • plus any images referenced in the /manifests/image-references file within each of the images with this label.

As you can imagine, the list of images in a release can change from one release to the next, depending on:

  • new operators being delivered within the OpenShift release
  • existing operators adding or removing an operand image
  • operators previously included that are removed from the payload to be delivered independently, through OLM instead.

In order to list the images contained in a release payload, run this command:

oc adm release info ${RELEASE_IMAGE_URL}

For example:

oc adm release info quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740 
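
To look at the release-manifests folder itself (the image-references file and the CVO-managed manifests described above), the payload can also be extracted locally; for example:

oc adm release extract --to=./release-manifests quay.io/okd/scos-release:4.12.0-0.okd-scos-2022-12-02-083740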

Now that we've established what needs to be built, let's take a deeper look at how the OKD on SCOS payload is built.

Building OKD/SCOS the Prow way

The obvious way to build OKD on SCOS is to use Prow - THE Kubernetes-based CI/CD system, which already builds OCP and OKD on FCOS today. This is what Kubernetes uses upstream as well. :shrug:

For a new OKD release to land in the releases page, there's a whole bunch of Prow jobs that run. Hang on! It's a long story...

ImageStreams

Let's start from the end 😉 and prepare a new image stream for OKD on SCOS images. This ImageStream (IS) is a placeholder for all images that form the OKD/SCOS payload.

For OKD on Fedora CoreOS (OKD/FCOS) it's named okd. For OKD/SCOS, this ImageStream is named okd-scos.

This ImageStream includes all payload images contained in the specific OKD release based on CentOS Stream CoreOS (SCOS).

Among these payload images, we distinguish:

  • Images that can be shared between OCP and OKD. These are built in Prow and mirrored into the okd-scos ImageStream.
  • Images that have to be specifically built for OKD/SCOS, which are directly tagged into the okd-scos ImageStream. This is the case for images that are specific to the underlying operating system, or contain RHEL packages. These are: the installer images, the machine-config-operator image, the machine-os-content that includes the base operating system OSTree, as well as the ironic image for provisioning bare-metal nodes, and a few other images.

Triggers for building most payload images

Now that we've got the recipient Image Stream for the OKD payload images, let's start building some payloads!

Take the Cluster Network Operator for example:
For this operator, the same image can be used on OCP CI and OKD releases. Most payload images fit into this case.

For such an image, the build is pretty straight forward. When a PR is filed for a GitHub repository that is part of a release payload:

  • The Pre-submit jobs run. They essentially build the image and store it in an ImageStream in an ephemeral namespace to run tests against several platforms (AWS, GCP, bare metal, Azure, etc.)

  • Once the tests are green and the PR is approved and merged, the Post-submit jobs run. They essentially promote the built image to the appropriate release-specific ImageStream:

    • if the PR is for master, images are pushed to the ${next-release} ImageStream
    • If the PR is for release-${MAJOR}.${MINOR}, images are pushed to the ${MAJOR}.${MINOR} ImageStream

Next, the OCP release controller, which runs on every change to the ImageStream, will mirror all images from the ${MAJOR}.${MINOR} ImageStream to the scos-${MAJOR}.${MINOR} ImageStream.

As mentioned before, some of the images are not mirrored, and that brings us to the next section, on building those images that have content (whether code or manifests) specific to OKD.

Trigger for building the OKD-specific payload images

For the OKD-specific images, the CI process is a bit different, as the image is built in the PostSubmit job and then directly promoted to the okd-scos IS, without going through the OCP CI to OKD mirroring step. This is called a variant configuration. You can see this for MachineConfigOperator for example.

The built images land directly in the scos-${MAJOR}.${MINOR} ImageStream.

That is why there's no need for OCP's CI release controller to mirror these images from the CI ImageStream: during the PostSubmit phase, images are already getting built in parallel for OCP, OKD/FCOS and OKD/SCOS and pushed, respectively, to ocp/$MAJOR.$MINOR, origin/$MAJOR.$MINOR and origin/scos-$MAJOR.$MINOR.

OKD release builds

Now the ImageStream scos-$MAJOR.$MINOR is getting populated by payload images. With every new image tag, the release controller for OKD/SCOS will build a release image.

The ReleaseController ensures that OpenShift update payload images (aka release images) are created whenever an ImageStream representing the images in a release is updated.

Thanks to the annotation release.openshift.io/config on the scos-${MAJOR}.${MINOR} ImageStream, the controller will:

  1. Create a tag in the scos-${MAJOR}.${MINOR} ImageStream that uses the release name + current timestamp.
  2. Mirror all of the tags in the input ImageStream so that they can't be pruned.
  3. Launch a job in the job namespace to invoke oc adm release new from the mirror pointing to the release tag we created in step 1 (a rough sketch of this invocation is shown after this list).
  4. If the job succeeds in pushing the tag, it sets the annotation release.openshift.io/phase = "Ready" on that tag, indicating that the release can be used by other steps. And that's how a new release appears at https://origin-release.ci.openshift.org/#4.13.0-0.okd-scos.
  5. The release state switches to "Verified" when the verification end-to-end test job succeeds.
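
Roughly speaking, the oc adm release new invocation launched in step 3 looks like the sketch below. The image stream name, namespace and version here are illustrative, not the exact arguments the release controller passes:

oc adm release new \
  --from-image-stream=scos-4.13 \
  -n origin \
  --to-image=registry.ci.openshift.org/origin/release-scos:4.13.0-0.okd-scos-2023-01-01-000000 \
  --name=4.13.0-0.okd-scos-2023-01-01-000000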

Building the Tekton way

Building with Prow has the advantage of being driven by new code being pushed to payload components, thus building fresh releases as the code of github.com/openshift evolves.

The problem is that Prow, along with all the clusters involved with it, the ImageStreams, etc., is not accessible to the OKD community outside of Red Hat. Also, users might be interested in building custom OKD payloads in their own environment, to experiment with exchanging components for example.

To remove this impediment, the OKD team has been working on the OKD Payload pipeline based on Tekton.

Building OKD payloads with Tekton can be done by cloning the okd-payload-pipeline repository. One extra advantage of this repository is the ability to see the list of components that form the OKD payload: in fact, the list under buildconfigs corresponds to the images in the final OKD payload. This list is currently manually synced with the list of OCP images on each release.

The pipeline is fairly simple. Take the build-from-scratch.yaml for example. It has 3 main tasks:

  • Build the base image and the builder image, with which all the payload images will be built
    • The builder image is a CentOS Stream 9 container image that includes all the dependencies needed to build payload components and is used as the build environment for them
    • The built binaries are then layered onto a CentOS Stream 9 base image, creating a payload component image.
    • The base image is shared across all the images in the release payload
  • Build payload images in batches (starting with the ones that don't have any dependencies)
  • Finally, once all OKD payload component images are in the image stream, the OKD release image is in turn built, using the oc adm release new command.

Triggers

For the moment, this pipeline has no triggers. It can be executed manually when needed. We are planning to automatically trigger the pipeline on a daily cadence.
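
Until triggers land, a run can be started by hand, for instance with the Tekton CLI. The pipeline name below is an assumption based on the file name in the repository, and a real run will prompt for the pipeline's parameters and workspaces:

tkn pipeline start build-from-scratch --showlog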

Batch Build Task

With a set of buildConfigs passed in the parameters, this task relies on an openshift oc image containing the client binary, loops over the list of build configs with an oc start-build, and waits for all the builds to complete.
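
In shell terms, the task boils down to something like the following sketch (the build config names are placeholders, and the real task may start builds in parallel rather than one by one):

for bc in cluster-version-operator machine-config-operator; do
  # --wait blocks until the triggered build completes (or fails)
  oc start-build "${bc}" --wait
done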

New Release Task

This task simply uses an OpenShift client image to call oc adm release new, which creates the release image from the release image stream (on the OKD/OpenShift cluster where this Tekton pipeline is running) and mirrors the release image and all the payload component images to a registry configured in its parameters.

BuildConfigs

As explained above, the OKD payload Tekton pipeline heavily relies on the buildconfigs. This folder contains one buildconfig yaml file for each image included in the release payload.

Each build config simply uses a builder image to build the operator binary, invoking the correct Dockerfile in the operator repository. Then, the binary is copied as a layer on top of an OKD base image, which is built in the preparatory task of the pipeline.

This process currently uses the OpenShift Builds API. We are planning to move these builds to the Shipwright Builds API in order to enable builds outside of OCP or OKD clusters.

Updating build configs

Upon deploying the Tekton OKD Payload pipeline on an OKD (or OpenShift) cluster, Kustomize is used in order to (see the sketch after this list):

  • patch the BuildConfig files, adding TAGS to the build arguments according to the type of payload we want to build (based on FCOS, SCOS or any other custom stream)
  • patch the BuildConfig files, replacing the builder image references to the non-public registry.ci.openshift.org/ocp/builder in the payload component's Dockerfiles with the builder image reference from the local image stream
  • set resource requests and limits if needed
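
In practice this usually means applying the kustomized manifests to the build cluster, e.g. (the overlay path is an assumption about the repository layout):

oc apply -k overlays/scos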

Preparing for a new release

The procedure to prepare a new release is still a work in progress at the time of writing.

To build a new release, each BuildConfig file should be updated with the git branch corresponding to that release.
In the future, the branch can be passed along as a kustomization, or in the parameters of the pipeline.

The list of images from a new OCP release (obtained through oc adm release info) must now be synced with the BuildConfigs present here:

  • For any new image, a new BuildConfig file must be added
  • For any image removed from the OCP release, the corresponding BuildConfig file must be removed.

Take away

What are our next steps?

In the coming weeks and months, you can expect lots of changes, especially as the OKD community is picking up usage of OKD/SCOS, and doing their own Tekton Pipeline runs:

  • Work to automate the OKD release procedure is in progress: automatically verifying payload image signatures, signing the release, and tagging it on GitHub.

The goal is to deliver a new OKD/SCOS release on a sprint (3-weekly) basis, and to provide both the OCP teams and the OKD community with a fresh release to test much earlier than was previously possible with the OCP release cadence.

  • For the moment, OKD/SCOS releases are only verified on AWS. To gain more confidence in our release payloads, we will expand the test matrix to other platforms such as GCP, vSphere and Baremetal
  • Enable GitOps on the Tekton pipeline repository, so that changes to the pipeline are automatically deployed on OperateFirst for the community to use the latest and greatest.
  • The OKD Working Group will be collaborating with the Mass Open Cloud to allow for deployments of test clusters on their baremetal infrastructure.
  • The OKD Working Group will be publishing the Tekton Tasks and Pipelines used to build the SCOS Operating System as well as the OKD payload to Tekton Hub and Artifact Hub
  • The OKD operators Tekton pipeline will be used for community builds of optional OLM operators. A first OKD operator has already been built with it, and other operators are to follow, starting with the Pipelines operator, which has long been an ask by the community
  • Additionally, we are working on multi-arch releases for both OKD/SCOS and OKD/FCOS

Opened perspectives

Although in the near future the OKD team will still rely on Prow to build the payload images, the Tekton pipeline will start getting used to finalize the release.

In addition, this Tekton pipeline has opened up new perspectives, even for OCP teams.

One such example is the OpenShift API team, who would like to use the Tekton pipeline to test API changes by building all components that depend on the OpenShift API from a given PR, creating an OKD release, and testing it, thus getting very quick feedback on the impact of API changes on OKD (and later OCP) releases.

Another example is the possibility of building images on platforms other than OpenShift or OKD, replacing build configs with Shipwright, or even plain docker build...

Whatever your favorite flavor is, we are looking forward to seeing the pipelines in action, increasing collaboration and improving our community feedback loop.

OKD Streams - Building the Next Generation of OKD together

· 9 min read

OKD is the community distribution of Kubernetes that powers Red Hat OpenShift. The OKD community has created reusable Tekton build pipelines on a shared Kubernetes cluster so that it can manage the build and release processes for OKD in the open.

With the operate-first.cloud hosted at the massopen.cloud, the OKD community has launched a fully open source release pipeline that the community can participate in to help support and manage the release cycle ourselves. The OKD Community is now able to build and release stable builds of OKD 4.12 on both Fedora CoreOS and the newly introduced CentOS Stream CoreOS. We are calling it OKD Streams.

New Patterns, New CI/CD Pipelines and a new CoreOS

Today we invite you into our OKD Streams initiative. An OKD Stream refers to a build, test, and release pipeline for any configuration of OKD, the open source Kubernetes distribution that powers OpenShift. The OKD working group is pleased to announce the availability of tooling and processes that will enable building and testing many configurations, or "streams". The OKD Working Group and Red Hat Engineering are now testing one such stream that runs an upstream version of RHEL9 via CentOS Stream CoreOS (‘SCOS’ for short) to improve our RHEL9 readiness signal for Red Hat OpenShift. It is the first of many OKD Streams that will enable developers inside and outside of Red Hat to easily experiment with and explore Cloud Native technologies. You can check out our MVP OKD on SCOS release here.

With this initiative, the OKD working group has embraced new patterns and built new partnerships. We have leveraged the concepts in the open source managed service ‘Operate First’ pattern, worked with the CentOS and CoreOS communities to build a pipeline for building SCOS, and applied new CI/CD technologies (Tekton) to build a new OKD release build pipeline service. The MVP of OKD Streams, for example, is an SCOS-backed version of OKD built with a Tekton pipeline managed by the OKD working group that runs on AWS infrastructure managed by Operate First. Together we are unlocking some of the innovations to get better (and earlier) release signals for Kubernetes, OCP and RHEL and to enable the OKD community to get more deeply involved with the OKD build processes.

The OKD Working group wanted to make participation in all of these activities easier for all Cloud Native developers and this has been the motivating force behind the OKD Streams initiative.

From the ‘One Size Fits All’ to ‘Built to Order’

There are three main problems that both the OKD working group and Red Hat Engineering teams spend a lot of time thinking about:

  1. how do we improve our release signals for OpenShift, RHEL, CoreOS
  2. how do we get features into the hands of our customer and partners faster
  3. how do we enable engineers to experiment and innovate

Previously, what we referred to as an ‘OKD’ release was built on the most recent release of OKD running on the latest stable release of Fedora CoreOS (FCOS for short). In actuality, we had a singular release pipeline that built a release of OKD with a bespoke version of FCOS. These releases of OKD gave us early signals for the impact of new operating system features that would eventually be landing in RHEL, where they will surface in RHEL CoreOS (RHCOS). It was (and still is) a very good way for developers to experiment with OKD and explore its functionality.

The OKD community wanted to empower wider use of OKD for experimentation in more use cases: some require layering on additional resources, while others, such as edge and local deployments, call for a reduced footprint. OKD has been stable enough for some to run production deployments. CERN’s OKD deployment on OpenStack, for example, is assembled with custom OKD build pipelines. The feedback from these OKD builds has been a source of inspiration for this OKD Streams initiative to enable more such use cases.

The OKD Streams initiative invites more community input and feedback quickly into the project without interrupting the productized builds for OpenShift and OpenShift customers. We can experiment with new features that can then get pushed upstream into Kubernetes or downstream into the OpenShift product. We can reuse the Tekton build pipelines for building streams specific to HPC or Openstack or Bare Metal or whatever the payload customization needs to be for their organizations.

Our goal is to make it simple for others to experiment.

We are experimenting too. The first OKD Streams ‘experiment’ built with the new Tekton build pipeline running on an Operate First AWS cluster is OKD running on SCOS, which is a future version of OpenShift running on a near-future version of RHEL, leveraging CentOS Stream CoreOS. This will improve our RHEL9 readiness signal for OCP. Improved RHEL9 readiness signals with input from the community will showcase our work as we explore what the new OKD build service is going to mean for all of us.

Tekton Pipelines as the Building Blocks

Our new OKD Streams are built using Tekton pipelines, which makes it easier for us to explore building many different kinds of pipelines.

Tekton is a Continuous Deployment (CD) system that enables us to run tasks and pipelines in a composable and flexible manner. This fits in nicely with our OKD Streams initiative where the focus is less on the artifacts that are produced than the pipeline that builds it.

While OKD as a payload remains the core focus of the OKD Working Group, we are also collaborating with the Operate First Community to ensure that anyone is able to take the work we have done and lift and shift it to any cloud enabling OKD to run in any Kubernetes-based infrastructure anywhere. Now anybody can experiment and build their own ‘stream’ of OKD with the Tekton pipeline.

This new pipeline approach enables builds that can be customized via parameters, even the tasks within the pipeline can be exchanged or moved around. Add your own tasks. They are reusable templates for creating your own testable stream of OKD. Run the pipelines on any infrastructure, including locally in Kubernetes using podman, for example, or you can run them on a vanilla Kubernetes cluster. We are enabling access to the Operate First managed OKD Build Service to deploy more of these builds and pipelines to get some ideas that we have at Red Hat out into the community for early feedback AND to let other community members test their ideas.

As an open source community, we’re always evolving and learning together. Our goal is to make OKD the goto place to experiment and innovate for the entire OpenShift ecosystem and beyond, to showcase new features and functionalities, and to fail fast and often without impacting product releases or incurring more technical debt.

THE ASK

Help drive faster innovation into OCP, OKD, Kubernetes and RHEL along with the multitude of other Cloud Native open source projects that are part of the OpenShift and the cloud native ecosystem.

  • Download the MVP OKD/SCOS build and deploy it!
  • Review our Tekton OKD Build pipelines. Try running them on your own Kubernetes cluster with Tekton - help us make our pipelines more efficient and easier to re-use.
  • Review our pipeline documentation and help us make it better.
  • Fork our pipelines and add your own tasks and resources and let us know how it goes.
  • Come to an OKD Working Group meeting and share your OKD use cases with the rest of the community. We’ll help you connect with like minded collaborators!

This project is a game changer for lots of open source communities internally and externally. We know there are folks out there in the OKD working group and in the periphery that haven't spoken up and we'd love to hear from you, especially if you are currently doing bespoke OKD builds. Will this unblock your innovation the way we think it will?

Additional Resources

Kudos and Thank you

Operate First’s Infrastructure Team: Thorsten Schwesig, Humair Khan, Tom Coufal, Marcel Hild
Red Hat’s CFE Team: Luigi Zuccarelli, Sherine Khoury
OKD Working Group: Vadim Rutkovsky, Alessandro Di Stefano, Jaime Magiera, Brian Innes
CentOS Cloud and HPC SIGs: Amy Marrich, Christian Glombek, Neal Gompa