
4.12.35

Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete

Changes from 4.11.59

Note: this page shows the Feature-Based Change Log for a release

Complete Features

These features were completed when this image was assembled

1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI

2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
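As an illustration only (not the console's actual implementation; the label lookup and example values below are assumptions), a minimal Go sketch of surfacing the label as a link:

```
package main

import "fmt"

// runbookLink returns the runbook URL for an alert if the runbook_url
// label is present; the UI would render this value as a clickable link.
func runbookLink(labels map[string]string) (string, bool) {
	url, ok := labels["runbook_url"]
	return url, ok
}

func main() {
	// Example alert labels; values are illustrative.
	alertLabels := map[string]string{
		"alertname":   "KubePodCrashLooping",
		"runbook_url": "https://example.com/runbooks/KubePodCrashLooping",
	}
	if url, ok := runbookLink(alertLabels); ok {
		fmt.Printf("Runbook: %s\n", url)
	}
}
```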

3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.

4. List any affected packages or components.

Epic Goal

  • Make it possible to disable the console operator at install time, while still having a supported+upgradeable cluster.

Why is this important?

  • It's possible to disable console itself using spec.managementState in the console operator config. There is no way to remove the console operator, though. For clusters where an admin wants to completely remove console, we should give the option to disable the console operator as well.

Scenarios

  1. I'm an administrator who wants to minimize my OpenShift cluster footprint and who does not want the console installed on my cluster

Acceptance Criteria

  • It is possible at install time to opt-out of having the console operator installed. Once the cluster comes up, the console operator is not running.

Dependencies (internal and external)

  1. Composable cluster installation

Previous Work (Optional):

  1. https://docs.google.com/document/d/1srswUYYHIbKT5PAC5ZuVos9T2rBnf7k0F1WV2zKUTrA/edit#heading=h.mduog8qznwz
  2. https://docs.google.com/presentation/d/1U2zYAyrNGBooGBuyQME8Xn905RvOPbVv3XFw3stddZw/edit#slide=id.g10555cc0639_0_7

Open questions:

  1. The console operator manages the downloads deployment as well. Do we disable the downloads deployment? Long term we want to move to CLI manager: https://github.com/openshift/enhancements/blob/6ae78842d4a87593c63274e02ac7a33cc7f296c3/enhancements/oc/cli-manager.md

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.

 

Manifests are currently present in /bindata and /manifest directories.

 

Here is an example of the insights-operator change.

Here is the overall enhancement doc.
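As a hedged sketch (not the actual console-operator change; the object, annotation value, and helper name below are assumptions), annotating a manifest object programmatically could look like this:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// addConsoleCapabilityAnnotation tags a manifest object with the console
// capability annotation so it is only applied when the capability is enabled.
func addConsoleCapabilityAnnotation(obj *unstructured.Unstructured) {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	// The annotation value is an assumption; see the enhancement doc for the exact contract.
	annotations["capability.openshift.io/console"] = "true"
	obj.SetAnnotations(annotations)
}

func main() {
	// Illustrative manifest object.
	obj := &unstructured.Unstructured{}
	obj.SetAPIVersion("apps/v1")
	obj.SetKind("Deployment")
	obj.SetName("console")
	addConsoleCapabilityAnnotation(obj)
	fmt.Println(obj.GetAnnotations())
}
```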

 

Feature Overview
Provide CSI drivers to replace all the intree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.

Goals

  • Framework for rapid creation of CSI drivers for our cloud providers
  • CSI driver for AWS EBS
  • CSI driver for AWS EFS
  • CSI driver for GCP
  • CSI driver for Azure
  • CSI driver for VMware vSphere
  • CSI Driver for Azure Stack
  • CSI Driver for Alicloud
  • CSI Driver for IBM Cloud

Requirements

Requirement | Notes | isMvp?
Framework for CSI driver | TBD | Yes
Drivers should be available to install both in disconnected and connected mode | | Yes
Drivers should upgrade from release to release without any impact | | Yes
Drivers should be installable via CVO (when in-tree plugin exists) | |

Out of Scope

This work will only cover the drivers themselves; it will not include:

  • enhancements to the CSI API framework
  • the migration to said drivers from the in-tree drivers
  • work for non-cloud provider storage drivers (FC-SAN, iSCSI) being converted to CSI drivers

Background, and strategic fit
In a future Kubernetes release (currently 1.21) the in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support these ecosystems in an appropriate way.

Assumptions

  • Storage SIG won't push the changeover out to a later Kubernetes release

Customer Considerations
Customers will need to be able to use the storage they want.

Documentation Considerations

  • Target audience: cluster admins
  • Updated content: update storage docs to show how to use these drivers (also better expose the capabilities)

This Epic is to track the GA of this feature

Goal

  • Make the Google Cloud Filestore service available via a CSI driver; it is desirable that this implementation supports dynamic provisioning
  • Without GCP Filestore support, we are limited to block / RWO volumes only (GCP PD 4.8 GA)
  • Align with what we support on other major public cloud providers.

Why is this important?

  • There is a known storage gap with Google Cloud where only block storage is supported
  • More customers are deploying on GCE and asking for file / RWX storage.

Scenarios

  1. Install the CSI driver
  2. Remove the CSI Driver
  3. Dynamically provision a CSI Google Filestore PV
  4. Utilise a Google Filestore PV
  5. Assess optional features such as resize & snapshot

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Customers:

  • Telefonica Spain
  • Deutsche Bank

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.

We need to continue to maintain specific areas within storage; this epic captures that effort and tracks it across releases.

Goals

  • To allow OCP users and cluster admins to detect problems early and with as little interaction with Red Hat as possible.
  • When Red Hat is involved, make sure we have all the information we need from the customer, i.e. in metrics / telemetry / must-gather.
  • Reduce storage test flakiness so we can spot real bugs in our CI.

Requirements

Requirement | Notes | isMvp?
Telemetry | | No
Certification | | No
API metrics | | No

Out of Scope

n/a

Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.

Assumptions

Customer Considerations

Documentation Considerations

  • Target audience: internal
  • Updated content: none at this time.

Notes

In progress:

  • CI flakes:
    • Configurable timeouts for e2e tests
      • Azure is slow and times out often
      • Cinder times out formatting volumes
      • AWS resize test times out

 

High prio:

  • Env. check tool for VMware - users often mis-configure permissions there and blame OpenShift. If we had a tool they could run, it might report better errors.
    • Should it be part of the installer?
    • Spike exists
  • Add / use cloud API call metrics
    • Helps customers to understand why things are slow
    • Helps build cop to understand a flake
      • With a post-install step that filters data from Prometheus that’s still running in the CI job.
    • Ideas:
      • Cloud is throttling X% of API calls longer than Y seconds
      • Attach / detach / provisioning / deletion / mount / unmount / resize takes longer than X seconds?
    • Capture metrics of operations that are stuck and won’t finish.
      • Sweep operation map from executioner???
      • Report operation metric into the highest bucket after the bucket threshold (i.e. if 10minutes is the last bucket, report an operation into this bucket after 10 minutes and don’t wait for its completion)?
      • Ask the monitoring team?
    • Include in CSI drivers too.
      • With alerts too

Unsorted

  • As the number of storage operators grows, it would be useful to have a Grafana board for storage operators
    • CSI driver metrics (from CSI sidecars + the driver itself  + its operator?)
    • CSI migration?
  • Get aggregated logs in cluster
    • They're rotated too soon
    • No logs from dead / restarted pods
    • No tools to combine logs from multiple pods (e.g. 3 controller managers)
  • What storage issues do customers have? Storage was 22% of all issues.
    • Insufficient docs?
    • Probably garbage
  • Document basic storage troubleshooting for our support teams
    • What logs are useful when, what log level to use
    • This has been discussed during the GSS weekly team meeting; however, it would be beneficial to have this documented.
  • Common vSphere errors, their debugging and fixing. 
  • Document sig-storage flake handling - not all failed [sig-storage] tests are ours

Epic Goal

  • Update all images that we ship with OpenShift to the latest upstream releases and libraries.
  • Exact content of what needs to be updated will be determined as new images are released upstream, which is not known at the beginning of OCP development work. We don't know what new features will be included and should be tested and documented. Especially new CSI drivers releases may bring new, currently unknown features. We expect that the amount of work will be roughly the same as in the previous releases. Of course, QE or docs can reject an update if it's too close to deadline and/or looks too big.

Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying the no-feature-freeze approach in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.

Why is this important?

  • We want to ship the latest software that contains new features and bugfixes.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Update all OCP and Kubernetes libraries in storage operators to the appropriate version for this OCP release.

This includes (but is not limited to):

  • Kubernetes:
    • client-go
    • controller-runtime
  • OCP:
    • library-go
    • openshift/api
    • openshift/client-go
    • operator-sdk

Operators:

  • aws-ebs-csi-driver-operator 
  • aws-efs-csi-driver-operator
  • azure-disk-csi-driver-operator
  • azure-file-csi-driver-operator
  • openstack-cinder-csi-driver-operator
  • gcp-pd-csi-driver-operator
  • gcp-filestore-csi-driver-operator
  • manila-csi-driver-operator
  • ovirt-csi-driver-operator
  • vmware-vsphere-csi-driver-operator
  • alibaba-disk-csi-driver-operator
  • ibm-vpc-block-csi-driver-operator
  • csi-driver-shared-resource-operator

 

  • cluster-storage-operator
  • csi-snapshot-controller-operator
  • local-storage-operator
  • vsphere-problem-detector

OCP/Telco Definition of Done
Epic Template descriptions and documentation.


Epic Goal

  • Rebase OpenShift components to k8s v1.24

Why is this important?

  • Rebasing ensures components work with the upcoming release of Kubernetes
  • Address tech debt related to upstream deprecations and removals.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. k8s 1.24 release

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Feature Overview

  • As an infrastructure owner, I want a repeatable method to quickly deploy the initial OpenShift cluster.
  • As an infrastructure owner, I want to install the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters.

Goals

  • Enable customers and partners to successfully deploy a single “first” cluster in disconnected, on-premises settings

Requirements

4.11 MVP Requirements

  • Customers and partners need to be able to download the installer
  • Enable customers and partners to deploy a single “first” cluster (cluster 0) using single node, compact, or highly available topologies in disconnected, on-premises settings
  • Installer must support advanced network settings such as static IP assignments, VLANs and NIC bonding for on-premises metal use cases, as well as DHCP and PXE provisioning environments.
  • Installer needs to support automation, including integration with third-party deployment tools, as well as user-driven deployments.
  • In the MVP automation has higher priority than interactive, user-driven deployments.
  • For bare metal deployments, we cannot assume that users will provide us the credentials to manage hosts via their BMCs.
  • Installer should prioritize support for platforms None, baremetal, and VMware.
  • The installer will focus on a single version of OpenShift, and a different build artifact will be produced for each different version.
  • The installer must not depend on a connected registry; however, the installer can optionally use a previously mirrored registry within the disconnected environment.

Use Cases

  • As a Telco partner engineer (Site Engineer, Specialist, Field Engineer), I want to deploy an OpenShift cluster in production with limited or no additional hardware and don’t intend to deploy more OpenShift clusters [Isolated edge experience].
  • As an Enterprise infrastructure owner, I want to manage the lifecycle of multiple clusters in 1 or more sites by first installing the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters [Cluster before your cluster].
  • As a Partner, I want to package OpenShift for large scale and/or distributed topology with my own software and/or hardware solution.
  • As a large enterprise customer or Service Provider, I want to install a “HyperShift Tugboat” OpenShift cluster in order to offer a hosted OpenShift control plane at scale to my consumers (DevOps Engineers, tenants) that allows for fleet-level provisioning for low CAPEX and OPEX, much like AKS or GKE [Hypershift].
  • As a new, novice to intermediate user (Enterprise Admin/Consumer, Telco Partner integrator, RH Solution Architect), I want to quickly deploy a small OpenShift cluster for PoC/Demo/Research purposes.

Questions to answer…

  •  

Out of Scope

Out of scope use cases (that are part of the Kubeframe/factory project):

  • As a Partner (OEMs, ISVs), I want to install and pre-configure OpenShift with my hardware/software in my disconnected factory, while allowing further (minimal) reconfiguration of a subset of capabilities later at a different site by a different set of users (end customer) [Embedded OpenShift].
  • As an Infrastructure Admin at an Enterprise customer with multiple remote sites, I want to pre-provision OpenShift centrally prior to shipping and activating the clusters in remote sites.

Background, and strategic fit

  • This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  1. The user only has access to the target nodes that will form the cluster and will boot them with the image presented locally via a USB stick. This scenario is common in sites with restricted access such as government infra where only users with security clearance can interact with the installation, where software is allowed to enter the premises (in a USB, DVD, SD card, etc.) but never allowed to come back out. Users can't bring in supporting devices such as laptops or phones.
  2. The user has access to the target nodes remotely to their BMCs (e.g. iDrac, iLo) and can map an image as virtual media from their computer. This scenario is common in data centers where the customer provides network access to the BMCs of the target nodes.
  3. We cannot assume that we will have access to a computer to run an installer or installer helper software.

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

 

References

 

 

Epic Goal

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6

Why is this important?

IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, and it is also a transition step toward single-stack IPv6 clusters, which for some of our users is the final destination.

Acceptance Criteria

  • Agent-based installer can deploy IPv6 clusters
  • Agent-based installer can deploy dual-stack clusters
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Previous Work

Karim's work proving how the agent-based installer can deploy IPv6: IPv6 deploy with agent based installer

Done Checklist

  • CI - CI is running, tests are automated and merged.

  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and IPv6 subnet in the networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().

For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
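For illustration, a minimal standalone sketch (not the assisted-service or installer code) of the dual-stack rule described above, i.e. a MachineNetwork counts as dual-stack only if it contains both an IPv4 and an IPv6 subnet:

```
package main

import (
	"fmt"
	"net/netip"
)

// hasDualStackMachineNetwork reports whether the machine network CIDRs
// contain at least one IPv4 and one IPv6 subnet.
func hasDualStackMachineNetwork(cidrs []string) (bool, error) {
	var v4, v6 bool
	for _, c := range cidrs {
		prefix, err := netip.ParsePrefix(c)
		if err != nil {
			return false, err
		}
		if prefix.Addr().Is4() {
			v4 = true
		} else {
			v6 = true
		}
	}
	return v4 && v6, nil
}

func main() {
	// Example CIDRs are illustrative.
	ok, err := hasDualStackMachineNetwork([]string{"10.0.0.0/16", "fd2e:6f44:5dd8::/64"})
	fmt.Println(ok, err) // true <nil>
}
```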

Epic Goal

As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed

Why is this important?

BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.

Acceptance Criteria

  • A user can provide MCE manifests and have MCE installed without additional manual steps after the installation is completed
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a customer, I want to be able to:

  • Install MCE with the agent-installer

so that I can achieve

  • create an MCE hub with my openshift install

Acceptance Criteria:

Description of criteria:

  • Upstream documentation including examples of the extra manifests needed
  • Unit tests that include MCE extra manifests
  • Ability to install MCE using agent-installer is tested
  • Point 3

(optional) Out of Scope:

We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)

Engineering Details:

This requires/does not require a design proposal.
This requires/does not require a feature gate.

Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode

In order to install FIPS compliant clusters, we need to make sure that install-config + agent-config based deployments take into account the FIPS config in the install-config.

This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
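A rough sketch of the intent, with illustrative types only (the real InstallConfig and AgentClusterInstall structures live in the installer and assisted-service APIs): the FIPS value read from the install-config is carried into the generated agent-cluster-install content so it ends up in the ISO.

```
package main

import "fmt"

// Illustrative types only; the real InstallConfig and AgentClusterInstall
// live in the installer and assisted-service APIs.
type installConfig struct {
	FIPS bool
}

type agentClusterInstall struct {
	FIPS bool
}

// generate mirrors the idea of agent-cluster-install's Generate(): copy the
// FIPS setting from the install-config so it is honored at deployment time.
func generate(ic installConfig) agentClusterInstall {
	return agentClusterInstall{FIPS: ic.FIPS}
}

func main() {
	aci := generate(installConfig{FIPS: true})
	fmt.Printf("fips=%v\n", aci.FIPS)
}
```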

Epic Goal

  • Rebase cluster autoscaler on top of Kubernetes 1.25

Why is this important?

  • Need to pick up latest upstream changes

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.

Background

We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921

Steps

  • add the --record-duplicated-events flag to all autoscaler deployments from the CAO (a rough sketch follows below)
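A minimal sketch of that step (not the actual CAO code; the container name and helper function are assumptions):

```
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// ensureRecordDuplicatedEvents appends --record-duplicated-events to every
// container of an autoscaler Deployment if it is not already present.
func ensureRecordDuplicatedEvents(dep *appsv1.Deployment) {
	const flag = "--record-duplicated-events"
	for i := range dep.Spec.Template.Spec.Containers {
		c := &dep.Spec.Template.Spec.Containers[i]
		if !hasArg(c.Args, flag) {
			c.Args = append(c.Args, flag)
		}
	}
}

func hasArg(args []string, want string) bool {
	for _, a := range args {
		if a == want {
			return true
		}
	}
	return false
}

func main() {
	// Container name is an assumption for illustration.
	dep := &appsv1.Deployment{
		Spec: appsv1.DeploymentSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "cluster-autoscaler"}},
				},
			},
		},
	}
	ensureRecordDuplicatedEvents(dep)
	fmt.Println(dep.Spec.Template.Spec.Containers[0].Args)
}
```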

Stakeholders

  • openshift eng

Definition of Done

  • autoscaler continues to work as expected and produces events for everything
  • Docs
    • this does not require documentation as it preserves existing behavior and provides no interface for user interaction
  • Testing
    • current tests should continue to pass

Feature Overview

Add GA support for deploying OpenShift to IBM Public Cloud

Goals

Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available

Requirements

Optional requirements

  • OpenShift can be deployed using Mint mode and STS for cloud provider credentials (future release, tbd)
  • OpenShift can be deployed in disconnected mode (https://issues.redhat.com/browse/SPLAT-737)
  • OpenShift on IBM Cloud supports User Provisioned Infrastructure (UPI) deployment method (future release, 4.14?)

Epic Goal

  • Enable installation of private clusters on IBM Cloud. This epic will track associated work.

Why is this important?

  • This is required MVP functionality to achieve GA.

Scenarios

  1. Install a private cluster on IBM Cloud.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Background and Goal

Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue. 

Acceptance Criteria

  1. Under guidance from Red Hat CEE, customers can deploy RHEL hotfix packages to MachineConfigPools.
  2. Customers can easily remove the hotfix when the underlying RHCOS image incorporates the fix.

Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.

The overall plan is:

  • Publish the new base image as `rhel-coreos-8` in the release image
  • Also publish the new extensions container (https://github.com/openshift/os/pull/763) as `rhel-coreos-8-extensions`
  • Teach the MCO to use this without also involving layering/build controller
  • Delete old `machine-os-content`

As an OCP CoreOS layering developer, having telemetry data about the number of clusters using a custom osImageURL will help us understand how broadly this feature is being used and improve it accordingly.

Acceptance Criteria:

  • Cluster using Custom osImageURL is available via telemetry
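A hedged sketch of what reporting this could look like from the MCO side; the metric name and detection call are assumptions, not the actual implementation:

```
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

// osImageURLOverride is an illustrative gauge: 1 when the cluster uses a
// custom osImageURL, 0 otherwise. The metric name is an assumption.
var osImageURLOverride = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "mco_os_image_url_override",
	Help: "Set to 1 when a MachineConfigPool uses a custom osImageURL.",
})

func init() {
	prometheus.MustRegister(osImageURLOverride)
}

// recordOSImageURLOverride would be called after inspecting the rendered
// MachineConfigs (detection logic omitted in this sketch).
func recordOSImageURLOverride(custom bool) {
	if custom {
		osImageURLOverride.Set(1)
	} else {
		osImageURLOverride.Set(0)
	}
}

func main() {
	recordOSImageURLOverride(true)
}
```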

After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:

  • Schedule the extensions container as a Kubernetes service (just serves a yum repo via http)
  • Change the MCD to write a file into `/etc/yum.repos.d/machine-config-extensions.repo` that consumes it instead of what it does now in pulling RPMs from the mounted container filesystem (a rough sketch follows below)
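A rough sketch of that second step (the repo id, service URL, and file layout are assumptions for illustration, not the MCO's real values):

```
package main

import (
	"fmt"
	"os"
)

// writeExtensionsRepo writes a yum repo definition that points at the
// in-cluster extensions service.
func writeExtensionsRepo(baseURL string) error {
	content := fmt.Sprintf(`[machine-config-extensions]
name=OpenShift CoreOS extensions
baseurl=%s
enabled=1
gpgcheck=0
`, baseURL)
	return os.WriteFile("/etc/yum.repos.d/machine-config-extensions.repo", []byte(content), 0o644)
}

func main() {
	// The service URL is an assumption.
	if err := writeExtensionsRepo("http://machine-config-extensions.openshift-machine-config-operator.svc"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```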

 

Why?

  • Decouple control and data plane. 
    • Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
  • Improve security
    • Shift credentials out of cluster that support the operation of core platform vs workload
  • Improve cost
    • Allow a user to toggle what they don’t need.
    • Ensure a smooth path to scale to 0 workers and upgrade with 0 workers.

 

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

 

 

Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

cluster-snapshot-controller-operator is running on the CP. 

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster (see the sketch after this list).
  • Move creation of manifests/08_webhook_service.yaml from CVO to the operator - it needs to be created in the management cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Don’t create operand’s PodDisruptionBudget?
    • Update ValidationWebhookConfiguration to point directly to URL exposed by manifests/08_webhook_service.yaml instead of a Service. The Service is not available in the guest cluster.
    • Pass only the guest kubeconfig to the operands (both the webhook and csi-snapshot-controller).
    • Update unit tests to handle two kube clients.
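A minimal sketch of the dual-kubeconfig wiring described in the list above (the flag name and client usage are assumptions, not the operator's actual code):

```
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// New cmdline option for the guest cluster kubeconfig location (name is an assumption).
	guestKubeconfig := flag.String("guest-kubeconfig", "", "path to the guest cluster kubeconfig")
	flag.Parse()

	// Management cluster: config from the operator pod's projected service account.
	mgmtConfig, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("management cluster config: %v", err)
	}
	mgmtClient, err := kubernetes.NewForConfig(mgmtConfig)
	if err != nil {
		log.Fatalf("management cluster client: %v", err)
	}

	// Guest cluster: config from the kubeconfig passed on the command line.
	guestConfig, err := clientcmd.BuildConfigFromFlags("", *guestKubeconfig)
	if err != nil {
		log.Fatalf("guest cluster config: %v", err)
	}
	guestClient, err := kubernetes.NewForConfig(guestConfig)
	if err != nil {
		log.Fatalf("guest cluster client: %v", err)
	}

	_ = mgmtClient  // used for Deployments, Services, etc. in the operator's namespace
	_ = guestClient // used for objects that must live in the guest cluster
}
```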

Exit criteria:

  • cluster-csi-snapshot-controller-operator runs in the management cluster in HyperShift
  • csi-snapshot-controller runs in the management cluster in HyperShift
  • It is possible to take & restore volume snapshot in the guest cluster.
  • No regressions in standalone OCP.

As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.

  • Check and remove manifests/03_configmap.yaml, it does not seem to be useful.
  • Check and remove manifests/03_service.yaml, it does not seem to be useful (at least now).
  • Use DeploymentController from library-go to sync Deployments.
  • Get rid of common/ package? It does not seem to be useful.
  • Use StaticResourceController for static content, including the snapshot CRDs.

Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!

Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.

Exit criteria:

  • The operator code is smaller.
  • No regressions in standalone OCP.
  • Upgrade/downgrade from/to standalone OCP 4.11 works.

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

 

As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operand (control-plane Deployment of the CSI driver).

Exit criteria:

  • Control plane Deployment of AWS EBS CSI driver runs in the management cluster in HyperShift.
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

As an OCP support engineer I want the same guest cluster storage-related objects in output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.

 

must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents

hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
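A hedged sketch of the idea: extend the dump's resource list with the storage kinds that must-gather already collects (variable names and the surrounding code are illustrative, not the actual hypershift code):

```
package main

import "fmt"

// storageResources lists the storage-related kinds that must-gather already
// collects and that "hypershift dump cluster --dump-guest-cluster" should
// also gather from the guest cluster.
var storageResources = []string{
	"storageclasses",
	"persistentvolumes",
	"volumeattachments",
	"csidrivers",
	"csinodes",
	"volumesnapshotclasses",
	"volumesnapshotcontents",
}

func main() {
	// In the real dump code this list would be appended to the set of
	// resources fetched from the guest cluster.
	resources := []string{"pods", "events"} // existing kinds, illustrative
	resources = append(resources, storageResources...)
	fmt.Println(resources)
}
```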

 

Exit criteria:

  • verify that hypershift dump cluster --dump-guest-cluster has storage objects from the guest cluster.

As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operands (AWS EBS CSI driver operator).

Exit criteria:

  • CSO and AWS EBS CSI driver operator runs in the management cluster in HyperShift
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

Epic Goal

  • To improve debuggability of ovn-k in hypershift
  • To verify the stability of ovn-k in hypershift
  • To introduce an EgressIP reachability check that will work in hypershift

Why is this important?

  • ovn-k is supposed to be GA in 4.12. We need to make sure it is stable, that we know the limitations, and that we are able to debug it similarly to a self-hosted cluster.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. This will need consultation with the people working on HyperShift

Previous Work (Optional):

  1. https://issues.redhat.com/browse/SDN-2589

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Feature Overview  

Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs, as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in a well-defined way to support this flow.

Goals:

Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for increased security posture. Enable OLM-managed operators to implement support for this in a well-defined pattern.

Requirements:

  • CCO gets a new mode in which it can reconcile STS credential request for OLM-managed operators
  • A standardized flow is leveraged to guide users in discovering and preparing their AWS IAM policies and roles with permissions that are required for OLM-managed operators 
  • A standardized flow is defined in which users can configure OLM-managed operators to leverage AWS STS
  • An example operator is used to demonstrate the end2end functionality
  • Clear instructions and documentation for operator development teams to implement the required interaction with the CloudCredentialOperator to support this flow

Use Cases:

See Operators & STS slide deck.

 

Out of Scope:

  • handling OLM-managed operator updates in which AWS IAM permission requirements might change from one version to another (which requires user awareness and intervention)

 

Background:

The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today is cumbersome to non-existent depending on the operator in question, and it is seen as an adoption blocker of OpenShift on AWS.

 

Customer Considerations

This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.

Documentation Considerations

  • Internal documentation needs to exist to guide Red Hat operator developer teams on the requirements and proposed implementation of integration with CCO and the proposed flow
  • External documentation needs to exist to guide users on:
    • how to become aware that the cluster is in STS mode
    • how to become aware of operators that support STS and the proposed CCO flow
    • how to become aware of the IAM permissions requirements of these operators
    • how to configure an operator in the proposed flow to interact with CCO

Interoperability Considerations

  • this needs to work with ROSA
  • this needs to work with self-managed OCP on AWS

Market Problem

This Section: High-Level description of the Market Problem ie: Executive Summary

  • As a customer of OpenShift layered products, I need to be able to fluidly, reliably and consistently install and use OpenShift layered product Kubernetes Operators into my ROSA STS clusters, while keeping a STS workflow throughout.
  • As a customer of OpenShift on the big cloud providers, overall I expect OpenShift as a platform to function equally well with tokenized cloud auth as it does with "mint-mode" IAM credentials. I expect the same from the Kubernetes Operators under the Red Hat brand (that need to reach cloud APIs) in that tokenized workflows are equally integrated and workable as with "mint-mode" IAM credentials.
  • As the managed services, including Hypershift teams, offering a downstream opinionated, supported and managed lifecycle of OpenShift (in the forms of ROSA, ARO, OSD on GCP, Hypershift, etc), the OpenShift platform should have as close as possible, native integration with core platform operators when clusters use tokenized cloud auth, driving the use of layered products.
  • As the Hypershift team, where the only credential mode for clusters/customers is STS (on AWS), the Red Hat branded Operators that must reach the AWS API should be enabled to work with STS credentials in a consistent, automated fashion that allows customers to use those operators as easily as possible, driving the use of layered products.

Why it Matters

  • Adding consistent, automated layered product integrations to OpenShift would provide great added value to OpenShift as a platform, and its downstream offerings in Managed Cloud Services and related offerings.
  • Enabling Kubernetes Operators (at first, Red Hat ones) on OpenShift for the "big3" cloud providers is a key differentiation and security requirement that our customers have been and continue to demand.
  • HyperShift is an STS-only architecture, which means that if our layered offerings via Operators cannot easily work with STS, then it would be blocking us from our broad product adoption goals.

Illustrative User Stories or Scenarios

  1. Main success scenario - high-level user story
    1. customer creates a ROSA STS or Hypershift cluster (AWS)
    2. customer wants basic (table-stakes) features such as AWS EFS or RHODS or Logging
    3. customer sees necessary tasks for preparing for the operator in OperatorHub from their cluster
    4. customer prepares AWS IAM/STS roles/policies in anticipation of the Operator they want, using what they get from OperatorHub
    5. customer provides a very minimal set of parameters (AWS ARN of role(s) with policy) to the Operator's OperatorHub page
    6. The cluster can automatically setup the Operator, using the provided tokenized credentials and the Operator functions as expected
    7. Cluster and Operator upgrades are taken into account and automated
    8. The above steps 1-7 should apply similarly for Google Cloud and Microsoft Azure Cloud, with their respective token-based workload identity systems.
  2. Alternate flow/scenarios - high-level user stories
    1. The same as above, but the ROSA CLI would assist with AWS role/policy management
    2. The same as above, but the oc CLI would assist with cloud role/policy management (per respective cloud provider for the cluster)
  3. ...

Expected Outcomes

This Section: Articulates and defines the value proposition from a users point of view

  • See SDE-1868 as an example of what is needed, including design proposed, for current-day ROSA STS and by extension Hypershift.
  • Further research is required to accommodate the AWS STS equivalent systems of GCP and Azure
  • Order of priority at this time is
    • 1. AWS STS for ROSA and ROSA via HyperShift
    • 2. Microsoft Azure for ARO
    • 3. Google Cloud for OpenShift Dedicated on GCP

Effect

This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.

  • Growth is the acquisition of net new usage of the platform. This can be new workloads not previously able to be supported, new markets not previously considered, or new end users not previously served.
  • Retention is maintaining and expanding existing use of the platform. This can be more effective use of tools, competitive pressures, and ease of use improvements.
  • Both growth and retention are the effect of this effort.
    • Customers have strict requirements around using only token-based cloud credential systems for workloads in their cloud accounts, which include OpenShift clusters in all forms.
      • We gain new customers from both those that have waited for token-based auth/auth from OpenShift and from those that are new to OpenShift, with strict requirements around cloud account access
      • We retain customers that are going thru both cloud-native and hybrid-cloud journeys that all inevitably see security requirements driving them towards token-based auth/auth.

References

As an engineer I want the capability to implement CI test cases that run at different intervals (daily, weekly), so as to ensure downstream operators that depend on certain capabilities are not negatively impacted if the systems CCO interacts with change behavior.

Acceptance Criteria:

Create a stubbed-out e2e test path in CCO and matching e2e calling code in release such that there exists a path to tests that verify operation in an AWS STS workflow.
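A minimal sketch of what such a stubbed-out e2e entry point might look like (package and test names are assumptions):

```
// Package e2e is an illustrative stub for a CCO e2e path exercised by a
// periodic (daily/weekly) job wired up in the release repo.
package e2e

import "testing"

// TestAWSSTSCredentialFlow is a placeholder: once the STS mode for
// OLM-managed operators exists, this test would create a CredentialsRequest
// and verify the resulting role/web-identity wiring works end to end.
func TestAWSSTSCredentialFlow(t *testing.T) {
	t.Skip("stub: to be implemented once the CCO STS flow for OLM-managed operators lands")
}
```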

Pre-Work Objectives

Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.

Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep diving into a single cluster.
Why do customers want this?

  1. Single interface to accomplish their tasks
  2. Consistent UX and patterns
  3. Easily accessible: One URL, one set of credentials

Why do we want this?

  • Shared code -  improve the velocity of both teams and most importantly ensure consistency of the experience at the code level
  • Pre-built PF4 components
  • Accessibility & i18n
  • Remove barriers for enabling ACM

Phase 2 Goal: Productization of the united Console 

  1. Enable user to quickly change context from fleet view to single cluster view
    1. Add Cluster selector with “All Cluster” Option. “All Cluster” = ACM
    2. Shared SSO across the fleet
    3. Hub OCP Console can connect to remote clusters API
    4. When ACM Installed the user starts from the fleet overview aka “All Clusters”
  2. Share UX between views
    1. ACM Search —> resource list across fleet -> resource details that are consistent with single cluster details view
    2. Add Cluster List to OCP —> Create Cluster

As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.

cc Ali Mobrem Sho Weimer Jakub Hadvig 

UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
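A rough sketch of the allow-list check (the label key and exact values are assumptions based on the description above, not the console's actual code):

```
package main

import "fmt"

// supportedVendors is the allow-list described above.
var supportedVendors = map[string]bool{
	"OpenShift":          true,
	"ROSA":               true,
	"ARO":                true,
	"ROKS":               true,
	"OpenShiftDedicated": true,
}

// isSupportedCluster decides whether a ManagedCluster should appear in the
// cluster dropdown, based on its vendor label.
func isSupportedCluster(labels map[string]string) bool {
	return supportedVendors[labels["vendor"]]
}

func main() {
	fmt.Println(isSupportedCluster(map[string]string{"vendor": "OpenShift"})) // true
	fmt.Println(isSupportedCluster(map[string]string{"vendor": "EKS"}))       // false
}
```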

Acceptance criteria:

  • Investigate if console-operator should pass info about which cluster are supported and unsupported to the frontend
  • Unsupported clusters should not appear in the cluster dropdown
  • Unsupported clusters based off
    • defined vendor label
    • non 4.x ocp clusters

Feature Overview

RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.

 

Requirements

  • RHEL 9.x sources for RHCOS builds starting with OCP 4.13 and RHEL 9.2.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

(Optional) Use Cases

  • 9.2 Preview via Layering - No longer necessary assuming we stay the course of going all in on 9.2

Assumptions

  • ...

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

PROBLEM

We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.

PROPOSAL

Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.

ACCEPTANCE CRITERIA

Image has been switched/included: 

DEPENDENCIES

The SCOS build payload.

RELATED RESOURCES

OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p

OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit

 

Acceptance Criteria

A stable OKD on SCOS is built and available to the community every sprint.

 

This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image

 

```

[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```

 

The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53

Overview 

HyperShift came to life to serve multiple goals: some are the main near-term goals, and some are secondary goals that serve us well long-term.

Main Goals for hosted control planes (HyperShift)

  • Optimize OpenShift for cost/footprint, which improves our competitive stance against the *KSes
  • Establish separation of concerns which makes it more resilient for SRE to manage their workload clusters (be it security, configuration management, etc).
  • Simplify and enhance the multi-cluster management experience, especially since multi-cluster is becoming an industry need nowadays.

Secondary Goals

HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]

 

Hosted Control Planes (HyperShift) Map 

To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:

 

  • Self-managed: In that case, Red Hat would provide hosted control planes as a service that is managed and SREed by the customer for their tenants (hence “self”-managed). In this management model, our external customers are the direct consumers of the multi-cluster control plane as a service. Once MCE is installed, they can start to self-service dedicated control planes.

 

  • Managed: This is OpenShift as a managed service, today we only “manage” the CP, and share the responsibility for other system components, more info here. To reduce management costs incurred by service delivery organizations which translates to operating profit (by reducing variable costs per control-plane), as well as to improve user experience, lower platform overhead (allow customers to focus mostly on writing applications and not concern themselves with infrastructure artifacts), and improve the cluster provisioning experience. HyperShift is shipped via MCE, and delivered to Red Hat managed SREs (same consumption route). However, for managed services, additional tooling needs to be refactored to support the new provisioning path. Furthermore, unlike self-managed where customers are free to bring their own observability stack, Red Hat managed SREs need to observe the managed fleet to ensure compliance with SLOs/SLIs/…

 

If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.

High-level Requirements

For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA: 

 

  • Hosted control planes fits well with our multi-cluster story (with MCE)
  • Hosted control planes APIs are stable for consumption  
  • Customers are not paying for control planes/infra components.  
  • Hosted control planes has an HA and a DR story
  • Hosted control planes is in parity with top-level add-on operators 
  • Hosted control planes reports metrics on usage/adoption
  • Hosted control planes is observable  
  • HyperShift as a backend to managed services is fully unblocked.

 

Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc. 

Hosted control planes fits well with our multi-cluster story

Multi-cluster is becoming an industry need today not because this is where the trend is going but because it's the only viable path today to solve for many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:

 

 

As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters. 

HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.

Thus the following stories are important for HyperShift: 

  • When lifecycling OpenShift clusters (for any OpenShift form factor) on any of the supported providers from MCE/ACM/OCM/CLI as a Cluster Service Consumer  (RH managed SRE, or self-manage SRE/admin):
  • I want to be able to use a consistent UI so I can manage and operate (observe, govern,...) a fleet of clusters.
  • I want to specify HA constraints (e.g., deploy my clusters in different regions) while ensuring acceptable QoS (e.g., latency boundaries) to ensure/reduce any potential downtime for my workloads. 
  • When operating OpenShift clusters (for any OpenShift form factor) on any of the supported provider from MCE/ACM/OCM/CLI as a Cluster Service Consumer  (RH managed SRE, or self-manage SRE/admin):
  • I want to be able to backup any critical data so I am able to restore them in case of hosting service cluster (management cluster) failure. 

Refs:

Hosted control planes APIs are stable for consumption.

 

HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed. 

 

Main user story: When lifecycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward-compatible API that is less susceptible to future changes so I can provide availability guarantees.

 

Ref: What are we missing in Core HyperShift for GA Readiness?

Customers are not paying for control planes/infra components. 

 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumptions

  • A customer will be able to associate a cluster as “Infrastructure only”.
    • E.g., one option: the management cluster has role=master and role=infra nodes only, and control planes are packed onto role=infra nodes.
    • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

HyperShift - proposed cuts from data plane

HyperShift has an HA and a DR story

When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-managed SRE/admin), I want to be able to migrate CPs from one hosting service cluster to another:

  • as means for disaster recovery in the case of total failure
  • so that scaling pressures on a management cluster can be mitigated or a management cluster can be decommissioned.

More information: 

 

Hosted control planes reports metrics on usage/adoption

To understand usage patterns and inform our decision-making for the product, we need to be able to measure adoption and assess usage.

See Hosted Control Planes (aka HyperShift) Strategy [Live Document]

Hosted control plane is observable  

Whether it's managed or self-managed, it's pertinent to report health metrics so we can create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.

HyperShift is in parity with top-level add-on operators

https://issues.redhat.com/browse/OCPPLAN-8901 

Unblock HyperShift as a backend to managed services

HyperShift for managed services is a strategic company goal, as it improves usability, feature set, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services are where we see the market growing (customers are looking to delegate platform overhead).

 

We should make sure our SD milestones are unblocked by the core team. 

 

Note 

This feature reflects HyperShift core readiness to be consumed. When all related epics and stories in this epic are complete, HyperShift can be considered ready to be consumed in GA form. This describes the readiness of core HyperShift to be consumed in GA form, not a date and not the GA itself.

- The GA date for self-managed will factor in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)

Epic Goal*

The goal is to split client certificate trust chains from the global Hypershift root CA.

 
Why is this important? (mandatory)

This is important to:

  • ensure a workload can be run on any kind of OCP flavor
  • reduce the blast radius in case of a sensitive material leak
  • separate trust to allow more granular control over client certificate authentication

 
Scenarios (mandatory) 

Provide details for user scenarios including actions to be performed, platform specifications, and user personas.  

  1. I would like to be able to run my workloads on any OpenShift-like platform.
    My workloads allow components to authenticate using client certificates based
    on a trust bundle that I am able to retrieve from the cluster.
  2. I don't want my users to have access to any CA bundle that would allow them
    to trust a random certificate from the cluster for client certificate authentication.

 
Dependencies (internal and external) (mandatory)

The Hypershift team needs to provide us with code reviews and merge the changes we are to deliver.

Contributing Teams(and contacts) (mandatory) 

  • Development - OpenShift Auth, Hypershift
  • Documentation - OpenShift Auth Docs team
  • QE - OpenShift Auth QE
  • PX - I have no idea what PX is
  • Others - others

Acceptance Criteria (optional)

The service account CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control plane.

Drawbacks or Risk (optional)

Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release

Done - Checklist (mandatory)

  • CI Testing - Basic e2e automation tests are merged and completing successfully
  • Documentation - Content development is complete.
  • QE - Test scenarios are written and executed successfully.
  • Technical Enablement - Slides are complete (if requested by PLM)
  • Engineering Stories Merged
  • All associated work items with the Epic are closed
  • Epic status should be “Release Pending” 
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Incomplete Features

When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release

Epic Goal

  • Enabling integration of single hub cluster to install both ARM and x86 spoke clusters
  • Enabling support for heterogeneous OCP clusters
  • document requirements and deployment flows
  • support in disconnected environments

Why is this important?

  • requested by clients

Scenarios

  1. Users manage both ARM and x86 machines; we should not require them to have two different hub clusters
  2. Users manage mixed-architecture clusters without requiring all the nodes to be of the same architecture

Acceptance Criteria

  • Process is well documented
  • we are able to install in a disconnected environment

We have a set of images

  • quay.io/edge-infrastructure/assisted-installer-agent:latest
  • quay.io/edge-infrastructure/assisted-installer-controller:latest
  • quay.io/edge-infrastructure/assisted-installer:latest

that should become multiarch images. This should be done both in upstream and downstream.

As a reference, we have built those images internally as multiarch and made them available as

  • registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

They can be consumed by the Assisted Service pod via the following env vars:

    - name: AGENT_DOCKER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
    - name: CONTROLLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
    - name: INSTALLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture (a sketch follows the reference link below).

Ref: https://github.com/openshift/enhancements/pull/1014
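As context, arch-based pinning of this kind is typically expressed today as node affinity against the kubernetes.io/arch node label. A minimal sketch for an operator Deployment, assuming illustrative names and an image that is itself a multiarch manifest list:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-operator          # illustrative name
  namespace: example-operators    # illustrative namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-operator
  template:
    metadata:
      labels:
        app: example-operator
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:           # the architectures this operator supports
                      - amd64
                      - arm64
      containers:
        - name: operator
          image: quay.io/example/operator:latest   # illustrative multiarch image
```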

 

Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.

A/C:

 - New OLM API version release
 - OLM API dependency updated in OLM Project
 - OLM Subscription API changes  downstreamed
 - OLM Controller changes  downstreamed
 - Changes manually tested on Cluster Bot

Feature Overview

We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.

Goals

  • Feature enhancements (performance, scale, configuration, UX, ...)
  • Modernization (incorporation and productization of new technologies)

Requirements

  • Core Networking Stability
  • Core Networking Performance and Scale
  • Core Networking Extensibility (Multus CNIs)
  • Core Networking UX (Observability)
  • Core Networking Security and Compliance

In Scope

  • Network Edge (ingress, DNS, LB)
  • SDN (CNI plugins, openshift-sdn, OVN, network policy, egressIP, egress Router, ...)
  • Networking Observability

Out of Scope

There are definitely grey areas, but in general:

  • CNV
  • Service Mesh
  • CNF

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.

Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.

Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.

Dependencies (internal and external):

Prioritized epics + deliverables (in scope / not in scope):

Not in scope:

Estimate (XS, S, M, L, XL, XXL):

Previous Work:

Open questions:

Acceptance criteria:

Epic Done Checklist:

  • CI - CI Job & Automated tests: <link to CI Job & automated tests>
  • Release Enablement: <link to Feature Enablement Presentation> 
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>
  • Notes for Done Checklist
    • Adding links to the above checklist with multiple teams contributing; select a meaningful reference for this Epic.
    • Checklist added to each Epic in the description, to be filled out as phases are completed - tracking progress towards “Done” for the Epic.

Description:

As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:

  • Number of routes/shard

Design 2 will be implemented as part of this story.

 

Acceptance Criteria:

  • Support for exporting the above mentioned metrics by Cluster Ingress Operator

Description:

As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from OpenShift clusters to Red Hat premises (a sketch of the corresponding recording rules follows this list):

  • Minimum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:min  : min(route_metrics_controller_routes_per_shard)
    • Gives the minimum value of Routes per Shard.
  • Maximum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:max  : max(route_metrics_controller_routes_per_shard)
    • Gives the maximum value of Routes per Shard.
  • Average Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:avg  : avg(route_metrics_controller_routes_per_shard)
    • Gives the average value of Routes per Shard.
  • Median Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:median  : quantile(0.5, route_metrics_controller_routes_per_shard)
    • Gives the median value of Routes per Shard.
  • Number of Routes summed by TLS Termination type
    • Recording Rule – cluster:openshift_route_info:tls_termination:sum : sum (openshift_route_info) by (tls_termination)
    • Gives the number of Routes for each tls_termination value. The possible values for tls_termination are edge, passthrough and reencrypt. 
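For reference, a minimal sketch of these recording rules expressed as a PrometheusRule object; the object name, namespace, and group name are illustrative, and where the rules actually live plus the telemetry allowlisting are governed by the Design Doc and the steps referenced below:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: route-metrics-telemetry            # illustrative name
  namespace: openshift-ingress-operator    # illustrative namespace
spec:
  groups:
    - name: route-metrics.rules            # illustrative group name
      rules:
        - record: cluster:route_metrics_controller_routes_per_shard:min
          expr: min(route_metrics_controller_routes_per_shard)
        - record: cluster:route_metrics_controller_routes_per_shard:max
          expr: max(route_metrics_controller_routes_per_shard)
        - record: cluster:route_metrics_controller_routes_per_shard:avg
          expr: avg(route_metrics_controller_routes_per_shard)
        - record: cluster:route_metrics_controller_routes_per_shard:median
          expr: quantile(0.5, route_metrics_controller_routes_per_shard)
        - record: cluster:openshift_route_info:tls_termination:sum
          expr: sum(openshift_route_info) by (tls_termination)
```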

The metrics should be allowlisted on the cluster side.

The steps described in Sending metrics via telemetry need to be followed, specifically step 5.

Depends on CFE-478.

Acceptance Criteria:

  • Support for sending the above mentioned metrics from OpenShift clusters to the Red Hat premises by allowlisting metrics on the cluster side

This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.

Epic Goal

  • Allow Operator authors to easily change the layout of the update graph in a single location so they can version/maintain/release it via git and have more approachable controls over graph vertices than today's replaces, skips and/or skipRange taxonomy
  • Allow Operator authors to have control over channel and bundle channel membership

Why is this important?

  • The imperative catalog maintenance approach so far with opm is being moved to a declarative format (OLM-2127 and OLM-1780) moving away from bundle-level controls but the update graph properties are still attached to a bundle
  • We've received feedback from the RHT internal developer community that maintaining and reasoning about the graph in the context of a single channel is still too hard, even with visualization tools
  • making the update graph easily changeable is important to deliver on some of the promises of declarative index configuration
  • The current interface for declarative index configuration still relies on skips, skipRange and replaces to shape the graph on a per-bundle level - this becomes too complex once channels contain many bundles; we need something at the package level (see the sketch after this list)
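For context, today's file-based catalog (declarative config) format already declares a channel's update graph in one place as an olm.channel blob; the package-level controls this epic asks for would build on top of that. A minimal sketch, with illustrative package, channel, and version names:

```yaml
schema: olm.channel
package: example-operator            # illustrative package name
name: stable
entries:
  - name: example-operator.v1.0.0
  - name: example-operator.v1.1.0
    replaces: example-operator.v1.0.0
  - name: example-operator.v1.1.1
    replaces: example-operator.v1.1.0
    skips:
      - example-operator.v1.1.0
    skipRange: ">=1.0.0 <1.1.1"
```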

Scenarios

  1. An Operator author wants to release a new version replacing the latest version published previously
  2. After additional post-GA testing an Operator author wants to establish a new update path to an existing released version from an older, released version
  3. After finding a bug post-GA an Operator author wants to temporarily remove a known to be problematic update path
  4. An automated system wants to push a bundle in between an existing update path as a result of an Operator (base) image rebuild (Freshmaker use case)
  5. A user wants to take a declarative graph definition and turn it into a graphical image for visually ensuring the graph looks like they want
  6. An Operator author wants to promote a certain bundle to an additional / different channel to indicate progress in maturity of the operator.

Acceptance Criteria

  • The declarative format has to be user readable and terse enough to make quick modifications
  • The declarative format should be machine writeable (Freshmaker)
  • The update graph is declared and modified in a text based format aligned with the declarative config
  • it has to be possible to add/remove edges at the leaves of the graph (releasing/unpublishing a new version)
  • it has to be possible to add/remove new vertices between existing edges (releasing/retracting a new update path)
  • it has to be possible to add/remove new edges in between existing vertices (releasing/unpublishing a version in between, Freshmaker use case)
  • it has to be possible to change the channel membership of a bundle after it's published (channel promotion)
  • CI - MUST be running successfully with tests automated
  • it has to be possible to add additional metadata later to implement OLM-2087 and OLM-259 if required

Dependencies (internal and external)

  1. Declarative Index Config (OLM-2127)

Previous Work:

  1. Declarative Index Config (OLM-1780)

Related work

Open questions:

  1. What other manipulation scenarios are required?
    1. Answer: deprecation of content in the spirit of OLM-2087
    2. Answer: cross-channel update hints as described in OLM-2059 if that implementation requires it

 

When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276

 

Jira Description

As an OPM maintainer, I want to downstream the PR for (OCP 4.12 ) and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).

Summary / Background

IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.

Acceptance Criteria

  • The changes in the PR are available for the releases which use FBC -> OCP 4.11, 4.12

Definition of Ready

  • PRs merged into downstream OCP repos branches 4.11/4.12

Definition of Done

  • We checked that the downstream images are with the changes applied (i.e.: we can try to verify in the same way that we checked if the changes were in the downstream for the fix OLM-2639 )

Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j

Then the command could be used in a manner similar to many k8s examples, like

```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```

Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011

tldr: three basic claims, the rest is explanation and one example

  1. We cannot improve long term maintainability solely by fixing bugs.
  2. Teams should be asked to produce designs for improving maintainability/debugability.
  3. Specific maintenance items (or investigation of maintenance items), should be placed into planning as peer to PM requests and explicitly prioritized against them.

While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.

One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.

I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.

We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.


Relevant links:

Epic Goal

  • Change the default value for the spec.tuningOptions.maxConnections field in the IngressController API, which configures the HAProxy maxconn setting, to 50000 (fifty thousand); an illustrative manifest is shown below.
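For reference, this is the same field a cluster administrator can already set explicitly on the default IngressController; a minimal sketch using the proposed new default as the value:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    maxConnections: 50000   # proposed new default; the current default is 20000
```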

Why is this important?

  • The maxconn setting constrains the number of simultaneous connections that HAProxy accepts. Beyond this limit, the kernel queues incoming connections. 
  • Increasing maxconn enables HAProxy to queue incoming connections intelligently.  In particular, this enables HAProxy to respond to health probes promptly while queueing other connections as needed.
  • The default setting of 20000 has been in place since OpenShift 3.5 was released in April 2017 (see BZ#1405440, commit, RHBA-2017:0884). 
  • Hardware capabilities have increased over time, and the current default is too low for typical modern machine sizes. 
  • Increasing the default setting improves HAProxy's performance at an acceptable cost in the common case. 

Scenarios

  1. As a cluster administrator who is installing OpenShift on typical hardware, I want OpenShift router to be tuned appropriately to take advantage of my hardware's capabilities.

Acceptance Criteria

  • CI is passing. 
  • The new default setting is clearly documented. 
  • A release note informs cluster administrators of the change to the default setting. 

Dependencies (internal and external)

  1. None.

Previous Work (Optional):

  1. The  haproxy-max-connections-tuning enhancement made maxconn configurable without changing the default.  The enhancement document details the tradeoffs in terms of memory for various settings of nbthreads and maxconn with various numbers of routes. 

Open questions::

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

 

OCP/Telco Definition of Done

Epic Template descriptions and documentation.

Epic Goal

Why is this important?

  • This regression is a major performance and stability issue and it has happened once before.

Drawbacks

  • The E2E test may be complex due to trying to determine what DNS pods are responding to DNS requests. This is straightforward using the chaos plugin.

Scenarios

  • CI Testing

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. SDN Team

Previous Work (Optional):

  1. N/A

Open questions::

  1. Where do these E2E test go? SDN Repo? DNS Repo?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
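A minimal sketch of what the enabled plugin could look like in the Corefile carried by the DNS operator's dns-default ConfigMap; the exact Corefile layout and how the operator renders it are assumptions here, and only the chaos line is the point of the example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default           # ConfigMap managed by the DNS operator (assumed layout)
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        # chaos answers CH-class TXT queries (e.g. hostname.bind.),
        # letting a test identify which DNS pod answered
        chaos
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```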

Feature Overview

  • This Section:* High-Level description of the feature ie: Executive Summary
  • Note: A Feature is a capability or a well defined set of functionality that delivers business value. Features can include additions or changes to existing functionality. Features can easily span multiple teams, and multiple releases.

 

Goals

  • This Section:* Provide high-level goal statement, providing user context and expected user outcome(s) for this feature

 

Requirements

  • This Section:* A list of specific needs or objectives that a Feature must deliver to satisfy the Feature.. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non MVP requirement slips, it does not shift the feature.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

 

(Optional) Use Cases

This Section: 

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

 

Questions to answer…

  • ...

 

Out of Scope

 

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

 

Assumptions

  • ...

 

Customer Considerations

  • ...

 

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console.  When viewing a Pod in the console, the field status.HostIP is not visible.

 

Acceptance criteria:

  • Make the pod's HostIP field visible in the pod details page, similarly to the PodIP field (see the sketch below)
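For reference, the field in question sits alongside podIP in the Pod status; the relevant portion of a pod fetched with `oc get pod <name> -o yaml` looks roughly like this (addresses are illustrative):

```yaml
status:
  phase: Running
  hostIP: 10.0.128.15    # node IP the console would surface (illustrative value)
  podIP: 10.131.0.27     # pod IP already shown today (illustrative value)
  podIPs:
    - ip: 10.131.0.27
```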

As a console user I want to have the option to:

  • Restart Deployment
  • Retry latest DeploymentConfig if it failed

 

For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding the 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet. A sketch of the patch follows the list below.

  • action is disabled if:
    • Deployment is paused
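A minimal sketch of the patch body described above, assuming it were applied with something like `oc patch deployment <name> --type merge --patch-file restart-patch.yaml` (the timestamp is illustrative):

```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2023-01-01T00:00:00Z"   # illustrative timestamp
```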

 

For DeploymentConfig we will add the 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment.phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.

  • action is enabled if:
    • latest revision of the ReplicationController resource is in Failed phase
  • action is disabled if:
    • latest revision of the ReplicationController resource is in Complete phase
    • DeploymentConfig does not have any rollouts
    • The DeploymentConfig is paused

 

Acceptance Criteria:

  • Add the 'Restart rollout' action button for the Deployment resource to both action menu and kebab menu
  • Add the 'Retry rollout' action button for the DeploymentConfig resource to both action menu and kebab menu

 

BACKGROUND:

OpenShift console will be updated to allow rollout restart deployment from the console itself.

Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (Deployment or DeploymentConfig) has an option to restart the rollout. That is why the customer wants to be able to perform the same action from the OpenShift console as from the CLI.

The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as it is done today through the CLI with the command “oc rollout restart deploy/<deployment-name>“.
Usually when developers change the ConfigMap that a deployment uses, they have to restart its pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu that performs the same action from the console as well.

Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit

When OCP is performing a cluster upgrade, the user should be notified about this fact.

There are a few possibilities for how to surface the cluster upgrade to users:

  • Display a console notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Global notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Have an alert firing for all the users of OCP stating the cluster is undergoing an upgrade. 

 

AC:

  • Console-operator will create a ConsoleNotification CR when the cluster is being upgraded (a sketch follows this list). Once the upgrade is done, console-operator will remove that CR. These are the three statuses based on which we determine whether the cluster is being upgraded.
  • Add unit tests
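A minimal sketch of the kind of ConsoleNotification the operator could create; the name, text, and colors are illustrative and would be defined by the console-operator implementation:

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: cluster-upgrade-in-progress   # illustrative name
spec:
  text: "This cluster is currently being upgraded."
  location: BannerTop
  color: "#ffffff"            # illustrative styling
  backgroundColor: "#0088ce"
```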

 

Note: We need to decide if we want to distinguish this particular notification by a different color? ccing Ali Mobrem 

 

Created from: https://issues.redhat.com/browse/RFE-3024

Cloned from OCPSTRAT-377 to represent the backport to 4.12

Backport questions:

 
1) What's the impact/cost to any other critical items on the next release? 
 
Installer and edge are mostly focused on activation/retention and working the list top-to-bottom without release blockers. This is an activation item highly coveted by SD and applicable in existing versions.
 
2) Is it a breaking change to the existing fleet?
 
No.
 
 

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Links:

Enhancement PR: https://github.com/openshift/enhancements/pull/1397 

API PR: https://github.com/openshift/api/pull/1460 

Ingress  Operator PR: https://github.com/openshift/cluster-ingress-operator/pull/928 

Background

Feature Goal: Support OpenShift installation in AWS Shared VPC scenario where AWS infrastructure resources (at least the Private Hosted Zone) belong to an account separate from the cluster installation target account.

The ingress operator is responsible for creating DNS records in AWS Route53 for cluster ingress. Prior to the implementation of this epic, the ingress operator doesn't have the capability to add DNS records into an existing Route 53 hosted zone in the shared VPC.

Epic Goal

  • Add support to the ingress operator for creating DNS records in preexisting Route53 private hosted zones for Shared VPC clusters

Non-Goals

  • Ingress operator support for day-2 operations (i.e. changes to the AWS IAM Role value after installation)  
  • E2E testing (will be handled by the Installer Team) 

Design

As described in the WIP PR https://github.com/openshift/cluster-ingress-operator/pull/928, the ingress operator will consume a new API field that contains the IAM Role ARN for configuring DNS records in the private hosted zone. If this field is present, then the ingress operator will use this account to create all private hosted zone records. The API fields will be described in the Enhancement PR.

The ingress operator code will accomplish this by defining a new provider implementation that wraps two other DNS providers, using one of them to publish records to the public zone and the other to publish records to the private zone.

External DNS Operator Impact

See NE-1299

AWS Load Balancer Operator (ALBO) Impact

See NE-1299

Why is this important?

  • Without this ingress operator support, OpenShift users are unable to create DNS records in a preexisting Route53 private hosted zone which means OpenShift users can't share the Route53 component with a Shared VPC
  • Shared VPCs are considered an AWS best practice

Scenarios

  1. ...

Acceptance Criteria

  • Unit tests must be written and automatically run in CI (E2E tests will be handled by the Installer Team)
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Ingress Operator creates DNS Records in preexisting Route53 private hosted zones for shared VPC Clusters
  • Network Edge Team has reviewed all of the related enhancements and code changes for Route53 in Shared VPC Clusters

Dependencies (internal and external)

  1. Installer Team is adding the new API fields required for enabling sharing of Route53 within Shared VPCs in https://issues.redhat.com/browse/CORS-2613
  2. Testing this epic requires having access to two AWS accounts

Previous Work (Optional):

  1. Significant discussion was done in this thread: https://redhat-internal.slack.com/archives/C68TNFWA2/p1681997102492889?thread_ts=1681837202.378159&cid=C68TNFWA2
  2. Slack channel #tmp-xcmbu-114

Open questions:

  1.  

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

 

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • Enable/confirm installation in AWS shared VPC scenario where Private Hosted Zone belongs to an account separate from the cluster installation target account

Why is this important?

  • AWS best practices suggest this setup

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Feature Overview (aka. Goal Summary)  

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.

In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing. 

This includes: 

  • Conditions
  • Some Logging 
  • Possibly Some Events 

While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic.  I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state". 

 

Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing

 

https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing

 

The current property description is:

configuration represents the current MachineConfig object for the machine config pool.

But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?

Feature Overview

Telecommunications providers continue to deploy OpenShift at the Far Edge. The acceleration of this adoption and the nature of existing Telecommunication infrastructure and processes drive the need to improve OpenShift provisioning speed at the Far Edge site and the simplicity of preparation and deployment of Far Edge clusters, at scale.

Goals

  • Simplicity The folks preparing and installing OpenShift clusters (typically SNO) at the Far Edge range in technical expertise from technician to barista. The preparation and installation phases need to be reduced to a human-readable script that can be utilized by a variety of non-technical operators. There should be as few steps as possible in both the preparation and installation phases.
  • Minimize Deployment Time A telecommunications provider technician or brick-and-mortar employee who is installing an OpenShift cluster, at the Far Edge site, needs to be able to do it quickly. The technician has to wait for the node to become in-service (CaaS and CNF provisioned and running) before they can move on to installing another cluster at a different site. The brick-and-mortar employee has other job functions to fulfill and can't stare at the server for 2 hours. The install time at the far edge site should be in the order of minutes, ideally less than 20m.
  • Utilize Telco Facilities Telecommunication providers have existing Service Depots where they currently prepare SW/HW prior to shipping servers to Far Edge sites. They have asked RH to provide a simple method to pre-install OCP onto servers in these facilities. They want to do parallelized batch installation to a set of servers so that they can put these servers into a pool from which any server can be shipped to any site. They also would like to validate and update servers in these pre-installed server pools, as needed.
  • Validation before Shipment Telecommunications Providers incur a large cost if forced to manage software failures at the Far Edge due to the scale and physical disparate nature of the use case. They want to be able to validate the OCP and CNF software before taking the server to the Far Edge site as a last minute sanity check before shipping the platform to the Far Edge site.
  • IPSec Support at Cluster Boot Some far edge deployments occur on an insecure network, and for that reason access to the host’s BMC is not allowed; additionally, an IPSec tunnel must be established before any traffic leaves the cluster once it’s at the Far Edge site. It is not possible to enable IPSec on the BMC NIC, and therefore even after OpenShift has booted the BMC is still not accessible.

Requirements

  • Factory Depot: Install OCP with minimal steps
    • Telecommunications Providers don't want an installation experience, just pick a version and hit enter to install
    • Configuration w/ DU Profile (PTP, SR-IOV, see telco engineering for details) as well as customer-specific addons (Ignition Overrides, MachineConfig, and other operators: ODF, FEC SR-IOV, for example)
    • The installation cannot increase the in-service OCP compute budget (don't install anything other than what is needed for DU)
    • Provide ability to validate previously installed OCP nodes
    • Provide ability to update previously installed OCP nodes
    • 100 parallel installations at Service Depot
  • Far Edge: Deploy OCP with minimal steps
    • Provide site specific information via usb/file mount or simple interface
    • Minimize time spent at far edge site by technician/barista/installer
    • Register with desired RHACM Hub cluster for ongoing LCM
  • Minimal ongoing maintenance of solution
    • Some, but not all telco operators, do not want to install and maintain an OCP / ACM cluster at Service Depot
  • The current IPSec solution requires a libreswan container to run on the host so that all N/S OCP traffic is encrypted. With the current IPSec solution this feature would need to support provisioning host-based containers.

 

A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts.  If a non MVP requirement slips, it does not shift the feature.

requirement Notes isMvp?
     
     
     

 

Describe Use Cases (if needed)

Telecommunications Service Provider Technicians will be rolling out OCP w/ a vDU configuration to new Far Edge sites, at scale. They will be working from a service depot where they will pre-install/pre-image a set of Far Edge servers to be deployed at a later date. When ready for deployment, a technician will take one of these generic-OCP servers to a Far Edge site, enter the site specific information, wait for confirmation that the vDU is in-service/online, and then move on to deploy another server to a different Far Edge site.

 

Retail employees in brick-and-mortar stores will install SNO servers and it needs to be as simple as possible. The servers will likely be shipped to the retail store, cabled and powered by a retail employee and the site-specific information needs to be provided to the system in the simplest way possible, ideally without any action from the retail employee.

 

Out of Scope

Q: how challenging will it be to support multi-node clusters with this feature?

Background, and strategic fit

< What does the person writing code, testing, documenting need to know? >

Assumptions

< Are there assumptions being made regarding prerequisites and dependencies?>

< Are there assumptions about hardware, software or people resources?>

Customer Considerations

< Are there specific customer environments that need to be considered (such as working with existing h/w and software)?>

< Are there Upgrade considerations that customers need to account for or that the feature should address on behalf of the customer?>

<Does the Feature introduce data that could be gathered and used for Insights purposes?>

Documentation Considerations

< What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)? >

< What does success look like?>

< Does this feature have doc impact?  Possible values are: New Content, Updates to existing content,  Release Note, or No Doc Impact>

< If unsure and no Technical Writer is available, please contact Content Strategy. If yes, complete the following.>

  • <What concepts do customers need to understand to be successful in [action]?>
  • <How do we expect customers will use the feature? For what purpose(s)?>
  • <What reference material might a customer want/need to complete [action]?>
  • <Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available. >
  • <What is the doc impact (New Content, Updates to existing content, or Release Note)?>

Interoperability Considerations

< Which other products and versions in our portfolio does this feature impact?>

< What interoperability test scenarios should be factored by the layered product(s)?>

Questions

Question Outcome
   

 

 

Epic Goal

  • Install SNO within 10 minutes

Why is this important?

  • SNO installation takes around 40+ minutes.
  • This makes SNO less appealing when compared to k3s/microshift.
  • We should analyze the SNO installation, figure out why it takes so long, and come up with ways to optimize it

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

  1. https://docs.google.com/document/d/1ULmKBzfT7MibbTS6Sy3cNtjqDX1o7Q0Rek3tAe1LSGA/edit?usp=sharing

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

This is a clone of issue OCPBUGS-14416. The following is the description of the original issue:

Description of problem:

When installing SNO with bootstrap in place the cluster-policy-controller hangs for 6 minutes waiting for the lease to be acquired. 

Version-Release number of selected component (if applicable):

 

How reproducible:

100%

Steps to Reproduce:

1. Run the PoC using the makefile here https://github.com/eranco74/bootstrap-in-place-poc
2. Observe the cluster-policy-controller logs post reboot

Actual results:

I0530 16:01:18.011988       1 leaderelection.go:352] lock is held by leaderelection.k8s.io/unknown and has not yet expired
I0530 16:01:18.012002       1 leaderelection.go:253] failed to acquire lease kube-system/cluster-policy-controller-lock
I0530 16:07:31.176649       1 leaderelection.go:258] successfully acquired lease kube-system/cluster-policy-controller-lock

Expected results:

Expected the bootstrap cluster-policy-controller to release the lease so that the cluster-policy-controller running post reboot won't have to wait for the lease to expire.

Additional info:

Suggested resolution for bootstrap in place: https://github.com/openshift/installer/pull/7219/files#diff-f12fbadd10845e6dab2999e8a3828ba57176db10240695c62d8d177a077c7161R44-R59

Complete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled

Epic Goal

  • Update OpenShift components that are owned by the Builds + Jenkins Team to use Kubernetes 1.25

Why is this important?

  • Our components need to be updated to ensure that they are using the latest bug/CVE fixes, features, and that they are API compatible with other OpenShift components.

Acceptance Criteria

  • Existing CI/CD tests must be passing

This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.

Today the links point at a rule-scoped page, but that page lacks information about recommended resolution.  You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.

We can implement by updating the template here to be:

fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)

or something like that.

 

unknowns

request is clear, solution/implementation to be further clarified

This epic contains all the Dynamic Plugins related stories for OCP release-4.11 

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

  •  

This story only covers API components. We will create a separate story for other utility functions.

Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.

We are generating the markdown from the dynamic-plugin-sdk using

yarn generate-doc

Here is the list of the API that the dynamic-plugin-sdk is exposing:

https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a

Acceptance Criteria:

  • Add missing jsdocs for the API that dynamic-plugin-sdk exposes

Out of Scope:

  • This does not include work for integrating the API docs into the OpenShift docs
  • This does not cover other public utilities, only components.

This epic contains all the Dynamic Plugins related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.

This would require updates in following repositories:

  1. openshift/api (add the v1 version and generate a new CRD)
  2. openshift/client-go (pick up the changes in the openshift/api repo and generate clients & informers for the new v1 version)
  3. openshift/console-operator repository will use both the new v1 version and v1alpha1 in the code and manifests folders.

AC:

  • both v1 and v1alpha1 ConsolePlugins should be passed to the console-config.yaml when the plugins are enabled and present on the cluster.

 

NOTE: This story does not include the conversion webhook change which will be created as a follow on story
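For illustration, a v1 ConsolePlugin manifest would look roughly like the following; this is a sketch of the GA API shape with illustrative names, and the exact fields should be confirmed against openshift/api:

```yaml
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: example-plugin            # illustrative name
spec:
  displayName: Example Plugin
  backend:
    type: Service
    service:
      name: example-plugin        # illustrative service
      namespace: example-plugin-ns
      port: 9443
      basePath: /
```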

`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.

This isn't documented today. We need to do that.

Acceptance Criteria

  • Add a note in the "SDK packages" section of the README about the existence of this package and its purpose
    • The purpose is to be a static utility delivery library that is intentionally not tied to OpenShift Console versions and is compatible with multiple versions of OpenShift Console

During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.

 

AC: Add `message` property to NotLoadedDynamicPluginInfo type.

Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.

Currently ResourceLink is exported but not ResourceIcon

 

AC:

  • Expose the ResourceIcon from public to dynamic-plugin-sdk
  • Add the component to the dynamic-demo-plugin
  • Add a CI test to check for the ResourceIcon component

 

Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.

There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.

We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.

 

AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.

when defining two proxy endpoints, 
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    - alias: forklift-inventory
      authorize: true
      service:
        name: forklift-inventory
        namespace: konveyor-forklift
        port: 8443
      type: Service
    - alias: forklift-must-gather-api
      authorize: true
      service:
        name: forklift-must-gather-api
        namespace: konveyor-forklift
        port: 8443
      type: Service
  service:
    basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api

but both proxy to the `forklift-must-gather-api` service

e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service

Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:

  • useAccessReviewAllowed (use useAccessReview instead)
  • useSafetyFirst

cc Andrew Ballantyne Bryan Florkiewicz 

Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`

To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.

If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.

The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.

The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.

I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.

Acceptance Criteria:

  • Deprecate the old extension (in docs, with date/stamp)
  • Make a new extension that applies a stricter type
  • Include this new extension next to the old one (with the error boundary around it)

The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx

We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.

This epic contains all the OLM related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64` etc. Based on the set of supported architectures, console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
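For reference, a fragment sketching how those architecture-support labels typically appear in an operator bundle's CSV metadata (and hence on its PackageManifest); the label keys follow the convention named above, while everything else is illustrative:

```yaml
metadata:
  name: example-operator.v1.0.0        # illustrative CSV name
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/os.linux: supported   # OS support labels follow the same pattern
```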

AC:

  1. Implement logic in the console's backend to read the set of architecture types from console-config.yaml and set it as a SERVER_FLAG.nodeArchitectures (Change similar to https://github.com/openshift/console/commit/39aabe171a2e89ed3757ac2146d252d087fdfd33)
  2. In OperatorHub, render only operators that are supported on at least one cluster node, based on the SERVER_FLAG.nodeArchitectures field implemented in CONSOLE-3242.

 

OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86

 

@jpoulin is good to ask about heterogeneous clusters.

This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface in OperatorHub only those operators that are supported on our nodes.

 

AC: 

  1. Implement logic in the console-operator that scans through all the nodes, builds a set of all the architecture types that the cluster nodes run on, and passes it to console-config.yaml (a rough sketch follows below)
  2. Add unit and e2e test cases in the console-operator repository.

 

@jpoulin is good to ask about heterogeneous clusters.
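
A minimal sketch of the kind of logic described above, assuming the nodes have already been listed via a standard client-go lister; names are illustrative and this is not the actual console-operator code:

package main

import (
    "fmt"
    "sort"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildNodeArchitectures returns the de-duplicated, sorted set of
// architectures found on the given nodes via the kubernetes.io/arch
// label. Illustrative sketch only; the real console-operator code may
// look different.
func buildNodeArchitectures(nodes []corev1.Node) []string {
    archSet := map[string]struct{}{}
    for _, node := range nodes {
        if arch, ok := node.Labels["kubernetes.io/arch"]; ok && arch != "" {
            archSet[arch] = struct{}{}
        }
    }
    archs := make([]string, 0, len(archSet))
    for arch := range archSet {
        archs = append(archs, arch)
    }
    sort.Strings(archs)
    return archs
}

func main() {
    nodes := []corev1.Node{
        {ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"kubernetes.io/arch": "amd64"}}},
        {ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"kubernetes.io/arch": "arm64"}}},
    }
    // The resulting set, e.g. [amd64 arm64], would be written into
    // console-config.yaml and surfaced to the frontend as
    // SERVER_FLAG.nodeArchitectures.
    fmt.Println(buildNodeArchitectures(nodes))
}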

An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.

As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content. 

 

Acceptance criteria:

  • Remove any unused scss / css content after revamping for dark mode

Epic Goal

  • Enable OpenShift IPI Installer to deploy OCP to a shared VPC in GCP.
  • The host project is where the VPC and subnets are defined. Those networks are shared to one or more service projects.
  • Objects created by the installer are created in the service project where possible. Firewall rules may be the only exception.
  • Documentation outlines the needed minimal IAM for both the host and service project.

Why is this important?

  • Shared VPCs are a feature of GCP to enable granular separation of duties for organizations that centrally manage networking but delegate other functions and separation of billing. This is used more often in larger organizations where separate teams manage subsets of the cloud infrastructure. Enterprises that use this model would also like to create IPI clusters so that they can leverage the features of IPI. Currently organizations that use Shared VPCs must use UPI and implement the features of IPI themselves. This is repetitive engineering of little value to the customer and an increased risk of drift from upstream IPI over time. As new features are built into IPI, organizations must become aware of those changes and implement them themselves instead of getting them "for free" during upgrades.

Scenarios

  1. Deploy cluster(s) into service project(s) on network(s) shared from a host project.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a user, I want to be able to:

  • skip creating service accounts in Terraform when using passthrough credentialsMode.
  • pass the installer service account to Terraform to be used as the service account for instances when using passthrough credentialsMode.

so that I can achieve

  • creating an IPI cluster using Shared VPC networks using a pre-created service account with the necessary permissions in the Host Project.

Acceptance Criteria:

Description of criteria:

  • Upstream documentation
  • Point 1
  • Point 2
  • Point 3

(optional) Out of Scope:

Detail about what is specifically not being delivered in the story

Engineering Details:

1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.

2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.

3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/

4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:

spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name

The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
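
A hedged end-to-end sketch using the field names proposed above (these are proposed, not confirmed, API fields; the repository URL and secret contents are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config
stringData:
  password: my-password
---
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: private-repo
spec:
  connectionConfig:
    url: https://nexus.example.com/repository/helm-hosted
    username: username
    password:
      secretName: secret-name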

5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus

Owner: Architect:

Story (Required)

As an OCP user, I would like to be able to install Helm charts from repos added to ODC with basic authentication fields populated.

Background (Required)

We need to support Helm installs for repos that have the basic authentication secret name and namespace configured.

Glossary

Out of scope

Updating the ProjectHelmChartRepository CRD, already done in a different story
Supporting the HelmChartRepository CR; this feature will be scoped first to project/namespace-scoped repos.

In Scope

<Defines what is included in this story>

Approach(Required)

If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.

Dependencies

None

Edge Case

NA

Acceptance Criteria

I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth

INVEST Checklist

Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated

Legend

Unknown
Verified
Unsatisfied

Epic Goal

  • Support manifest lists in image streams and the integrated registry. Clients should be able to pull/push manifest lists from/into the integrated registry. They also should be able to import images via `oc import-image` and then pull them from the internal registry.

Why is this important?

  • Manifest lists are becoming more and more popular. Customers want to mirror manifest lists into the registry and be able to pull them by digest.

Scenarios

  1. Manifest lists can be pushed into the integrated registry
  2. Imported manifest lists can be pulled from the integrated registry
  3. Image triggers work with manifest lists

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Existing functionality shouldn't change its behavior

Dependencies (internal and external)

  1. ...

Previous Work (Optional)

  1. https://github.com/openshift/enhancements/blob/master/enhancements/manifestlist/manifestlist-support.md

Open questions

  1. Can we merge creation of images without having the pruner?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

ACCEPTANCE CRITERIA

  • The ImageStream object should contain a new flag indicating that it refers to a manifest list
  • openshift-controller-manager uses new openshift/api code to import image streams
  • changing `importMode` of an image stream tag triggers a new import (i.e. updates generation in the tag spec)
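
For illustration, a hedged sketch of an image stream tag opting into manifest list import via `importMode` (the `PreserveOriginal` value comes from the manifest list enhancement linked above; treat the exact field names as subject to the final API):

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
  - name: latest
    from:
      kind: DockerImage
      name: quay.io/example/myapp:latest
    importPolicy:
      importMode: PreserveOriginal

Changing `importMode` on an existing tag would then bump the tag's generation and trigger a fresh import, per the acceptance criteria above.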

NOTES

This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work focused on bringing CoreOS/OCP layering into Hypershift, which has benefits such as:

 

 - removing or reducing the need for ignition

 - maintaining feature parity between self-driving and managed OCP models

 - adding additional functionality such as hotfixes

Currently not implemented; this will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like the regular MCD.

Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic.

 

Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof

Epic Goal

  • We need the installer to accept an LB type from the user and then set the type of LB in the following object:
    oc get ingress.config.openshift.io/cluster -o yaml
    Then we can fetch info from this object and reconcile the operator so the NLB changes are reflected.

 

This is an API change and we will consider this as a feature request.
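
Purely as a hedged illustration of the intent (the exact field name and placement are what this feature and its API review will decide), the installer input could look roughly like:

platform:
  aws:
    lbType: NLB

The installer would then propagate this choice into ingress.config.openshift.io/cluster, which the ingress operator reconciles as described above.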

Why is this important?

https://issues.redhat.com/browse/NE-799 Please check this for more details

 

Scenarios

https://issues.redhat.com/browse/NE-799 Please check this for more details

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. installer
  2. ingress operator

Previous Work (Optional):

 No

Open questions::

N/A

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to

  • minimize bugs,
  • reproduce and fix them faster and
  • pin down current behavior of the driver

Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:

  • fast feedback cycle (local test execution)
  • developer in-code documentation
  • easier onboarding for new contributors
  • lower resource consumption
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Description

As a user, in the topology view, I would like to be informed intuitively if any of the deployments have reached quota limits

Acceptance Criteria

  1. Show a yellow border around deployments if any of the deployments have reached the quota limit
  2. For deployments, if there are any errors associated with resource limits or quotas, include a warning alert in the side panel.
    1. If we know resource limits are the cause, include link to Edit resource limits
    2. If we know pod count is the cause, include a link to Edit pod count

Additional Details:

 

Refer below for more details 

Description

As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace

Acceptance Criteria

  1. Show an alert banner on the Topology and add page for this project/namespace when there is a RQ (Resource Quota) / ACRQ (Applied Cluster Resource Quota) issue
    PF guideline: https://www.patternfly.org/v4/components/alert/design-guidelines#using-alerts 
  2. The above alert should have a CTA link to the search page showing all RQs and ACRQs; if there is just one, show the details page for it instead
  3. In the RQ and ACRQ list views, show one more column called Status with details as shown in the project view.

Additional Details:

 

Refer below for more details 

Goal

Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.

Problem:

We have heard the following requests from customers and developer advocates:

  • Some admins do not want to provide access to the Developer Perspective from the console
  • Some admins do not want to provide non-priv users access to the Admin Perspective from the console

Acceptance criteria:

  1. Cluster administrator is able to "hide" the admin perspective for non-priv users
  2. Cluster administrator is able to "hide" the developer perspective for all users
  3. Be sure that User Preferences for individual users behave appropriately. If only one perspective is available, the perspective switcher is not needed.

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As an admin, I should be able to see a code snippet that shows how to add user perspectives

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives

To support the cluster-admin in configuring the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD).

Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML that helps the admin add user perspectives
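
A rough sketch of the kind of default YAML the sidebar snippet could offer, based on the customize-perspectives enhancement proposal linked above (field names are taken from the proposal and may differ in the final API):

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
    - id: dev
      visibility:
        state: Enabled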

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

Description

As an admin, I want to hide user perspective(s) based on the customization.

Acceptance Criteria

  1. Hide perspective(s) based on the customization
    1. When the admin perspective is disabled -> we hide the admin perspective for all unprivileged users
    2. When the dev perspective is disabled -> we hide the dev perspective for all users
  2. When all the perspectives are hidden from a user or for all users, show the Admin perspective by default

Additional Details:

Description

As an admin, I want to be able to use a form driven experience  to hide user perspective(s)

Acceptance Criteria

  1. Add checkboxes with the options
    1. Hide "Administrator" perspective for non-privileged users
    2.  Hide "Developer" perspective for all users
  2. The console configuration CR should be updated as per the selected option

Additional Details:

Description

As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users

Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

  1. Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Problem:

Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog.  The request is to change access for the cluster, not per user or persona.

Goal:

Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.

Why is it important?

Multiple customer requests.

Acceptance criteria:

  1. As a cluster admin, I can hide/disable access to the developer catalog for all users across all namespaces.
  2. As a cluster admin, I can hide/disable access to a specific sub-catalog in the developer catalog for all users across all namespaces.
    1. Builder Images
    2. Templates
    3. Helm Charts
    4. Devfiles
    5. Operator Backed

Notes

We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s)  from the Developer Catalog or the Dev catalog as a whole.

To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML that helps the admin add sub-catalogs or the whole dev catalog

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

Description

As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Description

As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.

Acceptance Criteria

  1. Hide all links to the sub-catalog(s) from the add page, topology actions, empty states, quick search, and the catalog itself
  2. The sub-catalog should show Not found if the user opens the sub-catalog directly
  3. The feature should not be hidden if a sub-catalog option is disabled

Additional Details:

Epic Goal

  • Facilitate the transition of OLM and its content to PSA enforcing the `restricted` security profile
  • Use the label sync'er to enforce the required security profile
  • Current content should work out-of-the-box as is
  • Upgrades should not be blocked

Why is this important?

  • PSA helps secure the cluster by enforcing certain security restrictions that the pod must meet to be scheduled
  • 4.12 will enforce the `restricted` profile, which will affect the deployment of operators in `openshift-*` namespaces 

Scenarios

  1. Admin installs operator in an `openshift-*` namespace that is not managed by the label sync'er -> label should be applied
  2. Admin installs operator in an `openshift-*` namespace that has a label asking the label sync'er to not reconcile it -> nothing changes

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Done only downstream
  • Transition documentation written and reviewed

Dependencies (internal and external)

  1. label syncher (still searching for the link)

Open questions::

  1. Is this only for openshift-* namespaces?

Resources

Stakeholders

  • Daniel S...?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.

Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will, by default, sync any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required before they are synced.

A/C:
 - OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
 - If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled 
 - The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
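
For reference, the label in question looks like this on an openshift-* namespace (the namespace name is a placeholder; in practice the OLM operator would apply the label automatically as described above):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"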

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.

This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up. 

 

 

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.

This epic tracks network tooling improvements for 4.12

A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.

Our estimation for this Epic is 1 engineer * 2 Sprints

WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.

 

Alert if any of the ovn-controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.

The metric updates every 2 minutes so please be mindful of this when creating the alert.

If the controller is disconnected for 10 minutes, fire an alert.

DoD: Merged to CNO and tested by QE
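
A hedged sketch of such an alerting rule (alert name, namespace, and severity are illustrative; the metric name and the 10 minute window come from the story above, and the `for` duration has to tolerate the 2 minute metric update interval):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-connectivity
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller.rules
    rules:
    - alert: OVNControllerSouthboundDatabaseDisconnected
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for at least 10 minutes.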

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • Come up with a consistent way to detect node down on OCP and hypershift. The current mechanism for OCP (probe port 9) does not work for hypershift, meaning hypershift node-down detection will take longer (~40 secs). We should aim to have a common mechanism for both. As well, we should consider alternatives to probing port 9, perhaps BFD or other detection.
  • Get clarification on node down detection times. Some customers have (apparently) asked for detection on the order of 100ms; the recommendation is to use multiple Egress IPs, so this may not be a hard requirement. Need clarification from PM/Customers.

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
 
With the introduction of gRPC as a means for determining the state of a given egress node, hypershift should
be able to leverage the SOCKS proxy to learn the state of each egress node.
 
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea

This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/

Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.

https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help

Incomplete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled

Place holder epic to track spontaneous task which does not deserve its own epic.

AWS has a hard limit of 100 OIDC providers globally. 
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causes the tests to fail.

 
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters. 

DoD:

At the moment, if the input etcd KMS encryption config (key and role) is invalid, we fail transparently.

We should check that both key and role are compatible/operational for a given cluster and surface the failure in a condition otherwise.

AC:

We have a connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.

This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722

We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.

 

Once the HostedCluster and NodePool are paused using the PausedUntil statement, the awsprivatelink controller will continue reconciling.

 

How to test this:

  • Deploy a private cluster
  • Put it in pause once deployed
  • Delete the AWSEndPointService and the Service from the HCP namespace
  • And wait for a reconciliation; the result is that they should not be recreated
  • Unpause it and wait for recreation.

Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.

We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.

Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.

Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.

The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).

Epic Goal

  • To improve the reliability of disk cleaning before installation and to provide the user with sufficient warning regarding the consequences of the cleaning

Why is this important?

  • Insufficient cleaning can lead to installation failure
  • Insufficient warning can lead to complaints of unexpected data loss

Scenarios

  1.  

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Description of the problem:

Cluster Installation fail if installation disk has lvm on raid:

Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?" 

How reproducible:

100%

Steps to reproduce:

1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)

Actual results:

Installation failed

Expected results:

Installation success

Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as

Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem? 

How reproducible:

Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.

List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME              MAJ:MIN   SIZE TYPE FSTYPE      KNAME MODEL            UUID                                   WWN                HCTL       VENDOR   STATE   TRAN PKNAME
loop0               7:0   125.9G loop xfs         loop0                  c080b47b-2291-495c-8cc0-2009ebc39839                                                       
loop1               7:1   885.5M loop squashfs    loop1                                                                                                             
sda                 8:0   894.3G disk             sda   INTEL SSDSC2KG96                                        0x55cd2e415235b2db 1:0:0:0    ATA      running sas  
|-sda1              8:1     250M part             sda1                                                          0x55cd2e415235b2db                                  sda
|-sda2              8:2     750M part ext2        sda2                   3aa73c72-e342-4a07-908c-a8a49767469d   0x55cd2e415235b2db                                  sda
|-sda3              8:3      49G part xfs         sda3                   ffc3ccfe-f150-4361-8ae5-f87b17c13ac2   0x55cd2e415235b2db                                  sda
|-sda4              8:4   394.2G part LVM2_member sda4                   Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db                                  sda
`-sda5              8:5     450G part LVM2_member sda5                   W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db                                  sda
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sda5
sdb                 8:16  894.3G disk             sdb   INTEL SSDSC2KG96                                        0x55cd2e415235b31b 1:0:1:0    ATA      running sas  
`-sdb1              8:17  894.3G part LVM2_member sdb1                   6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b                                  sdb
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdb1
sdc                 8:32  894.3G disk             sdc   INTEL SSDSC2KG96                                        0x55cd2e415235b652 1:0:2:0    ATA      running sas  
`-sdc1              8:33  894.3G part LVM2_member sdc1                   pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652                                  sdc
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdc1
sdd                 8:48  894.3G disk             sdd   INTEL SSDSC2KG96                                        0x55cd2e41521679b7 1:0:3:0    ATA      running sas  
`-sdd1              8:49  894.3G part LVM2_member sdd1                   exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7                                  sdd
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdd1
sr0                11:0     989M rom  iso9660     sr0   Virtual CDROM0   2022-06-17-18-18-33-00                                    0:0:0:0    AMI      running usb  

Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda

Actual results:

 The installation will fail with a message that indicates that it could not exclusively access /dev/sda

Expected results:

The installation should proceed and the cluster should start to install.

Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810

Epic Goal

  • Increase the success rate of our CI jobs
  • Improve debuggability / visibility of tests

Why is this important?

  • Failed presubmit jobs (required or optional) can prevent an already tested+approved PR from getting in
  • Failed periodic jobs interfere with our visibility into the stability of features

Epic Goal

Why is this important?

Scenarios
1. …

Acceptance Criteria

  • (Enter a list of Acceptance Criteria unique to the Epic)

Dependencies (internal and external)
1. …

Previous Work (Optional):
1. …

Open questions::
1. …

Done Checklist

  • CI - For new features (non-enablement), existing Multi-Arch CI jobs are not broken by the Epic
  • Release Enablement: <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - If the Epic is adding a new stream, downstream build attached to advisory: <link to errata>
  • QE - Test plans in Test Plan tracking software (e.g. Polarion, RQM, etc.): <link or reference to the Test Plan>
  • QE - Automated tests merged: <link or reference to automated tests>
  • QE - QE to verify documentation when testing
  • DOC - Downstream documentation merged: <link to meaningful PR>
  • All the stories, tasks, sub-tasks and bugs that belong to this epic need to have been completed and indicated by a status of 'Done'.

This is a clone of issue MULTIARCH-3683. The following is the description of the original issue:

Flags similar to these https://github.com/openshift/hypershift/blob/main/cmd/cluster/powervs/create.go#L57toL61 from the create command are missing in the destroy command, so the infra destroy functionality does not get the flags it needs to properly destroy infra with existing resources.

Description of problem:

check_pkt_length cannot be offloaded without
1) sFlow offload patches in Openvswitch
2) Hardware driver support.

Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.

Version-Release number of selected component (if applicable):

4.11/4.12

How reproducible:

Always

Steps to Reproduce:

1. Any flow that has check_pkt_len()
  5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
  6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
  4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
  12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)   

Actual results:

Poor performance due to upcalls when check_pkt_len() is not supported.

Expected results:

Good performance.

Additional info:

https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • Run OpenShift builds that do not execute as the "root" user on the host node.

Why is this important?

  • OpenShift builds require an elevated set of capabilities to build a container image
  • Builds currently run as root to maintain adequate performance
  • Container workloads should run as non-root from the host's perspective. Containers running as root are a known security risk.
  • Builds currently run as root and require a privileged container. See BUILD-225 for removing the privileged container requirement.

Scenarios

  1. Run BuildConfigs in a multi-tenant environment
  2. Run BuildConfigs in a heightened security environment/deployment

Acceptance Criteria

  • Developers can opt into running builds in a cri-o user namespace by providing an environment variable with a specific value.
  • When the correct environment variable is provided, builds run in a cri-o user namespace, and the build pod does not require the "privileged: true" security context.
  • User namespace builds can pass basic test scenarios for the Docker and Source strategy build.
  • Steps to run unprivileged builds are documented.

Dependencies (internal and external)

  1. Buildah supports running inside a non-privileged container
  2. CRI-O allows workloads to opt into running containers in user namespaces.

Previous Work (Optional):

  1. BUILD-225 - remove privileged requirement for builds.

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges

Acceptance Criteria

  • Developers can provide an environment variable to indicate the build should not use privileged containers
  • When the correct env var + value is specified, builds run in a user namespace (non-root on the host)

QE Impact

No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.

Docs Impact

We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.

PX Impact

This likely warrants an OpenShift blog post, potentially?

Notes

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • ...

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.

Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.

I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
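
Purely to illustrate the proposed swap (the real units are rendered from MCO templates and the exact unit names and dependency wiring may differ), a systemd drop-in expressing "configure-ovs waits for nodeip-configuration" would look roughly like:

# /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf
[Unit]
Wants=nodeip-configuration.service
After=nodeip-configuration.service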

Goal
Provide an indication that advanced features are used

Problem

Today, customers and RH don't have the information on the actual usage of advanced features.

Why is this important?

  1. Better focus upsell efforts
  2. Compliance information for customers that are not aware they are not using the right subscription

 

Prioritized Scenarios

In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode). 

Not in Scope

Integrate with subscription watch - will be done by the subscription watch team with our help.

Customers

All

Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions

What does success look like?

A clear indication in subscription watch for ODF usage (either essential or advanced). 

1. Proposed title of this feature request

  • Request to add a bool variable into telemetry which indicates the usage of any of the advanced features, like PV encryption, KMS encryption, external mode, etc.

2. What is the nature and description of the request?

  • Today, customers and RH don't have the information on the actual usage of advanced features. This feature will help RH to have a better indication on the statistics of customers using the advanced features and focus better on upsell efforts.

3. Why does the customer need this? (List the business requirements here)

  • As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions.

4. List any affected packages or components.

  • Telemetry

_____________________

Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173

 

Other Complete

This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled

As a developer, I would like to remove the random terraform provider because it is essentially unnecessary, and removing it would improve our build process.

 

The random Terraform provider is used in Azure & Azure Stack to create a random string. This could easily be done in Go code and passed in as a variable.

Removing an extra provider would decrease our build time and improve our build stability; builds are often failing due to timeouts.

 

The random string is used here in Azure (and similarly in Azure Stack):

https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L23-L27

 

One approach would be to generate the string in tfvars and pass it in as a terraform variable.
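
A minimal sketch of that approach, generating the suffix in Go and handing it to Terraform as a tfvars value instead of relying on the random provider (function and variable names are illustrative):

package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

// randomSuffix returns a lowercase alphanumeric string of length n,
// suitable for passing to Terraform via tfvars in place of the
// random_string resource. Illustrative sketch only.
func randomSuffix(n int) (string, error) {
    const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
    out := make([]byte, n)
    for i := range out {
        idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
        if err != nil {
            return "", err
        }
        out[i] = charset[idx.Int64()]
    }
    return string(out), nil
}

func main() {
    s, err := randomSuffix(6)
    if err != nil {
        panic(err)
    }
    // The value would then be written into the generated tfvars,
    // e.g. {"random_suffix": "<s>"}, and consumed as a plain variable.
    fmt.Println(s)
}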

Description of problem:

For OVNK to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2e's on the OpenShift side. This bug tracks the efforts to get this into 4.12 OCP.
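
For context, a minimal example of the Kubernetes Service field that OVN-Kubernetes needs to honour for this (standard upstream API, shown here only for reference; the service name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10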

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-15720. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14874. The following is the description of the original issue:

Description of problem:

Deploying a helm chart that features a values.schema.json using either the 2019-09 or 2020-12 (latest) revisions of JSON Schema results in the UI hanging on create with three dots loading... This is not the case if the YAML view is used, since I suppose this view is not trying to be clever and lets Helm validate the chart values against the schema itself.

Version-Release number of selected component (if applicable):

Reproduced in 4.13, probably affects other versions as well.

How reproducible:

100%

Steps to Reproduce:

1. Go to Helm tab.
2. Click create in top right and select Repository
3. Paste following into YAML view and click Create:

apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: reproducer
spec:
  connectionConfig:
    url: 'https://raw.githubusercontent.com/tumido/helm-backstage/blog2'

4. Go to the Helm tab again (if redirected elsewhere)
5. Click create in top right and select Helm Release
6. In catalog filter select Chart repositories: Reproducer
7. Click on the single tile available (Backstage) and click Create
8. Switch to Form view
9. Leave default values and click Create
10. Stare at the always loading screen that never proceeds further.

Actual results:

Expected results:

It installs and deploys the chart

Additional info:

This is caused by a JSON Schema containing a $schema key pointing to which revision of the JSON Schema standard should be used:

{
    "$schema": "https://json-schema.org/draft/2020-12/schema",
}

I've managed to trace this back to this react-jsonschema-form issue:

https://github.com/rjsf-team/react-jsonschema-form/issues/2241

It seems the library used here for validation doesn't support the 2019-09 draft or the most recent 2020-12 revision.

It happens only if the chart follows the JSON Schema standard and declares the revision properly.

Workarounds:

IMO best solution:
Helm form renderer should NOT do any validation, since it can't handle the schema properly. Instead, it should leave this job to the Helm backend. Helm validates the values against the schema when installing the chart anyways. The YAML view also does no validation. That one seems to do the job properly.
 
Currently, there is no formal requirement for charts admitted to the Helm curated catalog stating that the most recent supported JSON Schema revision is 4 years old and the 2 later revisions are not supported.

Also, the Form UI should not just hang on submit. Instead, it should at least fail gracefully.

 

Related to:

https://github.com/janus-idp/helm-backstage/issues/64#issuecomment-1587678319

Description of problem:

When deleting a BYOH node in Platform:none, as well as in an Azure IPI cluster, the node gets reconciled correctly; however, when added back to the cluster it stays in Ready,SchedulingDisabled. When checking the WMCO logs, we can observe the following log:

{"level":"error","ts":"2022-12-14T16:14:31Z","msg":"Reconciler error","controller":"configmap","controllerGroup":"","controllerKind":"ConfigMap","configMap":{"name":"windows-instances","namespace":"openshift-windows-machine-config-operator"},"namespace":"openshift-windows-machine-config-operator","name":"windows-instances","reconcileID":"d66a3142-d52c-43f5-8a42-214ce9c88417","error":"error configuring host with address 10.0.55.21: configuring node network failed: error waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation for byoh-2019: timeout waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation: timed out waiting for the condition"

And when checking the node's annotation, it is indeed missing:

$ oc get nodes byoh-2019 -o=jsonpath="{.metadata.annotations}"
{"volumes.kubernetes.io/controller-managed-attach-detach":"true","windowsmachineconfig.openshift.io/desired-version":"7.0.0-16f486a","windowsmachineconfig.openshift.io/pub-key-hash":"1df2c166b1c401180523270e9cf6bc2cd2724b9279ea65668a3b95298525a0f5","windowsmachineconfig.openshift.io/username":"wx4EBwMICL6qT+4RY8tgbx4hiRmQdHlwUsHgVGCTVY7S5gG/G5gb/Wzv0JBLhNP9\u003cwmcoMarker\u003ejlmI5ExHPYFrd2Fw6Lxe/6PKEE5/vYAhZ2n1Z2nBIoa1xN1/HEaXhqR2CuXNe7Ez\u003cwmcoMarker\u003eg2Hg+gA=\u003cwmcoMarker\u003e=ubWA"}

Tested in Azure IPI and Platform:None; in both cases the issue was reproduced.

Version-Release number of selected component (if applicable):

$ oc get cm -n openshift-windows-machine-config-operator 
NAME                                   DATA   AGE
kube-root-ca.crt                       1      10h
openshift-service-ca.crt               1      10h
windows-instances                      2      9h
windows-machine-config-operator-lock   0      6h24m
windows-services-7.0.0-16f486a         2      6h23m
$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-rc.4   True        False         6h48m   Cluster version is 4.12.0-rc.4

How reproducible:


Steps to Reproduce:

1. Deploy a OCP 4.11 cluster with WMCO 6.0.0
2. Add one or two byoh nodes to the cluster
3. Upgrade the cluster to OCP 4.12, and later WMCO to 7.0.0
4. Remove one of the byoh nodes using: oc delete node <byoh-node-id>
5. Wait for reconciliation to bring the node back

Actual results:

The deleted node gets re-added but stays in Ready,SchedulingDisabled and the workloads left in Pending state.

Expected results:

The node gets properly added to the cluster and stays in Ready.

Additional info:


Description of problem:

Agent-based installation fails during the 3+1 deployment. I found that the machine-api-operator is degraded because the minimum worker replica count is 2, while for a 3+1 deployment we define only one worker node.

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1. Create agent.iso (openshift-install agent create image) using install-config.yaml and agent-config.yaml (PFA sample files)
2. Deploy a 3+1 cluster using agent.iso
3. Execute "openshift-install agent wait-for install-complete" command to wait for install complete. 

Actual results:

Getting below error:
ERROR Cluster operator kube-controller-manager Degraded is True with GarbageCollector_Error: GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host 
INFO Cluster operator machine-api Progressing is True with SyncingResources: Progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 
ERROR Cluster operator machine-api Degraded is True with SyncingFailed: Failed when progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 because minimum worker replica count (2) not yet met: current running replicas 1, waiting for [] 
INFO Cluster operator machine-api Available is False with Initializing: Operator is initializing 
INFO Cluster operator monitoring Available is False with UpdatingPrometheusOperatorFailed: Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error. 
ERROR Cluster operator monitoring Degraded is True with UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: updating prometheus operator: reconciling Prometheus Operator Admission Webhook Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator-admission-webhook: got 1 unavailable replicas 
INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack. 
INFO Cluster operator network ManagementStateDegraded is False with :  
ERROR Cluster initialization failed because one or more operators are not functioning properly. 
ERROR 				The cluster should be accessible for troubleshooting as detailed in the documentation linked below, 
ERROR 				https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html 

Expected results:

3+1 deployment should be successful.

Additional info:

I found that there is a condition in the machine-api-operator that checks that the worker node count is at least 2, which is preventing the 3+1 deployment.
https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/sync.go#L322 

Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.

The previous bump was OCPBUGS-2997.

This is a clone of issue OCPBUGS-17365. The following is the description of the original issue:

When we update a Secret referenced in the BareMetalHost, an immediate reconcile of the corresponding BMH is not triggered. In most states we requeue each CR after a timeout, so we should eventually see the changes.

In the case of BMC Secrets, this has been broken since the fix for OCPBUGS-1080 in 4.12.

This is a clone of issue OCPBUGS-5306. The following is the description of the original issue:

Description of problem:

One old machine is stuck in Deleting and many cluster operators become degraded when doing master replacement on a cluster with OVN network

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2023-01-02-175114

How reproducible:

always after several times

Steps to Reproduce:

1.Install a cluster 
liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2023-01-02-175114   True        False         30m     Cluster version is 4.12.0-0.nightly-2023-01-02-175114
liuhuali@Lius-MacBook-Pro huali-test % oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      33m     
baremetal                                  4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
cloud-controller-manager                   4.12.0-0.nightly-2023-01-02-175114   True        False         False      84m     
cloud-credential                           4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
cluster-api                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
cluster-autoscaler                         4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
config-operator                            4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
console                                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      33m     
control-plane-machine-set                  4.12.0-0.nightly-2023-01-02-175114   True        False         False      79m     
csi-snapshot-controller                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
dns                                        4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
etcd                                       4.12.0-0.nightly-2023-01-02-175114   True        False         False      79m     
image-registry                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      74m     
ingress                                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      74m     
insights                                   4.12.0-0.nightly-2023-01-02-175114   True        False         False      21m     
kube-apiserver                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      77m     
kube-controller-manager                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      77m     
kube-scheduler                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      77m     
kube-storage-version-migrator              4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
machine-api                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      75m     
machine-approver                           4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
machine-config                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      74m     
marketplace                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
monitoring                                 4.12.0-0.nightly-2023-01-02-175114   True        False         False      72m     
network                                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      83m     
node-tuning                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      80m     
openshift-apiserver                        4.12.0-0.nightly-2023-01-02-175114   True        False         False      75m     
openshift-controller-manager               4.12.0-0.nightly-2023-01-02-175114   True        False         False      76m     
openshift-samples                          4.12.0-0.nightly-2023-01-02-175114   True        False         False      22m     
operator-lifecycle-manager                 4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2023-01-02-175114   True        False         False      75m     
platform-operators-aggregated              4.12.0-0.nightly-2023-01-02-175114   True        False         False      74m     
service-ca                                 4.12.0-0.nightly-2023-01-02-175114   True        False         False      81m     
storage                                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      74m     
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE     TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-0                  Running   m6i.xlarge   us-east-2   us-east-2a   85m
huliu-aws4d2-fcks7-master-1                  Running   m6i.xlarge   us-east-2   us-east-2b   85m
huliu-aws4d2-fcks7-master-2                  Running   m6i.xlarge   us-east-2   us-east-2a   85m
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running   m6i.xlarge   us-east-2   us-east-2a   80m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running   m6i.xlarge   us-east-2   us-east-2a   80m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running   m6i.xlarge   us-east-2   us-east-2b   80m
liuhuali@Lius-MacBook-Pro huali-test % oc get controlplanemachineset
NAME      DESIRED   CURRENT   READY   UPDATED   UNAVAILABLE   STATE    AGE
cluster   3         3         3       3                       Active   86m

2. Edit the controlplanemachineset, changing instanceType to another value to trigger a RollingUpdate 
liuhuali@Lius-MacBook-Pro huali-test % oc edit controlplanemachineset cluster
controlplanemachineset.machine.openshift.io/cluster edited
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE          TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-0                  Running        m6i.xlarge   us-east-2   us-east-2a   86m
huliu-aws4d2-fcks7-master-1                  Running        m6i.xlarge   us-east-2   us-east-2b   86m
huliu-aws4d2-fcks7-master-2                  Running        m6i.xlarge   us-east-2   us-east-2a   86m
huliu-aws4d2-fcks7-master-mbgz6-0            Provisioning   m5.xlarge    us-east-2   us-east-2a   5s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running        m6i.xlarge   us-east-2   us-east-2a   81m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running        m6i.xlarge   us-east-2   us-east-2a   81m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running        m6i.xlarge   us-east-2   us-east-2b   81m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE      TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-0                  Deleting   m6i.xlarge   us-east-2   us-east-2a   92m
huliu-aws4d2-fcks7-master-1                  Running    m6i.xlarge   us-east-2   us-east-2b   92m
huliu-aws4d2-fcks7-master-2                  Running    m6i.xlarge   us-east-2   us-east-2a   92m
huliu-aws4d2-fcks7-master-mbgz6-0            Running    m5.xlarge    us-east-2   us-east-2a   5m36s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running    m6i.xlarge   us-east-2   us-east-2a   87m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running    m6i.xlarge   us-east-2   us-east-2a   87m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running    m6i.xlarge   us-east-2   us-east-2b   87m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE         TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-1                  Running       m6i.xlarge   us-east-2   us-east-2b   101m
huliu-aws4d2-fcks7-master-2                  Running       m6i.xlarge   us-east-2   us-east-2a   101m
huliu-aws4d2-fcks7-master-mbgz6-0            Running       m5.xlarge    us-east-2   us-east-2a   15m
huliu-aws4d2-fcks7-master-nbt9g-1            Provisioned   m5.xlarge    us-east-2   us-east-2b   3m1s
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running       m6i.xlarge   us-east-2   us-east-2a   96m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running       m6i.xlarge   us-east-2   us-east-2a   96m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running       m6i.xlarge   us-east-2   us-east-2b   96m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE      TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-1                  Deleting   m6i.xlarge   us-east-2   us-east-2b   149m
huliu-aws4d2-fcks7-master-2                  Running    m6i.xlarge   us-east-2   us-east-2a   149m
huliu-aws4d2-fcks7-master-mbgz6-0            Running    m5.xlarge    us-east-2   us-east-2a   62m
huliu-aws4d2-fcks7-master-nbt9g-1            Running    m5.xlarge    us-east-2   us-east-2b   50m
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running    m6i.xlarge   us-east-2   us-east-2a   144m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running    m6i.xlarge   us-east-2   us-east-2a   144m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running    m6i.xlarge   us-east-2   us-east-2b   144m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                         PHASE      TYPE         REGION      ZONE         AGE
huliu-aws4d2-fcks7-master-1                  Deleting   m6i.xlarge   us-east-2   us-east-2b   4h12m
huliu-aws4d2-fcks7-master-2                  Running    m6i.xlarge   us-east-2   us-east-2a   4h12m
huliu-aws4d2-fcks7-master-mbgz6-0            Running    m5.xlarge    us-east-2   us-east-2a   166m
huliu-aws4d2-fcks7-master-nbt9g-1            Running    m5.xlarge    us-east-2   us-east-2b   153m
huliu-aws4d2-fcks7-worker-us-east-2a-m279f   Running    m6i.xlarge   us-east-2   us-east-2a   4h7m
huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps   Running    m6i.xlarge   us-east-2   us-east-2a   4h7m
huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz   Running    m6i.xlarge   us-east-2   us-east-2b   4h7m

3. master-1 is stuck in Deleting, many cluster operators become degraded, and many pods cannot reach Running
liuhuali@Lius-MacBook-Pro huali-test % oc get co     
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2023-01-02-175114   True        True          True       9s      APIServerDeploymentDegraded: 1 of 4 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7b65bbc76b-mxl99 pod)...
baremetal                                  4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
cloud-controller-manager                   4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h11m   
cloud-credential                           4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
cluster-api                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
cluster-autoscaler                         4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
config-operator                            4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h9m    
console                                    4.12.0-0.nightly-2023-01-02-175114   False       False         False      150m    RouteHealthAvailable: console route is not admitted
control-plane-machine-set                  4.12.0-0.nightly-2023-01-02-175114   True        True          False      4h7m    Observed 1 replica(s) in need of update
csi-snapshot-controller                    4.12.0-0.nightly-2023-01-02-175114   True        True          False      4h9m    CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods...
dns                                        4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
etcd                                       4.12.0-0.nightly-2023-01-02-175114   True        True          True       4h7m    GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal...
image-registry                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h2m    
ingress                                    4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h2m    
insights                                   4.12.0-0.nightly-2023-01-02-175114   True        False         False      3h8m    
kube-apiserver                             4.12.0-0.nightly-2023-01-02-175114   True        True          True       4h5m    GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal
kube-controller-manager                    4.12.0-0.nightly-2023-01-02-175114   True        False         True       4h5m    GarbageCollectorDegraded: error querying alerts: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp 172.30.19.115:9091: i/o timeout
kube-scheduler                             4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h5m    
kube-storage-version-migrator              4.12.0-0.nightly-2023-01-02-175114   True        False         False      162m    
machine-api                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h3m    
machine-approver                           4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
machine-config                             4.12.0-0.nightly-2023-01-02-175114   False       False         True       139m    Cluster not available for [{operator 4.12.0-0.nightly-2023-01-02-175114}]: error during waitForDeploymentRollout: [timed out waiting for the condition, deployment machine-config-controller is not ready. status: (replicas: 1, updated: 1, ready: 0, unavailable: 1)]
marketplace                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h8m    
monitoring                                 4.12.0-0.nightly-2023-01-02-175114   False       True          True       144m    reconciling Prometheus Operator Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator: got 1 unavailable replicas
network                                    4.12.0-0.nightly-2023-01-02-175114   True        True          False      4h11m   DaemonSet "/openshift-ovn-kubernetes/ovnkube-master" is not available (awaiting 1 nodes)...
node-tuning                                4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h7m    
openshift-apiserver                        4.12.0-0.nightly-2023-01-02-175114   False       True          False      151m    APIServicesAvailable: "apps.openshift.io.v1" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request...
openshift-controller-manager               4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h4m    
openshift-samples                          4.12.0-0.nightly-2023-01-02-175114   True        False         False      3h10m   
operator-lifecycle-manager                 4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h9m    
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h9m    
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2023-01-02-175114   True        False         False      2m44s   
platform-operators-aggregated              4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h2m    
service-ca                                 4.12.0-0.nightly-2023-01-02-175114   True        False         False      4h9m    
storage                                    4.12.0-0.nightly-2023-01-02-175114   True        True          False      4h2m    AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods...
liuhuali@Lius-MacBook-Pro huali-test % 


liuhuali@Lius-MacBook-Pro huali-test % oc get pod --all-namespaces|grep -v Running
NAMESPACE                                          NAME                                                                       READY   STATUS              RESTARTS         AGE
openshift-apiserver                                apiserver-5cbdf985f9-85z4t                                                 0/2     Init:0/1            0                155m
openshift-authentication                           oauth-openshift-5c46d6658b-lkbjj                                           0/1     Pending             0                156m
openshift-cloud-credential-operator                pod-identity-webhook-77bf7c646d-4rtn8                                      0/1     ContainerCreating   0                156m
openshift-cluster-api                              capa-controller-manager-d484bc464-lhqbk                                    0/1     ContainerCreating   0                156m
openshift-cluster-csi-drivers                      aws-ebs-csi-driver-controller-5668745dcb-jc7fm                             0/11    ContainerCreating   0                156m
openshift-cluster-csi-drivers                      aws-ebs-csi-driver-operator-5d6b9fbd77-827vs                               0/1     ContainerCreating   0                156m
openshift-cluster-csi-drivers                      shared-resource-csi-driver-operator-866d897954-z77gz                       0/1     ContainerCreating   0                156m
openshift-cluster-csi-drivers                      shared-resource-csi-driver-webhook-d794748dc-kctkn                         0/1     ContainerCreating   0                156m
openshift-cluster-samples-operator                 cluster-samples-operator-754758b9d7-nbcc9                                  0/2     ContainerCreating   0                156m
openshift-cluster-storage-operator                 csi-snapshot-controller-6d9c448fdd-wdb7n                                   0/1     ContainerCreating   0                156m
openshift-cluster-storage-operator                 csi-snapshot-webhook-6966f555f8-cbdc7                                      0/1     ContainerCreating   0                156m
openshift-console-operator                         console-operator-7d8567876b-nxgpj                                          0/2     ContainerCreating   0                156m
openshift-console                                  console-855f66f4f8-q869k                                                   0/1     ContainerCreating   0                156m
openshift-console                                  downloads-7b645b6b98-7jqfw                                                 0/1     ContainerCreating   0                156m
openshift-controller-manager                       controller-manager-548c7f97fb-bl68p                                        0/1     Pending             0                156m
openshift-etcd                                     installer-13-ip-10-0-76-132.us-east-2.compute.internal                     0/1     ContainerCreating   0                9m39s
openshift-etcd                                     installer-3-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h13m
openshift-etcd                                     installer-4-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h12m
openshift-etcd                                     installer-5-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h7m
openshift-etcd                                     installer-6-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h1m
openshift-etcd                                     installer-8-ip-10-0-48-21.us-east-2.compute.internal                       0/1     Completed           0                168m
openshift-etcd                                     revision-pruner-10-ip-10-0-48-21.us-east-2.compute.internal                0/1     ContainerCreating   0                160m
openshift-etcd                                     revision-pruner-10-ip-10-0-63-159.us-east-2.compute.internal               0/1     Completed           0                160m
openshift-etcd                                     revision-pruner-11-ip-10-0-48-21.us-east-2.compute.internal                0/1     ContainerCreating   0                159m
openshift-etcd                                     revision-pruner-11-ip-10-0-63-159.us-east-2.compute.internal               0/1     Completed           0                159m
openshift-etcd                                     revision-pruner-11-ip-10-0-79-159.us-east-2.compute.internal               0/1     Completed           0                156m
openshift-etcd                                     revision-pruner-12-ip-10-0-48-21.us-east-2.compute.internal                0/1     ContainerCreating   0                156m
openshift-etcd                                     revision-pruner-12-ip-10-0-63-159.us-east-2.compute.internal               0/1     Completed           0                156m
openshift-etcd                                     revision-pruner-12-ip-10-0-79-159.us-east-2.compute.internal               0/1     Completed           0                156m
openshift-etcd                                     revision-pruner-13-ip-10-0-48-21.us-east-2.compute.internal                0/1     ContainerCreating   0                155m
openshift-etcd                                     revision-pruner-13-ip-10-0-63-159.us-east-2.compute.internal               0/1     Completed           0                155m
openshift-etcd                                     revision-pruner-13-ip-10-0-76-132.us-east-2.compute.internal               0/1     ContainerCreating   0                10m
openshift-etcd                                     revision-pruner-13-ip-10-0-79-159.us-east-2.compute.internal               0/1     Completed           0                155m
openshift-etcd                                     revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                169m
openshift-etcd                                     revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                3h57m
openshift-etcd                                     revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                168m
openshift-etcd                                     revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                168m
openshift-etcd                                     revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                168m
openshift-etcd                                     revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                168m
openshift-etcd                                     revision-pruner-9-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                166m
openshift-etcd                                     revision-pruner-9-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                166m
openshift-kube-apiserver                           installer-6-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h4m
openshift-kube-apiserver                           installer-7-ip-10-0-48-21.us-east-2.compute.internal                       0/1     Completed           0                168m
openshift-kube-apiserver                           installer-9-ip-10-0-76-132.us-east-2.compute.internal                      0/1     ContainerCreating   0                9m52s
openshift-kube-apiserver                           revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                169m
openshift-kube-apiserver                           revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                3h59m
openshift-kube-apiserver                           revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                168m
openshift-kube-apiserver                           revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                168m
openshift-kube-apiserver                           revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                166m
openshift-kube-apiserver                           revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                166m
openshift-kube-apiserver                           revision-pruner-8-ip-10-0-79-159.us-east-2.compute.internal                0/1     Completed           0                156m
openshift-kube-apiserver                           revision-pruner-9-ip-10-0-48-21.us-east-2.compute.internal                 0/1     ContainerCreating   0                155m
openshift-kube-apiserver                           revision-pruner-9-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                155m
openshift-kube-apiserver                           revision-pruner-9-ip-10-0-76-132.us-east-2.compute.internal                0/1     ContainerCreating   0                9m54s
openshift-kube-apiserver                           revision-pruner-9-ip-10-0-79-159.us-east-2.compute.internal                0/1     Completed           0                155m
openshift-kube-controller-manager                  installer-6-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h11m
openshift-kube-controller-manager                  installer-7-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h7m
openshift-kube-controller-manager                  installer-8-ip-10-0-48-21.us-east-2.compute.internal                       0/1     Completed           0                169m
openshift-kube-controller-manager                  installer-8-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h4m
openshift-kube-controller-manager                  installer-8-ip-10-0-79-159.us-east-2.compute.internal                      0/1     Completed           0                156m
openshift-kube-controller-manager                  revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                4h13m
openshift-kube-controller-manager                  revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                4h10m
openshift-kube-controller-manager                  revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                169m
openshift-kube-controller-manager                  revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                4h5m
openshift-kube-controller-manager                  revision-pruner-8-ip-10-0-76-132.us-east-2.compute.internal                0/1     ContainerCreating   0                4m36s
openshift-kube-controller-manager                  revision-pruner-8-ip-10-0-79-159.us-east-2.compute.internal                0/1     Completed           0                156m
openshift-kube-scheduler                           installer-6-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h11m
openshift-kube-scheduler                           installer-7-ip-10-0-48-21.us-east-2.compute.internal                       0/1     Completed           0                169m
openshift-kube-scheduler                           installer-7-ip-10-0-63-159.us-east-2.compute.internal                      0/1     Completed           0                4h10m
openshift-kube-scheduler                           installer-7-ip-10-0-79-159.us-east-2.compute.internal                      0/1     Completed           0                156m
openshift-kube-scheduler                           revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                4h13m
openshift-kube-scheduler                           revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal                 0/1     Completed           0                169m
openshift-kube-scheduler                           revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal                0/1     Completed           0                4h10m
openshift-kube-scheduler                           revision-pruner-7-ip-10-0-76-132.us-east-2.compute.internal                0/1     ContainerCreating   0                4m36s
openshift-kube-scheduler                           revision-pruner-7-ip-10-0-79-159.us-east-2.compute.internal                0/1     Completed           0                156m
openshift-machine-config-operator                  machine-config-controller-55b4d497b6-p89lb                                 0/2     ContainerCreating   0                156m
openshift-marketplace                              qe-app-registry-w8gnc                                                      0/1     ContainerCreating   0                148m
openshift-monitoring                               prometheus-operator-776bd79f6d-vz7q5                                       0/2     ContainerCreating   0                156m
openshift-multus                                   multus-admission-controller-5f88d77b65-nzmj5                               0/2     ContainerCreating   0                156m
openshift-oauth-apiserver                          apiserver-7b65bbc76b-mxl99                                                 0/1     Init:0/1            0                154m
openshift-operator-lifecycle-manager               collect-profiles-27879975-fpvzk                                            0/1     Completed           0                3h21m
openshift-operator-lifecycle-manager               collect-profiles-27879990-86rk8                                            0/1     Completed           0                3h6m
openshift-operator-lifecycle-manager               collect-profiles-27880005-bscc4                                            0/1     Completed           0                171m
openshift-operator-lifecycle-manager               collect-profiles-27880170-s8cbj                                            0/1     ContainerCreating   0                4m37s
openshift-operator-lifecycle-manager               packageserver-6f8f8f9d54-4r96h                                             0/1     ContainerCreating   0                156m
openshift-ovn-kubernetes                           ovnkube-master-lr9pk                                                       3/6     CrashLoopBackOff    23 (46s ago)     156m
openshift-route-controller-manager                 route-controller-manager-747bf8684f-5vhwx                                  0/1     Pending             0                156m
liuhuali@Lius-MacBook-Pro huali-test % 

Actual results:

RollingUpdate cannot complete successfully

Expected results:

RollingUpdate should complete successfully

Additional info:

Must gather - https://drive.google.com/file/d/1bvE1XUuZKLBGmq7OTXNVCNcFZkqbarab/view?usp=sharing

Must-gather of another cluster that hit the same issue (also with the template ipi-on-aws/versioned-installer-customer_vpc-disconnected_private_cluster-techpreview-ci and the OVN network): https://drive.google.com/file/d/1CqAJlqk2wgnEuMo3lLaObk4Nbxi82y_A/view?usp=sharing

Must-gather of another cluster that hit the same issue (template ipi-on-aws/versioned-installer-private_cluster-sts-usgov-ci with the OVN network):
https://drive.google.com/file/d/1tnKbeqJ18SCAlJkS80Rji3qMu3nvN_O8/view?usp=sharing
 
The template ipi-on-aws/versioned-installer-customer_vpc-disconnected_private_cluster-techpreview-ci with the OVN network seems to hit this issue frequently.

cloud-controller-manager does not react to changes to infrastructure secrets (in the OpenStack case: clouds.yaml).
As a consequence, if credentials are rotated (and the old ones are rendered useless), load balancer creation and deletion will not succeed any more. Restarting the controller fixes the issue on a live cluster.

Logs show that it couldn't find the application credentials:

Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } EnsuringLoadBalancer: Ensuring load balancer
Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to get subnet to create load balancer for service e2e-test-openstack-q9jnk/udp-lb-default-svc: Unable to re-authenticate: Expected HTTP response code [200 204 300] when accessing [GET https://compute.rdo.mtl2.vexxhost.net/v2.1/0693e2bb538c42b79a49fe6d2e61b0fc/servers/fbeb21b8-05f0-4734-914e-926b6a6225f1/os-interface], but got 401 instead
{"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}: Resource not found: [POST https://identity.rdo.mtl2.vexxhost.net/v3/auth/tokens], error message: {"error":{"code":404,"message":"Could not find Application Credential: 1b78233956b34c6cbe5e1c95445972a4.","title":"Not Found"}}

OpenStack CI has been instrumented to restart the CCM after credentials rotation, so that this particular issue is silenced without masking any others. That workaround must be reverted once this bug is fixed.
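
One possible direction for a fix (a sketch under assumptions, not the actual cloud-provider code; the mount path and the reload hook are illustrative) is to watch the mounted clouds.yaml and re-initialise the OpenStack clients when it changes, e.g. with fsnotify:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchCloudsYAML watches the mounted credentials file and calls reload
// whenever it changes. Path and reload hook are illustrative only.
func watchCloudsYAML(path string, reload func()) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(path); err != nil {
		return err
	}
	go func() {
		for event := range watcher.Events {
			// Mounted Secrets are updated via symlink swaps, so react to
			// create/write/remove events rather than plain writes only.
			if event.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove) != 0 {
				log.Printf("clouds.yaml changed (%s), reloading credentials", event.Op)
				reload()
			}
		}
	}()
	return nil
}

func main() {
	_ = watchCloudsYAML("/etc/openstack/secret/clouds.yaml", func() {
		log.Println("re-creating OpenStack provider clients") // placeholder for the real reload
	})
	select {} // keep the example running
}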

TL;DR

4.12 requires backport of commit:

commit 0111e1faec20d16505a110449966273b430b7ad1
Author: Surya Seetharaman <suryaseetharaman.9@gmail.com>
Date:   Tue Sep 6 21:20:57 2022 +0200

    Support AllocateLoadBalancerNodePortsFalse
    
    This PR supports having allocateloadbalancernodeports
    set to false along with etp=local on lgw mode.
    
    Signed-off-by: Surya Seetharaman <suryaseetharaman.9@gmail.com>

Analysis

The missing backport of LoadBalancerServiceHasNodePortAllocation into 4.12 causes problems with flow creation for these services, even in shared gateway mode.

This issue affects services with `allocateLoadBalancerNodePorts: false` in OCP 4.12.

Any deletion of a service with `allocateLoadBalancerNodePorts: false` will fail and go into a 15-minute retry loop. If the service is recreated while a failed deletion is still in progress, the flows on br-ex are not recreated.

Deletion will fail with:

(...)
obj_retry.go:257] Retry object setup: *factory.serviceForGateway <ns>/<service>
obj_retry.go:290] Removing old object: *factory.serviceForGateway <ns>/<service> (failed: %!s(uint8=<retry>))
(...)
obj_retry.go: 298] Retry delete failed for *factory.serviceForGateway <ns><service>, will try again later: error removing port claim for service: <ns>/<service>: invalid service port <service>, err: invalid port number: 0

And while a deletion is still ongoing, add will fail with:

obj_retry.go: 476] Failed to delete old object <ns>/<service> of type *factory.serviceForGateway, during add event: error removing port claim for service: <ns>/<service>: invalid service port <service>, err: invalid port number: 0

ovnkube-node will retry 15 times with a 1-minute backoff before it gives up, and while these retries are failing, the object cannot be recreated.

That also means that there are currently 2 workarounds for this (tested):

  • restart all ovnkube-node pods --> this will get rid of the bad cache entries and recreate the br-ex flows
  • delete the service. Wait for +15 minutes (until you no longer see the error message about failed deletion and retries) and recreate the service

The problem can easily be reproduced in 4.12; I tested this on 4.12.17 with SNO:

$ cat fedora-test.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: fedora-service
  labels:
    app: fedora-deployment
spec:
  selector:
    app: fedora-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: None
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fedora-deployment
  labels:
    app: fedora-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fedora-pod
  template:
    metadata:
      labels:
        app: fedora-pod
    spec:
      containers:
      - name: fedora-a
        image: registry.fedoraproject.org/fedora:latest
        imagePullPolicy: Always
        command:
        - sleep
        - infinity
      - name: fedora-b
        image: registry.fedoraproject.org/fedora:latest
        imagePullPolicy: Always
        command:
        - sleep
        - infinity
oc apply -f fedora-test.yaml
oc delete svc fedora-service
oc apply -f fedora-test.yaml

Logs:

oc logs -n openshift-ovn-kubernetes ovnkube-node-4xg6w -c ovnkube-node -f | grep fedora-service
(...)
I0714 01:59:30.867309    9291 obj_retry.go:491] Creating *factory.serviceForGateway default/fedora-service took: 70.803µs
I0714 01:59:30.875170    9291 obj_retry.go:491] Creating *factory.endpointSliceForGateway default/fedora-service-5bmf8 took: 15.941µs
I0714 01:59:30.875210    9291 obj_retry.go:491] Creating *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-5bmf8 took: 169ns
E0714 01:59:52.496754    9291 obj_retry.go:673] Failed to delete *factory.serviceForGateway default/fedora-service, error: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:02.969493    9291 obj_retry.go:471] Detected stale object during new object add of type *factory.serviceForGateway with the same key: default/fedora-service
W0714 02:00:02.969523    9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
I0714 02:00:02.971917    9291 obj_retry.go:491] Creating *factory.endpointSliceForGateway default/fedora-service-74vf8 took: 62.416µs
I0714 02:00:02.971926    9291 obj_retry.go:491] Creating *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-74vf8 took: 255ns
E0714 02:00:03.086557    9291 obj_retry.go:476] Failed to delete old object default/fedora-service of type *factory.serviceForGateway, during add event: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:13.982590    9291 obj_retry.go:257] Retry object setup: *factory.serviceForGateway default/fedora-service
I0714 02:00:13.982621    9291 obj_retry.go:290] Removing old object: *factory.serviceForGateway default/fedora-service (failed: %!s(uint8=1))
I0714 02:00:14.104772    9291 obj_retry.go:298] Retry delete failed for *factory.serviceForGateway default/fedora-service, will try again later: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:22.397338    9291 obj_retry.go:571] Found retry entry for *factory.serviceForGateway default/fedora-service marked for deletion: will delete the object
W0714 02:00:22.397400    9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
E0714 02:00:22.601603    9291 obj_retry.go:575] Failed to delete stale object default/fedora-service, during update: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:43.980921    9291 obj_retry.go:257] Retry object setup: *factory.serviceForGateway default/fedora-service
I0714 02:00:43.980948    9291 obj_retry.go:290] Removing old object: *factory.serviceForGateway default/fedora-service (failed: %!s(uint8=1))
W0714 02:00:43.980976    9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
I0714 02:00:44.199215    9291 obj_retry.go:298] Retry delete failed for *factory.serviceForGateway default/fedora-service, will try again later: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0

And the following watch shows that the flows are created initially, then upon deletion the flows vanish, then as the service is recreated the flows do not reappear:

watch "ovs-ofctl dump-flows br-ex | grep 192.168.18.100"

I can delete the ovnkube-node pod to recreate the flows:

oc delete pod -n openshift-ovn-kubernetes ovnkube-node-4xg6w

And the flows reappear:

[root@sno ~]# ovs-ofctl dump-flows br-ex | grep 192.168.18.100 
 cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,arp,in_port=1,arp_tpa=192.168.18.100,arp_op=1 actions=LOCAL
 cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,tcp,in_port=1,nw_dst=192.168.18.100,tp_dst=80 actions=output:2
 cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,tcp,in_port=2,nw_src=192.168.18.100,tp_src=80 actions=output:1

--------------------------------------

The problem does not manifest in 4.13. The difference between 4.12 and 4.13 is the missing backport of 0111e1faec20d16505a110449966273b430b7ad1

Log for service deletion in OCP 4.13:

I0718 13:27:35.699982  334002 obj_retry.go:656] Delete event received for *factory.serviceForGateway default/fedora-service
I0718 13:27:35.700010  334002 gateway_shared_intf.go:679] Deleting service fedora-service in namespace default
I0718 13:27:35.769565  334002 obj_retry.go:656] Delete event received for *factory.endpointSliceForGateway default/fedora-service-6hhds
I0718 13:27:35.769596  334002 gateway_shared_intf.go:856] Deleting endpointslice fedora-service-6hhds in namespace default
I0718 13:27:35.769610  334002 gateway_shared_intf.go:431] No serviceConfig found for service fedora-service in namespace default
I0718 13:27:35.769618  334002 obj_retry.go:656] Delete event received for *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-6hhds

Log for service deletion in OCP 4.12:

I0718 13:28:14.253695   52007 obj_retry.go:653] Delete event received for *factory.serviceForGateway default/fedora-service
I0718 13:28:14.253717   52007 port_claim.go:197] Handle NodePort service fedora-service port 0
I0718 13:28:14.253726   52007 gateway_shared_intf.go:649] Deleting service fedora-service in namespace default
I0718 13:28:14.288844   52007 obj_retry.go:653] Delete event received for *factory.endpointSliceForGateway default/fedora-service-2m857
I0718 13:28:14.288870   52007 gateway_shared_intf.go:817] Deleting endpointslice fedora-service-2m857 in namespace default
I0718 13:28:14.288876   52007 gateway_shared_intf.go:407] No serviceConfig found for service fedora-service in namespace default
I0718 13:28:14.288881   52007 obj_retry.go:653] Delete event received for *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-2m857
E0718 13:28:14.402407   52007 obj_retry.go:673] Failed to delete *factory.serviceForGateway default/fedora-service, error: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0

Both 4.12 and 4.13 have similar code, and `handleService` looks the same as well:

func handleService(svc *kapi.Service, handler handler) []error {
    errors := []error{}
    if !util.ServiceTypeHasNodePort(svc) && len(svc.Spec.ExternalIPs) == 0 {
        return errors
    }

    for _, svcPort := range svc.Spec.Ports {
        if util.ServiceTypeHasNodePort(svc) {
            klog.V(5).Infof("Handle NodePort service %s port %d", svc.Name, svcPort.NodePort)

But ServiceTypeHasNodePort in 4.13 correctly takes allocateLoadBalancerNodePorts into account, whereas 4.12 does not:

go-controller/pkg/util/kube.go

func LoadBalancerServiceHasNodePortAllocation(service *kapi.Service) bool {
    return service.Spec.AllocateLoadBalancerNodePorts == nil || *service.Spec.AllocateLoadBalancerNodePorts
}

// ServiceTypeHasNodePort checks if the service has an associated NodePort or not
func ServiceTypeHasNodePort(service *kapi.Service) bool {
    return service.Spec.Type == kapi.ServiceTypeNodePort ||
        (service.Spec.Type == kapi.ServiceTypeLoadBalancer && LoadBalancerServiceHasNodePortAllocation(service))
}

In OCP 4.12:

// ServiceTypeHasNodePort checks if the service has an associated NodePort or not
func ServiceTypeHasNodePort(service *kapi.Service) bool {
    return service.Spec.Type == kapi.ServiceTypeNodePort || service.Spec.Type == kapi.ServiceTypeLoadBalancer
}

Description of problem:

The setting of systemReserved: ephemeral-storage in KubeletConfig is not working as expected. 

Version-Release number of selected component (if applicable):

4.10.z, may exist on other OCP versions as well. 

How reproducible:

always

Steps to Reproduce:

1. Create a KubeletConfig for the node's MachineConfigPool (master in this example):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: system-reserved-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 10Gi


2. Check node allocatable storage with command: oc describe node |grep -C 5 ephemeral-storage

Actual results:

The Allocatable:ephemeral-storage on the node is not capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default)  

Expected results:

The Allocatable:ephemeral-storage on the node should be capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default) 
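
For illustration with hypothetical numbers: on a node with capacity.ephemeral-storage = 100Gi and the KubeletConfig above (systemReserved ephemeral-storage: 10Gi), the default 10% eviction threshold reserves another 10Gi, so the expected allocatable would be 100Gi - 10Gi - 10Gi = 80Gi. With the bug, the 10Gi systemReserved portion is effectively not subtracted.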

Additional info:

The root cause might be that the process argument '--system-reserved=cpu=500m,memory=500Mi' overrides the setting in /etc/kubernetes/kubelet.conf; note that the argument does not include the ephemeral-storage value. One example:

root        6824       1 27 Sep30 ?        1-09:00:24 kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=192.168.58.47 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --hostname-override= --register-with-taints=node-role.kubernetes.io/master=:NoSchedule --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a7b6408460148cb73c59677dbc2c261076bc07226c43b0c9192cc70aef5ba62 --system-reserved=cpu=500m,memory=500Mi --v=2 --housekeeping-interval=30s


 

Description of problem:

TestUnmanagedDNSToManagedDNSInternalIngressController E2E test is failing on the error:

unmanaged_dns_test.go:272: failed to verify connectivity with workload with reqURL http://10.0.128.7 using external client: timed out waiting for the condition

How reproducible:

About 75% of the time.

Version-Release number of selected component (if applicable):

4.12

Steps to Reproduce:

1. Run CI E2E tests on cluster-ingress-operator or 
make test-e2e TEST=TestUnmanagedDNSToManagedDNSInternalIngressController 

Actual results:

E2E test fails about 75% of the time

Expected results:

E2E should always pass

Additional info:

 

Since 4.11, OCP ships with an OperatorHub definition which declares a capability
and enables all catalog sources. For OKD we want to enable just community-operators,
as users may not have a Red Hat pull secret set.
This commit ensures that the OKD version of the marketplace operator gets
its own OperatorHub manifest with a custom set of operator catalogs enabled.

Description of problem:
When the user selects Serverless as an import strategy and tries to import a Devfile, the import fails because of an invalid Deployment.

This could already be reproduced in 4.11, but it's even more prominent in 4.12, where the console automatically selects the resource type Serverless when the Serverless operator is installed.

Version-Release number of selected component (if applicable):
Works on 4.10
Failed on 4.11 and 4.12 master

How reproducible:
Always

Steps to Reproduce:
1. Install and set up the Serverless operator
2. Switch to dev perspective, navigate to add > import from git
3. Enter a non-Devfile git URL like https://github.com/jerolimov/nodeinfo
4. On 4.11 select resource type Serverless (on 4.12 this should be selected automatically)
5. Update the git URL to a repo with a Devfile like https://github.com/nodeshift-starters/devfile-sample
6. Press create

Actual results:
Import fails with error:

Error "Invalid value: "": name part must be non-empty" for field "spec.template.labels".

Expected results:
Devfile should be imported

Additional info:

Description of problem:

Currently, when installing OpenShift on OpenStack, the cluster name length is limited to 14 characters.
The customer wants to know whether it is possible to override this validation when installing OpenShift on OpenStack and create a cluster name longer than 14 characters.

Version : OCP 4.8.5 UPI Disconnected 
Environment : Openstack 16 

Issue:
The user reports that they are getting an error for an OCP cluster on OpenStack UPI where the name of the cluster is longer than 14 characters.

Error events :
~~~
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/openshift-install", "create", "manifests", "--dir=/home/gitlab-runner/builds/WK8mkokN/0/CPE/SKS/pipelines/non-prod/ocp4-openstack-build/ocpinstaller/install-upi"], "delta": "0:00:00.311397", "end": "2022-09-03 21:38:41.974608", "msg": "non-zero return code", "rc": 1, "start": "2022-09-03 21:38:41.663211", "stderr": "level=fatal msg=failed to fetch Master Machines: failed to load asset \"Install Config\": invalid \"install-config.yaml\" file: metadata.name: Invalid value: \"sks-osp-inf-cpe-1-cbr1a\": cluster name is too long, please restrict it to 14 characters", "stderr_lines": ["level=fatal msg=failed to fetch Master Machines: failed to load asset \"Install Config\": invalid \"install-config.yaml\" file: metadata.name: Invalid value: \"sks-osp-inf-cpe-1-cbr1a\": cluster name is too long, please restrict it to 14 characters"], "stdout": "", "stdout_lines": []}
~~~

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

 

Actual results:

Users are getting the error "cluster name is too long" when the cluster name contains more than 14 characters for OCP on OpenStack.

Expected results:

The 14-character limit should be changed for the OCP cluster name on OpenStack.

Additional info:

 

Not all of the errors reported by the assisted API (and shown in the wait-for bootstrap complete output) actually require user action.

Some appear when the agents first register but resolve themselves relatively quickly in the natural course of events.

Some, like the availability of NTP, don't block the installation from proceeding at all.

We need to think about the best ways of exposing this information to the user.

Currently the controller will set the status to Done each time it sees a host that is ready in k8s, without checking whether it was already set.

time="2022-09-13T19:03:45Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=6258e5a2-4e78-4148-a913-45d704a0fa1d

time="2022-09-13T19:04:05Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=49e4e63f-cf4f-4b9f-b1f3-923c473c09dd

 

 

The test results in sippy look really bad on our less common platforms, but still pretty unacceptable even on core clouds. It's reasonably often the only test that fails. We need to decide what to do here, and we're going to need input from the etcd team.

As of Sep 13th:

  • several vsphere and openstack variant combo's fail this test around 24-32% of the time
  • aws, amd64, ovn, upgrade, upgrade-micro, ha - fails 6% of the time
  • aws, amd64, ovn, upgrade, upgrade-minor, ha - fails 4% of the time
  • gcp, amd64, sdn, upgrade, upgrade-minor, ha - fails 8% of the time
  • globally across all jobs fails around 3% of the time.

Even on some major variant combos, a 4-8% failure rate is too high.
On the Sep 13 arch call (no etcd present), Damien mentioned this might be an upstream alert that just isn't well suited for OpenShift's use cases; is this the case, and does it simply need tuning?

Has the problem been getting worse?

I believe this link https://datastudio.google.com/s/urkKwmmzvgo indicates that this may be the case for 4.12: AWS and Azure are both getting worse in ways that I don't see if we change the release to 4.11, where it looks consistent. GCP seems fine on 4.12. We do not have data for vSphere for some reason.

This link shows the grpc_methods most commonly involved: https://search.ci.openshift.org/?search=etcdGRPCRequestsSlow+was+at+or+above&maxAge=48h&context=7&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

At a glance: LeaseGrant, MemberList, Txn, Status, Range.

Broken out of TRT-401
For linking with sippy:
[bz-etcd][invariant] alert/etcdGRPCRequestsSlow should not be at or above info
[sig-arch][bz-etcd][Late] Alerts alert/etcdGRPCRequestsSlow should not be at or above info [Suite:openshift/conformance/parallel]

 

This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:

Description of problem:

Frequently we see the loading state of the topology view, even when there aren't many resources in the project.

Including an example

Prerequisites (if any, like setup, operators/versions):

Steps to Reproduce

  1. load topology
  2. if it loads successfully, keep trying  until it fails to load

Actual results:

topology will sometimes hang with the loading indicator showing indefinitely

Expected results:

topology should load consistently without fail

Reproducibility (Always/Intermittent/Only Once):

intermittent

Build Details:

4.9

Additional info:

This is a clone of issue OCPBUGS-3164. The following is the description of the original issue:

During the first bootstrap boot we need CRI-O and the kubelet on disk, so we start the release-image-pivot systemd task. However, it does not block bootkube, so the two run in parallel.

release-image-pivot reboots the node to apply the new OS image, which may leave bootkube in an inconsistent state. This task should run before bootkube.

Description of problem:

With every pod update we execute a mutate operation to add the pod's port to the port group or add the pod's IP to an address set. Functionally this does no harm, since mutate will not add duplicate values to the same set. However, it is bad for performance: for example, with 730 network policies affecting a pod, issuing 7 pod updates results in over 5k transactions.
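A hedged sketch of the kind of local bookkeeping that would avoid issuing a mutate transaction on every pod update; the names and cache layout are illustrative, not the actual ovn-kubernetes implementation.

```
package main

import "fmt"

// portGroupCache tracks which logical switch ports have already been added to a
// port group, so that a pod update only triggers an OVN transaction when the
// membership actually changes.
type portGroupCache struct {
	members map[string]map[string]bool // port group name -> set of port UUIDs
}

func newPortGroupCache() *portGroupCache {
	return &portGroupCache{members: map[string]map[string]bool{}}
}

// needsAdd reports whether portUUID still has to be added to the port group,
// and records it as present so later pod updates become local no-ops.
func (c *portGroupCache) needsAdd(group, portUUID string) bool {
	set, ok := c.members[group]
	if !ok {
		set = map[string]bool{}
		c.members[group] = set
	}
	if set[portUUID] {
		return false // already a member, skip the mutate transaction
	}
	set[portUUID] = true
	return true
}

func main() {
	cache := newPortGroupCache()
	for i := 0; i < 7; i++ { // seven pod updates for the same pod
		if cache.needsAdd("netpol-default-deny", "port-uuid-1234") {
			fmt.Println("issuing mutate op") // printed only once
		}
	}
}
```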

This is a clone of issue OCPBUGS-5542. The following is the description of the original issue:

Description of problem:
The project list orders projects by name and is smart enough to keep a "numerical order" like:

  1. test-1
  2. test-2
  3. test-11

The more prominent project dropdown is not so smart and shows just a simple ASCII-ordered list (see the sorting sketch at the end of this issue):

  1. test-1
  2. test-11
  3. test-2

Version-Release number of selected component (if applicable):
4.8-4.13 (master)

How reproducible:
Always

Steps to Reproduce:
1. Create some new projects called test-1, test-11, test-2
2. Check the project list page (in admin perspective)
3. Check the project dropdown (in dev perspective)

Actual results:
Order is

  1. test-1
  2. test-11
  3. test-2

Expected results:
Order should be

  1. test-1
  2. test-2
  3. test-11

Additional info:
none
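For illustration, a numeric-aware comparison like the sketch below produces the expected order. This is a hedged Go sketch of the technique only; the console itself is TypeScript and could, for example, rely on localeCompare with numeric collation instead.

```
package main

import (
	"fmt"
	"sort"
	"strconv"
	"unicode"
)

// naturalLess compares two strings, treating runs of digits as numbers, so that
// "test-2" sorts before "test-11".
func naturalLess(a, b string) bool {
	for len(a) > 0 && len(b) > 0 {
		if unicode.IsDigit(rune(a[0])) && unicode.IsDigit(rune(b[0])) {
			na, restA := takeNumber(a)
			nb, restB := takeNumber(b)
			if na != nb {
				return na < nb
			}
			a, b = restA, restB
			continue
		}
		if a[0] != b[0] {
			return a[0] < b[0]
		}
		a, b = a[1:], b[1:]
	}
	return len(a) < len(b)
}

// takeNumber splits off the leading run of digits and returns its value plus the rest.
func takeNumber(s string) (int, string) {
	i := 0
	for i < len(s) && unicode.IsDigit(rune(s[i])) {
		i++
	}
	n, _ := strconv.Atoi(s[:i])
	return n, s[i:]
}

func main() {
	projects := []string{"test-1", "test-11", "test-2"}
	sort.Slice(projects, func(i, j int) bool { return naturalLess(projects[i], projects[j]) })
	fmt.Println(projects) // [test-1 test-2 test-11]
}
```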

The 4.12 builds fail all the time. The last successful build was from May 31.

Error:

# Root Suite.Entire pipeline flow from Builder page "before all" hook for "Background Steps"
AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it.

Full error:

  Running:  e2e/pipeline-ci.feature                                                         (1 of 1)
Couldn't determine Mocha version


  Logging in as kubeadmin
      Installing operator: "Red Hat OpenShift Pipelines"
      Operator Red Hat OpenShift Pipelines was not yet installed.
      Performing Pipelines post-installation steps
      Verify the CRD's for the "Red Hat OpenShift Pipelines"
  1) "before all" hook for "Background Steps"
      Deleting "" namespace

  0 passing (3m)
  1 failing

  1) Entire pipeline flow from Builder page
       "before all" hook for "Background Steps":
     AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it.

Because this error occurred during a `before all` hook we are skipping all of the remaining tests.
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.waitForCRDs (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17156:77)
      at performPostInstallationSteps (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17242:21)
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17268:5)
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallPipelinesOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17272:13)
      at Context.eval (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:20848:13)



[mochawesome] Report JSON saved to /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress_report_pipelines.json


  (Results)

  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ Tests:        13                                                                               │
  │ Passing:      0                                                                                │
  │ Failing:      1                                                                                │
  │ Pending:      0                                                                                │
  │ Skipped:      12                                                                               │
  │ Screenshots:  1                                                                                │
  │ Video:        true                                                                             │
  │ Duration:     2 minutes, 58 seconds                                                            │
  │ Spec Ran:     e2e/pipeline-ci.feature                                                          │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘


  (Screenshots)

  -  /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress/screenshots/e2e/pipeline-ci.feature/Background Steps -- before all hook (failed).png     (1280x720)


  (Video)

  -  Started processing:  Compressing to 32 CRF                                                     
  -  Finished processing: /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress/videos/e2e/pipeline-ci.feature.mp4   (16 seconds)

    Compression progress:  100%

====================================================================================================

  (Run Finished)


       Spec                                              Tests  Passing  Failing  Pending  Skipped  
  ┌────────────────────────────────────────────────────────────────────────────────────────────────┐
  │ ✖  e2e/pipeline-ci.feature                  02:58       13        -        1        -       12 │
  └────────────────────────────────────────────────────────────────────────────────────────────────┘
    ✖  1 of 1 failed (100%)                     02:58       13        -        1        -       12  

See also

  1. https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-openshift-console-release-4.12-e2e-gcp-console
  2. https://search.ci.openshift.org/?search=Expected+to+find+element&maxAge=336h&context=1&type=all&name=pull-ci-openshift-console-release-4.12-e2e-gcp-console&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job (not exact match, but couldn't create a better filter)

This is a clone of issue OCPBUGS-2144. The following is the description of the original issue:

Description of problem:

Azure IPI now creates boot images using the image gallery API, which creates two image definition resources, one for hyperVGeneration V1 and one for V2. For an arm64 cluster, the architecture in the hyperVGeneration V1 image definition is x64, but it should be Arm64.

Version-Release number of selected component (if applicable):

./openshift-install version
./openshift-install 4.12.0-0.nightly-arm64-2022-10-07-204251
built from commit 7b739cde1e0239c77fabf7622e15025d32fc272c
release image registry.ci.openshift.org/ocp-arm64/release-arm64@sha256:d2569be4ba276d6474aea016536afbad1ce2e827b3c71ab47010617a537a8b11
release architecture arm64

How reproducible:

always

Steps to Reproduce:

1. Create an arm64 cluster using the latest arm64 nightly build
2. Check the image definition created for hyperVGeneration V1

Actual results:

The architecture field is x64.
###
$ az sig image-definition show --gallery-name ${gallery_name} --gallery-image-definition lwanazarm1008-rc8wh --resource-group ${rg} | jq -r ".architecture"
x64
The image version under this image definition is for aarch64.
###
$ az sig image-version show --gallery-name gallery_lwanazarm1008_rc8wh --gallery-image-definition lwanazarm1008-rc8wh --resource-group lwanazarm1008-rc8wh-rg --gallery-image-version 412.86.20220922 | jq -r ".storageProfile.osDiskImage.source"
{  "uri": "https://clustermuygq.blob.core.windows.net/vhd/rhcosmuygq.vhd"}
$ az storage blob show --container-name vhd --name rhcosmuygq.vhd --account-name clustermuygq --account-key $account_key | jq -r ".metadata"
{  "Source_uri": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-412.86.202209220538-0-azure.aarch64.vhd"}

Expected results:

Although no VMs with hyperVGeneration V1 can be provisioned, the architecture field should be Arm64 even for hyperVGeneration V1 image definitions.

Additional info:

1. The architecture in the hyperVGeneration V2 image definition is Arm64, and the installer uses V2 by default for the arm64 vm_type, so installation does not fail by default. But we still need to make the architecture consistent in V1.

2. The architecture field needs to be set for both V1 and V2; currently we only set it in the V2 image definition resource.
https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L100-L128 

Description of problem:

TestEditUnmanagedPodDisruptionBudget flakes in the console-operator e2e

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Flake

Steps to Reproduce:
1. Check https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_console-operator/665/pull-ci-openshift-console-operator-master-e2e-aws-operator/1562005782164148224
2.
3.

Actual results:

Expected results:

Additional info:

There is a chance that the PDB instance is not present, since prior to the Unmanaged* test cases the RemoveTest runs, which removes all the console resources (Pods, Services, PDBs, ...).

 

Description of problem:

In looking at jobs on an accepted payload at https://amd64.ocp.releases.ci.openshift.org/releasestream/4.12.0-0.ci/release/4.12.0-0.ci-2022-08-30-122201 , I observed this job https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial/1564589538850902016 with "Undiagnosed panic detected in pod" "pods/openshift-controller-manager-operator_openshift-controller-manager-operator-74bf985788-8v9qb_openshift-controller-manager-operator.log.gz:E0830 12:41:48.029165       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)" 

Version-Release number of selected component (if applicable):

4.12

How reproducible:

probably relatively easy to reproduce (but not consistently) given it's happened several times according to this search: https://search.ci.openshift.org/?search=Observed+a+panic%3A+%22invalid+memory+address+or+nil+pointer+dereference%22&maxAge=48h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

Steps to Reproduce:

1. let nightly payloads run or run one of the presubmit jobs mentioned in the search above
2.
3.

Actual results:

Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)}

Expected results:

no panics

Additional info:

 

This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:

Description of problem:

Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions.
The error happens during the Terraform prepare step, when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the vCenter administrator account; I double-checked and that account has all the necessary permissions.

Version-Release number of selected component (if applicable):

OCP installer 4.12.0-rc.0
vSphere & vCenter 7.0.3 - no pending updates

How reproducible:

always - we already observed this in the nightlies, but wanted to wait for an RC to confirm

Steps to Reproduce:

1. Try to install using the openshift-install binary

Actual results:

Fails during the preparation step

Expected results:

Installs the cluster ;)

Additional info:

This runs in our CI/CD pipeline; let me know if you need access to the full run log:
https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304

This includes the install-config.yaml, all component versions and the full debug log output

These two tests are permafailing on webhook errors related to the CRD:

[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should show the Provisioning Network as 'Disabled' [Suite:openshift/conformance/serial]

[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should [apigroup:config.openshift.io] show the Provisioning Network as 'Disabled' [Suite:openshift/conformance/serial]

[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should allow setting the ProvisioningNetwork to 'Managed' with valid settings [Suite:openshift/conformance/serial]

[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should [apigroup:config.openshift.io] allow setting the ProvisioningNetwork to 'Managed' with valid settings [Suite:openshift/conformance/serial]

job=periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-virtualmedia=all

Example run:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-virtualmedia/1567416810377056256

Sippy links:

https://sippy.dptools.openshift.org/sippy-ng/tests/4.12/analysis?test=%5Bsig-installer%5D%5BFeature%3Abaremetal%5D%5BSerial%5D%20A%20baremetal%20deployment%20without%20a%20provisioning%20network%20should%20allow%20setting%20the%20ProvisioningNetwork%20to%20%27Managed%27%20with%20valid%20settings%20%5BSuite%3Aopenshift%2Fconformance%2Fserial%5D

https://sippy.dptools.openshift.org/sippy-ng/tests/4.12/analysis?test=%5Bsig-installer%5D%5BFeature%3Abaremetal%5D%5BSerial%5D%20A%20baremetal%20deployment%20without%20a%20provisioning%20network%20should%20show%20the%20Provisioning%20Network%20as%20%27Disabled%27%20%5BSuite%3Aopenshift%2Fconformance%2Fserial%5D

This is a clone of issue OCPBUGS-7015. The following is the description of the original issue:

Description of problem:

fail to create vSphere 4.12.2 IPI cluster as apiVIP and ingressVIP are not in machine networks

# ./openshift-install create cluster --dir=/tmp
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform vsphere
? vCenter vcenter.vmware.gsslab.pnq2.redhat.com
? Username administrator@gsslab.pnq
? Password [? for help] ************
INFO Connecting to vCenter vcenter.vmware.gsslab.pnq2.redhat.com
INFO Defaulting to only available datacenter: OpenShift-DC
INFO Defaulting to only available cluster: OCP
? Default Datastore OCP-PNQ-Datastore
? Network PNQ2-25G-PUBLIC-PG
? Virtual IP Address for API [? for help] 192.168.1.10
X Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16
? Virtual IP Address for API [? for help]


Because the user cannot define the machineNetwork CIDR when creating the cluster or the install-config file interactively, the default value 10.0.0.0/16 is used, so creating the cluster or install-config fails when the apiVIP and ingressVIP entered are outside the default machineNetwork.

The error is thrown from https://github.com/openshift/installer/blob/master/pkg/types/validation/installconfig.go#L655-L666; it seems to be a new check introduced by PR https://github.com/openshift/installer/pull/5798.

The issue should also impact Nutanix platform.

I don't understand why the installer expects/validates VIPs against the default 10.0.0.0/16 machine network when it does not even ask for the machine networks during the survey. This validation was not mandatory in previous OCP installers.
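For reference, the validation amounts to a CIDR membership test along the lines of the sketch below (illustrative Go, not the installer's actual code). The complaint is that in the interactive survey the only machine network the VIPs are checked against is the 10.0.0.0/16 default.

```
package main

import (
	"fmt"
	"net"
)

// vipInMachineNetworks reports whether the given VIP falls inside any of the
// machine network CIDRs. Illustrative only; the real validation lives in
// pkg/types/validation/installconfig.go in the installer.
func vipInMachineNetworks(vip string, machineNetworks []string) (bool, error) {
	ip := net.ParseIP(vip)
	if ip == nil {
		return false, fmt.Errorf("invalid IP %q", vip)
	}
	for _, cidr := range machineNetworks {
		_, network, err := net.ParseCIDR(cidr)
		if err != nil {
			return false, err
		}
		if network.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// With the interactive survey the machine network defaults to 10.0.0.0/16,
	// so a VIP such as 192.168.1.10 is always rejected.
	ok, _ := vipInMachineNetworks("192.168.1.10", []string{"10.0.0.0/16"})
	fmt.Println("VIP accepted:", ok) // false
}
```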


 

Version-Release number of selected component (if applicable):

# ./openshift-install version
./openshift-install 4.12.2
built from commit 7fea1c4fc00312fdf91df361b4ec1a1a12288a97
release image quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1
release architecture amd64

How reproducible:

Always

Steps to Reproduce:

1. create install-config.yaml file by running command "./openshift-install create install-config --dir ipi"
2. It fails with the above error

Actual results:

Fails to create the install-config.yaml file

Expected results:

Succeeds in creating the install-config.yaml file

Additional info:

The current workaround is to use dummy VIPs from the 10.0.0.0/16 machine network to create the install-config first, and then modify the machine network and VIPs as required, which is extra overhead and creates a negative experience.


There was already a bug reported which seems to have only fixed the VIP validation: https://issues.redhat.com/browse/OCPBUGS-881
 

Description of problem:

The current version of OpenShift's CoreDNS is based on Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod 

Actual results:

Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.

Expected results:

Kubernetes packages are at version v0.25.0 or later.

Additional info:

Using old Kubernetes API and client packages brings risk of API compatibility issues.

Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1.
2.
3.

Actual results:


Expected results:


Additional info:


When we get telemetry from connected clusters, we want to be able to tell when they were created with the agent installer vs. the hosted assisted service. Currently there is no way to distinguish.

It's not clear whether any particular group owns the namespace of installation methods, or whom we need to notify when we create one.

In order to have more information for debugging router issues in SNO, we want to see whether the router is healthy from the node network point of view, and to enable router access logs.

Let's revert this once the cause of https://bugzilla.redhat.com/show_bug.cgi?id=2097041 is found.

Description of problem:

Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, with a hard max of 200. OCP uses about 40 rules OOTB. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota only ~5 LBs can be created, and with the hard max of 200 only ~40 LBs can be created.
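A small worked calculation of the numbers above (illustrative only; the quota and rule counts are the ones stated in the description):

```
package main

import "fmt"

// maxLoadBalancers computes how many Services of type LoadBalancer fit in the
// security-group rule quota, given the rules already used and the rules added
// per LB. Numbers below mirror the description: 40 rules OOTB, 4 rules per LB
// (1 client rule + 3 per-subnet health rules in a 3-AZ cluster).
func maxLoadBalancers(quota, baseRules, rulesPerLB int) int {
	return (quota - baseRules) / rulesPerLB
}

func main() {
	fmt.Println("default quota (60): ", maxLoadBalancers(60, 40, 4))  // ~5
	fmt.Println("hard max (200):     ", maxLoadBalancers(200, 40, 4)) // ~40
	// Collapsing to a single rule per LB roughly quadruples these numbers.
	fmt.Println("hard max, 1 rule/LB:", maxLoadBalancers(200, 40, 1)) // 160
}
```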

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

1. Create a Service of type LoadBalancer and observe the increase in the master-sg and worker-sg rule sets
2.
3.

Actual results:

4 rules are created

Expected results:

1 rule is created when the client rule is a superset of the per-subnet health rules

Additional info:

This roughly quadruples the number of Services of type LoadBalancer that can be created. This is required for Hypershift.

This is a clone of issue OCPBUGS-10647. The following is the description of the original issue:

Description of problem:

Cluster Network Operator managed component multus-admission-controller does not conform to Hypershift control plane expectations.

When CNO is managed by Hypershift, multus-admission-controller must run with a non-root security context. If Hypershift runs the control plane on a Kubernetes (as opposed to OpenShift) management cluster, it adds a pod or container security context with a runAsUser clause to most deployments.

In the Hypershift CPO, the security context of deployment containers, including CNO's, is set when it detects that SCCs are not available, see https://github.com/openshift/hypershift/blob/9d04882e2e6896d5f9e04551331ecd2129355ecd/support/config/deployment.go#L96-L100. In such a case CNO should do the same and set the security context for its managed deployment multus-admission-controller to meet the Hypershift standard.
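A minimal sketch of what setting such a security context on the managed deployment could look like, using the standard apps/v1 and core/v1 types. The helper name and UID value are assumptions, not the actual CNO or Hypershift code.

```
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// applyNonRootSecurityContext mirrors the Hypershift behaviour described above:
// when SCCs are not available (plain Kubernetes management cluster), run the
// workload with an explicit non-root UID. The UID here is an arbitrary example.
func applyNonRootSecurityContext(d *appsv1.Deployment, sccAvailable bool, uid int64) {
	if sccAvailable {
		return // on OpenShift the SCC admission assigns the UID range
	}
	runAsNonRoot := true
	d.Spec.Template.Spec.SecurityContext = &corev1.PodSecurityContext{
		RunAsUser:    &uid,
		RunAsNonRoot: &runAsNonRoot,
	}
}

func main() {
	d := &appsv1.Deployment{}
	applyNonRootSecurityContext(d, false, 1001)
	fmt.Printf("runAsUser=%d\n", *d.Spec.Template.Spec.SecurityContext.RunAsUser)
}
```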

 

How reproducible:

Always

Steps to Reproduce:

1. Create an OCP cluster using Hypershift with a Kubernetes management cluster
2. Check the pod security context of multus-admission-controller

Actual results:

no pod security context is set

Expected results:

pod security context is set with runAsUser: xxxx

Additional info:

This is the highest priority item from https://issues.redhat.com/browse/OCPBUGS-7942 and it needs to be fixed ASAP as it is a security issue preventing IBM from releasing Hypershift-managed Openshift service.

This is a clone of issue OCPBUGS-14336. The following is the description of the original issue:

This is a clone of issue OCPBUGS-1829. The following is the description of the original issue:

Description of problem:

The link to the OpenShift Route from the Service breaks because of a hardcoded targetPort value. If the targetPort is changed, the route still points to the old port value, since it is hardcoded.

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1. Install the latest available version of Openshift Pipelines
2. Create the pipeline and triggerbinding using the attached files
3. Add trigger to the created pipeline from devconsole UI, select the above created triggerbinding while adding trigger
4. Trigger an event using the curl command curl -X POST -d '{ "url": "https://www.github.com/VeereshAradhya/cli" }' -H 'Content-Type: application/json' <route> and make sure that the pipelinerun gets started
5. Update the targetPort in the svc from 8080 to 8000
6. Again use the above curl command to trigger one more event

Actual results:

The curl command throws an error

Expected results:

The curl command should be successful and the pipelinerun should get started successfully

Additional info:

Error:
curl -X POST -d '{ "url": "https://www.github.com/VeereshAradhya/cli" }' -H 'Content-Type: application/json' http://el-event-listener-3o9zcv-test-devconsole.apps.ve412psi.psi.ospqa.com
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">    <style type="text/css">
      body {
        font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
        line-height: 1.66666667;
        font-size: 16px;
        color: #333;
        background-color: #fff;
        margin: 2em 1em;
      }
      h1 {
        font-size: 28px;
        font-weight: 400;
      }
      p {
        margin: 0 0 10px;
      }
      .alert.alert-info {
        background-color: #F0F0F0;
        margin-top: 30px;
        padding: 30px;
      }
      .alert p {
        padding-left: 35px;
      }
      ul {
        padding-left: 51px;
        position: relative;
      }
      li {
        font-size: 14px;
        margin-bottom: 1em;
      }
      p.info {
        position: relative;
        font-size: 20px;
      }
      p.info:before, p.info:after {
        content: "";
        left: 0;
        position: absolute;
        top: 0;
      }
      p.info:before {
        background: #0066CC;
        border-radius: 16px;
        color: #fff;
        content: "i";
        font: bold 16px/24px serif;
        height: 24px;
        left: 0px;
        text-align: center;
        top: 4px;
        width: 24px;
      }      @media (min-width: 768px) {
        body {
          margin: 6em;
        }
      }
    </style>
  </head>
  <body>
    <div>
      <h1>Application is not available</h1>
      <p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>      <div class="alert alert-info">
        <p class="info">
          Possible reasons you are seeing this page:
        </p>
        <ul>
          <li>
            <strong>The host doesn't exist.</strong>
            Make sure the hostname was typed correctly and that a route matching this hostname exists.
          </li>
          <li>
            <strong>The host exists, but doesn't have a matching path.</strong>
            Check if the URL path was typed correctly and that the route was created using the desired path.
          </li>
          <li>
            <strong>Route and path matches, but all pods are down.</strong>
            Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
          </li>
        </ul>
      </div>
    </div>
  </body>
</html>

Note:

The above scenario works fine if we create triggers using the yaml files instead of using devconsole UI

This is a clone of issue OCPBUGS-14426. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14149. The following is the description of the original issue:

Description of problem:

Cannot list Kepler CSV

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

1. Install Kepler Community Operator
2. Create Kepler Instance
3. Console gets error and shows "Oops, something went wrong"

Actual results:

The console gets an error and shows "Oops, something went wrong"

Expected results:

Should list Kepler Instance

Additional info:

 

Description of problem:

"Failed to open directory, disabling udev device properties" in node-exporter logs

$ for i in $(oc -n openshift-monitoring get pod | grep node-exporter | awk '{print $1}'); do echo $i; oc -n openshift-monitoring logs -c node-exporter $i | grep "Failed to open directory, disabling udev device properties"; echo -e "\n"; done
node-exporter-4279b
ts=2022-10-17T01:16:05.833Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-9tq64
ts=2022-10-17T01:16:04.642Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-dwtwh
ts=2022-10-17T01:16:04.936Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-nrznc
ts=2022-10-17T01:16:05.601Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-q87s4
ts=2022-10-17T01:16:05.228Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-twtxj
ts=2022-10-17T01:16:05.249Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

Debugging on the node, /run/udev/data is readable:

# oc debug node/ip-10-0-138-107.us-east-2.compute.internal
Temporary namespace openshift-debug-dhvqv is created for debugging node...
Starting pod/ip-10-0-138-107us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.138.107
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# ls -l /run/udev/
total 0
srw-------.  1 root root    0 Oct 17 01:04 control
drwxr-xr-x.  2 root root 3780 Oct 17 01:26 data
drwxr-xr-x. 40 root root  800 Oct 17 01:04 links
drwxr-xr-x.  3 root root   60 Oct 17 01:04 static_node-tags
drwxr-xr-x.  5 root root  100 Oct 17 01:04 tags
drwxr-xr-x.  2 root root  140 Oct 17 01:04 watch
sh-4.4# ls -l /run/udev/data
total 304
-rw-r--r--. 1 root root   55 Oct 17 01:04 +acpi:AMZN0000:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXCPU:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXCPU:01
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXCPU:02
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXCPU:03
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXPWRBN:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXSLPBN:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXSYBUS:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXSYBUS:01
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:LNXSYSTM:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0103:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0303:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0400:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0501:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0A03:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0B00:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0C0F:00
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0C0F:01
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0C0F:02
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0C0F:03
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0C0F:04
-rw-r--r--. 1 root root   57 Oct 17 01:04 +acpi:PNP0F13:00
-rw-r--r--. 1 root root  142 Oct 17 01:04 +input:input0
-rw-r--r--. 1 root root  142 Oct 17 01:04 +input:input1
-rw-r--r--. 1 root root  218 Oct 17 01:04 +input:input2
-rw-r--r--. 1 root root  198 Oct 17 01:04 +input:input4
-rw-r--r--. 1 root root  143 Oct 17 01:04 +input:input5
-rw-r--r--. 1 root root   60 Oct 17 01:04 +module:configfs
-rw-r--r--. 1 root root   66 Oct 17 01:04 +module:fuse
-rw-r--r--. 1 root root  188 Oct 17 01:04 +pci:0000:00:00.0
-rw-r--r--. 1 root root  195 Oct 17 01:04 +pci:0000:00:01.0
-rw-r--r--. 1 root root  213 Oct 17 01:04 +pci:0000:00:01.3
-rw-r--r--. 1 root root  207 Oct 17 01:04 +pci:0000:00:03.0
-rw-r--r--. 1 root root  259 Oct 17 01:04 +pci:0000:00:04.0
-rw-r--r--. 1 root root  208 Oct 17 01:04 +pci:0000:00:05.0
-rw-r--r--. 1 root root   55 Oct 17 01:04 +platform:AMZN0000:00
-rw-r--r--. 1 root root  825 Oct 17 01:04 b259:0
-rw-r--r--. 1 root root 1357 Oct 17 01:04 b259:1
-rw-r--r--. 1 root root 1568 Oct 17 01:04 b259:2
-rw-r--r--. 1 root root 1619 Oct 17 01:04 b259:3
-rw-r--r--. 1 root root 1602 Oct 17 01:04 b259:4
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:144
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:183
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:227
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:228
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:229
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:231
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:235
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:236
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:62
-rw-r--r--. 1 root root    0 Oct 17 01:04 c10:63
-rw-r--r--. 1 root root  193 Oct 17 01:04 c13:32
-rw-r--r--. 1 root root    0 Oct 17 01:04 c13:63
-rw-r--r--. 1 root root  113 Oct 17 01:04 c13:64
-rw-r--r--. 1 root root  113 Oct 17 01:04 c13:65
-rw-r--r--. 1 root root  232 Oct 17 01:04 c13:66
-rw-r--r--. 1 root root  199 Oct 17 01:04 c13:67
-rw-r--r--. 1 root root  143 Oct 17 01:04 c13:68
-rw-r--r--. 1 root root    0 Oct 17 01:04 c162:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:11
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:3
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:4
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:5
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:7
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:8
-rw-r--r--. 1 root root    0 Oct 17 01:04 c1:9
-rw-r--r--. 1 root root    0 Oct 17 01:04 c202:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c202:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c202:2
-rw-r--r--. 1 root root    0 Oct 17 01:04 c202:3
-rw-r--r--. 1 root root    0 Oct 17 01:04 c203:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c203:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c203:2
-rw-r--r--. 1 root root    0 Oct 17 01:04 c203:3
-rw-r--r--. 1 root root    0 Oct 17 01:04 c241:0
-rw-r--r--. 1 root root  259 Oct 17 01:04 c242:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c246:0
-rw-r--r--. 1 root root   23 Oct 17 01:04 c251:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:10
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:11
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:12
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:13
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:14
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:15
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:16
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:17
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:18
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:19
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:2
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:20
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:21
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:22
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:23
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:24
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:25
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:26
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:27
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:28
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:29
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:3
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:30
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:31
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:32
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:33
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:34
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:35
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:36
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:37
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:38
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:39
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:4
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:40
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:41
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:42
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:43
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:44
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:45
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:46
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:47
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:48
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:49
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:5
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:50
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:51
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:52
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:53
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:54
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:55
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:56
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:57
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:58
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:59
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:6
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:60
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:61
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:62
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:63
-rw-r--r--. 1 root root   20 Oct 17 01:04 c4:64
-rw-r--r--. 1 root root   20 Oct 17 01:04 c4:65
-rw-r--r--. 1 root root   20 Oct 17 01:04 c4:66
-rw-r--r--. 1 root root   20 Oct 17 01:04 c4:67
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:7
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:8
-rw-r--r--. 1 root root    0 Oct 17 01:04 c4:9
-rw-r--r--. 1 root root    0 Oct 17 01:04 c5:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c5:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c5:2
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:0
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:1
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:128
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:129
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:130
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:131
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:132
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:133
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:134
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:2
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:3
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:4
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:5
-rw-r--r--. 1 root root    0 Oct 17 01:04 c7:6
-rw-r--r--. 1 root root   87 Oct 17 01:04 n1
-rw-r--r--. 1 root root  360 Oct 17 01:06 n10
-rw-r--r--. 1 root root  360 Oct 17 01:06 n11
-rw-r--r--. 1 root root  360 Oct 17 01:06 n13
-rw-r--r--. 1 root root  360 Oct 17 01:07 n14
-rw-r--r--. 1 root root  595 Oct 17 01:04 n2
-rw-r--r--. 1 root root  360 Oct 17 01:09 n25
-rw-r--r--. 1 root root  360 Oct 17 01:10 n29
-rw-r--r--. 1 root root  195 Oct 17 01:04 n3
-rw-r--r--. 1 root root  360 Oct 17 01:10 n30
-rw-r--r--. 1 root root  360 Oct 17 01:11 n31
-rw-r--r--. 1 root root  360 Oct 17 01:14 n35
-rw-r--r--. 1 root root  360 Oct 17 01:14 n37
-rw-r--r--. 1 root root  360 Oct 17 01:14 n39
-rw-r--r--. 1 root root  188 Oct 17 01:04 n4
-rw-r--r--. 1 root root  360 Oct 17 01:15 n41
-rw-r--r--. 1 root root  193 Oct 17 01:04 n5
-rw-r--r--. 1 root root  360 Oct 17 01:18 n50
-rw-r--r--. 1 root root  362 Oct 17 01:26 n54
-rw-r--r--. 1 root root  189 Oct 17 01:04 n6
-rw-r--r--. 1 root root  357 Oct 17 01:05 n7
-rw-r--r--. 1 root root  357 Oct 17 01:05 n8
-rw-r--r--. 1 root root  359 Oct 17 01:05 n9 

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-15-094115
node-exporter version=1.4.0

How reproducible:

always

Steps to Reproduce:

1. check node-exporter logs
2.
3.

Actual results:

"Failed to open directory, disabling udev device properties" in node-exporter logs

Expected results:

no error logs

Additional info:

no functional effect on the cluster
code:
https://github.com/prometheus/node_exporter/blob/release-1.4/collector/diskstats_linux.go#L262-L270

This is a clone of issue OCPBUGS-5548. The following is the description of the original issue:

Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390

When creating a Deployment, DeploymentConfig, or Knative Service with Pipeline enabled, and then deleting it again with the option "Delete other resources created by console" enabled (only available on 4.13+ with the PR above), the automatically created Pipeline is not deleted.

When the user tries to create the same resource with a Pipeline again this fails with an error:

An error occurred
secrets "nodeinfo-generic-webhook-secret" already exists

Version-Release number of selected component (if applicable):
4.13

(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5547)

How reproducible:
Always

Steps to Reproduce:

  1. Install OpenShift Pipelines operator (tested with 1.8.2)
  2. Create a new project
  3. Navigate to Add > Import from git and create an application
  4. Case 1: In the topology select the new resource and delete it
  5. Case 2: In the topology select the application group and delete the complete app

Actual results:
Case 1: Delete resources:

  1. Deployment (tries it twice!) $name
  2. Service $name
  3. Route $name
  4. ImageStream $name

Case 2: Delete application:

  1. Deployment (just once) $name
  2. Service $name
  3. Route $name
  4. ImageStream $name

Expected results:
Case 1: Delete resource:

  1. Delete Deployment $name should be called just once
  2. (Keep this deletion) Service $name
  3. (Keep this deletion) Route $name
  4. (Keep this deletion) ImageStream $name
  5. Missing deletion of the Tekton Pipeline $name
  6. Missing deletion of the Tekton TriggerTemplate with generated name trigger-template-$name-$random
  7. Missing deletion of the Secret $name-generic-webhook-secret
  8. Missing deletion of the Secret $name-github-webhook-secret

Case 2: Delete application:

  1. (Keep this deletion) Deployment $name
  2. (Keep this deletion) Service $name
  3. (Keep this deletion) Route $name
  4. (Keep this deletion) ImageStream $name
  5. Missing deletion of the Tekton Pipeline $name
  6. Missing deletion of the Tekton TriggerTemplate with generated name trigger-template-$name-$random
  7. Missing deletion of the Secret $name-generic-webhook-secret
  8. Missing deletion of the Secret $name-github-webhook-secret

Additional info:

This is a clone of issue OCPBUGS-2141. The following is the description of the original issue:

Description of problem:

4.12 cluster, no PV for Prometheus; the doc link still points to 4.8

# oc get co monitoring -o jsonpath='{.status.conditions}' | jq 'map(select(.type=="Degraded"))'
[
  {
    "lastTransitionTime": "2022-10-09T02:36:16Z",
    "message": "Prometheus is running without persistent storage which can lead to data loss during upgrades and cluster disruptions. Please refer to the official documentation to see how to configure storage for Prometheus: https://docs.openshift.com/container-platform/4.8/monitoring/configuring-the-monitoring-stack.html",
    "reason": "PrometheusDataPersistenceNotConfigured",
    "status": "False",
    "type": "Degraded"
  }
]

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-05-053337

How reproducible:

always

Steps to Reproduce:

1. no PVs for prometheus, check the monitoring operator status
2.
3.

Actual results:

the doc link still points to 4.8

Expected results:

the message links to the latest doc

Additional info:

slack thread: 
https://coreos.slack.com/archives/G79AW9Q7R/p1665283462123389

When we create an HCP, the Root CA in the HCP namespaces has the certificate and key named as

  • ca.key
  • ca.crt

But cert manager expects them to be named as

  • tls.key
  • tls.cert

Done criteria: The Root CA should have the certificate and key named as the cert manager expects.

We added server groups for the control plane and computes as part of OSASINFRA-2570, except for UPI, which only creates a server group for the control plane.

We need to update the UPI scripts to create a server group for computes, to be consistent with IPI and so that the instructions at https://docs.openshift.com/container-platform/4.11/machine_management/creating_machinesets/creating-machineset-osp.html work out of the box in case customers want to create MachineSets on their UPI clusters.

Related to OCPCLOUD-1135.

Description of problem: This is a follow-up to OCPBUGS-2795 and OCPBUGS-2941.

The installer fails to destroy the cluster when the OpenStack object storage omits 'content-type' from responses. This can happen on responses with HTTP status code 204, where a reverse proxy is truncating content-related headers (see this nginx bug report). In such cases, the installer errors with:

level=error msg=Bulk deleting of container "5ifivltb-ac890-chr5h-image-registry-fnxlmmhiesrfvpuxlxqnkoxdbl" objects failed: Cannot extract names from response with content-type: []

Listing container objects suffers from the same issue as listing the containers, and this one isn't fixed in the latest versions of gophercloud. I've reported https://github.com/gophercloud/gophercloud/issues/2509 and am fixing it with https://github.com/gophercloud/gophercloud/issues/2510; however, we likely won't be able to backport the bump to gophercloud master back to release-4.8, so we'll have to look for alternatives.

I'm setting the priority to critical as it's causing all our jobs to fail in master.

Version-Release number of selected component (if applicable):

4.8.z

How reproducible:

Likely not happening in customer environments where Swift is exposed directly. We're seeing the issue in our CI where we're using a non-RHOSP managed cloud.

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:

Description of problem:

The customer is not able to install a new OCP bare metal IPI cluster. During bootstrapping, the provisioning interfaces on the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server on an OCP IPI bare metal install.

Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872. The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal operator. The fix also needs to be applied to the control plane.

  ref:// https://github.com/openshift/cluster-baremetal-operator/pull/286/files

 

Version-Release number of selected component (if applicable):

 

How reproducible:

Perform OCP 4.10.16 IPI BareMetal install.

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

Customer should be able to install the cluster without any issue.

Additional info:

 

This is a clone of issue OCPBUGS-948. The following is the description of the original issue:

Description of problem:

OLM is setting the "openshift.io/scc" label to "anyuid" on several namespaces:

https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L23
https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L8

This label has no effect and will lead to confusion. It should be set to an empty string for now (removing it entirely would have no effect on upgraded clusters because the CVO does not remove deleted labels, so the next best thing is to clear the value).

For bonus points, OLM should remove the label entirely from the manifest and add migration logic to remove the existing label from these namespaces to handle upgraded clusters that already have it.

Version-Release number of selected component (if applicable):

Not sure how long this has been an issue, but fixing it in 4.12+ should be sufficient.

How reproducible:

always

Steps to Reproduce:

1. install cluster
2. examine namespace labels

Actual results:

label is present

Expected results:


Ideally the label should not be present, but in the short term setting it to an empty string is the quick fix and is better than nothing.

Description of problem:

InstanceMetadataTags are not supported in the AWS C2S regions (us-iso-x)

Version-Release number of selected component (if applicable):

 

How reproducible:

always

Steps to Reproduce:

1. OCP4.11 IPI Installation on AWS C2S regions
2. 
3. 

Actual results:

 

Expected results:

 

Additional info:

Actual Error: 

"Error launching resource Instance. Unsupported Operation Specifying InstanceMetadataTags is not yet supported"

There is a related fix on upstream:

resource/aws_instance: Handle regions where instance metadata tags are unsupported
https://github.com/hashicorp/terraform-provider-aws/pull/26631

Console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.

CONSOLE-3077 was updating this version but did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett, we should backport this to 4.12.

 

The risk should be minimal since we are only updating the model itself + validation + Readme

Description of problem:

When creating a pod with an additional network that contains a `spec.config.ipam.exclude` range, any address within the excluded range is still iterated while searching for a suitable IP candidate. As a result, pod creation times out when large exclude ranges are used.
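A hedged sketch of the obvious optimization implied here: when the candidate address falls inside an excluded CIDR, jump straight past the end of that CIDR instead of testing every address in it. The function names are illustrative, not the actual whereabouts IterateForAssignment code.

```
package main

import (
	"fmt"
	"math/big"
	"net"
)

// nextCandidate returns the next IP to try: if ip falls inside an excluded
// range, skip directly to the first address after that range instead of
// stepping through it one address at a time.
func nextCandidate(ip net.IP, excludes []*net.IPNet) net.IP {
	for _, ex := range excludes {
		if ex.Contains(ip) {
			return firstAfter(ex)
		}
	}
	return ip
}

// firstAfter computes the first IP address following the given network.
func firstAfter(n *net.IPNet) net.IP {
	i := new(big.Int).SetBytes(n.IP)
	ones, bits := n.Mask.Size()
	size := new(big.Int).Lsh(big.NewInt(1), uint(bits-ones)) // number of addresses in the range
	i.Add(i, size)
	b := i.Bytes()
	out := make(net.IP, len(n.IP))
	copy(out[len(out)-len(b):], b)
	return out
}

func main() {
	_, rng, _ := net.ParseCIDR("fd43:01f1:3daa:0baa::/64")
	_, excl, _ := net.ParseCIDR("fd43:01f1:3daa:0baa::/100")
	start := rng.IP
	fmt.Println("start:", start, "-> next candidate:", nextCandidate(start, []*net.IPNet{excl}))
}
```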

Version-Release number of selected component (if applicable):

 

How reproducible:

with big exclude ranges, 100%

Steps to Reproduce:

1. create a network-attachment-definition with a large exclude range:

$ cat <<EOF| oc apply -f -       
apiVersion: k8s.cni.cncf.io/v1                                            
kind: NetworkAttachmentDefinition
metadata:
  name: nad-w-excludes
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-net",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
         "type": "whereabouts",
         "range": "fd43:01f1:3daa:0baa::/64",
         "exclude": [ "fd43:01f1:3daa:0baa::/100" ],
         "log_file": "/tmp/whereabouts.log",
         "log_level" : "debug"
      }
    }
EOF
2. create a pod with the network attached:

$ cat <<EOF|oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-exclude-range
  annotations:
    k8s.v1.cni.cncf.io/networks: nad-w-excludes
spec:
  containers:
  - name: pod-1
    image: openshift/hello-openshift
EOF

3. check pod status, event log and whereabouts logs after a while: 

$ oc get pods
NAME                        READY   STATUS              RESTARTS   AGE
pod-with-exclude-range      0/1     ContainerCreating   0          2m23s

$ oc get events
<...>
6m39s       Normal    Scheduled                                    pod/pod-with-exclude-range                   Successfully assigned default/pod-with-exclude-range to <worker-node>
6m37s       Normal    AddedInterface                               pod/pod-with-exclude-range                   Add eth0 [10.129.2.49/23] from openshift-sdn
2m39s       Warning   FailedCreatePodSandBox                       pod/pod-with-exclude-range                   Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

$ oc debug node/<worker-node> -- tail /host/tmp/whereabouts.log
Starting pod/<worker-node>-debug ...
To use host binaries, run `chroot /host`
2022-10-27T14:14:50Z [debug] Finished leader election
2022-10-27T14:14:50Z [debug] IPManagement: {fd43:1f1:3daa:baa::1 ffffffffffffffff0000000000000000} , <nil>
2022-10-27T14:14:59Z [debug] Used defaults from parsed flat file config @ /etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.conf
2022-10-27T14:14:59Z [debug] ADD - IPAM configuration successfully read: {Name:macvlan-net Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[fd43:01f1:3daa:0baa::/80] DNS: {Nameservers:[] Domain: Search:[] Options:[]} Range:fd43:1f1:3daa:baa::/64 RangeStart:fd43:1f1:3daa:baa:: RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LeaderLeaseDuration:1500 LeaderRenewDeadline:1000 LeaderRetryPeriod:500 LogFile:/tmp/whereabouts.log LogLevel:debug OverlappingRanges:true SleepForRace:0 Gateway:<nil> Kubernetes: {KubeConfigPath:/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath:PodName:pod-with-exclude-range PodNamespace:default} 
2022-10-27T14:14:59Z [debug] Beginning IPAM for ContainerID: f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82
2022-10-27T14:14:59Z [debug] Started leader election
2022-10-27T14:14:59Z [debug] OnStartedLeading() called
2022-10-27T14:14:59Z [debug] Elected as leader, do processing
2022-10-27T14:14:59Z [debug] IPManagement - mode: 0 / containerID:f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82 / podRef: default/pod-with-exclude-range
2022-10-27T14:14:59Z [debug] IterateForAssignment input >> ip: fd43:1f1:3daa:baa:: | ipnet: {fd43:1f1:3daa:baa:: ffffffffffffffff0000000000000000} | first IP: fd43:1f1:3daa:baa::1 | last IP: fd43:1f1:3daa:baa:ffff:ffff:ffff:ffff

Actual results:

Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Expected results:

additional network gets attached to the pod

Additional info:

 

This is a clone of issue OCPBUGS-10433. The following is the description of the original issue:

Description of problem:

When CNO is managed by Hypershift, multus-admission-controller does not have the correct RollingUpdate parameters to meet the Hypershift requirements outlined here: https://github.com/openshift/hypershift/blob/646bcef53e4ecb9ec01a05408bb2da8ffd832a14/support/config/deployment.go#L81
```
There are two standard cases currently with hypershift: HA mode where there are 3 replicas spread across zones and then non ha with one replica. When only 3 zones are available you need to be able to set maxUnavailable in order to progress the rollout. However, you do not want to set that in the single replica case because it will result in downtime.
```
So when multus-admission-controller has more than one replica, the RollingUpdate parameters should be (see also the sketch after this snippet)
```
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
```
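
A sketch of selecting the strategy based on the replica count, in the spirit of the Hypershift deployment config helper linked above (illustrative Go using the standard apps/v1 types; the function name is an assumption):

```
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy returns the RollingUpdate parameters appropriate for the
// replica count: with multiple replicas spread across zones, allow one pod to be
// unavailable (and no surge) so rollouts can progress; with a single replica,
// keep the defaults to avoid downtime.
func rollingUpdateStrategy(replicas int32) appsv1.DeploymentStrategy {
	strategy := appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType}
	if replicas > 1 {
		maxSurge := intstr.FromInt(0)
		maxUnavailable := intstr.FromInt(1)
		strategy.RollingUpdate = &appsv1.RollingUpdateDeployment{
			MaxSurge:       &maxSurge,
			MaxUnavailable: &maxUnavailable,
		}
	}
	return strategy
}

func main() {
	fmt.Printf("HA (3 replicas):     %+v\n", rollingUpdateStrategy(3))
	fmt.Printf("non-HA (1 replica):  %+v\n", rollingUpdateStrategy(1))
}
```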

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1.Create OCP cluster using Hypershift
2.Check rolling update parameters of multus-admission-controller

Actual results:

the operator has default parameters: {"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"}

Expected results:

{"rollingUpdate":{"maxSurge":0,"maxUnavailable":1},"type":"RollingUpdate"}

Additional info:

 

This is a clone of issue OCPBUGS-3032. The following is the description of the original issue:

If installation fails at an early stage (e.g. pulling release images, configuring hosts, waiting for agents to come up) there is no indication that anything has gone wrong, and the installer binary may not even be able to connect.
We should at least display what is happening on the console so that users have some avenue to figure out for themselves what is going on.

Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.

The previous bump was OCPBUGS-5960.

This is a clone of issue OCPBUGS-7102. The following is the description of the original issue:

Description of problem:

https://github.com/openshift/operator-framework-olm/blob/7ec6b948a148171bd336750fed98818890136429/staging/operator-lifecycle-manager/pkg/controller/operators/olm/plugins/downstream_csv_namespace_labeler_plugin_test.go#L309

has a dependency on creation of a next-version release branch.

 

Version-Release number of selected component (if applicable):

4.13

How reproducible:

 

Steps to Reproduce:

1. clone operator-framework/operator-framework-olm
2. make unit/olm
3. deal with a really bumpy first-time kubebuilder/envtest install experience
4. profit

 

 

Actual results:

error

Expected results:

pass

Additional info:

 

 

Description of problem:

Creating network policies in a namespace whose name has the maximum length can end up causing this error:

2023-06-22T17:34:40.804880959Z I0622 17:34:40.804851       1 obj_retry.go:318] Retry add failed for *v1.NetworkPolicy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas, will try again later: failed to create Network Policy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas: failed to create default deny port groups: error in transact with ops [
{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd} {GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd}]} external_ids:{GoMap:map[name:a7686019953911959437_ingressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {d3b52500-963a-4f7b-8928-d869f298d2e8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3} {GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3}]} external_ids:{GoMap:map[name:a7686019953911959437_egressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b128baec-6acd-4683-8c12-5b968bf73bd8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:ovsdb error Details:set contains duplicate UUID:{GoUUID:} Rows:[]} {Count:0 Error: Details: UUID:{GoUUID:} Rows:[]}] and errors [ovsdb error: set contains duplicate]: 1 ovsdb operations failed

 

This is not a problem in 4.14, as we moved to ACL indexes, but in 4.13 and before we compare the ACL name and the external IDs. For default-deny ACLs we simply store the direction in the external ID, and the name of the ACL is limited to 63 characters in OVN. When we create default-deny ACLs, we create one that denies everything, and we also create some allow ACLs to permit ARP and neighbor discovery traffic. These two ACLs may be recognized as duplicates because their truncated names (namespace only) and the directions in their external IDs match; the sketch below illustrates the collision.
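A small illustration of the collision described above, assuming the 63-character truncation; the suffixes are made up, only the truncation behaviour matters.

```
package main

import "fmt"

// ovnACLName mimics the 63-character OVN ACL name limit described above. With a
// long enough namespace, both the default-deny ACL and the allow-ARP ACL in the
// same direction truncate to the same name, and since the external ID only
// stores the direction, the two look like duplicates.
func ovnACLName(namespace, suffix string) string {
	name := namespace + "_" + suffix
	if len(name) > 63 {
		name = name[:63]
	}
	return name
}

func main() {
	// Long namespace from the error above.
	ns := "ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident"
	deny := ovnACLName(ns, "ingressDefaultDeny")
	allowARP := ovnACLName(ns, "ingressAllowARP")
	fmt.Println(deny == allowARP) // true: names collide after truncation
}
```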

 

This is a clone of issue OCPBUGS-4166. The following is the description of the original issue:

Description of problem:

This is a wrapper bug for the library sync of 4.12

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:

Image registry pods panic while deploying OCP in ap-south-2 AWS region

Version-Release number of selected component (if applicable):

4.11.2

How reproducible:

Deploy OCP in AWS ap-south-2 region

Steps to Reproduce:

Deploy OCP in AWS ap-south-2 region 

Actual results:

panic: Invalid region provided: ap-south-2

Expected results:

Image registry pods should come up with no errors

Additional info:

 

This is a clone of issue OCPBUGS-7438. The following is the description of the original issue:

Description of problem:

The egress service nodeSelector parsing does not take into account invalid values that cause errors (such as "name part must consist of alphanumeric characters"), and the controller does not handle such bad input gracefully. When a bad input is given, it should log an error and ignore the service.
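
A hedged sketch of the graceful handling described, using the standard label-selector conversion, which rejects invalid keys and values; this is illustrative, not the actual ovnkube-master controller code.

```
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// selectorOrIgnore converts the egress service's nodeSelector into a selector.
// Invalid input (e.g. "a:b" as a key or "c&" as a value) produces an error, which
// the controller should log before ignoring the service instead of retrying forever.
func selectorOrIgnore(serviceKey string, nodeSelector *metav1.LabelSelector) bool {
	sel, err := metav1.LabelSelectorAsSelector(nodeSelector)
	if err != nil {
		log.Printf("ignoring egress service %s: invalid nodeSelector: %v", serviceKey, err)
		return false
	}
	fmt.Printf("service %s uses selector %q\n", serviceKey, sel.String())
	return true
}

func main() {
	bad := &metav1.LabelSelector{MatchLabels: map[string]string{"a:b": "c&"}}
	good := &metav1.LabelSelector{MatchLabels: map[string]string{"kubernetes.io/hostname": "node-1"}}
	selectorOrIgnore("default/bad-egress-svc", bad)   // logged and ignored
	selectorOrIgnore("default/good-egress-svc", good) // accepted
}
```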

 

Version-Release number of selected component (if applicable):

 

How reproducible:

create an egress service with a bad nodeSelector:
"{"nodeSelector":{"matchLabels":{"a:b": "c&"}}}"

ovnkube-master controller does not handle it gracefully

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info: