Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
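For illustration, a minimal sketch of an alerting rule carrying a runbook_url (the request above refers to a label; in OpenShift alerting rules it commonly appears as an annotation, which is what this sketch shows — all names and URLs are placeholders):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert            # placeholder name
  namespace: openshift-monitoring
spec:
  groups:
  - name: example
    rules:
    - alert: ExampleAlert
      expr: vector(1)            # placeholder expression
      labels:
        severity: info
      annotations:
        runbook_url: https://example.com/runbooks/example-alert.md  # surfaced as a link in the alert UI
```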
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in the /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
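As a rough sketch, the annotated metadata of a console-operator manifest might look like the following (the annotation key is the one named in this story; the value and the surrounding manifest are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap                 # any manifest in /bindata or /manifest
metadata:
  name: console-config          # placeholder
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"   # key from this story; value is an assumption
```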
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when an in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update all CSI sidecars to the latest upstream release.
This includes updating the VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition path into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
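A hedged sketch of the relevant agent-cluster-install.yaml fragment for dual-stack (the CIDRs and name are illustrative):

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-cluster          # placeholder
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24     # IPv4 subnet
    - cidr: 2620:52:0:1302::/64  # IPv6 subnet
```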
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow when their required components are installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As a user, I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
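For reference, a minimal sketch of the autoscaler container args once the CAO sets the upstream flag (the flag name is taken from kubernetes/autoscaler#4921; the surrounding Deployment layout is illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: cluster-autoscaler
        args:
        - --record-duplicated-events   # upstream flag replacing the carried patch
```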
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, I want telemetry data about the number of clusters using osImageURL, to help understand how broadly this feature is getting used and improve accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
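For context, a hedged sketch of how a cluster references a custom base image via osImageURL in a MachineConfig (the name and pull spec are illustrative):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-os      # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  osImageURL: quay.io/example/custom-os-image:latest   # illustrative pull spec
```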
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of this; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in well-defined way to support this flow.
Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for increased security posture. Enable OLM-managed operators to implement support for this in well-defined pattern.
See Operators & STS slide deck.
The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today is cumbersome to nonexistent, depending on the operator in question, and is seen as an adoption blocker of OpenShift on AWS.
This is particularly important for ROSA customers. Customers are expected to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.
As an engineer, I want the capability to implement CI test cases that run at different intervals (daily, weekly), so that downstream operators that depend on certain capabilities are not negatively impacted when systems the CCO interacts with change behavior.
Acceptance Criteria:
Create a stubbed out e2e test path in CCO and matching e2e calling code in release such that there exists a path to tests that verify working in an AWS STS workflow.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep-diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShift Dedicated.
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Having OKD run on SCOS (a CentOS Stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and made available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals; some are primary near-term goals, some are secondary goals that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product. We need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We have a set of images that should become multiarch images. This should be done both upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
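As a sketch of the kind of scheduling constraint meant here, standard Kubernetes node affinity can already express multiple architecture values (this is illustrative, not the final OLM API):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch   # standard node architecture label
          operator: In
          values:
          - amd64
          - arm64
```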
Cut a new release of the OLM API and update the OLM API dependency version (go.mod) in the OLM package; then bring the upstream changes from OLM-2674 to the downstream OLM repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into if and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to release and maintain OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
then the command could be used in a manner similar to many k8s examples like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console: when viewing a Pod in the console, the status.HostIP field is not visible.
Acceptance criteria:
As a console user, I want to have the option to:
For Deployments we will add a 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding an 'openshift.io/restartedAt: <actual-timestamp>' annotation. This restarts the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add a 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment.phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
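A minimal sketch of the two PATCH payloads described above (the timestamp is illustrative):

```yaml
# Deployment: 'Restart rollout'
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-24T12:00:00Z"   # illustrative timestamp
---
# Latest ReplicationController of a DeploymentConfig: 'Retry rollout'
metadata:
  annotations:
    openshift.io/deployment.phase: "New"
    # openshift.io/deployment.cancelled and openshift.io/deployment.status-reason are removed
```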
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, to perform the same action from the OpenShift console as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as they would through the CLI using the command "oc rollout restart deploy/<deployment-name>".
Usually when developers change the config map that a deployment uses, they have to restart pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants this button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. cc Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
Cloned from OCPSTRAT-377 to represent the backport to 4.12
Backport questions:
1) What's the impact/cost to any other critical items on the next release?
Installer and edge are mostly focused on activation/retention and working the list top-to-bottom without release blockers. This is an activation item highly coveted by SD and applicable in existing versions.
2) Is it a breaking change to the existing fleet?
No.
Enhancement PR: https://github.com/openshift/enhancements/pull/1397
API PR: https://github.com/openshift/api/pull/1460
Ingress Operator PR: https://github.com/openshift/cluster-ingress-operator/pull/928
Feature Goal: Support OpenShift installation in AWS Shared VPC scenario where AWS infrastructure resources (at least the Private Hosted Zone) belong to an account separate from the cluster installation target account.
The ingress operator is responsible for creating DNS records in AWS Route 53 for cluster ingress. Prior to the implementation of this epic, the ingress operator did not have the capability to add DNS records into an existing Route 53 hosted zone in the shared VPC.
As described in the WIP PR https://github.com/openshift/cluster-ingress-operator/pull/928, the ingress operator will consume a new API field that contains the IAM Role ARN for configuring DNS records in the private hosted zone. If this field is present, then the ingress operator will use this account to create all private hosted zone records. The API fields will be described in the Enhancement PR.
The ingress operator code will accomplish this by defining a new provider implementation that wraps two other DNS providers, using one of them to publish records to the public zone and the other to publish records to the private zone.
See NE-1299
See NE-1299
Backport of 4.13 AWS Shared VPC Feature
Backport of 4.13 AWS Shared VPC Feature
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
Telecommunications providers continue to deploy OpenShift at the Far Edge. The acceleration of this adoption and the nature of existing Telecommunication infrastructure and processes drive the need to improve OpenShift provisioning speed at the Far Edge site and the simplicity of preparation and deployment of Far Edge clusters, at scale.
Telecommunications Service Provider Technicians will be rolling out OCP w/ a vDU configuration to new Far Edge sites, at scale. They will be working from a service depot where they will pre-install/pre-image a set of Far Edge servers to be deployed at a later date. When ready for deployment, a technician will take one of these generic-OCP servers to a Far Edge site, enter the site specific information, wait for confirmation that the vDU is in-service/online, and then move on to deploy another server to a different Far Edge site.
Retail employees in brick-and-mortar stores will install SNO servers and it needs to be as simple as possible. The servers will likely be shipped to the retail store, cabled and powered by a retail employee and the site-specific information needs to be provided to the system in the simplest way possible, ideally without any action from the retail employee.
Q: how challenging will it be to support multi-node clusters with this feature?
This is a clone of issue OCPBUGS-14416. The following is the description of the original issue:
—
Description of problem:
When installing SNO with bootstrap in place the cluster-policy-controller hangs for 6 minutes waiting for the lease to be acquired.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Run the PoC using the makefile here: https://github.com/eranco74/bootstrap-in-place-poc
2. Observe the cluster-policy-controller logs post reboot
Actual results:
I0530 16:01:18.011988 1 leaderelection.go:352] lock is held by leaderelection.k8s.io/unknown and has not yet expired
I0530 16:01:18.012002 1 leaderelection.go:253] failed to acquire lease kube-system/cluster-policy-controller-lock
I0530 16:07:31.176649 1 leaderelection.go:258] successfully acquired lease kube-system/cluster-policy-controller-lock
Expected results:
Expected the bootstrap cluster-policy-controller to release the lease so that the cluster-policy-controller running post reboot won't have to wait for the lease to expire.
Additional info:
Suggested resolution for bootstrap in place: https://github.com/openshift/installer/pull/7219/files#diff-f12fbadd10845e6dab2999e8a3828ba57176db10240695c62d8d177a077c7161R44-R59
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
Unknowns: the request is clear; the solution/implementation is to be further clarified.
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components. We are generating the markdown from the dynamic-plugin-sdk using `yarn generate-doc`.
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
Acceptance Criteria: Add missing docs for *Icon and *Status components in the API docs.
When defining two proxy endpoints,

apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
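For illustration, the node and PackageManifest labels involved (values are examples):

```yaml
# Architecture label set on each node by the kubelet
apiVersion: v1
kind: Node
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# PackageManifest labels declaring supported architectures
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.amd64: supported
```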
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, console will need to surface only those operators in the Operator Hub which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
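A hedged sketch of the referenced secret (the data key name is an assumption based on this proposal):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config   # aligned with the tlsClientConfig behavior
type: Opaque
stringData:
  password: my-plaintext-password   # key name is an assumption
```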
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD has already been done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the `withSecretHashAnnotation` call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community, or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required to opt in to syncing.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
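The label the OLM operator would apply, sketched on a namespace (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # hypothetical operator namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```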
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for ovn troubleshooting before ovn-k goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now have thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers is disconnected for a period of time from the southbound database, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
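A minimal sketch of such a rule, using the metric named above (the alert name and severity are placeholders):

```yaml
- alert: OVNControllerSouthboundDisconnected   # placeholder name
  expr: ovn_controller_southbound_database_connected == 0
  for: 10m                                     # metric only updates every 2 minutes
  labels:
    severity: warning
```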
DoD: Merged to CNO and tested by QE
Add sock proxy to cluster-network-operator so egressip can use grpc to reach worker nodes.
With the introduction of grpc as the means for determining the state of a given egress node, hypershift should be able to leverage the socks proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
Rebase help doc: https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Place holder epic to track spontaneous task which does not deserve its own epic.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causing the tests to fail as a result.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
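A hedged sketch of the shared-provider approach with the AWS SDK for Go; the issuer URL, audience, and error handling are assumptions:
~~~
package e2e

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/iam"
	"github.com/aws/aws-sdk-go/service/iam/iamiface"
)

// ensureSharedOIDCProvider creates the single OIDC provider all e2e
// HostedClusters share, treating "already exists" as success so
// concurrent test runs can race safely.
func ensureSharedOIDCProvider(client iamiface.IAMAPI, issuerURL, thumbprint string) error {
	_, err := client.CreateOpenIDConnectProvider(&iam.CreateOpenIDConnectProviderInput{
		Url:            aws.String(issuerURL),
		ClientIDList:   []*string{aws.String("openshift")}, // assumed audience
		ThumbprintList: []*string{aws.String(thumbprint)},
	})
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == iam.ErrCodeEntityAlreadyExistsException {
		return nil // another run created it first: share it
	}
	return err
}
~~~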
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail without surfacing a clear error.
We should check that both the key and the role are compatible/operational for a given cluster, and otherwise fail with a condition.
AC:
We have a connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial cloud providers directly without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.
Once the HostedCluster and NodePool are paused using the PausedUntil field, the awsprivatelink controller still continues reconciling.
How to test this:
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
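A rough sketch of that conditional deployment logic, assuming a controller-runtime client; the function shape is an illustration, not the actual operator code:
~~~
package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileImageCache deploys the image cache DaemonSet only when there
// is a QCOW URL to mirror, and deletes it otherwise.
func reconcileImageCache(ctx context.Context, c client.Client, provisioningOSDownloadURL string, ds *appsv1.DaemonSet) error {
	if provisioningOSDownloadURL == "" {
		// Nothing to mirror: the cache is not needed and should be removed.
		if err := c.Delete(ctx, ds); err != nil && !apierrors.IsNotFound(err) {
			return err
		}
		return nil
	}
	// A QCOW must be mirrored (clusters installed with 4.9 or earlier):
	// make sure the cache exists. Real code would create-or-update.
	err := c.Create(ctx, ds)
	if apierrors.IsAlreadyExists(err) {
		return nil
	}
	return err
}
~~~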
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation succeeds
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group is cleaned up. This leads to problems later, with errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices: /usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not get exclusive access to /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts directly from GitHub. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (too many requests) responses.
The way we fixed it for assisted-service is changing the installation to use a quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
This is a clone of issue MULTIARCH-3683. The following is the description of the original issue:
—
Flags similar to these https://github.com/openshift/hypershift/blob/main/cmd/cluster/powervs/create.go#L57-L61 from the create command are missing in the destroy command, so the infra destroy functionality does not get these flags and cannot properly destroy infra that uses existing resources.
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root (with elevated privileges) from the host's perspective
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
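A minimal sketch of the in-scope telemetry flag as a Prometheus gauge in Go; the metric name and the way feature usage is detected are assumptions:
~~~
package telemetry

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// advancedFeatureInUse is a 0/1 gauge: 1 as soon as any advanced feature
// (PV encryption, encryption with KMS, external mode) is detected in use.
var advancedFeatureInUse = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "odf_advanced_feature_in_use", // assumed metric name
	Help: "1 if any ODF advanced feature is in use, 0 otherwise.",
})

// RecordFeatureUsage is called by the operator after inspecting the
// cluster configuration.
func RecordFeatureUsage(pvEncryption, kmsEncryption, externalMode bool) {
	if pvEncryption || kmsEncryption || externalMode {
		advancedFeatureInUse.Set(1)
	} else {
		advancedFeatureInUse.Set(0)
	}
}
~~~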
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for the in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
As a developer, I would like to remove the random Terraform provider because it is essentially unnecessary, and removing it would improve our build process.
The random Terraform provider is used in Azure & Azure Stack to create a random string. This could easily be done in go code and passed in as a variable.
Removing an extra provider would decrease our build time and improve our build stability, which is often failing due to timeouts.
The random string is used here in Azure (and similarly in Azure Stack):
https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L23-L27
One approach would be to generate the string in tfvars and pass it in as a terraform variable.
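As a sketch of that approach, a small Go function using crypto/rand that could replace the provider; the names are illustrative:
~~~
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// randomString returns a random lowercase alphanumeric string of length n,
// standing in for what the random Terraform provider generates today.
func randomString(n int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	b := make([]byte, n)
	for i := range b {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		b[i] = alphabet[idx.Int64()]
	}
	return string(b), nil
}

func main() {
	s, err := randomString(8)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // written into tfvars and passed in as a Terraform variable
}
~~~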
Description of problem:
For OVN-Kubernetes to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2es on the OpenShift side. This bug tracks the efforts to get this into 4.12 OCP.
This is a clone of issue OCPBUGS-15720. The following is the description of the original issue:
—
Description of problem:
Deploying a Helm chart that features a values.schema.json using either the 2019-09 or 2020-12 (latest) revision of JSON Schema results in the UI hanging on create with three dots loading... This is not the case if the YAML view is used, since I suppose this view is not trying to be clever and lets Helm validate the chart values against the schema itself.
Reproduced in 4.13; probably affects other versions as well.
How reproducible:
100%
Steps to Reproduce:
1. Go to Helm tab.
2. Click create in top right and select Repository
3. Paste following into YAML view and click Create:
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: reproducer
spec:
  connectionConfig:
    url: 'https://raw.githubusercontent.com/tumido/helm-backstage/blog2'
4. Go to the Helm tab again (if redirected elsewhere)
5. Click create in top right and select Helm Release
6. In catalog filter select Chart repositories: Reproducer
7. Click on the single tile available (Backstage) and click Create
8. Switch to Form view
9. Leave default values and click Create
10. Stare at the always loading screen that never proceeds further.
Expected results:
It installs and deploys the chart
This is caused by a JSON Schema containing a $schema key that points to the revision of the JSON Schema standard to be used:
{ "$schema": "https://json-schema.org/draft/2020-12/schema" }
I've managed to trace this back to this react-jsonschema-form issue:
https://github.com/rjsf-team/react-jsonschema-form/issues/2241
It seems the library used here for validation supports neither the 2019-09 draft nor the most current 2020-12 revision.
It happens only if the chart follows the JSON Schema standard and declares the revision properly.
Workarounds:
IMO best solution:
Helm form renderer should NOT do any validation, since it can't handle the schema properly. Instead, it should leave this job to the Helm backend: Helm validates the values against the schema when installing the chart anyway. The YAML view also does no validation, and that one seems to do the job properly.
Additionally, there is currently no formal requirement for charts admitted to the curated Helm catalog stating that the most recent supported JSON Schema revision is 4 years old and that the 2 later revisions are not supported.
Also, the Form UI should not just hang on submit. Instead, it should at least fail gracefully.
Related to:
https://github.com/janus-idp/helm-backstage/issues/64#issuecomment-1587678319
Description of problem:
When deleting a BYOH node in Platform:None, as well as in an Azure IPI cluster, the node gets reconciled correctly; however, when added back to the cluster it stays in Ready,SchedulingDisabled. When checking the WMCO logs, we can observe the following log:
{"level":"error","ts":"2022-12-14T16:14:31Z","msg":"Reconciler error","controller":"configmap","controllerGroup":"","controllerKind":"ConfigMap","configMap":{"name":"windows-instances","namespace":"openshift-windows-machine-config-operator"},"namespace":"openshift-windows-machine-config-operator","name":"windows-instances","reconcileID":"d66a3142-d52c-43f5-8a42-214ce9c88417","error":"error configuring host with address 10.0.55.21: configuring node network failed: error waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation for byoh-2019: timeout waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation: timed out waiting for the condition"}
And when checking the node's annotations, the k8s.ovn.org/hybrid-overlay-node-subnet annotation is indeed missing:
$ oc get nodes byoh-2019 -o=jsonpath="{.metadata.annotations}"
{"volumes.kubernetes.io/controller-managed-attach-detach":"true","windowsmachineconfig.openshift.io/desired-version":"7.0.0-16f486a","windowsmachineconfig.openshift.io/pub-key-hash":"1df2c166b1c401180523270e9cf6bc2cd2724b9279ea65668a3b95298525a0f5","windowsmachineconfig.openshift.io/username":"wx4EBwMICL6qT+4RY8tgbx4hiRmQdHlwUsHgVGCTVY7S5gG/G5gb/Wzv0JBLhNP9\u003cwmcoMarker\u003ejlmI5ExHPYFrd2Fw6Lxe/6PKEE5/vYAhZ2n1Z2nBIoa1xN1/HEaXhqR2CuXNe7Ez\u003cwmcoMarker\u003eg2Hg+gA=\u003cwmcoMarker\u003e=ubWA"}
Tested in Azure IPI and Platform:None; in both cases the issue was reproduced.
Version-Release number of selected component (if applicable):
$ oc get cm -n openshift-windows-machine-config-operator
NAME DATA AGE
kube-root-ca.crt 1 10h
openshift-service-ca.crt 1 10h
windows-instances 2 9h
windows-machine-config-operator-lock 0 6h24m
windows-services-7.0.0-16f486a 2 6h23m
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.0-rc.4 True False 6h48m Cluster version is 4.12.0-rc.4
How reproducible:
Steps to Reproduce:
1. Deploy an OCP 4.11 cluster with WMCO 6.0.0
2. Add one or two BYOH nodes to the cluster
3. Upgrade the cluster to OCP 4.12, and later WMCO to 7.0.0
4. Remove one of the BYOH nodes using: oc delete node <byoh-node-id>
5. Wait for reconciliation to bring the node back
Actual results:
The deleted node gets re-added but stays in Ready,SchedulingDisabled and the workloads left in Pending state.
Expected results:
The node gets properly added to the cluster and stays in Ready.
Additional info:
Found when running the resource watcher: these keep updating with no real changes, just moving conditions around. Likely needs bugs for all three.
Description of problem:
Agent-based installation fails during a 3+1 deployment. I found that the machine-api-operator is degraded because the minimum worker replica count is 2, while a 3+1 deployment defines only one worker node.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create agent.iso (openshift-install agent create image) using install-config.yaml and agent-config.yaml (PFA sample files)
2. Deploy a 3+1 cluster using agent.iso
3. Execute "openshift-install agent wait-for install-complete" command to wait for install complete.
Actual results:
Getting below error:
ERROR Cluster operator kube-controller-manager Degraded is True with GarbageCollector_Error: GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host
INFO Cluster operator machine-api Progressing is True with SyncingResources: Progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337
ERROR Cluster operator machine-api Degraded is True with SyncingFailed: Failed when progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 because minimum worker replica count (2) not yet met: current running replicas 1, waiting for []
INFO Cluster operator machine-api Available is False with Initializing: Operator is initializing
INFO Cluster operator monitoring Available is False with UpdatingPrometheusOperatorFailed: Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
ERROR Cluster operator monitoring Degraded is True with UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: updating prometheus operator: reconciling Prometheus Operator Admission Webhook Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator-admission-webhook: got 1 unavailable replicas
INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack.
INFO Cluster operator network ManagementStateDegraded is False with :
ERROR Cluster initialization failed because one or more operators are not functioning properly.
ERROR The cluster should be accessible for troubleshooting as detailed in the documentation linked below,
ERROR https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
Expected results:
3+1 deployment should be successful.
Additional info:
I found that there is a condition in the machine-api-operator that checks that the worker node count is at least 2, which is preventing the 3+1 deployment: https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/sync.go#L322
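For reference, a simplified sketch of the shape of that check (not the actual sync.go code); a 3+1 topology with a single worker can never satisfy it:
~~~
package operator

import "fmt"

const minimumWorkerReplicas = 2 // the hard-coded lower bound

// workersReady mirrors the failure message seen in the install logs.
func workersReady(runningWorkers int) error {
	if runningWorkers < minimumWorkerReplicas {
		return fmt.Errorf("minimum worker replica count (%d) not yet met: current running replicas %d",
			minimumWorkerReplicas, runningWorkers)
	}
	return nil
}
~~~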
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-2997.
This is a clone of issue OCPBUGS-17365. The following is the description of the original issue:
—
When we update a Secret referenced in the BareMetalHost, an immediate reconcile of the corresponding BMH is not triggered. In most states we requeue each CR after a timeout, so we should eventually see the changes.
In the case of BMC Secrets, this has been broken since the fix for OCPBUGS-1080 in 4.12.
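One way to fix this, sketched with controller-runtime v0.14-style APIs (the mapper and its lookup logic are assumptions): watch Secrets and map each event back to the BareMetalHosts that reference it, so a credential update triggers an immediate reconcile instead of waiting for the periodic requeue.
~~~
package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"

	metal3v1alpha1 "github.com/metal3-io/baremetal-operator/apis/metal3.io/v1alpha1"
)

type BareMetalHostReconciler struct{ client.Client }

func (r *BareMetalHostReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil // reconcile logic elided
}

// secretToHosts would look up the BareMetalHosts whose credentials
// reference the changed Secret (lookup elided).
func (r *BareMetalHostReconciler) secretToHosts(obj client.Object) []reconcile.Request {
	return nil
}

func (r *BareMetalHostReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&metal3v1alpha1.BareMetalHost{}).
		// Without this watch, Secret updates are only seen on the periodic
		// requeue; with it, they trigger an immediate reconcile.
		Watches(&source.Kind{Type: &corev1.Secret{}},
			handler.EnqueueRequestsFromMapFunc(r.secretToHosts)).
		Complete(r)
}
~~~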
This is a clone of issue OCPBUGS-5306. The following is the description of the original issue:
—
Description of problem:
One old machine is stuck in Deleting and many cluster operators become degraded when doing master replacement on a cluster with the OVN network
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-01-02-175114
How reproducible:
Always, after several attempts
Steps to Reproduce:
1.Install a cluster liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.0-0.nightly-2023-01-02-175114 True False 30m Cluster version is 4.12.0-0.nightly-2023-01-02-175114 liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2023-01-02-175114 True False False 33m baremetal 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cloud-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 84m cloud-credential 4.12.0-0.nightly-2023-01-02-175114 True False False 80m cluster-api 4.12.0-0.nightly-2023-01-02-175114 True False False 81m cluster-autoscaler 4.12.0-0.nightly-2023-01-02-175114 True False False 80m config-operator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m console 4.12.0-0.nightly-2023-01-02-175114 True False False 33m control-plane-machine-set 4.12.0-0.nightly-2023-01-02-175114 True False False 79m csi-snapshot-controller 4.12.0-0.nightly-2023-01-02-175114 True False False 81m dns 4.12.0-0.nightly-2023-01-02-175114 True False False 80m etcd 4.12.0-0.nightly-2023-01-02-175114 True False False 79m image-registry 4.12.0-0.nightly-2023-01-02-175114 True False False 74m ingress 4.12.0-0.nightly-2023-01-02-175114 True False False 74m insights 4.12.0-0.nightly-2023-01-02-175114 True False False 21m kube-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-scheduler 4.12.0-0.nightly-2023-01-02-175114 True False False 77m kube-storage-version-migrator 4.12.0-0.nightly-2023-01-02-175114 True False False 81m machine-api 4.12.0-0.nightly-2023-01-02-175114 True False False 75m machine-approver 4.12.0-0.nightly-2023-01-02-175114 True False False 80m machine-config 4.12.0-0.nightly-2023-01-02-175114 True False False 74m marketplace 4.12.0-0.nightly-2023-01-02-175114 True False False 80m monitoring 4.12.0-0.nightly-2023-01-02-175114 True False False 72m network 4.12.0-0.nightly-2023-01-02-175114 True False False 83m node-tuning 4.12.0-0.nightly-2023-01-02-175114 True False False 80m openshift-apiserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m openshift-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 76m openshift-samples 4.12.0-0.nightly-2023-01-02-175114 True False False 22m operator-lifecycle-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2023-01-02-175114 True False False 81m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2023-01-02-175114 True False False 75m platform-operators-aggregated 4.12.0-0.nightly-2023-01-02-175114 True False False 74m service-ca 4.12.0-0.nightly-2023-01-02-175114 True False False 81m storage 4.12.0-0.nightly-2023-01-02-175114 True False False 74m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 85m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 85m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 80m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 80m liuhuali@Lius-MacBook-Pro huali-test % oc get controlplanemachineset NAME 
DESIRED CURRENT READY UPDATED UNAVAILABLE STATE AGE cluster 3 3 3 3 Active 86m 2.Edit controlplanemachineset, change instanceType to another value to trigger RollingUpdate liuhuali@Lius-MacBook-Pro huali-test % oc edit controlplanemachineset cluster controlplanemachineset.machine.openshift.io/cluster edited liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Running m6i.xlarge us-east-2 us-east-2a 86m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 86m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 86m huliu-aws4d2-fcks7-master-mbgz6-0 Provisioning m5.xlarge us-east-2 us-east-2a 5s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 81m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 81m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 81m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-0 Deleting m6i.xlarge us-east-2 us-east-2a 92m huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 92m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 92m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 5m36s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 87m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 87m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 87m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-1 Running m6i.xlarge us-east-2 us-east-2b 101m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 101m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 15m huliu-aws4d2-fcks7-master-nbt9g-1 Provisioned m5.xlarge us-east-2 us-east-2b 3m1s huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 96m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 96m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 96m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-1 Deleting m6i.xlarge us-east-2 us-east-2b 149m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 149m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 62m huliu-aws4d2-fcks7-master-nbt9g-1 Running m5.xlarge us-east-2 us-east-2b 50m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 144m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 144m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 144m liuhuali@Lius-MacBook-Pro huali-test % oc get machine NAME PHASE TYPE REGION ZONE AGE huliu-aws4d2-fcks7-master-1 Deleting m6i.xlarge us-east-2 us-east-2b 4h12m huliu-aws4d2-fcks7-master-2 Running m6i.xlarge us-east-2 us-east-2a 4h12m huliu-aws4d2-fcks7-master-mbgz6-0 Running m5.xlarge us-east-2 us-east-2a 166m huliu-aws4d2-fcks7-master-nbt9g-1 Running m5.xlarge us-east-2 us-east-2b 153m huliu-aws4d2-fcks7-worker-us-east-2a-m279f Running m6i.xlarge us-east-2 us-east-2a 4h7m huliu-aws4d2-fcks7-worker-us-east-2a-qg9ps Running m6i.xlarge us-east-2 us-east-2a 4h7m huliu-aws4d2-fcks7-worker-us-east-2b-ps6tz Running m6i.xlarge us-east-2 us-east-2b 4h7m 
3.master-1 stuck in Deleting, and many co get degraded, many pod cannot get Running liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2023-01-02-175114 True True True 9s APIServerDeploymentDegraded: 1 of 4 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7b65bbc76b-mxl99 pod)... baremetal 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m cloud-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 4h11m cloud-credential 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m cluster-api 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m cluster-autoscaler 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m config-operator 4.12.0-0.nightly-2023-01-02-175114 True False False 4h9m console 4.12.0-0.nightly-2023-01-02-175114 False False False 150m RouteHealthAvailable: console route is not admitted control-plane-machine-set 4.12.0-0.nightly-2023-01-02-175114 True True False 4h7m Observed 1 replica(s) in need of update csi-snapshot-controller 4.12.0-0.nightly-2023-01-02-175114 True True False 4h9m CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods... dns 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m etcd 4.12.0-0.nightly-2023-01-02-175114 True True True 4h7m GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal... image-registry 4.12.0-0.nightly-2023-01-02-175114 True False False 4h2m ingress 4.12.0-0.nightly-2023-01-02-175114 True False False 4h2m insights 4.12.0-0.nightly-2023-01-02-175114 True False False 3h8m kube-apiserver 4.12.0-0.nightly-2023-01-02-175114 True True True 4h5m GuardControllerDegraded: Missing operand on node ip-10-0-79-159.us-east-2.compute.internal kube-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False True 4h5m GarbageCollectorDegraded: error querying alerts: Post "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query": dial tcp 172.30.19.115:9091: i/o timeout kube-scheduler 4.12.0-0.nightly-2023-01-02-175114 True False False 4h5m kube-storage-version-migrator 4.12.0-0.nightly-2023-01-02-175114 True False False 162m machine-api 4.12.0-0.nightly-2023-01-02-175114 True False False 4h3m machine-approver 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m machine-config 4.12.0-0.nightly-2023-01-02-175114 False False True 139m Cluster not available for [{operator 4.12.0-0.nightly-2023-01-02-175114}]: error during waitForDeploymentRollout: [timed out waiting for the condition, deployment machine-config-controller is not ready. status: (replicas: 1, updated: 1, ready: 0, unavailable: 1)] marketplace 4.12.0-0.nightly-2023-01-02-175114 True False False 4h8m monitoring 4.12.0-0.nightly-2023-01-02-175114 False True True 144m reconciling Prometheus Operator Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator: got 1 unavailable replicas network 4.12.0-0.nightly-2023-01-02-175114 True True False 4h11m DaemonSet "/openshift-ovn-kubernetes/ovnkube-master" is not available (awaiting 1 nodes)... node-tuning 4.12.0-0.nightly-2023-01-02-175114 True False False 4h7m openshift-apiserver 4.12.0-0.nightly-2023-01-02-175114 False True False 151m APIServicesAvailable: "apps.openshift.io.v1" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request... 
openshift-controller-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 4h4m openshift-samples 4.12.0-0.nightly-2023-01-02-175114 True False False 3h10m operator-lifecycle-manager 4.12.0-0.nightly-2023-01-02-175114 True False False 4h9m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2023-01-02-175114 True False False 4h9m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2023-01-02-175114 True False False 2m44s platform-operators-aggregated 4.12.0-0.nightly-2023-01-02-175114 True False False 4h2m service-ca 4.12.0-0.nightly-2023-01-02-175114 True False False 4h9m storage 4.12.0-0.nightly-2023-01-02-175114 True True False 4h2m AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods... liuhuali@Lius-MacBook-Pro huali-test % liuhuali@Lius-MacBook-Pro huali-test % oc get pod --all-namespaces|grep -v Running NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver apiserver-5cbdf985f9-85z4t 0/2 Init:0/1 0 155m openshift-authentication oauth-openshift-5c46d6658b-lkbjj 0/1 Pending 0 156m openshift-cloud-credential-operator pod-identity-webhook-77bf7c646d-4rtn8 0/1 ContainerCreating 0 156m openshift-cluster-api capa-controller-manager-d484bc464-lhqbk 0/1 ContainerCreating 0 156m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-5668745dcb-jc7fm 0/11 ContainerCreating 0 156m openshift-cluster-csi-drivers aws-ebs-csi-driver-operator-5d6b9fbd77-827vs 0/1 ContainerCreating 0 156m openshift-cluster-csi-drivers shared-resource-csi-driver-operator-866d897954-z77gz 0/1 ContainerCreating 0 156m openshift-cluster-csi-drivers shared-resource-csi-driver-webhook-d794748dc-kctkn 0/1 ContainerCreating 0 156m openshift-cluster-samples-operator cluster-samples-operator-754758b9d7-nbcc9 0/2 ContainerCreating 0 156m openshift-cluster-storage-operator csi-snapshot-controller-6d9c448fdd-wdb7n 0/1 ContainerCreating 0 156m openshift-cluster-storage-operator csi-snapshot-webhook-6966f555f8-cbdc7 0/1 ContainerCreating 0 156m openshift-console-operator console-operator-7d8567876b-nxgpj 0/2 ContainerCreating 0 156m openshift-console console-855f66f4f8-q869k 0/1 ContainerCreating 0 156m openshift-console downloads-7b645b6b98-7jqfw 0/1 ContainerCreating 0 156m openshift-controller-manager controller-manager-548c7f97fb-bl68p 0/1 Pending 0 156m openshift-etcd installer-13-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 9m39s openshift-etcd installer-3-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h13m openshift-etcd installer-4-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h12m openshift-etcd installer-5-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h7m openshift-etcd installer-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h1m openshift-etcd installer-8-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 168m openshift-etcd revision-pruner-10-ip-10-0-48-21.us-east-2.compute.internal 0/1 ContainerCreating 0 160m openshift-etcd revision-pruner-10-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 160m openshift-etcd revision-pruner-11-ip-10-0-48-21.us-east-2.compute.internal 0/1 ContainerCreating 0 159m openshift-etcd revision-pruner-11-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 159m openshift-etcd revision-pruner-11-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-etcd revision-pruner-12-ip-10-0-48-21.us-east-2.compute.internal 0/1 ContainerCreating 0 156m openshift-etcd 
revision-pruner-12-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-etcd revision-pruner-12-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-etcd revision-pruner-13-ip-10-0-48-21.us-east-2.compute.internal 0/1 ContainerCreating 0 155m openshift-etcd revision-pruner-13-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 155m openshift-etcd revision-pruner-13-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 10m openshift-etcd revision-pruner-13-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 155m openshift-etcd revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-etcd revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 3h57m openshift-etcd revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 168m openshift-etcd revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 168m openshift-etcd revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 168m openshift-etcd revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 168m openshift-etcd revision-pruner-9-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 166m openshift-etcd revision-pruner-9-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 166m openshift-kube-apiserver installer-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h4m openshift-kube-apiserver installer-7-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 168m openshift-kube-apiserver installer-9-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 9m52s openshift-kube-apiserver revision-pruner-6-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-kube-apiserver revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 3h59m openshift-kube-apiserver revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 168m openshift-kube-apiserver revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 168m openshift-kube-apiserver revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 166m openshift-kube-apiserver revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 166m openshift-kube-apiserver revision-pruner-8-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-kube-apiserver revision-pruner-9-ip-10-0-48-21.us-east-2.compute.internal 0/1 ContainerCreating 0 155m openshift-kube-apiserver revision-pruner-9-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 155m openshift-kube-apiserver revision-pruner-9-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 9m54s openshift-kube-apiserver revision-pruner-9-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 155m openshift-kube-controller-manager installer-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h11m openshift-kube-controller-manager installer-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h7m openshift-kube-controller-manager installer-8-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-kube-controller-manager installer-8-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h4m openshift-kube-controller-manager installer-8-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-kube-controller-manager revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h13m openshift-kube-controller-manager 
revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h10m openshift-kube-controller-manager revision-pruner-8-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-kube-controller-manager revision-pruner-8-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h5m openshift-kube-controller-manager revision-pruner-8-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 4m36s openshift-kube-controller-manager revision-pruner-8-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-kube-scheduler installer-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h11m openshift-kube-scheduler installer-7-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-kube-scheduler installer-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h10m openshift-kube-scheduler installer-7-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-kube-scheduler revision-pruner-6-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h13m openshift-kube-scheduler revision-pruner-7-ip-10-0-48-21.us-east-2.compute.internal 0/1 Completed 0 169m openshift-kube-scheduler revision-pruner-7-ip-10-0-63-159.us-east-2.compute.internal 0/1 Completed 0 4h10m openshift-kube-scheduler revision-pruner-7-ip-10-0-76-132.us-east-2.compute.internal 0/1 ContainerCreating 0 4m36s openshift-kube-scheduler revision-pruner-7-ip-10-0-79-159.us-east-2.compute.internal 0/1 Completed 0 156m openshift-machine-config-operator machine-config-controller-55b4d497b6-p89lb 0/2 ContainerCreating 0 156m openshift-marketplace qe-app-registry-w8gnc 0/1 ContainerCreating 0 148m openshift-monitoring prometheus-operator-776bd79f6d-vz7q5 0/2 ContainerCreating 0 156m openshift-multus multus-admission-controller-5f88d77b65-nzmj5 0/2 ContainerCreating 0 156m openshift-oauth-apiserver apiserver-7b65bbc76b-mxl99 0/1 Init:0/1 0 154m openshift-operator-lifecycle-manager collect-profiles-27879975-fpvzk 0/1 Completed 0 3h21m openshift-operator-lifecycle-manager collect-profiles-27879990-86rk8 0/1 Completed 0 3h6m openshift-operator-lifecycle-manager collect-profiles-27880005-bscc4 0/1 Completed 0 171m openshift-operator-lifecycle-manager collect-profiles-27880170-s8cbj 0/1 ContainerCreating 0 4m37s openshift-operator-lifecycle-manager packageserver-6f8f8f9d54-4r96h 0/1 ContainerCreating 0 156m openshift-ovn-kubernetes ovnkube-master-lr9pk 3/6 CrashLoopBackOff 23 (46s ago) 156m openshift-route-controller-manager route-controller-manager-747bf8684f-5vhwx 0/1 Pending 0 156m liuhuali@Lius-MacBook-Pro huali-test %
Actual results:
RollingUpdate cannot complete successfully
Expected results:
RollingUpdate should complete successfully
Additional info:
Must gather - https://drive.google.com/file/d/1bvE1XUuZKLBGmq7OTXNVCNcFZkqbarab/view?usp=sharing must gather of another cluster hit the same issue (also this template ipi-on-aws/versioned-installer-customer_vpc-disconnected_private_cluster-techpreview-ci and with ovn network): https://drive.google.com/file/d/1CqAJlqk2wgnEuMo3lLaObk4Nbxi82y_A/view?usp=sharing must gather of another cluster hit the same issue (this template ipi-on-aws/versioned-installer-private_cluster-sts-usgov-ci and with ovn network): https://drive.google.com/file/d/1tnKbeqJ18SCAlJkS80Rji3qMu3nvN_O8/view?usp=sharing Seems this template ipi-on-aws/versioned-installer-customer_vpc-disconnected_private_cluster-techpreview-ci and with ovn network can often hit this issue.
cloud-controller-manager does not react to changes to infrastructure secrets (in the OpenStack case: clouds.yaml).
As a consequence, if credentials are rotated (and the old ones are rendered useless), load balancer creation and deletion will not succeed any more. Restarting the controller fixes the issue on a live cluster.
Logs show that it couldn't find the application credentials:
Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to get subnet to create load balancer for service e2e-test-openstack-q9jnk/udp-lb-default-svc: Unable to re-authenticate: Expected HTTP response code [200 204 300] when accessing [GET https://compute.rdo.mtl2.vexxhost.net/v2.1/0693e2bb538c42b79a49fe6d2e61b0fc/servers/fbeb21b8-05f0-4734-914e-926b6a6225f1/os-interface], but got 401 instead {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}: Resource not found: [POST https://identity.rdo.mtl2.vexxhost.net/v3/auth/tokens], error message: {"error":{"code":404,"message":"Could not find Application Credential: 1b78233956b34c6cbe5e1c95445972a4.","title":"Not Found"}}
OpenStack CI has been instrumented to restart CCM after credentials rotation, so that we silence this particular issue and avoid masking any other. That workaround must be reverted once this bug is fixed.
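One conceivable shape for the fix, assuming an fsnotify watch on the mounted clouds.yaml and a reload callback (both names illustrative):
~~~
package ccm

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchCloudsYAML reinitializes the OpenStack client whenever the mounted
// clouds.yaml changes, removing the need for a manual CCM restart after
// credential rotation.
func watchCloudsYAML(path string, reload func() error) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()
	if err := watcher.Add(path); err != nil {
		return err
	}
	for {
		select {
		case ev := <-watcher.Events:
			// Secret volumes update via symlink swaps, so treat any event
			// on the path as a potential credential rotation.
			log.Printf("clouds.yaml event: %s", ev)
			if err := reload(); err != nil {
				log.Printf("reload failed: %v", err)
			}
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
~~~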
4.12 requires backport of commit:
commit 0111e1faec20d16505a110449966273b430b7ad1
Author: Surya Seetharaman <suryaseetharaman.9@gmail.com>
Date: Tue Sep 6 21:20:57 2022 +0200
Support AllocateLoadBalancerNodePortsFalse
This PR supports having allocateloadbalancernodeports
set to false along with etp=local on lgw mode.
Signed-off-by: Surya Seetharaman <suryaseetharaman.9@gmail.com>
Missing BP of LoadBalancerServiceHasNodePortAllocation into 4.12 causes problems with flow creation for these services, even in shared gateway mode
This issue affects services with `allocateLoadBalancerNodePorts: false` in OCP 4.12.
Any deletion of services with `allocateLoadBalancerNodePorts: false` will fail and go into a 15 minute long retry loop. When one recreates a service while a failed deletion is still in progress, the flows on br-ex are not recreated.
Deletion will fail with:
(...) obj_retry.go:257] Retry object setup: *factory.serviceForGateway <ns>/<service> obj_retry.go:290] Removing old object: *factory.serviceForGateway <ns>/<service> (failed: %!s(uint8=<retry>)) (...) obj_retry.go: 298] Retry delete failed for *factory.serviceForGateway <ns><service>, will try again later: error removing port claim for service: <ns>/<service>: invalid service port <service>, err: invalid port number: 0
And while a deletion is still ongoing, add will fail with:
obj_retry.go: 476] Failed to delete old object <ns>/<service> of type *factory.serviceForGateway, during add event: error removing port claim for service: <ns>/<service>: invalid service port <service>, err: invalid port number: 0
ovnkube-node will retry 15 times with a 1 minute backoff before it gives up, and while this fails, the object cannot be recreated.
That also means that there are currently 2 workarounds for this (tested):
The problem can easily be reproduced in 4.12, I tested this on 4.12.17 with SNO:
$ cat fedora-test.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fedora-service
  labels:
    app: fedora-deployment
spec:
  selector:
    app: fedora-pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  sessionAffinity: None
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fedora-deployment
  labels:
    app: fedora-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fedora-pod
  template:
    metadata:
      labels:
        app: fedora-pod
    spec:
      containers:
        - name: fedora-a
          image: registry.fedoraproject.org/fedora:latest
          imagePullPolicy: Always
          command:
            - sleep
            - infinity
        - name: fedora-b
          image: registry.fedoraproject.org/fedora:latest
          imagePullPolicy: Always
          command:
            - sleep
            - infinity
oc apply -f fedora-test.yaml
oc delete svc fedora-service
oc apply -f fedora-test.yaml
Logs:
oc logs -n openshift-ovn-kubernetes ovnkube-node-4xg6w -c ovnkube-node -f | grep fedora-service
(...)
I0714 01:59:30.867309 9291 obj_retry.go:491] Creating *factory.serviceForGateway default/fedora-service took: 70.803µs
I0714 01:59:30.875170 9291 obj_retry.go:491] Creating *factory.endpointSliceForGateway default/fedora-service-5bmf8 took: 15.941µs
I0714 01:59:30.875210 9291 obj_retry.go:491] Creating *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-5bmf8 took: 169ns
E0714 01:59:52.496754 9291 obj_retry.go:673] Failed to delete *factory.serviceForGateway default/fedora-service, error: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:02.969493 9291 obj_retry.go:471] Detected stale object during new object add of type *factory.serviceForGateway with the same key: default/fedora-service
W0714 02:00:02.969523 9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
I0714 02:00:02.971917 9291 obj_retry.go:491] Creating *factory.endpointSliceForGateway default/fedora-service-74vf8 took: 62.416µs
I0714 02:00:02.971926 9291 obj_retry.go:491] Creating *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-74vf8 took: 255ns
E0714 02:00:03.086557 9291 obj_retry.go:476] Failed to delete old object default/fedora-service of type *factory.serviceForGateway, during add event: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:13.982590 9291 obj_retry.go:257] Retry object setup: *factory.serviceForGateway default/fedora-service
I0714 02:00:13.982621 9291 obj_retry.go:290] Removing old object: *factory.serviceForGateway default/fedora-service (failed: %!s(uint8=1))
I0714 02:00:14.104772 9291 obj_retry.go:298] Retry delete failed for *factory.serviceForGateway default/fedora-service, will try again later: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:22.397338 9291 obj_retry.go:571] Found retry entry for *factory.serviceForGateway default/fedora-service marked for deletion: will delete the object
W0714 02:00:22.397400 9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
E0714 02:00:22.601603 9291 obj_retry.go:575] Failed to delete stale object default/fedora-service, during update: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
I0714 02:00:43.980921 9291 obj_retry.go:257] Retry object setup: *factory.serviceForGateway default/fedora-service
I0714 02:00:43.980948 9291 obj_retry.go:290] Removing old object: *factory.serviceForGateway default/fedora-service (failed: %!s(uint8=1))
W0714 02:00:43.980976 9291 gateway_shared_intf.go:656] Delete service: no service found in cache for endpoint fedora-service in namespace default
I0714 02:00:44.199215 9291 obj_retry.go:298] Retry delete failed for *factory.serviceForGateway default/fedora-service, will try again later: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
And the following watch shows that the flows are created initially, then upon deletion the flows vanish, then as the service is recreated the flows do not reappear:
watch "ovs-ofctl dump-flows br-ex | grep 192.168.18.100"
I can delete the ovnkube-node pod to recreate the flows:
oc delete pod -n openshift-ovn-kubernetes ovnkube-node-4xg6w
And the flows reappear:
[root@sno ~]# ovs-ofctl dump-flows br-ex | grep 192.168.18.100
cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,arp,in_port=1,arp_tpa=192.168.18.100,arp_op=1 actions=LOCAL
cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,tcp,in_port=1,nw_dst=192.168.18.100,tp_dst=80 actions=output:2
cookie=0x849b956ca97beaee, duration=27.925s, table=0, n_packets=0, n_bytes=0, idle_age=27, priority=110,tcp,in_port=2,nw_src=192.168.18.100,tp_src=80 actions=output:1
--------------------------------------
The problem does not manifest in 4.13. The difference between 4.12 and 4.13 is a missing backport of 0111e1faec20d16505a110449966273b430b7ad1.
Log for service deletion in OCP 4.13:
I0718 13:27:35.699982 334002 obj_retry.go:656] Delete event received for *factory.serviceForGateway default/fedora-service
I0718 13:27:35.700010 334002 gateway_shared_intf.go:679] Deleting service fedora-service in namespace default
I0718 13:27:35.769565 334002 obj_retry.go:656] Delete event received for *factory.endpointSliceForGateway default/fedora-service-6hhds
I0718 13:27:35.769596 334002 gateway_shared_intf.go:856] Deleting endpointslice fedora-service-6hhds in namespace default
I0718 13:27:35.769610 334002 gateway_shared_intf.go:431] No serviceConfig found for service fedora-service in namespace default
I0718 13:27:35.769618 334002 obj_retry.go:656] Delete event received for *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-6hhds
Log for service deletion in OCP 4.12:
I0718 13:28:14.253695 52007 obj_retry.go:653] Delete event received for *factory.serviceForGateway default/fedora-service
I0718 13:28:14.253717 52007 port_claim.go:197] Handle NodePort service fedora-service port 0
I0718 13:28:14.253726 52007 gateway_shared_intf.go:649] Deleting service fedora-service in namespace default
I0718 13:28:14.288844 52007 obj_retry.go:653] Delete event received for *factory.endpointSliceForGateway default/fedora-service-2m857
I0718 13:28:14.288870 52007 gateway_shared_intf.go:817] Deleting endpointslice fedora-service-2m857 in namespace default
I0718 13:28:14.288876 52007 gateway_shared_intf.go:407] No serviceConfig found for service fedora-service in namespace default
I0718 13:28:14.288881 52007 obj_retry.go:653] Delete event received for *factory.endpointSliceForStaleConntrackRemoval default/fedora-service-2m857
E0718 13:28:14.402407 52007 obj_retry.go:673] Failed to delete *factory.serviceForGateway default/fedora-service, error: error removing port claim for service: default/fedora-service: invalid service port fedora-service, err: invalid port number: 0
Both 4.12 and 4.13 have similar code, and `handleService` looks the same as well:
func handleService(svc *kapi.Service, handler handler) []error {
	errors := []error{}
	if !util.ServiceTypeHasNodePort(svc) && len(svc.Spec.ExternalIPs) == 0 {
		return errors
	}

	for _, svcPort := range svc.Spec.Ports {
		if util.ServiceTypeHasNodePort(svc) {
			klog.V(5).Infof("Handle NodePort service %s port %d", svc.Name, svcPort.NodePort)
But ServiceTypeHasNodePort in 4.13 correctly differentiates between allocateLoadBalancerNodePorts whereas 4.12 does not:
go-controller/pkg/util/kube.go
func LoadBalancerServiceHasNodePortAllocation(service *kapi.Service) bool {
	return service.Spec.AllocateLoadBalancerNodePorts == nil || *service.Spec.AllocateLoadBalancerNodePorts
}

// ServiceTypeHasNodePort checks if the service has an associated NodePort or not
func ServiceTypeHasNodePort(service *kapi.Service) bool {
	return service.Spec.Type == kapi.ServiceTypeNodePort ||
		(service.Spec.Type == kapi.ServiceTypeLoadBalancer && LoadBalancerServiceHasNodePortAllocation(service))
}
In OCP 4.12:
// ServiceTypeHasNodePort checks if the service has an associated NodePort or not
func ServiceTypeHasNodePort(service *kapi.Service) bool {
	return service.Spec.Type == kapi.ServiceTypeNodePort || service.Spec.Type == kapi.ServiceTypeLoadBalancer
}
Description of problem:
The setting of systemReserved: ephemeral-storage in KubeletConfig is not working as expected.
Version-Release number of selected component (if applicable):
4.10.z, may exist on other OCP versions as well.
How reproducible:
always
Steps to Reproduce:
1. Create a KubeletConfig on the node:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: system-reserved-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 10Gi
2. Check node allocatable storage with the command: oc describe node | grep -C 5 ephemeral-storage
Actual results:
The Allocatable:ephemeral-storage on the node is not capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default)
Expected results:
The Allocatable:ephemeral-storage on the node should be capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction-thresholds (10% of the capacity.ephemeral-storage by default)
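As an illustration of that formula (capacity figure assumed): on a node with 100Gi of ephemeral-storage capacity and the KubeletConfig above, allocatable should come out to 100Gi - 10Gi (systemReserved) - 10Gi (the default 10% eviction threshold) = 80Gi.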
Additional info:
The root cause might be that the process argument '--system-reserved=cpu=500m,memory=500Mi' (note: no ephemeral-storage entry) overrides the setting in /etc/kubernetes/kubelet.conf. One example:

root 6824 1 27 Sep30 ? 1-09:00:24 kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=192.168.58.47 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --hostname-override= --register-with-taints=node-role.kubernetes.io/master=:NoSchedule --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a7b6408460148cb73c59677dbc2c261076bc07226c43b0c9192cc70aef5ba62 --system-reserved=cpu=500m,memory=500Mi --v=2 --housekeeping-interval=30s
Description of problem:
TestUnmanagedDNSToManagedDNSInternalIngressController E2E test is failing on the error: { unmanaged_dns_test.go:272: failed to verify connectivity with workload with reqURL http://10.0.128.7 using external client: timed out waiting for the condition
Version-Release number of selected component (if applicable):
4.12
How reproducible:
75%
Steps to Reproduce:
1. Run CI E2E tests on cluster-ingress-operator, or run `make test-e2e TEST=TestUnmanagedDNSToManagedDNSInternalIngressController`
Actual results:
E2E test fails about 75% of the time
Expected results:
E2E should always pass
Additional info:
Since 4.11, OCP comes with an OperatorHub definition which declares a capability
and enables all catalog sources. For OKD we want to enable just community-operators,
as users may not have a Red Hat pull secret set.
This commit ensures that the OKD version of the marketplace operator gets
its own OperatorHub manifest with a custom set of operator catalogs enabled.
Description of problem:
When the user selects Serverless as an import strategy and tries to import a Devfile, the import fails because of an invalid Deployment.
Could reproduce this already in 4.11, but it's even more prominent in 4.12, where the console automatically selects the resource type Serverless when the Serverless operator is installed.
Version-Release number of selected component (if applicable):
Works on 4.10
Failed on 4.11 and 4.12 master
How reproducible:
Always
Steps to Reproduce:
1. Install and setup Serverless operator
2. Switch to dev perspective, navigate to add > import from git
3. Enter a non-Devfile git URL like https://github.com/jerolimov/nodeinfo
4. On 4.11 select resource type Serverless (on 4.12 this should be selected automatically)
5. Update the git URL to a repo with a Devfile like https://github.com/nodeshift-starters/devfile-sample
6. Press create
Actual results:
Import fails with error:
Error "Invalid value: "": name part must be non-empty" for field "spec.template.labels".
Expected results:
Devfile should be imported
Additional info:
Description of problem:
Currently, when installing OpenShift on OpenStack, the cluster name length is limited to 14 characters. The customer wants to know whether it is possible to override this validation when installing OpenShift on OpenStack and create a cluster name longer than 14 characters.

Version: OCP 4.8.5 UPI Disconnected
Environment: OpenStack 16

Issue: User reports that they are getting an error for an OCP cluster on OpenStack UPI where the name of the cluster is > 14 characters.

Error events:
~~~
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/openshift-install", "create", "manifests", "--dir=/home/gitlab-runner/builds/WK8mkokN/0/CPE/SKS/pipelines/non-prod/ocp4-openstack-build/ocpinstaller/install-upi"], "delta": "0:00:00.311397", "end": "2022-09-03 21:38:41.974608", "msg": "non-zero return code", "rc": 1, "start": "2022-09-03 21:38:41.663211", "stderr": "level=fatal msg=failed to fetch Master Machines: failed to load asset \"Install Config\": invalid \"install-config.yaml\" file: metadata.name: Invalid value: \"sks-osp-inf-cpe-1-cbr1a\": cluster name is too long, please restrict it to 14 characters", "stderr_lines": ["level=fatal msg=failed to fetch Master Machines: failed to load asset \"Install Config\": invalid \"install-config.yaml\" file: metadata.name: Invalid value: \"sks-osp-inf-cpe-1-cbr1a\": cluster name is too long, please restrict it to 14 characters"], "stdout": "", "stdout_lines": []}
~~~
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
Actual results:
Users are getting the error "cluster name is too long" when the cluster name contains more than 14 characters for OCP on OpenStack
Expected results:
The 14-character limit should be changed (or made overridable) for the OCP cluster name on OpenStack
Additional info:
Not all of the errors reported by the assisted API (and shown in the wait-for bootstrap complete output) actually require user action.
Some appear when the agents first register but resolve themselves relatively quickly in the natural course of events.
Some, like the availability of NTP, don't block the installation from proceeding at all.
We need to think about the best ways of exposing this information to the user.
Currently the controller sets the status to Done each time it sees a host that is ready in Kubernetes, without checking whether the status was already set.
time="2022-09-13T19:03:45Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=6258e5a2-4e78-4148-a913-45d704a0fa1d
time="2022-09-13T19:04:05Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=49e4e63f-cf4f-4b9f-b1f3-923c473c09dd
The test results in sippy look really bad on our less common platforms, but still pretty unacceptable even on core clouds. It's reasonably often the only test that fails. We need to decide what to do here, and we're going to need input from the etcd team.
As of Sep 13th:
Even on some major variant combos, a 4-8% failure rate is too high.
On the Sep 13 arch call (no etcd present), Damien mentioned this might be an upstream alert that just isn't well suited for OpenShift's use cases; is this the case, and does it need tuning?
Has the problem been getting worse?
I believe this link https://datastudio.google.com/s/urkKwmmzvgo indicates that this may be the case for 4.12: AWS and Azure are both getting worse in ways that I don't see if we change the release to 4.11, where it looks consistent. GCP seems fine on 4.12. We do not have data for vSphere for some reason.
This link shows the grpc_methods most commonly involved: https://search.ci.openshift.org/?search=etcdGRPCRequestsSlow+was+at+or+above&maxAge=48h&context=7&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
At a glance: LeaseGrant, MemberList, Txn, Status, Range.
Broken out of TRT-401
For linking with sippy:
[bz-etcd][invariant] alert/etcdGRPCRequestsSlow should not be at or above info
[sig-arch][bz-etcd][Late] Alerts alert/etcdGRPCRequestsSlow should not be at or above info [Suite:openshift/conformance/parallel]
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Description of problem:
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example.
Actual results:
topology will sometimes hang with the loading indicator showing indefinitely
Expected results:
topology should load consistently without fail
How reproducible:
intermittent
Version-Release number of selected component (if applicable):
4.9
This is a clone of issue OCPBUGS-3164. The following is the description of the original issue:
—
During the first bootstrap boot we need crio and kubelet on the disk, so we start the release-image-pivot systemd task. However, it's not blocking bootkube, so these two run in parallel.
release-image-pivot restarts the node to apply the new OS image, which may leave bootkube in an inconsistent state. This task should run before bootkube.
Description of problem:
With every pod update we execute a mutate operation to add the pod port to the port group or add the pod IP to an address set. Functionally this doesn't hurt, since mutate will not add duplicate values to the same set. However, it is bad for performance: for example, with 730 network policies affecting a pod, issuing 7 pod updates would result in over 5k transactions (730 × 7 = 5,110).
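As an illustration of the kind of client-side bookkeeping that would avoid the redundant transactions (names are hypothetical, not the ovn-kubernetes implementation):

```go
package ovn

// Sketch only: cache port-group membership so a pod update only issues an
// OVSDB mutate when it actually changes membership, instead of one no-op
// transaction per policy per update.
type portGroupCache struct {
	members map[string]map[string]struct{} // port group -> port UUIDs
}

func newPortGroupCache() *portGroupCache {
	return &portGroupCache{members: map[string]map[string]struct{}{}}
}

// needsAdd reports whether the port still has to be added to the group, and
// records it either way so the next pod update is recognized as a no-op.
func (c *portGroupCache) needsAdd(group, portUUID string) bool {
	ports, ok := c.members[group]
	if !ok {
		ports = map[string]struct{}{}
		c.members[group] = ports
	}
	if _, present := ports[portUUID]; present {
		return false
	}
	ports[portUUID] = struct{}{}
	return true
}
```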
This is a clone of issue OCPBUGS-5542. The following is the description of the original issue:
—
Description of problem:
The project list orders projects by name and is smart enough to keep a "numerical order" like:
The more prominent project dropdown is not so smart and shows just a simple "ASCII-ordered" list:
Version-Release number of selected component (if applicable):
4.8-4.13 (master)
How reproducible:
Always
Steps to Reproduce:
1. Create some new projects called test-1, test-11, test-2
2. Check the project list page (in admin perspective)
3. Check the project dropdown (in dev perspective)
Actual results:
Order is: test-1, test-11, test-2 (plain ASCII ordering)
Expected results:
Order should be: test-1, test-2, test-11 (numerical ordering)
Additional info:
none
The 4.12 builds fail all the time. The last successful build was from May 31.
Error:
# Root Suite.Entire pipeline flow from Builder page "before all" hook for "Background Steps" AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it.
Full error:
Running: e2e/pipeline-ci.feature (1 of 1)
Couldn't determine Mocha version
Logging in as kubeadmin
Installing operator: "Red Hat OpenShift Pipelines"
Operator Red Hat OpenShift Pipelines was not yet installed.
Performing Pipelines post-installation steps
Verify the CRD's for the "Red Hat OpenShift Pipelines"
  1) "before all" hook for "Background Steps"
Deleting "" namespace

  0 passing (3m)
  1 failing

  1) Entire pipeline flow from Builder page
       "before all" hook for "Background Steps":
     AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it.

Because this error occurred during a `before all` hook we are skipping all of the remaining tests.
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.waitForCRDs (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17156:77)
      at performPostInstallationSteps (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17242:21)
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17268:5)
      at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallPipelinesOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17272:13)
      at Context.eval (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:20848:13)

[mochawesome] Report JSON saved to /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress_report_pipelines.json

(Results)
  Tests:        13
  Passing:      0
  Failing:      1
  Pending:      0
  Skipped:      12
  Screenshots:  1
  Video:        true
  Duration:     2 minutes, 58 seconds
  Spec Ran:     e2e/pipeline-ci.feature

(Screenshots)
  - /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress/screenshots/e2e/pipeline-ci.feature/Background Steps -- before all hook (failed).png (1280x720)

(Video)
  - Started processing: Compressing to 32 CRF
  - Finished processing: /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress/videos/e2e/pipeline-ci.feature.mp4 (16 seconds)
  Compression progress: 100%

(Run Finished)
  Spec                       Tests  Passing  Failing  Pending  Skipped
  ✖ e2e/pipeline-ci.feature  02:58  13       -        1        -        12
  ✖ 1 of 1 failed (100%)     02:58  13       -        1        -        12
See also
This is a clone of issue OCPBUGS-2144. The following is the description of the original issue:
—
Description of problem:
Azure IPI creates boot images using the image gallery API now, it will create two image definition resources for both hyperVGeneration V1 and V2. For arm64 cluster, the architecture in image definition hyperVGeneration V1 is x64, but it should be Arm64
Version-Release number of selected component (if applicable):
$ ./openshift-install version
./openshift-install 4.12.0-0.nightly-arm64-2022-10-07-204251
built from commit 7b739cde1e0239c77fabf7622e15025d32fc272c
release image registry.ci.openshift.org/ocp-arm64/release-arm64@sha256:d2569be4ba276d6474aea016536afbad1ce2e827b3c71ab47010617a537a8b11
release architecture arm64
How reproducible:
always
Steps to Reproduce:
1. Create an arm cluster using the latest arm64 nightly build
2. Check the image definition created for hyperVGeneration V1
Actual results:
The architecture field is x64:

$ az sig image-definition show --gallery-name ${gallery_name} --gallery-image-definition lwanazarm1008-rc8wh --resource-group ${rg} | jq -r ".architecture"
x64

The image version under this image definition is for aarch64:

$ az sig image-version show --gallery-name gallery_lwanazarm1008_rc8wh --gallery-image-definition lwanazarm1008-rc8wh --resource-group lwanazarm1008-rc8wh-rg --gallery-image-version 412.86.20220922 | jq -r ".storageProfile.osDiskImage.source"
{ "uri": "https://clustermuygq.blob.core.windows.net/vhd/rhcosmuygq.vhd"}

$ az storage blob show --container-name vhd --name rhcosmuygq.vhd --account-name clustermuygq --account-key $account_key | jq -r ".metadata"
{ "Source_uri": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-412.86.202209220538-0-azure.aarch64.vhd"}
Expected results:
Although no VMs with hyperVGeneration V1 can be provisioned, the architecture field should be Arm64 even for hyperVGeneration V1 image definitions
Additional info:
1. The architecture in the image definition for hyperVGeneration V2 is Arm64, and the installer uses V2 by default for arm64 vm_types, so installation doesn't fail by default. But we still need to make the architecture consistent in V1.
2. The architecture field needs to be set for both V1 and V2; currently we only set it in the V2 image definition resource: https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L100-L128
Description of problem:
TestEditUnmanagedPodDisruptionBudget flakes in the console-operator e2e
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Flake
Steps to Reproduce:
1. Check https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_console-operator/665/pull-ci-openshift-console-operator-master-e2e-aws-operator/1562005782164148224
2.
3.
Actual results:
Expected results:
Additional info:
There is a chance that the PDB instance is not present, since prior to the Unmanaged* test cases the RemoveTest runs, which removes all the console resources (Pods, Services, PDBs, ...).
Description of problem:
In looking at jobs on an accepted payload at https://amd64.ocp.releases.ci.openshift.org/releasestream/4.12.0-0.ci/release/4.12.0-0.ci-2022-08-30-122201, I observed this job https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial/1564589538850902016 failing with "Undiagnosed panic detected in pod":

pods/openshift-controller-manager-operator_openshift-controller-manager-operator-74bf985788-8v9qb_openshift-controller-manager-operator.log.gz:E0830 12:41:48.029165 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Version-Release number of selected component (if applicable):
4.12
How reproducible:
probably relatively easy to reproduce (but not consistently) given it's happened several times according to this search: https://search.ci.openshift.org/?search=Observed+a+panic%3A+%22invalid+memory+address+or+nil+pointer+dereference%22&maxAge=48h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Steps to Reproduce:
1. Let nightly payloads run, or run one of the presubmit jobs mentioned in the search above
Actual results:
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Expected results:
no panics
Additional info:
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. The error happens during the Terraform prepare step, while gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the vCenter administrator account. I double-checked and that account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0; vSphere & vCenter 7.0.3 - no pending updates
How reproducible:
always - we observed this already in the nightlies, but wanted to wait for a RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CI/CD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304. This includes the install-config.yaml, all component versions, and the full debug log output.
These two tests (each listed below under both its old and new name) are permafailing on webhook errors related to the CRD:
[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should show the Provisioning Network as 'Disabled' [Suite:openshift/conformance/serial]
[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should [apigroup:config.openshift.io] show the Provisioning Network as 'Disabled' [Suite:openshift/conformance/serial]
[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should allow setting the ProvisioningNetwork to 'Managed' with valid settings [Suite:openshift/conformance/serial]
[sig-installer][Feature:baremetal][Serial] A baremetal deployment without a provisioning network should [apigroup:config.openshift.io] allow setting the ProvisioningNetwork to 'Managed' with valid settings [Suite:openshift/conformance/serial]
job=periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-virtualmedia=all
Sippy links:
This is a clone of issue OCPBUGS-7015. The following is the description of the original issue:
—
Description of problem:
Failed to create a vSphere 4.12.2 IPI cluster, as apiVIP and ingressVIP are not in the machine networks:

# ./openshift-install create cluster --dir=/tmp
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform vsphere
? vCenter vcenter.vmware.gsslab.pnq2.redhat.com
? Username administrator@gsslab.pnq
? Password [? for help] ************
INFO Connecting to vCenter vcenter.vmware.gsslab.pnq2.redhat.com
INFO Defaulting to only available datacenter: OpenShift-DC
INFO Defaulting to only available cluster: OCP
? Default Datastore OCP-PNQ-Datastore
? Network PNQ2-25G-PUBLIC-PG
? Virtual IP Address for API [? for help] 192.168.1.10
X Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16
? Virtual IP Address for API [? for help]

As the user cannot define a CIDR for machineNetwork when creating the cluster or install-config file interactively, the default value 10.0.0.0/16 is used, so creating the cluster or install-config fails when the apiVIP and ingressVIP entered lie outside the default machineNetwork.

The error is thrown from https://github.com/openshift/installer/blob/master/pkg/types/validation/installconfig.go#L655-L666; this validation seems to have been introduced by PR https://github.com/openshift/installer/pull/5798. The issue should also impact the Nutanix platform.

I don't understand why the installer is expecting/validating VIPs against the 10.0.0.0/16 machine network by default when it's not even asking for the machine networks during the survey. This validation was not mandatory in previous OCP installers.
Version-Release number of selected component (if applicable):
# ./openshift-install version
./openshift-install 4.12.2
built from commit 7fea1c4fc00312fdf91df361b4ec1a1a12288a97
release image quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1
release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. Create the install-config.yaml file by running the command "./openshift-install create install-config --dir ipi"
2. It fails with the above error
Actual results:
fail to create install-config.yaml file
Expected results:
succeed to create install-config.yaml file
Additional info:
The current workaround is to use dummy VIPs from the 10.0.0.0/16 machine network to create the install-config first, and then modify the machineNetwork and VIPs as per your requirements. This is overhead and creates a negative experience. There was already a bug reported which seems to have fixed only the VIP validation: https://issues.redhat.com/browse/OCPBUGS-881
Description of problem:
The current version of OpenShift's coredns is based on Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
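For illustration, the fix amounts to bumping the Kubernetes module versions in go.mod along these lines (an assumed excerpt, not the actual diff):

```
require (
	k8s.io/api v0.25.0
	k8s.io/apimachinery v0.25.0
	k8s.io/client-go v0.25.0
)
```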
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
When we get telemetry from connected clusters, we want to be able to tell when they were created with the agent installer vs. the hosted assisted service. Currently there is no way to distinguish them.
It's not clear whether any particular group owns the namespace of installation methods, or whom we need to notify when we create one.
In order to have more info to debug router issues in SNO, we want to see whether the router is healthy from the node network point of view, and to enable router access logs.
Let's revert once the root cause of https://bugzilla.redhat.com/show_bug.cgi?id=2097041 is found.
Description of problem:
Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, and a 200 hard max. OCP has about 40 rules OOTB. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota only ~5 LBs can be created ((60 - 40) / 4 = 5), and with the hard max of 200 only ~40 LBs ((200 - 40) / 4 = 40).
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Create a Service of type LoadBalancer and observe the increase in the master-sg and worker-sg rule sets
Actual results:
4 rules are created
Expected results:
1 rule is created when the client rule is a superset of the per-subnet health rules
Additional info:
This roughly quadruples the number of Services of type LoadBalancer that can be created. This is required for Hypershift.
This is a clone of issue OCPBUGS-10647. The following is the description of the original issue:
—
Description of problem:
Cluster Network Operator managed component multus-admission-controller does not conform to Hypershift control plane expectations.

When CNO is managed by Hypershift, multus-admission-controller must run with a non-root security context. If Hypershift runs the control plane on a Kubernetes (as opposed to OpenShift) management cluster, it adds a pod or container security context with a runAsUser clause to most deployments. In Hypershift CPO, the security context of deployment containers, including CNO, is set when it detects that SCCs are not available; see https://github.com/openshift/hypershift/blob/9d04882e2e6896d5f9e04551331ecd2129355ecd/support/config/deployment.go#L96-L100. In such a case CNO should do the same: set the security context for its managed deployment multus-admission-controller to meet the Hypershift standard.
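A minimal sketch of what that could look like in CNO (the uid value and the caller-side SCC-availability detection are assumptions, following the Hypershift CPO pattern linked above):

```go
package network

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// Sketch only: when SCCs are not available (plain Kubernetes management
// cluster), force a non-root security context on every container of the
// multus-admission-controller deployment.
func setNonRootSecurityContext(d *appsv1.Deployment, uid int64) {
	for i := range d.Spec.Template.Spec.Containers {
		c := &d.Spec.Template.Spec.Containers[i]
		if c.SecurityContext == nil {
			c.SecurityContext = &corev1.SecurityContext{}
		}
		if c.SecurityContext.RunAsUser == nil {
			c.SecurityContext.RunAsUser = &uid
		}
	}
}
```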
How reproducible:
Always
Steps to Reproduce:
1. Create an OCP cluster using Hypershift with a Kube management cluster
2. Check the pod security context of multus-admission-controller
Actual results:
no pod security context is set
Expected results:
pod security context is set with runAsUser: xxxx
Additional info:
This is the highest-priority item from https://issues.redhat.com/browse/OCPBUGS-7942 and it needs to be fixed ASAP, as it is a security issue preventing IBM from releasing the Hypershift-managed OpenShift service.
This is a clone of issue OCPBUGS-14336. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-1829. The following is the description of the original issue:
—
Description of problem:
The link to the OpenShift Route from the Service is breaking because of a hardcoded value of targetPort. If the targetPort gets changed, the route still points to the old value of the port, as it's hardcoded.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install the latest available version of Openshift Pipelines
2. Create the pipeline and triggerbinding using the attached files
3. Add a trigger to the created pipeline from the devconsole UI, selecting the above created triggerbinding while adding the trigger
4. Trigger an event using the curl command curl -X POST -d '{ "url": "https://www.github.com/VeereshAradhya/cli" }' -H 'Content-Type: application/json' <route> and make sure that the pipelinerun gets started
5. Update the targetPort in the svc from 8080 to 8000
6. Again use the above curl command to trigger one more event
Actual results:
The curl command throws error
Expected results:
The curl command should be successful and the pipelinerun should get started successfully
Additional info:
Error:

curl -X POST -d '{ "url": "https://www.github.com/VeereshAradhya/cli" }' -H 'Content-Type: application/json' http://el-event-listener-3o9zcv-test-devconsole.apps.ve412psi.psi.ospqa.com

<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <style type="text/css">
      body { font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; line-height: 1.66666667; font-size: 16px; color: #333; background-color: #fff; margin: 2em 1em; }
      h1 { font-size: 28px; font-weight: 400; }
      p { margin: 0 0 10px; }
      .alert.alert-info { background-color: #F0F0F0; margin-top: 30px; padding: 30px; }
      .alert p { padding-left: 35px; }
      ul { padding-left: 51px; position: relative; }
      li { font-size: 14px; margin-bottom: 1em; }
      p.info { position: relative; font-size: 20px; }
      p.info:before, p.info:after { content: ""; left: 0; position: absolute; top: 0; }
      p.info:before { background: #0066CC; border-radius: 16px; color: #fff; content: "i"; font: bold 16px/24px serif; height: 24px; left: 0px; text-align: center; top: 4px; width: 24px; }
      @media (min-width: 768px) { body { margin: 6em; } }
    </style>
  </head>
  <body>
    <div>
      <h1>Application is not available</h1>
      <p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>
      <div class="alert alert-info">
        <p class="info">Possible reasons you are seeing this page:</p>
        <ul>
          <li><strong>The host doesn't exist.</strong> Make sure the hostname was typed correctly and that a route matching this hostname exists.</li>
          <li><strong>The host exists, but doesn't have a matching path.</strong> Check if the URL path was typed correctly and that the route was created using the desired path.</li>
          <li><strong>Route and path matches, but all pods are down.</strong> Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.</li>
        </ul>
      </div>
    </div>
  </body>
</html>
Note:
The above scenario works fine if we create triggers using the yaml files instead of using devconsole UI
This is a clone of issue OCPBUGS-14426. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-14149. The following is the description of the original issue:
—
Description of problem:
Cannot list Kepler CSV
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Install Kepler Community Operator
2. Create Kepler Instance
3. Console gets error and shows "Oops, something went wrong"
Actual results:
Console gets error and shows "Oops, something went wrong"
Expected results:
Should list Kepler Instance
Additional info:
Description of problem:
"Failed to open directory, disabling udev device properties" in node-exporter logs
$ for i in $(oc -n openshift-monitoring get pod | grep node-exporter | awk '{print $1}'); do echo $i; oc -n openshift-monitoring logs -c node-exporter $i | grep "Failed to open directory, disabling udev device properties"; echo -e "\n"; done
node-exporter-4279b
ts=2022-10-17T01:16:05.833Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-9tq64
ts=2022-10-17T01:16:04.642Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-dwtwh
ts=2022-10-17T01:16:04.936Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-nrznc
ts=2022-10-17T01:16:05.601Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-q87s4
ts=2022-10-17T01:16:05.228Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data

node-exporter-twtxj
ts=2022-10-17T01:16:05.249Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
debug on node, /run/udev/data is readable
# oc debug node/ip-10-0-138-107.us-east-2.compute.internal
Temporary namespace openshift-debug-dhvqv is created for debugging node...
Starting pod/ip-10-0-138-107us-east-2computeinternal-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.0.138.107
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-4.4# ls -l /run/udev/
total 0
srw-------. 1 root root 0 Oct 17 01:04 control
drwxr-xr-x. 2 root root 3780 Oct 17 01:26 data
drwxr-xr-x. 40 root root 800 Oct 17 01:04 links
drwxr-xr-x. 3 root root 60 Oct 17 01:04 static_node-tags
drwxr-xr-x. 5 root root 100 Oct 17 01:04 tags
drwxr-xr-x. 2 root root 140 Oct 17 01:04 watch
sh-4.4# ls -l /run/udev/data
total 304
-rw-r--r--. 1 root root 55 Oct 17 01:04 +acpi:AMZN0000:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:01
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:02
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXCPU:03
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXPWRBN:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSLPBN:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYBUS:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYBUS:01
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:LNXSYSTM:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0103:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0303:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0400:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0501:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0A03:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0B00:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:00
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:01
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:02
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:03
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0C0F:04
-rw-r--r--. 1 root root 57 Oct 17 01:04 +acpi:PNP0F13:00
-rw-r--r--. 1 root root 142 Oct 17 01:04 +input:input0
-rw-r--r--. 1 root root 142 Oct 17 01:04 +input:input1
-rw-r--r--. 1 root root 218 Oct 17 01:04 +input:input2
-rw-r--r--. 1 root root 198 Oct 17 01:04 +input:input4
-rw-r--r--. 1 root root 143 Oct 17 01:04 +input:input5
-rw-r--r--. 1 root root 60 Oct 17 01:04 +module:configfs
-rw-r--r--. 1 root root 66 Oct 17 01:04 +module:fuse
-rw-r--r--. 1 root root 188 Oct 17 01:04 +pci:0000:00:00.0
-rw-r--r--. 1 root root 195 Oct 17 01:04 +pci:0000:00:01.0
-rw-r--r--. 1 root root 213 Oct 17 01:04 +pci:0000:00:01.3
-rw-r--r--. 1 root root 207 Oct 17 01:04 +pci:0000:00:03.0
-rw-r--r--. 1 root root 259 Oct 17 01:04 +pci:0000:00:04.0
-rw-r--r--. 1 root root 208 Oct 17 01:04 +pci:0000:00:05.0
-rw-r--r--. 1 root root 55 Oct 17 01:04 +platform:AMZN0000:00
-rw-r--r--. 1 root root 825 Oct 17 01:04 b259:0
-rw-r--r--. 1 root root 1357 Oct 17 01:04 b259:1
-rw-r--r--. 1 root root 1568 Oct 17 01:04 b259:2
-rw-r--r--. 1 root root 1619 Oct 17 01:04 b259:3
-rw-r--r--. 1 root root 1602 Oct 17 01:04 b259:4
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:144
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:183
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:227
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:228
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:229
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:231
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:235
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:236
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:62
-rw-r--r--. 1 root root 0 Oct 17 01:04 c10:63
-rw-r--r--. 1 root root 193 Oct 17 01:04 c13:32
-rw-r--r--. 1 root root 0 Oct 17 01:04 c13:63
-rw-r--r--. 1 root root 113 Oct 17 01:04 c13:64
-rw-r--r--. 1 root root 113 Oct 17 01:04 c13:65
-rw-r--r--. 1 root root 232 Oct 17 01:04 c13:66
-rw-r--r--. 1 root root 199 Oct 17 01:04 c13:67
-rw-r--r--. 1 root root 143 Oct 17 01:04 c13:68
-rw-r--r--. 1 root root 0 Oct 17 01:04 c162:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:11
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:3
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:4
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:5
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:7
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:8
-rw-r--r--. 1 root root 0 Oct 17 01:04 c1:9
-rw-r--r--. 1 root root 0 Oct 17 01:04 c202:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c202:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c202:2
-rw-r--r--. 1 root root 0 Oct 17 01:04 c202:3
-rw-r--r--. 1 root root 0 Oct 17 01:04 c203:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c203:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c203:2
-rw-r--r--. 1 root root 0 Oct 17 01:04 c203:3
-rw-r--r--. 1 root root 0 Oct 17 01:04 c241:0
-rw-r--r--. 1 root root 259 Oct 17 01:04 c242:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c246:0
-rw-r--r--. 1 root root 23 Oct 17 01:04 c251:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:10
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:11
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:12
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:13
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:14
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:15
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:16
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:17
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:18
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:19
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:2
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:20
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:21
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:22
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:23
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:24
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:25
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:26
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:27
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:28
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:29
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:3
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:30
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:31
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:32
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:33
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:34
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:35
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:36
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:37
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:38
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:39
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:4
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:40
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:41
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:42
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:43
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:44
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:45
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:46
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:47
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:48
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:49
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:5
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:50
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:51
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:52
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:53
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:54
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:55
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:56
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:57
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:58
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:59
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:6
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:60
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:61
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:62
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:63
-rw-r--r--. 1 root root 20 Oct 17 01:04 c4:64
-rw-r--r--. 1 root root 20 Oct 17 01:04 c4:65
-rw-r--r--. 1 root root 20 Oct 17 01:04 c4:66
-rw-r--r--. 1 root root 20 Oct 17 01:04 c4:67
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:7
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:8
-rw-r--r--. 1 root root 0 Oct 17 01:04 c4:9
-rw-r--r--. 1 root root 0 Oct 17 01:04 c5:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c5:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c5:2
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:0
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:1
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:128
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:129
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:130
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:131
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:132
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:133
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:134
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:2
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:3
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:4
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:5
-rw-r--r--. 1 root root 0 Oct 17 01:04 c7:6
-rw-r--r--. 1 root root 87 Oct 17 01:04 n1
-rw-r--r--. 1 root root 360 Oct 17 01:06 n10
-rw-r--r--. 1 root root 360 Oct 17 01:06 n11
-rw-r--r--. 1 root root 360 Oct 17 01:06 n13
-rw-r--r--. 1 root root 360 Oct 17 01:07 n14
-rw-r--r--. 1 root root 595 Oct 17 01:04 n2
-rw-r--r--. 1 root root 360 Oct 17 01:09 n25
-rw-r--r--. 1 root root 360 Oct 17 01:10 n29
-rw-r--r--. 1 root root 195 Oct 17 01:04 n3
-rw-r--r--. 1 root root 360 Oct 17 01:10 n30
-rw-r--r--. 1 root root 360 Oct 17 01:11 n31
-rw-r--r--. 1 root root 360 Oct 17 01:14 n35
-rw-r--r--. 1 root root 360 Oct 17 01:14 n37
-rw-r--r--. 1 root root 360 Oct 17 01:14 n39
-rw-r--r--. 1 root root 188 Oct 17 01:04 n4
-rw-r--r--. 1 root root 360 Oct 17 01:15 n41
-rw-r--r--. 1 root root 193 Oct 17 01:04 n5
-rw-r--r--. 1 root root 360 Oct 17 01:18 n50
-rw-r--r--. 1 root root 362 Oct 17 01:26 n54
-rw-r--r--. 1 root root 189 Oct 17 01:04 n6
-rw-r--r--. 1 root root 357 Oct 17 01:05 n7
-rw-r--r--. 1 root root 357 Oct 17 01:05 n8
-rw-r--r--. 1 root root 359 Oct 17 01:05 n9
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115 node-exporter version=1.4.0
How reproducible:
always
Steps to Reproduce:
1. Check node-exporter logs
Actual results:
"Failed to open directory, disabling udev device properties" in node-exporter logs
Expected results:
no error logs
Additional info:
No functional impact on the cluster. Code: https://github.com/prometheus/node_exporter/blob/release-1.4/collector/diskstats_linux.go#L262-L270
This is a clone of issue OCPBUGS-5548. The following is the description of the original issue:
—
Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390
When creating a Deployment, DeploymentConfig, or Knative Service with Pipeline enabled, and then deleting it again with the option "Delete other resources created by console" enabled (only available on 4.13+ with the PR above), the automatically created Pipeline is not deleted.
When the user tries to create the same resource with a Pipeline again this fails with an error:
An error occurred
secrets "nodeinfo-generic-webhook-secret" already exists
Version-Release number of selected component (if applicable):
4.13
(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5547)
How reproducible:
Always
Steps to Reproduce:
Actual results:
Case 1: Delete resources:
Case 2: Delete application:
Expected results:
Case 1: Delete resource:
Case 2: Delete application:
Additional info:
We need to merge downstream on master the new retry code from https://github.com/ovn-org/ovn-kubernetes/pull/3171, so that we can also merge the retry logic applied to ovn-k node.
This is a clone of issue OCPBUGS-2141. The following is the description of the original issue:
—
Description of problem:
4.12 cluster with no PV for Prometheus; the doc link still points to 4.8:

# oc get co monitoring -o jsonpath='{.status.conditions}' | jq 'map(select(.type=="Degraded"))'
[
  {
    "lastTransitionTime": "2022-10-09T02:36:16Z",
    "message": "Prometheus is running without persistent storage which can lead to data loss during upgrades and cluster disruptions. Please refer to the official documentation to see how to configure storage for Prometheus: https://docs.openshift.com/container-platform/4.8/monitoring/configuring-the-monitoring-stack.html",
    "reason": "PrometheusDataPersistenceNotConfigured",
    "status": "False",
    "type": "Degraded"
  }
]
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-05-053337
How reproducible:
always
Steps to Reproduce:
1. With no PVs for Prometheus, check the monitoring operator status
Actual results:
the doc still links to 4.8
Expected results:
links to the latest doc
Additional info:
slack thread: https://coreos.slack.com/archives/G79AW9Q7R/p1665283462123389
When we create an HCP, the Root CA in the HCP namespaces has the certificate and key named as
Done criteria: The Root CA should have the certificate and key named as the cert manager expects.
We added server groups for the control plane and computes as part of OSASINFRA-2570, except for UPI, which only creates a server group for the control plane.
We need to update the UPI scripts to create a server group for computes, to be consistent with IPI and have the instructions at https://docs.openshift.com/container-platform/4.11/machine_management/creating_machinesets/creating-machineset-osp.html work out of the box in case customers want to create MachineSets on their UPI clusters.
Related to OCPCLOUD-1135.
Description of problem: This is a follow-up to OCPBUGS-2795 and OCPBUGS-2941.
The installer fails to destroy the cluster when the OpenStack object storage omits 'content-type' from responses. This can happen on responses with HTTP status code 204, where a reverse proxy is truncating content-related headers (see this nginx bug report). In such cases, the installer errors with:
level=error msg=Bulk deleting of container "5ifivltb-ac890-chr5h-image-registry-fnxlmmhiesrfvpuxlxqnkoxdbl" objects failed: Cannot extract names from response with content-type: []
Listing container objects suffers from the same issue as listing the containers, and this one isn't fixed in the latest versions of gophercloud. I've reported https://github.com/gophercloud/gophercloud/issues/2509 and am fixing it with https://github.com/gophercloud/gophercloud/issues/2510; however, we likely won't be able to backport the bump to gophercloud master back to release-4.8, so we'll have to look for alternatives.
I'm setting the priority to critical as it's causing all our jobs to fail in master.
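Until a backportable gophercloud fix is available, one alternative is a tolerant extraction fallback along these lines; this is a generic sketch (extractNames is a hypothetical helper, not gophercloud's API):

```go
package swift

import (
	"fmt"
	"net/http"
	"strings"
)

// Sketch only: a 204 response, or any response with an empty body, cannot
// contain object names, so treat it as an empty listing instead of failing
// on a Content-Type header that a proxy stripped.
func extractNames(statusCode int, contentType string, body []byte) ([]string, error) {
	if statusCode == http.StatusNoContent || len(body) == 0 {
		return nil, nil
	}
	if strings.HasPrefix(contentType, "text/plain") {
		var names []string
		for _, line := range strings.Split(string(body), "\n") {
			if line != "" {
				names = append(names, line)
			}
		}
		return names, nil
	}
	return nil, fmt.Errorf("cannot extract names from response with content-type: %q", contentType)
}
```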
Version-Release number of selected component (if applicable):
4.8.z
How reproducible:
Likely not happening in customer environments where Swift is exposed directly. We're seeing the issue in our CI where we're using a non-RHOSP managed cloud.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:
—
Description of problem:
Cx is not able to install a new OCP BM IPI cluster. During bootstrapping, the provisioning interfaces on the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server on an OCP IPI BareMetal install.

Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872

The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal operator. The fix also needs to be applied to the control plane. ref:// https://github.com/openshift/cluster-baremetal-operator/pull/286/files
Version-Release number of selected component (if applicable):
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Customer should be able to install the cluster without any issue.
Additional info:
This is a clone of issue OCPBUGS-948. The following is the description of the original issue:
—
Description of problem:
OLM is setting the "openshift.io/scc" label to "anyuid" on several namespaces:

https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L23
https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L8

This label has no effect and will lead to confusion. It should be set to the empty string for now (removing it entirely will have no effect on upgraded clusters because the CVO does not remove deleted labels, so the next best thing is to clear the value). For bonus points, OLM should remove the label entirely from the manifest and add migration logic to remove the existing label from these namespaces, to handle upgraded clusters that already have it.
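A sketch of the migration logic (the namespace list and the startup hook are assumptions, not the actual OLM change):

```go
package olm

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Sketch only: clear the stale label value on upgraded clusters, since the
// CVO will not remove a label that is simply deleted from the manifest.
func clearSCCLabel(ctx context.Context, client kubernetes.Interface) error {
	patch := []byte(`{"metadata":{"labels":{"openshift.io/scc":""}}}`)
	for _, ns := range []string{"openshift-operator-lifecycle-manager", "openshift-marketplace"} {
		if _, err := client.CoreV1().Namespaces().Patch(ctx, ns,
			types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```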
Version-Release number of selected component (if applicable):
Not sure how long this has been an issue, but fixing it in 4.12+ should be sufficient.
How reproducible:
always
Steps to Reproduce:
1. install cluster 2. examine namespace labels
Actual results:
label is present
Expected results:
ideally label should not be present, but in the short term setting it to emptystring is the quick fix and is better than nothing.
Description of problem:
InstanceMetadataTags are not supported in the AWS C2S regions (us-iso-x)
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. OCP 4.11 IPI installation on AWS C2S regions
Actual results:
Expected results:
Additional info:
Actual error: "Error launching resource Instance. Unsupported Operation: Specifying InstanceMetadataTags is not yet supported"
There is a related fix upstream: "resource/aws_instance: Handle regions where instance metadata tags are unsupported" https://github.com/hashicorp/terraform-provider-aws/pull/26631
Console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.
CONSOLE-3077 was updating this version, but it did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett we should be backporting it to 4.12.
The risk should be minimal since we are only updating the model itself + validation + Readme.
Description of problem:
When creating a pod with an additional network that contains a `spec.config.ipam.exclude` range, any address within the excluded range is still iterated while searching for a suitable IP candidate. As a result, pod creation times out when large exclude ranges are used.
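For illustration, a fix would make the iterator jump over excluded ranges instead of testing every address in them; a minimal sketch (not the whereabouts implementation):

```go
package whereabouts

import (
	"math/big"
	"net"
)

// Sketch only: when the candidate IP falls inside an excluded range, jump
// directly past that range instead of stepping one address at a time; for
// fd43:01f1:3daa:0baa::/100 inside a /64 that replaces ~2^28 per-address
// iterations with a single step.
func skipExcluded(ip net.IP, excludes []*net.IPNet) net.IP {
	for _, ex := range excludes {
		if ex.Contains(ip) {
			return nextAfterRange(ex)
		}
	}
	return ip
}

// nextAfterRange returns the first address after the given subnet (overflow
// of the all-ones address is ignored in this sketch).
func nextAfterRange(n *net.IPNet) net.IP {
	last := make(net.IP, len(n.IP))
	for i := range n.IP {
		last[i] = n.IP[i] | ^n.Mask[i]
	}
	v := new(big.Int).SetBytes(last)
	v.Add(v, big.NewInt(1))
	next := make(net.IP, len(last))
	b := v.Bytes()
	copy(next[len(next)-len(b):], b)
	return next
}
```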
Version-Release number of selected component (if applicable):
How reproducible:
with big exclude ranges, 100%
Steps to Reproduce:
1. Create a network-attachment-definition with a large exclude range:

$ cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: nad-w-excludes
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-net",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "fd43:01f1:3daa:0baa::/64",
        "exclude": [ "fd43:01f1:3daa:0baa::/100" ],
        "log_file": "/tmp/whereabouts.log",
        "log_level" : "debug"
      }
    }
EOF

2. Create a pod with the network attached:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-exclude-range
  annotations:
    k8s.v1.cni.cncf.io/networks: nad-w-excludes
spec:
  containers:
  - name: pod-1
    image: openshift/hello-openshift
EOF

3. Check pod status, event log and whereabouts logs after a while:

$ oc get pods
NAME                     READY   STATUS              RESTARTS   AGE
pod-with-exclude-range   0/1     ContainerCreating   0          2m23s

$ oc get events
<...>
6m39s  Normal   Scheduled               pod/pod-with-exclude-range  Successfully assigned default/pod-with-exclude-range to <worker-node>
6m37s  Normal   AddedInterface          pod/pod-with-exclude-range  Add eth0 [10.129.2.49/23] from openshift-sdn
2m39s  Warning  FailedCreatePodSandBox  pod/pod-with-exclude-range  Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

$ oc debug node/<worker-node> - tail /host/tmp/whereabouts.log
Starting pod/<worker-node>-debug ...
To use host binaries, run `chroot /host`
2022-10-27T14:14:50Z [debug] Finished leader election
2022-10-27T14:14:50Z [debug] IPManagement: {fd43:1f1:3daa:baa::1 ffffffffffffffff0000000000000000} , <nil>
2022-10-27T14:14:59Z [debug] Used defaults from parsed flat file config @ /etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.conf
2022-10-27T14:14:59Z [debug] ADD - IPAM configuration successfully read: {Name:macvlan-net Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[fd43:01f1:3daa:0baa::/80] DNS:{Nameservers:[] Domain: Search:[] Options:[]} Range:fd43:1f1:3daa:baa::/64 RangeStart:fd43:1f1:3daa:baa:: RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LeaderLeaseDuration:1500 LeaderRenewDeadline:1000 LeaderRetryPeriod:500 LogFile:/tmp/whereabouts.log LogLevel:debug OverlappingRanges:true SleepForRace:0 Gateway:<nil> Kubernetes:{KubeConfigPath:/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath: PodName:pod-with-exclude-range PodNamespace:default}
2022-10-27T14:14:59Z [debug] Beginning IPAM for ContainerID: f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82
2022-10-27T14:14:59Z [debug] Started leader election
2022-10-27T14:14:59Z [debug] OnStartedLeading() called
2022-10-27T14:14:59Z [debug] Elected as leader, do processing
2022-10-27T14:14:59Z [debug] IPManagement - mode: 0 / containerID:f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82 / podRef: default/pod-with-exclude-range
2022-10-27T14:14:59Z [debug] IterateForAssignment input >> ip: fd43:1f1:3daa:baa:: | ipnet: {fd43:1f1:3daa:baa:: ffffffffffffffff0000000000000000} | first IP: fd43:1f1:3daa:baa::1 | last IP: fd43:1f1:3daa:baa:ffff:ffff:ffff:ffff
Actual results:
Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Expected results:
additional network gets attached to the pod
Additional info:
This is a clone of issue OCPBUGS-10433. The following is the description of the original issue:
—
Description of problem:
When CNO is managed by Hypershift, multus-admission-controller does not have the correct RollingUpdate parameters meeting the Hypershift requirements outlined here: https://github.com/openshift/hypershift/blob/646bcef53e4ecb9ec01a05408bb2da8ffd832a14/support/config/deployment.go#L81

```
There are two standard cases currently with hypershift: HA mode where there are 3 replicas spread across zones and then non ha with one replica. When only 3 zones are available you need to be able to set maxUnavailable in order to progress the rollout. However, you do not want to set that in the single replica case because it will result in downtime.
```

So when multus-admission-controller has more than one replica, the RollingUpdate parameters should be:

```
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 1
```
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create OCP cluster using Hypershift
2. Check rolling update parameters of multus-admission-controller
Actual results:
the operator has default parameters: {"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"}
Expected results:
{"rollingUpdate":{"maxSurge":0,"maxUnavailable":1},"type":"RollingUpdate"}
Additional info:
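For illustration, the replica-dependent choice could look like this in Go (a sketch following the Hypershift guidance quoted above, not the CNO code):

```go
package cno

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Sketch only: choose RollingUpdate parameters based on the replica count,
// so multi-replica deployments can progress across 3 zones while a single
// replica avoids maxUnavailable-induced downtime.
func strategyFor(replicas int32) appsv1.DeploymentStrategy {
	s := appsv1.DeploymentStrategy{Type: appsv1.RollingUpdateDeploymentStrategyType}
	if replicas > 1 {
		maxSurge := intstr.FromInt(0)
		maxUnavailable := intstr.FromInt(1)
		s.RollingUpdate = &appsv1.RollingUpdateDeployment{
			MaxSurge:       &maxSurge,
			MaxUnavailable: &maxUnavailable,
		}
	}
	return s
}
```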
This is a clone of issue OCPBUGS-3032. The following is the description of the original issue:
—
If installation fails at an early stage (e.g. pulling release images, configuring hosts, waiting for agents to come up) there is no indication that anything has gone wrong, and the installer binary may not even be able to connect.
We should at least display what is happening on the console so that users have some avenue to figure out for themselves what is going on.
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-5960.
This is a clone of issue OCPBUGS-7102. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/operator-framework-olm/blob/7ec6b948a148171bd336750fed98818890136429/staging/operator-lifecycle-manager/pkg/controller/operators/olm/plugins/downstream_csv_namespace_labeler_plugin_test.go#L309
has a dependency on creation of a next-version release branch.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. Clone operator-framework/operator-framework-olm
2. make unit/olm
3. Deal with a really bumpy first-time kubebuilder/envtest install experience
4. Profit
Actual results:
error
Expected results:
pass
Additional info:
Description of problem:
By creating network policies with a namespace that has the maximum length, it can end up causing this error:

2023-06-22T17:34:40.804880959Z I0622 17:34:40.804851 1 obj_retry.go:318] Retry add failed for *v1.NetworkPolicy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas, will try again later: failed to create Network Policy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas: failed to create default deny port groups: error in transact with ops [
{Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
{Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd} {GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd}]} external_ids:{GoMap:map[name:a7686019953911959437_ingressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {d3b52500-963a-4f7b-8928-d869f298d2e8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3} {GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3}]} external_ids:{GoMap:map[name:a7686019953911959437_egressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b128baec-6acd-4683-8c12-5b968bf73bd8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}
]
results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:ovsdb error Details:set contains duplicate UUID:{GoUUID:} Rows:[]} {Count:0 Error: Details: UUID:{GoUUID:} Rows:[]}]
and errors [ovsdb error: set contains duplicate]: 1 ovsdb operations failed
This is not a problem in 4.14, as we moved to ACL indexes, but in 4.13 and before we compare the ACL name and the external IDs. For default-deny ACLs we simply store the direction in the external ID, and the name of the ACL is limited to 63 characters in OVN. When we create default-deny ACLs, we create one that denies everything, and we also create some allow ACLs to permit ARP and neighbor discovery traffic. These two ACLs may be recognized as duplicates because their truncated names (namespace only) and their directions in the external IDs match.
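A small self-contained illustration of the collision (the name-building scheme is simplified, not the exact ovn-kubernetes code):

```go
package main

import "fmt"

// ovnACLName mimics a name built as "<namespace>_<suffix>" and truncated to
// OVN's 63-character limit.
func ovnACLName(namespace, suffix string) string {
	name := namespace + "_" + suffix
	if len(name) > 63 {
		name = name[:63]
	}
	return name
}

func main() {
	// The 62-character namespace from the log above: namespace + "_" is
	// already 63 characters, so everything after the underscore is cut off
	// and the default-deny ACL and the ARP-allow ACL end up with identical
	// names (and identical direction keys in external-ids).
	ns := "ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident"
	fmt.Println(ovnACLName(ns, "defaultDeny")) // ...-mshen-incident_
	fmt.Println(ovnACLName(ns, "arpAllow"))    // ...-mshen-incident_
}
```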
This is a clone of issue OCPBUGS-4166. The following is the description of the original issue:
—
Description of problem:
This is a wrapper bug for the library sync of 4.12.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Image registry pods panic while deploying OCP in ap-south-2 AWS region
Version-Release number of selected component (if applicable):
4.11.2
How reproducible:
Deploy OCP in AWS ap-south-2 region
Steps to Reproduce:
Deploy OCP in AWS ap-south-2 region
Actual results:
panic: Invalid region provided: ap-south-2
Expected results:
Image registry pods should come up with no errors
Additional info:
This is a clone of issue OCPBUGS-7438. The following is the description of the original issue:
—
Description of problem:
The egress service nodeSelector parsing does not take into account wrong values that cause errors (such as "name part must consist of alphanumeric characters"), and the controller does not handle them gracefully given a bad input. When a bad input is given, it should log an error and ignore the service.
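A sketch of graceful handling using the standard apimachinery conversion, which rejects such keys and values (illustrative, not the actual fix):

```go
package ovn

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/klog/v2"
)

// Sketch only: validate the selector up front; on bad input (such as a key
// "a:b" or a value "c&"), log and skip the service instead of retrying
// forever.
func egressSelector(svcKey string, sel *metav1.LabelSelector) (labels.Selector, bool) {
	parsed, err := metav1.LabelSelectorAsSelector(sel)
	if err != nil {
		klog.Errorf("ignoring egress service %s: invalid nodeSelector: %v", svcKey, err)
		return nil, false
	}
	return parsed, true
}
```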
Version-Release number of selected component (if applicable):
How reproducible:
Create an egress service with a bad nodeSelector, such as: "{"nodeSelector":{"matchLabels":{"a:b": "c&"}}}". The ovnkube-master controller does not handle it gracefully.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info: