Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, it should appear in the UI for the alert as a link (a minimal sketch follows this request).
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
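A minimal sketch of an alerting rule carrying a runbook link, assuming a standard PrometheusRule; the names, expression, and URL are illustrative, and the link is shown here under annotations (the common convention), while the request above refers to a runbook_url label:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert          # illustrative name
  namespace: openshift-monitoring
spec:
  groups:
    - name: example.rules
      rules:
        - alert: ExampleAlertWithRunbook
          expr: vector(1)              # placeholder expression
          labels:
            severity: warning
          annotations:
            summary: Example alert that ships a runbook link
            # the console would render this value as a clickable link
            runbook_url: https://example.com/runbooks/example-alert.md
```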
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly (a schematic example follows below).
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
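A schematic example of what an annotated manifest might look like; the exact annotation key and value are defined by the enhancement linked above, so both are assumptions here:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
  namespace: openshift-console
  annotations:
    # assumed form; the enhancement defines the authoritative key and value
    capability.openshift.io/console: "true"
```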
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background, and strategic fit
In a future Kubernetes release (currently targeted for 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents; we need the drivers created so that we continue to support these ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are otherwise necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition toward single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate() (see the sketch below).
For single-stack IPv4 or IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
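A minimal sketch of the relevant agent-cluster-install.yaml fragment for a dual-stack install; the CIDRs are illustrative and the field layout follows the InstallConfig networking schema referenced above:

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-cluster              # illustrative
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24       # IPv4 subnet
      - cidr: fd2e:6f44:5dd8::/64    # IPv6 subnet
```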
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Close the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help us understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As a HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run the AWS EBS CSI driver operator + the control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As a HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update the OLM API dependency version (go.mod) in the OLM package; then
bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have internally built those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If it bumps the OPM version to the next/future (v1.25.0) release containing this change before the downstream images are updated, the process to manage the indexes downstream will face issues and impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
We need to continue to maintain specific areas within storage; this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify which DNS pods are responding to our requests.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add a 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding an 'openshift.io/restartedAt: <actual-timestamp>' annotation (see the sketch below). This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add a 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment.phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
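A rough sketch of the Deployment fragment the 'Restart rollout' action would patch in; the timestamp value is illustrative:

```yaml
spec:
  template:
    metadata:
      annotations:
        # actual timestamp is filled in at the moment the action is clicked
        openshift.io/restartedAt: "2022-08-01T12:00:00Z"
```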
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like it is done through the CLI with the command “oc rollout restart deploy/<deployment-name>”.
Usually when developers change the config map that a deployment uses, they have to restart the pods. Currently, developers have to use the oc rollout restart deployment command. The customer wants a button/menu that performs the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification with a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated.
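For illustration only, the vendor label on a ManagedCluster that such an allow-list check would inspect might look like this (the exact label value set by ACM/MCE is an assumption here):

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-managed-cluster     # illustrative
  labels:
    # clusters whose vendor is not on the allow-list would be disabled
    vendor: OpenShift
```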
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that uses SCOS as the machine-os-content image
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals: some are primary near-term goals, some are secondary goals that serve well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going, but because it's the only viable path today to solve many of our customers' use cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product. We need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it’s pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
When defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    - service:
        basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Acceptance Criteria: Add missing api docs for *Icon and *Status components in the API docs
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
We should have a global notification, or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users, when the console operator `spec.managementState` is `Unmanaged`, as changes to `enabled` for plugins will have no effect. A reference example follows.
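For reference, the state that should trigger the warning is the console operator config with managementState set to Unmanaged:

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  managementState: Unmanaged   # plugin enable/disable changes have no effect in this state
```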
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators that are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64` etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators that are supported on our Nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures (see the illustration below).
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
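For illustration, the two label sources involved, using the label keys quoted above (resource names are placeholders):

```yaml
# Node: architecture reported via a well-known label
apiVersion: v1
kind: Node
metadata:
  name: worker-arm64-0              # illustrative
  labels:
    kubernetes.io/arch: arm64
---
# PackageManifest: supported architectures advertised by the operator
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  name: example-operator            # illustrative
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
```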
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.)
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
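Under this proposal, the referenced Secret would live in openshift-config; a minimal sketch, where the key name inside the Secret is an assumption:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name                # matches spec.connectionConfig.password.secretName
  namespace: openshift-config
type: Opaque
stringData:
  password: my-repo-password       # assumed key name
```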
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install helm charts from repos added to ODC with the basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will first be scoped to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work focusing on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, in the topology view, I would like to be intuitively notified if any of the deployments have reached quota limits
Refer below for more details
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To help the cluster admin configure the perspectives correctly, the developer console should provide a code snippet for the customization of the yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To help the cluster admin configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the yaml resource (Console CRD).
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help the smooth functioning of the cluster and avoid frivolous operational events. A minimal example of the label on a namespace is sketched after the A/C list below.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
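A minimal sketch of the resulting label on a namespace (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator           # illustrative openshift-* namespace hosting a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```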
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes, so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert (a sketch follows below).
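A minimal sketch of such an alerting rule, assuming a simple equality check on the metric; the rule and alert names, namespace, and severity are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-connectivity          # illustrative
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller.rules
      rules:
        - alert: OVNControllerSouthboundDatabaseDisconnected   # illustrative name
          # the metric updates roughly every 2 minutes, so a 10m "for" spans several updates
          expr: ovn_controller_southbound_database_connected == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for 10 minutes
```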
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as a means for determining the state of a given egress node, hypershift should
be able to leverage the SOCKS proxy and know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
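A rough sketch of what the i-c-c pod's init container could look like under this plan; the image reference, binary path, and volume layout are assumptions:

```yaml
initContainers:
  - name: machine-os-images
    image: machine-os-images:latest          # assumed image reference
    command:
      - /bin/copy-metal                      # assumed binary path
      - -image-build                         # extract only the images i-c-c needs (vs. -all)
      - /shared/html/images
    volumeMounts:
      - name: images                         # regular (non-hostPath) volume owned by the i-c-c pod
        mountPath: /shared/html/images
```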
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes have a disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service. We sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (CI build cluster) tries to access GitHub at a high rate, leading to 429 (Too Many Requests).
The way we fixed it for assisted-service was changing the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post, potentially?
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
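A minimal sketch of how such a flag could be exposed, assuming a Prometheus gauge registered through client_golang; the metric name and the feature checks are placeholders, not the actual ODF telemetry code:

```go
package telemetry

import "github.com/prometheus/client_golang/prometheus"

// advancedFeatureInUse is a hypothetical 0/1 gauge: 1 when any advanced
// feature (PV encryption, encryption with KMS, external mode) is enabled.
var advancedFeatureInUse = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "odf_advanced_feature_usage", // placeholder metric name
	Help: "Set to 1 if the cluster uses any ODF advanced feature, 0 otherwise.",
})

func init() {
	prometheus.MustRegister(advancedFeatureInUse)
}

// RecordAdvancedFeatureUsage updates the gauge from the individual checks.
// The boolean inputs are assumed to come from the existing config/CR state.
func RecordAdvancedFeatureUsage(pvEncryption, kmsEncryption, externalMode bool) {
	if pvEncryption || kmsEncryption || externalMode {
		advancedFeatureInUse.Set(1)
	} else {
		advancedFeatureInUse.Set(0)
	}
}
```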
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
Create a LoadBalancer type service within an OCP 4.11.x OVNKubernetes cluster to expose the api server endpoint; the service does not respond to normal oc requests. However, some requests work, like "oc whoami" and "oc get --raw /api".
Version-Release number of selected component (if applicable):
4.11.8 with OVNKubernetes
How reproducible:
always
Steps to Reproduce:
1. Setup openshift cluster 4.11 on AWS with OVNKubernetes as the default network
2. Create the following service under the openshift-kube-apiserver namespace to expose the api:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1800"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: test-api
  namespace: openshift-kube-apiserver
spec:
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - <my_ip>/32
  ports:
  - nodePort: 31248
    port: 6443
    protocol: TCP
    targetPort: 6443
  selector:
    apiserver: "true"
    app: openshift-kube-apiserver
  sessionAffinity: None
  type: LoadBalancer

3. Setup the DNS resolution for the access: xxx.mydomain.com ---> <elb-auto-generated-dns>
4. Try to access the cluster api via the service above by updating the kubeconfig to use the custom dns name
Actual results:
No response from the server side. $ time oc get node -v8 I1025 08:29:10.284069 103974 loader.go:375] Config loaded from file: bmeng.kubeconfig I1025 08:29:10.294017 103974 round_trippers.go:420] GET https://rh-api.bmeng-ccs-ovn.3o13.s1.devshift.org:6443/api/v1/nodes?limit=500 I1025 08:29:10.294035 103974 round_trippers.go:427] Request Headers: I1025 08:29:10.294043 103974 round_trippers.go:431] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I1025 08:29:10.294052 103974 round_trippers.go:431] User-Agent: oc/openshift (linux/amd64) kubernetes/e40bd2d I1025 08:29:10.365119 103974 round_trippers.go:446] Response Status: 200 OK in 71 milliseconds I1025 08:29:10.365142 103974 round_trippers.go:449] Response Headers: I1025 08:29:10.365148 103974 round_trippers.go:452] Audit-Id: 83b9d8ae-05a4-4036-bff6-de371d5bec12 I1025 08:29:10.365155 103974 round_trippers.go:452] Cache-Control: no-cache, private I1025 08:29:10.365161 103974 round_trippers.go:452] Content-Type: application/json I1025 08:29:10.365167 103974 round_trippers.go:452] X-Kubernetes-Pf-Flowschema-Uid: 2abc2e2d-ada3-4cb8-a86f-235df3a4e214 I1025 08:29:10.365173 103974 round_trippers.go:452] X-Kubernetes-Pf-Prioritylevel-Uid: 02f7a188-43c7-4827-af58-5ebe861a1891 I1025 08:29:10.365179 103974 round_trippers.go:452] Date: Tue, 25 Oct 2022 08:29:10 GMT ^C real 17m4.840s user 0m0.567s sys 0m0.163s However, it has the correct response if using --raw to request, eg: $ oc get --raw /api/v1 --kubeconfig bmeng.kubeconfig {"kind":"APIResourceList","groupVersion":"v1","resources":[{"name":"bindings","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"componentstatuses","singularName":"","namespaced":false,"kind":"ComponentStatus","verbs":["get","list"],"shortNames":["cs"]},{"name":"configmaps","singularName":"","namespaced":true,"kind":"ConfigMap","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["cm"],"storageVersionHash":"qFsyl6wFWjQ="},{"name":"endpoints","singularName":"","namespaced":true,"kind":"Endpoints","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ep"],"storageVersionHash":"fWeeMqaN/OA="},{"name":"events","singularName":"","namespaced":true,"kind":"Event","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ev"],"storageVersionHash":"r2yiGXH7wu8="},{"name":"limitranges","singularName":"","namespaced":true,"kind":"LimitRange","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["limits"],"storageVersionHash":"EBKMFVe6cwo="},{"name":"namespaces","singularName":"","namespaced":false,"kind":"Namespace","verbs":["create","delete","get","list","patch","update","watch"],"shortNames":["ns"],"storageVersionHash":"Q3oi5N2YM8M="},{"name":"namespaces/finalize","singularName":"","namespaced":false,"kind":"Namespace","verbs":["update"]},{"name":"namespaces/status","singularName":"","namespaced":false,"kind":"Namespace","verbs":["get","patch","update"]},{"name":"nodes","singularName":"","namespaced":false,"kind":"Node","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["no"],"storageVersionHash":"XwShjMxG9Fs="},{"name":"nodes/proxy","singularName":"","namespaced":false,"kind":"NodeProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"nodes/status","singularName":"","namespaced":false,"kind":"Node
","verbs":["get","patch","update"]},{"name":"persistentvolumeclaims","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pvc"],"storageVersionHash":"QWTyNDq0dC4="},{"name":"persistentvolumeclaims/status","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["get","patch","update"]},{"name":"persistentvolumes","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pv"],"storageVersionHash":"HN/zwEC+JgM="},{"name":"persistentvolumes/status","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["get","patch","update"]},{"name":"pods","singularName":"","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="},{"name":"pods/attach","singularName":"","namespaced":true,"kind":"PodAttachOptions","verbs":["create","get"]},{"name":"pods/binding","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"pods/ephemeralcontainers","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"pods/eviction","singularName":"","namespaced":true,"group":"policy","version":"v1","kind":"Eviction","verbs":["create"]},{"name":"pods/exec","singularName":"","namespaced":true,"kind":"PodExecOptions","verbs":["create","get"]},{"name":"pods/log","singularName":"","namespaced":true,"kind":"Pod","verbs":["get"]},{"name":"pods/portforward","singularName":"","namespaced":true,"kind":"PodPortForwardOptions","verbs":["create","get"]},{"name":"pods/proxy","singularName":"","namespaced":true,"kind":"PodProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"pods/status","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"podtemplates","singularName":"","namespaced":true,"kind":"PodTemplate","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"LIXB2x4IFpk="},{"name":"replicationcontrollers","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["rc"],"categories":["all"],"storageVersionHash":"Jond2If31h0="},{"name":"replicationcontrollers/scale","singularName":"","namespaced":true,"group":"autoscaling","version":"v1","kind":"Scale","verbs":["get","patch","update"]},{"name":"replicationcontrollers/status","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["get","patch","update"]},{"name":"resourcequotas","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["quota"],"storageVersionHash":"8uhSgffRX6w="},{"name":"resourcequotas/status","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["get","patch","update"]},{"name":"secrets","singularName":"","namespaced":true,"kind":"Secret","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"S6u1pOWzb84="},{"name":"serviceaccounts","singularName":"","namespaced":true,"kind":"ServiceAccount","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["sa"],"storageVersionHash":"pbx9ZvyFpBE="
},{"name":"serviceaccounts/token","singularName":"","namespaced":true,"group":"authentication.k8s.io","version":"v1","kind":"TokenRequest","verbs":["create"]},{"name":"services","singularName":"","namespaced":true,"kind":"Service","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["svc"],"categories":["all"],"storageVersionHash":"0/CO1lhkEBI="},{"name":"services/proxy","singularName":"","namespaced":true,"kind":"ServiceProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"services/status","singularName":"","namespaced":true,"kind":"Service","verbs":["get","patch","update"]}]}
Expected results:
The normal oc request should be working.
Additional info:
There is no such issue for clusters with openshift-sdn with the same OpenShift version and the same LoadBalancer service. We suspected that it might be related to the MTU setting, but that cannot explain why OpenShiftSDN works well. Another thing that might be related is that OpenShiftSDN uses iptables for service load balancing, while OVN handles it within OVN services.
Please let me know if any debug log/info is needed.
This is a clone of issue OCPBUGS-948. The following is the description of the original issue:
—
Description of problem:
OLM is setting the "openshift.io/scc" label to "anyuid" on several namespaces: https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L23 https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L8 this label has no effect and will lead to confusion. It should be set to emptystring for now (removing it entirely will have no effect on upgraded clusters because the CVO does not remove deleted labels, so the next best thing is to clear the value). For bonus points, OLM should remove the label entirely from the manifest and add migration logic to remove the existing label from these namespaces to handle upgraded clusters that already have it.
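For the migration-logic part, a rough sketch of clearing the label on already-upgraded clusters with client-go; the namespace list and the patch strategy are assumptions for illustration, not the actual OLM change:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespaces assumed to carry the label per the linked manifest.
	for _, ns := range []string{"openshift-operator-lifecycle-manager", "openshift-operators"} {
		// Setting the value to "" (rather than deleting the key) matches the
		// short-term fix described above; a strategic merge patch is enough.
		patch := []byte(`{"metadata":{"labels":{"openshift.io/scc":""}}}`)
		if _, err := client.CoreV1().Namespaces().Patch(
			context.TODO(), ns, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			fmt.Printf("failed to clear label on %s: %v\n", ns, err)
		}
	}
}
```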
Version-Release number of selected component (if applicable):
Not sure how long this has been an issue, but fixing it in 4.12+ should be sufficient.
How reproducible:
always
Steps to Reproduce:
1. Install cluster
2. Examine namespace labels
Actual results:
label is present
Expected results:
Ideally the label should not be present, but in the short term setting it to the empty string is the quick fix and is better than nothing.
Description of problem:
In OCP 4.9, the package-server-manager was introduced to manage the packageserver CSV. However, when OCP 4.8 is upgraded to 4.9, the packageserver stays stuck at v0.17.0, which is the version in OCP 4.8, and v0.18.3, the version in OCP 4.9, does not roll out.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OCP 4.8
2. Upgrade to OCP 4.9

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2022-08-31-160214    True        True          50m     Working towards 4.9.47: 619 of 738 done (83% complete)

$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.47    True        False         4m26s   Cluster version is 4.9.47
Actual results:
Check the packageserver CSV. It's at v0.17.0:

$ oc get csv
NAME            DISPLAY          VERSION   REPLACES   PHASE
packageserver   Package Server   0.17.0               Succeeded
Expected results:
packageserver CSV is at 0.18.3
Additional info:
packageserver CSV version in 4.8: https://github.com/openshift/operator-framework-olm/blob/release-4.8/manifests/0000_50_olm_15-packageserver.clusterserviceversion.yaml#L12
packageserver CSV version in 4.9: https://github.com/openshift/operator-framework-olm/blob/release-4.9/pkg/manifests/csv.yaml#L8
Description of problem:
Alert actions are not triggering modal from where storage cluster can be expanded.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
1/1
Steps to Reproduce:
1. Fill up a storage cluster to 80%
2. Alert is seen in cluster dashboard.
3. Click the Add Capacity button
Actual results:
Modal is not launched.
Expected results:
Modal should be launched.
Additional info:
Description of problem:
If a customer creates a machine with a networks section like this:

networks:
- filter: {}
  noAllowedAddressPairs: false
  subnets:
  - filter: {}
    uuid: primary-subnet-uuid
- filter: {}
  noAllowedAddressPairs: true
  subnets:
  - filter: {}
    uuid: other-subnet-uuid
primarySubnet: primary-subnet-uuid

then all the ports are created without allowed address pairs. Doing some research in the source code, I have found that:
- For each entry in the networks: section, networks are filtered as per its filter: section [1].
- Then, if the subnets: section of the network entry is not empty, for each of the network IDs found above [2], two things are done that are relevant for this situation:
  - The net ID is saved in a netsWithoutAllowedAddressPairs map [3]. That map is later checked while creating any port [4].
  - For each subnet entry that matches the network ID, a port is created [5].

So, the problematic behavior happens due to the following:
- Both entries in the networks array have empty filters. This means that both entries selected all the neutron networks.
- This configuration results in one port per subnet as expected because, in the later traversal of the subnets array of each entry [5], it is filtering by subnet and creating a single port as expected.
- However, the entry with "noAllowedAddressPairs: true" is selecting all the neutron networks, so it adds all of them to the netsWithoutAllowedAddressPairs map [3], regardless of the subnets filtering.
- As all the networks end up in the noAllowedAddressPairs map, all the ports created for the VM have their allowed address pairs removed [4].

Why do we consider this behavior undesired? I understand that, if we create a port for a network that has no allowed pairs, we create all the other ports in the same network without the pairs. However, it is surprising that a port in a network has its allowed address pairs removed due to a setting in an entry that yielded no port on that network. In other words, one would expect that the same subnet filtering that happens on each network entry for yielding ports for the VM would also apply to the noAllowedAddressPairs parameter.
Version-Release number of selected component (if applicable):
4.10.30
How reproducible:
Always
Steps to Reproduce:
1. Create a machineset like in the description 2. 3.
Actual results:
All ports have no address pairs
Expected results:
Only the port on the secondary subnet has no address pairs.
Additional info:
A simple workaround would be to just fill the filter so that a single network is selected for each network entry.

References:
[1] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L576
[2] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L580
[3] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L581-L583
[4] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L658-L660
[5] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L610-L625
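A rough sketch of the subnet-aware behavior one would expect, keying "no allowed address pairs" on the subnets that actually yield ports rather than on every matched network; the types and function are illustrative placeholders, not cluster-api-provider-openstack code:

```go
package machines

// SubnetParam and NetworkParam are simplified stand-ins for the provider's
// config types.
type SubnetParam struct{ UUID string }

type NetworkParam struct {
	NoAllowedAddressPairs bool
	Subnets               []SubnetParam
}

// subnetsWithoutAddressPairs returns the set of subnet UUIDs whose ports
// should be created without allowed address pairs. Because only subnets that
// yield a port contribute, an entry that produced no port on a network can no
// longer strip the pairs from that network's ports.
func subnetsWithoutAddressPairs(networks []NetworkParam) map[string]bool {
	out := map[string]bool{}
	for _, net := range networks {
		if !net.NoAllowedAddressPairs {
			continue
		}
		for _, subnet := range net.Subnets {
			out[subnet.UUID] = true
		}
	}
	return out
}
```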
Description of problem:
Kebab menu for helm repository is showing inconsistent behavior
Version-Release number of selected component (if applicable): 4.12
How reproducible: Always
Steps to Reproduce:
1. Create some helm chart repository
2. Go to the Helm page and switch to the repositories tab
3. Open kebab menu for different repos
Actual results:
Menus are overlapping
Expected results:
The menu should work properly; one menu should close before opening a new one
Additional info:
Video has been added for the reference
Description of problem: As discovered in https://issues.redhat.com/browse/OCPBUGS-2795, gophercloud fails to list swift containers when the endpoint speaks HTTP2. This means that CIRO will provision a 100GB cinder volume even though swift is available to the tenant.
We're for example seeing this behavior in our CI on vexxhost.
The gophercloud commit that fixed this issue is https://github.com/gophercloud/gophercloud/commit/b7d5b2cdd7ffc13e79d924f61571b0e5f74ec91c, specifically the `|| ct == ""` part on line 75 of openstack/objectstorage/v1/containers/results.go. This commit made it in gophercloud v0.18.0.
CIRO still depends on gophercloud v0.17.0. We should bump gophercloud to fix the bug.
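As a sanity check after the bump, a small standalone program can list the tenant's containers against an HTTP/2-speaking endpoint. This is a sketch assuming gophercloud >= v0.18.0 and the usual OS_* environment variables for authentication; it is not part of CIRO itself:

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers"
	"github.com/gophercloud/gophercloud/pagination"
)

func main() {
	opts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		panic(err)
	}
	provider, err := openstack.AuthenticatedClient(opts)
	if err != nil {
		panic(err)
	}
	swift, err := openstack.NewObjectStorageV1(provider, gophercloud.EndpointOpts{})
	if err != nil {
		panic(err)
	}
	// With v0.17.0 this returned an empty list against HTTP/2 endpoints;
	// with the fixed extraction it should print the tenant's containers.
	err = containers.List(swift, containers.ListOpts{Full: false}).EachPage(
		func(page pagination.Page) (bool, error) {
			names, err := containers.ExtractNames(page)
			if err != nil {
				return false, err
			}
			for _, n := range names {
				fmt.Println(n)
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
}
```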
Version-Release number of selected component (if applicable):
All versions. Fix should go to 4.8 - 4.12.
How reproducible:
Always, when swift speaks HTTP2.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3304. The following is the description of the original issue:
—
Assisted-service can use only one mirror of the release image. In the install-config, the user may specify multiple matching mirrors. Currently the last matching mirror is the one used by assisted-service. This is confusing; we should use the first matching one instead.
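A minimal sketch of the intended selection order, using placeholder types rather than the actual assisted-service structures:

```go
package mirrors

import "strings"

// ImageDigestMirror is a simplified placeholder for a mirror entry taken from
// the install-config.
type ImageDigestMirror struct {
	Source  string
	Mirrors []string
}

// mirrorFor returns the first mirror whose source prefix matches the release
// image ("first match wins"), instead of the last one. It returns "" when
// nothing matches.
func mirrorFor(releaseImage string, entries []ImageDigestMirror) string {
	for _, e := range entries { // iterate in the order given in the install-config
		if strings.HasPrefix(releaseImage, e.Source) && len(e.Mirrors) > 0 {
			return e.Mirrors[0]
		}
	}
	return ""
}
```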
This is a clone of issue OCPBUGS-2384. The following is the description of the original issue:
—
Version:
$ openshift-install version
openshift-install 4.10.0-0.nightly-2021-12-23-153012
built from commit 94a3ed9cbe4db66dc50dab8b85d2abf40fb56426
release image registry.ci.openshift.org/ocp/release@sha256:39cacdae6214efce10005054fb492f02d26b59fe9d23686dc17ec8a42f428534
release architecture amd64
Platform: alibabacloud
Please specify:
What happened?
Unexpected error 'Internal publish strategy is not supported on "alibabacloud" platform'. The Internal publish strategy should be supported for "alibabacloud"; please clarify otherwise, thanks!
$ openshift-install create install-config --dir work
? SSH Public Key /home/jiwei/.ssh/openshift-qe.pub
? Platform alibabacloud
? Region us-east-1
? Base Domain alicloud-qe.devcluster.openshift.com
? Cluster Name jiwei-uu
? Pull Secret [? for help] *********
INFO Install-Config created in: work
$
$ vim work/install-config.yaml
$ yq e '.publish' work/install-config.yaml
Internal
$ openshift-install create cluster --dir work --log-level info
FATAL failed to fetch Metadata: failed to load asset "Install Config": invalid "install-config.yaml" file: publish: Invalid value: "Internal": Internal publish strategy is not supported on "alibabacloud" platform
$
What did you expect to happen?
"publish: Internal" should be supported for platform "alibabacloud".
How to reproduce it (as minimally and precisely as possible)?
Always
This is a clone of issue OCPBUGS-1453. The following is the description of the original issue:
—
Description of problem:
TargetDown alert fired while it shouldn't. Prometheus endpoints are not always properly unregistered and the alert will therefore think that some Kube service endpoints are down
Version-Release number of selected component (if applicable):
The problem as always been there.
How reproducible:
Not reproducible. Most of the time Prometheus endpoints are properly unregistered. The aim here is to make the TargetDown Prometheus expression more resilient; this can be tested on past metrics data in which the unregistration issue was encountered.
Steps to Reproduce:
N/A
Actual results:
TargetDown alert triggered while Kube service endpoints are all up & running.
Expected results:
The TargetDown alert should not have been triggered.
Description of problem:
Provisioning interface on master node not getting ipv4 dhcp ip address from bootstrap dhcp server on OCP 4.10.16 IPI BareMetal install.
Customer is performing an OCP 4.10.16 IPI BareMetal install; the bootstrap node provisions just fine, but when the master nodes are booted for provisioning, they are not getting an IPv4 address via DHCP. As such, the install is not moving forward at this point.
Version-Release number of selected component (if applicable):
OCP 4.10.16
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Actual results:
provisioning interface comes up (as evidenced by ipv6 address) but is not getting an ipv4 address via dhcp. OCP install / provisioning fails at this point.
Expected results:
provisioning interface successfully received an ipv4 ip address and successfully provisioned master nodes (and subsequently worker nodes as well.)
Additional info:
As a troubleshooting measure, manually adding an IPv4 address did allow the coreos image on the bootstrap node to be reached via curl.
Further, the kernel boot line for the first master node was updated with a static IP address assignment to verify that the master node would image successfully this way, further confirming that the issue is the provisioning interface not receiving an IPv4 address from the DHCP server.
Description of problem:
etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO
Version-Release number of selected component (if applicable):
4.10.32
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Create multiple pods with local storage volumes attached (attaching yaml manifest)
3. Force delete and re-create pods 10 times
Actual results:
etcd and kube-apiserver pods get restarted, making the cluster unavailable for a period of time
Expected results:
etcd and kube-apiserver do not get restarted
Additional info:
Attaching must-gather. Please let me know if any additional info is required. Thank you!
Description of problem:
When a user selects an installed operator (for example, OpenShift Elasticsearch) in OperatorHub and navigates to the installed operator page from the operator information page
using the "view it here" option, a "404 Not found" message is wrongly shown, although it navigates to the installed operator in the end.
Version-Release number of selected components (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
Actual results:
Wrong message "404: Not found" while the user selects an installed operator and navigates from operator hub to installed operator page.
Browser console log indicates the following:
main-chunk-525818b154a57a9b220a.min.js:1 unhandled error: Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:125992) at HTMLDivElement.l (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:126387) TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) window.onerror @ main-chunk-525818b154a57a9b220a.min.js:1 vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) c @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 l @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 scroll (async) componentWillUnmount @ vendor-patternfly-core-chunk-006bb1499791fa7cfea7.min.js:38397 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Oc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Ac @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 pc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 (anonymous) @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Vi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 qi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 De @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Yt @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 main-chunk-525818b154a57a9b220a.min.js:1 GET https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/api/kubernetes/apis/operators.coreos.com/v1alpha1/clusterserviceversions/elasticsearch-operator.5.5.0 404 (Not Found)
Expected results:
Installed operator details should show without any error when the user selects an installed operator and navigates from operator hub to installed operator page.
Additional info:
Reproduced in both chrome[103.0.5060.114 (Official Build) (64-bit)] and firefox[91.11.0esr (64-bit)] browsers
Attached screen share for the same issue InstalledOperatorNavigation404.mp4
In the Known Issues section of the OpenStack-specific Installer docs issues, there is a point about control plane anti-affinity.
The known issue has several problems:
This is a clone of issue OCPBUGS-3883. The following is the description of the original issue:
—
While doing a PerfScale test, we noticed that the ovnkube pods are not being spread out evenly among the available workers. Instead, they all stack on a few nodes until they fill up the available allocatable EBS volumes (25 in the case of the m5 instances we see here).
An example from partway through our 80 hosted cluster test when there were ~30 hosted clusters created/in progress
There are 24 workers available:
```
$ for i in `oc get nodes -l node-role.kubernetes.io/worker=,node-role.kubernetes.io/infra!=,node-role.kubernetes.io/workload!= | egrep -v "NAME" | awk '{ print $1 }'`; do echo $i `oc describe node $i | grep -v openshift | grep ovnkube -c`; done
ip-10-0-129-227.us-west-2.compute.internal 0
ip-10-0-136-22.us-west-2.compute.internal 25
ip-10-0-136-29.us-west-2.compute.internal 0
ip-10-0-147-248.us-west-2.compute.internal 0
ip-10-0-150-147.us-west-2.compute.internal 0
ip-10-0-154-207.us-west-2.compute.internal 0
ip-10-0-156-0.us-west-2.compute.internal 0
ip-10-0-157-1.us-west-2.compute.internal 4
ip-10-0-160-253.us-west-2.compute.internal 0
ip-10-0-161-30.us-west-2.compute.internal 0
ip-10-0-164-98.us-west-2.compute.internal 0
ip-10-0-168-245.us-west-2.compute.internal 0
ip-10-0-170-103.us-west-2.compute.internal 0
ip-10-0-188-169.us-west-2.compute.internal 25
ip-10-0-188-194.us-west-2.compute.internal 0
ip-10-0-191-51.us-west-2.compute.internal 5
ip-10-0-192-10.us-west-2.compute.internal 0
ip-10-0-193-200.us-west-2.compute.internal 0
ip-10-0-193-27.us-west-2.compute.internal 7
ip-10-0-199-1.us-west-2.compute.internal 0
ip-10-0-203-161.us-west-2.compute.internal 0
ip-10-0-204-40.us-west-2.compute.internal 23
ip-10-0-220-164.us-west-2.compute.internal 0
ip-10-0-222-59.us-west-2.compute.internal 0
```
This is running quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters and the hypershift operator is quay.io/hypershift/hypershift-operator:4.11 on a 4.11.9 management cluster
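One possible way to spread these pods is a pod topology spread constraint; the sketch below builds one with the core/v1 Go types. The "app: ovnkube" selector and the ScheduleAnyway policy are assumptions for illustration, not the agreed HyperShift fix.

```go
package spread

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ovnkubeSpreadConstraint spreads ovnkube pods across worker nodes instead of
// letting them stack on a few hosts until the EBS attachment limit is hit.
func ovnkubeSpreadConstraint() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/hostname",
		WhenUnsatisfiable: corev1.ScheduleAnyway, // prefer spreading, don't block scheduling
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "ovnkube"}, // assumed label
		},
	}
}
```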
Our CMO e2e tests create several containers besides the standard CMO deployment. These pods do not currently set any security context capabilities, which creates a warning like so:
W0705 08:35:38.590283 15206 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "alertmanager-webhook-e2e-testutil" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "alertmanager-webhook-e2e-testutil" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
We should be proactive and set security capability constraints. From this run, this seems to impact the following pods/containers:
Both are used more than once.
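A sketch of the container security context the test utilities could set to satisfy the restricted-profile warnings quoted above; the helper name is ours, not CMO's:

```go
package e2e

import corev1 "k8s.io/api/core/v1"

func boolPtr(b bool) *bool { return &b }

// restrictedSecurityContext returns a container security context that
// addresses each warning from the "restricted:v1.24" pod security profile.
func restrictedSecurityContext() *corev1.SecurityContext {
	return &corev1.SecurityContext{
		AllowPrivilegeEscalation: boolPtr(false),
		RunAsNonRoot:             boolPtr(true),
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
}
```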
Description of problem:
When providing install-config as
platform:
  baremetal:
    apiVIP: 192.168.122.10
    ingressVIP: 192.168.122.11
agent installer fails with
bin/openshift-install agent create cluster-manifests
FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Git clone latest installer https://github.com/openshift/installer and build it
2. Provide install-config.yaml for baremetal platform with deprecated apiVip and ingressVip set
3. Create agent image with "bin/openshift-install agent create cluster-manifests"
Actual results:
bin/openshift-install agent create cluster-manifests
FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Expected results:
The agent installer should up-convert the deprecated fields and should not error. apiVip and ingressVip should be up-converted into apiVips and ingressVips respectively.
Additional info:
During a normal installation, there are hundreds of debug logs reading:
bootstrap configmap not found: configmaps "bootstrap" not found
and dozens of the form:
Still waiting for cluster to initialize: ...
with duplicate data.
We should only log when we have some new information to report, not every time we poll.
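A minimal sketch of the "log only on change" idea, with a hypothetical wrapper that remembers the last emitted message:

```go
package logging

import (
	"fmt"
	"log"
)

// changeLogger suppresses consecutive duplicate messages so that a polling
// loop only emits a line when the reported state actually changes.
type changeLogger struct {
	last string
}

func (c *changeLogger) Logf(format string, args ...interface{}) {
	msg := fmt.Sprintf(format, args...)
	if msg == c.last {
		return // same status as last poll, nothing new to report
	}
	c.last = msg
	log.Print(msg)
}
```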
Description of problem:
With every pod update we are executing a mutate operation to add the pod port to the port group or add the pod IP to an address set. Functionally this doesn't hurt, since mutate will not add duplicate values to the same set. However, it is bad for performance. For example, with 730 network policies affecting a pod, issuing 7 pod updates would result in over 5k transactions.
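A rough sketch of the optimization: keep a local record of what has already been added and only issue the mutate when it would actually change something. The cache below is illustrative, not ovn-kubernetes internals.

```go
package ovncache

import "sync"

// addressSetCache remembers which IPs have already been added to a named
// address set so that repeated pod updates do not generate no-op mutate
// transactions against the OVN database.
type addressSetCache struct {
	mu   sync.Mutex
	sets map[string]map[string]bool // set name -> IPs already present
}

func newAddressSetCache() *addressSetCache {
	return &addressSetCache{sets: map[string]map[string]bool{}}
}

// needsAdd reports whether ip still has to be added to set, recording it as
// present either way so the caller only issues one mutate per (set, ip) pair.
func (c *addressSetCache) needsAdd(set, ip string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.sets[set] == nil {
		c.sets[set] = map[string]bool{}
	}
	if c.sets[set][ip] {
		return false
	}
	c.sets[set][ip] = true
	return true
}
```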
This is a clone of issue OCPBUGS-4701. The following is the description of the original issue:
—
Description of problem:
In at least 4.12.0-rc.0, a user with read-only access to ClusterVersion can see a "Control plane is hosted" banner (despite the control plane not being hosted), because hasPermissionsToUpdate is false, so canPerformUpgrade is false.
Version-Release number of selected component (if applicable):
4.12.0-rc.0. Likely more. I haven't traced it out.
How reproducible:
Always.
Steps to Reproduce:
1. Install 4.12.0-rc.0
2. Create a user with cluster-wide read-only permissions. For me, it's via binding to a sudoer ClusterRole. I'm not sure where that ClusterRole comes from, but it's:
$ oc get -o yaml clusterrole sudoer
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-05-21T19:39:09Z"
  name: sudoer
  resourceVersion: "7715"
  uid: 28eb2ffa-dccd-47e8-a2d5-6a95e0e8b1e9
rules:
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:admin
  resources:
  - systemusers
  - users
  verbs:
  - impersonate
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:masters
  resources:
  - groups
  - systemgroups
  verbs:
  - impersonate
3. View /settings/cluster
Actual results:
See the "Control plane is hosted" banner.
Expected results:
Possible cases:
Changelog between 3.5.5 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v355-tbd
Changelog between 3.5.3 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v354-2022-04-24
We should deprecate and eventually remove react-helmet as a shared plugin dependency. This dependency is small, and plugins can bring their own version if needed.
This requires updating our webpack plugin to allow dependency fallbacks when a shared dependency is not present.
AC:
Copied from an upstream issue: https://github.com/operator-framework/operator-lifecycle-manager/issues/2830
What did you do?
When attempting to reinstall an operator that uses conversion webhooks by
The resulting InstallPlan enters a failed state with a message similar to:
error validating existing CRs against new CRD's schema for "devworkspaces.workspace.devfile.io": error listing resources in GroupVersionResource schema.GroupVersionResource{Group:"workspace.devfile.io", Version:"v1alpha1", Resource:"devworkspaces"}: conversion webhook for workspace.devfile.io/v1alpha2, Kind=DevWorkspace failed: Post "https://devworkspace-controller-manager-service.test-namespace.svc:443/convert?timeout=30s": service "devworkspace-controller-manager-service" not found
When the original CSVs are deleted, the operator's main deployment and service are removed, but CRDs are left in-cluster. However, since the service/CA bundle/deployment that serve the conversion webhook are removed, conversion webhooks are broken at that point. Eventually this impacts garbage collection on the cluster as well.
This can be reproduced by installing the DevWorkspace Operator from the Red Hat catalog. (I can provide yamls/upstream images that reproduce as well, if that's helpful). It may be necessary to create a DevWorkspace in the cluster before deletion, e.g. by oc apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/samples/plain.yaml
What did you expect to see?
Operator is able to be reinstalled without removing CRDs and all instances.
What did you see instead? Under which circumstances?
It's necessary to completely remove the operator including CRDs. For our operator (DevWorkspace), this also makes uninstall especially complicated as finalizers are used (so CRDs cannot be deleted if the controller is removed, and the controller cannot be restored by reinstalling)
Environment
operator-lifecycle-manager version: 4.10.24
Kubernetes version information: Kubernetes Version: v1.23.5+012e945 (OpenShift 4.10.24)
Kubernetes cluster kind: OpenShift
Description of problem:
pkg/devfile/sample_test.go fails after devfile registry was updated (https://github.com/devfile/registry/pull/126)
OCPBUGS-1677 is about updating our assertion so that the CI job runs successfully again. We might want to backport this as well.
This is about updating the code so that the test uses a mock response instead of the latest registry content, OR checks some specific attributes instead of comparing the full JSON response.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Clone openshift/console
2. Run ./test-backend.sh
Actual results:
Unit tests fail
Expected results:
Unit tests should pass again
Additional info:
This is a clone of issue OCPBUGS-6049. The following is the description of the original issue:
—
Description of problem:
We show the UpdateInProgress component (the progress bars) when the cluster update status is Failing, UpdatingAndFailing, or Updating. The inclusion of the Failing case results in a bug where the progress bars can display when an update is not occurring (see attached screenshot).
Steps to Reproduce:
1. Add the following overrides to ClusterVersion config (/k8s/cluster/config.openshift.io~v1~ClusterVersion/version):

spec:
  overrides:
  - group: apps
    kind: Deployment
    name: console-operator
    namespace: openshift-console-operator
    unmanaged: true
  - group: rbac.authorization.k8s.io
    kind: ClusterRole
    name: console-operator
    namespace: ''
    unmanaged: true

2. Wait for ClusterVersion changes to roll out.
3. Visit /settings/cluster and note the progress bars are present and displaying 100% but the cluster is not updating
Actual results:
Progress bars are displaying when not updating.
Expected results:
Progress bars should not display when the cluster is not updating.
OpenShift 4.12 is going to be built with Go 1.19, but the automatic migration in our repository has failed [1]. The migration should be done manually.
[1]: https://github.com/openshift/cluster-image-registry-operator/pull/802
This is a clone of issue OCPBUGS-5016. The following is the description of the original issue:
—
Description of problem:
When editing any pipeline in the openshift console, the correct content cannot be obtained (the obtained information is the initial information).
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> YAML view -> Cancel -> Actions -> Edit Pipeline -> YAML view
Actual results:
displayed content is incorrect.
Expected results:
Get the content of the current pipeline, not the "pipeline create" content.
Additional info:
If you cancel or save in the "Pipeline Builder" interface after "Edit Pipeline", you get the expected content: Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> Pipeline builder -> Cancel -> Actions -> Edit Pipeline -> YAML view: displays the resource content normally.
Found when running the resource watcher: these keep updating with no real changes, just moving conditions around. This likely needs bugs for all three.
This is a clone of issue OCPBUGS-6213. The following is the description of the original issue:
—
Please review the following PR: https://github.com/openshift/machine-config-operator/pull/3450
The PR has been automatically opened by ART (#aos-art) team automation and indicates
that the image(s) being used downstream for production builds are not consistent
with the images referenced in this component's github repository.
Differences in upstream and downstream builds impact the fidelity of your CI signal.
If you disagree with the content of this PR, please contact @release-artists
in #aos-art to discuss the discrepancy.
Closing this issue without addressing the difference will cause the issue to
be reopened automatically.
Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000
Under some circumstances the static pod machinery fails to populate the node status in time to generate the correct env variables for ETCD_URL_HOST, ETCD_NAME etc. The pods that come up will fail to accept those variables.
This is particularly pronounced in SNO topologies, leading to installation failures.
The fix is to fail fast in the targetconfig/envvar controller to ensure the CEO goes degraded instead of silently failing on the rollout of an invalid static pod.
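The fail-fast behavior could look roughly like the following; the function name and the choice of InternalIP are assumptions for illustration, not the actual cluster-etcd-operator change:

```go
package etcdenv

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// etcdURLHost returns the node's internal IP for use in ETCD_URL_HOST and
// related variables. Instead of silently rendering an empty value (which
// yields an invalid static pod), it returns an error so the controller can
// report Degraded and retry once the node status is populated.
func etcdURLHost(node *corev1.Node) (string, error) {
	for _, addr := range node.Status.Addresses {
		if addr.Type == corev1.NodeInternalIP && addr.Address != "" {
			return addr.Address, nil
		}
	}
	return "", fmt.Errorf("node %s has no InternalIP in its status yet", node.Name)
}
```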
cloud-controller-manager does not react to changes to infrastructure secrets (in the OpenStack case: clouds.yaml).
As a consequence, if credentials are rotated (and the old ones are rendered useless), load balancer creation and deletion will not succeed any more. Restarting the controller fixes the issue on a live cluster.
Logs show that it couldn't find the application credentials:
Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } EnsuringLoadBalancer: Ensuring load balancer Dec 19 12:58:58.909: INFO: At 2022-12-19 12:53:58 +0000 UTC - event for udp-lb-default-svc: {service-controller } SyncLoadBalancerFailed: Error syncing load balancer: failed to ensure load balancer: failed to get subnet to create load balancer for service e2e-test-openstack-q9jnk/udp-lb-default-svc: Unable to re-authenticate: Expected HTTP response code [200 204 300] when accessing [GET https://compute.rdo.mtl2.vexxhost.net/v2.1/0693e2bb538c42b79a49fe6d2e61b0fc/servers/fbeb21b8-05f0-4734-914e-926b6a6225f1/os-interface], but got 401 instead {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}}: Resource not found: [POST https://identity.rdo.mtl2.vexxhost.net/v3/auth/tokens], error message: {"error":{"code":404,"message":"Could not find Application Credential: 1b78233956b34c6cbe5e1c95445972a4.","title":"Not Found"}}
OpenStack CI has been instrumented to restart CCM after credentials rotation, so that we silence this particular issue and avoid masking any other. That workaround must be reverted once this bug is fixed.
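A sketch of one way the controller could react to rotation, using a client-go secret informer; the namespace, secret name, and reload hook below are assumptions, not the cloud-provider-openstack implementation:

```go
package ccm

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchCloudCredentials invokes reload whenever the credentials secret
// changes, so a rotated clouds.yaml is picked up without a pod restart.
func watchCloudCredentials(client kubernetes.Interface, reload func(data map[string][]byte), stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-cloud-controller-manager")) // assumed namespace
	informer := factory.Core().V1().Secrets().Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			secret, ok := newObj.(*corev1.Secret)
			if !ok || secret.Name != "openstack-cloud-credentials" { // assumed secret name
				return
			}
			// e.g. re-parse clouds.yaml and rebuild the OpenStack client.
			reload(secret.Data)
		},
	})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```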
Description of problem:
The current version of openshift's coredns is based on Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
This is a clone of issue OCPBUGS-3633. The following is the description of the original issue:
—
I think something is wrong with the alerts refactor, or perhaps my sync to 4.12.
Failed: suite=[openshift-tests], [sig-instrumentation][Late] Alerts shouldn't report any unexpected alerts in firing or pending state [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] Passed 1 times, failed 0 times, skipped 0 times: we require at least 6 attempts to have a chance at success
We're not getting the passes - from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/aggregated-azure-ovn-upgrade-4.12-micro-release-openshift-release-analysis-aggregator/1592021681235300352, the successful runs don't show any record of the test at all. We need to record successes and failures for aggregation to work right.
This is a clone of issue OCPBUGS-3432. The following is the description of the original issue:
—
Description of problem:
E2E test cases for the knative and pipeline packages have been disabled on CI due to respective operator installation issues. The tests have to be re-enabled once a new operator version is available or the issue is resolved.
References:
https://coreos.slack.com/archives/C6A3NV5J9/p1664545970777239
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102
Description of problem:
$ oc adm must-gather -- gather_ingress_node_firewall [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed[must-gather ] OUT namespace/openshift-must-gather-fr7kc created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f created [must-gather-xvfj4] POD 2022-09-28T16:57:00.887445531Z /bin/bash: /usr/bin/gather_ingress_node_firewall: Permission denied [must-gather-xvfj4] OUT waiting for gather to complete [must-gather-xvfj4] OUT downloading gather output [must-gather-xvfj4] OUT receiving incremental file list [must-gather-xvfj4] OUT ./ [must-gather-xvfj4] OUT [must-gather-xvfj4] OUT sent 27 bytes received 40 bytes 26.80 bytes/sec [must-gather-xvfj4] OUT total size is 0 speedup is 0.00 [must-gather ] OUT namespace/openshift-must-gather-fr7kc deleted [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Get the below error when upgrading to OCP 4.12 from 4.9->4.10->4.11.
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-08-24-091058 True True 4h Unable to apply 4.12.0-0.nightly-2022-08-24-053339: the workload openshift-operator-lifecycle-manager/package-server-manager cannot roll out - lastTransitionTime: "2022-08-25T04:47:36Z" lastUpdateTime: "2022-08-25T04:47:36Z" message: 'pods "package-server-manager-85b6dc4d89-sdzcc" is forbidden: violates PodSecurity "restricted:v1.24": seccompProfile (pod or container "package-server-manager" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")' reason: FailedCreate status: "True" type: ReplicaFailure
Version-Release number of selected component (if applicable):
MacBook-Pro:~ jianzhang$ oc exec catalog-operator-c5c655d5c-b9lcn -- olm --version
OLM version: 0.19.0
git commit: 8a984d41acc67c0bc9bfe807fadeef23f83abd44
How reproducible:
always
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-08-24-091058
2. Upgrade it to 4.12.0-0.nightly-2022-08-24-053339
Actual results:
The cluster upgrading is blocked. Get the above errors as described.
Expected results:
Upgraded to 4.12 from old OCP versions 4.5, 4.9 successfully.
Additional info:
MacBook-Pro:~ jianzhang$ oc get deployment package-server-manager -o yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "5" include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" creationTimestamp: "2022-08-25T00:14:08Z" generation: 5 labels: app: package-server-manager name: package-server-manager namespace: openshift-operator-lifecycle-manager ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 3fd29082-0e76-4b09-988e-78cb5fc7c8b5 resourceVersion: "169028" uid: c8f7cbe2-4f82-40ce-9468-817ffefa903f spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: package-server-manager strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: null labels: app: package-server-manager spec: containers: - args: - --name - $(PACKAGESERVER_NAME) - --namespace - $(PACKAGESERVER_NAMESPACE) command: - /bin/psm - start env: - name: PACKAGESERVER_NAME value: packageserver - name: PACKAGESERVER_IMAGE value: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d49e1e27114f4b719bc8f3c222b2c5934d3b8028c79ec8e2bd288f6e9b5b3d5c - name: PACKAGESERVER_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: RELEASE_VERSION value: 4.12.0-0.nightly-2022-08-24-053339 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d49e1e27114f4b719bc8f3c222b2c5934d3b8028c79ec8e2bd288f6e9b5b3d5c imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: package-server-manager readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError dnsPolicy: ClusterFirst nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/master: "" priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true serviceAccount: olm-operator-serviceaccount serviceAccountName: olm-operator-serviceaccount terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 status: availableReplicas: 1 conditions: - lastTransitionTime: "2022-08-25T03:14:20Z" lastUpdateTime: "2022-08-25T03:14:20Z" message: Deployment has minimum availability. 
reason: MinimumReplicasAvailable status: "True" type: Available - lastTransitionTime: "2022-08-25T04:47:36Z" lastUpdateTime: "2022-08-25T04:47:36Z" message: 'pods "package-server-manager-85b6dc4d89-sdzcc" is forbidden: violates PodSecurity "restricted:v1.24": seccompProfile (pod or container "package-server-manager" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")' reason: FailedCreate status: "True" type: ReplicaFailure - lastTransitionTime: "2022-08-25T04:57:37Z" lastUpdateTime: "2022-08-25T04:57:37Z" message: ReplicaSet "package-server-manager-85b6dc4d89" has timed out progressing. reason: ProgressDeadlineExceeded status: "False" type: Progressing observedGeneration: 5 readyReplicas: 1 replicas: 1 unavailableReplicas: 1
This is a clone of issue OCPBUGS-4022. The following is the description of the original issue:
—
Description of problem:
Unnecessary react warning:
Warning: Each child in a list should have a unique "key" prop. Check the render method of `NavSection`. See https://reactjs.org/link/warning-keys for more information. NavItemHref@http://localhost:9012/static/main-785e94355aeacc12c321.js:5141:88 NavSection@http://localhost:9012/static/main-785e94355aeacc12c321.js:5294:20 PluginNavItem@http://localhost:9012/static/main-785e94355aeacc12c321.js:5582:23 div PerspectiveNav@http://localhost:9012/static/main-785e94355aeacc12c321.js:5398:134
Version-Release number of selected component (if applicable):
4.11 was fine
4.12 and 4.13 (master) shows this warning
How reproducible:
Always
Steps to Reproduce:
1. Open browser log
2. Open web console
Actual results:
React warning
Expected results:
Obviously no react warning
When installing an OCP cluster with the worker nodes' VM type specified as high performance, some of the configuration settings of said VMs do not match the configuration settings a high performance VM should have.
Specific configurations that do not match are described in subtasks.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
When installing an OCP cluster with the worker nodes' VM type specified as high performance, manual and automatic migration are enabled on said VMs.
However, high performance worker VMs are created with the engine's default values, so only manual migration should be enabled.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
How reproducible: 100%
How to reproduce:
1. Create install-config.yaml with a vmType field and set it to high performance, i.e.:
apiVersion: v1
baseDomain: basedomain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
      vmType: high_performance
  replicas: 2
...
2. Run installation
./openshift-install create cluster --dir=resources --log-level=debug
3. Check worker VM's configuration in the RHV webconsole.
Expected:
Only manual migration (under Host) should be enabled.
Actual:
Manual and automatic migration is enabled.
Description of problem:
The metal3 pod does not come up on SNO when creating a Provisioning with provisioningNetwork set to Disabled. The issue is that on SNO there is no Machine and no BareMetalHost; the operator looks for Machine objects to populate the provisioningMacAddresses field. However, when provisioningNetwork is Disabled, provisioningMacAddresses is not used anyway. You can work around this issue by populating provisioningMacAddresses with a dummy address, like this:

kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningMacAddresses:
  - aa:aa:aa:aa:aa:aa
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Try to bring up Provisioning on SNO in 4.11.17 with provisioningNetwork set to Disabled:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Steps to Reproduce:
1. 2. 3.
Actual results:
controller/provisioning "msg"="Reconciler error" "error"="machines with cluster-api-machine-role=master not found" "name"="provisioning-configuration" "namespace"="" "reconciler group"="metal3.io" "reconciler kind"="Provisioning"
Expected results:
metal3 pod should be deployed
Additional info:
This issue is a result of this change: https://github.com/openshift/cluster-baremetal-operator/pull/307 See this Slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1670530729168599
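For illustration only, here is a minimal sketch of the guard that would avoid the Machine lookup when the provisioning network is disabled. The type and field names below are simplified assumptions, not the cluster-baremetal-operator's actual code:
~~~
// Sketch only: simplified stand-ins for the Provisioning spec fields used
// by cluster-baremetal-operator; not the operator's real types.
package main

import "fmt"

type ProvisioningSpec struct {
	ProvisioningNetwork      string
	ProvisioningMacAddresses []string
}

// resolveMacAddresses returns the MAC addresses to use, skipping the
// Machine lookup entirely when the provisioning network is Disabled,
// since the addresses are not used in that mode.
func resolveMacAddresses(spec ProvisioningSpec, lookupFromMachines func() ([]string, error)) ([]string, error) {
	if spec.ProvisioningNetwork == "Disabled" {
		return nil, nil // not needed; avoids "machines ... not found" on SNO
	}
	if len(spec.ProvisioningMacAddresses) > 0 {
		return spec.ProvisioningMacAddresses, nil
	}
	return lookupFromMachines()
}

func main() {
	macs, err := resolveMacAddresses(ProvisioningSpec{ProvisioningNetwork: "Disabled"}, func() ([]string, error) {
		return nil, fmt.Errorf("machines with cluster-api-machine-role=master not found")
	})
	fmt.Println(macs, err) // [] <nil>
}
~~~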
Description of problem:
release-4.12 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project into the release-1.25 branch. We should import them into our downstream fork.
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3524. The following is the description of the original issue:
—
Description of problem:
Install a fully private cluster on Azure against 4.12.0-0.nightly-2022-11-10-033725; the storage account (sa) for the coreOS image has public access.
$ az storage account list -g jima-azure-11a-f58lp-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterptkpx True
imageregistryjimaazrsgcc False
With the same profile on 4.11.0-0.nightly-2022-11-10-202051, the sa for the coreOS image is not publicly accessible.
$ az storage account list -g jima-azure-11c-kf9hw-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterr8wv9 False
imageregistryjimaaz9btdx False
Checked that terraform-provider-azurerm version is different between 4.11 and 4.12.
4.11: v2.98.0
4.12: v3.19.1
In terraform-provider-azurerm v2.98.0, the property allow_blob_public_access is used to manage the sa public access, and its default value is false.
In terraform-provider-azurerm v3.19.1, the property allow_blob_public_access was renamed to allow_nested_items_to_be_public, and its default value is true.
https://github.com/hashicorp/terraform-provider-azurerm/blob/main/CHANGELOG.md#300-march-24-2022
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-10-033725
How reproducible:
always on 4.12
Steps to Reproduce:
1. Install fully private cluster on azure against 4.12 payload 2. 3.
Actual results:
sa for coreos image is publicly accessible
Expected results:
sa for coreos image should not be publicly accessible
Additional info:
only happened on 4.12
This is a clone of issue OCPBUGS-5505. The following is the description of the original issue:
—
The upgradeability check in CVO is throttled (essentially cached) for a nondeterministic period of time, the same as the minimal sync period computed at runtime. The period can be up to 4 minutes, determined at CVO start time as 2 minutes * (random(0..1) + 1). We agreed with Trevor that such throttling is unnecessarily aggressive (the check is not that expensive). It also causes CI flakes, because the matching test only has a 3-minute timeout. Additionally, the non-determinism and longer throttling make the UX worse, because actions done in the cluster may have their observable effect delayed.
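As a rough illustration of the intended behavior, here is a minimal sketch of a check cached behind a fixed, deterministic interval. This is not CVO's actual implementation; the names and the 1-minute constant are assumptions based on the proposal above:
~~~
package main

import (
	"fmt"
	"sync"
	"time"
)

// throttledCheck caches the result of a check and only re-runs it after a
// fixed, deterministic minimum interval has elapsed.
type throttledCheck struct {
	mu          sync.Mutex
	minInterval time.Duration
	lastRun     time.Time
	lastResult  string
	check       func() string
}

func (t *throttledCheck) Run(now time.Time) string {
	t.mu.Lock()
	defer t.mu.Unlock()
	if !t.lastRun.IsZero() && now.Sub(t.lastRun) < t.minInterval {
		// corresponds to "Upgradeable conditions were recently checked, will try later."
		return t.lastResult
	}
	t.lastRun = now
	t.lastResult = t.check()
	return t.lastResult
}

func main() {
	c := &throttledCheck{
		minInterval: time.Minute, // deterministic and short, not 2m * (random(0..1) + 1)
		check:       func() string { return "Upgradeable=False (AdminAckRequired)" },
	}
	start := time.Now()
	fmt.Println(c.Run(start))                       // fresh check
	fmt.Println(c.Run(start.Add(30 * time.Second))) // cache hit
	fmt.Println(c.Run(start.Add(90 * time.Second))) // re-checked
}
~~~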
discovered in 4.10 -> 4.11 upgrade jobs
The test seems to flake in ~10% of 4.10->4.11 Azure jobs (sippy). There does not seem to be that much impact on non-Azure jobs though, which is a bit weird.
Inspect the CVO log and E2E logs from failing jobs with the provided [^check-cvo.py] helper:
$ ./check-cvo.py cvo.log && echo PASS || echo FAIL
Preferably, inspect CVO logs of clusters that just underwent an upgrade (upgrades makes the original problematic behavior more likely to surface)
$ ./check-cvo.py openshift-cluster-version_cluster-version-operator-5b6966c474-g4kwk_cluster-version-operator.log && echo PASS || echo FAIL
FAIL: Cache hit at 11:59:55.332339 0:03:13.665006 after check at 11:56:41.667333
FAIL: Cache hit at 12:06:22.663215 0:03:13.664964 after check at 12:03:08.998251
FAIL: Cache hit at 12:12:49.997119 0:03:13.665598 after check at 12:09:36.331521
FAIL: Cache hit at 12:19:17.328510 0:03:13.664906 after check at 12:16:03.663604
FAIL: Cache hit at 12:25:44.662290 0:03:13.666759 after check at 12:22:30.995531
Upgradeability checks: 5
Upgradeability check cache hits: 12
FAIL
Note that the bug is probabilistic, so not all unfixed clusters will exhibit the behavior. My guess of the incidence rate is about 30-40%.
$ ./check-cvo.py openshift-cluster-version_cluster-version-operator-7b8f85d455-mk9fs_cluster-version-operator.log && echo PASS || echo FAIL
Upgradeability checks: 12
Upgradeability check cache hits: 11
PASS
The actual numbers are not relevant; the lack of failure is (unless the upgradeability check count is zero, which means the test is not conclusive; the script warns about that).
$ curl --silent https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1607602927633960960/artifacts/e2e-azure-upgrade/gather-extra/artifacts/pods/openshift-cluster-version_cluster-version-operator-7b7d4b5bbd-zjqdt_cluster-version-operator.log | grep upgradeable.go
...
I1227 06:50:59.023190 1 upgradeable.go:122] Cluster current version=4.10.46
I1227 06:50:59.042735 1 upgradeable.go:42] Upgradeable conditions were recently checked, will try later.
I1227 06:51:14.024345 1 upgradeable.go:42] Upgradeable conditions were recently checked, will try later.
I1227 06:53:23.080768 1 upgradeable.go:42] Upgradeable conditions were recently checked, will try later.
I1227 06:56:59.366010 1 upgradeable.go:122] Cluster current version=4.11.0-0.ci-2022-12-26-193640

$ curl --silent https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade/1607602927633960960/artifacts/e2e-azure-upgrade/openshift-e2e-test/artifacts/e2e.log | grep 'Kubernetes 1.25 and therefore OpenShift 4.12'
Dec 27 06:51:15.319: INFO: Waiting for Upgradeable to be AdminAckRequired for "Kubernetes 1.25 and therefore OpenShift 4.12 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6955381 for details and instructions." ...
Dec 27 06:54:15.413: FAIL: Error while waiting for Upgradeable to complain about AdminAckRequired with message "Kubernetes 1.25 and therefore OpenShift 4.12 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6955381 for details and instructions.": timed out waiting for the condition
The test passes. Also, the "Upgradeable conditions were recently checked, will try later." messages in CVO logs should never occur more than a deterministic, short amount of time (I propose 1 minute) after upgradeability was checked.
I tested the throttling period in https://github.com/openshift/cluster-version-operator/pull/880. With a period of 15m, the test pass rate was 4 of 9. With a period of 1m, the test did not fail at all.
Some context in Slack thread
Name: Routing
Description: Please change the "Routing" component to be a subcomponent "router" of the "Networking" component.
Component: change to "Networking".
Subcomponent: change to "router".
Existing fields (default assignee, default QA contact, default CC email list, etc.) should remain the same as they currently are.
Default Assignee: aos-network-edge-staff@bot.bugzilla.redhat.com
Default QA Contact: hongli@redhat.com
Default CC List: aos-network-edge-staff@bot.bugzilla.redhat.com
Additional Notes:
I filled in "Default CC email list" because the form validation would not permit me to omit it. However, it can be left empty in Bugzilla (it is currently empty).
If possible, we would like this change to be done prior to the Bugzilla-to-Jira migration to avoid the need to make the change after the migration.
Description of problem:
With "createFirewallRules: Enabled", after successful "create cluster" and then "destroy cluster", the created firewall-rules in the shared VPC are not deleted.
Version-Release number of selected component (if applicable):
$ ./openshift-install version ./openshift-install 4.12.0-0.nightly-2022-09-28-204419 built from commit 9eb0224926982cdd6cae53b872326292133e532d release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. Try IPI installation with "createFirewallRules: Enabled", which succeeded
2. Try destroying the cluster, which succeeded
3. Check firewall-rules in the shared VPC
Actual results:
After destroying the cluster, its firewall-rules created by installer in the shared VPC are not deleted.
Expected results:
Those firewall-rules should be deleted during destroying the cluster.
Additional info:
$ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:226 24,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp: 9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ $ yq-3.3.0 r test2/install-config.yaml platform gcp: projectID: openshift-qe region: us-central1 computeSubnet: installer-shared-vpc-subnet-2 controlPlaneSubnet: installer-shared-vpc-subnet-1 createFirewallRules: Enabled network: installer-shared-vpc networkProjectID: openshift-qe-shared-vpc $ $ yq-3.3.0 r test2/install-config.yaml metadata creationTimestamp: null name: jiwei-1013-01 $ $ openshift-install create cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Consuming Install Config from target directory INFO Creating infrastructure resources... INFO Waiting up to 20m0s (until 4:06AM) for the Kubernetes API at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443... INFO API v1.24.0+8c7c967 up INFO Waiting up to 30m0s (until 4:20AM) for bootstrapping to complete... INFO Destroying the bootstrap resources... INFO Waiting up to 40m0s (until 4:42AM) for the cluster at https://api.jiwei-1013-01.qe.gcp.devcluster.openshift.com:6443 to initialize... INFO Checking to see if there is a route at openshift-console/console... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/fedora/test2/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.jiwei-1013-01.qe.gcp.devcluster.openshift.com INFO Login to the console with user: "kubeadmin", and password: "wWPkc-8G2Lw-xe2Vw-DgWha" INFO Time elapsed: 39m14s $ $ openshift-install destroy cluster --dir test2 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" INFO Stopped instance jiwei-1013-01-464st-worker-b-pmg5z INFO Stopped instance jiwei-1013-01-464st-worker-a-csg2j INFO Stopped instance jiwei-1013-01-464st-master-1 INFO Stopped instance jiwei-1013-01-464st-master-2 INFO Stopped instance jiwei-1013-01-464st-master-0 INFO Deleted 2 recordset(s) in zone qe INFO Deleted 3 recordset(s) in zone jiwei-1013-01-464st-private-zone INFO Deleted DNS zone jiwei-1013-01-464st-private-zone INFO Deleted bucket jiwei-1013-01-464st-image-registry-us-central1-ulgxgjfqxbdnrhd INFO Deleted instance jiwei-1013-01-464st-master-0 INFO Deleted instance jiwei-1013-01-464st-worker-a-csg2j INFO Deleted instance jiwei-1013-01-464st-master-1 INFO Deleted instance jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted instance jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-2 INFO Deleted disk jiwei-1013-01-464st-master-1 INFO Deleted disk jiwei-1013-01-464st-worker-b-pmg5z INFO Deleted disk jiwei-1013-01-464st-master-0 INFO Deleted disk jiwei-1013-01-464st-worker-a-csg2j INFO Deleted address jiwei-1013-01-464st-cluster-public-ip INFO Deleted address jiwei-1013-01-464st-cluster-ip INFO Deleted forwarding rule a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted forwarding rule jiwei-1013-01-464st-api INFO Deleted forwarding rule jiwei-1013-01-464st-api-internal INFO Deleted target pool a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted target pool jiwei-1013-01-464st-api INFO Deleted backend service jiwei-1013-01-464st-api-internal INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-a INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-c INFO Deleted instance group jiwei-1013-01-464st-master-us-central1-b INFO Deleted health check jiwei-1013-01-464st-api-internal INFO Deleted HTTP health check a516d89f9a4f14bdfb55a525b1a12a91 INFO Deleted HTTP health check jiwei-1013-01-464st-api INFO Time elapsed: 4m18s $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 False jiwei-1013-01-464st-api installer-shared-vpc INGRESS 1000 tcp:6443 False jiwei-1013-01-464st-control-plane installer-shared-vpc INGRESS 1000 tcp:22623,tcp:10257,tcp:10259 False jiwei-1013-01-464st-etcd installer-shared-vpc INGRESS 1000 tcp:2379-2380 False jiwei-1013-01-464st-health-checks installer-shared-vpc INGRESS 1000 tcp:6080,tcp:6443,tcp:22624 False jiwei-1013-01-464st-internal-cluster installer-shared-vpc INGRESS 1000 
tcp:30000-32767,udp:9000-9999,udp:30000-32767,udp:4789,udp:6081,tcp:9000-9999,udp:500,udp:4500,esp,tcp:10250 False jiwei-1013-01-464st-internal-network installer-shared-vpc INGRESS 1000 icmp,tcp:22 False k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc installer-shared-vpc INGRESS 1000 tcp:30268 False k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 installer-shared-vpc INGRESS 1000 tcp:80,tcp:443 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $ FYI manually deleting those firewall-rules in the shared VPC does work. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-api Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-api]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-control-plane Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-control-plane]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-etcd Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-etcd]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-health-checks Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-health-checks]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-cluster Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-cluster]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q jiwei-1013-01-464st-internal-network Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/jiwei-1013-01-464st-internal-network]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-a516d89f9a4f14bdfb55a525b1a12a91-http-hc]. $ gcloud --project openshift-qe-shared-vpc compute firewall-rules delete -q k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91 Deleted [https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/firewalls/k8s-fw-a516d89f9a4f14bdfb55a525b1a12a91]. $ $ gcloud --project openshift-qe-shared-vpc compute firewall-rules list --filter='network=installer-shared-vpc' NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED ci-op-xpn-ingress-common installer-shared-vpc INGRESS 60000 tcp:6443,tcp:22,tcp:80,tcp:443,icmp False ci-op-xpn-ingress-health-checks installer-shared-vpc INGRESS 60000 tcp:30000-32767,udp:30000-32767,tcp:6080,tcp:6443,tcp:22624,tcp:32335 False ci-op-xpn-ingress-internal-network installer-shared-vpc INGRESS 60000 udp:4789,udp:6081,udp:500,udp:4500,esp,tcp:9000-9999,udp:9000-9999,tcp:10250,tcp:30000-32767,udp:30000-32767,tcp:10257,tcp:10259,tcp:22623,tcp:2379-2380 FalseTo show all fields of the firewall, please show in JSON format: --format=json To show all fields in table format, please see the examples in --help. $
This is a clone of issue OCPBUGS-2513. The following is the description of the original issue:
—
Description of problem:
Agent-based installation is failing in a disconnected env because a pull secret is required for registry.ci.openshift.org. As we are installing the cluster in a disconnected env, the mirror registry secrets alone should be enough for pulling the image.
Version-Release number of selected component (if applicable):
registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406
How reproducible:
Always
Steps to Reproduce:
1. Setup mirror registry with this registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406 release.
2. Add the ICSP information in the install-config file.
4. Create agent.iso using install-config.yaml and agent-config.yaml.
5. ssh to the node zero to see the error in create-cluster-and-infraenv.service.
Actual results:
create-cluster-and-infraenv.service is failing with the below error:
time="2022-10-18T09:36:13Z" level=fatal msg="Failed to register cluster with assisted-service: AssistedServiceError Code: 400 Href: ID: 400 Kind: Error Reason: pull secret for new cluster is invalid: pull secret must contain auth for \"registry.ci.openshift.org\""
Expected results:
create-cluster-and-infraenv.service should be successfully started.
Additional info:
Refer this similar bug https://bugzilla.redhat.com/show_bug.cgi?id=1990659
This is a clone of issue OCPBUGS-4089. The following is the description of the original issue:
—
The kube-state-metrics pod inside the openshift-monitoring namespace is not running as expected.
On checking the logs, I can see that there is a panic caused by a nil pointer dereference:
~~~
2022-11-22T09:57:17.901790234Z I1122 09:57:17.901768 1 main.go:199] Starting kube-state-metrics self metrics server: 127.0.0.1:8082
2022-11-22T09:57:17.901975837Z I1122 09:57:17.901951 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.902389844Z I1122 09:57:17.902291 1 main.go:210] Starting metrics server: 127.0.0.1:8081
2022-11-22T09:57:17.903191857Z I1122 09:57:17.903133 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.906272505Z I1122 09:57:17.906224 1 builder.go:191] Active resources: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
2022-11-22T09:57:17.917758187Z E1122 09:57:17.917560 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
2022-11-22T09:57:17.917758187Z goroutine 24 [running]:
2022-11-22T09:57:17.917758187Z k8s.io/apimachinery/pkg/util/runtime.logPanic(
)
2022-11-22T09:57:17.917758187Z /usr/lib/golang/src/runtime/panic.go:1038 +0x215
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.ingressMetricFamilies.func6(0x40)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/internal/store/ingress.go:136 +0x189
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.wrapIngressFunc.func1(
)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/pkg/metric_generator/generator.go:107 +0xd8
~~~
Logs are attached to the support case
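The panic points at a nil pointer dereference while generating ingress metrics. As a hedged illustration of the defensive pattern involved (a sketch only, using the upstream Kubernetes Ingress types; this is not the actual kube-state-metrics fix), optional fields such as a rule's HTTP section need to be nil-checked before being dereferenced:
~~~
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
)

// ingressPaths collects host/path pairs from an Ingress, guarding the
// optional fields that may legitimately be nil (e.g. a rule that only
// sets a host, or a backend that is a resource rather than a service).
func ingressPaths(ing *networkingv1.Ingress) []string {
	var out []string
	for _, rule := range ing.Spec.Rules {
		if rule.HTTP == nil { // a nil HTTP section would otherwise panic on rule.HTTP.Paths
			continue
		}
		for _, p := range rule.HTTP.Paths {
			svc := "<none>"
			if p.Backend.Service != nil {
				svc = p.Backend.Service.Name
			}
			out = append(out, fmt.Sprintf("%s%s -> %s", rule.Host, p.Path, svc))
		}
	}
	return out
}

func main() {
	ing := &networkingv1.Ingress{
		Spec: networkingv1.IngressSpec{
			Rules: []networkingv1.IngressRule{{Host: "example.com"}}, // no HTTP section
		},
	}
	fmt.Println(ingressPaths(ing)) // [] instead of a panic
}
~~~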
Description of problem:
In a 4.11 cluster with only openshift-samples enabled, the 4.12 introduced optional COs console and insights are installed. While upgrading to 4.12, CVO considers them to be disabled explicitly and skips reconciling them. So these COs are not upgraded to 4.12. Installed COs cannot be disabled, so CVO is supposed to implicitly enable them. $ oc get clusterversion -oyaml { "apiVersion": "config.openshift.io/v1", "kind": "ClusterVersion", "metadata": { "creationTimestamp": "2022-09-30T05:02:31Z", "generation": 3, "name": "version", "resourceVersion": "134808", "uid": "bd95473f-ffda-402d-8fe3-74f852a9d6eb" }, "spec": { "capabilities": { "additionalEnabledCapabilities": [ "openshift-samples" ], "baselineCapabilitySet": "None" }, "channel": "stable-4.11", "clusterID": "8eda5167-a730-4b39-be1d-214a80506d34", "desiredUpdate": { "force": true, "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "" } }, "status": { "availableUpdates": null, "capabilities": { "enabledCapabilities": [ "openshift-samples" ], "knownCapabilities": [ "Console", "Insights", "Storage", "baremetal", "marketplace", "openshift-samples" ] }, "conditions": [ { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Unable to retrieve available updates: currently reconciling cluster version 4.12.0-0.nightly-2022-09-28-204419 not found in the \"stable-4.11\" channel", "reason": "VersionNotFound", "status": "False", "type": "RetrievedUpdates" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Capabilities match configured spec", "reason": "AsExpected", "status": "False", "type": "ImplicitlyEnabledCapabilities" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Payload loaded version=\"4.12.0-0.nightly-2022-09-28-204419\" image=\"registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc\" architecture=\"amd64\"", "reason": "PayloadLoaded", "status": "True", "type": "ReleaseAccepted" }, { "lastTransitionTime": "2022-09-30T05:23:18Z", "message": "Done applying 4.12.0-0.nightly-2022-09-28-204419", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-09-30T07:05:42Z", "status": "False", "type": "Failing" }, { "lastTransitionTime": "2022-09-30T07:41:53Z", "message": "Cluster version is 4.12.0-0.nightly-2022-09-28-204419", "status": "False", "type": "Progressing" } ], "desired": { "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "4.12.0-0.nightly-2022-09-28-204419" }, "history": [ { "completionTime": "2022-09-30T07:41:53Z", "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "startedTime": "2022-09-30T06:42:01Z", "state": "Completed", "verified": false, "version": "4.12.0-0.nightly-2022-09-28-204419" }, { "completionTime": "2022-09-30T05:23:18Z", "image": "registry.ci.openshift.org/ocp/release@sha256:5a6f6d1bf5c752c75d7554aa927c06b5ea0880b51909e83387ee4d3bca424631", "startedTime": "2022-09-30T05:02:33Z", "state": "Completed", "verified": false, "version": "4.11.0-0.nightly-2022-09-29-191451" } ], "observedGeneration": 3, "versionHash": "CSCJ2fxM_2o=" } } $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-28-204419 True False False 93m cloud-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h56m cloud-credential 
4.12.0-0.nightly-2022-09-28-204419 True False False 3h59m cluster-autoscaler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m config-operator 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m console 4.11.0-0.nightly-2022-09-29-191451 True False False 3h45m control-plane-machine-set 4.12.0-0.nightly-2022-09-28-204419 True False False 117m csi-snapshot-controller 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m dns 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m etcd 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m image-registry 4.12.0-0.nightly-2022-09-28-204419 True False False 3h46m ingress 4.12.0-0.nightly-2022-09-28-204419 True False False 151m insights 4.11.0-0.nightly-2022-09-29-191451 True False False 3h48m kube-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m kube-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-scheduler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-storage-version-migrator 4.12.0-0.nightly-2022-09-28-204419 True False False 91m machine-api 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m machine-approver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m machine-config 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m monitoring 4.12.0-0.nightly-2022-09-28-204419 True False False 3h44m network 4.12.0-0.nightly-2022-09-28-204419 True False False 3h55m node-tuning 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m openshift-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-samples 4.12.0-0.nightly-2022-09-28-204419 True False False 116m operator-lifecycle-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m service-ca 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m storage 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.11 cluster with only openshift-samples enabled
2. Upgrade to 4.12
3.
Actual results:
The 4.12 introduced optional CO console and insights are not upgraded to 4.12
Expected results:
All the installed COs get upgraded
Additional info:
Description of problem:
openshift-install does not detect releaseImage mismatches between cluster-image-set.yaml and registries.conf
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create ZTP inputs for image generation where registries.conf does not have any source matching the binary releaseImage (the binary image, which can be obtained by running "openshift-install version"). You can also set this value in the ZTP manifest cluster-image-set.yaml.
2. Run openshift-install agent create image
Actual results:
Image is generated with no warnings
Expected results:
Image is generated with warning message - "The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage value %s", releaseImagePath
Additional info:
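A minimal sketch of the kind of check the installer could perform (assumed, simplified types; not the actual openshift-install validation code): warn when no ImageContentSources entry matches the repository of the release image.
~~~
package main

import (
	"fmt"
	"strings"
)

// imageContentSource mirrors the shape of an install-config
// ImageContentSources entry (simplified for this sketch).
type imageContentSource struct {
	Source  string
	Mirrors []string
}

// releaseImageMatchesSources reports whether any source covers the
// repository part of the release image (digest or tag stripped).
func releaseImageMatchesSources(releaseImage string, sources []imageContentSource) bool {
	repo := releaseImage
	if i := strings.Index(repo, "@"); i >= 0 {
		repo = repo[:i]
	} else if i := strings.LastIndex(repo, ":"); i > strings.LastIndex(repo, "/") {
		repo = repo[:i]
	}
	for _, s := range sources {
		if strings.HasPrefix(repo, s.Source) {
			return true
		}
	}
	return false
}

func main() {
	release := "registry.ci.openshift.org/ocp/release@sha256:abc123"
	sources := []imageContentSource{{Source: "quay.io/openshift-release-dev/ocp-release"}}
	if !releaseImageMatchesSources(release, sources) {
		fmt.Printf("warning: ImageContentSources should have at least one source matching the releaseImage value %s\n", release)
	}
}
~~~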
This is a clone of issue OCPBUGS-3027. The following is the description of the original issue:
—
Description of problem:
When running the console in development mode per https://github.com/openshift/console#frontend-development, metrics do not load on the cluster overview, pods list page, pod details page (Metrics tab is missing), etc.
Samuel Padgett suspects the changes in https://github.com/openshift/console/commit/0bd839da219462ea585183de1c856fb60e9f96fb are related.
This is a clone of issue OCPBUGS-6175. The following is the description of the original issue:
—
Description of problem:
When the cluster is configured with a proxy, the Swift client in the image registry operator is not using the proxy to authenticate with OpenStack, so it's unable to reach the OpenStack API. This issue became evident since support was recently added to not fall back to Cinder in case Swift is available[1].
[1]https://github.com/openshift/cluster-image-registry-operator/pull/819
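For illustration only (a sketch, not the operator's actual code), the underlying idea is to make the HTTP client used for the OpenStack/Swift calls honor the cluster-wide proxy settings, for example via the standard proxy environment variables:
~~~
package main

import (
	"fmt"
	"net/http"
	"time"
)

// newProxyAwareClient builds an HTTP client that respects HTTP_PROXY,
// HTTPS_PROXY and NO_PROXY from the environment. A client like this
// would then be handed to the OpenStack/Swift SDK instead of the
// default one, so authentication requests go through the proxy.
func newProxyAwareClient() *http.Client {
	return &http.Client{
		Timeout: 30 * time.Second,
		Transport: &http.Transport{
			Proxy: http.ProxyFromEnvironment,
		},
	}
}

func main() {
	c := newProxyAwareClient()
	// The proxy for a given request is resolved from the environment:
	req, _ := http.NewRequest("GET", "https://keystone.example.com:5000/v3", nil)
	proxyURL, _ := c.Transport.(*http.Transport).Proxy(req)
	fmt.Println("proxy in use:", proxyURL) // <nil> when no proxy is configured
}
~~~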
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy a cluster with proxy and restricted installation 2. 3.
Actual results:
Expected results:
Additional info:
We should refactor the assisted-installer ops.go code to run exec commands behind an interface, to allow mocking.
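A minimal sketch of that pattern (illustrative only; the names are assumptions, not the assisted-installer code): command execution goes through a small interface so tests can substitute a mock.
~~~
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Executor is the seam the production code depends on.
type Executor interface {
	Execute(command string, args ...string) (string, error)
}

// hostExecutor shells out for real.
type hostExecutor struct{}

func (hostExecutor) Execute(command string, args ...string) (string, error) {
	out, err := exec.Command(command, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// fakeExecutor records calls and returns canned output for tests.
type fakeExecutor struct {
	calls  []string
	output string
}

func (f *fakeExecutor) Execute(command string, args ...string) (string, error) {
	f.calls = append(f.calls, command+" "+strings.Join(args, " "))
	return f.output, nil
}

// wipeDisk is a hypothetical "ops" function written against the interface.
func wipeDisk(e Executor, device string) (string, error) {
	return e.Execute("wipefs", "--all", device)
}

func main() {
	fake := &fakeExecutor{output: "ok"}
	out, _ := wipeDisk(fake, "/dev/sda")
	fmt.Println(out, fake.calls) // ok [wipefs --all /dev/sda]
}
~~~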
This is a clone of issue OCPBUGS-2727. The following is the description of the original issue:
—
Description of problem:
CVO recently introduced a new precondition, RecommendedUpdate[1]. When we request an upgrade to a version which is not an available update, the precondition gets UnknownUpdate and blocks the upgrade.

# oc get clusterversion/version -ojson | jq -r '.status.availableUpdates'
null

# oc get clusterversion/version -ojson | jq -r '.status.conditions[]|select(.type == "ReleaseAccepted")'
{
  "lastTransitionTime": "2022-10-20T08:16:59Z",
  "message": "Preconditions failed for payload loaded version=\"4.12.0-0.nightly-multi-2022-10-18-153953\" image=\"quay.io/openshift-release-dev/ocp-release-nightly@sha256:71c1912990db7933bcda1d6914228e8b9b0d36ddba265164ee33a1bca06fe695\": Precondition \"ClusterVersionRecommendedUpdate\" failed because of \"UnknownUpdate\": RetrievedUpdates=False (VersionNotFound), so the recommended status of updating from 4.12.0-0.nightly-multi-2022-10-18-091108 to 4.12.0-0.nightly-multi-2022-10-18-153953 is unknown.",
  "reason": "PreconditionChecks",
  "status": "False",
  "type": "ReleaseAccepted"
}

[1]https://github.com/openshift/cluster-version-operator/pull/841/
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-multi-2022-10-18-091108
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.12 cluster
2. Upgrade to a version which is not in the available updates:

# oc adm upgrade --allow-explicit-upgrade --to-image=quay.io/openshift-release-dev/ocp-release-nightly@sha256:71c1912990db7933bcda1d6914228e8b9b0d36ddba265164ee33a1bca06fe695
warning: The requested upgrade image is not one of the available updates. You have used --allow-explicit-upgrade for the update to proceed anyway
Requesting update to release image quay.io/openshift-release-dev/ocp-release-nightly@sha256:71c1912990db7933bcda1d6914228e8b9b0d36ddba265164ee33a1bca06fe695
Actual results:
CVO precondition check fails and blocks upgrade
Expected results:
Upgrade proceeds
Additional info:
OVS 2.17+ introduced an optimization of "weak references" to substantially speed up database snapshots. In some cases weak references may leak memory; the aforementioned commit fixes that and has been pulled into ovs2.17-62 and later.
This is a clone of issue OCPBUGS-4252. The following is the description of the original issue:
—
Description of problem: When visiting the Terminal tab of a Node details page, an error is displayed instead of the terminal
Steps to Reproduce:
1. Go to the Terminal tab of a Node details page (e.g., /k8s/cluster/nodes/ip-10-0-129-13.ec2.internal/terminal)
2. Note the error alert that appears on the page instead of the terminal.
Description of problem:
Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.
Version-Release number of selected component (if applicable):
Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107. Network Type: OVNKubernetes
How reproducible:
Twice out of two attempts.
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1. The cluster is up and running with three workers: $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True False 51m Cluster version is 4.11.0-0.nightly-2022-09-19-214532 2. Run the OC command to upgrade to 4.12.0-0.nightly-2022-09-20-040107: $ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead warning: The requested upgrade image is not one of the available updates.You have used --allow-explicit-upgrade for the update to proceed anyway warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures. Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 3. The upgrade is not succeeds: [0] $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True True 17h Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network One node degrided to 'NotReady,SchedulingDisabled' status: $ oc get nodes NAME STATUS ROLES AGE VERSION ostest-9vllk-master-0 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-1 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-2 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-worker-0-4x4pt NotReady,SchedulingDisabled worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-h6kcs Ready worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-xhz9b Ready worker 18h v1.24.0+3882f8f $ oc get pods -A | grep -v -e Completed -e Running NAMESPACE NAME READY STATUS RESTARTS AGE openshift-openstack-infra coredns-ostest-9vllk-worker-0-4x4pt 0/2 Init:0/1 0 18h $ oc get events LAST SEEN TYPE REASON OBJECT MESSAGE 7m15s Warning OperatorDegraded: MachineConfigDaemonFailed /machine-config Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] 7m15s Warning MachineConfigDaemonFailed /machine-config Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. 
status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-20-040107 True False False 18h baremetal 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-credential 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cluster-autoscaler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h config-operator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h console 4.12.0-0.nightly-2022-09-20-040107 True False False 18h control-plane-machine-set 4.12.0-0.nightly-2022-09-20-040107 True False False 17h csi-snapshot-controller 4.12.0-0.nightly-2022-09-20-040107 True False False 19h dns 4.12.0-0.nightly-2022-09-20-040107 True True False 19h DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6." etcd 4.12.0-0.nightly-2022-09-20-040107 True False False 19h image-registry 4.12.0-0.nightly-2022-09-20-040107 True True False 18h Progressing: The registry is ready... ingress 4.12.0-0.nightly-2022-09-20-040107 True False False 18h insights 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-apiserver 4.12.0-0.nightly-2022-09-20-040107 True True False 18h NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13 kube-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-scheduler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-storage-version-migrator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-api 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-approver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-config 4.11.0-0.nightly-2022-09-19-214532 False True True 16h Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] marketplace 4.12.0-0.nightly-2022-09-20-040107 True False False 19h monitoring 4.12.0-0.nightly-2022-09-20-040107 True False False 18h network 4.12.0-0.nightly-2022-09-20-040107 True True True 19h DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z... node-tuning 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-apiserver 4.12.0-0.nightly-2022-09-20-040107 True False False 18h openshift-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-samples 4.12.0-0.nightly-2022-09-20-040107 True False False 17h operator-lifecycle-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h service-ca 4.12.0-0.nightly-2022-09-20-040107 True False False 19h storage 4.12.0-0.nightly-2022-09-20-040107 True True False 19h ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods... [0] http://pastebin.test.redhat.com/1074531
Actual results:
OCP 4.11 --> 4.12 upgrade fails.
Expected results:
OCP 4.11 --> 4.12 upgrade success.
Additional info:
Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]
Description of problem:
The Alertmanager silence create / edit form got a new "Negative matcher" option in 4.12 (see https://issues.redhat.com/browse/OCPBUGSM-47734). However, there is nothing to explain what this option means and it will likely not be obvious from the label alone unless you are already quite familiar with Alertmanager. After discussion with the docs team, it was decided that adding some explanation in context in the UI would be much better than adding an explanation to the documentation.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to Admin perspective 2. Go to Observe > Alerting > Silences page 3. Click on the Create button ("Negative matcher" option is shown with no explanation)
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. Error happens during Terraform prepare step when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the VCenter administrator account. I double checked and that account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0 VSphere & Vcenter 7.0.3 - no pending updates
How reproducible:
always - we observed this already in the nightlies, but wanted to wait for a RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CICD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304 This includes the install-config.yaml, all component versions, and the full debug log output.
We do not have a well-defined method to find all of these just yet; identifying one would be a good first step.
At runtime we know the version of OpenShift that we're installing, so we can dynamically generate the OS_IMAGES environment variable to point at the image for the current release. This will prevent having to add to the hard-coded list for every release.
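A rough sketch of the idea (illustrative only; the real OS_IMAGES schema and the URLs used by the assisted/agent installer are assumptions here): derive the entry for the current release from the version known at runtime instead of hard-coding a list per release.
~~~
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// osImage is a simplified stand-in for one OS_IMAGES entry.
type osImage struct {
	OpenshiftVersion string `json:"openshift_version"`
	CPUArchitecture  string `json:"cpu_architecture"`
	URL              string `json:"url"`
}

// osImagesForRelease builds the OS_IMAGES value for the release being
// installed, so nothing needs to be added by hand for every new release.
func osImagesForRelease(releaseVersion, arch, isoURL string) (string, error) {
	parts := strings.SplitN(releaseVersion, ".", 3)
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected version %q", releaseVersion)
	}
	entry := []osImage{{
		OpenshiftVersion: parts[0] + "." + parts[1], // e.g. "4.12"
		CPUArchitecture:  arch,
		URL:              isoURL, // hypothetical boot image location for this release
	}}
	b, err := json.Marshal(entry)
	return string(b), err
}

func main() {
	v, _ := osImagesForRelease("4.12.0-rc.0", "x86_64", "https://example.com/rhcos-4.12-live.x86_64.iso")
	fmt.Println("OS_IMAGES=" + v)
}
~~~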
Description of problem:
The health_statuses_insights metrics are showing disabled rules in "total". The other fields show the correct amount. In the code linked below, we can see that the "Disabled" rules are only skipped while assigning the value of TotalRisk.
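For illustration (a sketch with assumed, simplified types; not the insights-operator code), the total should be computed with the same "skip disabled" filter applied to the per-risk counts:
~~~
package main

import "fmt"

// healthCheck is a simplified stand-in for an insights rule hit.
type healthCheck struct {
	TotalRisk int  // e.g. 1=low, 2=moderate, 3=important, 4=critical
	Disabled  bool // user-disabled rules should not be counted anywhere
}

// countStatuses returns per-risk counts plus a "total" that applies the
// same Disabled filter, so the total matches the sum of the buckets.
func countStatuses(checks []healthCheck) map[string]int {
	counts := map[string]int{"total": 0}
	names := map[int]string{1: "low", 2: "moderate", 3: "important", 4: "critical"}
	for _, c := range checks {
		if c.Disabled {
			continue
		}
		counts[names[c.TotalRisk]]++
		counts["total"]++
	}
	return counts
}

func main() {
	checks := []healthCheck{
		{TotalRisk: 2}, {TotalRisk: 2}, {TotalRisk: 2, Disabled: true},
	}
	fmt.Println(countStatuses(checks)) // map[moderate:2 total:2]
}
~~~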
How reproducible:
Always
Steps to Reproduce:
1. Upload a fake archive to trigger health checks (for example with rule CVE_2020_8555_kubernetes)
2. Disable one of the rules through https://console.redhat.com/api/insights-results-aggregator/v1/clusters/{cluster.id}/rules/{rule}/error_key/{error_key}/disable
3. Create support secret and set endpoint="https://httpstat.us/200"
4. Restart insights operator
5. Wait for alerts to trigger
6. Check health_statuses_insights metrics.
rule:
ccx_rules_ocp.external.rules.ocp_version_end_of_life.report
error_key:
OCP4X_BEYOND_EOL
Actual results:
"moderate" health_statuses_insights shows 2 triggers "total" shows 3. Therefore, it is accounting for the deactivated rule.
Expected results:
"moderate" health_statuses_insights shows 2 triggers "total" health_statuses_insights shows 2 triggers (doesn't account for deactivated rule)
Additional info:
If there is any issue in triggering these events, you may contact me and I can help with the steps.
Description of problem:
prometheus-k8s-0 ends in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests
Version-Release number of selected component (if applicable):
4.11.6
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0
Actual results:
NAME               READY   STATUS             RESTARTS          AGE
prometheus-k8s-0   5/6     CrashLoopBackOff   125 (4m57s ago)   5h28m
Expected results:
Running
Additional info:
Attaching must-gather. The pod recovers successfully after deleting/re-creating. [kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0 ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)" ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)" ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))" ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)" ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)" ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090 ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..." ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..." ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..." ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped" ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped" ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..." ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped" ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"
Tracker bug for bootimage bump in 4.12. This bug should block bugs which need a bootimage bump to fix.
The previous tracker is OCPBUGS-561.
Description of problem:
This bug is a copy of https://bugzilla.redhat.com/show_bug.cgi?id=2137616, as the fix needs to go in on the OCP side. For must-gather and attached screenshots, please refer to the Bugzilla.
The Add Capacity button does not exist after upgrading the OCP version [OCP4.11->OCP4.12].
Version-Release number of selected component (if applicable):
ODF Version: 4.11.3-3
OCP Version: 4.12.0-0.nightly-2022-10-24-103753
Provider: AWS
How reproducible:
Steps to Reproduce:
1. Install ODF 4.11 + OCP 4.11
2. Upgrade OCP 4.11 to OCP 4.12
3. Log in to the OpenShift Web Console.
4. Click Operators → Installed Operators.
5. Click OpenShift Data Foundation Operator.
6. Click the Storage Systems tab.
7. Click the Action Menu (⋮) on the far right of the storage system name to extend the options menu.
The "Add Capacity" button does not exist in the menu. *Attached Screenshot
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4026. The following is the description of the original issue:
—
Description of problem:
There is an endless re-render loop and the browser feels slow or gets stuck when opening the add page or the topology.
Also saw endless API calls to /api/kubernetes/apis/binding.operators.coreos.com/v1alpha1/bindablekinds/bindable-kinds
Version-Release number of selected component (if applicable):
1. Console UI 4.12-4.13 (master)
2. Service Binding Operator (tested with 1.3.1)
How reproducible:
Always with installed SBO
But the "stuck feeling" depends on the browser (Firefox feels more stuck) and your locale machine power
Steps to Reproduce:
1. Install Service Binding Operator
2. Create or update the BindableKinds resource "bindable-kinds"
apiVersion: binding.operators.coreos.com/v1alpha1
kind: BindableKinds
metadata:
  name: bindable-kinds
3. Open the browser console log
4. Open the console UI and navigate to the add page
Actual results:
1. Saw endless API calls to /api/kubernetes/apis/binding.operators.coreos.com/v1alpha1/bindablekinds/bindable-kinds
2. Browser feels slow and gets stuck after some time
3. The page crashes after some time
Expected results:
1. The API call should be called just once
2. The add page should just work without feeling laggy
3. No crash
Additional info:
This got introduced after we started watching the bindable-kinds resource with https://github.com/openshift/console/pull/11161
It looks like this happens only if the SBO is installed and the bindable-kinds resource exists but doesn't contain any status.
The status lists all available bindable resource types. I could not reproduce this by installing and uninstalling an operator, but you can manually create or update this resource as mentioned above.
Description of problem:
Setting up a GitHub App from the console is lacking the required permissions
Events and Permissions: https://pipelinesascode.com/docs/install/github_apps/
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Setup Github App from administrator perspective.
2. Create Repository and configure it to use the Github App method.
Actual results:
Creates a GitHub App with limited permissions.
Expected results:
The created GitHub App should contain all the required permissions and should trigger the PipelineRun successfully on Git events.
Additional info:
The console needs to update the default_events and default_permissions here; they have to match the CLI - refer to this.
We need to update the "See GitHub permissions" section in the UI as well.
Description of problem:
ovnkube-trace fails on hypershift deployments:
https://bugzilla.redhat.com/show_bug.cgi?id=2066891#c8
getDatabaseURIs looks for pods with container ovnkube-master, and those don't exist in hypershift.
https://github.com/ovn-org/ovn-kubernetes/blob/6b8acf05cb6043ebdc42d9d36e700390baabea4a/go-controller/cmd/ovnkube-trace/ovnkube-trace.go#L540
~~~
// Returns nbAddress, sbAddress, protocol == "ssl", nil
func getDatabaseURIs(coreclient *corev1client.CoreV1Client, restconfig *rest.Config, ovnNamespace string) (string, string, bool, error) {
	containerName := "ovnkube-master"
	var err error
	found := false
	var podName string
	listOptions := metav1.ListOptions{}
	pods, err := coreclient.Pods(ovnNamespace).List(context.TODO(), listOptions)
	if err != nil {
		return "", "", false, err
	}
	for _, pod := range pods.Items {
		for _, container := range pod.Spec.Containers {
			if container.Name == containerName {
				found = true
				podName = pod.Name
			}
		}
	}
	if !found {
		// no pod has an ovnkube-master container (always the case on
		// hypershift), so this path errors out -- rest of excerpt trimmed
	}
	// ...
~~~
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3278. The following is the description of the original issue:
—
Description of problem:
When doing openshift-install agent create image, one should not need to provide platform specific data like boot MAC addresses.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create install-config with only VIPs in the Baremetal platform section:

apiVersion: v1
metadata:
  name: foo
baseDomain: test.metalkube.org
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.122.0/23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  name: master
  replicas: 3
  hyperthreading: Enabled
  architecture: amd64
platform:
  baremetal:
    apiVIPs:
    - 192.168.122.10
    ingressVIPs:
    - 192.168.122.11
---
apiVersion: v1beta1
metadata:
  name: foo
rendezvousIP: 192.168.122.14

2. openshift-install agent create image
Actual results:
ERROR failed to write asset (Agent Installer ISO) to disk: cannot generate ISO image due to configuration errors
ERROR failed to fetch Agent Installer ISO: failed to load asset "Install Config": failed to create install config: invalid "install-config.yaml" file: [platform.baremetal.hosts: Invalid value: []*baremetal.Host(nil): bare metal hosts are missing, platform.baremetal.Hosts: Required value: not enough hosts found (0) to support all the configured ControlPlane replicas (3)]
Expected results:
Image gets generated
Additional info:
We should go into the install-config validation code, detect that we are doing an agent-based installation, and skip the hosts checks.
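A minimal sketch of that idea (assumed, simplified types; not the actual installer validation code): the bare metal hosts checks are skipped when validating for an agent-based install.
~~~
package main

import (
	"errors"
	"fmt"
)

// baremetalPlatform is a simplified stand-in for the install-config
// bare metal platform section.
type baremetalPlatform struct {
	Hosts                []string // names only, for the sketch
	ControlPlaneReplicas int
}

// validateBaremetal skips the host-inventory checks for agent-based
// installs, where hosts are discovered at boot time instead.
func validateBaremetal(p baremetalPlatform, agentBasedInstall bool) error {
	if agentBasedInstall {
		return nil // hosts are not required up front
	}
	if len(p.Hosts) == 0 {
		return errors.New("platform.baremetal.hosts: bare metal hosts are missing")
	}
	if len(p.Hosts) < p.ControlPlaneReplicas {
		return fmt.Errorf("not enough hosts found (%d) to support all the configured ControlPlane replicas (%d)",
			len(p.Hosts), p.ControlPlaneReplicas)
	}
	return nil
}

func main() {
	p := baremetalPlatform{ControlPlaneReplicas: 3}
	fmt.Println(validateBaremetal(p, false)) // error, as for a regular baremetal IPI install
	fmt.Println(validateBaremetal(p, true))  // <nil>, agent-based install skips the checks
}
~~~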
Description of problem:
When trying to enable hardware-backed management ports (e.g. virtual functions) on BF2 in NIC mode, or on any other MLX NICs (CX-6, CX-5), by setting the node_mgmt_port_netdev_flags flags to a VF in the CNO, OVN-K node will crash.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
Start by enabling OvS HWOL and setting sriovnetworknodepolicy: https://docs.openshift.com/container-platform/4.11/networking/hardware_networks/configuring-hardware-offloading.html
1. Scale down CNO: oc scale --replicas=0 deploy/network-operator -n openshift-network-operator
2. Make changes to OVN-K node: oc edit daemonsets ovnkube-node -n openshift-ovn-kubernetes
a. Find "node_mgmt_port_netdev_flags=" and replace it with something like this:
node_mgmt_port_netdev_flags=
if [[ ${K8S_NODE} != *"master"* ]]; then
  node_mgmt_port_netdev_flags="--ovnkube-node-mgmt-port-netdev=ens1f0v0"
fi
b. Additionally, you have to add the "node_mgmt_port_netdev_flags" to the "exec /usr/bin/ovnkube --init-node "${K8S_NODE}"" call in the same script, since this is missing.
3. Save the edit.
4. Observe OVN-K node on baremetal worker nodes.
Actual results:
I0822 14:21:56.250285 496356 ovs.go:204] Exec(3): stderr: ""
I0822 14:21:56.250290 496356 node.go:310] Detected support for port binding with external IDs
I0822 14:21:56.250516 496356 management-port-dpu.go:181] Setup management port dpu host: ens1f0v0
F0822 14:21:56.250568 496356 ovnkube.go:133] failed to set management port name. file exists

Workaround is to go to the node and run this command:
sudo ovs-vsctl del-port br-int ovn-k8s-mp0
Expected results:
There should not be any errors when changing node_mgmt_port_netdev_flags to a valid value.
Additional info:
Reported here: https://github.com/ovn-org/ovn-kubernetes/pull/3160 Discussed briefly here: https://issues.redhat.com/browse/OCPBUGS-4098 Fixed Upstream here: https://github.com/ovn-org/ovn-kubernetes/pull/3251
Description of problem:
seeing test failure due to panic in cvo here:
Undiagnosed panic detected in pod:
pods/openshift-cluster-version_cluster-version-operator-96cf55b5-rffgt_cluster-version-operator_previous.log.gz:E0915 18:38:42.763315 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
pods/openshift-cluster-version_cluster-version-operator-96cf55b5-rffgt_cluster-version-operator_previous.log.gz:E0915 18:38:42.763418 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
full error from logs:
/E0915 18:38:42.763315 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 187 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1934980?, 0x2bc6240}) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x4d2604?}) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x1934980, 0x2bc6240}) /usr/lib/golang/src/runtime/panic.go:838 +0x207 github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).calculateNext(0xc0015c6000, 0xc001df2000) /go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:716 +0x14d github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).Start.func1() /go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:575 +0x2a9 k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x10000000000?) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001df2000?, {0x1e44e60, 0xc002739f50}, 0x1, 0xc00058e0c0) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x989680, 0x0, 0x60?, 0x0?) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(...) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).Start(0xc0015c6000?, {0x1e5eb30?, 0xc0000cacc0?}, 0x10?, {0x0?, 0x0?}, {0x0?, 0x0?}) /go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:556 +0x145 github.com/openshift/cluster-version-operator/pkg/cvo.(*Operator).Run.func2() /go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:387 +0x83 created by github.com/openshift/cluster-version-operator/pkg/cvo.(*Operator).Run /go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:385 +0x4af E0915 18:38:42.763418 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Version-Release number of selected component (if applicable):
How reproducible:
Currently unsure; this was hit in a single test run, but the CVO should never panic.
Steps to Reproduce:
1. 2. 3.
Actual results:
panic in cvo pod
Expected results:
no panic in cvo pod
Additional info:
Description of problem:
When enabling OvS HWOL on 4.12.0 nightly, traffic does not pass between pods.
Version-Release number of selected component (if applicable):
4.12.0 nightly
How reproducible:
Always
Steps to Reproduce:
1. Create 2 pods with sriov and try to ping between them (same node or different node)
Actual results:
No Traffic Passes (Ping or other)
Expected results:
Traffic Passes (Ping or other)
Additional info:
The 4.12 branch is missing this commit: https://github.com/openshift/ovn-kubernetes/commit/37c6c1d7039fd4c8f3cca560691a254e720172de
This is a clone of issue OCPBUGS-7102. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/operator-framework-olm/blob/7ec6b948a148171bd336750fed98818890136429/staging/operator-lifecycle-manager/pkg/controller/operators/olm/plugins/downstream_csv_namespace_labeler_plugin_test.go#L309
has a dependency on creation of a next-version release branch.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. Clone operator-framework/operator-framework-olm
2. Run make unit/olm
3. Deal with a really bumpy first-time kubebuilder/envtest install experience
4. Profit
Actual results:
error
Expected results:
pass
Additional info:
Not all of the errors reported by the assisted API (and shown in the wait-for bootstrap complete output) actually require user action.
Some appear when the agents first register but resolve themselves relatively quickly in the natural course of events.
Some, like the availability of NTP, don't block the installation from proceeding at all.
We need to think about the best ways of exposing this information to the user.
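As a rough illustration of what client-side filtering could look like, here is a minimal sketch in Go. The category names, the NTP/registration matching, and the sample messages are assumptions for illustration only, not the assisted-service API.

```go
// Hypothetical severity filter for validation messages surfaced by
// "wait-for bootstrap-complete". Categories and matching rules are assumed.
package main

import (
	"fmt"
	"strings"
)

type severity int

const (
	blocking      severity = iota // requires user action before the install can proceed
	transient                     // usually resolves itself shortly after the agents register
	informational                 // never blocks the installation (e.g. NTP availability)
)

func (s severity) String() string {
	return [...]string{"blocking", "transient", "informational"}[s]
}

// classify maps a validation message to a severity so the CLI could decide
// whether to surface it prominently, quietly, or not at all.
func classify(msg string) severity {
	switch {
	case strings.Contains(msg, "ntp"):
		return informational
	case strings.Contains(msg, "not yet registered"):
		return transient
	default:
		return blocking
	}
}

func main() {
	for _, m := range []string{
		"host has not yet registered",
		"ntp server unreachable",
		"insufficient disk space on /dev/sda",
	} {
		fmt.Printf("%-40s -> %v\n", m, classify(m))
	}
}
```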
Grafana has been removed in 4.11 and we can safely remove any logic in CMO that deals with Grafana (except dashboards since they are used by OCP console).
Another point to clarify: we need to communicate to ProdSec and ART that Grafana is no longer part of OCP.
– NOT A BUG –
This was a story, but CI is not working for the OLM project, so it was moved to OCPBUGS, where CI is working.
----------------------------
Upstream, the `opm alpha diff` functionality was moved to the `oc-mirror` team by a non-RH contributor.
This story is to track downstreaming the two PRs.
The only thing to verify here is that there is no more `opm alpha diff` command.
Other changes in the PRs are to externalize some interfaces and implement an undocumented alpha-level internal channel-level property list.
Description of problem:
A cluster-version-operator pod crashloop during the bootstrap process can lead to a longer bootstrap, causing the installer to time out and fail. The cluster-version-operator pod is continuously restarting due to a Go panic. The bootstrap process fails because of the timeout, although it does complete correctly after more time, once the cluster-version-operator pod runs correctly.
$ oc -n openshift-cluster-version logs -p cluster-version-operator-754498df8b-5gll8
I0919 10:25:05.790124 1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4
F0919 10:25:05.791580 1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc00017d5e0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc000089140, 0x1, ...})
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612
k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516
main.init.3.func1(0xc00012ac80?, {0x1b96f60?, 0x6?, 0x6?})
	/go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6
github.com/spf13/cobra.(*Command).execute(0xc00012ac80, {0xc0002fea20, 0x6, 0x6})
	/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902
main.main()
	/go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-18-234318
How reproducible:
Most of the time, with any network type and installation type (IPI, UPI, and proxy).
Steps to Reproduce:
1. Install OCP 4.12 IPI: $ openshift-install create cluster
2. Wait until bootstrap is completed
Actual results:
[...]
level=error msg="Bootstrap failed to complete: timed out waiting for the condition"
level=error msg="Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane."
NAMESPACE                              NAME                                                     READY   STATUS             RESTARTS        AGE
openshift-cluster-version             cluster-version-operator-754498df8b-5gll8                0/1     CrashLoopBackOff   7 (3m21s ago)   24m
openshift-image-registry              image-registry-94fd8b75c-djbxb                           0/1     Pending            0               6m44s
openshift-image-registry              image-registry-94fd8b75c-ft66c                           0/1     Pending            0               6m44s
openshift-ingress                     router-default-64fbb749b4-cmqgw                          0/1     Pending            0               13m
openshift-ingress                     router-default-64fbb749b4-mhtqx                          0/1     Pending            0               13m
openshift-monitoring                  prometheus-operator-admission-webhook-6d8cb95cf7-6jn5q   0/1     Pending            0               14m
openshift-monitoring                  prometheus-operator-admission-webhook-6d8cb95cf7-r6nnk   0/1     Pending            0               14m
openshift-network-diagnostics         network-check-source-8758bd6fc-vzf5k                     0/1     Pending            0               18m
openshift-operator-lifecycle-manager  collect-profiles-27726375-hlq89                          0/1     Pending            0               21m
$ oc -n openshift-cluster-version describe pod cluster-version-operator-754498df8b-5gll8 Name: cluster-version-operator-754498df8b-5gll8 Namespace: openshift-cluster-version Priority: 2000000000 Priority Class Name: system-cluster-critical Node: ostest-4gtwr-master-1/10.196.0.68 Start Time: Mon, 19 Sep 2022 10:17:41 +0000 Labels: k8s-app=cluster-version-operator pod-template-hash=754498df8b Annotations: openshift.io/scc: hostaccess Status: Running IP: 10.196.0.68 IPs: IP: 10.196.0.68 Controlled By: ReplicaSet/cluster-version-operator-754498df8b Containers: cluster-version-operator: Container ID: cri-o://1e2879600c89baabaca68c1d4d0a563d4b664c507f0617988cbf9ea7437f0b27 Image: registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69 Image ID: registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69 Port: <none> Host Port: <none> Args: start --release-image=registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69 --enable-auto-update=false --listen=0.0.0.0:9099 --serving-cert-file=/etc/tls/serving-cert/tls.crt --serving-key-file=/etc/tls/serving-cert/tls.key --v=2 State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Message: I0919 10:33:07.798614 1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4 F0919 10:33:07.800115 1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused goroutine 1 [running]: [43/497] k8s.io/klog/v2.stacks(0x1) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc000433ea0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686 k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc0002d6630, 0x1, ...}) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2 k8s.io/klog/v2.(*loggingT).printf(...) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612 k8s.io/klog/v2.Fatalf(...) /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516 main.init.3.func1(0xc0003b4f00?, {0x1b96f60?, 0x6?, 0x6?}) /go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6 github.com/spf13/cobra.(*Command).execute(0xc0003b4f00, {0xc000311980, 0x6, 0x6}) /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663 github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0) /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4 github.com/spf13/cobra.(*Command).Execute(...) 
/go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902 main.main() /go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46 Exit Code: 255 Started: Mon, 19 Sep 2022 10:33:07 +0000 Finished: Mon, 19 Sep 2022 10:33:07 +0000 Ready: False Restart Count: 7 Requests: cpu: 20m memory: 50Mi Environment: KUBERNETES_SERVICE_PORT: 6443 KUBERNETES_SERVICE_HOST: 127.0.0.1 NODE_NAME: (v1:spec.nodeName) CLUSTER_PROFILE: self-managed-high-availability Mounts: /etc/cvo/updatepayloads from etc-cvo-updatepayloads (ro) /etc/ssl/certs from etc-ssl-certs (ro) /etc/tls/service-ca from service-ca (ro) /etc/tls/serving-cert from serving-cert (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access (ro) onditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: etc-ssl-certs: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs HostPathType: etc-cvo-updatepayloads: Type: HostPath (bare host directory volume) Path: /etc/cvo/updatepayloads HostPathType: serving-cert: Type: Secret (a volume populated by a Secret) SecretName: cluster-version-operator-serving-cert Optional: false service-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: openshift-service-ca.crt Optional: false kube-api-access: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3600 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: node-role.kubernetes.io/master= Tolerations: node-role.kubernetes.io/master:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 120s node.kubernetes.io/unreachable:NoExecute op=Exists for 120s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 25m default-scheduler no nodes available to schedule pods Warning FailedScheduling 21m default-scheduler 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is no t helpful for scheduling. 
Normal Scheduled 19m default-scheduler Successfully assigned openshift-cluster-version/cluster-version-operator-754498df8b-5gll8 to ostest-4gtwr-master-1 by ostest-4gtwr-bootstrap Warning FailedMount 17m kubelet Unable to attach or mount volumes: unmounted volumes=[serving-cert], unattached volumes=[service-ca kube-api-access etc-ssl-certs etc-cvo-updatepayloads serving-cert]: timed out waiting for the condition Warning FailedMount 17m (x9 over 19m) kubelet MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found Normal Pulling 15m kubelet Pulling image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" Normal Pulled 15m kubelet Successfully pulled image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" in 7.481824271s Normal Started 14m (x3 over 15m) kubelet Started container cluster-version-operator Normal Created 14m (x4 over 15m) kubelet Created container cluster-version-operator Normal Pulled 14m (x3 over 15m) kubelet Container image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" already present on machine Warning BackOff 4m22s (x52 over 15m) kubelet Back-off restarting failed container
Expected results:
No panic?
Additional info:
Seen in most OCP-on-OSP QE CI jobs.
Attached [^must-gather-install.tar.gz]
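One way to avoid the crashloop pattern above is to retry the early API call instead of exiting fatally on the first "connection refused". The sketch below is not the actual CVO change; fetchFeatureGate is a hypothetical stand-in for the request that fails at cmd/start.go:29.

```go
// Minimal sketch of retrying an early API call instead of exiting fatally.
// Not the actual CVO fix; fetchFeatureGate is a hypothetical placeholder.
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// fetchFeatureGate stands in for the call that failed with "connection refused"
// while the bootstrap apiserver was still coming up.
func fetchFeatureGate(ctx context.Context) error {
	// real code would GET config.openshift.io/v1 featuregates/cluster here
	return nil
}

func main() {
	ctx := context.Background()
	// Retry every 5s for up to 5 minutes instead of exiting on the first failure,
	// which is what produces the CrashLoopBackOff during bootstrap.
	err := wait.PollImmediateWithContext(ctx, 5*time.Second, 5*time.Minute,
		func(ctx context.Context) (bool, error) {
			if err := fetchFeatureGate(ctx); err != nil {
				log.Printf("featuregate fetch failed, will retry: %v", err)
				return false, nil // keep polling on transient errors
			}
			return true, nil
		})
	if err != nil {
		log.Fatalf("gave up waiting for the apiserver: %v", err)
	}
}
```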
Description of problem: The product name for Azure Red Hat OpenShift was incorrect in Customer Case Management (CCM), so the console had to use that incorrect product name for the support case link to route correctly. https://issues.redhat.com/browse/CPCCM-9926 fixed the incorrect product name, so the support case link for Azure now needs to be updated to reflect the correct product name.
Description of the problem:
https://github.com/openshift/assisted-installer/issues/513
How reproducible:
https://github.com/openshift/assisted-installer/issues/513
Steps to reproduce:
1. Create cluster with proxy and additional certificate bundle
2. Install
Actual results:
The controller failed to reach the service because of a self-signed certificate.
Expected results:
Installation succeeds
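For reference, here is a minimal sketch of an HTTP client that trusts an additional CA bundle on top of the system pool, which is the kind of configuration the controller needs when the cluster is installed with a proxy and an additional certificate bundle. This is not the assisted-installer controller's actual code, and the bundle path is only an assumed example.

```go
// Sketch: build an HTTP client that trusts an extra CA bundle and honors
// proxy settings from the environment. Paths and usage are illustrative.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func clientWithExtraCA(bundlePath string) (*http.Client, error) {
	pool, err := x509.SystemCertPool()
	if err != nil {
		pool = x509.NewCertPool() // fall back to an empty pool
	}
	pem, err := os.ReadFile(bundlePath)
	if err != nil {
		return nil, err
	}
	if !pool.AppendCertsFromPEM(pem) {
		log.Printf("no certificates parsed from %s", bundlePath)
	}
	return &http.Client{
		Transport: &http.Transport{
			Proxy:           http.ProxyFromEnvironment,
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	// The bundle path below is an assumption for illustration.
	c, err := clientWithExtraCA("/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem")
	if err != nil {
		log.Fatal(err)
	}
	_ = c // use c.Get(...) against the assisted-service URL
}
```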
Description of problem:
This is just a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2105570 for purposes of cherry-picking.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-2891. The following is the description of the original issue:
—
Deprovisioning can fail with the error:
level=warning msg=unrecognized elastic load balancing resource type listener arn=arn:aws:elasticloadbalancing:us-west-2:460538899914:listener/net/a9ac9f1b3019c4d1299e7ededc92b42b/a6f0655da877ddd4/45e05ee69d99bab0
Further background is available in this write up:
https://docs.google.com/document/d/1TsTqIVwHDmjuDjG7v06w_5AAbXSisaDX-UfUI9-GVJo/edit#
Incident channel:
incident-aws-leaking-tags-for-deleted-resources
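A rough sketch of how a tag-based cleanup loop could recognize the `listener` ARN from the warning above instead of reporting it as unrecognized. This is not Hive's or the installer's actual destroy code; it only illustrates parsing the ARN resource type and calling the corresponding elbv2 delete API.

```go
// Sketch: recognize elasticloadbalancing "listener" ARNs during cleanup.
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

func deleteELBResource(client *elbv2.ELBV2, arn string) error {
	// arn:aws:elasticloadbalancing:<region>:<account>:<resource>
	parts := strings.SplitN(arn, ":", 6)
	if len(parts) < 6 {
		return fmt.Errorf("unparseable ARN %q", arn)
	}
	switch resource := parts[5]; {
	case strings.HasPrefix(resource, "listener/"):
		// Listeners are removed with their load balancer, but they can also be
		// deleted directly so the tagged resource no longer dangles.
		_, err := client.DeleteListener(&elbv2.DeleteListenerInput{ListenerArn: aws.String(arn)})
		return err
	case strings.HasPrefix(resource, "loadbalancer/"):
		_, err := client.DeleteLoadBalancer(&elbv2.DeleteLoadBalancerInput{LoadBalancerArn: aws.String(arn)})
		return err
	default:
		return fmt.Errorf("unrecognized elastic load balancing resource type in %q", arn)
	}
}

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	client := elbv2.New(sess)
	arn := "arn:aws:elasticloadbalancing:us-west-2:460538899914:listener/net/a9ac9f1b3019c4d1299e7ededc92b42b/a6f0655da877ddd4/45e05ee69d99bab0"
	if err := deleteELBResource(client, arn); err != nil {
		fmt.Println("cleanup:", err)
	}
}
```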
This is a clone of issue OCPBUGS-5182. The following is the description of the original issue:
—
Description of problem:
Deploy an IPI cluster on Azure with the region set to westeurope and the VM size set to EC96iads_v5 or EC96ias_v5. Installation fails with the error below:
12-15 11:47:03.429 level=error msg=Error: creating Linux Virtual Machine: (Name "jima-15a-m6fzd-bootstrap" / Resource Group "jima-15a-m6fzd-rg"): compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="The VM size 'Standard_EC96iads_v5' is not supported for creation of VMs and Virtual Machine Scale Set with '<NULL>' security type."
This is similar to https://bugzilla.redhat.com/show_bug.cgi?id=2055247. From the Azure portal, we can see that both VM sizes EC96iads_v5 and EC96ias_v5 are confidential compute types. We might need to handle them the same way as was done in bug 2055247.
Version-Release number of selected component (if applicable):
4.12 nightly build
How reproducible:
Always
Steps to Reproduce:
1. Prepare the install-config.yaml file, setting the region to westeurope and the VM size to EC96iads_v5 or EC96ias_v5
2. Deploy an IPI Azure cluster
Actual results:
Install failed with error in description
Expected results:
The installer should exit during validation and show the expected error message.
Additional info:
Description of the problem:
Noticed there were no thread IDs in the assisted-installer logs when debugging a 240-node cluster deployment with MCE (Slack thread), which made it difficult to debug.
How reproducible: 100%
Steps to reproduce:
1. Create cluster using assisted service and start the install
2. Look at the assisted-installer logs
Actual results:
Logs look like
time="2022-07-14T16:17:31Z" level=info msg="Start complete installation step, with params success: true, error info: "
Expected results: The thread ID is also printed, so we can tell which thread each log message came from.
Setting setReportCaller to true will also help (see the sketch below).
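A minimal sketch of what that could look like, assuming the installer logs through sirupsen/logrus: a hook that attaches the goroutine ID to every entry, plus SetReportCaller(true) as suggested above. The hook and field name are illustrative, not existing assisted-installer code.

```go
// Sketch: add a goroutine ("thread") ID field and caller reporting to logrus.
package main

import (
	"bytes"
	"runtime"
	"strconv"

	log "github.com/sirupsen/logrus"
)

// goroutineID parses the header of runtime.Stack, which starts with
// "goroutine <id> [running]:". There is no public API for this, so treat it
// as a debugging aid only.
func goroutineID() uint64 {
	buf := make([]byte, 64)
	buf = buf[:runtime.Stack(buf, false)]
	buf = bytes.TrimPrefix(buf, []byte("goroutine "))
	buf = buf[:bytes.IndexByte(buf, ' ')]
	id, _ := strconv.ParseUint(string(buf), 10, 64)
	return id
}

type gidHook struct{}

func (gidHook) Levels() []log.Level { return log.AllLevels }
func (gidHook) Fire(e *log.Entry) error {
	e.Data["goroutine"] = goroutineID() // tag every entry with its goroutine
	return nil
}

func main() {
	log.SetReportCaller(true) // include file:line of the caller, as suggested above
	log.AddHook(gidHook{})
	log.Info("Start complete installation step")
}
```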
Description of problem:
This is a disconnected cluster on AWS that uses STS. There is an issue configuring Egress IP: the cloud-network-config-controller pod is trying to connect to the global STS service "https://sts.amazonaws.com/" when it should connect to the regional endpoint "https://sts.ap-southeast-1.amazonaws.com" (see the Infrastructure serviceEndpoints below).
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected OCP cluster on AWS.
$ oc get netnamespace | grep egress
egress-ip-test   2689387   ["172.16.1.24"]
$ oc get hostsubnet
NAME                                              HOST                                              HOST IP        SUBNET          EGRESS CIDRS   EGRESS IPS
ip-172-16-1-151.ap-southeast-1.compute.internal   ip-172-16-1-151.ap-southeast-1.compute.internal   172.16.1.151   10.130.0.0/23
ip-172-16-1-53.ap-southeast-1.compute.internal    ip-172-16-1-53.ap-southeast-1.compute.internal    172.16.1.53    10.131.0.0/23                  ["172.16.1.24"]
ip-172-16-2-15.ap-southeast-1.compute.internal    ip-172-16-2-15.ap-southeast-1.compute.internal    172.16.2.15    10.128.0.0/23
ip-172-16-2-77.ap-southeast-1.compute.internal    ip-172-16-2-77.ap-southeast-1.compute.internal    172.16.2.77    10.128.2.0/23
ip-172-16-3-111.ap-southeast-1.compute.internal   ip-172-16-3-111.ap-southeast-1.compute.internal   172.16.3.111   10.129.0.0/23
ip-172-16-3-79.ap-southeast-1.compute.internal    ip-172-16-3-79.ap-southeast-1.compute.internal    172.16.3.79    10.129.2.0/23
$ oc logs sdn-controller-6m5kb -n openshift-sdn
I0922 04:09:53.348615 1 vnids.go:105] Allocated netid 2689387 for namespace "egress-ip-test"
E0922 04:24:00.682018 1 egressip.go:254] Ignoring invalid HostSubnet ip-172-16-1-53.ap-southeast-1.compute.internal (host: "ip-172-16-1-53.ap-southeast-1.compute.internal", ip: "172.16.1.53", subnet: "10.131.0.0/23"): related node object "ip-172-16-1-53.ap-southeast-1.compute.internal" has an incomplete annotation "cloud.network.openshift.io/egress-ipconfig", CloudEgressIPConfig: <nil>
$ oc logs cloud-network-config-controller-5c7556db9f-x78bs -n openshift-cloud-network-config-controller
E0922 04:26:59.468726 1 controller.go:165] error syncing 'ip-172-16-2-77.ap-southeast-1.compute.internal': error retrieving the private IP configuration for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: error: cannot list ec2 instance for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post "https://sts.amazonaws.com/": dial tcp 54.239.29.25:443: i/o timeout, requeuing in node workqueue
$ oc get Infrastructure -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Infrastructure
  metadata:
    creationTimestamp: "2022-09-22T03:28:15Z"
    generation: 1
    name: cluster
    resourceVersion: "598"
    uid: 994da301-2a96-43b7-b43b-4b7c18d4b716
  spec:
    cloudConfig:
      name: ""
    platformSpec:
      aws:
        serviceEndpoints:
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
      type: AWS
  status:
    apiServerInternalURI: https://api-int.openshiftyy.ocpaws.sadiqueonline.com:6443
    apiServerURL: https://api.openshiftyy.ocpaws.sadiqueonline.com:6443
    controlPlaneTopology: HighlyAvailable
    etcdDiscoveryDomain: ""
    infrastructureName: openshiftyy-wfrpf
    infrastructureTopology: HighlyAvailable
    platform: AWS
    platformStatus:
      aws:
        region: ap-southeast-1
        serviceEndpoints:
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
      type: AWS
kind: List
metadata:
  resourceVersion: ""
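For reference, a minimal sketch of pointing the AWS SDK for Go (v1) at the regional STS endpoint instead of the global one, which is what a disconnected VPC like this needs. This is not the cloud-network-config-controller's actual code; in the cluster the endpoint should come from the Infrastructure serviceEndpoints shown above.

```go
// Sketch: use the regional STS endpoint (sts.ap-southeast-1.amazonaws.com)
// rather than the global https://sts.amazonaws.com, which times out here.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/endpoints"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("ap-southeast-1"),
		// Resolve sts.<region>.amazonaws.com instead of the global endpoint.
		STSRegionalEndpoint: endpoints.RegionalSTSEndpoint,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Alternatively, pin the endpoint explicitly, matching the Infrastructure
	// serviceEndpoints entry for sts shown above.
	client := sts.New(sess, aws.NewConfig().
		WithEndpoint("https://sts.ap-southeast-1.amazonaws.com"))

	out, err := client.GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("caller: %s", aws.StringValue(out.Arn))
}
```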