Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customer can easily reach the alert runbook and be able to address their issues.
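For illustration only, a minimal sketch of an alerting rule carrying a runbook_url that the console could surface as a link; rule and alert names are hypothetical, and the runbook_url is shown as an annotation, which is where OpenShift's bundled alerts typically carry it (the request above refers to it as a label):

```yaml
# Hypothetical PrometheusRule whose alert carries a runbook_url the console UI
# could render as a link on the alert details page.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-rule        # hypothetical name
  namespace: openshift-monitoring
spec:
  groups:
    - name: example
      rules:
        - alert: ExampleAlertWithRunbook
          expr: vector(1)
          labels:
            severity: info
          annotations:
            summary: Example alert that links to its runbook.
            runbook_url: https://example.com/runbooks/example-alert.md
```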
4. List any affected packages or components.
As a user, I should be able to configure CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
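As a hedged illustration only (the annotation key is taken from this story and the value is a placeholder; the enhancement doc above defines the exact contract), an annotated manifest might look like:

```yaml
# Hypothetical console-operator manifest from /bindata or /manifests carrying
# the capability annotation named in this story; value is illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config              # hypothetical manifest
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"   # illustrative value
data: {}
```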
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background, and strategic fit
In a future Kubernetes release (currently 1.21) the in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents; we need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this epic captures that effort and tracks it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take the FIPS setting in installconfig into account.
This task is about passing the config to agentclusterinstall so that it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
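A minimal install-config.yaml fragment for reference (cluster name is a placeholder); per this task, the same fips value has to be propagated by Generate() into the agent-cluster-install manifest so it ends up in the ISO:

```yaml
# install-config.yaml fragment (illustrative): the FIPS switch that must flow
# through to agentclusterinstall and, from there, to assisted-service.
apiVersion: v1
metadata:
  name: example-cluster   # placeholder
fips: true
```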
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6.
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but it is also a transition path into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For single-stack IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
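A hedged sketch of the dual-stack fragment that Generate() would need to emit into agent-cluster-install.yaml; the field layout is assumed from InstallConfig conventions and the CIDRs are placeholders:

```yaml
# Illustrative AgentClusterInstall networking fragment for a dual-stack install:
# both an IPv4 and an IPv6 machine network, as required by assisted-service.
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24          # placeholder IPv4 subnet
      - cidr: fd2e:6f44:5dd8:c956::/120 # placeholder IPv6 subnet
```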
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that enables the duplication of events. This patch can now be dropped because upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
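A hedged sketch of what the CAO-rendered autoscaler argument could look like; the flag name here is an assumption based on the upstream PR above and should be verified against the vendored autoscaler:

```yaml
# Illustrative cluster-autoscaler container args fragment; the flag name is an
# assumption taken from the upstream PR, not a confirmed downstream interface.
spec:
  template:
    spec:
      containers:
        - name: cluster-autoscaler
          args:
            - --record-duplicated-events=true
```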
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Having OKD run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload built at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which uses SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals: some are primary near-term goals, others are secondary ones that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed it's the customer SRE, for managed it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today, not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, and configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life-cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward-compatible API that is less susceptible to future changes, so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-managed SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product. We need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics so that we can create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, features, and cost competitiveness against other managed solutions, and because managed services / consumption-based cloud services are where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, not the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have internally built those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env variables:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the "Metrics to be sent via telemetry" section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in "Sending metrics via telemetry" need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j.
Then the command could be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. CCing Ali Mobrem.
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user I want to have option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
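A minimal sketch of the strategic-merge patch body the console would send for the Deployment case, using the annotation named above (the timestamp is illustrative):

```yaml
# Illustrative patch body for the 'Restart rollout' action on a Deployment:
# adding the annotation rolls out a new ReplicaSet.
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-10T14:23:00Z"   # illustrative timestamp
```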
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is the reason the customer wants this functionality: to perform the same action from the OpenShift console as well as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as they would through the CLI with "oc rollout restart deploy/<deployment-name>".
Usually when developers change the config map that a deployment uses, they have to restart its pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs, as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in a well-defined way to support this flow.
Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for an increased security posture. Enable OLM-managed operators to implement support for this in a well-defined pattern.
See Operators & STS slide deck.
The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today ranges from cumbersome to non-existent depending on the operator in question, and is seen as an adoption blocker of OpenShift on AWS.
This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.
This Section: High-Level description of the Market Problem ie: Executive Summary
This Section: Articulates and defines the value proposition from a users point of view
This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.
As an engineer I want the capability to implement CI test cases that run at different intervals, be it daily or weekly, so as to ensure that downstream operators that depend on certain capabilities are not negatively impacted when systems CCO interacts with change behavior.
Acceptance Criteria:
Create a stubbed out e2e test path in CCO and matching e2e calling code in release such that there exists a path to tests that verify working in an AWS STS workflow.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet down to deep diving into a single cluster.
Why customers want this?
Why we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics, and perhaps consider adjusting the semantics.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using `yarn generate-doc`.
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if plugins just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
  service:
    basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs
To align with https://github.com/openshift/dynamic-plugin-sdk, the plugin metadata field `dependencies`, as well as the `@console/pluginAPI` entry contained within it, should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface in the Operator Hub only those operators which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
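For reference, a sketch of the two label sources mentioned above (node and operator names are hypothetical):

```yaml
# A Node reports its architecture via the well-known kubernetes.io/arch label...
apiVersion: v1
kind: Node
metadata:
  name: worker-arm-0                 # hypothetical node
  labels:
    kubernetes.io/arch: arm64
---
# ...and a PackageManifest advertises the architectures its operator supports.
apiVersion: packages.operators.coreos.com/v1
kind: PackageManifest
metadata:
  name: example-operator             # hypothetical operator
  labels:
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.amd64: supported
```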
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
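A hedged sketch of what the extended CR and its referenced secret might look like; the basic-auth field names follow the proposal above and are not an existing API, and the repo name, URL, and password are placeholders:

```yaml
# Proposed (not yet existing) basic-auth fields on a HelmChartRepository,
# with the password secret kept in openshift-config.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: private-charts               # placeholder
spec:
  connectionConfig:
    url: https://nexus.example.com/repository/helm/   # placeholder
    username: deployer               # proposed field
    password:
      secretName: private-charts-password             # proposed field
---
apiVersion: v1
kind: Secret
metadata:
  name: private-charts-password
  namespace: openshift-config
stringData:
  password: changeme                 # placeholder
```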
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD, already done in diff story
Supporting the HelmChartRepository CR, this feature will be scoped first to project/namespace scope repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume the repo is not authenticated.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades working for Hypershift. This epic aims to capture additional work focusing on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin in configuring the perspectives correctly, the developer console should provide a code snippet for the customization of the yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
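A hedged illustration of the shape such a snippet could take, following the enhancement proposal above; the exact field names may differ from the final API:

```yaml
# Illustrative Console CRD customization hiding the Developer perspective;
# field names follow the enhancement proposal and are not guaranteed final.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev
        visibility:
          state: Disabled
```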
Previous work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin in configuring the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the yaml resource (Console CRD).
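Again only as an illustrative shape based on the enhancement proposal (field names unconfirmed), the snippet could resemble:

```yaml
# Illustrative Console CRD customization disabling the whole Developer Catalog;
# field names follow the enhancement proposal and are not guaranteed final.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled
```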
Previous work:
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continued functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help the smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
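The end state described above, for a hypothetical operator namespace, would look like:

```yaml
# An openshift-* namespace hosting at least one CSV, labelled by OLM so the
# PSA label sync'er manages its pod security labels.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # hypothetical namespace
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```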
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
As a SRE, I want hypershift operator to expose a metric when hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs, based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
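A minimal sketch of such an alerting rule; the rule and alert names are hypothetical, while the metric and the 10-minute window come from the description above:

```yaml
# Hypothetical PrometheusRule firing when an ovn-controller reports it has been
# disconnected from the southbound database for 10 minutes.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb          # hypothetical name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller
      rules:
        - alert: SouthboundDatabaseDisconnected   # hypothetical alert name
          expr: ovn_controller_southbound_database_connected == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: ovn-controller disconnected from the southbound database for more than 10 minutes.
```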
DoD: Merged to CNO and tested by QE
Add sock proxy to cluster-network-operator so egressip can use grpc to reach worker nodes.
With the introduction of grpc as a means for determining the state of a given egress node, hypershift should be able to leverage the socks proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. ATTOW GA is planned for August 23
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail transparently.
We should check that both the key and the role are compatible/operational for a given cluster and surface the failure in a condition otherwise.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causing the tests to fail as a result.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
AC:
We have the connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs in use until there's a reason not to.
Once the HostedCluster and NodePool are stopped using the PausedUntil statement, the awsprivatelink controller will continue reconciling.
How to test this:
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all, as used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
`-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine, you will find that the installation will fail with a message that indicates that it could not exclusively access /dev/sda
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (CI build cluster) accesses GitHub at a high rate, leading to HTTP 429 (too many requests).
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
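As a rough illustration of that approach (the image name and tag below are placeholders, not the actual image the repositories use), the binary can be copied out of a prebuilt container image instead of being downloaded from GitHub releases:
# Hedged sketch: obtain golangci-lint from a container image rather than GitHub.
img=quay.io/example/golangci-lint:v1.50.1   # placeholder reference
podman pull "$img"
ctr=$(podman create "$img")
podman cp "$ctr":/usr/bin/golangci-lint ./bin/golangci-lint
podman rm "$ctr"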
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
   5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
   6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
   4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
   12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root (with elevated privileges) from the host's perspective
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
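One way to inspect the current ordering on a node is to look at the systemd dependencies of the two units. A hedged sketch; the exact unit names (e.g. ovs-configuration.service for configure-ovs) and the node name are assumptions for illustration:
# Hedged sketch: show the Before/After ordering of the two boot services on a node.
oc debug node/<node-name> -- chroot /host \
  systemctl show -p Before -p After ovs-configuration.service nodeip-configuration.service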
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
See the Insights nomination https://issues.redhat.com/browse/INSIGHTOCP-1197 and the KCS article https://access.redhat.com/solutions/7008996
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-3432. The following is the description of the original issue:
—
Description of problem:
E2E test cases for the knative and pipeline packages have been disabled on CI due to the respective operator installation issues. The tests have to be re-enabled after a new operator version becomes available or the issue is resolved.
References:
https://coreos.slack.com/archives/C6A3NV5J9/p1664545970777239
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
The install_type field in telemetry data is not automatically set from the installer invoker value. Any values we wish to appear must be explicitly converted to the corresponding install_type value.
Currently this makes clusters installed with the agent-based installer (invoker agent-installer) invisible in telemetry.
This relates to the recovery of a cluster following an etcd outage.
The ingress path to kube-apiserver is:
───────────> VIP ─────────────────> Local HAProxy ────┬─> kube-apiserver-master-0
 (managed by keepalived)                              │
                                                      ├─> kube-apiserver-master-1
                                                      │
                                                      └─> kube-apiserver-master-2
Each master is running an HAProxy which load balances between the 3 kube-apiservers. Each HAProxy is running health checks against each kube-apiserver, and will add or remove it from the available pool based on its health.
We only use keepalived to ensure that HAProxy is not a single point of failure. It is the job of keepalived to ensure that incoming traffic is being directed to an HAProxy which is functioning correctly.
The current health check we are using for keepalived involves polling /readyz against the local HAProxy. While this seems intuitively correct it is in fact testing the wrong thing. It is testing whether the kube-apiserver it connects to is functioning correctly. However, this is not the purpose of keepalived. HAProxy runs health checks against kube-apiserver backends. keepalived simply selects a correctly functioning HAProxy.
This becomes important during recovery from an outage. When none of the kube-apiservers are healthy this health check will fail continuously, and the API VIP will move uselessly between masters. However the situation is much worse when only one of the kube-apiservers is up. In this case there is a high probability that it is overloaded and at least rate limiting incoming connections. This may lead us to fail the keepalived health check and fail the VIP over to the next HAProxy. This will cause all open kube-apiserver connections to reset, even the established ones. This increases the load on the kube-apiserver and increases the probability that the health check will fail again.
Ideally the keepalived health check would check only the health of HAProxy itself, not the health of the pool of kube-apiservers. In practice it will probably never be necessary to move the VIP while the master is up, regardless of the health of the cluster. A network partition affecting HAProxy would already be handled by VRRP between the masters, so it may be sufficient to check that the local HAProxy pod is healthy.
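A minimal sketch of what such a check could look like, assuming keepalived invokes a local script and that checking only for a live local HAProxy process is sufficient (an illustration of the idea, not the actual check used on the masters):
#!/bin/bash
# Hedged sketch: succeed as long as the local HAProxy is alive, regardless of
# whether any kube-apiserver backend is currently healthy.
pid=$(pgrep -x haproxy | head -n 1)
[ -n "$pid" ] && kill -0 "$pid"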
This is a clone of issue OCPBUGS-11719. The following is the description of the original issue:
—
Description of problem:
According to the slack thread attached: Cluster uninstallation is stuck when load balancers are removed before ingress controllers. This can happen when the ingress controller removal fails and the control plane operator moves on to deleting load balancers without waiting.
Version-Release number of selected component (if applicable):
4.12.z 4.13.z
How reproducible:
Whenever the load balancer is deleted before the ingress controller
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Load balancer deletion waits for the ingress controller deletion
Additional info:
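A hedged sketch of the expected ordering from the CLI side, assuming the default ingress controller in its usual namespace: load-balancer teardown should only proceed once the ingresscontroller object is actually gone.
# Hedged sketch: block until the ingress controller has been fully deleted
# before any load balancer cleanup starts (names/namespace are the defaults).
oc -n openshift-ingress-operator delete ingresscontroller default --wait=false
oc -n openshift-ingress-operator wait --for=delete ingresscontroller/default --timeout=10m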
This bug is a backport clone of [Bugzilla Bug 2100181](https://bugzilla.redhat.com/show_bug.cgi?id=2100181). The following is the description of the original bug:
—
Created attachment 1891950
log
Description of problem:
Prior to OCP 4.7.48, the configure-ovs script picked the correct bonded interface for br-ex. In OCP 4.7.48 it consistently fails: it picks one of the slave interfaces (ens3f0) instead.
Version-Release number of selected component (if applicable):
OCP Release > OCP 4.7.37
How reproducible:
100%
Steps to Reproduce:
1. Deploy an OCP cluster with bonding
2.
3.
Actual results:
Expected results:
configure-ovs should not fail and should assign the correct interface to br-ex (bond1)
Additional info:
There appears to be a new default NM profile between 4.7.37 and 4.7.38 that was not there before
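When comparing affected and unaffected nodes, the NetworkManager connection profiles and the ports attached to br-ex can be listed directly; a hedged sketch using standard NetworkManager and OVS tooling (run on the node itself):
# Hedged sketch: list NM profiles and the ports attached to br-ex on an affected node.
nmcli -g NAME,UUID,TYPE,DEVICE connection show
ovs-vsctl list-ports br-ex   # should show the bond (bond1), not a slave like ens3f0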
GitHub rate limit failures for the UPI image downloading govc.
Description of problem:
- After upgrading to OCP 4.10.41, thanos-ruler-user-workload-1 in the openshift-user-workload-monitoring namespace is consistently being created and deleted.
- We had to scale down the Prometheus operator multiple times so that the upgrade is considered successful.
- This fix is temporary. After some time the issue appears again and the Prometheus operator needs to be scaled down and up again.
- The issue is present on all clusters in this customer environment which are upgraded to 4.10.41.
Version-Release number of selected component (if applicable):
How reproducible:
N/A, I wasn't able to reproduce the issue.
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-10221. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5469. The following is the description of the original issue:
—
Description of problem:
When changing channels it's possible that multiple new conditional update risks will need to be evaluated. For instance, a cluster running 4.10.34 in a 4.10 channel today only has to evaluate `OpenStackNodeCreationFails` but when the channel is changed to a 4.11 channel multiple new risks require evaluation and the evaluation of new risks is throttled at one every 10 minutes. This means if there are three new risks it may take up to 30 minutes after the channel has changed for the full set of conditional updates to be computed. This leads to a perception that no update paths are recommended because most will not wait 30 minutes, they expect immediate feedback.
Version-Release number of selected component (if applicable):
4.10.z, 4.11.z, 4.12, 4.13
How reproducible:
100%
Steps to Reproduce:
1. Install 4.10.34
2. Switch from stable-4.10 to stable-4.11
3.
Actual results:
Observe no recommended updates for 10-20 minutes because all available paths to 4.11 have a risk associated with them
Expected results:
Risks are computed in a timely manner for an interactive UX, let's say < 10s
Additional info:
This was intentional in the design: we didn't want risks to continuously re-evaluate or to overwhelm the monitoring stack. However, we didn't anticipate that we'd have a long-standing pile of risks, or realize how confusing the user experience would be. We intend to work around this in the deployed fleet by converting older risks from `type: promql` to `type: Always`, avoiding the evaluation period but preserving the notification. While this may lead customers to believe they're exposed to a risk they may not be, as long as the set of outstanding risks to the latest version is limited to no more than one, it's likely no one will notice. All 4.10 and 4.11 clusters currently have a clear path toward a relatively recent 4.10.z or 4.11.z with no more than one risk to be evaluated.
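While the evaluation catches up, the set of conditional updates and their associated risks can be inspected directly; a hedged sketch using standard oc commands against the ClusterVersion object:
# Hedged sketch: show recommended and not-recommended updates, plus the raw
# conditionalUpdates the CVO is still evaluating.
oc adm upgrade --include-not-recommended
oc get clusterversion version -o jsonpath='{range .status.conditionalUpdates[*]}{.release.version}{"\t"}{.risks[*].name}{"\n"}{end}'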
Description of problem:
ClusterOperator status gets updated when the conditions are re-ordered. There doesn't seem to be any change to the conditions except the reordering.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
kubectl get clusteroperator monitoring -oyaml --watch
Actual results:
status:
  conditions:
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    message: Successfully rolled out the stack.
    reason: RollOutDone
    status: "True"
    type: Available
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    message: 'Prometheus is running without persistent storage which can lead to data loss during upgrades and cluster disruptions. Please refer to the official documentation to see how to configure storage for Prometheus: https://docs.openshift.com/container-platform/4.8/monitoring/configuring-the-monitoring-stack.html'
    reason: PrometheusDataPersistenceNotConfigured
    status: "False"
    type: Degraded
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    status: "True"
    type: Upgradeable
Expected results:
I would have expected no update, since nothing changed.
status:
  conditions:
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    status: "True"
    type: Upgradeable
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    message: Successfully rolled out the stack.
    reason: RollOutDone
    status: "True"
    type: Available
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    status: "False"
    type: Progressing
  - lastTransitionTime: "2022-08-25T23:39:59Z"
    message: 'Prometheus is running without persistent storage which can lead to data loss during upgrades and cluster disruptions. Please refer to the official documentation to see how to configure storage for Prometheus: https://docs.openshift.com/container-platform/4.8/monitoring/configuring-the-monitoring-stack.html'
    reason: PrometheusDataPersistenceNotConfigured
    status: "False"
    type: Degraded
Additional info:
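One way to confirm that only the ordering changed is to compare the conditions after sorting them by type; a small sketch with oc and jq:
# Hedged sketch: print the conditions sorted by type so two updates that differ
# only in ordering produce identical output.
oc get clusteroperator monitoring -o json | jq '.status.conditions | sort_by(.type)'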
Description of problem:
Pipeline Repository (Pipeline-as-code) list never shows an Event type.
Version-Release number of selected component (if applicable):
4.9+
How reproducible:
Always
Steps to Reproduce:
Actual results:
Pipeline Repository list shows a column Event type but no value.
Expected results:
Pipeline Repository list should show the Event type from the matching Pipeline Run.
Similar to the Pipeline Run Details page based on the label.
Additional info:
The list page packages/pipelines-plugin/src/components/repository/list-page/RepositoryRow.tsx renders obj.metadata.namespace as event type.
I believe we should show the Pipeline Run event type instead. packages/pipelines-plugin/src/components/repository/RepositoryLinkList.tsx uses
{plrLabels[RepositoryLabels[RepositoryFields.EVENT_TYPE]]} to render it.
Also, the Pipeline Repository details page tries to render the Branch and Event type from the Repository resource. My research says these properties don't exist on the Repository resource. The code should be removed from the Repository details page.
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
1. The debugging endpoint /debug/pprof is exposed over the unauthenticated 10251 port
2. This debugging endpoint can potentially leak sensitive information
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-6011. The following is the description of the original issue:
—
Description of problem:
The 4.12.0 openshift-client package has kubectl 1.24.1 bundled in it when it should have 1.25.x
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Very
Steps to Reproduce:
1. Download and unpack https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux-4.12.0.tar.gz
2. ./kubectl version
Actual results:
# ./kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"1928ac4250660378a7d8c3430478dfe77977cb2a", GitTreeState:"clean", BuildDate:"2022-12-07T05:08:22Z", GoVersion:"go1.18.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Expected results:
kubectl version 1.25.x
Additional info:
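A quick sketch for verifying which kubectl client version a given openshift-client tarball bundles (URL copied from the steps above; the pipeline is illustrative):
# Hedged sketch: download the tarball, extract only kubectl, and print its client version.
curl -sSL https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable/openshift-client-linux-4.12.0.tar.gz | tar -xzf - kubectl
./kubectl version --client -o yaml | grep gitVersion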
Description of problem:
In OCP 4.9, the package-server-manager was introduced to manage the packageserver CSV. However, when OCP 4.8 is upgraded to 4.9, the packageserver stays stuck at v0.17.0 (the version in OCP 4.8) and v0.18.3 (the version in OCP 4.9) does not roll out.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Install OCP 4.8
2. Upgrade to OCP 4.9
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2022-08-31-160214   True        True          50m     Working towards 4.9.47: 619 of 738 done (83% complete)
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.47    True        False         4m26s   Cluster version is 4.9.47
Actual results:
Check packageserver CSV. It's in v0.17.0:
$ oc get csv
NAME            DISPLAY          VERSION   REPLACES   PHASE
packageserver   Package Server   0.17.0               Succeeded
Expected results:
packageserver CSV is at 0.18.3
Additional info:
packageserver CSV version in 4.8: https://github.com/openshift/operator-framework-olm/blob/release-4.8/manifests/0000_50_olm_15-packageserver.clusterserviceversion.yaml#L12
packageserver CSV version in 4.9: https://github.com/openshift/operator-framework-olm/blob/release-4.9/pkg/manifests/csv.yaml#L8
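A hedged sketch for checking which packageserver version is actually rolled out after the upgrade (namespace and deployment name assume the standard OLM layout):
# Hedged sketch: print the packageserver CSV version and the package-server-manager status.
oc -n openshift-operator-lifecycle-manager get csv packageserver -o jsonpath='{.spec.version}{"\n"}'
oc -n openshift-operator-lifecycle-manager get deployment package-server-manager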
This is a clone of issue OCPBUGS-2579. The following is the description of the original issue:
—
On disabling the helm and import-from-samples actions in customization, Helm Charts and Samples options are still enabled in topology add actions.
Under
spec:
  customization:
    addPage:
      disabledActions:
Insert snippet of Add page actions. (attached screenshot for reference)
Actual result:
Helm Charts and Samples options are still enabled in topology add actions even after disabling them in customization
Expected result:
Helm Charts and Samples options should be disabled (hidden)
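For reference, a hedged sketch of the customization that reproduces this; the action IDs come from the description above and the patch shape is an illustration against the console operator config:
# Hedged sketch: disable the two add-page actions on the console operator config.
oc patch console.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"customization":{"addPage":{"disabledActions":["helm","import-from-samples"]}}}}'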
This is a clone of issue OCPBUGS-7438. The following is the description of the original issue:
—
Description of problem:
The egress service nodeSelector parsing does not take into account wrong values that cause errors (such as "name part must consist of alphanumeric characters"), and the controller does not handle them gracefully given a bad input. When a bad input is given, it should log an error and ignore the service.
Version-Release number of selected component (if applicable):
How reproducible:
Create an egress service with a bad nodeSelector: "{"nodeSelector":{"matchLabels":{"a:b": "c&"}}}". The ovnkube-master controller does not handle it gracefully.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
The issue was found while testing HOSTEDCP-400 and HOSTEDCP-401.
Hypershift operator installed with flags:
--platform-monitoring=operator-only --enable-uwm-telemetry-remote-write=true --metrics-set=telemetry
Service monitors and pod monitors in the control plane:
[jiezhao@cube hypershift]$ oc get servicemonitor -n clusters-jz-test
NAME                                  AGE
catalog-operator                      45m
cluster-version-operator              45m
etcd                                  46m
kube-apiserver                        46m
kube-controller-manager               45m
monitor-multus-admission-controller   43m
monitor-ovn-master-metrics            43m
node-tuning-operator                  45m
olm-operator                          45m
openshift-apiserver                   45m
openshift-controller-manager          45m
[jiezhao@cube hypershift]$ oc get podmonitor -n clusters-jz-test
NAME                              AGE
cluster-image-registry-operator   46m
controlplane-operator             47m
hosted-cluster-config-operator    46m
ignition-server                   47m
In OCP management web console, go to Observe->Targets:
1. Status of service monitor 'monitor-multus-admission-controller' is Down, error: Scraped failed: server returned HTTP status 401 Unauthorized. It doesn't have cluster id in target labels.
2. Target of pod monitor 'cluster-image-registry-operator' is missing, not shown.
Description of problem:
When creating a pod with an additional network that contains a `spec.config.ipam.exclude` range, any address within the excluded range is still iterated while searching for a suitable IP candidate. As a result, pod creation times out when large exclude ranges are used.
Version-Release number of selected component (if applicable):
How reproducible:
with big exclude ranges, 100%
Steps to Reproduce:
1. create network-attachment-definition with a large range: $ cat <<EOF| oc apply -f - apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: nad-w-excludes spec: config: |- { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "ens3", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "fd43:01f1:3daa:0baa::/64", "exclude": [ "fd43:01f1:3daa:0baa::/100" ], "log_file": "/tmp/whereabouts.log", "log_level" : "debug" } } EOF 2. create a pod with the network attached: $ cat <<EOF|oc apply -f - apiVersion: v1 kind: Pod metadata: name: pod-with-exclude-range annotations: k8s.v1.cni.cncf.io/networks: nad-w-excludes spec: containers: - name: pod-1 image: openshift/hello-openshift EOF 3. check pod status, event log and whereabouts logs after a while: $ oc get pods NAME READY STATUS RESTARTS AGE pod-with-exclude-range 0/1 ContainerCreating 0 2m23s $ oc get events <...> 6m39s Normal Scheduled pod/pod-with-exclude-range Successfully assigned default/pod-with-exclude-range to <worker-node> 6m37s Normal AddedInterface pod/pod-with-exclude-range Add eth0 [10.129.2.49/23] from openshift-sdn 2m39s Warning FailedCreatePodSandBox pod/pod-with-exclude-range Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded $ oc debug node/<worker-node> - tail /host/tmp/whereabouts.log Starting pod/<worker-node>-debug ... To use host binaries, run `chroot /host` 2022-10-27T14:14:50Z [debug] Finished leader election 2022-10-27T14:14:50Z [debug] IPManagement: {fd43:1f1:3daa:baa::1 ffffffffffffffff0000000000000000} , <nil> 2022-10-27T14:14:59Z [debug] Used defaults from parsed flat file config @ /etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.conf 2022-10-27T14:14:59Z [debug] ADD - IPAM configuration successfully read: {Name:macvlan-net Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[fd43:01f1:3daa:0baa::/80] DNS: {Nameservers:[] Domain: Search:[] Options:[]} Range:fd43:1f1:3daa:baa::/64 RangeStart:fd43:1f1:3daa:baa:: RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LeaderLeaseDuration:1500 LeaderRenewDeadline:1000 LeaderRetryPeriod:500 LogFile:/tmp/whereabouts.log LogLevel:debug OverlappingRanges:true SleepForRace:0 Gateway:<nil> Kubernetes: {KubeConfigPath:/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath:PodName:pod-with-exclude-range PodNamespace:default} 2022-10-27T14:14:59Z [debug] Beginning IPAM for ContainerID: f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82 2022-10-27T14:14:59Z [debug] Started leader election 2022-10-27T14:14:59Z [debug] OnStartedLeading() called 2022-10-27T14:14:59Z [debug] Elected as leader, do processing 2022-10-27T14:14:59Z [debug] IPManagement - mode: 0 / containerID:f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82 / podRef: default/pod-with-exclude-range 2022-10-27T14:14:59Z [debug] IterateForAssignment input >> ip: fd43:1f1:3daa:baa:: | ipnet: {fd43:1f1:3daa:baa:: ffffffffffffffff0000000000000000} | first IP: fd43:1f1:3daa:baa::1 | last IP: fd43:1f1:3daa:baa:ffff:ffff:ffff:ffff
Actual results:
Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Expected results:
additional network gets attached to the pod
Additional info:
Hypershift does not use kubernetes.default.svc as the api audience on the KAS. It is set to the URL of the OIDC provider. ROSA also does this so I don't imagine this test passes for it either at the moment.
Explicit setting of the Audiences on the TokenRequest is not required. If not set, it will just default to the audiences configured in the KAS.
This causes a conformance failure for HyperShift:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-hypershift-main-periodics-4.13-conformance-aws-ovn/1620240601058381824
This is a clone of issue OCPBUGS-11636. The following is the description of the original issue:
—
Description of problem:
ACLs are disabled for all newly created S3 buckets; this causes all OCP installs to fail because the bootstrap ignition cannot be uploaded:
level=info msg=Creating infrastructure resources...
level=error
level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs
level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4=
level=error
level=error msg= with aws_s3_bucket_acl.ignition,
level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition":
level=error msg= 62: resource "aws_s3_bucket_acl" ignition {
level=error
level=error msg=failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1
level=error
level=error msg=Error: error creating S3 bucket ACL for yunjiang-acl413-4dnhx-bootstrap: AccessControlListNotSupported: The bucket does not allow ACLs
level=error msg= status code: 400, request id: HTB2HSH6XDG0Q3ZA, host id: V6CrEgbc6eyfJkUbLXLxuK4/0IC5hWCVKEc1RVonSbGpKAP1RWB8gcl5dfyKjbrLctVlY5MG2E4=
level=error
level=error msg= with aws_s3_bucket_acl.ignition,
level=error msg= on main.tf line 62, in resource "aws_s3_bucket_acl" "ignition":
level=error msg= 62: resource "aws_s3_bucket_acl" ignition {
Version-Release number of selected component (if applicable):
4.11+
How reproducible:
Always
Steps to Reproduce:
1. Create a cluster via IPI
Actual results:
Install fails
Expected results:
Install succeeds
Additional info:
Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 - https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-error-responses.html - After you apply the bucket owner enforced setting for Object Ownership, ACLs are disabled.
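As a hedged illustration of the new default, the bucket's Object Ownership setting can be checked with the AWS CLI; when it reports BucketOwnerEnforced, ACL requests such as the installer's aws_s3_bucket_acl will be rejected. The bucket name below is a placeholder:
# Hedged sketch: inspect Object Ownership on the bootstrap bucket.
aws s3api get-bucket-ownership-controls --bucket <infra-id>-bootstrap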
This is a clone of issue OCPBUGS-3358. The following is the description of the original issue:
—
Description of problem:
Due to changes in BUILD-407 which merged into release-4.12, we have a permafailing test `e2e-aws-csi-driver-no-refreshresource` and are unable to merge subsequent pull requests.
Version-Release number of selected component (if applicable):
How reproducible: Always
Steps to Reproduce:
1. Bring up cluster using release-4.12 or release-4.13 or master branch
2. Run `e2e-aws-csi-driver-no-refreshresource` test
3.
Actual results:
I1107 05:18:31.131666 1 mount_linux.go:174] Cannot run systemd-run, assuming non-systemd OS
I1107 05:18:31.131685 1 mount_linux.go:175] systemd-run failed with: exit status 1
I1107 05:18:31.131702 1 mount_linux.go:176] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
Expected results:
Test should pass
Additional info:
This is a clone of issue OCPBUGS-948. The following is the description of the original issue:
—
Description of problem:
OLM is setting the "openshift.io/scc" label to "anyuid" on several namespaces:
https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L23
https://github.com/openshift/operator-framework-olm/blob/d817e09c2565b825afd8bfc9bb546eeff28e47e7/manifests/0000_50_olm_00-namespace.yaml#L8
This label has no effect and will lead to confusion. It should be set to the empty string for now (removing it entirely will have no effect on upgraded clusters because the CVO does not remove deleted labels, so the next best thing is to clear the value). For bonus points, OLM should remove the label entirely from the manifest and add migration logic to remove the existing label from these namespaces to handle upgraded clusters that already have it.
Version-Release number of selected component (if applicable):
Not sure how long this has been an issue, but fixing it in 4.12+ should be sufficient.
How reproducible:
always
Steps to Reproduce:
1. install cluster
2. examine namespace labels
Actual results:
label is present
Expected results:
Ideally the label should not be present, but in the short term setting it to the empty string is the quick fix and is better than nothing.
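A hedged sketch of the short-term fix on an upgraded cluster, clearing the label value rather than removing the key (the namespace shown is one of the affected OLM namespaces):
# Hedged sketch: show the current labels, then set the value to the empty string.
oc get ns openshift-operator-lifecycle-manager --show-labels
oc label ns openshift-operator-lifecycle-manager openshift.io/scc= --overwrite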
CI is failing due to the updated pod security admission controller. We need to update the console test pods with the correct security values.
Error: Command failed: echo '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"test-jxlpt-event-test-pod","namespace":"test-jxlpt"},"spec":{"containers":[{"name":"httpd","image":"image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest"}]}}' | kubectl create -n test-jxlpt -f -
Error from server (Forbidden): error when creating "STDIN": pods "test-jxlpt-event-test-pod" is forbidden: violates PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "httpd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "httpd" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "httpd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "httpd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
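A hedged sketch of the same test pod with a security context that satisfies the restricted profile the error message asks for (name, namespace, and image are copied from the failing command above; the field values simply follow the violations listed in the error):
cat <<'EOF' | kubectl create -n test-jxlpt -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-jxlpt-event-test-pod
spec:
  containers:
  - name: httpd
    image: image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault
EOF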
This is a clone of issue OCPBUGS-3304. The following is the description of the original issue:
—
Assisted-service can use only one mirror of the release image. In the install-config, the user may specify multiple matching mirrors. Currently the last matching mirror is the one used by assisted-service. This is confusing; we should use the first matching one instead.
Description of problem:
Deployed a HyperShift cluster with a recent multi-arch build. The storage cluster operator has become available, but it reports the warning message below:
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_role.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" is forbidden: user "system:serviceaccount:openshift-cluster-csi-drivers:powervs-block-csi-driver-operator" (groups=["system:serviceaccounts" "system:serviceaccounts:openshift-cluster-csi-drivers" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: {APIGroups:["csi.storage.k8s.io"], Resources:["csinodeinfos"], Verbs:["get" "list" "watch"]}
PowerVSBlockCSIDriverOperatorCRDegraded: PowerVSBlockCSIDriverStaticResourcesControllerDegraded: "rbac/attacher_binding.yaml" (string): clusterroles.rbac.authorization.k8s.io "ibm-powervs-block-external-attacher-role" not found
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy 4.12.0-0.nightly-multi-2022-09-01-220105 nightly build
Actual results:
Expected results:
Additional info:
Possibly a regression introduced by OCPBUGS-7898, but a 4.12.14 cluster with None infrastructure submitted the following Insights for the cloud-controller-manager ClusterOperator:
2023-05-05T00:08:07Z Upgradeable=False AsExpected:
Version-Release number of selected component (if applicable):
4.12.14
How reproducible:
Unclear.
Steps to Reproduce:
1. Run a 4.12.14 cluster, for some unclear subset of cluster configuration.
2. $ oc get -o json clusteroperator cloud-controller-manager | jq '.status.conditions[] | select(.type == "Upgradeable")'
Actual results:
False with AsExpected and an empty message.
Expected results:
True with AsExpected, or False with a different reason and a message.
Test output:
=== RUN TestAll/serial/TestCanaryRoute
canary_test.go:78: failed to create pod openshift-ingress-canary/canary-route-check: pods "canary-route-check" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "curl" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "curl" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "curl" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "curl" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
This is a clone of issue OCPBUGS-11333. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10690. The following is the description of the original issue:
—
Description of problem:
According to PR https://github.com/openshift/cluster-monitoring-operator/pull/1824, the startupProbe for the UWM Prometheus and the platform Prometheus should be 1 hour, but the startupProbe for the UWM Prometheus is still 15m after UWM is enabled. The platform Prometheus does not have this issue; its startupProbe is increased to 1 hour.
$ oc -n openshift-user-workload-monitoring get pod prometheus-user-workload-0 -oyaml | grep startupProbe -A20
startupProbe:
  exec:
    command:
    - sh
    - -c
    - if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi
  failureThreshold: 60
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 3
...
$ oc -n openshift-monitoring get pod prometheus-k8s-0 -oyaml | grep startupProbe -A20
startupProbe:
  exec:
    command:
    - sh
    - -c
    - if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi
  failureThreshold: 240
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 3
Version-Release number of selected component (if applicable):
4.13.0-0.nightly-2023-03-19-052243
How reproducible:
always
Steps to Reproduce:
1. enable UWM, check startupProbe for UWM prometheus/platform prometheus 2. 3.
Actual results:
startupProbe for UWM prometheus is still 15m
Expected results:
startupProbe for UWM prometheus should be 1 hour
Additional info:
Since the startupProbe for the platform Prometheus is already increased to 1 hour and there is no similar bug for the UWM Prometheus, closing the issue as won't-fix is OK.
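A hedged sketch for comparing the effective startup window (failureThreshold x periodSeconds) between the UWM and platform Prometheus pods, assuming the usual prometheus-operator labels and container name:
# Hedged sketch: print the startup window in seconds for both Prometheus instances.
for ns in openshift-user-workload-monitoring openshift-monitoring; do
  oc -n "$ns" get pod -l app.kubernetes.io/name=prometheus -o json \
    | jq -r --arg ns "$ns" '.items[0].spec.containers[] | select(.name=="prometheus") | "\($ns): \(.startupProbe.failureThreshold * .startupProbe.periodSeconds)s"'
done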
Description of problem:
Deployed an IPI cluster on a multi-datacenter/cluster vSphere env; the installer failed for some reason, then we tried to destroy the cluster and found that one VM folder under one of the datacenters is not deleted.
When the installer exits, the following objects are attached with tag jima15b-cq7z7:
sh-4.4$ govc tags.attached.ls jima15b-cq7z7 | xargs govc ls -L
/IBMCloud/vm/jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-west-us-west-1a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-2a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-3a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-1a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-bootstrap
sh-4.4$ ./openshift-install destroy cluster --dir ipi_missingzones/
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-west-us-west-1a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-2a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-3a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-1a
INFO Destroyed VirtualMachine=jima15b-cq7z7-bootstrap
INFO Destroyed Folder=jima15b-cq7z7
INFO Deleted Tag=jima15b-cq7z7
INFO Deleted TagCategory=openshift-jima15b-cq7z7
INFO Time elapsed: 55s
After destroying the cluster, folder jima15b-cq7z7 is still there, not deleted:
sh-4.4$ govc ls /datacenter-2/vm/ | grep jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-18-141547
How reproducible:
Always when the installer fails to create the infrastructure; it works when the installation is successful.
Steps to Reproduce:
1. Deploy an IPI cluster on a vSphere env configured with multiple datacenters/clusters
2. The installer fails to create the infrastructure for some reason
3. Destroy the cluster
4. One folder is not deleted
Actual results:
one folder is not deleted
Expected results:
All infrastructures created by installer should be removed
Additional info:
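A hedged sketch (reusing the infra ID and commands from the report above) to verify after destroy that no tagged objects or per-datacenter folders are left behind:
# Hedged sketch: anything printed here is a leftover the destroyer missed.
infra_id=jima15b-cq7z7
govc tags.attached.ls "$infra_id" | xargs -r govc ls -L
govc ls /datacenter-2/vm/ | grep "$infra_id"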
The 4.12 builds fail all the time. The last successful build was from May 31.
Error:
# Root Suite.Entire pipeline flow from Builder page "before all" hook for "Background Steps" AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it.
Full error:
Running: e2e/pipeline-ci.feature (1 of 1) Couldn't determine Mocha version Logging in as kubeadmin Installing operator: "Red Hat OpenShift Pipelines" Operator Red Hat OpenShift Pipelines was not yet installed. Performing Pipelines post-installation steps Verify the CRD's for the "Red Hat OpenShift Pipelines" 1) "before all" hook for "Background Steps" Deleting "" namespace 0 passing (3m) 1 failing 1) Entire pipeline flow from Builder page "before all" hook for "Background Steps": AssertionError: Timed out retrying after 80000ms: Expected to find element: `[data-test-id="PipelineResource"]`, but never found it. Because this error occurred during a `before all` hook we are skipping all of the remaining tests. at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.waitForCRDs (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17156:77) at performPostInstallationSteps (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17242:21) at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17268:5) at ../../dev-console/integration-tests/support/pages/functions/installOperatorOnCluster.ts.exports.verifyAndInstallPipelinesOperator (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:17272:13) at Context.eval (https://console-openshift-console.apps.ci-op-issiwkzy-bc347.XXXXXXXXXXXXXXXXXXXXXX/__cypress/tests?p=support/commands/index.ts:20848:13) [mochawesome] Report JSON saved to /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress_report_pipelines.json (Results) ┌────────────────────────────────────────────────────────────────────────────────────────────────┐ │ Tests: 13 │ │ Passing: 0 │ │ Failing: 1 │ │ Pending: 0 │ │ Skipped: 12 │ │ Screenshots: 1 │ │ Video: true │ │ Duration: 2 minutes, 58 seconds │ │ Spec Ran: e2e/pipeline-ci.feature │ └────────────────────────────────────────────────────────────────────────────────────────────────┘ (Screenshots) - /go/src/github.com/openshift/console/frontend/gui_test_screenshots/cypress/scree (1280x720) nshots/e2e/pipeline-ci.feature/Background Steps -- before all hook (failed).png (Video) - Started processing: Compressing to 32 CRF - Finished processing: /go/src/github.com/openshift/console/frontend/gui_test_scre (16 seconds) enshots/cypress/videos/e2e/pipeline-ci.feature.mp4 Compression progress: 100% ==================================================================================================== (Run Finished) Spec Tests Passing Failing Pending Skipped ┌────────────────────────────────────────────────────────────────────────────────────────────────┐ │ ✖ e2e/pipeline-ci.feature 02:58 13 - 1 - 12 │ └────────────────────────────────────────────────────────────────────────────────────────────────┘ ✖ 1 of 1 failed (100%) 02:58 13 - 1 - 12
See also
This is a clone of issue OCPBUGS-8741. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5889. The following is the description of the original issue:
—
Description of problem:
Customer running a cluster with the following config: 4.10.23, AWS/IPI, OVNKubernetes.
Observed that in a namespace with networkpolicy rules enabled, and a policy for allow-from-same-namespace, pods will have different behaviors when calling service IPs hosted in that same namespace.
Example:
Deployment1 with two pods (A/B) exists in namespace <EXAMPLE>
Deployment2 with 1 pod hosting a service and route exists in the same namespace
Pod A will unexpectedly stop being able to call the service IP of deployment2; Pod B will never lose access to calling the service IP of deployment2.
Pod A remains able to call out through the br-ex interface, target the ROUTE address, and reach the deployment2 pod via haproxy (this never breaks).
Pod A remains able to reach the local gateway on the node.
The host node for Pod A is able to reach the service IP of deployment2 and remains able to do so, even while Pod A is impacted.
The issue can be mitigated by applying a label or annotation to Pod A, which immediately allows it to reach internal service IPs again within the namespace.
I suspect that the issue has to do with the networkpolicy rules failing to stay updated on the pod object, and the pod needs to be 'refreshed' (label appendation/other update) to force the pod to 'remember' that it is allowed to call peers within the namespace.
Additional relevant data:
- pods affected throughout the cluster; no specific project/service/deployment/application
- pods ride on different nodes all the time (no one node affected)
- pods with the fail condition are on the same node as other pods without the issue
- multiple namespaces see this problem
- all namespaces are using similar networkpolicy isolation and allow-from-same-namespace rulesets (which matches our documentation on syntax)
Version-Release number of selected component (if applicable):
4.10.23
How reproducible:
Every time, though it is unclear what the trigger is; pods will be functional and several hours/days later will stop being able to talk to peer services.
Steps to Reproduce:
1. Deploy a pod with at least two replicas in a namespace with an allow-from-same-namespace network policy
2. Deploy a different service and route (for example an httpd instance) in the same namespace
3. Observe that one of the two pods may fail to reach the service IP after some time
4. Apply an annotation to the pod and it is immediately able to reach services again
Actual results:
pods intermittently fail to reach internal service addresses, but are able to be interacted with otherwise, and can reach upstream/external addresses including routes on cluster.
Expected results:
pods should not lose access to service network peers.
Additional info:
see next comments for relevant uploads/sosreports and inspects.
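For completeness, a hedged sketch of the mitigation described above; the annotation key is purely illustrative, the point is that any metadata update on the affected pod restores connectivity:
# Hedged sketch: touch the affected pod's metadata so its policy flows are recomputed.
oc -n <namespace> annotate pod <pod-a> refresh-timestamp="$(date +%s)" --overwrite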
This is a clone of issue OCPBUGS-14620. The following is the description of the original issue:
—
Description of problem:
When installing a HyperShift cluster into ap-southeast-3 (currently only available in the production environment), the install never succeeds because the hosted KCM pods are stuck in CrashLoopBackOff.
Version-Release number of selected component (if applicable):
4.12.18
How reproducible:
100%
Steps to Reproduce:
1. Install a HyperShift Cluster in ap-southeast-3 on AWS
Actual results:
kube-controller-manager-54fc4fff7d-2t55x   1/2   CrashLoopBackOff   7 (2m49s ago)   16m
kube-controller-manager-54fc4fff7d-dxldc   1/2   CrashLoopBackOff   7 (93s ago)     16m
kube-controller-manager-54fc4fff7d-ww4kv   1/2   CrashLoopBackOff   7 (21s ago)     15m
With selected "important" logs:
I0606 15:16:25.711483 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="ConfigMap" apiVersion="v1" type="Normal" reason="LeaderElection" message="kube-controller-manager-54fc4fff7d-ww4kv_6dbab916-b4bf-447f-bbb2-5037864e7f78 became leader"
I0606 15:16:25.711498 1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kube-controller-manager-54fc4fff7d-ww4kv_6dbab916-b4bf-447f-bbb2-5037864e7f78 became leader"
W0606 15:16:25.741417 1 plugins.go:132] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws
I0606 15:16:25.741763 1 aws.go:1279] Building AWS cloudprovider
F0606 15:16:25.742096 1 controllermanager.go:245] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": not a valid AWS zone (unknown region): ap-southeast-3a
Expected results:
The KCM pods are Running
Description of problem:
The name of "Role" on Compute -> Nodes page should update to "Roles" to match the name in the CLI
Compared with other resources, the title of the column should match the name in the CLI.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP with the CLI and use the command below to get node information
$ oc get nodes
2. Go to Compute -> nodes page, check the column name of "Role"
3.
Actual results:
The CLI returns information as shown below, and the title of the column is "ROLES"
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-145-18.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-145-203.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-163-205.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-169-118.us-east-2.compute.internal   Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-198-234.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-212-34.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d
But in the UI, the name of the column is "Role", which is incorrect. (Attached)
Expected results:
The title of "Role" should update to "Roles"
Additional info:
For a disconnected installation, we should not be able to provision machines successfully with publicIP:true. This was the behavior up to 4.11 and until around the 17th Aug nightly of 4.12, but it has since started allowing creation of machines with publicIP:true set in the machineset.
Issue reproduced on cluster version 4.12.0-0.nightly-2022-08-23-223922.
It is always reproducible.
Steps:
Create a machineset using YAML with
{"spec":{"providerSpec":{"value":{"publicIP": true}}}}
The machineset is created successfully and the machine is provisioned successfully.
This seems to be a regression bug; refer to https://bugzilla.redhat.com/show_bug.cgi?id=1889620
Here is the must gather log - https://drive.google.com/file/d/1UXjiqAx7obISTxkmBsSBuo44ciz9HD1F/view?usp=sharing
Here is a successful test run for 4.11, for exactly the same profile, where machine creation failed with an InvalidConfiguration error: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Runner/575822/console
We can confirm this is a disconnected cluster from the output below; there are a lot of mirrors in use:
oc get ImageContentSourcePolicy image-policy-aosqe -o yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  creationTimestamp: "2022-08-24T09:08:47Z"
  generation: 1
  name: image-policy-aosqe
  resourceVersion: "34648"
  uid: 20e45d6d-e081-435d-b6bb-16c4ca21c9d6
spec:
  repositoryDigestMirrors:
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/olmqe
    source: quay.io/olmqe
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshifttest
    source: quay.io/openshifttest
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshift-qe-optional-operators
    source: quay.io/openshift-qe-optional-operators
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.stage.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: brew.registry.redhat.io
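A hedged sketch for checking whether a machineset in this cluster requests a public IP; the providerSpec path matches the snippet above and the machineset name is a placeholder:
# Hedged sketch: print the publicIP setting from a machineset's provider spec.
oc -n openshift-machine-api get machineset <machineset-name> \
  -o jsonpath='{.spec.template.spec.providerSpec.value.publicIP}{"\n"}'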
In multinode deployments we can check the node objects in the kube API, since we can't really validate hosts that are not part of the cluster, only the one the controller is running on.
We should also validate the IP of the host the controller is running on.
In case the IP was changed, log it.
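A hedged sketch of that check from the API side, listing each node's internal IP so the controller host's address can be compared against it:
# Hedged sketch: print node names and their InternalIP addresses.
oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'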
This is a clone of issue OCPBUGS-10888. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10887. The following is the description of the original issue:
—
Description of problem:
Following https://bugzilla.redhat.com/show_bug.cgi?id=2102765 respectively https://issues.redhat.com/browse/OCPBUGS-2140 problems with OpenID Group sync have been resolved. Yet the problem documented in https://bugzilla.redhat.com/show_bug.cgi?id=2102765 still does exist and we see that Groups that are being removed are still part of the chache in oauth-apiserver, causing a panic of the respective components and failures during login for potentially affected users. So in general, it looks like that oauth-apiserver cache is not properly refreshing or handling the OpenID Groups being synced. E1201 11:03:14.625799 1 runtime.go:76] Observed a panic: interface conversion: interface {} is nil, not *v1.Group goroutine 3706798 [running]: k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:103 +0xb0 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:80 +0x2a k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:89 +0x250 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 github.com/openshift/library-go/pkg/oauth/usercache.(*GroupCache).GroupsFor(0xc00081bf18?, {0xc000c8ac03?, 0xc001400360?}) github.com/openshift/library-go@v0.0.0-20211013122800-874db8a3dac9/pkg/oauth/usercache/groups.go:47 +0xe7 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).processGroups(0xc0002c8880, {0xc0005d4e60, 0xd}, {0xc000c8ac03, 0x7}, 0x1?) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:101 +0xb5 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).UserFor(0xc0002c8880, {0x20f3c40, 0xc000e18bc0}) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:83 +0xf4 github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).login(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0xc0015d8200, 0xc001438140?, {0xc0000e7ce0, 0x150}) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:209 +0x74f github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).ServeHTTP(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0x0?) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:180 +0x74a net/http.(*ServeMux).ServeHTTP(0x1c9dda0?, {0x20eebb0, 0xc00041b058}, 0xc0015d8200) net/http/server.go:2462 +0x149 github.com/openshift/oauth-server/pkg/server/headers.WithRestoreAuthorizationHeader.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:27 +0x10f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0005e0280?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authorization.go:64 +0x498 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x2f6cea0?, {0x20eebb0?, 0xc00041b058?}, 0x3?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/server/filters/maxinflight.go:187 +0x2a4 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x11?, {0x20eebb0?, 0xc00041b058?}, 0x1aae340?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/impersonation.go:50 +0x21c net/http.HandlerFunc.ServeHTTP(0xc000d52120?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0015d8100?, {0x20eebb0?, 0xc00041b058?}, 0xc000531930?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1({0x7fae682a40d8?, 0xc00041b048}, 0x9dbbaa?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:111 +0x549 net/http.HandlerFunc.ServeHTTP(0xc00003def0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfd00?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authentication.go:80 +0x8b9 net/http.HandlerFunc.ServeHTTP(0x20f0f20?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:88 +0x46b net/http.HandlerFunc.ServeHTTP(0xc0019f5890?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc000848764?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithCORS.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/server/filters/cors.go:75 +0x10b net/http.HandlerFunc.ServeHTTP(0xc00149a380?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc0008487d0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:108 +0xa2 created by k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:94 +0x2cc goroutine 3706802 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x19eb780?, 0xc001206e20}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:74 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0xc0016aec60, 0x1, 0x1560f26?}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:48 +0x75 panic({0x19eb780, 0xc001206e20}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0005047c8, {0x20eecd0?, 0xc0010fae00}, 0xdf8475800?) k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:114 +0x452 k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_deadline.go:101 +0x494 net/http.HandlerFunc.ServeHTTP(0xc0016af048?, {0x20eecd0?, 0xc0010fae00?}, 0xc0000bc138?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/server/filters/waitgroup.go:59 +0x177 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x7fae705daff0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuditAnnotations.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69c00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit_annotations.go:37 +0x230 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69b00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/warning.go:35 +0x2bb net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20eecd0?, 0xc0010fae00?}, 0xd?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x20eecd0, 0xc0010fae00}, 0x0?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/cachecontrol.go:31 +0x126 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/httplog.WithLogging.func1({0x20ef480?, 0xc001c20620}, 0xc000e69a00) k8s.io/apiserver@v0.22.2/pkg/server/httplog/httplog.go:103 +0x518 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x20ef480, 0xc001c20620}, 0xc000e69900) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/requestinfo.go:39 +0x316 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3f70?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x20ef480, 0xc001c20620}, 0xc000e69800) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_received_time.go:38 +0x27e net/http.HandlerFunc.ServeHTTP(0x419e2c?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3e40?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x20ef480?, 0xc001c20620?}, 0xc0004ff600?) k8s.io/apiserver@v0.22.2/pkg/server/filters/wrap.go:74 +0xb1 net/http.HandlerFunc.ServeHTTP(0x1c05260?, {0x20ef480?, 0xc001c20620?}, 0x8?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/with_auditid.go:66 +0x40d net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20ef480?, 0xc001c20620?}, 0xd?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithPreserveAuthorizationHeader.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:16 +0xe8 net/http.HandlerFunc.ServeHTTP(0xc0016af9d0?, {0x20ef480?, 0xc001c20620?}, 0x16?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithStandardHeaders.func1({0x20ef480, 0xc001c20620}, 0x4d55c0?) github.com/openshift/oauth-server/pkg/server/headers/headers.go:30 +0x18f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20ef480?, 0xc001c20620?}, 0xc0016afac8?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc00098d622?, {0x20ef480?, 0xc001c20620?}, 0xc000401000?) k8s.io/apiserver@v0.22.2/pkg/server/handler.go:189 +0x2b net/http.serverHandler.ServeHTTP({0xc0019f5170?}, {0x20ef480, 0xc001c20620}, 0xc000e69600) net/http/server.go:2916 +0x43b net/http.(*conn).serve(0xc0002b1720, {0x20f0f58, 0xc0001e8120}) net/http/server.go:1966 +0x5d7 created by net/http.(*Server).Serve net/http/server.go:3071 +0x4db
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.11.13
How reproducible:
- Always
Steps to Reproduce:
1. Install OpenShift Container Platform 4.11
2. Configure OpenID Group Sync (as per https://docs.openshift.com/container-platform/4.11/authentication/identity_providers/configuring-oidc-identity-provider.html#identity-provider-oidc-CR_configuring-oidc-identity-provider)
3. Have users with hundreds of groups
4. Log in and, after a while, remove some Groups from the user in the IDP and from OpenShift Container Platform
5. Try to log in again and see the panic in oauth-apiserver
Actual results:
User is unable to login and oauth pods are reporting a panic as shown above
Expected results:
oauth-apiserver should invalidate the cache quickly to remove potentially invalid references to non-existing groups
Additional info:
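For illustration only, here is a minimal Go sketch of the kind of defensive lookup that would avoid the "interface conversion: interface {} is nil, not *v1.Group" panic shown above. The types are simplified stand-ins, not the actual library-go GroupCache API:

package main

import "fmt"

// Group is a simplified stand-in for the user.openshift.io/v1 Group type.
type Group struct {
	Name  string
	Users []string
}

// groupsFor looks up all cached groups that contain the given user.
// The comma-ok type assertion guards against nil or unexpected entries,
// which is the class of failure behind the panic above.
func groupsFor(byUser map[string][]interface{}, user string) ([]*Group, error) {
	entries := byUser[user]
	groups := make([]*Group, 0, len(entries))
	for _, e := range entries {
		g, ok := e.(*Group) // never assert blindly; stale cache entries may be nil
		if !ok || g == nil {
			return nil, fmt.Errorf("stale cache entry for user %q", user)
		}
		groups = append(groups, g)
	}
	return groups, nil
}

func main() {
	cache := map[string][]interface{}{
		"alice": {&Group{Name: "devs", Users: []string{"alice"}}, nil}, // nil simulates a removed group
	}
	groups, err := groupsFor(cache, "alice")
	fmt.Println(groups, err)
}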
We are seeing windows to linux networking failures, across all PRs.
This is occurring across all clouds.
Example test failure
This seems to have been caused by the downstream merge; the Windows jobs did not pass before the PR was merged.
Job that failed against the downstream merge, but did not prevent it from merging
This is blocking all PRs against the WMCO repo.
Failures like:
$ oc login --token=... Logged into "https://api..." as "..." using the token provided. Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get projects.project.openshift.io)
break login, which tries to gather information before saving the configuration, including a giant project list.
Ideally, login would be able to save the successful login credentials even when the informational gathering runs into difficulties. The informational gathering could possibly also be made conditional (--quiet or similar?) so that expensive gathering can be skipped in use cases where the context is not needed.
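For illustration only, a rough Go sketch of the behaviour suggested above: persist the credentials first and treat the project listing as best-effort. The helpers saveKubeConfig and listProjects and the quiet flag are hypothetical placeholders, not the real oc implementation:

package main

import (
	"errors"
	"fmt"
)

// saveKubeConfig and listProjects are hypothetical placeholders for the real
// login steps; they only exist to illustrate the ordering suggested above.
func saveKubeConfig(token string) error {
	fmt.Println("credentials saved for token of length", len(token))
	return nil
}

func listProjects() ([]string, error) {
	return nil, errors.New("the server was unable to return a response in the time allotted")
}

// completeLogin persists the credentials first and treats the informational
// project gathering as best-effort, so a slow server cannot break login.
func completeLogin(token string, quiet bool) error {
	if err := saveKubeConfig(token); err != nil {
		return err // only a failure to save credentials should fail the login
	}
	if quiet {
		return nil // hypothetical --quiet mode: skip expensive gathering entirely
	}
	if projects, err := listProjects(); err != nil {
		fmt.Println("warning: could not list projects:", err)
	} else {
		fmt.Println("you have access to", len(projects), "projects")
	}
	return nil
}

func main() {
	_ = completeLogin("sha256~example-token", false)
}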
Description of problem:
Network policy code has some problems, most of them races, so they can be difficult to reproduce and verify. Here is the list:
1. All kinds of add/delete port to/from default deny port group failures. Possible symptoms:
   - port should've been added to the default deny port group, but wasn't: connections that should've been dropped are allowed
   - port should've been deleted from the default deny port group, but wasn't: connections that should be allowed are dropped
   - db ops failures when an attempt to add/delete a port to/from the default deny port group fails, e.g. because this operation was already done
2. Default deny port group was overwritten when 2 network policies are created in a namespace at the same time. Can lead to ports not being added to the default deny port group => denied connections will be allowed.
3. Handle error when getting the local pod from the cache fails. Possible symptoms:
   - "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy" log message
   - pod is not added to netpol port groups, network policy is not applied
4. Creating a deleted namespace via ensureNamespaceLocked. Symptoms:
   - namespace was deleted, but the address set is present in the db
5. Policy ACL loglevel update wasn't applied. Possible symptoms:
   - netpol ACL log level isn't set/updated to the namespace loglevel
6. Netpol cleanup failures. Symptoms: network policy failed to be deleted, something is still left in the db, error messages like
   - "failed to destroy network policy"
   - "Rollback of default port groups and acls for policy: %s/%s failed, Unable to ensure namespace for network policy"
7. Concurrent write to sets.String; this will panic, you won't miss it (see the sketch after this list).
8. Retry for the network policy handler after the network policy was deleted. You should see failures saying that some network-policy-related object is nil or doesn't exist, e.g.
   - "peer AddressSet is nil, cannot add <object>"
9. Host-network and completed pods selected by a network policy can produce error logs, no real harm:
   - "Failed to get LSP for pod <namespace>/<name> for networkPolicy %s refetching err"
10. Namespace pod handlers are never stopped; can affect memory usage and look like a memory leak.
11. Add local pod failure, since the netpol port group is not committed to the db yet. The error looks like:
   - "Failed to create *factory.localPodSelector <name>, error: object not found"
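For illustration of item 7, a minimal Go sketch of guarding a shared string set with a mutex so concurrent writes cannot panic. The type is a plain stand-in, not the actual ovn-kubernetes code:

package main

import (
	"fmt"
	"sync"
)

// safeStringSet wraps a plain map-based set with a mutex so that concurrent
// inserts from multiple goroutines cannot corrupt it or panic.
type safeStringSet struct {
	mu    sync.Mutex
	items map[string]struct{}
}

func newSafeStringSet() *safeStringSet {
	return &safeStringSet{items: map[string]struct{}{}}
}

func (s *safeStringSet) Insert(values ...string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, v := range values {
		s.items[v] = struct{}{}
	}
}

func (s *safeStringSet) Len() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return len(s.items)
}

func main() {
	set := newSafeStringSet()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			set.Insert(fmt.Sprintf("port-%d", n)) // safe even under concurrency
		}(i)
	}
	wg.Wait()
	fmt.Println("ports:", set.Len())
}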
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
Example 1
1. Create a network policy with an [in/e]gress selector that applies to a namespace labeled project: myproject

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              project: myproject

2. Use oc apply to delete the network policy and create a pod in a namespace labeled project: myproject at the same time
3. Check ovnkube-master logs for "peer AddressSet is nil, cannot add peer pod(s)"; this should retry with the same error 15 times
4. This may not work on the first try, since we need to hit a specific order of network policy delete and pod add handling
5. With the new version, no error messages should be present

Example 2
1. Create a network policy that applies to the namespace test

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:

2. Create a host-network pod in namespace test
3. Check 15 logs saying "Failed to get LSP for pod %s/%s for networkPolicy %s refetching err: "
4. Check the final log "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy"
5. With the new version, no error message should be present

All the other cases are difficult to reproduce; maybe just running some standard network policy tests and making sure everything works will be a good verification.
Actual results:
Expected results:
Additional info:
Description of problem:
When the user selects Serverless as an import strategy and tries to import a Devfile, the import fails because of an invalid Deployment.
This could already be reproduced in 4.11, but it's even more prominent in 4.12, where the console automatically selects the resource type Serverless when the Serverless operator is installed.
Version-Release number of selected component (if applicable):
Works on 4.10
Failed on 4.11 and 4.12 master
How reproducible:
Always
Steps to Reproduce:
1. Install and set up the Serverless operator
2. Switch to the dev perspective, navigate to Add > Import from Git
3. Enter a non-Devfile git URL like https://github.com/jerolimov/nodeinfo
4. On 4.11 select resource type Serverless (on 4.12 this should be selected automatically)
5. Update the git URL to a repo with a Devfile like https://github.com/nodeshift-starters/devfile-sample
6. Press create
Actual results:
Import fails with error:
Error "Invalid value: "": name part must be non-empty" for field "spec.template.labels".
Expected results:
Devfile should be imported
Additional info:
Description of problem:
[OVN][OSP] After reboot egress node, egress IP cannot be applied anymore.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-07-181244
How reproducible:
Happens frequently in automation, but could not be reproduced manually.
Steps to Reproduce:
1. Label one node as an egress node
2. Configure one EgressIP object
STEP: Check one EgressIP assigned in the object.
Nov 8 15:28:23.591: INFO: egressIPStatus: [{"egressIP":"192.168.54.72","node":"huirwang-1108c-pg2mt-worker-0-2fn6q"}]
3. Reboot the node and wait for the node to become Ready.
Actual results:
EgressIP cannot be applied anymore. Waited more than 1 hour.
oc get egressip
NAME             EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-47031   192.168.54.72
Expected results:
The egressIP should be applied correctly.
Additional info:
Some logs:
E1108 07:29:41.849149 1 egressip.go:1635] No assignable nodes found for EgressIP: egressip-47031 and requested IPs: [192.168.54.72]
I1108 07:29:41.849288 1 event.go:285] Event(v1.ObjectReference{Kind:"EgressIP", Namespace:"", Name:"egressip-47031", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'NoMatchingNodeFound' no assignable nodes for EgressIP: egressip-47031, please tag at least one node with label: k8s.ovn.org/egress-assignable
W1108 07:33:37.401149 1 egressip_healthcheck.go:162] Could not connect to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107): context deadline exceeded
I1108 07:33:37.401348 1 master.go:1364] Adding or Updating Node "huirwang-1108c-pg2mt-worker-0-2fn6q"
I1108 07:33:37.437465 1 egressip_healthcheck.go:168] Connected to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107)
After this log, there seem to be no further logs related to "192.168.54.72".
When we get telemetry from connected clusters, we want to be able to tell whether they were created with the agent installer or with the hosted assisted service. Currently there is no way to distinguish them.
It's not clear whether any particular group owns the namespace of installation methods, or whom we need to notify when we create one.
Description of problem:
When you migrate a HostedCluster, the AWSEndpointService from the old management cluster conflicts with the one on the new management cluster. The AWSPrivateLink controller does not perform any validation when this happens, and such validation is needed to make the Disaster Recovery HostedCluster migration work. The issue shows up when the nodes of the HostedCluster cannot join the new management cluster because the AWSEndpointServiceName still points to the old one.
Version-Release number of selected component (if applicable):
4.12, 4.13, 4.14
How reproducible:
Follow the migration procedure from the upstream documentation; the nodes in the destination HostedCluster will stay in the NotReady state.
Steps to Reproduce:
1. Set up a management cluster with the 4.12-13-14/main version of the HyperShift operator.
2. Run the in-place node DR Migrate E2E test from this PR https://github.com/openshift/hypershift/pull/2138:

bin/test-e2e \
  -test.v \
  -test.timeout=2h10m \
  -test.run=TestInPlaceUpgradeNodePool \
  --e2e.aws-credentials-file=$HOME/.aws/credentials \
  --e2e.aws-region=us-west-1 \
  --e2e.aws-zones=us-west-1a \
  --e2e.pull-secret-file=$HOME/.pull-secret \
  --e2e.base-domain=www.mydomain.com \
  --e2e.latest-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.previous-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.skip-api-budget \
  --e2e.aws-endpoint-access=PublicAndPrivate
Actual results:
The nodes stay in NotReady state
Expected results:
The nodes should join the migrated HostedCluster
Additional info:
This is a clone of issue OCPBUGS-10678. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10655. The following is the description of the original issue:
—
Description of problem:
The dev console shows a list of samples. The user can create a sample based on a Git repository, but some of these samples don't include a Git repository reference and cannot be created.
Version-Release number of selected component (if applicable):
Tested different frontend versions against a 4.11 cluster; all of them (the oldest tested frontend was 4.8) show the samples without a Git repository.
But the result also depends on the installed samples operator and installed ImageStreams.
How reproducible:
Always
Steps to Reproduce:
Actual results:
The Git repository field is not filled in and the Create button is disabled.
Expected results:
Samples without git repositories should not be displayed in the list.
Additional info:
The Git repository is saved as "sampleRepo" in the ImageStream tag section.
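For illustration only, a small Go sketch of the expected filtering: keep only tags that carry a non-empty sampleRepo annotation. The struct is a simplified stand-in for the real ImageStream types, not the console implementation:

package main

import "fmt"

// sampleTag is a simplified stand-in for an ImageStream tag with its annotations.
type sampleTag struct {
	Name        string
	Annotations map[string]string
}

// withGitRepo keeps only tags that declare a "sampleRepo" annotation, so
// samples without a Git repository never show up in the catalog list.
func withGitRepo(tags []sampleTag) []sampleTag {
	var out []sampleTag
	for _, t := range tags {
		if repo, ok := t.Annotations["sampleRepo"]; ok && repo != "" {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tags := []sampleTag{
		{Name: "nodejs", Annotations: map[string]string{"sampleRepo": "https://github.com/sclorg/nodejs-ex"}},
		{Name: "broken-sample", Annotations: map[string]string{}}, // no repo: filtered out
	}
	fmt.Println(withGitRepo(tags))
}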
Description of problem: Upon attempting to install OCP 4.10 UPI on bare-metal ppc64le, the openshift-install gather command returns `panic: unsupported platform "none"`.
Version-Release number of selected component (if applicable):
OCP 4.10.16
openshift-install 4.10.24
How reproducible:
easily
Steps to Reproduce:
1. create install config
2. create manifests
3. create ignition configs
4. openshift-install gather bootstrap --log-level "debug"
Actual results:
DEBUG OpenShift Installer 4.10.24
DEBUG Built from commit d63a12ba0ec33d492093a8fc0e268a01a075f5da
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Loading Install Config from both state file and target directory
DEBUG On-disk Install Config matches asset in state file
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
panic: unsupported platform "none"
goroutine 1 [running]:
github.com/openshift/installer/pkg/terraform/stages/platform.StagesForPlatform({0x146f2d0a, 0x1619aa08})
/go/src/github.com/openshift/installer/pkg/terraform/stages/platform/stages.go:55 +0x2ff
main.runGatherBootstrapCmd({0x14d8e028, 0x1})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:115 +0x2d6
main.newGatherBootstrapCmd.func1(0xc001364500, {0xc0005a0b40, 0x2, 0x2})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:65 +0x59
github.com/spf13/cobra.(*Command).execute(0xc001364500, {0xc0005a0b20, 0x2, 0x2})
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc001334c80)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:902
main.installerMain()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x29e
main.main()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x125
Expected results:
I'm not really sure what I expected to happen; I've never used that gather command before.
I would at least expect it not to panic.
Additional info:
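For illustration only, a minimal Go sketch of how a platform switch with a panicking default produces this failure, and how platform "none" (UPI) could be handled gracefully instead. This mirrors the shape of the trace above and is not the actual installer code:

package main

import "fmt"

type stage string

// stagesForPlatform mimics the shape of the failing code path: a switch over
// the platform name whose default branch panics on anything it does not know.
func stagesForPlatform(platform string) []stage {
	switch platform {
	case "aws", "gcp":
		return []stage{"cluster", "bootstrap"}
	default:
		panic(fmt.Sprintf("unsupported platform %q", platform))
	}
}

// stagesForPlatformSafe shows the gentler alternative: platform "none" (UPI)
// has no infrastructure stages to drive, so return an empty list and let the
// caller skip that part of gather instead of panicking.
func stagesForPlatformSafe(platform string) ([]stage, error) {
	if platform == "none" {
		return nil, nil // UPI: nothing for the installer to manage
	}
	switch platform {
	case "aws", "gcp":
		return []stage{"cluster", "bootstrap"}, nil
	default:
		return nil, fmt.Errorf("unsupported platform %q", platform)
	}
}

func main() {
	stages, err := stagesForPlatformSafe("none")
	fmt.Println(stages, err)
}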
Description of problem:
Invalid documentation link in knative-plugin README https://github.com/openshift/console/blob/master/frontend/packages/knative-plugin/README.md
This is a clone of issue OCPBUGS-2873. The following is the description of the original issue:
—
Description of problem:
Prometheus fails to scrape metrics from the storage operator after some time.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Install the storage operator.
2. Wait for 24h (time for the certificate to be recycled).
Actual results:
Targets are down because Prometheus didn't reload the CA certificate.
Expected results:
Prometheus reloads its client TLS certificate and scrapes the target successfully.
Additional info:
This bug is a backport clone of [Bugzilla Bug 2115265](https://bugzilla.redhat.com/show_bug.cgi?id=2115265). The following is the description of the original bug:
—
Description of problem:
Starting with https://github.com/openshift/console/pull/11866, the action (kebab icon) menu button on the right side of the search page was changed from a `ResourceKebab` to a `LazyActionMenu` for `HelmChartRepositories`.
We use this implementation in other places as well, and maybe also in other table rows?
When the search shows multiple tables and the user opens the menu at the end of one table, the dropdown options are shown below the "Add from navigation" or "Remove from navigation" button of the next table.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Navigate to the search
2. Open the resource selector and search for HelmChartRepo, select HelmChartRepositories and ProjectHelmChartRepositories
3. HCR should have at least one repo. Click on the action menu of the first table.
Actual results:
The new menu is shown partly behind the button "Add from navigation" or "Remove from navigation"
Some buttons are not clickable.
Expected results:
The menu should be shown above the "Add from navigation" or "Remove from navigation" button.
All buttons should be clickable.
Additional info:
Description of problem:
Installed and uninstalled some Helm charts, and now there is an issue with Helm charts on all our releases. The issue is solved in 4.13.
The frontend tries to load /api/helm/releases?ns=christoph and the backend crashes with the error below.
Tl;dr:
It crashes here in the helm lib: https://github.com/openshift/console/blob/release-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go#L66
And the missing out of bounds check is added on master: https://github.com/openshift/console/blob/master/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go#L66
As part of the helm bump https://github.com/openshift/console/pull/12246
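For illustration only, a minimal Go sketch of the kind of bounds check that prevents the slice panic: guard the gzip magic-byte comparison so a short or empty payload cannot index past the end of the decoded slice. This is illustrative, not a verbatim copy of the upstream Helm fix:

package main

import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"errors"
	"fmt"
	"io"
)

var magicGzip = []byte{0x1f, 0x8b, 0x08}

// decodeReleasePayload base64-decodes a Helm release secret payload and
// gunzips it if needed. The len(b) > 3 check is the important part: without
// it, an empty payload makes b[0:3] panic with
// "slice bounds out of range [:3] with capacity 0".
func decodeReleasePayload(data string) ([]byte, error) {
	b, err := base64.StdEncoding.DecodeString(data)
	if err != nil {
		return nil, err
	}
	if len(b) == 0 {
		return nil, errors.New("release payload is empty")
	}
	if len(b) > 3 && bytes.Equal(b[0:3], magicGzip) {
		r, err := gzip.NewReader(bytes.NewReader(b))
		if err != nil {
			return nil, err
		}
		defer r.Close()
		return io.ReadAll(r)
	}
	return b, nil
}

func main() {
	out, err := decodeReleasePayload("") // an empty secret value no longer panics
	fmt.Println(out, err)
}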
2023/02/15 13:09:09 http: panic serving [::1]:43264: runtime error: slice bounds out of range [:3] with capacity 0 goroutine 3291 [running]: net/http.(*conn).serve.func1() /usr/lib/golang/src/net/http/server.go:1850 +0xbf panic({0x2f8d700, 0xc0004dfaa0}) /usr/lib/golang/src/runtime/panic.go:890 +0x262 helm.sh/helm/v3/pkg/storage/driver.decodeRelease({0x0?, 0xc000776930?}) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go:66 +0x305 helm.sh/helm/v3/pkg/storage/driver.(*Secrets).List(0xc000b2ff80, 0xc0004bbe60) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/secrets.go:95 +0x26f helm.sh/helm/v3/pkg/action.(*List).Run(0xc0005fb800) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/action/list.go:161 +0xc5 github.com/openshift/console/pkg/helm/actions.ListReleases(0xc00037d680?) /home/christoph/git/openshift/console-4.12/pkg/helm/actions/list_releases.go:11 +0x6b github.com/openshift/console/pkg/helm/handlers.(*helmHandlers).HandleHelmList(0xc00014f000, 0xc000844960, {0x351ae00, 0xc00086d180}, 0x7fea2c6e5900?) /home/christoph/git/openshift/console-4.12/pkg/helm/handlers/handlers.go:154 +0xdb github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func7.1({0x351ae00?, 0xc00086d180?}, 0x7fea56daf108?) /home/christoph/git/openshift/console-4.12/pkg/server/server.go:286 +0x3c net/http.HandlerFunc.ServeHTTP(0xc0009b8170?, {0x351ae00?, 0xc00086d180?}, 0xc000c5b9f8?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc00086d180}, 0xc000248800) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc00086d180}, 0x7fea2c5c8248?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc0009ed667?, {0x351ae00?, 0xc00086d180?}, 0x109034e?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc001048120?}, {0x351ae00, 0xc00086d180}, 0xc000248800) /usr/lib/golang/src/net/http/server.go:2947 +0x30c net/http.(*conn).serve(0xc0007580a0, {0x351cca0, 0xc000145740}) /usr/lib/golang/src/net/http/server.go:1991 +0x607 created by net/http.(*Server).Serve /usr/lib/golang/src/net/http/server.go:3102 +0x4db 2023/02/15 13:09:09 http: panic serving [::1]:43256: runtime error: slice bounds out of range [:3] with capacity 0 goroutine 3290 [running]: net/http.(*conn).serve.func1() /usr/lib/golang/src/net/http/server.go:1850 +0xbf panic({0x2f8d700, 0xc000273440}) /usr/lib/golang/src/runtime/panic.go:890 +0x262 helm.sh/helm/v3/pkg/storage/driver.decodeRelease({0x0?, 0xc0004dc8a0?}) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go:66 +0x305 helm.sh/helm/v3/pkg/storage/driver.(*Secrets).List(0xc000de8e88, 0xc0011cb400) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/secrets.go:95 +0x26f helm.sh/helm/v3/pkg/action.(*List).Run(0xc00068d800) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/action/list.go:161 +0xc5 github.com/openshift/console/pkg/helm/actions.ListReleases(0xc00037d680?) /home/christoph/git/openshift/console-4.12/pkg/helm/actions/list_releases.go:11 +0x6b github.com/openshift/console/pkg/helm/handlers.(*helmHandlers).HandleHelmList(0xc00014f000, 0xc000844960, {0x351ae00, 0xc000b60b60}, 0x7fea2c47e700?) 
/home/christoph/git/openshift/console-4.12/pkg/helm/handlers/handlers.go:154 +0xdb github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func7.1({0x351ae00?, 0xc000b60b60?}, 0x7fea56daf5b8?) /home/christoph/git/openshift/console-4.12/pkg/server/server.go:286 +0x3c net/http.HandlerFunc.ServeHTTP(0xc0003d72b0?, {0x351ae00?, 0xc000b60b60?}, 0xc000bcd9f8?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc000b60b60}, 0xc000cabd00) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc000b60b60}, 0x7fea2c6d9838?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc000344f47?, {0x351ae00?, 0xc000b60b60?}, 0x109034e?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc001048180?}, {0x351ae00, 0xc000b60b60}, 0xc000cabd00) net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc000b60b60}, 0xc000cabd00) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc000b60b60}, 0x7fea2c6d9838?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc000344f47?, {0x351ae00?, 0xc000b60b60?}, 0x109034e?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc001048180?}, {0x351ae00, 0xc000b60b60}, 0xc000cabd00) /usr/lib/golang/src/net/http/server.go:2947 +0x30c net/http.(*conn).serve(0xc000758000, {0x351cca0, 0xc000145740}) /usr/lib/golang/src/net/http/server.go:1991 +0x607 created by net/http.(*Server).Serve /usr/lib/golang/src/net/http/server.go:3102 +0x4db 2023/02/15 13:09:09 http: panic serving [::1]:42956: runtime error: slice bounds out of range [:3] with capacity 0 goroutine 3261 [running]: net/http.(*conn).serve.func1() /usr/lib/golang/src/net/http/server.go:1850 +0xbf panic({0x2f8d700, 0xc000273740}) /usr/lib/golang/src/runtime/panic.go:890 +0x262 helm.sh/helm/v3/pkg/storage/driver.decodeRelease({0x0?, 0xc0005f6000?}) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go:66 +0x305 helm.sh/helm/v3/pkg/storage/driver.(*Secrets).List(0xc00094a570, 0xc0003d79e0) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/secrets.go:95 +0x26f helm.sh/helm/v3/pkg/action.(*List).Run(0xc00068d800) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/action/list.go:161 +0xc5 github.com/openshift/console/pkg/helm/actions.ListReleases(0xc00037d680?) /home/christoph/git/openshift/console-4.12/pkg/helm/actions/list_releases.go:11 +0x6b github.com/openshift/console/pkg/helm/handlers.(*helmHandlers).HandleHelmList(0xc00014f000, 0xc000844960, {0x351ae00, 0xc000b48a80}, 0x7fea2c403300?) /home/christoph/git/openshift/console-4.12/pkg/helm/handlers/handlers.go:154 +0xdb github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func7.1({0x351ae00?, 0xc000b48a80?}, 0x7fea56dafa68?) /home/christoph/git/openshift/console-4.12/pkg/server/server.go:286 +0x3c net/http.HandlerFunc.ServeHTTP(0xc0011cbb60?, {0x351ae00?, 0xc000b48a80?}, 0xc000ff59f8?) 
/usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc000b48a80}, 0xc0002a3c00) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc000b48a80}, 0x7fea2c478e18?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc00084bfc7?, {0x351ae00?, 0xc000b48a80?}, 0x109034e?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc000c3f890?}, {0x351ae00, 0xc000b48a80}, 0xc0002a3c00) /usr/lib/golang/src/net/http/server.go:2947 +0x30c net/http.(*conn).serve(0xc0008a9f40, {0x351cca0, 0xc000145740}) /usr/lib/golang/src/net/http/server.go:1991 +0x607 created by net/http.(*Server).Serve /usr/lib/golang/src/net/http/server.go:3102 +0x4db 2023/02/15 13:09:09 http: panic serving [::1]:42954: runtime error: slice bounds out of range [:3] with capacity 0 goroutine 3247 [running]: net/http.(*conn).serve.func1() /usr/lib/golang/src/net/http/server.go:1850 +0xbf panic({0x2f8d700, 0xc000273a88}) /usr/lib/golang/src/runtime/panic.go:890 +0x262 helm.sh/helm/v3/pkg/storage/driver.decodeRelease({0x0?, 0xc0005f78f0?}) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go:66 +0x305 helm.sh/helm/v3/pkg/storage/driver.(*Secrets).List(0xc000de9560, 0xc0009b8c00) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/secrets.go:95 +0x26f helm.sh/helm/v3/pkg/action.(*List).Run(0xc0005fb800) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/action/list.go:161 +0xc5 github.com/openshift/console/pkg/helm/actions.ListReleases(0xc00037d680?) /home/christoph/git/openshift/console-4.12/pkg/helm/actions/list_releases.go:11 +0x6b github.com/openshift/console/pkg/helm/handlers.(*helmHandlers).HandleHelmList(0xc00014f000, 0xc000844960, {0x351ae00, 0xc000b60ee0}, 0x7fea2effb100?) /home/christoph/git/openshift/console-4.12/pkg/helm/handlers/handlers.go:154 +0xdb github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func7.1({0x351ae00?, 0xc000b60ee0?}, 0x7fea56daf5b8?) /home/christoph/git/openshift/console-4.12/pkg/server/server.go:286 +0x3c net/http.HandlerFunc.ServeHTTP(0xc0002a91d0?, {0x351ae00?, 0xc000b60ee0?}, 0xc000c319f8?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc000b60ee0}, 0xc000cab000) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc000b60ee0}, 0x7fea2eff84e8?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc000df4be7?, {0x351ae00?, 0xc000b60ee0?}, 0x109034e?) 
/usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc000d2d320?}, {0x351ae00, 0xc000b60ee0}, 0xc000cab000) /usr/lib/golang/src/net/http/server.go:2947 +0x30c net/http.(*conn).serve(0xc0002688c0, {0x351cca0, 0xc000145740}) /usr/lib/golang/src/net/http/server.go:1991 +0x607 created by net/http.(*Server).Serve /usr/lib/golang/src/net/http/server.go:3102 +0x4db 2023/02/15 13:09:09 http: panic serving [::1]:55334: runtime error: slice bounds out of range [:3] with capacity 0 goroutine 3328 [running]: net/http.(*conn).serve.func1() /usr/lib/golang/src/net/http/server.go:1850 +0xbf panic({0x2f8d700, 0xc000273dd0}) /usr/lib/golang/src/runtime/panic.go:890 +0x262 helm.sh/helm/v3/pkg/storage/driver.decodeRelease({0x0?, 0xc000d0b020?}) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/util.go:66 +0x305 helm.sh/helm/v3/pkg/storage/driver.(*Secrets).List(0xc000de98a8, 0xc0001cb670) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/storage/driver/secrets.go:95 +0x26f helm.sh/helm/v3/pkg/action.(*List).Run(0xc000dad800) /home/christoph/git/openshift/console-4.12/vendor/helm.sh/helm/v3/pkg/action/list.go:161 +0xc5 github.com/openshift/console/pkg/helm/actions.ListReleases(0xc00037d680?) /home/christoph/git/openshift/console-4.12/pkg/helm/actions/list_releases.go:11 +0x6b github.com/openshift/console/pkg/helm/handlers.(*helmHandlers).HandleHelmList(0xc00014f000, 0xc000844960, {0x351ae00, 0xc000b610a0}, 0x7fea2effb100?) /home/christoph/git/openshift/console-4.12/pkg/helm/handlers/handlers.go:154 +0xdb github.com/openshift/console/pkg/server.(*Server).HTTPHandler.func7.1({0x351ae00?, 0xc000b610a0?}, 0x7fea56daf5b8?) /home/christoph/git/openshift/console-4.12/pkg/server/server.go:286 +0x3c net/http.HandlerFunc.ServeHTTP(0xc000430260?, {0x351ae00?, 0xc000b610a0?}, 0xc000e469f8?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.(*ServeMux).ServeHTTP(0x2f32e80?, {0x351ae00, 0xc000b610a0}, 0xc000537900) /usr/lib/golang/src/net/http/server.go:2487 +0x149 github.com/openshift/console/pkg/server.securityHeadersMiddleware.func1({0x351ae00, 0xc000b610a0}, 0x7fea2c6da648?) /home/christoph/git/openshift/console-4.12/pkg/server/middleware.go:116 +0x3af net/http.HandlerFunc.ServeHTTP(0xc000df53f7?, {0x351ae00?, 0xc000b610a0?}, 0x109034e?) /usr/lib/golang/src/net/http/server.go:2109 +0x2f net/http.serverHandler.ServeHTTP({0xc0005f7a10?}, {0x351ae00, 0xc000b610a0}, 0xc000537900) /usr/lib/golang/src/net/http/server.go:2947 +0x30c net/http.(*conn).serve(0xc000c203c0, {0x351cca0, 0xc000145740}) /usr/lib/golang/src/net/http/server.go:1991 +0x607 created by net/http.(*Server).Serve /usr/lib/golang/src/net/http/server.go:3102 +0x4db
Version-Release number of selected component (if applicable):
4.8-4.12 don't show a Helm release list.
4.13 works fine
How reproducible:
Always with this Helm chart secret:
Steps to Reproduce:
Unable to reproduce this manually again.
But you can apply the Secret at the end to any namespace and test it with that on 4.8-4.12.
Actual results:
Crash
Expected results:
No crash
Additional info:
Secret to reproduce this issue:
kind: Secret apiVersion: v1 metadata: name: sh.helm.release.v1.dotnet.v1 labels: name: dotnet owner: helm status: deployed version: '1' data: release: >- H4sIAAAAAAAC/+S9a3ObTNIw/Ff06v74OgkgKxu5aj8YYiEUiUTI4rTZ2mIGDEjD4REgGe2T//7UzAAChGzLcZLr3r2qrorFYejpc/d0z/y7H1qB07/p21EaOmn/qu+HD1H/5t/9B3+bpP+ynRhFuWP3b/ocww3eMdw79vqeG9xcj25Y7v3H4XA0ZJkh9/8z7A3D9K/6yHrNW7aDnJQ8T34kcOvHqR+F/Zu+FCaphVAPRkGMH+pf9ZPUSrMEA11+56ofRqmDL30PjSjb9t7Ld/c9K457ftIDmY9sP3T/v9591Nv5zr6Xeg692kORm1z1tll48z38HkaQXOgB+IHio/fu3UOEULTHd+UodXqpZ6W9HH/iM/l44IRpb+8j1Ns6cbRNe9/7d9utFFiu8y1D6Hu/Z4V273u/usJbcPP14eF7v5eFqY9qsPhJNcn3va8hdLrvXdHP+3hA+mXg9KwsjQIr9aGFUN7bRgg5di/K0vf9H1d96FnbFNM0cFLLtlIL/92m+87ZJhTjzHvmPXtCh9vexEFBj4zVS6MCLjw5SoUK5ciHFn4n6V/1N06+j7Z20r/5R3+L5xs4+HLx0X9e9a3YV6sP77j+Vd8KwygtBrj5N4X9X9kW9W/6XprGyc2HD66fehl4D6PgQxQ7YeL5D+k7z0HBO/J08qH4Z+sgx0qc5IMd7UMUWfaHrWN7VvqOfv8dmWjXtfepe+j/+HHVRxHc9G/CDKGrfuoEMbIIl/2jQl918YP89f5u+T59xLikOO47gySVRBRIwnBpao/I0GU026DDUhsebHGcAIEfPSxiEwzUXBKGXxV14Ro6v5dEdJDEKWtpjxtLG4bSkl+BnOcsTR1IEyUyl7xvaygxBT4BnH2YCXxua9cfBTfeGTm9JonT9WyQ/k0SUWZwj6wprlwpUHa2OES2MMwMjUWSf5tJE3YkCUxqBqMEiKOB4MZfwUBB+DuGvnAdbcRCn78zdT4BA5Sa2pCRJnYMxL0LA3UPBlNGEqZjGE6nQBuHpsqzQNz7kjjOTOHWX2qsZ3LqwtYek0UwXlvMKDB9ybW1IWNpe9cWPVTOlc5b3gGdT0xdQTOf/wYCGbXmHMOcXwOO3QNRZczlvoQxJt9f8gNLe0wkcYokccza4ig1dCU2uHECJhsXknmqG0kcsbZw/YXSSM1MQou//73/46qDuP/yHBQ72+R9GqM2fRVkBigzl7e+KY4YEKjMLBh6QFv5ksBi+v7NyfmNqZmerT0yTV4gz7kzZHpgoiKYD0MgjnxD22cgGKfmasSZ+jS3NEwPdiSE6d9mSx6BYOHOdHYkuHhsxjVFNbC0IZKE6QYMlMzUFxkQx76pPR4k/zZ90JkvlqgmYDk8WMJobYnj3P4cuc4gcWcbOTL0KVPCgp/F1y1tuAYTddOYVygjIKprWxzl98fxsxpssenfZgvO82C4yBY6v1cDNYcc2gGf8LoHJ7eZNVB9U59mmMYwH8YgJ/M8WNoo++rzf3Pys2N8kiZjlvJnEx8Ybiw7syBljUDNMbymPs8s7dMOaOPM0GxkCqzv3BfzRlM8Fw9yq2zFqbkdoLW5PMIIBjwCoRxZmsnMArSbDaYsCJUYaKuPkljI0W2Bf224IfA8QQ/IqYmpyQwYTOeGNkVgMi/54xxOiIxSfPAxCOTExnxQpzHmkaXkzp7GbQxCmTG04drsmPs9GYMv+LSYC4Er+nKev1GK8Xlffl+ntKDPjlkwWbhfntE7X5a3mRqME1tTD+V4hdxgWn6kcFZyUcj2kDE0WPJoIbeEv8/ILTFSMAoffPd9bgWoUzdrhvbIYl4xQjUG4iIztaFnBI+I6oTYgyLSavxZ6KHhDopqBjkvNsMF4TN7fffF4lBmfo7cBRlL+Qy4YWBp8AvQVMbQFM8W7z4K/q1LaFfQo1PWKh1yTeYrCXxS8A15XxJ4Qq/udx89I1ATmBPe+CSJwxgECgLhwpXpnA5QVNdf3cglenDCs/bn6Isk3LrSRNmR6wL5xtbShptSJo/0ovp6FhTvCkPyHJEBAtutK4kE/o/S5KwNojRdNedZ0YXaj8jU7+p8UOHe1pW9rS8ygm/h1lfE0dri1Jzam5Xf5K8TePe2LkcrTl3DQGWOcPON6zU81GTxwnFbOkoS+AO29wa3KunIODqPx55Y+oLSQLTjih7CrWvr0/jst0M1t4j8VrDmpmZQvHfwNgzUoK2vT+cj70CoIGei3Fm6VMCN4WpcP/sNgxtltqhe23dKDP2WbilwDQdKbugKMgebNh7uC/wU/CjvbH26NlWZgcGYMTV7WKNLAINRevyNYUxjECw+SndUp6wGSm5q41QVx2HFm0JT/jr4i/Im+abqAVXxzLzQXQX8T+kPi/o89/ZkigyNXRkazGxdRqsA24DxgfK8eoDiuLA5dfhr987pa13eG5rcgkVNILc60qrGt3DAe5jfztGrC15w/juFrxS5eLyzPBRiP/Dx3tTk3NQXDbjgRE3AWEbGYIrqfP6snHWOyZ/wFsWjmtnr6MtRTxpddEYgNNo4I8/bEz6RBI8BLPKAtj/3bkz0lsBnWP8R3/iz+zRcwq07W5bzvPWVu9HqfqOudFZeLdTpSlX5h9V4+m25UT+rglSDC8sCheE8fmQG+3K2zi9gMPo/2N+QJnsX6urOFldN/dqtGwrdpB4qOa1dK+WMjDFRdpo2ToHQBccUQW7EwkBGMO+0Pye4sSfT2ORsBMPSvt3ibwyhuPoo3ck7EJixyciRoQ1Ds/V+t+2nZj+w4hdYflNU90AcDWfBeA/FRxwtMNjaryZTbOVzW0SY8ggEY59EDxjqYLy3VBPBUI4Bd/1RmhiPQlBqnxJi1oO3cWrqimeKY8a4Jxx5oWeHrXRh0Q+llaaS1/luwzPfuwB7b1gahFtqkYKjF4I9ZiCiNY6QAHedwfp8D5H7IDD17zFQbEjChkaFm6w5zhBz397Up4yFuSLkc+xNw1CJTX1+ziN5AUXtKuPSIqmh83EVJKwjMi2Yj7j5Ii4dmEYAKwQsssXxxtAVjzpBzzorjYCZOHAFmhtC0f1ucnT428pi0eX0dBhkNTO0aWJq2LFWH7uEFwxUBk6w80dY0JU2pwZQ8jelcvJAMNzZImXziq3E0hEf7U1teLBEFBBHLR8xMEChyal5gy2EW1fb14zXJGkqLGFKDLS0jv9WV4DFPeo0rqMySVA3QP7sNmo9f+sXYvFRCi9yKv3mt/nRydhYVLXHzQrjQ8Djy3tTm2MnJoXio2eLqwwOeGTkwzXgcBCCMhwQaAfeLoMXqe6ETI740dvKjo79sVCuhdjMT4xzpZKwMqUq6aiUq2BSKp2n1NCVtXXXUhNP8+WBJCIGyg5unggYShU0URDQ+QQ7bSXPt4MaykOnMLw2WPkqHJ0jgv9D9
OVp9Y0ySydBV0WjplNwer/hPNbU3JdOA6dgWuycJYYRMQvi6I5jgHVPvjkDf3zGQSHOtGdpw7rRazkI/MDUWk5AIaNPmY/Cofva1lk1epDEHSzxOZl6ILBRN//xOxgqh9Mx8P9M05E+BvAtPUC+WZedBf5+6cjY4jg3OZVZiaPcFloOcUUbbEaUncGNkvI9bK5scbQG3L6VFDjiveX4VSary9mpwZqbehF46E3a1Pmkwzk8M37DDD/Ou4KizoCs4rfE0k0EAvUAWT4H3BR1wHzyTIO3X+bcVvADEXEm5s2BjM25by5P+Xt+L70G754pYh6QD9i9MoIOfiF2QInfai4d3+xw3O/yroD9aX2jZrZ/yq+mNuR+Bl78/pflifv2Gr5BIDRFYoNP+aW697OwKuF0B17MH81v2cEosTUW3WsjFoTK4av7NP886WrW/KQuHVTTqx6c8ImlyZ4toh3w2X19nFM9h20dgW9h6Ep0GoBV+G6Oixqub1Yf42JeC80dmKipuWJ3tjZkYN6lJ19Otzbe34rfnImSG6uGbbs0wD7ylu4xMBg37HUHnNfdibYmf8FnfYcLQj//Z3mK2MLA0mzZ0G9P7cul8chr+Eh/PV0qnL7Q5+kO58gKdpJuHSs4F6L/pnioHT9S/2nVsQhUD/Gb4/3VYsrSX5bIIvFoa+v8AnCPsXFMuCaAkz3wOXLtyZR9WVJlG2Wpc0lCJZzubF1Bbc3cxjgMRiyOsp7E+JiO9RfGNAOqSLqEWUYwNOMqnf0KWCH2io/LMx4MbGSPsVe+oONMlP2ZKCXHUkCWzfRpaOrKgi7XX79ASxR0C5WkS/vZ4hF3tqjmQENZkUR6KpKtj8mY+jS1tGGhLY/WzNJwZCqzMFDHpmgjtTH+sOLiF34nBqGMjIGamyt1Y3LqqvFdxO+wN+Esn0ppn+ITTOaZxanD2tLR1tQxTPu00uZkrJeO07Bo3GUp9+5xTVE92GKFt8+LlUw9kYB4T7UIgt+YusxUONnI/IIjli+g1mz1qnk9//23n7PBjT9Ti2sSS15fXjm5d9/MZJHvPs9Pa+O3zKOB//oSXOPbX33+0+y49Ec0eHcERPUrlttZ8Br4O73A0uM4s/yeONudD51nsrX1ZfOqFGPxF0ua17J29gTtO5YOj7gu56CS5Ytq2bfKtq2fh6e7XKTb/tSzORdmseh7HV6cXJVD/fOqv7NQ5pBqPFJPQcryojB1HtPP/rYsj3NCCyDH7t+k28zBP3flHeLnLYmfd2+5J7WHNwNSbYivJbEF8Y2qqq9/1c8SR6F1fPLxiQcLJc6Pq36UpXFGShs3fmj3b2iZ5fFbV/04S7ylA7dOSsH5gS8hVL901d86DxU4MNo67yhIWyeJsi3EU6fPJam1TbP42zZaOzDt3/StOMYgbv3u6sSytNDZOSiKne2HhPPf1T7jPPZ/XBVVrHgSterJb1v8QupTvFfIJRO/6gdRFqbfrNTr3/RryyLlotcHPPHaAP3/+Z/eccCeG/U8Z+vgb9fI5IS78TYKqp+P6dYSojC1/NDZVijwQz89vYr8nRM6SfJtGwEHA5zCeBnBjUNoE0fbtEAQqarEvxtVlOTOVfHcJ+YTQ8BPIxih/k3/XvjWv+qn1tZ10m/VI5gxt45l+43v4pHE4qsFeqqBjwBsHYLnpH/DdlCZ+LgNrFOWrkNQgpwiWqVqCRi3D5h4TjkOPL1kO0nqh4TAwm3HK60v+mHiwGzr3Nmuc+9sg+LVbxHyYd6/6SuO7W8xJ5JK22OhavUk9s5t1yGTLpTxfR5jlAsoS1JnK2HU7iKUBc4c81SFBHotqYTGRRGwUCm8X3fOduvbTnWbyPhRtAtAsLT3iSlIKQjQcwISMpSLRr5yMDgPAe3O/+rf+tZEYeDnaDfj4gPgrlPIyZGpsd4sGOVmPtrAYBzYArOX81H1XrWY01xn9LGpaATW5ULNOnKd/eniElHrS+mjJEx3RhAjY7DoXITCbo0xmMZwQtxdArdSVPzBnGcscVUGkKQyVZrYHlaptvjJLYLTXelaSP7+NFH+3Dy6FsROFt7OzG2c+HCg5KSq2N+7UjAk1br6cv/k+11zLioHd6Z/2ZxnPn9XVsNifIECZ7CsjikSKjNf6oZpwiTn8GHjoL6bvvWFR1JphJ+TQpmB2NTn0rkxC95REOTk3NJ5khwi7yLFMwnsY0aaoJ295AcGqY5WdmVF84wrTe3ZuZxcbyT12nMtEiC/godL90CaTBEQxwwO1TH9GhXaP8mvx8rKC3hWmPqAG2HeyDq//5xsFt+ccUoMORzGruic7kafVwIv0GqeV/CaPo1/H6+pyyWVtYmlK5KtSUX1vRzb4ih/gr/Owk8qAX8X/Bs7tmllot/9bseifMVfHVVNvw3vvGeLLpGD2WY4VgV+X8EgmjHmJTCQXDiZ7qxAXdsCv7H0KXXzw00mud0wPpzVt1OySGrqHqOIKL9knlo+PdiTaQwC6M+wbQjVBAhT+yxe6fs49F/DAO1pGobI2+cSBklUYhjQin9nyQ8sXYks7brsuJhY+oLyIdXjLORWrqHPqXxNpjswWLhmMMqbHRyX82pHheJTeqbgGxJ+ER0AuCml2Sv0x5N6cTLPrXJeoppPB3P3hcUsdRo0Fgqep/lFtv/nZOJCH6Br7s+Of57uddhZyKlV50yjCvZl+DrhCSMY7YCoesD/ORwoolqmMrH/Fz+BC9fS5y6WH2pTG1XAQzDAsjNFlM9UD3KYPliupGfp+/C0/1bot/r3/izfqKfzp37UsSjmUOiM6Wkl9gvwsYie4reL9U+5mHSR3QlGPrUJr7E78t7U5NgMEOapgSWqOcXR0c+2OZQAgffNJe1aKPUS4IbrUtfawrV7r43wuzEIzB0MWJriXUcuibUm84+zfLQBnHxoFf2tAcfsjFqB04wWfwVgME1nh0Um+yOq9ybzssMK25608PfT4wKcxwBt/0t0IO3++LO83JL/CAxgId9FvFl0SxrByoUT9WCJ6uYZO1SHF4FQTQv73iULiSRM7wAnb029uL+c2m+kc5vdLK/Sszyo41kSppmtPSYSYn4K5yuMR5JSpZ0A1BbuXVuXkSnwOxA8DukCMl38/XbPuFNG2RlcimCxEDz9A3qk0fkg/Jm4XW3wJunWdWFB45nPy2Awxfcz7LcBjRZDfPX5yJ4oe3iIdjNO2RmDeWupVt6B5ahW4CcVhbNY5zA7WbjmZlxNv4xHtFBXr+sO+6Hw8w6zgXrAc51pRUGyTuBMSzhhPoxskU1e4V8jEBoX+I48kIJxDoPx8GfxroRTZGoEH6RADMcPpva4eRnOT7taTXG0hvmIMXR5C/NRDGi8n5IlkyXbKkYZbUzNjEGwSvG3LX26AwGLQLhI7WCcWxqJi9OGvs9fFVMeix4vi12O+Yqfi11EGKiI4HHZKOJ0sS0F4iKT7tgdDLAdHRJbVi1bix5jT/jDV//TrqOLltpITt6BQEZwohR/m7GJ50vHqDqNZ3q9A4ZtFI0/gdcLc0FVbqlbj7w0/un0P8plLRyzYzlWNySvRX2yTWM3
AHEUkPyXLrX7STpzT40ek/wpHTjN8XcwDz/93Em+KAahgkxOzV8T792Haopljc6L35mkGLiEg8S55VLfK3IZdb64TP/XaPhmeqghr2+jj9bE/9R5pvg7sDSbhQEdY8axHgiwj8LWZbOhd2D+6TU5oqLMRl2VOuU35YCxPNCxCL9U5T409z2YIoPuZPFWOD2Y+pSzNKJXsH4agAFppAlng+rbO2nAs0bwGEPOIz55tSStz49/L1kCN3yNnhdHuT2ZX5SDMfRpbugb/yd1ejV/LJukMeHt+fZlOObozgjUT7mr4fU1dpOUZ/z+nBb2z3we+1TYT8jMnPeAz39bsLdlSd4vidsuWAc4E1tPd4B7RIZ2/Qx8z+WlFVSWr5kkN2NUuUublMbIERioiUnj7AJP6uac3/kiXdVcr6o1vvwFcjJEphYu4K5dKI42dD0THY744HcWLT5/u7wUKat6LR+8MMdyoTwc7RaRidX9eE5wYmjX7j0jL0p8/Np4aYjsib2DpPwV7qg84tioKGfUFml17UU5llet2Z2Rpaqc9/J330KWL8/JEhpCztvZZKcpTLPjHC6B/0U83ZDxRnnsz+Prcnl/zr5OQaDQkshq7UZBMGBjUPhtRSf+L8XTC8t+/3fgT8S2BmEZdY1AjQzdjMFAomthIsogp65tHfMgjiGHyBiQOR6etyuX5vDqMpsi56K8/y/J5z2Hy5mpb3CsnmB7S9Yhib2RLoD5Rba3seZ6UtJ7Sa7z7fJ1F/gtLXjJTnfeDgaIcZb8ulUC/dvw2PzuX57XPv8hPD1Xbv/TOu51tQAvt6+08V3N7An2iwqf+U7m2+XpNCYmMXKA5wvarQb+ZXh+GU5e8Ny53P2T7z9B5+AxtgO13mB8YY756Me+mM879YKZm5pK8pqSeALTRTlQGhc3d7kzuFFK8p5lc6fP7klNBInNPzGztbFrNegeZlyzsXKmecjQHhlLKHL4Z++znoH92/Go3VgZmdo4sUX3Vfm3dmP5XyJPrY03VqPZHssLjjt/6fptE6/hvEW769QSVQ9MlKiIL9Zn72tjskaun6Xl5TkRskZyGW08GE49Z/mWMkQa/C+hxdbWpggGQ0TjMlTGfztbHB+swRzHzyRvM6Mbp6QtnO7K3UNnmprZAcoBNyR59pluejBAtJZRq8vgK/KlgZrZGzMHHPOXWAujGwi8LZ4x75sBCrGfPdP5nSnU12Gkc/dx3J8a+u1Oqq3vmEvWA+I+tTh1iOkAJip+x7P06WGmoWym3aXleEAb72fa+NCkEYtAaMYwGGUA01VgOUOfxiT+1OevoN9TzeoXrZU8VSf8nA4rW+WKuEa9JrFuME4sPca+B8mXPLnHWI5p/oiey2HCy+b3MzmK19QdP4enO7KOSOJp9UDqEqhP1LJrw2YutagvbeUs6jhGpsDHwOe9xn5zwlvVi9GNJqpNsf54vdjjruR3WmNP4Stw2Yk7W/z0VvWHr8lz/5ZamBfJV7FzKwiVw0ty10/Bc64m5omaCw5wjyzQVBkMSMzwx21Oe/OTX7GGZIujnNQCDKoNN3avqFcpcXf4477Q5LhxyJvXUgxIq6tncqvdcXOR1/g1HRvA/XEfsgOmt+c3skYJxJFnTuY7unYzIpvUNfyQ8FU82LWpzx+R4YWmbGDOU3iWjQ3lDrQ+hdah4PnPaK8GZ2kKtpMezHkPx8RwsCA5C/xMfVNDmP+c34n9P4NT9/ZkvrO5UW5xjztDw7zN7zBNpAHxMRlTK2LmJ/y+0w3nGhv1tPh/Wp4acYYmL10ve9rHs0XPswX+YIkjFogLUs9q4jkG40QSx2sYjA4w5/eGPiUt5IY23JA8uKgWrfS8ZGqPqSTSEydgp09wgZ2l9ezf7EDNYYA2f6wPcmAiGE5jfK/wRfKyT6HY7iWT7lAKxfHansybtcH0vvuF1Ko2T0Gw9LkLRDWwBd4H4jiz8k2B0/ZY9HnM02W9iSmUa/i1+p5zcn+6iViD/8r7+F+yJcBV/8FHzWNwlLvbz/O794F93OVBym+z+426ku48BETGLQ70+LJYNtmgWHItWmiKEoRg4ZbtooRE4p2r5cOviqrYX5opKmaGRVH4FEtiMySSxFEm3ZGUyD1NiVx/Efz5WhrzualPEdTVGHIIi+GXRaAmYACL9ouT+yXrp4a+oeHMZEPKe4o2KNr2xI0P5rL4Rs6TA2+kstT3NjbBoCrZz0ytKKcT5dzUSOrZa5fi0jCBtJXuyVjiOJPG83xZazeSRBLCkzDL1D0Gm1lDIwfzhJYuM6QFbaweytaB8pAfilN5BzSW4NrQ565B2NZEEIdNrfKlSq1OVAx/WXb9UQh41xCnHuTcL4Cb154vS5z5DS1Nl9IaTC7QNykdi61KuGdL9vgsKaVR829L5V5Rp5qiTh9UdTonBxUVBz3MdPVA58uib0tFXTHq8n4zlpXlbTrTilJvn90bunzA6tj8zGxWd+P7FWt/W22a3zM11rO0/Wh6p8qLFZqTed1GXzRtxEqi7IGCxrboxTA/lp0b4caF4vhQtOrgMQ/0erGhemOjezZ1liyh5Uwvrom3FM8ilRGrPBhKvE0kcZQX9F+TMLqWrsBqpmwlMpY8dwzDUWLeR18MkXcdbZyC25jyCtk9ivJRdbiFz5/Ccxu7hnj7haoZIk9DSZwOpQnvwYFMQ3qBSe2yXEJQtMXqUVVU+UHZoLmi3WaCG62lfO4WJfrlZv0Ul0UrBAhUrKIYkJclNSs8D9JuUOOd2PRpW5TNjbH5wM8Xz1B+rFphMA5/nwzUWw9+VgYO1n+4DFRtQCF+3yv5YXQ6f8Wl+/09MQbdCW7UPOxFdaku5SNqV1AGB4oHxEd3JvDYttDWOm6c0eX2MqWH+YHwBU2hTKqSZxIyf3WxLPBDQ2MToWi77zwwpzpwpNle0nmgTClzuhy1ZY7apgKGfSyY2uPOzslhZ540UUjalyyniapncK5LZXBVtrwQWTqWLVV0ood9YduHiC5JbV1miO3VhmEpE4ZYO0xGw7qnOuQIy0NFl0qvDRQPhgoLx3wOuBgZA4XSYDJFZqDmdVccDOysbZtnGLfPHT6i3bpAVw82Lb0j/EDa27QxQ+aPXZ/i0Dz7eEAYCzl1Q3VWZ0k61k9fHgSeurVL/pN01wrfJntXEdXA0NXEFvD98cYUEbmH9cqyGeq6D7fxJyyTsyWT4nfxv7X/3Qfhtvrdcb/1P/9JCPeu0XAFMX/v3ddut9GYD3EZi9blwi873XaD/ySNiRtb8E55oF7pq4waOhLrXY1p7oWpL71f1i5UzDuFxbdgzqaGNiT7dVoTJQXCxjWC8cHUaBljIQfURi4Leol87UAxm+CX7AdaO/TwOZwaBF71gG34g8CviBxRH7f080rbtoOBGmCfkhxmSPy49LSVj2WiN1jSqy3XKsd2Kq5soyq+p8/TqlWlWNqrntWbrVclnme64pkD+eE8/rH+P8F9FZ6QvdpxmKYtShocD9Ipdpc0BCqP7ZLcsqUU82oR3jRae5op8oJPtQWRZWOi5HYhQytOZUrfifpdWN/KyOj2Nco
2D+KbwEDlLOzvc8QnwvqSqUI2n5+b2pij+hQdsO0vxqB7q2q3ZB5n+KoWKj/uHBwLiEOEdT/l0es2vlgYIDIfhUOMNCZbgxTb0FD9a2pDgpciPglr2zR8aizxc4ixJqpfyGNuaoT/U1NUOWx/oKjms8E8t7CfQHamU9ZgMEUzTb2mMlOXreMhl0WbjCdNZI8c5nmy1HHtLoKVCws7DHJ+dwzjCxwe45+TbXUKe1TRp5jnzuZOl5ukyR7re7d+ABrls2pLEvcoU7Tlnshyc5yztHvA/l0dN/fzbL6kOpTwWWGnzS4dfBY/vA+DcWZwbibdYX9fLmz6JxdU6YbKxnfxbnlYaiOWftIm+6XtOX+YFMHb+LglCp1D6QNQvMBQxWOtKA2J30jthV/yxPF5WuZ3G5nLYTjz+TUc4HtRXPhtXmv7kcpelDkB4nNyj3UYqMyTuZV6Gcsj9auOB1Qqh1o728EgKblTmh+3dflz/No4TC842ho4UP2Z1tqepbZNwMlcirYz7ANj3jF1L4YD5UB9Eh77MG09c/YQPMoH57ZnoH5ErfRrY+oyhnNT4sOsxW54ruVZMF/X82yuvZRuRX5pUh3BdSxFe7neKWMaHFMft/MSWc8W5ehYDt38RrEV0JAc3NzOO5U7NE/mJ7S86GBAiodzS8fHHFDHQYcNO6DRGK2S6+D0+crPpHyc1eTi/DYlxziL5AUKXdTM7RFa7et29iI+owcXqoV/drJlSFMfvAgvGJaaT0V0xG3aplMz3lTcwofu3Oblue04sH/ctRUMGEjYr2ht3XAbnclhxBiGrnEehLNLSL+wNYXyEsxHNd+PLejJHg9nJf54y66cOSy1sC0s5NxaDrQ4T4jSyFVPtnc4cxDtZ2YkCXYLt7DNd418GPUJurdFqHTkLyybrNNAL/z1Bn7ZYwxk6yQXl1a+eCOXqpTtQKd+dmhSXq4thRuimtkBOZDVA2HsXTLXs0u7A8WDXIrjjozaT5bs1j/TWj4HVx4u/UR51LJZttXhz1VLHJSG9aXXepwgu9KEHIT90SjtnfDHW51LGtXPKirstkzj/pe3oxNbSu3nUac1zpErW885YlvqJahuC48HWxxnTnBXyd5bLY3W8tjl1hRl/J7WYThZtj6ZT7Plu/BJyI7bZHn3eKh7I8+Mx1hwjywcKAhu0ILyzJuVMZQwHWZaq32vgufTGk5UH4gI2yCsN+vtxtXfMx3PhZwd1po3bXs+4fVa67kxqZ4p6ZgV8S+Sxmm5XpCRNYIxE838UfOasHlL2fjJdurSfpzslP983mDJL+mzzWXr2jIplj0Ghirln7sxWRuAOb8GIqLlUiTfK5N4C/PVPdbTiOpp6uOP15jeMD/nD57IYXmaQO0kD+Xo4xQtuMWaI/ExVrQV+5jnKH1MgWdgMA7MAK2PvhbhWbfU42dwR09oKNuwxSd9zU4cqneP9zNNzYzBdAgnXbj8hbYyUBlaYoHlZ0TnpM93ZVudKbDltZYP0o2HZovmsMvfI3Sg5QMkB0vaSVutnTQm1mVE6UBOG6E6fMCX56xW12cC0bkIBHLu6DxjCHsstx4cyIw1me5szY6MZ+B+WcskOue/Hk9GWdb5qNHynptVjKluyLzGU2SKKK/yAXejfdE+8FESpjwQH3d2sS2Zpct03cLfuxI6vif5z8yLU4eFP5iDgUlLK8QUOcuOcsbSXynyLRZdxyvXpzLpDsvocGcL/NoS1bVVtbSXsSXKMN1MDjGzZ+A6OUGExmiLssyo5BNyeofAe7auRGAwje3Jxm2f6GHUWqhKG/ucvDbHKHKaZV4Vz/sNvvHcqS0ndDnG+LW2Lv7zYsm32+sCslVujefLNnpySLjAdPstJ61VaocP2/VM6euRUzPuTW3laoPqtA6yrqcv3WzWuvblF/v5v7E9qsu3bJ2D2YHLjrM3f70f8do4tIkbmJ/FRWEH5HP3U0MjvPhwrsXplC9p3NT2o01tyL21//wLW4dOeKRqx9G651be+zXxYqPlZg+4RXNLujP3q7abXK6vgaeG9pjMNGw32GKNgt0bGsphzsbAZz2CCzwex3pQYL3mVm3SE1vldfjlLzw/8qyv2pF7rpXxN8suqxzRY+yQvCrKJPHOXRS4uNfG+yfybi88S7XW/kH9wzngqvUJHFOlpJxf6Mw3d7chYJiEUWP7uGPsDHevgrOxlYlcyxW29GZzy5OOLQaesEHts1dP4+eiTH9VnhN7eBO5eL5E/gRX1b2zMKp/DDZLG8b2XT2ul0/sD3lGaJd2vzW8JC5PADfewHy0B5xCc1X64tn8TNeZtKd5p9b5t2+cQ3lheXkJe1keTutqViPO1Kc59lcL/736Dc8cxXL0sevbF/IepgWt1Vm4hmYypi6V22cVW5gX95fXzbX3Yk20a028tb7ZOdbDbfT3/o9//rjq0/OuGkeTtc9Q+nOnj/04e7LYf95BYv+rTwC7/MCv1x/UdeHpXPWztU7O0wqs0H/AP2767969+x7+T29JjjK76VHe+NB9FOP30Ip91dkmfhTe9Hbs9xDz701vSZ/5HgZOatlWat18D3s9TKFyQPwbWcBBCbnV63kOCt4n3gfoWdu0/lSvZ8Xx+00GnG3opE7y3o8+tEfqesYPk9QK4bPPBVZouY79DuQ3vYmDguNzlfjix7ZZmPr1ryaxAwnsaR47N70K0fhS4iAHptH25q0nQNi9GPVdgVDMT/QKvX/TwzxdXSmY/6Z3L3wrLx75sXz4OaofJbqD8FYcJx+O1P9cPfsfzQC9nhWGUUoEsJwEkbDme+nWd11nm9z0/u+7Ev//KP/o9f59/LPX+95/2EbB9/5N4yq+jjH7vX/zvWUZvvev2k9i1JAnC7NEpfx7v/7cj6vWV30H2Vh5kxcxZ78vSf+e/ILVQY3/YP75nsyyPuKP8s9/1uSi1Iw3PbZDJgIrhd6szgAvo/NLKV3CX36uzof4P9T89JP893LYXs6Hl/DiS/mx16uQj/87Eq02zVJr7B1Q5wFC0nKskmdq9+uKpz1UXQGV/1Xf/naikaohuzQTIU3dA2h9s3IGbk6GIx9qw9I0662XCgt/OpSVeje9Dy/7Qv3My2qk4tDLm+cVK/E2qY/UoVnJ3Sbdj3qWxzcF8up/h6WlbnIxjTSqsE3R0dKMxb36BGp8TU9qLYciFlz0C8hd/8gS2danJL/Yjy5H2DoP5fdLv50AkG6t1HHzBgiUSwpRJn8vm4/1ethA1Bj2qam3Jl98+HiHBCE3vQr55V0n3HUojO/9z1/v5bv7fy3vb5X71bd/fVO+Tu+E+6ZlISc844etOKZ3KvtXei2Fv0T4VvCs0HWelxKinhIywQ4p6bC6Rymp4ea/Q0pQFG2ymEYMxWRQBC1008MhxvO4JkFMB5bp9TNYVvDN/xJ/v1Q8rVinLXClv15KeM3nLm1IWqGjFszd9HAsV/iTT4WDN70yGvwe9q/6O0ooEojWsxDQ2/pJGsVe/8f/CwAA//+hqYUMpacAAA== type: helm.sh/release.v1
Decoded json:
{ "name": "dotnet", "info": { "first_deployed": "2023-02-14T23:49:12.655951052+01:00", "last_deployed": "2023-02-14T23:49:12.655951052+01:00", "deleted": "", "description": "Install complete", "status": "deployed", "notes": "\nYour .NET app is building! To view the build logs, run:\n\noc logs bc/dotnet --follow\n\nNote that your Deployment will report \"ErrImagePull\" and \"ImagePullBackOff\" until the build is complete. Once the build is complete, your image will be automatically rolled out." }, "chart": { "metadata": { "name": "dotnet", "version": "0.0.1", "description": "A Helm chart to build and deploy .NET applications", "keywords": [ "runtimes", "dotnet" ], "apiVersion": "v2", "annotations": { "chart_url": "https://github.com/openshift-helm-charts/charts/releases/download/redhat-dotnet-0.0.1/redhat-dotnet-0.0.1.tgz" } }, "lock": null, "templates": [ /* removed */ ], "values": { "build": { "contextDir": null, "enabled": true, "env": null, "imageStreamTag": { "name": "dotnet:3.1", "namespace": "openshift", "useReleaseNamespace": false }, "output": { "kind": "ImageStreamTag", "pushSecret": null }, "pullSecret": null, "ref": "dotnetcore-3.1", "resources": null, "startupProject": "app", "uri": "https://github.com/redhat-developer/s2i-dotnetcore-ex" }, "deploy": { "applicationProperties": { "enabled": false, "mountPath": "/deployments/config/", "properties": "## Properties go here" }, "env": null, "envFrom": null, "extraContainers": null, "initContainers": null, "livenessProbe": { "tcpSocket": { "port": "http" } }, "ports": [ { "name": "http", "port": 8080, "protocol": "TCP", "targetPort": 8080 } ], "readinessProbe": { "httpGet": { "path": "/", "port": "http" } }, "replicas": 1, "resources": null, "route": { "enabled": true, "targetPort": "http", "tls": { "caCertificate": null, "certificate": null, "destinationCACertificate": null, "enabled": true, "insecureEdgeTerminationPolicy": "Redirect", "key": null, "termination": "edge" } }, "serviceType": "ClusterIP", "volumeMounts": null, "volumes": null }, "global": { "nameOverride": null }, "image": { "name": null, "tag": "latest" } }, "schema": "removed", "files": [ { "name": "README.md", "data": "removed" } ] }, "config": { "build": { "enabled": true, "imageStreamTag": { "name": "dotnet:3.1", "namespace": "openshift", "useReleaseNamespace": false }, "output": { "kind": "ImageStreamTag" }, "ref": "dotnetcore-3.1", "startupProject": "app", "uri": "https://github.com/redhat-developer/s2i-dotnetcore-ex" }, "deploy": { "applicationProperties": { "enabled": false, "mountPath": "/deployments/config/", "properties": "## Properties go here" }, "livenessProbe": { "tcpSocket": { "port": "http" } }, "ports": [ { "name": "http", "port": 8080, "protocol": "TCP", "targetPort": 8080 } ], "readinessProbe": { "httpGet": { "path": "/", "port": "http" } }, "replicas": 1, "route": { "enabled": true, "targetPort": "http", "tls": { "enabled": true, "insecureEdgeTerminationPolicy": "Redirect", "termination": "edge" } }, "serviceType": "ClusterIP" }, "image": { "tag": "latest" } }, "manifest": "---\n# Source: dotnet/templates/service.yaml\napiVersion: v1\nkind: Service\nmetadata:\n name: dotnet\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\nspec:\n type: ClusterIP\n selector:\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n ports:\n - name: http\n port: 8080\n protocol: TCP\n targetPort: 8080\n---\n# Source: 
dotnet/templates/deployment.yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: dotnet\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\n annotations:\n image.openshift.io/triggers: |-\n [\n {\n \"from\":{\n \"kind\":\"ImageStreamTag\",\n \"name\":\"dotnet:latest\"\n },\n \"fieldPath\":\"spec.template.spec.containers[0].image\"\n }\n ]\nspec:\n replicas: 1\n selector:\n matchLabels:\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n template:\n metadata:\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\n spec:\n containers:\n - name: web\n image: dotnet:latest\n ports:\n - name: http\n containerPort: 8080\n protocol: TCP\n livenessProbe:\n tcpSocket:\n port: http\n readinessProbe:\n httpGet:\n path: /\n port: http\n volumeMounts:\n volumes:\n---\n# Source: dotnet/templates/buildconfig.yaml\napiVersion: build.openshift.io/v1\nkind: BuildConfig\nmetadata:\n name: dotnet\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\nspec:\n output:\n to:\n kind: ImageStreamTag\n name: dotnet:latest\n source:\n type: Git\n git:\n uri: https://github.com/redhat-developer/s2i-dotnetcore-ex\n ref: dotnetcore-3.1\n strategy:\n type: Source\n sourceStrategy:\n from:\n kind: ImageStreamTag\n name: dotnet:3.1\n namespace: openshift\n env:\n - name: \"DOTNET_STARTUP_PROJECT\"\n value: \"app\"\n triggers:\n - type: ConfigChange\n---\n# Source: dotnet/templates/imagestream.yaml\napiVersion: image.openshift.io/v1\nkind: ImageStream\nmetadata:\n name: dotnet\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\nspec:\n lookupPolicy:\n local: true\n---\n# Source: dotnet/templates/route.yaml\napiVersion: route.openshift.io/v1\nkind: Route\nmetadata:\n name: dotnet\n labels:\n helm.sh/chart: dotnet\n app.kubernetes.io/name: dotnet\n app.kubernetes.io/instance: dotnet\n app.kubernetes.io/managed-by: Helm\n app.openshift.io/runtime: dotnet\nspec:\n to:\n kind: Service\n name: dotnet\n port:\n targetPort: http\n tls:\n termination: edge\n insecureEdgeTerminationPolicy: Redirect\n", "version": 1 }
Description of problem:
This is a bug record to pin down dependency versions in CMO release 4.12 after the release-4.12 branch was detached from the master branch.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
N/A
Steps to Reproduce:
N/A
Actual results:
N/A
Expected results:
N/A
Additional info:
None.
Description of problem:
One Multus case always fails in QE e2e testing. Using the same net-attach-def and pod configuration files, testing passed in 4.11 but failed in 4.12 and 4.13.
Version-Release number of selected component (if applicable):
4.12 and 4.13
How reproducible:
All the times
Steps to Reproduce:
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/NetworkAttachmentDefinitions/runtimeconfig-def-ipandmac.yaml
networkattachmentdefinition.k8s.cni.cncf.io/runtimeconfig-def created
[weliang@weliang networking]$ oc get net-attach-def -o yaml
apiVersion: v1
items:
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    creationTimestamp: "2023-01-03T16:33:03Z"
    generation: 1
    name: runtimeconfig-def
    namespace: test
    resourceVersion: "64139"
    uid: bb26c08f-adbf-477e-97ab-2aa7461e50c4
  spec:
    config: '{ "cniVersion": "0.3.1", "name": "runtimeconfig-def", "plugins": [{ "type": "macvlan", "capabilities": { "ips": true }, "mode": "bridge", "ipam": { "type": "static" } }, { "type": "tuning", "capabilities": { "mac": true } }] }'
kind: List
metadata:
  resourceVersion: ""
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/Pods/runtimeconfig-pod-ipandmac.yaml
pod/runtimeconfig-pod created
[weliang@weliang networking]$ oc get pod
NAME                READY   STATUS              RESTARTS   AGE
runtimeconfig-pod   0/1     ContainerCreating   0          6s
[weliang@weliang networking]$ oc describe pod runtimeconfig-pod
Name:         runtimeconfig-pod
Namespace:    test
Priority:     0
Node:         weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal/10.0.128.4
Start Time:   Tue, 03 Jan 2023 11:33:45 -0500
Labels:       <none>
Annotations:  k8s.v1.cni.cncf.io/networks: [ { "name": "runtimeconfig-def", "ips": [ "192.168.22.2/24" ], "mac": "CA:FE:C0:FF:EE:00" } ]
              openshift.io/scc: anyuid
Status:       Pending
IP:
IPs:          <none>
Containers:
  runtimeconfig-pod:
    Container ID:
    Image:          quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5zqd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-k5zqd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               26s   default-scheduler  Successfully assigned test/runtimeconfig-pod to weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal
  Normal   AddedInterface          24s   multus             Add eth0 [10.128.2.115/23] from openshift-sdn
  Warning  FailedCreatePodSandBox  23s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(cff792dbd07e8936d04aad31964bd7b626c19a90eb9d92a67736323a1a2303c4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character /
  Normal   AddedInterface          7s    multus             Add eth0 [10.128.2.116/23] from openshift-sdn
  Warning  FailedCreatePodSandBox  7s    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(d2456338fa65847d5dc744dea64972912c10b2a32d3450910b0b81cdc9159ca4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character /
[weliang@weliang networking]$
Actual results:
Pod is not running
Expected results:
Pod should be in running state
Additional info:
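For reference, a minimal pod manifest equivalent to the one created in the reproducer (a sketch only: the namespace, image, and annotation values are taken from the transcript above, not from the referenced upstream YAML), assuming the runtimeconfig-def NetworkAttachmentDefinition already exists in the test namespace:

cat <<'EOF' | oc create -n test -f -
apiVersion: v1
kind: Pod
metadata:
  name: runtimeconfig-pod
  annotations:
    # Runtime config: the NAD advertises the "ips" and "mac" capabilities,
    # so the requested IP and MAC are passed through this annotation.
    k8s.v1.cni.cncf.io/networks: '[{ "name": "runtimeconfig-def", "ips": [ "192.168.22.2/24" ], "mac": "CA:FE:C0:FF:EE:00" }]'
spec:
  containers:
  - name: runtimeconfig-pod
    image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
EOF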
The path used by --rotated-pod-logs to gather the rotated pod logs from the node's /var/log/pods folder via /api/v1/nodes/${NODE}/proxy/logs/${LOG_PATH} is only valid for regular pods, not for static pods.
The main problem is that, while normal pods have their rotated logs under /var/log/pods/${POD_NAME}_${POD_UID_IN_API}/${CONTAINER_NAME}, static pods have them under /var/log/pods/${POD_NAME}_${CONFIG_HASH}/${CONTAINER_NAME}. The UID cannot be known when the static pod is created, because the kubelet starts static pods before registering them in the kube-apiserver, and it is the kube-apiserver that assigns the UID.
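A quick way to see the mismatch on a live cluster (a sketch; the node name is a placeholder, and the paths are assumed to follow the layout described above):

# Placeholder node name; any control-plane node with static pods works.
NODE=master-0.example.net
# List the pod log directories served by the kubelet through the node proxy.
# Static pod directories end in a config hash, not the pod's API UID.
oc get --raw "/api/v1/nodes/${NODE}/proxy/logs/pods/" | grep openshift-etcd
# Compare with the UID the API server reports for the same static pod:
oc get pod -n openshift-etcd etcd-${NODE} -o jsonpath='{.metadata.uid}{"\n"}'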
The visible result of that is the error shown under Actual results below.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
Always, if there are static pods.
Steps to Reproduce:
1. oc adm inspect --rotated-pod-logs ns/openshift-etcd (or any other project with static pods).
Actual results:
error: errors occurred while gathering data: one or more errors occurred while gathering pod-specific data for namespace: openshift-etcd [one or more errors occurred while gathering container data for pod etcd-master-0.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-1.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-2.example.net: the server could not find the requested resource]
Expected results:
No errors like the ones above, and rotated pod logs gathered, if present.
Additional info:
Despite being marked as experimental, --rotated-pod-logs is used by must-gather, so this issue can be reproduced by simply running a default must-gather. The reproducer above uses bare oc adm inspect for simplicity.
This is a clone of issue OCPBUGS-4377. The following is the description of the original issue:
—
Description of problem:
--> Service name search ability while creating the Route from the console
2. What is the nature and description of the request?
--> While creating a route from the console (OCP dashboard), there is no option to search for the service by name; the service can only be selected from the drop-down list. We need search capability so that the user can type the service name and select the matching service from the top of the search results.
3. Why does the customer need this? (List the business requirements here)
--> Selecting the service from the drop-down list can be very tedious. In one customer case there are 150 services in the namespace, and the user has to scroll for a long time to find and select the right service.
4. List any affected packages or components.
--> OCP console
5. Expected result.
--> Have the ability to type the service name while creating the route.
This is a clone of issue OCPBUGS-3612. The following is the description of the original issue:
—
Description of problem:
OCP 4.12 deployments that use a secondary bridge (br-ex1) for CNI fail to start the ovs-configuration service, with multiple failures.
Version-Release number of selected component (if applicable):
Openshift 4.12.0-rc.0 (2022-11-10)
How reproducible:
So far, at least one node out of four workers always fails; it is not always the same node, and sometimes several nodes fail.
Steps to Reproduce:
1. Prepare the provisioning node for the IPI install - RHEL 8 (haproxy, named, mirror registry, rhcos_cache_server, ...).
2. Configure install-config.yaml (attached):
   - provisioningNetwork: enabled
   - machine network: single-stack IPv4
   - disconnected installation
   - ovn-kubernetes with hybrid-networking setup
   - LACP bonding set up using MachineConfig manifests at day 1:
     * bond0 -> baremetal 192.168.32.0/24 (br-ex)
     * bond0.662 -> interface for the secondary bridge (br-ex1), 192.168.66.128/26
   - secondary bridge defined in /etc/ovnk/extra_br
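When a node ends up in this state, the following triage sketch can capture the ovs-configuration failure details (the worker name is a placeholder, and the commands assume oc debug access to the affected node):

# Placeholder node name for an affected worker.
oc debug node/worker-0 -- chroot /host journalctl -u ovs-configuration.service --no-pager | tail -n 100
# Check which NetworkManager connections and OVS bridges were actually created.
oc debug node/worker-0 -- chroot /host nmcli -g NAME,DEVICE,STATE connection show
oc debug node/worker-0 -- chroot /host ovs-vsctl list-br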