- Remove ServerSideApply from argocd applications
- Fixes OutOfSync issues caused by operator-added default values
- ServerSideApply causes stricter field management that conflicts with CRD defaults (see the sketch below)
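A minimal sketch of the kind of syncPolicy change this describes, assuming ServerSideApply had been enabled as a sync option; the application name and the other options are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app        # placeholder
  namespace: argocd
spec:
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      # - ServerSideApply=true   # removed: its field management conflicts with operator/CRD defaults
```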
- Migrate from HAProxy to Traefik ingress controller
- Update all ingress files to use ingressClassName: traefik (see the sketch below)
- Update cert-manager ClusterIssuer to use traefik class
- Remove haproxy.org annotations from ingress files
- Update vault helm-values to use traefik
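A hedged sketch of the resulting manifests; hostnames, service names, and the issuer name are placeholders, and the solver uses the classic `class` key this entry refers to:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app            # placeholder; haproxy.org/* annotations dropped
spec:
  ingressClassName: traefik    # was: haproxy
  rules:
    - host: app.example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt            # placeholder
spec:
  acme:
    email: admin@example.internal
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: traefik     # was: haproxy
```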
- Remove optional operator (?) from jqPathExpressions
- Add apiVersion and kind to ignored fields for volumeClaimTemplates
- Prevents a continuous sync loop caused by Kubernetes dropping these fields from the StatefulSet (see the sketch below)
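A minimal sketch of what the resulting ignoreDifferences entry could look like; targeting apps/StatefulSet is an assumption based on this entry:

```yaml
spec:
  ignoreDifferences:
    - group: apps
      kind: StatefulSet
      jqPathExpressions:
        # was: .spec.volumeClaimTemplates[]?.apiVersion  (optional "?" operator removed)
        - .spec.volumeClaimTemplates[].apiVersion
        - .spec.volumeClaimTemplates[].kind
```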
- Reduce Falco CPU requests in the ArgoCD application
- Falco: 100m → 30m
- Falcosidekick Web UI: 50m → 30m
The previous commit only updated helm-values/falco.yaml, which wasn't being used; the ArgoCD Application uses inline helm values (see the sketch below).
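A sketch of where such inline values live in the Application spec; the chart source and the exact Falco chart value paths are assumptions:

```yaml
spec:
  source:
    chart: falco
    repoURL: https://falcosecurity.github.io/charts   # assumed chart repository
    targetRevision: "*"                                # placeholder
    helm:
      values: |
        resources:
          requests:
            cpu: 30m          # was 100m
        falcosidekick:
          webui:
            resources:
              requests:
                cpu: 30m      # was 50m
```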
- Reduce CPU requests to allow cnpg join pod scheduling
- Falco: 40m → 30m
- Falcosidekick Web UI: 50m → 30m
- Velero UI: 50m → 30m
This frees up ~40m CPU on worker nodes to allow CNPG join pods
(which request 500m) to be scheduled successfully.
- Remove empty CPU limit from falco
Kubernetes rejects cpu: "" as an invalid quantity. This allows the DaemonSet to be created with the default CPU limit, which will then be patched manually with auto-sync disabled.
- Remove CPU limits from vault and falco
- Remove cpu line from limits section (not just set to null)
- Prevents Helm charts from applying default CPU limit values (see the sketch below)
- Eliminates CPU throttling for infrastructure components
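A hedged sketch of the resulting resources block in the helm values; the specific numbers are illustrative:

```yaml
resources:
  requests:
    cpu: 30m
    memory: 128Mi
  limits:
    # cpu: ""    -> rejected by Kubernetes as an invalid quantity
    # cpu: null  -> per this entry, still lets the chart apply its default
    memory: 256Mi   # cpu line removed entirely; only memory stays limited
```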
- Remove CPU limits from infrastructure components
- velero-ui: Remove 200m CPU limit
- metallb controller: Remove 100m CPU limit
- metallb speaker: Remove 100m CPU limit (300m total across 3 nodes)
- falco: Remove 1000m CPU limit (3000m total across 3 nodes)
Total CPU limits removed: ~3600m
This eliminates CPU throttling and reduces CPU limits overcommit from
131% to 0%.
- Remove CPU limits to prevent throttling
Removed CPU limits from all infrastructure components while keeping
memory limits for protection:
- cnpg: removed 500m CPU limit
- external-secrets: removed 200m, 100m CPU limits (operator, webhook,
certController)
- falco: removed 500m CPU limit (falcosidekick webui)
- vault: removed 500m CPU limit
- velero: removed 500m, 1000m CPU limits (server, node-agent)
Benefits:
- ✅ Prevents CPU throttling
- ✅ Better performance and lower latency
- ✅ More efficient resource utilization
- ✅ Simpler management (only requests to tune)
Memory limits are kept to prevent memory leaks and OOM issues.
- Reduce Falco CPU request for worker-node-2
Reduced Falco DaemonSet CPU request to prevent node-agent
scheduling failures:
- Falco: 50m → 40m (actual usage ~39m)
This optimization frees up 10m CPU per node. On worker-node-2,
this contributes to the total 110m CPU savings needed for
Velero node-agent (30m request) to be scheduled successfully.
Worker-node-2 CPU allocation before: 840m/1000m (84%)
Worker-node-2 CPU allocation after: 730m/1000m (73%)