- Add Authentik SSO with Traefik ForwardAuth
- Add Authentik helm chart and ArgoCD application
- Configure Traefik ForwardAuth middleware for SSO
- Add External Secrets for Vault integration
- Apply SSO middleware to Velero UI as a test
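A minimal sketch of the ForwardAuth middleware this wires up; the namespace, service name, and port are assumptions, and older Traefik releases use the `traefik.containo.us/v1alpha1` API group instead:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: authentik-forward-auth
  namespace: authentik          # assumed namespace
spec:
  forwardAuth:
    # Authentik's embedded outpost endpoint for Traefik-style forward auth
    address: http://authentik-server.authentik.svc.cluster.local:9000/outpost.goauthentik.io/auth/traefik
    trustForwardHeader: true
    authResponseHeaders:
      - X-authentik-username
      - X-authentik-groups
      - X-authentik-email
```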
- Remove ServerSideApply from ArgoCD applications
- Fixes OutOfSync issues caused by operator-added default values
- ServerSideApply enforces stricter field management, which conflicts with
  CRD defaults
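Roughly what the sync policy looks like with the option dropped; the app name and remaining sync options are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn                 # placeholder
spec:
  syncPolicy:
    syncOptions:
      - CreateNamespace=true     # stand-in for whatever options remain
      # - ServerSideApply=true   # removed: SSA field management fought
      #                          # operator-added and CRD-default values
```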
- Migrate from HAProxy to the Traefik ingress controller
- Update all ingress files to use ingressClassName: traefik
- Update cert-manager ClusterIssuer to use traefik class
- Remove haproxy.org annotations from ingress files
- Update vault helm-values to use traefik
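The per-Ingress change is a one-field swap, sketched here with a placeholder host and service; the cert-manager ClusterIssuer's http01 solver gets the same `traefik` class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                  # placeholder
spec:
  ingressClassName: traefik      # replaces the haproxy.org annotations
  rules:
    - host: app.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app        # placeholder service
                port:
                  number: 80
```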
- Move Longhorn storage from worker-1 to master
- Enable scheduling on mayne-vcn (master)
- Disable scheduling and request eviction on mayne-worker-1
- Longhorn will use only master and worker-2
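A sketch of the two Longhorn Node CRs this implies, using the v1beta2 scheduling and eviction fields:

```yaml
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-vcn                # master
  namespace: longhorn-system
spec:
  allowScheduling: true          # start accepting replicas
---
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-worker-1
  namespace: longhorn-system
spec:
  allowScheduling: false         # no new replicas
  evictionRequested: true        # move existing replicas off this node
```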
- Remove namespace definition from deployment.yaml
- Namespace now only defined in namespace.yaml
- Fixes ComparisonError: may not add resource with already registered id
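After the fix the Namespace object lives in exactly one manifest; the duplicate definition in deployment.yaml is what made ArgoCD register the same resource id twice:

```yaml
# namespace.yaml -- sole definition of the namespace (name is a placeholder)
apiVersion: v1
kind: Namespace
metadata:
  name: example-app
```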
- Ignore preserveUnknownFields on Longhorn CRDs
- Add .spec.preserveUnknownFields to ignoreDifferences for all Longhorn
CRDs
- Prevents OutOfSync status caused by Kubernetes auto-adding this field
- Affects: engines, engineimages, instancemanagers, nodes, replicas,
settings, volumes
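The resulting ignoreDifferences entries, one per CRD (only `engines.longhorn.io` shown; the other six follow the same pattern):

```yaml
spec:
  ignoreDifferences:
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      name: engines.longhorn.io
      jsonPointers:
        - /spec/preserveUnknownFields
```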
- Enable Grafana dashboard auto-creation for CNPG operator
Add monitoring.grafanaDashboard.create=true to automatically deploy
the official CNPG Grafana dashboard as a ConfigMap that Grafana can
discover and import.
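The values change itself, as stated above:

```yaml
# cloudnative-pg chart values
monitoring:
  grafanaDashboard:
    create: true   # deploys the official CNPG dashboard as a ConfigMap
```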
- Verify successful migration to CloudNativePG
All applications (gitea, jaejadle, todo, mas, umami) have been successfully
migrated to CloudNativePG, and all databases are verified working on the
CNPG cluster.
- Update Longhorn app source to include Longhorn nodes via kustomize
- Changed source from ingress-only to full longhorn/ directory
- Use kustomize to manage ingress + nodes together
- Enables GitOps management of Longhorn Node disk configs
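A sketch of the kustomization.yaml at the root of longhorn/ (file names are assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ingress.yaml   # previously the only managed file
  - nodes.yaml     # Longhorn Node CRs with disk configs
```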
- Move Longhorn data to dedicated 50GB disks
- Update defaultDataPath to /mnt/longhorn-storage
- Add Node CRs for all nodes with new disk configuration
- Evict data from old /var/lib/longhorn disks to new disks
- Nodes: mayne-vcn, mayne-worker-1, mayne-worker-2
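In the chart values this is `defaultSettings.defaultDataPath: /mnt/longhorn-storage`; per node, the disk swap looks roughly like this (disk keys are assumptions):

```yaml
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-vcn                  # repeated for worker-1 and worker-2
  namespace: longhorn-system
spec:
  allowScheduling: true
  disks:
    disk-50g:                      # assumed key for the new disk
      path: /mnt/longhorn-storage
      allowScheduling: true
      evictionRequested: false
    default-disk:                  # assumed key for the old disk
      path: /var/lib/longhorn
      allowScheduling: false
      evictionRequested: true      # drain data onto the new disk
```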
- Reduce CPU requests to allow CNPG join pod scheduling
- Falco: 40m → 30m
- Falcosidekick Web UI: 50m → 30m
- Velero UI: 50m → 30m
This frees up ~40m CPU on worker nodes to allow CNPG join pods
(which request 500m) to be scheduled successfully.
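Each chart carries the same shape of tweak in its values; the exact path to the `resources` block varies per chart, so this is only a sketch:

```yaml
resources:
  requests:
    cpu: 30m   # was 40m (Falco) / 50m (the two UIs)
```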
- Add Longhorn storage
- Add Longhorn Helm chart configuration
- Configure UI ingress at longhorn0213.kro.kr
- Set CPU limits to null to prevent throttling
- Configure 3 replicas for high availability
- Set Longhorn as default StorageClass
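The relevant longhorn/longhorn chart values (keys as in the upstream chart; the CPU-limit nulling is omitted here since its exact values path depends on the chart version):

```yaml
persistence:
  defaultClass: true            # Longhorn becomes the default StorageClass
  defaultClassReplicaCount: 3   # 3 replicas for HA
ingress:
  enabled: true
  host: longhorn0213.kro.kr
```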
- Remove CPU limits to prevent throttling
Removed CPU limits from all infrastructure components while keeping
memory limits for protection:
- cnpg: removed 500m CPU limit
- external-secrets: removed 200m, 100m CPU limits (operator, webhook,
certController)
- falco: removed 500m CPU limit (falcosidekick webui)
- vault: removed 500m CPU limit
- velero: removed 500m, 1000m CPU limits (server, node-agent)
Benefits:
- ✅ Prevents CPU throttling
- ✅ Better performance and lower latency
- ✅ More efficient resource utilization
- ✅ Simpler management (only requests to tune)
Memory limits are kept to prevent memory leaks and OOM issues.
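The resulting pattern in each component's values, with placeholder numbers:

```yaml
resources:
  requests:
    cpu: 100m        # kept for scheduling
    memory: 128Mi
  limits:
    memory: 256Mi    # memory limit kept; no cpu limit -> no CFS throttling
```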