- with traefik forwardauth
- Add Authentik helm chart and ArgoCD application
- Configure Traefik ForwardAuth middleware for SSO
- Add External Secrets for Vault integration
- Apply SSO middleware to Velero UI as test
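A minimal sketch of the ForwardAuth middleware, assuming Authentik's embedded outpost is served by an authentik-server Service in the authentik namespace (names and the outpost path are assumptions; older Traefik releases use the traefik.containo.us API group):

```yaml
# Traefik Middleware that delegates the auth decision to Authentik
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: authentik-forwardauth
  namespace: authentik
spec:
  forwardAuth:
    # Assumed Service/port for Authentik's embedded outpost
    address: http://authentik-server.authentik.svc.cluster.local:9000/outpost.goauthentik.io/auth/traefik
    trustForwardHeader: true
    authResponseHeaders:
      - X-authentik-username
      - X-authentik-groups
      - X-authentik-email
      - X-authentik-uid
```

With those names, the Velero UI ingress would reference the middleware via the traefik.ingress.kubernetes.io/router.middlewares annotation as authentik-authentik-forwardauth@kubernetescrd.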
- from argocd applications
- Fixes OutOfSync issues caused by operator-added default values
- ServerSideApply causes stricter field management that conflicts with
CRD defaults
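For context, the option being removed sits in each Application's syncPolicy; a sketch with illustrative repo and app names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn                              # illustrative
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/infra.git    # illustrative
    path: longhorn
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: longhorn-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      # - ServerSideApply=true   # removed: server-side field management
      #                            fought operator-added defaults
```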
- to Traefik ingress controller
- Update all ingress files to use ingressClassName: traefik
- Update cert-manager ClusterIssuer to use traefik class
- Remove haproxy.org annotations from ingress files
- Update vault helm-values to use traefik
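A sketch of the resulting shape, with an illustrative app and an assumed ClusterIssuer name:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app                                     # illustrative
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod    # assumed issuer name
    # haproxy.org/* annotations removed
spec:
  ingressClassName: traefik
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: example-app-tls
---
# ClusterIssuer pointed at the traefik class for HTTP-01 solver ingresses
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod                                # assumed
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                            # illustrative
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
```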
- from worker-1 to master
- Enable scheduling on mayne-vcn (master)
- Disable scheduling and request eviction on mayne-worker-1
- Longhorn will use only master and worker-2
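Expressed on Longhorn's Node CRs, the change looks roughly like this (other spec fields omitted):

```yaml
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-vcn
  namespace: longhorn-system
spec:
  allowScheduling: true          # master now accepts replica scheduling
---
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-worker-1
  namespace: longhorn-system
spec:
  allowScheduling: false         # no new replicas here
  evictionRequested: true        # move existing replicas to master/worker-2
```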
- Remove namespace definition from deployment.yaml
- Namespace now only defined in namespace.yaml
- Fixes ComparisonError: may not add resource with already registered id
- preserveUnknownFields
- Add .spec.preserveUnknownFields to ignoreDifferences for all Longhorn
CRDs
- Prevents OutOfSync status caused by Kubernetes auto-adding this field
- Affects: engines, engineimages, instancemanagers, nodes, replicas,
settings, volumes
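In the Longhorn Application this is the relevant ignoreDifferences fragment (two CRDs shown; the rest follow the same pattern):

```yaml
# Fragment of the ArgoCD Application spec for Longhorn
spec:
  ignoreDifferences:
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      name: engines.longhorn.io
      jsonPointers:
        - /spec/preserveUnknownFields
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      name: volumes.longhorn.io
      jsonPointers:
        - /spec/preserveUnknownFields
    # ...repeat for engineimages, instancemanagers, nodes, replicas, settings
```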
- auto-creation for cnpg operator
Add monitoring.grafanaDashboard.create=true to automatically deploy
the official CNPG Grafana dashboard as a ConfigMap that Grafana can
discover and import.
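The relevant stanza in the cloudnative-pg helm-values:

```yaml
monitoring:
  grafanaDashboard:
    create: true   # deploys the official CNPG dashboard as a ConfigMap
```

The generated ConfigMap is labeled for Grafana's dashboard sidecar, which is what lets Grafana discover and import it automatically.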
- successful migr...
All applications (gitea, jaejadle, todo, mas, umami) have been successfully
migrated to CloudNativePG. All databases were verified working on the CNPG
cluster.
- to include Longhorn nodes via kust...
- Changed source from ingress-only to full longhorn/ directory
- Use kustomize to manage ingress + nodes together
- Enables GitOps management of Longhorn Node disk configs
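A sketch of the resulting longhorn/kustomization.yaml (file names are assumptions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: longhorn-system
resources:
  - ingress.yaml              # existing UI ingress
  - node-mayne-vcn.yaml       # Longhorn Node CRs, one file per node
  - node-mayne-worker-1.yaml
  - node-mayne-worker-2.yaml
```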
- to dedicated 50GB disks
- Update defaultDataPath to /mnt/longhorn-storage
- Add Node CRs for all nodes with new disk configuration
- Evict data from old /var/lib/longhorn disks to new disks
- Nodes: mayne-vcn, mayne-worker-1, mayne-worker-2
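One of these Node CRs might look like the following, assuming the new disk is mounted at /mnt/longhorn-storage (disk keys and the reservation value are illustrative):

```yaml
apiVersion: longhorn.io/v1beta2
kind: Node
metadata:
  name: mayne-worker-2
  namespace: longhorn-system
spec:
  allowScheduling: true
  disks:
    dedicated-disk:                  # new 50GB disk
      path: /mnt/longhorn-storage
      allowScheduling: true
      evictionRequested: false
      storageReserved: 5368709120    # ~5Gi kept free (assumed)
      tags: []
    default-disk:                    # old disk being drained
      path: /var/lib/longhorn
      allowScheduling: false
      evictionRequested: true        # evict replicas onto the new disk
      storageReserved: 0
      tags: []
```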
- to allow cnpg join pod scheduling
- Falco: 40m → 30m
- Falcosidekick Web UI: 50m → 30m
- Velero UI: 50m → 30m
This frees up ~40m CPU on worker nodes to allow CNPG join pods
(which request 500m) to be scheduled successfully.
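As one hedged example, the Falco side of the change in helm-values (key paths follow the upstream falco/falcosidekick chart layout; verify against the chart versions in use):

```yaml
resources:
  requests:
    cpu: 30m        # was 40m
falcosidekick:
  webui:
    resources:
      requests:
        cpu: 30m    # was 50m
```

The Velero UI request lives in that component's own values and follows the same pattern.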
- storage
- Add Longhorn Helm chart configuration
- Configure UI ingress at longhorn0213.kro.kr
- Set CPU limits to null to prevent throttling
- Configure 3 replicas for high availability
- Set Longhorn as default StorageClass
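Sketch of the corresponding helm-values for the ingress and StorageClass settings (key names follow recent Longhorn chart versions):

```yaml
persistence:
  defaultClass: true             # Longhorn becomes the default StorageClass
  defaultClassReplicaCount: 3    # 3 replicas for high availability
ingress:
  enabled: true
  host: longhorn0213.kro.kr
  ingressClassName: traefik
```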
- to prevent throttling
Removed CPU limits from all infrastructure components while keeping
memory limits for protection:
- cnpg: removed 500m CPU limit
- external-secrets: removed 200m, 100m CPU limits (operator, webhook,
certController)
- falco: removed 500m CPU limit (falcosidekick webui)
- vault: removed 500m CPU limit
- velero: removed 500m, 1000m CPU limits (server, node-agent)
Benefits:
- ✅ Prevents CPU throttling
- ✅ Better performance and lower latency
- ✅ More efficient resource utilization
- ✅ Simpler management (only requests to tune)
Memory limits are kept to prevent memory leaks and OOM issues.
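The pattern, shown for the external-secrets operator as one illustrative example (request/limit numbers are placeholders, not the repo's values; webhook and certController use the same shape):

```yaml
resources:
  requests:
    cpu: 50m          # request kept so the scheduler can place the pod
    memory: 128Mi
  limits:
    memory: 256Mi     # memory limit kept to contain leaks and OOM blast radius
    # no cpu limit -> no CFS throttling
```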
- for selective velero backup
Added pod annotation to exclude PVC data from Velero backups while
preserving MinIO resource definitions:
- backup.velero.io/backup-volumes-excludes: export
This prevents circular backup of the velero-backups bucket while
still backing up MinIO StatefulSet, Services, and configuration.
Note: MinIO bucket data (bucket, bucket-dev, velero-backups) will NOT be
backed up. Consider a separate backup strategy for critical bucket data if
needed.
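The annotation sits in the pod template of the MinIO StatefulSet, roughly (excerpt only, names assumed):

```yaml
# Excerpt of the MinIO StatefulSet; "export" is the volume holding bucket data
spec:
  template:
    metadata:
      annotations:
        # File-system backup skips this volume; the StatefulSet, Services
        # and configuration objects are still included in backups.
        backup.velero.io/backup-volumes-excludes: export
```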
- resources and prevent circul...
- Reduce node-agent CPU request from 100m to 50m
- Fixes scheduling issue on mayne-worker-2 (CPU requests were at 99% of allocatable)
- Enables node-agent to run on all 3 nodes for complete backup
coverage
- Exclude minio namespace from backups
- Prevents circular backup (backing up the backup storage)
- Minio config is in Git and can be recreated
- Saves significant storage space
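Two hedged sketches: the node-agent request in the Velero helm-values (key names per recent versions of the vmware-tanzu chart) and a Schedule that leaves the minio namespace out (name and cron are illustrative):

```yaml
nodeAgent:
  resources:
    requests:
      cpu: 50m      # was 100m; lets the DaemonSet fit on mayne-worker-2
```

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily              # illustrative
  namespace: velero
spec:
  schedule: "0 3 * * *"    # illustrative cron
  template:
    excludedNamespaces:
      - minio              # MinIO config lives in Git; avoids circular backups
```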
- management
- Added ingress files for MinIO (API and Console) and pgweb
- Updated kustomization files to include ingress resources
- Migrated from centralized ingress management to per-app architecture