# Kubernetes Deployment
This guide covers deploying simple-backup on Kubernetes using two methods: direct CronJob deployment and the Helm chart with Kubernetes Operator.
## Table of Contents

- Method 1: CronJob Deployment
- Method 2: Helm Chart with Operator (Recommended)
- Storage Configuration
- Monitoring and Troubleshooting
- Uninstallation
- Security Best Practices
## Method 1: CronJob Deployment
Direct Kubernetes CronJob deployment for simple backup schedules.
### Prerequisites
- Docker image built and pushed to a registry
- PersistentVolumeClaim for data to backup
- Storage credentials
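
If you do not already have a PVC, a minimal claim for the data to back up might look like this (the name matches the `data-to-backup` claim used below; size and access mode are placeholder assumptions for your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-to-backup
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi  # Placeholder; size to your data
```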
### Quick Start
Build and push the Docker image:
```bash
docker build -t your-registry/simple-backup:latest .
docker push your-registry/simple-backup:latest
```

Create a secret for storage credentials:
```bash
kubectl create secret generic simple-backup-secrets \
  --from-literal=S3_ACCESS_KEY_ID=your-key \
  --from-literal=S3_SECRET_ACCESS_KEY=your-secret
```

Create a ConfigMap for the backup configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-backup-config
data:
  BACKUP_SOURCE_PATH: "/data"
  BACKUP_DEST_SERVICE: "s3"
  BACKUP_DEST_ROOT: "backups"
  BACKUP_COMPRESSION: "tar.zst"
  BACKUP_RETENTION_DAILY: "7"
  BACKUP_RETENTION_WEEKLY: "4"
  BACKUP_RETENTION_MONTHLY: "6"
  S3_BUCKET: "my-backup-bucket"
  S3_REGION: "us-east-1"
```

Create the CronJob:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: simple-backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: your-registry/simple-backup:latest
              envFrom:
                - configMapRef:
                    name: simple-backup-config
                - secretRef:
                    name: simple-backup-secrets
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
              resources:
                limits:
                  cpu: 500m
                  memory: 512Mi
                requests:
                  cpu: 250m
                  memory: 256Mi
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: data-to-backup
```

Apply the manifests:
```bash
kubectl apply -f configmap.yaml
kubectl apply -f cronjob.yaml
```
### Verify Deployment

```bash
# Check CronJob
kubectl get cronjobs

# View job history
kubectl get jobs

# View logs from the latest job
kubectl logs -l job-name=simple-backup-28123456
```

## Method 2: Helm Chart with Operator (Recommended)
The Kubernetes Operator provides a declarative way to manage backups using Custom Resource Definitions (CRDs).
### Installation
#### Option A: Install Operator via Helm
Build and push the operator image:

```bash
cd helm/simple-backup-operator/operator
docker build -t your-registry/simple-backup-operator:latest .
docker push your-registry/simple-backup-operator:latest
```

Install the operator:
```bash
helm install simple-backup-operator ./helm/simple-backup-operator \
  --create-namespace \
  --namespace simple-backup-system \
  --set image.repository=your-registry/simple-backup-operator \
  --set image.tag=latest
```

Verify the installation:
```bash
kubectl get pods -n simple-backup-system
kubectl get crd backupjobs.backup.simple-backup.io
```
#### Option B: Install from Values File
Create `operator-values.yaml`:
```yaml
image:
  repository: your-registry/simple-backup-operator
  tag: "latest"
  pullPolicy: IfNotPresent

replicaCount: 1

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

watchNamespace: ""  # Watch all namespaces

operator:
  logLevel: INFO
```

Install:

```bash
helm install simple-backup-operator ./helm/simple-backup-operator \
  -f operator-values.yaml \
  --create-namespace \
  --namespace simple-backup-system
```

### Creating BackupJobs
Build and push the backup image:

```bash
docker build -t your-registry/simple-backup:latest .
docker push your-registry/simple-backup:latest
```

Create a storage credentials secret:
```bash
kubectl create secret generic s3-credentials \
  --from-literal=S3_BUCKET=my-backup-bucket \
  --from-literal=S3_REGION=us-east-1 \
  --from-literal=S3_ACCESS_KEY_ID=your-key \
  --from-literal=S3_SECRET_ACCESS_KEY=your-secret
```

Create a BackupJob resource:
```yaml
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: postgres-backup
  namespace: default
spec:
  source:
    path: /var/lib/postgresql/data
    pvcName: postgres-data
  destination:
    service: s3
    root: backups/postgres
    secretRef: s3-credentials
  compression: tar.zst
  schedule: "0 2 * * *"
  retention:
    daily: 7
    weekly: 4
    monthly: 6
    yearly: 2
  resources:
    limits:
      cpu: "1"
      memory: "1Gi"
    requests:
      cpu: "500m"
      memory: "512Mi"
```

Apply the BackupJob:
```bash
kubectl apply -f backupjob.yaml
```
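
The `retention` fields follow a grandfather-father-son pattern: keep the newest backup per day, per week, per month, and per year, up to the configured counts. As a rough illustration of how such a policy selects which backups survive pruning (this sketch is not simple-backup's actual algorithm, and `gfs_keep` is a hypothetical helper):

```python
from datetime import date, timedelta

def gfs_keep(backup_dates, daily=7, weekly=4, monthly=6, yearly=2):
    """Keep the newest backup in each of the most recent `daily` days,
    `weekly` ISO weeks, `monthly` months, and `yearly` years."""
    newest_per = {}
    for d in sorted(backup_dates, reverse=True):
        # Newest-first iteration means setdefault records the newest
        # backup in each bucket.
        for kind, key in (
            ("day", d.toordinal()),
            ("week", d.isocalendar()[:2]),  # (ISO year, ISO week)
            ("month", (d.year, d.month)),
            ("year", d.year),
        ):
            newest_per.setdefault((kind, key), d)

    def take(kind, n):
        # The n most recent buckets of this kind.
        keys = sorted((k for t, k in newest_per if t == kind), reverse=True)[:n]
        return {newest_per[(kind, k)] for k in keys}

    return sorted(take("day", daily) | take("week", weekly)
                  | take("month", monthly) | take("year", yearly))

# Ten consecutive daily backups: the 7 newest survive via the daily rule,
# and the weekly/monthly/yearly picks fall inside that same window.
backups = [date(2024, 1, 1) + timedelta(days=i) for i in range(10)]
kept = gfs_keep(backups)
```

As older daily backups age out of the 7-day window, the weekly, monthly, and yearly buckets are what keep one representative of each period around.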
### Managing BackupJobs
```bash
# List all backup jobs
kubectl get backupjobs
kubectl get bj  # Short name

# Get detailed info
kubectl describe backupjob postgres-backup

# Check status
kubectl get backupjob postgres-backup -o yaml

# View the generated CronJob
kubectl get cronjobs

# View job history
kubectl get jobs

# View logs
kubectl logs -l backupjob=postgres-backup

# Suspend backup (custom resources require a merge patch)
kubectl patch backupjob postgres-backup --type=merge -p '{"spec":{"suspend":true}}'

# Resume backup
kubectl patch backupjob postgres-backup --type=merge -p '{"spec":{"suspend":false}}'

# Delete backup job (also deletes the generated CronJob)
kubectl delete backupjob postgres-backup
```

### Multiple Backup Jobs Example
```yaml
---
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: database-backup
spec:
  source:
    path: /data/postgres
    pvcName: postgres-data
  destination:
    service: s3
    root: backups/database
    secretRef: s3-credentials
  schedule: "0 2 * * *"
  compression: tar.zst
  retention:
    daily: 7
    weekly: 4
---
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: uploads-backup
spec:
  source:
    path: /app/uploads
    pvcName: uploads-data
  destination:
    service: s3
    root: backups/uploads
    secretRef: s3-credentials
  schedule: "0 3 * * *"
  compression: tar.gz
  retention:
    weekly: 8
    monthly: 12
```

### Encrypted Backups
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: age-encryption
stringData:
  recipient: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
---
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: encrypted-backup
spec:
  source:
    path: /data/sensitive
    pvcName: sensitive-data
  destination:
    service: s3
    root: backups/encrypted
    secretRef: s3-credentials
  encryption:
    enabled: true
    recipientSecretRef:
      name: age-encryption
      key: recipient
  compression: tar.zst
  schedule: "0 2 * * *"
```

### Pausing Workloads During Backup
For data consistency, you can automatically pause the pods that use a specific PVC while it is being backed up. This is particularly useful for:

- **StatefulSets with multiple replicas**: only the pods using the specific PVC being backed up are paused, so the other replicas keep the service available
- **Databases or stateful applications** that require quiescing before backup
```yaml
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: postgres-backup
spec:
  source:
    path: /var/lib/postgresql/data
    pvcName: postgres-data-0  # Specific PVC to back up
  destination:
    service: s3
    root: backups/postgres
    secretRef: s3-credentials
  compression: tar.zst
  schedule: "0 2 * * *"
  pauseWorkloads:
    enabled: true
    # Automatically finds and pauses all pods using postgres-data-0
    # Other pods (using postgres-data-1, postgres-data-2, etc.) continue running
  retention:
    daily: 7
    weekly: 4
```

How it works:
1. An init container finds all pods using the specified PVC (`source.pvcName`)
2. It deletes those pods (rather than scaling the entire workload to 0)
3. It waits for the deletion to complete
4. The main container performs the backup from the PVC
5. The Kubernetes controller (Deployment/StatefulSet/ReplicaSet) automatically recreates the deleted pods
6. A sidecar container confirms the pods are being recreated
Example with StatefulSet (3 replicas):
```yaml
# StatefulSet has 3 pods:
# - postgres-0 → postgres-data-0 ← Being backed up (paused)
# - postgres-1 → postgres-data-1 ← Still running ✓
# - postgres-2 → postgres-data-2 ← Still running ✓
apiVersion: backup.simple-backup.io/v1alpha1
kind: BackupJob
metadata:
  name: postgres-0-backup
spec:
  source:
    path: /var/lib/postgresql/data
    pvcName: postgres-data-0  # Only back up this specific PVC
  destination:
    service: s3
    root: backups/postgres-0
    secretRef: s3-credentials
  pauseWorkloads:
    enabled: true  # Only postgres-0 will be paused
```

Important notes:
- Requires the operator's service account to have pod delete permissions (already configured in the Helm chart)
- The backup pod uses `serviceAccountName: simple-backup-operator`
- Only pods mounting the specified PVC are affected
- Other replicas continue serving traffic during the backup
- Deleted pods are automatically recreated by their controller (Deployment/StatefulSet)
- For StatefulSets with multiple PVCs per pod, create separate BackupJobs for each PVC
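
For reference, the pod delete permission mentioned above corresponds to an RBAC rule along these lines. The Helm chart already ships an equivalent, so this is only a sketch of what that grant looks like (the Role name is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: simple-backup-pause-workloads  # Placeholder name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
```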
## Storage Configuration
### AWS S3
```bash
kubectl create secret generic s3-credentials \
  --from-literal=S3_BUCKET=my-bucket \
  --from-literal=S3_REGION=us-east-1 \
  --from-literal=S3_ACCESS_KEY_ID=YOUR_KEY \
  --from-literal=S3_SECRET_ACCESS_KEY=YOUR_SECRET
```

BackupJob:

```yaml
destination:
  service: s3
  root: backups/myapp
  secretRef: s3-credentials
```

### Azure Blob Storage
```bash
kubectl create secret generic azure-credentials \
  --from-literal=AZURE_CONTAINER=my-container \
  --from-literal=AZURE_ACCOUNT_NAME=myaccount \
  --from-literal=AZURE_ACCOUNT_KEY=YOUR_KEY
```

BackupJob:

```yaml
destination:
  service: azblob
  secretRef: azure-credentials
```

### Google Cloud Storage
```bash
kubectl create secret generic gcs-credentials \
  --from-literal=GCS_BUCKET=my-bucket \
  --from-file=GCS_CREDENTIAL=./service-account.json
```

BackupJob:

```yaml
destination:
  service: gcs
  secretRef: gcs-credentials
```

### WebDAV
```bash
kubectl create secret generic webdav-credentials \
  --from-literal=WEBDAV_ENDPOINT=https://webdav.example.com \
  --from-literal=WEBDAV_USERNAME=user \
  --from-literal=WEBDAV_PASSWORD=pass
```

BackupJob:

```yaml
destination:
  service: webdav
  secretRef: webdav-credentials
```

## Monitoring and Troubleshooting
### View Operator Logs

```bash
kubectl logs -n simple-backup-system -l app.kubernetes.io/name=simple-backup-operator -f
```

### View Backup Job Logs
```bash
# Get recent jobs
kubectl get jobs --sort-by=.metadata.creationTimestamp

# View logs from a specific job
kubectl logs job/postgres-backup-cronjob-28123456

# View logs by label
kubectl logs -l backupjob=postgres-backup
```

### Common Issues
**BackupJob created but no CronJob appears:**
```bash
# Check operator logs
kubectl logs -n simple-backup-system -l app.kubernetes.io/name=simple-backup-operator

# Check RBAC permissions
kubectl auth can-i create cronjobs --as=system:serviceaccount:simple-backup-system:simple-backup-operator
```

**Image pull errors:**
```bash
# Create image pull secret
kubectl create secret docker-registry regcred \
  --docker-server=your-registry.com \
  --docker-username=your-user \
  --docker-password=your-pass
```

Then add it to the operator values:

```yaml
imagePullSecrets:
  - name: regcred
```

**Permission errors in backup pod:**
```bash
# Check the PVC exists
kubectl get pvc postgres-data

# Check the secret exists
kubectl get secret s3-credentials

# Describe the pod for details
kubectl describe pod -l backupjob=postgres-backup
```

## Uninstallation
### CronJob Method
```bash
kubectl delete cronjob simple-backup
kubectl delete configmap simple-backup-config
kubectl delete secret simple-backup-secrets
```

### Operator Method
```bash
# Delete all BackupJobs (this also deletes the generated CronJobs)
kubectl delete backupjobs --all

# Uninstall the operator
helm uninstall simple-backup-operator -n simple-backup-system

# Delete the CRD (optional - removes the BackupJob resource type)
kubectl delete crd backupjobs.backup.simple-backup.io

# Delete the namespace
kubectl delete namespace simple-backup-system
```

## Security Best Practices
- Use RBAC to limit access to BackupJob resources
- Store credentials in Kubernetes Secrets, never in ConfigMaps
- Use read-only volumes for source data
- Regularly rotate storage credentials
- Enable age encryption for sensitive data
- Use external secret management (Vault, AWS Secrets Manager)
- Set resource limits to prevent resource exhaustion
- Use network policies to restrict operator network access
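
As a sketch of the last point, an egress NetworkPolicy for the operator namespace might start from something like the following. The pod label matches the one used in the log commands above, but the API server port and your CNI's NetworkPolicy support are assumptions you should verify for your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: simple-backup-operator-egress
  namespace: simple-backup-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: simple-backup-operator
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 53          # DNS
          protocol: UDP
    - ports:
        - port: 6443        # Kubernetes API server (port may differ)
          protocol: TCP
```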