How to Deploy Traefik Proxy Using Flux and GitOps Principles
GitOps makes configuration management seamless by creating a single source of truth for configuration changes so that changes can be transparent, validated, and low-risk. This article and this GitHub repository will show you how Traefik Proxy and Flux can work together to help you implement GitOps principles. But before we jump in, what are these tools, and how do they help?
What is Flux?
Flux is an open source tool for keeping Kubernetes clusters in sync with sources of configuration, such as Git repositories, and for automating configuration updates when new versions are available.
What is Traefik Proxy?
Traefik Proxy is an open source reverse proxy and load balancer that also works as a Kubernetes ingress controller, automatically discovering services and routing traffic to them.
How to deploy Traefik Proxy in multiple clusters using Flux
To follow along, you will need:
- Two Kubernetes clusters acting as, e.g., staging and production environments
- An empty GitHub repo
- Exported environment variables: GITHUB_TOKEN, GITHUB_USER, GITHUB_REPO
- The latest Flux CLI installed on a workstation
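The Flux CLI reads these variables from your shell during bootstrap. A minimal sketch with placeholder values — the token and username shown are hypothetical and must be replaced with your own:

```shell
# Hypothetical placeholder values -- substitute your own GitHub details.
export GITHUB_TOKEN="ghp_your-personal-access-token"   # a PAT with repo scope
export GITHUB_USER="your-github-username"
export GITHUB_REPO="flux-traefik-demo"
```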
The application configuration is split into three layers:
- Base - the initial configuration that is common to all environments (staging and production).
- Staging - the configuration for the staging environment. It inherits from the base configuration and applies only the changes specific to staging.
- Production - the configuration for the production environment. Again, it inherits from the base configuration and applies the changes needed for production.
Step 1: Create the infrastructure repository
gh repo create flux-traefik-demo --public --description "Flux and Traefik - demo" --clone
git init
(The gh command refers to the GitHub CLI.) If your home directory is a Git repository, you might want to run git init in an empty directory before running the commands above. This creates the remote repository and sets up the Git remote.
Step 2: Create the repository structure
- Apps contains the application definitions in their base, staging, and production variants.
- Infrastructure contains common infrastructure tools, such as Helm repository definitions or common infrastructure components.
- Clusters contains the Flux configurations for each cluster.
mkdir -pv ./apps/{base,staging,production}/traefik ./clusters/{production,staging} ./infrastructure/{sources,crds}
├── apps
│ ├── base
│ ├── production
│ └── staging
├── infrastructure
│ ├── crds
│ └── sources
└── clusters
├── production
└── staging
Step 3: Create the Traefik base configuration
cat > ./apps/base/traefik/rbac.yaml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: traefik
EOF
cat > ./apps/base/traefik/traefik.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik
  template:
    metadata:
      labels:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - name: traefik
          image: traefik:2.5.4
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
            failureThreshold: 1
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 2
          resources:
            limits:
              cpu: 1000m
              memory: 1000Mi
            requests:
              cpu: 100m
              memory: 50Mi
          ports:
            - name: web
              containerPort: 8000
              protocol: TCP
            - name: websecure
              containerPort: 8443
              protocol: TCP
            - name: traefik
              containerPort: 9000
              protocol: TCP
          volumeMounts:
            - mountPath: /data
              name: storage-volume
      volumes:
        - name: storage-volume
          emptyDir: {}
EOF
cat > ./apps/base/traefik/svc.yaml << EOF
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
spec:
  selector:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/name: traefik
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      name: web
      targetPort: web
      protocol: TCP
    - port: 443
      name: websecure
      targetPort: websecure
      protocol: TCP
EOF
cat > ./apps/base/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- rbac.yaml
- traefik.yaml
- svc.yaml
EOF
Note
What is Kustomize? Kustomize is a standalone tool that customizes Kubernetes objects through a Kustomization file. Check the Kubernetes documentation for an overview of Kustomize.
Step 4: Customize Traefik Proxy configurations for your production cluster
cat > ./apps/production/traefik/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-production
EOF
cat > ./apps/production/traefik/traefik-patch.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  template:
    spec:
      containers:
        - name: traefik
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
            - "--certificatesresolvers.myresolver.acme.storage=/data/acme.json"
            - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
            - "--certificatesresolvers.myresolver.acme.email=jakub.hajek+webinar@traefik.io"
EOF
cat > ./apps/production/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: traefik-production
resources:
- namespace.yaml
- ../../base/traefik
patchesStrategicMerge:
- traefik-patch.yaml
EOF
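A note on how the patch merges, since it is easy to trip over: strategic merge matches containers by name, but a plain string list such as args has no merge key, so the patched list replaces the base list wholesale — that is why the patch repeats all of the base flags before adding the ACME ones. Below is a rough, abridged sketch of what kustomize renders for production (for illustration only, not a file you need to create):

```yaml
# Abridged sketch of the rendered production Deployment -- illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: traefik-production   # injected by the kustomization's namespace field
spec:
  template:
    spec:
      containers:
        - name: traefik           # matched by name against the base container
          image: traefik:2.5.4    # inherited from the base
          args:                   # replaced wholesale by the patch
            - "--entryPoints.web.address=:8000/tcp"
            # ... the rest of the base flags ...
            - "--certificatesresolvers.myresolver.acme.storage=/data/acme.json"
            - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
```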
Step 5: Customize Traefik Proxy configurations for your staging cluster
The staging overlay is nearly identical to production; the key difference is that its ACME resolver points at the Let's Encrypt staging CA server, which avoids hitting production rate limits while testing.
cat > ./apps/staging/traefik/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: traefik-staging
EOF
cat > ./apps/staging/traefik/traefik-patch.yaml << EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  template:
    spec:
      containers:
        - name: traefik
          args:
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--api=true"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowCrossNamespace=true"
            - "--certificatesresolvers.myresolver.acme.storage=/data/acme.json"
            - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
            - "--certificatesresolvers.myresolver.acme.email=jakub.hajek+webinar@traefik.io"
            - "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
EOF
cat > ./apps/staging/traefik/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: traefik-staging
resources:
- namespace.yaml
- ../../base/traefik
patchesStrategicMerge:
- traefik-patch.yaml
EOF
Step 6: Create the Traefik Proxy CRDs
The Traefik CRDs are pulled straight from the official Helm chart repository through a Flux GitRepository source; a Flux Kustomization then applies them and runs health checks on each CustomResourceDefinition.
cat > ./infrastructure/crds/traefik-crds.yaml << EOF
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: traefik-crds
  namespace: flux-system
spec:
  interval: 30m
  url: https://github.com/traefik/traefik-helm-chart.git
  ref:
    tag: v10.3.0
  ignore: |
    # exclude all
    /*
    # path to crds
    !/traefik/crds/
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: traefik-api-crds
  namespace: flux-system
spec:
  interval: 15m
  prune: false
  sourceRef:
    kind: GitRepository
    name: traefik-crds
    namespace: flux-system
  healthChecks:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: ingressroutes.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: ingressroutetcps.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: ingressrouteudps.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: middlewares.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: middlewaretcps.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serverstransports.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: tlsoptions.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: tlsstores.traefik.containo.us
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: traefikservices.traefik.containo.us
EOF
cat > ./infrastructure/crds/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: flux-system
resources:
- traefik-crds.yaml
EOF
cat > ./infrastructure/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crds
EOF
Step 7: Create the initial Flux configuration for your production cluster
cat > ./clusters/production/apps.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/production
  prune: true
  wait: true
EOF
cat > ./clusters/production/infrastructure.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  wait: true
EOF
Step 8: Create the initial Flux configuration for your staging cluster
cat > ./clusters/staging/apps.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/staging
  prune: true
  wait: true
EOF
cat > ./clusters/staging/infrastructure.yaml << EOF
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  wait: true
EOF
Step 9: Create the initial commit
Commit and push everything created so far:
git add -A
git commit -m "Initial Traefik, whoami, and Flux configuration"
git push -u origin main
After pushing, the repository should have the following structure:
├── apps
│ ├── base
│ │ └── traefik
│ │ ├── kustomization.yaml
│ │ ├── rbac.yaml
│ │ ├── svc.yaml
│ │ └── traefik.yaml
│ ├── production
│ │ └── traefik
│ │ ├── kustomization.yaml
│ │ ├── namespace.yaml
│ │ └── traefik-patch.yaml
│ └── staging
│ └── traefik
│ ├── kustomization.yaml
│ ├── namespace.yaml
│ └── traefik-patch.yaml
├── clusters
│ ├── production
│ │ ├── apps.yaml
│ │ └── infrastructure.yaml
│ └── staging
│ ├── apps.yaml
│ └── infrastructure.yaml
└── infrastructure
├── crds
│ ├── kustomization.yaml
│ └── traefik-crds.yaml
├── kustomization.yaml
└── sources
Step 10: Bootstrap your clusters
export GITHUB_TOKEN
export GITHUB_USER
export GITHUB_REPO
GITHUB_USER refers to your username, and GITHUB_REPO refers to the repo you created earlier. Replace the --context values below with your own kubectl contexts. Bootstrap the staging cluster first:
flux bootstrap github \
--branch=main \
--context=t1.aws.traefiklabs.tech \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--path=clusters/staging \
--components-extra=image-reflector-controller,image-automation-controller \
--personal
Once the bootstrap completes, you can verify the installed components with:
flux get all
Then bootstrap the production cluster:
flux bootstrap github \
--branch=main \
--context=t2.aws.traefiklabs.tech \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--path=clusters/production \
--components-extra=image-reflector-controller,image-automation-controller \
--personal
Note
The two commands are very similar. The only difference is the path parameter, which is set to either clusters/staging or clusters/production.
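Note also that flux bootstrap commits its own manifests back to the repository. After both commands finish, you should find additional files under each cluster path (the names below are the Flux defaults, mirrored under clusters/production/flux-system as well):

```
clusters/staging/flux-system/
├── gotk-components.yaml   # the Flux controllers, including the two extra image controllers
├── gotk-sync.yaml         # a GitRepository and Kustomization pointing back at this repo
└── kustomization.yaml
```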
Step 11: Create a base deployment for the whoami application
The whoami application is a simple demo application that displays the request headers it receives; it is a kind of echo server that we use for testing purposes. Start by creating the directories for it:
mkdir -pv ./apps/{base,staging,production}/whoami
cat > ./apps/base/whoami/deployment.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
  labels:
    name: whoamiv1
spec:
  replicas: 1
  selector:
    matchLabels:
      task: whoamiv1
  template:
    metadata:
      labels:
        task: whoamiv1
    spec:
      containers:
        - name: whoamiv1
          image: traefik/traefikee-webapp-demo:v2
          args:
            - -ascii
            - -name=FOO
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            failureThreshold: 1
            initialDelaySeconds: 2
            periodSeconds: 3
            successThreshold: 1
            timeoutSeconds: 2
          resources:
            requests:
              cpu: 10m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: whoamiv1
  namespace: app
spec:
  ports:
    - name: http
      port: 80
  selector:
    task: whoamiv1
EOF
Next, create an IngressRoute for the whoami application. IngressRoute is one of the Traefik CRDs; it creates the routers that expose applications externally. You can learn more about this in the official Traefik Proxy documentation on Kubernetes IngressRoute. Note that the Host rule below uses a placeholder (fix.me) that the staging and production overlays will patch with real hostnames.
cat > ./apps/base/whoami/ingressroute.yaml << EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(\`fix.me\`)
      services:
        - kind: Service
          name: whoamiv1
          port: 80
  tls:
    certResolver: myresolver
EOF
cat > ./apps/base/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingressroute.yaml
EOF
Step 12: Create a custom release for your staging environment
First, create the namespace where the whoami application will be deployed.
cat > ./apps/staging/whoami/namespace.yaml <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: whoami-staging
EOF
cat > ./apps/staging/whoami/whoami-patch.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
spec:
  replicas: 4
  template:
    spec:
      containers:
        - name: whoamiv1
          args:
            - -ascii
            - -name=STAGING
EOF
Next, patch the IngressRoute for the whoami application. The patch updates the host rule that will be used to access the whoami application.
cat > ./apps/staging/whoami/ingressroute-patch.yaml <<EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  routes:
    - kind: Rule
      match: Host(\`whoami.t1.demo.traefiklabs.tech\`)
      services:
        - kind: Service
          name: whoamiv1
          port: 80
EOF
Finally, create the Kustomization that deploys the whoami application on your staging cluster.
cat > ./apps/staging/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whoami-staging
resources:
- namespace.yaml
- ../../base/whoami
patchesStrategicMerge:
- whoami-patch.yaml
- ingressroute-patch.yaml
EOF
Step 13: Create a custom release for your production environment
First, create the namespace for the whoami application in production.
cat > ./apps/production/whoami/namespace.yaml << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: whoami-production
EOF
Next, patch the whoami application. This is a tiny change that shows the difference between the staging and production examples. In an actual use case, you can add more changes for the production environment, such as mounting production ConfigMaps, Secrets, or environment variables.
cat > ./apps/production/whoami/whoami-patch.yaml << EOF
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiv1
spec:
  replicas: 8
  template:
    spec:
      containers:
        - name: whoamiv1
          args:
            - -ascii
            - -name=PRODUCTION
EOF
Then patch the IngressRoute for the whoami application; the patch only changes the value of the Host matching rule.
cat > ./apps/production/whoami/ingressroute-patch.yaml << EOF
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  routes:
    - kind: Rule
      match: Host(\`whoami.t2.demo.traefiklabs.tech\`)
      services:
        - kind: Service
          name: whoamiv1
          port: 80
EOF
Finally, create the Kustomization that deploys the whoami application to your production cluster.
cat > ./apps/production/whoami/kustomization.yaml << EOF
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: whoami-production
resources:
- namespace.yaml
- ../../base/whoami
patchesStrategicMerge:
- whoami-patch.yaml
- ingressroute-patch.yaml
EOF
Conclusion
In this tutorial, we accomplished the following:
- Deployed Traefik Proxy with the required resources (service account, RBAC)
- Deployed a test application
- Created the IngressRoute to reach the test application
For the staging and production environments, we used patchesStrategicMerge to update the configuration accordingly. Again, the base configuration was inherited, and only the environment-specific changes were applied and merged to produce the final manifests.