Local Development With Rancher Desktop
In this example, I’m going to take an existing Rails application and do some local development in Rancher Desktop. This guide requires at least rancher-desktop 0.6.0.
First, I pull down our Kubernetes configuration into the `pwd` and add it to `.gitignore`:
```bash
git clone git@github.com:myorg/myapp-config.git
echo "myapp-config" >> .gitignore
```
Then I set the environment variable `CHART_LOCATION` to this location:
```bash
export CHART_LOCATION=myapp-config/chart
```
In real life, I use direnv (https://direnv.net/) to manage my project-specific configuration.
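For illustration, a minimal `.envrc` for this project might look like the following sketch (the exact contents are my assumption; only `CHART_LOCATION` is required by the rest of this guide):

```bash
# .envrc — direnv loads this automatically whenever I cd into the project
export CHART_LOCATION=myapp-config/chart
# the SYMWS_* variables referenced by the deploy script below would live here too
```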
Then, I modify the app’s Helm chart to be compatible with my local development workflow. This mainly consists of:

- overriding the main app’s `command`
- mounting my `pwd` into the app container’s `WORKDIR`
- injecting some variables into the container
values.yaml

```yaml
...
localDev:
  enabled: false
  path: /Users/me/git/myapp
  storageClassName: local-path
additionalEnv: []
```
We have a `localDev` feature flag that, when enabled, creates a local-path PV and PVC.
persistence.yaml

```yaml
{{- if .Values.localDev.enabled }}
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  name: {{ .Release.Name }}-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: {{ .Release.Name }}-data
    namespace: {{ .Release.Namespace }}
  hostPath:
    path: {{ .Values.localDev.path }}
    type: DirectoryOrCreate
  persistentVolumeReclaimPolicy: Delete
  storageClassName: {{ .Values.localDev.storageClassName }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: {{ .Values.localDev.storageClassName }}
{{- end }}
```
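Rancher Desktop runs k3s under the hood, which ships the local-path provisioner, so the `local-path` storage class should already exist in the cluster. You can confirm with:

```bash
# verify the local-path storage class is available before relying on it
kubectl get storageclass local-path
```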
deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
spec:
  template:
    spec:
      volumes:
        {{- if .Values.localDev.enabled }}
        - name: data
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-data
        {{- end }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          command:
            {{- toYaml .Values.command | nindent 12 }}
          env:
            {{- toYaml .Values.additionalEnv | nindent 12 }}
          {{- if .Values.localDev.enabled }}
          volumeMounts:
            - name: data
              mountPath: /app/app
              subPath: app
            - name: data
              mountPath: /app/config
              subPath: config
            - name: data
              mountPath: /app/lib
              subPath: lib
            - name: data
              mountPath: /app/Gemfile
              subPath: Gemfile
            - name: data
              mountPath: /app/Gemfile.lock
              subPath: Gemfile.lock
            - name: data
              mountPath: /app/package.json
              subPath: package.json
            - name: data
              mountPath: /app/yarn.lock
              subPath: yarn.lock
          {{- end }}
...
```
In the deployment manifest, we mount the PV we created above into the app’s `WORKDIR`. In the example above, I only mount in the directories and paths that I know churn often. If we were to mount in the whole `pwd`, we’d have terribly slow development due to the speed of the local-path PV.
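To double-check exactly which paths will be mounted, I can render the chart locally before deploying anything. A quick sketch, assuming the manifest above lives at `templates/deployment.yaml` in the chart and using `myapp` as a stand-in release name:

```bash
# render the deployment with localDev enabled and pull out the mount paths
helm template myapp "$CHART_LOCATION" \
  --set localDev.enabled=true \
  --set localDev.path="$(pwd)" \
  --show-only templates/deployment.yaml | grep mountPath
```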
I then wrap the whole thing up into a small shell script. We use nerdctl to build the image into the k8s.io namespace (so that Kubernetes can pick it up), and then deploy the Helm chart into the rancher-desktop context.
./bin/deploy-local.sh

```bash
#!/bin/bash
tag=$(uuidgen)
repo=$(basename $(pwd))
chart_location=${CHART_LOCATION}

# make sure we are in the rancher-desktop context
kubectx rancher-desktop
# tolerate the namespace already existing, so re-runs work
kubectl create namespace $repo 2>/dev/null || true
kubens $repo

# build image into k8s.io namespace (so rancher can pull it)
nerdctl -n k8s.io build -t $repo:$tag .

helm upgrade --install $repo \
  --set image.repository=$repo \
  --set image.tag=$tag \
  --set localDev.enabled='true' \
  --set localDev.path="$(pwd)" \
  --set command={sleep,infinity} \
  --set redis.storageClass='local-path' \
  --set 'additionalEnv[0].name'="SYMWS_PIN" \
  --set 'additionalEnv[0].value'="${SYMWS_PIN}" \
  --set 'additionalEnv[1].name'="SYMWS_URL" \
  --set 'additionalEnv[1].value'="${SYMWS_URL}" \
  --set 'additionalEnv[2].name'="SYMWS_USERNAME" \
  --set 'additionalEnv[2].value'="${SYMWS_USERNAME}" \
  $chart_location
```
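With direnv providing `CHART_LOCATION` and the `SYMWS_*` variables, a local deploy is then a one-liner from the app’s repo root:

```bash
# the repo directory name doubles as the image name and namespace
cd ~/git/myapp
./bin/deploy-local.sh
```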
Once the chart has been deployed, I “pop a shell” into the app container, set the Rails env, and start up the app:
```bash
kubectl exec -it $(kubectl get pods -l app.kubernetes.io/name=app -o name) -- bash
bundle exec rails s -b '0.0.0.0'
```
I run `rails s` by hand, instead of allowing the container’s `command:` to do it, so that I have a shell I can `ctrl+c` and a shell that `binding.pry` works with.
Lastly, I do a port-forward to the pod, and party:
```bash
kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=app -o name) 3000:3000
```
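From another terminal, a quick smoke test confirms the app is reachable (assuming it listens on the default Rails port, 3000):

```bash
# expect an HTTP status line back from the forwarded port
curl -I http://localhost:3000
```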