How to mount SMB share behind OpenVPN to Pod on GKE

Tomáš Papež
4 min read · Feb 7, 2021

A client asked me to prepare an SMB mount behind a VPN to access an integration partner's data. The easy way would be to deploy a VM, install OpenVPN, and run the application there, but almost everything we have runs on GKE, so I wanted to explore what options I had there.

VPN Setup

The easy part was the VPN: the client provided me with an OpenVPN config, and I just did three simple steps:

  • create a VM with IP forwarding enabled and a static internal IP (terraformed; see the gcloud sketch after this list)
  • run Ansible against that machine to install and set up OpenVPN
  • create a route in GCP to the single static IP where the SMB share lives (terraformed)
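
I did the first step with Terraform, but purely for illustration, a minimal gcloud sketch of the same two actions would look roughly like this (the names, region, zone, and subnet are placeholders, not values from the real setup):

# reserve a static internal IP for the gateway
gcloud compute addresses create vpn-gateway-ip \
  --region=<region> --subnet=<subnet>

# create the gateway VM with IP forwarding enabled and tag it for later firewall rules
gcloud compute instances create vpn-gateway \
  --zone=<zone> \
  --can-ip-forward \
  --private-network-ip=<reserved-internal-ip> \
  --tags=vpn-gateway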

In the OpenVPN config I used UP and DOWN scripts to make sure the routing works as expected:

UP
#!/bin/bash
# OpenVPN passes the tunnel device name (e.g. tun0) as the first argument
sysctl -w net.ipv4.ip_forward=1                       # enable kernel IP forwarding
iptables -t nat -A POSTROUTING -o ${1} -j MASQUERADE  # NAT traffic leaving via the tunnel
DOWN
#!/bin/bash
# remove the NAT rule when the tunnel goes down
iptables -t nat -D POSTROUTING -o ${1} -j MASQUERADE
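
For completeness, this is roughly how such scripts get wired into the OpenVPN config; the file paths here are my assumption, not the client's actual layout:

# client config excerpt (script paths are placeholders)
script-security 2          # allow OpenVPN to execute external scripts
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh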

The last part was adding a new route in GCP. I did it via Terraform, reusing the previously created IP, but you can use for example gcloud:

gcloud compute routes create vpn --destination-range=<your-range> --next-hop-address=<my-static-ip>

I do recommend adding a description and tags for better resource management and reporting, for example:
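
This is the same route with metadata attached; the instance tag is a placeholder (with --tags the route only applies to instances carrying those tags):

gcloud compute routes create vpn \
  --destination-range=<your-range> \
  --next-hop-address=<my-static-ip> \
  --description="Route SMB traffic through the OpenVPN gateway" \
  --tags=<instance-tag>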

Now I had a working VPN tunnel from any host in my network to that SMB share. To secure this solution I want to rework it to use network tags and allow traffic only from tagged nodes in my cluster (network).

Kubernetes here we go!

The first and main problem: How to mount it?

  • The first idea was to install https://github.com/kubernetes-csi/csi-driver-smb, but it failed on the custom client settings
  • Another one was to mount the share in a second container in the Pod and share access via an emptyDir volume, but that can't work: the mount only exists inside the SMB container's mount namespace, so the app container will never see the data
  • Mount it directly in the container where the application lives

I decided to go with the third option as the easiest one, but it turned out not to be that easy. I will show you the whole YAML file and explain the most critical parts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        # AppArmor is configured per Pod, so the annotation goes on the Pod
        # template: run the container unconfined so mount() is not blocked
        container.apparmor.security.beta.kubernetes.io/example-app: unconfined
    spec:
      # pin the workload to the dedicated Ubuntu node pool
      nodeSelector:
        pool_name: g1_s_ubuntu
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "ubuntu"
          effect: "NoSchedule"
      containers:
        - image: <image>
          name: example-app
          securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
                - DAC_READ_SEARCH
          lifecycle:
            # mount the share right after the container starts
            postStart:
              exec:
                command: ["/bin/sh", "-c", "mount -t cifs -o username=Space,password=$SMB_PASS //<IP>/data /mnt/external"]
            # unmount cleanly before the container stops
            preStop:
              exec:
                command: ["/bin/sh", "-c", "umount /mnt/external"]
  • Make sure that no AppArmor profile is loaded for the container, via the annotation on the Pod template
  • You can't use the default Container-Optimized OS images! You will need to create an Ubuntu node pool; in my case I used taints and tolerations to make sure only the designated workload runs on the Ubuntu nodes while everything else stays on the primary node pool with Container-Optimized OS (see the sketch after this list)
  • You will need to add two capabilities to your container, SYS_ADMIN and DAC_READ_SEARCH, plus privileged mode, to allow the container to load kernel modules and manipulate the network. If you're not familiar with capabilities, the capabilities(7) man page is a good starting point.
  • The final step was to mount the share. For that I used the lifecycle hooks, where I just added all the needed parameters, as in the example above. The container image needs cifs-utils installed via apt (see the snippet after this list).
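
A rough sketch of creating such a node pool with gcloud; the cluster name, zone, and the exact image type value are my placeholders:

# dedicated Ubuntu pool, tainted so only tolerating workloads land on it
gcloud container node-pools create g1-s-ubuntu \
  --cluster=<cluster-name> --zone=<zone> \
  --image-type=UBUNTU \
  --node-labels=pool_name=g1_s_ubuntu \
  --node-taints=dedicated=ubuntu:NoSchedule

And the image preparation, assuming a Debian/Ubuntu base image:

# run during the image build
apt-get update && apt-get install -y cifs-utils   # provides mount.cifs
mkdir -p /mnt/external                            # mount point for the share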

And everything was cool until I tried to run it! I ran into an error where Google was useless: why did it work from the VM but not from the GKE cluster? I found the solution: you need to allow ports 445 and 139 in the firewall, and you already know how.
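
For reference, a minimal gcloud sketch of such a rule; the rule name, network, and tags are placeholders, not the client's actual values:

# allow SMB (445) and NetBIOS session service (139) towards the VPN gateway
gcloud compute firewall-rules create allow-smb-to-vpn \
  --network=<network> \
  --direction=INGRESS \
  --allow=tcp:445,tcp:139 \
  --source-tags=<gke-node-tag> \
  --target-tags=vpn-gateway

Scoping the rule with tags like this already points towards the network-tags item in the TODO list below.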

The biggest side effect of this setup is that the application needs to run in the container as root, because no other user has the privilege to mount. There might be a better option, but I needed to deliver this ASAP, so I accepted the tradeoff and will explore the security options later. For now, I'm happy that it's at least isolated on a dedicated node.

TODO:

  • use network tags to allow traffic between the VPN gateway and the Ubuntu node pool only
  • harden the container so it does not need to run as root

Do you have a better solution? Let me know, I will be more than happy to explore it and re-deploy this application.
