So there I was…again…
- Part 1: Preparation
- Part 1.5: Database Changes
- Part 2: Primary Node Deployment and Config
- Part 3: Load Balancer Configuration
- Part 4: SSL Configuration
- Part 5: Deploy Additional Nodes
You’re probably wondering why this post exists. Well, it’s because after all the kerfuffle (is that even a word???) with my #NSX-T networking, I decided to go ahead and field a Kubernetes-hosted database server rather than use my full VM. Why, you ask? Why not! 😉 It’s Kubernetes, and everyone (who’s anyone) knows that if you are able to use Kubernetes, you use Kubernetes.
In all seriousness, I have been wanting to find ways to leverage MS SQL Server as a container, and this use case seemed like a great fit for the following reasons:
- Security: I am only exposing a single database (the one associated with this app), so if the server or service is compromised (it IS in the DMZ, after all), the exposure is limited.
- Simplicity: Why worry about Windows Server updates when all I have to do is monitor Cumulative Updates for MS SQL Server?
- Availability: Kubernetes deployments are self-healing. If the pod fails, it will be restarted. This doesn’t catch every failure mode, though: if the service inside hangs or fails while the pod itself keeps running, Kubernetes won’t know anything is wrong.
  - I need to configure liveness checks to catch this case (a rough sketch of what that probe could look like follows this list).
  - I also want to configure this as an Always On Availability Group, which is now supported in the MS SQL Server 2019 container.
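For reference, here is a rough sketch of the kind of liveness check I have in mind for the mssql container spec in mssql.yaml. This isn’t in the repository yet, and the timing values are just assumptions on my part; a simple TCP check against the standard SQL Server port (1433) would restart the pod if the engine stops accepting connections even though the container process is still running:

```yaml
# Sketch only: a liveness probe to add under the mssql container in mssql.yaml.
# The timing values below are placeholders, not tested settings.
livenessProbe:
  tcpSocket:
    port: 1433              # default SQL Server port
  initialDelaySeconds: 60   # give SQL Server time to finish starting up
  periodSeconds: 15
  failureThreshold: 3       # restart the pod after roughly 45 seconds of failed checks
```

A more thorough option would be an exec probe that runs a quick sqlcmd query inside the container, but the TCP check is the simplest place to start.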
So what does this do to my architecture? Not too much. Here’s the updated diagram:

As you can see, the reference to a SQL Server VM has now been changed to a Pod. This pod is hosted directly on my vCenter “Workload Management” enabled cluster. And the nice thing about this setup is that I can monitor its compute, memory, and storage utilization just like any other VM:

For everyone’s reference, I have put the YAML and SQL files used to create this deployment up on GitHub. You can clone the repository with the following command:
$ git clone https://github.com/darkhonor/mssql-wso.git
To adjust the YAML files for your environment, you will want to change the mssql-storage.yaml file to use a Storage Class that is available to you. Also, the values specified throughout are sized to support up to 1,000 users, which is great for my Homelab environment (it’s also the smallest sizing they offer). The mssql.yaml file should be fairly straightforward for anyone who has worked with Kubernetes Deployments and Services. At this point, I’m only exposing the primary SQL Server port because I’m not doing anything else with the deployment. I’ve also pinned the exact version of SQL Server I want to use; as of today, the latest MS SQL Server 2017 Cumulative Update is 21. Finally, I’ve separated the instance, data, and logging directories into separate volume mounts to help keep data utilization under tighter control.
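To give you a feel for what you would be editing, here is a trimmed-down sketch of those two files. Treat the files in the repository as authoritative: the storage class name, resource sizes, mount path, and Service type below are placeholders I made up for illustration, and the image tag is just an example of how the exact Cumulative Update gets pinned.

```yaml
# mssql-storage.yaml (sketch): point storageClassName at a Storage Class you have.
# The real file defines separate claims for the instance, data, and log directories.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: your-storage-class    # placeholder: substitute your own
  resources:
    requests:
      storage: 10Gi                        # placeholder size
---
# mssql.yaml (sketch): pinned image version, SA password pulled from a Secret,
# and a volume mount backed by the claim above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-wso
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql-wso
  template:
    metadata:
      labels:
        app: mssql-wso
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2017-CU21-ubuntu-16.04  # pin the exact CU (tag shown as an example)
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-wso
                  key: SA_PASSWORD
          volumeMounts:
            - name: mssql-data
              mountPath: /var/opt/mssql/data   # placeholder: one of the three mounts
      volumes:
        - name: mssql-data
          persistentVolumeClaim:
            claimName: mssql-data
---
# Service (sketch): only the primary SQL Server port is exposed
apiVersion: v1
kind: Service
metadata:
  name: mssql-wso
spec:
  type: LoadBalancer    # placeholder: use whatever exposure method fits your cluster
  selector:
    app: mssql-wso
  ports:
    - port: 1433
      targetPort: 1433
```

The Secret named mssql-wso with the SA_PASSWORD key that the sketch references is created in the next step.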
Once you’ve made your adjustments, you can create the deployment using the following kubectl commands:
$ kubectl create secret generic mssql-wso --from-literal=SA_PASSWORD="SuperSecretPassword"
$ kubectl apply -f .\mssql-storage.yaml
$ kubectl apply -f .\mssql.yaml
Run these one at a time to make sure everything creates successfully. The first command creates the Secret referenced in the mssql.yaml file: when creating an MS SQL Server container, you have to specify a password for the “sa” user, and this keeps that password in Kubernetes and out of general view. You can use the following command to verify the PersistentVolumeClaims were created correctly:
$ kubectl get pvc
The following command will show you the state of your deployment:
$ kubectl get all
In my case, these produce the following:

Once the database service is up and running, you can use the included SQL files to create the database you will use for Workspace ONE Access, create the user you will log in with, and grant the appropriate permissions:

Once the file is loaded and the password is adjusted to whatever you set for your SA password, click the “Execute” button. This will run all of the commands in the file and configure your database appropriately.
Once this is done, you’re ready to proceed to the next step, which is installing the first node in the cluster.
Until next time. I hope you found this useful. I’m always looking for feedback and ideas to work with. Now that my infrastructure is settling down, I will be ranging all over the place a bit. But it will come together as I get all of the pieces put together.