Hello! So I’m a little late with this posting, but I had to clear up a few things with my employer before I got too deep into this blog. With that out of the way, I’m hoping to get at least one post per week up, if not more, depending on the progress I make or requests at my Twitter (for now). Hopefully this weekend I will finish setting things up so that folks can sign up here to comment.
So today I wanted to give a little background on my #Homelab setup. I made an earlier posting about the software and hardware aspects; today, I want to talk about the network layout and setup. One of my main reasons for this blog is to have a discussion about real-world network and service designs, and I have tried to keep that in mind as I designed my environment within the limits of my compute and storage capacity. The diagram below shows the basic components of my #Homelab environment.
As you can see, I have a Ubiquiti UDM Pro as the primary router/gateway for the network. I really like this device because the ability to provision and manage VLANs across it, my Ubiquiti 16XG, and my Ubiquiti USW-24-Pro is phenomenal. Don’t get me wrong, there are some distinct downsides which drove some of my design decisions, but overall, I’m pleased with the setup.
I have several VLANs established currently, as shown in the table below. As you can guess from my naming scheme, I’m hoping to expand my #Homelab to encompass a second “site” using my Ubiquiti EdgeRouter. The “Type” column refers to the Ubiquiti Network Type field, which offers “Corporate”, “Guest”, “WAN”, “VLAN Only”, “Remote User VPN”, and “Site-to-Site VPN”. The “Corporate” option allows the UDM to act as the gateway and automatically route between all connected “Corporate” networks throughout the environment. “VLAN Only” implies that another device may be the gateway, or that the network is deliberately non-routable.
| Network Name | VLAN ID | Type | Gateway | Subnet / Netmask |
|---|---|---|---|---|
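A simple convention I like for keeping a VLAN table like this consistent is to derive each subnet from the VLAN ID itself. The sketch below illustrates the idea with Python’s standard `ipaddress` module; the VLAN names, IDs, and the `10.0.0.0/16` supernet are hypothetical placeholders, not my actual addressing plan from the table above.

```python
import ipaddress

# Hypothetical VLAN plan -- illustrative only, not my real VLAN IDs or subnets.
# Each VLAN gets a /24 carved from a private /16, with the third octet equal
# to the VLAN ID and the gateway on the first usable address.
vlans = {
    "Management": 10,
    "vMotion": 20,
    "vSAN": 30,
    "Production": 40,
}

for name, vlan_id in vlans.items():
    subnet = ipaddress.ip_network(f"10.0.{vlan_id}.0/24")
    gateway = next(subnet.hosts())  # first usable host address in the /24
    print(f"{name}: VLAN {vlan_id}, subnet {subnet}, gateway {gateway}")
```

The payoff is that you can read a host’s VLAN straight out of its IP address, which makes troubleshooting trunk and tagging issues much faster.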
As shown in the diagram above, I have 2 physical hosts in one Cluster and 4 virtual ESXi hosts in a second Cluster. The second cluster has a vSAN Datastore, whereas the physical cluster just has local, on-host Datastores for now. I also run a FreeNAS VM under VMware Workstation on my desktop for the times when I need to do some maintenance and shuffle workloads between the two physical hosts. It’s not fun, so I don’t do it often. The NUC can only do so much within the environment.
I brought up NSX-T in my environment in my bid to try out the new vSphere with Tanzu features of ESXi 7 Update 1. THAT was an experience that I’ll share in a later segment. Currently, all three NSX Manager nodes and both Edge nodes reside on the Dell physical host, desx1. I have made the virtual hosts NSX Transport Nodes since I can control that environment much more easily. I have had a few challenges with my NUC, its USB 1 Gbps NIC, and Jumbo Frames, so for now, none of my physical hosts can take a workload. I’m hoping you can click and see a full-size picture, but this is my current NSX-T network topology. As you can see, I have several additional segments and T1 routers established as part of the Tanzu networking stack. I’ll cover those in more detail in a later posting.
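For anyone chasing similar Jumbo Frame problems: the quickest sanity check is a don’t-fragment ping sized to exactly fill the MTU, which means subtracting the IPv4 and ICMP header overhead from the MTU you expect end-to-end. This little sketch just does that arithmetic:

```python
# Payload size for a don't-fragment ping that exactly fills a given MTU:
# MTU minus the 20-byte IPv4 header minus the 8-byte ICMP header.
def max_icmp_payload(mtu: int) -> int:
    IPV4_HEADER = 20
    ICMP_HEADER = 8
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_icmp_payload(1500))  # standard Ethernet -> 1472
print(max_icmp_payload(9000))  # 9000-byte jumbo MTU -> 8972
```

On an ESXi host the equivalent check is `vmkping -d -s 8972 -I vmk1 <peer-ip>` (with `-d` setting don’t-fragment and `vmk1` being whichever VMkernel interface carries your vSAN or overlay traffic). If the 8972-byte ping fails while a small ping succeeds, something in the path, like my USB NIC, is not honoring the jumbo MTU.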
Since I didn’t mention them in my earlier posting, the specs for each of the virtual ESXi hosts are:
- 8 vCPUs (4 cores per socket)
- 64 GB RAM (8 GB reservation)
- 3 Hard Disks:
  - OS: 8 GB HDD, Thin Provisioned
  - vSAN Cache: 64 GB SSD, Thin Provisioned
  - vSAN Capacity: 512 GB SSD, Thin Provisioned
- 6 Network Adapters:
  - 2 configured for a Management Trunk (Management and vMotion VLANs)
  - 2 configured for a Storage Trunk (vSAN VLAN)
  - 2 configured for a Production Trunk (all VLANs for now)
With 4 of these added to a Cluster within vCenter, I have 100 GHz of computing, 256 GB RAM, and 2 TB of storage available for my testing workloads.
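Those totals fall straight out of the per-host specs above. The quick check below verifies the RAM and storage figures; I leave the 100 GHz out since that depends on the clock speed of the physical cores backing the vCPUs, not just the vCPU count. Note the 2 TB is raw vSAN capacity, before replication (FTT) overhead.

```python
# Sanity-check the cluster totals from the per-host specs listed above.
HOSTS = 4
VCPUS_PER_HOST = 8
RAM_GB_PER_HOST = 64
VSAN_CAPACITY_GB_PER_HOST = 512  # capacity-tier disk only; the cache tier doesn't count

total_vcpus = HOSTS * VCPUS_PER_HOST                      # 32 vCPUs
total_ram_gb = HOSTS * RAM_GB_PER_HOST                    # 256 GB
total_raw_tb = HOSTS * VSAN_CAPACITY_GB_PER_HOST / 1024   # 2.0 TB raw

print(total_vcpus, total_ram_gb, total_raw_tb)
```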
Finally, I have a Windows Server 2019 system that I’m using as a Domain Controller and DNS server in the environment. I have another one that I am planning to use as a VMware Horizon Connection Server. I also have a series of Ubuntu Server 20.04 LTS, Red Hat Enterprise Linux 8, and Windows 10 systems that I use as clients within the environment. All in all, it’s a nice little setup, and I can easily get distracted when working on something. It has provided me with a phenomenal lab environment to try out new technologies and work through “Enterprise Configurations” using more traditional setups. Next time I will talk about some of the challenges and decisions that I had to make to marry my Ubiquiti physical network with the NSX-T virtual network. There were some trade-offs made, and I will enjoy hearing your comments and feedback.
Thoughts on this? Hit me up on my Twitter. I would love to hear your feedback and suggestions. What am I missing as core functionality? What did I mess up already? How does this setup compare to yours?