Several subnets are ordered with each VCF instance. These subnets are used for management, storage, and vMotion traffic and should not be used for customer workload VMs. Instead, customers need to order additional subnets for their VMs. For this recipe, I’ll assume your VMs need access to the internet as well as the SoftLayer private network, which provides access to your SoftLayer Virtual Server Instances (VSIs), bare metal servers, NFS shares, and other SoftLayer services.
Start by identifying the hostnames associated with your VCF instance by opening the instance’s details page in the IBM Cloud VMware solutions portal. The hostnames always contain the VCF instance name.
- Go to the Device List in the SoftLayer Portal and filter by your VCF instance name.
- Click on one of the servers associated with the VCF instance to open the server configuration details page.
- Scroll down to the network configuration section for the server. Make note of the public and private VLAN IDs (e.g., VCF01LON: lon02.fcr02a.1204 and VCF01LON: lon02.bcr02a.1721).
- Click on each VLAN and take a screenshot of the subnets that already exist on each VLAN.
- At the top of the SoftLayer Portal UI navigate to Network->IP Management->Subnets.
- Click Order IP Addresses. Specify type Portable Public and size 8, then click Continue.
- On the next page select the public VLAN noted above and click continue.
- Fill out the rest of the ordering information and hit Place Order.
- Optionally, repeat the ordering process for a Portable Private subnet of size 32. Be certain to specify the private VLAN ID noted above. This is only needed if your VMs must access the private SoftLayer network and related services.
- Wait about 5 minutes and you should receive an email when each subnet addition completes. In the emails, make note of the subnet identifiers (e.g., 10.164.11.96/27 for the private network and a 169.x.x.x/29 for the public network; 10.x.x.x addresses are always on the private network). If you don’t receive the emails, return to the server VLAN pages and compare the current list of subnets to the screenshots taken earlier. The subnets that are not in the screenshots are the new ones.
- From the top navigation menu go to Network->IP Management->Subnets and filter the list by subnet identifier. Click your public and private subnets and make note of the gateway IP addresses for each.
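To sanity-check the subnets you just ordered, Python’s standard `ipaddress` module can list each subnet’s network address, gateway, and usable range. This is a sketch using the private subnet identifier from the example above and a hypothetical 169.48.20.0/29 public subnet; substitute your own identifiers from the order-completion emails. (SoftLayer typically assigns the first usable address of a portable subnet as the gateway, which is worth verifying on the subnet details page.)

```python
import ipaddress

# Hypothetical subnets standing in for the ones you ordered; substitute the
# identifiers from your own SoftLayer order-completion emails.
private = ipaddress.ip_network("10.164.11.96/27")  # Portable Private, size 32
public = ipaddress.ip_network("169.48.20.0/29")    # Portable Public, size 8 (made-up example)

for label, net in (("private", private), ("public", public)):
    hosts = list(net.hosts())
    # The first usable address of a SoftLayer portable subnet is its gateway.
    print(f"{label}: network={net.network_address} gateway={hosts[0]} "
          f"usable={hosts[1]}-{hosts[-1]} broadcast={net.broadcast_address}")
```

Keep these gateway and usable-range values handy; they are reused when configuring the ESG uplink interfaces and NAT rules below.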
Creating the Logical Switches
The NSX logical switch is the virtualized networking equivalent of a physical switch. It is used to create VLAN-like segments, known as VXLANs, within the virtualized NSX network. To create the logical switches:
- In the Home menu of vCenter Web Client, click Networking & Security.
- Click Logical Switches and click the green plus sign (+) to create a new logical switch.
- Name the switch “Workload” and accept all other defaults. This switch/VXLAN will be used by your VMs.
- Create another logical switch named “Workload HA”. This VXLAN will be used for high availability heartbeating between the Edge appliances we will be creating in subsequent steps.
- Create another logical switch named “Workload Transit”. This VXLAN will be used to connect the Logical Router and Edge Gateway created below.
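The same three switches can also be created through the NSX-V REST API rather than the Web Client. The sketch below only builds the XML body that the API expects; the NSX Manager address and the transport zone scope ID (`vdnscope-1`) in the commented-out call are assumptions — look yours up under Installation > Logical Network Preparation.

```python
# Sketch: the virtualWireCreateSpec body used by the NSX-V REST API to create
# a logical switch. The tenantId value here is an arbitrary placeholder.
def virtualwire_spec(name: str, description: str = "") -> str:
    """Build the XML body NSX-V expects when creating a logical switch."""
    return (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        f"<description>{description}</description>"
        "<tenantId>default</tenantId>"
        "<controlPlaneMode>UNICAST_MODE</controlPlaneMode>"
        "</virtualWireCreateSpec>"
    )

for switch in ("Workload", "Workload HA", "Workload Transit"):
    body = virtualwire_spec(switch)
    print(body)
    # POST this body to the NSX Manager, e.g.:
    #   curl -k -u admin:PASSWORD -H "Content-Type: application/xml" \
    #        -d "$BODY" https://NSX_MANAGER/api/2.0/vdn/scopes/vdnscope-1/virtualwires
```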
Creating NSX Distributed Logical Router
In this step we will create the NSX Distributed Logical Router (DLR). The DLR routes network traffic between the VMs connected to the logical switches; this is known as east-west routing.
- Under Networking & Security, click NSX Edges. Then click the green plus sign (+) to create a new edge.
- On the first panel click Logical (Distributed) Router as the install type. Name the DLR “workload-nsx-dlr”. Enable both checkboxes for Deploy NSX Edge and Enable High Availability.
- On the next page specify a password and click next.
- On this page use the green plus sign (+) to add two Edge Appliances. The values to specify should look like this:
Note: Be sure to add two appliances for HA to work.
- On the Interfaces tab you will need to create two interfaces. You only need to specify the Name, Type, Connected To, IP address, and subnet prefix length. The Transit Uplink interface type will be “uplink” and the Workload interface will be “internal”. Use the values as seen here:
- On the Default Gateway Settings tab, deselect the “Configure Default Gateway” checkbox. We will configure dynamic routing between the DLR and ESG in a subsequent step.
- Click Next, then click Finish to deploy the DLR.
- Wait for the DLR status to go to Deployed, then double click the workload-nsx-dlr to edit its routing configuration.
- In the DLR edit panels go to Manage->Routing->Global Configuration, then click Edit next to “Dynamic Routing Configuration” and select the Transit Uplink as the Router ID. Click Publish Changes to save the change.
- Go to Routing->OSPF. Click Edit next to OSPF Configuration and specify these values:
- Scroll down to the OSPF “Area to Interface Mapping” section and click the green plus sign (+). In the add mapping dialog specify the Transit Uplink and Area 51. Click Publish Changes.
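Once OSPF converges, the DLR’s forwarding table amounts to its directly connected Workload subnet plus a default route learned from the ESG over the transit VXLAN. This toy longest-prefix-match lookup illustrates the resulting behavior; the addresses follow this recipe’s examples (192.168.10.0/24 for the Workload VXLAN, 192.168.100.1 for the ESG transit interface) and are otherwise arbitrary.

```python
import ipaddress

# Toy routing table mirroring what the DLR ends up with after OSPF converges.
routes = [
    (ipaddress.ip_network("192.168.10.0/24"), "connected (Workload VXLAN)"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.100.1 (ESG via Transit)"),
]

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    # Longest prefix wins, exactly as in a real forwarding table.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.10.42"))  # east-west: stays on the DLR
print(next_hop("8.8.8.8"))        # north-south: handed off to the ESG
```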
Creating the NSX Edge Service Gateway
Now we need to create the NSX ESG. The ESG routes traffic between the VXLANs in your virtual network and the VLANs in the physical network. It also provides several networking services such as firewalls, DHCP, L2 bridging, VPNs, and more. We will only be using the ESG for north-south routing, Network Address Translation (NAT), and as a firewall.
- Under Networking & Security -> NSX Edges click the green plus sign (+) to create the ESG.
- On the first panel click Edge Service Gateway as the install type. Name the ESG “customer-nsx-edge”. Enable both checkboxes for “Deploy NSX Edge” and “Enable High Availability”.
- On the next page specify a password and click Next.
- On the Configure deployment page specify “large” for the size and use the green plus sign (+) to add two Edge Appliances using the same appliance options used for the DLR.
- On the next page you will need to add 4 interfaces. All will be of type “uplink”, except the interface named “Internal”. The values will look as follows:
The Private Uplink and Public Uplink interfaces are connected to Distributed Portgroups, while the Transit Uplink and Internal interfaces are connected to Logical Switches. The only difference in your environment will be the IP addresses used for the Private and Public Uplink interfaces. These IP addresses will be from the subnets you ordered from SoftLayer.
If you opted not to order a private subnet, then you will not need a Private Uplink interface. The Public Uplink will have a primary and a secondary IP address. One will be used for NATing while the other is used for management. It is advised that you use the SoftLayer Portal UI to edit the notes of the IP addresses you decide to use on your subnets.
- On the Default gateway settings page, select the Public Uplink as the vNIC and specify the gateway on your public SoftLayer subnet.
- On the Firewall and HA page enable the Configure Firewall default policy checkbox and specify a default policy to accept all traffic.
It is advised that you go in later and change your default policy to deny and only accept the needed traffic. This can be done on the Firewall tab of the ESG details page.
- Review the configuration options and hit Finish to deploy the ESG.
- Once deployed, double click the customer-nsx-edge and go to the Manage->Routing tab. Click Edit next to “Dynamic Routing Configuration” and select the Transit Uplink interface for the Router ID (i.e. 192.168.100.1). Click Publish Changes.
- Go to the OSPF tab and click edit configuration. Select all 3 checkboxes to enable OSPF.
- Scroll down to the Area to Interface Mapping section. Click the green plus sign (+) to add a mapping for the Transit Uplink and area 51. Click Publish Changes.
- If you opted to connect to the private SoftLayer network, go to the Static Routes tab and add a static route where the network is 10.0.0.0/8 and the next hop is the gateway address from the private SoftLayer subnet you ordered earlier. For interface select the Private Uplink, click Publish Changes.
- Click the NAT tab on the top of the Manage page. Then click the green plus sign (+) to add a new SNAT rule which will translate the 192.168 traffic from the VMs to one of the public IP addresses ordered from SoftLayer. The values are shown below; you only need to modify the “Translated Source IP” which should be the secondary IP address you used on the Public Uplink interface of your ESG. Click Publish Changes when you’re done.
- If your VMs need access to the private SoftLayer network, then add another SNAT rule. Set “Applied On” to be the Private Uplink. Set Original Source IP to be “192.168.0.0/16” and set “Translated Source IP” to be the private 10.x IP address used when creating the Private Uplink interface (e.g., 10.164.11.124). Click Publish Changes.
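The two SNAT rules can be modeled as a simple lookup: traffic from 192.168.0.0/16 leaving the Public Uplink is rewritten to the secondary public IP, and traffic leaving the Private Uplink to the private uplink IP. In this sketch the public address 169.48.20.5 is hypothetical; 10.164.11.124 is the example private uplink IP from the step above — substitute your own addresses.

```python
import ipaddress

INTERNAL = ipaddress.ip_network("192.168.0.0/16")
SNAT = {
    "Public Uplink": "169.48.20.5",     # hypothetical secondary public IP
    "Private Uplink": "10.164.11.124",  # example private uplink IP from this recipe
}

def translate(src: str, egress: str) -> str:
    """Return the source address after SNAT on the given egress interface."""
    if ipaddress.ip_address(src) in INTERNAL and egress in SNAT:
        return SNAT[egress]
    return src  # no rule matched: address leaves unchanged

print(translate("192.168.10.42", "Public Uplink"))   # -> 169.48.20.5
print(translate("192.168.10.42", "Private Uplink"))  # -> 10.164.11.124
```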
Seeing NSX in Action
At this point the virtualized network is set up! Now we just need to see it work. To start, you will need to create a couple of test VMs. In this case I’ll use CentOS VMs; however, the same concepts apply to any OS. When creating your VMs, be certain they have a Network Adapter (aka vNIC) connected to the Workload VXLAN as seen here:
- Find the first workload VM under Host & Clusters, right click and select Open Console.
- We need to change the default gateway, IP address, netmask, and DNS servers of the VM. Start by modifying the default gateway to be 192.168.10.1, which is the IP of the DLR on the Workload VXLAN. For CentOS the gateway is set in the /etc/sysconfig/network file, using the GATEWAY variable.
- The remaining network changes on the VM are done using the /etc/sysconfig/network-scripts/ifcfg-eth0 file, see these instructions for more details.
- The IP address of the VM should be set to an address between 192.168.10.10 and 192.168.10.254 (192.168.10.255 is the broadcast address of this subnet). These addresses are from the virtualized subnet associated with the Workload VXLAN.
- The netmask should be set to 255.255.255.0.
- The DNS server can be set to one of SoftLayer’s DNS servers on the private network (e.g., 10.0.80.11) or to the IP of a public DNS provider.
- Repeat these steps for all VMs connected to the Workload VXLAN.
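The per-VM settings from the steps above can be rendered as a CentOS `ifcfg-eth0` file. This sketch uses the recipe’s example values (DLR gateway 192.168.10.1, SoftLayer private DNS 10.0.80.11); only the IP address varies per VM.

```python
# Sketch: generate the CentOS network-scripts config described in the steps
# above. Defaults are this recipe's example values; adjust to your subnets.
def ifcfg_eth0(ip: str, gateway: str = "192.168.10.1",
               netmask: str = "255.255.255.0", dns: str = "10.0.80.11") -> str:
    return "\n".join([
        "DEVICE=eth0",
        "ONBOOT=yes",
        "BOOTPROTO=static",
        f"IPADDR={ip}",
        f"NETMASK={netmask}",
        f"GATEWAY={gateway}",
        f"DNS1={dns}",
    ]) + "\n"

# Write this to /etc/sysconfig/network-scripts/ifcfg-eth0 on the VM,
# then restart networking (e.g. `systemctl restart network`).
print(ifcfg_eth0("192.168.10.10"))
```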
At this point all of your VMs should be able to reach each other via their 192.168 addresses. They should also be able to access the internet and the SoftLayer private network. This works because the VMs use the DLR’s IP address (192.168.10.1) as their default gateway. The DLR and the ESG share their routes via OSPF, so the DLR knows that the ESG can route to the public internet as well as the SoftLayer private network. Networking continues to work uninterrupted even as the VMs are migrated between hosts.
Here are a few links for more information: