Overview

Skill Level: Intermediate

Basic ability to use the SoftLayer Customer Portal and familiarity with vSphere and VMware NSX.

This recipe explains how VCF users can configure networking for their virtual machines, providing step-by-step instructions for using VMware's NSX platform on the IBM Cloud so that virtual machines can communicate with each other and with the internet.

Ingredients

VMware Cloud Foundation on IBM Cloud (VCF) is an automated, standard deployment of VMware vSphere in the IBM Cloud. The VCF automation installs, licenses, and configures VMware's NSX platform: it sets up the NSX Manager, NSX Controllers, and NSX Transport Zone, and prepares each host with the NSX components. End users must still configure NSX for their workload VMs if those VMs need to communicate with each other and have access to the internet. This recipe uses the following ingredients to achieve virtualized networking for workload VMs:

  • Ready to use single-site VCF instance
  • SoftLayer public and private subnets
  • NSX VXLANs
  • NSX Distributed Logical Router (DLR)
  • NSX Edge Service Gateway (ESG)
  • Two CentOS VMs

 

 

Step-by-step

  1. Ordering subnets

    Several subnets are ordered with each VCF instance.  These subnets are used for management, storage, and vMotion traffic and should not be used for customer workload VMs.  Instead, customers need to order additional subnets for their VMs.  For this recipe, I’ll assume your VMs need access to the internet, as well as the SoftLayer private network, which provides access to your SoftLayer Virtual Server Instances (VSIs), bare metal servers, NFS shares, and other SoftLayer services.

    Start by identifying the hostnames associated with your VCF instance by opening the instance’s details page in the IBM Cloud for VMware Solutions portal.  The hostnames always contain the VCF instance name.

    1. Go to the Device List in the SoftLayer Portal and filter by your VCF instance name.
    2. Click on one of the servers associated with the VCF instance to open the server configuration details page.
    3. Scroll down to the network configuration section for the server. Make note of the public and private VLAN IDs (e.g., VCF01LON: lon02.fcr02a.1204 and VCF01LON: lon02.bcr02a.1721).
    4. Click on each VLAN and take a screenshot of the subnets that already exist on each VLAN.
    5. At the top of the SoftLayer Portal UI navigate to Network->IP Management->Subnets.
    6. Click Order IP Addresses. Specify type Portable Public and size 8, then click Continue.
    7. On the next page select the public VLAN noted above and click Continue.
    8. Fill out the rest of the ordering information and click Place Order.
    9. Optionally, repeat the ordering process for a Portable Private subnet of size 32.
      Be certain to specify the private VLAN ID noted above. This is only needed if your VMs must access the private SoftLayer network and related services.
    10. Wait about 5 minutes and you should receive an email for each subnet when the addition completes. In the emails, make note of the subnet identifiers (e.g., 10.164.11.96/27 and 169.45.106.208/29); the 10.x.x.x subnet is on the private network and, in this case, the 169 address is on the public network. If you don’t receive the emails, return to the server VLAN pages and compare the current list of subnets to the screenshots from Step 4; the subnets that are not in the screenshots are the new ones (a small API sketch for listing the subnets on each VLAN follows this list).
    11. From the top navigation menu go to Network->IP Management->Subnets and filter the list by subnet identifier. Click your public and private subnets and make note of the gateway IP addresses for each.
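
    If you prefer the API, or the confirmation emails never arrive, the subnets attached to your VLANs can also be listed with the SoftLayer Python client (pip install SoftLayer). The sketch below is a minimal example; the credentials are placeholders, and the VLAN numbers are the examples from Step 3, so substitute your own.

      import SoftLayer

      # Placeholders; create_client_from_env() can also read ~/.softlayer or the
      # SL_USERNAME / SL_API_KEY environment variables.
      client = SoftLayer.create_client_from_env(username='SL_USERNAME', api_key='SL_API_KEY')

      # The VLAN numbers are the trailing digits of the VLAN IDs noted in Step 3
      # (e.g., 1204 for the public VLAN and 1721 for the private VLAN).
      wanted = {1204, 1721}

      vlans = client['Account'].getNetworkVlans(
          mask='mask[id,vlanNumber,subnets[networkIdentifier,cidr,subnetType,gateway]]')

      for vlan in vlans:
          if vlan.get('vlanNumber') not in wanted:
              continue
          print('VLAN', vlan['vlanNumber'])
          for subnet in vlan.get('subnets', []):
              print('  {}/{}  type={}  gateway={}'.format(
                  subnet['networkIdentifier'], subnet['cidr'],
                  subnet.get('subnetType', '?'), subnet.get('gateway', '-')))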

     

  2. Creating the Logical Switches

    The NSX logical switch is the virtualized-networking equivalent of a physical switch. It is used to create the virtual equivalent of VLANs, known as VXLANs, within the NSX network. To create the logical switches in the vCenter Web Client (an API-based alternative is sketched after this list):

    1. In the Home menu of vCenter Web Client, click Networking & Security.
    2. Click Logical Switches and click the green plus sign (+) to create a new logical switch.
    3. Name the switch “Workload” and accept all other defaults. This switch/VXLAN will be used by your VMs.
      [Screenshot: Logical Switches]
    4. Create another logical switch named “Workload HA”. This VXLAN will be used for high availability heartbeating between the Edge appliances we will be creating in subsequent steps.
    5. Create another logical switch named “Workload Transit”. This VXLAN will be used to connect the Logical Router and Edge Gateway created below.
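
    As an alternative to the Web Client, the same three switches can be created against the NSX Manager REST API. The sketch below is a minimal example in Python; the manager address, admin password, and transport-zone ID are placeholders, and the /api/2.0/vdn path and XML payload follow the NSX-V 6.x API, so verify them against your NSX version.

      import requests

      NSX_MANAGER = 'https://nsx-manager.example.com'   # placeholder NSX Manager address
      AUTH = ('admin', 'NSX_ADMIN_PASSWORD')             # placeholder credentials
      SCOPE = 'vdnscope-1'                               # placeholder transport zone ID

      payload = ('<virtualWireCreateSpec>'
                 '<name>{name}</name>'
                 '<description>{desc}</description>'
                 '<tenantId>default</tenantId>'
                 '</virtualWireCreateSpec>')

      switches = [('Workload', 'VXLAN for the workload VMs'),
                  ('Workload HA', 'VXLAN for Edge HA heartbeating'),
                  ('Workload Transit', 'VXLAN connecting the DLR and ESG')]

      for name, desc in switches:
          # POST one virtualWireCreateSpec per logical switch to the transport zone
          resp = requests.post(
              '{}/api/2.0/vdn/scopes/{}/virtualwires'.format(NSX_MANAGER, SCOPE),
              data=payload.format(name=name, desc=desc),
              headers={'Content-Type': 'application/xml'},
              auth=AUTH,
              verify=False)  # NSX Manager commonly uses a self-signed certificate
          resp.raise_for_status()
          print(name, '->', resp.text)  # response body is the new virtualwire ID
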
  3. Creating NSX Distributed Logical Router

    In this step we create the NSX Distributed Logical Router (DLR). The DLR routes network traffic between the VMs connected to the logical switches; this is known as east-west routing. A quick API read-back of the published routing configuration is sketched after the steps below.

    1. Under Networking & Security, click NSX Edges. Then click the green plus sign (+) to create a new edge.
    2. On the first panel click Logical (Distributed) Router as the install type. Name the DLR “workload-nsx-dlr”. Enable both checkboxes for Deploy NSX Edge and Enable High Availability.
    3. On the next page specify a password and click next.
    4. On this page use the green plus sign (+) to add two Edge Appliances. The values to specify should look like this:
      [Screenshot: DLR appliance settings]
      Note: Be sure to add two appliances for HA to work.
    5. On the Interfaces tab you will need to create 2 interfaces. You only need to specify Name, Type, Connected To, IP and subnet length. The Transit Uplink interface type will be “uplink” and Workload will be an “internal” interface. Use the values as seen here:
      [Screenshot: DLR interfaces]
    6. On the Default Gateway Settings tab, deselect the “Configure Default Gateway” checkbox. We will configure dynamic routing between the DLR and ESG in a subsequent step.
    7. Click Next, then click Finish to deploy the DLR.
    8. Wait for the DLR status to go to Deployed, then double click the workload-nsx-dlr to edit its routing configuration.
    9. In the DLR edit panels go to Manage->Routing->Global Configuration, then click Edit next to “Dynamic Routing Configuration” and select the Transit Uplink as the Router ID. Click Publish Changes to save the change.
    10. Go to Routing->OSPF. Click Edit next to OSPF Configuration and specify these values:
      [Screenshot: DLR OSPF configuration]
    11. Scroll down to the OSPF “Area to Interface Mapping” section and click the green plus sign (+). In the add mapping dialog specify the Transit Uplink and Area 51. Click Publish Changes.
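
    To sanity-check what was just published, the DLR’s OSPF configuration can be read back through the NSX Manager REST API. This is a minimal sketch; the edge ID and credentials are placeholders, and the /api/4.0 routing path follows the NSX-V 6.x API, so verify it against your version.

      import requests

      NSX_MANAGER = 'https://nsx-manager.example.com'   # placeholder NSX Manager address
      AUTH = ('admin', 'NSX_ADMIN_PASSWORD')             # placeholder credentials
      DLR_EDGE_ID = 'edge-1'                             # ID shown for workload-nsx-dlr in the NSX Edges list

      # Read back the OSPF routing configuration that was just published on the DLR.
      resp = requests.get(
          '{}/api/4.0/edges/{}/routing/config/ospf'.format(NSX_MANAGER, DLR_EDGE_ID),
          auth=AUTH,
          verify=False)  # NSX Manager commonly uses a self-signed certificate
      resp.raise_for_status()

      # The XML response should show OSPF enabled and area 51 mapped to the Transit Uplink.
      print(resp.text)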

     

  4. Creating the NSX Edge Service Gateway

    Now we need to create the NSX ESG. The ESG routes traffic between the VXLANs in your virtual network and the VLANs in the physical network; this is known as north-south routing. The ESG also provides networking services such as firewall, DHCP, L2 bridging, VPNs, and more. We will use it only for north-south routing, Network Address Translation (NAT), and as a firewall. A sketch for reading back the published NAT rules follows the steps below.

    1. Under Networking & Security -> NSX Edges click the green plus sign (+) to create the ESG.
    2. On the first panel click Edge Service Gateway as the install type. Name the ESG “customer-nsx-edge”. Enable both checkboxes for “Deploy NSX Edge” and “Enable High Availability”.
    3. On the next page specify a password and click Next.
    4. On the Configure deployment page specify “large” for the size and use the green plus sign (+) to add two Edge Appliances using the same appliance options used for the DLR.
    5. On the next page you will need to add 4 interfaces. All will be of type “uplink”, except the interface named “Internal”. The values will look as follows:
      [Screenshot: ESG interfaces]
      The Private Uplink and Public Uplink interfaces are connected to Distributed Portgroups, while the Transit Uplink and Internal interfaces are connected to Logical Switches. The only difference in your environment will be the IP addresses used for the Private and Public Uplink interfaces. These IP addresses will be from the subnets you ordered from SoftLayer.
      If you opted not to order a private subnet, then you will not need a Private Uplink interface. The Public Uplink will have a primary and a secondary IP address. One will be used for NATing while the other is used for management. It is advised that you use the SoftLayer Portal UI to edit the notes of the IP addresses you decide to use on your subnets.
    6. On the Default gateway settings page, select the Public Uplink as the vNIC and specify the gateway on your public SoftLayer subnet.
    7. On the Firewall and HA page enable the Configure Firewall default policy checkbox and specify a default policy to accept all traffic.
      It is advised that you go in later and change your default policy to deny and only accept the needed traffic. This can be done on the Firewall tab of the ESG details page.
    8. Review the configuration options and hit Finish to deploy the ESG.
    9. Once deployed, double click the customer-nsx-edge and go to the Manage->Routing tab. Click Edit next to “Dynamic Routing Configuration” and select the Transit Uplink interface for the Router ID (i.e. 192.168.100.1). Click Publish Changes.
    10. Go to the OSPF tab and click edit configuration. Select all 3 checkboxes to enable OSPF.
    11. Scroll down to the Area to Interface Mapping section. Click the green plus sign (+) to add a mapping for the Transit Uplink and area 51. Click Publish Changes.
    12. If you opted to connect to the private SoftLayer network, go to the Static Routes tab and add a static route where the network is 10.0.0.0/8 and the next hop is the gateway address from the private SoftLayer subnet you ordered earlier. For interface select the Private Uplink, click Publish Changes.
    13. Click the NAT tab on the top of the Manage page. Then click the green plus sign (+) to add a new SNAT rule which will translate the 192.168 traffic from the VMs to one of the public IP addresses ordered from SoftLayer. The values are shown below; you only need to modify the “Translated Source IP” which should be the secondary IP address you used on the Public Uplink interface of your ESG. Click Publish Changes when you’re done.
      [Screenshot: ESG SNAT rule]
    14. If your VMs need access to the private SoftLayer network, then add another SNAT rule. Set “Applied On” to be the Private Uplink. Set Original Source IP to be “192.168.0.0/16” and set “Translated Source IP” to be the private 10.x IP address used when creating the Private Uplink interface (e.g., 10.164.11.124). Click Publish Changes.
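
    As with the DLR, the ESG’s published NAT configuration can be read back through the NSX Manager REST API to confirm the SNAT rules are in place. The edge ID and credentials below are placeholders, and the /api/4.0 NAT path follows the NSX-V 6.x API, so verify it against your version.

      import requests

      NSX_MANAGER = 'https://nsx-manager.example.com'   # placeholder NSX Manager address
      AUTH = ('admin', 'NSX_ADMIN_PASSWORD')             # placeholder credentials
      ESG_EDGE_ID = 'edge-2'                             # ID shown for customer-nsx-edge in the NSX Edges list

      # Read back the NAT configuration published on the ESG.
      resp = requests.get(
          '{}/api/4.0/edges/{}/nat/config'.format(NSX_MANAGER, ESG_EDGE_ID),
          auth=AUTH,
          verify=False)  # NSX Manager commonly uses a self-signed certificate
      resp.raise_for_status()

      # The XML response lists every NAT rule; look for your SNAT rules translating
      # 192.168.0.0/16 to the public (and optionally private) uplink addresses.
      print(resp.text)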

     

  5. Seeing NSX in Action

    At this point the virtualized network is set up! Now we just need to see it work. To start, you will need to create a couple of test VMs. In this case I’ll use CentOS VMs; however, the same concepts apply to any OS. When creating your VMs, be certain they have a Network Adapter (aka vNIC) connected to the Workload VXLAN, as seen here:

    [Screenshot: VM network adapter connected to the Workload VXLAN]

    1. Find the first workload VM under Hosts and Clusters, right click and select Open Console.
    2. We need to change the default gateway, IP address, netmask, and DNS servers of the VM. Start by modifying the default gateway to be 192.168.10.1, which is the IP of the DLR on the Workload VXLAN. For CentOS the gateway is set in the /etc/sysconfig/network file, using the GATEWAY variable.
    3. The remaining network changes on the VM are made in the /etc/sysconfig/network-scripts/ifcfg-eth0 file; see these instructions for more details. A consolidated example of the values is sketched after this list.
    4. The IP address of the VM should be set to an address between 192.168.10.10 and 192.168.10.254. These addresses are from the virtualized subnet associated with the Workload VXLAN.
    5. The netmask should be set to 255.255.255.0.
    6. The DNS server can be set to one of SoftLayer’s DNS servers on the private network (e.g., 10.0.80.11) or to the IP of a public DNS provider.
    7. Repeat these steps for all VMs connected to the Workload VXLAN.
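
    To keep the per-VM values from steps 2 through 6 in one place, here is a small sketch that prints a CentOS-style ifcfg-eth0 for each test VM. The example addresses are picks from the Workload range described above.

      # Standard CentOS network-script keys; GATEWAY may instead go in
      # /etc/sysconfig/network as described in step 2 above.
      def ifcfg_eth0(ip):
          settings = [
              ('DEVICE', 'eth0'),
              ('ONBOOT', 'yes'),
              ('BOOTPROTO', 'static'),
              ('IPADDR', ip),               # address from the Workload range
              ('NETMASK', '255.255.255.0'),
              ('GATEWAY', '192.168.10.1'),  # the DLR's Workload interface
              ('DNS1', '10.0.80.11'),       # SoftLayer private DNS, or a public resolver
          ]
          return '\n'.join('{}={}'.format(key, value) for key, value in settings)

      # Example: one address per test VM, taken from 192.168.10.10-192.168.10.254.
      for ip in ('192.168.10.10', '192.168.10.11'):
          print('--- /etc/sysconfig/network-scripts/ifcfg-eth0 for {} ---'.format(ip))
          print(ifcfg_eth0(ip))
          print()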

     

    At this point all of your VMs should be able to access each other via their 192.168 addresses. They should also be able to access the internet and the SoftLayer private network. This works because the VMs use the DLR’s IP address (192.168.10.1) as their default gateway, and the DLR and the ESG share their routes via OSPF, so the DLR knows that the ESG can route to the public internet as well as the SoftLayer private network. The networking continues to work uninterrupted even as the VMs are migrated between hosts.
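
    A quick way to confirm everything works is to run a small check from inside one of the workload VMs (Python 3 assumed). It pings the DLR gateway and the other test VM for east-west connectivity, then resolves and fetches a public site to exercise the ESG’s default route and SNAT rule; the peer address and test URL are examples.

      import socket
      import subprocess
      import urllib.request

      # Addresses to ping: the DLR gateway and the other test VM (example address).
      targets = {
          'DLR gateway': '192.168.10.1',
          'peer VM':     '192.168.10.11',
      }

      for label, addr in targets.items():
          ok = subprocess.call(['ping', '-c', '2', '-W', '2', addr],
                               stdout=subprocess.DEVNULL) == 0
          print('{:12s} {:15s} {}'.format(label, addr, 'reachable' if ok else 'UNREACHABLE'))

      # DNS resolution and an HTTP fetch exercise the ESG's default route and SNAT rule.
      print('DNS lookup ->', socket.gethostbyname('www.ibm.com'))
      print('HTTP fetch ->', urllib.request.urlopen('http://www.ibm.com', timeout=10).status)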
