A PowerVC customer was unable to access YouTube and requested a transcript of the video. The following is a written transcript of the narration from that video.

Video Title: PowerVC 1.3.3 SDN Demo

Video Description:

A demo of the Software Defined Networking (SDN) features available in PowerVC 1.3.3.

For further info on PowerVC SDN, visit:

IBM PowerVC Knowledge Center
SDN Blog Post

Video transcript:
Hello everybody, today we’re going to look at the Software Defined Networking, or SDN, functions available in PowerVC 1.3.3. SDN was first introduced in PowerVC in the 1.3.2 release in December of 2016. At that time it was a “tech preview,” which meant it wasn’t intended for production use yet, and there was no way to upgrade into or out of the tech preview. We’re excited to announce that starting with PowerVC 1.3.3, SDN is fully supported as an important part of the product and is ready for use in production environments.

Let’s start by reviewing what we mean when we say SDN, since this term can refer to a lot of different things.

To understand SDN, let’s talk about the world before SDN and before virtualization. In that world, we had long-running workloads tied to exactly one system. Because of this assumption, network rules could be added to the switch that plugged directly into the server. When virtualization entered the picture, we ended up with many VMs on a single system, and those VMs can move around as they are migrated from server to server. Because of this we can’t set switch rules anymore, and as VM density grows we stop focusing on traffic rules and focus instead on just making sure the VMs all have the connectivity they need.

The goal with SDN is to bring that control back. We want to be able to modify workload networking without modifying physical switches.

Key benefits of SDN:
- Quality: control the throughput of each workload
- Automation: manage thousands of VMs with policy-based management; rules are applied quickly
- Capacity: add network capacity to remove bottlenecks
- Speed: rules take effect immediately throughout the network
- Security: control whether workloads can talk to each other; restrict ports or IP addresses

Our SDN solution requires three main pieces: a programmable switch, a Layer 3 gateway/router, and a controller. The programmable virtual switch sits in the NovaLink partition. Any inter-partition communication goes through the vswitch and never needs to leave the system. The vswitch has rules applied to it by the controller, and the controller is PowerVC. The controller holds high-level policies; it works with the vswitch to “compile” those policies into rules and apply them to enforce things like overlay traffic separation.
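Conceptually, the controller’s “compiling” step turns one high-level policy into many low-level switch rules. Here is a toy sketch of that idea in Python (all names and data are hypothetical illustrations, not PowerVC’s actual implementation):

```python
# Toy sketch of an SDN controller "compiling" a high-level policy
# into per-VM virtual switch rules. Hypothetical names throughout;
# PowerVC's real controller logic is far more involved.

def compile_policy(policy, vms):
    """Expand one high-level policy into a rule for each matching VM."""
    rules = []
    for vm in vms:
        if vm["group"] == policy["target_group"]:
            rules.append({
                "match": {"dst_ip": vm["ip"], "dst_port": policy["port"]},
                "action": policy["action"],
            })
    return rules

vms = [
    {"name": "web1", "ip": "10.0.0.11", "group": "web"},
    {"name": "web2", "ip": "10.0.0.12", "group": "web"},
    {"name": "db1",  "ip": "10.0.0.21", "group": "db"},
]
policy = {"target_group": "web", "port": 80, "action": "allow"}

# One policy becomes two concrete rules, ready to push to the vswitch.
rules = compile_policy(policy, vms)
```

The point is the division of labor: the administrator states intent once (“allow port 80 to the web group”), and the controller keeps the per-VM rules up to date as VMs are added, removed, or migrated.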

The final piece is the router. Since we’re virtualizing the network, certain traffic will go into the overlay network, so we need something to put that traffic on the public wide area network, and that’s the virtual router.

The SDN use case we’re supporting in PowerVC 1.3.3 is overlay networks, which is basically a network on top of a network. The specific technology we’re supporting for overlay networks is called VXLAN. To enable these overlay networks, we’re also supporting virtual routers, which will run on network nodes, and external IP addresses. We’ll look at these more closely during the demo.

To demonstrate the SDN function, let’s use PowerVC to build an environment where we have 10 web servers and 2 load balancers all communicating on a private overlay network, and then configure just the load balancers with external IP addresses so they can be accessed by the public network.

This is a PowerVC 1.3.3 system, and we can see it’s managing a single NovaLink host. This NovaLink host has been installed in SDN mode. There is a special option in the NovaLink installer to enable a host for SDN; this option assigns physical Ethernet I/O to the NovaLink partition. This configuration can also be done manually on existing NovaLink installs.

The next thing we need to use PowerVC SDN is what we’re calling a “network node.” This can be a physical server or a VM, but it needs direct access to physical I/O so the Ethernet adapter can run in “promiscuous” mode and see all traffic. The network node is where our virtual router and virtual switch will run, taking the place of a physical router and switch.

Now we’ll create two networks in our environment. The first network will represent the external, routable network. I’m creating this as a “flat” network, which means no VLAN tagging or segmentation IDs will be used. This is because the subnet for this network overlaps the subnet on the network node. If that weren’t the case, I would create a VLAN network instead.

The second network I’ll create is my overlay network. This will use a technology called VXLAN, which slightly reduces the usable packet size to make room for an encapsulation header that carries a tag. All of the virtual machines deployed on this VXLAN network will be on their own private subnet, regardless of how the machines are physically connected.
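The “slightly smaller packets” come from VXLAN’s encapsulation overhead: each frame is wrapped in outer Ethernet, IP, UDP, and VXLAN headers. A quick back-of-the-envelope calculation using the standard IPv4 header sizes from RFC 7348 shows where the commonly quoted 1450-byte inner MTU comes from:

```python
# VXLAN encapsulation overhead on IPv4 (standard header sizes, RFC 7348):
outer_ethernet = 14   # outer Ethernet header
outer_ip = 20         # outer IPv4 header
outer_udp = 8         # outer UDP header
vxlan_header = 8      # VXLAN header, carrying the 24-bit VNI "tag"
overhead = outer_ethernet + outer_ip + outer_udp + vxlan_header  # 50 bytes

mtu = 1500                  # typical physical Ethernet MTU
inner_mtu = mtu - overhead  # 1450 bytes left for the inner frame

# The 24-bit VXLAN Network Identifier (VNI) allows ~16 million segments,
# versus 4094 usable VLAN IDs with 802.1Q's 12-bit tag.
vxlan_segments = 2 ** 24
vlan_ids = 2 ** 12 - 2
```

That 24-bit VNI is also why VXLAN scales so far beyond traditional VLAN segmentation.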

Now I can deploy my VMs using my new VXLAN network. I’ll start with a batch deploy of the 10 web server VMs. The only things I’ll specify are the VM names and the VXLAN network; I’ll let the private IP addresses be automatically selected from the pool.
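The automatic selection works like pulling the next free host address from the subnet’s pool. A simplified sketch of that idea using Python’s standard `ipaddress` module (hypothetical helper, not PowerVC’s actual allocator; the gateway address shown is an assumption):

```python
import ipaddress

def allocate_next_ip(cidr, reserved, allocated):
    """Pick the next free host address from a subnet pool.
    A simplified sketch of pool-style allocation, not PowerVC's code."""
    subnet = ipaddress.ip_network(cidr)
    taken = set(reserved) | set(allocated)
    for host in subnet.hosts():
        if str(host) not in taken:
            allocated.append(str(host))
            return str(host)
    raise RuntimeError("address pool exhausted")

allocated = []
reserved = ["10.0.0.1"]   # e.g. the virtual router's gateway address (assumed)
for _ in range(3):
    allocate_next_ip("10.0.0.0/24", reserved, allocated)
# Each call hands out the lowest free address, skipping reserved ones.
```

Each deployed VM simply receives the lowest address not already taken, so a batch deploy fills the pool in order.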

Next I’ll do a batch deploy of the two load balancers, again just specifying their names and putting them on the VXLAN network.

Now that the VMs are deployed, they can communicate with each other on the private VXLAN network, but other systems on the public network cannot connect to them. To connect from outside, we need to assign an external IP address. Let’s click through to the load balancer VM details, where we can see the network interface for our VXLAN network, and apply an external IP address to it. Once the external IP address is applied, we can connect to it using SSH, and we can see that the VM itself has no knowledge of the external address. The network node takes care of routing the external IP address back to the VM.
[show SSH into load balancer, show it only has 10.* private IP]
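The reason the VM never sees its external address is that the network node performs a one-to-one translation between external and private addresses. A toy lookup-table sketch of that mapping (the addresses are hypothetical; the real network node programs actual NAT rules rather than Python dictionaries):

```python
# Toy one-to-one NAT table, as the network node conceptually maintains it.
# Hypothetical addresses; the real implementation uses kernel NAT rules.
nat_table = {
    "192.168.1.50": "10.0.0.2",   # load balancer 1: external -> private
    "192.168.1.51": "10.0.0.3",   # load balancer 2: external -> private
}

def route_inbound(dst_ip):
    """Rewrite inbound traffic's destination to the private VXLAN address."""
    return nat_table.get(dst_ip, dst_ip)  # unknown addresses pass through

# SSH to the external address lands on the VM's private address; the VM
# itself only ever sees (and knows about) its 10.* address.
private = route_inbound("192.168.1.50")
```

Because the translation happens entirely on the network node, no guest configuration is needed when an external address is assigned or removed.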

So we looked at NovaLink hosts, which require physical I/O to use SDN; the network node, where our virtual routing and switching run; how to create an external network and a private network; and how to deploy to our private overlay network and then use external IP addresses to access it. We hope you’ll find these SDN features in PowerVC useful in your environment, and we’re looking forward to continuing to develop our SDN offering. Thanks for watching!
