
Discover best-practice VPC configuration for application deployment

IBM Cloud® Virtual Private Cloud (VPC) makes it quick and simple to deploy new application environments. This ability becomes even more powerful when combined with Terraform and Red Hat® Ansible to enable complex multi-tier applications to be deployed repeatably and reliably in minutes.

This article looks at a best-practice VPC configuration suitable for application provisioning with Red Hat Ansible and how deployment can be automated using Terraform with IBM Cloud Schematics. Users already familiar with Ansible will know that it uses SSH to access remote machines and perform app installation. The question arises of how to secure SSH access over the public network to a VPC environment. The solution is to utilize a bastion host on the public network to provide a secure gateway for SSH access to the VPC.

Bastion hosts are a commonly used solution for providing access to machines on a remote network. This is an approach that also works well with IBM Cloud VPC to safeguard remote access to Virtual Server Instances (VSIs). Using a bastion host to secure access is strongly recommended when performing software configuration or management using Red Hat Ansible with IBM Cloud Schematics.

VPC, bastion host, and network security configuration can be challenging, but they are well suited to being automated using Terraform and IBM Cloud Schematics. I will take you through the configuration of security groups and network Access Control Lists (ACLs), showing how they can be used to restrict public access to a bastion host and route SSH traffic only to the app servers on the private network. I will then show how VPC and bastion host deployment can be automated on IBM Cloud using reusable Terraform modules. Alongside this article is a companion Terraform example for IBM Cloud Schematics that illustrates all the principles discussed in a template that can be used to deploy a VPC multi-tier application environment. In a later article, I will look at using Ansible to deploy an application into this environment.

The Terraform modules and example are supplied as-is and only seek to implement a reasonable set of best practices for VPC and bastion host configuration. Your own organization may have additional requirements that need to be implemented before the example can be used.


To get the most out of this article, you should have a general understanding of IBM Cloud VPC and VSIs. To run the example in IBM Cloud Schematics, you will also need an IBM Cloud account. The resources deployed by the example are chargeable.

Estimated time

Take 20 minutes to read this article, then try the example in IBM Cloud Schematics, which will take 15-20 minutes of elapsed time to deploy.

Bastion hosts and VPCs

A bastion host or jump server is a well-understood solution for remote server access using SSH. The bastion host is a locked-down server in its own subnet with an IP address accessible via the public internet. The app servers themselves remain isolated in private subnets, secure from direct access from the internet. The only SSH connection allowed to the app servers is by first connecting through the bastion host. Public network access to the bastion host is restricted to SSH only, limiting the attack surface. Similarly, the app servers are restricted to only accepting SSH connections from the bastion host. The figure here illustrates a VPC configuration in which all SSH traffic is routed via the bastion host.

Figure 1

SSH traffic comes in from the internet via a floating IP address (FIP) used as the public IP address of the bastion host. In this figure, the bastion ACL and security group only allow inbound internet access from the defined source address on the public internet. All other internet traffic is explicitly denied by the bastion ACL. The bastion ACL and security group use outbound rules to limit SSH connectivity on port 22 to the VSIs in the front-end security group and subnet range. Similarly, the front-end ACL and security group inbound rules only allow SSH access on port 22 from the bastion security group and bastion subnet range. Both ACLs include additional allow rules for return traffic, shown in this figure with a port designation of ephemeral. The topic of ephemeral ports will be covered next.

In the VPC network model, security groups and network ACLs are key to securing SSH access via the public internet to the app servers.

Security group and ACL configuration

Security groups act as a stateful firewall for associated VSIs. They control both inbound and outbound traffic at an individual VSI level. Rules are stateful, which means that the firewall automatically allows the return traffic in response to a request; no separate outbound rules for return traffic are required.
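As a minimal sketch of this statefulness, the following rule (using the IBM Cloud Terraform provider's `ibm_is_security_group_rule` resource; the security group reference and admin CIDR are placeholder assumptions, not from the article) is all that is needed to permit an SSH session to a VSI in the group:

```terraform
# Hypothetical sketch: allow inbound SSH from an admin network.
# "ibm_is_security_group.bastion" and the CIDR are illustrative placeholders.
resource "ibm_is_security_group_rule" "bastion_ssh_in" {
  group     = ibm_is_security_group.bastion.id
  direction = "inbound"
  remote    = "203.0.113.0/24" # placeholder admin CIDR (TEST-NET-3)

  tcp {
    port_min = 22
    port_max = 22
  }
}
# No matching outbound rule is required: because security groups are
# stateful, the SSH response traffic is allowed back automatically.
```

Contrast this single rule with the inbound/outbound rule pair that the stateless ACLs described next will require for the same traffic.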

In contrast, ACLs act as a stateless firewall for associated IP subnets. This is a distinct and separate firewall from the stateful security group rules that allow access to and from the instances. Because ACLs are stateless, both inbound and outbound rules are required on the subnet to allow a request and its returning response. ACL rule creation is more complex than that for security groups, as source and destination CIDRs and ports must be specified — a task made more complex by the need to understand how the TCP and UDP protocols use ephemeral ports when they create connections.

Ephemeral ports

With TCP or UDP, the destination port number for the target server is determined by the protocol itself — for SSH, this is 22. The source port for the client is a random high port, known as an ephemeral port, that is used to communicate with the known server destination port. For Linux machines, this high port is typically in the range 1024-65535.

If I were to SSH from my local workstation (client) to a VSI (server), the connection would look like this:

workstation:53029 ---> VSI:22

22 is the SSH port I’m connecting to on the server (destination); 53029 is the ephemeral port used on my local workstation (source).

The return route communicates from port 22 on the VSI back to the same ephemeral port on the workstation:

VSI:22 ---> workstation:53029

This usage of ephemeral ports is important to setting up ACL rules, as IBM Cloud ACLs require both the source and destination CIDRs and port ranges to be specified. The following illustrates the pair of ACL rules that would enable SSH traffic from my local workstation to and back from a VSI.

| Inbound/Outbound | Allow/Deny | Protocol | Source IP and port | Destination IP and port |
| --- | --- | --- | --- | --- |
| Inbound | Allow | TCP | /24, ports 1024-65535 | Ports 22-22 |
| Outbound | Allow | TCP | Ports 22-22 | /24, ports 1024-65535 |
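This rule pair could be expressed in Terraform with the IBM Cloud provider's `ibm_is_network_acl` resource. The sketch below is illustrative only — the CIDRs, names, and VPC reference are placeholder assumptions, not values from the article or the companion example:

```terraform
# Hypothetical sketch: the inbound/outbound ACL rule pair for one SSH flow.
# "ibm_is_vpc.vpc" and both CIDRs are illustrative placeholders.
resource "ibm_is_network_acl" "ssh_demo_acl" {
  name = "ssh-demo-acl"
  vpc  = ibm_is_vpc.vpc.id

  rules {
    name        = "ssh-in"
    action      = "allow"
    direction   = "inbound"
    source      = "203.0.113.0/24" # workstation network (placeholder)
    destination = "10.10.10.0/24"  # VSI subnet (placeholder)
    tcp {
      source_port_min = 1024 # client ephemeral port range
      source_port_max = 65535
      port_min        = 22 # SSH on the server
      port_max        = 22
    }
  }

  rules {
    name        = "ssh-return"
    action      = "allow"
    direction   = "outbound"
    source      = "10.10.10.0/24"
    destination = "203.0.113.0/24"
    tcp {
      source_port_min = 22 # reply leaves from the SSH port
      source_port_max = 22
      port_min        = 1024 # back to the client's ephemeral port
      port_max        = 65535
    }
  }
}
```

Note how the source and destination port ranges swap between the two rules: the statelessness of ACLs forces you to describe both directions of the same conversation.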

Given this added configuration complexity, a reasonable question to ask is, “Are ACLs required as well as security groups?” The answer here is yes due to the public network access. ACLs provide a valuable second layer of defense that prevents accidental misconfiguration of security groups leaving VSIs vulnerable to attack.

To take the complexity out of this task, a Terraform module is used to automate the creation of security group and ACL rules. Taking the source and destination CIDRs as input, the module creates all required inbound and outbound rules and configures them with protocol and ephemeral ports as needed.

Configuring a VPC with a bastion host

As introduced earlier, a common usage of a bastion host is to enable secure access for provisioning and configuration of application code using Red Hat Ansible. The Ansible connection is over SSH or WinRM to the target VSIs. The figure here illustrates the example Terraform VPC configuration associated with this article, suitably configured with a bastion host for use with Red Hat Ansible and IBM Cloud Schematics.

Figure 2

Ansible ingress for software configuration comes into the app VSIs on port 22 via the bastion host VSI, subnet, and security group. Users access the application hosted by the front-end webservers via the load balancer on port 443. A public gateway provides outbound access to the internet, allowing Ansible to pull down open source software packages and fixes for installation on the VSIs.

The lightly shaded box in the figure illustrates the scope of the configuration to add SSH access. The overlap of the shaded box on the front- and back-end subnets shows that the addition of a bastion host to a VPC requires more than the configuration of the bastion itself. Bastion connectivity depends on the configuration of the app-tier security groups and subnet ACLs that require SSH access. The bastion Terraform module only addresses the bastion host configuration; the rules for bastion access have to be added to the modules or config files implementing the app tiers. This is illustrated in the companion example using front-end and back-end modules.

ACL and security group rules for SSH access

For the VPC example referenced in this article, the table below identifies all the ACL and security group rules required for SSH access, application traffic, and IBM Cloud services. The same table methodology can be used to identify the required rules for any VPC configuration.

Table 2

Rules in the Bastion column are created by the Bastion Terraform module. The rules in the Internal SSH access row for front-end and back-end config identify the rules that must be added to the VPC configuration for the front- and back-end tiers to enable SSH access. These are handled by the front- and back-end modules. For each bastion outbound rule, there is a corresponding inbound app tier rule and vice-versa.

The application access row illustrates the rules required for application access from the front-end tier to the back-end tier. These rules will be replicated for each of the application protocols in use.

The Cloud Services access row illustrates the rules required for access to management services — including DNS, NTP, and mirror repos on one CIDR — and to IBM Cloud services accessed via the private service endpoints CIDR.

VPC subnet addressing and CIDRs

The rules table above illustrates that the number of ACL rules for each application tier can grow rapidly as more protocols and destinations are added. Both the front- and back-end tiers shown here have eight ACL rules each. This has practical implications, as network performance can be impacted by large numbers of rules, and each ACL also has a combined limit of 25 inbound and outbound rules. This is manageable when an application is deployed in a single zone of a multi-zone region (MZR), as there are only three subnets — one for each tier — and hence few sources and destinations.

If high availability is desired, the application could be deployed across all three zones in an MZR. With three tiers, this results in potentially nine subnet ranges. When inbound and outbound rules are included, this equates to 18 ACL rules for SSH access. Without care, the bastion host ACL will exceed the allowed number of rules.

To keep the number of rules to a manageable level, my preference is for ACL rules for each tier to be scoped to a single CIDR, as shown in the table below. A CIDR range is defined for each app tier, and up to three subnet ranges are carved out of this for the MZR zones. This allows for a single ACL rule with a single CIDR range for all three MZR zones in the app tier. With one inbound and outbound rule for each of the three tiers, it results in a total of six rules, which is more manageable.
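The carving of per-zone subnets out of a single tier CIDR can be sketched with Terraform's built-in `cidrsubnet()` function. The tier CIDR below is a hypothetical value for illustration, not one taken from the companion example:

```terraform
# Hypothetical sketch: derive three zone subnets from one tier CIDR.
# "172.16.0.0/20" is an illustrative placeholder tier CIDR.
locals {
  frontend_cidr = "172.16.0.0/20"

  # Add 2 bits to the /20 prefix, giving four possible /22 subnets;
  # take the first three, one per MZR zone.
  frontend_zone_cidrs = [
    for z in range(3) : cidrsubnet(local.frontend_cidr, 2, z)
  ]
  # yields: 172.16.0.0/22, 172.16.4.0/22, 172.16.8.0/22
}
```

Because all three zone subnets fall inside the single tier CIDR, one ACL rule scoped to `172.16.0.0/20` covers SSH traffic to or from any zone in the tier.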

| Tier CIDR | Zone 1 CIDR | Zone 2 CIDR | Zone 3 CIDR |
| --- | --- | --- | --- |

In the bastion Terraform module and VPC example, the VPC subnet ranges for each tier are explicitly set using the address prefix option to follow this addressing approach. The relevant prefixes are then passed to the security group and ACL config of each app tier (bastion, frontend, backend) in a top-down deterministic approach. With the use of prefixes, all subnet ranges are then known and available prior to the config of the first ACLs.
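Setting an explicit address prefix looks like the following sketch, using the IBM Cloud provider's `ibm_is_vpc_address_prefix` resource. The names, zone, and CIDR are placeholder assumptions rather than values from the companion example:

```terraform
# Hypothetical sketch: pin a zone's subnet range with an address prefix
# so it is known before any ACLs are configured. All values are
# illustrative placeholders.
resource "ibm_is_vpc_address_prefix" "frontend_zone1" {
  name = "frontend-zone1-prefix"
  zone = "us-south-1"
  vpc  = ibm_is_vpc.vpc.id
  cidr = "172.16.0.0/22"
}
```

Subnets created in that zone are then allocated from the pinned range, keeping the addressing deterministic rather than relying on VPC default prefixes.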

Bastion host configuration with Terraform

Having covered the background to bastion host configuration, security groups, and ACLs, there is relatively little left to say about deploying a bastion host using Terraform. In the companion example, the complexity of security group and ACL rule configuration is all handled by reusable Terraform modules. Each application tier in the VPC is configured by its own Terraform module.

Modules are the key to writing reusable, maintainable, and testable Terraform configs. Instead of having the same code repeated in templates for multiple staging and production environments, all can reuse code from the same module. The modules in the example use dynamic blocks, a feature introduced in Terraform 0.12.

With dynamic blocks, you can construct repeatable nested blocks of ACL rules and security groups as required based on the passed subnet ranges. This gives great flexibility, as rules can be dynamically defined depending on the number of input subnets. It avoids the static definition of the many ACL rules and supports reuse of the module within other Terraform templates.

A fragment of the bastion Terraform module is shown here. An array of rule input values, sourceinboundrules, is created from the list of passed ssh_source_cidr_blocks using a for expression. This is passed as input to the dynamic rules block in the ibm_is_network_acl resource. Each element of the array defines a complete ACL rule. The input to the module is the list of subnet ranges requiring SSH access.

sourceinboundrules = [
    for entry in var.ssh_source_cidr_blocks :
    ["allow", entry, "", "inbound", "tcp", 1024, 65535, 22, 22]
]

resource "ibm_is_network_acl" "bastion_acl" {
  name           = "${var.unique_id}-bastion-acl"
  vpc            = var.ibm_is_vpc_id
  resource_group = var.ibm_is_resource_group_id

  # rulesmerge combines the generated inbound and outbound rule lists
  # (its definition is elided here)
  dynamic "rules" {
    for_each = local.rulesmerge
    content {
      name        = "${var.unique_id}-rule-${rules.key}"
      action      = rules.value.action
      source      = rules.value.source
      destination = rules.value.destination
      direction   = rules.value.direction

      # Emit a nested tcp block only for TCP rules
      dynamic "tcp" {
        for_each = rules.value.type == "tcp" ? [rules.value] : []
        content {
          source_port_min = tcp.value.source_port_min
          source_port_max = tcp.value.source_port_max
          port_min        = tcp.value.port_min
          port_max        = tcp.value.port_max
        }
      }
    }
  }
}
In all the modules, dynamic blocks mask the complexity of defining rules and security groups based on the user-provided input CIDR ranges. This allows the modules to be reused in other templates with different subnet ranges.

The code here shows the bastion Terraform module config from the example.

module "bastion" {
  source                  = "./bastionmodule"
  ibm_region              = var.ibm_region
  bastion_count           = 1
  unique_id               = var.vpc_name
  bastion_cidr            = var.bastion_cidr
  ssh_source_cidr_blocks  = local.bastion_ingress_cidr
  destination_cidr_blocks = [var.frontend_cidr, var.backend_cidr]
  destination_sgs         = [module.frontend.security_group_id]
  ssh_key_id              = var.ssh_key_id # ID of the SSH key uploaded to IBM Cloud
}

There are only five required input parameters to define the region, the VPC, and connectivity. For the inputs, the three important fields are:

  • bastion_cidr
  • destination_cidr_blocks
  • destination_sgs

Referring back to the section on VPC subnet addressing, the first two take the app tier CIDRs for the bastion, front end, and back end. The last takes as input the resource IDs of the security groups for the back- and front-end tiers. The Terraform dynamic block definition takes care of generating the required rules based on these inputs.

The output module.bastion.security_group_id of the module provides the security_group_id of the bastion security group for input to the front- and back-end modules. The front- and back-end modules take a similar approach passing in the tier CIDRs to dynamically create the ACLs and security groups.
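Inside the bastion module, exposing the security group ID is a standard Terraform output. The sketch below assumes an internal resource name (`ibm_is_security_group.bastion`) that is illustrative, not taken from the companion example:

```terraform
# Hypothetical sketch of the bastion module's output block.
# "ibm_is_security_group.bastion" is an assumed internal resource name.
output "security_group_id" {
  description = "ID of the bastion security group, for use by app-tier modules"
  value       = ibm_is_security_group.bastion.id
}

# In the root template, the calling module then wires this into a tier, e.g.:
#   bastion_security_group_id = module.bastion.security_group_id
```

Passing IDs between modules this way keeps each tier's module self-contained while letting the root template express the cross-tier SSH dependencies explicitly.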


Find the detailed technical steps to deploy the companion example in the README file in the IBM Cloud Schematics GitHub repository, including how to:

  1. Generate an SSH key and upload it to IBM Cloud
  2. Create the Schematics workspace
  3. Generate a plan in Schematics
  4. Apply the plan in Schematics

Review the created VPC and resources in the IBM Cloud dashboard. SSH access from IBM Cloud Schematics can be validated by setting the input variable ssh_accesscheck to true and applying the template again.


Red Hat Ansible, Terraform, and IBM Cloud VPC make it quick and simple to deploy new applications, reliably and repeatably. IBM Cloud Schematics and Terraform modules make it easy to use Ansible, by taking the complexity out of configuring the network security features of IBM Cloud VPC.