Designing Google Cloud Networks


VPC Networks and Subnets

In Google Cloud, VPC networks are global. You can either create an auto mode network, which automatically creates one subnet in each region, or create a custom mode network, where you specify the regions in which to create subnets.

Resources across regions can communicate using their internal IP addresses without any added interconnect. For example, the diagram on the right shows two subnets in different regions, with a server on each subnet.

  • When creating networks, create subnets for the regions you want to operate in.
  • Resources across regions can reach each other without any added interconnect.
  • If you are a global company, choose regions around the world.
  • If your users are close together, choose the region closest to them plus a backup region.
  • A project can have multiple networks.

They can communicate with each other using their internal IP addresses because they are connected to the same VPC network.

Selecting which regions to create subnets in depends on your requirements. For example, if you are a global company, you will most likely create subnetworks in regions across the world. If users are within a particular region, it may be suitable to select just one subnet in a region closest to these users and maybe a backup region close by.

To create custom subnets, you specify the region and the internal IP address range as illustrated in the screenshots on the right.

The IP ranges of the subnets don’t need to be derived from a single CIDR block, but they cannot overlap with other subnets of the same VPC network. This applies to both primary and secondary ranges. Secondary ranges allow you to define alias IP addresses. Also, you can expand the primary IP address space of any subnet without any workload shutdown or downtime.
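As a sketch, the steps above could be performed with gcloud roughly as follows (the network, subnet, region, and range names here are illustrative, not from the original):

```shell
# Create a custom mode VPC network (no subnets are created automatically).
gcloud compute networks create my-network --subnet-mode=custom

# Create a subnet with a primary range plus a secondary range for alias IPs.
gcloud compute networks subnets create my-subnet \
    --network=my-network \
    --region=us-east1 \
    --range=10.0.1.0/24 \
    --secondary-range=my-aliases=10.4.0.0/20

# Later, expand the primary range in place with no downtime; the new
# prefix length must describe a larger range than the current one.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-east1 \
    --prefix-length=20
```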

Once you define your subnets, machines in the same VPC network can communicate with each other through their internal IP address regardless of the subnet they are connected to.

A single VM can have multiple network interfaces connecting to different VPC networks. This graphic illustrates an example of a Compute Engine instance connected to four different networks covering production, test, infra, and an outbound network.

A VM must have at least one network interface but can have up to eight, depending on the instance type and the number of vCPUs. A general rule is that with more vCPUs, more network interfaces are possible. All of the network interfaces must be created when the instance is created, and each interface must be attached to a different network.
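For illustration, a two-interface VM might be created as follows (network and subnet names are assumptions; note that every interface must be defined at creation time and each must attach to a different network):

```shell
# Create a VM with two network interfaces; the second has no external IP.
gcloud compute instances create multi-nic-vm \
    --zone=us-east1-b \
    --network-interface=network=prod-net,subnet=prod-subnet \
    --network-interface=network=test-net,subnet=test-subnet,no-address
```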

Shared VPCs

Shared VPC allows an organization to connect resources from multiple projects of a single organization to a common VPC network. This allows the resources to communicate with each other securely and efficiently using internal IPs from that network.


This graphic shows a scenario where a shared VPC is used by three other projects, namely service projects A, B and C. Each of these projects has a VM instance that is attached to the Shared VPC.

Shared VPC is a centralized approach to multi-project networking, because security and network policy are administered in a single designated VPC network. This allows for network administrator rights to be removed from developers so that they can focus on what they do best. Meanwhile, organization network administrators maintain control of resources, such as subnets, firewall rules, and routes, while delegating the control of creating resources, such as instances, to service project administrators or developers.
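A minimal sketch of setting this up with gcloud, assuming hypothetical project IDs and an administrator with the Shared VPC Admin role:

```shell
# Designate the host project whose VPC network will be shared.
gcloud compute shared-vpc enable host-project-id

# Attach a service project so its administrators can create resources
# (such as VM instances) in the shared network's subnets.
gcloud compute shared-vpc associated-projects add service-project-a \
    --host-project=host-project-id
```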

Load Balancing

Global Load Balancers

Global load balancers provide access to services deployed in multiple regions. For example, the load balancer shown on this slide has a backend with two instance groups deployed in different regions.


Global load balancing is supported by HTTP load balancers and TCP and SSL proxies in Google Cloud.

HTTP(S) Load Balancer

For an HTTP load balancer, a global anycast IP address can be used, simplifying DNS lookup. By default, requests are routed to the region closest to the requester.

If you are using HTTP(S) load balancing, you should leverage Cloud CDN to achieve lower latency and decrease egress costs. You can enable Cloud CDN by simply checking a box when configuring an HTTP(S) global load balancer.

Cloud CDN caches content across the world using Google Cloud’s edge-caching locations. This means that content is cached closest to the user making the requests. The data that is cached can be from a variety of sources, including Compute Engine instances, GKE pods, or Cloud Storage buckets.
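As a sketch, enabling Cloud CDN from the command line comes down to a single flag on the load balancer's backend service (the service and health check names below are assumptions):

```shell
# Create a global backend service for an HTTP(S) load balancer with
# Cloud CDN enabled -- the CLI equivalent of checking the Cloud CDN
# box in the console.
gcloud compute backend-services create web-backend \
    --protocol=HTTP \
    --health-checks=web-health-check \
    --global \
    --enable-cdn
```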

Regional Load Balancer

For services deployed in a single region, use a regional load balancer.


This graphic illustrates resources deployed within a single region and Cloud Load Balancing routing requests to these resources. Regional load balancers support HTTP and any TCP or UDP port.

If your load balancers have public IP addresses, traffic will likely traverse the Internet. Best practice is to secure this traffic with SSL, which is available for HTTP and TCP load balancers.

Summary of Load Balancers


Connecting Networks

VPC Peering

VPC peering allows private RFC 1918 connectivity across two VPC networks regardless of whether they belong to the same project or the same organization. Each VPC network will have firewall rules that define traffic that is allowed or denied between the networks.

  • Can be the same or different organizations.
  • Subnet ranges cannot overlap.
  • Network admins for each VPC must approve the peering requests.


Network administrators for each VPC network must configure a VPC peering request for a connection to be established.
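A sketch of the two matching peering requests, using hypothetical project and network names:

```shell
# In project-a: request peering from network-a to network-b.
gcloud compute networks peerings create peer-ab \
    --network=network-a \
    --peer-project=project-b \
    --peer-network=network-b

# In project-b: the matching request. The peering only becomes
# ACTIVE once both sides have been configured.
gcloud compute networks peerings create peer-ba \
    --network=network-b \
    --peer-project=project-a \
    --peer-network=network-a
```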

Cloud VPN (Classic VPN and HA VPN)

Cloud VPN securely connects your on-premises network to your Google Cloud VPC network through an IPsec VPN tunnel. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by another VPN gateway.

  • Useful for low-volume data connections.
  • Classic VPN: 99.9% SLA.
  • High-Availability (HA) VPN: 99.99% SLA.
  • Supports
    • Site-to-site VPN
    • Static routes (Classic VPN only)
    • Dynamic routes (Cloud Router)
    • IKEv1 and IKEv2 ciphers

This protects your data as it travels over the public Internet. That’s why Cloud VPN is useful for low volume data connections.

As a managed service, Cloud VPN provides an SLA of 99.9 percent monthly uptime for the Classic VPN configuration and 99.99 percent monthly uptime for the high-availability VPN configuration.

Classic VPN gateways have a single interface and a single external IP address, whereas high-availability VPN gateways have two interfaces with two external IP addresses, one per interface. The choice of VPN gateway comes down to your SLA requirements and routing options.

Cloud VPN supports site-to-site VPN, static routes and dynamic routes using Cloud Router, and IKEv1 and IKEv2 ciphers. However, static routes are only supported by Classic VPN. Also, Cloud VPN doesn’t support use cases where client computers need to dial in to a VPN using client VPN software.


This diagram shows a classic VPN connection between your VPC and on-premises network. Your VPC network has subnets in us-east1 and us-west1, with Google Cloud resources in each of those regions. These resources are able to communicate using their internal IP addresses because routing within a network is automatically configured, assuming that firewall rules allow the communication.

Now, in order to connect your on-premises network and its resources, you need to configure your Cloud VPN gateway, your on-premises VPN gateway, and two VPN tunnels.

The Cloud VPN gateway is a regional resource that uses a regional external IP address. Your on-premises VPN gateway can be a physical device in your data center, or a physical or software-based VPN offering in another cloud provider’s network. This VPN gateway also has an external IP address.

A VPN tunnel then connects your VPN gateways and serves as a virtual medium through which encrypted traffic is passed. In order to create a connection between two VPN gateways, you must establish two VPN tunnels. Each tunnel defines a connection from the perspective of its gateway, and traffic can only pass when a pair of tunnels is established.
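A minimal sketch of the Cloud-side half of a classic VPN setup (gateway names, region, peer address, and traffic selectors are all assumptions):

```shell
# Create the Cloud VPN gateway -- a regional resource.
gcloud compute target-vpn-gateways create my-vpn-gateway \
    --network=my-network \
    --region=us-east1

# Create a tunnel to the on-premises gateway's external IP.
# (Forwarding rules for ESP and UDP 500/4500 to the gateway's
# external IP are also required; omitted here for brevity.)
gcloud compute vpn-tunnels create tunnel-to-onprem \
    --target-vpn-gateway=my-vpn-gateway \
    --region=us-east1 \
    --peer-address=203.0.113.10 \
    --ike-version=2 \
    --shared-secret=SECRET \
    --local-traffic-selector=10.0.0.0/8 \
    --remote-traffic-selector=192.168.0.0/16
```

The on-premises gateway must be configured with the mirror image of this tunnel before traffic can pass.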

Now, one thing to remember when using Cloud VPN is that the maximum transmission unit, or MTU, for your on-premises VPN gateway cannot be greater than 1,460 bytes. This is because of the encryption and encapsulation of packets.

HA (High Availability) VPN

HA VPN is a high-availability Cloud VPN solution that lets you securely connect your on-premises network to your Virtual Private Cloud through an IPsec VPN connection in a single region.

HA VPN provides an SLA of 99.99 percent service availability. To guarantee a 99.99 percent availability SLA for HA VPN connections, you must properly configure two or four tunnels from your HA VPN gateway to your peer VPN gateway, or to another HA VPN gateway.

  • Provides 99.99% service availability.
  • Google Cloud automatically chooses two external IP addresses.
    • Supports multiple tunnels.
    • VPN tunnels connected to HA VPN gateways must use dynamic (BGP) routing.
  • Supports site-to-site VPN for different topologies/configuration scenarios
    • An HA VPN gateway to peer VPN devices
    • An HA VPN gateway to an Amazon Web Services (AWS) virtual private gateway
    • Two HA VPN gateways connected to each other

When you create an HA VPN gateway, Google Cloud automatically chooses two external IP addresses, one for each of its fixed number of two interfaces. Each IP address is automatically chosen from a unique address pool to support high availability. Each of the HA VPN gateway interfaces supports multiple tunnels. You can also create multiple HA VPN gateways.

When you delete the HA VPN gateway, Google Cloud releases the IP addresses for reuse. You can configure an HA VPN gateway with only one active interface and one external IP address. However, this configuration does not provide a 99.99 percent service availability SLA.

VPN tunnels connected to HA VPN gateways must use dynamic (BGP) routing. Depending on the way that you configure route priorities for HA VPN tunnels, you can create an active/active or active/passive routing configuration. HA VPN supports site-to-site VPN in one of the following recommended topologies or configuration scenarios.

An HA VPN gateway to peer VPN devices, an HA VPN gateway to an Amazon Web Services virtual private gateway, or two HA VPN gateways connected to each other.

There are three typical peer gateway configurations for HA VPN: an HA VPN gateway to two separate peer VPN devices, each with its own IP address; an HA VPN gateway to one peer VPN device that uses two separate IP addresses; and an HA VPN gateway to one peer VPN device that uses one IP address.
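As a sketch, creating the Cloud-side pieces of an HA VPN connection might look like this (gateway, router, region, ASN, and peer resource names are assumptions):

```shell
# Create the HA VPN gateway; Google Cloud automatically assigns two
# external IP addresses, one per interface.
gcloud compute vpn-gateways create ha-gw \
    --network=my-network \
    --region=us-east1

# HA VPN tunnels must use dynamic (BGP) routing via a Cloud Router.
gcloud compute routers create my-router \
    --network=my-network \
    --region=us-east1 \
    --asn=65001

# One tunnel per gateway interface (0 and 1) toward the peer gateway;
# only the interface-0 tunnel is shown here.
gcloud compute vpn-tunnels create ha-tunnel-0 \
    --vpn-gateway=ha-gw \
    --region=us-east1 \
    --interface=0 \
    --peer-external-gateway=onprem-gw \
    --peer-external-gateway-interface=0 \
    --router=my-router \
    --ike-version=2 \
    --shared-secret=SECRET
```

A second tunnel on interface 1 is required to qualify for the 99.99 percent SLA.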


In this topology, one HA VPN gateway connects to two peer devices. Each peer device has one interface and one external IP address. The HA VPN gateway uses two tunnels, one tunnel to each peer device.

If your peer-side gateway is hardware-based, having a second peer-side gateway provides redundancy and failover on that side of the connection. A second physical gateway lets you take one of the gateways offline for software upgrades or other scheduled maintenance.

It also protects you if there is a failure in one of your devices. In Google Cloud, the redundancy type for this configuration takes the value two IPs redundancy. The example shown here provides 99.99 percent availability.

You can connect two Google Cloud VPC networks together using an HA VPN gateway in each network. The configuration shown provides 99.99 percent availability.


From the perspective of each HA VPN gateway, you create two tunnels. You connect interface 0 on one HA VPN gateway to interface 0 on the other, and interface 1 on one HA VPN gateway to interface 1 on the other.

Cloud Router


In order to use dynamic routes, you need to configure Cloud Routers. A Cloud Router can manage routes for a Cloud VPN tunnel using Border Gateway Protocol or BGP.

This routing method allows for routes to be updated and exchanged without changing the tunnel configuration. This allows for new subnets like staging in the VPC network and rack 30 in the peer network to be seamlessly advertised between networks.
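A sketch of wiring a Cloud Router to a VPN tunnel for BGP, using hypothetical tunnel, router, and link-local address values:

```shell
# Attach a BGP interface on the Cloud Router for the VPN tunnel.
gcloud compute routers add-interface my-router \
    --interface-name=if-tunnel-0 \
    --vpn-tunnel=ha-tunnel-0 \
    --ip-address=169.254.0.1 \
    --mask-length=30 \
    --region=us-east1

# Define the BGP session with the on-premises router. Once the session
# is up, new subnets are advertised automatically -- no tunnel changes.
gcloud compute routers add-bgp-peer my-router \
    --peer-name=onprem-peer \
    --interface=if-tunnel-0 \
    --peer-ip-address=169.254.0.2 \
    --peer-asn=65002 \
    --region=us-east1
```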

Cloud Interconnect

If you need a dedicated high-speed connection between networks, consider using Cloud Interconnect. Cloud Interconnect has two options for extending on-premises networks: Dedicated Interconnect and Partner Interconnect.

Dedicated Interconnect provides a direct connection to a colocation facility. The colocation facility must support circuits of either 10 gigabits per second or 100 gigabits per second. A dedicated connection can bundle up to eight 10-Gbps connections or two 100-Gbps connections, for a maximum of 200 Gbps.

  • Dedicated Interconnect provides a direct connection to a colocation facility.
    • From 10 to 200 Gbps
  • Partner Interconnect provides a connection through a service provider.
    • Can purchase less bandwidth from 50 Mbps
  • Allows access to VPC resources using internal IP address space.
  • Private Google Access allows on-premises hosts to access Google services using private IPs.

Partner Interconnect provides a connection through a service provider. This can be useful for lower bandwidth requirements, starting from 50 megabits per second.

In both cases, Cloud Interconnect allows access to VPC resources using an internal IP address space. You can even configure Private Google Access for on-premises hosts to allow them to access Google services using private IP addresses.
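Once the physical connection is provisioned, the VPC side is wired up through VLAN attachments on a Cloud Router. A sketch with hypothetical resource names:

```shell
# Dedicated Interconnect: create a VLAN attachment linking the
# provisioned interconnect to a VPC through a Cloud Router.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect \
    --router=my-router \
    --region=us-east1

# Partner Interconnect: create an attachment and hand its pairing key
# to the service provider to complete the connection.
gcloud compute interconnects attachments partner create my-partner-attachment \
    --router=my-router \
    --region=us-east1 \
    --edge-availability-domain=availability-domain-1
```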

An example of Dedicated Interconnect.


An example of Partner Interconnect.

