Create a standard load balancer in Azure with Terraform

Kristi Ndoni
Mar 7, 2023


Infrastructure as Code (IaC) is a software engineering approach that involves the use of code to automate the provisioning and management of IT infrastructure. With IaC, you can define and deploy infrastructure resources, such as virtual machines, networks, and load balancers, in a repeatable and scalable way, using tools such as Terraform.

This approach allows you to treat infrastructure as if it were software, enabling you to version, test, and collaborate on infrastructure changes with the same level of agility and control as you would with application code. In this article, we will explore how to use Terraform to create a standard load balancer in Azure, by defining the infrastructure resources as code and deploying them with the Terraform CLI.

How to create an Azure Standard Load Balancer with backend pools in Terraform

Below is a list of the parts that constitute this build. Each resource will live in its own module.

  • Resource Group
  • Virtual Machines
  • Network Interfaces
  • Standard Load Balancer
  • Virtual Network
  • Network security groups

The Modules

To keep our deployment scripts organized and as simple as possible, we will break them into composable modules. Each module gets its own folder containing a main.tf file and, if needed, variables.tf, locals.tf, and outputs.tf files.

Since the resource group is already created, I will reference it through a locals block in the main.tf of the whole project. For now, I will start with the vnet_module.
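
Based on the module sources referenced later in main.tf, the project layout looks roughly like this (the providers.tf file name is my assumption; the provider configuration can just as well live in the root main.tf):

.
├── main.tf
├── providers.tf
└── modules/
    ├── vnet_module/
    ├── subnet_module/
    ├── nic_module/
    ├── vm_module/
    ├── lb_module/
    └── storageaccount_module/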

  • Here I declare a virtual network called appnetwork. The resource group and location are passed in as variables from main, while the name and address space of the network are defined in a locals block at the beginning of the file.
locals {
  virtual_network = {
    name          = "app-network"
    address_space = "10.0.0.0/16"
  }
}

resource "azurerm_virtual_network" "appnetwork" {
  name                = local.virtual_network.name
  resource_group_name = var.resource_group_name
  address_space       = [local.virtual_network.address_space]
  location            = var.location
}

output "vnet_name" {
  value = azurerm_virtual_network.appnetwork.name
}

output "virtual_network_id" {
  value = azurerm_virtual_network.appnetwork.id
}

variable "resource_group_name" {}

variable "location" {
  default = "westeurope"
}
  • Next, we define a module for creating a subnet. It accepts a name, the virtual network name, the resource group name, and an address prefix, and returns the generated subnet id:
resource "azurerm_subnet" "subnet1" {
name = "Subnet1"
resource_group_name = var.resource_group
virtual_network_name = var.vnet_name
}

variable "resource_group" {}

variable "vnet_name" {}

output "subnet_id" {
value = data.azurerm_subnet.subnet1.id
}
  • Now we need to create two NIC resources for the two Linux machines that will serve as the backend pool of the load balancer. We add a count variable that holds the number of virtual machines, so a distinct name is generated for each network interface card. This module outputs the NIC ids and private IP addresses; since there are two NICs, we use a for expression to output each address.
resource "azurerm_network_interface" "nic" {
location = var.location
name = var.nic_name[count.index]
resource_group_name = var.resource_group
count = var.count_nr

ip_configuration {
name = var.ip_config_name
private_ip_address_allocation = var.private_ip
subnet_id = var.subnet_id
}
}

output "nic_id" {
value = azurerm_network_interface.nic.*.id
}

output "private_ip_add" {
value = [for nic in azurerm_network_interface.nic : nic.private_ip_address]
}

variable "location" {
default = "westeurope"
}

variable "nic_name" {}

variable "resource_group" {}

variable "ip_config_name" {
default = "ip_name"
}

variable "private_ip" {
default = "Dynamic"
}

variable "subnet_id" {}

variable "count_nr" {}
  • Next, we are ready to create the virtual machines. To generate the password I have used the random provider. Each Linux machine gets a username, the generated password, a location, and one of the NIC ids output by the module above. It is important to set count here, as it determines the number of virtual machines; every other detail can be found in the Terraform documentation. From this module we output the password and the private IPs of the virtual machines.
resource "random_password" "random_pass" {
length = 8
special = true
upper = true
numeric = true
}

resource "azurerm_linux_virtual_machine" "linux_vm1" {
admin_username = var.vm_username
admin_password = random_password.random_pass.result
location = var.vm_location
name = var.vm_name
network_interface_ids = [var.network_interface_ids[count.index]]
resource_group_name = var.resource_group
size = var.vm_size
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
offer = "UbuntuServer"
publisher = "Canonical"
sku = "16.04-LTS"
version = "latest"
}
disable_password_authentication = false
count = var.vm_count
}
output "password_vm" {
value = random_password.random_pass.result
}

output "vm_ip" {
value = [for ip in azurerm_linux_virtual_machine.linux_vm1 : ip.private_ip_address]
}

variable "vm_username" {}

variable "vm_location" {
default = "westeurope"
}

variable "vm_name" {}

variable "network_interface_ids" {}

variable "resource_group" {}

variable "vm_size" {}

variable "vm_count" {}
  • Next, we can create the load balancer. First we need a public IP address through which clients will reach the load balancer. Once created, we attach it to the load balancer as the frontend_ip_configuration.
resource "azurerm_public_ip" "loadip" {
allocation_method = "Static"
location = var.location
name = "load-ip"
resource_group_name = var.resource_group_name
sku = "Standard"
}

resource "azurerm_lb" "appbalancer" {
location = var.location
name = "app-balancer"
resource_group_name = var.resource_group_name
sku = "Standard"
sku_tier = "Regional"
frontend_ip_configuration {
name = "frontend-ip"
public_ip_address_id = azurerm_public_ip.loadip.id
}
depends_on = [
azurerm_public_ip.loadip
]
}
  • Now we can create the backend address pool, which takes the virtual machines' private IPs as input. We also create a probe for health checking the backend and a rule for reaching the machines on port 22.
resource "azurerm_lb_backend_address_pool" "poolA"{
loadbalancer_id = azurerm_lb.appbalancer.id
name = "PoolA"
}

resource "azurerm_lb_backend_address_pool_address" "appvmaddress" {
count = var.number_of_machines
backend_address_pool_id = azurerm_lb_backend_address_pool.poolA.id
#ip_address = azurerm_network_interface.appinterface[count.index].private_ip_address
ip_address = var.ip_address[count.index]
name = "appvm${count.index}"
virtual_network_id = var.virtual_network_id
#virtual_network_id = azurerm_virtual_network.appnetwork.id
}

resource "azurerm_lb_probe" "probeA" {
loadbalancer_id = azurerm_lb.appbalancer.id
name = "probeA"
port = 22
protocol = "Tcp"

}

resource "azurerm_lb_rule" "RuleA" {
backend_port = 22
frontend_ip_configuration_name = "frontend-ip"
frontend_port = 22
loadbalancer_id = azurerm_lb.appbalancer.id
name = "RuleA"
protocol = "Tcp"
backend_address_pool_ids = [azurerm_lb_backend_address_pool.poolA.id]
probe_id = azurerm_lb_probe.probeA.id
}
variable "number_of_machines" {
type = number
description = "This defines the number of virtual machines in the virtual network"
default = 2
}

variable "resource_group_name" {}

variable "location" {
default = "westeurope"
}

variable "ip_address" {}

variable "virtual_network_id" {}

Finally, we are ready to connect everything together. In the main.tf of the project we add every module:

locals {
  resource_group = "RG_BOOTCAMP_CLOUD_NETWORKING_KRISTI_NDONI"
  location       = "westeurope"
  nic_name       = ["nic_vm1", "nic_vm2"]
  vm_count       = 2
  vm_name        = ["vm_pool1", "vm_pool2"]
}

module "vnet_module" {
  source              = "./modules/vnet_module"
  resource_group_name = local.resource_group
  location            = local.location
}

module "subnet_module" {
  source         = "./modules/subnet_module"
  resource_group = local.resource_group
  vnet_name      = module.vnet_module.vnet_name
}

module "nic_module" {
  source         = "./modules/nic_module"
  nic_name       = local.nic_name
  resource_group = local.resource_group
  subnet_id      = module.subnet_module.subnet_id
  count_nr       = length(local.vm_name)
}

module "linux_vm" {
source = "./modules/vm_module"
network_interface_ids = module.nic_module.nic_id
resource_group = local.resource_group
vm_size = "Standard_B2s"
vm_username = "demouser"
vm_count = local.vm_count
for_each = {
for i in range(local.vm_count) :
"vmlb-${i+1}" => i+1
}

vm_name = each.key
}

module "lb_module" {
source = "./modules/lb_module"
ip_address = module.nic_module.private_ip_add
resource_group_name = local.resource_group
virtual_network_id = module.vnet_module.virtual_network_id
}

module "storage_account" {
source = "./modules/storageaccount_module"
location = local.location
resource_group_name = local.resource_group
}
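
main.tf also calls a storageaccount_module that is not shown earlier in the article. A minimal sketch of what its main.tf might contain is below (the account name is my assumption; it must be globally unique, lowercase, and 3–24 characters):

resource "azurerm_storage_account" "appstorage" {
  name                     = "appstoragelbdemo" # assumed name; must be globally unique
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

variable "resource_group_name" {}

variable "location" {
  default = "westeurope"
}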

Don’t forget to also add the provider configuration at the root of the project.
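
A minimal sketch, assuming the azurerm and random providers used above (the version constraints are my assumptions):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # assumed constraint
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.4" # assumed constraint; the numeric argument needs a recent release
    }
  }
}

provider "azurerm" {
  features {}
}

With the providers in place, you can now run: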

terraform init

terraform plan

terraform apply
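
Once the apply completes, you can verify the setup by connecting through the load balancer. The lb_module above does not yet expose the public IP, so the sketch below assumes you add an output for it in the module and surface it at the root (both output names are my assumptions):

# modules/lb_module/outputs.tf — assumed addition, not part of the original module
output "lb_public_ip" {
  value = azurerm_public_ip.loadip.ip_address
}

# root outputs.tf — assumed addition
output "load_balancer_ip" {
  value = module.lb_module.lb_public_ip
}

You can then read the address with terraform output load_balancer_ip and SSH to demouser@<that address>; RuleA forwards port 22 to one of the backend VMs. The admin password comes from the vm module's password_vm output, which would need to be surfaced at the root (marked sensitive) in the same way to read it.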

In conclusion, Infrastructure as Code is a powerful approach that can help you automate and streamline your infrastructure management processes. With the use of tools like Terraform, you can easily define, deploy, and manage infrastructure resources in a standardized and scalable way. By following the steps outlined in this article, you should now have a good understanding of how to create a standard load balancer in Azure with Terraform. This is just the beginning of what you can achieve with Infrastructure as Code, so we encourage you to continue exploring and leveraging this powerful approach to optimize your infrastructure management practices.
