Configuring pfSense and VLANs on Proxmox with a single NIC and Managed Switch

I’m setting up a VLAN on my Proxmox server to segregate test VMs from my home network. I’ve configured VLAN ID 10 on my D-Link switch for the port that my Proxmox server is connected to.

I’ve followed the majority of the steps in this excellent guide here, and captured additional screenshots along the way (mostly for my own reference).

In Proxmox, create a bridge with no IP, and enable ‘VLAN aware’:
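For reference, a VLAN-aware bridge with no IP looks roughly like this in /etc/network/interfaces on the Proxmox host — a sketch only: the bridge name vmbr99 and the VLAN ID range are assumptions, and you’d add your physical NIC under bridge-ports if tagged traffic needs to reach the switch through this bridge:

```
auto vmbr99
iface vmbr99 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```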

Create a new VM for pfSense from the pfSense ISO downloaded from here. For the network, use the default/original network bridge (vmbr0), not the new one just created above – this will be your WAN NIC for pfSense:

Once the VM is created, don’t boot it yet, but add a second Network Device – for the bridge, use the new one created in the earlier step – this will be your LAN NIC for machines within the VLAN:
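The same two-NIC setup can also be done from the Proxmox host shell. This is a sketch with assumed values – the VMID (100), ISO filename, disk size, and memory/core counts are all illustrative, not from the guide:

```shell
# Create the pfSense VM with net0 on vmbr0 (WAN); VMID and ISO name are assumptions
qm create 100 --name pfsense --memory 2048 --cores 2 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/pfSense-CE-2.7.2-RELEASE-amd64.iso

# Add the second NIC on the VLAN-aware bridge (LAN) before first boot
qm set 100 --net1 virtio,bridge=vmbr99
```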

Boot the VM and select the option to install:

Select the option to configure networks. In Proxmox, look at the two network devices – the first should be connected to your default Proxmox bridge (vmbr0) and the second should be the new one we just added (vmbr99):

For your WAN interface, assign the one that is connected to your default Proxmox bridge, in this case vmbr0:

I’ve left everything default for the WAN interface and then pressed Continue:

On the next screen it shows LAN connection as ‘not assigned’ – select it and press ‘Assign/Continue’:

Select the second interface (vtnet1) that is connected to the new bridge, vmbr99:

Configure your VLAN tag. I’ve set it to 10 to match what I’ve already configured on my D-Link managed switch:

I’ve configured my CIDR range as 10.0.10.0/24 and a DHCP range of 10.0.10.2 – 10.0.10.254 for this network:
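As a quick sanity check on the pool size: a /24 leaves 254 usable addresses (network and broadcast excluded), with .1 taken here by the pfSense LAN gateway, which is why the pool starts at .2:

```shell
# Usable host count for a /24 (10.0.10.0/24 in this example)
prefix=24
hosts=$(( (1 << (32 - prefix)) - 2 ))   # subtract network and broadcast addresses
echo "usable hosts: $hosts"             # prints "usable hosts: 254"
```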

Unless you have a pfSense Plus subscription, select the CE version:

To access the webConfigurator interface we need to temporarily disable the pfSense firewall, which we’ll address with a rule shortly. In the console for the pfSense VM, enter option 8 then run ‘pfctl -d’. It should respond with ‘pf disabled’:

In a browser, go to the WAN IP shown in the console, and log on with the defaults admin/pfsense. Change your password when prompted.

Under Interfaces, select your WAN interface and uncheck these 2 options (‘Block private networks and loopback addresses’ and ‘Block bogon networks’) to enable access to IPs on your VLAN subnets from your local home network IPs:

After applying changes, go back to your Proxmox console for the VM and run ‘pfctl -d’ again, and the web interface should be accessible again.

To set up a firewall rule to allow access to the pfSense VM from your home network, go to ‘Firewall / Rules / WAN’ and set up a rule with source = ‘WAN subnets’ and destination = ‘This firewall’. Save and apply. After a couple of seconds you should have access to the webConfigurator, and the rule should appear like this:

To enable DHCP for your VLAN subnet range, go to Services / DHCP server. If you see this message:

… follow the link and enable the ‘Kea DHCP’ backend.

Go back to Services / DHCP Server, check that DHCP is enabled, scroll down to Primary Address Pool and configure the IP range for your subnet:

From this point you should be ready to go.

To configure a VM to use the VLAN network and route through pfSense, instead of using the default vmbr0 bridge, select the new vmbr99 bridge that you added:
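From the host shell this is just the bridge selection on the VM’s NIC. The VMID 300 is an example, and whether you also need a tag on the NIC depends on where tagging happens in your setup – in this walkthrough pfSense applies the VLAN tag on its LAN interface:

```shell
# Point an existing VM's NIC at the VLAN bridge (VMID 300 is an example)
qm set 300 --net0 virtio,bridge=vmbr99

# If the tagging needs to happen at the bridge instead, add tag=10:
# qm set 300 --net0 virtio,bridge=vmbr99,tag=10
```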

As an example, when setting up a new Ubuntu 24.04 server during the install from ISO, under Network Configuration you should see the VM magically gets a new IP allocated from your pfSense DHCP server:

In pfSense Status / DHCP Leases you should see this new allocated IP:

To allow access from your home LAN to VMs within your new VLAN subnet, you need to:

a) add a pfSense firewall rule to allow traffic from your WAN subnet (or a specific IP) to specific IP destinations (or the whole VLAN subnet if you want to allow access to everything in the VLAN):

b) on the machine(s) that need to access your VMs in the new VLAN, add a route where the gateway is the IP address of your pfSense VM, which will handle routing the traffic between your WAN and the VLAN:

sudo route add -net 10.0.10.0/24 [gateway ip]

Where:

  • 10.0.10.0/24 is the CIDR for the VLAN I want to access
  • [gateway ip] is the IP of the pfSense VM that’s connected to your home network
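Note that the `route add -net` form above with a bare gateway argument is the BSD/macOS syntax; on a Linux client the equivalent with iproute2 would look like the sketch below, where 192.168.1.50 is an assumed stand-in for the pfSense WAN IP:

```shell
# Linux (iproute2) equivalent; 192.168.1.50 is an assumed pfSense WAN IP
sudo ip route add 10.0.10.0/24 via 192.168.1.50

# Verify which gateway will be used for a host in the VLAN
ip route get 10.0.10.10
```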

I tested ssh’ing into my new Ubuntu server on VLAN 10 and it’s all good!

Creating a Proxmox template from a Cloud Image, cloning to create additional VMs

Download a Cloud Image as a .qcow2 image from https://cloud-images.ubuntu.com/

Steps to create template and cloning a template are from docs here: https://pve.proxmox.com/wiki/Cloud-Init_Support

Create Proxmox Template from image

Note that the path to the image is an absolute path.

I’m numbering my base image/template as 200 but you can use any value that’s not already in use:

qm create 200 --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set 200 --scsi0 local-lvm:0,import-from=/root/ubuntu-server-24.04-cloudimg-amd64.img
qm set 200 --ide2 local-lvm:cloudinit
qm set 200 --boot order=scsi0
qm template 200

Clone template to a new VM

qm clone 200 201 --name [VM_NAME]
qm set 201 --sshkeys [YOUR_SSH_KEY].pub
qm set 201 --ipconfig0 ip=[YOUR-IP]/24,gw=[YOUR-GW]
qm disk resize 201 scsi0 20G
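After cloning, you can sanity-check and boot the new VM. `qm config` shows the cloud-init settings that will be applied on first boot; the guest-agent query at the end assumes qemu-guest-agent has been installed in the image, which isn’t the case by default:

```shell
qm config 201   # confirm ipconfig0 / sshkeys were applied
qm start 201

# Only works if qemu-guest-agent is installed in the guest (an assumption):
# qm guest cmd 201 network-get-interfaces
```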

Repurposing HDDs from my HP DL380 rack server in a downsized HP Elitedesk 800 g5 homelab

I used to run an HP DL380 G7 2U rack server for my homelab, but it would have been too expensive to ship during an international house move, so I abandoned it during the move. In the meantime I went back to running Proxmox on my 2008 MacPro (which I did have shipped), but after a few years of service (I got it used from eBay in 2017) it’s now started to have some random hardware issues:

  • the RAM has slowly been dying, stick by stick
  • one of the RAM riser boards is showing a red error LED when any of the sticks are in a certain position on the board, so I think the whole board might need to be replaced

Anyway, the short story is I’m down to 16GB in the Mac Pro.

In the meantime I’ve been browsing r/homelab and other places, and eventually settled on the realisation that a used small form factor office desktop, like the HP ProDesk and EliteDesk PCs, is more than enough for what I need (and substantially quieter). Given that you can pick them up in various specs for around £100+ I even thought about getting a couple to set up a Kubernetes cluster, but I just picked up one to start with, an Elitedesk 800 G5 SFF with an i7 and 32GB. The SFF model has room inside for a few HDDs and/or SSDs, and a slot on the mobo for an NVMe SSD too.

I wanted to do a stock check of what I had on the shelf that I can repurpose for the new Elitedesk – most of these were pulled from the DL380:

Purchased in 2017 – the first HDDs in the DL380, but replaced because of the fan noise (I’m not sure if I still have these):

  • HGST 7K750-500 HTS727550A9E364 (0J23561) 500GB 7200RPM 16MB Cache SATA 3.0Gb/s 2.5″

Purchased in 2017: second set of disks added to the RAID array in my DL380:

  • 2x WD Black 2.5″ 750GB

Purchased in 2019 to add additional space to the RAID array on my DL380 – these were refurbs from Amazon:

  • 2x WD Blue 1TB Mobile Hard Disk Drive – 5400 RPM SATA 6 Gb/s 128MB Cache 2.5 Inch – WD10SPZX (Certified Refurbished)

The SSD that came with the Elitedesk is also used (I wasn’t lucky enough to get a new one), and according to SMART it has 13,300 power-on hours on it. No errors, but it’s about halfway through its life. Although it’s probably OK for a while, I’ll probably swap it out. Now I’ve looked at the stats I may go for a new NVMe for the boot disk, and add the 2x WD Blacks for storage.
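For reference, the power-on hours and reallocation counts can be pulled with smartmontools – the device name /dev/sda here is an assumption; substitute whatever `lsblk` shows for your drive:

```shell
# Query SMART attributes (requires the smartmontools package)
sudo smartctl -a /dev/sda | grep -i -E 'power_on_hours|reallocated'
```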

Before I start moving things around though, I need to get some SATA cables…