Configuring pfSense and VLANs on Proxmox with a single NIC and Managed Switch

I’m setting up a VLAN on my Proxmox server to segregate test VMs from my home network. I’ve configured a VLAN with id 10 on my D-Link switch for the port that my Proxmox server is connected to.

I’ve followed the majority of the steps in this excellent guide here, and captured additional screenshots along the way (mostly for my own reference).

In Proxmox, create a bridge with no IP, and enable ‘VLAN aware’:
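
For reference, here’s roughly what the resulting entry in /etc/network/interfaces looks like on the Proxmox host – a sketch assuming the single physical NIC stays attached to vmbr0, so the new bridge has no ports of its own and no IP (the vmbr99 name matches what’s used later):

# VLAN-aware bridge with no IP and no physical ports (assumes the NIC remains on vmbr0)
auto vmbr99
iface vmbr99 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094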

Create a new VM for pfSense from the pfSense ISO downloaded from here. For the network, use the default/original network bridge (vmbr0), not the new one just created above – this will be your WAN NIC for pfSense:

Once the VM is created, don’t boot it yet, but add a second Network Device – for the bridge, use the new one created in the earlier step – this will be your LAN NIC for machines within the VLAN:

Boot the VM and select the option to install:

Select the option to configure the network interfaces. In Proxmox, look at the two network devices – the first should be connected to your default Proxmox bridge (vmbr0) and the second should be the new one we just added (vmbr99):

For your WAN interface, select the one that is connected to your default Proxmox bridge, in this case vmbr0:

I’ve left everything at the defaults for the WAN interface and then pressed Continue:

On the next screen it shows LAN connection as ‘not assigned’ – select it and press ‘Assign/Continue’:

Select the second interface (vtnet1) that is connected to the new bridge, vmbr99:

Configure your VLAN tags – I’ve set mine to 10 to match what I’ve already configured on my D-Link managed switch:

I’ve configured my CIDR range as 10.0.10.0/24 and a DHCP range of 10.0.10.2 – 10.0.10.254 for this network:

Unless you have a pfSense Plus subscription, select the CE version:

To access the webConfigurator interface we need to temporarily disable the pfSense firewall, which we’ll address with a proper rule shortly. In the console for the pfSense VM, enter option 8 (Shell) and then run ‘pfctl -d’. It should respond with ‘pf disabled’:
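
For reference, the relevant pfctl commands (run from the console shell):

pfctl -d   # temporarily disable the packet filter ("pf disabled")
pfctl -e   # re-enable it manually (applying filter changes in the web UI also re-enables pf)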

In a browser, go to the WAN IP shown in the console, and log in with the default credentials admin/pfsense. Change your password when prompted.

Under Interfaces, select your WAN interface and uncheck the two ‘Block private networks and loopback addresses’ and ‘Block bogon networks’ options (to enable access to IPs on your VLAN subnets from your local home network IPs):

After applying changes, go back to your Proxmox console for the VM and run ‘pfctl -d’ again, and the web interface should be accessible again.

To set up a firewall rule to allow access to the pfSense VM from your home network, go to ‘Firewall / Rules / WAN’ and set up a rule with source = ‘WAN subnets’ and destination = ‘This firewall’. Save and apply. After a couple of seconds you should have access to the webConfigurator, and the rule should appear like this:

To enable DHCP for your VLAN subnet range, go to Services / DHCP server. If you see this message:

… follow the link and enable the ‘Kea DHCP’ backend.

Go back to Services / DHCP Server, check that DHCP is enabled, scroll down to Primary Address Pool and configure the IP range for your subnet:

From this point you should be ready to go.

To configure a VM to use the VLAN network and route through pfSense, instead of using the default vmbr0 bridge, select the new vmbr99 that you added:
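
If you prefer the CLI to the Proxmox UI, the equivalent qm command would be something like the sketch below – the VM ID 201 is just an example, and I’m assuming the VM’s network device also needs the VLAN tag 10 set so its traffic matches the tag pfSense uses on its LAN:

# attach the VM's first network device to the VLAN-aware bridge, tagged with VLAN 10
qm set 201 --net0 virtio,bridge=vmbr99,tag=10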

As an example, when setting up a new Ubuntu 24.04 server during the install from ISO, under Network Configuration you should see the VM magically gets a new IP allocated from your pfSense DHCP server:

In pfSense Status / DHCP Leases you should see this new allocated IP:

To allow access from your home lan to VMs within your new VLAN subnet, you need to:

a) add a pfSense firewall rule to allow traffic from your WAN subnet (or a specific IP) to specific IP destinations (or the whole VLAN subnet if you want to allow access to everything in the VLAN):

b) on the machine(s) that need to access your VMs in the new VLAN, add a route where the gateway is the IP address of your pfSense VM that is going to handle routing the traffic between your WAN and the VLAN:

sudo route add -net 10.0.10.0/24 [gateway ip]

Where:

  • 10.0.10.0/24 is the CIDR for the VLAN I want to access
  • [gateway ip] is the IP of the pfSense VM that’s connected to your home network
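
Note that the route add syntax above is the BSD/macOS form. On a Linux machine the equivalent (also not persistent across reboots) would be:

sudo ip route add 10.0.10.0/24 via [gateway ip]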

I tested ssh’ing into my new Ubuntu server on VLAN 10 and it’s all good!

Repurposing HDDs from my HP DL380 rack server in a downsized HP Elitedesk 800 g5 homelab

I used to run an HP DL380 G7 2U rack server for my homelab, but it would have been too expensive to ship during an international house move, so I abandoned it. In the meantime I went back to running Proxmox on my 2008 Mac Pro (which I did have shipped), but after a few years of service (I got it used from eBay in 2017) it’s now started to have some random hardware issues:

  • the RAM has slowly been dying, stick by stick
  • one of the RAM riser boards is showing a red error LED when any of the sticks are in a certain position on the board, so I think the whole board might need to be replaced

Anyway, short story is I’m down to 16GB in the Mac Pro.

In the meantime I’ve been browsing r/homelab and other places, and eventually settled on the realisation that a used small form factor office desktop, like the HP ProDesk and EliteDesk PCs, is more than enough for what I need (and substantially quieter). Given that you can pick them up in various specs for around £100+, I even thought about getting a couple to set up a Kubernetes cluster, but I just picked up one to start with, an EliteDesk 800 G5 SFF with an i7 and 32GB. The SFF model has room inside for a few HDDs and/or SSDs, and a slot on the mobo for an NVMe SSD too.

I wanted to do a stock check of what I had on the shelf that I could repurpose for the new EliteDesk – most of these were pulled from the DL380:

Purchased in 2017 – the first HDDs in the DL380, but replaced because of the fan noise (I’m not sure if I still have these):

  • HGST 7K750-500 HTS727550A9E364 (0J23561) 500GB 7200RPM 16MB Cache SATA 3.0Gb/s 2.5″

Purchased in 2017: the second set of disks added to the RAID array in my DL380:

  • 2x WD Black 2.5″ 750GB

Purchased in 2019 to add additional space to the RAID array on my DL380 – these were refurbs from Amazon:

  • 2x WD Blue 1TB Mobile Hard Disk Drive – 5400 RPM SATA 6 Gb/s 128MB Cache 2.5 Inch – WD10SPZX (Certified Refurbished)

The SSD that came with the EliteDesk is also used (I wasn’t lucky enough to get a new one), and according to SMART it has 13,300 power-on hours on it. No errors, but it’s about half way through its life. Although it’s probably ok for a while, I’ll likely swap it out. Now I’ve looked at the stats I may go for a new NVMe for the boot disk, and add the 2x WD Blacks for storage.
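
For reference, the power-on hours come from the drive’s SMART attributes, which you can check with smartctl – the device name below is just an example, use whatever your SSD shows up as:

# print the SMART attributes (Power_On_Hours is attribute 9 on most SATA drives)
sudo smartctl -A /dev/sda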

Before I start moving things around though, I need to get some SATA cables…

Running Ansible playbooks against RHEL 8 servers

I’m experimenting with some Ansible playbooks against local VMs, in particular (for some reason) a RHEL 8 VM, and getting some unintelligible errors:

File \"<frozen importlib._bootstrap_external>\", line 1112, in _legacy_get_spec\r\n  File \"<frozen importlib._bootstrap>\", line 441, in spec_from_loader\r\n  File \"<frozen importlib._bootstrap_external>\", line 544, in spec_from_file_location\r\n  File \"/tmp/ansible_ansible.legacy.setup_payload_z3bjr2pn/ansible_ansible.legacy.setup_payload.zip/ansible/module_utils/basic.py\", line 5\r\nSyntaxError: future feature annotations is not defined\r\n", "msg": "MODULE FAILURE: No start of json char found\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}

Googling for various parts of this error, I think the key error is:

SyntaxError: future feature annotations is not defined

… as this shows up in a few posts, and in particular this excellent post by Jeff Geerling that explains exactly what is going on with Python version incompatibilities between later versions of Ansible and RHEL 8 (whose default system Python is an older version, 3.6).
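
Another option, if you’d rather keep a current Ansible on the control machine, is to install a newer Python on the RHEL 8 target and point Ansible at it via ansible_python_interpreter. A rough sketch – the hostname is hypothetical, and python3.11 is only available in the AppStream repos on more recent RHEL 8 minor releases:

# on the RHEL 8 VM: install a newer interpreter alongside the system Python 3.6
sudo dnf install -y python3.11

# then in the Ansible inventory, point Ansible at it (hostname is an example):
# rhel8-vm ansible_python_interpreter=/usr/bin/python3.11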

Ansible version on my Mac:

❯ ansible-playbook --version
ansible-playbook [core 2.18.6]

Downgrading to Ansible 9.x with brew:

❯ brew install ansible@9
==> Fetching downloads for: ansible@9
Warning: ansible@9 has been deprecated because it is not maintained upstream! It will be disabled on 2025-11-30

For personal projects this is not a big deal, and I don’t think I’m particularly taking advantage of any newer Ansible features, but it’s a bit of a version dependency nightmare.

Now I get:

❯ ansible --version
ansible [core 2.16.14]

… and can successfully apply playbooks against my RHEL 8 VM.

Site update: Migrating hosting providers – automating deployment with Terraform, Ansible and GitLab CI Pipelines

Over the past couple of years I’ve been working on and off on a personal project to migrate and update a GitLab CI pipeline on my self-hosted GitLab for building and deploying this site. Unfortunately my self-hosted GitLab used to be on an e-waste HP DL380 G7 rack server that I no longer have after moving house, so I’ve gone back to using my old 2008 Mac Pro 3,1 as a Proxmox server, where I now run GitLab (which, oddly, is what I first used this Mac for several years ago).

As part of the update, I wanted to achieve a couple of goals:

  • update the GitLab pipeline to deploy to a staging server for testing, and then deploy to the live server
  • template any deployment files that are server/domain specific
  • update my Docker images for WordPress, updating the plugins, and anything that needs to be in the image to support the runtime, e.g. nginx, php plugins for nginx etc.
  • move to a new cloud provider that would allow me to provision VMs with Terraform
  • automate updating SSL certs with Let’s Encrypt certbot

I won’t share my completed pipeline because I don’t want to share specifics about how my WordPress site is configured, but I’ll give an overview of what I used to automate various parts of it:

While I’ve ended up with a working solution that meets my goals (I can run the pipeline to deploy to my test server or deploy the latest to my new live server), I still have a few areas I could improve:

  • GitLab CI Environments and parameterization – I don’t feel I’ve taken enough advantage of these yet. The jobs that deploy to my test server run automatically, but the deploy to my live site is the same set of jobs that I run manually, configured to deploy to a different server – I feel there’s more I can parameterize here and need to do some more experimentation in this area (see the sketch below).
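
As a rough idea of the kind of parameterization I have in mind – job names, variables and inventory paths here are hypothetical, not my actual pipeline – a shared job template with per-environment variables and a manual gate on the live deploy would look something like:

# hypothetical .gitlab-ci.yml sketch: one templated deploy job, parameterized per environment
.deploy-template:
  script:
    - ansible-playbook -i "inventories/$DEPLOY_ENV" site.yml

deploy-staging:
  extends: .deploy-template
  variables:
    DEPLOY_ENV: staging
  environment: staging

deploy-live:
  extends: .deploy-template
  variables:
    DEPLOY_ENV: live
  environment: production
  when: manual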

Although this effort was spread over a couple of years before I got to a point of completion, it was a great opportunity to gain some more experience across all these tools.