Running Proxmox VXLAN SDN over WireGuard with OPNsense NAT
This is a setup for three Proxmox VE hosts where isolated guest networks span the cluster, inter-node tenant traffic is encrypted, and one OPNsense VM acts as the edge gateway for all tenant VNets.
The core layout is:
- a WireGuard full mesh across the Proxmox hosts
- a Proxmox SDN VXLAN Zone whose peer list uses the hosts’ wg0 addresses
- one VNet per isolated tenant subnet
- one OPNsense VM that provides each VNet’s default gateway, DHCP, NAT, and port forwarding
VXLAN encapsulates L2 frames in UDP, but it does not provide encryption by itself. Using WireGuard addresses as VXLAN peers puts the VXLAN underlay inside the encrypted WireGuard tunnels.
All addresses in this post are examples. Replace them with the LAN, gateway, bridge names, and host names used in your environment. Make sure none of the tenant, WireGuard, or management subnets overlap.
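The no-overlap requirement can be checked programmatically before any configuration is touched. The snippet below is a small sketch using Python's ipaddress module; the subnet names and values are this post's examples, so substitute your own.

```python
import ipaddress

# The example subnets from this post; replace with your own values.
subnets = {
    "existing LAN": "192.168.10.0/24",
    "wireguard": "10.255.255.0/24",
    "vnet10": "10.10.10.0/24",
    "vnet20": "10.10.20.0/24",
    "vnet30": "10.10.30.0/24",
}

def overlapping_pairs(named_cidrs):
    """Return every pair of subnet names whose address ranges overlap."""
    nets = [(name, ipaddress.ip_network(cidr)) for name, cidr in named_cidrs.items()]
    return [
        (a_name, b_name)
        for i, (a_name, a_net) in enumerate(nets)
        for b_name, b_net in nets[i + 1:]
        if a_net.overlaps(b_net)
    ]

# An empty list means the layout is safe to use.
print(overlapping_pairs(subnets))
```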
Target layout
Existing LAN / underlay: 192.168.10.0/24
pve-a 192.168.10.11
pve-b 192.168.10.12
pve-c 192.168.10.13
WireGuard underlay:
pve-a wg0 10.255.255.1/24
pve-b wg0 10.255.255.2/24
pve-c wg0 10.255.255.3/24
Proxmox SDN VXLAN:
zone: vxwg
peers: 10.255.255.1,10.255.255.2,10.255.255.3
mtu: 1370
VNets:
vnet10 -> subnet 10.10.10.0/24
vnet20 -> subnet 10.10.20.0/24
vnet30 -> subnet 10.10.30.0/24
OPNsense VM:
WAN -> existing LAN bridge, 192.168.10.50/24
LAN10 -> vnet10, 10.10.10.1/24
LAN20 -> vnet20, 10.10.20.1/24
LAN30 -> vnet30, 10.10.30.1/24
Setup diagram
flowchart TB
subgraph LAN["Existing LAN / underlay<br/>192.168.10.0/24"]
PVEA["pve-a<br/>LAN 192.168.10.11<br/>wg0 10.255.255.1"]
PVEB["pve-b<br/>LAN 192.168.10.12<br/>wg0 10.255.255.2"]
PVEC["pve-c<br/>LAN 192.168.10.13<br/>wg0 10.255.255.3"]
WAN["OPNsense WAN<br/>192.168.10.50/24"]
GW["Upstream gateway"]
end
PVEA <-->|WireGuard UDP/51820| PVEB
PVEB <-->|WireGuard UDP/51820| PVEC
PVEC <-->|WireGuard UDP/51820| PVEA
subgraph VXWG["Proxmox SDN VXLAN zone: vxwg<br/>peers: 10.255.255.1, 10.255.255.2, 10.255.255.3<br/>MTU 1370"]
VXPEERS["VXLAN over wg0<br/>UDP/4789"]
VNET10["vnet10<br/>10.10.10.0/24"]
VNET20["vnet20<br/>10.10.20.0/24"]
VNET30["vnet30<br/>10.10.30.0/24"]
end
PVEA -.-> VXPEERS
PVEB -.-> VXPEERS
PVEC -.-> VXPEERS
WAN --> EDGE["OPNsense<br/>NAT<br/>DHCP<br/>firewall"]
EDGE --> LAN10["LAN10 gateway<br/>10.10.10.1/24<br/>MTU 1370"]
EDGE --> LAN20["LAN20 gateway<br/>10.10.20.1/24<br/>MTU 1370"]
EDGE --> LAN30["LAN30 gateway<br/>10.10.30.1/24<br/>MTU 1370"]
LAN10 --- VNET10
LAN20 --- VNET20
LAN30 --- VNET30
VNET10 --> VM10["tenant VMs<br/>10.10.10.0/24"]
VNET20 --> VM20["tenant VMs<br/>10.10.20.0/24"]
VNET30 --> VM30["tenant VMs<br/>10.10.30.0/24"]
GW --- WAN
192.168.10.50 is the tenant service front-door IP on the existing LAN.
You can expose services through that single address with port forwarding.
This does not remove or hide the three Proxmox management IPs.
If tenants must not reach the Proxmox management network, add firewall rules for that explicitly.
This setup encrypts tenant VXLAN traffic that crosses Proxmox nodes. Traffic between guests on the same host does not leave that host, and traffic from the OPNsense WAN side to the existing LAN is not inside WireGuard.
The single OPNsense VM is also a single gateway and NAT point. For high availability, plan OPNsense HA/CARP, DHCP behavior, VM placement, and failover of the front-door IP separately. This post keeps the design intentionally simple.
Preflight checks
Before configuring SDN, confirm these assumptions.
- Proxmox VE SDN is available on every node.
On Proxmox VE 8.1 and newer, the core SDN packages are installed by default.
On older upgraded hosts, install the SDN package and make sure
/etc/network/interfaces includes files from /etc/network/interfaces.d/.
- The existing LAN allows each Proxmox node to reach the other nodes on UDP/51820.
- The Proxmox host firewall, if enabled, can allow WireGuard on the LAN side and VXLAN on wg0.
- The OPNsense WAN IP, tenant subnets, and WireGuard subnet do not overlap with any existing network.
- This guide is IPv4-focused. If you add IPv6 tenant networks, revisit MTU, Router Advertisements, firewall rules, and NAT assumptions.
For older Proxmox VE installations, install the SDN package and make sure the interface include line is present.
sudo apt update
sudo apt install libpve-network-perl ifupdown2
sudo grep -q '^source /etc/network/interfaces.d/\*' /etc/network/interfaces \
|| echo 'source /etc/network/interfaces.d/*' | sudo tee -a /etc/network/interfaces
Build the WireGuard full mesh
Install WireGuard and generate keys on every Proxmox node.
sudo apt update
sudo apt install wireguard
sudo install -d -m 700 /etc/wireguard
sudo sh -c 'umask 077; wg genkey > /etc/wireguard/wg0.key'
sudo sh -c 'wg pubkey < /etc/wireguard/wg0.key > /etc/wireguard/wg0.pub'
sudo chmod 600 /etc/wireguard/wg0.key
sudo chmod 644 /etc/wireguard/wg0.pub
sudo cat /etc/wireguard/wg0.pub
Collect the public keys from all three nodes. For the PrivateKey value in each node’s
/etc/wireguard/wg0.conf, use that node’s private key.
sudo cat /etc/wireguard/wg0.key
pve-a
[Interface]
Address = 10.255.255.1/24
ListenPort = 51820
PrivateKey = <PRIVATE_KEY_OF_PVE_A>
MTU = 1420
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_B>
AllowedIPs = 10.255.255.2/32
Endpoint = 192.168.10.12:51820
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_C>
AllowedIPs = 10.255.255.3/32
Endpoint = 192.168.10.13:51820
pve-b
[Interface]
Address = 10.255.255.2/24
ListenPort = 51820
PrivateKey = <PRIVATE_KEY_OF_PVE_B>
MTU = 1420
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_A>
AllowedIPs = 10.255.255.1/32
Endpoint = 192.168.10.11:51820
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_C>
AllowedIPs = 10.255.255.3/32
Endpoint = 192.168.10.13:51820
pve-c
[Interface]
Address = 10.255.255.3/24
ListenPort = 51820
PrivateKey = <PRIVATE_KEY_OF_PVE_C>
MTU = 1420
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_A>
AllowedIPs = 10.255.255.1/32
Endpoint = 192.168.10.11:51820
[Peer]
PublicKey = <PUBLIC_KEY_OF_PVE_B>
AllowedIPs = 10.255.255.2/32
Endpoint = 192.168.10.12:51820
Keep AllowedIPs limited to each peer’s WireGuard /32.
Do not add tenant subnets like 10.10.10.0/24, 10.10.20.0/24,
or 10.10.30.0/24 here.
Those subnets should be carried at L2 by VXLAN, not routed at L3 through WireGuard.
wg-quick derives routes from peer AllowedIPs, so adding tenant subnets here would
create the wrong L3 routing behavior for this design.
If the nodes are on the same LAN and are not behind NAT, PersistentKeepalive is usually unnecessary.
Use it only when a peer behind NAT or a stateful firewall needs to keep an idle mapping alive.
Enable WireGuard with systemd and verify it.
sudo systemctl enable --now wg-quick@wg0
sudo wg show
ping -c 3 10.255.255.2
ping -c 3 10.255.255.3
If you already brought the interface up manually with wg-quick up wg0, do not immediately
run systemctl enable --now wg-quick@wg0 against the already-created interface.
Instead, either only enable the service for the next boot, or bring the interface down and let systemd start it.
# Option A: leave the manual session up and enable only for boot
sudo systemctl enable wg-quick@wg0
# Option B: hand control to systemd now
sudo wg-quick down wg0
sudo systemctl enable --now wg-quick@wg0
Before creating the VXLAN Zone, confirm that traffic to the peer WireGuard IPs uses wg0.
ip route get 10.255.255.2
ip route get 10.255.255.3
Allow traffic through the host firewall
If the Proxmox host firewall is enabled, allow UDP/51820 between the node LAN addresses.
VXLAN uses UDP/4789 by default.
If the host firewall also filters input on wg0, allow UDP/4789 from 10.255.255.0/24 on wg0.
In short:
- LAN side: UDP/51820 from the other Proxmox node IPs
- wg0 side: UDP/4789 from the WireGuard subnet
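As a concrete sketch, those two allow rules could look like the following in plain nftables syntax. The table and chain names are made up for this example; translate the same intent into the Proxmox firewall GUI or your existing ruleset rather than copying this verbatim.

```
table inet vxwg {
    chain input {
        type filter hook input priority 0; policy accept;
        # WireGuard from the other Proxmox nodes on the LAN side
        ip saddr 192.168.10.0/24 udp dport 51820 accept
        # VXLAN only from inside the WireGuard tunnels
        iifname "wg0" ip saddr 10.255.255.0/24 udp dport 4789 accept
    }
}
```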
Create the Proxmox SDN VXLAN Zone
In the Proxmox GUI, create a VXLAN Zone.
Datacenter -> SDN -> Zones -> Create -> VXLAN
ID: vxwg
Nodes: pve-a, pve-b, pve-c
Peers: 10.255.255.1,10.255.255.2,10.255.255.3
MTU: 1370
List all node WireGuard addresses in the VXLAN peer list, including the local node’s address.
The routing table should make these peer addresses reachable through wg0.
If the WireGuard MTU is 1420, start the VXLAN Zone MTU at 1370.
VXLAN encapsulation needs 50 bytes, so 1420 - 50 = 1370.
If the real LAN underlay uses an MTU below 1500, reduce the WireGuard and VXLAN MTUs further.
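The MTU arithmetic generalizes to other underlay sizes. A minimal sketch, assuming the conventional overheads of 80 bytes for WireGuard (hence its default 1420 on a 1500-byte link) and 50 bytes for VXLAN over IPv4:

```python
# Conventional overhead assumptions:
# WireGuard reserves 80 bytes (its default MTU is 1420 on a 1500-byte link),
# VXLAN over IPv4 adds 50 bytes (outer Ethernet + IPv4 + UDP + VXLAN headers).
WIREGUARD_OVERHEAD = 80
VXLAN_OVERHEAD = 50

def mtu_budget(underlay_mtu: int) -> dict:
    """Compute the WireGuard and VXLAN Zone MTUs for a given underlay MTU."""
    wg_mtu = underlay_mtu - WIREGUARD_OVERHEAD
    return {"wireguard_mtu": wg_mtu, "vxlan_zone_mtu": wg_mtu - VXLAN_OVERHEAD}

print(mtu_budget(1500))  # {'wireguard_mtu': 1420, 'vxlan_zone_mtu': 1370}
print(mtu_budget(1400))  # a smaller underlay pushes both values down
```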
Then create the VNets.
Datacenter -> SDN -> VNets -> Create
VNet: vnet10 Zone: vxwg Tag/VNI: 10010
VNet: vnet20 Zone: vxwg Tag/VNI: 10020
VNet: vnet30 Zone: vxwg Tag/VNI: 10030
Apply the SDN changes after creating the zone and VNets.
This setup does not rely on Proxmox SDN DHCP, SDN Subnets, or SNAT. OPNsense owns the L3 gateway, DHCP, and NAT roles for each tenant subnet. You can document the CIDR ranges in Proxmox if desired, but do not enable a competing Proxmox-provided gateway, DHCP service, or SNAT for these VNets.
After applying SDN changes, you can confirm that Proxmox created local bridge interfaces.
ip link show vnet10
ip link show vnet20
ip link show vnet30
To confirm VXLAN traffic is using WireGuard, run this on two nodes while pinging a VM or gateway across nodes.
sudo tcpdump -ni wg0 udp port 4789
Deploy the OPNsense VM
Create the OPNsense VM on one node first. Attach four NICs.
net0 -> existing LAN bridge, for example vmbr0 = WAN
net1 -> vnet10 = LAN10
net2 -> vnet20 = LAN20
net3 -> vnet30 = LAN30
Use VirtIO NICs.
Inside OPNsense, assign addresses like this.
WAN:
IPv4: 192.168.10.50/24
Gateway: <LAN_GATEWAY>
MTU: default or 1500
LAN10:
IPv4: 10.10.10.1/24
MTU: 1370
LAN20:
IPv4: 10.10.20.1/24
MTU: 1370
LAN30:
IPv4: 10.10.30.1/24
MTU: 1370
Because the WAN address is an RFC1918 private address, disable Block private networks on the OPNsense WAN interface. If the existing LAN uses bogon or special-use addresses, review Block bogon networks as well.
OPNsense LAN10, LAN20, and LAN30 sit on VXLAN VNets, so set their interface MTU to 1370,
matching the VXLAN Zone MTU.
The WAN interface is attached directly to the existing LAN, so leave it at the default or 1500
unless the upstream LAN requires a smaller MTU.
If you later lower the VXLAN Zone MTU, set the three OPNsense LAN interface MTUs to the same value.
When a tenant VM and the OPNsense VM are on different Proxmox nodes, the tenant VM’s default-gateway traffic crosses VXLAN over WireGuard. When they are on the same node, that traffic stays local to the host.
Enable DHCP on each internal interface in OPNsense. For a small environment, Dnsmasq DHCP is enough.
LAN10: 10.10.10.100 - 10.10.10.199
LAN20: 10.10.20.100 - 10.10.20.199
LAN30: 10.10.30.100 - 10.10.30.199
If guest operating systems do not automatically use the VNet MTU, either set the guest interface MTU
manually to 1370, or advertise DHCP option 26 with value 1370 where your DHCP service and clients
support it. Do not assume every client honors a DHCP-provided MTU option; test large packets.
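For reference, in a dnsmasq-based DHCP service the interface-MTU option is a one-line setting. Whether and where OPNsense exposes this depends on the configured DHCP backend, so treat this as an illustrative fragment:

```
# DHCP option 26 (interface MTU) in dnsmasq configuration syntax
dhcp-option=26,1370
```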
Do not install the OPNsense WireGuard plugin for this build. The encrypted segment is the VXLAN underlay between Proxmox hosts, not inside the firewall VM.
Outbound NAT
Go to Firewall -> NAT -> Outbound in OPNsense. For a normal setup with one external/front-door IP, leave Outbound NAT on Automatic.
Tenant VMs use OPNsense as their default gateway,
and OPNsense NATs them through 192.168.10.50 toward the existing LAN or upstream router.
This avoids having to add static routes for 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24
on the upstream network.
Isolate the subnets
The VNets separate L2.
However, OPNsense is the router between 10.10.10.0/24, 10.10.20.0/24, and 10.10.30.0/24.
So inter-subnet blocking belongs in OPNsense firewall rules.
OPNsense rules with quick enabled, which is the default, are evaluated first match wins. Put the block rules above the general pass rule.
LAN10 rules
- Block 10.10.10.0/24 -> 10.10.20.0/24
- Block 10.10.10.0/24 -> 10.10.30.0/24
- Optional: block 10.10.10.0/24 -> 192.168.10.0/24 except approved services
- Pass 10.10.10.0/24 -> any
LAN20 rules
- Block 10.10.20.0/24 -> 10.10.10.0/24
- Block 10.10.20.0/24 -> 10.10.30.0/24
- Optional: block 10.10.20.0/24 -> 192.168.10.0/24 except approved services
- Pass 10.10.20.0/24 -> any
LAN30 rules
- Block 10.10.30.0/24 -> 10.10.10.0/24
- Block 10.10.30.0/24 -> 10.10.20.0/24
- Optional: block 10.10.30.0/24 -> 192.168.10.0/24 except approved services
- Pass 10.10.30.0/24 -> any
Repeat the same pattern if you later add vnet40.
The optional management-LAN block matters because a broad Pass -> any rule also allows tenant VMs
to initiate connections toward the existing LAN unless a more specific block rule appears first.
Use aliases for tenant networks, management networks, and allowed services to keep the rules readable.
Port forwarding
Inbound access to tenant VMs is configured with OPNsense Destination NAT, also known as Port Forward.
For example, publish an HTTPS service on a VM in vnet20 through external port 8443.
Interface: WAN
Protocol: TCP
Destination: 192.168.10.50
Destination port: 8443
Redirect target IP: 10.10.20.21
Redirect target port: 443
Filter rule: add associated filter rule, or create the filter rule manually
If you create the filter rule manually, create it on the WAN interface. Because NAT is processed before filtering, the firewall rule destination is the translated internal address.
Interface: WAN
Action: Pass
Protocol: TCP
Source: any, or a restricted source alias
Destination: 10.10.20.21
Destination port: 443
Other examples:
WAN TCP 192.168.10.50:2222 -> 10.10.10.11:22
WAN TCP 192.168.10.50:33061 -> 10.10.30.31:3306
As long as each external port is unique, one front-door IP can publish many internal services.
Restrict source addresses where possible; Source: any exposes the service to anything that can reach
192.168.10.50 on the existing LAN or through any upstream port forward.
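A published port can be smoke-tested from another machine with a plain TCP connect. The sketch below is a minimal helper; the commented host and port are this post's example values, not anything a real service guarantees to answer on.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the published HTTPS forward from a machine on the existing LAN.
# print(can_connect("192.168.10.50", 8443))
```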
Attach workload VMs
Attach each VM to exactly one VNet unless the VM is intentionally multi-homed.
vnet10 guest -> gateway 10.10.10.1
vnet20 guest -> gateway 10.10.20.1
vnet30 guest -> gateway 10.10.30.1
Use OPNsense DHCP or assign static addresses. Since each VNet appears as a common Linux bridge on every node, the VM can run on any node in the cluster.
For static guests, set the interface MTU to 1370 if the guest still defaults to 1500.
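On a Debian-style guest using ifupdown, pinning the MTU looks like the following. The interface name ens18 is only an example; use the guest's actual NIC name.

```
# /etc/network/interfaces fragment on the guest
auto ens18
iface ens18 inet dhcp
    mtu 1370
```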
Check MTU
MTU is the most likely source of problems in this design.
Test the WireGuard path first.
ping -c 3 -M do -s 1392 10.255.255.2
For IPv4 ICMP, 1392 + 28 = 1420.
Then test between guests on the same VNet but different nodes.
ping -c 3 -M do -s 1342 10.10.10.12
1342 + 28 = 1370.
If HTTPS stalls, package managers hang, or large pings fail,
lower the VXLAN MTU further or explicitly set guest interface MTU to 1370 or below.
If only some paths fail, check for a lower MTU on the physical LAN, WAN uplink, nested virtualization layer,
or any intermediate firewall.
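When probing sizes by hand gets tedious, the largest passing ping size can be found by binary search. The sketch below keeps the search logic separate from the actual probe; in practice, probe would wrap a command like ping -M do -s <size> <target> and report whether it succeeded.

```python
from typing import Callable, Optional

def largest_passing_size(probe: Callable[[int], bool],
                         lo: int = 1200, hi: int = 1472) -> Optional[int]:
    """Binary-search the largest payload size in [lo, hi] for which probe succeeds.

    Assumes probe is monotonic: if a size fails, every larger size fails too,
    which holds for MTU-limited paths. Returns None if even `lo` fails.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(mid):
            best = mid
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# A path MTU of 1370 admits IPv4 ICMP payloads up to 1342 bytes (1342 + 28 = 1370).
print(largest_passing_size(lambda size: size + 28 <= 1370))  # 1342
```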
When internal clients need the front-door IP
If a tenant VM must reach a published service through 192.168.10.50 instead of the private address,
use one of these:
- split DNS, so internal clients resolve the service to its internal address
- OPNsense reflection or hairpin NAT
Split DNS is usually simpler and keeps source addresses clearer. Use reflection or hairpin NAT only when clients really need to use the front-door address internally. If the client and server are in the same L2 subnet, hairpin NAT needs both destination NAT and source NAT to avoid asymmetric return traffic.
Troubleshooting checklist
Use these checks when connectivity fails.
# WireGuard status and handshakes
sudo wg show
# Route to a VXLAN peer should use wg0
ip route get 10.255.255.2
# VXLAN packets should appear on wg0 when cross-node tenant traffic flows
sudo tcpdump -ni wg0 udp port 4789
# Check created SDN bridge interfaces
ip link show vnet10
bridge link show | grep vnet
# Check large packets on the WireGuard and tenant paths
ping -c 3 -M do -s 1392 10.255.255.2
ping -c 3 -M do -s 1342 10.10.10.12
In OPNsense, check these areas.
- Firewall -> Log Files -> Live View for blocked tenant or WAN-forwarded traffic
- Firewall -> NAT -> Outbound to confirm automatic outbound NAT is active
- Firewall -> NAT -> Destination NAT (Port Forward) for DNAT rules
- Firewall -> Rules -> WAN for associated or manually created pass rules
- Interfaces to confirm LAN10, LAN20, and LAN30 MTU values
Build order
- Confirm SDN support and firewall prerequisites on all Proxmox nodes.
- Build the WireGuard full mesh between the three Proxmox nodes.
- Verify ping between 10.255.255.1, 10.255.255.2, and 10.255.255.3.
- Verify ip route get to each peer uses wg0.
- Create the Proxmox SDN VXLAN Zone vxwg.
- Create vnet10, vnet20, and vnet30.
- Apply SDN changes and confirm the VNet interfaces exist on every node.
- Attach one WAN and three LAN NICs to the OPNsense VM.
- Set the OPNsense WAN to 192.168.10.50/24.
- Set the OPNsense LAN gateways to 10.10.10.1, 10.10.20.1, and 10.10.30.1.
- Set the OPNsense LAN10, LAN20, and LAN30 interface MTUs to 1370.
- Enable DHCP on each OPNsense tenant interface.
- Add the inter-subnet block rules and any management-LAN block rules.
- Place one test VM in each VNet.
- Test cross-node VNet connectivity and MTU.
- Test one port forward, for example 192.168.10.50:2222 -> 10.10.10.11:22.
- Move real workloads only after the test path works.
References
- Proxmox VE SDN documentation
- Proxmox VE SDN source documentation
- Proxmox VE network documentation
- WireGuard installation
- WireGuard quick start
- wg-quick Linux man page
- OPNsense interface configuration
- OPNsense DHCP
- OPNsense NAT
- OPNsense firewall rules
- OPNsense reflection and hairpin NAT
Acknowledgements
This post was created with assistance from AI.