1. Document history

Date        Author  Changes

2020-08-09  dh      Initial version

2020-08-09  dh      Some updates on integration with web site and github, license added, preface rewritten

2. Preface

This is a book about setting up a Linux server and a number of services on it. It describes a so-called "virtualized setup". That is, the physical machine on which everything runs is separated into a number of "virtual machines" which run completely independent instances of operating systems. The virtual machines are held together by the operating system on the physical host - sometimes referred to as the "hypervisor".

This setup is not totally unusual. In fact, these days it’s the normal way of working with many-core-multi-gigabyte systems available for purchase or rent.

There is, however, still a remarkable shortage of descriptions of how to set up such a system with IPv6 as the primary protocol. Even though the classic IPv4 address pool has been exhausted for several years now, most guides continue to describe how to work with one public IPv4 address for the host system and a network of private IPv4 addresses for the virtual machines.

The setup described here genuinely uses IPv6 as its communication protocol. The most important advantage is that all virtual machines can be accessed directly via official IP addresses without any further ado. Of course, this only works if the client also has IPv6 connectivity, which is finally the case for more and more systems these days.

We also show how IPv6 and IPv4 get interconnected in this setup: How can an IPv6-only virtual machine access an IPv4-only server? How can IPv4-only clients access services on the IPv6-only virtual machines? Are there services which definitely need an IPv4 address on a virtual host (spoiler: yes) and how do we attach them?

2.1. License

You are free to:

  • Share — copy and redistribute the material in any medium or format

  • Adapt — remix, transform, and build upon the material

for any purpose, even commercially.

Under the following terms:

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • ShareAlike — If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

2.2. Distribution

This guide is an Asciidoc document. It is hosted on Github and can be read online (or downloaded) as an HTML or PDF document.

2.3. About the author

Born in 1972, Dirk Hillbrecht started working with computers in 1986, when his school offered a programming course using Turbo Pascal 3.0 on Apple II computers. In the 1990s he studied "Mathematics with focus on Computer Science" in Hannover. He administrated networks of Sun and Silicon Graphics machines at the university and witnessed the rise of Linux almost from its beginnings. Since 2000, he has been writing application software for fleet management and carsharing in Java. He still lives in Hannover, Germany.

2.4. About this guide

After administrating Unix and Linux systems since the 1990s, I rented my first real root server for myself only in 2016. While that machine worked and served things quite reliably, I felt it aging pretty fast. It was an Ubuntu 16.04 system, it had only 4 GB of RAM and a quite outdated Athlon64 processor.

Due to these constraints it was not possible to run virtualized setups - which I wanted to do to separate services. E.g. that machine had a complete e-mail server setup which I intended to use for my private e-mail but still hesitated to activate as it was rather complex and "manually set up". I had followed an instruction guide on the internet which even its author declared outdated only two years later.

There were other shortcomings like a less-than-optimal Let’s Encrypt setup. Let’s Encrypt started almost at the same time as my server installation and there have been quite some optimisations since then.

All in all, my setup was aging, it was not sufficient for much more stuff, and in the meantime you could get far more capable hardware for not that much more money.

So, in 2018 I decided to start "from scratch" with the system design. The first and most important design decision was about IP connectivity. IPv4 address space had been exhausted for more than five years by then. IPv6 had been available for a decade or so. I wanted to do it in a modern way:

  • I assume having only one IPv4 address but a sufficiently large IPv6 network routed to the rented server.

  • I build the system in a way that all "real" services run in virtual machines managed by Linux' KVM system.

  • The physical host gets the IPv4 address.

  • All virtual machines get only IPv6 addresses. No official IPv4, not even a dual stack with private IPv4 and masquerading. Only IPv6.

  • Virtual machines can access IPv4-only services in the internet through a NAT64/DNS64 gateway on the host.

  • Services on the virtual machines are generally reachable via IPv6 only.

  • To serve incoming IPv4 requests, application proxies on the physical host forward traffic to the actual service handlers on a virtual machine if needed.

  • If for any reason a service on a virtual machine absolutely needs its own direct IPv4 connectivity, it is added "on top" of the setup.

Implementing this scheme initially took me about a week. I wrote all steps down and published a series of articles in my personal blog. As systems evolve and interest remained, I continued to update the articles. Eventually, I came to the conclusion that this scheme of updating was not flexible enough. So, I decided to rework the articles (and some unpublished articles with further explanations) into an Asciidoc document and publish it on github. As usual, this project became a bit bigger than I expected, and after integrating and updating all the information I suddenly had a PDF document of 80 pages.

That was a bit more than I expected, but things are as they are. Have fun reading this guide! I hope it helps you set up your own, modern, IPv6-first environment.

About IP addresses
IP addresses in this guide are made up and sometimes scrambled like 1.2.X.Y or 1234:5:X:Y::abcd. X and Y actually have to be numbers, of course…​

The physical host

The first thing to start a virtualized server setup is to prepare the physical host. Sometimes it’s also called "hypervisor" or "domain zero". It all means the same: It is the actual hardware ("bare metal") with an operating system on it which hosts all the virtual machines to come. I’ll stick with the term "physical host" in this document when I reference this installation.

The operating system of the physical host can be totally different from the operating systems on the virtual machines. It can be a specialized system like VMWare’s ESXi or an extremely stripped-down Linux or BSD system. The setup described here, however, is based on a stock Ubuntu installation for the physical host. It is a very broadly used system with tons of descriptions for different configurations. Chances are good that you will find solutions for specific problems of your installation on the internet rather easily.

The physical host has to cope with the specific network infrastructure it is installed in. Depending on this external setup, configuration can be quite different between installations.

3. Physical hosts with one /64 network

IPv6 standards define that every host with IPv6 connectivity must have at least one /64 network assigned. One such environment is offered by the German hosting provider Hetzner Online: they only route one /64 network to each host. That disallows any routed setup between the physical host and the virtual machines. We’ll use the only network we have to access both the physical host and all virtual machines. The default gateway for the physical host will be the link-local address pointing to Hetzner’s infrastructure. The default gateway for the virtual machines will be the physical host.

Why Hetzner Online?

This document aims to be an independent guide for IPv6-first setups. However, my main information and research sources are the servers I administrate myself. They are located at the German hoster Hetzner Online, so my knowledge and experience come mainly from their environment.

Hopefully, other environments will be added to this guide in the future to make it less centered on one particular provider.

3.1. Initial setup of the host system at Hetzner’s

If you rent a Hetzner server, order it with the "rescue system" booted. That gives the most control over how the system is configured. I suggest that you access the server in this early stage by its IP address only. We’ll change the IPv6 address of the system later in the install process. If you want to have a DNS entry, use something interim to throw away later, e.g. <plannedname>-install.example.org.

I obtained my server with Hetzner’s "rescue system" booted, which allows the OS installation through the installimage script. I wanted to work with as many default components and configurations as possible, so I decided on the Ubuntu 20.04 [1] install image offered by Hetzner.

Stay with default setups as much as possible
I strongly advise you to stick with the setups offered by your hosting provider as much as possible. It increases your chances of getting support, and you are much more likely to find documentation if you run into problems.

Logging into the new server gives you a welcoming login screen somewhat like this:

Login messages in a Hetzner rescue system
dh@workstation:~$ ssh -l root 2a01:4f8:1:3::2
The authenticity of host '2a01:4f8:1:3::2 (2a01:4f8:1:3::2)' can't be established.
ECDSA key fingerprint is SHA256:dbSkzn0MlzoJXr8yeEuR0pNp9FEH4mNsfKgectkTedk.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '2a01:4f8:1:3::2' (ECDSA) to the list of known hosts.

-------------------------------------------------------------------

  Welcome to the Hetzner Rescue System.

  This Rescue System is based on Debian 8.0 (jessie) with a newer
  kernel. You can install software as in a normal system.

  To install a new operating system from one of our prebuilt
  images, run 'installimage' and follow the instructions.

  More information at http://wiki.hetzner.de

-------------------------------------------------------------------

Hardware data:

   CPU1: AMD Athlon(tm) 64 X2 Dual Core Processor 5600+ (Cores 2)
   Memory:  3704 MB
   Disk /dev/sda: 750 GB (=> 698 GiB)
   Disk /dev/sdb: 750 GB (=> 698 GiB)
   Total capacity 1397 GiB with 2 Disks

Network data:
   eth0  LINK: yes
         MAC:  00:24:21:21:ac:99
         IP:   241.61.86.241
         IPv6: 2a01:4f8:1:3::2/64
         RealTek RTL-8169 Gigabit Ethernet driver

root@rescue ~ #

You might want to write down the MAC address and IP addresses of the system. Note, however, that they are also included in the delivery e-mail sent by Hetzner when the server is ready.

SSH keys for Hetzner rescue images
If you put your ssh public key into your Hetzner account and select it in the order process for the machine, it will not only be put into the rescue system but also into the root account of the freshly installed machine. If you work this way, you never have to enter any passwords during the installation process. You can also select it each time you request a rescue system.

The system has two hard disks. I use them as a software RAID 1 as offered by the install script. This allows for at least some disaster recovery in case of a disk failure. And for systems like this, I do not create any further partitions (apart from the Hetzner-suggested swap and /boot partitions). The KVM disks will go to qcow2 files which are simply put into the host’s file system. Modern file systems fortunately do not have any problems with 200+ GB files, and this way all the virtual guest hard disks are also covered by the RAID.

Hetzner’s installimage process is controlled by a configuration file. Its (stripped-down) version for the system I work on reads like this:

installimage control file for the physical host
##  HARD DISK DRIVE(S):
# Onboard: SAMSUNG HD753LJ
DRIVE1 /dev/sda
# Onboard: SAMSUNG HD753LJ
DRIVE2 /dev/sdb

##  SOFTWARE RAID:
## activate software RAID?  < 0 | 1 >
SWRAID 1
## Choose the level for the software RAID < 0 | 1 | 10 >
SWRAIDLEVEL 1

##  BOOTLOADER:
BOOTLOADER grub

##  HOSTNAME:
HOSTNAME whatever (change this to your system's host name, without the domain part)

##  PARTITIONS / FILESYSTEMS: (keep the defaults)
PART swap swap 4G
PART /boot ext3 512M
PART / ext4 all

##  OPERATING SYSTEM IMAGE: (you have selected this earlier in installimage)
IMAGE /root/.oldroot/nfs/install/../images/Ubuntu-1804-bionic-64-minimal.tar.gz
Installing the server deletes all data previously on it!
Just to be sure: If you use installimage (or similar installation routines from other providers) on an existing system, all data will be deleted on that system. If unsure, check twice that you are on the right system. A mistake at this point may be impossible to correct afterwards!
Installation log of installimage
                Hetzner Online GmbH - installimage

  Your server will be installed now, this will take some minutes
             You can abort at any time with CTRL+C ...

         :  Reading configuration                           done
         :  Loading image file variables                    done
         :  Loading ubuntu specific functions               done
   1/16  :  Deleting partitions                             done
   2/16  :  Test partition size                             done
   3/16  :  Creating partitions and /etc/fstab              done
   4/16  :  Creating software RAID level 1                  done
   5/16  :  Formatting partitions
         :    formatting /dev/md/0 with swap                done
         :    formatting /dev/md/1 with ext3                done
         :    formatting /dev/md/2 with ext4                done
   6/16  :  Mounting partitions                             done
   7/16  :  Sync time via ntp                               done
         :  Importing public key for image validation       done
   8/16  :  Validating image before starting extraction     done
   9/16  :  Extracting image (local)                        done
  10/16  :  Setting up network config                       done
  11/16  :  Executing additional commands
         :    Setting hostname                              done
         :    Generating new SSH keys                       done
         :    Generating mdadm config                       done
         :    Generating ramdisk                            done
         :    Generating ntp config                         done
  12/16  :  Setting up miscellaneous files                  done
  13/16  :  Configuring authentication
         :    Fetching SSH keys                             done
         :    Disabling root password                       done
         :    Disabling SSH root login without password     done
         :    Copying SSH keys                              done
  14/16  :  Installing bootloader grub                      done
  15/16  :  Running some ubuntu specific functions          done
  16/16  :  Clearing log files                              done

                  INSTALLATION COMPLETE
   You can now reboot and log in to your new system with
  the same password as you logged in to the rescue system.

root@rescue ~ # reboot

Installing the system this way puts a fresh and rather small Ubuntu system onto the disk. Note that ssh will complain massively about the changed host key of the system, but that is ok: you are now booting the installed system, which has a different host key than the rescue system you used before.

First login into the installed host
dh@workstation:~$ ssh -l root 2a01:4f8:1:3::2
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
[...]
Offending ECDSA key in /home/dh/.ssh/known_hosts
  remove with:
  ssh-keygen -f "/home/dh/.ssh/known_hosts" -R "2a01:4f8:1:3::2"
ECDSA host key for 2a01:4f8:1:3::2 has changed and you have requested strict checking.
Host key verification failed.
dh@workstation:~$ ssh-keygen -f "/home/dh/.ssh/known_hosts" -R "2a01:4f8:1:3::2"
# Host 2a01:4f8:1:3::2 found
/home/dh/.ssh/known_hosts updated.
dh@workstation:~$ ssh -l root 2a01:4f8:1:3::2
The authenticity of host '2a01:4f8:1:3::2 (2a01:4f8:1:3::2)' can't be established.
ECDSA key fingerprint is SHA256:z2+iz/3RRC3j6GT8AtAHJYnZvP9kdzw8fW8Aw5GPl0q.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '2a01:4f8:1:3::2' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-38-generic x86_64)
[...]
root@merlin ~ #

After having booted into it, I had some hours of remarkably degraded performance as the RAID 1 had to initialize the disk mirroring completely. Be aware of this; your server will become faster once this is over. Use cat /proc/mdstat to see what’s going on on your hard disks.

Check RAID array status
root@merlin ~ # cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
      4190208 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      727722816 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  4.0% (29775168/727722816) finish=147.8min speed=78670K/sec
      bitmap: 6/6 pages [24KB], 65536KB chunk

md1 : active raid1 sdb2[1] sda2[0]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

If you install an e-mail server (or have some external mail service you want to use for system e-mails), you should enable alert messages for the case that the RAID degrades due to a disk failure. A RAID only protects against hardware failures if failed hardware is actually replaced quickly enough.
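A minimal sketch of such an alerting setup, assuming a working local mail transport and a hypothetical recipient address: mdadm’s monitoring service mails its warnings to the address configured in /etc/mdadm/mdadm.conf.

Possible alerting entry in /etc/mdadm/mdadm.conf
# send RAID events (degraded array, failed disk, ...) to this address
MAILADDR admin@example.org

After changing the file, restart the monitoring service (mdmonitor or mdadm, depending on the release). You can send a test alert for each array with mdadm --monitor --scan --test --oneshot.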

Test the rescue system

This is a good moment to test whether Hetzner’s rescue mechanism works. Sometimes, the servers are not correctly configured in the BIOS and do not load the rescue system even if this is requested in the interface:

  • Activate the "rescue system boot" in the Robot interface. Select your ssh key so that you do not have to enter a password.

  • Reboot the machine.

  • Logging in via ssh after 1 to 2 minutes should bring up the rescue system. Just reboot the machine from the command line - there is no need to rescue anything now.

  • The system will come up again into the installed system.

If something is wrong here, contact support and let them solve the problem. If you make mistakes in the host’s network configuration, you will need the rescue mode to sort things out.

3.2. Put /tmp into a ramdisk

One thing which is totally independent from IPv6 and KVM is the /tmp directory. It contains temporary files. I like to put it into a ramdisk. Add one line to /etc/fstab and replace /tmp with the following commands:

Addition to /etc/fstab to put /tmp into a ramdisk and activate it
echo "none /tmp tmpfs size=2g 0 0" >> /etc/fstab && \
mv /tmp /oldtmp && mkdir /tmp && mount /tmp && rm -rf /oldtmp

This setup allows /tmp to grow up to 2 GB which is ok if the system has more than, say, 30 GB of memory. You can, of course, allow more or less. Note that the memory is only occupied if /tmp really stores that much data. An empty /tmp does not block any memory!

The mkdir creates /tmp without any special access rights. Fortunately, declaring the file system to be tmpfs in /etc/fstab above makes the access rights 1777 (or rwxrwxrwt) - which is exactly what we need for /tmp.
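A quick check that the new mount is in place and has the expected permissions:

Verify the tmpfs mount of /tmp
df -h /tmp     # should report a tmpfs of size 2.0G
ls -ld /tmp    # should report drwxrwxrwt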

You should reboot the system after this change. There are chances that wiping /tmp this way confuses running processes.

On the reboots
You will read "reboot the system" often during this guide. This is not a joke! We configure very basic system and network settings here, and it is crucial that these settings are correct when the system starts up! Check this step by step by rebooting and fix any problems before continuing. Otherwise, your server will be unreliable - and that’s a bad thing!

3.3. Preparing the network settings of the host

We do now have a freshly installed system. Unfortunately, it is not quite ready to serve as a KVM host. For this, we first have to configure a network bridge on the system.

I must say that I felt rather uncomfortable with Hetzner’s IPv6 approach in the beginning. Having only one /64 IPv6 network disallows a routed setup. Due to the way IPv6 works, you cannot split this network sensibly into smaller ones. I really suggest reading Why Allocating a /64 is Not Wasteful and Necessary and especially The Logic of Bad IPv6 Address Management to find out how the semantics of the IPv6 address space differ from the IPv4 one. If you have a hoster who gives you a /56 or even /48 network, you can surely manage your addresses differently. Most probably, you will go with a routed setup.

Since my start on the IPv6 road, I have learned, however, that Hetzner’s approach is not that wrong. They use the link-local fe80:: address range for gateway definitions, and this is a totally valid approach.

Anyway, we have to use what we get. First, enable IPv6 forwarding globally by issuing

sysctl -w net.ipv6.conf.all.forwarding=1

Also enable this setting in /etc/sysctl.conf to make it permanent.
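The corresponding line in /etc/sysctl.conf simply repeats the setting:

Permanent IPv6 forwarding setting in /etc/sysctl.conf
net.ipv6.conf.all.forwarding=1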

Now use ip a to get device name and MAC address of the physical network card of the system:

Example initial network setup on the physical host
root@merlin ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:24:21:21:ac:99 brd ff:ff:ff:ff:ff:ff
    inet 241.61.86.241/32 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 2a01:4f8:1:3::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::224:21ff:fe21:ac99/64 scope link
       valid_lft forever preferred_lft forever

Your network device’s name may differ. It can be something like enpXsY as in this example or enoX. On all modern Linux distributions, it will begin with en, however…​

Here the common track for all systems ends. In the Linux world, multiple network configuration setups have evolved over time. The most common ones are:

  • Direct setup in configuration files in /etc/network. This is the old-school networking setup, especially when combined with a System-V init process. I do not cover this here, but you will find a plethora of installation guides for it on the internet.

  • Systemd-based configuration with files in /etc/systemd/network. This is how many modern distributions handle system start and network setup these days. Ubuntu did it until 17.04, Hetzner’s Ubuntu images did it longer. I cover this two sections below.

  • Netplan with a configuration in /etc/netplan. This kind of "meta-configuration" has been used by Ubuntu since 17.10 and by Hetzner since November 2018 for 18.04 and 18.10. I describe the needed changes in the following section.

3.3.1. Ubuntu 18.04 and later with Netplan

Ubuntu 18.04 comes with a relatively new tool named Netplan to configure the network. Since about November 2018, Hetzner has used this setup in their install process. Note that earlier Ubuntu installations are provided with the systemd-networkd-based setup described below.

Netplan uses configuration files with YAML syntax. In most cases, there is only one file: /etc/netplan/01-netcfg.yaml. For freshly installed Hetzner servers, it looks somewhat like this:

Netplan network configuration on a Hetzner server
root@merlin /etc/netplan # cat 01-netcfg.yaml
### Hetzner Online GmbH installimage
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      addresses:
        - 241.61.86.241/32
        - 2a01:4f8:1:3::2/64
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 241.61.86.225
      gateway6: fe80::1

Change it like this:

Netplan configuration as needed for the physical host
root@merlin ~ # cat /etc/netplan/01-netcfg.yaml
### Hetzner Online GmbH installimage
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      accept-ra: false
      macaddress: 00:24:21:21:ac:99
      interfaces:
        - enp2s0
      addresses:
        - 241.61.86.241/32
        - 2a01:4f8:1:3::2/64
      routes:
        - on-link: true
          to: 0.0.0.0/0
          via: 241.61.86.225
      gateway6: fe80::1

You disable both DHCP protocols on the physical device and attach it to the newly defined bridge device br0. That bridge gets all definitions from the physical device. It also gets the MAC address of the physical device, otherwise Hetzner’s network will not route any packets to it.

Note that you also disable any IPv6 auto-configuration on the br0 device by adding accept-ra: false to its configuration. We’ll set up the router advertisement daemon later on for the virtual machines, but it should not interact with the physical host.

Netplan has the very nice capability to apply a new configuration to a running system and roll it back if something goes wrong. Just type netplan try. If the countdown counts down (some stalled seconds at the beginning are allowed), just hit Enter and make the change permanent. Otherwise, wait for two minutes and Netplan will restore the old configuration so that you should be able to log in again and fix the problem without further ado. On success, I suggest finishing this with a complete reboot to be really sure that the new configuration is applied on system startup.

After a reboot, the network device list should look like this:

Network devices with changed Netplan configuration
root@merlin ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 00:24:21:21:ac:99 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:24:21:21:ac:99 brd ff:ff:ff:ff:ff:ff
    inet 241.61.86.241/32 scope global br0
       valid_lft forever preferred_lft forever
    inet6 2a01:4f8:1:3::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::224:21ff:fe21:ac99/64 scope link
       valid_lft forever preferred_lft forever

Note that the physical device enp2s0 and the bridge br0 have the same MAC address. This is intentional!

You should test now that you can log in to the system via both the IPv6 and the IPv4 protocol; use ssh -6 <hostname> and ssh -4 <hostname> to enforce the IP protocol version.

3.3.2. Ubuntu 18.04 and other systems with systemd-networkd

This section is not updated any more
This section is not updated any more. Ubuntu has since moved to Netplan instead of direct systemd-networkd configuration.

Until October 2018, Hetzner used a systemd-networkd-based setup on Ubuntu, even with 18.04. If you have such a system, you get the same result in a different way. Creating a bridge for virtual machines using systemd-networkd explains the basics nicely.

With this system, go to /etc/systemd/network and define a bridge device in file 19-br0.netdev:

Bridge configuration with systemd-networkd in /etc/systemd/network/19-br0.netdev
[NetDev]
Name=br0
Kind=bridge
MACAddress=<MAC address of the physical network card of the host>

[Bridge]
STP=true

It is extremely important to define the MAC address, or Hetzner will not route traffic to the system. STP does not seem to be mandatory, but it does not hurt either, so I kept it in.

Then, assign the bridge to the physical device in 20-br0-bind.network:

Bridge assignment in 20-br0-bind.network
[Match]
Name=eno1

[Network]
Bridge=br0

Now copy the original file created by Hetzner (here: 10-eno1.network) to 21-br0-conf.network and change the matching name from the physical device to the bridge. In fact, you only replace the eno1 (or whatever your network device’s name is) with br0. You also add IPv6AcceptRA=no to prevent the physical host’s network from being influenced by the SLAAC messages of radvd, which is installed later:

Changed main network configuration
[Match]
Name=br0

[Network]
Address=<IPv6 address assigned by Hetzner, do not change>
Gateway=fe80::1  // This is always the IPv6 gateway in Hetzner's network setup
Gateway=<IPv4 gateway assigned by Hetzner, do not change>
IPv6AcceptRA=no

[Address]
Address=<IPv4 address of the system assigned by Hetzner, do not change>
Peer=<IPv4 peer assigned by Hetzner, do not change>

Rename the original file 10-eno1.network to something not detected by systemd, e.g. 10-eno1.networkNO. Keep it around in case something goes wrong.

After these changes, the physical device does not have any network configuration attached. This is important so that the bridge can grab it on initialization. Let’s see whether everything works and reboot the system.

If something goes wrong: boot into the rescue system, mount the partition, rename 10-eno1.networkNO back to its original name ending in .network. Reboot again. Investigate. Repeat until it works…​

3.4. Change IPv6 address

You might think about changing the IPv6 address of the physical host. Hetzner always configures it with 0:0:0:2 as the host part. While there is nothing wrong with that, giving the host a random address makes the whole installation a bit less vulnerable to brute-force attacks.

Fortunately, changing the address is really simple. In the Netplan-based setup, it is in /etc/netplan/01-netcfg.yaml. Look for the addresses of the br0 device:

network:
[...]
  bridges:
    br0:
[...]
      addresses:
        - 2a01:4f8:1:3::2/64

Change its host part (the lower 64 bits) to more or less whatever you like:

        - 2a01:4f8:1:3:6745:a24b:cc39:9d1/64

If you work with systemd-networkd, the network configuration is in /etc/systemd/network/21-br0-conf.network if you followed this guide:

[Network]
Address=2a01:4f8:1:3::2/64

Change it to

[Network]
Address=2a01:4f8:1:3:6745:a24b:cc39:9d1/64
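If you want a truly random host part, one possible way to generate it (a convenience sketch, not part of the original setup) is:

Generate a random 64-bit host part for the new address
openssl rand -hex 8 | sed -e 's/\(....\)/\1:/g' -e 's/:$//'

Combine the four resulting groups with your /64 prefix to form the new address.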

You can also add the new address instead of replacing the old one. Then, your server can be accessed through both addresses. While it is absolutely no problem to have multiple IPv6 addresses on the same device, it can make the configuration of services more difficult as the correct address for outgoing messages has to be selected correctly. I would suggest not doing this. Stay with one address.

Use netplan try or systemctl restart systemd-networkd to apply the new settings. Note that if you are connected via IPv6, your connection will be interrupted and you have to reconnect. If you are connected via IPv4 (e.g. by ssh <IPv4-address> or ssh -4 <hostname>), your connection should survive. systemd-networkd, however, might need several seconds to sort everything out.

If everything works, do a reboot. In theory, restarting the network configuration should be sufficient, but my system sometimes behaved strangely after that.

ssh to the new address should now work. If it doesn’t and you are locked out, again use the rescue system to sort it out.

Now is the time to add the physical host to the DNS:

  • Add an AAAA record in the domain the system should be reachable in.

  • Add a PTR record in the hoster’s reverse DNS entries. If there is already an entry for the former address, you can remove it by simply wiping out the server name and pressing "Enter".

  • While you’re at it, also add the A record and the PTR record for the IPv4 address of the host.

Keep DNS time-to-live short!
I strongly suggest that you set the TTL for all DNS entries as short as possible during the setup, something between 2 and 5 minutes. If you make a mistake and have a TTL of multiple hours or even a day, you may have serious issues with name resolution until the wrong entries have expired everywhere.
The rescue system IP address
If you ever have to reboot your server into Hetzner’s rescue system, keep in mind that it will get its original IPv6 address ending in ::2. You will not be able to access it through its DNS name. You might want to add a DNS entry for <servername>-rescue.example.org for such cases. Of course, you have to remember that, too…​

3.5. Ensure correct source MAC address

Our virtual machines will have their own MAC addresses. Otherwise, the IPv6 auto-configuration would not work. Unfortunately, these MAC addresses will also leak through the bridge into Hetzner’s network, and that might lead to trouble as the provider only accepts the MAC address actually assigned to the server as valid.

To prevent such problems, perform MAC address rewriting using the ebtables command. You might need to install it using apt install ebtables first. Then use:

ebtables rule to stop virtual MAC addresses from leaking outside
ebtables -t nat -A POSTROUTING -j snat --to-src <MAC address of the physical network card of the host>

I’ve added this to /etc/rc.local. On a default installation of Ubuntu 20.04 (or 18.04), this file does not exist. If you create it, make it look like this:

Example /etc/rc.local
#!/bin/bash

# force source MAC address of all packets to the official address of the physical server
ebtables -t nat -A POSTROUTING -j snat --to-src 00:24:21:21:ac:99

exit 0

Replace the address in the example with your actual physical MAC address! Also, make the file executable with chmod +x /etc/rc.local.

"The internet" claims that you need to add other files to systemd for /etc/rc.local being evaluated in Ubuntu. At least for me this was not needed, it "just worked". Check whether the rule has been added:

Output of ebtables with required MAC rewriting
root@merlin ~ # ebtables -t nat -L
Bridge table: nat

Bridge chain: PREROUTING, entries: 0, policy: ACCEPT

Bridge chain: OUTPUT, entries: 0, policy: ACCEPT

Bridge chain: POSTROUTING, entries: 1, policy: ACCEPT
-j snat --to-src 00:24:21:21:ac:99 --snat-target ACCEPT
root@merlin ~ #

Reboot the system once more to check that the rule survives a reboot.

At this stage, lean back for a moment! The difficult part is done. You have the network setup of your KVM host up and running. The rest is much easier and will not potentially kill network access to the system. Also, the stuff coming now is much less provider-specific. While the initial network setup might work considerably differently with another hosting provider, chances are good that the following steps are the same regardless of where you have placed your host.

4. Setup the KVM environment

The physical host is now up and running. To actually host virtual machines with IPv6-first connectivity, some more services need to be installed.

4.1. Create a non-root user on the system

Installation processes like Hetzner’s often generate a system with only the root user. To make it more compliant with usual Ubuntu behaviour, add a non-root user:

  • Create user with adduser.

  • Put your ssh public key into .ssh/authorized_keys of that user.

You can now either perform the following steps as that user with sudo -s or continue directly as root logged in via ssh.
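A minimal sketch of these two steps, assuming the hypothetical user name exampleuser and that your own public key is already in root’s authorized_keys:

Create the non-root user and hand it your ssh key
adduser exampleuser
install -d -m 700 -o exampleuser -g exampleuser /home/exampleuser/.ssh
install -m 600 -o exampleuser -g exampleuser /root/.ssh/authorized_keys /home/exampleuser/.ssh/authorized_keys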

4.2. NAT64 with Tayga

As I wrote, our virtual machines shall have IPv6-only connectivity. That implies that they cannot access systems which are IPv4-only. Unfortunately, even in 2020 there are quite popular sites like github.com which do not have any IPv6 connectivity at all. To make such systems accessible from the guest systems, we set up a NAT64 service which performs a network address translation for exactly this case.

I decided to go with the "Tayga" server. Its scope is limited to exactly this task: performing NAT64. This makes it necessary to add further services to make all of this really usable, but it also minimizes configuration complexity.

Start by installing the Tayga service with the usual apt install tayga.

In /etc/tayga.conf, enable the commented-out ipv6-addr directive as it is needed for working with the well-known prefix. You should set the IPv6 address to something random in your IPv6 subnet:

Random network address for the tayga NAT64 service
ipv6-addr 2a01:4f8:1:3:135d:6:4b27:5f

Additionally, switch the prefix directive from the activated 2001…​ one to the 64:ff9b::/96 one:

Change Tayga’s prefix
# prefix 2001:db8:1:ffff::/96
prefix 64:ff9b::/96

The whole Tayga configuration reads like this afterwards:

Tayga configuration on the physical host
# Minimum working NAT64 Tayga configuration for KVM host with IPv6-only guests

# (A) Basic setup
# Device name, this is the default
tun-device nat64
# Data dir for stateful NAT information
data-dir /var/spool/tayga

# (B) IPv6 setup
# The "well-known" prefix for NAT64
prefix 64:ff9b::/96
# IPv6 address, from the official ::/64 network
ipv6-addr 2a01:4f8:X:Y:14a5:69be:7e23:89

# (C) IPv4 setup
# Pool of dynamic addresses
dynamic-pool 192.168.255.0/24
# IPv4 address, not to be used otherwise in the network
ipv4-addr 192.168.255.1

Test the new setup by starting Tayga once in the foreground:

systemctl stop tayga  <-- stop it first if it is already running
tayga -d --nodetach

The output should look something like this:

Output of Tayga running in foreground
starting TAYGA 0.9.2
Using tun device nat64 with MTU 1500
TAYGA's IPv4 address: 192.168.255.1
TAYGA's IPv6 address: 2a01:4f8:1:3:135d:6:4b27:5f
NAT64 prefix: 64:ff9b::/96
Note: traffic between IPv6 hosts and private IPv4 addresses (i.e. to/from 64:ff9b::10.0.0.0/104, 64:ff9b::192.168.0.0/112, etc) will be dropped.  Use a translation prefix within your organization's IPv6 address space instead of 64:ff9b::/96 if you need your IPv6 hosts to communicate with private IPv4 addresses.
Dynamic pool: 192.168.255.0/24

Stop the manually started instance with Ctrl-C.

Enable the service explicitly on Ubuntu 18.04 and earlier

On Ubuntu 18.04 and Ubuntu 16.04, you have to explicitly enable the service. Edit /etc/default/tayga. Set RUN to yes:

Change in /etc/default/tayga
# Change this to "yes" to enable tayga
RUN="yes"

Launch the service with systemctl start tayga. After that, systemctl status tayga should say the Active state is active (running), and the log lines in the status output should end with

... systemd[1]: Started LSB: userspace NAT64.
Forgot to enable the service on Ubuntu 18.04 and earlier?
If the Active state is active (exited) and the protocol says something about set RUN to yes, you have forgotten to enable the RUN option in /etc/default/tayga. Correct it as described above and issue systemctl stop tayga and systemctl start tayga.
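Depending on the package version, the service may or may not already be enabled for automatic start at boot. It does not hurt to enable it explicitly:

Make sure Tayga starts at boot
systemctl enable tayga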

4.3. DNS64 with bind

NAT64 is usually used together with a so-called "DNS64" name server. This is a specially configured name server. If a client asks it for an IPv6 name resolution, i.e. an AAAA name service record, and there is only an IPv4 A record for the requested name, the DNS64 name server "mocks up" an AAAA record by combining the IPv4 address and a "well-known prefix" into a synthetic IPv6 address. This address - surprise, surprise - points directly to a suitably prepared NAT64 server, so that the IPv6 system transparently talks to the IPv4 system hidden behind the NAT64 proxy.

Figure 1. How DNS64 and NAT64 play together

We set up the DNS64 server using a classic bind DNS server. Modern versions include DNS64, it only has to be activated. Start the installation with the usual apt install bind9.

Our bind is a forwarding-only server exclusively for our own virtual machines. On Debian-derived systems, the bind options needed for this setup are located in /etc/bind/named.conf.options. Edit that file and enter the following entries:

Options for bind in /etc/bind/named.conf.options
options {
        directory "/var/cache/bind";

        forwarders {
                2a01:4f8:0:1::add:1010;  # Hetzner name servers
                2a01:4f8:0:1::add:9999;
                2a01:4f8:0:1::add:9898;
        };

        dnssec-validation auto;

        auth-nxdomain no;    # conform to RFC1035
        listen-on {};
        listen-on-v6 {
                <IPv6 network assigned by provider>::/64;
        };
        allow-query { localnets; };
        dns64 64:ff9b::/96 {
                clients { any; };
        };
};

The actually important definition is the dns64 section at the bottom of the options definitions. It enables the DNS64 mode of bind and defines the IPv6 address range into which the addresses should be converted.

It is also important to define listen-on {}; to disable listening on the IPv4 port altogether - we do not need it. Restricting allow-query to the localnets is also important to prevent the server from becoming an open DNS relay. We only need it for our internal network.

The forwarders section defines the name servers this bind will ask if it does not know the answer itself - which is almost always the case. I put Hetzner’s name servers here. Of course, you must either use the DNS servers of your hoster or provider or a free and open service like Google’s public DNS at 2001:4860:4860::8888 and 2001:4860:4860::8844.

Check the networks twice
Check the network in listen-on-v6 and also check the forwarders. Your whole IP address resolution will not work if one of these is wrong.

Restart the daemon and check that it is enabled and running:

systemctl restart bind9
systemctl status bind9

After these steps, you have a working DNS64 server which you can use for all your virtual machines on the system. You can test that it really answers with DNS64-changed entries by querying something which does not have an IPv6 address:

Obtaining AAAA record for a server which does not have one by DNS64
root@physical:~# host github.com  # Query using external default DNS server
github.com has address 140.82.118.3
github.com mail is handled by [...]

root@physical:~# host github.com 2a01:4f8:1:2:3:4:5:6  # Give IPv6 address of local server
[...]
github.com has address 140.82.118.3
github.com has IPv6 address 64:ff9b::8c52:7603
github.com mail is handled by [...]

Note how the DNS server running on the physical host returns the additional IPv6 address with the 64:ff9b prefix. To be sure that the local server is really addressed, give its IPv6 address as an additional parameter to the host command as shown above.

Using an external DNS64 server
So far, the name server is only used for DNS64. You could also use the Google servers 2001:4860:4860::6464 and 2001:4860:4860::64 (yes, these are different servers from the public DNS servers mentioned above) which offer this service. Their replies are compatible with our NAT64 setup. However, having your own server reduces external dependencies and allows for additional services later on.

4.4. Router advertisement with radvd

With NAT64 and DNS64 in place, we’re almost ready to serve virtual machines on the host. The last missing bit is the network configuration.

Of course, you could configure your virtual hosts' network manually. However, IPv6 offers very nice auto-configuration mechanisms - and they are not difficult to install. The key component is the "router advertisement daemon". It’s more or less the IPv6 version of the notorious DHCP service used in IPv4 setups to centralize the IP address management.

For this service, we use the radvd router advertisement daemon on the bridge device so that our virtual machines get their network setup automatically by reading IPv6 router advertisements. Install radvd and also radvdump for testing through the usual Debian/Ubuntu apt install radvd radvdump.

Then, create the configuration file /etc/radvd.conf. It should contain the following definitions:

Configuration in /etc/radvd.conf
interface br0 {
        AdvSendAdvert on;
        AdvManagedFlag off;
        AdvOtherConfigFlag off;
        AdvDefaultPreference high;
        prefix <IPv6 network assigned by provider>::/64 {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr on;
        };
        RDNSS <IPv6 address of the physical host> {};
        route 64:ff9b::/96 {};
};
Advertise infinite lifetime

IPv6 route advertisement is prepared for dynamically changing routes. In our setup, however, all routes are static. It might be sensible to add this information to the configuration:

interface br0 { [...]
        prefix <IPv6 network assigned by provider>::/64 {
                [...]
                AdvValidLifetime infinity;
        };

        RDNSS <IPv6 address of the physical host> {};
        route 64:ff9b::/96 {
                AdvRouteLifetime infinity;
        };
};

More research is needed on whether this is really necessary.

Use Google’s DNS64 servers

If you opted for the Google DNS64 servers to do the job, write instead

        RDNSS 2001:4860:4860::6464 2001:4860:4860::64 {};

A radvd configuration must always be read as an advertisement of the machine serving it. So, you do not write something like "service X is on machine Y" but "this machine offers X".

Having this in mind, the configuration advertises all three network settings needed by the virtual machines:

  1. The prefix section defines that this host announces itself as router (AdvRouterAddr) to the given network and allows the machines to use SLAAC for generating their own IPv6 address (AdvAutonomous).

  2. The RDNSS section declares this machine to be the DNS resolver for the virtual machines.

  3. The route section adds the static route for the NAT64 IP addresses to this machine.

Start radvd and make it a permanent service (coming up automatically after reboot) using

Commands to activate radvd service
systemctl start radvd
systemctl enable radvd

If you start radvdump soon after starting radvd, you will see the announcements sent by radvd at irregular intervals. They should contain the network router, the DNS server and the NAT64 route. For some reason, radvd seems to stop sending unsolicited advertisements after some time if no one is listening.

After radvd is up and running, check the physical host’s bridge interface with ip a show dev br0. If you find something like

Spurious auto-configured routes on br0
    inet6 2a01:4f8:1:2345:abc:4680:1:22/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 85234sec preferred_lft 14943sec

your bridge is itself picking up the router advertisements. Go back to the network configuration above and add accept-ra: false for Netplan or IPv6AcceptRA=no for systemd-networkd. On your bridge, all addresses and routes must be static (i.e. no dynamic modifier) and valid and preferred forever:

Correct routes on br0
    inet6 2a01:4f8:1:2345:abc:4680:1:22/64 scope global
       valid_lft forever preferred_lft forever

The route section advertises that this system routes the 64:ff9b:: network. Only with this definition do the virtual servers know where to send the packets for the synthesized IPv6 addresses of the IPv4-only servers.

After changing the configuration, restart radvd and check its output with radvdump. It should contain both the DNS server and the NAT64 route.

The nasty Hetzner pitfall
In their own documentation, Hetzner also describes how to set up radvd. For the DNS servers, however, they use IPv6 example addresses from the 2001:db8 realm. It took me three days and severe doubts about Hetzner’s IPv6 setup to find out that my only mistake was to copy these wrong IP addresses for the DNS server into the configuration. Don’t make the same mistake…​

You have now prepared everything for the IPv6-only virtual machines to come: They get their network configuration through the centrally administrated radvd. The advertised setup includes a name server with DNS64 and a NAT64 route to access IPv4-only systems.

About non-virtual network setups
So far, this document describes how to set up a root server with virtual machines. NAT64/DNS64 especially is completely independent of that. If you administrate a (real) computer network and want to lay the ground for IPv6-only machines in it, do exactly the same with your physical machines: Install Tayga and the DNS64-capable Bind9 on the router behind which the IPv6-only systems reside. This might be the "firewall" of classical setups. Then, your actual computers play the role of the virtual machines in this guide.

4.5. Virtualisation with KVM

We’re now ready for the final steps! Our network is configured far enough so that we really can start installing virtual machines on our system. For this, we of course need KVM. For Ubuntu, I followed the first steps of this guide:

First, check that the system supports virtualisation at all. Issue

egrep -c '(vmx|svm)' /proc/cpuinfo

and verify that the result is greater than 0. Then, run

apt install cpu-checker
kvm-ok

and check that the result is

INFO: /dev/kvm exists
KVM acceleration can be used

If not, the BIOS settings of the system must be corrected. Contact the hosting provider to sort that out.

Now you can install KVM and the required helper packages. On Ubuntu 20.04 the command is

Command to install KVM on Ubuntu 20.04
apt install qemu-kvm libvirt-daemon bridge-utils virtinst libvirt-daemon-system virt-top libguestfs-tools libosinfo-bin qemu-system virt-manager

On Ubuntu 18.04 or 16.04 the list of packages is slightly different

Command to install KVM on Ubuntu 18.04 or 16.04
apt install qemu qemu-kvm libvirt-bin bridge-utils virt-manager

This will install a rather large number of new packages on your host. Finally, it will be capable of serving virtual machines.

The next step is to load the vhost_net module into the kernel and make it available permanently:

modprobe vhost_net
echo "vhost_net" >> /etc/modules

The libvirtd daemon should already be up and running at this point. If this is for any reason not the case, start and enable it with the usual systemctl commands or whatever the init system of your host server requires to do this.

Do NOT install dnsmasq on Ubuntu 20.04

If you look into the start messages with systemctl status libvirtd, you might see a message Cannot check dnsmasq binary /usr/sbin/dnsmasq: No such file or directory. Do not install the dnsmasq package! The message is misleading and gone with the next restart. If you install dnsmasq, it will fight with bind on the port and your DNS64 service will become unreliable!

Note that even if you do not install dnsmasq, you will have a dnsmasq process running on the system. This is ok! This program comes from the dnsmasq-base package and runs aside of bind without interfering with it.

To simplify installation and administration of your virtual machines, add the "normal" user you created above to the libvirt user group. I prefer doing this by simply adding the user name to the group definition in /etc/group:

Add USERNAME to the libvirt user group in /etc/group
libvirt:x:<groupid>:USERNAME
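Alternatively, the same result can be achieved with a single command; USERNAME is again the name of your non-root user:

Add the user to the libvirt group on the command line
usermod -aG libvirt USERNAME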

You should perform a final reboot after these steps to be sure that everything works together correctly and comes up again after a reboot.

Well, that’s it! Our system can get its first virtual machine!

The virtual machines

After the physical host is up and running, we can install the virtual machines. They will carry the services which this machine will actually be used for. This guide is not so much about virtualized environments in general but more about IPv6-first or IPv6-only setups. However, we’ll also look at the general virtual machine concepts.

5. Overview on virtual machines

At this moment, we have a fully operational IPv6 autoconfiguration environment. Every modern operating system will configure itself and have full IPv6 connectivity.

Additionally, through the NAT64/DNS64 address translation layer, all the IPv6-only virtual machines will be able to access IPv4-only servers seamlessly and also without any additional configuration requirements.

5.1. Initializing a virtual machine

Installing a virtual machine might sound difficult to you if you have never done this before. In fact, it is not. Once you get used to it, it is even simpler than the remote installation on a hosted system. On the physical machine, you are dependent on what the hosting provider offers as installation procedures. KVM offers you more or less a complete virtualized graphical console which allows you to act just as if you were sitting in front of the (virtual) computer’s monitor. This way, you can install whatever you like.

Furthermore, if you make a configuration mistake on the physical host, you might end up with a broken machine. If this happens in a virtual machine, you have several ways to solve the problem: You can connect to the console and log in directly without network access. If the machine does not boot any more, you can even mount the virtual hard disk into the physical machine and try to fix it. And if the machine is for any reason broken beyond repair, you can just throw it away and start over with a fresh installation.

I suggest starting with a throw-away virtual machine. It will not contain any "real" services but only show any remaining problems in the setup. Further, it allows you to test and learn the whole process of installing a virtual machine in this setup.

  • Copy the installation ISO image for the system to install onto the host.

  • Connect to the KVM system using virt-manager (see the connection example after this list). Of course, you might also use another client, but I find this one rather easy.

  • Use the ssh connection of the normal user created on the host.

  • Start the creation of the new guest.
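One possible way to establish that connection from your workstation is virt-manager’s ssh transport; user and host name are placeholders here:

Connect virt-manager to the physical host via ssh
virt-manager -c 'qemu+ssh://exampleuser@merlin.example.org/system'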

From here on, things become rather standard. We’re now in the process of installing a guest operating system in a KVM virtual machine. My best practices are these:

  • Assign as many CPUs to the virtual machine as the hardware has. Only reduce the CPU number if you suspect the virtual machine of grabbing too many resources.

  • Use a qcow2 image file for the virtual hard disk. It is the most flexible way once it comes to migrating the virtual machine, and today’s file systems cope with the double indirection quite well.

  • Give the virtual machine definition the same name the system will later have.

The one interesting point is the network configuration at the very end of the definition process. Here, open the "Network selection" before creating the machine. Select "Name of common device" and give the name of the bridge explicitly. Here it is br0.

Figure 2. Select the correct network device

When you are ready, press "Create" and summon your first virtual system.
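If you prefer the command line over the graphical virt-manager, roughly the same machine can be defined with virt-install from the virtinst package installed earlier. This is only a sketch; name, sizes, disk path and ISO location are made-up placeholders:

Possible virt-install invocation for a test guest attached to br0
virt-install \
  --name testvm \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.qcow2,size=20,format=qcow2 \
  --cdrom /root/ubuntu-20.04-live-server-amd64.iso \
  --os-variant ubuntu20.04 \
  --network bridge=br0 \
  --graphics vnc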

5.2. DNS and auto-boot

To work with the long IPv6 addresses conveniently, DNS is almost mandatory. You should now add the virtual machine to your DNS domain.

  • Find the virtual machine’s IP address. On Linux systems, ip a does the job. You may also derive it manually from the virtual machine’s MAC address (as SLAAC uses it, too).

  • Create an entry in the DNS zone of the system. Note that you only enter an AAAA record, not an old-fashioned A record for an IPv4 address. The system simply has no IPv4 address…​

  • In my opinion, it also makes sense to always create a reverse IP entry for IPv6 hosts. If for any reason your DNS AAAA entry vanishes, you still have the reverse IP entry which connects name and IP address. Reverse IP entries are always managed in the DNS realm of the IP network owner. In my case, they are edited in Hetzner’s robot interface.

The SLAAC mechanism derives the IP address from the MAC address of the virtual machine. So, it will be static even though it has nowhere been configured explicitly.
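As an illustration of that derivation (with a made-up MAC address and assuming classic EUI-64 SLAAC without privacy extensions): insert ff:fe in the middle of the MAC address and flip the universal/local bit of the first byte.

Deriving the SLAAC address from the MAC address
MAC address:         52:54:00:ab:cd:ef
Insert ff:fe:        52:54:00:ff:fe:ab:cd:ef
Flip the 0x02 bit:   50:54:00:ff:fe:ab:cd:ef  ->  interface ID 5054:ff:feab:cdef

With the prefix 2a01:4f8:1:3::/64 this yields the address 2a01:4f8:1:3:5054:ff:feab:cdef.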

If you want your virtual machine to be started automatically when the physical host starts up, you have to set the corresponding flag in the KVM configuration. In virt-manager, you find it in the "Boot options" for the virtual machine.

5.3. Adding an IPv4 address to the virtual machine

The first and foremost advice for this topic: Don’t do it!

Your virtual machine is fully connected to the internet and can be reached from anywhere and for any service. The only restriction is that a connecting party must use IPv6. Most systems, however, are connected via IPv6 these days.

Even if the virtual machine hosts a service which must also be accessible by IPv4-only clients, a direct IPv4 connection for the virtual machine is not mandatory. Especially for HTTP-based services, we will configure a reverse proxying scheme on the physical host which allows transparent connections to the IPv6-only web servers on the virtual machines for IPv4-only clients. So, only configure a direct IPv4 connection into a virtual machine if it runs a service which requires it.

In the provider-based scenarios, the provider has to assign an additional IPv4 address to the physical host. Then, the physical host must be configured in a way that it passes the IPv4 address on to the virtual machine. Finally, the operating system on the virtual machine must handle these connections. Proceed like this:

Obtain an additional IPv4 address for your server from your hoster. If you already have a network of them assigned to your server, you can use one of those, of course.

Provider-dependent setup
The actual setup depends on the network scheme of your provider. Hetzner implements a direct peer-to-peer routing of IPv4 addresses. We continue based on this setup.

5.3.1. The physical host

If the physical host is configured with Netplan, add the route to the virtual guest explicitly:

Additional IPv4 address in Netplan-based configuration on the physical host
root@merlin ~ # cat /etc/netplan/01-netcfg.yaml
network:
  [...]
  bridges:
    br0:
      [...]
      addresses:
        - 241.61.86.241/32
        - 2a01:4f8:1:3::2/64
      routes:
        [...]
        - to: 241.61.86.104/32  # <-- IPv4 address of the virtual machine
          via: 241.61.86.241  # <-- IPv4 address of the host
[...]

After adding these entries, use netplan try to test the changes. If the configuration still works (countdown may stall for 4 to 5 seconds), press Enter and make the changes permanent.

More information needed
Unfortunately, there is not that much information about Netplan-based network configuration on the internet. Let me know if this configuration does not work for you.

On a physical host with systemd-networkd setup, add an entry to the bridge device configuration. In the description above, this is the file /etc/systemd/network/21-br0-conf.network. Add the following lines:

Additional IPv4 address in systemd-networkd’s br0 configuration
[Address]
Address=<IPv4 address of the host>
Peer=<IPv4 address for the virtual machine>

There are two [Address] entries now in this file which both have the same Address but different Peer entries. That’s intentional.
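
With the example addresses from the Netplan snippet above substituted in, the newly added entry could look like this:

Example of the added [Address] entry
[Address]
# Address: IPv4 address of the host, Peer: IPv4 address of the virtual machine
Address=241.61.86.241/32
Peer=241.61.86.104/32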

5.3.2. The virtual machine

On the virtual machine, add the IPv4 address to the Netplan configuration, usually in /etc/netplan/01-netcfg.yaml. It reads completely like this:

Netplan configuration on the virtual machine with additional IPv4 connectivity
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp6: yes
      addresses: [ <IPv4 address of the virtual machine>/32 ]
      routes:
        - to: 0.0.0.0/0
          via: <IPv4 address of the physical host>
          on-link: true

The physical host is the IPv4 default route!
Note that - at least in the Hetzner network - it is crucial that you declare the physical host as the default route for IPv4 traffic from the virtual machines! If you set the gateway given by Hetzner, traffic is not routed. In this case, you can reach the guest from the host but from nowhere else via IPv4.

On the virtual machine, you can also apply your changes with netplan try and pressing Enter. Afterwards, check the IPv4 routes, which should show only one entry:

IPv4 routing table on the virtual machine
# ip -4 r
default via <IPv4 address of physical host> dev ens3 proto static onlink
Information on systemd-networkd-based setups missing
I have not configured systemd-networkd-based virtual machines so far, so I do not know how to set them up correctly. But it should be easy as only a static address and a gateway entry are needed.
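
Picking up the admonition above, here is an untested sketch of what such a configuration could look like with systemd-networkd; the file name and interface name are assumptions:

Possible systemd-networkd configuration on the virtual machine (untested sketch)
# /etc/systemd/network/20-ens3.network
[Match]
Name=ens3

[Network]
IPv6AcceptRA=yes

[Address]
Address=<IPv4 address of the virtual machine>/32

[Route]
Destination=0.0.0.0/0
Gateway=<IPv4 address of the physical host>
GatewayOnLink=yes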

Depending on the installation routine of the operating system, there could be another thing to change. Check whether your /etc/hosts contains a line

Wrong line in /etc/hosts
127.0.1.1 <name of virtual machine>

This might have been added during installation as your system had no IPv4 connectivity at all at that stage. Now that you have full IPv4 connectivity, this can be misleading for some programs. Replace it with

Correct line in /etc/hosts
<IPv4 address of virtual machine> <name of virtual machine> <name of virtual machine>.<domain of virtual machine>

e.g.

1.2.3.4 virthost virthost.example.org

Finally, add DNS and reverse DNS entries for the IPv4 address of the virtual machine. Now, it is directly accessible via both IPv4 and IPv6.

5.3.3. Name service on IPv4-enhanced virtual machines

If you follow this guide, an IPv4-enhanced virtual machine will still connect to IPv4-only servers via IPv6! The reason is the DNS64-enhanced name server. It will deliver IPv6 addresses for such servers and the outgoing connection will be established through the NAT64 gateway.

Normally, this has no drawbacks. We install the IPv4 connectivity only for clients which need to connect to a service on the virtual machine via IPv4 - and this is the case with the configuration described above. Remember that the whole NAT64/DNS64 magic happens at the DNS layer. My advice is to generally not configure a special DNS server for virtual guests with direct IPv4 connectivity but use the DNS64-enhanced system from the IPv6 auto configuration.

There are exceptions to this rule, however. The most remarkable one is if the virtual machine becomes an e-mail server. In this case, it must establish outgoing connections to IPv4 servers via IPv4 or otherwise its connections are blocked. We’ll see how to handle this in the respective section.

5.3.4. About IPv4 auto-configuration

This setup assigns IPv4 addresses statically. One could argue that the usual auto configuration setups like DHCP and MAC-address-based IP address assignment should be used. This would, of course, be possible.

I opted against such setups as I don’t think they are necessary. The setup described here is based on IPv6, and it can be run as an IPv6-only setup. For IPv6, we have complete connectivity and auto-configuration - which is even completely static, so no configuration entries are needed for individual machines.

The complexity of IPv4 setups comes from the workarounds established against its limitations, most notably the notorious NAT setups and addresses from private address ranges.

All this is not needed any more with IPv6!

My strong advice is: Use IPv6 as the standard protocol for everything you do on all your systems. Take IPv4 only as a bridge technology for the legacy parts of the internet. Only assign a single IPv4 address to a virtual machine if you absolutely must.

And for such a setup, you do not need fancy autoconfiguration. Just enter the address on the physical host and the virtual machine and you’re done. If at some point in the future IPv4 connectivity is no longer needed, throw it away.

It’s much simpler and less error-prone than administering additional DHCP servers and configuration files.

6. Operating systems in the virtual machines

You can install any operating system which is capable of running in a virtualized setup. Of course, it must be able to use IPv6 and it should be able to auto-configure its network settings via SLAAC. Fortunately, all modern operating systems support such setups nowadays.

6.1. Ubuntu 20.04

We’re now at a stage where you can install any operating system which is installable in KVM virtual machines. I give some advice for the Ubuntu 20.04 network installer:

  • Install by simply pressing the "Install" button. I never needed any additional kernel parameters.

  • Select the correct keyboard layout or it will drive you nuts.

  • Network autodetection will idle for a while looking for the non-existent DHCP server. Keep calm. Apart from that, it will simply set up everything correctly.

  • Enter the hostname, preferably the same as the name of the virtual machine to keep it simple…​

  • Check whether your provider has their own mirror for the installation packages. Hetzner does, so you can save download time:

    • Go to the top of the mirror list and select "enter information manually"

    • For Hetzner: mirror.hetzner.de. This server also works with IPv6. But even IPv4-only mirrors would be possible due to our NAT64/DNS64 setup.

    • Set directory for the Hetzner server to /ubuntu/packages/

  • You do not need a HTTP proxy.

  • The installation should start.

  • I suggest not partitioning the virtual hard disk in any special way. It is not needed.

  • Everything else is as usual. In the software selection, you should at least select the "OpenSSH server" so that you can log into the system after installation.

As this is Ubuntu 20.04, this machine uses netplan for network configuration. It has a very simple definition file in /etc/netplan/01-netcfg.yaml:

Netplan configuration in a Ubuntu 20.04 virtual machine
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp6: yes

Netplan subsumes router-advertisement-based autoconfiguration under the "dhcp6" statement. You do not need to change anything here.

Note that after (re-)booting the virtual machine, it may take some seconds until it has configured its network interface. Once it has done so, everything should work without problems.

6.2. Ubuntu 18.04

Ubuntu 18.04 is installed exactly the same way as Ubuntu 20.04.

6.3. Ubuntu 16.04

Ubuntu 16.04 is installed almost exactly the same way as Ubuntu 20.04. The only difference is that Ubuntu 16.04 still uses systemd-networkd for network configuration. Therefore, its network settings are located at a different place and also differ syntactically. However, as with the more modern cousins, you do not have to change anything manually due to the autoconfiguration.

6.4. Windows 10

Windows "just works" when installed in an IPv6-only virtual machine. It takes all the advertisements from radvd and configures its network appropriately. The result is a Windows which has no IPv4 network aprt from the auto-configured one.

Figure 3. Windows 10 with IPv6-only connectivity

Services

So far, we have described how to set up the basic operating systems for the physical host and the virtual machines. Usually, however, systems are installed to run services.

The whole idea of a virtualized server setup is to use the physical host for almost nothing and run all services in the virtual machines. So, if not explicitly stated otherwise, the following instructions always assume that you work on a virtual machine.

With IPv6-centric - or ideally IPv6-only - virtual machines, running services on them brings up one or the other interesting configuration detail. In the following chapters of this guide, we look at a number of different services and describe these details.

Don’t be shy about adding yet another virtual machine

A tremendous advantage of the setup described here is that installing another virtual machine is really cheap! You do not need to care about scarce IP addresses - with IPv6 you literally have as many as you want! Also, virtualized servers are quite lean these days; it does not really matter whether you run three services in three virtual machines or in one - or on the bare metal.

So, use virtual machines as your default way of doing things! Install your services in different machines if you want to separate them. One service wants a special operating system environment? No problem, just put it into its own virtual machine! You can really easily separate things this way.

And if you find out that a service is not needed any more - no problem. Just shut down the virtual machine and it is gone. Delete it - and the service will never come back. You totally garbled the configuration? Delete the virtual machine and start over.

Virtualization makes administration of services remarkably more flexible than having "everything in the same environment".

7. SSL certificates with Let’s Encrypt

Today’s internet requires encrypted connections for any serious web service. Encrypted connections require certified encryption keys. Such certificates must be issued by a trusted certificate provider. "Let’s Encrypt" is such a provider, and it issues the certificates for free.

This chapter is not polished so far
This chapter is still work in progress in some places. While the overall scheme works reliably, there are still rough edges. As SSL connections are crucial for all other services and Let’s Encrypt is a very good source for them, I released this guide nevertheless. Feel free to use other schemes or resources for your certificates or to help increase the quality of this chapter.

7.1. The basic idea

Let’s briefly recap how SSL works:

  • You have a secret key on your (web) server which you use to setup encrypted connections with clients.

  • You have a certificate from a 3rd party which confirms that this key is actually yours and which contains information a client needs to encrypt messages your secret key can decrypt (and vice versa).

  • You send that certificate to a client which connects to you.

  • The client evaluates the certificate to gain information about the key to use and to read the confirmation that the information is valid.

  • Therefore, the client also must trust the 3rd party which certified the key above.

The mentioned 3rd party is the certification authority (CA). To issue a valid certificate, it must answer two questions positively:

  1. Is the key really your key? This problem is solved cryptographically by so-called "certificate signing requests (CSRs)". They guarantee cryptographically that the issued certificate can only be used with the key they are created for.

  2. Are you really you? For a long time, this was by far the more difficult problem as there were no simple technical means to determine that someone really is who they claim to be.

Let’s encrypt, however, uses a completely automatable workflow which is based on so-called challenges. The protocol and scheme are called "ACME" and work this way:

  1. You: Hey, Let’s encrypt, please certify my key for example.org.

  2. Let’s encrypt: Ok, to prove that you have example.org under your control, perform a challenge. Only someone with sufficient rights can do it, and I can check it afterwards.

  3. You: <Perform the challenge> Done!

  4. Let’s encrypt: <Check the challenge> Great. Here’s your certificate

  5. You: Thanks. <Copy key and certificate where they are needed>

7.2. Site-specific certificates and HTTP-based challenge

There are different types of certificates. First, you can certify a key for only one special server name, e.g. "www.example.org"; then, only a service which uses exactly this name can use the certificate. This is the type of certificate you read most about when it comes to Let’s Encrypt. Its challenge is rather simple:

  1. You: Hey, Let’s encrypt, please certify my key for www.example.org.

  2. Let’s encrypt: Sure. Please put a file /some-path/<unpredictable random name> on www.example.org.

  3. You: <Create the file> Done!

  4. Let’s encrypt: Get http://www.example.org/some-path/<unpredictable random name>. Found it! Great. Here’s your certificate.

  5. You: Thanks. <Copy key and certificate into the web server’s SSL key store>

The very nice thing about this scheme is that the Let’s-Encrypt server, the Let’s-Encrypt client certbot and your web server can handle all this on their own. certbot knows how to configure a local Apache so that it serves the file which the Let’s Encrypt server wants to see. There are even web servers like Caddy which have this scheme built in already.

However, for our mixed IPv4/IPv6 setup, this scheme has some shortcomings. It can only be performed on the virtual machine, as only there can the web server change the content it serves in accordance with the Let’s Encrypt certificate request tool. While this would work consistently, as Let’s Encrypt explains on https://letsencrypt.org/docs/ipv6-support/, the key has to be copied to the IPv4 proxy server anyway - otherwise, encrypted IPv4 connections are not possible. And if for any reason the IPv6 web server is unresponsive and Let’s Encrypt falls back to IPv4, the challenge fails and the certificate is not issued.

All in all, I decided against this scheme for all my domains.

7.3. DNS-based challenge and wildcard certificates

In my setup, I use a certificate for any name below example.org. Such a wildcard certificate is expressed as *.example.org and allows the certified key to be used for everything below the domain name, e.g. www.example.org, smtp.example.org, imap.example.org and anything else.

For such wildcard certificates, the HTTP-based challenge is not sufficient. You have to prove that you not only control one name in the domain, but the complete domain. Fortunately, there is a way to automate even this: The DNS entries of the domain.

Obtaining a certificate for *.example.org works with the following dialog:

  1. You: Hello Let’s encrypt, please certify my key for example.org.

  2. Let’s Encrypt: Ok, please set the string "<long random string>" as a TXT record at _acme-challenge.example.org in your DNS domain

  3. You: <Change the DNS entries> Here you are!

  4. Let’s Encrypt: <Performs DNS query> Great. Here’s your certificate.

  5. You: Thanks. <Copy key and certificate where they are needed>

This works perfectly. There is one tiny problem: The DNS entries have to be changed. And there is no general scheme for automating this with the usual DNS server systems. Many providers only offer a web-based interface for editing or have defined a proprietary interface for changes to the DNS entries. And such interfaces tend to give a lot of power to the caller.

DNS-based challenge for non-wildcard certificates
Of course, you can also use DNS-based challenges for certificates with specific names only. As you’ll see, our keys will actually be certified for *.example.com and example.com. Wildcard certificates simplify the management a bit as they are immediately usable for any new name in a domain.

7.4. DNS rerouting

Thankfully, there is a solution even for this problem: A special tiny DNS server which is tightly bound to the whole challenge process and can be programmed to return the required challenge. Such a server exists: the "ACME-DNS" server on https://github.com/joohoi/acme-dns.

But this server runs under its own name, which might even be in a totally different domain, e.g. acme-dns.beispiel.de.

And here comes the final piece: The domain example.org gets a static CNAME entry pointing _acme-challenge.example.org to a DNS entry in acme-dns.beispiel.de. Only that entry holds the requested TXT record which Let’s Encrypt’s certification server checks for the certificate validation.

The complete scheme with ACME, ACME-DNS and all the routing steps becomes rather elaborate:

Figure 4. Certificate validation with Let’s Encrypt, ACME, and ACME-DNS

This scheme finally allows obtaining and managing Let’s-Encrypt-certified SSL keys fully automatically. And full automation is important for the whole Let’s Encrypt scheme as the issued certificates only have a rather short validity period: They are always valid for only 90 days and can be renewed from day 60 of their validity on. So, each certificate is renewed four to six times per year!

Especially when the number of certificates becomes larger, a manual process will be really, really tedious.

7.5. Set up the whole process

Time to bring things together. We set up the following scheme:

  • One system is responsible for gathering and renewing the certificates from Let’s Encrypt.

  • The whole key set is copied to all virtual machines and the physical host so that all web servers can access them.

In my original installation, I chose the physical host as the hub for this scheme. While there is nothing wrong with that technically, it contradicts the rule not to run any unnecessary services at the physical host.

However you do it, the first thing you need is the certificate robot certbot. In Ubuntu 18.04 and 20.04, the included version is sufficient, so installation is a simple apt install certbot.

About standards and implementations
Technically, it is not 100% correct to say you need "certbot" to communicate with "Let’s Encrypt". Both are implementations of a vendor-neutral standard, so it would be more precise to say you need a "client" for an "ACMEv2 server", where ACME is the acronym for "Automatic Certificate Management Environment". As this guide uses Let’s Encrypt and certbot, I stick with these terms.

For the ACME-DNS part, you need an ACME-DNS server. For the moment, I decided to go with the public service offered by acme-dns.io. As its administrators say, this has security implications: You use an external service in the signing process of your SSL certificates. Wrongdoers could tamper with your domain or obtain valid certificates for your domains. However, as we will see, you restrict access to your data to your own server, you are the only one who can change the crucial CNAME entry in your domain, and you have an unpredictable account name on the ACME-DNS server. As long as you trust the server provider, this all seems secure enough to me.

It is absolutely possible and even encouraged by its makers to set up your own ACME-DNS instance, and as the server is a more or less self-contained Go binary, this seems not overly complicated. However, this has yet to be elaborated further.

Finally, you need the ACME-DNS client. It is a short Python program. Download it into /etc/acme-dns from its source on github:

Get ACME-DNS
# mkdir /etc/acme-dns
# curl -o /etc/acme-dns/acme-dns-auth.py \
    https://raw.githubusercontent.com/joohoi/acme-dns-certbot-joohoi/master/acme-dns-auth.py
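
certbot later executes this script as a hook, so it must be executable; set the permission manually:

chmod +x /etc/acme-dns/acme-dns-auth.py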

Load the file into an editor and edit the configuration at the top of the file:

ACMEDNS_URL = "https://auth.acme-dns.io" (1)
STORAGE_PATH = "/etc/acme-dns/acmedns.json" (2)
ALLOW_FROM = ["<IPv4 address of physical host>/32", "<IPv6 address of physical host>/128"] (3)
FORCE_REGISTER = False (4)
1 This is the URL of the ACME-DNS server. The default setting is the public service from acme-dns.
2 The ACME-DNS client needs to store some information locally. This setting lets it store its information also in the /etc/acme-dns directory.
3 During account generation on the server, the client can restrict access to this account to certain source addresses. Setting this to the IP addresses of the machine running the registration process increases security.
4 The client program will register an account at the ACME-DNS server on first access. If that account should be overwritten at some point in the future, this variable must be set to True once when running the program. Normally, it should always be False.

Now you can generate key and certificate for your first domain. Run certbot on a domain to be secured with SSL once manually:

First invocation of certbot on a domain
certbot -d example.com -d "*.example.com" \ (1)
        --server https://acme-v02.api.letsencrypt.org/directory \ (2)
        --preferred-challenges dns \ (3)
        --manual \ (4)
        --manual-auth-hook /etc/acme-dns/acme-dns-auth.py \ (5)
        --debug-challenges \ (6)
        certonly (7)
1 You request a certificate for example.com and *.example.com as the wildcard version does not match the domain-only server name.
2 The ACME server must be capable of ACMEv2 as ACMEv1 does not know about DNS-based challenges.
3 We use the DNS-based challenge.
4 The challenge is performed "manually", i.e. not by one of the methods incorporated into certbot.
5 However, this "manual" method uses the acme-dns client as hook for the authentication step, so after all it can be performed without interaction.
6 In this call, however, we want certbot to stop before checking the challenge, because we have to set up the CNAME entry in the example.com domain.
7 Finally the certbot operation: Issue a certificate.

Executing this call halts just before certbot lets the server check the challenge. Instead, it prints the required CNAME entry for the domain. So, now you have to edit your domain data and add this CNAME.

Figure 5. DNS entry for the ACME-DNS CNAME in NamedManager

After this entry has been added and has propagated through the DNS, let certbot continue.

Manual interaction is only needed for this first certification process. After that, certbot, together with acme-dns-auth.py, can handle the whole process automatically. You can see the setup in /etc/letsencrypt/renewal/example.com.conf.

For the renewal process that means: Put a call to certbot renew into the weekly cron jobs on the machine and forget about it.
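
A minimal sketch of such a weekly cron job; the file name is arbitrary, and the --quiet flag just suppresses output for non-interactive runs. Remember to make the file executable.

Weekly certificate renewal in /etc/cron.weekly/certbot-renew (sketch)
#!/bin/sh
# Renew all certificates which are due; does nothing if none are due yet.
certbot renew --quiet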

Certbot puts the currently valid keys and certificates for a domain into the directory /etc/letsencrypt/live/<domain>. Just use the files in that directory for Apache, Dovecot, Postfix and whatever else.

7.6. Copy files to virtual machines

My suggestion is to run Certbot in a centralized way on one machine (in my current setup: the physical host) and copy the complete directory with certbot’s keys and certificates onto all machines which need one of the certificates.

Note that you need the certificates for all your web servers not only on the virtual machine where the web site runs, but also on the physical host which is responsible for the incoming IPv4 connections to the site. It helps to prevent errors if you just copy the complete data set onto both machines at the same place.

So, just copy the /etc/letsencrypt directory to all virtual machines; do not install certbot there.
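
A minimal sketch of such a copy step, assuming root ssh access and a virtual machine named virthost.example.org (hypothetical); it could be run right after the weekly certbot renew:

Copy the Let’s Encrypt data to a virtual machine (sketch)
rsync -az --delete /etc/letsencrypt/ root@virthost.example.org:/etc/letsencrypt/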

7.7. Planned extensions and corrections

Of all services described in this guide, the Let’s Encrypt setup is the least consolidated one. Some elements are still missing to make it really polished:

  • certbot should run on a dedicated virtual machine and not on the physical host. Implementing it this way was mainly due to experimenting with the whole setup.

  • ACME-DNS should run on its own server. I would place it on the same virtual machine where certbot runs. This way, that special DNS server would only be accessible via IPv6, but the docs say this is no longer a problem in 2020.

  • The copying process must be enhanced. Currently, certbot is run by root, which is not the best idea of all. There should be a dedicated certbot account which runs Certbot and is allowed to copy Certbot’s directory to the other machines. So far, copying the certificates is still a step which is not completely automated in my setup.

  • Some mechanism must restart the services after the new certificates have been rolled out. Currently, the certificate renewal happens at some point between 30 and 22 days before the old certificate expires. Often, system reboots due to automatic updates restart the Apaches or the e-mail services in time. This is not guaranteed, however.

Taking all this into consideration, this section of this guide is the clearest candidate for a larger rewrite.

8. E-mail server

When it comes to a complete personal internet services setup, the single most important service is probably e-mail. It is crucial for many different tasks:

  • Of course, sending and receiving information to and from other people.

  • Having a base for the other services to verify or re-verify authentication - ever tried to reset a password to a service without an e-mail account?

  • Getting information about the health of the own systems or confirmation messages of automatic tasks.

E-mail is important. The question is: Host it yourself or use a service? To be clear: Hosting an e-mail server yourself these days is not a simple task. There is a plethora of - more or less clever - schemes against spamming and fraud, and if your server does not follow them, it is quite likely that outgoing messages will simply be blocked or even thrown away by other systems. If you mess up your setup, the same could accidentally happen to incoming messages. And unfortunately, you will not find a tool which you install out of the box so that you just "have" a mail server. You have to configure multiple programs so that they play nicely together.

The advantages, however, are obvious: It’s your own e-mail on your own server. You are not bound to any storage capacities you have to pay for. You can configure the services as you like. If you misconfigure them, it’s your fault and only your fault.

Having said all this, there is one thing that is absolutely impossible today: Running an e-mail server for a domain without an IPv4 address. While this is technically no problem, there are so many e-mail servers out there which are frankly IPv4-only that you would be more or less cut off from any meaningful e-mail exchange. And due to the aforementioned anti-spam measures, it is also not possible to use port-forwarding from the physical host into the virtual machine for the IPv4 connectivity. At least, I could not figure out how to do this reliably.

So, unfortunately the very first "real" service I describe in this "IPv6-only" installation guide must be dual-stacked with its own IPv4 address.

Remarkably low amount of IPv6 traffic
It is remarkable how small the amount of IPv6 e-mail traffic actually seems to be. This setup creates a mail server which is reachable via IPv4 and IPv6 in an absolutely equal manner. However, the actual share of incoming IPv6 traffic on my own server has been less than 2% as of August 2019. It is a bit devastating…​

8.1. Basic server and network setup

So, here we are. Start with bringing up a fresh virtual machine as described before. I gave my e-mail server a virtual hard disk of 250 GB - of which 60 GB are currently used. It has 6 GB of main memory. This is rather a lot, but in my setup, there are 64 GB of physical memory in the host. It is no problem…​

Perform all steps so that you have your e-mail server-to-be accessible from outside via IPv6 and IPv4 ssh connections with appropriate DNS entries in your domain.

After having prepared the server on network and system level, it is time to actually set up the mail system. And here I must admit that I have not developed my own scheme. Instead, I followed https://thomas-leister.de/en/mailserver-debian-stretch/. While it uses Debian stretch as its base, I had no problem following it on Ubuntu 18.04.

Thomas describes a contemporary e-mail server setup. The server allows for incoming and outgoing traffic. It is set up according to all anti-spam requirements with DKIM, DMARC and SPF entries in DNS. It offers SMTP and IMAP access for clients. It is a multi-user, multi-domain setup. I have been running this since late 2018 without any problems.

8.2. Setup notes and hints

Using Thomas' description is very straightforward. However, I’d like to add some notes.

First and foremost, it helped me tremendously to understand that you should use a totally unrelated domain for the mail server - especially if you plan to serve multiple domains on it. Let’s assume you want to serve example.net and example.com e-mail on your server. My advice for setting this up:

  • Obtain beispiel.de as an independent domain only for e-mail. For my own setup, I obtained a .email domain. It costs a bit more than a simple .de domain, but it’s really nifty…​

  • Use this domain as primary domain in the setup documentation.

  • Let the main mail exchange server, usually mx0.beispiel.de, have A and AAAA DNS entries. After all, this setup works for both IP address spaces.

  • Point the MX entry for any domain you want to serve on your server to that mx0.beispiel.de. Remember - the MX for a domain can be in any domain!

  • After having set up the DNS for the first "real" domain, say example.net, you can simply copy the SPF, DMARC, and DKIM entries from that domain to example.com and whichever other domain you want to serve.

  • Add the IPv4 addresses and the IPv6 network of the whole server to the mynetworks list in /etc/postfix/main.cf (see the sketch after this list). That makes your mail server a smart host for all your other servers. So, every one of your virtual servers - and even the physical machine - can send e-mail without any further setup steps. You just have to add the mail server as smart host (or forwarding host) for all outgoing e-mail.

  • Regarding the DNS setup of the mail server itself: This is the only one of your virtual servers which should not use the DNS64/NAT64 setup of the physical host! This server is fully connected to IPv4, so it can reach all IPv4 servers directly. It also must do so, otherwise outgoing IPv4 connections would come from the "wrong" IPv4 address and the whole anti-spam mayhem would start again. The mail server setup instructions suggest installing a small DNS server on the mail server itself. Just follow those instructions and you’re done.
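
Here is a sketch of what the mynetworks line could look like, with placeholder addresses and the example IPv6 network used elsewhere in this guide (Postfix expects IPv6 networks in square brackets):

Example mynetworks entry in /etc/postfix/main.cf (sketch)
mynetworks = 127.0.0.0/8 [::1]/128 <IPv4 address of the physical host>/32 <additional IPv4 addresses>/32 [2a01:4f8:1:2::]/64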

Setting things up like this makes the further DNS setup rather simple: All your mail clients connect to the dedicated mail server domain, in the example mx0.beispiel.de, also those for example.net, example.com or whatever. This way, you only need SSL certificates for the mail domain.

8.3. Consistency of DNS entries

Two things are absolutely crucial regarding the name service setup of the mail server:

  1. The DNS and reverse DNS entries for the mail exchange server must be consistent! The A and AAAA record for mx0.beispiel.de must both point to the virtual server in your setup, and the PTR reverse entries of the IP addresses must point back to mx0.beispiel.de. This is the main reason why we need that additional IPv4 address and why I strongly recommend using a dedicated domain for this. If you fail in this setup step, you will have massive problems with outgoing e-mail being spamblocked (and no one willing to help you…​).

  2. It is also important that the myhostname setting in /etc/postfix/main.cf contains the fully qualified domain name of that mail server, i.e. mx0.beispiel.de in this example. The whole mail server is mainly known in DNS by its mail domain name.

8.4. Setup of mail clients

As I already explained, the mail domains which are hosted on the system can have totally different names! No PTR records are needed for them either. For each mail domain, the following steps must be done:

  • An MX entry in DNS must exist and point to mx0.beispiel.de.

  • The domain and the users must be entered in the data storage for postfix and dovecot (no reboot required after changes). Just follow the setup description of Thomas Leister above.

  • Users set the login name in their mail programs to e.g. username@example.net.

  • They should, as already explained, set the IMAP and SMTP server name to the name of the mail server, e.g. mx0.beispiel.de, not to anything in their actual domain! Even if there are DNS entries pointing to the mail server, there would be massive certificate warnings as the SSL certificates do not contain the mail domains, but only the domain of the mail server.

8.5. Recap

I have been using this setup since autumn 2018 without any problems. I have also added a number of domains to my server. This takes only minutes, works without problems, and covers all kinds of setups like local delivery, delivery to multiple addresses, forwarding, and catch-all addresses. It is a really fine thing.

You might miss a webmail interface. I have decided not to add that to the mail server, but to one of the other virtual web servers. Setup of those is covered in the next chapter.

9. Web server

I won’t describe how to set up and configure an Apache or Nginx web server. There is a plethora of such guides on the internet. I will only cover the one small difference between our setup here and the everyday server out there:

Our web server in a virtual machine does not have IPv4 connectivity.

Of course, you can say: "I don’t care. People should use IPv6 or they miss out on the service." If you take that stance, you’re done. The server works as expected as long as all connections can be established via IPv6.

Unfortunately, things are not that simple in many cases. There are still many installations which have only IPv4 connectivity, even in 2020. Even if a network is connected via IPv6, it does not mean that every device can use the protocol. E.g. many microcontrollers (like the ESP8266) only have an IPv4 protocol stack implemented. So, an IPv6-only web server is in itself no problem, but even in 2020 it will not be generally reachable.

On the other hand, the whole idea behind this setup is to have IPv6-only virtual machines. It would not help much to have an IPv6-only setup - only to then add IPv4 connectivity to each machine because of the services running on it.

Fortunately, for HTTP-based services, there is an elegant solution to this problem: We use a relay which takes IPv4 traffic and forwards it to the IPv6 machine. To be more precise: We use our physical host, with a web server configured as reverse proxy, to relay incoming IPv4 connections to the servers on its virtual machine(s). This way, we split incoming connections onto different servers based on the protocol:

  • IPv6 clients connect directly to the web server on the virtual machine.

  • IPv4 clients connect to the physical host via IPv4, which forwards the connection to the virtual machine via IPv6.

Figure 6. Connect to web server via IPv6 or IPv4

9.1. Configure appropriate DNS entries

For this scheme to work, the DNS entries must be set up in a special way: For a web site www.example.org, you define an A record which points to the physical host but an AAAA record which points to the virtual machine.

After all, this is the main idea behind everything which is following now.

Different machines for the same name
Note that you should really understand what happens here: www.example.org will lead you to different machines depending on the IP protocol version you use. If you connect to the machine with this name e.g. via ssh (for maintenance purposes), you might end up on the physical host if your workstation has no IPv6 connectivity! I really strongly suggest that you train yourself to never ever use the DNS names of the web sites for such purposes. It’s virthost.example.org which carries www.example.org - and if you need to access the machine, you connect to virthost, not to www.

For the following examples, we assume that the following DNS entries exist:

Existing DNS entries
physical.example.org A 1.2.3.4  # IPv4 address of the physical host

physical.example.org AAAA 11:22:33::1  # IPv6 address of the physical host
virtual.example.org AAAA 11:22:33::44  # IPv6 address of the virtual machine

9.1.1. Direct DNS records for the web sites

This is the simple approach: You just put direct A and AAAA records for the web servers. Assuming the entries for the hosts as described above you define the following entries:

Direct DNS entries for HTTP services
www.example.org A 1.2.3.4          # IPv4 address of the physical host
www.example.org AAAA 11:22:33::44  # IPv6 address of the virtual machine
blog.example.org A 1.2.3.4         # same addresses for other services on the same host
blog.example.org AAAA 11:22:33::44

This way, the appropriate name resolution is performed for each protocol and leads directly to the web server in charge.

9.1.2. CNAME entries

The direct approach is very straightforward but has some drawbacks. You do not see at first sight that these addresses are somehow "special" in that they point to different servers depending on the protocol. And, of course, if for any reason an IP address changes, it must be changed in all entries individually.

Therefore, you can follow a two-step approach: Define A and AAAA entries once for the virtual/physical address pairs and reference them for the individual addresses via CNAME entries:

DNS entries for HTTP services via CNAMEs
virtual64.example.org A 1.2.3.4          # IPv4 address of the physical host
virtual64.example.org AAAA 11:22:33::44  # IPv6 address of the virtual machine

www.example.org CNAME virtual64.example.org
blog.example.org CNAME virtual64.example.org

This way, you can even establish the convention to have such an entry with a certain name for each virtual machine, here <machinename>64 for <machinename>.

9.1.3. Common SSL certificates

Clients may establish HTTPS connections via IPv6 and IPv4. Therefore, the HTTP servers on both physical host and virtual machine must have the keys for the domains. You can run the web-server-based certification schemes on the virtual machine or use the DNS-based certification scheme.

Finally, the SSL certificates for your web site must be available on both the virtual machine and the physical host. We assume the certificates below /etc/letsencrypt.

9.2. Setup on the virtual machine

Install apache2. Then, enable the SSL and rewrite modules using a2enmod ssl rewrite. They are needed for encryption and the mandatory redirect from http to https.

Create the configuration, e.g. in /etc/apache2/sites-available/testsite.conf:

Apache configuration on the virtual machine
<VirtualHost *:80>
    ServerAdmin admin@example.org
    ServerName www.example.org

    # Rewriting everything to SSL
    RewriteEngine on
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin admin@example.org
    ServerName www.example.org

    # SSL certificate location
    SSLCertificateFile /etc/letsencrypt/live/example.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # Logging
    ErrorLog ${APACHE_LOG_DIR}/www.example.org-error.log
    CustomLog ${APACHE_LOG_DIR}/www.example.org-access.log combined

    # Actual content definitions
    DocumentRoot /usr/local/webspace/www.example.org
    <Directory /usr/local/webspace/www.example.org>
        Require all granted
        AllowOverride All
    </Directory>
</VirtualHost>
</IfModule>

Enable the website using a2ensite testsite.conf. Now, it is available via IPv6.

9.3. Setup the IPv4 proxy on the physical host

Install Apache here, too. In addition to SSL and rewrite, you have to enable the proxy modules: a2enmod ssl rewrite proxy proxy_http.

Create the site configuration, also in /etc/apache2/sites-available/testsite.conf:

Apache configuration on the physical host
<VirtualHost *:80>
    ServerAdmin admin@example.org
    ServerName www.example.org

    # Rewrite everything to SSL
    RewriteEngine on
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin admin@example.org
    ServerName www.example.org

    # SSL certificate stuff
    SSLCertificateFile /etc/letsencrypt/live/example.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.org/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # Proxy settings
    SSLProxyEngine on
    ProxyRequests Off
    ProxyPass / https://www.example.org/
    ProxyPassReverse / https://www.example.org/

    # Logging
    ErrorLog ${APACHE_LOG_DIR}/www.example.org-error.log
    CustomLog ${APACHE_LOG_DIR}/www.example.org-access.log combined
</VirtualHost>
</IfModule>

Enable it with a2ensite testsite.conf. Now, your website is also available via IPv4.

Note that the Apache on the physical host resolves its proxy target simply as www.example.org. This works because, by specification, IPv6 addresses are preferred over IPv4 addresses during name resolution. This way, the physical host actually forwards the incoming request to the real server on the virtual machine.

Note that we define the HTTP-to-HTTPS redirection for IPv4 directly on the physical host and do not forward it to the HTTP definition on the virtual machine. This way, the number of proxied requests is reduced.

10. Cryptpad instance

Cryptpad is a browser-based collaborative document management system which stores all information encrypted on the server. So, even administrators of the server system cannot read the information unless they have a key for access.

We install a complete self-contained Cryptpad server on a virtual machine. It is located in one dedicated user account and proxied with an Apache server which takes care of the SSL transport encryption.

In our IPv6-first setup, there is one special problem: Cryptpad uses an additional HTTP/Websocket connection which must be forwarded. Therefore, we modify the setup a bit:

  • Apache is a proxy-only on both the physical host and the virtual machine.

  • The actual cryptpad service runs on port 3000.

  • Access to the "raw" cryptpad is restricted to the local network only.

  • The forwarding Apache on the physical host does not forward to the Apache on the virtual machine, but directly to the cryptpad service.

10.1. Setup firewall and service name

To restrict access to port 3000, we need a firewall. Ubuntu comes with ufw which does the job. Install it with apt install ufw. Then perform the following commands for the basic setup:

Basic setup of ufw
ufw allow from 2a01:4f8:1:2::/64 to any port 3000  # Local IPv6 network
ufw allow OpenSSH  # very important, otherwise you are locked out
ufw allow "Apache Full"  # meta-rule for the Apache on the virtual machine

If you have other services on the virtual machine, add their respective ports with more allow commands. You can get the list of installed application packages with ufw app list. Note that you do not need a local Postfix instance for cryptpad.

ufw stores the actual firewall rules in files below /etc/ufw. The rules applied above get stored in /etc/ufw/user6.rules. You should not tamper with that file; stay with ufw commands for configuration. If everything is in place, change /etc/ufw/ufw.conf and set

Enable ufw in /etc/ufw/ufw.conf
ENABLED=yes

To check that everything works as expected, run ip6tables -L, which should show empty chains at this point. Start ufw with systemctl restart ufw and run ip6tables -L again. Now you should see a rather lengthy list of rule chains and rules, among them the rules regarding port 3000 you entered above.

Test that you can reach the Apache server on the system.

You should now add the DNS name of your cryptpad instance. Remember what we said about system names and service names: The virtual machine Cryptpad will run on is not accessible via IPv4 directly. Therefore, you need a proxy on the IPv4-accessible physical host of your installation. As a result, the DNS entries for accessing your Cryptpad instance will point to different servers in their A and AAAA records. To avoid confusion, use the Cryptpad service entries only for accessing the service and use the name of the virtual machine for maintenance.

This said, add the appropriate entries to your DNS records. We will assume cryptpad.example.com as name for the Cryptpad service in this guide.

10.2. Install cryptpad

Cryptpad is open-source software. Its producers offer storage space on their own Cryptpad servers as a business model. Therefore, they are not overly eager to promote independent installations. Nevertheless, it is no problem to run and maintain a self-hosted installation of the software as long as you have some idea of what you are doing.

Start by creating a user "cryptpad" with adduser --disabled-password cryptpad. Add your public key to its .ssh/authorized_keys file so that you can log into the account.
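
A short sketch of these two steps, assuming your public key is already present in /root/.ssh/authorized_keys on the virtual machine (adapt the source of the key to your situation):

Create the cryptpad user and allow ssh logins (sketch)
adduser --disabled-password cryptpad
mkdir -p /home/cryptpad/.ssh
cp /root/.ssh/authorized_keys /home/cryptpad/.ssh/authorized_keys
chown -R cryptpad:cryptpad /home/cryptpad/.ssh
chmod 700 /home/cryptpad/.ssh
chmod 600 /home/cryptpad/.ssh/authorized_keys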

As user cryptpad, you install some software packages needed by Cryptpad:

  • First is nvm, follow the instructions on https://github.com/nvm-sh/nvm.

  • Log out and in again to have your user-local nvm accessible.

  • Now install Node.js. In late 2018, Cryptpad used version 6.6.0, so the command is

Install Node.js 6.6.0
nvm install 6.6.0

This installs the Node package manager npm and sets paths correctly. Check it with

Path to npm
$ which npm
/home/cryptpad/.nvm/versions/node/v6.6.0/bin/npm
  • Finally you need bower. Install it with

Install bower
npm install -g bower

Now you are ready to actually install Cryptpad.

Stay as user cryptpad and start by obtaining the software:

Download Cryptpad
$ cd $HOME
$ git clone https://github.com/xwiki-labs/cryptpad.git cryptpad

This installs Cryptpad right from its Github repositories. It’s the default way of installing an independent Cryptpad instance. Installing this way has the big advantage that updating to a newer Cryptpad version is a simple git pull, followed by the instructions in the version announcement.

Now you perform the basic setup:

Basic Cryptpad setup
$ cd $HOME/cryptpad
$ npm install
$ bower install

These are also the routine steps after an update. Note that especially the npm install step seems to download "half of the internet". This is expected behaviour. Cryptpad comes from the Javascript/Node.js sphere, and those folks love to use a plethora of library packages which themselves use another plethora of library packages. Fortunately, subsequent updates will only touch a subset of these libraries…​

After installation comes configuration:

Configuration of Cryptpad
$ cd $HOME/cryptpad/config
$ cp config.example.js config.js

Now you edit config.js. It’s a Javascript file and there are three important changes to be made:

Important changes in Cryptpad configuration
var _domain = 'https://cryptpad.example.com/';
[...]
httpAddress: '<public IPv6 address as in "ip -6 a" of the virtual machine>',
adminEmail: false,

We configure Cryptpad to use its domain name in the _domain variable. httpAddress is the actual address Cryptpad starts its own HTTP server on. To be sure that this happens on the correct interface, we use the actual IPv6 address here.

After this step, Cryptpad is configured completely but not yet started. We come back to this a moment later.

As a final step, you should remove the $HOME/cryptpad/customize subdirectory if you do not really need it. It is not touched during updates and might therefore carry outdated information afterwards.

10.3. Integrate Cryptpad into systemd

Usually, Cryptpad is a service which runs permanently. Therefore, it should be started on system startup. For systemd-controlled servers, such as any modern Debian or Ubuntu installation, add a systemd service unit file:

systemd service unit in /home/cryptpad/cryptpad.service
[Unit]
Description=CryptPad service

[Service]
ExecStart=/home/cryptpad/.nvm/versions/node/v6.6.0/bin/node /home/cryptpad/cryptpad/server.js
WorkingDirectory=/home/cryptpad/cryptpad
Restart=always
User=cryptpad

[Install]
WantedBy=multi-user.target

Symlink this file into /etc/systemd/system:

ln -s /home/cryptpad/cryptpad.service /etc/systemd/system
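
Depending on the systemd version, you may have to make systemd aware of the newly linked unit file first:

systemctl daemon-reload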

Now you can start the cryptpad service:

# systemctl start cryptpad
# systemctl enable cryptpad

The Cryptpad server should now be reachable on the virtual machine and on the physical host on its port 3000. Test it with curl http://cryptpad.example.com:3000/ on both systems. From any other computer, the service must not be reachable due to the firewall blocking the access. Do test this, too!

10.4. Apache on virtual machine

To make Cryptpad accessible from the outside, we configure an Apache proxy. For Cryptpad, it needs to proxy websockets, so (as root) run the command a2enmod proxy_wstunnel.

Then, the Apache configuration is rather straightforward:

Apache configuration on the virtual machine in /etc/apache2/sites-available/cryptpad.conf
<VirtualHost *:80>
    ServerAdmin myname@example.com
    ServerName cryptpad.example.com

    # Rewrite everything to SSL
    RewriteEngine on
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin myname@example.com
    ServerName cryptpad.example.com

    # SSL certificate stuff
    SSLCertificateFile /etc/letsencrypt/live/cryptpad.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/cryptpad.example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf

    # Proxy settings, "2a01:4f8:1:2:5054:ff:fe12:3456" is the IPv6 address of the virtual machine
    ProxyPass /cryptpad_websocket  ws://[2a01:4f8:1:2:5054:ff:fe12:3456]:3000/cryptpad_websocket
    ProxyPreserveHost  On
    ProxyPass /        http://[2a01:4f8:1:2:5054:ff:fe12:3456]:3000/
    ProxyPassReverse / http://[2a01:4f8:1:2:5054:ff:fe12:3456]:3000/

    # Logging
    ErrorLog ${APACHE_LOG_DIR}/cryptpad-error.log
    CustomLog ${APACHE_LOG_DIR}/cryptpad-access.log combined
</VirtualHost>
</IfModule>

After that, activate the virtual host:

a2ensite cryptpad
systemctl reload apache2

If everything comes up without errors, you can access your Cryptpad from any IPv6-connected computer. Check that loading a pad actually works, otherwise there is a problem with the forwarding rule for the websocket.

10.5. Apache on physical host

The Cryptpad instance is not yet accessible for IPv4 clients. For this, you need another Apache proxy on the physical host. The very nice thing here is that it can be configured exactly like its companion on the virtual machine! So, on the physical host, as root, do this:

  • Enable the websocket proxy module with a2enmod proxy_wstunnel.

  • Copy /etc/apache2/sites-available/cryptpad.conf from the virtual machine to the physical host at the same location.

  • Take care that the SSL keys are located at the correct position.

  • Enable the host with a2ensite cryptpad.

  • Activate the configuration with systemctl reload apache2.

Now, Cryptpad can also be reached from IPv4-only hosts.

Note that on the physical host, you forward to port 3000 on the virtual machine. This is the reason why the port must be reachable from the physical host. Port 3000 on the physical host is totally unaffected by all of this and, in fact, you could install another, independent service there without breaking your Cryptpad on the virtual machine.

Special tasks and setups

While IPv6 has been around for two decades now, there are still tasks for which it is difficult to get decent documentation on the internet. As an IPv6-first setup like this one raises a lot of issues, this section collects the results of such investigations.

11. Testing IPv4 connectivity from an IPv6-connected host

If you work with this setup, you usually will have an IPv6-connected workstation so that you can access your virtual machines without any proxying. That makes it a bit complicated to actually test the IPv4 connectivity of services as IPv6 is by definition always the preferred protocol - if it is available.

At least on Linux systems, switching off IPv6 for testing purposes is not difficult, fortunately:

  • Close all (IPv6) ssh etc. connections! They will be stalled anyway if you turn off IPv6.

  • Check with ip -6 a that you (still) have IPv6 addresses.

  • Perform echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6. This disables IPv6 completely on this computer immediately.

  • Check with ip -6 a that you have no IPv6 addresses any more.

Any and all connections from this computer are performed via IPv4 now. Remember that ssh-ing into your virtual machines is not possible any more! You have to ssh to your physical host (or any other IPv6-connected machine) and open a connection to the virtual machine only from there.

To reenable IPv6, perform the following steps:

  • Perform echo 0 > /proc/sys/net/ipv6/conf/all/disable_ipv6. This reenables IPv6.

  • As you usually get your IPv6 addresses via router advertisements, you will not have IPv6 routing again immediately. You might have to shut down and restart your network devices for that.

It might even be that some services do not behave well if their network connectivity is cut off so harshly. You might need to reboot the system to get everything in order again. Fortunately, writing to /proc files with echo is always a volatile operation. After a reboot, everything about turning off IPv6 this way will be forgotten.
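
The same switch can also be flipped with sysctl instead of echoing into /proc directly; the effect is identical and equally volatile:

Toggling IPv6 with sysctl
sysctl -w net.ipv6.conf.all.disable_ipv6=1   # disable IPv6
sysctl -w net.ipv6.conf.all.disable_ipv6=0   # re-enable IPv6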

12. Static routes between servers

Sometimes, hosts are interconnected directly in addition to the usual link to the network’s general infrastructure. E.g. a database cluster could get such an additional link between the clustered systems. This way, traffic between the machines is undisturbed by any general traffic and is additionally kept strictly inside the cluster.

Figure 7. Routing with an additional link

With IPv6, such a scheme is extremely simple to set up. Let’s assume we have three servers phyA, phyB, and phyC. Every host has an assigned /64 network and two network cards: eth0 is connected to the "general infrastructure" and used for accessing the internet, eth1 is connected to a local switch which interconnects all the systems.

If you set up everything as written in this guide, you will have eth0 on the hosts as the default path for the packets. And as the servers do not know any better, they will connect to each other via this path. The local switch is not used.

To use it, you must add a static route to the setup. E.g. phyA must know: "To connect to anything in the 2001:db8:b:b::/64 or in the 2001:db8:c:c::/64 network, route the packet via eth1." With IPv6, you define such routings via link-local addresses and devices.

Let’s start with the address. "Link-local" addresses are an IPv6 specialty which has no real counterpart in the IPv4 world. Each IPv6-capable network interface automatically assigns itself a "link-local unicast" address, which is usually derived from the MAC address and therefore unique in practice. It starts with fe8 to feb. In our example, the eth1 network card in phyB got the link-local address fe80::ff:fee1:b.

This address cannot be used to access the server from the outside as link-local addresses are never routed between networks. However, it can be the target of a local route, e.g. over an additional local switch like in this example. It is possible and sensible to say: "The routing target network 2001:db8:b:b::/64 can be reached via fe80::ff:fee1:b."

If you configure such a route, however, the routing source must know how to reach that via address. The address alone does not help, as every network card in the routing source sits in an fe[8-b] network of its own - the kernel cannot tell over which link the target can be reached. The route can only be used if it is additionally bound to a network card.

And this is precisely what we do: On phyA (and phyC) we configure: "The routing target network 2001:db8:b:b::/64 can be reached via fe80::ff:fee1:b on device eth1."
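
Before making this permanent, it may help to see the idea expressed as plain ip commands. A sketch using the example addresses from above - such a manually added route is volatile and disappears with the next reboot:

Finding the link-local address and adding the route manually
# On phyB: show the link-local (scope link) address of eth1
ip -6 addr show dev eth1 scope link

# On phyA: route phyB's network via that link-local address, bound to eth1
ip -6 route add 2001:db8:b:b::/64 via fe80::ff:fee1:b dev eth1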

12.1. Manual configuration with Netplan

In Netplan, this configuration is very simple to achieve:

Netplan configuration with static route on phyA (and phyC)
network:
  [...]
  ethernets:
    [...]
    eth1:
      dhcp6: no
      routes:
        - to: 2001:db8:b:b::/64
          via: fe80::ff:fee1:b

It is important that eth1 is mentioned in the configuration at all, otherwise it will not be brought up. Then, the route is simply added. netplan try or netplan apply activates it, and of course it will be set up again after a reboot.
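
After applying, you can check with ip -6 route that the route has actually been installed. Roughly - the exact metric and flags may differ - it should look like this:

Checking the static route on phyA
ip -6 route show 2001:db8:b:b::/64
# 2001:db8:b:b::/64 via fe80::ff:fee1:b dev eth1 proto static metric 1024 pref medium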

You configure phyB the same way, but with the network and the link-local via address of phyA:

Netplan configuration with static route on phyB (and phyC)
network:
  [...]
  ethernets:
    [...]
    eth1:
      dhcp6: no
      routes:
        - to: 2001:db8:a:a::/64
          via: fe80::ff:fee1:a

12.2. Auto-configuration with radvd

This does not work with Ubuntu 20.04
The systemd-networkd included in Ubuntu 20.04, which is behind the Netplan-based network configuration, does not handle the advertisement messages correctly: it either drops the routes once their first advertisement expires or does not receive them in the first place. It’s a mess…​

Configuring the additional static routes on all systems explicitly can become rather tedious. Fortunately, if you have radvd running on the system - as we have on our physical hosts - it can also do this job. The idea is that each system which is connected to the local switch just announces itself as gateway to its network. On phyA, the according definition looks like this:

radvd configuration for eth1 on phyA
interface eth1 {
  AdvSendAdvert on;
  AdvDefaultLifetime 0;
  route 2001:db8:a:a::/64 {
    AdvRouteLifetime infinity;
  };
};

With AdvSendAdvert we enable information advertisement generally. AdvDefaultLifetime 0 declares that this system is not a default router - which is important as it can only handle packets for its own network. Finally, route declares the route to the network of this system. AdvRouteLifetime infinity declares this route to be valid forever.
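
If you want to check that the advertisements actually go out on the local link, the rdisc6 tool from the ndisc6 package (which may have to be installed first) can solicit and print them from another host on the switch. The reply from phyA should contain a route option for 2001:db8:a:a::/64 with infinite lifetime and a router lifetime of 0:

Soliciting router advertisements on phyB
apt install ndisc6   # if not yet installed
rdisc6 eth1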

Equipping all servers connected to the local switch with such a configuration makes every server reach every other server through its eth1 link. The routing table of phyA will look like this:

Routing table parts of phyA
2001:db8:b:b::/64 via fe80::ff:fee1:b dev eth1 proto ra metric 1024 pref medium
2001:db8:c:c::/64 via fe80::ff:fee1:c dev eth1 proto ra metric 1024 pref medium
[...]
default via fe80::1 dev eth0 proto static metric 1024 pref medium
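
To verify that a given destination actually takes the path over the local switch, ip -6 route get shows which route the kernel selects. A quick check, using a made-up address from phyB’s network:

Checking which path phyA chooses
ip -6 route get 2001:db8:b:b::1
# 2001:db8:b:b::1 from :: via fe80::ff:fee1:b dev eth1 proto ra src [...] metric 1024 pref medium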

12.3. Considerations

IPv6 follows some routing concepts which are rather different from what’s usual with IPv4. Using the link-local IPv6 addresses of the interfaces has significant advantages over the usual "private address" approach of IPv4 in such situations: On IP level, it is totally transparent over which interface the packets are sent. phyB is always phyB, regardless of whether it is contacted from phyA or from anywhere else. Only phyA, however, will use the local switch for connecting, while everyone else will go over the global infrastructure. And on phyA, you do not have to think about routing issues. You just connect to phyB and in the background, routing is done through the appropriate interface.

Epilogue

13. Experiences and lessons learned

I have been using the setup described here since October 2018 for my own internet server which serves multiple web sites, my complete private e-mail and a number of other services. The system works very reliably; I only had severe problems once, when Ubuntu published a broken kernel which crashed the system constantly. Apart from that, routing, networking in general and the whole virtualisation have not caused any problems.

Very early in the process, I learned that even in 2018, you cannot survive on the internet without IPv4 connectivity. In the beginning, I even planned to go without the NAT64/DNS64 setup, but too many services are unreachable then, as they are only accessible via IPv4.

I also had to relax my "no IPv4" policy for the virtual machines. E-mail servers, e.g., simply cannot be made available without direct IPv4 connectivity - otherwise IPv4-only mail servers will not deliver any e-mail to them. The same holds true for more sophisticated services like video conferencing servers. Therefore, adding an IPv4 address to the virtual machines is needed far more often than I had hoped. However, it is important that this is always only an "on top" connectivity. Everything continues to work if you remove the IPv4 infrastructure - it is only unreachable from the "IPv4 internet" then. Hopefully, in the not too distant future this becomes less and less of an issue so that a true IPv6-only setup can be used without any disadvantages.

14. Maturity of configurations

This guide describes several configurations with different levels of maturity. This is the state of maturity as of August 2020:

  • Physical host

    • Hetzner Online: Almost all systems configured this way are located at Hetzner’s data center.

      Ubuntu 20.04

      The guide has been used for several installations and everything seems to work fine. Usable for production systems as long as no router advertisements are used.

      Ubuntu 18.04

      In use on production systems for almost two years. Flawless. Use this for setups which "just have to run", and use this if in doubt what to use.

      Ubuntu 16.04

      Still works but should not be used any more. Support ends in spring 2021.

    • Local network: I have also installed one system using this setup in a local network.

      Ubuntu 18.04

      Works. It is also the only one I’ve tried.

  • Virtual machines

    Ubuntu 20.04

    Works out of the box. Notably, there are no problems with IPv6 router advertisements here. Strange world…​

    Ubuntu 18.04

    Works out of the box.

    Windows 10

    Works out of the box.


1. This guide often refers to Ubuntu 18.04. In fact, installation works the same on both versions if not explicitly stated otherwise.