This series of posts is about installing the Proxmox hypervisor onto a low-powered server and setting up various internal web services with SSL and single sign-on using FreeIPA.

Initial Installation

The Proxmox install is a simple graphical install where the majority of the default options are used. The installer will require you to set a root password and email address. Once the install is finished this will be the only account available and is used both for local shell login and for logging in to the Proxmox web UI. Proxmox will email the supplied address from the built-in Postfix server should it detect a fault such as SMART errors.

The Proxmox installer also requires that a network device is set up. This needs to be a device that is connected to the machine being used to initially configure the hypervisor, as none of the other network devices are made active. After installation the other network devices can be set up, but a reboot is required for the changes to be applied.

Proxmox configuration

Now that the install is complete the configuration can be done by visiting the HTTPS interface on port 8006 of the IP address set up during installation. The UI can be logged into using the created root user and the PAM realm. As the node doesn't have a paid subscription there will be a message after login about no subscription; this can be ignored, and the more adventurous can remove it.

While logged in as the root user you can configure the settings on the overall datacenter and the individual nodes. Prior to updating, the repositories must be amended to pull from subscription-free repositories. This is done by selecting the Shell tab in the node and then opening the apt pve sources file with:

nano /etc/apt/sources.list.d/pve-enterprise.list

and altering the source to:

#deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

To update the node, select the node and navigate to Updates. Selecting Refresh and then Upgrade will update the node. During the upgrade a shell is opened, which can be closed once the upgrade completes.
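The same update can also be run directly from the node's shell; Proxmox expects a dist-upgrade rather than a plain upgrade:

# refresh the package lists and apply all pending updates
apt update
apt dist-upgrade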

Once the node is up to date additional network devices can be set up. Selecting the System -> Network tab of the node shows the network devices. Network devices cannot be directly connected to virtual machines or LXC containers; to use them, Linux Bridges must be created and the network devices assigned to them as bridge ports. Only one default gateway can be set across the bridges and this is used by the node to fetch updates. Any bridge with an IP address assigned can be used to access the Proxmox web UI at that address. Once the network changes have been made the node must be rebooted to apply them.
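Behind the UI each bridge ends up in /etc/network/interfaces on the node. A minimal sketch of a second bridge, assuming a spare NIC named eno2 and an address on a second subnet (both placeholder values), and deliberately without a gateway since only one default gateway is allowed:

auto vmbr1
iface vmbr1 inet static
        address 192.168.2.2
        netmask 255.255.255.0
        bridge_ports eno2
        bridge_stp off
        bridge_fd 0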

In order to allow for finer-grained control, additional users can be created with different levels of permissions. A user is created in the Datacenter section under the Permissions -> Users tab and can be temporary or permanent. Groups are created in the corresponding Permissions -> Groups tab. Once a user is created, permissions can be assigned at either a user or group level. This is done in the Permissions tab, where Path selects the scope of the permission and '/' is the full datacenter. What each assigned role allows can be seen under Permissions -> Roles.
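The same can be scripted from the node's shell with pveum; a sketch assuming a made-up user alice in the Proxmox VE realm and a made-up admins group:

# create a group and a user in the Proxmox VE realm, then set a password
pveum groupadd admins -comment "Administrators"
pveum useradd alice@pve -comment "Alice"
pveum passwd alice@pve
pveum usermod alice@pve -group admins

# grant the group the built-in Administrator role over the whole datacenter
pveum aclmod / -group admins -role Administrator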

VM setup

Once the node configuration is complete, ISOs and container templates must be loaded onto the node to be used for virtual machine and container installations. Container templates can be downloaded onto the node from the Content tab in the local storage section. From there the Templates button can be used to select container templates from the Proxmox library. Similarly the Upload button can be used to upload ISOs from the local machine. Alternatively, ISOs can be downloaded directly onto the node using wget from the node's shell into /var/lib/vz/template/iso.
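Both can also be fetched from the node's shell; a sketch where the exact template and ISO names are illustrative and will change over time:

# refresh the template catalogue and download an LXC template to local storage
pveam update
pveam available --section system
pveam download local ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz

# download an ISO straight into the ISO storage directory
cd /var/lib/vz/template/iso
wget https://releases.ubuntu.com/18.04/ubuntu-18.04.6-live-server-amd64.iso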

Creating Virtual Machines

The virtual machine can be created from the uploaded ISOs using the Create VM button on the top bar. This will open an initial dialog box where the VM name and unique ID can be set. The unique ID cannot be altered after the VM is created, and the VM name is used as the hostname when cloud-init is used.

Initial VM option selection

The following screen allows the ISO to use for the installation to be selected and the type of OS being installed to be set, so that suitable KVM options can be applied. The subsequent screens allow the disk size, CPU and RAM allocation and network to be chosen. Most settings can be left at their defaults; only the disk size, CPU cores, memory and network bridge need to be set.

VM OS selection

When creating a VM which has a GUI, the noVNC console has a default resolution of 800x640. In order to alter the resolution the display type needs to be set to 'vmware' in the VM Hardware tab. Additionally the VM requires the QEMU guest agent to be installed; on Linux this can be done with:

sudo apt install qemu-guest-agent -y
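The display type can also be changed from the node's shell rather than the Hardware tab; a sketch using a placeholder VM ID:

# switch the emulated display adapter to the vmware-compatible type
qm set <VM-UID> --vga vmware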

Once the QEMU guest agent is installed it should also be enabled in the Options tab. This allows the VM to be shut down more gracefully and displays the VM's current IP addresses in the web UI. Since Proxmox 5.1 the xterm.js terminal has been available for VMs. This terminal runs as JavaScript in the browser and allows copying, pasting and resizing of the terminal. To enable it, a virtual serial port needs to be added to the VM; this can be done from the node's shell with:

qm set <VM-UID> -serial0 socket

The VM then needs to be configured to output to a serial terminal by editing the grub configuration at /etc/default/grub:

# add the serial console to the GRUB_CMDLINE_LINUX parameter
GRUB_CMDLINE_LINUX="quiet console=tty0 console=ttyS0,115200"
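For the new kernel command line to take effect the grub configuration must be regenerated and the VM rebooted:

sudo update-grub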

Creating Containers

LXC containers offer a compromise between the security of a VM and the resource usage of a program running directly on the host. LXC has the benefit of fast spin-up times and an idle RAM usage of just 36 MB for an Ubuntu 18.04 container. Because containers run as namespaces of the host, various parameters can be altered while the container is running: the CPU cores, RAM size and network address can be changed and the disk size increased. However, the DNS settings can only be altered while the container is stopped, and a container cannot be switched between privileged and unprivileged from the UI.

Privileged containers run processes with the same UIDs on the host as in the container; processes running as root in the container therefore have root privileges on the host, so they are not considered secure against privilege escalation from within the container. Unprivileged containers have their UIDs and GIDs mapped to user-level values on the host and stricter regulation of cgroup access, so they are considered secure as long as the host has up-to-date security patches. While containers allow many services to be run with limited resources, they have drawbacks caused mostly by these security policies. For example, a default Kerberos realm attempts to assign UIDs beyond the standard mapped range of an unprivileged container, and Docker and FreeIPA are not functional unless run in a container where most of the AppArmor settings are disabled.

LXC containers are created using the Create CT button, which opens a dialog box where the unique ID, hostname, password and SSH keys can be set. Additionally the container can be made unprivileged, with the UID and GID mapping described above meaning it has no access to root on the host. The hostname must be a valid DNS name and is used as the name of the container. The container will be created with only a root user.

Initial LXC option selection

The following pages allow the CPU, RAM and disk allocation to be set. On the network page the internal interface name and connected bridge can be set. In the IP address section, if a static address is not entered and the radio button is not switched to DHCP, the interface will not receive an IP address.

LXC network selection
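The same container can also be created from the node's shell with pct; a minimal sketch assuming the Ubuntu template downloaded earlier, the ID 200, DHCP networking and the vmbr0 bridge (all placeholder values):

# create a small unprivileged Ubuntu container with DHCP networking
pct create 200 local:vztmpl/ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz \
    --hostname test-ct \
    --unprivileged 1 \
    --cores 1 --memory 512 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp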

Using Cloud-Init

Cloud-init allows a template to be created from which new VMs can be cloned with various parameters set at creation time. A cloud-init template is created by first downloading a cloud image; for Ubuntu 18.04 this can be done from the node's shell:

# download the image
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

A VM then needs to be created without a hard drive.

# create a new VM
qm create 9000 --memory 2048 --net0 virtio,bridge=vmbr0

The downloaded image is then converted into the hard drive for the VM.

# import the downloaded disk to local-lvm storage
qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm

# finally attach the new disk to the VM as scsi drive
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-1

A cloud-init drive is then added to allow parameters to be transferred into the VM, and a serial terminal is set up as required.

# add cloud-init drive to the VM
qm set 9000 --ide2 local-lvm:cloudinit

# set the scsi disk as the sole boot device
qm set 9000 --boot c --bootdisk scsi0

# add a serial port to the VM and set the terminal to serial
qm set 9000 --serial0 socket --vga serial0

Once the initial VM is set up, certain other improvements can be made to the template. The original disk size can be increased by selecting the disk in the Hardware tab and pressing Resize disk, and setting the OS Type in the Options tab allows KVM to make specific optimisations. Once these external changes have been made, creating a user and network configuration in the Cloud-Init drive and starting the VM allows further improvements to be made. Once the VM is booted and logged in, it can be updated and the QEMU agent installed:

sudo apt update && sudo apt install qemu-guest-agent -y
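The disk resize mentioned above can also be done from the node's shell rather than the Hardware tab; a sketch assuming the template ID 9000 and an extra 8 GB on scsi0:

# grow the template's root disk by 8 GB
qm resize 9000 scsi0 +8G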

After the modifications have been made, cloud-init can be reset so that it will run again on the next boot, and the bash history is cleared:

# remove the cloud-init state so it runs again on the next boot
sudo cloud-init clean
# stop this session being written to the bash history
unset HISTFILE

Once the VM is shut down, the QEMU agent needs to be enabled in the Options tab. Now that the modifications are complete, the VM can be converted into a template by selecting More -> Convert to template.

Cloud-init is then used once a VM is cloned from the template, with the parameters set before the first boot. The username and password can be set in the Cloud-Init tab along with the user's SSH key, IP address and DNS. The hostname is taken from the VM name.
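These parameters can equally be set from the node's shell before the first boot; a sketch assuming a cloned VM with ID 123 and placeholder network values:

# set the cloud-init user, SSH key and static network configuration
qm set 123 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub
qm set 123 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1
qm set 123 --nameserver 192.168.1.1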

Other issues

  • Failed to allocate directory watch: Too many open files
    • sysctl fs.inotify.max_user_instances=512
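The sysctl command above only lasts until the next reboot; to make the change persistent it can be written to a sysctl drop-in file (the file name here is arbitrary):

# persist the raised inotify limit across reboots
echo "fs.inotify.max_user_instances=512" >> /etc/sysctl.d/99-inotify.conf
sysctl --system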