Setting up a single node virtualization server

Introduction
While container technologies are the preferred target of many solution architects for running applications, virtual machines (VMs) are still crucial to day-to-day operations, as some applications cannot yet be containerized due to certain constraints. Microsoft's Active Directory and the servers that run the containers themselves are two major reasons why VMs are still needed.
In this post, we will go over how to set up a virtualization server that lets you quickly spin up VMs.
Our setup entails the following:
1 physical server with the following hardware:
- CPU: AMD FX(TM)-8320 Eight-Core Processor
- RAM: 32 GB RAM
- Storage: 1 TB (WDC WDS100T2B0A)
- Motherboard: Gigabyte GA-78LMT-USB3 6.0
We will be using CentOS Linux release 7.7.1908.
Package installations and server configuration
First, we need to configure our basic server to run VMs using libvirt/KVM. Perform a minimal CentOS installation. Note: CentOS 8 is out, but for now we will stick with CentOS 7 as its packages tend to be more stable. Once we have tested this setup on CentOS 8, we will update this post.
Install the following packages:
$ sudo yum -y install centos-release-openstack-stein openvswitch libvirt libguestfs-tools virtinst
We install the centos-release-openstack-stein repository package to get access to the openvswitch package. After the installation, run the following commands to start the virtualization service and openvswitch:
$ sudo systemctl enable openvswitch
$ sudo systemctl start openvswitch
$ sudo systemctl enable libvirtd
$ sudo systemctl start libvirtd
The network needs to be changed to accommodate OpenVSwitch. The main network interface will act as an uplink for the OpenVSwitch device to access the network. To do this, we need to configure the first network interface. To make this setup uniform across the majority of servers, reconfigure the kernel to force the network interfaces to use the naming convention eth0..ethX rather than enp3s0, etc. This will help avoid headaches when deploying this setup on multiple systems with differing hardware.
Edit /etc/default/grub and make sure the following values reflect the following:
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"
Then update grub:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
When we reboot the server, the network interfaces will use the ethX format.
Now, create /etc/sysconfig/network-scripts/ifcfg-eth0 with the following:
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=ovs-br0
ONBOOT=yes
And, create /etc/sysconfig/network-scripts/ifcfg-ovs-br0
DEVICE=ovs-br0
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.5.11
NETMASK=255.255.255.0
GATEWAY=192.168.5.1
ONBOOT=yes
DNS1=1.1.1.1
You can change IPADDR and GATEWAY to reflect your environment.
When the server has been rebooted, log in and run:
$ ip a
You should see something that resembles the following:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 1c:1b:0d:ff:ff:2d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1e1b:fff:ff1f:ffff/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 1c:1b:0d:ff:ff:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.11/24 brd 192.168.5.255 scope global ovs-br0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4e:ffff:ffff:ffff/64 scope link
       valid_lft forever preferred_lft forever
The interface ovs-br0 should now contain the IP address of the server and eth0 should not have the IP address. Perform a ping operation to verify everything is in order.
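To check from the host itself, you can ask OpenVSwitch to display its configuration and then ping the gateway. The gateway address below comes from the ifcfg-ovs-br0 file above; substitute your own if you changed it:

```shell
# eth0 should appear as a port under the ovs-br0 bridge.
sudo ovs-vsctl show
# Verify connectivity through the bridge (gateway from ifcfg-ovs-br0).
ping -c 3 192.168.5.1
```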
Configuring the virtualization services
We will be using libvirt/KVM for our hypervisor and VM management. Whenever working with cloud or virtualization products, it is easy to break them down into 3 key components:
- Network: handles all network communication; here we are using OpenVSwitch.
- Storage: handles the data storage needed for the solution, either storage attached to the VM or network/distributed storage such as NFS or object stores.
- Compute: links the network and storage to computing resources like RAM and CPU, then makes the virtual machine available to the user.
We will skip the storage component for this setup, as we will use the OS disk to handle all the storage for now.
Create an ovs-network.xml with the following contents:
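The embedded file is not reproduced here; a minimal libvirt network definition for an OpenVSwitch bridge looks like the following sketch (the network name ovs-network and bridge name ovs-br0 match the configuration earlier in this post):

```xml
<network>
  <name>ovs-network</name>
  <forward mode='bridge'/>
  <bridge name='ovs-br0'/>
  <virtualport type='openvswitch'/>
</network>
```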
Now load that configuration into libvirt via the following:
$ sudo virsh
virsh # net-define ovs-network.xml
virsh # net-start ovs-network
virsh # net-autostart ovs-network
virsh # net-list
 Name          State    Autostart   Persistent
-----------------------------------------------
 default       active   yes         yes
 ovs-network   active   no          yes
Now we have the network component activated. When we use libvirt, we just pass the network name 'ovs-network' to utilize the OpenVSwitch bridge.
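Since the virtinst package was installed earlier, one way to see the network name in action is a virt-install one-liner. This is only a sketch; the VM name and disk path are illustrative, and we will define our VM from an XML file instead later in the post:

```shell
# Sketch: boot an existing disk image as a VM attached to ovs-network.
sudo virt-install \
  --name test-vm \
  --memory 2048 \
  --vcpus 2 \
  --disk /var/lib/libvirt/images/CentOS-stock.qcow2 \
  --import \
  --network network=ovs-network \
  --graphics vnc \
  --noautoconsole
```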
We will now need an image to load into libvirt. The fastest way is to grab an image from the CentOS repository. We will use virt-customize to set the root password and remove cloud-init. Cloud-init is a fantastic tool for auto-provisioning servers, but we will look at it in a later post.
$ curl -o CentOS-7-x86_64-GenericCloud.qcow2.xz https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2.xz
$ sudo yum -y install xz
$ unxz CentOS-7-x86_64-GenericCloud.qcow2.xz
$ virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --root-password password:MyAwesomePassword --uninstall cloud-init
$ sudo cp CentOS-7-x86_64-GenericCloud.qcow2 /var/lib/libvirt/images/CentOS-stock.qcow2
$ sudo restorecon -FvvR /var/lib/libvirt/images/
For convenience, I have attached a domain XML (CentOS-stock.xml) to this post for you to copy and quickly start working with a VM in your setup.
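The attached XML is not reproduced here; a minimal domain definition matching the description below (2 vCPUs, 2 GB of RAM, an OVS-backed network interface, a serial console, and VNC) could look roughly like this sketch:

```xml
<domain type='kvm'>
  <name>CentOS-stock</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/CentOS-stock.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='ovs-network'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
  </devices>
</domain>
```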
The above XML will utilize 2 CPU cores and 2 GB of RAM. The interface section instructs the VM to attach a network interface to the OpenVSwitch device. There are two other ways to connect to this VM if the network fails: the first is via a serial console and the second is VNC. We will cover those two later in the post.
Log onto the virsh console and define this domain:
$ sudo virsh
virsh # define CentOS-stock.xml
virsh # start CentOS-stock
This will start up the VM and shortly you will be able to connect to it via serial console interface. From the virsh console run the following command:
virsh # console CentOS-stock
You will see the login prompt appear, and you may log in as the root user with the password supplied to the virt-customize command.
Exit the virsh console and run the command ss -tln. You should see similar output:
[salim@localhost ~]$ ss -tln
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 1 *:5900 *:*
Port 5900 is the VNC port for you to connect to. We are using unauthenticated access to the port, but the host firewall will block the communication. You may redefine the graphics tag in the domain XML to include a password and configure the firewall to only allow certain IPs in. For now, we can open the port with the following commands.
$ sudo firewall-cmd --add-port=5900/tcp --permanent
$ sudo firewall-cmd --reload
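If you would rather restrict VNC to a single management host instead of opening the port to everyone, a firewalld rich rule is one option. The source address below is a hypothetical client machine; substitute your own:

```shell
# Allow VNC (5900/tcp) only from one management host (example address).
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.5.50" port port="5900" protocol="tcp" accept'
sudo firewall-cmd --reload
```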
Using a VNC client, enter the IP address of the server hosting the VM; in our case that would be 192.168.5.11. You can now access the Linux CLI. SSH is also enabled by default, but you will need to log in as root and create a user.
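Once logged in as root over the serial console, VNC, or SSH, creating that user takes two commands. The username below is just an example:

```shell
# Create an example user with sudo access (wheel group) and set a password.
useradd -m -G wheel myuser   # "myuser" is a placeholder username
passwd myuser
```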
[Thanks to Jordan M for pointing out some mistakes]