Saturday 30 August 2014

How to Manage and Use LVM (Logical Volume Management)

In our previous article we told you what LVM is and what you may want to use it for, and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation.
As stated before, LVM is an abstraction layer between your operating system and your physical hard drives. What that means is that the drives and partitions your operating system sees are no longer tied to the physical hardware they reside on. Rather, they can be any number of separate hard drives pooled together or in a software RAID.
To manage LVM there are GUI tools available, but to really understand what is happening with your LVM configuration it is best to know the command line tools. This is especially useful if you are managing LVM on a server or a distribution that does not offer GUI tools.
Most of the commands in LVM are very similar to each other. Each valid command is prefixed with one of the following:
  • Physical Volume = pv
  • Volume Group = vg
  • Logical Volume = lv
The physical volume commands are for adding or removing hard drives in volume groups. Volume group commands are for changing what abstracted set of physical partitions is presented to your operating system in logical volumes. Logical volume commands present the volume groups as partitions so that your operating system can use the designated space.

Downloadable LVM Cheat Sheet

To help you understand what commands are available for each prefix we made an LVM cheat sheet. We will cover some of the commands in this article, but there is still a lot you can do that won’t be covered here.
All commands on this list will need to be run as root because you are changing system-wide settings that will affect the entire machine.

How to View Current LVM Information

The first thing you may need to do is check how your LVM is set up. The s and display commands work with physical volumes (pv), volume groups (vg), and logical volumes (lv), so they are a good place to start when trying to figure out the current settings.
The display commands (pvdisplay, vgdisplay, lvdisplay) format the information so it’s easier to read than the terse s commands (pvs, vgs, lvs). For each command you will see the name and path of the pv/vg, along with information about free and used space.
The most important information will be the PV name and VG name. With those two pieces of information we can continue working on the LVM setup.
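For example, to get a terse, one-line-per-item summary of each layer, run these three commands (the output will of course vary with your setup):
pvs
vgs
lvs
Swap in pvdisplay, vgdisplay, or lvdisplay when you want the longer, more readable report of the same information.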

Creating a Logical Volume

Logical volumes are the partitions that your operating system uses in LVM. To create a logical volume we first need to have a physical volume and volume group. Here are all of the steps necessary to create a new logical volume.
Create Physical Volume
We will start from scratch with a brand new hard drive with no partitions or information on it. Start by finding which disk you will be working with. (/dev/sda, sdb, etc.)
Note: Remember, all of these commands need to be run as root, or by adding ‘sudo’ to the beginning of the command.
fdisk -l
If your hard drive has never been formatted or partitioned before, fdisk will report that the disk doesn’t contain a valid partition table. This is completely fine because we are going to create the needed partitions in the next steps.
Our new disk is located at /dev/sdb so let’s use fdisk to create a new partition on the drive.
There are a plethora of tools that can create a new partition with a GUI, including GParted, but since we have the terminal open already, we will use fdisk to create the needed partition.
From a terminal type the following commands:
fdisk /dev/sdb
This will put you in a special fdisk prompt.
Enter the commands in the order given to create a new primary partition that uses 100% of the new hard drive and is ready for LVM. If you need to change the partition size or want multiple partitions, I suggest using GParted or reading about fdisk on your own.
Warning: The following steps will format your hard drive. Make sure you don’t have any information on this hard drive before following these steps.
  • n = create new partition
  • p = creates primary partition
  • 1 = makes partition the first on the disk
Press Enter twice to accept the default first cylinder and last cylinder.
To prepare the partition to be used by LVM use the following two commands.
  • t = change partition type
  • 8e = changes to LVM partition type
Verify and write the information to the hard drive.
  • p = view partition setup so we can review before writing changes to disk
  • w = write changes to disk
After those commands, the fdisk prompt should exit and you will be back to the bash prompt of your terminal.
Enter pvcreate /dev/sdb1 to create an LVM physical volume on the partition we just created.
You may be asking why we didn’t format the partition with a file system but don’t worry, that step comes later.
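If you ever need to repeat this partitioning unattended, the same fdisk session can be fed from a here-document. This is only a sketch of the interactive steps above and, like them, it will destroy any data on /dev/sdb, so double-check the device name before running it (the blank lines accept the default first and last cylinder, and some fdisk versions prompt slightly differently):
fdisk /dev/sdb <<EOF
n
p
1


t
8e
w
EOF
pvcreate /dev/sdb1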

Create Volume Group
Now that we have a partition designated and physical volume created we need to create the volume group. Luckily this only takes one command.
vgcreate vgpool /dev/sdb1
Here, vgpool is the name of the new volume group we created. You can name it whatever you’d like, but it is recommended to put vg at the front of the label so that if you reference it later you will know it is a volume group.
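You can double-check the result at any time; vgs accepts a volume group name and prints its size and free space on one line:
vgs vgpool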
Create Logical Volume
To create the logical volume that LVM will use:
lvcreate -L 3G -n lvstuff vgpool
The -L option designates the size of the logical volume, in this case 3 GB, and the -n option names the volume. vgpool is referenced so that the lvcreate command knows which volume group to get the space from.
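If you would rather hand all of the available space to the logical volume instead of a fixed 3 GB, lvcreate can also allocate by a percentage of the volume group’s free extents, for example:
lvcreate -l 100%FREE -n lvstuff vgpool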
Format and Mount the Logical Volume
One final step is to format the new logical volume with a file system. If you want help choosing a Linux file system, read our how-to on choosing the best file system for your needs.
mkfs -t ext3 /dev/vgpool/lvstuff
Create a mount point and then mount the volume somewhere you can use it.
mkdir /mnt/stuff
mount -t ext3 /dev/vgpool/lvstuff /mnt/stuff
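The mount will not survive a reboot. If you want the volume attached automatically at boot, an entry along these lines in /etc/fstab should do it (a sketch; adjust the file system type and options to match your setup):
/dev/vgpool/lvstuff /mnt/stuff ext3 defaults 0 2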

Resizing a Logical Volume

One of the benefits of logical volumes is that you can make your volumes bigger or smaller without having to move everything to a bigger hard drive. Instead, you can add a new hard drive and extend your volume group on the fly. Or, if you have a hard drive that isn’t used, you can remove it from the volume group to shrink your logical volume.
There are three basic tools for making physical volumes, volume groups, and logical volumes bigger or smaller.
Note: Each of these commands will need to be preceded by pv, vg, or lv depending on what you are working with.
  • resize – can shrink or expand physical volumes and logical volumes but not volume groups
  • extend – can make volume groups and logical volumes bigger but not smaller
  • reduce – can make volume groups and logical volumes smaller but not bigger
Let’s walk through an example of how to add a new hard drive to the logical volume “lvstuff” we just created.
Install and Format new Hard Drive
To install a new hard drive, follow the steps above to create a new partition and change its partition type to LVM (8e). Then use pvcreate to create a physical volume that LVM can recognize.
Add New Hard Drive to Volume Group
To add the new hard drive to a volume group you just need to know what your new partition is, /dev/sdc1 in our case, and the name of the volume group you want to add it to.
vgextend vgpool /dev/sdc1
This adds the new physical volume to the existing volume group.
Extend Logical Volume
To resize the logical volume we need to say how much we want to extend by size rather than by device. In our example we just added an 8 GB hard drive to our 3 GB vgpool. To make that space usable we can use lvextend or lvresize.
lvextend -L8G /dev/vgpool/lvstuff
While this command will work, you will see that it actually resizes our logical volume to 8 GB total instead of adding 8 GB to the existing volume like we wanted. To add the remaining 3 available gigabytes, use the following command.
lvextend -L+3G /dev/vgpool/lvstuff
Now our logical volume is 11 GB in size.
Extend File System
The logical volume is 11 GB but the file system on that volume is still only 3 GB. To make the file system use the entire 11 GB available you have to use the command resize2fs. Just point resize2fs to the 11 GB logical volume and it will do the magic for you.
resize2fs /dev/vgpool/lvstuff
Note: If you are using a different file system besides ext3/4, please see your file system’s resize tools.
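Newer LVM releases can also grow the file system for you in the same step: lvextend’s -r (--resizefs) flag calls the appropriate resize tool after extending, collapsing the two commands above into one. A sketch:
lvextend -r -L+3G /dev/vgpool/lvstuff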
Shrink Logical Volume
If you wanted to remove a hard drive from a volume group you would need to follow the above steps in reverse order, using lvreduce and vgreduce instead; a sketch of the full sequence follows the list.
  1. shrink the file system (make sure to move your files to a safe area before resizing)
  2. reduce the logical volume (instead of + to extend, use - to reduce by size)
  3. remove the hard drive from the volume group with vgreduce
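Here is a minimal sketch of that sequence, assuming we shrink lvstuff back down to 2 GB and then pull /dev/sdc1 out of vgpool. Shrinking an ext3/4 file system has to be done while it is unmounted and checked, and you should have a current backup before starting:
umount /mnt/stuff
e2fsck -f /dev/vgpool/lvstuff
resize2fs /dev/vgpool/lvstuff 2G
lvreduce -L 2G /dev/vgpool/lvstuff
pvmove /dev/sdc1
vgreduce vgpool /dev/sdc1
pvremove /dev/sdc1
mount -t ext3 /dev/vgpool/lvstuff /mnt/stuff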

Backing up a Logical Volume

Snapshots are a feature that some newer, more advanced file systems come with, but ext3/4 lacks the ability to do snapshots on the fly. One of the coolest things about LVM snapshots is that your file system is never taken offline, and a snapshot only consumes extra space as the original volume changes.
When LVM takes a snapshot, a picture is taken of exactly how the logical volume looks and that picture can be used to make a copy on a different hard drive. While a copy is being made, any new information that needs to be added to the logical volume is written to the disk just like normal, but changes are tracked so that the original picture never gets destroyed.
To create a snapshot we need to create a new logical volume with enough free space to hold any new information that will be written to the logical volume while we make a backup. If the drive is not actively being written to you can use a very small amount of storage. Once we are done with our backup we just remove the temporary logical volume and the original logical volume will continue on as normal.
Create New Snapshot
To create a snapshot of lvstuff use the lvcreate command like before but use the -s flag.
lvcreate -L512M -s -n lvstuffbackup /dev/vgpool/lvstuff
Here we created a logical volume with only 512 MB because the drive isn’t being actively used. The 512 MB will store any new writes while we make our backup.
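While the snapshot exists you can keep an eye on how full it is getting with lvs; the Data% column shows how much of that 512 MB has been consumed. If a snapshot fills up completely it becomes invalid, so size it generously on busy volumes:
lvs vgpool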
Mount New Snapshot
Just like before we need to create a mount point and mount the new snapshot so we can copy files from it.
mkdir /mnt/lvstuffbackup
mount /dev/vgpool/lvstuffbackup /mnt/lvstuffbackup
Copy Snapshot and Delete Logical Volume
All you have left to do is copy all of the files from /mnt/lvstuffbackup/ to an external hard drive, or tar them up so they are all in one file.
Note: tar -c will create an archive and -f specifies the location and file name of the archive. For help with the tar command use man tar in the terminal.
tar -cf /home/rothgar/Backup/lvstuff-ss /mnt/lvstuffbackup/
Remember that while the backup is taking place all of the files that would be written to lvstuff are being tracked in the temporary logical volume we created earlier. Make sure you have enough free space while the backup is happening.
Once the backup finishes, unmount the volume and remove the temporary snapshot.
umount /mnt/lvstuffbackup
lvremove /dev/vgpool/lvstuffbackup

Deleting a Logical Volume

To delete a logical volume you need to first make sure the volume is unmounted, and then you can use lvremove to delete it. You can also remove a volume group once the logical volumes have been deleted and a physical volume after the volume group is deleted.
Here are all the commands using the volumes and groups we’ve created.
umount /mnt/lvstuff
lvremove /dev/vgpool/lvstuff
vgremove vgpool
pvremove /dev/sdb1 /dev/sdc1

APACHE CLOUDSTACK INSTALLATION ON CENTOS 6.5

Quick Installation Guide for CentOS

Overview

What exactly are we building?

Infrastructure-as-a-Service (IaaS) clouds can be a complex thing to build, and by definition they have a plethora of options, which often lead to confusion for even experienced admins who are newcomers to building cloud platforms. The goal for this runbook is to provide a straightforward set of instructions to get you up and running with CloudStack with a minimum amount of trouble.

High level overview of the process

This runbook will focus on building a CloudStack cloud using KVM on CentOS 6.5 with NFS storage, on a flat layer-2 network utilizing layer-3 network isolation (aka Security Groups), and doing it all on a single piece of hardware.
KVM, or Kernel-based Virtual Machine, is a virtualization technology for the Linux kernel. KVM supports native virtualization atop processors with hardware virtualization extensions.
Security Groups act as distributed firewalls that control access to a group of virtual machines.

Prerequisites

To complete this runbook you’ll need the following items:
  1. At least one computer which supports hardware virtualization.
  2. The CentOS 6.5 x86_64 minimal install CD
  3. A /24 network with the gateway at xxx.xxx.xxx.1; no DHCP should be on this network, and none of the computers running CloudStack should have a dynamic address. Again, this is done for the sake of simplicity.

Environment

Before you install CloudStack, you need to prepare the environment. We will go over those steps now.

Operating System

Using the CentOS 6.5 x86_64 minimal install ISO, you’ll need to install CentOS on your hardware. The defaults will generally be acceptable for this installation.
Once this installation is complete, you’ll want to connect to your freshly installed machine via SSH as the root user. Note that you should not allow root logins in a production environment, so be sure to turn off remote logins once you have finished the installation and configuration.

Configuring the network

By default the network will not come up on your hardware and you will need to configure it to work in your environment. Since we specified that there will be no DHCP server in this environment we will be manually configuring your network interface. We will assume, for the purposes of this exercise, that eth0 is the only network interface that will be connected and used.
Connecting via the console, you should log in as root. Check the file /etc/sysconfig/network-scripts/ifcfg-eth0; by default it will look like this:
DEVICE="eth0"
HWADDR="52:54:00:B9:A6:C0"
NM_CONTROLLED="yes"
ONBOOT="no"
Unfortunately, this configuration will not permit you to connect to the network, and is also unsuitable for our purposes with CloudStack. We want to configure that file so that it specifies the IP address, netmask, etc., as shown in the following example:
Note
You should not use the hardware address (aka the MAC address) from our example in your configuration. It is specific to each network interface, so keep the address already provided in the HWADDR directive.
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
Note
IP Addressing - Throughout this document we assume that you will have a /24 network for your CloudStack implementation. This can be any RFC 1918 network, but we assume that you will adapt the machine addresses we use: thus where we use 172.16.10.2, if you are on the 192.168.55.0/24 network you would use 192.168.55.2.
Now that we have the configuration files properly set up, we need to run a few commands to start up the network:
# chkconfig network on

# service network start
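At this point it is worth confirming that the interface actually came up and can reach the outside world, for example:
# ping -c 3 8.8.8.8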

Hostname

CloudStack requires that the hostname be properly set. If you used the default options in the installation, then your hostname is currently set to localhost.localdomain. To test this we will run:
# hostname --fqdn
At this point it will likely return:
localhost
To rectify this situation, we’ll set the hostname by editing the /etc/hosts file so that it follows a format similar to this example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.2 srvr1.cloud.priv
After you’ve modified that file, go ahead and restart the network using:
# service network restart
Now recheck with the hostname --fqdn command and ensure that it returns an FQDN response.

SELinux

At the moment, for CloudStack to work properly SELinux must be set to permissive. We want to both configure this for future boots and modify it in the current running system.
To configure SELinux to be permissive in the running system we need to run the following command:
# setenforce 0
To ensure that it remains in that state we need to configure the file /etc/selinux/config to reflect the permissive state, as shown in this example:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
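If you would rather not open an editor, a one-line substitution makes the same change (assuming the file currently reads SELINUX=enforcing):
# sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config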

NTP

NTP configuration is a necessity for keeping all of the clocks in your cloud servers in sync. However, NTP is not installed by default, so we’ll install and configure NTP at this stage. Installation is accomplished as follows:
# yum -y install ntp
The actual default configuration is fine for our purposes, so we merely need to enable it and set it to start on boot as follows:
# chkconfig ntpd on
# service ntpd start

Configuring the CloudStack Package Repository

We need to configure the machine to use a CloudStack package repository.
Note
The Apache CloudStack official releases are source code. As such there are no ‘official’ binaries available. The full installation guide describes how to take the source release and generate RPMs and a yum repository. This guide attempts to keep things as simple as possible, and thus we are using one of the community-provided yum repositories.
To add the CloudStack repository, create /etc/yum.repos.d/cloudstack.repo and insert the following information.
[cloudstack]
name=cloudstack
baseurl=http://cloudstack.apt-get.eu/rhel/4.4/
enabled=1
gpgcheck=0
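You can confirm that yum sees the new repository before moving on:
# yum repolist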

NFS

Our configuration is going to use NFS for both primary and secondary storage, so we are going to set up two NFS shares for those purposes. We’ll start out by installing nfs-utils.
# yum install nfs-utils
We now need to configure NFS to serve up two different shares. This is handled comparatively easily in the /etc/exports file. You should ensure that it has the following content:
/secondary *(rw,async,no_root_squash,no_subtree_check)
/primary *(rw,async,no_root_squash,no_subtree_check)
You will note that we specified two directories that don’t exist (yet) on the system. We’ll go ahead and create those directories with the following commands:
# mkdir /primary
# mkdir /secondary
CentOS 6.x releases use NFSv4 by default. NFSv4 requires that the domain setting match on all clients. In our case the domain is cloud.priv, so ensure that the domain setting in /etc/idmapd.conf is uncommented and set as follows:
Domain = cloud.priv
Now you’ll need to uncomment the following configuration values in the file /etc/sysconfig/nfs:
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
Now we need to configure the firewall to permit incoming NFS connections. Edit the file /etc/sysconfig/iptables and add the following rules to the INPUT chain:
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -s 172.16.10.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
Now you can restart the iptables service with the following command:
# service iptables restart
We now need to configure the nfs service to start on boot and actually start it on the host by executing the following commands:
# service rpcbind start
# service nfs start
# chkconfig rpcbind on
# chkconfig nfs on
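A quick way to confirm that both shares are exported as intended is to query the NFS server from itself:
# showmount -e 172.16.10.2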

Management Server Installation

We’re going to install the CloudStack management server and surrounding tools.

Database Installation and Configuration

We’ll start with installing MySQL and configuring some options to ensure it runs well with CloudStack.
Install by running the following command:
# yum -y install mysql-server
With MySQL now installed we need to make a few configuration changes to /etc/my.cnf. Specifically we need to add the following options to the [mysqld] section:
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Now that MySQL is properly configured we can start it and configure it to start on boot as follows:
# service mysqld start
# chkconfig mysqld on

Installation

We are now going to install the management server. We do that by executing the following command:
# yum -y install cloudstack-management
With the application itself installed, we can now set up the database. We’ll do that with the following command and options:
# cloudstack-setup-databases cloud:password@localhost --deploy-as=root
When this process is finished, you should see a message like “CloudStack has successfully initialized the database.”
Now that the database has been created, we can take the final step in setting up the management server by issuing the following command:
# cloudstack-setup-management

System Template Setup

CloudStack uses a number of system VMs to provide functionality for accessing the console of virtual machines, providing various networking services, and managing various aspects of storage. This step will fetch those system images so they are ready for deployment when we bootstrap your cloud.
Now we need to download the system VM template and deploy it to the secondary storage we just set up. The management server includes a script to properly manipulate the system VM images.
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /secondary \
-u http://cloudstack.apt-get.eu/systemvm/4.4/systemvm64template-4.4.0-6-kvm.qcow2.bz2 \
-h kvm -F
That concludes our setup of the management server. We still need to configure CloudStack, but we will do that after we get our hypervisor set up.

KVM Setup and Installation

KVM is the hypervisor we’ll be using. We will review the initial setup which has already been done on the hypervisor host and cover installation of the agent software; you can use the same steps to add additional KVM nodes to your CloudStack environment.

Prerequisites

We are explicitly using the management server as a compute node as well, which means that we have already performed many of the prerequisite steps when setting up the management server, but we will list them here for clarity. Those steps are:
  1. Configuring the network
  2. Hostname
  3. SELinux
  4. NTP
  5. Configuring the CloudStack Package Repository
You shouldn’t need to do that for the management server, of course, but you will need to complete the above steps for any additional hosts.

Installation

Installation of the KVM agent is trivial with just a single command, but afterwards we’ll need to configure a few things.
# yum -y install cloudstack-agent

KVM Configuration

We have two different parts of KVM to configure: libvirt and QEMU.

QEMU Configuration

KVM configuration is relatively simple, with only a single item to change: the QEMU VNC configuration. Edit /etc/libvirt/qemu.conf and ensure the following line is present and uncommented.
vnc_listen=0.0.0.0

Libvirt Configuration

CloudStack uses libvirt for managing virtual machines, so it is vital that libvirt is configured correctly. Libvirt is a dependency of cloudstack-agent and should already be installed.
  1. In order to have live migration working, libvirt has to listen for unsecured TCP connections. We also need to turn off libvirt’s attempt to use multicast DNS advertising. Both of these settings are in /etc/libvirt/libvirtd.conf
    Set the following parameters:
    listen_tls = 0
    listen_tcp = 1
    tcp_port = "16059"
    auth_tcp = "none"
    mdns_adv = 0
    
  2. Turning on “listen_tcp” in libvirtd.conf is not enough; we also need to modify /etc/sysconfig/libvirtd so that the daemon is started with the --listen flag:
    Uncomment the following line:
    #LIBVIRTD_ARGS="--listen"
    
  3. Restart libvirt
    # service libvirtd restart
    

KVM configuration complete

That concludes our installation and configuration of KVM, and we’ll now move to using the CloudStack UI for the actual configuration of our cloud.

Configuration

As we noted before, we will be using security groups to provide isolation, which by default implies a flat layer-2 network. The simplicity of our setup also means that we can use the quick installer.

UI Access

To get access to CloudStack’s web interface, merely point your browser to http://172.16.10.2:8080/client. The default username is ‘admin’, and the default password is ‘password’. You should see a splash screen that allows you to choose several options for setting up CloudStack. You should choose the Continue with Basic Setup option.
You should now see a prompt requiring you to change the password for the admin user. Please do so.

Setting up a Zone

A zone is the largest organizational entity in CloudStack, and we’ll be creating one now; this should be the screen that you see in front of you. For us there are 5 pieces of information that we need.
  1. Name - we will set this to the ever-descriptive ‘Zone1’ for our cloud.
  2. Public DNS 1 - we will set this to ‘8.8.8.8’ for our cloud.
  3. Public DNS 2 - we will set this to ‘8.8.4.4’ for our cloud.
  4. Internal DNS1 - we will also set this to ‘8.8.8.8’ for our cloud.
  5. Internal DNS2 - we will also set this to ‘8.8.4.4’ for our cloud.
Note
CloudStack distinguishes between internal and public DNS. Internal DNS is assumed to be capable of resolving internal-only hostnames, such as your NFS server’s DNS name. Public DNS is provided to the guest VMs to resolve public IP addresses. You can enter the same DNS server for both types, but if you do so, you must make sure that both internal and public IP addresses can route to the DNS server. In our specific case we will not use any names for resources internally, and we have indeed set them to look to the same external resource so as not to add a nameserver setup to our list of requirements.

Pod Configuration

Now that we’ve added a zone, the next step that comes up is a prompt for information regarding a pod, which asks for several items.
  1. Name - We’ll use Pod1 for our cloud.
  2. Gateway - We’ll use 172.16.10.1 as our gateway
  3. Netmask - We’ll use 255.255.255.0
  4. Start/end reserved system IPs - we will use 172.16.10.10-172.16.10.20
  5. Guest gateway - We’ll use 172.16.10.1
  6. Guest netmask - We’ll use 255.255.255.0
  7. Guest start/end IP - We’ll use 172.16.10.30-172.16.10.200

Cluster

Now that we’ve added a pod, we need only add a few more items to configure the cluster.
  1. Name - We’ll use Cluster1
  2. Hypervisor - Choose KVM
You should be prompted to add the first host to your cluster at this point. Only a few bits of information are needed.
  1. Hostname - we’ll use the IP address 172.16.10.2 since we didn’t set up a DNS server.
  2. Username - we’ll use ‘root’
  3. Password - enter the operating system password for the root user

Primary Storage

With your cluster now set up, you should be prompted for primary storage information. Choose NFS as the storage type and then enter the following values in the fields:
  1. Name - We’ll use ‘Primary1’
  2. Server - We’ll be using the IP address 172.16.10.2
  3. Path - We’ll define /primary as the path we are using

Secondary Storage

If this is a new zone, you’ll be prompted for secondary storage information - populate it as follows:
  1. NFS server - We’ll use the IP address 172.16.10.2
  2. Path - We’ll use /secondary
Now, click Launch and your cloud should begin setup; it may take several minutes, depending on your internet connection speed, for setup to finalize.
That’s it, you are done with the installation of your Apache CloudStack cloud.