Katello: Create a Domain, Subnet, Installation Media, OS, Provisioning Templates, Host Groups, PXE Boot

Working with Katello – part 3. We’re continuing with Katello configuration: we’ll create a domain and a subnet, set up FTP installation media, tweak some provisioning templates, and deploy KVM guests.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have Katello installed on a CentOS 7 server:

katello.hl.local (10.11.1.4) – see here for installation instructions

See the image below to identify the homelab part this article applies to.

The Plan

Below is a step-by-step plan that we’ll be following in this article.

  1. Create a domain.
  2. Create a subnet.
  3. Set up FTP installation media for provisioning.
  4. Create a hardened partition table for provisioning.
  5. Create an operating system.
  6. Provisioning templates and Puppet 4.
  7. Create a host group.
  8. Create a new host.
  9. Create KVM guests on Proxmox.
  10. PXE boot the VMs.

Configure Katello

Step 1: Create a Domain

Chances are the domain name has already been set up if you provided it during the Katello installation.

# hammer domain create --name "hl.local"

If you get an error saying that the “Name has already been taken”, it is likely due to the Katello server not being assigned to the organisation.

If that is the case, then after assigning the Katello server to the organisation you will be able to see the domain name.
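
For reference, the domain can also be assigned to the organisation and location from the command line; a sketch, assuming a hammer version that supports these options:

# hammer domain update \
  --name "hl.local" \
  --organizations "Lisenet" \
  --locations "HomeLab"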

# hammer domain list
---|---------
ID | NAME    
---|---------
1  | hl.local
---|---------

Step 2: Create a Subnet

In order to create a subnet, we need to know our TFTP ID, which is in fact our proxy ID:

# hammer proxy list
---|------------------|-------------------------------|--------------------------
ID | NAME             | URL                           | FEATURES                 
---|------------------|-------------------------------|--------------------------
1  | katello.hl.local | https://katello.hl.local:9090 | Templates, Pulp, TFTP,...
---|------------------|-------------------------------|--------------------------

Create a new subnet:

# hammer subnet create \
  --organizations "Lisenet" \
  --locations "HomeLab" \
  --name "homelab_LAN" \
  --network "10.11.1.0" \
  --mask "255.255.255.0" \
  --network-type "IPv4" \
  --gateway "10.11.1.1" \
  --dns-primary "10.11.1.2" \
  --dns-secondary "10.11.1.3" \
  --boot-mode "DHCP" \
  --ipam "None" \
  --domain-ids "1" \
  --tftp-id "1"

Note the IPs of the redundant DNS servers that we created previously. Verify:

# hammer subnet list
---|-------------|-----------|---------------|--------
ID | NAME        | NETWORK   | MASK          | VLAN ID
---|-------------|-----------|---------------|--------
1  | homelab_LAN | 10.11.1.0 | 255.255.255.0 |        
---|-------------|-----------|---------------|--------

Step 3: FTP Installation Media

We’ll install and configure vsftpd to provide CentOS installation media via FTP.

# yum install vsftpd
# systemctl enable vsftpd

Configure firewall if not done already:

# firewall-cmd --permanent --add-service=ftp
# firewall-cmd --reload

Edit the file /etc/vsftpd/vsftpd.conf and configure the following:

anonymous_enable=YES
write_enable=NO

Restart the service:

# systemctl restart vsftpd

Note that the FTP volume /var/ftp/pub/ should have at least 6GB of free disk space. If you don’t have a CentOS 7 DVD, you can download it from the Internet. Make sure it’s a full DVD and not a minimal one.
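
Before copying anything, it’s worth checking the available space:

# df -h /var/ftp/pub/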

When done, attach the ISO file to the KVM guest and mount the disk:

# mount /dev/cdrom /mnt

Create a folder to store installation files, and sync everything from the disk:

# mkdir /var/ftp/pub/CentOS_7_x86_64
# rsync -rv --progress /mnt/ /var/ftp/pub/CentOS_7_x86_64/

Unmount the disk when finished:

# umount /mnt

Restore SELinux labels:

# restorecon -Rv /var/ftp/pub/

One last thing: we can mount /var/ftp/pub with “ro,nodev,noexec,nosuid”, as we only need to read the files.
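
For example, assuming /var/ftp/pub lives on its own dedicated volume (the device path below is hypothetical, adjust it to match your system), the /etc/fstab entry could look like this:

/dev/vg_os/lv_ftp  /var/ftp/pub  xfs  ro,nodev,noexec,nosuid  0 0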

Create CentOS 7 installation media:

# hammer medium create \
  --organizations "Lisenet" \
  --locations "HomeLab" \
  --name CentOS7_DVD_FTP \
  --path "ftp://katello.hl.local/pub/CentOS_7_x86_64/" \
  --os-family "Redhat"

Verify:

# hammer medium list
---|-----------------|--------------------------------------------
ID | NAME            | PATH                                       
---|-----------------|--------------------------------------------
9  | CentOS7_DVD_FTP | ftp://katello.hl.local/pub/CentOS_7_x86_64/
---|-----------------|--------------------------------------------

Step 4: Create a Hardened Partition Table

Katello comes with several different partition tables that can be used out of the box. Here are some of the default ones:

# hammer partition-table list|grep default
83  | CoreOS default fake          | Coreos   
85  | Jumpstart default            | Solaris  
87  | Junos default fake           | Junos    
88  | Kickstart default            | Redhat   
89  | NX-OS default fake           | NXOS     
90  | Preseed default              | Debian   
92  | XenServer default            | Xenserver

As we can see above, the default one for Red Hat is called “Kickstart default”.

We are going to create a hardened partition table so that we have control over the partitions and mountpoints that get created.

Create a file hardened_ptable.txt with the following content:

<%#
kind: ptable
name: Kickstart hardened 32GB
oses:
- CentOS
- Fedora
- RedHat
%>

# System bootloader configuration
bootloader --location=mbr --boot-drive=sda --timeout=3
# Partition clearing information
clearpart --all --drives=sda
zerombr 

# Disk partitioning information
part /boot --fstype="xfs" --ondisk=sda --size=1024 --label=boot --fsoptions="rw,nodev,noexec,nosuid"

# 30GB physical volume
part pv.01  --fstype="lvmpv" --ondisk=sda --size=30720
volgroup vg_os pv.01

logvol /        --fstype="xfs"  --size=4096 --vgname=vg_os --name=lv_root
logvol /home    --fstype="xfs"  --size=512  --vgname=vg_os --name=lv_home --fsoptions="rw,nodev,nosuid"
logvol /tmp     --fstype="xfs"  --size=1024 --vgname=vg_os --name=lv_tmp  --fsoptions="rw,nodev,noexec,nosuid"
logvol /var     --fstype="xfs"  --size=6144 --vgname=vg_os --name=lv_var  --fsoptions="rw,nosuid"
logvol /var/log --fstype="xfs"  --size=512  --vgname=vg_os --name=lv_log  --fsoptions="rw,nodev,noexec,nosuid"
logvol swap     --fstype="swap" --size=2048 --vgname=vg_os --name=lv_swap --fsoptions="swap"

Create a new partition table:

# hammer partition-table create \
  --organizations "Lisenet" \
  --locations "HomeLab" \
  --name "Kickstart hardened 32GB" \
  --os-family "Redhat" \
  --operatingsystems "CentOS 7.4.1708" \
  --file hardened_ptable.txt

Verify:

# hammer partition-table list|egrep 'ID|hardened'
ID  | NAME                         | OS FAMILY
103 | Kickstart hardened 32GB      | Redhat

Step 5: Create an Operating System

This step isn’t required in our particular case because the Katello server is deployed on CentOS 7, and the operating system for that has already been created. Since our homelab is CentOS 7 only, there is no need to create a new OS.

For reference, if we were to create a new CentOS 7 operating system, we would do the following:

# hammer os create \
  --name "CentOS" \
  --major "7" \
  --minor "4.1708" \
  --family "Redhat" \
  --password-hash "SHA512" \
  --architectures "x86_64" \
  --media "CentOS7_DVD_FTP" \
  --partition-tables "Kickstart hardened 32GB"

Note the references to the FTP installation media and the hardened partition table.

Step 6: Provisioning Templates and Puppet 4

Foreman includes many template examples. To get an overall idea, do the following:

# hammer template list | less

Katello ships with a number of templates in addition to the standard Foreman ones, e.g.:

Katello Kickstart Default – Kickstart template for CentOS, RHEL and other Red Hat-compatible operating systems.
Katello Kickstart Default Finish – image-based provisioning.
subscription_manager_registration – Snippet for registering a host for content.

To customise any of the above templates, we can clone them and add our changes.

While this sounds good, I have to admit that the default provisioning templates didn’t quite work for me, as I kept getting Puppet installation issues, mainly due to filesystem paths being different between Puppet versions 3 and 4. I had to use the following to make Puppet 4 installation successful:

os_family = @host.operatingsystem.family
if os_family == 'Redhat'
  var_dir = '/opt/puppetlabs/puppet/cache'
  log_dir = '/var/log/puppetlabs/puppet'
  run_dir = '/var/run/puppetlabs'
  ssl_dir = '/etc/puppetlabs/puppet/ssl'
end
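
These variables can then be referenced further down in the cloned puppet.conf snippet; a minimal sketch of the relevant section (not the complete snippet):

[main]
vardir = <%= var_dir %>
logdir = <%= log_dir %>
rundir = <%= run_dir %>
ssldir = <%= ssl_dir %>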

I ended up cloning some of the templates (see below), editing them manually to match my needs, and creating new ones.

  1. Katello Kickstart Default (Provisioning template)
  2. Katello Kickstart Default Finish (Finish template)
  3. puppet.conf (Snippet)
  4. puppet_setup (Snippet)
  5. subscription_manager_registration (Snippet)

The way I achieved this was fairly simple. First of all, I had to dump the template to a file:

# hammer template dump \
  --id "Katello Kickstart Default" > template1.txt

Then edit the file template1.txt, add changes and check for errors:

# erb -x -T '-' template1.txt|ruby -c

Finally, create a new template from the file:

# hammer template create \
  --organizations "Lisenet" \
  --locations "HomeLab" \
  --file "template1.txt" \
  --name "Katello Kickstart Puppet4 Default" \
  --type "provision" \
  --operatingsystems "CentOS 7.4.1708"

To avoid confusion we can always verify the template kind that we need to use from the list below:

# hammer template kinds
---------
NAME     
---------
PXELinux 
PXEGrub  
PXEGrub2 
iPXE     
provision
finish   
script   
user_data
ZTP      
POAP     
snippet  
---------

When we provision a new CentOS 7 server, it gets the subscription-manager package installed from the “os” repository, and then the system registers against Katello. This allows us to use products and repositories, and manage packages that are available to the server via a lifecycle environment.

The caveat, however, is that by default public CentOS repositories remain enabled on the server. What we ideally want to do is remove all public CentOS repositories when the system gets registered, because at that point the repositories the system is subscribed to will already be available. To achieve this, we need to modify the subscription_manager_registration snippet and create a custom one that removes public CentOS repositories, e.g.:

echo "Registering the System"
subscription-manager register --org="<%= @host.rhsm_organization_label %>" --name="<%= @host.name %>" --activationkey="<%= @host.params['kt_activation_keys'] %>"
echo "Removing public CentOS repositories"
rm -rvf /etc/yum.repos.d/CentOS-*
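
Assuming the customised snippet is saved to a file, say snippet1.txt (both the file name and the template name below are placeholders), it can be created the same way as the provisioning template, only with the snippet type:

# hammer template create \
  --organizations "Lisenet" \
  --locations "HomeLab" \
  --file "snippet1.txt" \
  --name "subscription_manager_registration custom" \
  --type "snippet"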

Step 7: Create a Host Group

A host group is in some ways similar to an inherited node declaration, in that it is a high level grouping of classes that can be named and treated as a unit.

This is then treated as a template and is selectable during the creation of a new host and ensures that the host is configured in one of your pre-defined states.

In addition to defining which Puppet classes get included when building this host type we are also able to assign variables and provisioning information to a host group to further refine the behavior of the Puppet runtime.

Now, we don’t need a Puppet environment in order to create a host group, but in practice it’s beneficial to have one, since we intend to use Puppet to manage servers. The default Puppet environment is called “production”:

# hammer environment list
---|-----------
ID | NAME      
---|-----------
1  | production
---|-----------

Let us create a new environment called “homelab” (I find the name “homelab” more appropriate than “production”):

# hammer environment create \
 --name "homelab" \
 --organizations "Lisenet" \
 --locations "HomeLab"

Verify:

# hammer environment list
---|-----------
ID | NAME      
---|-----------
2  | homelab   
1  | production
---|-----------

We’ll use the homelab environment when creating a host group. We’ll also need to create a Puppet folder structure when configuring Puppet modules, but this will be covered in the next article.

We’ll need the content source ID, which is the same as our Proxy ID:

# hammer proxy list
---|------------------|-------------------------------|--------------------------
ID | NAME             | URL                           | FEATURES                 
---|------------------|-------------------------------|--------------------------
1  | katello.hl.local | https://katello.hl.local:9090 | Templates, Pulp, TFTP,...
---|------------------|-------------------------------|--------------------------

Create a new host group for CentOS 7:

# hammer hostgroup create \
  --query-organization "Lisenet" \
  --locations "HomeLab" \
  --name "el7_group" \
  --description "Host group for CentOS 7 servers" \
  --lifecycle-environment "stable" \
  --content-view "el7_content" \
  --content-source-id "1" \
  --environment "homelab" \
  --puppet-proxy "katello.hl.local" \
  --puppet-ca-proxy "katello.hl.local" \
  --domain "hl.local" \
  --subnet "homelab_LAN" \
  --architecture "x86_64" \
  --operatingsystem "CentOS 7.4.1708" \
  --medium "CentOS7_DVD_FTP" \
  --partition-table "Kickstart hardened 32GB" \
  --pxe-loader "PXELinux BIOS" \
  --root-pass "PleaseChangeMe"
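
Optionally, confirm that the host group was created:

# hammer hostgroup list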

Remember the activation key which we created in the previous article? Now it’s time to associate that activation key with the host group:

# hammer hostgroup set-parameter  \
  --name "kt_activation_keys" \
  --value "el7-key" \
  --hostgroup "el7_group"

Step 8: Create a New Host

We want to create hosts for all our homelab servers.

VM_ID GUEST_NAME IP_ADDR MAC_ADDR
203 ldap1.hl.local 10.11.1.11 00:22:FF:00:00:11
204 ldap2.hl.local 10.11.1.12 00:22:FF:00:00:12
205 monitoring.hl.local 10.11.1.13 00:22:FF:00:00:13
206 syslog.hl.local 10.11.1.14 00:22:FF:00:00:14
207 storage1.hl.local 10.11.1.15 00:22:FF:00:00:15
208 storage2.hl.local 10.11.1.16 00:22:FF:00:00:16
209 db1.hl.local 10.11.1.17 00:22:FF:00:00:17
210 db2.hl.local 10.11.1.18 00:22:FF:00:00:18
211 proxy1.hl.local 10.11.1.19 00:22:FF:00:00:19
212 proxy2.hl.local 10.11.1.20 00:22:FF:00:00:20
213 web1.hl.local 10.11.1.21 00:22:FF:00:00:21
214 web2.hl.local 10.11.1.22 00:22:FF:00:00:22
215 backup.hl.local 10.11.1.23 00:22:FF:00:00:23

If we loop the details above through the following command, we’ll end up with all the servers that we need. The VM ID is used by Proxmox only and isn’t required by Katello.

# hammer host create \
  --name "$GUEST_NAME" \
  --hostgroup "el7_group" \
  --interface "type=interface,mac=$MAC_ADDR,ip=$IP_ADDR,managed=true,primary=true,provision=true"
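
As an illustration, a minimal bash loop, assuming the VM details above are saved to a hypothetical file vms.txt (one host per line, columns in the order listed):

# while read -r ID GUEST_NAME IP_ADDR MAC_ADDR; do
    hammer host create \
      --name "$GUEST_NAME" \
      --hostgroup "el7_group" \
      --interface "type=interface,mac=$MAC_ADDR,ip=$IP_ADDR,managed=true,primary=true,provision=true"
  done < vms.txt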

We can check what the provisioning template renders for a given host by going to the following page:

https://katello.hl.local/unattended/provision?spoof=10.11.1.11
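
The same can be fetched from the command line (the -k flag skips certificate verification, which may be needed with a self-signed certificate):

# curl -k 'https://katello.hl.local/unattended/provision?spoof=10.11.1.11' | less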

We should now have the following hosts available:

# hammer host list
---|---------------------|------------------|------------|------------|-------------------|--------------|----------------------
ID | NAME                | OPERATING SYSTEM | HOST GROUP | IP         | MAC               | CONTENT VIEW | LIFECYCLE ENVIRONMENT
---|---------------------|------------------|------------|------------|-------------------|--------------|----------------------
41 | admin1.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.2  | 00:22:ff:00:00:02 | el7_content  | stable               
42 | admin2.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.3  | 00:22:ff:00:00:03 | el7_content  | stable               
38 | backup.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.23 | 00:22:ff:00:00:23 | el7_content  | stable               
26 | db1.hl.local        | CentOS 7.4.1708  | el7_group  | 10.11.1.17 | 00:22:ff:00:00:17 | el7_content  | stable               
27 | db2.hl.local        | CentOS 7.4.1708  | el7_group  | 10.11.1.18 | 00:22:ff:00:00:18 | el7_content  | stable               
2  | katello.hl.local    | CentOS 7.4.1708  |            | 10.11.1.4  | 00:22:ff:00:00:04 |              |                      
32 | ldap1.hl.local      | CentOS 7.4.1708  | el7_group  | 10.11.1.11 | 00:22:ff:00:00:11 | el7_content  | stable               
33 | ldap2.hl.local      | CentOS 7.4.1708  | el7_group  | 10.11.1.12 | 00:22:ff:00:00:12 | el7_content  | stable               
34 | monitoring.hl.local | CentOS 7.4.1708  | el7_group  | 10.11.1.13 | 00:22:ff:00:00:13 | el7_content  | stable               
28 | proxy1.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.19 | 00:22:ff:00:00:19 | el7_content  | stable               
29 | proxy2.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.20 | 00:22:ff:00:00:20 | el7_content  | stable               
39 | storage1.hl.local   | CentOS 7.4.1708  | el7_group  | 10.11.1.15 | 00:22:ff:00:00:15 | el7_content  | stable               
40 | storage2.hl.local   | CentOS 7.4.1708  | el7_group  | 10.11.1.16 | 00:22:ff:00:00:16 | el7_content  | stable               
35 | syslog.hl.local     | CentOS 7.4.1708  | el7_group  | 10.11.1.14 | 00:22:ff:00:00:14 | el7_content  | stable               
30 | web1.hl.local       | CentOS 7.4.1708  | el7_group  | 10.11.1.21 | 00:22:ff:00:00:21 | el7_content  | stable               
31 | web2.hl.local       | CentOS 7.4.1708  | el7_group  | 10.11.1.22 | 00:22:ff:00:00:22 | el7_content  | stable               
---|---------------------|------------------|------------|------------|-------------------|--------------|----------------------

Step 9: Create KVM Guests on Proxmox

Before we can PXE boot the VMs, we first need to create them.

Again, if we take into account the VM details that we’ve listed above and loop them through the following qm command, we’ll end up with all the servers that we need.

# qm create $ID \
  --name $GUEST_NAME \
  --boot cn \
  --cores 1 \
  --hotplug disk,cpu \
  --memory 1536 \
  --net0 bridge=vmbr0,model=virtio,macaddr=$MAC_ADDR \
  --onboot 1 \
  --ostype l26 \
  --scsi0 file=data_hdd:32,format=qcow2 \
  --scsihw virtio-scsi-pci \
  --sockets 1 \
  --startup order=$ID
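
Reusing the same hypothetical vms.txt file from Step 8 (copied to the Proxmox host), the loop could look like this:

# while read -r ID GUEST_NAME IP_ADDR MAC_ADDR; do
    qm create "$ID" \
      --name "$GUEST_NAME" \
      --boot cn \
      --cores 1 \
      --hotplug disk,cpu \
      --memory 1536 \
      --net0 bridge=vmbr0,model=virtio,macaddr="$MAC_ADDR" \
      --onboot 1 \
      --ostype l26 \
      --scsi0 file=data_hdd:32,format=qcow2 \
      --scsihw virtio-scsi-pci \
      --sockets 1 \
      --startup order="$ID"
  done < vms.txt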

One thing that’s worth mentioning: when you create a new VM for PXE boot, make sure that it has at least 1536MB of RAM, as otherwise it may fail to provision successfully. The amount of RAM can be reduced after a VM is configured.

Also note the disk size that we use: it’s set to 32GB to match the hardened partition table.

Step 10: PXE Boot the VMs

Start the VMs, watch them PXE boot and get provisioned by Katello.

To start the VM with the ID of 203, simply do the following:

# qm start 203
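
To start all of the VMs listed above in one go:

# for ID in $(seq 203 215); do qm start "$ID"; done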

The end result should be something like this:

# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID             
       200 admin1.hl.local      running    512               10.00 2439      
       201 admin2.hl.local      running    512               10.00 2491      
       202 katello.hl.local     running    10240             10.00 2551      
       203 ldap1.hl.local       running    768               32.00 2598      
       204 ldap2.hl.local       running    768               32.00 2654      
       205 monitoring.hl.local  running    1024              32.00 29490     
       206 syslog.hl.local      running    2048              32.00 2758      
       207 storage1.hl.local    running    768               32.00 2637      
       208 storage2.hl.local    running    768               32.00 8626      
       209 db1.hl.local         running    1024              32.00 2952      
       210 db2.hl.local         running    1024              32.00 3007      
       211 proxy1.hl.local      running    768               32.00 3067      
       212 proxy2.hl.local      running    768               32.00 1129      
       213 web1.hl.local        running    768               32.00 3168      
       214 web2.hl.local        running    768               32.00 1491      
       215 backup.hl.local      running    768               32.00 1263

As mentioned earlier, the amount of RAM can be changed after a VM gets provisioned.

What’s Next?

If all went well, then at this point we should have all of the homelab VMs created and running.

We’ll start looking into Puppet modules, and how we can use them to configure various homelab services automatically.
