Provisioning RHV 4.1 hypervisors using Satellite 6.2


The RHV-H 4.1 installation documentation describes a method to provision RHV-H hypervisors using PXE and/or Satellite. This document covers all the steps required to achieve such a configuration in a repeatable manner.


  • A RHV-H installation ISO file such as RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso downloaded into Satellite/Capsules.
  • A helper script (see here) to populate the installation media directories.

Creating the installation media in Satellite and Capsules

  • Deploy the RHV-H ISO in /var/lib/pulp/tmp
  • Run the script to populate the installation media directories. It will make the following files available:
    • kernel and initrd files in tftp://host/ISOFILE/vmlinuz and tftp://host/ISOFILE/initrd
    • DVD installation media in /var/www/html/pub/ISOFILE directory
    • squashfs.img file in /var/www/html/pub/ISOFILE directory


# ./ RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
Mounting /var/lib/pulp/tmp/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso in /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
mount: /dev/loop0 is write-protected, mounting read-only
Copying ISO contents to /var/www/html/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
Extracting redhat-virtualization-host-image-update ...
1169874 blocks
Copying squash.img to public directory . Available as http://sat62.lab.local/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/squashfs.img ...
Copying /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/vmlinuz /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/images/pxeboot/initrd.img to /var/lib/tftpboot/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso ...
Unmounting /mnt/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
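The artifact locations shown in the transcript above can be summarized as follows. This is a minimal sketch that only derives the paths and URLs from the ISO filename; the Satellite FQDN is taken from the transcript and would differ in your environment:

```shell
# Derive where the helper script publishes the boot artifacts for a given ISO.
# Paths and the Satellite FQDN come from the transcript above.
iso="RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso"
sat="sat62.lab.local"

kernel_path="tftp://${sat}/${iso}/vmlinuz"                  # PXE kernel
initrd_path="tftp://${sat}/${iso}/initrd.img"               # PXE initrd
stage2_url="http://${sat}/pub/${iso}/"                      # DVD contents over HTTP
squashfs_url="http://${sat}/pub/${iso}/squashfs.img"        # liveimg payload

printf '%s\n' "$kernel_path" "$initrd_path" "$stage2_url" "$squashfs_url"
```

These are exactly the locations the PXELinux and kickstart templates below reference via the rhvh_image parameter.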

Create Installation media

RHVH Installation media


This media entry is not actually used, as the kickstart will use a liveimg URL rather than a media URL; however, Satellite is stubborn and still requires one.

Create partition table

name : RHVH Kickstart default

kind: ptable
name: RHVH Kickstart default
clearpart --all --initlabel
autopart --type=thinp

Create pxelinux for RHV

Based on Kickstart default PXELinux

kind: PXELinux
name: RHVH Kickstart default PXELinux
# This file was deployed via '<%= @template_name %>' template
# Supported host/hostgroup parameters:
# blacklist = module1, module2
#   Blacklisted kernel modules
<%
options = []
if @host.params['blacklist']
    options << "modprobe.blacklist=" + @host.params['blacklist'].gsub(' ', '')
end
options = options.join(' ')
%>


LABEL rhvh
    KERNEL <%= @host.params["rhvh_image"] %>/vmlinuz <%= @kernel %>
    APPEND initrd=<%= @host.params["rhvh_image"] %>/initrd.img inst.stage2=http://<%= %>/pub/<%= @host.params["rhvh_image"] %>/ ks=<%= foreman_url('provision') %> intel_iommu=on ssh_pwauth=1 local_boot_trigger=<%= foreman_url("built") %> <%= options %>
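To illustrate what the blacklist logic in the template above adds to the kernel command line, here is a shell re-implementation of the same transformation. The parameter value is an assumption for illustration only:

```shell
# Shell equivalent of the template's blacklist handling: take the host
# parameter, strip spaces (like the template's gsub(' ', '')), and build
# the modprobe.blacklist= kernel argument.
blacklist="nouveau, radeon"   # assumed value of the 'blacklist' host parameter

options=""
if [ -n "$blacklist" ]; then
    options="modprobe.blacklist=$(printf '%s' "$blacklist" | tr -d ' ')"
fi
echo "$options"
```

With the assumed parameter value, the APPEND line would gain `modprobe.blacklist=nouveau,radeon`.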

Create Kickstart for RHV

File it under Satellite Kickstart Default for RHVH.

Note that the variable is used to point to the capsule associated with this host, rather than to the Satellite server itself.

kind: provision
name: Satellite Kickstart default
<%
rhel_compatible = @host.operatingsystem.family == 'Redhat' && @host.operatingsystem.name != 'Fedora'
os_major = @host.operatingsystem.major.to_i
# safemode renderer does not support unary negation
pm_set = @host.puppetmaster.empty? ? false : true
puppet_enabled = pm_set || @host.params['force-puppet']
salt_enabled = @host.params['salt_master'] ? true : false
section_end = (rhel_compatible && os_major <= 5) ? '' : '%end'
%>
# not required # url --url=http://<%= %>/pub/<%= @host.params["rhvh_image"] %>
lang en_US.UTF-8
selinux --enforcing
keyboard es

<% subnet = @host.subnet -%>
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<% dhcp = subnet.dhcp_boot_mode? && !@static -%>
<% else -%>
<% dhcp = !@static -%>
<% end -%>

network --bootproto <%= dhcp ? 'dhcp' : "static --ip=#{@host.ip} --netmask=#{subnet.mask} --gateway=#{subnet.gateway} --nameserver=#{[subnet.dns_primary, subnet.dns_secondary].select(&:present?).join(',')}" %> --hostname <%= @host %><%= os_major >= 6 ? " --device=#{@host.mac}" : '' -%>

rootpw --iscrypted <%= root_pass %>
firewall --<%= os_major >= 6 ? 'service=' : '' %>ssh
authconfig --useshadow --passalgo=sha256 --kickstart
timezone --utc <%= @host.params['time-zone'] || 'UTC' %>

<% if @host.operatingsystem.name == 'Fedora' and os_major <= 16 -%>
# Bootloader exception for Fedora 16:
bootloader --append="nofb quiet splash=quiet <%=ks_console%>" <%= grub_pass %>
part biosboot --fstype=biosboot --size=1
<% else -%>
bootloader --location=mbr --append="nofb quiet splash=quiet" <%= grub_pass %>
<% end -%>

<% if @dynamic -%>
%include /tmp/diskpart.cfg
<% else -%>
<%= @host.diskLayout %>
<% end -%>


liveimg --url=http://<%= foreman_server_fqdn %>/pub/<%= @host.params["rhvh_image"] %>/squashfs.img

%post --nochroot
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
<%= section_end -%>

%post
logger "Starting anaconda <%= @host %> postinstall"

nodectl init

exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
<% if subnet.respond_to?(:dhcp_boot_mode?) -%>
<%= snippet 'kickstart_networking_setup' %>
<% end -%>

#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub <%= @host.params['ntp-server'] || '' %>
/usr/sbin/hwclock --systohc

<%= snippet "subscription_manager_registration" %>

<% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%>
<%= snippet "idm_register" %>
<% end -%>

# update all the base packages from the updates repository
#yum -t -y -e 0 update

<%= snippet('remote_execution_ssh_keys') %>


<% if @provisioning_type == nil || @provisioning_type == 'host' -%>
# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate <%= foreman_url %>
<% end -%>
) 2>&1 | tee /root/
exit 0

<%= section_end -%>
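Once rendered for a host in the RHVH hostgroup, the provisioning-specific part of the kickstart reduces to roughly the following. This is a sketch for orientation only; the capsule FQDN is a placeholder, not a value from this environment:

```
liveimg --url=http://capsule.example.com/pub/RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso/squashfs.img

%post
nodectl init
%end
```

The liveimg line is what makes this different from a regular RHEL kickstart: instead of installing packages from a repository, Anaconda deploys the squashfs image, and nodectl init prepares the RHV-H imgbased layout.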

Create new Operating system

  • Name: RHVH
  • Major Version: 7
  • Partition table: RHVH Kickstart default
  • Installation media: RHVH Installation media
  • Templates: "RHVH Kickstart default PXELinux" and "Satellite Kickstart default"

Associate the previously-created provisioning templates with this OS.

Create a new hostgroup

Create a new hostgroup with a global parameter called rhvh_image. This parameter is used by the provisioning templates to generate the installation media paths as required.


rhvh_image = RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
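The same parameter can also be set from the CLI with hammer. The hostgroup name "RHVH" below is an assumption; the command is printed here rather than executed:

```shell
# Hypothetical hammer invocation to set the rhvh_image hostgroup parameter
# (hostgroup name "RHVH" is an assumption for illustration).
cmd='hammer hostgroup set-parameter --hostgroup "RHVH" --name rhvh_image --value "RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso"'
echo "$cmd"
```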


Final thoughts

Future versions of Satellite might include better integration of RHV-H provisioning; in the meantime, the method described above can be used.

Happy hacking!

RHV 4.2: Using rhv-log-collector-analyzer to assess your virtualization environment

RHV 4.2 includes a tool that lets you quickly analyze your RHV environment. It bases its analysis either on a logcollector report (sosreport and others), or it can connect live to your environment and generate some nice JSON or HTML output.

NOTE : This article is deprecated and is only left for historical reasons.

rhv-log-collector-analyzer now only supports live reporting, so use rhv-log-collector-analyzer --live to grab a snapshot of your deployment and verify its status.

You'll find it already installed in RHV 4.2, and gathering a report is as easy as:

# rhv-log-collector-analyzer --live
Generating reports:
Generated analyzer_report.html

If you need to assess an existing logcollector report on a new system that never had a running RHV-Manager, things get a bit more complicated:

root@localhost # yum install -y ovirt-engine
root@localhost # su - postgres
postgres@localhost ~ # source scl_source enable rh-postgresql95
postgres@localhost ~ # cd /tmp
postgres@localhost /tmp # time rhv-log-collector-analyzer  /tmp/sosreport-LogCollector-20181106134555.tar.xz

Preparing environment:
Temporary working directory is /tmp/tmp.do6qohRDhN
Unpacking postgres data. This can take up to several minutes.
sos-report extracted into: /tmp/tmp.do6qohRDhN/unpacked_sosreport
pgdump extracted into: /tmp/tmp.do6qohRDhN/pg_dump_dir
Welcome to unpackHostsSosReports script!
Extracting sosreport from hypervisor HYPERVISOR1 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR1
Extracting sosreport from hypervisor HYPERVISOR2 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR2
Extracting sosreport from hypervisor HYPERVISOR3 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR3
Extracting sosreport from hypervisor HYPERVISOR4 in /tmp/ovirt-log-collector-analyzer-hosts/HYPERVISOR4

Creating a temporary database in /tmp/tmp.do6qohRDhN/postgresDb/pgdata. Log of initdb is in /tmp/tmp.do6qohRDhN/initdb.log

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Importing the dump into a temporary database. Log of the restore process is in /tmp/tmp.do6qohRDhN/db-restore.log

Generating reports:
Generated analyzer_report.html

Cleaning up:
Stopping temporary database
Removing temporary directory /tmp/tmp.do6qohRDhN

You'll find an analyzer_report.html file in your current working directory. It can be reviewed with a text-only browser such as lynx or links, or opened with a proper full-blown browser.

Bonus track

Sometimes it can also be helpful to check the database dump that is included in the logcollector report. In order to do that, you can do something like:

Review pg_dump_dir in the log above: /tmp/tmp.do6qohRDhN/pg_dump_dir .

Initiate a new postgres instance as follows:

postgres@localhost $ source scl_source enable rh-postgresql95
postgres@localhost $ export PGDATA=/tmp/foo
postgres@localhost $ initdb -D ${PGDATA} 
postgres@localhost $ /opt/rh/rh-postgresql95/root/usr/libexec/postgresql-ctl start -D ${PGDATA} -s -w -t 30 &
postgres@localhost $ psql -c "create database testengine"
postgres@localhost $ psql -c "create schema testengine"
postgres@localhost $ psql testengine < /tmp/tmp.*/pg_dump_dir/restore.sql

Happy hacking!

Satellite 6: Upgrading to Satellite 6.3

We have a new and shiny Satellite 6.3.0 available as of now, so I just bit the bullet and upgraded my lab's Satellite.

The first thing to know is that you now have the foreman-maintain tool to do some pre-flight checks, as well as drive the upgrade. You'll need to enable its repository (included as part of the RHEL product):

subscription-manager repos --disable="*" --enable rhel-7-server-rpms --enable rhel-7-server-satellite-6.3-rpms --enable rhel-server-rhscl-7-rpms --enable rhel-7-server-satellite-maintenance-6-rpms

yum install -y rubygem-foreman_maintain

Check your Satellite health with:

# foreman-maintain health check
Running ForemanMaintain::Scenario::FilteredScenario
Check for verifying syntax for ISC DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
Check for paused tasks:                                               [OK]
Check whether all services are running using hammer ping:             [OK]
Scenario [ForemanMaintain::Scenario::FilteredScenario] failed.

The following steps ended up in failing state:


Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"

And finally perform the upgrade with:

# foreman-maintain upgrade  run  --target-version 6.3 --whitelist="foreman-proxy-verify-dhcp-config-syntax,disk-performance,repositories-setup"                          

Running Checks before upgrading to Satellite 6.3
Skipping pre_upgrade_checks phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_upgrade_checks`

Scenario [Checks before upgrading to Satellite 6.3] failed.

The following steps ended up in failing state:


Resolve the failed steps and rerun
the command. In case the failures are false positives,
use --whitelist="foreman-proxy-verify-dhcp-config-syntax"

Running Procedures before migrating to Satellite 6.3
Skipping pre_migrations phase as it was already run before.
To enforce to run the phase, use `upgrade run --phase pre_migrations`

Running Migration scripts to Satellite 6.3
Setup repositories: 
- Configuring repositories for 6.3                                    [FAIL]    
Failed executing subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-satellite-maintenance-6-rpms --enable=rhel-7-server-satellite-tools-6.3-rpms --enable=rhel-7-server-satellite-6.3-rpms, exit status 1:
Error: 'rhel-7-server-satellite-6.3-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.
Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-maintenance-6-rpms' is enabled for this system.
Repository 'rhel-7-server-satellite-tools-6.3-rpms' is enabled for this system.
Repository 'rhel-server-rhscl-7-rpms' is enabled for this system.
Update package(s) : 
  (yum stuff)

Upgrading, to monitor the progress on all related services, please do:
  foreman-tail | tee upgrade-$(date +%Y-%m-%d-%H%M).log
Upgrade Step: stop_services...
Upgrade Step: start_databases...
Upgrade Step: update_http_conf...
Upgrade Step: migrate_pulp...
Upgrade Step: mark_qpid_cert_for_update...
Marking certificate /root/ssl-build/satmaster.rhci.local/satmaster.rhci.local-qpid-broker for update
Upgrade Step: migrate_candlepin...
Upgrade Step: migrate_foreman...
Upgrade Step: Running installer...
Installing             Done                                               [100%] [............................................]
  The full log is at /var/log/foreman-installer/satellite.log
Upgrade Step: restart_services...
Upgrade Step: db_seed...
Upgrade Step: correct_repositories (this may take a while) ...
Upgrade Step: correct_puppet_environments (this may take a while) ...
Upgrade Step: clean_backend_objects (this may take a while) ...
Upgrade Step: remove_unused_products (this may take a while) ...
Upgrade Step: create_host_subscription_associations (this may take a while) ...
Upgrade Step: reindex_docker_tags (this may take a while) ...
Upgrade Step: republish_file_repos (this may take a while) ...
Upgrade completed!

Running Procedures after migrating to Satellite 6.3
katello-service start: 
- No katello service to start                                         [OK]      
Turn off maintenance mode:                                            [OK]
re-enable sync plans: 
- Total 4 sync plans are now enabled.                                 [OK]      

Running Checks after upgrading to Satellite 6.3
Check for verifying syntax for ISC DHCP configurations:               [FAIL]
undefined method `strip' for nil:NilClass
Check for paused tasks:                                               [OK]
Check whether all services are running using hammer ping:             [OK]

Upgrade finished.

Happy hacking!