Satellite 6.2: Pruning the MongoDB database

Satellite 6 uses a MongoDB database to keep track of, among other things, which RPMs compose a certain content view. As the Satellite gets used and content views are created and deleted, it can leave some old data around.

That data can be trimmed with MongoDB's built-in tool, mongod. Note that you will need as much free space as your database currently uses, plus a couple of GB. You can check the MongoDB documentation for more details:

https://docs.mongodb.com/v2.6/faq/storage/#how-do-i-reclaim-disk-space

This can help improve content view promotion times, among other tasks.

The process would be as follows (a command sketch is included after the list):

  • Stop Satellite services with katello-service stop.
  • Back up your MongoDB data; a tar archive would suffice, or use katello-backup.
  • Launch the repair: mongod --dbpath {{ mongodb_path }} --repair
  • Start the services again with katello-service start.
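In practice the whole procedure boils down to a handful of commands. A minimal sketch, assuming the default Satellite 6.2 MongoDB data path of /var/lib/mongodb, a backup destination under /root, and that the mongod service runs as the mongodb user (adjust all three to your environment):

# katello-service stop
# df -h /var/lib/mongodb      # confirm there is enough free space for the repair copy
# tar czf /root/mongodb-backup.tar.gz /var/lib/mongodb
# mongod --dbpath /var/lib/mongodb --repair
# chown -R mongodb:mongodb /var/lib/mongodb    # a repair run as root can leave files owned by root
# katello-service start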

This can reduce the MongoDB database size by up to 50% and halve publication time :-)

Happy hacking!

Tales from the Field: Updating RHEV 3.5 to RHV 4.1

Welcome to the first post in the Tales from the Field series. In this section I intend to collect the most useful stories about different products and how they are used in everyday situations.

For starters, I'll cover the new RHV suite, now at version 4.1, and the upgrade path to migrate an older environment to the latest version.

RHV fundamentals

RHEV is now simply known as Red Hat Virtualization (RHV) and it has several components:

  • The RHEV Manager, a system that acts as a central configuration and orchestration point for all hypervisors.

  • Several types of hypervisors:

    • RHEL-6 hypervisors. Used primarily in the RHEV 3.x suite.
    • RHEL-7 hypervisors. Used in the later RHEV 3.x and 4.x suites.
    • RHEV-H 6 (vintage) hypervisors. Used primarily in the RHEV 3.x suite.
    • RHEV-H 7 (vintage) hypervisors. Used in the later RHEV 3.x suite.
    • RHEV-H 7 (NGN) hypervisors. The new hypervisor in the RHEV 4.x suite.

Standard RHEV to RHV upgrade path

Red Hat has put together some documentation and labs that should ease the upgrade process.

The 3.6 to 4.0 step is probably the one that requires most planning, as it involves rebuilding the manager with a new version of the underlying operating system (RHEL6 to RHEL7).

Shortcutting the upgrade

If I'm running a 3.5 cluster, surely I do not need to install/upgrade every version of RHEV to end up on the latest one, do I?

Unfortunately, yes. But there are some tweaks that can be made along the way to ease the migration.

  • If you are running a 3.5 cluster with either EL6 or EL7 3.5 hypervisors, you'll need to reinstall them. RHV 4 uses the next generation node (NGN) hypervisors, which are not upgradable from previous versions of RHEV-H. The good news is that a 3.6 NGN version of the hypervisor is now available, so you can reinstall some of your existing 3.5 hosts with 3.6 NGN, and that is the only reinstallation you will need.
  • Once you have updated your Manager to 3.5.latest and then to 3.6, you are in a position to start reinstalling your hypervisors with 3.6 NGN. Once that part is done, you are required to change your cluster compatibility setting to 3.6, and it is recommended that you reboot all your VMs. There is an ongoing Bugzilla, BZ#1474870, that should address documenting the official recommendation regarding VM reboots in an upgrade scenario.
  • Once you are on a RHEV 3.6 Manager with a 3.6 compatibility level, you need to back up your engine database and keep it in a safe place (see the engine-backup sketch after this list). If you're migrating from a standalone Engine to a Hosted Engine, you'll need it to restore the data. If your plan is to reinstall your standalone Engine with RHEL 7, this is the moment to do so.
  • Once your new Engine is available, you'll run a restore and then an upgrade so it ends up on the latest 4.0 version.
  • If your 3.6 hypervisors have a configured RHSM/Satellite subscription, you'll be able to launch a 3.6 to 4.0 upgrade from the Manager itself. This greatly eases the upgrade, as no manual provisioning is required.
  • After all hypervisors have been migrated to 4.0, change the cluster compatibility to 4.0.
  • Rinse and repeat for the Manager and host upgrades to 4.1.
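As a reference for the backup and restore steps above, a minimal engine-backup sketch with placeholder file names (flag availability can vary slightly between versions, so double-check engine-backup --help on yours):

# On the 3.6 Engine, take a full backup:
# engine-backup --mode=backup --scope=all --file=engine-36-backup.bz2 --log=engine-backup.log

# On the freshly installed RHEL 7 Engine, restore the backup before upgrading:
# engine-backup --mode=restore --file=engine-36-backup.bz2 --log=engine-restore.log --provision-db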

Happy hacking!

Satellite 6.2: Adding RHV4 compute resources

The Satellite server allows you to manage compute resources such as VMware and Red Hat Virtualization (RHEV/RHV). In the later RHV versions there are a couple of caveats about how they should be added into Satellite (a hammer example follows the list):

  • The RHV certificate file has changed location:

http://rhv4-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

  • The API endpoint should be specified with a 'v3' tag, as follows:

https://rhv4-fqdn/ovirt-engine/api/v3
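With both details in mind, the compute resource can also be created from the CLI. A minimal hammer sketch, where the name, credentials, organization and location values are placeholders:

# hammer compute-resource create \
    --name "RHV4" \
    --provider "Ovirt" \
    --url "https://rhv4-fqdn/ovirt-engine/api/v3" \
    --user "admin@internal" \
    --password "changeme" \
    --organizations "Example" \
    --locations "Default Location"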

Happy hacking!

Injecting proxy configuration into remote-viewer (RHEV/oVirt console)

One of the things that bothers me is that I have not been able to find an easy way to change proxy settings when using the remote-viewer tool to connect to RHEV/oVirt virtual machines.

Finally I managed to put some thought into how to handle this situation, and the easiest way I found is simply to put a wrapper around remote-viewer so that whenever it's invoked via the browser, it can do all the required mangling of the *.vv file we just downloaded.

The virt-viewer *.vv files are just INI files, so they are usually quite easy to update with an INI-aware tool such as Ansible's ini_file module or crudini (both are referenced in the wrapper below).
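For reference, this is roughly what the downloaded *.vv file ends up looking like once the proxy entries have been injected; a sketch with placeholder connection values and most keys trimmed for brevity:

[virt-viewer]
type=spice
host=hypervisor.example.org
port=-1
password=xxxx
proxy=http://localhost:3128

[ovirt]
proxy=http://localhost:3128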

All in all, my wrapper looks like this:

#!/bin/bash

# Wrapper around remote-viewer to inject proxy parameters.
# Needs ansible, but can be trivially amended to use crudini.

export PATH=$PATH:/usr/bin:/usr/sbin

path="$1"
chmod 600 "$path"

logger "$0 $path"


# Add your magic about detecting whether we need to use a proxy in the 'if' below ;-)
if [ $(netstat -putan 2> /dev/null | grep -P ':3128.*LISTEN' -c) -ne 0 ]; then
    logger "$0 enabling proxy"
    ansible -m ini_file localhost -a "state=present section=virt-viewer option=proxy value=\"http://localhost:3128\" path=\"$path\""
    ansible -m ini_file localhost -a "state=present section=ovirt       option=proxy value=\"http://localhost:3128\" path=\"$path\""
    # ini_file writes 'option = value'; remote-viewer expects 'option=value'
    sed -i 's# = #=#g' "$path"
fi

#logger < $path
/usr/bin/remote-viewer "$path"

exit $?

And then you just need to instruct your browser to launch remote-viewer.sh rather than remote-viewer to have your proxy settings automatically added.
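If you would rather not depend on Ansible, the two ansible calls inside the wrapper could be replaced with crudini, as hinted in the script comment; a minimal sketch:

crudini --set "$path" virt-viewer proxy "http://localhost:3128"
crudini --set "$path" ovirt       proxy "http://localhost:3128"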

Happy hacking!

Satellite 6: Force repository download

As a part of the Satellite 6.2.9 release, it is now possible to force a re-download of a certain repository.

This is a big help to fix the following scenarios:

  • Missing/broken on-disk RPM files due to human error, bit rot, or other causes.
  • Restoring a Satellite without the Pulp data.

The resynchronization can be launched with:

# hammer repository list --organization "Example"
# hammer repository synchronize --validate-contents=true --id=42

Alternatively, a full RPM spool check can be triggered with:

# foreman-rake katello:validate_yum_content --trace

There is further information in the following KCS note.

Happy hacking!

Taming PackageKit's disk usage

PackageKit is a D-Bus abstraction layer that allows the session user to manage packages in a secure way using a cross-distro, cross-architecture API.

To me, it is more of a space hog that caches all available RPM updates for my installed software, with no reasonable way of getting them cleaned up or expunged.

There are a few bugs still open against F24 and F25 about its default behaviour, and while no consensus has been reached, I was still in need of a way to trim the usage of my /var/cache/PackageKit directory.
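A quick way to see how much space the cache is currently using:

# du -sh /var/cache/PackageKit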

The long-term solution is apparently to just disable automatic update downloads via GNOME Software's gsettings:

$ gsettings get org.gnome.software download-updates
true

$ gsettings set org.gnome.software download-updates false

$  gsettings get org.gnome.software download-updates
false

This only affects future downloads; it does not actually clean up the currently cached packages. The pkcon tool should take care of that:

# pkcon refresh force -c -1
Refreshing cache              [=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Downloading repository information[=========================]         
Loading cache                 [=========================]         
Finished                      [=========================]         

... however, it seems it only partially cleans up old information. In the end I went for the drastic method and just ran sudo rm -rf /var/cache/PackageKit/*.

Happy hacking!

Identifying VirtIO disks in RHEV

When you have a lot of disks attached to a single VM, it can be cumbersome to identify which RHEV disk corresponds to which device inside the VM. For example, you might have several 100 GB disks named vde, vdf and vdg, with no obvious way to properly tell which one is which.

Say you want to remove your vdd disk, and have it removed from your RHEV environment.

You can check their VirtIO identifiers with:

# find /dev/disk/by-id "(" -name "virtio*" -and -not -name "*part*" ")"  -exec ls -l "{}" +
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-38ebb33e-db1c-4048-b -> ../../vdb
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-4d18901c-1dea-414d-b -> ../../vda
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-69c3688d-f8b3-464c-8 -> ../../vdc
lrwxrwxrwx. 1 root root 9 Mar  8 09:18 /dev/disk/by-id/virtio-b1666304-fba1-44a2-a -> ../../vdd

We can confirm the VirtIO identifier for vdd is b1666304-fba1-44a2.

Now we need to map this to the human-readable names available in the RHEV Web UI. This can be done by querying the REST API as follows:

# curl --silent -k -u "admin@internal:password" https://server:port/api/vms/UUID/disks | grep -P 'disk href|alias'

    <disk href="/api/vms/VM-UUID/disks/38ebb33e-db1c-4048-a345-e381abc7f8cc" id="c8aad46c-9373-4670-a345-e381abc7f8cc">
        <alias>MyVMName_Disk3</alias>
    <disk href="/api/vms/VM-UUID/disks/4d18901c-1dea-414d-9cca-811dac616443" id="b7840cff-8f55-4dbb-9cca-811dac616443">
        <alias>MyVMName_Disk2</alias>
    <disk href="/api/vms/VM-UUID/disks/69c3688d-f8b3-464c-87e1-000e19d85c79" id="e9938a29-b792-4f6e-87e1-000e19d85c79">
        <alias>MyVMName_Disk1</alias>
    <disk href="/api/vms/VM-UUID/disks/b1666304-fba1-44a2-9cb1-fb4cfcbc19dd" id="9c2df047-343e-484a-9cb1-fb4cfcbc19dd">
        <alias>MyVMName_Disk4</alias>

The UUID for the instance can be checked in the RHEV Web UI by opening the VMs tab and clicking on the VM name.
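Alternatively, the VM UUID can be looked up through the same REST API; a sketch, assuming the VM is called MyVMName (adjust server, port and credentials):

# curl --silent -k -u "admin@internal:password" "https://server:port/api/vms?search=name%3DMyVMName" | grep -P '<vm href'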

Happy hacking! :-)

Listing Hypervisors and VMs in Red Hat Satellite

One of my current pet peeves with Satellite is getting a list of hypervisors and their associated VMs. This is especially helpful for troubleshooting virt-who or performing general Satellite cleanup tasks.

With the snippet below we can get a list of all hypervisors configured in Satellite:
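A minimal hammer sketch, assuming the hypervisors were registered by virt-who (and therefore follow its naming convention) and that the organization is called Example:

# hammer host list --organization "Example" --search 'name ~ virt-who-'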

Additionally, virt-who-created entities are usually named virt-who-UUID-1 or virt-who-HypervisorHostname-1. To identify them easily in the Satellite Web UI, the following filter can be used as a search pattern:

name ~ "virt-who-%-%-%"

Happy hacking!

Deploying the Cloudforms appliance template in vCloud Director

Red Hat provides the Cloudforms software nicely packaged as a VMware OVA template; unfortunately this means some manual work is required to deploy it under vCloud Director. Note that this blog post only covers getting the template into vCloud Director; vCloud Director itself is not on the list of supported cloud providers for Cloudforms.

Once we have downloaded the Cloudforms software, the first step is to convert the template from OVA to OVF format, the only format supported by vCloud Director.

# ovftool cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ova cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Opening OVA source: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ova
Opening OVF target: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Writing OVF package: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
Transfer Completed                    
Warning:
 - Wrong file size specified in OVF descriptor for 'disk.vmdk' (specified: 42949672960, actual 707240448).
 - No manifest entry found for: 'disk.vmdk'.
 - No manifest file found.
Completed successfully

Note there is a warning message regarding the disk.vmdk file; this is due to the template being in thin-provisioned format. For vCloud Director to accept the OVF file, we need to modify the produced cfme-vsphere-*.ovf with the right size:

sed -i 's#42949672960#707240448#g' cfme-vsphere-*.ovf

Once we have done that, we need to amend the manifest file with the right sha1sum:

# sha1sum  cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
17df197d9ef7859414aac0f6703808a9a8b99286  cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
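
The manifest entry can be updated by hand in a text editor, or the whole file can simply be regenerated; a sketch using sha1sum and awk to rebuild both entries in the expected format:

# sha1sum cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf cfme-vsphere-5.7.1.3-1.x86_64.vsphere-disk1.vmdk \
    | awk '{ print "SHA1(" $2 ")= " $1 }' > cfme-vsphere-5.7.1.3-1.x86_64.vsphere.mf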

# cat cfme-vsphere-5.7.1.3-1.x86_64.vsphere.mf
SHA1(cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf)= 17df197d9ef7859414aac0f6703808a9a8b99286
SHA1(cfme-vsphere-5.7.1.3-1.x86_64.vsphere-disk1.vmdk)= 696baa7f8803beca7be2ad21cde2b6cc975c6c57

Finally, we can import the template into vCloud Director itself, again using the ovftool software:

# ovftool --vCloudTemplate cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf  "vcloud://myuser@myvcloud.example.org:443?org=myorg&vappTemplate=CFME42-Template&catalog=MyCatalog&vdc=MyVDC"
Opening OVF source: cfme-vsphere-5.7.1.3-1.x86_64.vsphere.ovf
The manifest validates
Enter login information for target vcloud://myvcloud.example.org/myorg
Username: myuser
Password: *******
Opening vCloud target: vcloud://myuser@myvcloud.example.org/
Deploying to vCloud vApp template: vcloud://myuser@myvcloud.example.org/
Transfer Completed
Completed successfully

Once we complete that step, the template is available to start deploying a new Cloudforms VM.

Happy hacking!