Gerrit for dummies and Software Factory

Last Thursday I attended a great presentation by Javier Peña, an introduction to Gerrit. In no particular order, these are the things that caught my eye:

  • Gerrit is a code review system. It sits between your source code repository (Git) and your developers, so any proposed change can be better discussed and reviewed. In parallel, Gerrit can be instructed to launch integration tests to ensure the proposed change doesn't break anything. If the CI tests are successful, the patch can be merged into the code. Otherwise, the submitter can make further changes with git commit --amend until the patch is successfully merged (or discarded).
  • It is better than Github's review workflow because:
    • It's not proprietary ;-)
    • Can work in a disconnected fashion with Gertty
    • Reviewing dozens of commits for a single PR can get confusing fast, especially with large patches.
    • As mentioned earlier, Gerrit can be easily integrated with several CI systems.
  • Important open source projects such as OpenStack and oVirt are using it, although it started at Google.
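The review loop described above boils down to a handful of git commands. A minimal sketch, with all names illustrative; a local bare repository stands in for the Gerrit server here, since a plain git repo also accepts pushes to the magic refs/for/<branch> ref:

```shell
# Sketch of the Gerrit review loop. A local bare repo is a stand-in for the
# Gerrit server; every name below is illustrative.
set -e
workdir=$(mktemp -d)
git init -q --bare "$workdir/gerrit.git"        # stand-in for the Gerrit server
git init -q "$workdir/clone"
cd "$workdir/clone"
git config user.email dev@example.com
git config user.name  "Dev"
git remote add origin "$workdir/gerrit.git"

echo "first attempt" > feature.txt
git add feature.txt
git commit -qm "Add feature"
git push -q origin HEAD:refs/for/master         # propose the change for review

# A reviewer (or failing CI) asks for changes: fix, amend the SAME commit,
# and push again. Real Gerrit groups the resubmission into the same review
# via the Change-Id footer and needs no force; the plain-repo stand-in does.
echo "reviewed attempt" > feature.txt
git add feature.txt
git commit -q --amend --no-edit
git push -qf origin HEAD:refs/for/master
```

Against a real Gerrit server the push target would be its SSH or HTTPS URL, and each amended push shows up as a new patchset on the same review.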

There is a project that integrates Git, Gerrit, Zuul, Nodepool and a bunch of other tools to make development easier. It's called Software Factory and you can find additional info in its documentation: https://softwarefactory-project.io/docs/.

Software Factory logo

Happy hacking!

Satellite 6: Configuring vlan tagging and bonding deployments

Satellite 6 provides both provisioning of hosts (via PXE or other methods) and automated network configuration. We can use Satellite to automatically configure our bonds and VLAN tagging for us.

Let's review how this is all done using the Satellite web UI. My suggestion is to configure these details one by one, and examine how the generated kickstart changes at each step. This will help us identify issues and mistakes before starting a lengthy trial-and-error process provisioning physical servers :-)

The rendered kickstart template is available under https://SATELLITE/unattended/provision?hostname=your.host.name .

VLAN tagging

This is the simplest scenario; we just want to configure vlan tagging in an existing interface.

Imagine we want to configure the following interfaces:

eth0: PXE, also used as the base NIC for vlan tagging
eth0.666: 192.168.0.10/24, using vlan 666.

We need to ensure that:

  • We have configured a domain.
  • We have configured a subnet, and is attached to that domain.
  • The network is configured to use Static boot mode (this is a personal preference of mine -- I'd prefer all my interfaces to come up regardless of the availability of a DHCP capsule).

Once that is done, we can run a server discovery and start editing the network interfaces with the relevant info.

So we'd configure the following interfaces in Satellite:

  • eth0:
    • Mac address: <autodiscovered>
    • Device identifier: eth0
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: <blank>
    • Managed: true
  • eth0.666:
    • Mac address: <blank>
    • Device identifier: eth0.666
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: 192.168.0.10
    • Managed: true
    • Virtual: true
    • Attached device: eth0
    • Tag: 666

It's important that you configure the eth0 interface itself; otherwise, when eth0.666 is brought up, it will fail because the parent device isn't ready.

All in all, your generated configuration should look like:

####### parent device #######
# eth0 interface
real=`ip -o link | grep 00:50:56:04:1a:8a | awk '{print $2;}' | sed s/:$//`

# ifcfg files are ignored by NM if their name contains colons so we convert colons to underscore
sanitized_real=$real

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="none"
IPADDR=""
NETMASK="255.255.255.0"
GATEWAY="172.16.16.1"
DEVICE=$real
HWADDR="00:50:56:04:1a:8a"
ONBOOT=yes
PEERDNS=no
PEERROUTES=no
EOF


###### vlan tagging #######
# eth0.666 interface
real=`ip -o link | grep 00:50:56:04:1a:8a | awk '{print $2;}' | sed s/:$//`
  real=`echo eth0.666 | sed s/eth0/$real/`

# ifcfg files are ignored by NM if their name contains colons so we convert colons to underscore
sanitized_real=$real

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="none"
IPADDR="192.168.0.10"
NETMASK="255.255.255.0"
GATEWAY="192.168.0.1"
DEVICE=$real
ONBOOT=yes
PEERDNS=no
PEERROUTES=no
VLAN=yes
EOF
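The only non-obvious part of these snippets is the first line, which resolves the real interface name from the MAC address (NIC names can differ between discovery and provisioning). A self-contained sketch of the same lookup, with canned ip -o link output standing in for the real command so it runs anywhere:

```shell
# Resolve an interface name from its MAC address, as the generated kickstart
# does. Canned "ip -o link" output stands in for the real command.
ip_o_link='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 \    link/loopback 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 \    link/ether 00:50:56:04:1a:8a brd ff:ff:ff:ff:ff:ff'

# awk picks the "eth0:" field; sed strips the trailing colon.
real=$(echo "$ip_o_link" | grep 00:50:56:04:1a:8a | awk '{print $2;}' | sed 's/:$//')
echo "$real"    # eth0
```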

Bonding

In the same way as before, we need to configure the underlying interfaces before we configure the bonded one.

eth0: PXE (no specific configuration mentioned here)
eth1: Bond slave
eth2: Bond slave
bond0: Active-Passive bond enslaving eth1 and eth2

For this example we'll be configuring eth1 and eth2 as slaves of bond0, which will have an IP assigned to it. It is very important that you configure both bond slaves first, and then the bond interface. Otherwise the bond won't be properly linked to the slaves and the template won't generate the kickstart properly.

  • eth1:
    • Mac address: <autodiscovered>
    • Device identifier: eth1
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: <blank>
    • Managed: true
  • eth2:
    • Mac address: <autodiscovered>
    • Device identifier: eth2
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: <blank>
    • Managed: true
  • bond0:
    • Type: Bond
    • Mac address: <none>
    • Device identifier: bond0
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: 192.168.0.11
    • Managed: true
    • Bond configuration:
      • Mode: Active-Backup
      • Attached devices: eth1,eth2
      • Bond options: ""

The generated config looks like:

# bond0 interface
real="bond0"
cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$real
BOOTPROTO="none"
IPADDR="172.16.16.230"
NETMASK="255.255.255.0"
GATEWAY="172.16.16.1"
DEVICE=$real
ONBOOT=yes
PEERDNS=no
PEERROUTES=no
DEFROUTE="no"
TYPE=Bond
BONDING_OPTS=" mode=active-backup"
BONDING_MASTER=yes
NM_CONTROLLED=no
EOF



# eth1 interface
real=`ip -o link | grep 00:50:56:04:1a:8f | awk '{print $2;}' | sed s/:$//`

# ifcfg files are ignored by NM if their name contains colons so we convert colons to underscore
sanitized_real=$real

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="none"
DEVICE=$real
HWADDR="00:50:56:04:1a:8f"
ONBOOT=yes
PEERDNS=no
PEERROUTES=no
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
EOF



# eth2 interface
real=`ip -o link | grep 00:50:56:04:1a:90 | awk '{print $2;}' | sed s/:$//`

# ifcfg files are ignored by NM if their name contains colons so we convert colons to underscore
sanitized_real=$real

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="none"
DEVICE=$real
HWADDR="00:50:56:04:1a:90"
ONBOOT=yes
PEERDNS=no
PEERROUTES=no
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes
EOF
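Once the provisioned host is up, the bonding driver exposes its state under /proc/net/bonding/bond0, which is the quickest way to confirm which slave an active-backup bond is actually using. A self-contained sketch of extracting the active slave (canned file content stands in for the real file; on the host you would read /proc/net/bonding/bond0 directly):

```shell
# Check which slave is currently active in an active-backup bond.
# Canned /proc/net/bonding/bond0 content stands in for the real file.
bond_status='Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up

Slave Interface: eth1
MII Status: up

Slave Interface: eth2
MII Status: up'

# Split on ": " and keep the value of the "Currently Active Slave" line.
active=$(echo "$bond_status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"    # eth1
```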

Bonding + VLAN tagging

In this final example we want to configure a bond and add different vlan-tagged interfaces to it:

eth0: PXE (no specific configuration mentioned here)
eth1: Bond slave
eth2: Bond slave
bond0: Active-Passive bond enslaving eth1 and eth2
bond0.666: Interface in vlan 666 (192.168.6.6/24)
bond0.777: Interface in vlan 777 (192.168.7.7/24)

  • eth1:
    • Mac address: <autodiscovered>
    • Device identifier: eth1
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: <blank>
    • Managed: true
  • eth2:
    • Mac address: <autodiscovered>
    • Device identifier: eth2
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: <blank>
    • Managed: true
  • bond0:
    • Type: Bond
    • Mac address: <none>
    • Device identifier: bond0
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: 192.168.0.11
    • Managed: true
    • Bond configuration:
      • Mode: Active-Backup
      • Attached devices: eth1,eth2
      • Bond options: ""
  • bond0.666:
    • Type: interface
    • Mac address: <blank>
    • Device identifier: bond0.666
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: 192.168.6.6
    • Managed: true
    • Virtual: true
    • Attached device: bond0
    • Tag: 666
  • bond0.777:
    • Type: interface
    • Mac address: <blank>
    • Device identifier: bond0.777
    • DNS name: <none>
    • Domain: your.domain
    • Subnet: your-subnet-with-Static-bootproto
    • IP Address: 192.168.7.7
    • Managed: true
    • Virtual: true
    • Attached device: bond0
    • Tag: 777

Happy hacking!

First steps with Infoblox

Infoblox produces appliances that do DNS/DHCP management, full network IPAM management and so on. Since I needed to make some use of their APIs, I had to set up an Infoblox appliance, and here I'm jotting down some of the steps I took for easier reference.

The overall steps are:

  • Download the appliance from www.infoblox.com
  • Deploy it on your favourite virtualization system, e.g. KVM.
  • Start the VM, and ensure the CPU and memory prerequisites are met.

If you use VMware/vCloud and the OVA, you'll probably be prompted for most network and password details when deploying the appliance, so in that regard it is a bit more straightforward to deploy.

Once your VM is started, you can log in with:

user: admin
pass: infoblox

The first step is to configure the network, which can be done with:

Infoblox > set network
Enter IPv4 address [Default: 172.16.16.102]: 
Enter netmask [Default: 255.255.255.0]: 
Enter gateway address [Default: 172.16.16.1]: 
NOTICE: Additional IPv6 interface can be configured only via GUI.
Become grid member? (y or n): n

 New Network Settings:
  IPv4 address:         172.16.16.102
  IPv4 Netmask:         255.255.255.0
  IPv4 Gateway address: 172.16.16.1

        Is this correct? (y or n): y

And now the most confusing thing for newcomers: getting your licenses right. If you're using an evaluation license, you don't need to register on the Infoblox website; instead, you can have the appliance generate some 60-day evaluation licenses.

You'll need more than one license. The installed ones can be checked as below:

Infoblox > show license all
Public IP       License Type                Kind      Exp Date   Replaced Hardware ID             License String

To generate the evaluation licenses:

Infoblox > set temp_license

  1. DNSone (DNS, DHCP)
  2. DNSone with Grid (DNS, DHCP, Grid)
  3. Network Services for Voice (DHCP, Grid)
  4. Add DNS Server license
  5. Add DHCP Server license
  6. Add Grid license
  7. Add Microsoft management license
  8. Add vNIOS license
  9. Add Multi-Grid Management license
 10. Add Query Redirection license
 11. Add Response Policy Zones license
 12. Add FireEye license
 13. Add DNS Traffic Control license
 14. Add Cloud Network Automation license
 15. Add Security Ecosystem license
 16. Add Flex Grid Activation license

Select license (1-16) or q to quit: 1

This action will generate a temporary 60-day DNSone license.
Are you sure you want to do this? (y or n): y
DNS temporary license installed.
DHCP temporary license installed.

Temporary license is installed.


The UI needs to be restarted in order to reflect license changes.
Restart UI now, this will log out all UI users? (y or n):y

You will need to repeat this process several times until all required licenses are in place. As a guideline, these are the licenses I needed to build a working appliance:

Infoblox > show license
Version         : 8.1.2-356916
Hardware ID     : 42140354685089f1cdccff04ff9cec5d

License Type    : DNS
Expiration Date : 11/27/2017
License String  : EwAAAEdPGOmqWmv1aFgZbs+JuxsU6WM=

License Type    : DHCP
Expiration Date : 11/27/2017
License String  : FAAAAEdJCOXkWyS7bRNXbM6P8ARA7mYv

License Type    : vNIOS (model IB-VM-820)
Expiration Date : 11/27/2017
License String  : GgAAAFVPAvrrFSX0IxYcIsyO9k8K7nUvmR1TaVew

vNIOS: CPU cores detected: 2 - [License allows: 2]
vNIOS: System memory detected: 4096MB - [License allows: 7168MB]

License Type    : Grid
Expiration Date : 11/27/2017
License String  : GgAAAEZPH/DqGWWuLEFXbM3C9U8K7WEsnEUFal64

Only once your licenses are properly installed will your web interface become available at https://your.appliance.ip .

For the Cisco-oriented people, the appliance CLI is somewhat similar to that of some Cisco devices. The show tech-support command, which displays all low-level configuration and status, is especially useful.

Happy hacking!

Configuring an iscsi volume for RHV usage

This is a cheatsheet to quickly configure storage, export it as an iSCSI volume using RHEL 7's targetcli, and get it configured under RHV. This is by no means a production configuration, as your RHEL 7 system might become a single point of failure, but it works nicely for building a home lab or test environment.

Just for clarity, a quick reminder of iSCSI concepts:

  • An iSCSI target provides some storage (here called server),
  • An iSCSI initiator uses this available storage (here called client).

Prerequisites

Configure server's storage

You can configure several types of backends; for me, the most versatile is using LVM logical volumes. You'll need to create your volumes in advance, for example:

lvcreate yourVG -n yourLV1 -L 50G

Install software

Install the targetcli RPM:

yum install -y targetcli

Enable the target daemon (NOT targetd):

systemctl enable --now target.service

Gather RHV configuration

You'll need to gather the following information from RHV:

  • IQN (iSCSI Qualified Name, the iSCSI identifier)

Configure and enable iSCSI

targetcli provides a very simple way to create iSCSI targets once you understand how it works. Namely, what needs to be done is:

  • Add your backend devices. This is where you put the LVM volumes created in the previous steps under targetcli's control.
  • Create an IQN target. This is a collection of LUNs shared to the same system(s) under the same group. It is used later to apply ACLs so only certain hosts can use certain LUNs.
  • Add LUNs to your IQN target. After creating your IQN target, you need to add the backstore devices so they're shared via iSCSI.
  • Add ACLs to your IQN target. Unless configured otherwise, LUNs are not visible to a system unless it is added to the right ACL.

Here is a dump of how all this can be accomplished with targetcli:
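A minimal, non-interactive sketch of those four steps (requires root and a running target service; every name and IQN below is a placeholder -- in particular, the ACL entry must be the initiator IQN gathered from RHV in the previous step):

```shell
# 1. Put the LV under targetcli's control as a block backstore
#    (the VG/LV names match the lvcreate example above).
targetcli /backstores/block create name=rhv_lun1 dev=/dev/yourVG/yourLV1

# 2. Create the IQN target (the IQN itself is illustrative).
targetcli /iscsi create iqn.2017-10.com.example:rhv-target

# 3. Add the backstore as a LUN of that target.
targetcli /iscsi/iqn.2017-10.com.example:rhv-target/tpg1/luns \
    create /backstores/block/rhv_lun1

# 4. Allow the RHV host's initiator IQN to see the LUN.
targetcli /iscsi/iqn.2017-10.com.example:rhv-target/tpg1/acls \
    create iqn.1994-05.com.redhat:your-rhv-host

# Persist the configuration so it survives a restart of the target service.
targetcli saveconfig
```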

Add storage into RHV
