Transfer network-manager connections to new computer

I just recently got a new laptop (a T440p) to test whether it suits my needs. The old x220 is still a nice companion, but I kind of need a bit more horsepower in my daily driver.

Let me tell you something about the magic of Linux - I just removed the old SSD from my x220 and put it into the T440p. It booted out of the box, with all my configurations and everything in place. No need to reconfigure or even reinstall anything; I was productive within a couple of minutes. Transferring the SSD was just a matter of a few screws, so no problem at all. That's how it should be, and it's one of the reasons why soldered-in SSDs suck so badly.

So, everything was working nicely, except that for some reason NetworkManager seemed to have forgotten all the Wifi connections. Actually, they were still there and configured, just for the Wifi interface of the other laptop. And reconfiguring all of them is kind of boring. There has to be a better way.

NetworkManager & system connections

NetworkManager stores all the connections in /etc/NetworkManager/system-connections. They are there as plaintext files (restricted to root, though).

Turns out, the line we need to change in every file is the following:
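The keyfile entry in question looks roughly like this (a sketch - the SSID and MAC here are placeholders; depending on your NetworkManager version the section is called [wifi] or [802-11-wireless]):

```ini
[802-11-wireless]
ssid=MyHomeWifi
mac-address=XX:XX:XX:XX:XX:XX
```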

This is the MAC address of my old laptop, and I just need to replace it with the MAC of my new laptop. Easy as pie. The following one-liner does the job. Remember to replace YY:YY:YY:YY:YY:YY with the MAC address of your laptop:
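A sketch of such a one-liner (on the real system you would run the sed command as root directly against /etc/NetworkManager/system-connections; here a temporary directory stands in for it):

```shell
#!/bin/bash
# Replace the pinned MAC address in every connection file with the new adapter's MAC.
NEW_MAC="YY:YY:YY:YY:YY:YY"

# Stand-in for /etc/NetworkManager/system-connections, for demonstration purposes.
CONN_DIR=$(mktemp -d)
printf 'mac-address=AA:BB:CC:DD:EE:FF\n' > "$CONN_DIR/home-wifi"

# The actual one-liner: rewrite the mac-address= line in every file.
sed -i "s/^mac-address=.*/mac-address=${NEW_MAC}/" "$CONN_DIR"/*

cat "$CONN_DIR/home-wifi"   # mac-address=YY:YY:YY:YY:YY:YY
```

Afterwards, restart NetworkManager (e.g. systemctl restart NetworkManager) so it picks up the changed files.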

After that, I restarted NetworkManager, and it was nicely connecting to the available Wifi.

TL;DR

  • NetworkManager Wifi connections are MAC-sensitive
  • To bring your connections to a new Wifi adapter, you need to change the mac-address line of every connection file, e.g. with the one-liner shown above

btrfs being notably slow in HDD RAID6

Disclaimer: I don't want to blame anyone. I love btrfs and would very much like to get it running well. This is documentation of a use case where I ran into some issues.

I am setting up a new server for our Telescope systems. The server acts as supervisor for a handful of virtual machines that control the telescope and instruments and provide some small services, such as a file server. More or less just very basic services, nothing fancy, nothing exotic.

The server has in total 8 HDDs, configured as RAID6 and connected via a Symbios Logic MegaRAID SAS-3 controller, and I would like to set up btrfs on the disks - which turns out to be horribly slow. The decision for btrfs came from resilience considerations, especially the possibility of creating snapshots and conveniently transferring them to another storage system. I am using OpenSuSE LEAP 15.0, because I wanted a not-too-old Kernel and because OpenSuSE and btrfs should work nicely together (a small jab in the direction of RedHat to rethink their strategy of abandoning btrfs in the first place).

The whole project now runs into problems, because in comparison to xfs or ext4, btrfs performs horribly in this environment. And by horribly I mean a factor of 10 and more!

The problem

The results as text are listed below; scroll to the end of the section.

I use the tool dbench to measure IO throughput on the RAID. When comparing btrfs to ext4 or xfs, I notice the overall throughput is lower by about a factor of 10 (!!), as can be seen in the following figure:
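The runs look roughly like this (a sketch; the mount point is a placeholder, and the exact parameters are in the detailed text file linked at the end of the section):

```shell
# run dbench with 10 client processes against the mounted filesystem under test
dbench -D /mnt/benchmark 10
```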

Throughput measurement using dbench - xfs's throughput is 11.60 times as high as btrfs !!!

sda1 and sda2 are obviously on the same disk and created with default parameters, as the following figure demonstrates.

My first suspicion was that maybe CoW (copy-on-write) could be the reason for the performance issue. So I tried to disable CoW by setting chattr +C and by remounting the filesystem with nodatacow - both without any noticeable difference, as shown in the next two figures.

Same dbench run as before, but with chattr +C. No noteworthy difference
Same dbench run as in the beginning, but with the nodatacow mount option. Still a negligible difference

Hmmmm, looks like something's fishy with the filesystem. For completeness I also wanted to swap /dev/sda1 and /dev/sda2 (in case something was wrong with the partition alignment) and, for comparison, also include ext4. So I reformatted /dev/sda1 with btrfs (it previously held the xfs filesystem) and /dev/sda2 with ext4 (previously the btrfs partition). The results stayed the same: although there are some minor differences between ext4 and xfs (not discussed here), the order-of-magnitude difference between btrfs and ext4/xfs remained.

Swapping /dev/sda1 and /dev/sda2 and testing dbench on ext4 yields the same result: btrfs performs badly in this configuration

So, here are the results from the figures.

/dev/sda1 - xfs: Throughput 7235.15 MB/sec, 10 procs
/dev/sda2 - btrfs: Throughput 623.654 MB/sec, 10 procs
/dev/sda2 - btrfs (chattr +C): Throughput 609.842 MB/sec, 10 procs
/dev/sda2 - btrfs (nodatacow): Throughput 606.169 MB/sec, 10 procs
/dev/sda1 - btrfs: Throughput 636.441 MB/sec, 10 procs
/dev/sda2 - ext4: Throughput 7130.93 MB/sec, 10 procs

And here is also the text file of the detailed initial test series. In the beginning I said something like a factor of 10 and more; this relates to the results in the text file. When setting dbench to use synchronous IO operations, btrfs gets even worse.

Other benchmarks

Phoronix did a comparison between btrfs, ext4, f2fs and xfs on Linux 4.12, Linux 4.13 and Linux 4.14 that also revealed differences between the filesystems of varying magnitude.

As far as I can interpret their results, they remain rather inconclusive. Sequential reads perform fastest on btrfs, while sequential writes apparently are horrible (a factor of 4.3 difference).

Linux 5.0

Yeah, I am running a recent self-made build of Linux 5.0 🙂

Despite the major version number change, there's nothing more special about this version than about any other release. Still, I find this pretty cool!

Now, back to work ...

Gridengine and CentOS 7

... there's life in the old dog yet!

We are still using the Gridengine on some of our high-performance clusters, and getting that thing running isn't really a piece of cake. Since Oracle bought Sun, things have changed a little bit: first of all, the good old (TM) Sun Grid Engine doesn't exist anymore. There are some clones of it, the most promising candidate probably being the Son of Grid Engine project. This is also what I will refer to as gridengine henceforth. Noteworthy, but not covered here, are the Open Grid Scheduler and the commercial Univa Grid Engine (I'm not linking there), which is just the old Sun Grid Engine, sold to Univa and commercially distributed.

In the Debian world, there is a gridengine deb package, which just works nicely as it should. There was an el6 port for CentOS 6, but there is nothing official for CentOS 7 (yet?). I've built the packages myself and everyone is free to use them. They are provided as-is, so no support or warranty of any kind - but they should work just fine.

Building the Son of Grid Engine

The process was difficult enough to make me fork the repository and setup my own GitHub project. My fork contains two bugfixes, which prevented the original source from building.
The project also contains build instructions in the README.md for OpenSuSE 15 and CentOS 7, and pre-compiled rpms in the releases section.

Short notes about building

The Gridengine comes with its own build tool, called aimk. One can say a lot about it, but if treated correctly it works okayish. The list of requirements is long and given in the README.md for CentOS 7 and OpenSuSE 15. Hopefully it also works for other versions.

SGE uses a lot of different libraries. Mixing architectures in a single cluster environment is in general a bad idea: SGE might work, but you really don't want the inevitable white hairs that come with all the unpredictable and sometimes hard-to-understand voodoo errors. Just ... don't do that!

I never used the Hadoop build, so all binaries are built and tested with -no-herd.

For the impatient (not commented)

Notes about installing the Gridengine

I've tried to automate the install with Ansible, but the install_execd -auto script proved to be quite unreliable. After several failed attempts, I decided to install the Gridengine manually from a shared NFS directory.

This is in general a good idea, as the spool directory needs to be on an NFS share anyway. To prevent trouble I have separated the binaries (read-only NFS) from the spool directory (read-write access for all nodes).

I've also tried to mix CentOS and OpenSuSE. The Gridengine instances work with each other, but you will run into other problems, as the execution environments differ. Don't do that!

Running the SGE over NFS is the way I recommend. Be aware of the hassle when the master node becomes unresponsive: in that case, don't do magic tricks, just reboot the nodes. Everything else is fishy.

Known problems with Son of Grid engine

This section is dedicated to documenting two bugs and making them appear on Google, so that other unfortunate beings who encounter the same problems can find a solution. I've encountered two errors when trying to build the original 8.1.9 version.

This problem was the reason for me to fork the project. Comment out line 51 in sge-8.1.9/source/3rdparty/qtcsh/sh.proc.c


I encountered this error when building as root. Try building as an unprivileged user (which you should do anyway!)

Mirrors

I am mirroring the current version of Son of Grid Engine on my FTP server. My own fork is in the GitHub repository gridengine.

iphex

I wrote a small bash script that transforms IP addresses into hex format. The tool consists of 10 lines of bash.
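The original script isn't reproduced here, but a minimal sketch of the idea fits in a few lines: split the dotted quad and print each octet as two uppercase hex digits.

```shell
#!/bin/bash
# iphex (sketch): convert a dotted IPv4 address to its 8-digit uppercase hex form.
iphex() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    printf '%02X%02X%02X%02X\n' "$a" "$b" "$c" "$d"
}

# Only convert when an argument was given.
if [ $# -ge 1 ]; then
    iphex "$1"
fi
```

For example, iphex 192.0.2.91 prints C000025B.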

I needed the tool to match IP addresses to hex filenames for PXE boot. Normally a PXE client first probes for a file named after its MAC address, and then iteratively for the hex representation of its IP address, reducing the number of matching characters step by step. Oracle documents the behavior very nicely for the IP address "192.0.2.91", which matches "C000025B", and the imaginary MAC address "88:99:AA:BB:CC:DD". The PXE client then probes for the following files (in the given order):
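The original list isn't reproduced here, but for the example addresses above the well-known PXELINUX search order looks like this (MAC file first, then the hex IP shortened by one character at a time, then a default):

```
pxelinux.cfg/01-88-99-aa-bb-cc-dd
pxelinux.cfg/C000025B
pxelinux.cfg/C000025
pxelinux.cfg/C00002
pxelinux.cfg/C0000
pxelinux.cfg/C000
pxelinux.cfg/C00
pxelinux.cfg/C0
pxelinux.cfg/C
pxelinux.cfg/default
```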

Now, with iphex I can easily convert the more commonly used numerical representation of IP addresses like 192.168.2.91 into the (IMHO not directly obvious) hex representation.

Zsh and Home/End/Delete buttons

I've noticed that in Zsh under Mate, the Home/End/Delete keys for some reason don't work as I expect them to. I use vim keybindings, but am still accustomed to sometimes hitting the End key to reach the end of the line. So far that has never been a problem, but zsh just reacts weirdly here. Before triggering a rage quit, I found a solution for how to deal with it. Put the following lines in your .zshrc and you're good:
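The fix boils down to bindkey lines along these lines (a sketch - the exact escape sequences depend on your terminal; press Ctrl-V followed by the key in question to see what yours sends):

```shell
# .zshrc: make Home/End/Delete behave as expected (terminal-dependent codes)
bindkey "^[[H"  beginning-of-line   # Home
bindkey "^[[F"  end-of-line         # End
bindkey "^[[3~" delete-char         # Delete
```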

I found this solution here and mirror it on my blog, in case the original solution gets lost or something.

I also link oh-my-zsh here, in case someone just hopped on zsh as well and wants to make it as fancy as possible 🙂

Resizing a btrfs partition

This is a simple note to myself, in case I need to do this again: how to resize a btrfs partition to maximum size (full capacity).

  1. Resize the partition using parted
  2. Resize the btrfs filesystem using btrfs filesystem resize

    Would it make sense to create an alias, so that btrfs filesystem resize 100% or other percentages would also work?
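The two steps above, as a sketch (device name, partition number and mount point are placeholders):

```shell
# 1. grow the partition to the end of the disk
sudo parted /dev/sdX resizepart 1 100%

# 2. grow the btrfs filesystem to fill the partition (run against the mount point)
sudo btrfs filesystem resize max /mnt/data
```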

In a nutshell example

Detailed example

First resize the partition (I use parted for that purpose)

Done.

Getting VeraCrypt running on a custom build Kernel

Having your own compiled Linux Kernel is a nice thing for various reasons. First, you are not stuck with the (depending on your distribution, possibly outdated) Kernel versions your distribution ships, and you can highly customize your experience. Some people want a super-fast lightweight Kernel; I'm more on the other side of the spectrum. But that's a matter of taste.

A side-effect is that you learn a lot more about Linux - inevitably issues will arise, from KVM not working (upcoming post) because of iptables issues, to VeraCrypt not being able to operate with Kernel support.


Getting your custom Kernel ready for VeraCrypt

I've encountered the following error

device-mapper: reload ioctl on veracrypt1 failed: Invalid argument
Command failed

I've started with that. ioctl-based errors are normally a good indicator that something in your Kernel configuration is either missing or misconfigured.
In this case it was the missing support for crypt targets in the device mapper (I suppose).

Fortunately the Gentoo forums provide some very useful information. Make sure you have the following options configured in your Kernel:

Device Drivers --->
    [*] Multiple devices driver support (RAID and LVM) --->
        <*> Device mapper support
        <*> Crypt target support
    [*] Block Devices --->
        <*> Loopback device support
File systems --->
    <*> FUSE (Filesystem in Userspace) support
[*] Cryptographic API --->
    <*> RIPEMD-160 digest algorithm
    <*> SHA384 and SHA512 digest algorithms
    <*> Whirlpool digest algorithms
    <*> LRW support
    <*> XTS support
    <*> AES cipher algorithms
    <*> Serpent cipher algorithm
    <*> Twofish cipher algorithm

Re-build your Kernel, and everything should work fine 🙂

Ubuntu - Building own Kernel

One of the reasons why I like Ubuntu is its simplicity. Most stuff works out of the box or is pretty easy to configure. So it's also pretty easy to compile your own kernel.

The reason I wanted to build my own kernel were some issues with the amdgpu graphics driver. Since Kernel 4.15, AMD has pushed its recent open-source drivers upstream, so I wanted to give them a try.

In a nutshell

In principle you have to follow those simple steps

  1. MAKE SURE GRUB HAS A TIMEOUT so you can select an old kernel, in case something went wrong
  2. Download kernel sources from kernel.org
  3. Extract the sources into a directory and change into that directory
  4. Copy the current configuration from /boot/config-$(uname -r) to .config
  5. Adapt the configuration using make localmodconfig
  6. Compile using make [-j8]
  7. Install by using sudo make modules_install install
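The steps above, as a sketch (the version number is just the example from this post, and paths are placeholders):

```shell
# unpack the downloaded sources and enter the tree
tar xf linux-4.15.6.tar.xz
cd linux-4.15.6

# start from the running kernel's configuration
cp "/boot/config-$(uname -r)" .config
make localmodconfig        # answer the prompts; defaults are usually fine

# build and install
make -j8                   # adjust -j to your machine
sudo make modules_install install
```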

More details

For now I'm assuming we want to compile the current stable kernel, which is 4.15.6.

  1. Download kernel sources from kernel.org - I won't post a direct link to a kernel, because that will become outdated pretty soon!
  2. Extract the sources into a directory and change into that directory

I downloaded the file and extracted it with tar. For me it was like:

In general it's safe to hit the return key and just use the default values. But keep in mind: if you run into problems, you might want to have a more detailed look at the options.

Now it's time to compile the kernel. Use -j4 to build with 4 threads. I generally use up to 16, but that depends on your system. People report good results with a number between 1x and 2x the number of CPU cores; I have 8 cores, so I choose 16, but that's up to you.

Now watch the build process and grab a cup of coffee. That might take a while ....

Once the build process completes, simply run make modules_install and make install to install the new kernel.

In Ubuntu this triggers a grub-update as well, so it should work the next time you boot into your system.

Nice 🙂