I'm running openSUSE Leap on my workstation and sometimes I have to struggle to get recent versions of some programs to work. Recently I had to fight with two applications, with only minor success: Steam and GIMP. I'm probably not the only one who occasionally finds themselves trapped in a dilemma here. It's a clash of two different philosophies: on the one hand I want a rather stable distribution on my workstation (thus Leap; on my laptop I run Tumbleweed), on the other hand I'm using certain programs that only come in outdated packages (GIMP 2.8!!1!) or have issues with outdated libraries etc. For example, Steam works nicely with some games, while others like Civilization: Beyond Earth do not work at all.
Flatpak!
Flatpak solves this issue: it delivers a piece of software including all of its dependencies, and is thus able to decouple a single program from the system libraries. This resolves the clash mentioned above: the base system can remain conservative and stable, and you can still run a very recent version of your program without going through the nightmare of dependency hell.
GIMP 2.10, for instance, can be installed in two lines and just works, whereas I spent an awful lot of time trying to get it running as a native application, with unsolvable dependency conflicts at the end. It was running some time ago, but then for some reason it broke. That's not what stable is about.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install org.gimp.GIMP
Those two lines install it, and it just runs. And that's pretty neat.
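For the record, the freshly installed Flatpak can also be launched directly from the shell, in case it doesn't show up in your application menu right away:
flatpak run org.gimp.GIMP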
... from the black magic voodoo ssh box of the tech priests ...
[Scroll down for the TL;DR section]
I'm writing this post as an ode to ssh's ProxyJump, one of the little helpers that make your day awesome. If you are working on multiple computers in different companies/networks, at some point you encounter the scenario where you want to access a computer that is only reachable via another computer. Let's say you need to access your office computer named datenhalde from home, but datenhalde is only reachable from within the company network. Luckily your company provides a public ssh gateway named gateway, which you can reach from your home computer (named zuhause). On a Friday you decide that it's a day to work from home without interruptions. Perhaps you just brewed a nice cup of coffee and start your work:
phoenix@zuhause$ ssh gateway
Last login: Tue Aug 27 17:42:20 2019 from 1.2.3.4
phoenix@gateway$ ssh datenhalde
Last login: Mon Aug 26 09:31:52 2019 from 192.168.22.72
phoenix@datenhalde$
At some point you might find it unnecessarily boring to always type ssh gateway and then ssh datenhalde, and you wonder if there isn't a more convenient way to directly access datenhalde from zuhause via gateway, without the fuss of redundant ssh typing.
This is where ProxyJump comes into play. Use the -J flag:
phoenix@zuhause$ ssh -J gateway datenhalde
Last login: Mon Aug 26 09:31:52 2019 from 192.168.22.72
phoenix@datenhalde$
Here ssh connects first to gateway and then to datenhalde. Awesome!
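In case there is more than one hop in between, -J also accepts a comma-separated list of jump hosts. The second gateway here is made up, just to illustrate the syntax:
phoenix@zuhause$ ssh -J gateway,gateway2 datenhalde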
ssh config for even more convenience
Even better, you can put the ProxyJump into your ssh config, so every time you access a host, ssh first jumps to the given gateway host and then to the destination. Sounds complicated? Just look at the following example:
# ~/.ssh/config
Host datenhalde
    HostName datenhalde
    ProxyJump gateway
Now, if you connect to datenhalde via ssh, it automatically and transparently first jumps to gateway and then to datenhalde. This configuration then applies to all protocols that are built atop ssh, like scp, rsync or libvirt.
phoenix@zuhause$ ssh datenhalde
Last login: Mon Aug 26 09:31:52 2019 from 192.168.22.72
phoenix@datenhalde$
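Since the jump lives in ~/.ssh/config, tools built on top of ssh pick it up as well. For example (the file and directory names here are just placeholders):
phoenix@zuhause$ scp report.pdf datenhalde:
phoenix@zuhause$ rsync -av ./data/ datenhalde:backup/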
TL;DR
Want to connect to your work computer datenhalde via an ssh gateway in one single command?
ssh -J gateway datenhalde
Want to configure ssh to always jump to gateway before connecting to datenhalde?
# ~/.ssh/config
Host datenhalde
    HostName datenhalde
    ProxyJump gateway
Then your ssh connections will always transparently jump over gateway:
ssh datenhalde
ProxyJump for the glory!
ProxyJump is a tool for the tech priests, and it's imperative that every adept of the Adeptus Mechanicus shall be able to handle it ... in the (unlikely?) case that they also use ssh in Warhammer 40k ...
There are two kinds of users: those who encrypt their stuff, and those who have never lost anything or had anything stolen. Imagine your laptop being stolen on the train. Not only have you probably lost all of your data (backups!), but a stranger now has access to potentially very private data - pictures of your last birthday, company records you need to keep secret, or that awesome new piece of code that was supposed to be the capital of the startup you just wanted to create. Plenty of reasons to invest a little bit of time in your digital self-defence, and sane full-disk encryption is a major part of it.
I personally have two different approaches: VeraCrypt or LUKS. (VeraCrypt is the successor of TrueCrypt, after the latter surprisingly shut down, leaving lots of speculation about whether they refused to build in a secret backdoor for a large state agency - pure speculation, but one of those conspiracy theories that might be coherent enough to be taken seriously.) VeraCrypt seems sane enough to be used, but in this article I'm gonna cover LUKS, as I consider it the canonical way for every mature Linux environment.
Recipe
Assuming the HDD is /dev/sdb and you want to call it Cryptodisk
Reformat the external HDD. Create a single partition with any filesystem (we will overwrite this in the next step)
Make sure all filesystems for that disk are unmounted.
cryptsetup -y -v luksFormat /dev/sdb1
Enter passphrase.
Write the passphrase on a sheet of paper and store it in a safe place.
REALLY do it. You will probably forget the passphrase otherwise, and then you can kiss your data goodbye.
Open the cryptdevice: cryptsetup luksOpen /dev/sdb1 Cryptodisk
mkfs.xfs /dev/mapper/Cryptodisk
cryptsetup luksClose Cryptodisk
Plug the disk out and re-plug it in. In caja/nemo it should appear as an encrypted device. If you click on it, it asks for the passphrase and gets mounted. Alternatively, use cryptsetup: cryptsetup luksOpen /dev/sdb1 Cryptodisk and then mount /dev/mapper/Cryptodisk /mnt/Cryptodisk
Note: It's possible to compartmentalize multiple partitions by putting an LVM volume atop cryptsetup. This is more advanced, but pretty much straightforward.
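To sketch what that could look like - the volume group name and sizes below are made up and not part of the recipe above:
$ sudo cryptsetup luksOpen /dev/sdb1 Cryptodisk
$ sudo pvcreate /dev/mapper/Cryptodisk
$ sudo vgcreate cryptvg /dev/mapper/Cryptodisk
$ sudo lvcreate -n data -L 500G cryptvg
$ sudo lvcreate -n backup -L 200G cryptvg
$ sudo mkfs.xfs /dev/cryptvg/data
$ sudo mkfs.xfs /dev/cryptvg/backup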
Step by Step guide
I plug in my HDD and assume it's gonna be recognised as /dev/sdb.
First, we need to make sure that it's unmounted:
$ sudo umount /dev/sdb?
Next, format the HDD. I normally use parted, but gparted seems to be the nicer way, as it's graphical and pretty easy. So, start gparted on the disk:
$ sudo gparted /dev/sdb
# In Wayland this might cause trouble, as sudo and Wayland are not super nice to each other ... That's beyond the scope of what I write down here, sorry :-)
MAKE SURE IT'S THE RIGHT DISK. Do the partitions look like the ones you expect? Is there anything fishy? Once you have cleared the partition table or (even worse) written a new filesystem, it's unlikely you will get your data back without losses. Take a breath and double-check before doing anything.
OK, then create the partition you want to encrypt. Pick any filesystem, as we are going to overwrite it anyway; it's only important to get the layout right. In my case it looks like the following: one partition that takes up the full space (the little bit of empty space at the end is needed by GPT for the backup table).
gparted with the newly created layout
Close gparted and encrypt the partition using cryptsetup
$ sudo cryptsetup -y -v luksFormat /dev/sdb1
WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
Command successful.
Congratulations, you created your first encrypted partition! Now we are gonna put a filesystem on it, so you can actually use it 🙂
So, we are gonna "open" the cryptdevice. This means we insert an encryption/decryption layer, atop which we can run our filesystem.
# Assuming you want to name it Cryptodisk. The name doesn't matter, it's just for the system to find the device
$ sudo cryptsetup luksOpen /dev/sdb1 Cryptodisk
Cryptsetup asks for the passphrase. After successfully opening the device, it will be listed as /dev/mapper/Cryptodisk.
Now we create a filesystem. I chose xfs because it's a nice workhorse that runs everywhere, but you can choose whatever you want.
$ sudo mkfs.xfs /dev/mapper/Cryptodisk
Great, now you've created the filesystem. Close the disk with cryptsetup
$ sudo cryptsetup luksClose Cryptodisk
$ sync
Wait until everything on the disk has been written (it stops flashing, depending on your disk) and unplug the disk.
The next time you plug the disk in, it will be recognised by caja/nemo as an encrypted device; you type in your passphrase and it will be mounted automatically (or manually with cryptsetup luksOpen and mount, but the whole purpose was to create a convenient way to work with your external disks).
The encrypted disk appears conveniently and can be mounted with a single click
Congratulations, you just created your first fully encrypted external HDD!
The headline picture, by the way, was created using the amazing dekryptize tool - a really cool ncurses animation that shows how decrypting definitely does NOT work 😉
I've just got a brand new Raspberry Pi 4. For now I'm just playing around with it a bit. Until openSUSE Leap becomes available for it, I'm using Raspbian Buster, which comes with ext4 by default. Since I want to have snapshots, the first thing I want to do is convert the existing root partition to btrfs. So let's do this.
0. Get Raspbian
First, flash Raspbian to an SD card and boot it. I also recommend running a system update after booting into Raspbian. There are plenty of tutorials on the internet that are probably far better than what I could write.
1. Prepare initramfs
In Raspbian, btrfs is included as a kernel module. In order to make the kernel mount a btrfs root filesystem, we need to build a corresponding initramfs. First install the necessary tools:
sudo apt install initramfs-tools btrfs-tools
Now we add the btrfs module to /etc/initramfs-tools/modules
$ vi /etc/initramfs-tools/modules
btrfs
xor
zlib_deflate
raid6_pq
Next, build the initramfs:
mkinitramfs -o /boot/initramfs-btrfs.gz
And tell the bootloader to load the initramfs, by editing /boot/config.txt
$ vi /boot/config.txt
# For more options and informations see
# http://rpf.io/configtxt
# Some settings may impact device functionality. See link above for details
initramfs initramfs-btrfs.gz
[...]
Then reboot the device to check that everything is set up properly. If the boot succeeds, shut down the Raspberry and take the SD card to another computer. If you run into trouble at this stage, probably a filename is wrong and you should still be able to recover. Otherwise: just start from scratch - at this point really nothing is lost.
2. Convert ext4 rootfs to btrfs
In my case I insert the SD card into my laptop. The SD card gets recognised as /dev/mmcblk0 and contains two partitions: /dev/mmcblk0p1 (the vfat /boot partition) and /dev/mmcblk0p2 (the ext4 root filesystem).
To convert the filesystem to btrfs, we are now doing the following steps:
Optional: Make sure the rootfs is clean (run fsck)
Convert ext4 to btrfs using btrfs-convert
Mount new btrfs root
Edit /etc/fstab
Edit /boot/cmdline.txt
On my system, I have to do the following steps
$ fsck.ext4 /dev/mmcblk0p2                    # This is optional but I recommend it
$ btrfs-convert /dev/mmcblk0p2                # Convert ext4 root to btrfs
$ mkdir /mnt/sdcard                           # Here we will mount the rootfs
$ mount -t btrfs /dev/mmcblk0p2 /mnt/sdcard   # Mount newly created rootfs
Now we edit /etc/fstab and change ext4 to btrfs. We also need to disable the filesystem check by setting the last two fields in the btrfs line to 0:
$ vi /mnt/sdcard/etc/fstab    # Edit /etc/fstab
proc                  /proc  proc   defaults          0  0
PARTUUID=4301c17b-01  /boot  vfat   defaults          0  2
PARTUUID=4301c17b-02  /      btrfs  defaults,noatime  0  0
IMPORTANT: Set the last two fields of the btrfs line in /etc/fstab to 0 and 0. The last 0 is especially important for a btrfs root, since fsck and btrfs do not go well together.
Lastly we edit /boot/cmdline.txt. We need to replace rootfstype=ext4 with rootfstype=btrfs and set fsck.repair=no.
IMPORTANT: It is crucial to set fsck.repair=no. I was stuck at some weird "mounting failed: Invalid argument" errors, because the system wanted to perform a fsck and failed.
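For reference, after the edit the cmdline.txt looks roughly like the following - the PARTUUID is the one from the fstab above, the remaining options are the usual Raspbian defaults and may differ on your system:
$ vi /boot/cmdline.txt
console=serial0,115200 console=tty1 root=PARTUUID=4301c17b-02 rootfstype=btrfs fsck.repair=no rootwait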
3. Now the fun starts
This was only the kickoff. Now the fun things, like subvolumes, snapshots etc., start.
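Just to give a first taste (the paths here are arbitrary examples): creating a subvolume and a read-only snapshot of the root filesystem boils down to two commands:
$ sudo btrfs subvolume create /data
$ sudo btrfs subvolume snapshot -r / /snapshot-of-root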
Have a lot of fun! 🙂
Caveats
After a kernel update, you will need to run mkinitramfs again. It's probably best to only do kernel updates manually (even security updates), as otherwise your Raspi might not be able to boot anymore.
Additional notes
Check these notes in case something went wrong. They emphasise the steps I had to take to make this work.
fsck caused me a lot of trouble. In case you run into "mounting ... failed: Invalid argument" errors, check that you have disabled fsck in /etc/fstab (the last zero) and in /boot/cmdline.txt.
Apparently btrfs-convert doesn't change the UUID. But if you find yourself with "device not found" or similar errors, the UUID might have changed after all and you will need to update it in /etc/fstab and /boot/cmdline.txt.
After a kernel update you will need to run mkinitramfs again. Keep that in mind (and maybe disable automatic updates).
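One way to do that is to put the kernel package on hold - assuming the package is called raspberrypi-kernel, as on current Raspbian - and to unhold it again for a manual update:
$ sudo apt-mark hold raspberrypi-kernel
$ sudo apt-mark unhold raspberrypi-kernel   # before a manual kernel update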
Common pitfalls
mounting /dev/mmcblk0p2 failed: Invalid argument
Crappy image of the console output with the "mounting ... failed: invalid argument" error
I got this error message when I forgot to edit cmdline.txt. Make sure you have configured /boot/cmdline.txt correctly (especially rootfstype=btrfs and fsck.repair=no).
I just wrote a small Bash script for creating offline backups of a bunch of virtual machines on a server, using btrfs snapshots.
The script shuts down all running KVM machines, waits until they are down, creates a (read-only) btrfs snapshot and spins the machines back up. All together it takes less than a minute. Afterwards I have an image of all KVM machines in the state they are in when shut down. This is suitable for copying the machine image files to a different machine, to have a complete working state of all machines. It's part of my backup strategy (mainly against hardware failure) for one of our general-purpose servers at work.
The KVM instances need to be in a btrfs subvolume, otherwise it doesn't work
See the script as a gist on GitHub. You will need to make some adjustments and probably test it a couple of times until it works nicely.
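To sketch the idea (the actual script is the gist linked above; the subvolume path, the snapshot naming and the polling interval here are assumptions):
#!/bin/bash
# Sketch: offline backup of all running KVM guests via a read-only btrfs snapshot
set -e

VM_DIR="/var/lib/libvirt/images"        # btrfs subvolume holding the VM images (assumption)
SNAP_DIR="/var/lib/libvirt/snapshots"   # destination for the read-only snapshots (assumption)

# Remember which machines are currently running
RUNNING=$(virsh list --name | sed '/^$/d')

# Ask all running machines to shut down ...
for vm in $RUNNING; do
    virsh shutdown "$vm"
done

# ... and wait until they are really down
for vm in $RUNNING; do
    while [ "$(virsh domstate "$vm")" != "shut off" ]; do
        sleep 2
    done
done

# Create the read-only snapshot of the image subvolume
mkdir -p "$SNAP_DIR"
btrfs subvolume snapshot -r "$VM_DIR" "$SNAP_DIR/$(date +%Y-%m-%d)"

# Spin the machines back up
for vm in $RUNNING; do
    virsh start "$vm"
done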
I just recently got a new laptop (a T440p) to test whether it suits my needs. The old X220 is still a nice companion, but I need a bit more horsepower in my daily driver.
Let me tell you something about the magic of Linux - I just removed the old SSD from my X220 and put it into the T440p. It booted out of the box, with all my configuration and everything in place. No need to reconfigure or reinstall anything; I could be productive within a couple of minutes. Transferring the SSD was just a matter of a few screws, so no problem at all. That's how it should be, and one of the reasons why soldered-in SSDs suck so badly.
So, everything was working nicely, except that for some reason NetworkManager seemed to have forgotten all the Wifi connections. Except they were still there and configured - just for the Wifi interface of the other laptop. And reconfiguring all of them is kind of boring. There has to be a better way.
NetworkManager & System connection
NetworkManager stores all connections in /etc/NetworkManager/system-connections. They are there as plaintext files (restricted to root, though).
Turns out, the line we need to change in every file is the following:
[wifi]
mac-address=XX:XX:XX:XX:XX:XX
This is the MAC address of my old laptop, and I just need to replace it with the MAC of my new laptop. Easy as pie. A one-liner along the following lines does the job; remember to replace YY:YY:YY:YY:YY:YY with the MAC address of your new laptop's Wifi interface.
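A minimal sketch: a sed over all connection files, followed by a NetworkManager restart so the change is picked up (XX:… being the old MAC from above):
$ sudo sed -i 's/mac-address=XX:XX:XX:XX:XX:XX/mac-address=YY:YY:YY:YY:YY:YY/' /etc/NetworkManager/system-connections/*
$ sudo systemctl restart NetworkManager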
Disclaimer: I don't want to blame anyone. I love btrfs and would very much like to get it running well. This is documentation of a use case where I ran into some issues.
I am setting up a new server for our telescope systems. The server acts as a supervisor for a handful of virtual machines that control the telescope and instruments and provide some small services such as a file server. More or less just very basic services, nothing fancy, nothing exotic.
The server has 8 HDDs in total, configured as RAID6 behind a Symbios Logic MegaRAID SAS-3 controller, and I would like to set up btrfs on the disks, which turns out to be horribly slow. The decision for btrfs came from resilience considerations, especially the possibility of creating snapshots and conveniently transferring them to another storage system. I am using openSUSE Leap 15.0, because I wanted a not-too-old kernel and because openSUSE and btrfs should work nicely together (a small sidekick in the direction of RedHat to rethink their strategy of abandoning btrfs in the first place).
The whole project now runs into problems, because compared to xfs or ext4, btrfs performs horribly in this environment. And by horribly I mean a factor of 10 and more!
The problem
The results as text are listed below. Scroll to the end of the section
I use the tool dbench to measure IO throughput on the RAID. When comparing btrfs to ext4 or xfs I notice that the overall throughput is about a factor of 10 (!!) lower, as can be seen in the following figure:
Throughput measurement using dbench - xfs's throughput is 11.60 times as high as btrfs !!!
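The exact dbench invocation isn't spelled out in the figures; a run of the kind used here looks roughly like this (the mount point is an assumption, 10 clients as in the results below):
$ dbench -D /mnt/sda2-btrfs -t 60 10    # 10 clients for 60 seconds on the mounted filesystem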
sda1 and sda2 are obviously on the same disk, and both filesystems were created with default parameters, as the following figure demonstrates.
My first suspicion was that maybe CoW (copy-on-write) could be the reason for the performance issue. So I tried to disable CoW, both by setting chattr +C and by remounting the filesystem with nodatacow. Neither made any noticeable difference, as shown in the next two figures.
Same dbench run as before, but with chattr +C. No noteworthy difference.
Same dbench run as in the beginning, but with the nodatacow mount option. Still a negligible difference.
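For reference, the two CoW-related attempts boil down to these commands (the mount point and test directory are again assumptions; note that chattr +C only affects files created after it is set):
$ sudo chattr +C /mnt/sda2-btrfs/dbench-dir
$ sudo mount -o remount,nodatacow /mnt/sda2-btrfs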
Hmmmm, looks like something's fishy with the filesystem. For completeness I also wanted to swap /dev/sda1 and /dev/sda2 (in case something was wrong with the partition alignment) and, for comparison, include ext4 as well. So I reformatted /dev/sda1 with btrfs (it previously held the xfs filesystem) and /dev/sda2 with ext4 (previously the btrfs partition). The results stayed the same, although there are some minor differences between ext4 and xfs (not discussed here). The order-of-magnitude difference between btrfs and ext4/xfs remained.
Swapping /dev/sda1 and /dev/sda2 and testing dbench on ext4 yields the same results: btrfs performs badly on this configuration
So, here are the results from the figures.
/dev/sda1 - xfs
Throughput 7235.15 MB/sec 10 procs
/dev/sda2 - btrfs
Throughput 623.654 MB/sec 10 procs
/dev/sda2 - btrfs (chattr +C)
Throughput 609.842 MB/sec 10 procs
/dev/sda2 - btrfs (nodatacow)
Throughput 606.169 MB/sec 10 procs
/dev/sda1 - btrfs
Throughput 636.441 MB/sec 10 procs
/dev/sda2 - ext4
Throughput 7130.93 MB/sec 10 procs
And here is also the text file of the detailed initial test series. In the beginning I said something like "a factor of 10 and more"; this relates to the results in that text file. When setting dbench to use synchronous IO operations, btrfs gets even worse.
Other benchmarks
Phoronix did a comparison between btrfs, ext4, f2fs and xfs on Linux 4.12, Linux 4.13 and Linux 4.14 that also revealed differences between the filesystems of varying magnitude.
As far as I can interpret their results, they remain rather inconclusive. Sequential reads perform fastest on btrfs, while sequential writes are apparently horrible (a factor of 4.3 difference).
We are still using the Gridengine on some of our high-performance clusters, and getting that thing running isn't really a piece of cake. Since Oracle bought Sun, things have changed a little: first of all, the good old (TM) Sun Grid Engine doesn't exist anymore. There are some clones of it, with the most promising candidate probably being the Son of Grid Engine project. This is also what I will refer to as gridengine henceforth. Noteworthy, but not covered here, are the Open Grid Scheduler and the commercial Univa Grid Engine (I'm not linking there), which is just the old Sun Grid Engine, sold to Univa and commercially distributed.
In the Debian world there is a gridengine deb package, which just works as it should. There was an el6 port for CentOS 6, but there is nothing official for CentOS 7 (yet?). I've built the packages myself and everyone is free to use them. They are provided as-is, so no support or warranty of any kind, but they should work just fine.
Building the Son of Grid Engine
The process was difficult enough to make me fork the repository and set up my own GitHub project. My fork contains two bugfixes which prevented the original source from building. The project also contains build instructions in the README.md for OpenSuSE 15 and CentOS 7, and pre-compiled rpms in the releases section.
Short notes about building
The Gridengine comes with its own build tool, called aimk. One can say a lot about it, but if treated correctly it works okay-ish. The list of requirements is long and is listed in the README.md for CentOS 7 and OpenSuSE 15. It hopefully also works for other versions.
SGE uses a lot of different libraries. Mixing architectures within a single cluster environment is in general a bad idea. SGE might work, but you really don't want the inevitable white hairs that come with all the unpredictable and sometimes not-easy-to-understand voodoo errors that occur. Just ... don't do that!
I never used the Hadoop build, so all binaries and everything is tested with -no-herd.
sudo SGE_ROOT="/opt/sge"scripts/distinst-local-allall-noexit# asks for confirmation
export SGE_ROOT="/opt/sge"
cd$SGE_ROOT
./install_qmaster# On the Master Host
./install_execd# On the execution host (Compute node)
Notes about installing the Gridengine
I've tried to automate the install with Ansible, but the install_execd -auto script proved to be quite unreliable. After several failed attempts, I decided to install the Gridengine manually from a shared NFS directory.
This is in general a good idea, as the spool directory needs to be on an NFS share anyway. To prevent trouble I have separated the binaries (read-only NFS) from the spool directory (read-write access for all nodes); a sketch of such a layout follows after these notes.
I've tried to mix CentOS and OpenSuSE. The Gridengine instances do talk to each other, but you will run into other problems because the execution environments differ. Don't do that!
Running the SGE over NFS is the way I recommend. Be aware of the hassle when the master node becomes unresponsive: in that case, don't do magic tricks, just reboot the nodes. Everything else is fishy.
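To illustrate the separation mentioned above, the mounts on a compute node could look like this (server name and export paths are made up):
# /etc/fstab on a compute node
sgemaster:/export/sge        /opt/sge        nfs  ro,hard  0 0
sgemaster:/export/sge-spool  /opt/sge-spool  nfs  rw,hard  0 0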
Known problems with Son of Grid engine
This section is dedicated to documenting two bugs, so they appear on Google and other unfortunate beings who encounter the same problems can find a solution. I encountered the two errors when trying to build the original 8.1.9 version.
I wrote a small Bash script that transforms IP addresses into HEX format. The tool consists of 10 lines of bash:
#!/bin/bash
if [ $# -lt 1 ]; then
    echo "IP-Address to HEX converter"
    echo "Usage: \"`basename $0` IPADDRESS\""
    echo "  e.g. `basename $0` 127.0.0.1 = `$0 127.0.0.1`"
else
    IP_ADDR=$1
    printf '%02X' ${IP_ADDR//./ }; echo
fi
I needed the tool to match IP addresses to HEX files for PXE boot. PXE boot normally first tries a file named after the MAC address, and then iteratively files named after the HEX representation of the IP address, reducing the number of matching characters each time. Oracle documents the behaviour very nicely for the IP address "192.0.2.91", which matches "C000025B", and the imaginary MAC address "88:99:AA:BB:CC:DD". The PXE client then probes for the following files (in the given order):
/tftpboot/pxelinux.cfg/01-88-99-aa-bb-cc-dd
/tftpboot/pxelinux.cfg/C000025B
/tftpboot/pxelinux.cfg/C000025
/tftpboot/pxelinux.cfg/C00002
/tftpboot/pxelinux.cfg/C0000
/tftpboot/pxelinux.cfg/C000
/tftpboot/pxelinux.cfg/C00
/tftpboot/pxelinux.cfg/C0
/tftpboot/pxelinux.cfg/C
Now, with iphex I can easily convert the more commonly used numerical representation of an IP address like 192.0.2.91 into the (IMHO) not directly readable HEX representation.
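For example, with the script saved as iphex and made executable:
$ ./iphex 192.0.2.91
C000025B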