Zsh and Home/End/Delete buttons

I've noticed that in Zsh under MATE, the Home/End/Delete keys for some reason don't work the way I expect them to. I use vim keybindings, but I'm still accustomed to sometimes hitting the End key to reach the end of the line. So far that has never been a problem, but zsh just reacts weirdly here. Before triggering a rage quit, I found a solution for how to deal with it. Put the following lines in your .zshrc and you're good.
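
These are the usual bindkey lines; the exact escape sequences depend on your terminal emulator, so double-check them if they don't work for you (in zsh, pressing Ctrl-V followed by the key prints the sequence your terminal actually sends):

# bind the sequences sent by Home, End and Delete to the matching zle widgets
bindkey "^[[H"  beginning-of-line   # Home
bindkey "^[[F"  end-of-line         # End
bindkey "^[[3~" delete-char         # Delete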

I found this solution here and mirror it on my blog, in case the original solution gets lost or something.

I also link oh-my-zsh here, in case someone just hopped on zsh as well and wants to make it as fancy as possible 🙂

Kernel build bug - KVM_AMD and CRYPTO_DEV_CCP

About a week ago, I failed to build a Kernel for my new Ryzen 2700X workstation. After some time of configuring my kernel, I ran into some weird problems.

The problem

I wanted to have a Kernel with KVM_AMD support enabled. The build was going fine until some weird linker errors appeared.

(Full output [Pastebin])

Since I'm a Kernel rookie, it took me some time to realize what was going on. A Google search didn't reveal a solution, other than something similar on Unix Stack Exchange that was not directly applicable to my case.

The problem persisted and is reproducible in linux-4.17.1 and linux-4.16.15, using this config file. Building linux-4.14.49 worked fine. For any options not defined by the config file, I chose the default suggestion.


Workaround

The problem arises if CONFIG_CRYPTO_DEV_CCP_DD is compiled as a module [=m], even if SEV is not used. Compiling CONFIG_CRYPTO_DEV_CCP_DD into the kernel [=y] is a workaround for the issue.

This commit already revealed the issue.

I had to include the "Secure Processor device driver", which is found under Cryptographic API > Hardware crypto devices.
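
In the .config this corresponds to something like the following (KVM itself can stay a module):

# build the CCP / Secure Processor driver into the kernel instead of as a module
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=y
CONFIG_KVM_AMD=m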

Oddly, the suggested solution from Unix Stack Exchange did not solve the problem for me, nor did it cause any problems. I could build the Kernel (4.17.1) with "Kernel-based Virtual Machine Support" set as a module. But those are just my two cents; it might have been an issue some versions ago ...

Unfortunately I cannot contribute to Unix Stack Exchange yet (not enough reputation *sigh*), so I cannot improve the answer there.

Thanks to Richard!

Many thanks to Richard, who supported me in nailing this down to a bug in the Kernel build system.

Resizing a btrfs partition

This is a simple note to myself, in case I need to do this again: how to resize a btrfs partition to its maximum size (full capacity).

  1. Resize the partition using parted
  2. Resize the btrfs filesystem using btrfs filesystem resize

    Would it make sense to create an alias, so that btrfs filesystem resize 100% or other percentages would work as well?

In a nutshell example
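
Assuming the filesystem sits on /dev/sda1 and is mounted at /mnt/data (adjust device, partition number and mount point to your setup):

sudo parted /dev/sda resizepart 1 100%      # grow partition 1 to the end of the disk
sudo btrfs filesystem resize max /mnt/data  # grow the filesystem to fill the partition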

Detailed example

First resize the partition (I use parted for that purpose)
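
An interactive parted session for this looks roughly as follows (again, /dev/sda, partition 1 and /mnt/data are placeholders for my setup):

sudo parted /dev/sda
(parted) print
(parted) resizepart 1 100%
(parted) quit

Then grow the btrfs filesystem to the new partition size and verify:

sudo btrfs filesystem resize max /mnt/data
sudo btrfs filesystem show /mnt/data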

Done.

Getting VeraCrypt running on a custom-built Kernel

Having your own compiled Linux Kernel is a nice thing for various reasons. First, you are not stuck with the (depending on your distribution, possibly outdated) Kernel versions your distribution ships, and you can highly customize your experience. Some people want a super-fast, lightweight Kernel; I'm more on the other side of the spectrum. But that's a matter of taste.

A side effect is that you learn a lot more about Linux - inevitably issues will arise, from KVM not working (upcoming post) because of iptables issues, to VeraCrypt not being able to operate with Kernel support.


Getting your custom Kernel ready for VeraCrypt

I've encountered the following error

device-mapper: reload ioctl on veracrypt1 failed: Invalid argument
Command failed

That's where I started. ioctl-based errors are normally a good indicator that something in your Kernel configuration is either missing or misconfigured.
In this case it was the missing support for crypt targets in the device mapper (I suppose).

Fortunately the Gentoo forums provide some very useful information. Make sure you have the following options configured in your Kernel:

Device Drivers --->
    [*] Multiple devices driver support (RAID and LVM) --->
        <*> Device mapper support
        <*>   Crypt target support
    [*] Block Devices --->
        <*> Loopback device support
File systems --->
    <*> FUSE (Filesystem in Userspace) support
[*] Cryptographic API --->
    <*> RIPEMD-160 digest algorithm
    <*> SHA384 and SHA512 digest algorithms
    <*> Whirlpool digest algorithms
    <*> LRW support
    <*> XTS support
    <*> AES cipher algorithms
    <*> Serpent cipher algorithm
    <*> Twofish cipher algorithm
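
For reference, these menu entries correspond to roughly the following .config symbols (double-check the names in menuconfig):

CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_FUSE_FS=y
CONFIG_CRYPTO_RMD160=y
CONFIG_CRYPTO_SHA512=y
CONFIG_CRYPTO_WP512=y
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_TWOFISH=y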

Re-build your Kernel, and everything should work fine 🙂

Ubuntu - Building your own Kernel

One of the reasons I like Ubuntu is its simplicity. Most stuff works out of the box or is pretty easy to configure. So it's also pretty easy to compile your own kernel.

The reason I wanted to build my own kernel was some issues with the amdgpu graphics driver. Since Kernel 4.15, AMD has pushed its recent open-source drivers upstream, so I wanted to give it a try.

In a nutshell

In principle you have to follow these simple steps:

  1. MAKE SURE GRUB HAS A TIMEOUT so you can select an old kernel, in case something goes wrong
  2. Download kernel sources from kernel.org
  3. Extract the sources into a directory and change into that directory
  4. Copy the current configuration from /boot/config-$(uname -r) to .config
  5. Check current configuration using make localmodconfig
  6. Compile using make [-j8]
  7. Install by using sudo make modules_install install
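
With 4.15.6 as the example version and 8 build threads, the steps above boil down to roughly the following (download the tarball from kernel.org first):

tar xf linux-4.15.6.tar.xz
cd linux-4.15.6
cp /boot/config-$(uname -r) .config   # start from the running Kernel's configuration
make localmodconfig                   # trim and update the configuration; answer prompts as needed
make -j8
sudo make modules_install install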

More details

For now I'm assuming we want to compile the current stable Kernel, which is 4.15.6.

  1. Download kernel sources from kernel.org - I won't post a direct link to a kernel, because that will become outdated pretty soon!
  2. Extract the sources into a directory and change into that directory

I download the file and extract it with tar.
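
For me it was roughly like this, using the 4.15.6 tarball from above and then copying the running Kernel's configuration as described in the overview:

tar xf linux-4.15.6.tar.xz
cd linux-4.15.6
cp /boot/config-$(uname -r) .config   # start from the current configuration
make localmodconfig                   # trim and update the configuration; answer prompts as needed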

In general it's safe to hit the return key and just use the default values. But keep that in mind: if you run into problems, you might want to have a more detailed look at the options.

Now it's time to compile the kernel. Use -j4 to build with 4 threads. I generally use up to 16, but that depends on your system. People generally report good results with a number between 1x and 2x the number of CPU cores you have. I have 8, so I choose 16, but that's up to you.
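
So in my case that's simply:

make -j16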

Now watch the build process and grab a cup of coffee. That might take a while ....

Once the build process completes, simply run a make modules_install and a make install to install the new kernel.

In Ubuntu this triggers a grub-update as well, so it should work the next time you boot into your system.

Nice 🙂

Lightning detector - Kickoff

I recently purchased a MOD-1016 module for lightning detection in order to improve my weather station. It is based on the AS3935 chip and ships as a complete I2C-ready breakout board from Embedded Adventures.

First steps

I skip the following parts because I consider them trivial:

  • Soldering
  • Wiring to an Arduino Nano

The wiring is actually the trickiest part; I will provide the schematics once I have a running system. For now I focus on getting the system online. The wiring in the following picture is accurate:

Wiring of the MOD-1016 to the Arduino Nano

I put everything together in a nice box to protect the electronics from the environment. In the end it will live outdoors in my garden.
The box is IP55 compliant, so when it is deployed for real I will put it in an additional plastic bag to avoid any issues caused by rain. For the first experiments IP55 is fine. And this is how it looks:

Wired box, open

A small reader program is in my meteo repository (in the Lightning directory) on GitHub, and I let it run for 1.5 days.
I had some problems with the serial port at high baud rates, so I configured it for 9600 baud. The serial connection was fine over this period, but it seems that the location has too much interference.
All I got was a constant "DISTURBER DETECTED".

Right now all I get out is "DISTURBER DETECTED"

Looks like I need some fine-tuning. I disconnected the device and will run some tests with my laptop on the go.

For now I have a running serial connection and the chip delivers some output, so I'm expecting that with some fine-tuning I should get this thingy running soon.

Ubuntu Linux - Map Wacom to one screen when using multiple screens

Quick know-how post: how to limit a Wacom tablet in Ubuntu Linux to one screen if you have multiple displays.

Keywords: Wacom, Ubuntu, Linux, multiple screens, multiple displays


We need to gather system information with xrandr and xinput. First we use xrandr to determine the display where the tablet should be active. Run xrandr.
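
To keep the output manageable, you can filter for the connected outputs only:

xrandr | grep " connected"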

I want to have it on the primary display, which in my case is DP-4.

Next we need to list the devices using xinput | grep -i Wacom

For me, it's ids 12 to 15. Now we map the Wacom tablet to DP-4:
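
That's one xinput call per device id; a small loop keeps it short (use your own ids and output name, of course):

for id in 12 13 14 15; do
    xinput --map-to-output "$id" DP-4
done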

Done 🙂

Raspberry Pi 3 and H.265

Some time ago I got annoyed that some movies encoded in H.265 were not running smoothly in Kodi on my Raspberry Pi 3.

H.265 is a fairly new, block-oriented video compression standard that is unfortunately not supported by the hardware decoder on the Raspberry Pi 3. So decoding has to be done in software, and apparently the computational power of the Pi is too weak for that.

Surfing through some forums, I found people claiming that overclocking the Raspberry Pi should be the solution. So I decided to give it a try.


Overclocking goal

The goal was to play H.265 smoothly on the screen at 1080p@30fps. Some people said that overclocking the Pi to 1300 MHz should be enough, so that's where I had to go.

Only overclock with an adequate cooling system! Since the Raspberry Pi ships without any heat sink, I needed to buy one.

Cooling system

I decided to go with a plain aluminium heat sink, but to monitor the temperature very closely with cputemp and gputemp, two tools that ship by default with Raspbian and OpenELEC:
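
If they are missing on your image, the same readings are available directly via vcgencmd and sysfs:

vcgencmd measure_temp                        # SoC/GPU temperature reported by the firmware
cat /sys/class/thermal/thermal_zone0/temp    # CPU temperature in millidegrees Celsius
watch -n 1 vcgencmd measure_temp             # refresh the reading every second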

That works. During the whole overclocking procedure I was connected to the Raspberry Pi via ssh to monitor the temperatures very closely: at least one readout every second, ready to intervene if something went nasty.

The goal was to keep the temperature below 85 degrees (soft limit) and to cancel the procedure immediately above 90 degrees (hard limit). During the overclocking procedure I reached the hard limit.

Overclocking

And here we go. Back up /flash/config.txt before editing it, so that you can set everything back to default once you are finished.
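
A plain copy does the job; note that OpenELEC mounts /flash read-only, so it may need a remount first:

mount -o remount,rw /flash
cp /flash/config.txt /flash/config.txt.bak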

There are plenty of examples on this page. You may need to figure out which one works for you.

There's also this amazing wiki page about overclocking, its risks and pitfalls. I think that's the resource you should read before getting started.

The default clock settings for a Raspberry Pi are commented out, so if you are unsure about your current configuration, just comment your changes out, reboot, and you'll have the stable default configuration again.

In the end, I tried to get the system working with a configuration along the following lines:
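
Here arm_freq carries the 1300 MHz target from above; the companion values (voltage, core and SDRAM frequency) are typical starting points rather than exact prescriptions and need tuning per board:

# in /flash/config.txt
arm_freq=1300
over_voltage=4
core_freq=500
sdram_freq=500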

The system was stable, but heating up a lot, to more than 85 degrees. That's an important threshold, because at 85 degrees the Raspberry Pi starts protecting itself from heat death by throttling down the CPU. So overall you'll have no benefit from overclocking, except a small boost in performance before it throttles you down to worse throughput than with plain vanilla settings.

Results

The poor Raspberry Pi got really hot during this procedure! 1300 MHz would be too much for the long run. And because it throttles itself down, I did not get any benefit from the overclocking procedure.

And although I pushed it to the limit with the available cooling system, I couldn't reach a smooth experience.

So I consider H.265 not suitable for my Raspberry Pi. Well, it seems I'll have to encode it to something more Raspberry-friendly 🙂
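
Re-encoding to H.264, which the Pi 3 does decode in hardware, could look roughly like this with ffmpeg (file names are placeholders):

# convert the video stream to H.264, keep the audio track untouched
ffmpeg -i movie_h265.mkv -c:v libx264 -crf 20 -preset medium -c:a copy movie_h264.mkv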

Still, it was a nice project!

 

Weblinks

Shared memory C++ class - Standalone

Since I needed it for one of my research projects, I have created a spin-off of the SharedMemory class from my FlexLib2 library.

The SharedMemory class supports the creation of shared memory segments in an OpenMPI context. In the end you will have a shared memory segment on every machine where the program is executed.

Shared memory example

Example code is included. It compiles with gcc 4.8.4 on Ubuntu 14.04, with both the C++98 and the C++11 standard.

It includes a Makefile and an example program. Check out the README file for details. And: have fun 🙂
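
Building and running it across several machines could look roughly like this; the hostfile and binary names are placeholders, the real targets are in the Makefile and README:

make
# 4 ranks spread over the machines listed in "hosts"; each machine gets its own shared memory segment
mpirun -np 4 --hostfile hosts ./example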

[Link to the source]

 

If you need some more info about how to deal with POSIX shared memory on a Linux system, I can recommend the following article.