Friday, September 30, 2016

Reinstalling NOOBS and Raspbian on Raspberry Pi

I've got a small cluster of six Raspberry Pis, and I wanted to update them all. I'm just too lazy to update every SD card one by one, so I wondered whether the update could be done without removing the SD cards from the Pis. And it can!

Installing NOOBS creates a partition on the SD card, /dev/mmcblk0p1, which contains the files needed for the install. So the only thing you need to do is download the new NOOBS release, mount /dev/mmcblk0p1 on the device, and replace the files.

So you need to do something like the following to update NOOBS:
curl -L -o
sudo mount -t vfat  /dev/mmcblk0p1 /mnt
sudo rm -rf /mnt/*
sudo unzip -d /mnt/
Then you can boot up the Raspberry Pi and start the NOOBS recovery by holding the Shift key during startup. But I'm too lazy to do even that. Luckily it is possible to make NOOBS start the recovery automatically and install the new OS.

The behaviour of NOOBS can be controlled with command-line options. These options are defined in a file called "recovery.cmdline" in the root of /dev/mmcblk0p1. The default contents of the file are the following:

quiet ramdisk_size=32768 root=/dev/ram0 init=/init vt.cur_default=1 elevator=deadline

To make the installer start by default, you have to add the "runinstaller" option. This only starts the installer, which will still need user input to continue. Another option, "silentinstall", tells the installer to go ahead and install the OS without prompting. Just make sure that there is only one OS in the os/ directory, and if it has more than one flavour, edit its flavours.json file.

So recovery.cmdline should have the following contents:
runinstaller silentinstall quiet ramdisk_size=32768 root=/dev/ram0 init=/init vt.cur_default=1 elevator=deadline
After installation, the installer removes the "runinstaller" option from recovery.cmdline so it does not reinstall on every boot. The "silentinstall" option remains, though.
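The edit itself can be scripted. This is a sketch that works on a local copy of the file; on the Pi you would point it at the file on the mounted partition (e.g. /mnt/recovery.cmdline) instead:

```shell
# Working on a local copy of recovery.cmdline for illustration; on the Pi
# the file lives in the root of the mounted NOOBS partition.
printf 'quiet ramdisk_size=32768 root=/dev/ram0 init=/init vt.cur_default=1 elevator=deadline\n' > recovery.cmdline
# Prepend the options only if they are not already there, so the edit is idempotent.
grep -q 'runinstaller' recovery.cmdline || sed -i 's/^/runinstaller silentinstall /' recovery.cmdline
cat recovery.cmdline
```

The grep guard matters because rerunning a plain prepend would stack the options up at the start of the line.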

So when everything is in place, on the next reboot there's a new version of NOOBS, and it will install the OS automatically. Just remember that everything on the Raspberry Pi will be wiped!

Here's an Ansible playbook that does all of this. It takes quite a while to complete, as the NOOBS image file is pretty big and takes a while to download and transfer to the hosts. The reason I'm downloading NOOBS to the local machine first is that I'm running this playbook against six Raspberry Pis, and it should be faster to download NOOBS once and then transfer it to each Pi instead of downloading it on every one of them.
- hosts: all
  become: yes
  vars:
    noobs_file:
    recovery_directory: /mnt/recovery
  tasks:
    - name: download noobs if not present
      local_action: get_url url= dest={{playbook_dir}}/{{noobs_file}}
      become: no
    - name: mount device
      mount: name=/mnt/recovery src=/dev/mmcblk0p1 fstype=vfat state=mounted
    - name: remove old noobs
      # the file module does not expand wildcards, so use shell here
      shell: rm -rf {{recovery_directory}}/*
    - name: unzip noobs
      unarchive: src={{noobs_file}} dest={{recovery_directory}} owner=root group=root
    - name: set reinstall
      lineinfile: dest={{recovery_directory}}/recovery.cmdline regexp='^(runinstaller)?\s?(silentinstall)?\s?(.*)$' line='runinstaller silentinstall \3' backrefs=yes
    - name: unmount device
      mount: name=/mnt/recovery src=/dev/mmcblk0p1 fstype=vfat state=unmounted
    - name: reboot
      command: shutdown -r now
      ignore_errors: True


Thursday, September 8, 2016

Getting full error message from "docker service ps"

I was trying out Docker swarm, networks and services, and for some reason my nginx containers failed to start. Unfortunately, "docker service ps my-web" truncated the error message, showing something like this:
e5qw27qr4qbc9vrm68g3i9tl0   my-web.1  nginx  node3  Shutdown       Failed 2 seconds ago          "starting container failed: ca…"
There will be a "--no-trunc" option in version 1.13, which should resolve this. Meanwhile, "docker inspect e5qw27qr4qbc9vrm68g3i9tl0" (using the id from docker service ps) gives the full error message.

In this case, the VM created with docker-machine did not have the necessary pieces to connect to the secured network.

Sunday, August 7, 2016

Ubuntu Xenial64 on VirtualBox and Vagrant

There were a lot of strange problems with ubuntu/xenial64, and there is a mention by Seth Vargo (an employee of HashiCorp):
The ubuntu/xenial64 box is built wrong and horribly broken. Please note that "ubuntu" is the name of a user, not a representation of a canonical source for ubuntu images. Please try bento/ubuntu-16.04 instead. Thanks.

These errors included the following:

rejecting i/o to offline device
This happened almost every time after heavier I/O operations, for example after loading Docker images.

stderr: Inappropriate ioctl for device
I think this happened when Vagrant tried to set up the network interfaces, mainly "enp0s8".

So just use bento/ubuntu-16.04.

Tuesday, July 5, 2016

Jenkins Workflow: Executing build step for every change in commit

At work, we wanted to send an email for every change made in a project. By default, Jenkins likes to collate changes into as few builds as possible, and normally sends one email per build.

The solution seemed to be Jenkins Pipeline, which enables creating and executing jobs "on the fly" as needed.

The first problem was getting access to the ChangeLogSet. There are some preset variables available in a Jenkinsfile, but I could not find documentation for them. After some googling, Stack Overflow came to the rescue.

def changes = currentBuild.rawBuild.changeSets

But when this was executed, Jenkins complained:
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: Scripts not permitted to use method getRawBuild

There's an "In-process Script Approval" tool in Jenkins where you can allow the usage of these methods.

After that was solved, the next problem was serialization. As the actual job execution is transferred to a different node, every non-serializable object caused an exception. To prevent this, I had to null the objects in the proper places. This in turn prevented running the jobs inside the loop, as the loop variables needed to be nulled before job execution. So I had to collect the jobs into a map and, after every job was defined, null everything and use the "parallel" task to execute the jobs.

So the whole thing is here:

// changes is a list of the ChangeLogSets for this build
def changes = currentBuild.rawBuild.changeSets
// We need to collect branches for later execution, as otherwise there would be serialization exceptions
branches = [:]
for (int j = 0; j < changes.size(); j++) {
    def change = changes.get(j)
    for (int i = 0; i < change.getItems().size(); i++) {
        def entry = change.getItems()[i]
        def commitTitleWithCaseNumber = entry.getMsg()
        def commitMessage = entry.getComment()
        // split from the first non-digit
        def caseNumber = (commitTitleWithCaseNumber =~ /^[0-9]*/)
        // check that the case number was in its expected place
        if (!caseNumber[0].isEmpty() && commitTitleWithCaseNumber.startsWith(caseNumber[0])) {
            // Remove the number from the title, just for a nicer subject line
            def commitTitle = commitTitleWithCaseNumber.substring(caseNumber[0].length()).trim()
            def number = caseNumber[0]
            branches["mail-${j}-${i}"] = {
                node {
                    emailext body: commitMessage, subject: "[Sysart ${number}] ${commitTitle}", to: ''
                }
            }
        }
        // Need to forcibly null all non-serializable objects
        caseNumber = null
        entry = null
    }
    change = null
}
changes = null
stage 'Mail'
parallel branches
This was a little more difficult than I had expected, mainly because of the serialization complications. But in the end it works, so it cannot be completely stupid.

Monday, June 27, 2016

docker: Error response from daemon: invalid bit range [4, 4]

I was fooling around with Docker, trying to create an overlay network. I copied some settings from the net, and when starting a container, Docker reported an error.

root@infra-front:~# docker network create -d overlay --subnet= --gateway= --ip-range= test


root@infra-front:~# docker run --rm -ti --net test alpine sh

docker: Error response from daemon: invalid bit range [4, 4].

It seems that my network settings were wrong. For now, I just removed the gateway and subnet options and things started to work.

Tuesday, June 14, 2016

Jaspersoft Studio 6.2.2 on Fedora 23: no swt-pi-gtk in java.library.path

When starting Jaspersoft Studio 6.2.2, the only thing I got was:

Jaspersoft Studio:
GTK+ Version Check
Jaspersoft Studio:
An error has occurred. See the log file

The log file had:
cannot open shared object file: No such file or directory
no swt-pi-gtk in java.library.path
/home/jyrki/.swt/lib/linux/x86/ cannot open shared object file: No such file or directory
Can't load library: /home/jyrki/.swt/lib/linux/x86/
The problem was fixed by installing gtk2.i686 (the 32-bit version):

sudo dnf install gtk2.i686

Using ldd (print shared library dependencies) helped to find out what was actually missing, as the error message is somewhat misleading (Can't load library: /home/jyrki/.swt/lib/linux/x86/

ldd /home/jyrki/projects/jasper/
ldd: warning: you do not have execution permission for `/home/jyrki/projects/jasper/' (0xf7741000) => not found => /lib/ (0xf76af000) => /lib/ (0xf76a8000) => /lib/ (0xf74da000) => /lib/ (0xf74bd000) => /lib/ (0xf737b000) => /lib/ (0xf723a000) => /lib/ (0xf7226000) => /lib/ (0xf7214000)
/lib/ (0x5660d000) => /lib/ (0xf71ed000) => /lib/ (0xf71e8000) => /lib/ (0xf71e4000)
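The useful lines in output like that are the unresolved ones, so it helps to filter for them. In this sketch /bin/sh stands in for the SWT .so from the error message, since the point is the ldd-plus-grep pattern:

```shell
# Show only the dependencies the dynamic loader cannot resolve.
# /bin/sh is a stand-in target; point ldd at the failing SWT .so instead.
ldd /bin/sh | grep 'not found' || echo 'no missing dependencies'
```

Each "not found" line names a library to install (here, the 32-bit GTK+ libraries).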

Tuesday, April 19, 2016

Using Keycloak APIs: "RESTEASY004655: Unable to invoke request"

The following exception was thrown while executing multiple calls to the Keycloak API:

Caused by: RESTEASY004655: Unable to invoke request
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(
at org.jboss.resteasy.client.jaxrs.internal.ClientInvocation.invoke(
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientInvoker.invoke(
at org.jboss.resteasy.client.jaxrs.internal.proxy.ClientProxy.invoke(
at com.sun.proxy.$Proxy276.findAll(Unknown Source)
at org.keycloak.admin.client.resource.ClientsResource$ Source)
Caused by: java.lang.IllegalStateException: Invalid use of BasicClientConnManager: connection still allocated.
Make sure to release the connection before allocating another one.
at org.apache.http.util.Asserts.check(
at org.apache.http.impl.conn.BasicClientConnectionManager.getConnection(
at org.apache.http.impl.conn.BasicClientConnectionManager$1.getConnection(
at org.apache.http.impl.client.DefaultRequestDirector.execute(
at org.apache.http.impl.client.AbstractHttpClient.doExecute(
at org.apache.http.impl.client.CloseableHttpClient.execute(
at org.apache.http.impl.client.CloseableHttpClient.execute(
at org.jboss.resteasy.client.jaxrs.engines.ApacheHttpClient4Engine.invoke(

I was calling clients().create(representation) and did not read anything from the response, so the underlying HTTP connection was never released. The simple fix was to take the response and close it:

            def response = keycloak.realm(realm).clients().create(representation)
            response.close() // releases the connection for the next request

Saturday, March 26, 2016

Problem with Kubernetes SkyDNS healthz

I had some problems when trying to get DNS working on Kubernetes. I followed the instructions, and everything seemed to be working well, but the pod got restarted after 30 seconds. The log for the healthz container had the following entries:

2016/03/19 04:25:25 Client ip requesting /healthz probe servicing cmd sleep 10 && nslookup kubernetes.default.svc.kube.local localhost >/dev/null
2016/03/19 04:25:25 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.kube.local': Name does not resolve, at 2016-03-19 04:25:23.967737423 +0000 UTC, error exit status 1
After trying a lot of things, I found a bug report for Alpine Linux. Basically, nslookup does not respect the server parameter if /etc/resolv.conf has entries. A comment on that issue recommends using dig or drill for querying.

So I made a simple image and pushed it to Docker Hub. Nothing fancy, just added drill. I used the existing image as a base, as I wanted to keep the exechealthz binary available.

Then I had to change the healthz command to:
drill -q kubernetes.default.svc.kube.local @localhost

Friday, March 18, 2016

Kubernetes 1.2.0 beta-1 not starting on Raspbian 8.0

While trying to start Kubernetes v1.2.0 on Raspbian 8.0, I ran into problems. Only the k8s-master and k8s-master-proxy containers were started, so the system did not come up properly. The logs for k8s-master said the following:

7215 kubelet.go:2365] skipping pod synchronization - [Failed to start ContainerManager system validation failed - Following Cgroup subsystem not mounted: [memory]]
The cgroup memory subsystem is not enabled by default. You can enable it by adding the appropriate option to the kernel command line in /boot/cmdline.txt. A reboot is needed after this.
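If I remember correctly, the option in question is cgroup_enable=memory. A sketch of appending it, using a local copy of the file and an example Raspbian command line (the real /boot/cmdline.txt must stay a single line):

```shell
# An example Raspbian kernel command line; working on a local copy
# of the file here for illustration.
printf 'dwc_otg.lpm_enable=0 console=ttyAMA0,115200 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait\n' > cmdline.txt
# Append the flag that enables the memory cgroup subsystem.
sed -i 's/$/ cgroup_enable=memory/' cmdline.txt
cat cmdline.txt
```

On the Pi you would run the sed against /boot/cmdline.txt and then reboot.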

You can check whether the memory subsystem is enabled by listing /sys/fs/cgroup/, which should have a directory called "memory" among others:
blkio  cpu  cpuacct  cpu,cpuacct  cpuset  devices  freezer  memory  net_cls  systemd


Sunday, February 14, 2016

Reveal.js with background image and company logo, all in a CSS theme

I wanted to use reveal.js for my presentations. In our company we have (as usual) a standard template for presentations. The problem was that the background had two parts: a small triangle in the top left corner, and the company logo in the bottom right corner.

CSS3 supports multiple backgrounds, so after a little tinkering I came up with the following CSS snippets.

body {
  background-image: url('theme_images/corner.svg'), url('theme_images/sysart.svg');
  background-repeat: no-repeat;
  background-position: top left, bottom right;
  background-size: auto 30%, 20% 20%;
}

The front page was somewhat different:
html.title body {
  background: url('theme_images/front.svg'), url('theme_images/sysart.svg');
  background-repeat: no-repeat;
  background-position: top left, bottom right;
  background-size: 100% auto, 20% 20%;
}
The whole css can be seen in