Friday, September 5, 2014

Getting project version from Maven project in Jenkins

Execute a system Groovy script with the following content:
import hudson.FilePath
import hudson.remoting.VirtualChannel

def pomFile = build.getParent().getWorkspace().child('pom.xml').readToString();
def project = new XmlSlurper().parseText(pomFile);      
def param = new hudson.model.StringParameterValue("MAVEN_VERSION", project.version.toString());
def paramAction = new hudson.model.ParametersAction(param);
build.addAction(paramAction);
Now you can use "MAVEN_VERSION" in the build, for example by passing it on with the "Trigger parameterized build on other projects" post-build action and adding predefined parameters:
project_version=${MAVEN_VERSION}
Or in shell commands:
build.sh ${MAVEN_VERSION}
I've found this to be useful when one project deploys artifacts into a repository and another project wants to use those artifacts with an exact version number.
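For example, a downstream job's shell step could use the passed parameter to fetch the exact artifact from the repository. This is just a sketch with a made-up repository URL and artifact coordinates:

# Fetch the artifact version that the upstream build produced
curl -O https://repo.example.com/releases/com/example/my-app/${project_version}/my-app-${project_version}.war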

Bash script for resolving Docker ports

Docker containers can expose ports to the outside world when needed. This is done by giving the "-P" flag to the docker run command. This will publish all exposed ports to "a random high port from the range 49000 to 49900" (from the Docker user guide). Even though the user guide doesn't explicitly say that the Docker daemon tracks which ports are published, I would guess that it does.

Publishing to random ports is useful, as you can then have multiple containers running at the same time. We need this for our CI setup. But there's also a requirement for accessing the container from outside, so we need to resolve the port published by -P in build scripts.

"docker inspect" is a command which can be used for this. It takes "--format=template" parameter, which can be used to output information about container.

So the following bash script resolves the public port for a given container name and exposed port.

#!/bin/bash
set -o nounset
set -o errexit
function resolvePort() {
  local container=$1
  local exposedPort=$2
  local port=$(docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}}{{if eq $p "'$exposedPort'/tcp"}}{{(index $conf 0).HostPort}}{{end}} {{end}}' $container)
  echo $port
}
The magic happens in
{{range $p, $conf := .NetworkSettings.Ports}}{{if eq $p "'$exposedPort'/tcp"}}{{(index $conf 0).HostPort}}{{end}} {{end}}
In an easier-to-read format:
{{range $p, $conf := .NetworkSettings.Ports}}
    {{if eq $p "'$exposedPort'/tcp"}}
        {{(index $conf 0).HostPort}}
    {{end}}
{{end}}
The "{{range $p, $conf := .NetworkSettings.Ports}}" iterates over ports configuration. It is like map, and one key-value -pair looks something like this
"80/tcp": [
  {
      "HostIp": "0.0.0.0",
      "HostPort": "49101"
  }

$p is the key and $conf is the value. $p is the exposed port, with a value like "80/tcp", and $conf holds the corresponding host bindings.

Then there's {{if eq $p "'$exposedPort'/tcp"}}, which is a simple comparison. (In the script, the single quotes around $exposedPort close and reopen the shell quoting, so the bash variable is expanded before the template is evaluated.)

The value of $conf is an array, and in this use case we just need the first element (there is also only one). So in
{{(index $conf 0).HostPort}}
(index $conf 0) gives just that, and then (index $conf 0).HostPort returns 49101.

This can then be used like this:
readonly publicPort=$(resolvePort containerName 80)
curl localhost:$publicPort
We might have been able to avoid this by using another container for tests and doing container linking, but this seems to work okay. And I wanted to learn about docker inspect --format :)

Monday, June 2, 2014

Using HTTP Basic Authentication with YUM

HTTP Basic Authentication is supported by yum. You just have to add a username and password to the repository configuration:

username=username
password=password
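For example, a complete repository file could look something like this (the repository id, name and baseurl are placeholders):

sudo tee /etc/yum.repos.d/example.repo <<'EOF'
[example-repo]
name=Example repository behind basic auth
baseurl=https://repo.example.com/centos/$releasever/
enabled=1
gpgcheck=0
username=username
password=password
EOF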
But, at least on older versions of CentOS, this does not work. The problem is that yum relies on a Python library called "urlgrabber" for connections. The version of this library available in the repositories doesn't seem to be working. You can see the packages in the repository with "yum info" and "yum search", but you cannot install them.

I resolved this problem by installing urlgrabber from source:

git clone git://yum.baseurl.org/urlgrabber.git/
cd urlgrabber
python setup.py install
That got it working.

Trying out Ansible with Vagrant


There are two virtual machines in this setup, called "ansible" and "development". The first one ("ansible") is the host that runs Ansible, and the latter is the target. Both run Fedora 20 images created as explained in the previous blog post.

There are three somewhat advanced configurations in the Vagrantfile. First, the Vagrantfile defines two different hosts. They have to be named, and they can have different configurations.

Second, to make things easier, there's a private network between these two hosts. This way the hosts can have predefined IPs, which can then be used for making connections between them.

The third thing is the provisioning setup. Provisioning simply means configuring the environment by installing packages and modifying configurations. Here, a simple bash script is used. This script installs Ansible from source, sets up the profile file to source env-setup on login, and does some other minor things.
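The script itself isn't reproduced here, but a rough sketch of what install_ansible.sh could look like (the package names and paths are assumptions, not the exact script):

#!/bin/bash
# Dependencies for running Ansible from a source checkout
yum install -y git python-paramiko PyYAML python-jinja2 python-httplib2
# Get the Ansible sources
git clone https://github.com/ansible/ansible.git /opt/ansible
# Source env-setup on login so the ansible commands are on the PATH
echo "source /opt/ansible/hacking/env-setup" >> /home/vagrant/.bash_profile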

So here's the Vagrantfile:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "development" do |development|
    development.vm.box = "basic-fedora-20-x86_64"
    development.vm.network :private_network, ip: "192.168.111.100"
  end
  config.vm.define "ansible" do |ansible|
    ansible.vm.box = "basic-fedora-20-x86_64"
    ansible.vm.network :private_network, ip: "192.168.111.101"
    ansible.vm.provision "shell", path: "install_ansible.sh"
  end
end 
When defining multiple machines, Vagrant commands are applied to all of them by default. So you can start both machines with "vagrant up" in the directory where the Vagrantfile is. After a while, both machines have booted.

Then you can ssh into "ansible" with vagrant ssh ansible. On login, you should see something like

Setting up Ansible to run out of checkout...
The directory where the Vagrantfile is can be found at /vagrant. In that directory, you can find an inventory file (development-hosts) and a simple playbook (base.yml). To make sure that everything is working, go to the /vagrant directory and execute

ansible -i development-hosts -u vagrant -k -m ping all
which will ask for a password (which is "vagrant") and should then print

ansible | success >> {
    "changed": false,
    "ping": "pong"
}
deployment | success >> {
    "changed": false,
    "ping": "pong"
}
The command "ansible -i development-hosts -u vagrant -k -m ping all" is different from the one in the tutorial. "-i development-hosts" tells Ansible to use the given file as the inventory, "-u vagrant" means that the user who makes the connection is vagrant, and "-k" makes Ansible ask for an SSH password. The "-u vagrant" is not strictly necessary here, because without it Ansible would use the username of the currently logged-in user.

When ping is working, you can run the "base.yml" playbook with

ansible-playbook -i development-hosts -k base.yml
This playbook will output information about the default IPv4 interface using the debug module.

Now you have a pretty good playground for trying out Ansible. The first thing to do would be setting up public key authentication so you do not need to type the password all the time (hint: the authorized_key module in Ansible; see the sketch below).
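A minimal sketch of that, run on the "ansible" host (the key path and the ad-hoc invocation are my own example, not part of the original setup):

# Generate a key pair without a passphrase
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
# Push the public key to every host in the inventory with the authorized_key module
ansible -i development-hosts -u vagrant -k -m authorized_key -a "user=vagrant key='$(cat ~/.ssh/id_rsa.pub)'" all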


Tuesday, May 6, 2014

Initialize virtual machine with Vagrant

I like to have a well-defined environment for my projects. By "well defined" I mean that the environment must be explicitly defined, i.e. there must be a way to initialize the whole environment over and over again while being sure that it is exactly the same.

In the Linux world, an environment can be defined as a distribution (e.g. Fedora, Ubuntu, Mint), installed packages and configurations. These are actually a major dependency of your whole project, and they can cause major headaches. Packages are continuously updated with security and bug fixes, their behavior can change, and package versions can differ between distributions and even between installations. So these must be controlled.

I've previously blogged about how Veewee can be used for creating VirtualBox images in a controlled manner. In those posts, I created a Fedora 20 image from scratch. One of the key files in that process is the kickstart definition, which is specific to Red Hat-derived distros. In this kickstart file, you can define what packages to install into your system. The major caveat here is that you are then tightly coupled to Red Hat distros. Also, updating these packages in a controlled manner is impossible.

Configurations are a completely different beast. If you ssh into your environment and make a change, chances are that you will not remember to make the same change next time. So you're creating so-called "snowflake" environments.

So you need a tool for handling packages and configurations. The main options here are Puppet, Chef, CFEngine, Salt and Ansible. Others exist, but I would say that those are the biggest ones.

But how can you use these for initializing a virtual machine? The first step is to create running instances which are identical to each other.

First, you should have a way to control your virtual machine with definition files that can be kept in a version control system. Veewee is a tool which can define the basics of a VM, i.e. things like disk size, operating system and some boot settings.

Veewee can then output Vagrant boxes, which are binary files. Vagrant is a tool for controlling VM instances, i.e. creating, starting, stopping and destroying them.

After you've created and added a Vagrant box, you can start using it.
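If the box isn't added yet, that looks like this (the same add command appears in the Veewee post below), and vagrant box list shows which boxes are available:

vagrant box add 'basic-fedora-20-x86_64' 'basic-fedora-20-x86_64.box'
vagrant box list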

vagrant init basic-fedora-20-x86_64

This command creates a Vagrantfile in your current directory. The Vagrantfile is a text file, which can then be version controlled. In its simplest form, it is just

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "basic-fedora-20-x86_64"
end


"vagrant init" creates a Vagrantfile, which has a lot of commented lines. I'd recommend that you read all of them. Seriously.

After this, you can just execute in the directory where the Vagrantfile is
vagrant up

This will start the virtual machine defined in the Vagrantfile. It might take some time, but after this you can ssh into your box.

vagrant ssh

This will log you in as the "vagrant" user. As the kickstart file, which defines the basics of the Fedora installation, sets up passwordless sudo for the vagrant user, you have all the control needed.

Feel free to fool around, but remember that every change you make is persisted. If you want a clean environment, you have to log out from the running virtual machine and execute
vagrant destroy
vagrant up

Lastly, you can stop the instance with
vagrant halt

and bring it back up with
vagrant up

Saturday, April 26, 2014

Building Fedora 20 image with Veewee

I was in need of a Fedora 20 virtual machine and at the same time wanted to learn something new. So instead of googling around for a Vagrant image, I decided to use Veewee to build a new one. The resulting source files are available at GitHub.

After installing Veewee, I started to create my image file. Veewee has a lot of predefined templates, one of which was Fedora-20-x86_64.

One of my goals in project structures is to have everything related to a project available with one checkout. So in this case, I want to have the Veewee definition file in the same directory as the rest of the files. The basic usage of Bundler and Veewee requires that 'bundle exec veewee' is executed in the Veewee directory. So you have to define the working directory when running the command.

NOTE: THIS DOESN'T SEEM TO BE WORKING RIGHT NOW, https://github.com/jedi4ever/veewee/issues/936

bundle exec veewee vbox define 'basic-fedora-20-x86_64' 'Fedora-20-x86_64' -w ../project/veewee

This will create the definitions directory under "../project/veewee/", i.e. "../project/veewee/definitions/basic-fedora-20-x86_64". I like to have the tool name as the directory name here, so there's some hint of what these files are.

NOTE: WORKING COMMAND, EXECUTE IT IN YOUR project/veewee directory

BUNDLE_GEMFILE=/home/jyrki/projects/veewee/Gemfile bundle exec veewee vbox define 'basic-fedora-20-x86_64' 'Fedora-20-x86_64'

After executing this command, you should have the following project structure:

example-project
`-- veewee
    `-- definitions
        `-- basic-fedora-20-x86_64
            |-- base.sh
            |-- chef.sh
            |-- cleanup.sh
            |-- definition.rb
            |-- ks.cfg
            |-- puppet.sh
            |-- ruby.sh
            |-- vagrant.sh
            |-- virtualbox.sh
            |-- vmfusion.sh
            `-- zerodisk.sh

I would say that the most interesting file here is "ks.cfg", which is the kickstart file defining the installation. From there you can change disk sizes etc.

The last command to execute for building the image is

BUNDLE_GEMFILE=/home/jyrki/projects/veewee/Gemfile bundle exec veewee vbox build basic-fedora-20-x86_64

This will start VirtualBox and begin executing commands on it. Some of these include typing into the console, which is kind of funny to watch. The VirtualBox VM is left running, so you can ssh into it with the command

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -p 7222 -l vagrant 127.0.0.1

Before you can use the image with Vagrant, you have to export it from Veewee and then add it to Vagrant. First execute the command

BUNDLE_GEMFILE=/home/jyrki/projects/veewee/Gemfile bundle exec veewee vbox export basic-fedora-20-x86_64

which will shut down the machine if it is running and export it to a "basic-fedora-20-x86_64.box" file. Now this file can be imported into Vagrant with

vagrant box add 'basic-fedora-20-x86_64' 'basic-fedora-20-x86_64.box'

After this, you can start using the box in your Vagrantfiles.

Tuesday, April 22, 2014

Lessons learned from running Jenkins Slaves as Docker boxes


I've been running Jenkins slaves in Docker containers for just a week now. In general, they have been working wonderfully. Of course there have been some kinks and glitches, mainly when stopping and destroying containers. The version of the Ansible scripts used during this post can be found at https://github.com/sysart/ansible-jenkins-docker/tree/limits and the most recent at https://github.com/sysart/ansible-jenkins-docker

Systemd

I was trying to control the containers with systemd, but this seemed to cause some problems. It seems that it is quite easy to get into a situation where a container is restarted immediately after being stopped, causing some weird problems. These became visible when Docker refused to remove containers, complaining that their mounts were still in use. So I decided to forget about systemd and just use the docker module from Ansible to stop running containers.

- name: stop {{container_names}}
  docker: image="{{image_name}}" name="{{item}}" state=stopped
  with_items: container_names
After this, stopping and starting work perfectly.

Volumes

Be careful when you access files. Let's say that you have something like
VOLUME [ "/workspace" ]
ADD file /workspace/file
And then you run this with
docker run image /bin/ls /workspace
You will then see the file. But if you mount a volume, i.e.
docker run -v /hostdir:/workspace image /bin/ls /workspace
The directory will be empty. This bit me when I wanted the home directory of the jenkins user to be on a volume but still have some files added from the Dockerfile. I ended up linking a few directories in the start script to achieve what I wanted.
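A rough sketch of that idea (the paths here are made up for illustration, not the actual script):

# At container start, link files baked into the image into the volume-mounted directory
if [ ! -e /workspace/.vnc ]
then
  ln -s /home/jenkins/.vnc /workspace/.vnc
fi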

Limits and memory

The default memory amount for containers was pretty low, but it was easy to adjust. And in Fedora, there's a default process limit set in the "/etc/security/limits.d/90-nproc.conf" file, which forces the process limit to 1024 for all users.
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     1024
root       soft    nproc     unlimited
This had to be changed for both the host and the containers. The way this showed up was a random "Exception in thread "Thread-0" java.lang.OutOfMemoryError: unable to create new native thread" during test runs.
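On the host, raising the limit could look something like this (4096 is just an illustrative value, not the one we settled on):

sudo tee /etc/security/limits.d/90-nproc.conf <<'EOF'
# Raised from the 1024 default so test runs can create enough threads
*          soft    nproc     4096
root       soft    nproc     unlimited
EOF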


Sunday, April 20, 2014

Installing Veewee on Fedora 20 with rbenv

I was in need of a Fedora 20 virtual machine, and at the same time wanted to learn something new. So instead of googling around for a Vagrant image, I decided to use Veewee to build a new one.

First, I needed to install Veewee by following the instructions. Installing the required Ruby version failed, though:

rbenv install 1.9.2-p320

Last 10 log lines:
ossl_pkey_ec.c:819:29: note: each undeclared identifier is reported only once for each function it appears in
ossl_pkey_ec.c: In function ‘ossl_ec_group_set_seed’:
ossl_pkey_ec.c:1114:89: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (EC_GROUP_set_seed(group, (unsigned char *)RSTRING_PTR(seed), RSTRING_LEN(seed)) != RSTRING_LEN(seed))
^
/usr/bin/gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/openssl -DRUBY_EXTCONF_H=\"extconf.h\" -I/home/jyrki/.rbenv/versions/1.9.2-p320/include -fPIC -O3 -ggdb -Wextra -Wno-unused-parameter -Wno-parentheses -Wpointer-arith -Wwrite-strings -Wno-missing-field-initializers -Wno-long-long -o ossl_cipher.o -c ossl_cipher.c
make[1]: *** [ossl_pkey_ec.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory `/tmp/ruby-build.20140418062211.18615/ruby-1.9.2-p320/ext/openssl'
make: *** [mkmain.sh] Error 1

This is a known issue, which can be fixed by patching the Ruby sources during the rbenv install. Note that the patch contains changes to test/openssl/test_pkey_ec.rb, which doesn't seem to be present in 1.9.2-p320, so the command found in the issue description needs some modifications.

curl -fsSL https://bugs.ruby-lang.org/projects/ruby-trunk/repository/revisions/41808/diff?format=diff | filterdiff -x ChangeLog -x test/openssl/test_pkey_ec.rb | rbenv install --patch 1.9.2-p320
(edit: Reported as ruby-build #555)
Note that running this requires the filterdiff command from the patchutils package.

After that, the rest of the installation went smoothly.

Monday, April 14, 2014

Self registering Jenkins hosts with Docker and Ansible

At work (Sysart), we have had a lot of problems with Jenkins builds interfering with each other when running on the same slave. Problems ranged from port conflicts to trying to use the same Firefox instance during Selenium tests. The easiest way to solve this is to run only one build at a time per slave. But we have some decent hardware with Intel i7 processors (4 cores, HT enabled), so running one job at a time per slave is kind of wasteful.

Previously, we used oVirt for creating virtual machines and then added them manually to Jenkins as slaves. But as we wanted 10+ slaves, this would've been tedious. Also, running a VM has overhead, which starts to hurt pretty quickly.

So enter Docker, Ansible and the Swarm plugin.

The basic idea is to have a Docker image which connects to Jenkins immediately at start. The image contains everything needed for running our tests, including stuff required for Selenium tests, like Firefox. Building images and containers is handled with Ansible's docker and docker_image modules; the actual starting and stopping of running containers is done with systemd, mainly because I wanted to learn how to use that too :). Systemd also has systemd-journal, which is pretty amazing.

The image is built on the containing host for now, as it was just easier. I'm definitely checking out a Docker repository in the near future.

Volumes are used for the workspace, mainly to persist Maven repositories between restarts. I had some problems with write permissions on the first try, but resolved this with some bash scripting.

Started containers can have labels, which are just added in the playbook with the docker module's "command" variable. There's some funny quoting to get the parameters right; see "start.sh" for details.

The main files are included below, and an example playbook with the module can be found at GitHub.

Of course, there were some problems doing this.

Ansible
  • The docker_image module reports changes every time. This effectively prevents using handlers to restart containers.
  • I couldn't get the uri module to accept multiple HTTP codes as return codes ("Can also be comma separated list of status codes."). Most likely just a misunderstanding of the documentation.
  • The service module failed to parse the output when starting/stopping container services, complaining about being unable to parse JSON. docker start and docker stop print the container id to stdout, so this might be the reason.

Docker
  • docker -d starts all the containers by default. This can be prevented by adding -r as a parameter. But this doesn't seem to have an effect when the service is restarted. If docker -d starts the containers, then systemd tries to start a container which fails, causing a restart.
  • I couldn't get volumes to be chowned to the jenkins user. We need a non-root user for our tests, as we do some filesystem permission tests.
Jenkins
  • Slave removal is slow, which can easily cause problems as containers are stopped and restarted quickly. Luckily this can be checked via the REST API (see the example below).
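For example, whether Jenkins has already forgotten a slave can be polled like this (the Jenkins URL and slave name are placeholders; the same check is used in the handlers further below):

# Returns 404 once Jenkins has removed the slave
curl -s -o /dev/null -w '%{http_code}' http://jenkins.example.com/computer/slave-name/api/json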

There are still a few things I'd like to add here:
  • Enable committing and downloading the container used for a given test run. This would be helpful in situations where tests were successful on a developer's environment but not on Jenkins. But then, developers should use the same image base as the test environment :)
  • Have a production image, which would be extended by the test image.

And a protip for image development: have two different images, "jenkins-slave" and "jenkins-slave-test". The "jenkins-slave-test" image inherits from "jenkins-slave", but has ENTRYPOINT overridden to "/bin/bash" so you can explore the image.
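If you don't want to maintain a separate image, the entrypoint can also be overridden for a single run, which is an alternative way to poke around (the image name is the one from the tip above):

docker run -i -t --entrypoint /bin/bash jenkins-slave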

So, here are the main parts of how this was done. I'm sure there are a lot of better ways to do things, so please, tell me :).

The jenkins_slaves.yml playbook is something like this:
- hosts: jenkins-slaves
  vars:
    - jenkins_master: "http://jenkins.example.com"
    - container_names: [ builder1, builder2, builder3, builder4, builder5, builder6 ]
  roles:
    - { role: docker-host, image_name: jenkins_builder }

The template for the Dockerfile is the following:
FROM fedora
MAINTAINER jyrki.puttonen@sysart.fi
RUN yum install -y java-1.7.0-openjdk-devel blackbox firefox tigervnc-server dejavu-sans-fonts dejavu-serif-fonts ImageMagick unzip ansible puppet git tigervnc
RUN useradd jenkins
ADD vncpasswd /home/jenkins/.vnc/passwd
RUN chown -R jenkins:jenkins /home/jenkins/.vnc
# Run as the jenkins user. The biggest reason for this is that in our tests we want to
# check some filesystem rights, and those tests will fail if the user is root.
#ADD http://maven.jenkins-ci.org/content/repositories/releases/org/jenkins-ci/plugins/swarm-client/1.15/swarm-client-1.15-jar-with-dependencies.jar /home/jenkins/
ADD swarm-client-1.15-jar-with-dependencies.jar /home/jenkins/
# Without this, maven has problems with umlauts in tests
ENV JAVA_TOOL_OPTIONS -Dfile.encoding=UTF8
# so vncserver etc. use the right directory
ENV HOME /home/jenkins
WORKDIR /home/jenkins/
ADD start.sh /home/jenkins/
RUN chmod 755 /home/jenkins/start.sh
ENTRYPOINT ["/home/jenkins/start.sh"]

start.sh starts the Jenkins swarm client:
#!/bin/bash
OWNER=$(stat -c %U /workspace)
if [ "$OWNER" != "jenkins" ]
then
chown -R jenkins:jenkins /workspace
fi
# Use swarm client to connect to jenkins. Broadcast didn't work due to container networking,
# so easiest thing to do was just to set right address.
{% set labelscsv = labels|join(",") -%}
{% set labelsflag = '-labels ' + labelscsv -%}
su -c "/usr/bin/java -jar swarm-client-1.15-jar-with-dependencies.jar -master {{jenkins_master}} -executors 1 -mode {{mode}} {{ labelsflag if labels else '' }} -fsroot /workspace $@" - jenkins


vars/main.yml has the following variables defined:
docker_directory: "docker"
image_name: "igor-builder"
docker_file: "Dockerfile.j2"
docker_data_directory: "/data/docker"
image_build_directory: "{{docker_data_directory}}/{{image_name}}"

And tasks/main.yml is like this. There are a lot of comments inside, so I decided to include it here as is.

# As I want to control individual containers with systemd, install a new unit
# file that adds "-r" to the options so docker -d doesn't start containers.
# Without this, containers started by systemd would fail to start, and would be
# started again
- name: install unit file for docker
  copy: src=docker.service dest=/etc/systemd/system/docker.service
  notify:
    - reload systemd

# Install docker from updates-testing, as there is 0.9.1 available and it handles deleting containers better
- name: install docker
  yum: name=docker-io state=present enablerepo=updates-testing

- name: start docker service
  service: name=docker enabled=yes state=started

- name: install virtualenv
  yum: name=python-virtualenv state=absent

- name: install pip
  yum: name=python-pip state=present

# The docker module requires a version that is > 0.3, which is not in the Fedora repos, so install with pip
- name: install docker-py
  pip: name=docker-py state=present

- name: create working directory {{image_build_directory}} for docker
  file: path={{image_build_directory}} state=directory

- name: install unit file for systemd {{container_names}}
  template: src=container-runner.service.j2 dest=/etc/systemd/system/{{item}}.service
  with_items: container_names
  notify:
    - enable services for {{container_names}}
    - reload systemd

# Set up the files needed for building the docker image for Jenkins usage
- name: Download swarm client
  get_url: url="http://maven.jenkins-ci.org/content/repositories/releases/org/jenkins-ci/plugins/swarm-client/1.15/swarm-client-1.15-jar-with-dependencies.jar" dest={{image_build_directory}}

- name: copy vnc password file
  copy: src=vncpasswd dest={{image_build_directory}}

- name: copy additional files
  copy: src={{item}} dest={{image_build_directory}}
  with_items: additional_files

- name: create start.sh
  template: src=start.sh.j2 dest={{image_build_directory}}/start.sh validate="bash -n %s"

- name: copy {{docker_file}} to host
  template: src="{{docker_file}}" dest="{{image_build_directory}}/Dockerfile"

# This is something I would like to do, but the docker module can't set volumes as rw,
# volumes="/data/builders/{{item}}:/home/jenkins/work:rw"
# Also I couldn't get the user to be "jenkins" for volumes
- name: create volume directories for containers
  file: path="/data/builders/{{item}}" state=directory
  with_items: container_names

# For some reason, this will always return changed
- name: build docker image {{ image_name }}
  docker_image: path="{{image_build_directory}}" name="{{image_name}}" state=present
  notify:
    - stop {{container_names}}
    - wait for containers to be removed on Jenkins side
    - remove {{container_names}}
    - create containers {{container_names}} with image {{image_name}}
    - wait for containers to be started
    - start {{container_names}}


and handlers/main.yml:
- name: reload systemd
  command: systemctl daemon-reload

# Can't use service here, Ansible fails to parse the output
- name: enable services for {{container_names}}
  command: /usr/bin/systemctl enable {{ item }}
  with_items: container_names

# service cannot be used here either, Ansible fails to parse the output.
- name: stop {{container_names}}
  command: /usr/bin/systemctl stop {{ item }}
  # service: name={{ item }} state=stopped
  with_items: container_names

# Jenkins takes a while to remove slaves. If containers are started immediately, they will have names
# containing the ip address of the host in them. Ugly :(
- name: wait for containers to be removed on Jenkins side
  command: curl -s -w %{http_code} {{ jenkins_master }}/computer/{{ansible_hostname}}-{{item}}/api/json -o /dev/null
  register: result
  tags: check
  until: result.stdout.find("404") != -1
  retries: 10
  delay: 5
  with_items: container_names

- name: remove {{container_names}}
  docker: name="{{item}}" state=absent image="{{image_name}}"
  with_items: container_names

- name: create containers {{container_names}} with image {{image_name}}
  docker: image="{{image_name}}" name="{{item}}" hostname="{{item}}" memory_limit=2048MB state=present command="\"-name {{ansible_hostname}}-{{item}}\"" volumes="/data/builders/{{item}}:/workspace"
  with_items: container_names

- name: wait for containers to be started
  pause: seconds=10

- name: start {{container_names}}
  command: /usr/bin/systemctl start {{ item }}
  with_items: container_names


Friday, March 7, 2014

Installing biber for biblatex on Fedora 20.

On Fedora, biber isn't part of the biblatex packaging. The reason for this seems to be that it brings along a lot of Perl dependencies (https://bugzilla.redhat.com/show_bug.cgi?id=584063).

Luckily there's a Copr repository for biber, http://copr.fedoraproject.org/coprs/cbm/Biber/. To use this, create a biber.repo file in /etc/yum.repos.d/ with the contents from http://copr.fedoraproject.org/coprs/cbm/Biber/repo/fedora-20-i386/. It seems to be working fine, but it really does bring along those Perl dependencies. Well, disk is cheap.
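A minimal sketch of those steps, assuming the URL above serves the repository definition directly as described:

# Drop the Copr repository definition into yum's configuration and install biber
sudo curl -o /etc/yum.repos.d/biber.repo http://copr.fedoraproject.org/coprs/cbm/Biber/repo/fedora-20-i386/
sudo yum install -y biber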

Thursday, February 27, 2014

Vagrant: The guest operating system of the machine could not be detected!

TL;DR: check that the shell defined by your config.ssh.shell is installed.

Edit: Reported as https://github.com/mitchellh/vagrant/issues/3040 and got an immediate reaction, cool.

Today I encountered the following error with Vagrant when executing "vagrant up":

The guest operating system of the machine could not be detected!
Vagrant requires this knowledge to perform specific tasks such
as mounting shared folders and configuring networks. Please add
the ability to detect this guest operating system to Vagrant
by creating a plugin or reporting a bug.

This box had worked before, and it was based on Ubuntu 13.10, so I was pretty confused.

After enabling debug logging for Vagrant (well, after updating Vagrant and trying a zillion different things :)), the following log was shown:


.... (a lot of log)
DEBUG ssh: stderr: bash: /bin/zsh: No such file or directory

DEBUG ssh: Exit status: 127
ERROR warden: Error occurred: The guest operating system of the machine could not be detected!
.... (a lot of log)
ERROR vagrant: The guest operating system of the machine could not be detected!
Vagrant requires this knowledge to perform specific tasks such
as mounting shared folders and configuring networks. Please add
the ability to detect this guest operating system to Vagrant
by creating a plugin or reporting a bug.
ERROR vagrant: /opt/vagrant/embedded/gems/gems/vagrant-1.4.3/lib/vagrant/guest.rb:48:in `detect!'
/opt/vagrant/embedded/gems/gems/vagrant-1.4.3/lib/vagrant/machine.rb:182:in `guest'

After seeing this, the reason was obvious: I had set zsh as the shell with

config.ssh.shell = "/bin/zsh"

And it was not installed in the box.

Monday, February 17, 2014

Few tips for making application Docker friendly (Or some problems with WebLogic 10.3.6 and Docker)

I've been playing around with Docker, trying to install WebLogic 10.3.6. Here are some tips for making an application Docker friendly, mainly based on issues I encountered. Perhaps some of these are just general good practices.

1. Make it possible to bypass disk space checks
Docker (0.8) doesn't allow you to define the disk size, so when WebLogic tries to verify that there's enough space on the disk, it fails. Luckily there's a parameter which makes it possible to ignore this check. As a side note, Oracle DB has the same option, but it still asks the user "Are you sure" or something similar. That sucks when trying to install with Puppet.

2. Use environment variables to define configuration options.
Docker has an excellent way to define environment variables in the Dockerfile and in the run command. There are also a lot of useful variables present. I found this out when trying to deploy separate front- and backend servers and needed a way to define addresses for connections.
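For example, a connection address could be passed in at run time like this (the variable and image names are made-up placeholders):

docker run -e BACKEND_ADDRESS=backend.example.com -p 7001:7001 weblogic-image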

3. Do not assume anything about ports or addresses.
The WebLogic Maven plugin seems to have some weird issue when a port is forwarded to a different one, i.e. docker run -p 7001:7002 for the Admin server. This causes the deployment to fail. Just a little thing you should test.

I've managed to get WebLogic up and running with Docker, which is a huge plus. Testing our application with WebLogic is so much easier now.

And no, we did not select WebLogic. Our customer did.

Thursday, January 23, 2014

Weblogic with Docker: Insufficient disk space!

When installing WebLogic 10.3.6 in a Docker container, the installer complained that there wasn't enough disk space.

Insufficient disk space! The installer requires:
222MB for the BEA Home at /opt/wls/Middleware11gR1,
476MB for the product at /opt/wls/Middleware11gR1/coherence_3.7,/opt/wls/Middleware11gR1/utils/uninstall/WebLogic_Platform_10.3.6.0,/opt/wls/Middleware11gR1/wlserver_10.3 and
1087MB temporary work space at /tmp/bea805456445130318772.tmp.
There is only 1MB available at /opt/wls/Middleware11gR1.

I think that Docker resizes the filesystem as needed, so it is just the installer that gets confused. According to http://docs.oracle.com/cd/E24329_01/doc.1211/e26593/issues.htm#BCFIHGCJ, this check can be circumvented with -Dspace.detection=false.
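With the generic jar installer, that might look something like this (the jar and silent.xml file names are assumptions about the setup, not taken from the post):

java -Dspace.detection=false -jar wls1036_generic.jar -mode=silent -silent_xml=/tmp/silent.xml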

As I'm playing around with Edmond Biemonds fantastic puppet modules (https://github.com/biemond/puppet), I needed to have a way for adding this parameter into installation command. Luckily it was simple, and pull request has already been merged (https://github.com/biemond/puppet/pull/28).