

Continuous Delivery with Jenkins and Docker

If you’ve ever wondered how to actually build a continuous delivery pipeline for your project, this is going to be the ultimate guide. I will show you how to build a continuous delivery pipeline from the ground up.

The pipeline will make heavy use of Jenkins together with Docker to provide a stable platform to build on.

The overall goal is to set up a build process which runs on every commit, compiles all the classes, runs all the unit tests, and automatically deploys the application to provide a running instance that can be used by testers.

Requirements

To follow this guide you will need a few applications installed. I will use Vagrant to create a virtual machine (vagrant box), which makes it really easy to follow along without installing a bunch of software on your machine.

Applications on your machine (host):

  • VirtualBox
  • Vagrant
  • Git (to clone the example repository)

Applications on the vagrant box (guest, installed automatically during provisioning):

  • Jenkins
  • Docker

Setting up Vagrant

The first thing you have to do is install VirtualBox and Vagrant. It’s pretty easy; just follow the instructions of the installers.

Once you have done that, you can clone the repository from GitHub that I’ve prepared. The repository contains a Vagrantfile which will set up your virtual machine with all the necessary tools.

After you’ve cloned the repository open a command shell in the cloned directory and start the virtual machine.

vagrant up

Depending on your internet connection it may take a while until that command finishes. It does a lot of work, downloading and installing all the applications on the virtual machine that we’ll need in a minute.

Once the command has finished you can log into the vagrant box.

vagrant ssh

This command will ssh into the virtual machine and should look like this:

[Screenshot: vagrant ssh]

Preparing Jenkins

When you start up your vagrant box, Jenkins will be started automatically in the background.
As soon as Jenkins is up and running, you can open a browser on your host machine and navigate to Jenkins at http://localhost:9080.

Now let’s install some plugins for Jenkins:

  • Git plugin to be able to fetch Git projects from GitHub
  • Copy Artifact plugin to be able to copy artifacts from one project to another

The easiest way is to navigate to Manage Jenkins -> Manage Plugins -> Available and filter for the plugin names.

[Screenshot: Jenkins plugin installation]

As soon as the plugins are installed, you have to set up a Maven installation to be able to build Maven projects. This can be done under Manage Jenkins -> Configure System -> Maven.

[Screenshot: Jenkins Maven installation]

Configuring the build job in Jenkins

You can now create your first build job, which will compile your application and run some basic unit tests. We’ll be using slackspace-javaee-classpath-properties as an example application. The project is a Java EE application hosted on GitHub, so it’s pretty easy to include it in our pipeline. As the project uses Maven, we can easily compile it without having to worry about dependencies.

Create a new Maven project in Jenkins named “javaee-classpath-properties-build”.
Then choose Git as the Source Code Management system and use the correct URL to the repository: https://github.com/cternes/slackspace-javaee-classpath-properties.git.

[Screenshot: Jenkins Maven project]
[Screenshot: Jenkins Source Code Management configuration]

As the build goal use package, as we want to compile the project and package all the classes into a *.war file.

[Screenshot: Jenkins Maven goals configuration]

The last step is to define a post-build action called Archive the artifacts. The files to archive should be set to target/*.war. This ensures that the packaged war file can be accessed from another project later on.

You can now test whether the job works by clicking the Build Now button. If everything works, the job should be displayed with a blue circle after a while.

Configuring the staging job in Jenkins

Now it is time to configure another job that will make use of docker containers to provide a running instance of the application for testing.

First of all, create another Jenkins job named javaee-classpath-properties-staging. The type should be Freestyle project.

[Screenshot: Jenkins freestyle project]

Leave the Source Code Management section untouched. Instead, check the build trigger Build after other projects are built and enter the name of our build project (javaee-classpath-properties-build).

Add a build step of type Copy artifacts from another project and enter the following properties:

  • Project name: javaee-classpath-properties-build
  • Which build: Latest successful build
  • Check Stable build only
  • Artifacts to copy: target/*.war

Add another build step of type Execute shell and insert the following commands:

# Kill a possibly still running container from a previous staging run
docker ps -a | grep 'javaee-classpath-properties-staging:latest' | awk '{print $1}' | xargs --no-run-if-empty docker kill

# Generate a Dockerfile that adds the war file to a GlassFish base image
# (this relies on echo interpreting \n escapes, as /bin/sh on Ubuntu does;
#  with bash you would need echo -e or printf instead)
echo "FROM glassfish:4.1\nMAINTAINER cternes <github@slackspace.de>\n# Deploy application\nADD target/javaee-classpath-properties.war /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/" > $WORKSPACE/Dockerfile

# Build a fresh image from the workspace and start a container from it
service=$JOB_NAME
service_port=8080
docker build -t $service $WORKSPACE
container_id=$(docker run -d -p $service_port:$service_port $service)

echo "App running on http://localhost:$service_port/javaee-classpath-properties/"

[Screenshot: Jenkins Docker build step]

Ok, let me explain what we’ve just done. We have configured another job which will be started automatically after our build job. The job will only be triggered if the build job was successful. We’ve also configured that the generated war file from our build job is copied into the staging job and reused there. This ensures that we’re using exactly the same file that was compiled and tested during the build job. This is an extremely important concept because it makes us independent from other commits which have been pushed while the build job was running and which might break the build.

The main work of this job is a little bit cryptic, hidden in the Execute shell build step. What it basically does is create a Dockerfile and then build a new docker image from that Dockerfile.
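
Written out, the Dockerfile generated by the echo command above looks like this:

FROM glassfish:4.1
MAINTAINER cternes <github@slackspace.de>
# Deploy application
ADD target/javaee-classpath-properties.war /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/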

While building the docker image, it downloads a GlassFish base image and injects our generated war file into it. After that the docker container is started, which means the GlassFish Java EE application server starts up and our application is automatically deployed into it. As soon as the docker container is running, we can access our application in the browser at http://localhost:8080/javaee-classpath-properties/.

You can now test if everything works by “building” the staging job.
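
If you want to double-check on the vagrant box that the container actually came up, a quick way is (assuming curl is available on the box; the exact output will vary):

docker ps
curl -I http://localhost:8080/javaee-classpath-properties/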

Putting it all together

The hard work is done. You can see the results by building the job “javaee-classpath-properties-build” in Jenkins. This will compile the application, run the tests and package the application. When the first job has finished successfully, the second job is triggered automatically and fires up a docker container with your application inside.

After a short time you should have access to your application and can test it in the browser. Note how fast the docker container starts up. On my machine the second job usually runs in under a second. That’s amazingly fast for starting up a whole testing environment from scratch.

Next Steps

You’ve learned the basics of how to build a continuous delivery pipeline with Jenkins and Docker. There is a lot more that can be done from here, like configuration management, acceptance tests, service discovery, monitoring and so on.

But this article is only meant as a kickstart to develop your own ideas. Any comments or feedback are appreciated.

Posted in javaee, programming, tutorials | 1 Comment

Speed up development with vagrant

Maybe you know this situation: coming to a new project often requires installing a lot of stuff. Databases, web servers, dependencies and all the other bits start to clutter your workstation. If you switch between projects often, or just want to try out a new technology for a few days, this can be quite annoying.

A few years ago, we all thought virtualization would come to the rescue: just set up a virtual machine with all required technologies, install your favorite IDE and start developing inside the virtual machine. In fact this approach has its problems. Programs in the virtual machine respond quite slowly, the IDE is not very responsive, and after you’ve used a virtual machine for a while it is cluttered with a lot of stuff and gets slower and slower.

Lightweight virtualization with Vagrant

A year ago I discovered Vagrant and since then I’ve been using it heavily for development and for trying out new stuff. Vagrant takes the approach of using virtual machines to the next level. With Vagrant you can set up a fresh virtual machine within seconds and also destroy it within seconds.

There are several ways to use Vagrant. I’m using it to separate runtime and development environments by setting up virtual machines with runtime environments while keeping development tools on the host machine. That means if I’m developing a Java EE application, I use Vagrant to set up a virtual machine with Java and an application server like GlassFish. The application itself is developed, as usual, with an IDE on my machine. To run the application, I deploy it to the application server in the virtual machine and access it with a browser on my machine. Thus, I keep my workstation free from runtime stuff while still developing at the native speed of my machine.

One important aspect is that you can start with a freshly installed virtual machine at any time. If you’ve messed something up in your virtual machine, you can just throw it away and create a new one with exactly the same settings within seconds. Vagrant makes this very easy. In fact I start off with a new virtual machine every day to keep it as clean as possible.

How to start

To give you a small overview of how Vagrant works, it’s best to try it out with a small example. Let’s run an Apache web server inside a Vagrant-managed virtual machine.

To start with Vagrant you have to install two things: VirtualBox and Vagrant itself.

After you’ve installed the two programs you can start creating your first Vagrantfile. Please create a file named Vagrantfile in your home directory.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "phusion/ubuntu-14.04-amd64"

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8000" will access port 80 on the guest machine.
  config.vm.network "forwarded_port", guest: 80, host: 8000

  config.vm.provision :shell, path: "bootstrap.sh"

end

The Vagrantfile is the core of Vagrant and is required to configure a virtual machine. Let’s look at some details.

config.vm.box = "phusion/ubuntu-14.04-amd64"

Here we’re telling Vagrant to use Ubuntu as the operating system of the virtual machine. When starting the virtual machine, Vagrant will download Ubuntu once and store it for later use. That means if you start the virtual machine a second time, or build another virtual machine with the same operating system, it is already cached and will not be downloaded again.

config.vm.network "forwarded_port", guest: 80, host: 8000

This will make the webserver (port 80) which runs inside the virtual machine accessible to your host machine on port 8000.

config.vm.provision :shell, path: "bootstrap.sh"

This means that after starting up the virtual machine, the given script is executed on it. This can be used to further configure the virtual machine, e.g. by installing additional tools.

Now it’s time to create the bootstrap.sh file. Create it in the same directory as your Vagrantfile.

apt-get update
apt-get install -y apache2

This bootstrap file will download and install Apache as a web server on the virtual machine during startup.

Let’s start up the virtual machine by opening a terminal/command line in the folder with the Vagrantfile and executing the following command:

vagrant up

Vagrant will now download the Ubuntu image and install the Apache web server. Depending on your internet connection this can take a while.
After Vagrant is done you should see the message “Starting web server apache2” on the console.

[Screenshot: vagrant up]

The virtual machine is now running in the background, and we can test whether we can reach the web server. Fire up a browser and check if you can reach http://localhost:8000.

If everything works, you should see the default landing page of Apache. Congratulations, you’ve successfully set up a virtual machine and accessed it with your browser.

[Screenshot: Apache2 Ubuntu default landing page]

Working with virtual machines

Now that our virtual machine is up and running, we can take a closer look inside. To do this you can open an ssh shell into the virtual machine.

vagrant ssh

This command will ssh you directly into the virtual machine. You can take a look around, create or edit files, and do everything you can do on a normal machine.
Let’s do an experiment: navigate to the folder /vagrant on the file system and list all the files.

cd /vagrant
ls -l

[Screenshot: directory listing of /vagrant]

You can see the files which were used to create the virtual machine. In fact this is a shared folder on the host system, and we have full read/write access to it (try it by creating a file here). This means we can easily exchange files between our host machine and the virtual machine. It’s possible to share more folders between the host and the virtual machine by configuring them in the Vagrantfile.
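
For example, an additional shared folder can be configured with a single line in the Vagrantfile (the folder names here are just placeholders):

config.vm.synced_folder "../data", "/vagrant_data"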

To exit the ssh session you can press CTRL+D.

At some point we have to stop the virtual machine and fire it up again later. There are two possibilities: either persist the changes made in the virtual machine, or throw the virtual machine away and start with a fresh one.

To stop the virtual machine but keep the changes you can use

vagrant halt

and later on

vagrant up

to start the virtual machine again. Please note that the bootstrap.sh file will not be executed again; the web server and all of your changes are still there.

If you prefer to throw the virtual machine away and start fresh (like I do) you can use

vagrant destroy

and use

vagrant up

to get a fresh copy. In this case the bootstrap.sh file will be executed again and the web server will be downloaded and installed anew. Note that all of your previous changes are wiped away.

[Screenshot: vagrant destroy]

There is a way to preserve changes by extracting a new image from a virtual machine (e.g. to preserve installed programs). This image can then be used as a base for other virtual machines, but that is beyond the scope of this article (see vagrant package in the documentation).
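
As a rough sketch, packaging the currently running machine and reusing it as a base box looks like this (the box and file names are placeholders):

vagrant package --output webserver.box
vagrant box add my/webserver webserver.box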

Summary

I’ve tried to give you a quick introduction to Vagrant and I hope you can see its potential. It can be used to quickly throw some programs into a virtual machine, try them out and then wipe all of it away without cluttering your main machine. It is also a very good utility when you’re trying to separate development and runtime environments.

There is a lot more that you can do with Vagrant. Just make sure to check out the documentation.

Posted in programming | Leave a comment

Injecting properties in Java EE applications

In almost every application there are settings that must be read from somewhere to configure the application. User names or IP addresses are good examples of such settings.
Using settings is the standard way to make software configurable. There are many ways to achieve this. One is to store the settings in a database. Another, probably the simplest, is to read the settings from a file.

To keep things simple, let’s focus on storing the settings in a file. If you’re building a Java EE application you can make use of dependency injection with CDI.
CDI makes it really simple to create a provider class for your configuration files. The key is the @Produces annotation, which is looked up at runtime so that the result of the @Produces method can be injected into other CDI-enabled classes.

@Produces
public Properties provideServerProperties() {
    // readPropertiesFromFile is a placeholder; a classpath-based
    // implementation is shown further below
    Properties p = readPropertiesFromFile("myfile.properties");
    return p;
}

Generic approach with annotations

The next level, and a more generic approach, is to use a dedicated annotation that marks injection points and also supports multiple configuration files. I’m using an annotation called PropertiesFromFile with a single attribute which determines the configuration file to use. The file name is optional; if it is not provided, a file named config.properties is used.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Target({ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
public @interface PropertiesFromFile {

    /**
     * This value must be a properties file in the classpath.
     */
    @Nonbinding
    String value() default "config.properties";
}

Please note that the configuration files need to be on the classpath of the application. If you’re using Maven this can be achieved by putting the files under src/main/resources.

To use the new annotation the producer class needs to be adapted.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;

@Dependent
public class PropertyReader {

    @Produces
    @PropertiesFromFile
    public Properties provideServerProperties(InjectionPoint ip) {
        // get filename from annotation
        String filename = ip.getAnnotated().getAnnotation(PropertiesFromFile.class).value();
        return readProperties(filename);
    }

    private Properties readProperties(String fileInClasspath) {
        // try-with-resources ensures the stream is closed
        try (InputStream is = this.getClass().getClassLoader().getResourceAsStream(fileInClasspath)) {
            if (is == null) {
                System.err.println("File " + fileInClasspath + " not found in classpath.");
                return null;
            }

            Properties properties = new Properties();
            properties.load(is);
            return properties;
        } catch (IOException e) {
            System.err.println("Could not read properties from file " + fileInClasspath + " in classpath. " + e);
        }

        return null;
    }
}

At runtime, when an annotation of type @PropertiesFromFile is found, CDI looks for the corresponding producer. If one is found, the producer method is called with the InjectionPoint as parameter. In the producer method the file name is read from the annotation, the corresponding properties file is loaded from the classpath and the properties are returned.

Injecting the properties at runtime

To inject the properties it is sufficient to use the @PropertiesFromFile annotation together with @Inject in any CDI managed class.

public class StartupManager {
   
    @Inject
    @PropertiesFromFile("custom.properties")
    Properties customProperties;
}

In the class above the properties of the file custom.properties will be injected at runtime.
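
Since the annotation declares config.properties as its default value, the file name can also be omitted; the default file is then injected:

@Inject
@PropertiesFromFile // falls back to config.properties
Properties defaultProperties;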

Summary

Using a custom annotation and a producer class makes it very easy to inject complex objects into other CDI-managed classes at runtime. With one simple annotation a lot of work is done in the background, abstracting repetitive, non-domain-specific logic away. With the above solution you can easily inject properties from settings files into arbitrary classes.

Of course, reading properties from a file is a very simple approach to configuration management, but for small applications it can be sufficient and come in handy.

A complete working example can be found on GitHub. Watch the log file of the application server while deploying the sample application, as the properties are printed to the log.

In addition, there is a basic servlet available at http://localhost:8080/javaee-classpath-properties/ which prints out the properties of config.properties.

Posted in javaee, programming | 1 Comment

IIS Error 500 ExtensionlessUrlHandler

I’ve recently encountered the following error in my IIS after starting a (previously) working ASP.NET application:
Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list
(or in German: Der Handler “ExtensionlessUrlHandler-Integrated-4.0” weist das ungültige Modul “ManagedPipelineHandler” in der Modulliste auf)

The error appeared after a fresh reinstall of Windows. After a lot of googling I realized that the installation order of IIS and the .NET Framework leads to this error (WTF Microsoft).

The solution is to simply re-register ASP.NET in IIS:

c:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i
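
On a 64-bit system, the framework lives in the Framework64 directory, so the command becomes:

c:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i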
Posted in programming | 2 Comments

C# How to force decimal precision in xml serialization

Recently, I tried to serialize some XML in C# and stumbled across the problem that I had to force the precision scale of decimal values. By default the XmlSerializer uses the exact value of the underlying decimal when serializing to XML.

That means, if you assign 2 to a decimal value your xml will look like

<myvalue>2</myvalue>

But if you assign 2.00 to a decimal value it will look like

<myvalue>2.00</myvalue>

One solution could be to always use the Math.Round() function to round your decimal values, but this leads to a lot of unnecessary and unmaintainable code.
Instead I wanted a solution that automatically takes care of the decimal scale during XML serialization.

XmlSerializer extension

The solution I’ve come up with uses an extension method for the XmlSerializer class.
This method iterates over all public properties, looks for decimal values and applies the target precision to them. It also works for fairly complex XML serialization trees with nested objects and lists.

using System.Collections;
using System.Globalization;
using System.IO;
using System.Reflection;
using System.Xml.Serialization;

namespace Slackspace.Serializer
{
    public static class XmlSerializerExtensions
    {
        // the target format of the decimal precision, change to your needs
        private const string NumberFormat = "0.00";

        public static void SerializeWithDecimalFormatting(this XmlSerializer serializer, Stream stream, object o)
        {
            IteratePropertiesRecursively(o);
           
            serializer.Serialize(stream, o);
        }

        private static void IteratePropertiesRecursively(object o)
        {
            if (o == null)
                return;

            var type = o.GetType();

            var properties = type.GetProperties();

            // enumerate the properties of the type
            foreach (var property in properties)
            {
                var propertyType = property.PropertyType;

                // if property is a generic list
                if (propertyType.Name == "List`1")
                {
                    var val = property.GetValue(o, null);
                    var elements = val as IList;

                    if (elements != null)
                    {
                        // then iterate through all elements
                        foreach (var item in elements)
                        {
                            IteratePropertiesRecursively(item);
                        }
                    }
                }
                else if (propertyType == typeof (decimal))
                {
                    // check if there is a property with name XXXSpecified, this is the case if we have a type of decimal?
                    var specifiedPropertyName = string.Format("{0}Specified", property.Name);
                    var isSpecifiedProperty = type.GetProperty(specifiedPropertyName);
                    if (isSpecifiedProperty != null)
                    {
                        // only apply the format if the value of XXXSpecified is true, otherwise we will get a nullRef exception for decimal? types
                        var isSpecifiedPropertyValue = isSpecifiedProperty.GetValue(o, null) as bool?;
                        if (isSpecifiedPropertyValue == true)
                        {
                            FormatDecimal(property, o);
                        }
                    }
                    else
                    {
                        // if there is no property with name XXXSpecified, we can safely format the decimal
                        FormatDecimal(property, o);
                    }
                }
                else
                {
                    // if property is a XML class (contains XML in name) iterate through properties of this class
                    if (propertyType.Name.ToLower().Contains("xml") && propertyType.IsClass)
                    {
                        IteratePropertiesRecursively(property.GetValue(o));
                    }
                }
            }
        }

        private static void FormatDecimal(PropertyInfo p, object o)
        {
            // if property is decimal, apply correct number format
            var value = (decimal) p.GetValue(o, null);
            var formattedString = value.ToString(NumberFormat, CultureInfo.InvariantCulture);
            p.SetValue(o, decimal.Parse(formattedString), null);
        }

    }
}

Note that the decimal precision is fixed in the NumberFormat field and is applied to all decimal values in the XML.

Usage

To use the serializer extension you simply call the new extension method instead of the default Serialize:

using (var ms = new MemoryStream())
{
    var xmlObject = new MyObjectXml();

    var serializer = new XmlSerializer(typeof(MyObjectXml));
    serializer.SerializeWithDecimalFormatting(ms, xmlObject);
}

What about Nested Objects?

One word about nested objects in your XML graph: if you use nested objects, you must name your classes with Xml at the end (or change the "xml" string in the code) to make them work with the serializer extension. Otherwise the properties of these classes will not be inspected and the decimal precision cannot be applied.

Example for nested objects:

using System.Collections.Generic;
using System.Xml.Serialization;

namespace Slackspace.Serializer.Model
{
    public class MyObjectXml
    {
        [XmlAttribute(AttributeName = "id")]
        public long Id { get; set; }

        [XmlArray(ElementName = "students")]
        [XmlArrayItem(ElementName = "student")]
        public List<StudentXml> Students { get; set; }

    }

    public class StudentXml
    {
        [XmlAttribute(AttributeName = "averageGrade")]
        public decimal AverageGrade { get; set; }
    }
}

Are nullable decimals supported?

When you’re using nullable decimals in your XML classes you can just use the standard pattern with the Specified property. For the sake of completeness, here is an example that makes use of a nullable decimal value.

using System.Xml.Serialization;

namespace Slackspace.Serializer.Model
{
    public class MyXmlObject
    {
        [XmlAttribute(AttributeName = "price")]
        public decimal XmlPrice { get { return Price.Value; } set { Price = value; } }  

        [XmlIgnore]
        public decimal? Price { get; set; }

        public bool XmlPriceSpecified { get { return Price.HasValue; } }
    }
}

Summary

If you want to force a decimal precision during XML serialization, the best way I found is to use C#’s extension method concept. The extension of the XmlSerializer makes it easy to not worry about the decimal scale at all and to do all the hard work during serialization.

Posted in c#, programming, tutorials | 1 Comment

Openkeepass: Java API for KeePass 2.x

KeePass is a well-known password safe. It has a lot of features and proven security, and is really a good way of storing your passwords and login information. Personally, I’ve been using KeePass for a long time now.

Java API for KeePass

However, sometimes you need to access the KeePass database programmatically. And that’s where the problem starts. There are really good frameworks available for C#, but if you’ve ever looked for an open-source Java library capable of reading KeePass 2.x databases, you were probably surprised to find only libraries that can read KeePass 1.x databases.

I guess this has to do with the fact that the file format of KeePass 2.x has changed dramatically since the old 1.x version and is now based on XML. Since there was no alternative available, I wrote my own Java API for KeePass 2.x files.

Openkeepass for reading KeePass files

The library gives you quick access to the database files. I’ve tried to make the API as simple to use as possible.
If you want to read all password entries from a KeePass database, you can achieve that with the following code:

// Open Database
KeePassFile database = KeePassDatabase.getInstance("Database.kdbx").openDatabase("MasterPassword");

// Retrieve all entries
List<Entry> entries = database.getEntries();

If you want to search for a specific entry you can do that as well:

// Search for single entry
Entry sampleEntry = database.getEntryByTitle("Sample Entry");
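
Once you have an entry, you can read its fields. A small sketch (assuming the usual getters; see the GitHub examples for the exact API):

// Read the fields of an entry
String title = sampleEntry.getTitle();
String username = sampleEntry.getUsername();
String password = sampleEntry.getPassword();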

Looking for entries that contain a specific string? No problem:

// Search for entries that contain the word 'entry'
List<Entry> entries = database.getEntriesByTitle("entry", false);

You could also work with groups:

// Retrieve all groups of the first level
List<Group> groups = database.getTopGroups();

There are more examples available on GitHub.

Installation

If you want to use openkeepass, you can grab it directly from GitHub or, even simpler, just add it as a Maven dependency.

<dependency>
    <groupId>de.slackspace</groupId>
    <artifactId>openkeepass</artifactId>
    <version>0.4.0</version>
</dependency>

Pitfalls

There is one pitfall you could run into while using the library. KeePass uses strong cryptography, so you have to make sure that the Java Cryptography Extension (JCE) Unlimited Strength policy files are installed on your system. You can download them directly from Oracle.

If they are not installed, you will run into an InvalidKeyException:

java.security.InvalidKeyException: Illegal key size

OpenSource

As always, the whole source code is open source and available on GitHub.

Posted in open-source | 12 Comments

How to convert files in a directory from Windows to Unix

If you copy files from Windows to a Unix system, you have to take care of the line endings.

This is a quick tip on how to convert all files in a directory from Windows to Unix line endings. You need the program dos2unix; if it’s not installed on your system you can install it with

apt-get install dos2unix

Then you can convert all files in your current directory (and its subdirectories) with:

find . -type f -exec dos2unix {} \;
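
If you only want to convert certain files, you can restrict the find command, e.g. to *.txt files:

find . -type f -name '*.txt' -exec dos2unix {} \;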
Posted in linux | Leave a comment

Parse XML files with Groovy

Have you ever wondered how to parse an XML document with Groovy?

Assume that we have the following XML stored in the file notes.xml:


<notes>
    <note>
        <to>John</to>
        <from>Kyle</from>
        <heading>Reminder</heading>
        <body>Don't forget me this weekend!</body>
    </note>
    <note>
        <to>Alex</to>
        <from>Dave</from>
        <heading>Grocery list</heading>
        <body>Milk and Cheese</body>
    </note>
</notes>

Parsing this document with Groovy is really simple. To retrieve the note from Dave, we can use the following code:


// parse the document
def xml = new XmlSlurper().parse("notes.xml")

// find the first note whose 'from' element is Dave
def node = xml.depthFirst().find {
    it.name() == 'note' && it.getProperty('from') == 'Dave'
}

// print the result
println(node)

That’s it. It’s just that simple.
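
If you expect multiple matches, findAll works the same way and returns all matching nodes; a small sketch:

// find all notes sent by Dave
def nodes = xml.depthFirst().findAll {
    it.name() == 'note' && it.getProperty('from') == 'Dave'
}
nodes.each { println(it) }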

Posted in programming | Leave a comment

McLaren P1

“The McLaren P1 is a road car, not a racing car. But it does happen to be a road car that is so much more exciting than many of the racing cars that have been built!”

Posted in Uncategorized | Leave a comment

Analyze and Visualize your Azure Logfiles with Alfa

I’ve been using Microsoft’s cloud platform Azure for several years now. Azure has built-in logging support which can be used by applications deployed to Azure.

Logging is as simple as calling one of the methods of the System.Diagnostics.Trace class:


Trace.TraceInformation("Informational message");
Trace.TraceWarning("Warning message");
Trace.TraceError("Error message");

All this logging information is NOT stored in a file, but in a table called WADLogsTable, which is located in Azure Storage.

Using the Azure Storage to display log data

To display this log data you have to query Azure Storage. The problem is that querying the storage is a real pain, as Gaurav Mantri already pointed out back in 2012.

Azure Storage has a few disadvantages when it comes to displaying log data:

  • Querying for logs in a specific timeframe using the Timestamp attribute is incredibly slow
  • You can’t filter the logs by attributes like Level, Role, DeploymentId etc. because the query gets so slow that it becomes unusable
  • You cannot filter for strings in the Message attribute

I searched a long time for a tool that displays the log data from Azure Storage in a nice way, but couldn’t find one.

Alfa (Azure Logfile Analyzer)

Back in 2013, I decided to create my own tool called Alfa (Azure Logfile Analyzer) to address all these flaws with the Azure logs.

Alfa is an application designed as a background service. The idea behind it is that Alfa periodically fetches the log data from Azure and stores it in Elasticsearch. From there you can use any visualization tool that works with data stored in Elasticsearch. One great example is Kibana, which is what I’m using to visualize my log data.

I’ve been using Alfa in production for almost a year now, and I’m happy to say that it has pushed my log analysis experience a big step forward compared to using the default Azure Storage.

The main features of Alfa are:

  • Filter logs by date/time
  • Filter logs by any attribute
  • Search in log messages
  • Aggregate all logfiles in one place (even from different Azure applications)

So, if you feel the same pain I did while trying to view your Azure log data, make sure to give Alfa a try.

Alfa runs under Windows and Linux. Under Windows you can run it as a service.

As always, the source code is available on GitHub, where you can also find all release binaries for download.

Posted in open-source, programming | Leave a comment