
Programming, Internet & more

CORS with Keycloak and Spring Boot

A quick tip regarding Keycloak, Spring Boot and some kind of JavaScript UI technology. When you’re trying to connect a JavaScript UI like Angular to a backend that is secured by Keycloak, you have to be aware of CORS. I’m pretty sure you already know this.

However, there is a common pitfall: you have to enable CORS twice. First, you have to enable CORS at the Spring Boot level to make sure your origin is allowed to make calls to the REST API.

This can be done with a WebMvcConfigurerAdapter like this:

public class FilterProvider {

    @Bean
    public WebMvcConfigurer corsConfiguration() {
        return new WebMvcConfigurerAdapter() {
            public void addCorsMappings(CorsRegistry registry) {
                registry.addMapping("/**")
                        .allowedOrigins("http://localhost:8000") // the origin of your UI
                        .allowedMethods(HttpMethod.GET.toString(), HttpMethod.POST.toString(),
                                HttpMethod.PUT.toString(), HttpMethod.DELETE.toString(), HttpMethod.OPTIONS.toString());
            }
        };
    }
}

The second time you have to enable CORS is explicitly for Keycloak. If you forget this, your UI won’t be able to connect to your REST API. To enable CORS for Keycloak you can simply add the following to your application.properties file:

# Keycloak Enable CORS
keycloak.cors = true

The configuration is simple, but trust me, this can easily drive you crazy if you forget it. The browser will constantly complain about missing CORS headers. Additionally, the error message can be misleading because you already enabled CORS for Spring Boot, right?
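If you ever need to figure out which of the two layers is rejecting your request, you can simulate the browser’s preflight by hand; the endpoint and origin below are just placeholders for your own setup:

```shell
# Simulate the browser's CORS preflight (OPTIONS) request against the REST API
curl -i -X OPTIONS http://localhost:8080/api/contracts \
  -H "Origin: http://localhost:8000" \
  -H "Access-Control-Request-Method: GET"
```

A correctly configured backend answers with Access-Control-Allow-Origin and Access-Control-Allow-Methods headers; if one of the two CORS switches is missing, those headers are absent.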

So, hopefully this tip will help you the next time when you’re running into this problem with Keycloak.

Posted in programming | Leave a comment

Openkeepass: Android Support

Finally, openkeepass version 0.6.0 is out, and its major feature is Android support.

The whole XML serialization core was rewritten and now uses the Android-friendly Simple XML API.

Release Notes of openkeepass v0.6.0


Fixed

  • Fixed an issue that could bring the password of an entry into an illegal state.
  • Fixed a circular reference that occurred when moving an entry multiple times to history.
  • Fixed an incorrect data check in HashedBlockInputStream.
  • Fixed an invalid placeholder in CrsAlgorithm.
  • Fixed a leak of resources.

Changed

  • Changed XML library from JAXB to SimpleXML (Android support)
  • Changed crypto library from BouncyCastle to SpongyCastle (Android support)

Added

  • Contracts between builders and domain objects to support loose coupling
  • KeePass files can now be cloned by using cloneKeePassFile() in GroupZipper
  • GroupBuilder can now add and remove a list of entries
  • Times support in EntryBuilder
  • A lot of refactoring under the hood

Removed

  • getZipper() from KeePassFileBuilder

Where can I get it

    You can download it with maven or directly from GitHub.

    Posted in open-source | Leave a comment

    Authentication with Spring Boot, AngularJS and Keycloak

    In this tutorial, I want to show you how to combine Keycloak with AngularJS and Spring Boot.

    The result will be a small application where you will get a frontend written in AngularJS and the big topics regarding authentication like user-registration, password reset, login page etc. are already solved.

    The backend is implemented with Spring Boot and will provide a REST API with business services exposed. Authentication in the backend is also solved by using Keycloak which means that all REST endpoints will be secured and that you will get all the user information in the backend as well as in the frontend.


    We will use the following tools and frameworks to build the application:

    Maven (Version 3.3.9)
    Spring Boot (Version 1.3.3.RELEASE)
    Keycloak (Version 1.9.1.Final)
    AngularJS (Version 1.5.0)

    Create a realm in Keycloak

    Start up Keycloak by using one of the standalone executables in the /bin directory of Keycloak, which will start Keycloak immediately using the bundled WildFly application server.

    After the startup, open up a browser and navigate to http://localhost:8080.

    If this is the first time you have started Keycloak, you are required to create an admin user.

    After you have created the admin user, use it to log into the Administration Console. By default Keycloak uses the Master realm to manage its own users. You should never use this realm to authenticate your own application. Instead, create your own realm for the authentication of your application.

    Create a new realm by hovering over the Master realm and click on the “Add realm” button. Enter a name, we will use Demo-Realm, and click on “Create”. After that you will be navigated to the configuration page of the realm.

    Open the “Login” tab and enable the features “User registration” and “Forgot password”. Once enabled, these features are added to the login page automatically and cover common use cases that appear in nearly every web application.

    Create roles in Keycloak

    Now, let’s add some roles that can be assigned to users later on to handle authorization. We need two roles that will come into play later: one is the admin role and the other one is the manager role.

    Click on the “Roles” link in the navigation and after that, click on the button “Add Role” at the right side of the screen. This will navigate you to the “Add Role” screen. Enter admin as role name and click on “Save”. Create another role named manager.

    Create clients in Keycloak

    After we have created our roles we can create the clients for our application. We will need two clients. One for the frontend application and one for the REST backend.

    Let’s start by creating the frontend client. Click on the “Clients” link in the navigation and then use the “Create” button to create a new client.

    Use tutorial-frontend as client id and click on “Save”.

    You will be navigated to the client settings. Use the access type public and the following URLs and save the client:

    • Valid Redirect URIs: http://localhost:8000/*
    • Base URL: http://localhost:8000/index.html
    • Web Origins: http://localhost:8000

    Now the frontend client is completely configured and can be used.

    Let’s create another client for the backend and name it tutorial-backend. This time, configure the access type as bearer-only because the REST backend should only be called when a user has already logged in.

    Now the basic configuration in Keycloak is done and we can start building the applications.

    Frontend application with AngularJS

    We want to build a small Angular application that is secured by a login page and where access is restricted to registered users only. Therefore, every request that goes to the Angular application should be checked to see whether it comes from a registered user. If it does not, the request will be redirected to the login page where a user can register.

    To interact with Keycloak from our AngularJS application, Keycloak provides a JavaScript adapter directly on the Keycloak server. This adapter is used to check whether a request is authenticated and can be integrated into our application by including the JavaScript file in our HTML page:

    <script src="//localhost:8080/auth/js/keycloak.js"></script>

    Furthermore, we have to configure the Keycloak adapter so it knows which client our application is and where to find the Keycloak server. We can provide this information as a JSON file which can be downloaded directly from the Keycloak server.

    To get the JSON file, open the Keycloak administration console and navigate to the frontend client page. Then open the “Installation” tab and choose “Keycloak OIDC JSON” as format. Then download the JSON file and store it in the Angular application as /keycloak/keycloak.json.

    One important thing to know: because we only allow registered users access to the application, we have to bootstrap AngularJS manually and cannot rely on automatic bootstrapping via the ng-app directive.

    The following code demonstrates how to authenticate a request and bootstrap Angular only if the request comes from a registered user:

    // on every request, authenticate user first
    angular.element(document).ready(() => {
        window._keycloak = Keycloak('keycloak/keycloak.json');

        window._keycloak.init({
            onLoad: 'login-required'
        })
        .success((authenticated) => {
            if (authenticated) {
                angular.bootstrap(document, ['keycloak-tutorial']); // manually bootstrap Angular
            } else {
                window._keycloak.login();
            }
        })
        .error(() => {
            console.error('Failed to initialize Keycloak');
        });
    });

    If an unregistered user opens the application, they will be automatically redirected to the Keycloak login page. In the application we can access user information like the login name by getting the user profile with loadUserProfile().

    That is basically all that is needed to secure our Angular application. To ensure the user information is transmitted to the backend, we also have to add the user’s access token to the request header when calling a backend REST service. This can be done application-wide like this:

    // use bearer token when calling backend
    app.config(['$httpProvider', function($httpProvider) {
        var token = window._keycloak.token;
        $httpProvider.defaults.headers.common['Authorization'] = 'BEARER ' + token;
    }]);

    Backend application with Spring Boot

    The backend application should be secured against unauthorized access. Therefore, like in the frontend, only requests coming from registered users should be accepted.

    First, it is important to add the maven Keycloak dependencies for Tomcat and Spring Boot:
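For Keycloak 1.9.x, the Spring Boot and Tomcat adapter artifacts look roughly like this (a sketch; check the artifact ids and version against your Keycloak release):

```xml
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-spring-boot-adapter</artifactId>
    <version>1.9.1.Final</version>
</dependency>
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-tomcat8-adapter</artifactId>
    <version>1.9.1.Final</version>
</dependency>
```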


    Then we have to configure Keycloak just like we did in the frontend application. This time the configuration is done the Spring Boot way, in the application.properties file:

    keycloak.realm = Demo-Realm
    keycloak.realmKey = MI...
    keycloak.auth-server-url = http://localhost:8080/auth
    keycloak.ssl-required = external
    keycloak.resource = tutorial-backend
    keycloak.bearer-only = true
    keycloak.credentials.secret = e12cdacf-0d79-4945-a57a-573a833c1acc

    The values can be retrieved from the “Installation” tab in the administration console of Keycloak for the backend client. One important thing here is to not forget the secret. The secret can be retrieved from the “Credentials” tab of the backend client.

    To secure the REST API endpoints, a few other entries in the application.properties file are important:

    keycloak.securityConstraints[0].securityCollections[0].name = spring secured api
    keycloak.securityConstraints[0].securityCollections[0].authRoles[0] = admin
    keycloak.securityConstraints[0].securityCollections[0].authRoles[1] = manager
    keycloak.securityConstraints[0].securityCollections[0].patterns[0] = /api/*

    The patterns property defines the pattern of the API endpoints with * acting as wildcard. That means every endpoint under api like /api/contracts or /api/users is protected by Keycloak. Every other endpoint that is not explicitly listed is NOT secured by Keycloak and is publicly available.
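To make the wildcard behaviour concrete, here is a tiny standalone sketch of this kind of prefix matching. It is an illustration only, not Keycloak’s actual matcher:

```java
public class PatternDemo {

    // Simplified servlet-style prefix matching: "/api/*" matches every path under /api
    static boolean matches(String pattern, String path) {
        if (pattern.endsWith("/*")) {
            String prefix = pattern.substring(0, pattern.length() - 1); // keep the trailing slash
            return path.startsWith(prefix);
        }
        return pattern.equals(path);
    }

    public static void main(String[] args) {
        System.out.println(matches("/api/*", "/api/contracts")); // true -> secured
        System.out.println(matches("/api/*", "/public/info"));   // false -> publicly available
    }
}
```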

    The authRoles property defines which Keycloak roles are allowed to access the defined endpoints.

    If everything is configured correctly, the Keycloak adapter for Spring Boot should intercept incoming requests automatically and reject unauthorized requests.

    To access detailed user information in the backend we can use the KeycloakPrincipal class from the Keycloak Spring Boot adapter. The KeycloakPrincipal will automatically be injected by Spring if used as a method parameter in a REST controller class. Detailed user information can then be retrieved via the AccessToken, as in this REST controller example:

        @RequestMapping(method = RequestMethod.GET)
        public void getUserInformation(KeycloakPrincipal<RefreshableKeycloakSecurityContext> principal) {
            AccessToken token = principal.getKeycloakSecurityContext().getToken();
            String id = token.getId();
            String firstName = token.getGivenName();
            String lastName = token.getFamilyName();

            // ...
        }

    In the full example you will see that I have built a MethodArgumentResolver to avoid having to deal with the principal in the REST controllers, but that is not necessary and just for convenience.


    The simplest way to start up a demo is to clone the application source with:

    git clone

    Then you can either configure the frontend and backend application with the correct settings from Keycloak as described above OR use the existing Keycloak configuration in keycloak/demo-realm.json to import the realm into Keycloak and avoid having to configure the applications manually.

    After that use maven to start both applications with the following command:

    mvn spring-boot:run

    Then navigate to http://localhost:8000 and you should find yourself landing on the Keycloak login page. Register yourself as a new user. After registering you will be redirected back to the Angular application and should see some details about your user.

    Please note that your user is currently not associated with any of the roles defined earlier. That means accessing the backend is impossible because we have only allowed managers and admins to access it. To give your user access to the backend, we have to map your user to a role.

    To do this, open the Keycloak admin console and navigate to “Users”. Then click on the button “View all users” and click on your username. After that navigate to the “Role Mappings” tab and assign the role “manager” to your user.

    Now open up the Angular app again and you should see a “Call backend service” button that only managers can see. Click on it and some contracts from the backend should be returned, together with your user information, which also comes from the backend.

    This proves that the frontend and the backend are correctly secured by Keycloak.


    As always, you can find a full working example at GitHub.

    Posted in spring | 7 Comments

    Openkeepass: Feature release with write support

    Openkeepass has been around for a while now and has proven itself when it comes to reading KeePass files (especially KeePass 2.x).

    However, since its initial release I’ve received a lot of requests regarding write support of KeePass files. After a lot of work this is finally done and has found its way into a new feature release of openkeepass.

    Writing KeePass files

    The major feature of version 0.5.0 is support for writing KeePass files. To create a new KeePass file from scratch you can use the fluent API of the builders. A very simple example that writes a KeePass file with only one single entry would be this:

    // Create an entry
    Entry entryOne = new EntryBuilder("First entry")
            .password("Carls secret")
            .build();

    // Create the database file
    KeePassFile keePassFile = new KeePassFileBuilder("myNewKeePassFile")
            .addTopEntries(entryOne)
            .build();

    // Write database file to disk
    KeePassDatabase.write(keePassFile, "masterPassword", "myNewKeePassFile.kdbx");

    By nesting the available builders it is also possible to create much more complex structures in the KeePass file.

    The code to create this structure looks like this:

    // Create the more complex tree structure
    Group root = new GroupBuilder("TestDb")
            .addEntry(new EntryBuilder("First entry").build())
            .addGroup(new GroupBuilder("Banking").build())
            .addGroup(new GroupBuilder("Internet")
                    .addGroup(new GroupBuilder("Shopping")
                            .addEntry(new EntryBuilder("Second entry").build())
                            .build())
                    .build())
            .build();

    // Create the database file
    KeePassFile keePassFile = new KeePassFileBuilder("TestDb")
            .addTopGroups(root)
            .build();

    // Write database file to disk
    KeePassDatabase.write(keePassFile, "masterPassword", "myNewKeePassFile.kdbx");

    Modify existing KeePass files

    There is also a new concept called zipper which comes into play when a KeePass structure should be modified instead of creating a new one from scratch. The zipper can be used to easily navigate through the tree structure of a KeePass file and can be compared with the concept of an iterator. There is always a pointer to an element in the tree and by navigating through the tree, the pointer to the current element will be shifted around.
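The zipper idea can be sketched in a few lines of plain Java. This is not the openkeepass GroupZipper API, just a minimal illustration of a pointer that moves over an immutable structure and produces a new structure on replace:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A minimal zipper over a flat list: navigation moves the pointer,
// replace() returns a new zipper over a copied list, leaving the original untouched.
public class ListZipper {
    final List<String> items;
    final int pos;

    ListZipper(List<String> items, int pos) {
        this.items = items;
        this.pos = pos;
    }

    ListZipper right() { return new ListZipper(items, pos + 1); }

    String node() { return items.get(pos); }

    ListZipper replace(String newNode) {
        List<String> copy = new ArrayList<>(items);
        copy.set(pos, newNode);
        return new ListZipper(copy, pos);
    }

    public static void main(String[] args) {
        ListZipper zipper = new ListZipper(Arrays.asList("Banking", "Internet"), 0);
        ListZipper modified = zipper.right().replace("Shopping");
        System.out.println(modified.items); // [Banking, Shopping]
    }
}
```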

    This is very helpful if you want to replace some nodes in the tree. Please note that entries cannot be replaced directly in the tree, you have to modify the parent group of an entry if you want to modify or replace an entry.

    The following example shows how to rename an existing group by using the zipper:

    // Open keepass file
    KeePassFile database = KeePassDatabase.getInstance("database.kdbx").openDatabase("password");
    // Navigate through the tree to the group that should be renamed
    // (use down()/right() to move the pointer; adjust the navigation to your tree)
    GroupZipper zipper = new KeePassFileBuilder(database).getZipper().down();
    // Rename group
    Group group = zipper.getNode();
    Group modifiedGroup = new GroupBuilder(group).name("test2").build();
    // Replace old group with new one
    KeePassFile modifiedDatabase = zipper.replace(modifiedGroup).close();

    // Write database file to disk
    KeePassDatabase.write(modifiedDatabase, "password", "modifiedDatabase.kdbx");

    Where can I get it

    You can download it with maven or directly from GitHub.

    Posted in open-source | 2 Comments

    How to check for updated dependencies with maven

    Regularly updating the dependencies in a project is important because it ensures that you get all the nice bugfixes that were made in the meantime in the libraries you use. However, this can be a time-consuming and annoying task.

    Good news if you’re using maven, because there is a nice command which does exactly this for you: checking whether a new version is available for any of your project dependencies. The following maven command can be used:

    mvn versions:display-dependency-updates

    And the output looks like this:

    [INFO] --- versions-maven-plugin:2.2:display-dependency-updates (default-cli) @ openkeepass ---
    [INFO] The following dependencies in Dependencies have newer versions:
    [INFO]   org.bouncycastle:bcprov-jdk15on ......................... 1.53 -> 1.54
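The versions plugin can also check the build plugins themselves for newer versions, which is easy to forget:

```shell
mvn versions:display-plugin-updates
```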

    Just be careful while updating because of potentially breaking changes.

    Posted in programming | Leave a comment

    Continuous Delivery with Jenkins and Docker

    If you’ve ever wondered how to actually build a continuous delivery pipeline for your project, this is going to be the ultimate guide. I will tell you how you can build a continuous delivery pipeline from the ground up.

    The pipeline will make heavy use of Jenkins together with Docker to provide a stable platform to build on.

    The overall goal is to set up a build process which runs on every commit, compiles all the classes, runs all the unit tests and automatically deploys the application to provide a running instance which can be used by testers.


    To follow this guide you will need a few applications installed. I will use Vagrant to create a virtual machine (vagrant box) which will make it really easy for you to follow and avoid installing a bunch of software on your machine.

    Applications on your machine (host):

    • VirtualBox
    • Vagrant

    Applications on vagrant box (guest):

    • Jenkins
    • Docker

    Setting up vagrant

    The first thing you have to do is install VirtualBox and Vagrant. It’s pretty easy; just follow the instructions of the installers.

    If you have done that, you can clone the repository from GitHub that I’ve prepared. In the repository there is a Vagrantfile which will set up your virtual machine with all the necessary stuff.

    After you’ve cloned the repository open a command shell in the cloned directory and start the virtual machine.

    vagrant up

    Depending on your internet connection it may take a while until that command has finished. This command will do a lot of work by downloading and installing all the applications on the virtual machine that we’ll need in a minute.

    Once the command has finished you can log into the vagrant box.

    vagrant ssh

    This command will ssh into the virtual machine and should look like this:

    Vagrant ssh

    Preparing jenkins

    When you start up your vagrant box, jenkins will be automatically started in the background.
    As soon as jenkins is up and running, you can open a browser on your host machine and navigate to jenkins at http://localhost:9080.

    Now let’s install some plugins for jenkins:

    • Git plugin to be able to fetch git projects from github
    • Copy Artifact plugin to be able to copy artifacts from one project to another

    The easiest way is by navigating to Manage Jenkins -> Manage Plugins -> Available and filter for the plugin names

    Jenkins install plugins

    As soon as the plugins are installed, you have to set up maven to be able to build maven projects. This can be done in Manage Jenkins -> Configure System -> Maven.

    Jenkins install maven

    Configuring the build job in jenkins

    You can now create your first build job which will compile your application and run some basic unit tests. We’ll be using slackspace-javaee-classpath-properties as an example application. The project is a Java EE application hosted on GitHub so it’s pretty easy to include the project into our pipeline. As the project uses maven we can easily compile it without having to worry about dependencies.

    Create a new maven project in your jenkins with name “javaee-classpath-properties-build”.
    Now you have to choose Git as Source Code Management system and use the correct url to the repository:

    Jenkins maven project
    Jenkins configure source code management

    As build goal use package as we want to compile the project and package all the classes into a *.war file.

    Jenkins configure maven goals in project

    The last step is to define a post-build action called Archive the artifacts. The files to archive should be set to target/*.war. This will ensure that it is possible to access the packaged war-file in another project later on.

    You can now test if the job works by clicking on the Build Now button. If everything works the job should be displayed with a blue circle after a while.

    Configuring the staging job in jenkins

    Now it is time to configure another job that will make use of docker containers to provide a running instance of the application for testing.

    First of all create another jenkins job named javaee-classpath-properties-staging. The type should be Freestyle project.

    Jenkins free-style project

    Leave the Source Code Management untouched. Instead check the build trigger Build after other projects are built and enter the name of our build project (javaee-classpath-properties-build).

    Add a build step of type Copy artifacts from another project and enter the following properties:

    • Project name: javaee-classpath-properties-build
    • Which build: Latest successful build
    • Check Stable build only
    • Artifacts to copy: target/*.war

    Add another build step Execute shell and insert the command:

    # name and port of the staging service (adjust as needed)
    service=javaee-classpath-properties-staging
    service_port=8080

    # kill the previously running container of this service, if any
    docker ps -a | grep 'javaee-classpath-properties-staging:latest' | awk '{print $1}' | xargs --no-run-if-empty docker kill

    # generate a Dockerfile that deploys the war file into Glassfish
    echo -e "FROM glassfish:4.1\nMAINTAINER cternes <>\n# Deploy application\nADD target/javaee-classpath-properties.war /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/" > $WORKSPACE/Dockerfile

    # build the image and start the container
    docker build -t $service $WORKSPACE
    container_id=$(docker run -d -p $service_port:$service_port $service)

    echo "App running on http://localhost:$service_port/javaee-classpath-properties/"

    Jenkins Docker Build Step

    Ok, let me explain what we’ve just done. We have configured another job which will be started automatically after our build job. The job will only be triggered if the build job was successful. We’ve also configured that the generated war-file from our build job is copied to the staging job and therefore reused. This ensures that we’re using exactly the same file that was compiled and tested during the build job. This is an extremely important concept because we’re now independent from other commits which may have happened while the build job was running and might break the build.

    The main work done in this job is a little bit cryptic, hidden in the Execute shell build step. What it basically does is create a Dockerfile and then build a new docker image from that Dockerfile.

    While building the docker image, it downloads a glassfish docker image and injects our generated war-file into it. After that the docker container is started, which means the Glassfish Java EE application server starts up and our application is automatically deployed into it. As soon as the docker container is running, we can access our application through the browser by opening http://localhost:8080/javaee-classpath-properties/.
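If the deployed application does not come up as expected, the container log usually tells you why; `-l` selects the most recently created container:

```shell
# Show the logs of the most recently created container
docker logs $(docker ps -lq)
```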

    You can now test if everything works by “building” the staging job.

    Putting it all together

    The hard work is done. You can see the results by building the job “javaee-classpath-properties-build” in your jenkins. This will compile the application, run the tests and package the application. When the first job has finished successfully, the second job will be triggered automatically and will fire up a docker container with your application inside.

    After a short time you should have access to your application and can test it in the browser. Note how fast the docker container starts up. On my machine the second job usually runs in under a second. That’s amazingly fast to start up a whole testing environment from the ground up.

    Next Steps

    You’ve learned the basics of how to build a continuous delivery pipeline with Jenkins and Docker. There is a lot more that can be done from here, like configuration management, acceptance tests, service discovery, monitoring and so on.

    But this article is only meant as a kickstart to develop your own ideas. Any comments or feedback are appreciated.

    Posted in javaee, programming, tutorials | 1 Comment

    Speed up development with vagrant

    Maybe you know this situation: coming to a new project often requires installing a lot of stuff. Databases, web servers, dependencies and all the other stuff start to clutter your workstation. If you switch often between projects or you just want to try out a new technology for some days, this can be quite annoying.

    A few years ago, we all thought virtualization would come to the rescue. Just set up a virtual machine with all required technologies, install your favorite IDE and start development inside
    the virtual machine. In fact this approach also has its problems. Programs in the virtual machine respond quite slowly, the IDE is not very reactive, and if you’ve used a virtual machine for a while it is cluttered with a lot of stuff and getting slower and slower.

    Lightweight virtualization with Vagrant

    A year ago I discovered Vagrant and since then I’ve been using it heavily for development and for trying out new stuff. Vagrant takes the approach of using virtual machines to the next level. With Vagrant you can set up a fresh virtual machine within seconds and also destroy it within seconds.

    There are several possibilities on how to use vagrant. I’m using Vagrant to separate runtime and development environments by setting up virtual machines with runtime environments
    but keeping development stuff on the host machine. That means if I’m developing a Java EE application, I’m using vagrant to setup a virtual machine with Java and an application server like Glassfish. The application itself will be developed, as normal, with an IDE on my machine. To let the application run, I will deploy the application on the application server in the virtual machine. To access the application I can use a browser on my machine to access the application server in the virtual machine. Thus, I can keep my workstation free from runtime stuff but at the same time using the native speed of my machine to develop the application.

    One important part is that you can start with a fresh installed virtual machine anytime. That means if you’ve messed up something in your virtual machine you can just throw the virtual machine away and create a new one with exactly the same settings within seconds. Vagrant makes it very easy to do this. In fact I’m starting off with a new virtual machine every day to keep it as clean as possible.

    How to start

    To give you a small overview of how vagrant works, it’s best to try it out with a small example. Let’s try to run an apache webserver within a vagrant-managed virtual machine.

    To start with vagrant you have to install two things. One is VirtualBox and one is Vagrant itself.

    After you’ve installed the two programs you can start creating your first Vagrantfile. Please create a file named Vagrantfile in your home directory.

    # -*- mode: ruby -*-
    # vi: set ft=ruby :

    # Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
    VAGRANTFILE_API_VERSION = "2"

    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      # Every Vagrant virtual environment requires a box to build off of.
      config.vm.box = "phusion/ubuntu-14.04-amd64"

      # Create a forwarded port mapping which allows access to a specific port
      # within the machine from a port on the host machine. In the example below,
      # accessing "localhost:8000" will access port 80 on the guest machine.
      config.vm.network "forwarded_port", guest: 80, host: 8000

      config.vm.provision :shell, path: "bootstrap.sh"
    end


    The Vagrantfile is the core of vagrant and is required to configure a virtual machine. Let’s look at some details.

    config.vm.box = "phusion/ubuntu-14.04-amd64"

    Here we’re telling vagrant to use Ubuntu as the operating system of the virtual machine. When starting the virtual machine, vagrant will download Ubuntu once and store it for later usage. That means if you start the virtual machine a second time or build another virtual machine with the same operating system, it is already cached and will not be downloaded a second time.

    config.vm.network "forwarded_port", guest: 80, host: 8000

    This will make the webserver (port 80) which runs inside the virtual machine accessible to your host machine on port 8000.

    config.vm.provision :shell, path: "bootstrap.sh"

    That means after starting up the virtual machine, this script will be executed on the virtual machine. This can be used to further configure the virtual machine e.g. by installing additional tools.

    Now it’s time to create the bootstrap.sh file. Create it in the same directory as your Vagrantfile.

    apt-get update
    apt-get install -y apache2

    This bootstrap file will download and install apache as a webserver on the virtual machine during startup.

    Let’s startup the virtual machine by using a terminal/command line in the folder with the Vagrantfile and executing the following command:

    vagrant up

    Vagrant will now download the Ubuntu image and install the apache webserver. Depending on your internet connection this can take a while.
    After vagrant is done you should see the message “Starting web server apache2” on the console.

    Vagrant startup

    The virtual machine is now already running in the background and we can test if we can reach the webserver. You can do this by firing up a browser and check if you can reach http://localhost:8000.

    If everything works, you should see the default landing page of apache. Congratulations, you’ve successfully setup a virtual machine and accessed it with your browser.

    Vagrant Apache2 Ubuntu Landing Page

    Working with virtual machines

    Now that our virtual machine is up and running we can take a closer look inside the machine. To do this you can open an ssh shell to get inside the virtual machine.

    vagrant ssh

    This command will ssh you directly into the virtual machine. You can take a look around, create or edit files, and do everything you can do on a normal machine.
    Let’s do an experiment and navigate to the folder /vagrant on the file system and list all the files.

    cd /vagrant
    ls -l

    Vagrant images

    You can see the files which were used to create the virtual machine. In fact we’re now in a folder on the host system. Moreover we have full read/write access to this folder (try it by creating a file here). This means we can easily exchange files between our host machine and the virtual machine. It’s possible to share more folders between the host machine and the virtual machine by configuring it in the Vagrantfile.

    To exit the ssh mode you can press CTRL+D.

    At some point you will have to stop the virtual machine and start it up again later. There are two possibilities: either persist the changes made in the virtual machine, or throw the virtual machine away and start with a fresh one.

    To stop the virtual machine but keep the changes you can use

    vagrant halt

    and later on

    vagrant up

    to start the virtual machine again. Please note that the file will not be executed again, the webserver and all of your changes are still there.

    If you prefer to throw the virtual machine away and start fresh (like I do) you can use

    vagrant destroy

    and use

    vagrant up

    to get a fresh copy. In this case the file will be executed and the webserver will be downloaded and installed again. Note that all of your previous changes are wiped away.

    Vagrant destroy

    There is a way to preserve changes by extracting a new image from a virtual machine (e.g. to preserve installed programs). This image can then be used as a base for other virtual machines, but that is beyond the scope of this article (see vagrant package in the documentation).


    I tried to give you a quick introduction to Vagrant and I hope you can see its potential. It can be used to quickly throw some programs into a virtual machine, try them out and then wipe all of it away without cluttering your main machine. It is also a very useful tool when you’re trying to separate development and runtime environments.

    There is a lot more you can do with Vagrant. Just make sure to check out the documentation.

    Posted in programming | Leave a comment

    Injecting properties in Java EE applications

    In almost any application there are settings that must be read from somewhere to configure the application. User names or IP addresses are good examples of such settings.
    Using settings is the standard way to make software configurable, and there are many ways to store them. One option is to keep the settings in a database. Another, probably the simplest, is to read the settings from a file.
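    Before any CDI is involved, plain `java.util.Properties` already handles the key=value file format. A minimal standalone sketch (the key names are made up for illustration; the file content is inlined instead of read from disk):

    ```java
    import java.io.IOException;
    import java.io.StringReader;
    import java.util.Properties;

    public class PropertiesDemo {
        public static void main(String[] args) throws IOException {
            // the content of a typical settings file, inlined for this example
            String fileContent = "server.host=192.168.1.10\nserver.user=admin\n";

            Properties settings = new Properties();
            settings.load(new StringReader(fileContent));

            System.out.println(settings.getProperty("server.host"));
            System.out.println(settings.getProperty("server.user"));
        }
    }
    ```

    The same `load` call works with a `FileInputStream` or a classpath resource stream, which is exactly what the CDI producer below will do.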

    To keep things simple, let’s focus on storing the settings in a file. If you’re building a Java EE application you can make use of dependency injection with CDI.
    CDI makes it really simple to create a provider class for your configuration files. The key is the @Produces annotation, which is looked up at runtime so that the result of the @Produces method can be injected into other CDI enabled classes.

    public class PropertyReader {

        @Produces
        public Properties provideServerProperties() {
            Properties p = readPropertiesFromFile("");
            return p;
        }
    }

    Generic approach with annotations

    The next level, and a more generic approach, is to use a dedicated annotation that marks the injection points and also supports multiple configuration files. I’m using an annotation called PropertiesFromFile with a single property that determines the configuration file to use. The filename is optional; if it is not provided, a default file will be used.

    @Qualifier
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.FIELD})
    public @interface PropertiesFromFile {

        /**
         * This value must be a properties file in the classpath.
         */
        @Nonbinding
        String value() default "";
    }

    Please note that the configuration files need to be on the classpath of the application. If you’re using maven this can be achieved by putting the files in src/main/resources.

    To use the new annotation the producer class needs to be adapted.

    public class PropertyReader {

        @Produces
        @PropertiesFromFile
        public Properties provideServerProperties(InjectionPoint ip) {
            // get filename from annotation
            String filename = ip.getAnnotated().getAnnotation(PropertiesFromFile.class).value();
            return readProperties(filename);
        }

        private Properties readProperties(String fileInClasspath) {
            InputStream is = this.getClass().getClassLoader().getResourceAsStream(fileInClasspath);
            try {
                Properties properties = new Properties();
                properties.load(is);
                return properties;
            } catch (IOException e) {
                System.err.println("Could not read properties from file " + fileInClasspath + " in classpath. " + e);
            }
            return null;
        }
    }
    At runtime, when an injection point annotated with @PropertiesFromFile is found, CDI will look for the corresponding producer. If one is found, the producer method is called with the InjectionPoint as parameter. In the producer method the filename is read from the annotation, the corresponding properties file is loaded from the classpath and the properties are returned.

    Injecting the properties at runtime

    To inject the properties it is sufficient to use the @PropertiesFromFile annotation together with @Inject in any CDI managed class.

    public class StartupManager {

        @Inject
        @PropertiesFromFile
        Properties customProperties;
    }

    In the class above the properties of the file will be injected at runtime.


    The usage of a custom annotation and a producer class makes it very easy to inject complex objects into other CDI managed classes at runtime. With only one simple annotation it is possible to get a lot of work done in the background and abstract repeating and non domain-specific logic away. With the above solution one can very simply inject properties from settings files into arbitrary classes.

    Of course, reading properties from a file is a very simple approach to configuration management, but for small applications it can be sufficient and come in handy.
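    One small refinement that `java.util.Properties` supports out of the box is fallback defaults via its `Properties(Properties defaults)` constructor; this is a sketch, not part of the article’s sample project, and the key names are illustrative:

    ```java
    import java.util.Properties;

    public class DefaultsDemo {
        public static void main(String[] args) {
            // defaults used when a key is missing from the loaded settings
            Properties defaults = new Properties();
            defaults.setProperty("server.port", "8080");

            // settings read from the file would be loaded into this instance
            Properties settings = new Properties(defaults);
            settings.setProperty("server.host", "192.168.1.10");

            // an explicit value wins; a missing key falls back to the default
            System.out.println(settings.getProperty("server.host"));
            System.out.println(settings.getProperty("server.port"));
        }
    }
    ```

    This way the producer can return a Properties object that never hands out null for well-known keys.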

    A complete working example can be found on GitHub. You can look at the log file of the application server while deploying the sample application as the properties will be printed into the log file.

    In addition there is a basic servlet available at http://localhost:8080/javaee-classpath-properties/ which will print out the injected properties.

    Posted in javaee, programming | 1 Comment

    IIS Error 500 ExtensionlessUrlHandler

    I’ve recently encountered the following error in my IIS after starting a (previously) working ASP .NET application:
    Handler “ExtensionlessUrlHandler-Integrated-4.0” has a bad module “ManagedPipelineHandler” in its module list
    (or in German: Der Handler “ExtensionlessUrlHandler-Integrated-4.0” weist das ungültige Modul “ManagedPipelineHandler” in der Modulliste auf)

    The error appeared after a fresh reinstall of Windows. After a lot of googling around for the problem I realized that the installation order of IIS and .NET framework leads to this error (WTF Microsoft).

    The solution is to simply re-register ASP .NET in IIS.

    c:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i
    Posted in programming | Leave a comment

    C# How to force decimal precision in xml serialization

    Recently, I tried to serialize some xml in C# and stumbled across the problem that I had to force the precision scale of decimal values. By default the XmlSerializer uses the exact value of the underlying decimal while serializing to xml.

    That means, if you assign 2 to a decimal value your xml will look like

    <product price="2" />

    But if you assign 2.00 to a decimal value it will look like

    <product price="2.00" />
    One solution could be to always use the Math.Round() function to round your decimal values. But this will lead to a lot of unnecessary and unmaintainable code.
    Instead I wanted a solution that automatically cares about the decimal scale during xml serialization.

    XmlSerializer extension

    The solution I’ve come up with uses an extension method for the XmlSerializer class.
    This method iterates through all public properties, looks for decimal values and applies the target precision on the decimal value. It also works for fairly complex xml serialization trees with nested objects and lists.

    using System.Collections;
    using System.Globalization;
    using System.IO;
    using System.Reflection;
    using System.Xml;
    using System.Xml.Serialization;

    namespace Slackspace.Serializer
    {
        public static class XmlSerializerExtensions
        {
            // the target format of the decimal precision, change to your needs
            private const string NumberFormat = "0.00";

            public static void SerializeWithDecimalFormatting(this XmlSerializer serializer, Stream stream, object o)
            {
                IteratePropertiesRecursively(o);
                serializer.Serialize(stream, o);
            }

            private static void IteratePropertiesRecursively(object o)
            {
                if (o == null)
                    return;

                var type = o.GetType();
                var properties = type.GetProperties();

                // enumerate the properties of the type
                foreach (var property in properties)
                {
                    var propertyType = property.PropertyType;

                    // if property is a generic list
                    if (propertyType.Name == "List`1")
                    {
                        var val = property.GetValue(o, null);
                        var elements = val as IList;

                        if (elements != null)
                        {
                            // then iterate through all elements
                            foreach (var item in elements)
                                IteratePropertiesRecursively(item);
                        }
                    }
                    else if (propertyType == typeof (decimal))
                    {
                        // check if there is a property with name XXXSpecified, this is the case if we have a type of decimal?
                        var specifiedPropertyName = string.Format("{0}Specified", property.Name);
                        var isSpecifiedProperty = type.GetProperty(specifiedPropertyName);
                        if (isSpecifiedProperty != null)
                        {
                            // only apply the format if the value of XXXSpecified is true, otherwise we will get a nullRef exception for decimal? types
                            var isSpecifiedPropertyValue = isSpecifiedProperty.GetValue(o, null) as bool?;
                            if (isSpecifiedPropertyValue == true)
                                FormatDecimal(property, o);
                        }
                        else
                        {
                            // if there is no property with name XXXSpecified, we can safely format the decimal
                            FormatDecimal(property, o);
                        }
                    }
                    else
                    {
                        // if property is a XML class (contains XML in name) iterate through properties of this class
                        if (propertyType.Name.ToLower().Contains("xml") && propertyType.IsClass)
                            IteratePropertiesRecursively(property.GetValue(o, null));
                    }
                }
            }

            private static void FormatDecimal(PropertyInfo p, object o)
            {
                // apply the target number format to the decimal value
                var value = (decimal) p.GetValue(o, null);
                var formattedString = value.ToString(NumberFormat, CultureInfo.InvariantCulture);
                p.SetValue(o, decimal.Parse(formattedString, CultureInfo.InvariantCulture), null);
            }
        }
    }

    Note that the decimal precision is fixed in the NumberFormat field and is applied for all decimal values in the xml.


    To use the serializer extension you can simply call the new extension method instead of the default one:

    using (var ms = new MemoryStream())
    {
        var xmlObject = new MyObjectXml();
        var serializer = new XmlSerializer(typeof(MyObjectXml));
        serializer.SerializeWithDecimalFormatting(ms, xmlObject);
    }

    What about Nested Objects?

    One word about nested objects in your xml graph. If you use nested objects, you must name your classes with Xml at the end (or change the "xml" string in the code) in order to make them work with the serializer extension; otherwise the properties of these classes will not be inspected and the decimal precision scale cannot be applied.

    Example for nested objects:

    using System.Collections.Generic;
    using System.Xml.Serialization;

    namespace Slackspace.Serializer.Model
    {
        public class MyObjectXml
        {
            [XmlAttribute(AttributeName = "id")]
            public long Id { get; set; }

            [XmlArray(ElementName = "students")]
            [XmlArrayItem(ElementName = "student")]
            public List<StudentXml> Students { get; set; }
        }

        public class StudentXml
        {
            [XmlAttribute(AttributeName = "averageGrade")]
            public decimal AverageGrade { get; set; }
        }
    }

    Are nullable decimals supported?

    When you’re using nullable decimals in your xml classes you can use the standard pattern with a Specified property. For the sake of completeness, here is an example that makes use of a nullable decimal value.

    using System.Xml.Serialization;

    namespace Slackspace.Serializer.Model
    {
        public class MyXmlObject
        {
            [XmlAttribute(AttributeName = "price")]
            public decimal XmlPrice { get { return Price.Value; } set { Price = value; } }

            // not serialized directly, only through the XmlPrice wrapper above
            [XmlIgnore]
            public decimal? Price { get; set; }

            // tells the XmlSerializer to write the price attribute only when a value is present
            public bool XmlPriceSpecified { get { return Price.HasValue; } }
        }
    }

    If you want to force the decimal precision during xml serialization, the best way I found is to use the extension method concept in C#. Extending the xml serializer means you don’t have to care about the decimal scale at all; all the hard work happens during serialization.

    Posted in c#, programming, tutorials | Leave a comment