Wednesday, July 2, 2014

Akka 2.3.4 and Scala 2.11.1 - on Android

This post outlines what is necessary to set up a Maven project that builds a Scala application which uses Akka and deploys it to Android.


Is Akka not a server framework?

Well, it is. I just wanted to see if I could manage to come up with a configuration for Maven and its Android plugin - not to forget the wonder tool every Android developer is always very happy to configure - Proguard :)

Besides, there are also others who want to get it running on mobile devices or who have already done so.

Why Scala on Android? Maven???

I do a little bit of Android programming in my spare time, too - I even <shameless plug>created an app which you definitely should check out</shameless plug>, although I'm not using Akka for that one.
You can tell by reading my blog that I'm struggling with Maven on a daily basis - that is why I gave it a shot (because of work/life balance you know.)

I know that the 'normal' way to build Scala projects is to use sbt, and like others I use it successfully for some projects. Sometimes, however, Maven is just ... here.

If you have the choice, you should check out sbt and one of the available Android plugins. The Proguard step is mandatory in the presented setup - without it, the app won't work. You can ease the pain of waiting for Proguard to finish a little bit by tweaking the Proguard configuration file and commenting out the optimization steps.

If you still want to do it with Maven, read on or jump directly to the source on github.

Prerequisites

For this project, I used the exciting maven-android-sdk-deployer project which deploys everything you need in your local maven repository. Just follow the installation instructions on the project website.

Macroid Akka Fragments

An important piece in the puzzle is also the macroid project - check it out. For the example project I've stolen borrowed some code from the macroid-akka-fragment project which provides the necessary glue code for Akka.

In order to compile the example project, you don't need to do anything, the links were provided for reference.

Some remarks on the pom

Have a look at the configuration of the scala-maven-plugin in the project pom.xml - there are some interesting flags which enable various warnings/features for the scala compiler which help you write more correct code. See this blog post for more details.

In my opinion, the most precious thing about the whole post is the proguard configuration file. In my experience, it is quite cumbersome to come up with a working proguard configuration.

Rant: The typical user experience is that in the first few minutes you hunt for an "optimal" Proguard file; after fiddling around for some hours you turn off all warnings and stick to the first "working" configuration.
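To give an idea of what the configuration deals with, here is a sketch of the kind of rules such a Proguard file typically contains for a Scala/Akka Android build - this is an illustration from memory, not the actual file from the repository (check the github source for the real one):

```
# keep classes that Akka instantiates reflectively (referenced from reference.conf)
-keep class akka.actor.LocalActorRefProvider { *; }
-keep class com.typesafe.config.** { *; }

# the Scala runtime triggers lots of harmless warnings
-dontwarn scala.**
-dontwarn sun.misc.Unsafe

# while iterating on the config, disabling optimization saves a lot of time
-dontoptimize
```

The last line is the tweak mentioned above: turning optimization off makes the Proguard step noticeably faster while you are still experimenting.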

Finally ... I get a compile error.

The Maven configuration for the Scala plugin is set to "incremental". This incremental mode only works if the zinc server is running, which means you have to start it once via "zinc -start". I used zinc 0.3.5-SNAPSHOT for the compilation.

In theory, after cloning the repository from here and after entering

mvn clean package android:deploy

you should see something similar to this:

[INFO] --- scala-maven-plugin:3.1.6:compile (default) @ android-akka ---
[INFO] Using zinc server for incremental compilation

[info] Compiling 2 Scala sources and 2 Java sources to /Users/lad/temp/android-akka/target/classes...

- don't forget to connect your Android device and activate USB debugging mode

The app will look like this:

Screenshot of the app (created with Genymotion, a great Android emulator!)
The app doesn't do anything interesting - it uses just one Actor (and one button) - but it could serve as a starter project for your own experiments with Akka, Scala, Maven and Android.

Thanks for reading!

Saturday, May 17, 2014

Sudoku Capturer 1.4

Today I released a new version of my Sudoku solver app for Android.

Sudoku Capturer 1.4 with incremental number detection

From a user perspective, the most prominent new feature is that the app now shows incremental progress for the numbers which were recognized successfully. This fixes one of the biggest problems with the approach the application took before: numbers which were identified erroneously and led to a deadlock in the solving algorithm itself.

Currently, on each frame the application makes a quick sanity check whether a number would violate the basic rules of the Sudoku game - that is to say, if for example the number seven is identified at a given cell, the application now checks whether there is already a seven in the same row, column or section. If so, the whole Sudoku is rendered invalid and the detection algorithm starts from scratch.
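The per-frame check can be sketched in a few lines; the class and method names here are my own illustration, not the app's actual code:

```java
/** Sketch of the per-frame sanity check described above: reject a
    detected digit if it already occurs in the same row, column or
    3x3 section. */
class SudokuSanityCheck {

    /** grid[r][c] == 0 means "no digit detected yet". */
    static boolean violatesRules(int[][] grid, int row, int col, int digit) {
        for (int i = 0; i < 9; i++) {
            if (grid[row][i] == digit) return true;       // same row
            if (grid[i][col] == digit) return true;       // same column
        }
        int r0 = (row / 3) * 3, c0 = (col / 3) * 3;       // section origin
        for (int r = r0; r < r0 + 3; r++)
            for (int c = c0; c < c0 + 3; c++)
                if (grid[r][c] == digit) return true;     // same section
        return false;
    }
}
```

If the check fires, the detection state is thrown away and the capture starts over, instead of feeding a contradictory grid to the solver.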

In older versions of the application, only one frame of the video stream served as input for the solving algorithm, which frequently led to non-terminating behavior of the solving algorithm itself.

The application now counts how often a certain number is recognized for a given cell; after hitting a certain threshold, the probability that the detection was correct is considerably higher than without this simple strategy.
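The counting strategy boils down to a small vote counter per cell - again a reconstruction of the idea, not the app's real code:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of the per-cell counting strategy: a digit is only accepted
    once it has been recognized in enough frames. */
class DigitVote {
    private final int threshold;
    private final Map<Integer, Integer> hits = new HashMap<>(); // digit -> count

    DigitVote(int threshold) { this.threshold = threshold; }

    /** Record one detection; returns the digit once it passes the
        threshold, or 0 while confidence is still too low. */
    int record(int digit) {
        int n = hits.merge(digit, 1, Integer::sum);
        return n >= threshold ? digit : 0;
    }
}
```

A single misread frame then no longer poisons the grid, because an erroneous digit rarely repeats often enough to reach the threshold.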

Furthermore, if the Sudoku Capturer app is not able to build up a library of all numbers from 1 to 9, it paints the numbers with an internal font - this should happen only very rarely, though.

Give it a try on your Android device - I would be interested in your feedback.

You can download the Sudoku Capturer application in the play store:

Sunday, April 13, 2014

JMH - Java Microbenchmark Harness

JMH is a tool for micro benchmarking on the JVM, and this weekend I used it to test some performance hotspots in my personal projects.



The JMH landing page is pretty much self-explanatory; there is a samples page provided, too.

The only issue I had with JMH is that I couldn't get the Scala methods to be picked up by the @GenerateMicroBenchmark run. As a little workaround I created a Java class which delegates to the Scala implementations I wanted to test - this works well enough for me.

Annotated methods configure JMH
The above screenshot shows the general idea: the Java code calls a Scala object which gathers all performance-testing related code. In the example above, the SudokuBenchmarkProxy object is also the right place to set up the environment for the performance test to run in.
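To illustrate the workaround, here is a dependency-free sketch of the delegation pattern - the proxy name follows the screenshot, but the body and the Scala side are stand-ins I made up (with JMH on the classpath, the benchmark method would carry the @GenerateMicroBenchmark annotation shown in the screenshot):

```java
/** Sketch of the delegation trick: a Java class whose benchmark method
    just forwards to a Scala object (faked here as a plain nested class,
    since the Scala side is not shown in this post). */
class SudokuBenchmark {

    // stand-in for the Scala object SudokuBenchmarkProxy from the screenshot
    static class SudokuBenchmarkProxy {
        static long solve() {
            long acc = 0;                       // dummy workload
            for (int i = 0; i < 1000; i++) acc += i;
            return acc;
        }
    }

    // @GenerateMicroBenchmark  <- the JMH annotation would go here
    public long benchmarkSolve() {
        return SudokuBenchmarkProxy.solve();    // delegate to the Scala side
    }
}
```

JMH then only ever sees plain Java methods, which sidesteps whatever kept it from picking up the Scala ones.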

Update: Those problems had to do with my project setup. I have a multimodule Maven build and the benchmarks are located in a submodule. It seems that at the moment the wrapper code in target/generated-sources/jmh is not generated in a multimodule scenario, but only when Maven is called directly where the pom with the JMH plugin is defined.

There is even a Maven Scala archetype for JMH which provides the basic setup; I used it for the initial project setup. The Scala archetype (0.5.6), invoked like this:


will produce a configuration like this:


This runs fine when it is used on its own, but if you want to use it in a submodule, change it to this:


After this, JMH will create the wrapper code also for multimodule maven builds. At least it solved the issue for me.

Update 2: As it turns out, this issue has already been worked on and was only recently discussed on the mailing list.

However, the most pleasant mode to develop, debug and work on your performance bottlenecks with JMH is to use its API. This way you are able to call it directly, for example like this:


This executes JMH on your code, using the regex pattern to search for annotated methods. The run() method even returns the collected data, so you could "easily" set up an automated performance regression testing system, I suppose.
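A minimal invocation through the API could look roughly like this - a sketch based on the JMH Runner/OptionsBuilder classes of that time, with a made-up include pattern:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

/** Sketch of driving JMH programmatically; the benchmark class name in
    the include pattern is invented for illustration. */
class BenchmarkLauncher {
    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                .include(".*SudokuBenchmark.*")   // regex over annotated methods
                .warmupIterations(5)
                .measurementIterations(5)
                .forks(1)
                .build();
        // run() hands back the collected results, which you could feed
        // into an automated performance regression check
        new Runner(opt).run();
    }
}
```

Requires the JMH dependency on the classpath, of course - it is the same jar the maven plugin uses.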

JMH by default gives you a jar file called "microbenchmarks.jar" which contains all dependencies (as defined by Maven). This makes it rather trivial to execute the tests on different machines.

Speaking of which, one has to underline that performance testing is hard - the hardest part is to create comparable environments to run your tests in. But this is trivial (?) compared with understanding the inner workings of the JVM - JMH hides much of this complexity and gives you a head start here!

Moreover, micro benchmarking is a science in itself. Make sure you are repairing the "right" part of your application. It only pays off if you hit the big points, and those points are more often than not buried in "management processes".

Micro benchmarking - a science in itself


Micro benchmarking may imply micro optimizations, which often have the side effect of obfuscating your code base - and without a regression test harness built around those parts of your application, it is likely that the next clean-code developer just "repairs" the app back to the start. ;)

But if need be and you _have_ to squeeze everything out of your JVM(s), JMH will be a great addition to your toolset.

If you want to know more, ask this guy.




Monday, February 24, 2014

Sudoku Solver - on Android

I've ported my Sudoku application now to Android - here is the proof:

You can download it from the play store and support me and my blog by clicking like crazy on the ads! Go for it!!!

Here is what you get:


Sunday, February 23, 2014

JavaFX3D and Leapmotion

Last weekend I created a prototype application based on LeapMotion and JavaFX3D technology.





Thanks to José Pereda, who generously open-sourced his Leap3DFX project, I was able to mash up features of ControlsFX and JavaFX3D to create the application shown above.

I wanted to explore how you could combine JavaFX widgets with 3D content and how the two worlds could interact. Furthermore, it was interesting to see what is necessary to load 3D models created with Blender - for this task there are also libraries which make it trivial to load complex models into your scene.

Check out the source at github (and also the original post about leap motion on José's blog!)

Monday, February 3, 2014

Building Visual Studio 2012 projects with Maven

In this post I want to describe how you can set up an automated build for Visual Studio C++ projects using a build toolchain based around Maven.

java build in c++ land


Disclaimer: I'm aware of Microsoft's Team Foundation Server ALM stack. It is a powerful and wonderful toolsuite for software development. On the other hand, if you don't want (or can't afford) TFS, the presented approach may be an alternative.

Goals

  • It should be possible to compile Visual Studio projects on a build server
  • The build should be automated
  • Tests should be executed alongside with compiling the code
  • Compiled artifacts should be delivered to the developers on demand
  • There should be a minimal impact for the workflow of the C++ developers
  • Code should be partitioned into modules
  • Modules have a public interface
  • Different modules can have different release cycles
  • If there are build errors or test failures, the development team should be informed
  • ...
This sounds like a continuous integration story - in the Java world it would be a no-brainer: just use Jenkins, Maven, a server to distribute your artifacts and a unit testing framework - everything is set up and ready to go.

As it turns out, this can also be true for compiling C++ with Visual Studio projects. The only difference is that you have to know how to configure the Maven build - that is to say, how to organize your source code and tinker with some XML files. It imposes some work for the initial setup, but after that it works quite well.

A cornerstone of this approach is that the workflow for the C++ guys stays the same as if the whole build process did not exist. What I mean is that the C++ experts can configure their solution in (almost) any way they want, using the tool they are comfortable with: Visual Studio. Likewise, they shouldn't be bothered with configuring anything related to the build process.

The only concession they have to make is to understand the basic maven workflow (mvn install, mvn generate-resources) which can be hidden behind some custom tool actions in VS.

Because of those considerations, I won't involve the otherwise well-suited Maven NAR plugin for the build, since this would involve configuring dependencies at the project level - which would interfere with the configuration in the solution files. Moreover, this plugin supports OS-independent builds, but that is not a goal which I'm pursuing here.

Example scenario


programmers like rectangles and arrows


Suppose you have two applications which consist of three modules. App 1 only needs module A plus some extra functionality. App 2 on the other hand uses three modules (module A, B and C) and also has its own program logic. Let's assume that app 2 is more complex than app 1, but they still share the same code (module A).

Code sharing in C++ can be done either by linking statically or dynamically. In the first case you will have lib files, which end up in the executable; in the second case you have dynamic link libraries (dlls), which can be shared by more than one application (you just have to make sure they can be found by the exe file at runtime).

Typically, all components have a development lifecycle of their own - meaning for example that module A has to be extended for app 2, since requirements may have changed. App 1 doesn't need those changes, it would happily work with the "old" version of module A. Even worse, app 1 may not even work with the changes made to module A.

This describes the general problem that certain parts of your software stack have different release cycles. One module may not change for weeks or months (then it is either a very well written module or just plain uninteresting ;-) ), while other modules are hotspots of activity. A mechanism to reference a module in a certain version would be very handy.

You can address the versioning issue by branching and tagging module A's source code, checking out the appropriate version and compiling module A every time (or at least once, thanks to the incremental compile features of Visual Studio).

However, there may be other constraints which make this approach not feasible, like needing a special compile step which has high demands on the machine or licensing costs for involved compilers. Maybe the compilation of the dependencies just takes too long - you get the idea.

All in all, you are just interested in linking module A and being able to interface with it - the compilation step for module A should be done by the domain expert who knows all the quirks needed to compile and maintain it. Ideally, and in general, the build should be automated and run on a mouse click. The binaries should be downloaded from a repository - Maven and its infrastructure are very good at those things.

The idea is now that you create a pom which delegates the compile and test phases to the C++ compiler and fetches its dependencies with the Maven dependency plugin. Typically, the dependencies have to be fetched in both Debug and Release mode, alongside pdbs or other compile artefacts you'll need to create or debug the final application.

After some experimentation I came to the conclusion that you get the best results if you define every solution as a module of its own (each vcxproj should be part of only one solution). For every solution file you define one pom which describes the module's dependencies and the artifacts which the compile step produces for the given solution at the end of the day.
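To give an idea of what such a pom could contain, here is a sketch (coordinates, classifiers and paths are invented for illustration): the Maven dependency plugin fetches the module's binaries into the target folder, and an exec step hands the compile phase over to msbuild:

```xml
<!-- sketch only: illustrative values, not the exact files from the example setup -->
<build>
  <plugins>
    <!-- fetch module-a's binaries into target/deps before compiling -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>fetch-deps</id>
          <phase>initialize</phase>
          <goals><goal>unpack-dependencies</goal></goals>
          <configuration>
            <outputDirectory>${project.basedir}/target/deps</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
    <!-- delegate the compile phase to the Visual Studio toolchain -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>msbuild</id>
          <phase>compile</phase>
          <goals><goal>exec</goal></goals>
          <configuration>
            <executable>msbuild</executable>
            <arguments>
              <argument>app1.sln</argument>
              <argument>/p:Configuration=Release</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

In a real setup you would run this once per configuration (Debug and Release), but the shape of the pom stays the same.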

Prerequisites


As mentioned above, you'll need Visual Studio, Java, Maven and a versioning system (git, subversion) on the developer machines. On the build server you should install Jenkins and also Visual Studio. Finally Nexus would be a good choice for distributing the artifacts.

upload to nexus is easy


External dependencies (libs, dll's and header files) can be put into a zip file with a decent directory structure and uploaded to the nexus.

How to setup your source code

example setup for the code organisation

The figure above depicts a possible structure for the example application. You can see that for every module you have to define a pom file, and every application has a pom file as well. Every module or application also has one solution file. There is the convention that the solution file of a module only references project files which are contained within this module. This also makes it possible to take advantage of the release process which comes for free with Maven.

Every module has an interface (header files) and a file which defines the output artifacts of this specific module. The details are all configured using the facilities available in the solution files. Like this, every developer who owns a module can set up its specific compilation without having to know specifics about Maven.

An advantage of this is that you can work with different versions of the modules, and the dependencies are both documented and part of the build by residing in the pom.xml files.

How are module dependencies defined in the solution files?


As mentioned above, a module should not reference any files outside of its scope. This means that, for example, source code in module-b-p2 can reference source code in module-b-p1 using relative paths (although I would say that this is bad practice) - but it is not allowed for module-b source code to reference module-a source code directly, as this would break the module barriers.

The question is: how can we reference one module from the other? The trick is that module artifacts (libs, dlls, pdbs and the interface) are fetched by the Maven dependency plugin into the target folder of a module, and by exploiting the $(SolutionDir) variable which is available in Visual Studio, we have something similar to ${project.basedir} in Maven.

Let's have a look at the following figure:

app1 folder after mvn initialize
What one can see above is that there is a new folder in the app1 directory called 'target' where, by convention, all build artifacts, temporary files and dependencies for the app1 build are stored. By configuring the Maven dependency plugin in a certain way, it is quite trivial to put the build output from module-a into the target\deps\$(Configuration) directory. If you configure additional link library directories and special include directories in Visual Studio, the compiler will happily compile your files.

By convention, the build outputs should be placed in the target\rt\$(Configuration) folder. To be able to properly debug the application, the build process should also place all runtime dependencies in this directory.


Example pom file



Example artefact assembly file
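The file itself is not reproduced here, but an artefact assembly for the Release configuration could look roughly like this sketch (ids and paths invented, following the target\rt\$(Configuration) convention from above):

```xml
<!-- sketch of what such an assembly descriptor could look like -->
<assembly>
  <id>impl-Release</id>
  <formats><format>zip</format></formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <directory>${project.basedir}/target/rt/Release</directory>
      <outputDirectory>/</outputDirectory>
      <includes>
        <include>*.dll</include>
        <include>*.lib</include>
        <include>*.pdb</include>
      </includes>
    </fileSet>
  </fileSets>
</assembly>
```

The resulting zip is what gets uploaded to Nexus and later unpacked into the target/deps folder of the consuming module.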



Example interface assembly file



Screenshots of Visual Studio projects (properties)



General configuration properties for a Visual Studio project


linker include paths have to be adapted


How to get acceptance in the development team


Building applications in a heterogeneous environment is not always easy - everybody has to leave their comfort zone and adopt something new. Programmers who are not acquainted with the Maven build system (and I assume most of the C++ crowd doesn't know about it) are better off not having to type in things like "mvn clean" or "mvn package"; they want a better integration into their IDE.

Luckily, you can customize Visual Studio in many ways, and one way I would suggest is to integrate Maven commands as easily accessible buttons in the IDE. This can be done by configuring it once per seat.

Tools -> External Tools : configure maven

This custom tool can be placed on a button, which essentially reduces all Maven magic to one mouse click for the uninterested developer. If you have three of them (one for mvn install, one to "fetch dependencies" and one for cleaning up (mvn clean)), the devs will be happy. The normal workflow for a developer will then be an update of the code with the given versioning system and a mouse click to get the newly built dependencies.

But it is so much overhead!


I don't really think it is. If you follow certain conventions like outlined in this post, creating new modules becomes a fairly easy task. Typically, you won't introduce new modules every day.

Depending on the size of your development team, only the senior dictator developer will decide when to jump to a new version of a module with potential breaking changes. In fact, pinning down dependencies like this has been very successful in Java land, so why shouldn't it make sense for C++ projects as well?

One should not forget that, leaving those module versionings and module dependency definitions aside, nothing really changes for the average developer. Typically, most of the devs work in one module (which may contain dozens of subprojects which may be arbitrarily complex), and only the tech leads compose modules together.

Advantages


If you do it right, you get goodies like being able to release your C++ code with the Maven release plugin. This is a huge win and definitely worth the trouble of setting up the initial build. You can also profit from the plethora of possibilities which the Maven ecosystem offers - for example easy filtering of resources, arbitrary plugins, reporting (for example integrating doxygen reports into your build, or using the not so popular but still very cool "site" feature for writing versionable documentation) ... and it's free.

I hope someone finds this useful, thanks for reading!

Wednesday, January 15, 2014

Home made JavaFX SceneBuilder

Ever wanted to have your own build of SceneBuilder? It's easy!



A month ago the source code of SceneBuilder was published. This was great news for the JavaFX community, since it is an example of a non-trivial application written in JavaFX by pros.

As far as I understand, you are even allowed to use the source of SceneBuilder (or parts of it) to build your own applications, since the code is released under a BSD-style license. By the way, check out this site if you want to get an elevator pitch for various open source licenses.

So what is necessary to have your own version of SceneBuilder?

Setting up Scenebuilder in Intellij


The source code for Scene Builder can be found in the OpenJFX repository. If you have all the tools, just clone the openjfx rt repository. If you are new to the game, you should follow the instructions given here.

After cloning the repository, you will find the source code for the Scene Builder in

rt/apps/scenebuilder

I gave it a try and just imported this directory with IntelliJ 13 Community Edition (ignoring the build.xml file) and was able to set it up without build errors in no time! Very nice. The only thing I had to do manually was set the project SDK to the latest JDK 8 preview and set the project language level to 8.0 (with lambdas).


Then, in the project explorer, hit the right mouse button and run the main method of SceneBuilderApp - if you are lucky, you will be rewarded with the following screen:


I've changed the label.untitled in SceneBuilderApp.properties to 'My very own untitled Document' just to illustrate that it is not the default installation.

In addition, I would advise you to import JavaFX8 sources for Intellij like described here.

Project Layout


The project consists of two main parts, SceneBuilderApp and SceneBuilderKit. SceneBuilderApp contains the application logic and SceneBuilderKit the infrastructure. The latter has far more classes, but they are not application specific. I'm pretty sure that one could find one or the other gem in it. But as far as this post goes, I only describe some aspects of the SceneBuilderApp project.

FXML Handling in SceneBuilderApp


I was interested specifically in how SceneBuilder itself would manage the loading of FXML for its internal controllers.

The first thing which caught my attention was how the SceneBuilderApp project is structured. It seems that every Controller, its fxml, its css and other resources are grouped together in a package.

Controllers, FXML and resources grouped together
If you have a bunch of controllers, this will surely help to keep a clean structure. Using naming conventions clearly improves readability.

fxml and associated static images are placed in the same directory

The controllers extend an abstract controller, which provides utility methods for interacting with the stage. In essence, every controller has its reference to the stage baked in, which comes in handy for many things, for example when handling the setOnCloseRequest(...) on the stage.

Every Controller has its own FXMLLoader, which is fed with the resources, the fxml and the controller class. What is also interesting is that right after the fxml is loaded and every widget is injected properly, it is checked that the expected widgets don't contain null references.

Like this you can shield yourself from accidental mismatches of FXML ids and fail fast - also a good habit, especially if the application code and the fxml are maintained by more than one developer.
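The idea behind that fail-fast check can be sketched without any JavaFX dependency - walk the controller's fields after injection and complain about the first one which is still null (class and method names here are mine, not SceneBuilder's):

```java
import java.lang.reflect.Field;

/** Framework-free sketch of the fail-fast check: after the loader has
    (supposedly) injected all widgets, verify none of them is null. */
class InjectionCheck {

    /** Returns the name of the first null field, or null if all are set. */
    static String firstNullField(Object controller) {
        for (Field f : controller.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                if (f.get(controller) == null) return f.getName();
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return null; // everything injected
    }

    /** Tiny demo "controller" (hypothetical, just for this sketch). */
    static class DemoController {
        public String okButton = "Button";
        public String missingLabel; // never injected
    }
}
```

SceneBuilder does the equivalent with assertions over its @FXML fields, so a typo in an fxml id blows up immediately instead of surfacing as a NullPointerException much later.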

The FXML is loaded implicitly when you first access the controller - i.e. when you open up the window for the first time. In this context, checking out the AbstractWindowController and its derivatives is definitely some well-spent time for an aspiring JavaFX developer. There are some gems in it, like resizing the stage for various screens as seen in the method clampWindow - have a look :)



Example: PreferencesWindowController


Of course I won't discuss every class here, but I'll have a look at one example controller - let's take the PreferencesWindowController.

SceneBuilder's Preferences Dialog opened in SceneBuilder (v1.1)
The first thing you see are the member declarations which are annotated with the @FXML annotation. Those attributes are injected by the FXMLLoader when loading the fxml ... but when and how does this happen?

head of PreferencesWindowController source



Actually, this problem was tackled in a nice way; every controller which extends AbstractWindowController has a method called openWindow. This triggers a chain of calls which ultimately results in the loading of the FXML and hence in injecting the right objects into the attributes of the concrete Controller. The following illustration may help here:

how controllers get to their fxml (pic was made with skitch btw) 

As far as I can tell, the FXML gets loaded only once, and the code which does the magic is shared with all controllers. :) After loading, the controllerDidLoadFxml() method is called, which resides again in the PreferencesWindowController. The preferences themselves are managed in a singleton named PreferencesController, and the values are then filled into the UI.
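Stripped of all JavaFX specifics, that load-once chain boils down to a template method - the following sketch mimics the structure (the real AbstractWindowController does the actual FXMLLoader work; the counter here is just a stand-in):

```java
/** Dependency-free sketch of the load-once pattern described above:
    openWindow() triggers the (fake) FXML load exactly once, then a
    hook in the concrete controller runs. */
abstract class AbstractWindowController {
    private boolean loaded = false;
    public int loadCount = 0;

    public final void openWindow() {
        if (!loaded) {                // FXML is loaded on first open only
            loadCount++;              // stands in for FXMLLoader.load(...)
            loaded = true;
            controllerDidLoadFxml();  // hook for the concrete controller
        }
        // ...show the stage...
    }

    protected abstract void controllerDidLoadFxml();

    /** Demo subclass (hypothetical) mirroring PreferencesWindowController. */
    static class PreferencesSketch extends AbstractWindowController {
        public boolean hookCalled = false;
        @Override protected void controllerDidLoadFxml() { hookCalled = true; }
    }
}
```

Opening the window a second time skips the load and just shows the stage again, which matches the "loaded only once" behavior observed in the source.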

At this point in time, the ChangeListeners are initialized. The BackgroundImageListener, for example, changes the default background when selecting the corresponding ChoiceBox. The listeners are all defined as static inner classes, which keeps things private to the Controller.

Btw, for the permanent storage of the values the Preferences class was chosen - a nice utility for storing user or app specific values in a portable way.

There are many more aspects to discover in the SceneBuilder source code; I merely scratched the surface. For me the most important thing is how to structure an application, best practices etc. - here the SceneBuilder source code is a great inspiration.

It is really trivial to get to your own installation of SceneBuilder, and the code is very readable and definitely worth a look!