jOOQ & Immutables & Jackson: Great together

I really like Java. But only the plain one. I hate all the enterprise frameworks like Spring, Java EE or Hibernate. I consider them enormous and verbose. In my eyes, using them leads to corporate applications:

  • Applications so complex that it is hard to avoid code duplication, because everyone is afraid to change anything.
  • Applications that do not adapt to the company's needs, so the company has to adapt to the applications.
  • Applications that are no longer possible to maintain and improve.

Feels like hell to me 🙂 I prefer small, single-purpose libraries. For that reason, Navigo3 uses multiple libraries for data handling.

For building database access classes we are using jOOQ. You just point it at the database and it generates code for handling it. We do that during every Gradle build. If you then use the generated code for data access, it is automatically verified by the compiler. The compiler will complain if anything is wrong – a data type was changed, a column was removed, etc. Because objects read from the database are basically plain POJOs, they may be directly serialized into JSON and used in a REST API. I don't need to say how much easier it is to find references to a certain object, for example a column, compared to SQL strings in code… mainly for fields like ‘createdAt’ which appear in all tables.
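For illustration, a query against such generated code might look like this (a sketch only; the PERSON table constant, the DbPerson POJO and the ctx DSLContext stand in for your own generated code and setup):

```java
// PERSON and DbPerson are hypothetical jOOQ-generated artifacts
DbPerson person = ctx.selectFrom(PERSON)
        .where(PERSON.ID.eq(12345))
        .fetchOneInto(DbPerson.class);

// if someone renames or removes the ID column and regenerates,
// this line stops compiling - unlike an SQL string, which would
// fail only at runtime
```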

For effortless data structures (POJOs) we are using Immutables. It is a matter of seconds to prepare a data class with all necessary access methods or a builder. No need to write any getters/setters anymore! This is handy mainly for programmers who like functional programming principles: data are data, and they are shaped by functions. Also, since I started using Immutables, I use static methods much more than before, because it is now easy to create methods that return complex output. Formerly I preferred to keep a much richer internal state in objects, and methods on such an object modified that state instead of returning a value. The new approach leads to much easier refactoring – a static method is so much easier to move than an instance method!

For JSON (de)serialization we are using Jackson. Both jOOQ and Immutables have great embedded support for it. It is even possible to serialize a combination of both.

Consider this example: I have a REST API method that should return information about a person – GET /person-info/12345. But not only that, I would also like to return information about the company assigned to the person – for example, to avoid a second request while rendering the person detail in React.js.

So the task is to compose the DbPerson and DbCompany classes (generated by jOOQ) into a single class that can be (de)serialized. Easily done as a nested interface of the REST handler:

class Handler {
	@Value.Immutable
	@JsonSerialize(as = ImmutablePersonAndCompany.class)
	@JsonDeserialize(as = ImmutablePersonAndCompany.class)
	public interface PersonAndCompany {
		DbPerson getPerson();
		DbCompany getCompany();
	}
}

So you get a generated ImmutablePersonAndCompany builder, fill it with both objects loaded by jOOQ, and serialize the result using Jackson. Task accomplished.
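Filling and serializing it might look like this (a sketch; dbPerson and dbCompany are assumed to be already loaded by jOOQ, and ImmutablePersonAndCompany is produced by the Immutables annotation processor):

```java
PersonAndCompany data = ImmutablePersonAndCompany.builder()
        .person(dbPerson)       // loaded by jOOQ
        .company(dbCompany)     // loaded by jOOQ
        .build();

// Jackson serializes the composed value object in one go
String json = new ObjectMapper().writeValueAsString(data);
```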

Using this technique it is easy to add fields to the data. For example, adding information about an address can be done by adding a single line.

We also utilize the POJO nature of these libraries for documentation. It is quite easy to traverse the definition of such classes using reflection and automatically generate documentation from it. Such documentation is always up to date without any manual effort.
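As a minimal illustration of such reflection-based traversal (the Person interface here is a made-up stand-in, not one of our real classes), one can walk the getters of a value interface and render a simple field list:

```java
import java.lang.reflect.Method;

public class DocGenerator {
    // example data interface, standing in for an Immutables-style value class
    public interface Person {
        String getName();
        int getAge();
    }

    // traverse getters via reflection and render "field : type" lines
    public static String describe(Class<?> clazz) {
        StringBuilder sb = new StringBuilder(clazz.getSimpleName() + ":\n");
        for (Method m : clazz.getMethods()) {
            if (m.getName().startsWith("get") && m.getParameterCount() == 0) {
                String field = m.getName().substring(3);
                field = Character.toLowerCase(field.charAt(0)) + field.substring(1);
                sb.append("  ").append(field).append(" : ")
                  .append(m.getReturnType().getSimpleName()).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(describe(Person.class));
    }
}
```

A real generator would also descend into nested types and collections, but the principle is the same.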



React.js components with embedded styles

You have probably already seen global CSS files with thousands of lines that may influence anything. Changing them is too dangerous, so you just keep adding new rules – up to the point where everything must be rewritten.

I love React's ability to build reusable components. Write once, put anywhere you like. This rule does not have to apply just to JavaScript; it may apply to CSS too. Did you know that css-loader can localize class names from CSS/SASS/LESS files (so-called CSS modules)? Every require of such a file then yields generated class names that do not clash with anything else (like style__item___3dKqL).

That way you can prepare components that contain embedded styles applying only to them. No complex CSS selectors are required – just use plain names:
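For example, a component stylesheet could look like this (a sketch; TaskList.scss and its class names are made up):

```scss
/* TaskList.scss - these names are local to the importing component */
.list {
  margin: 10px 0;
}

.item {
  padding: 5px;

  &:hover {
    background: #eee;   /* no risk of clashing with another component's .item */
  }
}
```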


At the same time you can import application-wide constants using SASS, so you do not lose the ability to set colors, sizes, … in a single place.

Usage in code is simple: you import the stylesheet and reference the generated class names through the imported object.


(By the way, to avoid the join operation when combining class names, you can use the classnames lib.)
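To illustrate what the classnames lib does, here is a minimal sketch of its core behavior (the real library also handles arrays, numbers and nesting):

```javascript
//join all truthy class names into a single space-separated string
function classNames(...args) {
  const out = []
  for (const arg of args) {
    if (!arg) continue               //skip null/undefined/false/''
    if (typeof arg === 'string') {
      out.push(arg)                  //a plain name is taken as-is
    } else if (typeof arg === 'object') {
      for (const key of Object.keys(arg)) {
        if (arg[key]) out.push(key)  //object keys are included when the value is truthy
      }
    }
  }
  return out.join(' ')
}

console.log(classNames('item', {active: true, disabled: false}))
// → "item active"
```

With CSS modules you would call it with the imported names, e.g. classNames(styles.item, {[styles.active]: isActive}).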

Right now I use the following configuration:


postcss.config.js:

module.exports = {};


webpack.config.js:

const path = require('path')
const webpack = require('webpack')

module.exports = {
    entry: './src/index.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'web-ui-bundle.js'
    },
    module: {
        rules: [
            {
                test: /\.scss$/,
                use: [
                    {
                        loader: 'style-loader'
                    },
                    {
                        loader: 'css-loader',
                        options: {
                            modules: true,   //!!!!!! This enables modules
                            localIdentName: '[name]_[local]__[hash:base64:15]'
                        }
                    },
                    {
                        loader: 'postcss-loader'
                    },
                    {
                        loader: 'sass-loader'
                    }
                ]
            },
            {
                test: /\.js$/,
                use: 'babel-loader',
                exclude: /node_modules/
            }
        ]
    },
    devServer: {
        contentBase: './',
        hot: true
    },
    plugins: [
        new webpack.HotModuleReplacementPlugin()
    ]
}

package.json:

{
    "name": "web-ui",
    "description": "",
    "version": "0.0.1",
    "dependencies": {
        "react": "15.6.1",
        "react-dom": "15.6.1"
    },
    "devDependencies": {
        "webpack": "3.5.2",
        "babel-core": "6.25.0",
        "babel-loader": "7.1.1",
        "babel-preset-es2015": "6.24.1",
        "babel-preset-react": "6.24.1",
        "webpack-dev-server": "2.7.1",
        "style-loader": "0.18.2",
        "css-loader": "0.28.4",
        "postcss-loader": "2.0.6",
        "sass-loader": "6.0.6",
        "node-sass": "4.5.3"
    },
    "scripts": {
        "build": "webpack",
        "devel": "webpack-dev-server --hot"
    },
    "browserslist": ["defaults", "not ie < 11"]
}

React.js: Build your components catalog

When you design a large information system UI, it is important to stay uniform. Nobody likes it when pages vary in appearance or behavior. Typically it is necessary to write a UI/UX design manual that describes the allowed tools and techniques.

I was thinking about ways to prepare a list of our UI components, so we can pick a component when creating new functionality instead of writing a new one. I was considering Confluence, but it would need extra effort to keep the list of components up to date.

Then I got it: we can employ React's ability to prepare reusable components and easily compose a catalog. Here is ours:


The catalog is embedded in the application but visible only to developers. Every component is shown in a typical situation (to keep the list concise), and it may be expanded (like the Date component above) to be seen in various configurations – error state, different input values, empty, …

We use physics particle categories for components, and each category has its own section:

  • Quarks – the simplest components that only use passed properties (Button, FormattedInput, …)
  • Atoms – components that call the server API or are attached to forms or application state (ContextMenu, CurrencyInput)
  • Molecules – larger functional blocks that contain application logic (UserForm, TasksList)
  • Pages – OK, this is not a physics category 😉

Each level uses components from the levels above it. And each level is harder to instantiate outside its real usage. In my experience it is quite feasible to have Quarks and Atoms in the catalog. I believe MobX is helping here, but I have no long-term experience with Redux. Molecules depend too much on the API and application data, so it would be hard to mock them.

You can also think about the catalog as a mild form of testing. You are verifying that:

  1. You can reuse a component – there are no hidden dependencies
  2. There are no clashes among components when used together
  3. All components basically work


Embedding React into legacy web pages

The product I am enthusiastically working on – Navigo3 – was historically written as a combination of XSLT and JSP. Do you remember XSLT?


Shortly before I joined, it was decided that it should be remade in React.js. It was a great decision, but the transition is not that easy.

Because Navigo3 is in production use, we have to transform from XSLT to React step by step. Some pages are so complex that it would be handy to be able to combine XSLT and React on a single page. Typically this is required on pages that combine many types of information – dashboards. A few graphs, tables, a list of tasks, … It would be hard to rewrite all of it at once, but rewriting each part separately is quite simple.


At the same time we have new pages which are written completely in React and use hash navigation. In the main entry file index.js there is a redux-router hierarchy, and it expects to run on a purely React page. This does not quite match the demand to embed some components into a JSP/XSLT static page with classical URL navigation, right?

But I would like to share React components between the brand new React pages and the legacy XSLT pages! And at some moment just compose a React page from existing components and abandon the legacy ones. So how to do that?

  1. Prepare a second entry file (like indexLegacy.js) and a build task for it (we use Webpack).
  2. Insert the resulting bundle file via <script src=’bundleLegacy.js’/> into the JSP/XSLT pages, right before the </body> element.
  3. In the JSP/XSLT/whatever page, put an empty DIV tag in the place where the React component should be rendered and give it a nice id or class name – like “hack_legacy_tasks_list”.
  4. In the new entry file, write logic that basically performs a list of IFs that fill the prepared DIVs – but only if they exist:
    • Check whether the element with the unique name is present in the DOM (we use jQuery for that)
    • If it is present, render the desired component into it (using render(<MyComponent/>, element) from react-dom)
  5. Of course, the list of IFs can be replaced by some metadata and generic code that processes it.
  6. If a component needs some context, you can wrap it. But you have to do it for each of them separately – they do not share a tree hierarchy (React instance).
  7. If all components on a page need to share something, it can be instantiated in the entry file and passed to all components.

That’s it. Let’s sum up some advantages:

  • Components are shared between the new shiny React pages and the legacy pages.
  • There is a single entry bundle for the legacy stuff. If your legacy pages include a shared footer file, you can place the reference to the legacy bundle there. Because unique ids/class names are used for binding, it should not harm any existing content.
  • You may place multiple components per legacy page – just insert multiple DIVs.

React.js FTW!

Here is a sample of indexLegacy.js:

import React from "react"
import {render} from "react-dom"
import moment from "moment"

import jQuery from 'jquery'

import {Utils} from './utils/Utils'

//container for legacy components
import {LegacyApp} from './containers/LegacyApp'

import {ReactDemo} from './quark/ReactDemo'
//...and other 10+ components

//key is CSS selector, value is function that returns React element
const instances = {
  '.hack_react_demo': (elem)=><ReactDemo/>,   //this defines the function that creates a React element for every <div class='hack_react_demo'/> found on the page
  //...and other 10+ components
}

//render reactElement into a DOM element, wrapped by LegacyApp
function __createReactInstance(element, reactElement) {
  render(<LegacyApp>{reactElement}</LegacyApp>, element)
}

//for each entry in the instances object
Object.keys(instances).forEach((selector)=>{
  //for each DOM element found by jQuery
  jQuery(selector).each((i, element)=>{
    //get the function for this selector and call it (passing the DOM element)
    const reactElement = instances[selector](element)

    //create the React component
    __createReactInstance(element, reactElement)
  })
})

Selenium: Get rid of logs in Eclipse

Selenium for Java by default uses the java.util.logging (JUL) infrastructure for logging. During development I have seen logs like:

Nov 21, 2016 8:16:32 AM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Attempting bi-dialect session, assuming Postel's Law holds true on the remote end
Nov 21, 2016 8:16:34 AM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: OSS

I am using log4j2 through slf4j, and just creating a logger in the configuration file did not work. The solution was:

1. Add a Gradle/Maven dependency on log4j-jul.
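In Gradle it might look like this (a sketch; the version is illustrative, use the one matching your log4j2 version):

```groovy
dependencies {
    runtime 'org.apache.logging.log4j:log4j-jul:2.7'
}
```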

2. As the documentation says, set the system property “java.util.logging.manager” to “org.apache.logging.log4j.jul.LogManager”, either via a -D parameter or in main like:

public static void main(String[] args) {
  System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
  //...
}

It must be set BEFORE any call to log4j!!!

3. Define log4j logger:

<Logger name="org.openqa.selenium" level="warn" additivity="false">
 <AppenderRef ref="console"/>
</Logger>

And the unnecessary logs are gone!

git gui: Error when opening dialogs

I have a 3-monitor configuration, and recently I noticed an issue in ‘git gui’. When I tried to revert changes or merge a branch, this dialog appeared:


That caused the window to hang. In the command line, an error was visible:

jarek@lp-jarek:~/Navigo3/master/np$ git gui
bgerror failed to handle background error.
 Original error: bad pad value "3m": must be positive screen distance
 Error in bgerror: bad pad value "3m": must be positive screen distance

After some googling I found the cause (bogus DPI):

jarek@lp-jarek:~$ xdpyinfo | grep -A1 dimen
 dimensions: 5408x1152 pixels (0x0 millimeters)
 resolution: -2147483648x-2147483648 dots per inch

And fixed it via:

jarek@lp-jarek:~$ xrandr --dpi 96
jarek@lp-jarek:~$ xdpyinfo | grep -A1 dimen
 dimensions: 5408x1152 pixels (1430x304 millimeters)
 resolution: 96x96 dots per inch

i-tec USB 3.0 Dual Docking Station & Ubuntu 16.04

[There is an updated article for Ubuntu 17.10]

My desktop looks like:

[photo of my desk setup, 2016-10-27]

I wanted to have 2 external monitors, but my Dell Inspiron 15 (7559) has only one HDMI port. Another thing was that I had to plug/unplug many cables whenever I wanted to go to a meeting room. So I decided to buy a USB 3.0 dock. Not an easy task when you love Linux – most docks only support Windows and OS X.

I purchased the i-tec USB 3.0 Dual Docking Station because they promise Ubuntu 16.04 compatibility. They only “forgot” to mention that only KMS drivers are supported, so forget about the official NVIDIA binary drivers. Even with Nouveau drivers there were issues. Because I had other issues as well, I switched off the dedicated card completely.

Then it was necessary to install the DisplayLink driver from here. But after a reboot, the graphics part of the dock did not work. By running dmesg I found out that the kernel module was not loaded. I am using SecureBoot and their kernel module is not signed. So I had to do the following as root:

cd /root
mkdir signing
cd signing

#generate certificate - do this once, but keep result files - you will have to sign modules for every kernel update
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=Descriptive name/"

#import your certificate as trusted into SecureBoot
#remember entered password - you have to fill it after reboot and confirm adding certificate
mokutil --import MOK.der

#sign your module - you have to do this on every kernel update
/usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der $(modinfo -n evdi)

Then reboot your computer, authorize the new certificate, and the dock should work!

Currently I have one last issue: when playing music through the dock, it resets from time to time. It disconnects the monitors, network and audio…