Dirty clean code

A first approach to contract tests

In this post I will talk about contract tests: what they are and how you can use them.

Sometimes you have to unit test multiple implementations of the same interface. So basically you have the same tests for multiple concrete implementations of the same interface. In a case like this one, contract tests could help you save a lot of time. Using contract tests you will be able to run the same set of tests for different concrete implementations.
How does it work? The main point is to have a base abstract “ContractTest” test class that encapsulates the logic of the tests using abstract methods that use the base interface of the objects under test. These abstract methods will be implemented in the subclasses of this “ContractTest” class, and they will feed the test with a concrete implementation of the interface used in the declaration of the abstract methods.
Let’s see an example to make everything clearer!!!
The example is a standalone Java project that uses JUnit 4 and Mockito 2.8, but nothing stops you from applying this concept to other languages/platforms (in fact, I learned and implemented contract tests on a component inside an Android App :heart_eyes:).
Suppose for example that we have the following interface:

public interface Command {
    void execute();
}

We have two objects that implement that interface: AccountCommand and SettingsCommand.

class AccountCommand implements Command {
    private MenuActionsListener menuActionsListener;

    AccountCommand(MenuActionsListener menuActionsListener) {
        this.menuActionsListener = menuActionsListener;
    }

    @Override
    public void execute() {
        //Illustrative callback: the original listener method name is not shown.
        menuActionsListener.onAccountSelected();
    }
}
public class SettingsCommand implements Command {
    private MenuActionsListener menuActionsListener;

    SettingsCommand(MenuActionsListener menuActionsListener) {
        this.menuActionsListener = menuActionsListener;
    }

    @Override
    public void execute() {
        //Illustrative callback: the original listener method name is not shown.
        menuActionsListener.onSettingsSelected();
    }
}

As you can see the two implementations look very similar. So it’s time to rock with contract tests :metal:!!!!
We can write a CommandContract base test class that contains the logic of the tests we want to write. In our specific case we want to ensure that when a command is executed, by calling the execute() method, the menuActionsListener is called with the correct method on each concrete implementation of Command. So our CommandContract implementation is:

abstract class CommandContract {
    private Command command;
    private MenuActionsListener menuActionsListener;

    @Test
    public void commandIsExecuted() throws Exception {
        givenAMenuActionListener();
        command = givenACommand(menuActionsListener);
        whenACommandIsExecuted();
        thenTheCorrectMenuActionIsInvoked(menuActionsListener);
    }

    private void givenAMenuActionListener() {
        menuActionsListener = mock(MenuActionsListener.class);
    }

    protected abstract Command givenACommand(MenuActionsListener menuActionsListener);

    private void whenACommandIsExecuted() {
        command.execute();
    }

    protected abstract void thenTheCorrectMenuActionIsInvoked(MenuActionsListener menuActionsListener);
}

As you can see, in the commandIsExecuted() test we use all the abstract methods to define the test for a generic Command implementation. Now in the test subclasses we will implement the abstract methods to feed the test with the various implementations of our concrete commands.
So we create an AccountCommandTest class, subclass of CommandContract, to test our AccountCommand class:

public class AccountCommandTest extends CommandContract {

    @Override
    protected Command givenACommand(MenuActionsListener menuActionsListener) {
        return new AccountCommand(menuActionsListener);
    }

    @Override
    protected void thenTheCorrectMenuActionIsInvoked(MenuActionsListener menuActionsListener) {
        //Illustrative verification: the original listener method name is not shown.
        verify(menuActionsListener).onAccountSelected();
    }
}

We also create a SettingsCommandTest class, subclass of CommandContract, to test our SettingsCommand class:

public class SettingsCommandTest extends CommandContract {

    @Override
    protected Command givenACommand(MenuActionsListener menuActionsListener) {
        return new SettingsCommand(menuActionsListener);
    }

    @Override
    protected void thenTheCorrectMenuActionIsInvoked(MenuActionsListener menuActionsListener) {
        //Illustrative verification: the original listener method name is not shown.
        verify(menuActionsListener).onSettingsSelected();
    }
}

As you can see we tested all our concrete Command implementations without replicating the unit test logic. Wonderful :open_mouth::heart_eyes:!!!
Here you can find the complete example (a Maven project developed using IntelliJ, JUnit 4, Mockito).
It’s time for you to try contract tests on your project :joy::laughing:!!!

Swift Closure: demystifying @escaping and @autoclosure attributes

In this post I will talk about Swift closures and the potential of the @escaping and @autoclosure attributes.

As reported in the official Swift documentation, and as we saw in one of my previous posts, closures are:

self-contained blocks of functionality that can be passed around and used in your code. They can capture and store references to any constants and variables from the context in which they are defined.

In this post I will show you two interesting closure features: @autoclosure and @escaping.
An @escaping closure is passed as a parameter to a function, but it is not executed inside it: the closure is executed after the function returns. The classical example is a closure being stored in a variable outside that function.
An @autoclosure is a closure without parameters that is automatically created to wrap an expression that’s being passed as an argument to a function. These two attributes combined have great potential. Let’s see an example where you can avoid multiple if/switch statements with the use of closures and these two attributes.
You could start “abusing” closures and use them everywhere after mastering these two attributes!! :stuck_out_tongue_winking_eye: (Maybe it’s better to stay calm and not abuse closures even after seeing these attributes :relieved:).
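Before the table view example, here is a minimal standalone sketch of the two attributes in isolation (all names are illustrative, not part of the example project):

//A stored (escaping) closure: it outlives the function it is passed to.
var storedAction: () -> Void = {}

func store(action: @escaping () -> Void) {
    //The closure escapes the function: it is saved and executed later.
    storedAction = action
}

//An autoclosure: the caller passes a plain expression and the
//compiler wraps it in a parameterless closure automatically.
func log(_ message: @autoclosure () -> String) {
    print(message())
}

store(action: { print("executed later") })
storedAction() //Prints "executed later".
log("no braces needed at the call site")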

Swift closure everywhere

For example, we can have a UITableView and we want to execute a different action for each cell displayed. If we don’t use closures and the attributes @autoclosure and @escaping, we need to distinguish the cells using their position, or eventually by casting some specialization of a class used to represent the cell data. Suppose instead that each cell shows an instance of an Operation class, defined in this way:

class Operation {
    let name: String
    let action: () -> ()

    init(name: String, action: @autoclosure @escaping () -> ()) { = name
        self.action = action
    }
}

So, basically, in the constructor we are expecting something that will be enclosed in a closure, thanks to the @autoclosure attribute, and we store it as an instance variable of our class. We can store it because we are also using the @escaping attribute. Now in our controller we can define an array of operations that will be the datasource of our UITableViewController. We can pass to the constructor of each Operation instance the function that corresponds to the operation that we want to execute. This function will be executed in the table view delegate method public func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) by accessing the corresponding element in the datasource array, without the need to identify the exact cell type selected. Here you can find the complete OperationsViewController:

class OperationsViewController: UITableViewController {
    var operations: [Operation] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        self.operations.append(Operation(name: "Operation 1", action: self.showOrangeDetail()))
        self.operations.append(Operation(name: "Operation 2", action: self.showGreenDetail()))
    }

    //MARK: TableView Datasource

    public override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return self.operations.count
    }

    public override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell: UITableViewCell = tableView.dequeueReusableCell(withIdentifier: "OperationCell")!
        cell.textLabel?.text = self.operations[indexPath.row].name
        return cell
    }

    //MARK: TableView Delegate

    public override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        self.operations[indexPath.row].action()
    }

    //MARK: Actions

    private func showOrangeDetail() {
        self.performSegue(withIdentifier: "OrangeSegue", sender: nil)
    }

    private func showGreenDetail() {
        self.performSegue(withIdentifier: "GreenSegue", sender: nil)
    }
}

You can download the complete example here.
So basically: no if, no switch, only love :heart: for @autoclosure and @escaping :heart_eyes:.

Swift Closure: what they are and syntax

In this post I will talk about Swift closures: what they are and their syntax.

As reported on the official Apple swift documentation closures are:

Closures are self-contained blocks of functionality that can be passed around and used in your code. They can capture and store references to any constants and variables from the context in which they are defined.

Closures are in many ways what blocks are in Objective-C (or lambda functions in other languages). As it was for blocks, it is not easy to remember their syntax. This post is intended to be a reference for me (and you, readers :wink:) about closure syntax. You could also take a look at F$%&£&g closure syntax.

Declared as a variable (valid also for let constants):

var closure: (parameters) -> returnType

Declared as an optional variable:

var closure: ((parameters) -> returnType)?

Declared as a typealias:

typealias ClosureType = (parameters) -> returnType

Declared as a function parameter, and then calling that function:

func myFunction(closure: (parameters) -> returnType) {
    //Function body.
}

/** You can explicitly write the type of parameters. **/

//Call with round brackets.
myFunction(closure: { (parameters) -> returnType in
    //Closure body.
})

//Call without round brackets (only if the closure is the last parameter).
myFunction { (parameters) -> returnType in
    //Closure body.
}

There is also the possibility to use a shorthand for the parameters: you can refer to them using $ followed by the index of the argument in the call. Last but not least, you can capture self avoiding a retain cycle by using [unowned self] before the parameters. Go and show the world the power of closures in Swift!! :sunglasses:
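To close, here is a quick sketch of these last two features (all names are illustrative):

//Shorthand arguments: $0 and $1 refer to the first and second parameter.
let multiply: (Int, Int) -> Int = { $0 * $1 }
let result = multiply(2, 3) //result is 6

//Capture list: [unowned self] avoids a retain cycle between
//self and the closure stored in an instance variable.
class Downloader {
    var completion: (() -> Void)?

    func start() {
        completion = { [unowned self] in
            self.finish()
        }
    }

    func finish() {}
}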

A physically based scene with three.js

In this post I will show you how to create a scene using three.js with support for Physically Based Rendering.

I love three.js. I think it’s one of the most beautiful JavaScript computer graphics libraries out there. Don’t you know what three.js is? Let’s see the description from the official github repo:

JavaScript 3D library. The aim of the project is to create an easy to use, lightweight, 3D library. The library provides canvas, svg, CSS3D and WebGL renderers.

Simple and clear (I love this kind of definition :relieved:). Three.js is a library built on top of WebGL that aims to simplify computer graphics development for the web. It has a lot of different features, including support for Physically Based Rendering. Let’s see the potential of this library. In this post I will show you how to create a simple physically based scene. At the end of this post you will have created a scene like the one in the following image:

Threejs first scene

The meshes we will use are simplified versions of the ones available from the Stanford scan repository, in PLY format.
Let’s start from the setup. We can use a simple HTML page, similar to the one described in the three.js doc (shown below). We will put our assets (meshes, textures etc.) in the folder /assets/models.

<!DOCTYPE html>
<html>
    <head>
        <meta charset=utf-8>
        <title>My first three.js app</title>
        <style>
            body { margin: 0; }
            canvas { width: 100%; height: 100% }
        </style>
    </head>
    <body>
        <script src="js/three.js"></script>
        <script>
            // Our Javascript will go here.
        </script>
    </body>
</html>

The first thing we will need to create is a Scene. We will also need to create a Camera, a TextureLoader for texture loading, a PLYLoader to load our PLY meshes and a WebGLRenderer. Finally we will need an instance of OrbitControls, a three.js extension that we use to orbit around the scene.

var scene = new THREE.Scene();
var camera = createCamera();
var textureLoader = new THREE.TextureLoader();
var plyLoader = new THREE.PLYLoader();
var renderer = createRenderer();
var controls = createOrbitsControls(camera, renderer);

For the camera, we create a PerspectiveCamera. As the name says, it uses the perspective projection to simulate the view of the human eye (this is one of the two main camera types used in computer graphics, along with the orthographic projection). We place the camera in front of the scene, and we set the vertical field of view FOV of the viewing frustum to 75 degrees, the aspect ratio using the current width and height of the window, and the near and far planes of the viewing frustum respectively to 0.1 and 1000 (to avoid the discard of meshes added to the scene).

function createCamera() {

    var camera = new THREE.PerspectiveCamera(75, window.innerWidth/window.innerHeight, 0.1, 1000);
    camera.position.z = 8;
    camera.position.y = 0;
    camera.position.x = 0;

    return camera;
}

We create a renderer with the alpha property set to true, in case we want to integrate it into another HTML page and we want the background to be visible until the scene is loaded. We set the gamma correction for input and output colors by setting the properties gammaInput and gammaOutput to true. We also enable shadow mapping by setting shadowMap.enabled to true, setting it to use percentage closer filtering with bilinear filtering. Finally we set the size of the renderer to the same size as the window where we will display the scene.

function createRenderer() {

    var renderer = new THREE.WebGLRenderer({alpha: true});
    renderer.physicallyCorrectLights = true;
    renderer.gammaInput = true;
    renderer.gammaOutput = true;
    renderer.shadowMap.enabled = true;
    renderer.shadowMap.bias = 0.0001;
    renderer.shadowMap.type = THREE.PCFSoftShadowMap;
    renderer.setSize($(window).width(), $(window).height());

    return renderer;
}

Next we setup the OrbitControls instance to manage an automatic rotation around the scene. You can customize this function to let the user manage the movement with keyboard or touch control (on mobile :iphone:).

function createOrbitsControls(camera, renderer) {

    var controls = new THREE.OrbitControls(camera, renderer.domElement);
    controls.enableZoom = false;
    controls.autoRotate = true;
    controls.enablePan = false;
    controls.keyPanSpeed = 7.0;
    controls.enableKeys = false; = new THREE.Vector3(0, 0, 0);
    controls.mouseButtons = {};

    return controls;
}

Now we can add the renderer to the DOM of the page (we attach it to the body). We can now start to customize the scene by setting the background color to indigo (0x303F9F) (remember: the recommended way to set a color in three.js is by HEX value). We can then add the main light and the hemisphere light.

//Add rendering dom element to page.
document.body.appendChild(renderer.domElement);

//Setup scene.
scene.background = new THREE.Color(0x303F9F);
scene.add(createLight());
scene.add(createHemisphereLight());

We create the main light as a point light using the PointLight class. In the constructor we set its color to white, its intensity to 1 (default), 20 for the distance from the light at which the intensity is 0, and finally the decay to 2 (this is the amount the light dims along the distance of the light, and must be set to 2 for physically based lights).
We then set its power to the same as a 100 Watt (1700 Lumen) bulb, and we place it above the scene to create some sort of street light effect (light beam from above). We also activate the ability to cast shadows by setting castShadow to true, we force the shadow map size to 512x512 pixels (to increase performance, as the default is 1024), and we give a little blur to the shadow by setting the radius property to 1.5. We also create a geometry and a material for the light:

  • the geometry is a sphere with radius 0
  • the material is a completely emissive physically based material

In fact, the MeshStandardMaterial is the three.js implementation of a physically based material (so it’s real: three.js rocks with physically based rendering :open_mouth:).

function createLight() {

    var lightGeometry = new THREE.SphereGeometry(0);

    var lightMaterial = new THREE.MeshStandardMaterial({
        emissive: 0xffffee,
        emissiveIntensity: 1,
        color: 0x000000
    });

    var light = new THREE.PointLight(0xffffff, 1, 20, 2);
    light.power = 1700;
    light.castShadow = true;
    light.shadow.mapSize.width = 512;
    light.shadow.mapSize.height = 512;
    light.shadow.radius = 1.5;

    light.add(new THREE.Mesh(lightGeometry, lightMaterial));
    light.position.set(0, 5, 3);

    return light;
}

For the hemisphere light, we create it using the HemisphereLight class. We set the sky color to dark blue (0x303F9F), the ground color to black (0x000000) and its intensity to 1.

function createHemisphereLight() {

    return new THREE.HemisphereLight(0x303F9F, 0x000000, 1);
}

Now we can start to add the stars, the PLY mesh models and the floor mesh model. Each mesh model is added to the scene in the completion callback of its load method.

//Load models.
loadStars(textureLoader, function (stars) {
    scene.add(stars);
});

//The original model paths are not shown here:
//replace "assets/models/…" with the paths of your PLY files.
loadPlyModelUsingPhysicalMaterial(plyLoader,
    "assets/models/…",
    {
        color: 0x3F51B5,
        roughness: 0.5,
        metalness: 0.7,
        clearCoat: 0.5,
        clearCoatRoughness: 0.5,
        reflectivity: 0.7
    },
    new THREE.Vector3(3, -3, 0),
    new THREE.Vector3(0, -Math.PI / 3.0, 0),
    function (mesh) {
        scene.add(mesh);
    });

loadPlyModelUsingPhysicalMaterial(plyLoader,
    "assets/models/…",
    {
        color: 0x448AFF,
        roughness: 0.1,
        metalness: 0.9,
        clearCoat: 0.0,
        clearCoatRoughness: 0.2,
        reflectivity: 1
    },
    new THREE.Vector3(-3, -3, 0),
    new THREE.Vector3(0, -Math.PI, 0),
    function (mesh) {
        scene.add(mesh);
    });

loadPlyModelUsingPhysicalMaterial(plyLoader,
    "assets/models/…",
    {
        color: 0xCCFFFF,
        roughness: 0.9,
        metalness: 0.1,
        clearCoat: 0.0,
        clearCoatRoughness: 0.5,
        reflectivity: 0.1
    },
    new THREE.Vector3(0, -3, 1.5),
    new THREE.Vector3(0, -Math.PI, 0),
    function (mesh) {
        scene.add(mesh);
    });

loadFloor(textureLoader, function (mesh) {
    scene.add(mesh);
});

For the stars, we use the textureLoader to load a circle png texture. When the texture load is completed, we create a Geometry and fill it with a lot of vertices placed at random positions. We also create the material using the texture obtained from the loader (and we set a transparent background on it). Now we can create some WebGL Points using the related three.js class.

function loadStars(textureLoader, completeLoad) {

    textureLoader.load("assets/models/textures/circle.png", function (texture) {

        var starsGeometry = new THREE.Geometry();

        for (var i = 0; i < 10000; i++) {

            var star = new THREE.Vector3();
            star.x = 2000 * Math.random() - 1000;
            star.y = 2000 * Math.random();
            star.z = 2000 * Math.random() - 1000;

            starsGeometry.vertices.push(star);
        }

        var starsMaterial = new THREE.PointsMaterial({
            color: 0x888888,
            map: texture,
            transparent: true
        });

        var stars = new THREE.Points(starsGeometry, starsMaterial);

        completeLoad(stars);
    });
}


For the PLY models, we use the PLY loader to obtain the corresponding geometry. Then we create a MeshPhysicalMaterial using the parameters received. We also set the position and rotation of the mesh and we force the update of the local transform using the updateMatrix() method. We set castShadow to true, as we need these meshes to be considered in shadow mapping. We finally set matrixAutoUpdate to false, as we don’t need to recalculate the position of the mesh on each frame (our meshes are static).

function loadPlyModelUsingPhysicalMaterial(plyLoader, path, parameters, position, rotation, completeLoad) {

    plyLoader.load(path, function (geometry) {

        var material = new THREE.MeshPhysicalMaterial(parameters);
        var mesh = new THREE.Mesh(geometry, material);
        mesh.position.set(position.x, position.y, position.z);
        mesh.rotation.set(rotation.x, rotation.y, rotation.z);
        mesh.castShadow = true;
        mesh.matrixAutoUpdate = false;
        mesh.updateMatrix();

        completeLoad(mesh);
    });
}

For the floor, we use again the textureLoader to load a texture of a marble surface. We then set the wrapS and wrapT properties to RepeatWrapping, to have the texture repeated on the entire surface. We then create a MeshStandardMaterial, which is the base material for MeshPhysicalMaterial, and so it is also a physically based material. We finally also set here the position, the rotation, and matrixAutoUpdate to false.

function loadFloor(textureLoader, completionFunction) {

    textureLoader.load("assets/models/textures/marble.jpg", function (texture) {

        texture.wrapS = THREE.RepeatWrapping;
        texture.wrapT = THREE.RepeatWrapping;
        texture.repeat.set(100, 100);

        var floorMat = new THREE.MeshStandardMaterial({
            roughness: 0.7,
            metalness: 0.1,
            map: texture
        });

        var floorGeometry = new THREE.PlaneGeometry(1000, 1000);
        var floorMesh = new THREE.Mesh(floorGeometry, floorMat);
        floorMesh.receiveShadow = true;
        floorMesh.rotation.x = -Math.PI / 2.0;
        floorMesh.position.y = -3;
        floorMesh.matrixAutoUpdate = false;
        floorMesh.updateMatrix();

        completionFunction(floorMesh);
    });
}

We are ready to render our scene. We just need to create the rendering loop with the following code:

var render = function () {

    requestAnimationFrame(render);

    controls.update();
    renderer.render(scene, camera);
};

render();

The entire scene code is shown below in the gist.

Yeah!!! You made it!! You created a 3D computer graphics web application using three.js :blush:!! And it is also a scene that supports advanced features, in particular physically based rendering :open_mouth:!!

I know three.js

You know three.js now. You’re ready to conquer the web 3D world :smirk:. Ah!! I was forgetting: you can find a live demo of the scene we created on the homepage of my website.

Github Pages and Jekyll: chicio coding birth

So, how did I create this blog? Let’s go through its development process. This is yet another blog post about the creation of a website using Github Pages and Jekyll. But you know, I have to do it.

This will be the first official post of my blog. So, the topic from which I want to start is the development of this website. This is a blog post about this blog (are you serious!? :laughing:). This blog has been built using Github Pages. What exactly are they? Let’s see the definition taken from the github documentation:

GitHub Pages is designed to host your personal, organization, or project pages directly from a GitHub repository. To learn more about the different types of GitHub Pages sites, see “User, organization, and project pages.” You can create and publish GitHub Pages online using the Jekyll Theme Chooser. If you prefer to work locally, you can use GitHub Desktop or the command line. GitHub Pages is a static site hosting service and doesn’t support server-side code such as PHP, Ruby, or Python.

Github Pages supports Jekyll. Also in this case let’s see the definition from the documentation:

Jekyll is a simple, blog-aware, static site generator perfect for personal, project, or organization sites.

This seems the perfect combination for a personal site + blog!!! Let’s see what I used to develop this blog:

  • Github Pages
  • Jekyll
  • Node + Gulp as a task/build runner for development
  • Bootstrap + Sass for CSS/HTML
  • Gsap and Scrollmagic for animation
  • Cloudflare as CDN

I also used three.js for the background scene on my homepage. I will talk about it in a different post. First of all I installed node. Then I created the Jekyll basic directory structure. Then I ran the command:

$ npm init

to create the package.json (the file that will contain the metadata of my project, including its dependencies).
Then I installed Gulp:

$ npm install --save-dev gulp

I decided to use the following gulp libraries to improve my work (using the same command used for Gulp to install them):

  • gulp-concat to concatenate all my CSS and JS files into style.min.css, index.min.js and vendor.min.js (the last one for third party libraries)
  • gulp-sass to compile Sass into CSS
  • gulp-uglify for minification
  • child_process to launch Jekyll alongside Gulp, as explained by Aaron Lasseigne in his blog post
  • critical to dynamically extract the CSS critical path from the various templates, in order to be compliant with the Google PageSpeed recommendations
  • browser-sync for live reloading during development
  • travis for CI

Below you can find the complete gulpfile:

As you can see I have two gulp tasks. I use the first one from my local environment during development. The second one is used by Travis to make a test build on each commit. All the assets created are saved in the assets folder. Jekyll copies each folder that is not prefixed with an underscore. I also installed some gems to improve and automatize some functions of my site:

  • jekyll-seo-tag, to automatically create meta and JSON-LD
  • jekyll-sitemap, to automatically generate the sitemap
  • octopress-minify-html, to minify the HTML
  • jekyll-paginate, to support pagination
  • jemoji, to support emoji in posts

Each of these gems has its own configuration values in the _config.yml or in the front matter, using the YAML format.
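As an illustration, a _config.yml excerpt enabling these gems could look like the following (the pagination values here are examples, not the actual configuration of this site):

# _config.yml (excerpt) - gems and their settings.
plugins:
  - jekyll-seo-tag
  - jekyll-sitemap
  - jekyll-paginate
  - jemoji
  - octopress-minify-html

# jekyll-paginate: number of posts per page and URL pattern.
paginate: 5
paginate_path: "/blog/page:num/"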
With this setup it was easy to develop: I just needed to execute the command

$ gulp

that launches the default gulp task, and start to write my HTML/CSS/Javascript code. The website is updated on each modification and live rendered in the browser (thank you browser-sync :relaxed:).
After the implementation I also did some infrastructure setup to customize my Github Pages website. In particular I added two things:

  • I bought a custom domain to substitute the default github pages url for user sites (“username” I bought my domain from an italian dns provider.
  • I added CloudFlare CDN in order to:
    • speed up the content loading and reach the 99% score on the Google Pagespeed test
    • add HTTPS and HTTP/2 support

In this way the pages load faster than light :zap:.
That’s it. My website + blog is up and running!!