# CHICIO CODING

## Flow, the static type checker for Javascript: how to use it and a brief comparison with TypeScript

In this post I will talk about how I used Flow to do static type checking on the JavaScript code of my blog, and I will also briefly compare it with its main rival, TypeScript.

In my daily job at lastminute.com group I usually work with TypeScript. My personal website (the one where you are reading this article), instead, has been developed using JavaScript on the client side.
As you may already know, JavaScript is untyped, so it doesn’t have static typing. This can lead to bugs, and a code base becomes more error-prone as it grows. As I already explained in another post, TypeScript is a typed superset of JavaScript that compiles to plain JavaScript for any browser, any host and any OS. TypeScript is basically “JavaScript on steroids”: it provides optional, static type checking at compile time. Since it is a superset of JavaScript, all JavaScript code is valid TypeScript code.
Since I already use TypeScript in my daily job, I started to wonder if there was another way to add static type checking to the JavaScript code already in place on my website (the one where you’re reading this blog post). You know how it is: if you already know a tool or a programming language, you want to try the other technologies on the market so you’re prepared for whatever comes next in your career. This is why I decided to use Flow for my website’s JavaScript source code.
What is Flow? Flow is a static type checker for JavaScript developed by Facebook. Let’s see its main features, taken from the homepage of the official site:

• TYPE INFERENCE: using data flow analysis, Flow infers types and tracks data as it moves through your code. You don’t need to fully annotate your code before Flow can start to find bugs.
• REALTIME FEEDBACK: Flow gives you fast feedback while you code by incrementally rechecking your code as you make changes.
• JAVASCRIPT, YOUR WAY: Flow is designed to understand idiomatic JavaScript. It understands common JavaScript patterns and many of the weird things we JavaScript developers love to do.
• EASY INTEGRATION: Flow integrates well with many tools, making it easy to insert into your existing workflow and toolchain.
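Type inference in particular means you can often leave annotations off entirely. A minimal sketch with a hypothetical function (not taken from my code base):

```javascript
// No annotations here: Flow infers that `n` must be a number from the
// arithmetic, and a call like square('two') would be flagged at check time
// even though this file contains no explicit types.
const square = (n) => n * n

console.log(square(4))
```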

It seems really cool! In this post I will describe my experience with Flow. I will show you how I integrated it into my website’s JavaScript build process and how I used it to do static type checking on my JavaScript source code. So let’s start from the installation process.

#### Installation and setup

First of all, I added Flow to my dev dependencies. I decided to use Flow in combination with flow-remove-types, a small CLI tool for stripping Flow type annotations from files.

npm install --save-dev flow-remove-types
npm install --save-dev flow-bin


Then I created a new npm script phase flow that launches the shell script flow.sh. In this script I perform all the Flow operations:

• I move into my js folder with cd _js
• I run Flow to execute the static type checking on my code base with the command ../node_modules/.bin/flow
• I run flow-remove-types to strip the Flow type annotations from the JavaScript files. The generated files are saved in the ../_jsbuild/ folder, specified with the destination flag -d. I also passed the --pretty option so that flow-remove-types removes the whitespace it would otherwise leave in place of the stripped annotations.

Below you can see the entire script I created (you can also find it here).

#!/usr/bin/env sh

# Enter into js source folder
cd _js

# Run flow
../node_modules/.bin/flow

# Remove flow types
../node_modules/.bin/flow-remove-types ../_js/ -d ../_jsbuild/ -i flow-typed/ --pretty
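To make the stripping step concrete: flow-remove-types only deletes the annotations, leaving valid plain JavaScript behind. A hedged sketch with a hypothetical one-liner (the commented line shows the annotated input, the code below it the emitted output):

```javascript
// Annotated input:   const add = (a: number, b: number): number => a + b
// Emitted output (annotations stripped, spacing normalized by --pretty):
const add = (a, b) => a + b

console.log(add(2, 3))
```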


As you may expect from a standard npm script phase, I can invoke it in a shell with the command npm run flow. The final step of the setup was to install flow-typed, a repository of Flow interface definitions for third-party libraries. In the next section I will explain why I need it (and why you will need it too). flow-typed must be installed globally, which can be done with the following command (I run it in the setup script I have for my website, which I launch when I have to prepare the development environment on a new computer):

npm install --global flow-typed


Let’s start adding some types. To show you how to add types with Flow, I will start with the piece of code below. It contains the animation used to show an image after it has been downloaded from the network.

import { TweenLite } from 'gsap'

const lazyLoadImageAnimation = (image, delay) => {
  TweenLite.from(image, 0.3, {
    opacity: 0,
    delay
  })
}



The first thing I have to do is tell Flow that I want to check the types in this file. To do this, I just add the comment /* @flow */ at the top of the file.
Then I can start to add types. Flow supports the following primitive types:

• boolean
• number
• string
• void (undefined)
• null
• any, a way to opt out of the type checker. A variable of type any will accept any type of value. Using any is completely unsafe and should be avoided whenever possible.
• mixed, which also accepts any type of value. The difference is that before you use a value of type mixed you must figure out what its actual type is, or you’ll end up with an error.
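Flow also accepts annotations inside special comments (/*: ... */), which keeps the file valid plain JavaScript even without the stripping step. A sketch of some of these types using that comment syntax (the values and the describe function are hypothetical, not from my code base):

```javascript
// Comment annotations: ignored by node, read by Flow.
const retries /*: number */ = 3
const title /*: string */ = 'chicio coding'

// `mixed` forces a typeof refinement before the value can be used;
// `any` would instead let every operation through unchecked.
const describe = (value /*: mixed */) /*: string */ => {
  if (typeof value === 'string') return value.toUpperCase()
  return String(value)
}

console.log(describe('flow'))
console.log(describe(retries), title)
```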

Obviously Flow also supports objects, classes and interfaces. So I can try to add types to the parameters of the lazyLoadImageAnimation function. The first parameter is a DOM element that is then passed to the TweenLite from function, so I can give the image parameter the Element class type. The second parameter, delay, is a number used to tweak the TweenLite animation configuration, so, as you may expect, I can give it the type number. Below you can find the final implementation with Flow types.

/* @flow */
import { TweenLite } from 'gsap'

const lazyLoadImageAnimation = (image: Element, delay: number): void => {
  TweenLite.from(image, 0.3, {
    opacity: 0,
    delay
  })
}



If I run npm run flow I expect everything to go well, but instead… I receive the following error: “Cannot resolve module gsap.”. You can see the error reported in the screenshot below.

What’s happening here? The TweenLite class is imported from the third-party library gsap, and Flow doesn’t know the type definitions for it. I have to provide these definitions to enable Flow to type check the parts of the code that refer to a third-party library as well. This can be done in two ways:

• check if the flow-typed repo contains the type definitions for the library I’m using and, if so, install them
• write your own Flow type definitions for the library you are using, if they are not present in the flow-typed repository

Unfortunately, in this case the flow-typed repository doesn’t contain a type definition for the gsap library, so I need to write my own. How can I do that? First of all, let’s create a folder inside the project called flow-typed (what a coincidence!). This is the standard folder where Flow searches for third-party library type definitions. If you want, you can customize the search path with a custom folder in the .flowconfig file. Then I create a new file gsap.js inside it. In this file I declare a new module definition with the syntax declare module "&lt;module name&gt;"; in this case it will be declare module "gsap". Then I can declare a new class to be exported, TweenLite, which is the one I’m using in the piece of code above. To this class I add the definitions for all the methods I’m using and for which I need Flow types. From the piece of code above it’s easy to see that the only method I’m using is from(...), so I can add a type definition just for it. To do this, I declare the signature of the method with the types I expect for each parameter. One thing to note is that the first and the third parameter can accept different types, as specified in the gsap documentation. This is why I gave them the type any: basically, I’m saying that I don’t want any type check on the first and third parameter. Below you can find the complete declaration.

declare module "gsap" {
  // Animation must be declared too, otherwise Flow cannot resolve it
  declare class Animation {}

  declare export class TweenLite extends Animation {
    static from(target: any, duration: number, vars: any): TweenLite;
  }
}


If I run the command npm run flow again, everything works as expected.
Let’s see another example. The piece of code below is used to load a web font with the help of the webfontloader library.

import WebFont from 'webfontloader'

const loadFont = (finish) => {
  WebFont.load({
    google: { families: ['Open Sans'] },
    active: finish ? finish : undefined,
    inactive: finish ? finish : undefined
  })
}



Let’s start by adding the type for the finish parameter. The finish parameter is a function that could be null, so I need to use a maybe type (also known as an optional type). I declare it as an optional function: ?(() => void). Then again I have an import from an external library, webfontloader. In this case the webfontloader module declaration can be taken from the flow-typed repo. To do that, I can install the definition contained in that repo into my source code with the following command:

flow-typed install webfontloader@v1.x.x


Now I’m ready to run the command npm run flow again. Everything works as expected and Flow says that my types are correct. This is the final version of the source code with all the types.

/* @flow */
import WebFont from 'webfontloader'

const loadFont = (finish: ?(() => void)): void => {
  WebFont.load({
    google: { families: ['Open Sans'] },
    active: finish ? finish : undefined,
    inactive: finish ? finish : undefined
  })
}
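The maybe type above compiles away entirely: at runtime it is just a null check. A minimal runnable sketch using Flow’s comment-annotation syntax (runWhenDone and its callback are hypothetical names, not part of webfontloader):

```javascript
// ?(() => void) in comment form: finish may be a function, null or undefined.
const runWhenDone = (finish /*: ?(() => void) */) => {
  // Flow requires this guard before finish can be called.
  if (finish) finish()
}

let called = false
runWhenDone(() => { called = true })
runWhenDone(null) // safe: the guard skips the call
console.log(called)
```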



#### Flow vs TypeScript

So how does Flow compare to TypeScript? The main reasons to choose Flow instead of TypeScript are:

• it’s an easy-to-use utility. Flow is not a complete language like TypeScript: it is a utility that you add to your code, and this can be as simple as putting /* @flow */ at the beginning of a file. In fact, with Flow you’re still writing JavaScript code.
• very good React support. Flow comes from Facebook, like React, so you will find it easier to integrate Flow with React and React Native.

The cons of Flow with respect to TypeScript are:

• a smaller community compared to the TypeScript one. This basically means that for Flow you will find fewer tutorials, online resources and library definitions.
• weaker IDE integration. TypeScript is much better supported, in particular in terms of automated refactoring tools.

#### Conclusion

This was my experience with Flow. In the end I think it’s a good alternative to TypeScript, especially for the “true pure JavaScript lovers”. Let me know in the comments if you like it or if you prefer TypeScript.

## Blender tutorial: Cycles overview

In this new post of the series Blender tutorial I will talk about Cycles rendering engine.

In the previous post of the series “Blender tutorial” we talked about character rigging. In this post we will talk about the Cycles rendering engine. What is exactly Cycles? Let’s see a quote taken from Blender doc:

Cycles is Blender’s ray-tracing production render engine.

Cycles is a state-of-the-art ray tracing engine built into Blender. With it we can achieve the same level of realism (photorealism) as other computer graphics production tools.
Let’s start by seeing how we can activate it. From the menu at the top of the 3D window choose Cycles Render. When we do this, the render properties tab will change, because Cycles has different options compared to the standard Blender Render. One of the most important things to note is that Cycles uses the CPU and the GPU of our computer to render our scenes in interactive mode. This basically means that we can see the final rendered scene and navigate through it. Wonderful!

How do we create materials for Cycles? We can create a material from the same tab we saw previously. When the Cycles render engine is selected, the options to customize the material change accordingly. In particular, there’s a surface option where we can select the type of surface BSDF we want from a list. The other options will change based on this selection. We can also add textures, as for standard materials. To do that, we simply have to go into the color option and select the texture we want.

As for lights, Cycles supports different types of lights. The types of lights and their setup are similar to the ones we can find in the standard Blender engine:

• point
• sun
• spot
• hemi
• area

We can also use ambient occlusion to improve the realism of the lighting. We can activate it, as we did before, under the World tab in the properties panel. That’s all, my friends, for Blender: this was the last post of the “Blender tutorial” series. I hope you enjoyed this series of tutorials about Blender.

## Blender tutorial: armatures and character rigging

In this new post of the series Blender tutorial I will talk about armatures and character rigging.

In the previous post of the series “Blender tutorial” we talked about animation. In this post we will talk about armatures and character rigging.
Let’s start from armatures. Armatures are composed of bones. We can create a bone by selecting Add -> Armature -> Single Bone. Every bone is composed of:

• a base
• a body
• a tip

By selecting the body we can move the entire bone to a new position. By selecting the base or the tip we can move just one of the ends of the bone. When a bone is selected, two new tabs become available in the properties panel:

• the armature tab
• the bone tab

Let’s see the armature tab first. First we have the display options, to manage how the bone is displayed. We can show the name and the color and, more importantly, we can activate the X-Ray mode that lets us see the bone through the character.

To create a complete armature we have tools similar to the ones we previously saw for modeling. After selecting edit mode, in the left panel of the 3D window we have the extrude and subdivide options that let us create a complete skeleton for a character. In the scene we will find our armature object with all the bones connected.

We can now start to add the armature to an object. To do that, we just have to place/create the bones inside our object. After that we select the object and the bones (in this specific order), and choose from the menu at the bottom of the 3D window the option Object -> Parent -> Armature Deform. After this operation we can go into pose mode (by choosing it from the menu at the bottom of the 3D window). If we move one of our bones in this mode, the part of the object/mesh that contains that bone will move accordingly.

Sometimes we will also need to constrain a bone to the movement of other bones. Instead of doing it manually, we can use inverse kinematics. To be clearer, this is an extract from the Blender documentation:

IK simplifies the animation process, and makes it possible to make more advanced animations with lesser effort. IK allows you to position the last bone in a bone chain and the other bones are positioned automatically

We can add an inverse kinematics constraint to a bone by choosing it from the menu under the Bone Constraint tab in the properties panel. This tab appears only when we are in pose mode.

After setting up bones, armature and inverse kinematics, we are ready to animate our character. To do that, we just need to be in pose mode and set the keyframes as we did for a standard animation.

That’s all for character rigging. In the next post we will talk about the rendering engine Cycles.

## Blender tutorial: animation

In this new post of the series Blender tutorial I will talk about animations.

In the previous post of the series “Blender tutorial” we talked about camera and rendering options/effects in Blender. In this post we will talk about animation. Let’s start from the timeline. The timeline is usually placed at the bottom of the default layout of Blender. In it you can select a specific frame by clicking on it. We also have some controls to play, stop and jump backward and forward in the animation. There is also the possibility to set the start and end frames of the animation.

To create an animation we first of all need to set keyframes. To do this we have to select the frame that we want as a keyframe in the timeline, then go into the properties panel, change one of the spatial properties we want to animate (location, rotation or scale) and right click on it to show a menu where we can select “Insert Keyframes”. After that you will see the value of the property we decided to animate become yellow (orange in my images below, because I have a custom theme).

To make an animation we need at least 2 keyframes. We can also set keyframes by selecting the record button in the timeline. This button lets us set a keyframe for the property we select from the list just next to it. The keyframes are shown on the timeline as yellow lines.

After setting the second keyframe we finally have our first animation. In the video below you can see the final result.

In our first animation we animated the movement of the object. In Blender we are not limited to location or rotation: we can animate basically any property we want. We can animate color, scale, light, camera and so on. Let’s see, for example, how we can animate the energy property of a light. To do this we basically follow the same approach we used for the previous location animation: we modify the energy property and we set a keyframe for each value we want (remember that we need at least two keyframes to have an animation). After that our animation is ready and can be played (remember to switch the 3D window to texture mode if you want to see the animation without executing a complete render).

Sometimes we will need a more precise setup of our animation: changing the values between keyframes, changing the interpolation method and so on. To do these operations we can use the graph editor. This is a separate editor that we can open by selecting graph editor from the editor selector in an existing panel of our Blender layout (or in a new panel created ad hoc). In this editor we can zoom in/out by using ctrl + mouse movement. On the left you will find all the animations for the specific object selected in the scene. From here we can modify the keyframe curve by selecting it and moving it to the position we prefer. We can also modify the interpolation by selecting our animation curve and choosing Key -> Interpolation Mode from the menu.

The graph editor is not the only animation editor. We can also use the dope sheet. It can be faster to edit an animation there instead of in the graph editor. In it we don’t have the interpolation curves of the animation: we just have a representation of the keyframes, and we can edit them by dragging them after selecting them with a right click of the mouse.
Last but not least, we have animation paths. To create an animation path we have to add a path using the menu Add -> Curve -> Path. Then we need to add a constraint to the object we want to animate so that it will follow the path. After that we can set the keyframes using the same approach shown before.

That’s all for animation. In the next post we will talk about character rigging.

## Blender tutorial: camera and rendering

In this new post of the series Blender tutorial I will talk about camera and rendering.

In the previous post of the series “Blender tutorial” we talked about light in Blender. In this post we will talk about camera and rendering. Let’s start from camera.
If we select a camera, we can access its properties from the specific camera tab in the properties panel. Here we have a section called “Display” that lets us customize how we see the camera in the viewport (limits, names and so on). Then we have a “Lens” section, where we can choose the type of the camera:

• orthographic
• perspective
• panoramic

For the perspective camera we can change:

• the focal length
• the shift: as the word says, we can shift the camera view from its center
• the clipping: the start and end distances between which the objects seen by the camera will be rendered

For the orthographic camera the most important parameter is the orthographic scale, which represents the maximum dimension (in scene units) of the portion of space captured by the camera.

We can place cameras manually or we can use constraints. We can create a constraint by clicking the specific constraint tab (the one with the chain icon) in the properties panel of the camera and adding a new constraint. We can, for example, use a Damped Track to correlate the movement of our camera with the position of an object we select as the one to be tracked.

As for rendering, we can control it in the properties panel under the render tab (the one with the camera icon). For example, we can customize where the render result will be displayed: by default it is shown in the image editor, but we can change this by selecting one of the available options in the list. We can change the dimensions of the render result in terms of width/height but also in terms of FPS. We will see why this parameter is important in a future post about animations. We can also customize the algorithm used for anti-aliasing. Very importantly, we can also customize the shading options (as we already saw in a previous post about shadows), the performance (for example by adjusting the number of threads that Blender is allowed to use), and the format of the final output of the rendering.

As we said before, in Blender it is possible to render animations. We will go through all the details about animation in a future post. For now we can see how to achieve cool rendering effects, for example motion blur. Motion blur is the effect you get when an object moves during the exposure of the camera. We can achieve it by activating it in the rendering properties and setting the number of motion samples we want. We can also modify the shutter to change the final result of the motion blur (the default value for samples is 1, and we need to modify it because with a value of 1 we will have no motion blur).

One last effect we can achieve with the Blender render engine is depth of field. This effect simulates the fact that only a part of the scene is in focus, based on the focal distance from the camera. To set up depth of field in our scene, first of all we have to activate it by increasing the distance option in the depth of field section of the camera properties.

After that we have to switch to the node editor, add a defocus filter, connect the render image and depth outputs to the filter, and then connect the image output to the final composite result.

That’s all for camera and rendering. In the next post we will talk about animation.