# CHICIO CODING

## Android Studio vs Xcode vs AppCode: a brief comparison about coding speed

In this post I will compare the coding speed it is possible to achieve in some of the JetBrains IDEs and in Xcode, in terms of code creation and refactoring.

IDEs, Integrated Development Environments, are the software developer's toolboxes. When I started to work at lastminute.com group my knowledge of the Android platform was very limited. But… lastminute.com group is an agile software development company, and one of the techniques we use during our development workflow is pair programming: two developers work on the same feature at the same workstation. As reported on Wikipedia, one of the main advantages of pair programming is knowledge sharing:

Knowledge is constantly shared between pair programmers, whether in the industry or in a classroom, many sources suggest that students show higher confidence when programming in pairs, and many learn whether it be from tips on programming language rules to overall design skill. In “promiscuous pairing”, each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which causes knowledge of the system to spread throughout the whole team. Pair programming allows the programmers to examine their partner’s code and provide feedback which is necessary to increase their own ability to develop monitoring mechanisms for their own learning activities.

This is why I started to work with my colleague Francesco Bonfadelli, a senior Android, iOS and Backend developer. During our pair programming sessions I learned a lot about developing mobile apps for the Android platform. One of the things I learned in the first few days is the difference between the official IDEs: Android Studio and Xcode. After seeing the coding speed Francesco was able to achieve during an Android coding session, and how much slower it is to do the same things in Xcode for iOS, I realized how much more advanced Android Studio is, with its set of refactoring features, in comparison with Xcode.
In this post I will briefly analyse some IDEs commonly used for mobile application development, focusing on the coding speed it is possible to achieve by using them, and I will explain why, at the time of this writing, I have started to prefer the JetBrains IDE family (and not only for mobile application development).

### Xcode

I have always loved Xcode. I started to use it 8 years ago and it's still here with me during my daily job. It opens in a few seconds and you can start to code very quickly. But… what happens when your app code starts to grow in complexity and you need to do a simple refactoring operation? Does it help you in some way when you need to create a new class/property? Does it help you when you need to navigate in your code and jump quickly from one class to another? Well, to be honest, it doesn't help you much. Even a simple rename can become a painful operation, especially if you have a project with mixed Swift/Objective-C parts. Everything must be done manually. Consider for example this list of mixed code creation/refactoring operations:

• create a new class
• instantiate it and keep it as a local variable
• add a method to the previous class
• add a parameter to the method previously created
• extract the local variable as a property of the controller in which I created it

In the following video I will try to do these operations in Xcode. At the time of this writing the available Xcode version is 9.2.

More than 2 minutes to implement all the stuff in the above list. Really slow, isn't it?!

### Android Studio

Before lastminute.com group, I had used Android Studio just a few times for some very simple Android apps. Then I started to work with Francesco and he introduced me to the power of the JetBrains IDEs. This IDE gives you the ability to navigate quickly in your source code, create and modify classes, and do a lot of other refactoring operations without leaving the keyboard! Basically, you can write code and forget about your mouse! The list of keyboard shortcuts you can use in your development flow is endless. You can find the complete list here. Let's try to do the exact same operations I did before with Xcode, plus a rename of the created class at the end of all the previous operations. At the time of this writing the available Android Studio version is 3.0.1.

Only 50 seconds to do all the stuff (and I wasn't pushing hard on the keyboard…). As you can see, Android Studio gives you the ability to write code at the speed of light!

### AppCode

As you can imagine, after working a few hours with Android Studio, I started to wonder if there was an IDE that would let me set up the same coding style and workflow for iOS. Here another colleague I worked with, Tommaso Resti, a senior iOS and Android developer, showed me AppCode for the first time. This is another IDE from JetBrains, for iOS development. It improves your development speed by giving you some of the refactoring tools you can find in Android Studio. However, it's not all peace and light in this case: some of the refactoring tools are not available for Swift, and you will still need Xcode to work on XIB and Storyboard files (the JetBrains team developed a plugin for Interface Builder, but it has been discontinued). Anyway, if you are used to the Android Studio workflow, you will feel at home with AppCode.

### Final thoughts

Android Studio and AppCode are based on IntelliJ IDEA, the famous Java IDE from JetBrains. But that's only half of the story: the JetBrains IDE family is really big. You can find an IDE for each of your favourite languages:

• CLion for C and C++
• PhpStorm for PHP
• PyCharm for Python
• RubyMine for Ruby
• GoLand for Go
• Rider for C#

So no worries: if you want to start improving your coding speed, there's probably an IDE for your favourite language. Xcode will always have a special place in my heart, and I will continue to use it in my daily job as a mobile developer. But… the coding speed I gained with the JetBrains IDEs cannot be ignored. This is why I have started to prefer them.

## SceneKit and physically based rendering

In this post I will guide you in the creation of a scene using SceneKit and its physically based rendering features.

SceneKit is one of the Apple frameworks I love the most. What is SceneKit? Let's see the definition from the Apple developer website:

SceneKit combines a high-performance rendering engine with a descriptive API for import, manipulation, and rendering of 3D assets. Unlike lower-level APIs such as Metal and OpenGL that require you to implement in precise detail the rendering algorithms that display a scene, SceneKit requires only descriptions of your scene’s contents and the actions or animations you want it to perform.

As you can see from the definition, there's a lot of stuff inside it. Basically, by using SceneKit you can create games and other 3D applications without knowing any computer graphics algorithms, physics simulation techniques and so on: you describe a scene in terms of objects and features, and Apple does everything for you.
One of the most interesting things introduced in SceneKit in 2016, on the computer graphics side, is physically based rendering (PBR).
We've already seen what PBR is in a previous post, so you already know its theoretical foundation (or go check it out in case you missed it). This means SceneKit can render a physically based scene using its own entirely new physically based rendering engine. Is it worth it? Sure! So let's try it! In this post we will create from scratch a scene that uses the main PBR features you can find inside SceneKit. At the end of this post you will be able to render the scene contained in the image below. So it's time to start coding!

The general approach used in the construction of the scene will be the following: for each main scene component we will create a class that encapsulates the creation of the corresponding SCNNode, the base SceneKit unit element, and its setup to obtain the feature we want.
The first class we are going to create is the Light class, which encapsulates the base features we need to set up a light: position, rotation and generic color. Lights in SceneKit are represented using the SCNLight class.

```swift
class Light {
    let node: SCNNode

    init(lightNode: SCNNode) {
        node = lightNode
    }

    init(lightFeatures: LightFeatures) {
        node = SCNNode()
        createLight()
        set(lightFeatures: lightFeatures)
    }

    func createLight() {
        node.light = SCNLight()
    }

    private func set(lightFeatures: LightFeatures) {
        node.light?.color = lightFeatures.color
        node.position = lightFeatures.position
        node.eulerAngles = lightFeatures.orientation
    }
}
```


The basic features of the light must be passed at construction time using a LightFeatures object.

```swift
class LightFeatures {
    let position: SCNVector3
    let orientation: SCNVector3
    let color: UIColor

    init(position: SCNVector3, orientation: SCNVector3, color: UIColor) {
        self.position = position
        self.orientation = orientation
        self.color = color
    }
}
```


We are now ready to create our PhysicallyBasedLight as a subclass of the Light class. Our physically based light will be of type .directional, and we will customize its intensity and temperature. The intensity is the luminous flux of the light (again, go check my first post about physically based rendering if you don't know what it is), and the temperature is the color temperature expressed in Kelvin (remember: 6500 K corresponds to pure white sunlight). We also activate other interesting features: by setting castsShadow to true we activate the rendering of shadows using the shadow mapping technique, and by setting orthographicScale to 10 we extend a little bit the area of the scene visible from the light, improving the construction of the shadow map.

```swift
class PhysicallyBasedLight: Light {

    init(lightFeatures: LightFeatures, physicallyBasedLightFeatures: PhysicallyBasedLightFeatures) {
        super.init(lightFeatures: lightFeatures)
        set(physicallyBasedLightFeatures: physicallyBasedLightFeatures)
    }

    private func set(physicallyBasedLightFeatures: PhysicallyBasedLightFeatures) {
        node.light?.type = .directional
        node.light?.intensity = physicallyBasedLightFeatures.lumen
        node.light?.temperature = physicallyBasedLightFeatures.temperature
        node.light?.castsShadow = true
        node.light?.orthographicScale = 10
    }
}
```


As for the basic light, we create a class, PhysicallyBasedLightFeatures, that stores the physically based configuration and must be injected at construction time (as you can see from the previous class initializer).

```swift
class PhysicallyBasedLightFeatures {
    let lumen: CGFloat
    let temperature: CGFloat

    init(lumen: CGFloat, temperature: CGFloat) {
        self.lumen = lumen
        self.temperature = temperature
    }
}
```


For physically based rendering we also need another kind of lighting setup to achieve the best result: on the SCNScene, the object that contains all the SCNNode elements of a scene, we need to set the lightingEnvironment and background properties. These let SceneKit approximate indirect lighting more accurately. To set these features we create a new class, PhysicallyBasedLightingEnviroment, that receives the scene to set up. This class sets a cubemap on the scene's lightingEnvironment.contents property and its intensity on the lightingEnvironment.intensity property. To match the result of this lighting setup, it sets background.contents to the same cubemap used for lightingEnvironment.contents.

```swift
class PhysicallyBasedLightingEnviroment {
    let cubeMap: [String]
    let intensity: CGFloat

    init(cubeMap: [String], intensity: CGFloat) {
        self.cubeMap = cubeMap
        self.intensity = intensity
    }

    func setLightingEnviromentFor(scene: SCNScene) {
        scene.lightingEnvironment.contents = cubeMap
        scene.lightingEnvironment.intensity = intensity
        scene.background.contents = cubeMap
    }
}
```


Next step: the camera. We create a Camera class that contains a reference, again, to an SCNNode on which an SCNCamera has been defined. For the camera we first need to set some geometric properties like the position, the rotation and the pivot point that we will use as reference for the camera animation. Last but not least, we set the wantsHDR flag to apply High Dynamic Range post-processing, which adjusts the general brightness of the scene with respect to the display.

```swift
class Camera {
    let node: SCNNode

    init(cameraNode: SCNNode, wantsHDR: Bool = false) {
        node = cameraNode
        node.camera?.wantsHDR = wantsHDR
    }

    init(position: SCNVector3, rotation: SCNVector4, wantsHDR: Bool = false, pivot: SCNMatrix4? = nil) {
        node = SCNNode()
        createCameraOnNode()
        node.camera?.wantsHDR = wantsHDR
        set(position: position, rotation: rotation, pivot: pivot)
    }

    private func createCameraOnNode() {
        node.camera = SCNCamera()
    }

    private func set(position aPosition: SCNVector3, rotation aRotation: SCNVector4, pivot aPivot: SCNMatrix4?) {
        node.position = aPosition
        node.rotation = aRotation
        node.pivot = aPivot ?? node.pivot
    }
}
```


Now it's time to think about the objects we want to display in the scene. For that reason we create an Object class that will represent each kind of object shown in the scene. Obviously, as for the previous classes, the Object class exposes a node property of type SCNNode that represents our object in the scene. We define this class with multiple initializers that let us create object instances in various ways: as an empty object, from an SCNGeometry instance, or from a mesh loaded as an MDLObject using the Model I/O Apple framework. This framework lets us import/export 3D models in a wide range of commonly available formats.

```swift
class Object {
    let node: SCNNode

    init(position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode()
        set(position: position, rotation: rotation)
    }

    init(geometry: SCNGeometry, position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode(geometry: geometry)
        set(position: position, rotation: rotation)
    }

    init(mesh: MDLObject, position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode(mdlObject: mesh)
        set(position: position, rotation: rotation)
    }

    private func set(position: SCNVector3, rotation: SCNVector4) {
        node.position = position
        node.rotation = rotation
    }
}
```


Now we are ready to define a PhysicallyBasedObject class that inherits all the capabilities of the Object class and sets all the features needed to render the object using physically based rendering. Even if all the initializers are available to this subclass, we will require a mesh as MDLObject at construction time, because we will display some particular mesh objects (we will discuss them later). At construction time we also require the position, the rotation and a PhysicallyBasedMaterial. By assigning the material to the firstMaterial property of our node's geometry, our object will be rendered by the SceneKit physically based rendering engine. NB: the meshes we will use don't contain any material, so by assigning the firstMaterial property the mesh will use it for its entire surface.

```swift
class PhysicallyBasedObject: Object {

    init(mesh: MDLObject, material: PhysicallyBasedMaterial, position: SCNVector3, rotation: SCNVector4) {
        super.init(mesh: mesh, position: position, rotation: rotation)
        node.geometry?.firstMaterial = material.material
    }
}
```


So the next question is: how do we define the PhysicallyBasedMaterial class? We create PhysicallyBasedMaterial with a single property, material, of type SCNMaterial. On this material property we set:

• the lightingModel to .physicallyBased, to mark it for SceneKit as a physically based material
• the diffuse.contents property with an appropriate diffuse value
• the roughness.contents property with an appropriate roughness value
• the metalness.contents property with an appropriate metalness value
• the normal.contents property with an appropriate normal value
• the ambientOcclusion.contents property with an appropriate ambient occlusion value

As you can see, we have all the properties we discussed in my introduction to physically based rendering, plus other properties that help us improve realism, in particular the ambient occlusion (this property/technique is not related to PBR, but it helps to improve indirect lighting in the final rendering). Which kinds of values do these properties accept? As stated in the Apple documentation, you can assign to a contents property:

• a color (NSColor/UIColor/CGColor)
• a number (NSNumber)
• an image (NSImage/UIImage/CGImage)
• a string
• a CALayer
• a texture (SKTexture/MDLTexture/MTLTexture/GLKTextureInfo)
• an SKScene
• an array of six images that represents a cube map (as we did for the lightingEnvironment.contents property)

```swift
class PhysicallyBasedMaterial {
    let material: SCNMaterial

    init(diffuse: Any, roughness: Any, metalness: Any, normal: Any, ambientOcclusion: Any? = nil) {
        material = SCNMaterial()
        material.lightingModel = .physicallyBased
        material.diffuse.contents = diffuse
        material.roughness.contents = roughness
        material.metalness.contents = metalness
        material.normal.contents = normal
        material.ambientOcclusion.contents = ambientOcclusion
    }
}
```


Now it's time to construct our scene! We start by creating a new class, PhysicallyBasedScene, subclass of SCNScene. In this way we can customize the default initializer with the steps needed to add all the elements of our scene, and we also have direct access to all the properties of SCNScene. We also define a protocol, Scene, that we will use to manage gestures and animate the scene. In the initializer we call three methods: createCamera(), in which we create the camera, createLight(), in which we create the lights, and createObjects(), in which we create the objects. NB: we also need to define the initializer with coder, because we are subclassing a class that adopts NSSecureCoding, an extension of the NSCoding protocol, which has this required initializer.

```swift
@objc class PhysicallyBasedScene: SCNScene, Scene {
    var camera: Camera!

    override init() {
        super.init()
        createCamera()
        createLight()
        createObjects()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    ...
}
```


So we start by creating our camera. We place it in front of the scene with the pivot moved a little bit and HDR post processing activated.

```swift
private func createCamera() {
    camera = Camera(
        position: SCNVector3Make(0, 2, 0),
        rotation: SCNVector4Make(0, 0, 0, 0), // identity rotation (assumed)
        wantsHDR: true,
        pivot: SCNMatrix4MakeTranslation(0, 0, -8)
    )
}
```


Then we create our lights: a physically based light with a power of 100 lumen and a color temperature of 4000 K. In this way we can match the warm orange color of the cubemap used for the lighting environment that we set on the scene.

```swift
private func createLight() {
    rootNode.addChildNode(createPhysicallyBasedLight().node)
    createPhysicallyLightingEnviroment()
}

private func createPhysicallyBasedLight() -> PhysicallyBasedLight {
    let lightFeatures = LightFeatures(
        position: SCNVector3Make(-2, 5, 4),
        orientation: SCNVector3Make(-Float.pi / 4, 0, 0), // orientation assumed: tilt the light down toward the scene
        color: UIColor.white
    )
    let physicallyBasedLightFeatures = PhysicallyBasedLightFeatures(lumen: 100, temperature: 4000)
    let physicallyBasedLight = PhysicallyBasedLight(
        lightFeatures: lightFeatures,
        physicallyBasedLightFeatures: physicallyBasedLightFeatures
    )
    return physicallyBasedLight
}

private func createPhysicallyLightingEnviroment() {
    let enviroment = PhysicallyBasedLightingEnviroment(
        cubeMap: ["rightPBR.png", "leftPBR.png", "upPBR.png", "downPBR.png", "backPBR.png", "frontPBR.png"],
        intensity: 1.0
    )
    enviroment.setLightingEnviromentFor(scene: self)
}
```


Finally we can place our four objects: a basic plane mesh and three meshes taken from the Stanford 3D Scanning Repository: the dragon, the Happy Buddha and Lucy. All these meshes will be rendered as PhysicallyBasedObject instances. The textures used to model the various materials are taken from the Free PBR website.

```swift
private func createObjects() {
    // NB: the mesh file names are assumptions; use the names of the OBJ assets in your bundle.
    let floor = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "floor", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "floor-diffuse.png",
            roughness: NSNumber(value: 0.8),
            metalness: "floor-metalness.png",
            normal: "floor-normal.png",
            ambientOcclusion: "floor-ambient-occlusion.png"
        ),
        position: SCNVector3Make(0, 0, 0),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(floor.node)

    let dragon = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "dragon", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "rustediron-diffuse.png",
            roughness: NSNumber(value: 0.3),
            metalness: "rustediron-metalness.png",
            normal: "rustediron-normal.png"
        ),
        position: SCNVector3Make(-2, 0, 3),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(dragon.node)

    let buddha = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "happy-buddha", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "cement-diffuse.png",
            roughness: NSNumber(value: 0.8),
            metalness: "cement-metalness.png",
            normal: "cement-normal.png",
            ambientOcclusion: "cement-ambient-occlusion.png"
        ),
        position: SCNVector3Make(-0.5, 0, 0),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(buddha.node)

    let lucy = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "lucy", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "copper-diffuse.png",
            roughness: NSNumber(value: 0.0),
            metalness: "copper-metalness.png",
            normal: "copper-normal.png"
        ),
        position: SCNVector3Make(2, 0, 2),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(lucy.node)
}
```


The meshes are stored as Wavefront OBJ files (the easiest file format of all time). As you can see from the previous code, we use a class called MeshLoader. How does it work? It uses the Model I/O Apple framework to load an OBJ file as an MDLAsset, and then extracts its first MDLObject.

```swift
class MeshLoader {

    static func loadMeshWith(name: String, ofType type: String) -> MDLObject {
        let path = Bundle.main.path(forResource: name, ofType: type)!
        let asset = MDLAsset(url: URL(fileURLWithPath: path))
        return asset[0]!
    }
}
```


We are almost ready to render our scene. The last thing to do is implement the method of the Scene protocol to add some movement to the scene. This method will be called by a tap gesture recognizer attached to the main view that renders our scene (we will see it in a few moments). Inside it we use the runAction method to rotate the camera around its pivot, which we previously moved in order to have a rotation axis around the scene.

```swift
func actionForOnefingerGesture(withLocation location: CGPoint, andHitResult hitResult: [Any]!) {
    // The rotation angle is an assumption: one full turn around the y axis.
    camera.node.runAction(SCNAction.rotate(by: 2 * CGFloat.pi,
                                           around: SCNVector3Make(0, 1, 0),
                                           duration: 30))
}
```


We are ready to render our scene: assign an instance of our PhysicallyBasedScene to an SCNView and see the beautiful result of our work. Below you can find a video of the scene we created.
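This last wiring step can be sketched with a small view controller. This is a minimal sketch, not code from the original project: the controller name, the assumption that its view is an SCNView, and the tap-gesture forwarding are all mine.

```swift
import SceneKit
import UIKit

// Hypothetical host controller: assumes the root view in the storyboard is an SCNView.
class SceneViewController: UIViewController {
    private let scene = PhysicallyBasedScene()
    private var sceneView: SCNView { return view as! SCNView }

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.scene = scene
        // Forward one-finger taps to the Scene protocol method to start the camera animation.
        let tap = UITapGestureRecognizer(target: self, action: #selector(tap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc private func tap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        let hitResult = sceneView.hitTest(location, options: nil)
        scene.actionForOnefingerGesture(withLocation: location, andHitResult: hitResult)
    }
}
```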

That's it! You made it! Now you can show your friends your physically based scene and be proud of it. You can find this example, along with other scenes, in this GitHub repository.

## React Native: use multiple RCTRootView instances in an existing iOS app

In this post I show you how it is possible to use multiple RCTRootView instances in an existing iOS app.

Starting to use React Native in an existing app is really easy: we can have our first React Native component living inside our app just by following the getting started tutorial for existing apps. But what happens if we need to use multiple React Native components in different parts of an existing app? In this tutorial I will show you how we can use multiple instances of RCTRootView to show different React Native components in different parts of our app. Consider, for example, a simple existing iOS app with React Native. It has two very simple React Native components:

• BlueScreen, that shows a blue view
• RedScreen, that shows a red view

```javascript
import React from 'react';
import { AppRegistry, StyleSheet, View } from 'react-native';

class BlueScreen extends React.Component {
  render() {
    return (
      <View style={styles.blue} />
    );
  }
}

class RedScreen extends React.Component {
  render() {
    return (
      <View style={styles.red} />
    );
  }
}

const styles = StyleSheet.create({
  blue: {
    backgroundColor: "#0000FF",
    width: "100%",
    height: "100%"
  },
  red: {
    backgroundColor: "#FF0000",
    width: "100%",
    height: "100%"
  }
});

AppRegistry.registerComponent('BlueScreen', () => BlueScreen);
AppRegistry.registerComponent('RedScreen', () => RedScreen);
```


On the native side there’s a controller, ReactViewController, that shows React Native components given their name.

```swift
class ReactViewController: UIViewController {
    init(moduleName: String) {
        super.init(nibName: nil, bundle: nil)
        view = RCTRootView(bundleURL: URL(string: "http://localhost:8081/index.bundle?platform=ios")!,
                           moduleName: moduleName,
                           initialProperties: nil,
                           launchOptions: nil)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```


There's also another controller, MainViewController, that shows the React Native components described above using multiple instances of ReactViewController. The UI of the app is very simple: there are two buttons on the view of the MainViewController. A tap on the first one shows the ReactViewController with an RCTRootView that contains the RedScreen component. A tap on the second one shows the ReactViewController with an RCTRootView that contains the BlueScreen component.
This basically means that in this app there are multiple RCTRootView instances, one for each controller created. These instances are kept alive at the same time (because the MainViewController keeps a reference to the two ReactViewControllers). The code to start the React Native components is the same contained in the getting started tutorial for existing apps.

```swift
class MainViewController: UIViewController {
    private let blueViewController: ReactViewController
    private let redViewController: ReactViewController

    required init?(coder aDecoder: NSCoder) {
        blueViewController = ReactViewController(moduleName: "BlueScreen")
        redViewController = ReactViewController(moduleName: "RedScreen")
        super.init(coder: aDecoder)
    }

    @IBAction func showRedScreen(_ sender: Any) {
        present(redViewController, animated: true, completion: nil)
    }

    @IBAction func showBlueScreen(_ sender: Any) {
        present(blueViewController, animated: true, completion: nil)
    }
}
```


If we try to run the app something very strange will happen:

• if we do a live reload, we will see our React components refreshed multiple times
• if we press cmd + ctrl + z (shake gesture simulation) in the simulator, two dev menus will be shown
• if we do a live reload while we're in debug mode, the app could crash

What's happening here? Well, there's something wrong in our code. If we take a look at the comments in the React Native source code for the RCTRootView initializers, we notice something very strange:

```objectivec
/**
 * - Designated initializer -
 */
- (instancetype)initWithBridge:(RCTBridge *)bridge
                    moduleName:(NSString *)moduleName
             initialProperties:(NSDictionary *)initialProperties NS_DESIGNATED_INITIALIZER;

/**
 * - Convenience initializer -
 * A bridge will be created internally.
 * This initializer is intended to be used when the app has a single RCTRootView,
 * otherwise create an RCTBridge and pass it in via initWithBridge:moduleName:
 * to all the instances.
 */
- (instancetype)initWithBundleURL:(NSURL *)bundleURL
                       moduleName:(NSString *)moduleName
                initialProperties:(NSDictionary *)initialProperties
                    launchOptions:(NSDictionary *)launchOptions;
```


Whaaaat?! This basically means that the getting started documentation only considers the case where the app has a single RCTRootView instance. So we need to change our ReactViewController so that we can keep multiple RCTRootViews alive at the same time. The solution to our problem is contained in the comments of the initializers above: we need to use the designated RCTRootView initializer to have multiple instances alive at the same time in the app. The new ReactViewController with the new RCTRootView initialization is the following:

```swift
class ReactViewController: UIViewController {

    init(moduleName: String, bridge: RCTBridge) {
        super.init(nibName: nil, bundle: nil)
        view = RCTRootView(bridge: bridge,
                           moduleName: moduleName,
                           initialProperties: nil)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```


Where do we get an RCTBridge instance for the new initializers of ReactViewController and RCTRootView? A new object, ReactNativeBridge, creates a new RCTBridge instance and stores it as a property.
The RCTBridge instance needs an RCTBridgeDelegate. Another new object, ReactNativeBridgeDelegate, will be the delegate of the RCTBridge.

```swift
class ReactNativeBridge {
    let bridge: RCTBridge

    init() {
        bridge = RCTBridge(delegate: ReactNativeBridgeDelegate(), launchOptions: nil)
    }
}

class ReactNativeBridgeDelegate: NSObject, RCTBridgeDelegate {

    func sourceURL(for bridge: RCTBridge!) -> URL! {
        return URL(string: "http://localhost:8081/index.bundle?platform=ios")
    }
}
```


Now it is possible to modify the MainViewController. This controller will create a single ReactNativeBridge with a single RCTBridge instance, which will be passed to the two ReactViewControllers, so they will share the same bridge instance.

```swift
class MainViewController: UIViewController {
    private let blueViewController: ReactViewController
    private let redViewController: ReactViewController
    private let reactNativeBridge: ReactNativeBridge

    required init?(coder aDecoder: NSCoder) {
        reactNativeBridge = ReactNativeBridge()
        blueViewController = ReactViewController(moduleName: "BlueScreen",
                                                 bridge: reactNativeBridge.bridge)
        redViewController = ReactViewController(moduleName: "RedScreen",
                                                bridge: reactNativeBridge.bridge)
        super.init(coder: aDecoder)
    }

    @IBAction func showRedScreen(_ sender: Any) {
        present(redViewController, animated: true, completion: nil)
    }

    @IBAction func showBlueScreen(_ sender: Any) {
        present(blueViewController, animated: true, completion: nil)
    }
}
```


Now if we run the app again, everything works as expected:

• if we do a live reload, we will see our React components refreshed just once
• if we press cmd + ctrl + z in the simulator, one dev menu is shown
• no more crashes with live reload in debug mode

The entire source code of the app used as example in this post is contained in this GitHub repo. Now we're ready to use multiple React Native components at the same time in our app.

## Physically based rendering: informal introduction

In this post I will give you an informal introduction (and my personal understanding) about Physically based rendering.

Physically Based Rendering (PBR) is one of the latest and most exciting trends in computer graphics. PBR is “everywhere” in computer graphics. But wait, what is PBR? PBR uses physically correct lighting and shading models to treat light as it behaves in the real world. Since what you see in a computer graphics application is determined by how light is represented, with PBR it is possible to reach a new level of realism. But wait, what do we mean by “physically correct”?
Before giving an answer and trying to give a detailed definition of PBR, we need to understand some important concepts well.

#### What is light?

Light is a form of electromagnetic radiation: specifically, it is the small subset of the electromagnetic radiation spectrum with wavelengths between 400 nm and 700 nm. The set of studies and techniques that try to describe and measure how the electromagnetic radiation of light is propagated, reflected and transmitted is called radiometry. What are the fundamental quantities described by radiometry? The first one is called flux: it describes the amount of radiant energy emitted, reflected or transmitted from a surface per unit time. The radiant energy is the energy of an electromagnetic radiation. The unit of measure of flux is joules per second $\frac{J}{s}$, i.e. watts, and it is usually denoted with the Greek letter $\phi$.
Two other important quantities of radiometry are irradiance and radiant exitance. The first one describes flux arriving at a surface per unit area; the second one describes flux leaving a surface per unit area (Pharr et al., 2010 [1]). Formally, irradiance is described by the following equation:

$$E = \frac{d\phi}{dA}$$

where the differential flux $d\phi$ is computed over the differential area $dA$. It is measured in watts per square meter.
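As a quick sanity check of the units, consider a point light of flux $\phi$ radiating uniformly in all directions: at distance $r$ the flux is spread over a sphere of area $4\pi r^{2}$, so the irradiance is

$$E = \frac{\phi}{4\pi r^{2}}$$

which is the familiar inverse square falloff of light intensity with distance.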
Before proceeding to the last radiometric quantity, it is useful to give the definition of solid angle. A solid angle is the extension of a 2D angle to 3D on a unit sphere. It is the total area projected by an object on a unit sphere centered at a point $p$. It is measured in steradians. The entire unit sphere corresponds to a solid angle of $4\pi$ (the surface area of the unit sphere). A solid angle is usually indicated as $\Omega$, but it is also possible to represent it with $\omega$, the set of all direction vectors anchored at $p$ that point toward the area on the unit sphere subtended by the object (Pharr et al., 2010 [1]). Now it is possible to give the definition of radiance, which is flux density per unit solid angle per unit area:

$$L = \frac{d\phi}{d\omega \, dA^{\perp}}$$
In this case $dA^{\perp}$ is the projection of the area $dA$ onto a surface perpendicular to $\omega$. So radiance describes the limit of the measurement of incident light at the surface as a cone of incident directions of interest $d\omega$ becomes very small, and as the local area of interest on the surface $dA$ also becomes very small (Pharr et al., 2010 [1]). It is useful to make a distinction between radiance arriving at a point, usually called incident radiance and indicated with $L_{i}(p,\omega)$, and radiance leaving a point, called exitant radiance and indicated with $L_{o}(p,\omega)$. This distinction will be used in the equations described below. It is also important to note another useful property, which connects the two types of radiance at a point $p$ in free space (where no surface is present):

$$L_{o}(p,\omega) = L_{i}(p,-\omega) = L(p,\omega)$$
#### The rendering equation

The rendering equation was introduced by James Kajiya in 1986 [2]. Sometimes it is also called the LTE, Light Transport Equation. It is the equation that describes the equilibrium distribution of radiance in a scene (Pharr et al., 2010 [3]). It gives the total outgoing radiance at a point as the sum of the emitted and reflected radiance from a surface. This is the formula of the rendering equation:

$$L_{o}(p,\omega_{o}) = L_{e}(p,\omega_{o}) + \int_{\Omega} f_{r}(p,\omega_{i},\omega_{o}) \, L_{i}(p,\omega_{i}) \cos\theta_{i} \, d\omega_{i}$$
In this formula the meaning of each symbol is:

• $p$ is a point on a surface in the scene
• $\omega_{o}$ is the outgoing light direction
• $\omega_{i}$ is the incident light direction
• $L_{o}(p,\omega_{o})$ is the exitant radiance at a point $p$
• $L_{e}(p,\omega_{o})$ is the emitted radiance at a point $p$
• $\Omega$ is the unit hemisphere centered around the normal at point $p$
• $\int_{\Omega}…d\omega_{i}$ is the integral over the unit hemisphere
• $f_{r}(p,\omega_{i},\omega_{o})$ is the Bidirectional Reflectance Distribution Function, and we will talk about it in a few moments
• $L_{i}(p,\omega_{i})$ is the incident radiance arriving at a point $p$
• $\cos\theta_{i}$ is given by the dot product between $\omega_{i}$ and the normal at point $p$, and is the attenuation factor of the irradiance due to the incident angle
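To make the integral concrete, here is a small Monte Carlo sketch in JavaScript. All names are hypothetical, and it assumes a Lambertian BRDF ($f_{r} = \frac{albedo}{\pi}$) and constant incident radiance; it estimates the reflected part of the equation by uniformly sampling the hemisphere around the normal.

```javascript
// Sample a direction uniformly on the unit hemisphere around the z axis.
function sampleHemisphere() {
  const u1 = Math.random();
  const u2 = Math.random();
  const z = u1; // cos(theta): uniform in [0, 1] gives uniform area on the hemisphere
  const r = Math.sqrt(Math.max(0, 1 - z * z));
  const phi = 2 * Math.PI * u2;
  return { x: r * Math.cos(phi), y: r * Math.sin(phi), z: z };
}

// Monte Carlo estimate of the reflected term of the rendering equation:
// integral over the hemisphere of f_r * L_i * cos(theta) d(omega),
// with uniform sampling, so pdf = 1 / (2 * PI).
function estimateOutgoingRadiance(albedo, incidentRadiance, samples) {
  const fr = albedo / Math.PI; // Lambertian BRDF, constant over directions
  const pdf = 1 / (2 * Math.PI);
  let sum = 0;
  for (let i = 0; i < samples; i++) {
    const wi = sampleHemisphere();
    const cosTheta = wi.z; // the surface normal is the z axis
    sum += (fr * incidentRadiance * cosTheta) / pdf;
  }
  return sum / samples;
}
```

For a Lambertian surface lit by constant incident radiance $L_{i} = 1$, the estimate converges to the albedo itself, since the cosine integrates to $\pi$ over the hemisphere.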

#### BRDF

One of the main components of the rendering equation previously described is the Bidirectional Reflectance Distribution Function (BRDF). This function describes how light is reflected from a surface. It represents the constant of proportionality between the differential exitant radiance and the differential irradiance at a point $p$ (Pharr et al., 2010 [1]). The parameters of this function are the incident light direction, the outgoing light direction and a point on the surface. The formula for this function in terms of radiometric quantities is the following:

$$f_{r}(p,\omega_{i},\omega_{o}) = \frac{dL_{o}(p,\omega_{o})}{dE(p,\omega_{i})} = \frac{dL_{o}(p,\omega_{o})}{L_{i}(p,\omega_{i}) \cos\theta_{i} \, d\omega_{i}}$$
The BRDF has two important properties:

• it is a symmetric function, so for all pairs of directions $f_{r}(p,\omega_{i},\omega_{o}) = f_{r}(p,\omega_{o},\omega_{i})$
• it satisfies the energy conservation principle: the light reflected is less than or equal to the incident light.
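The energy conservation property can be written explicitly: for every outgoing direction $\omega_{o}$, the cosine-weighted BRDF must integrate to at most one over the hemisphere:

$$\int_{\Omega} f_{r}(p,\omega_{i},\omega_{o}) \cos\theta_{i} \, d\omega_{i} \leq 1$$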

A lot of models have been developed to describe the BRDF of different surfaces. In particular, in recent years microfacet models have gained attention. In these kinds of models the surface is represented as composed of infinitely small microfacets, which model in a more realistic way the vast majority of surfaces in the real world. Each one of these microfacets has its own geometric definition (in particular its normal).
Some specific material surfaces, for example glass, reflect and transmit light at the same time, so a fraction of the light goes through the material. For this reason there is another function, the Bidirectional Transmittance Distribution Function, BTDF, defined in the same way as the BRDF, but with the directions $\omega_{i}$ and $\omega_{o}$ placed in opposite hemispheres around $p$ (Pharr et al., 2010 [1]). It is usually indicated as $f_{t}(p,\omega_{i},\omega_{o})$. The Fresnel equations describe the behaviour of light at the boundary between different media. They also tell us how the balance between different kinds of reflection changes based on the angle at which you view the surface.
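As an illustration, here is a small JavaScript sketch (with hypothetical names) of Schlick's approximation, a formula commonly used in real-time rendering in place of the full Fresnel equations. Its only material parameter is $F_{0}$, the reflectance at normal incidence.

```javascript
// Schlick's approximation of the Fresnel reflectance:
// F(cosTheta) = F0 + (1 - F0) * (1 - cosTheta)^5
// where cosTheta is the cosine of the angle between the view
// direction and the surface normal, and f0 is the reflectance
// at normal incidence (a property of the material).
function schlickFresnel(cosTheta, f0) {
  return f0 + (1 - f0) * Math.pow(1 - cosTheta, 5);
}
```

At normal incidence (`cosTheta = 1`) it returns `f0`, and at grazing angles (`cosTheta = 0`) it goes to 1: every surface becomes mirror-like when viewed edge-on.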

#### Physically Based Rendering

So let’s go back to our original question: what is PBR? PBR is a model that encloses a set of techniques that try to simulate how light behaves in the real world. Quoting the Wikipedia definition:

PBR is often characterized by an approximation of a real, radiometric bidirectional reflectance distribution function (BRDF) to govern the essential reflections of light, the use of reflection constants such as specular intensity, gloss, and metallicity derived from measurements of real-world sources, accurate modeling of global illumination in which light bounces and/or is emitted from objects other than the primary light sources, conservation of energy which balances the intensity of specular highlights with dark areas of an object, Fresnel conditions that reflect light at the sides of objects perpendicular to the viewer, and accurate modeling of roughness resulting from microsurfaces.

You can see from the definition that PBR is a model that uses all the concepts we saw previously in this article to try to get the most accurate results in terms of realism in computer graphics applications. PBR engines and asset pipelines let the artist define materials in terms of more realistic components, instead of tweaking ad-hoc parameters based on the type of the surface. Usually in these kinds of engines/asset pipelines the main parameters used to specify the features of a surface are:

• albedo/diffuse: this component controls the base color/reflectivity of the surface
• metallic: this component specifies whether the surface is metallic or not
• roughness: this component specifies how rough a surface is on a per texel basis
• normal: this component is a classical normal map of the surface
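To give an idea of how these parameters appear in practice, here is a hypothetical material description; the property names and values are illustrative and not taken from any specific engine.

```javascript
// A hypothetical PBR material described with the parameters listed above.
const goldMaterial = {
  albedo: [1.0, 0.77, 0.34],   // base color/reflectivity of the surface
  metallic: 1.0,               // 1.0 = fully metallic surface
  roughness: 0.25,             // fairly smooth microsurface
  normalMap: 'gold_normal.png' // classical normal map texture
};
```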

What results can you achieve using PBR? These are two example images: the first one is taken from my physically based spectral path tracing engine Spectral Clara Lux Tracer, and the second one is taken from PBRT, the physically based engine described in the book “Physically based rendering: from theory to implementation” by M. Pharr, W. Jakob, G. Humphreys.

How cool are these images? We are at the end of the introduction. I hope it is now at least clear what PBR is! See you soon for more stuff about computer graphics and PBR.

[1] M. Pharr and G. Humphreys, “Color and radiometry,” in Physically based rendering: from theory to implementation, 2nd ed., Burlington, Massachusetts: Morgan Kaufmann, 2010, ch. 5, pp. 261-297.
[2] J. T. Kajiya, “The Rendering Equation,” in SIGGRAPH ’86, Dallas, 1986, pp. 143-150.
[3] M. Pharr and G. Humphreys, “Light transport I: surface reflection,” in Physically based rendering: from theory to implementation, 2nd ed., Burlington, Massachusetts: Morgan Kaufmann, 2010, ch. 15, pp. 760-770.

## React Native and Realm: custom manual link for an iOS app with custom directory structure

In this post I will show you how to install Realm as a dependency in a React Native project with a custom folder structure, without using the react-native link command.

What is React Native? It is one of the most successful and loved mobile development frameworks. It lets you build real native mobile applications using JavaScript. It has been developed by Facebook. Let’s see the definition from the official website:

Build native mobile apps using JavaScript and React. React Native lets you build mobile apps using only JavaScript. It uses the same design as React, letting you compose a rich mobile UI from declarative components. With React Native, you don’t build a “mobile web app”, an “HTML5 app”, or a “hybrid app”. You build a real mobile app that’s indistinguishable from an app built using Objective-C or Java. React Native uses the same fundamental UI building blocks as regular iOS and Android apps. You just put those building blocks together using JavaScript and React.

Cool, isn’t it? Write an app using JavaScript with the same performance as native code. You can also reuse your native components and bridge them to the JavaScript side.
Most of the time the React Native framework will also help you manage the dependencies inside your project. But sometimes, especially if your project doesn’t follow the standard React Native directory structure, you can have problems when you try to link an external library.
While I was working on an existing native app integrated with React Native, with a custom directory structure for the React Native and native code, I ran into some problems adding Realm, the famous open source DBMS, as a dependency of the project.
In this post I will show you an example of how you can add Realm to an app that has a custom React Native installation. Let’s start! To describe the installation process I will use a sample app I created for this post, called ReactNativeRealmManualLink. You can find it with Realm installed in this GitHub repo.
Suppose you have a project like the one I shared above, in which React Native is contained in a subfolder of the iOS project, instead of the other way around as in a standard React Native installation.

First, to add Realm as a dependency we need to install it through npm with the following command.

npm install --save realm


Then we try to link the library to the native code with the standard React Native command.

react-native link realm


But here something strange happens: as you can see from the screenshot below the command fails to link the library. So we need to find another way to install the library.

Usually, if the previous command fails, you have to do the linking manually. To do it we navigate inside the node_modules folder, contained in the React Native folder of our project, to find the realm folder. Inside it you will find an Xcode project named RealmReact, which we have to drag into our project. After that we have to add a reference to the static library libRealmReact and compile the project.

Now you would expect that everything works fine but…

What’s happening? The RealmReact project expects the React Native headers at a relative path with respect to its original position. Arrrgghhh!! We need to find another way…

What can we do? We can start by observing that the RealmReact project is just a “container project” for:

• the RealmJS project, which generates the two static libraries libRealmJS.a and libGCDWebServers.a
• an Objective-C++ class RealmReact
• an Objective-C++ file RealmAnalytics

So we can try to modify our main project by:

• adding the RealmJS project and the Objective-C++ files/classes as references
• linking the static libraries libRealmJS.a and libGCDWebServers.a to our main project and see if everything works

Now we need to add to the Header Search Paths option of our main project the paths that were set in the RealmReact project. In this way the RealmJS project will be able to find some headers it needs. You can find the complete list of the folders that we need to add in the screenshot below.

Now if we try to compile our app we expect that everything works fine but…ERROR !!! The build fails !!!

It seems that, in order to compile the C++ source code contained in RealmJS, we need to set in our project settings a recent C++ version that supports some new features, like auto return types on static functions. We can set it to C++14 and set the Standard Library to the LLVM one with C++11 support.
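If the project manages build settings through an xcconfig file, the equivalent settings would look like the following sketch (these are the standard Xcode build setting identifiers for the two options changed above):

```
// Hypothetical xcconfig fragment mirroring the settings described above.
CLANG_CXX_LANGUAGE_STANDARD = c++14
CLANG_CXX_LIBRARY = libc++
```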

One final step is to remove the flag -all_load from the Other Linker Flags option of the main project (if you have it). In this way we avoid loading all the Objective-C symbols and getting a “duplicate symbols” error.

We are now ready to build our app and see if everything works. To do this we create a sample native view controller that has an RCTRootView

class ReactNativeRealmController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Load the JS bundle from the local React Native packager.
        let jsCodeLocation = URL(string: "http://localhost:8081/index.bundle?platform=ios")!
        view = RCTRootView(
            bundleURL: jsCodeLocation,
            moduleName: "ReactNativeRealmScreen",
            initialProperties: nil,
            launchOptions: nil
        )
    }
}


and a sample react component with some realm write/read operations.

import React from 'react';
import { AppRegistry, StyleSheet, Text, View } from 'react-native';

const Realm = require('realm');

class ReactNativeRealmScreen extends React.Component {
constructor(props) {
super(props);
this.state = {
realm: null
};
}

componentWillMount() {
Realm.open({
schema: [{name: 'Band', properties: {name: 'string', singer: 'string'}}]
}).then(realm => {
realm.write(() => {
realm.create('Band', {name: 'HIM', singer: 'Ville Valo'});
});
this.setState({ realm });
});
}

render() {
const message = this.state.realm
? 'The singer of HIM band is: ' + this.state.realm.objects('Band').filtered('name = "HIM"')[0].singer
: 'Loading...';

return (
<View style={styles.container}>
<Text>
{message}
</Text>
</View>
);
}
}

const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
backgroundColor: '#FFFFFF',
}
});

AppRegistry.registerComponent('ReactNativeRealmScreen', () => ReactNativeRealmScreen);


We are now ready to build our app and, as expected, everything works fine.

That’s it!! As I told you before, you can find the complete example in this GitHub repo. We are now ready to create our React Native components with Realm.