CHICIO CODING

Dirty clean code. Creative Stuff. Stuff.

Blender tutorial: selecting and transforming objects

Second post of the “Blender tutorial” series. This time we will learn how to select and move objects.


In the first post of the “Blender tutorial” series we learned how the user interface is composed and the most important commands to navigate inside a scene. This time we will focus on selecting and moving objects inside the 3D view.
We can select an object by using the right mouse button. If you’re on a MacBook like me, the right click is emulated in some way (my personal preference is to use a two-finger tap). To select multiple objects we can hold the Shift key and right-click the objects we want. The selected objects will be marked in the outliner editor with a little circle, and in the 3D view with a little border. The color of the border will change based on the theme you selected. In the screenshot below we can see that I selected 2 of the 3 objects in the scene. If we have multiple objects selected, the last one will have a different border. In my case the cube is the last object selected.

blender selecting objects 1

To deselect an object we can just right click on it again. We can also select all the objects in a scene, including cameras and lights, by pressing the “a” key.
There’s also a Select menu that gives us more control over the selections we can do. In particular, we have the Circle Select, that lets us select objects based on a circular selection pattern, and the Border Select, that lets us select objects based on a rectangular selection pattern. There are also other options to select random objects or invert the current selection.

blender selecting objects 2

To translate objects, we can use the transform tools. We can find them under Tools -> Transform. Because we are trying to translate an object in 3D space using a mouse pointer that works in 2D space, it can be difficult to understand in which direction the translation is happening. We can constrain the movement to a single axis by pressing:

  • “x” key for the x axis
  • “y” key for the y axis
  • “z” key for the z axis

blender moving objects 1

There’s also the possibility to move an object by discrete values, using its location properties under the properties editor or the object properties panel in the 3D window. Finally, we can also move objects using the 3D manipulator widget in the 3D window. We can activate it by clicking its icon. After that, when we select an object we will see three axes. Drag one of them to translate the object in that direction.

blender moving objects 2

We can rotate and scale an object using the same tools we used for translation:

  • the transformation tools
  • the 3D manipulator widget

One important thing to consider when we are working with transformations is the transform orientation. This option defines the orientation of the transform operation and directly influences its final result. You can change the transform orientation in the 3D manipulator widget.

blender moving objects 3

The 3D manipulator widget will place the start of the transform based on the origin of an object. We can change it by selecting one of the options under Object -> Transform in object mode:

  • Geometry to Origin
  • Origin to Geometry
  • Origin to 3D Cursor
  • Origin to Center of Mass (Surface)
  • Origin to Center of Mass (Volume)

blender change objects origin 1

When we want to transform a group of objects at once, we have a number of options to change the pivot point of the selection. We can choose it by selecting one of the options available from the list near the 3D transform manipulator widget.

blender change objects pivot 1

That’s all for selecting and transforming objects. In the next post we will start to explore the art of modeling in Blender.

React Native: the power of RCTBundleURLProvider to build, run and debug on an iOS device from Xcode

In this post I will talk about how to set up your React Native project on iOS to build, run and debug it on a real device from Xcode.


In the last few days I was pair programming with my colleague Mariano Patafio on some new features for a React Native app. Mariano is a senior iOS and Android developer and a true :apple: Apple fanboy :apple::laughing: (like me :stuck_out_tongue_closed_eyes:). At some point during our pair session we wanted to test the app on a real iOS device. The app we were working on was an existing iOS app to which we added some React Native views. If you follow the instructions contained in the React Native docs about integrating it into an existing app, you will discover that with that setup you will not be able to run your app on a real device from Xcode. It will work only in the simulator.
In this post I will show you what we discovered: with the right setup it is possible to build, run and debug your React Native app from Xcode. To do this I will use the React Native example app from a previous post I wrote about how to create multiple RCTRootView instances inside your existing app integrated with React Native. The app is very simple: it contains a main screen with 2 buttons that let the user open two different React Native views. You can find this example with the implementation described below in this github repo.
Let’s assume we start with the old implementation of the app described above, where we implemented a ReactNativeBridgeDelegate that returns the URL of the index.bundle that contains our compiled React Native JS code. This URL was pointing to localhost.

class ReactNativeBridge {
    let bridge: RCTBridge
    
    init() {
        bridge = RCTBridge(delegate: ReactNativeBridgeDelegate(), launchOptions: nil)
    }
}

class ReactNativeBridgeDelegate: NSObject, RCTBridgeDelegate {
    
    func sourceURL(for bridge: RCTBridge!) -> URL! {
        return URL(string: "http://localhost:8081/index.bundle?platform=ios")
    }
}

React Native bridge delegate localhost

If we try to build this app on an iPhone and we open one of the React Native screens, we will receive the following error (obviously, because we are trying to access localhost from the iPhone, while our React Native node server is running on the MacBook Pro where we are building the app).

React Native error on device

How can we build on a real device? First of all we need to add a new build phase to our project that runs the React Native Xcode Bundler before the real build. The React Native Xcode Bundler is a shell script named react-native-xcode.sh that you can find inside your React Native npm package under <your app root folder>/node_modules/react-native/scripts/. This script takes our React Native index.js as input.

React Native setup bundler

Now we can change our ReactNativeBridgeDelegate implementation. Instead of returning a hard-coded URL, we use the RCTBundleURLProvider.sharedSettings().jsBundleURL(forBundleRoot: "index", fallbackResource: nil) method. We need to pass "index" as the bundle root parameter (the name of the main JS file).
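For reference, the updated delegate could look like the following sketch (imports omitted as in the snippet above; only the returned URL changes):

class ReactNativeBridgeDelegate: NSObject, RCTBridgeDelegate {
    
    func sourceURL(for bridge: RCTBridge!) -> URL! {
        // Let React Native resolve the packager URL for us (device or simulator)
        // instead of a hard-coded localhost URL.
        return RCTBundleURLProvider.sharedSettings().jsBundleURL(forBundleRoot: "index", fallbackResource: nil)
    }
}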

React Native bundle url provider setup

Now we can try to build and run the app again on a real device. As you can see, everything now works as expected.

React Native app working on device

What’s happening under the hood? Which kind of “magic” are we using here :smirk:? If we start to debug from the call to RCTBundleURLProvider.sharedSettings().jsBundleURL(forBundleRoot: "index", fallbackResource: nil) and we step inside the React Native source code, at some point we will see a call to a method named guessPackagerHost. In this method there’s a piece of code that tries to open and read the content of a file named ip.txt (this file is supposed to be in the main bundle of the app). The string returned by this method is used as the hostname in the URL that React Native uses to call the packager running on our Mac.
Who created this ip.txt file? Earlier we added the execution of the React Native Bundler script as a build phase. If we look at the source code of this script we will find the following piece of code:

React Native ip txt generation

Whaaaaaaattttt?!?!?!?!?!? :satisfied: This piece of code basically creates a file named ip.txt that contains the IP address of your computer, extracted using an ifconfig command, concatenated with the domain xip.io. So the file will contain a string like the following one: <your computer IP address>.xip.io. This is the string returned by the guessPackagerHost method. In the screenshot below you can find the source code of this method and the string that it returns.

React Native my local ip
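The real guessPackagerHost lives in React Native’s Objective-C sources, but conceptually it boils down to something like this rough Swift sketch (names and fallback are made up for illustration):

import Foundation

// Rough illustration of the idea behind guessPackagerHost: read ip.txt from the
// app bundle and use its content as the packager hostname, falling back to localhost.
func guessedPackagerHost() -> String {
    if let path = Bundle.main.path(forResource: "ip", ofType: "txt"),
       let host = try? String(contentsOfFile: path, encoding: .utf8) {
        return host.trimmingCharacters(in: .whitespacesAndNewlines)
    }
    return "localhost"
}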

What is the xip.io string added after the IP address? xip.io is a free public DNS service created at Basecamp. Below you can find a quote from the homepage of the service:

What is xip.io? xip.io is a magic domain name that provides wildcard DNS for any IP address. Say your LAN IP address is 10.0.0.1. Using xip.io,

        10.0.0.1.xip.io   resolves to   10.0.0.1
    www.10.0.0.1.xip.io   resolves to   10.0.0.1
 mysite.10.0.0.1.xip.io   resolves to   10.0.0.1
foo.bar.10.0.0.1.xip.io   resolves to   10.0.0.1

…and so on. You can use these domains to access virtual hosts on your development web server from devices on your local network, like iPads, iPhones, and other computers. No configuration required!

How does it work? xip.io runs a custom DNS server on the public Internet. When your computer looks up a xip.io domain, the xip.io DNS server extracts the IP address from the domain and sends it back in the response.

React Native xip.io

This basically means that xip.io is a domain name we can use to access the local packager environment on our Mac from our iPhone or iPad, provided that all the devices are on the same network.
That’s all, and as you can see everything works “like magic” :relaxed:.

Blender tutorial: user interface

In this new series of posts I will talk about learning to use Blender, the famous 3D computer graphics software. A series of tutorials for beginners/newbies (like me).


If you like computer graphics, at some point during your studies you will want to learn to use a 3D content creation software. That’s why I started to study Blender, the most beautiful and famous open source 3D software, available for free. In this series of posts I will guide you through its features with tutorials and reference lists. At the time of this writing, the version of Blender used for these tutorials is 2.79. All the tutorials are written using Blender on a MacBook Pro.
Let’s start from the user interface.

The default layout is composed of individual panels, and inside each one of them we can find an editor. The main editors are:

  • info editor, in the top left part of the screen. It contains the main menu of Blender. It also contains a layout switcher, to quickly change the layout of Blender based on our needs (animation, modeling…), and a renderer switcher, to select the render engine.
  • outliner editor, which contains a list of all the objects in the scene.
  • properties editor, which contains the properties of an object. It is context specific, so its content will change according to the selected object. It also contains a lot of context-specific tabs with properties for the different contexts.
  • timeline and animation editor, used to create and modify animations.
  • viewport, which contains the 3D window in which our scene is shown and where we can add, remove or modify objects.

blender ui editors

We can switch a panel from one editor to another by clicking on the icon that shows the currently selected editor: a list with all the available editors will be shown and we can choose one of them.

blender ui switch editor

On the left side of the viewport we can find a series of tabs that contain some operations, tools and actions we can apply to the 3D window content. These tabs will change based on whether an object is selected and on which type of object is selected. We can also show the object properties sub-panel by clicking the plus (+) button on the right. That sub-panel gives us some information about the object we selected in the 3D window.
At the bottom of the 3D window we can find the 3D manipulator widget, which allows us to scale, rotate and translate objects with a mouse drag.
Then we have the layer switcher, which lets us create layered scenes (we will talk about layers in a future post).
We also have the viewport shading button, which lets us choose the type of visualization we want for our scene:

  • bounding box
  • wireframe
  • solid, that shows also the colors of the objects
  • texture, that shows also the textures of the objects
  • material, that shows also the materials applied to our objects
  • rendered

Finally we have the editing interaction mode selector, which allows us to switch between editing modes:

  • object mode, which allows us to deal with individual objects
  • edit mode, which allows us to modify the objects

The menus on the left of the editing mode selector will change according to the selected mode.

blender ui 3D window

To navigate in 3D space, Blender usually requires a 3-button mouse (we will see below how to emulate one). Anyway, as we’re on a MacBook Pro, we can do the following basic operations with the “alternative” default mapping:

  • orbit around in the scene by dragging with two fingers
  • zoom in/out in the scene with pinch and zoom
  • pan in the scene with shift + drag with two fingers

There are also some other basic useful 3D navigation commands:

  • align view to cursor and show all objects with shift + “c” (or alternatively in object mode View -> Align View -> Center Cursor)
  • align view to one side with the options Left, Right, Top, Bottom, Front, Back contained in the View menu
  • change between orthographic and perspective view with the menu option View -> View Persp/Ortho

You can change the user preferences by going to File -> User Preferences. Here you can modify settings for:

  • Interface, so what Blender should show in the interface
  • Editing, so how we edit objects
  • Input, how mouse and keyboard are configured
  • Add-ons, where you can manage plugins
  • Themes, to change the color of the interface
  • File, to configure standard paths
  • System, for system specific settings

Worth noting is the option “Emulate 3 Button Mouse” in the Input settings. This option makes Blender emulate a 3-button mouse using the Alt key. In this way you can use Blender with the standard mouse mappings.
That’s enough for the first post. See you in the second tutorial, about selecting and transforming objects.

Android Studio vs Xcode vs AppCode: a brief comparison about coding speed

In this post I will compare the coding speed it is possible to achieve in some of the JetBrains IDEs and in Xcode, in terms of code creation and refactoring.


IDEs, Integrated Development Environments, are the software developer’s toolboxes. When I started to work at lastminute.com group my knowledge of the Android platform was very limited. But… lastminute.com group is an agile software development company, and one of the techniques we use during our development workflow is pair programming: two developers work on the same feature at the same workstation. As reported on Wikipedia, one of the main advantages of pair programming is knowledge sharing:

Knowledge is constantly shared between pair programmers, whether in the industry or in a classroom, many sources suggest that students show higher confidence when programming in pairs, and many learn whether it be from tips on programming language rules to overall design skill. In “promiscuous pairing”, each programmer communicates and works with all the other programmers on the team rather than pairing only with one partner, which causes knowledge of the system to spread throughout the whole team. Pair programming allows the programmers to examine their partner’s code and provide feedback which is necessary to increase their own ability to develop monitoring mechanisms for their own learning activities.

This is why I started to work with my colleague Francesco Bonfadelli, a senior Android, iOS and Backend developer. During our pair programming sessions I learned a lot about developing mobile apps for the Android platform. One of the things I learned in the first few days is the difference between the official IDEs: Android Studio and Xcode. After seeing the coding speed that Francesco was able to achieve during an Android coding session, and how much slower it is to do the same things in Xcode for iOS, I realized how much more advanced Android Studio is, with its set of refactoring features, in comparison with Xcode.
In this post I will briefly analyse some IDEs commonly used for mobile application development, focusing on the coding speed it is possible to achieve by using them, and I will explain why, at the time of this writing, I have started to prefer the JetBrains IDE family (not only for mobile application development :bowtie:).

Xcode

I have always loved Xcode. I started to use it 8 years ago and it’s still here with me during my daily job. It opens in a few seconds and you can start to code very quickly. But… what happens when your app code starts to increase in complexity and you need to do a simple refactoring operation? Does it help you in some way when you need to create a new class/property? Does it help you when you need to navigate in your code and jump quickly from one class to another? Well, to be honest it doesn’t help you so much. Even a simple renaming can become a painful operation, especially if you have a project with mixed Swift/Objective-C parts. Everything must be done manually. Consider for example this list of mixed code creation/refactoring operations:

  • create a new class
  • instantiate it and keep it as a local variable
  • add a method to the previous class
  • add a parameter to the method previously created
  • extract the local variable as a property of the controller in which I created it

In the following video I will try to do these operations in Xcode. At the time of this writing the available Xcode version is 9.2.

More than 2 minutes to implement all the stuff in the above list. Really slow, isn’t it?!?? :fearful:

Android Studio

Before lastminute.com group, I had used Android Studio just a few times for some very simple Android apps. Then I started to work with Francesco and he introduced me to the power of JetBrains IDEs. This IDE gives you the ability to navigate quickly in your source code, create and modify classes, and do a lot of other refactoring operations without leaving the keyboard! Basically you can write code and forget about your mouse!! :open_mouth:. The list of keyboard shortcuts you can use in your development flow is endless. You can find the complete list here. Let’s try to do the exact same operations I did before with Xcode, plus a rename of the created class at the end of all the previous operations. At the time of this writing the available Android Studio version is 3.0.1.

Only 50 seconds and I did all the stuff (and I wasn’t pushing hard on the keyboard… :stuck_out_tongue_winking_eye:). As you can see, Android Studio gives you the ability to write code at the speed of light!!! :flushed:.

AppCode

As you can imagine, after working a few hours with Android Studio, I started to wonder if there was an IDE that would let me set up the same coding style and workflow for iOS. Here another colleague I worked with, Tommaso Resti, a senior iOS and Android developer, showed me AppCode for the first time. This is another IDE from JetBrains, for iOS development. It allows you to improve your development speed by giving you some of the refactoring tools that you can find in Android Studio. However, it’s not all peace and light in this case. Some of the refactoring tools are not available for Swift and you will still need Xcode to work on Xibs and Storyboards (the JetBrains team developed a plugin for Interface Builder, but it has been discontinued). Anyway, if you’re used to the Android Studio coding workflow, you will feel at home with AppCode :relaxed:.

Final thoughts

Android Studio and AppCode are based on IntelliJ IDEA, the famous Java IDE from JetBrains. But that’s only half of the story: the JetBrains IDE family is really big. You can find an IDE for each of your favourite languages:

  • CLion, for C and C++
  • PhpStorm, for PHP
  • PyCharm, for Python
  • RubyMine, for Ruby
  • GoLand, for Go
  • Rider, for C#

So no worries: if you want to start improving your coding speed, there’s probably a JetBrains IDE for your favourite language. Xcode will always have a special place in my heart. I will still continue to use it in my daily job as a mobile developer. But… the coding speed I gained with the JetBrains IDEs could not be ignored :smiling_imp:. This is why I started to prefer them :heart:.

SceneKit and physically based rendering

In this post I will guide you in the creation of a scene using SceneKit and its physically based rendering features.


SceneKit is one of the Apple frameworks I love the most. What is SceneKit? Let’s see the definition from the Apple developer website:

SceneKit combines a high-performance rendering engine with a descriptive API for import, manipulation, and rendering of 3D assets. Unlike lower-level APIs such as Metal and OpenGL that require you to implement in precise detail the rendering algorithms that display a scene, SceneKit requires only descriptions of your scene’s contents and the actions or animations you want it to perform.

As you can see from the definition, there’s a lot of stuff inside it. Basically, by using SceneKit you can create games and other 3D applications without needing to know any computer graphics algorithms, physics simulation techniques and so on. You basically describe a scene in terms of objects and features, and Apple will do everything for you :sunglasses:.
One of the most interesting things about SceneKit, on the computer graphics side, was introduced in 2016: physically based rendering (PBR).
We’ve already seen what PBR is in a previous post, so you already know its theoretical foundations (or go check it out in case you missed it :wink:). This means that SceneKit can render physically based scenes using its own, entirely new, physically based rendering engine. Is it worth it? Sure!! :blush: So, let’s try it! In this post we will create from scratch a scene that uses the main PBR features you can find inside SceneKit. At the end of this post you will be able to render the scene contained in the image below. So it’s time to start coding!!

Physically based scene right

The general approach used in the construction of the scene will be the following: for each main scene component we will create a class that encapsulates the creation of the corresponding SCNNode, the basic SceneKit building block, and its setup to obtain the features we want.
The first class we are going to create is the Light class, which encapsulates the basic features we need to set up a light: position, rotation and generic color. Lights in SceneKit are represented by the SCNLight class.

class Light {
    let node: SCNNode
    
    init(lightNode: SCNNode) {
        node = lightNode
    }
    
    init(lightFeatures: LightFeatures) {
        node = SCNNode()
        createLight()
        set(lightFeatures: lightFeatures)
    }
    
    func createLight() {
        node.light = SCNLight()
    }
    
    private func set(lightFeatures: LightFeatures) {
        node.light?.color = lightFeatures.color
        node.position = lightFeatures.position
        node.eulerAngles = lightFeatures.orientation;
    }
}

The basic features of the light must be passed at construction time using a LightFeatures object.

class LightFeatures {
    let position: SCNVector3
    let orientation: SCNVector3
    let color: UIColor
    
    init(position: SCNVector3, orientation: SCNVector3, color: UIColor) {
        self.position = position
        self.orientation = orientation
        self.color = color
    }
}

We are now ready to create our PhysicallyBasedLight as a subclass of the Light class. Our physically based light will be of type .directional, and we will customize its intensity and temperature. The intensity is the flux of the light (again, go check my first post about physically based rendering if you don’t know what it is :stuck_out_tongue:), and the temperature is the color temperature expressed in Kelvin (remember: 6500 K corresponds to pure white sunlight). We also activate other interesting features: by setting castsShadow to true we enable shadow rendering using the shadow mapping technique, and by setting orthographicScale to 10 we extend the area of the scene visible from the light a little bit, improving the construction of the shadow map.

class PhysicallyBasedLight: Light {
    
    init(lightFeatures: LightFeatures, physicallyBasedLightFeatures: PhysicallyBasedLightFeatures) {
        super.init(lightFeatures: lightFeatures)
        set(physicallyBasedLightFeatures: physicallyBasedLightFeatures)
        activateShadow()
    }
    
    private func set(physicallyBasedLightFeatures: PhysicallyBasedLightFeatures) {
        node.light?.type = .directional
        node.light?.intensity = physicallyBasedLightFeatures.lumen
        node.light?.temperature = physicallyBasedLightFeatures.temperature
    }
    
    private func activateShadow() {
        node.light?.castsShadow = true
        node.light?.orthographicScale = 10        
    }
}

As for the basic light, we also create a class for the physically based features, called PhysicallyBasedLightFeatures, that stores the configuration and must be injected at construction time (as you can see from the previous class init).

class PhysicallyBasedLightFeatures {
    let lumen: CGFloat
    let temperature: CGFloat
    
    init(lumen: CGFloat, temperature: CGFloat) {
        self.lumen = lumen
        self.temperature = temperature
    }
}

For physically based rendering we also need another kind of lighting setup to achieve the best result: we need to set the lightingEnvironment and background properties of the SCNScene, the object that contains all the SCNNode elements of a scene. These properties let SceneKit approximate the indirect lighting calculation more accurately. To set these features we create a new class, PhysicallyBasedLightingEnviroment, that receives the scene to set up. This class sets a cubemap on the lightingEnvironment.contents property and its intensity on the lightingEnvironment.intensity property. To match the result of this lighting setup, it sets background.contents with the same cubemap used for the lightingEnvironment.contents property.

class PhysicallyBasedLightingEnviroment {
    let cubeMap: [String]
    let intensity: CGFloat
    
    init(cubeMap: [String], intensity: CGFloat) {
        self.cubeMap = cubeMap
        self.intensity = intensity
    }
    
    func setLightingEnviromentFor(scene: SCNScene) {
        scene.lightingEnvironment.contents = cubeMap
        scene.lightingEnvironment.intensity = intensity
        scene.background.contents = cubeMap
    }
}

Next step: the camera. We create a Camera class that contains a reference, again, to an SCNNode on which an SCNCamera has been defined. For the camera we first of all need to set some geometric properties like the position, the rotation and the pivot point that we will use as reference for the animation of the camera. Last but not least, we set the wantsHDR flag to apply High Dynamic Range post processing, which adjusts the general brightness of the scene with respect to the display.

class Camera {
    let node: SCNNode
    
    init(cameraNode: SCNNode, wantsHDR: Bool = false) {
        node = cameraNode
        setAdvancedFeatures(wantsHDR: wantsHDR)
    }
    
    init(position: SCNVector3, rotation: SCNVector4, wantsHDR: Bool = false, pivot: SCNMatrix4? = nil) {
        node = SCNNode()
        createCameraOnNode()
        setAdvancedFeatures(wantsHDR: wantsHDR)
        set(position: position, rotation: rotation, pivot: pivot)
    }
    
    private func createCameraOnNode() {
        node.camera = SCNCamera()
    }
    
    private func setAdvancedFeatures(wantsHDR: Bool) {
        node.camera?.automaticallyAdjustsZRange = true
        node.camera?.wantsHDR = wantsHDR
    }
    
    private func set(position aPosition: SCNVector3, rotation aRotation: SCNVector4, pivot aPivot: SCNMatrix4?) {
        node.position = aPosition
        node.rotation = aRotation
        node.pivot = aPivot ?? node.pivot
    }
}

Now it’s time to think about the objects we want to display in the scene. For that reason we create an Object class that represents each kind of object we want to show in the scene. Obviously, as for the previous classes, the Object class exposes a node property of type SCNNode that represents our object in the scene. We define this class with multiple initializers that let us create object instances using various configurations: as an empty object, using an SCNGeometry instance, or using a mesh loaded as an MDLObject with the Model I/O Apple framework. This framework lets us import/export 3D models in a wide range of commonly available formats.

class Object {
    let node: SCNNode
    
    init(position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode()
        node.castsShadow = true
        set(position: position, rotation: rotation)
    }
    
    init(geometry: SCNGeometry, position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode(geometry: geometry)
        node.castsShadow = true
        set(position: position, rotation: rotation)
    }
    
    init(mesh: MDLObject, position: SCNVector3, rotation: SCNVector4) {
        node = SCNNode(mdlObject: mesh)
        node.castsShadow = true
        set(position: position, rotation: rotation)
    }
    
    private func set(position: SCNVector3, rotation: SCNVector4) {
        node.position = position
        node.rotation = rotation
    }
}

Now we are ready to define a PhysicallyBasedObject class that inherits all the capabilities of the Object class and sets all the features needed to render the object using physically based rendering. Even if all the initializers are available to this subclass, we will require a mesh as an MDLObject at construction time, because we will display some particular mesh objects (we will discuss them later). At construction time we will also require the position, the rotation and a PhysicallyBasedMaterial. By assigning it to the firstMaterial property of the geometry of our node, our object will be rendered as a physically based object using the SceneKit physically based rendering engine. NB: the meshes that we will use don’t contain any material, so by assigning the firstMaterial property the mesh will use it for the entire surface.

class PhysicallyBasedObject: Object {
    
    init(mesh: MDLObject, material: PhysicallyBasedMaterial, position: SCNVector3, rotation: SCNVector4) {
        super.init(mesh: mesh, position: position, rotation: rotation)
        node.geometry?.firstMaterial = material.material
    }
}

So, the next question is: how do we define the PhysicallyBasedMaterial class? We create PhysicallyBasedMaterial with a single property, material, of type SCNMaterial. On this material property we set:

  • the lightingModel to .physicallyBased, to mark it for SceneKit as a physically based material
  • diffuse.contents property with an appropriate diffuse value
  • roughness.contents property with an appropriate roughness value
  • metalness.contents property with an appropriate metalness value
  • normal.contents property with an appropriate normal value
  • ambientOcclusion.contents property with an appropriate ambient occlusion value

As you can see, we have all the properties we discussed in my introduction to physically based rendering. We also have other properties that help us improve realism, in particular ambient occlusion for indirect lighting (this property/technique is not related to PBR but helps to improve the final rendering). Which kinds of values do these properties accept? As stated in the Apple documentation, you can assign to the contents property:

  • a color (NSColor/UIColor/CGColor)
  • a number (NSNumber)
  • an image (NSImage/UIImage/CGImage)
  • a string
  • a CALayer
  • a texture (SKTexture/MDLTexture/MTLTexture/GLKTextureInfo)
  • an SKScene
  • an array of six images that represents a cube map (as we did for the lightingEnvironment.contents property)

class PhysicallyBasedMaterial {
    let material: SCNMaterial
    
    init(diffuse: Any, roughness: Any, metalness: Any, normal: Any, ambientOcclusion: Any? = nil) {
        material = SCNMaterial()
        material.lightingModel = .physicallyBased
        material.diffuse.contents = diffuse
        material.roughness.contents = roughness
        material.metalness.contents = metalness
        material.normal.contents = normal
        material.ambientOcclusion.contents = ambientOcclusion
    }
}

Now it’s time to construct our scene :relieved:!! We start by creating a new class, PhysicallyBasedScene, a subclass of SCNScene. In this way we can customize the default initializer with the steps needed to add all the elements of our scene, and we also have direct access to all the properties of SCNScene. We also define a protocol, Scene, that we will use to manage gestures and animate the scene. In the initializer we call three methods: createCamera(), in which we create the camera; createLight(), in which we create the lights; and createObjects(), in which we create the objects. NB: we also need to define the initializer with coder because we are subclassing a class that adopts NSSecureCoding, an extension of the NSCoding protocol that has this required initializer.

@objc class PhysicallyBasedScene: SCNScene, Scene {
    var camera: Camera!
    
    override init() {
        super.init()
        createCamera()
        createLight()
        createObjects()
    }
    
    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    ...
    ...
}    

So we start by creating our camera. We place it in front of the scene with the pivot moved a little bit and HDR post processing activated.

private func createCamera() {
    camera = Camera(
        position: SCNVector3Make(0, 2, 0),
        rotation: SCNVector4Make(1, 0, 0, GLKMathDegreesToRadians(-5)),
        wantsHDR: true,
        pivot: SCNMatrix4MakeTranslation(0, 0, -8)
    )
    rootNode.addChildNode(camera.node)
}

Then we create our lights. We create a physically based light with a power of 100 lumen and a color temperature of 4000 K. In this way we can match the warm orange color of the cubemap used for the lighting environment that we set in the scene.

private func createLight() {
    rootNode.addChildNode(createPhysicallyBasedLight().node)
    createPhysicallyLightingEnviroment()
}

private func createPhysicallyBasedLight() -> PhysicallyBasedLight {
    let lightFeatures = LightFeatures(
        position: SCNVector3Make(-2, 5, 4),
        orientation: SCNVector3Make(GLKMathDegreesToRadians(-45), GLKMathDegreesToRadians(-25), 0),
        color: UIColor.white
    )
    let physicallyBasedLightFeatures = PhysicallyBasedLightFeatures(lumen: 100, temperature: 4000)
    let physicallyBasedLight = PhysicallyBasedLight(
        lightFeatures: lightFeatures,
        physicallyBasedLightFeatures: physicallyBasedLightFeatures
    )
    return physicallyBasedLight
}

private func createPhysicallyLightingEnviroment() {
    let enviroment = PhysicallyBasedLightingEnviroment(
        cubeMap: ["rightPBR.png", "leftPBR.png", "upPBR.png", "downPBR.png", "backPBR.png", "frontPBR.png"],
        intensity: 1.0
    )
    enviroment.setLightingEnviromentFor(scene: self)
}

Finally we can place our 4 objects: one basic plane mesh and 3 meshes taken from the Stanford scan repository. These meshes are: the dragon, the happy Buddha and Lucy. All these meshes will be rendered using the PhysicallyBasedObject class. We take the textures used to model the various materials from the freepbr website.

private func createObjects() {
    addFloor()
    addDragon()
    addBuddha()
    addLucy()
}

private func addFloor() {
    let floor = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "Floor", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "floor-diffuse.png",
            roughness: NSNumber(value: 0.8),
            metalness: "floor-metalness.png",
            normal: "floor-normal.png",
            ambientOcclusion: "floor-ambient-occlusion.png"
        ),
        position: SCNVector3Make(0, 0, 0),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(floor.node)
}

private func addDragon() {
    let dragon = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "Dragon", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "rustediron-diffuse.png",
            roughness: NSNumber(value: 0.3),
            metalness: "rustediron-metalness.png",
            normal: "rustediron-normal.png"
        ),
        position: SCNVector3Make(-2, 0, 3),
        rotation: SCNVector4Make(0, 1, 0, GLKMathDegreesToRadians(20))
    )
    rootNode.addChildNode(dragon.node)
}

private func addBuddha() {
    let buddha = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "HappyBuddha", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "cement-diffuse.png",
            roughness: NSNumber(value: 0.8),
            metalness: "cement-metalness.png",
            normal: "cement-normal.png",
            ambientOcclusion: "cement-ambient-occlusion.png"
        ),
        position: SCNVector3Make(-0.5, 0, 0),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(buddha.node)
}

private func addLucy() {
    let lucy = PhysicallyBasedObject(
        mesh: MeshLoader.loadMeshWith(name: "Lucy", ofType: "obj"),
        material: PhysicallyBasedMaterial(
            diffuse: "copper-diffuse.png",
            roughness: NSNumber(value: 0.0),
            metalness: "copper-metalness.png",
            normal: "copper-normal.png"
        ),
        position: SCNVector3Make(2, 0, 2),
        rotation: SCNVector4Make(0, 0, 0, 0)
    )
    rootNode.addChildNode(lucy.node)
}

The meshes are stored as Wavefront obj files (the easiest file format of all time :relieved:). As you can see from the previous code, we use a class called MeshLoader. How does it work? It uses the Model I/O Apple framework to load the obj file as an MDLAsset and then extracts the first MDLObject.

class MeshLoader {
    
    static func loadMeshWith(name: String, ofType type: String) -> MDLObject {
        let path = Bundle.main.path(forResource: name, ofType: type)!
        let asset = MDLAsset(url: URL(fileURLWithPath: path))
        return asset[0]!
    }
}

We are almost ready to render our scene. The last thing to do is to implement the method of the Scene protocol to add some movement to the scene. This method will be called by a single-tap gesture attached to the main view that renders our scene (we will see it in a moment). Inside it we use the runAction method to rotate the camera around its pivot, which we moved previously so that we have a rotation axis to move the camera around the scene.

func actionForOnefingerGesture(withLocation location: CGPoint, andHitResult hitResult: [Any]!) {
    self.camera.node.runAction(SCNAction.rotate(by: CGFloat(GLKMathDegreesToRadians(360)),
                                                around: SCNVector3Make(0, 1, 0),
                                                duration: 30))
}
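
As a minimal sketch of that wiring (assuming a view controller, hypothetically named GameViewController, whose view is an SCNView configured in the storyboard), the scene assignment and the tap gesture could look like this:

import UIKit
import SceneKit

class GameViewController: UIViewController {
    var scene: PhysicallyBasedScene!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        scene = PhysicallyBasedScene()
        let sceneView = self.view as! SCNView
        sceneView.scene = scene
        // Forward single taps to the Scene protocol method that animates the camera.
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(viewTapped(_:)))
        sceneView.addGestureRecognizer(tapGesture)
    }
    
    @objc private func viewTapped(_ gesture: UITapGestureRecognizer) {
        let sceneView = self.view as! SCNView
        let location = gesture.location(in: sceneView)
        let hitResult = sceneView.hitTest(location, options: nil)
        scene.actionForOnefingerGesture(withLocation: location, andHitResult: hitResult)
    }
}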

We are ready to render our scene. Assign an instance of our PhysicallyBasedScene to an SCNView and see the beautiful result of our work. Below you can find a video of the scene we created.

That’s it!! You’ve made it!! Now you can show to your friends your physically based scene and be proud of it :sunglasses:. You can find this example with other scenes in this github repository.