CHICIO CODING

Dirty clean code. Creative Stuff. Stuff.

End to end (e2e) cross platform testing for your mobile apps with Appium

In this post I will talk about how to use Appium to write cross platform end to end tests for your mobile apps.


During my daily job I'm used to writing unit tests for my code. In fact, I usually develop using the Test Driven Development technique. Anyway, at the end of the development of a new feature you want to be sure that the entire system works as expected. In particular, as a mobile developer, you want to test the entire new feature flow inside your app. This is usually what is called an end to end test. In the last few months the mobile team "Team Cook" at lastminute.com group, of which I'm a member, decided to try an end to end testing infrastructure for the mobile apps of our main brands lastminute.com, volagratis and rumbo. In this post I will describe this testing infrastructure and how it works.

Software

To put in place the e2e infrastructure we chose:

  • Appium, the test automation server that drives the apps on both iOS and Android
  • WebdriverIO, the JavaScript client used to talk to the Appium server
  • mocha as test runner, with chai for the assertions
  • Babel, to compile our ES6 test code

Development

The first thing we did was to install the entire software stack previously described on our CI machine. Since we want to run tests for both iOS and Android, a macOS based CI machine is needed (because you need to install Xcode). Fortunately, our CI machine was already an Apple computer, so we didn't need to change anything.
After that we created a new JavaScript project that follows the structure of the WebdriverIO sample code contained in the Appium github repository. This sample project is written using ES5 syntax, so we decided to upgrade it to ES6 syntax and compile it using Babel. This is possible by launching mocha and specifying Babel as the compiler. This is the final command to launch our tests:

mocha --compilers js:babel-core/register --timeout 6000000 test

This is the final package.json with all the dependencies and script phases.

{
  "name": "e2e-tests",
  "version": "1.0.0",
  "description": "e2e tests",
  "main": "index.js",
  "scripts": {
    "pretest": "./download-artifacts.sh",
    "test": "mocha --compilers js:babel-core/register --timeout 6000000 test"
  },
  "author": "Fabrizio Duroni",
  "license": "MIT",
  "devDependencies": {
    "assert": "^1.4.1",
    "babel-core": "^6.26.3",
    "babel-preset-env": "^1.7.0",
    "babel-plugin-transform-runtime": "^6.23.0",
    "chai": "^4.1.2",
    "mocha": "^5.0.0",
    "webdriverio": "^4.12.0"
  }
}

This is the .babelrc file used to configure the Babel compiler. The transform-runtime plugin (with its regenerator option) is needed in order to be able to use async/await in the tests.

{
  "presets": ["env"],
  "plugins": [
    ["transform-runtime", {
      "polyfill": false,
      "regenerator": true
    }]
  ]
}

As you may have already noticed from the package.json file above, there's a pretest phase that launches a script called download-artifacts.sh. This is a custom script we added to download the latest release of our iOS ipa and Android apk artifacts. These will be the apps installed on the iOS simulators/Android emulators and tested with Appium.
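This is a minimal sketch of what such a script could look like (the artifact URLs and file names here are placeholders for illustration, not our real ones):

#!/usr/bin/env bash
# Download the latest app artifacts into the apps folder referenced by the Appium configs.
# The URLs are assumptions: point them at your own artifact repository.
set -e

mkdir -p apps
curl -L -o "apps/app-latest.ipa" "https://<your artifact repository>/ios/app-latest.ipa"
curl -L -o "apps/app-latest.apk" "https://<your artifact repository>/android/app-latest.apk"
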
After that we created the iOS and Android Appium configs to be used by our tests.

import path from "path";

const iOSConfig = {
  protocol: "http",
  host: "localhost",
  port: 4723,
  path: "/wd/hub",
  logLevel: "verbose",
  desiredCapabilities: {
      platformName: "iOS",
      automationName: "XCUITest",
      deviceName: "iPhone 8",
      platformVersion: "11.4",
      clearSystemFiles: true,
      wdaStartupRetryInterval: 1000,
      useNewWDA: true,
      waitForQuiescence: false,
      shouldUseSingletonTestManager: false,
      app: path.resolve(__dirname, "..", "apps", "<ipa downloaded using pretest download.sh script>"),
      orientation: "PORTRAIT",
  }
};

const androidConfig = {
    host: "localhost",
    port: 4723,
    logLevel: "verbose",
    desiredCapabilities: {
        platformName: "Android",
        automationName: "UiAutomator2",
        deviceName: "Pixel_XL_API_27",
        platformVersion: "8.1",
        app: path.resolve(__dirname, "..", "apps", "<apk downloaded using pretest download.sh script>")
    }
};

export {iOSConfig, androidConfig}

One important thing to note about the configuration above is that for iOS we were forced to set the following four options:

wdaStartupRetryInterval: 1000,
useNewWDA: true,
waitForQuiescence: false,
shouldUseSingletonTestManager: false,

These were needed in order to avoid a known bug in Appium for iOS that causes the Appium test suite to get stuck during the creation of the Appium WebdriverIO session. After the configuration setup we were ready to write our first tests. To write them we used the Appium desktop app to record the interaction with our apps. The outcome of the recording is test source code written in the language + driver you prefer (in our case JavaScript + WebdriverIO).
Remember that Appium uses the accessibility id on iOS and the content-desc on Android to unify the search method for both platforms. If these fields are not set correctly on the UI elements you interact with, the Appium desktop app will generate XPath queries for XCUITest or UiAutomator. This will force you to write two tests with the same interaction just to change the UI element identifiers (or to write some wrapper with parametrized UI elements). So the best solution to have Appium work correctly is to set the fields above with the same values on both iOS and Android.
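Below is a minimal sketch of what one of these tests could look like with this setup (the element ids like searchButton, the screen flow and the config module path are assumptions for illustration, not taken from our real apps):

import webdriverio from "webdriverio";
import { assert } from "chai";
import { iOSConfig } from "./config/appium";

describe("search flow", () => {
    let driver;

    before(async () => {
        // Create and start the WebdriverIO session against the local Appium server.
        driver = webdriverio.remote(iOSConfig);
        await driver.init();
    });

    after(async () => {
        await driver.end();
    });

    it("shows the search results screen", async () => {
        // "~searchButton" selects by accessibility id on iOS and content-desc on Android.
        await driver.click("~searchButton");
        const resultsVisible = await driver.isVisible("~searchResultsList");
        assert.isTrue(resultsVisible);
    });
});
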
After that we launched the Appium server on the CI machine previously configured and created a new Jenkins job that clones the e2e-tests project and runs the command:

npm run test

This job is automatically triggered (cron) every day at 8 PM. That's it!!! This is how we tested our apps with Appium. That's all for Appium and mobile end to end tests. If you have any question don't hesitate to comment on this post below :sparkling_heart:.

Create a Swift library compatible with the Swift Package Manager for macOS and Linux

In this post I will talk about how to create a Swift library compatible with macOS and Linux.


Some time ago I published ID3TagEditor, a Swift library to read and modify the ID3 tag of mp3 files (I described it in a previous post). This library was compatible with iOS, Apple TV, watchOS and macOS. Then one day a user of my library opened a new issue on the library github repo with the title "Build Error" and a description of a build error on Linux.

id3tageditor linux build issue

The library had a simple Package.swift, but honestly I had never tested it with the Swift Package Manager (SPM) on Linux nor on macOS :sweat_smile: (this was the only feature that I didn't test :sweat_smile:). Soooo I thought: "It's time to add full support for the Swift Package Manager to ID3TagEditor and port it to Linux!!!!" :sparkling_heart: In this post I will describe how you can create a Swift library package for the Swift Package Manager compatible with macOS and Linux for an existing project. Obviously, I will show you the entire process using my ID3TagEditor as an example.
First of all, if you are starting with a new library project, you will use the following SPM init command:

swift package init --type library

This command will create all the files and folders you need to develop your library. But in my case, I was working on an existing project. This is why I created all the needed files manually, and I will describe them in detail so you can better understand the meaning of each one of them.
The first file needed is the Package.swift. This file must be created in the root folder of your project. It contains some Swift code that defines the properties of the project using the PackageDescription module API. At the moment of this writing there are 3 versions of the PackageDescription API:

  • Version 3
  • Version 4
  • Version 4.2

For my ID3TagEditor I used Version 4.2.

 // swift-tools-version:4.2
 
 import PackageDescription
 
 let package = Package(
     name: "ID3TagEditor",
     products: [
         .library(
             name: "ID3TagEditor",
             targets: ["ID3TagEditor"]
         ),
     ],
     dependencies: [],
     targets: [
         .target(
             name: "ID3TagEditor",
             dependencies: [],
             path: "./Source"
         ),
         .testTarget(
             name: "ID3TagEditorTests",
             dependencies: ["ID3TagEditor"],
             path: "./Tests",
             exclude: [
                 "Parsing/Frame/Content/Size/ID3FrameContentSizeParserTest.swift",
                 "Parsing/Frame/Content/Operation/ID3FrameStringContentParsingOperationTest.swift",
                 "Parsing/Frame/Size/ID3FrameSizeParserTest.swift",
                 "Parsing/Tag/Size/ID3TagSizeParserTest.swift",
                 "Parsing/Tag/Version/ID3TagVersionParserTest.swift",
                 "Acceptance/ID3TagEditorTestAcceptanceTest.swift",
                 "Mp3/Mp3FileReaderTest.swift"
             ]
         ),
     ],
     swiftLanguageVersions: [.v4_2]
 )

Let's see in detail the meaning of each option:

  • name, the name of the package
  • products, the list of all products in the package. You can have executable or library products. In my case I have a library product, for which I had to specify:
    • name, the name of the product
    • targets, the targets which are supposed to be used by other packages, i.e. the public API of a library package
  • dependencies, a list of package dependencies for our package. At the moment ID3TagEditor doesn't have any dependencies, so I declared an empty array.
  • targets, the list of targets in the package. In my case I have two targets:
    • ID3TagEditor, the main target of the library and a classic .target. For this target you specify its name, its dependencies and the path to the source files. In my case I have everything inside the Source folder.
    • ID3TagEditorTests, the .testTarget of the library. For this target I had to specify an additional exclude option. The excluded tests contain some references to bundle resources, and at the moment of this writing the SPM doesn't support resource bundles.
  • swiftLanguageVersions, the set of supported Swift language versions.

Next I had to create a XCTestManifests.swift file inside the Tests folder. This file contains an extension for each XCTestCase subclass I included in my test target. Each extension contains an array __allTests that exposes a list of all the test methods inside the corresponding XCTestCase subclass. At the end of this file you can find a __allTests() function that passes all the test methods to the testCase() utility function. __allTests() and testCase() are available only on Linux (and in fact the __allTests() function is wrapped in a conditional check #if !os(macOS)). Below you can see a part of the XCTestManifests.swift file for the ID3TagEditor library.

import XCTest

extension ID3AlbumArtistFrameCreatorTest {
    static let __allTests = [
        ("testFrameCreationWhenThereIsAnAlbumArtist", testFrameCreationWhenThereIsAnAlbumArtist),
        ("testNoFrameCreationWhenThereIsNoAlbumArtist", testNoFrameCreationWhenThereIsNoAlbumArtist),
    ]
}

extension ID3AlbumFrameCreatorTest {
    static let __allTests = [
        ("testFrameCreationWhenThereIsAnAlbum", testFrameCreationWhenThereIsAnAlbum),
        ("testNoFrameCreationWhenThereIsNoAlbum", testNoFrameCreationWhenThereIsNoAlbum),
    ]
}

extension ID3ArtistFrameCreatorTest {
    static let __allTests = [
        ("testFrameCreationWhenThereIsAnArtist", testFrameCreationWhenThereIsAnArtist),
        ("testNoFrameCreationWhenThereIsNoArtist", testNoFrameCreationWhenThereIsNoArtist),
    ]
}

//other extensions, one for each unit test class...
...

#if !os(macOS)
public func __allTests() -> [XCTestCaseEntry] {
    return [
        testCase(ID3AlbumArtistFrameCreatorTest.__allTests),
        testCase(ID3AlbumFrameCreatorTest.__allTests),
        testCase(ID3ArtistFrameCreatorTest.__allTests),
        ...
    ]
}
#endif

Next I created a LinuxMain.swift file in the root folder of my ID3TagEditor project. This file loads all the tests on Linux (using the functions and extensions defined in the previous XCTestManifests.swift file).

import XCTest

import ID3TagEditorTests

var tests = [XCTestCaseEntry]()
tests += ID3TagEditorTests.__allTests()

XCTMain(tests)
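
With these files in place the whole test suite can be run on both platforms with a single SPM command from the root folder of the project:

swift test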

Now I was ready to test ID3TagEditor using the SPM on macOS and Linux. For Linux I used Ubuntu as the distro; the version used at the moment of this writing is 18.04 LTS.
First of all, how do I install Swift on Linux? I downloaded the Swift release for Linux from the Swift download page. The version I used is the one you can find at this link. Then I installed the additional packages clang and libicu-dev with the following shell command.

sudo apt-get install clang libicu-dev

Then I extracted the Swift release folder from the previously downloaded archive and added the path of the /usr/bin folder contained inside this release folder to my shell PATH environment variable.

tar xzf swift-<VERSION>-<PLATFORM>.tar.gz

# Add this to your shell profile file
export PATH=/<path to the Swift release folder>/usr/bin:"${PATH}"

The setup was done. Now I was able to test ID3TagEditor as a SPM library on Linux. To do this I created a new project, called Demo Ubuntu, inside the demo folder of the ID3TagEditor project. This is an executable SPM project that has ID3TagEditor as a package dependency. The executable is a command line application that opens an mp3 file, parses its ID3 tag and prints it to the standard output.
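The Package.swift of such a demo project could look like this minimal sketch (the package name, repository URL and version shown here are illustrative assumptions, not necessarily the real ones):

// swift-tools-version:4.2

import PackageDescription

let package = Package(
    name: "DemoUbuntu",
    dependencies: [
        // Assumed location and version of the ID3TagEditor repository.
        .package(url: "https://github.com/chicio/ID3TagEditor.git", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "DemoUbuntu",
            dependencies: ["ID3TagEditor"]
        ),
    ]
)

To test my work I just cloned ID3TagEditor on Linux (and also on macOS :stuck_out_tongue_winking_eye:) and launched the following commands in the root folder of the Demo Ubuntu project: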

swift build
swift run

Below you can see some screenshots taken from both Linux and macOS that show the final output of the Demo Ubuntu project after you execute the swift run command.

id3tageditor SPM demo ubuntu id3tageditor SPM demo macOS

Coool!! Now ID3TagEditor is fully compatible with the SPM and can be used in Swift applications for both macOS and Linux. You can see the entire codebase of ID3TagEditor in this github repository. Now you can start to port your libraries and applications to Linux with the Swift Package Manager :sparkles:.

React Native: a simple architecture for Native Modules communication with your UIViewController on iOS

In this post I will talk about a simple architecture for communication between React Native Native modules (aka bridges) and your native code on iOS.


As we saw in a previous post for Android Fragments/Activities, sometimes when you integrate React Native in an existing app you will want to be able to let your Native Modules bridges communicate with your UIViewControllers, especially the ones that contain the React Native view. In this post I will show you an architecture to put in place this communication on iOS that is compatible with all the features of React Native (for example, it also works with the live reload functionality). This is an architecture I put in place for our apps at lastminute.com group. To show this architecture I will create a simple app that shows a React Native screen as a modal. I will then implement the close button functionality by calling a native module from the onPress of a React Native button. Below you can see the final result.

The architecture I put in place is based on the NSNotificationCenter. The description of this component of the iOS SDK is the following one:

A notification dispatch mechanism that enables the broadcast of information to registered observers. Objects register with a notification center to receive notifications (NSNotification objects) using the addObserver(_:selector:name:object:) or addObserver(forName:object:queue:using:) methods. When an object adds itself as an observer, it specifies which notifications it should receive. An object may therefore call this method several times in order to register itself as an observer for several different notifications.

This definition basically means that with this API we are able to register a class to events sent by another one. This is exactly what we need to put in place the communication between our Native Modules bridges and our UIViewController. Let's start from the MainViewController. In it there's only a button with an action that presents the React Native modal UIViewController, called ReactNativeModalViewController.

 class MainViewController: UIViewController {
     override func viewDidLoad() {
         super.viewDidLoad()
     }
     
     @IBAction func showReactNativeModal(_ sender: Any) {
         present(ReactNativeModalViewController(), animated: true, completion: nil)
     }
 }

The ReactNativeModalViewController is a UIViewController with the setup needed to launch a React Native context. This UIViewController is an observer of the ReactEventCloseModal event in the NSNotificationCenter. This event is generated in the Native Modules bridge. The action executed for this event is contained in the closeModal method.

class ReactNativeModalViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        setupReactNative()
        registerToReactNativeEvents()
    }
    
    private func setupReactNative() {
        // The RCTRootView loads the JS bundle served by the Metro packager.
        let rootView = RCTRootView(
            bundleURL: URL(string: "http://localhost:8081/index.bundle?platform=ios")!,
            moduleName: "ReactNativeModal",
            initialProperties: nil,
            launchOptions: nil
        )
        self.view = rootView
    }
    
    private func registerToReactNativeEvents() {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(closeModal),
                                               name: NSNotification.Name(rawValue: ReactEventCloseModal),
                                               object: nil)
    }
    
    @objc private func closeModal() {
        // Dismiss on the main queue: the notification is posted from a React Native background thread.
        DispatchQueue.main.async { [unowned self] in
            self.dismiss(animated: true, completion: nil)
        }
    }
}

Now let's have a look at the Native Module created for the app, the ReactNativeModalBridge. In this bridge there is just one React method, closeModal. This is the one called from the React Native JS side. In this method we post a notification with the identifier ReactEventCloseModal. This identifier is defined inside the files ReactNativeEvents.h/ReactNativeEvents.m as a constant with the string value closeModal. The ReactNativeModalViewController is subscribed to this type of event (as we saw above). This basically means that when the closeModal bridge method is called from the React Native JavaScript code a new ReactEventCloseModal notification is posted and the ReactNativeModalViewController will execute the subscribed method defined in it. We have everything set up to let our Native Modules communicate with our controllers :open_mouth:. Below you can find the header and implementation of the ReactNativeModalBridge bridge (written in Objective-C :sparkling_heart:).

#import <Foundation/Foundation.h>
#import <React/RCTBridgeModule.h>

@interface ReactNativeModalBridge : NSObject<RCTBridgeModule>

@end
  
  
  
#import "ReactNativeModalBridge.h"
#import "ReactNativeEvents.h"

@implementation ReactNativeModalBridge
RCT_EXPORT_MODULE();

RCT_EXPORT_METHOD(closeModal) {
    [[NSNotificationCenter defaultCenter] postNotificationName:ReactEventCloseModal object:nil];
}

@end
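
For completeness, this is a minimal sketch of what the ReactNativeEvents.h/ReactNativeEvents.m pair described above could look like (reconstructed from the description, not copied from the real project):

// ReactNativeEvents.h
#import <Foundation/Foundation.h>

extern NSString *const ReactEventCloseModal;


// ReactNativeEvents.m
#import "ReactNativeEvents.h"

NSString *const ReactEventCloseModal = @"closeModal";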

Now it's time to see the JavaScript code. Below you can see the ReactNativeModal component. Inside this component there is a call to the native module NativeModules.ReactNativeModalBridge.closeModal() described above. In this way the modal will be closed directly from the native side.

import React from "react";
import { Button, NativeModules, StyleSheet, Text, View } from "react-native";

class ReactNativeModal extends React.Component {
    render() {
        return (
            <View style={styles.container}>
                <Text style={styles.hello}>Hello modal!</Text>
                <Text style={styles.message}>
                    I'm a react native component. Click on the button to close me using native function.
                </Text>
                <Button
                    title={"Close me"}
                    onPress={() => NativeModules.ReactNativeModalBridge.closeModal()}
                />
            </View>
        );
    }
}

// Minimal styles for the example (the real ones are not shown in the post).
const styles = StyleSheet.create({
    container: { flex: 1, justifyContent: "center", alignItems: "center" },
    hello: { fontSize: 20, margin: 10 },
    message: { textAlign: "center", margin: 10 }
});
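
Note that the component must be registered with the same module name passed to the RCTRootView in the ReactNativeModalViewController above, otherwise React Native cannot find it:

import { AppRegistry } from "react-native";

// "ReactNativeModal" must match the moduleName used when creating the RCTRootView.
AppRegistry.registerComponent("ReactNativeModal", () => ReactNativeModal);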

That's all for our native modules communication architecture on iOS. You can find the complete example in this github repository. If you want to know how we managed the same problem on the Android platform :rocket: you can read my other post about the same topic.

React Native: a simple architecture for Native Modules communication with your Activities and Fragments on Android

In this post I will talk about a simple architecture for communication between React Native Native modules (aka bridges) and your native code on Android.


Sometimes a React Native app needs to access native APIs, or needs/wants to call some existing native code you already have in place. This is why Native Modules have been created for both iOS and Android.
Sometimes when you integrate React Native in an existing app you will want to be able to let your Native Modules bridges communicate with your Activities and Fragments, especially the ones that contain the React Native view. In this post I will show you an architecture to put in place this communication on Android that is compatible with all the features of React Native (for example, it also works with the live reload functionality). This is an architecture I put in place with my colleague Felice Giovinazzo in our apps at lastminute.com group. Felice is a senior fullstack developer with many years of experience (he is the "lastminute" veteran of our team) and a computer graphics enthusiast like me :revolving_hearts::sparkling_heart:.
To show you this architecture I will create a simple app that shows a React Native screen as a modal. I will then implement the close button functionality by calling a native module from the onPress of a React Native button. Below you can see the final result.

The architecture we put in place is based on an event bus, on which the Native Modules bridges notify the subscribed Activities/Fragments of the actions to be executed. Each one of them is subscribed to the specific events it is able to respond to. We chose Otto as the event bus library (we don't want to reinvent the wheel :bomb:). Let's start from the MainActivity. In it there's only a button with an action that starts the React Native modal activity.

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void showReactNativeModal(View view) {
        startActivity(new Intent(this, ReactNativeModalActivity.class));
    }
}

The ReactNativeModalActivity is an Activity with the setup needed to launch a React Native context. This activity is registered on the event bus to be able to listen to events from the Native Modules bridges. In this case the activity is subscribed to just one event, with the method @Subscribe public void close(ReactNativeModalBridge.CloseModalEvent event).

public class ReactNativeModalActivity extends AppCompatActivity implements DefaultHardwareBackBtnHandler {

    private final int OVERLAY_PERMISSION_REQ_CODE = 8762;
    private ReactRootView mReactRootView;
    private ReactInstanceManager mReactInstanceManager;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        registerToReactEvents();
        askReactDrawingPermission();
        setupReactView();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        unregisterToReactEvents();

        if (mReactInstanceManager != null) {
            mReactInstanceManager.onHostDestroy(this);
        }
        if (mReactRootView != null) {
            mReactRootView.unmountReactApplication();
        }
    }

    private void registerToReactEvents() {
        ((NativeModulesApplication)getApplication())
                .getBus()
                .register(this);
    }

    private void unregisterToReactEvents() {
        ((NativeModulesApplication)getApplication())
                .getBus()
                .unregister(this);
    }

    @Subscribe
    public void close(ReactNativeModalBridge.CloseModalEvent event) {
        finish();
    }

    private void askReactDrawingPermission() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
            if (!Settings.canDrawOverlays(this)) {
                Intent intent = new Intent(
                        Settings.ACTION_MANAGE_OVERLAY_PERMISSION,
                        Uri.parse("package:" + getPackageName())
                );
                startActivityForResult(intent, OVERLAY_PERMISSION_REQ_CODE);
            }
        }
    }

    private void setupReactView() {
        ...
    }

    ...
}
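
Both the activity above and the bridge below obtain the bus from the Application class. This is a minimal sketch of what it could look like (the class name comes from the casts in the code; the rest is an assumption):

import android.app.Application;
import com.squareup.otto.Bus;

public class NativeModulesApplication extends Application {

    // Single shared Otto bus used by activities/fragments and Native Modules bridges.
    private final Bus bus = new Bus();

    public Bus getBus() {
        return bus;
    }
}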

Now let's have a look at the Native Module created for the app, the ReactNativeModalBridge. In this bridge there is just one React method, closeModal. This is the one called from the React Native JS side. In this method we post an event of type CloseModalEvent. The ReactNativeModalActivity is subscribed to this type of event (as we saw above). This basically means that when the closeModal bridge method is called from the React Native JavaScript code a new CloseModalEvent is generated and the ReactNativeModalActivity will execute the subscribed method defined in it. We have everything set up to let our Native Modules communicate with our activities (and eventually fragments with the same approach, if we need them :neckbeard:).

public class ReactNativeModalBridge extends ReactContextBaseJavaModule {

    public ReactNativeModalBridge(ReactApplicationContext reactContext) {
        super(reactContext);
    }

    @Override
    public String getName() {
        return this.getClass().getSimpleName();
    }

    @ReactMethod
    public void closeModal() {
        final Activity currentActivity = getCurrentActivity();
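        // Otto's default Bus must be posted to from the main thread, while React
        // methods run on a background thread: hence the runOnUiThread call below.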
        currentActivity.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                ((NativeModulesApplication) currentActivity.getApplication())
                        .getBus()
                        .post(new CloseModalEvent());
            }
        });
    }

    public class CloseModalEvent { }
}

Now it's time to see the JavaScript code. Below you can see the ReactNativeModal component. Inside this component there is a call to the native module NativeModules.ReactNativeModalBridge.closeModal() described above. In this way the modal will be closed directly from the native side.

import React from "react";
import { Button, NativeModules, StyleSheet, Text, View } from "react-native";

class ReactNativeModal extends React.Component {
    render() {
        return (
            <View style={styles.container}>
                <Text style={styles.hello}>Hello modal!</Text>
                <Text style={styles.message}>
                    I'm a react native component. Click on the button to close me using native function.
                </Text>
                <Button
                    title={"Close me"}
                    onPress={() => NativeModules.ReactNativeModalBridge.closeModal()}
                />
            </View>
        );
    }
}

// Minimal styles for the example (the real ones are not shown in the post).
const styles = StyleSheet.create({
    container: { flex: 1, justifyContent: "center", alignItems: "center" },
    hello: { fontSize: 20, margin: 10 },
    message: { textAlign: "center", margin: 10 }
});

That’s all for our native modules communication architecture on Android. You can find the complete example in this github repository. If you want to know how we managed the same problem on the iOS platform :apple::iphone::heartbeat: you can read my other post about the same topic.

My first experience as speaker at Voxxed Days 2018: a talk about React, React Native and Typescript

In this post I will talk about my first experience as a speaker at a conference: a talk about React, React Native and Typescript with Alessandro Romano.


In the last few months I talked a lot about React Native and Typescript. In the team where I work at lastminute.com group I acquired a strong knowledge of the Typescript + React + React Native technology stack. There have also been a few changes in the team: Emanuele Ianni (do you remember? I already talked about him in my previous post), my technical team leader, left the company. He was supposed to do a talk at Voxxed 2018 about our journey as a team into the world of React + React Native + Typescript. So I got the opportunity to go to Voxxed in his place as a co-speaker of Alessandro Romano, my other colleague selected as a speaker for this event. Alessandro, also known as "the Clean", is a senior software developer with many years of experience who recently graduated from the University of Insubria in Varese (do you remember? I already talked about him too).
The title of the talk was: React (Native) & Typescript: A journey to a unified team using a common language. In this post I will talk about the entire process we went through, from the first draft preparation until the talk :grin:.

Slides preparation

Let's start from the slides preparation. I started to work on the presentation with Alessandro two months before the event. We decided to structure our talk as storytelling. We began by describing how our team was composed and how we applied agile methodologies: basically, we were divided in three silos, back-end, front-end and mobile, doing separated user stories and ceremonies. Then we embraced a new journey: we challenged ourselves to become a feature team having end-to-end user stories and unified ceremonies. The technology stack we chose was the main facilitator of this process and eventually transformed us into the mythological creature of the fullstack developer. It was composed of:

  • TypeScript as common language
  • React for the frontend of the customer area (manage all the products of your booking) of our websites
  • React Native for the frontend of the mobile apps of our main brands

After this introduction we created a section for each of the technologies above where we described the pros and cons of each one. Last but not least we presented the Cross Selling feature: a real use case in which we were able to share the business logic between the two environments using a pure TypeScript library.

Company dry run, feedback and final presentation

When the presentation was ready we planned an internal dry run. We usually call this kind of meeting "schiscia time" because they are planned during lunch time: the participants enjoy their lunch while the speakers show their stuff. So we planned our "schiscia time" for the 8th of October.
A lot of colleagues attended the talk and gave some very useful feedback. The two major observations we received were:

  • less coding. We created a lot of slides with screenshots taken directly from our IDEs with a lot of code, especially in the section of the presentation where we described the new technology stack we "married". They were not so easy to read and in some cases they were actually stealing the focus of our attendees. So we decided to remove them. The only slides with code that we kept were the ones in the section "Share the code: cross selling feature" where we present a real use case of development on our products. On these slides we replaced the IDE screenshots with some formatted code using a syntax highlighter (and honestly, after that change the slides looked much more beautiful :heart_eyes:).
  • more focus on the journey. A lot of our colleagues told us that from the presentation they didn't feel what it took to transform ourselves from platform-specific developers into the mythological creature of the fullstack developer. In our presentation there were a lot of details about React, React Native and TypeScript but not as much about our workflow with the new technology stack. In fact, after choosing React + React Native + TypeScript we started to:
    • do pair programming without considering the technology skills
    • do end-to-end user stories, from the backend service to the frontend (mobile app and web)

So we started to review the slides and we basically created a new presentation :smile:. It took us almost a week. After the review we did two simulations of the entire presentation during the week before the event to be more confident. Last but not least, the Human Resources department gave us a company t-shirt to promote our company brand at the conference.

voxxed 2018 tshirt

The talk

Then the day of the talk arrived, after a not so quiet sleep. The Voxxed Days 2018 in Ticino was set to take place on the 20th of October. We arrived at the Palazzo dei Congressi in Lugano at 8.45 AM. We were excited to queue in the dedicated speakers area for the first time after so many conferences attended! We checked in and got our shiny speaker badges. Then we moved to the lounge area to enjoy breakfast.

voxxed 2018 breakfast and badge

Our talk was planned for 2.30 PM, so we had time to attend some other sessions. At 11.50 AM we decided to do a last presentation simulation to review some details, and then we went to lunch. At 2.00 PM we started to feel the strain: the start of our talk was really close. We entered the room of our session at 2.15 PM and set up our laptop for the presentation.

voxxed 2018 pre talk

Then the room started to fill up. As scheduled, at 2.30 PM we started our presentation. The presentation went smoothly. We kept the time per slide we had planned in our simulations. The change of speaker between the various parts of the presentation worked perfectly. At the end we answered some questions and received applause from the audience.

voxxed 2018 pre talk

Conclusion

That's all for my first experience as a conference speaker. It has been a good experience. After 10 years in the IT field (had I already been working for 10 years?!??! :cold_sweat:) it was a great pleasure to be on the other side of the stage.

voxxed 2018 clean

Special thanks to all the team Lynch that gave me the opportunity to be the "replacement" speaker for this conference. Special thanks also to Alessandro Romano "the Clean". He is one of the best co-speakers and coworkers you could ever find :heart:. If you want to know more about the topic of our presentation, below you can find the full recorded session :bowtie:.