CHICIO CODING

Dirty clean code. Creative Stuff. Stuff.

React Native + TypeScript, love at first sight. Setup in an existing app.

In this post I will show you how to set up React Native + TypeScript in an existing app.


In the last few months at lastminute.com group I worked on the following project: rebuild the native mobile apps of the main brands lastminute.com, Volagratis and Rumbo with a new interface and new features. Let’s compare the old and the new home of the lastminute.com app. The changes are quite impressive :sunglasses:.

Compare home app lastminute.com

For this “app relaunch” project we decided to use React Native (I already talked about this framework in some previous posts that you can find in the archive section). We didn’t rewrite the apps from scratch. We decided to integrate React Native into the existing code base and:

  • use Native Modules to reuse some native code we already had in place for some features (for example the login).
  • write the new stuff completely in React Native whenever possible.

We also took another important decision when we started the project: we chose TypeScript instead of JavaScript as the main language to write our React Native code. What is TypeScript? It is an open-source programming language developed and maintained by Microsoft, described on its official website with the following definition:

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open source.

What does it mean? It means that TypeScript is basically “JavaScript on steroids”: it provides optional, static type checking at compile time. Since it is a superset of JavaScript, all JavaScript code is valid TypeScript code. TypeScript is helpful if you are a developer coming from another strongly typed language and you have a strong knowledge of object-oriented programming, because it lets you reuse a lot of the programming techniques you already know.
React Native officially supports JavaScript. How can we set up React Native + TypeScript? In this post we will see how to integrate React Native and TypeScript in an existing app, and we will add a new screen written in React Native that shows the photo of the day read from the NASA open API. Below you can find what we will achieve. The first screen is a standard native screen. The second one is a React Native screen.

react native typescript app

Let’s start to set up our project for React Native and TypeScript. First, the React Native integration. For this task we can just follow the React Native documentation about integration with an existing app. Then we can start to integrate TypeScript. We will use yarn as dependency manager instead of npm (you can use it also to install the dependencies needed to set up React Native in an existing app). Yarn is a fast, reliable and secure dependency manager released by Facebook in October 2016. Our project directory structure will be the one contained in the screenshots below. The existing native codebase is contained inside the ios and android folders.

react native typescript directories

So let’s start by installing TypeScript and the types for React Native. We can do it with the following commands from the root of our project:

 yarn add --dev typescript
 yarn add --dev @types/react @types/react-native

After that we need to configure TypeScript in our project. We can start to do that by running the following command:

yarn tsc --init --pretty --jsx react

Now we have a new file in the root of our project: the tsconfig.json file. This file is the configuration file for tsc, the TypeScript compiler. We can customize it for our needs (React). In particular, we need to enable the option allowSyntheticDefaultImports to allow default imports from modules with no default export. We also customized the baseUrl and paths options. By setting them in this way, and by adding a package.json file with name: "app" inside the app folder, we can place all our source code in the app folder and, when we need to import a class, write the path starting from the app base folder (so basically we are defining the root of our source code in a nice way for our imports).
Below you can find the complete tsconfig.json file configured for our needs.

{
  "compilerOptions": {
    "target": "es2015",
    "module": "es2015",
    "allowJs": true,
    "checkJs": true,
    "jsx": "react-native",
    "removeComments": true,
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "noImplicitReturns": true,
    "moduleResolution": "node",
    "baseUrl": "app",
    "paths": {
      "app/*": [ "./*" ]
    },
    "allowSyntheticDefaultImports": true
  },
  "typeRoots": [
    "./node_modules/@types"
  ],
  "types": [
    "react",
    "react-native",
    "jasmine",
    "jest"
  ],
  "exclude": [
    "node_modules",
    "app/__tests__",
    "rn-cli.config.js"
  ]
}

After that we need to install the React Native TypeScript Transformer. This transformer allows the React Native CLI to transpile our TypeScript code into JavaScript on demand. This is the command to install the transformer:

yarn add --dev react-native-typescript-transformer

After that we need to configure the React Native CLI to actually use the transformer by adding the following configuration to the rn-cli.config.js file (create it in the project root directory). This file is the React Native configuration file.

module.exports = {
  getTransformModulePath() {
    return require.resolve('react-native-typescript-transformer');
  },
  getSourceExts() {
    return ['ts', 'tsx'];
  },
};

That’s all for the main source code setup. Now we can set up the testing infrastructure as well. We will use Jest, a testing framework from Facebook, and typemoq, a TypeScript-specific mocking library. To use Jest with TypeScript we will install ts-jest, a TypeScript preprocessor with source map support for Jest that lets us use Jest to test projects written in TypeScript.

yarn add --dev ts-jest
yarn add --dev typemoq

As you remember from the directory structure I showed you above, the __tests__ folder is not in the usual React Native project position: it is placed inside the app folder. To be able to put our tests in this folder we need to add a jest.config.js file to it and set some custom options related to module resolution. Below you can find the entire file with all the details.

module.exports = {
    'preset': 'react-native',
    'moduleFileExtensions': [
        'ts',
        'tsx',
        'js',
    ],
    'rootDir': '../..',
    'transform': {
        '^.+\\.(js)$': '<rootDir>/node_modules/babel-jest',
        '\\.(ts|tsx)$': '<rootDir>/node_modules/ts-jest/preprocessor.js',
    },
    'testMatch': ['**/__tests__/**/*.(ts|tsx|js|jsx)?(x)', '**/?(*.)(spec|test).(ts|tsx|js|jsx)?(x)'],
    'testPathIgnorePatterns': [
        '\\.snap$',
        '<rootDir>/node_modules/',
        'jest.config.js',
    ],
    'moduleDirectories': [
        'node_modules',
        '../',
    ],
};

We are now ready to write our app. Basically, the screen that shows the NASA photo is the NasaPhotoViewerScreen. This component uses NasaPhotoInformationComponent and some standard React Native components to show the information that comes from the API. The information is fetched using the NasaPhotoService. The NasaPhotoViewerScreen and the NasaPhotoService are glued together using the Model-View-Presenter architecture
in the NasaPhotoComponentPresenter. As you can see from the code below, TypeScript has a syntax that is similar to other languages like Java and C# (and many others :sunglasses:).

export class NasaPhotoService {
  async retrieve(): Promise<any> {
    return fetch('https://api.nasa.gov/planetary/apod?api_key=1cygunHJsSwDug6zJjF3emev3QAP8yFLppohLuxb')
      .then((response) => response.json())
  }
}

...

export class NasaPhotoComponentPresenter {
  private nasaPhotoRepository: NasaPhotoRepository
  private nasaPhotoView: NasaPhotoView

  constructor(nasaPhotoView: NasaPhotoView, nasaPhotoRepository: NasaPhotoRepository) {
    this.nasaPhotoRepository = nasaPhotoRepository
    this.nasaPhotoView = nasaPhotoView
  }

  async onStart(): Promise<void> {
    try {
      const nasaPhoto = await this.nasaPhotoRepository.load();
      this.nasaPhotoView.showValid(nasaPhoto);
    } catch (_) {
      this.nasaPhotoView.showAn("Network error")
    }
  }
}

...

export class NasaPhotoViewerScreen extends React.Component<Props, State> implements NasaPhotoView {
  private readonly presenter: NasaPhotoComponentPresenter

  constructor(props: Props) {
    super(props)
    this.state = {
      photo: NasaPhoto.empty()
    }
    this.presenter = new NasaPhotoComponentPresenter(
      this,
      new NasaPhotoRepository(new NasaPhotoService(), new NasaPhotoAdapter())
    )
  }

  componentWillMount() {
    this.presenter.onStart()
  }

  showAn(error: string): void {
    alert(error)
  }

  showValid(photo: NasaPhoto): void {
    this.setState({photo})
  }

  render() {
    return (
      <ScrollView style={styles.container}>
        <Image
          style={styles.image}
          source={{uri: this.state.photo.url}}
        />
        <NasaPhotoInformationComponent
          title={this.state.photo.title}
          date={this.state.photo.date}
          description={this.state.photo.description}
        />
      </ScrollView>
    );
  }
}

interface Props {
  name: string
}

interface State {
  photo: NasaPhoto
}

const styles = StyleSheet.create({
  container: {
    width: "100%",
    height: "100%"
  },
  image: {
    width: "100%",
    height: 220,
    resizeMode: "cover",
  }
});

You can check all the code of the sample described above in this GitHub repository and see all the TypeScript components I created for the app I showed you above.
That’s it!!! React Native + TypeScript: :hearts: love at first sight :hearts:.

Blender tutorial: introduction to basics of modeling - part 2

In this new post of the series Blender tutorial I will continue to talk about the fundamentals of modeling in Blender.


In the previous post of the series “Blender tutorial” we introduced the first part of the basics of modeling. Let’s continue our exploration with other useful tools we can use for modeling.
Let’s start with Sculpt mode. We can enable it in the editing/interaction mode selector in the bottom bar of the 3D window. In this mode you can literally sculpt your mesh. It is possible to customize the sculpt mode by setting some of its properties:

  • radius of the sculpt
  • feather of the sculpt, that changes how the sculpt falls off
  • auto smooth
  • add/subtract to pull vertices in/out while sculpting

blender sculpting

Sculpt mode also supports paint textures and strokes. More importantly, it supports symmetry: by selecting an axis, any change on one side will be mirrored on the other one.
Another interesting option is edge loop modeling. Basically this means that if we select an edge loop with alt + right click in edit mode/edge select mode, we can modify the edge loop by using the 3D axes that are shown after the selection.
The next tool available for modeling is the extrude tool. With this tool we can modify our geometry and add more detail. In particular, we can use it to create branches, legs and other parts that stick out of the main body of the mesh. We can activate the extrude under Mesh tools -> Extrude Region/Individual. The extrude region option will push out the selected elements as a single block. The extrude individual option will extrude the selected elements individually.

blender extrude

Another useful tool that we can use for modeling is smooth shading. We can use it to smooth the surface of objects where the polygons of the mesh have too many hard edges. We can find it under Tools -> Shading while an object is selected in Object mode, or under Shading -> Faces/Edges/Vertices while an object is in Edit mode.

blender smooth shading

The last tool we can use for some simple modeling is the subdivide mesh tool. There are two ways to do subdivision:

  • subdivision of the mesh itself
  • subdivision surface

Now we will look at subdivision of the mesh itself. To do it we need to be in edit mode on the object and select all with the a key. We can find the subdivision option under Tools -> Subdivide. If we click on it, a series of subdivision properties will appear that let us customize the number of cuts, the smoothness and other minor properties.

blender subdivision

Remember: each additional cut QUADRUPLES the number of polygons in your mesh.
In the next chapter we will talk about advanced modeling techniques and tools.

Asynchronous testing in Swift

In this post I will talk about asynchronous testing in Swift.


As we saw in this post and also in this other one, closures are one of the most important building blocks of Swift. They are extensively used inside the iOS SDK.
But in the previous posts about closures I didn’t answer one very important question: how can you unit test asynchronous operations and closures? It seems Apple has the answer for us!! Inside the XCTest framework we have expectations.

Clarity closure expectation test

How do they work? To test that asynchronous operations (and closures) behave as expected, you create one or more expectations within your test, and then fulfill those expectations when the asynchronous operation completes successfully. Your test method waits until all expectations are fulfilled or a specified timeout expires. The general code structure for an expectation with a closure looks like the following example:

let expectation = XCTestExpectation(description: "Expectation description")

yourInstance.method(param: "aParam") {
    <Your assert using XCTAssert...>
    expectation.fulfill()
}

wait(for: [expectation], timeout: <time to wait the fulfillment of the expecation>)

Basically, to test an asynchronous operation/closure you must:

  • create an expectation, that is, an instance of XCTestExpectation
  • execute your closure, make your asserts on the closure return values/parameters and call the fulfill method of the XCTestExpectation
  • wait for the expectation to be fulfilled with wait(for:timeout:), as shown in the template above

So, what about a more complex example? Let’s see how powerful expectations are and, most importantly, how we can use them in our tests. Suppose for example we have a use case class called PasswordUpdateUseCase with the following implementation:

public class PasswordUpdateUseCase {
    private let passwordService: PasswordService
    private let passwordRepository: PasswordRepository
    
    public init(passwordService: PasswordService, passwordRepository: PasswordRepository) {
        self.passwordService = passwordService
        self.passwordRepository = passwordRepository
    }
    
    public func update(password: String) {
        passwordService.update(password: password) { success, error in
            if success {
                self.passwordRepository.save(password: password)
            }
        }
    }
}
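
The PasswordService and PasswordRepository collaborators used above are protocols whose declarations are not shown in this post. A minimal sketch of what they could look like, inferred from the calls made by PasswordUpdateUseCase (so an assumption, not the original code), is the following:

public protocol PasswordService {
    // Assumed shape: the completion closure reports success and an error.
    func update(password: String, completion: @escaping (Bool, Error) -> ())
}

public protocol PasswordRepository {
    // Assumed shape: persist the new password locally.
    func save(password: String)
}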

As you can see, inside the update method we have an instance of PasswordService that, as the method name suggests, executes an update of the user password and returns the result of the operation inside a closure. How do we unit test it? Let’s see how we can achieve our objective using some handmade mocks and expectations. For this post I will NOT USE the “Given-When-Then” structure I used in a previous post, because I want to keep the focus on the code structure. First of all, to test our use case we need to mock the PasswordRepository. In our test we want to verify if our save method has been called or not. We can achieve this objective by implementing a spy object, PasswordDatabaseRepositorySpy, that exposes a status property savePasswordHasBeenCalled.

class PasswordDatabaseRepositorySpy: PasswordRepository {
    private(set) var savePasswordHasBeenCalled = false
    
    func save(password: String) {
        savePasswordHasBeenCalled = true
    }
}

Now it’s time to mock our PasswordService. We need to mock it so that it has the following features:

  • it exposes a status property that lets us know if the update method has been called
  • it simulates an asynchronous call inside the update method
  • it can fulfill the expectation of our test in time

A lot of stuff to do. Let’s see how we can implement it. We will call it PasswordNetworkServiceSpy.

class PasswordNetworkServiceSpy: PasswordService {
    private(set) var updatePasswordHasBeenCalled = false
    private let expectation: XCTestExpectation
    private let successful: Bool

    init(expectation: XCTestExpectation, successful: Bool) {
        self.expectation = expectation
        self.successful = successful
    }
    
    func update(password: String, completion: @escaping (Bool, Error) -> ()) {
        DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(200)) {
            self.updatePasswordHasBeenCalled = true
            completion(self.successful, NSError(domain: "error", code: -1, userInfo: nil))
            self.expectation.fulfill()
        }
    }
}

The interesting thing about our implementation is that our spy is in charge of fulfilling the expectation: the closure executed inside the PasswordUpdateUseCase is created inside our PasswordService spy, and we have to be sure that after its execution expectation.fulfill() is called.
Now we are ready to write our unit tests. We will test two cases: update successful and update failure.

class AsynchronousTestingClosureDependencyTests: XCTestCase {
    
    func testUseCaseUpdatePasswordSuccessful() {
        let updateExpectation = expectation(description: "updateExpectation")
        let service = PasswordNetworkServiceSpy(expectation: updateExpectation, successful: true)
        let repository = PasswordDatabaseRepositorySpy()
        let passwordUseCase = PasswordUpdateUseCase(passwordService: service,
                                                    passwordRepository: repository)
        passwordUseCase.update(password: "::password::")
        wait(for: [updateExpectation], timeout: 300)
        XCTAssertTrue(service.updatePasswordHasBeenCalled)
        XCTAssertTrue(repository.savePasswordHasBeenCalled)
    }
    
    func testUseCaseUpdatePasswordFail() {
        let updateExpectation = expectation(description: "updateExpectation")
        let service = PasswordNetworkServiceSpy(expectation: updateExpectation, successful: false)
        let repository = PasswordDatabaseRepositorySpy()
        let passwordUseCase = PasswordUpdateUseCase(passwordService: service,
                                                    passwordRepository: repository)
        passwordUseCase.update(password: "::password::")
        wait(for: [updateExpectation], timeout: 300)
        XCTAssertTrue(service.updatePasswordHasBeenCalled)
        XCTAssertFalse(repository.savePasswordHasBeenCalled)
    }
}

As you can see, in these tests we have an example of expectation creation/usage. In each test we are calling wait(for: [updateExpectation], timeout: 300) so that the test “waits” until the expectation is fulfilled or the maximum timeout is reached (and in the latter case the test fails, no matter what the other conditions are). The strangest thing is the order of the instructions: the wait comes before the various XCTAssert calls. To make our tests work we need to wait until the closure inside the update method of the use case has completed. Only then can we make our assertions and verify the conditions that make the test pass (in this case, that the various methods on the various collaborators have/have not been called). We are done with our example. As you can see, you can experiment a little bit with expectations and implement complex patterns to verify your closures. You can find the complete example discussed above here. Expectation: your true friend for asynchronous code testing :heart:.

Mp3ID3Tagger: a native macOS app to edit the ID3 tag of your mp3 files written using RxSwift and RxCocoa

The third of a short series of posts in which I describe my two latest projects: ID3TagEditor and Mp3ID3Tagger. In this post I will talk about Mp3ID3Tagger, a macOS application to edit the ID3 tag of your mp3 files.


In this previous post I described the reason why I developed Mp3ID3Tagger, a macOS app to edit the ID3 tag of your mp3 files that leverages the power of ID3TagEditor. Below you can find the app logo.

MP3ID3Tagger macOS app RxSwift

So how did I develop Mp3ID3Tagger? I was about to start the development following the classic approach to developing an app on any Apple OS: Model View Controller and plain Swift. But then I thought: “This is the perfect project to test one of the latest programming techniques I recently learned: Reactive Programming/Reactive Extensions with RxSwift and RxCocoa!!!!!! In this way I can also try to use a different architectural pattern: the Model View ViewModel (MVVM)” :sunglasses:. What kind of architectural pattern is the MVVM? What are Reactive Programming, Reactive Extensions, RxSwift and RxCocoa???
Let’s start from the first one. The MVVM is an architectural pattern invented by the Microsoft software engineers Ken Cooper and Ted Peters. As for the other architectural patterns I described in the past, the MVVM is useful to clearly separate the UI development from the business logic. The main components of the MVVM are:

  • the Model, that usually represents the business logic of the application.
  • the View, as in the other architectural pattern, the view is the structure, layout, and appearance of what a user sees on the screen.
  • the View model, that usually represents an abstraction of the view exposing public properties and commands.
  • the Binder, which interprets bindings defined in the View, observes the View Model for changes in state and updates the View, and finally observes the View for changes in state and updates the View Model.

From the definition above we see that the MVVM needs something to bind the View to the View Model in a platform-independent way. This is why we need RxSwift, RxCocoa and Reactive Extensions (usually called ReactiveX). What are they? Let’s see some quotes for the definitions:

Reactive Extensions (also known as ReactiveX or Rx) is a set of tools allowing imperative programming languages to operate on sequences of data regardless of whether the data is synchronous or asynchronous. It provides a set of sequence operators that operate on each item in the sequence. …. ReactiveX is API for asynchronous programming with observable streams … RxSwift is the Swift version of ReactiveX (Rx) …. RxCocoa is a framework that helps make Cocoa APIs used in iOS and OS X easier to use with reactive techniques ….

The main components of RxSwift are the following (a small sketch follows the list):

  • Observables, that are things which emit notifications of change, and Observers, that are things which subscribe to an Observable in order to be notified when it has changed
  • Subjects, that are entities that act both as an Observable and as an Observer
  • Operators, that are basically functions that work on Observables and return Observables
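
Here is a minimal, self-contained sketch of these three concepts in isolation (it is not taken from the Mp3ID3Tagger code, just an illustration of the RxSwift building blocks):

import RxSwift

let disposeBag = DisposeBag()

// Subject: acts as an Observer (we push values into it with onNext)
// and as an Observable (we can subscribe to it).
let titles = PublishSubject<String>()

// Operator: map transforms every emitted value into a new one.
let uppercasedTitles: Observable<String> = titles.map { $0.uppercased() }

// Observer: subscribe to be notified of every new value.
uppercasedTitles
    .subscribe(onNext: { title in print(title) })
    .disposed(by: disposeBag)

titles.onNext("my song title") // prints "MY SONG TITLE"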

So RxSwift and RxCocoa let us create an abstraction from the platform-specific UI implementation and let us implement our ViewModel by working in an event-driven way: the ViewModel only works with streams of data that come from Observables and Subjects of RxSwift. RxCocoa gives us an abstraction over Cocoa and Cocoa Touch specific components and lets us work with generic observable UI components. This basically means that:

  • RxSwift and RxCocoa are our Binder of the MVVM
  • the various Views and View Controllers are the View of the MVVM
  • the ID3TagEditor will be the Model of the MVVM
  • the ViewModel will connect the View and the ID3TagEditor Model in a platform UI independent way

With this architecture we can also think about using the same Model and ViewModel on different platforms. So if in the future I develop an iOS version of Mp3ID3Tagger, I will only have to develop the View part. So let’s start to see how I implemented Mp3ID3Tagger, the app subject of this post. Let’s start from the UI, to see what Mp3ID3Tagger looks like. The app has only one screen, where the user can input the data they want to insert into the tag. There is a button to select the cover and fields for all the textual/numeric values. The values that can be chosen from a list are implemented as NSPopUpButton components.

MP3ID3Tagger interface

The first building block is the ViewModel base class. This class is useful to centralize the setup of a disposeBag. The DisposeBag is an RxSwift component that keeps a reference to all the Disposables you add to it. Subscriptions to Observables return Disposables, so you can add them to the bag to get an ARC-like behaviour: when the DisposeBag is released, all the Disposable instances it keeps are released as well. So by having the ViewModel base class, every ViewModel has a disposeBag by default where it can add its disposables (a minimal sketch of this base class follows the view model code below). As we have seen before, the app has just one screen, so there’s just one ViewModel subclass to represent that screen, the Mp3ID3TaggerViewModel class. This class has 4 properties:

  • id3TagReader, of type ID3TagReader. This class has the responsibility to read a tag from an mp3 file when an openAction occurs. So ID3TagReader will be a subscriber of the openAction observable.
  • id3TagWriter, of type ID3TagWriter. This class has the responsibility to save a new tag to the mp3 file currently opened (the last openAction value) when a saveAction occurs. So ID3TagWriter will be a subscriber of the saveAction observable.
  • form, of type Form. This class has the responsibility to fill the fields of the form on the UI with values of the ID3tag read by the id3TagReader when an mp3 file has been opened. It has also the responsibility to collect the data contained in the form so that the id3TagWriter can write them when a saveAction occurs.
  • saveResult, of type PublishSubject<Bool>. This subject publishes the result of a save action made by the id3TagWriter.

class Mp3ID3TaggerViewModel: ViewModel {
    let id3TagReader: ID3TagReader
    let id3TagWriter: ID3TagWriter
    let form: Form
    let saveResult: PublishSubject<Bool>
    
    init(openAction: Observable<String>, saveAction: Observable<Void>) {
        self.id3TagReader = ID3TagReader(id3TagEditor: ID3TagEditor(), openAction: openAction)
        self.id3TagWriter = ID3TagWriter(id3TagEditor: ID3TagEditor(), saveAction: saveAction)
        self.form = Form()
        self.saveResult = PublishSubject<Bool>()
        super.init()

        id3TagReader.read { [unowned self] id3Tag in
            self.form.fillFields(using: id3Tag)
        }
        
        id3TagWriter.write(input: Observable.combineLatest(form.readFields(), openAction)) { result in
            self.saveResult.onNext(result)
        }
    }
} 
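
The ViewModel base class mentioned above is not shown in this post. A minimal sketch of it, assuming it only has to expose a shared DisposeBag to its subclasses, could be something like this:

import RxSwift

class ViewModel {
    // Shared bag: every subscription a subclass adds here is disposed
    // together with the view model instance.
    let disposeBag = DisposeBag()
}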

Now we can see the details of all the collaborators of our view model. Let’s start from the ID3TagReader. This class keeps a reference to an instance of the ID3TagEditor. Its main function is read(_ finish: @escaping (ID3Tag?) -> ()). In this function there is the subscription to the openAction observable received at construction time (passed by the Mp3ID3TaggerViewModel). Each new value received from the openAction is a path to a new mp3 file. This path is passed to the ID3TagEditor instance, which reads the ID3 tag of the song. If everything goes well, the tag is returned to the caller by using the finish closure. If you remember the Mp3ID3TaggerViewModel class, in this finish closure the Form class is called to fill the form fields (we will see below how it does this operation).

class ID3TagReader {
    private let id3TagEditor: ID3TagEditor
    private let openAction: Observable<String>
    private let disposeBag: DisposeBag
    
    init(id3TagEditor: ID3TagEditor, openAction: Observable<String>) {
        self.id3TagEditor = id3TagEditor
        self.openAction = openAction
        self.disposeBag = DisposeBag()
    }
    
    func read(_ finish: @escaping (ID3Tag?) -> ()) {
        openAction.subscribe(onNext: { [unowned self] path in
            do {
                let id3Tag = try self.id3TagEditor.read(from: path)
                finish(id3Tag)
            } catch {
                finish(nil)
            }
        }).disposed(by: disposeBag)
    }
}

Then we have the ID3TagWriter class. Like the ID3TagReader, this class keeps a reference to an instance of the ID3TagEditor. Its main function is write(input: Observable<(ID3Tag, String)>, _ finish: @escaping (Bool) -> ()). This function takes two parameters:

  • input of type Observable<(ID3Tag, String)>. This is an observable on a tuple composed of an ID3 tag and the path of an mp3 file
  • finish of type (Bool) -> ()

Inside this function there is the subscription to the saveAction observable received at construction time from the Mp3ID3TaggerViewModel class. This observable is combined (using withLatestFrom) with the input observable received as a parameter and described above, and a new subscription to the result of the combination is created: each time a save action is triggered, the latest ID3 tag and mp3 file path are taken and the ID3TagEditor instance is used to write the ID3 tag to the mp3 file. The caller of the write function of the ID3TagWriter is notified of the result of the operation through the finish closure.

class ID3TagWriter {
    private let id3TagEditor: ID3TagEditor
    private let saveAction: Observable<Void>
    private let disposeBag: DisposeBag
    
    init(id3TagEditor: ID3TagEditor, saveAction: Observable<Void>) {
        self.id3TagEditor = id3TagEditor
        self.saveAction = saveAction
        self.disposeBag = DisposeBag()
    }
    
    func write(input: Observable<(ID3Tag, String)>, _ finish: @escaping (Bool) -> ()) {
        saveAction
            .withLatestFrom(input)
            .subscribe(onNext: { [unowned self] event in
                do {
                    try self.id3TagEditor.write(tag: event.0, to: event.1)
                    finish(true)
                } catch {
                    finish(false)
                }
            })
            .disposed(by: disposeBag)
    }
}

Now let’s see the Form class and its collaborators. This class has 5 collaborators. Each collaborator represents a subset of the form fields. These fields are represented as Variable subjects of the specific field type. In this way we are able to publish new values to these observables (by using the value property) and at the same time observe their values. In fact, this class has two functions:

  • readFields(), that creates an observable from the fields observables by combining them using the Rx operator combineLatest
  • fillFields(using id3Tag: ID3Tag?), that sets the value of the fields observables with the received id3 tag (read by the ID3TagReader when an mp3 file has been opened)

Below you can find the Form class with all the implementations, also for its collaborators. In this way it’s easy to note what I stated above: the set of all the Variable fields of these classes matches the set of UI components that you saw in the screenshot of the app above. One last important thing to note: the AttachedPictureField class forces the type of the attached picture to be saved to FrontCover. In this way ID3TagEditor will write the ID3 tag with the correct data to display the album cover on my Renault Clio!!! :relieved:

class Form {
    let basicSongFields: BasicSongFields
    let versionField: VersionField
    let trackPositionInSetFields: TrackPositionInSetFields
    let genreFields: GenreFields
    let attachedPictureField: AttachedPictureField
    
    init() {
        self.basicSongFields = BasicSongFields()
        self.versionField = VersionField()
        self.trackPositionInSetFields = TrackPositionInSetFields()
        self.genreFields = GenreFields()
        self.attachedPictureField = AttachedPictureField()
    }
    
    func readFields() -> Observable<ID3Tag> {
        return Observable.combineLatest(
            versionField.validVersion,
            basicSongFields.observe(),
            trackPositionInSetFields.trackPositionInSet,
            genreFields.genre,
            attachedPictureField.observeAttachPictureCreation()
        ) { (version, basicFields, trackPositionInSet, genre, image) -> ID3Tag in
            return ID3Tag(
                version: version,
                artist: basicFields.artist,
                albumArtist: basicFields.albumArtist,
                album: basicFields.album,
                title: basicFields.title,
                year: basicFields.year,
                genre: genre,
                attachedPictures: image,
                trackPosition: trackPositionInSet
            )
        }
    }
    
    func fillFields(using id3Tag: ID3Tag?) {
        fillBasicFieldsUsing(id3Tag: id3Tag)
        fillVersionFieldUsing(id3Tag: id3Tag)
        fillTrackPositionFieldsUsing(id3Tag: id3Tag)
        fillGenreFieldsUsing(id3Tag: id3Tag)
        fillAttachedPictureUsing(id3Tag: id3Tag)
    }
    
    private func fillBasicFieldsUsing(id3Tag: ID3Tag?) {
        basicSongFields.title.value = id3Tag?.title
        basicSongFields.artist.value = id3Tag?.artist
        basicSongFields.album.value = id3Tag?.album
        basicSongFields.albumArtist.value = id3Tag?.albumArtist
        basicSongFields.year.value = id3Tag?.year
    }
    
    private func fillVersionFieldUsing(id3Tag: ID3Tag?) {
        if let version = id3Tag?.properties.version.rawValue {
            versionField.version.value = Int(version)
        }
    }
    
    private func fillTrackPositionFieldsUsing(id3Tag: ID3Tag?) {
        if let trackPosition = id3Tag?.trackPosition {
            trackPositionInSetFields.trackPosition.value = String(trackPosition.position)
            fillTotalTracksFieldUsing(id3Tag: id3Tag)
        }
    }
    
    private func fillTotalTracksFieldUsing(id3Tag: ID3Tag?) {
        if let totalTracks = id3Tag?.trackPosition?.totalTracks {
            trackPositionInSetFields.totalTracks.value = String(totalTracks)
        }
    }
    
    private func fillGenreFieldsUsing(id3Tag: ID3Tag?) {
        if let genre = id3Tag?.genre {
            genreFields.genreIdentifier.value = genre.identifier?.rawValue
            genreFields.genreDescription.value = genre.description
        }
    }
    
    private func fillAttachedPictureUsing(id3Tag: ID3Tag?) {
        if let validAttachedPictures = id3Tag?.attachedPictures, validAttachedPictures.count > 0 {
            attachedPictureField.attachedPicture.value = ImageWithType(data: validAttachedPictures[0].art,
                                                                       format: validAttachedPictures[0].format)
        }
    }
}

....

typealias BasicSongFieldsValues = (title: String?, artist: String?, album: String?, albumArtist: String?, year: String?)

class BasicSongFields {
    let title: Variable<String?>
    let artist: Variable<String?>
    let album: Variable<String?>
    let albumArtist: Variable<String?>
    let year: Variable<String?>
    
    init() {
        self.title = Variable<String?>(nil)
        self.artist = Variable<String?>(nil)
        self.album = Variable<String?>(nil)
        self.albumArtist = Variable<String?>(nil)
        self.year = Variable<String?>(nil)
    }
    
    func observe() -> Observable<BasicSongFieldsValues> {
        return Observable.combineLatest(
            title.asObservable(),
            artist.asObservable(),
            album.asObservable(),
            albumArtist.asObservable(),
            year.asObservable()
        ) { title, artist, album, albumArtist, year in
            return BasicSongFieldsValues(title: title,
                                         artist: artist,
                                         album: album,
                                         albumArtist: albumArtist,
                                         year: year)
        }
    }
}

....

class VersionField {
    let version: Variable<Int?>
    let validVersion: Observable<ID3Version>

    init() {
        self.version = Variable<Int?>(3)
        self.validVersion = version.asObservable().map { (versionSelected) -> ID3Version in
            return ID3Version(rawValue: UInt8(versionSelected ?? 0)) ?? .version3
        }
    }
}

....

class TrackPositionInSetFields {
    let trackPosition: Variable<String?>
    let totalTracks: Variable<String?>
    let trackPositionInSet: Observable<TrackPositionInSet?>
    
    init() {
        self.trackPosition = Variable<String?>(nil)
        self.totalTracks = Variable<String?>(nil)
        self.trackPositionInSet = Observable.combineLatest(
            trackPosition.asObservable(),
            totalTracks.asObservable()
        ) { (trackPosition, totalTracks) -> TrackPositionInSet? in
            if let validTrackPositionAsString = trackPosition,
                let validTrackPosition = Int(validTrackPositionAsString) {
                return TrackPositionInSet(position: validTrackPosition,
                                          totalTracks: TrackPositionInSetFields.convertToNumber(totalTracks: totalTracks))
            }
            return nil
        }
    }
    
    private static func convertToNumber(totalTracks: String?) -> Int? {
        if let validTotalTracks = totalTracks {
            return Int(validTotalTracks)
        }
        return nil
    }
}

....

class GenreFields {
    let genreIdentifier: Variable<Int?>
    let genreDescription: Variable<String?>
    let genre: Observable<Genre?>
    
    init() {
        self.genreIdentifier = Variable<Int?>(nil)
        self.genreDescription = Variable<String?>(nil)
        self.genre = Observable.combineLatest(
            genreIdentifier.asObservable(),
            genreDescription.asObservable()
        ) { (genreIdentifier, genreDescription) -> Genre? in
            if let validGenre = genreIdentifier,
                let validId3Genre = ID3Genre(rawValue: validGenre) {
                return Genre(genre: validId3Genre, description: genreDescription)
            }
            return nil
        }
    }
}

....

class AttachedPictureField {
    let attachedPicture: Variable<ImageWithType?>

    init() {
        self.attachedPicture = Variable<ImageWithType?>(nil)
    }

    func observeAttachPictureCreation() -> Observable<[AttachedPicture]?> {
        return attachedPicture
            .asObservable()
            .map({ imageWithType in
                if let validImageWithType = imageWithType {
                    return [AttachedPicture(art: validImageWithType.data,
                                            type: .FrontCover,
                                            format: validImageWithType.format)]
                } else {
                    return nil
                }
            })
    }
}

Now it’s time to see the view controller of the app, which basically corresponds to the View of the MVVM. Its name is Mp3ID3TaggerViewController. This controller implements a protocol I defined: the BindableView protocol. This protocol represents the View part of the MVVM architecture and must be implemented only by subclasses of NSViewController. The protocol contains a property and a function. The viewModel property forces the class (the View) to have a property that represents its ViewModel. The bindViewModel function is where the View and the View Model are bound together; it must be called inside one of the lifecycle methods of the NSViewController.

protocol BindableView where Self: NSViewController {
    associatedtype ViewModelType
    var viewModel: ViewModelType! { get set }
    func bindViewModel()
}

If we look at the implementation of the bindViewModel method, we can see where something “magical” is happening :crystal_ball:: an instance of the Mp3ID3TaggerViewModel class is created and the UI components that represent the various fields of the form are bound to the view model fields by using the custom operator <->. This operator lets us define what is called two-way binding or bidirectional binding using RxSwift:

  • each Variable field of the view model is bound to a field on the UI. This basically means that each value we set in the value property of a Variable field will be displayed on the corresponding Cocoa-specific UI field.

  • each value inserted in the Cocoa-specific UI field will be set in the corresponding Variable field of the view model.

In this way the View Model is completely decoupled from the View part (in this case the NSViewController). This means that we can reuse the same ViewModel to create other versions of Mp3ID3Tagger for other platforms. This is absolutely fantastic!!!!! :heart_eyes::relaxed:. Last but not least, in the controller we also have some other functions:

  • open(_ sender: Any?) and save(_ sender: Any?), that manage the opening of an mp3 file and the saving of the same file
  • bindSaveAction(), that observes the result of a save action
  • openImage(imageUrl: URL) and bindAttachedPictureField(), that manage the binding and the subscription to the open action of an image to be used as the front cover for the ID3 tag

infix operator <-> : DefaultPrecedence

func <-> <T>(property: ControlProperty<T>, variable: Variable<T>) -> Disposable {
    let bindToUIDisposable = variable.asObservable()
        .bind(to: property)
    let bindToVariable = property
        .subscribe(onNext: { n in
            variable.value = n
        }, onCompleted:  {
            bindToUIDisposable.dispose()
        })
    
    return CompositeDisposable(bindToUIDisposable, bindToVariable)
}

....

class Mp3ID3TaggerViewController: NSViewController, BindableView {
    private let disposeBag: DisposeBag = DisposeBag()
    private let openAction: PublishSubject<String> = PublishSubject<String>()
    private let saveAction: PublishSubject<Void> = PublishSubject<Void>()
    private let stringToID3ImageExtensionAdapter = StringToID3ImageExtensionAdapter()
    var viewModel: Mp3ID3TaggerViewModel!
    @IBOutlet weak var versionPopUpbutton: NSPopUpButton!
    @IBOutlet weak var fileNameLabel: NSTextField!
    @IBOutlet weak var titleTextField: NSTextField!
    @IBOutlet weak var artistTextField: NSTextField!
    @IBOutlet weak var albumTextField: NSTextField!
    @IBOutlet weak var albumArtistField: NSTextField!
    @IBOutlet weak var yearTextField: NSTextField!
    @IBOutlet weak var trackPositionTextField: NSTextField!
    @IBOutlet weak var totalTracksTextField: NSTextField!
    @IBOutlet weak var genrePopUpMenu: NSPopUpButton!
    @IBOutlet weak var genreDescriptionTextField: NSTextField!
    @IBOutlet weak var imageSelectionButton: NSButton!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        self.bindViewModel()
    }
    
    func bindViewModel() {
        viewModel = Mp3ID3TaggerViewModel(openAction: openAction.asObservable(), saveAction: saveAction.asObservable())
        (titleTextField.rx.text <-> viewModel.form.basicSongFields.title).disposed(by: disposeBag)
        (artistTextField.rx.text <-> viewModel.form.basicSongFields.artist).disposed(by: disposeBag)
        (albumTextField.rx.text <-> viewModel.form.basicSongFields.album).disposed(by: disposeBag)
        (albumArtistField.rx.text <-> viewModel.form.basicSongFields.albumArtist).disposed(by: disposeBag)
        (yearTextField.rx.text <-> viewModel.form.basicSongFields.year).disposed(by: disposeBag)
        (versionPopUpbutton.rx.selectedItemTag <-> viewModel.form.versionField.version).disposed(by: disposeBag)
        (trackPositionTextField.rx.text <-> viewModel.form.trackPositionInSetFields.trackPosition).disposed(by: disposeBag)
        (totalTracksTextField.rx.text <-> viewModel.form.trackPositionInSetFields.totalTracks).disposed(by: disposeBag)
        (genrePopUpMenu.rx.selectedItemTag <-> viewModel.form.genreFields.genreIdentifier).disposed(by: disposeBag)
        (genreDescriptionTextField.rx.text <-> viewModel.form.genreFields.genreDescription).disposed(by: disposeBag)
        self.bindAttachedPictureField()
        self.bindSaveAction()
    }
    
    private func bindAttachedPictureField() {
        viewModel
            .form
            .attachedPictureField
            .attachedPicture
            .asObservable()
            .filter({ $0 != nil })
            .subscribe(onNext: { self.imageSelectionButton.image = NSImage(data: $0!.data) })
            .disposed(by: disposeBag)
        imageSelectionButton.rx.tap.subscribe(onNext: { tap in
            NSOpenPanel.display(in: self.view.window!,
                                fileTypes: ["png", "jpg", "jpeg"],
                                title: "Select an Image file",
                                onOkResponse: self.openImage)
        }).disposed(by: disposeBag)
    }
    
    private func bindSaveAction() {
        viewModel.saveResult
            .asObservable()
            .subscribe(onNext: { (result) in
                let alert = NSAlert()
                alert.addButton(withTitle: "Ok")
                alert.messageText = result ? "Mp3 saved correctly!" : "Error during save!"
                alert.beginSheetModal(for: self.view.window!, completionHandler: nil)
            })
            .disposed(by: disposeBag)
    }
    
    private func openImage(imageUrl: URL) {
        if let image = try? Data(contentsOf: imageUrl) {
            let type = self.stringToID3ImageExtensionAdapter.adapt(format: imageUrl.pathExtension)
            self.viewModel.form.attachedPictureField.attachedPicture.value = ImageWithType(data: image, format: type)
            self.imageSelectionButton.image = NSImage(data: image)
        }
    }

    @IBAction func open(_ sender: Any?) {
        NSOpenPanel.display(in: self.view.window!,
                            fileTypes: ["mp3"],
                            title: "Select an MP3 file",
                            onOkResponse: {
                                self.openAction.onNext($0.path)
                                self.fileNameLabel.stringValue = $0.lastPathComponent
        })
    }
    
    @IBAction func save(_ sender: Any?) {
        saveAction.onNext(())
    }
} 

We’re done with Mp3ID3Tagger. I hope you liked my architectural choices and how I developed it by leveraging the power of RxSwift and RxCocoa :sunglasses::relieved:. Obviously, don’t forget to check out the official Mp3ID3Tagger repo
and to download the Mp3ID3Tagger app from this link and use it!!! :heartpulse::sparkling_heart:

ID3TagEditor: a Swift framework to read and write ID3 tag of your mp3 files for macOS, iOS, tvOS and watchOS

The second of a short series of posts in which I describe my two latest projects: ID3TagEditor and Mp3ID3Tagger. In this post I will describe how I created ID3TagEditor.


In this previous post I described the reason why I developed ID3TagEditor, a Swift library to edit the ID3 tag of mp3 files with support for macOS, iOS, watchOS and tvOS. In this post I will describe how I developed it. Below you can find the library logo.

ID3TagEditor logo

But before going deeper into the details of ID3TagEditor, it is useful to know how the ID3 tag standard works (you can find the full reference on the official site). The definition reported there for the ID3 standard is:

An ID3 tag is a data container within an MP3 audio file stored in a prescribed format

This definition means that an ID3 tag is basically a chunk of information stored at the beginning of an mp3 file. The standard defines the format that any developer can use to read and write this information. Let’s see an example of an ID3 tag using a hex editor.

ID3 tag example

A tag is composed of a header and a series of frames. The tag header has a size of 10 bytes and contains the following information (for both v2 and v3):

  • ID3 tag file identifier, 3 bytes, usually represented as “ID3”
  • tag version, 2 bytes, a couple of numbers that represent the major version and the revision (e.g. 0x03 0x00)
  • flags, 1 byte, containing three configuration flags represented as %abc00000 (where a, b and c are the flag bits)
  • size, 4 bytes (see the decoding sketch after the quote below). Quoting the ID3 standard, the size is:

the size of the complete tag after unsychronisation, including padding, excluding the header but not excluding the extended header. The ID3v2 tag size is encoded with four bytes where the most significant bit (bit 7) is set to zero in every byte, making a total of 28 bits. The zeroed bits are ignored, so a 257 bytes long tag is represented as $00 00 02 01. …. Only 28 bits(representing up to 256MB) are used in the size description…
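
To make the size encoding above concrete, here is a small sketch (my own illustration, not code taken from ID3TagEditor) that decodes the four synchsafe size bytes by keeping only the lower 7 bits of each byte:

// Each of the 4 size bytes contributes only its lower 7 bits (bit 7 is always 0).
func decodeID3v2TagSize(from bytes: [UInt8]) -> Int {
    return bytes.reduce(0) { size, byte in (size << 7) | Int(byte & 0x7F) }
}

decodeID3v2TagSize(from: [0x00, 0x00, 0x02, 0x01]) // 257, the example from the standard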

ID3 tag header

A frame is composed of a header and custom content. The frame header contains the following information, which changes in size between versions:

  • frame id, 3 bytes in version 2 and 4 bytes in version 3
  • size, 3 bytes in version 2 and 4 bytes in version 3, that describes the total size of the frame excluding the header
  • option flags, 2 bytes, available only in version 3

So the frame header size is 10 bytes in version 3 and 6 bytes in version 2. After the header there are the frame-specific flags/options and the frame content. Below you can find an example of a frame in a version 3 tag.

ID3 frame example

Last but not least, at the end of the ID3 tag there are also 2 KB of padding (you can see it in the previous images, that series of endless 0x00 at the end of the tag :relieved:). How does ID3TagEditor read and write all this information? The main API of the framework consists of two simple methods:

/**
 Read the ID3 tag contained in the mp3 file.

 - parameter path: path of the mp3 file to be parsed.

 - throws: Could throw `InvalidFileFormat` if an mp3 file doesn't exist at the specified path.

 - returns: an ID3 tag, or nil if a tag doesn't exist in the file.
 */
public func read(from path: String) throws -> ID3Tag?

/**
 Writes the mp3 to a new file or overwrite it with the new ID3 tag.

 - parameter tag: the ID3 tag to be written in the mp3 file.
 - parameter path: path of the mp3 file where we will write the tag.
 - parameter newPath: path where the file with the new tag will be written. **If nil, the mp3 file will be overwritten**.
 If nothing is passed, the file will be overwritten at its current location.

 - throws: Could throw `TagTooBig` (tag size > 256 MB) or `InvalidTagData` (no data set to be written in the
 ID3 tag).
 */
public func write(tag: ID3Tag, to path: String, andSaveTo newPath: String? = nil) throws
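
To give an idea of how these two methods fit together, here is a small hypothetical usage sketch (the file path and the new title are made-up values):

let id3TagEditor = ID3TagEditor()

do {
    // Read the tag (if any) from an existing mp3 file.
    if let id3Tag = try id3TagEditor.read(from: "/path/to/song.mp3") {
        print(id3Tag.title ?? "no title frame")

        // Change a frame and write the tag back, overwriting the file.
        id3Tag.title = "A new title"
        try id3TagEditor.write(tag: id3Tag, to: "/path/to/song.mp3")
    }
} catch {
    print("ID3TagEditor error: \(error)")
}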

So the ID3TagEditor framework has two main parts: one that reads/parses an mp3 file and one that writes an ID3 tag to the mp3 file.
Let’s start from the read/parsing part. The main entry point of the library is the ID3TagParser class, which is instantiated by an ID3TagParserFactory. Its main function is called parse. As the name suggests, it parses the various frames. Before that there are three operations:

  • the version of the tag is extracted by a collaborator called ID3TagVersionParser
  • a check if a tag is available in the mp3 file loaded. This check is done by a collaborator named ID3TagPresence
  • the size of the tag is extracted by a collaborator called ID3TagSizeParser
....
 
func parse(mp3: Data) -> ID3Tag? {
    let version = tagVersionParser.parse(mp3: mp3 as Data)
    if (tagPresence.isTagPresentIn(mp3: mp3 as Data, version: version)) {
        let id3Tag = ID3Tag(version: version, size: 0)
        parseTagSizeFor(mp3: mp3 as NSData, andSaveInId3Tag: id3Tag)
        parseFramesFor(mp3: mp3 as NSData, id3Tag: id3Tag)
        return id3Tag
    }
    return nil
}

....

The parsing of each frame is done in the function parseFramesFor.

....
private func parseFramesFor(mp3: NSData, id3Tag: ID3Tag) {
    var currentFramePosition = id3TagConfiguration.headerSize();
    while currentFramePosition < id3Tag.properties.size {
        let frame = getFrameFrom(mp3: mp3, position: currentFramePosition, version: id3Tag.properties.version)
        frameContentParser.parse(frame: frame, id3Tag: id3Tag)
        currentFramePosition += frame.count;
    }
}

private func getFrameFrom(mp3: NSData, position: Int, version: ID3Version) -> Data {
    let frameSize = frameSizeParser.parse(mp3: mp3, framePosition: position, version: version)
    let frame = mp3.subdata(with: NSMakeRange(position, frameSize))
    return frame
}
....

How does the parsing of each frame work? How does ID3TagEditor recognize the correct frame and execute the correct parsing based on the frame type? The answer is inside the ID3FrameContentParser class, used inside the parseFramesFor(mp3: NSData, id3Tag: ID3Tag) function. This class uses the Command pattern to launch the correct parsing operation for the current frame type. The list of frame parsing operations is stored inside a dictionary where the key is the FrameType enum. This enum generically identifies the frame type, and is mapped to the correct ID3 frame identifier for each version in the ID3FrameConfiguration function frameTypeFor(identifier: frameIdentifier, version: version). As you can see below, the extraction of the frame identifier is done in getFrameTypeFrom(frame: Data, version: ID3Version) -> FrameType.

 class ID3FrameContentParser: FrameContentParser {
     private let frameContentParsingOperations: [FrameType : FrameContentParsingOperation]
     private var id3FrameConfiguration: ID3FrameConfiguration
 
     init(frameContentParsingOperations: [FrameType : FrameContentParsingOperation],
          id3FrameConfiguration: ID3FrameConfiguration) {
         self.frameContentParsingOperations = frameContentParsingOperations
         self.id3FrameConfiguration = id3FrameConfiguration
     }
 
     func parse(frame: Data, id3Tag: ID3Tag) {
         let frameType = getFrameTypeFrom(frame: frame, version: id3Tag.properties.version)
         if (isAValid(frameType: frameType)) {
             frameContentParsingOperations[frameType]?.parse(frame: frame, id3Tag: id3Tag)
         }
     }
 
     private func getFrameTypeFrom(frame: Data, version: ID3Version) -> FrameType {
         let frameIdentifierSize = id3FrameConfiguration.identifierSizeFor(version: version)
         let frameIdentifierData = [UInt8](frame.subdata(in: Range(0...frameIdentifierSize - 1)))
         let frameIdentifier = toString(frameIdentifier: frameIdentifierData)
         let frameType = id3FrameConfiguration.frameTypeFor(identifier: frameIdentifier, version: version)
         return frameType
     }
 
     private func isAValid(frameType: FrameType) -> Bool {
         return frameType != .Invalid
     }
 
     private func toString(frameIdentifier: [UInt8]) -> String {
         return frameIdentifier.reduce("") { (convertedString, byte) -> String in
             return convertedString + String(Character(UnicodeScalar(byte)))
         }
     }
 }

If we want to go deeper we can have a look at the ID3FrameContentParsingOperationFactory. This class initializes the classes used as commands to parse the various types of frames. I will talk about their implementation details in other posts (because these classes contain a lot of cool Swift stuff that I can use to write a lot of other posts :smirk:).

class ID3FrameContentParsingOperationFactory {
    static func make() -> [FrameType : FrameContentParsingOperation] {
        let paddingRemover = PaddingRemoverUsingTrimming()
        let id3FrameConfiguration = ID3FrameConfiguration()
        return [
            .Artist: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover, 
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.artist = frameContentWithoutPadding
            },
            .AlbumArtist: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.albumArtist = frameContentWithoutPadding
            },
            .Album: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.album = frameContentWithoutPadding
            },
            .Title: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.title = frameContentWithoutPadding
            },
            .AttachedPicture: AttachedPictureFrameContentParsingOperation(
                    id3FrameConfiguration: id3FrameConfiguration,
                    pictureTypeAdapter: ID3PictureTypeAdapter(
                            id3FrameConfiguration: ID3FrameConfiguration(),
                            id3AttachedPictureFrameConfiguration: ID3AttachedPictureFrameConfiguration()
                    )
            ),
            .Year: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.year = frameContentWithoutPadding
            },
            .Genre: ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.genre = ID3GenreStringAdapter().adapt(genre: frameContentWithoutPadding)
            },
            .TrackPosition : ID3FrameStringContentParsingOperation(
                    paddingRemover: paddingRemover,
                    id3FrameConfiguration: id3FrameConfiguration
            ) { (id3Tag: ID3Tag, frameContentWithoutPadding: String) in
                id3Tag.trackPosition = ID3TrackPositionStringAdapter().adapt(trackPosition: frameContentWithoutPadding)
            }
        ]
    }
}
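
Just to show how the output of make() is consumed: the dictionary it returns is the one injected into the frames parser we saw at the beginning of this section. Below is a hypothetical wiring (the parser class name ID3FramesParser is my assumption for the sake of the example; the framework wires these objects in its own factories):

// Hypothetical wiring (class name assumed): the operations built by the
// factory are injected into the parser that dispatches on the frame type.
let framesParser = ID3FramesParser(
        frameContentParsingOperations: ID3FrameContentParsingOperationFactory.make(),
        id3FrameConfiguration: ID3FrameConfiguration()
)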

Let’s see instead how ID3TagEditor writes a new tag to an mp3 file. The tag is created by the ID3TagCreator class, which is used inside the Mp3WithID3TagBuilder class, the one that performs the actual write of the mp3 file with the new tag to disk. The main function of the ID3TagCreator class is create(id3Tag: ID3Tag) throws -> Data. Inside this function the frames are created from the data passed to the framework as an ID3Tag instance. If the frame validation succeeds, a new tag header is created and, if the tag header is valid (the size of the tag does not exceed the maximum allowed), a new Data object is returned to the Mp3WithID3TagBuilder class and written to the mp3 file.

class ID3TagCreator {
    private let id3FrameCreatorsChain: ID3FrameCreatorsChain
    private let uInt32ToByteArrayAdapter: UInt32ToByteArrayAdapter
    private let id3TagConfiguration: ID3TagConfiguration

    ....

    func create(id3Tag: ID3Tag) throws -> Data {
        var frames = id3FrameCreatorsChain.createFrames(id3Tag: id3Tag, tag: [UInt8]())
        if thereIsNotValidDataIn(frames: frames) {
            throw ID3TagEditorError.InvalidTagData
        }
        frames.append(contentsOf: createFramesEnd())
        let header = createTagHeader(contentSize: frames.count, id3Tag: id3Tag)
        let tag = header + frames
        if (isTooBig(tag: tag)) {
            throw ID3TagEditorError.TagTooBig
        }
        return Data(bytes: tag)
    }
    
    ....
}    
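
The Data returned by create(id3Tag:) is then written to disk by Mp3WithID3TagBuilder. Conceptually the write boils down to something like the sketch below (the function and parameter names are my assumptions, not the real Mp3WithID3TagBuilder API, which presumably also has to deal with a tag already present in the original file):

import Foundation

// Conceptual sketch only: prepend the newly created tag to the mp3 audio
// data and write the result to disk.
func buildMp3(with id3Tag: ID3Tag, mp3: Data, using tagCreator: ID3TagCreator, to url: URL) throws {
    let tagData = try tagCreator.create(id3Tag: id3Tag)
    var mp3WithNewTag = tagData
    mp3WithNewTag.append(mp3)
    try mp3WithNewTag.write(to: url)
}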

How is the frame data created? The answer is inside the ID3FrameCreatorsChain and ID3FrameCreatorsChainFactory classes. The factory class creates a Chain of Responsibility, where each subclass of the ID3FrameCreatorsChain class is a specialization with the responsibility to write a specific frame type. At the end of the chain a [UInt8] array is returned. This is basically an array of bytes, which is then converted into a Data object at the end of the create(id3Tag: ID3Tag) throws -> Data function of the ID3TagCreator class (where the tag header is also added, as we saw before). Below you can find the chain creation contained in the ID3FrameCreatorsChainFactory class (again, we will see the details of the various classes contained in the chain in future posts :stuck_out_tongue_winking_eye: This framework contains too much cool Swift stuff :flushed:). One important thing to note: the ID3AttachedPicturesFramesCreator class is able to create attached picture frames that set the type of the cover to one of the values defined in the ID3 standard. In this way I can use my ID3TagEditor framework to tag the mp3 with the correct data that I need to display the mp3 files’ cover on the media nav system of my Clio!!! :relieved:

class ID3FrameCreatorsChainFactory {
    static func make() -> ID3FrameCreatorsChain {
        let paddingAdder = PaddingAdderToEndOfContentUsingNullChar()
        let frameConfiguration = ID3FrameConfiguration()
        let uInt32ToByteArrayAdapter = UInt32ToByteArrayAdapterUsingUnsafePointer()
        let frameContentSizeCalculator = ID3FrameContentSizeCalculator(
                uInt32ToByteArrayAdapter: uInt32ToByteArrayAdapter
        )
        let frameFlagsCreator = ID3FrameFlagsCreator()
        let frameFromStringUTF16ContentCreator = ID3FrameFromStringContentCreator(
                frameContentSizeCalculator: frameContentSizeCalculator,
                frameFlagsCreator: frameFlagsCreator,
                stringToBytesAdapter: ID3UTF16StringToByteAdapter(paddingAdder: paddingAdder,
                                                                  frameConfiguration: frameConfiguration)
        )
        let frameFromStringISO88591ContentCreator = ID3FrameFromStringContentCreator(
            frameContentSizeCalculator: frameContentSizeCalculator,
            frameFlagsCreator: frameFlagsCreator,
            stringToBytesAdapter: ID3ISO88591StringToByteAdapter(paddingAdder: paddingAdder,
                                                                 frameConfiguration: frameConfiguration)
        )
        let albumFrameCreator = ID3AlbumFrameCreator(
                frameCreator: frameFromStringUTF16ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let albumArtistCreator = ID3AlbumArtistFrameCreator(
                frameCreator: frameFromStringUTF16ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let artistFrameCreator = ID3ArtistFrameCreator(
                frameCreator: frameFromStringUTF16ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let titleFrameCreator = ID3TitleFrameCreator(
                frameCreator: frameFromStringUTF16ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let attachedPictureFrameCreator = ID3AttachedPicturesFramesCreator(
                attachedPictureFrameCreator: ID3AttachedPictureFrameCreator(
                        id3FrameConfiguration: frameConfiguration,
                        id3AttachedPictureFrameConfiguration: ID3AttachedPictureFrameConfiguration(),
                        frameContentSizeCalculator: frameContentSizeCalculator,
                        frameFlagsCreator: frameFlagsCreator
                )
        )
        let yearFrameCreator = ID3YearFrameCreator(
                frameCreator: frameFromStringISO88591ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let genreFrameCreator = ID3GenreFrameCreator(
                frameCreator: frameFromStringISO88591ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        let trackPositionFrameCreator = ID3TrackPositionFrameCreator(
                frameCreator: frameFromStringISO88591ContentCreator,
                id3FrameConfiguration: frameConfiguration
        )
        albumFrameCreator.nextCreator = albumArtistCreator
        albumArtistCreator.nextCreator = artistFrameCreator
        artistFrameCreator.nextCreator = titleFrameCreator
        titleFrameCreator.nextCreator = yearFrameCreator
        yearFrameCreator.nextCreator = genreFrameCreator
        genreFrameCreator.nextCreator = trackPositionFrameCreator
        trackPositionFrameCreator.nextCreator = attachedPictureFrameCreator
        return albumFrameCreator
    }
}
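
Based on how the chain is built by the factory and consumed by ID3TagCreator, the base class of the chain presumably looks something like the following sketch (a simplified version of mine, the real implementation may differ): each concrete creator appends the bytes of its own frame to the tag received as parameter and then delegates to the next link.

// A simplified sketch of the base class of the chain (assumption of mine):
// the base implementation just forwards to the next link, while each
// subclass appends the bytes of its own frame before delegating.
class ID3FrameCreatorsChain {
    var nextCreator: ID3FrameCreatorsChain?

    func createFrames(id3Tag: ID3Tag, tag: [UInt8]) -> [UInt8] {
        return nextCreator?.createFrames(id3Tag: id3Tag, tag: tag) ?? tag
    }
}

This is also why the factory only has to return the first link of the chain (albumFrameCreator): a single createFrames call on it traverses all the creators and accumulates the complete frames byte array.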

That’s it!!! This is the general structure of the ID3TagEditor framework. If you want to discover more about this framework you can have a look at my GitHub repo and start to make some contributions :heart::purple_heart:. Obviously, you must also continue to read my blog and wait for the other posts about the implementation details I promised above (if you’re too lazy to go and see for yourself :kissing_heart::satisfied:).