Lightweight Migrations in Core Data Tutorial

When you create a Core Data app, you design an initial data model for your app. However, after you ship your app inevitably you’ll want to make changes to your data model. What do you do then? You don’t want to break the app for existing users!

You can’t predict the future, but with Core Data, you can migrate toward the future with every new release of your app. The migration process will update data created with a previous version of the data model to match the current data model.

This Core Data migrations tutorial discusses the many aspects of Core Data migrations by walking you through the evolution of a note-taking app’s data model. You’ll start with a simple app with only a single entity in its data model.

Let the great migration begin!

Note: This tutorial assumes some basic knowledge of Core Data and Swift.

When to Migrate

When is a migration necessary? The easiest answer to this common question is “when you need to make changes to the data model.”

However, there are some cases in which you can avoid a migration. If an app is using Core Data merely as an offline cache, when you update the app, you can simply delete and rebuild the data store. This is only possible if the source of truth for your user’s data isn’t in the data store. In all other cases, you’ll need to safeguard your user’s data.

That said, any time it’s impossible to implement a design change or feature request without changing the data model, you’ll need to create a new version of the data model and provide a migration path.

The Migration Process

When you initialize a Core Data stack, one of the steps involved is adding a store to the persistent store coordinator. When you encounter this step, Core Data does a few things prior to adding the store to the coordinator.

First, Core Data analyzes the store’s model version. Next, it compares this version to the coordinator’s configured data model. If the store’s model version and the coordinator’s model version don’t match, Core Data will perform a migration, when enabled.

Note: If migrations aren’t enabled, and the store is incompatible with the model, Core Data will simply not attach the store to the coordinator and specify an error with an appropriate reason code.

To start the migration process, Core Data needs the original data model and the destination model. It uses these two versions to load or create a mapping model for the migration, which it uses to convert data in the original store to data that it can store in the new store. Once Core Data determines the mapping model, the migration process can start in earnest.

Migrations happen in three steps:

  1. First, Core Data copies over all the objects from one data store to the next.
  2. Next, Core Data connects and relates all the objects according to the relationship mapping.
  3. Finally, Core Data enforces any data validations in the destination model. (It disables these validations during the data copy.)

You might ask, “If something goes wrong, what happens to the original source data store?” With nearly all types of Core Data migrations, nothing happens to the original store unless the migration completes without error. Only when a migration is successful, will Core Data remove the original data store.

Types of Migrations

In my own experience, I’ve found there are a few more migration variants than the simple distinction between lightweight and heavyweight. Below, I’ve provided the more subtle variants of migration names, but these names are not official categories by any means. You’ll start with the least complex form of migration and end with the most complex form.

Lightweight Migrations

Lightweight migration is Apple’s term for the migration with the least amount of work involved on your part. This happens automatically when you use NSPersistentContainer; if you build your own Core Data stack, you have to set a couple of flags to get the same behavior. There are some limitations on how much you can change the data model, but because of the small amount of work required to enable this option, it’s the ideal setting.
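
As a rough sketch of what those flags look like when you add the store yourself (storeURL and coordinator stand in for your own store URL and coordinator; with NSPersistentContainer both flags are already true by default):

import CoreData

// NSPersistentStoreDescription (iOS 10+): both flags default to true.
let description = NSPersistentStoreDescription(url: storeURL)
description.shouldMigrateStoreAutomatically = true
description.shouldInferMappingModelAutomatically = true

// With a hand-built NSPersistentStoreCoordinator, pass the equivalent options
// when adding the store:
let options: [AnyHashable: Any] = [
  NSMigratePersistentStoresAutomaticallyOption: true,
  NSInferMappingModelAutomaticallyOption: true
]
try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                   configurationName: nil,
                                   at: storeURL,
                                   options: options)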

Manual Migrations

Manual migrations involve a little more work on your part. You’ll need to specify how to map the old set of data onto the new set, but you get the benefit of a more explicit mapping model file to configure. Setting up a mapping model in Xcode is much like setting up a data model, with similar GUI tools and some automation.

Custom Manual Migrations

This is level 3 on the migration complexity index. You’ll still use a mapping model, but complement that with custom code to specify custom transformation logic on data. Custom entity transformation logic involves creating an NSEntityMigrationPolicy subclass and performing custom transformations there.

Fully Manual Migrations

Fully manual migrations are for those times when even specifying custom transformation logic isn’t enough to fully migrate data from one model version to another. Custom version detection logic and custom handling of the migration process are necessary.

Getting Started

Download the starter project for this tutorial here.

Build and run the app UnCloudNotes in the iPhone simulator. You’ll see an empty list of notes:

Tap the plus (+) button in the top-right corner to add a new note. Add a title (there is default text in the note body to make the process faster) and tap Create to save the new note to the data store. Repeat this a few times so you have some sample data to migrate.

Back in Xcode, open the UnCloudNotesDatamodel.xcdatamodeld file to show the entity modeling tool in Xcode. The data model is simple — just one entity, a Note, with a few attributes.

You’re going to add a new feature to the app; the ability to attach a photo to a note. The data model doesn’t have any place to persist this kind of information, so you’ll need to add a place in the data model to hold onto the photo. But you already added a few test notes in the app. How can you change the model without breaking the existing notes?

It’s time for your first migration!

Note: The Xcode 8 console logs contain far more information than in previous releases. The workaround to this issue is to add OS_ACTIVITY_MODE=disable to the current scheme environment variables.

A Lightweight Migration

In Xcode, select the UnCloudNotes data model file if you haven’t already selected it. This will show you the Entity Modeler in the main work area. Next, open the Editor menu and select Add Model Version…. Name the new version UnCloudNotesDataModel v2 and ensure UnCloudNotesDataModel is selected in the Based on model field. Xcode will now create a copy of the data model.

Note: You can give this file any name you want. The sequential v2, v3, v4, et cetera naming helps you easily tell the versions apart.

This step will create a second version of the data model, but you still need to tell Xcode to use the new version as the current model. If you forget this step, any changes you make while the top level UnCloudNotesDataModel.xcdatamodeld file is selected will be applied to the original model file. You can override this behavior by selecting an individual model version, but it’s still a good idea to make sure you don’t accidentally modify the original file.

In order to perform any migration, you want to keep the original model file as it is, and make changes to an entirely new model file.

In the File Inspector pane on the right, there is a selection menu toward the bottom called Model Version. Change that selection to match the name of the new data model, UnCloudNotesDataModel v2:

Once you’ve made that change, notice in the project navigator the little green check mark icon has moved from the previous data model to the v2 data model:

When setting up the stack, Core Data will first try to connect the persistent store to the model version marked with the check mark. If an existing store file is found and it isn’t compatible with this model file, a migration will be triggered. The older version is there to support migration. The current model is the one Core Data will ensure is loaded prior to attaching the rest of the stack for your use.

Make sure you have the v2 data model selected and add an image attribute to the Note entity. Set the attribute’s name to image and the attribute’s type to Transformable.

Since this attribute is going to contain the actual binary bits of the image, you’ll use a custom NSValueTransformer to convert from binary bits to a UIImage and back again. Just such a transformer has been provided for you in ImageTransformer. In the Value Transformer Name field in the Data Model Inspector on the right of the screen, enter ImageTransformer and enter UnCloudNotes in the Module field.

Note: When referencing code from your model files, just like in Xib and Storyboard files, you’ll need to specify a module (UnCloudNotes in this case) to allow the class loader to find the exact code you want to attach.
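
The transformer is already included in the starter project, so there’s nothing to write here; purely as an illustration, a UIImage-to-Data value transformer might be sketched like this (not the shipped file):

import UIKit

// Hypothetical sketch only — use the ImageTransformer file that ships with the project.
class ImageTransformer: ValueTransformer {

  override class func allowsReverseTransformation() -> Bool {
    return true
  }

  override func transformedValue(_ value: Any?) -> Any? {
    // UIImage -> Data, so the value can live in the persistent store.
    guard let image = value as? UIImage else { return nil }
    return UIImagePNGRepresentation(image)
  }

  override func reverseTransformedValue(_ value: Any?) -> Any? {
    // Data -> UIImage when the attribute is read back.
    guard let data = value as? Data else { return nil }
    return UIImage(data: data)
  }
}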

The new model is now ready for some code! Open Note.swift and add the following property below displayIndex:

@NSManaged var image: UIImage?

Build and run the app. You’ll see your notes are still magically displayed! It turns out lightweight migrations are enabled by default. This means every time you create a new data model version, and it can be auto migrated, it will be. What a time saver!

Inferred Mapping Models

It just so happens Core Data can infer a mapping model in many cases when you enable the shouldInferMappingModelAutomatically flag on the NSPersistentStoreDescription. Core Data can automatically look at the differences in two data models and create a mapping model between them.

For entities and attributes that are identical between model versions, this is a straightforward data pass through mapping. For other changes, just follow a few simple rules for Core Data to create a mapping model.

In the new model, changes must fit an obvious migration pattern, such as:

  • Deleting entities, attributes or relationships
  • Renaming entities, attributes or relationships using the renamingIdentifier
  • Adding a new, optional attribute
  • Adding a new, required attribute with a default value
  • Changing an optional attribute to non-optional and specifying a default value
  • Changing a non-optional attribute to optional
  • Changing the entity hierarchy
  • Adding a new parent entity and moving attributes up or down the hierarchy
  • Changing a relationship from to-one to to-many
  • Changing a relationship from non-ordered to-many to ordered to-many (and vice versa)

Note: Check out Apple’s documentation for more information on how Core Data infers a lightweight migration mapping: https://developer.apple.com/library/Mac/DOCUMENTATION/Cocoa/Conceptual/CoreDataVersioning/Articles/vmLightweightMigration.html

As you see from this list, Core Data can detect, and more importantly, automatically react to, a wide variety of common changes between data models. As a rule of thumb, all migrations, if necessary, should start as lightweight migrations and only move to more complex mappings when the need arises.

As for the migration from UnCloudNotes to UnCloudNotes v2, the image property has a default value of nil since it’s an optional property. This means Core Data can easily migrate the old data store to a new one, since this change follows the “adding a new, optional attribute” pattern in the list of lightweight migration patterns above.

Image Attachments

Now that the data is migrated, you need to update the UI to allow image attachments to new notes. Luckily, most of this work has been done for you. :]

Open Main.storyboard and find the Create Note scene. Underneath, you’ll see the Create Note With Images scene that includes the interface to attach an image.

The Create Note scene is attached to a navigation controller with a root view controller relationship. Control-drag from the navigation controller to the Create Note With Images scene and select the root view controller relationship segue.

This will disconnect the old Create Note scene and connect the new, image-powered one instead:

Next, open AttachPhotoViewController.swift and add the following method to the UIImagePickerControllerDelegate extension:

func imagePickerController(_ picker: UIImagePickerController,
  didFinishPickingMediaWithInfo info: [String: Any]) {

  guard let note = note else { return }

  note.image =
    info[UIImagePickerControllerOriginalImage] as? UIImage

  _ = navigationController?.popViewController(animated: true)
}

This will populate the new image property of the note once the user selects an image from the standard image picker.

Next, open CreateNoteViewController.swift and replace viewDidAppear with the following:

override func viewDidAppear(_ animated: Bool) {
  super.viewDidAppear(animated)

  guard let image = note?.image else {
    titleField.becomeFirstResponder()
    return
  }

  attachedPhoto.image = image
  view.endEditing(true)
}

This will display the new image if the user has added one to the note.

Next, open NotesListViewController.swift and update tableView(_:cellForRowAt:) with the following:

override func tableView(_ tableView: UITableView,
                        cellForRowAt indexPath: IndexPath)
                        -> UITableViewCell {

  let note = notes.object(at: indexPath)
  let cell: NoteTableViewCell
  if note.image == nil {
    cell = tableView.dequeueReusableCell(
      withIdentifier: "NoteCell",
      for: indexPath) as! NoteTableViewCell
  } else {
    cell = tableView.dequeueReusableCell(
      withIdentifier: "NoteCellWithImage",
      for: indexPath) as! NoteImageTableViewCell
  }

  cell.note = note
  return cell
}

This will dequeue the correct UITableViewCell subclass based on the note having an image present or not. Finally, open NoteImageTableViewCell.swift and add the following to updateNoteInfo(note:):

noteImage.image = note.image

This will update the UIImageView inside the NoteImageTableViewCell with the image from the note.

Build and run, and choose to add a new note:

Tap the Attach Image button to add an image to the note. Choose an image from your simulated photo library and you’ll see it in your new note:

The app uses the standard UIImagePickerController to add photos as attachments to notes.

Note: To add your own images to the Simulator’s photo album, drag an image file onto the open Simulator window. Thankfully, the iOS 10 Simulator comes with a library of photos ready for your use. :]

If you’re using a device, open AttachPhotoViewController.swift and set the sourceType attribute on the image picker controller to .camera to take photos with the device camera. The existing code uses the photo album, since there is no camera in the Simulator.


Building a Text to Speech App Using AVSpeechSynthesizer

iOS is an operating system with many possibilities, allowing you to create everything from really simple to super-advanced applications. There are times when applications have to be multi-featured, providing elegant solutions that go beyond the commonplace and lead to a superb user experience. There are also numerous technologies you could take advantage of, and in this tutorial we are going to focus on one of them: text to speech.

Text-to-speech (TTS) is not something new in iOS 8. Since iOS 7, dealing with TTS has been really easy, as the code required to make an app speak is straightforward and easy to handle. To be more precise, iOS 7 introduced a new class named AVSpeechSynthesizer, and as you can tell from its prefix, it’s part of the powerful AVFoundation framework. The AVSpeechSynthesizer class, along with some other classes, can produce speech from a given text (or multiple pieces of text), and lets you configure various properties of that speech.

AVSpeechSynthesizer is the class responsible for carrying out the heavy work of converting text to speech. It’s capable of initiating, pausing, stopping and continuing a speech process. However, it doesn’t interact directly with the text. An intermediate class does that job, called AVSpeechUtterance. An object of this class represents a piece of text that should be spoken; to put it simply, an utterance is the text that’s about to be spoken, enriched with some properties controlling the final output. The most important of the properties the AVSpeechUtterance class handles (besides the text) are the speech rate, pitch and volume. There are a few more, but we’ll see them in a while. An utterance object also defines the voice that will be used for speaking. A voice is an object of the AVSpeechSynthesisVoice class. It always matches a specific language, and as of this writing Apple supports 37 different voices, meaning voices for 37 different locales (we’ll talk about that later).

Once an utterance object has been properly configured, it’s passed to a speech synthesizer object so the system starts producing the speech. Speaking many pieces of text, meaning many utterances, doesn’t require any extra effort at all; all it takes is handing the utterances to the synthesizer in the order they should be spoken, and the synthesizer automatically queues them.

Along with the AVSpeechSynthesizer class comes the AVSpeechSynthesizerDelegate protocol. It contains useful delegate methods that, used properly, allow you to keep track of the speech progress and of the currently spoken text. Tracking the progress might be something you won’t need in your apps, but if you do, here you’ll see how you can achieve it. It’s a bit of a tricky process, but once you understand how everything works, it all becomes pretty clear.

If you want, feel free to take a look at the official documentation for all of those classes. For my part, I’ll stop this introductory discussion here, as there’s a lot to do, but trust me, all of it is really interesting. As a final note, I need to say that all testing of the demo app must be done on a real device (iPhone). Unfortunately, text-to-speech doesn’t work in the Simulator, so plug in your phone and let’s go.

Demo App Overview

We’ll begin today’s exploration with an overview of the demo application (as we always do). The first thing I must say is that we’re not going to build it from scratch. You can download a starter project from this link, which contains a minimal implementation and, of course, the interface already built in Interface Builder. I suggest you look around a bit before we continue, so you get familiar with it and can move faster in the upcoming parts.

Let me describe the app details now. It is a navigation-based application consisting of two distinct view controllers. In the first one, there’s a textview where we’ll add the text that we want spoken. This textview occupies most of the screen, and right below it there are three buttons. The first button will be used to start the speech, while the other two are for pausing and stopping it. They won’t appear simultaneously; when the Speak button is visible the other two will be hidden, and vice versa. We’ll switch between them by implementing a simple animated effect. Additionally, right below the buttons there’s a progress view. We’ll use it to show the progress of the speech over time.

The second view controller will be used to specify various settings regarding the speech. More specifically, through this view controller we’ll be able to change the values of the rate, pitch and volume properties, but furthermore we’ll be able to pick a different voice for the speech, other than the default one.

Through the implementation that follows in the next parts, you’ll find out that bringing text-to-speech to life is relatively easy. Once we’re finished with all the TTS-related work, I’ll give you an extra section in this tutorial that will make the demo application much more alive and interactive. We’ll work with some text properties, and eventually we’ll make the app capable of highlighting the currently spoken word by changing its foreground (text) color from black to orange. Even though what we’ll do there is not part of the TTS process, I think it’s a pretty interesting topic to work with. It’s up to you to decide whether to read it or not, as it’s extra content.

The following screenshots illustrate the two view controllers of the app, and an instance of it during the speech where the currently spoken word is highlighted. Take a first taste, and we will begin the real work.

text to speech demo

Speaking Made Easy

So, now that you’re aware of what today’s project is about and you’ve downloaded the starter project, let’s begin implementing. This time we’ll go straight to the point, as we’re about to make the currently empty demo application capable of speaking without any delay. As I said in the beginning, the main role when dealing with text-to-speech features belongs to the AVSpeechSynthesizer class. This is the class that performs the actual speech, but it doesn’t work entirely on its own. It’s always used in conjunction with other classes, as we’ll see next, because various configurations regarding the speech output don’t apply directly to it.

First things first, and begin by opening the ViewController.swift file in Xcode. There, we need to declare and initialize a class variable (a constant actually) of the aforementioned class. So, go to the top of the class where all the outlet properties are already declared, and somewhere there add the following line. It’s just one, but a really important one:

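Something along these lines, using the speechSynthesizer name that the rest of this text refers to (AVFoundation must be imported at the top of the file):

let speechSynthesizer = AVSpeechSynthesizer()
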
That simple. Now, in order for the speechSynthesizer to “speak”, it needs to be provided with some text. However, we don’t just set a string value representing the text directly on that object as a property. Instead, we must make use of the AVSpeechUtterance class I mentioned in the introduction. Besides accepting the text that should be spoken, it also contains properties that affect the final result. We’ll look at them in just a while. So, putting it all together, we’re going to create an object of this class to hold the text we want spoken, and then perform the actual speech using the synthesizer we declared right above.

An object of the AVSpeechUtterance class doesn’t have to be a class property like the previous one; it can be declared locally, and that’s what we’ll do. Among the few things already implemented in the starter project, there’s an action method called speak(_:). This method is defined in the class and connected to the proper button in Interface Builder. Navigate there, as it’s time to make the app speak for the first time by adding the next couple of lines:

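A sketch of those lines; tvEditor is an assumed name for the textview outlet, so use whatever the starter project calls it:

@IBAction func speak(sender: AnyObject) {
    let speechUtterance = AVSpeechUtterance(string: tvEditor.text)
    speechSynthesizer.speakUtterance(speechUtterance)
}
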
Obviously, upon initializing the speechUtterance object we specify the text that should be spoken. In this case we want our app to say everything in the textview, so we pass it the textview’s whole text. Next, we simply call the speakUtterance(_:) method of the speech synthesizer object, telling the synthesizer to speak using the text we set in the utterance variable. As you’ll see next, this call also makes the synthesizer apply any settings or effects to the speech.

Now you can go ahead and run the application. Whatever you have written in the textview will be spoken out loud by your device! Cool!

The voice you’ll hear during the speech matches the current locale settings of your device. If you change that, another voice matching the new locale will be used to speak. You can verify this by modifying your current language settings, if you’re curious enough. However, if you can sit back and wait for a while, you won’t really have to do that; we’ll add a feature later so we can pick a voice right inside our demo app!

Now, let’s play around a bit with three properties regarding the actual speech: the rate, the pitchMultiplier and the volume. All three properties have default values and belong to the AVSpeechUtterance class. They also have maximum and minimum values; here are the ranges they can take:

  1. rate: Its minimum value is specified by the AVSpeechUtteranceMinimumSpeechRate constant (0.0), and the maximum by AVSpeechUtteranceMaximumSpeechRate (1.0). Its default value is defined by the AVSpeechUtteranceDefaultSpeechRate constant.
  2. pitchMultiplier: The pitch multiplier acceptable values are between 0.5 and 2.0, where 1.0 is the default value.
  3. volume: It ranges from 0.0 to 1.0, and by default it’s set to 1.0.

Time to use all three properties to see how they can affect our spoken text. In the speak(_:) action method once again, add the next lines, but make sure to call the speakUtterance(_:) method as the last command. Note that you can set any value you desire in the following properties (within range of course); the following are just random values:

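For instance (any values within the allowed ranges will do), placed before the speakUtterance(_:) call:

speechUtterance.rate = 0.25
speechUtterance.pitchMultiplier = 0.25
speechUtterance.volume = 0.75

speechSynthesizer.speakUtterance(speechUtterance)
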
Run the app once again now, and listen to the new way the text is spoken. It’s really great that the default speech is modified according to our preferences; to achieve a perfect result in a real app, you’ll need to play with the above values or give users the option to change them.

To quickly sum up at this point, we’ve written no more than five lines, yet we’ve managed to implement the basic feature of the application. Now that we’ve had a first taste, we’ll keep adding new stuff and transforming the app into a more complex and advanced one.

Handling Rate, Pitch and Volume

As you can tell by running the application, applying various values to the rate, pitch and volume can produce interesting speech results. However, it’s quite annoying to terminate the app and re-run it every time you want to change any of them. It would be much handier to be able to do so in the app, and if you remember, that’s one of our goals here. Before we implement that functionality, though, we must find a more general way to work with these three values, and of course, we must figure out how we will save them permanently.

Let’s get started by declaring three new class variables. Each one of them will match to a value (rate, pitch, volume), and we’ll assign them to the speech utterance object properties, instead of using plain numbers. Let’s see:

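Assuming the property names used throughout the rest of this text:

var rate: Float!
var pitch: Float!
var volume: Float!
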
Then, go back again in the speak(_:) method, and replace any number values with the above variables, similarly to what is shown next:

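Roughly like this:

speechUtterance.rate = rate
speechUtterance.pitchMultiplier = pitch
speechUtterance.volume = volume
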
Nice, this is a more general way, and the speech will be produced according to the values of the above variables.

There are several approaches to permanently storing the above variables. The easiest one, and perfectly suitable for the demo application we’re implementing, is to use the user defaults (NSUserDefaults). Beyond simply saving and loading from the user defaults dictionary, we’ll also register default values, meaning we’ll set predefined values for the above three variables on the first app run. Even though that sounds a bit unnecessary, you’ll understand later that it’s required, because we’re going to load the existing stored values during the settings implementation. Let’s not get ahead of ourselves though; we’ll do everything in turn.

Let’s implement a couple of custom methods now. In the first one, we’ll register the user defaults as shown next:

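A sketch of that method; the keys "rate", "pitch" and "volume" are my own choice here, so just keep them consistent everywhere the defaults are read or written:

func registerDefaultSettings() {
    rate = AVSpeechUtteranceDefaultSpeechRate
    pitch = 1.0
    volume = 1.0

    let defaultSpeechSettings: [String: AnyObject] = [
        "rate": NSNumber(float: rate),
        "pitch": NSNumber(float: pitch),
        "volume": NSNumber(float: volume)
    ]

    NSUserDefaults.standardUserDefaults().registerDefaults(defaultSpeechSettings)
}
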
In the above segment we set an initial value for each variable, and then we add them all to a dictionary. We register this dictionary with the user defaults by calling the registerDefaults(_:) method, and with that we have data available from the app’s very first launch.

In the second method we will load the data from the user defaults dictionary, and we’ll assign it to the proper variables. It’s easy enough:

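A sketch, using the same keys as in the registration method:

func loadSettings() -> Bool {
    let userDefaults = NSUserDefaults.standardUserDefaults()

    if let theRate = userDefaults.valueForKey("rate") as? Float {
        rate = theRate
        pitch = userDefaults.valueForKey("pitch") as! Float
        volume = userDefaults.valueForKey("volume") as! Float
        return true
    }

    return false
}
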
Two things to mention here: First, notice that the method returns a Bool value; true when the data has been loaded and assigned to the variables successfully, and false when there’s no data at all. Second, we check whether there’s stored data in the user defaults simply by using an if let statement and asking for the rate value. If it exists, we assume they all exist (we’ll stick to that logic here; however, in a real app you’d be much safer checking each value separately).

Now, let’s use the above methods, as simply defining them is not good enough. In the viewDidLoad method, we will try to load the data from the user defaults. If there’s no data, then we’ll register the default values:

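Inside viewDidLoad, something like:

if !loadSettings() {
    registerDefaultSettings()
}
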
Note here that we are not going to create any method at all to save altered values to the user defaults. This is something that will take place later in the settings view controller, where after having changed any value graphically we’ll store it so it gets reflected during the speech.

Pausing and Stopping the Speech

If you took a look at the storyboard of the starter project you downloaded, you would have noticed that besides the Speak button there are two more: Pause and Stop. As you correctly assume, these two buttons will be used to pause and stop the speech process respectively; a talking app would be pretty useless if it could only speak but never stop.

The pause and stop functionalities are not the same, as they have one important difference. When pausing, the speech can continue from the point it was left off. On the other hand, when stopping, the speech does not continue from the point it was stopped, but it starts over again.

In the ViewController.swift file you can find two other action methods, named pauseSpeech(_:) and stopSpeech(_:) respectively. Let’s go to add the missing code, starting from the pause method:

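A sketch of the pause action:

@IBAction func pauseSpeech(sender: AnyObject) {
    speechSynthesizer.pauseSpeakingAtBoundary(AVSpeechBoundary.Word)
}
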
Pausing the speech is literally a matter of a single line. By calling the pauseSpeakingAtBoundary(_:) method of the synthesizer object, we force the speech to pause. There’s one important element here, and that is the parameter value. This method needs to know whether it should pause immediately and later continue from the exact point where it left off, or pause after it has finished speaking the current word and continue from the next word later. This information is described by an enum named AVSpeechBoundary. This enum has two values:

  1. AVSpeechBoundaryImmediate: Using this, the pause happens instantly.
  2. AVSpeechBoundaryWord: With this, the pause happens once the current word has been spoken.

In the above code, I chose to set the second value, but you can change that if you want.

Similarly to the above implementation, the stop method is the following:

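And a sketch of the stop action:

@IBAction func stopSpeech(sender: AnyObject) {
    speechSynthesizer.stopSpeakingAtBoundary(AVSpeechBoundary.Immediate)
}
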
This time we make use of another method of the synthesizer object, named stopSpeakingAtBoundary(_:). This method makes the speech process stop, and the argument it accepts is the same as before: a speech boundary value. In contrast to the previous method, here I chose to stop the speech process immediately.

Go now and run the app. Tap on the Speak button to let it start speaking, and then try both the Pause and Stop buttons.

You don’t see the buttons? Of course not! That’s to be expected, as we haven’t done anything yet to make them visible. The only button shown there is the Speak button.

Let’s fix that by implementing another custom method. In it, we’ll apply an easy animation for showing and hiding the buttons. I think it’s pointless to say that when the Pause and Stop buttons are visible, the Speak button must be hidden. Just before we see this method, let me underline that these two buttons are not actually hidden; their alpha value has been set to zero.

In the next code segment I give you the implementation of this new method, and we’ll discuss it right next:

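A sketch of the method; the outlet names btnSpeak, btnPause and btnStop are placeholders for whatever the starter project calls its three buttons:

func animateActionButtonAppearance(shouldHideSpeakButton: Bool) {
    var speakButtonAlphaValue: CGFloat = 1.0
    var pauseStopButtonsAlphaValue: CGFloat = 0.0

    if shouldHideSpeakButton {
        speakButtonAlphaValue = 0.0
        pauseStopButtonsAlphaValue = 1.0
    }

    UIView.animateWithDuration(0.25, animations: { () -> Void in
        self.btnSpeak.alpha = speakButtonAlphaValue
        self.btnPause.alpha = pauseStopButtonsAlphaValue
        self.btnStop.alpha = pauseStopButtonsAlphaValue
    })
}
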
As you see in the first line, the method accepts a Bool argument telling it whether the Speak button should be hidden or not. When it’s true, we’ll hide the Speak button and make the Pause and Stop buttons visible; when it’s false, we’ll show the Speak button and hide the other two. In the method body we begin by specifying two float (CGFloat) variables representing the desired alpha values of the buttons. Note that if the Speak button must be hidden, we change the values of these variables. Then, using a simple UIView animation block, we assign the final alpha value to each button. Note that the pauseStopButtonsAlphaValue variable describes the alpha state for both the Pause and Stop buttons.

The above method will result in a fade-in and fade-out effect. If you want, you can change the animation duration (0.25 seconds) to any value you like. Now that it’s ready, we can go and call it. That must happen in all of the buttons’ action methods, so let’s begin by updating the most recent ones, the two we saw at the beginning of this part. Below you can see them with the call to the above method added:

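Sketched like this:

@IBAction func pauseSpeech(sender: AnyObject) {
    speechSynthesizer.pauseSpeakingAtBoundary(AVSpeechBoundary.Word)
    animateActionButtonAppearance(false)
}

@IBAction func stopSpeech(sender: AnyObject) {
    speechSynthesizer.stopSpeakingAtBoundary(AVSpeechBoundary.Immediate)
    animateActionButtonAppearance(false)
}
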
In both of them we call the animateActionButtonAppearance(_:) method passing the false argument.

In the speak(_:) method things are a bit more complex, as besides simply calling the above method, we must also check whether the synthesizer is currently speaking or not. That’s really important, because if it was speaking and got paused, we must tell the synthesizer to simply continue when the Speak button is tapped again. If it’s not speaking, then we must tell it to start doing so. In other words, we must tell the synthesizer how to proceed after either the Pause or the Stop button has been tapped.

In the following code snippet you can see all the needed changes:

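A sketch of the updated speak(_:) method (tvEditor is still the assumed textview outlet):

@IBAction func speak(sender: AnyObject) {
    if !speechSynthesizer.speaking {
        let speechUtterance = AVSpeechUtterance(string: tvEditor.text)
        speechUtterance.rate = rate
        speechUtterance.pitchMultiplier = pitch
        speechUtterance.volume = volume

        speechSynthesizer.speakUtterance(speechUtterance)
    }
    else {
        speechSynthesizer.continueSpeaking()
    }

    animateActionButtonAppearance(true)
}
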
First we check whether the synthesizer is not speaking, using the speaking property of the object. If that’s the case, then we do what we saw earlier. However, if it is speaking (meaning it was speaking before the Pause button was tapped), we call the continueSpeaking() method to let the synthesizer continue right from the point where it was paused. Lastly, we hide the Speak button and show the other two with animation, and this time we pass true as the argument, denoting that we want the Speak button to be hidden.

Now you can try to run the app again. You will see the Pause and Stop buttons appearing, and you can use them to pause and stop the speech. That’s nice, and with little effort, uh?

Speaking Multiple Utterances

At this point our synthesizer can speak the text we provide it with pretty well, and that text is the content of the textview. However, a synthesizer is capable of more than that; specifically, it can queue and speak multiple texts instead of just one.

Queueing multiple texts so they’re spoken by the synthesizer is not something we need to do manually. Each piece of text is provided to the synthesizer the way we’ve already seen, and it’s automatically queued. The only thing we have to take care of is giving the synthesizer the texts in the proper order.

In this example we’ll “feed” our synthesizer with multiple texts as follows: we’ll break the text of the textview into pieces based on the line break character (\n). Each resulting piece of text (let’s call it a paragraph, because that’s what it actually is) will be handed to the synthesizer in order, so it gets spoken. Besides that, we’ll meet a new property here that we’ll use to set a small delay between subsequent utterances.

In the speak(_:) action method, go in the body of the if statement, and just comment-out or delete its contents:

In the empty if now add the code shown in the next snippet:

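Roughly like this:

let textParagraphs = tvEditor.text.componentsSeparatedByString("\n")

for pieceOfText in textParagraphs {
    let speechUtterance = AVSpeechUtterance(string: pieceOfText)
    speechUtterance.rate = rate
    speechUtterance.pitchMultiplier = pitch
    speechUtterance.volume = volume
    speechUtterance.postUtteranceDelay = 0.005

    speechSynthesizer.speakUtterance(speechUtterance)
}
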
Let’s see what happens here: First, we break the text into paragraphs, splitting it on the line break character. The componentsSeparatedByString(_:) method returns an array of substrings, with the line break characters removed. Next, in a loop we access each piece of text (each paragraph) one by one. We first initialize the utterance object using the current paragraph, and we set all of the properties we desire. At the end, we ask the synthesizer to speak the utterance, and with that each text is queued to be spoken in the correct order.

Note in the above snippet the postUtteranceDelay property. This is used to add a delay after the current text has been spoken, and in our case we set it to a really small value. You can add a delay before the new text as well, you just need to access the preUtteranceDelay property (we don’t use it here). Using either of these aforementioned properties is something we want here, because we need our synthesizer to “take a breath” before it continues to the next paragraph. Also, we want it to speak in a more natural way (make bigger pauses between paragraphs).

Everything else in the above method remains the same. You can try the app again now. This time make sure that you “break” your text in the textview by tapping the Return key. You might think that the synthesizer says everything as one text, but the truth is that this time it uses multiple utterances.

Tracking Speech Progress

Text-to-speech solutions can be useful in various kinds of applications, and there may be times when you’ll need to keep track of the speech progress. Once you have this information, you can use it in many ways: internally in your app, to display the current progress graphically to the user, and more. However, getting the speech progress from the synthesizer can be a bit of a tricky job, so here I’m going to show you how to do it. I believe what I’ll show you will be good enough in most cases, but it’s definitely not the only way to track speech progress. I’ll show you a solution, and then you can build on it to implement your own. Unfortunately, the speech synthesizer doesn’t provide a simple, straightforward way to give us that information.

In the storyboard file of the project and in the View Controller scene specifically, you will find a progress bar lying right under the action buttons we used earlier. Our goal is to reflect the speech progress on it, so we have an idea how long the speech process has gone.

As I said, there’s no direct way to get the tracking information we want. However, the delegate methods of the synthesizer class can help us a lot towards that goal. As you’ll see next, we’re going to adopt the AVSpeechSynthesizerDelegate protocol, and then rely on three delegate methods to calculate the progress. Things are going to become a bit complex here, so I’ll try to be as detailed as possible.

First, let’s adopt the protocol I just mentioned, and tell the synthesizer object that our class is its delegate. In the ViewController.swift file, go to the class declaration line and add the protocol’s name:

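The class line simply gets the protocol appended, something like:

class ViewController: UIViewController, AVSpeechSynthesizerDelegate {
    // ... existing implementation ...
}
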
In the viewDidLoad method:

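speechSynthesizer.delegate = self
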
Now, let’s define, without adding any code yet, the following three methods:

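These are the three AVSpeechSynthesizerDelegate methods we’ll rely on, empty for now:

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, didStartSpeechUtterance utterance: AVSpeechUtterance) {

}

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, didFinishSpeechUtterance utterance: AVSpeechUtterance) {

}

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {

}
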
Before we proceed, it’s important to explain the logic we’re about to follow, so we’re all on the same page. I’ll begin with the last delegate method above, which, as you can see, has a parameter named characterRange. This value is really important, because it “tells” us the range of the currently spoken word within the utterance the synthesizer is speaking. To make that clear, imagine that the spoken text is the following line:

My name is Gabriel

… and that currently the synthesizer says the “name” word. The range in this case is: (3, 4), where 3 is the location of the range, and 4 is its length.

Note: I advise you to read a bit about the NSRange structure on the web, so you understand not only this, but also what we’ll do next. In short, an NSRange value specifies a text range using two values: the location, which gives the position of the first character within the whole text, and the length, which tells how long the range is.

So, returning to our work again, you can guess that if we know the range of the currently spoken word and also know the total length of the text (which we do, through the utterance parameter), we can calculate the progress as a percentage. Great, but that only works if we have a single utterance. What happens if there’s more than one, and the synthesizer must produce speech for multiple utterances?

The solution to that problem builds on the previous thoughts about the range parameter: we’ll calculate the progress made in the whole text (all pieces of text, not just the current one) by combining the current range with the length of the texts that have already been spoken. Of course, that means we must keep track of various data: the utterance currently being spoken, the total number of utterances, the total text length, and more.

Let’s make everything clear and specific now. Go to the top of the ViewController class, and declare the following properties:

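Using the names described right below:

var totalUtterances: Int = 0
var currentUtterance: Int = 0
var totalTextLength: Int = 0
var spokenTextLengths: Int = 0
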
Here’s what they are for:

  1. totalUtterances: In it we will store the total utterances (the total different texts queued) that the synthesizer will produce speech for.
  2. currentUtterance: The number of the utterance that’s currently being spoken (ex. 2 out of 10).
  3. totalTextLength: It will keep the length of the whole text (all text pieces). We’ll compare the range of the words against this value later.
  4. spokenTextLengths: In this variable we’ll store the length of the texts that have been already spoken.

The above variables must get their initial values, and the proper place to do that is the speak(_:) action method. Go in it, make sure you are in the if statement body, and then add the lines noted below:

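The additions are marked with comments in this sketch; everything else stays as it was:

let textParagraphs = tvEditor.text.componentsSeparatedByString("\n")

// New: reset the tracking variables before queueing the utterances.
totalUtterances = textParagraphs.count
currentUtterance = 0
totalTextLength = 0
spokenTextLengths = 0

for pieceOfText in textParagraphs {
    let speechUtterance = AVSpeechUtterance(string: pieceOfText)
    speechUtterance.rate = rate
    speechUtterance.pitchMultiplier = pitch
    speechUtterance.volume = volume
    speechUtterance.postUtteranceDelay = 0.005

    // New: add up the length of every queued piece of text.
    totalTextLength = totalTextLength + (pieceOfText as NSString).length

    speechSynthesizer.speakUtterance(speechUtterance)
}
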
In the first addition above, we store the total number of utterances in the totalUtterances variable; as you can see, it’s equal to the total number of text pieces. We set the remaining variables to zero as their initial value. Next, in the for loop, we add up the total length of all the texts as we queue them to the synthesizer.

Now, let’s take advantage of the delegate methods to do some calculations. First, when the synthesizer starts speaking an utterance, we must increase the currentUtterance value by one. That way we’ll always know the number of the currently spoken text (out of the total). Here it is:

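func speechSynthesizer(synthesizer: AVSpeechSynthesizer, didStartSpeechUtterance utterance: AVSpeechUtterance) {
    currentUtterance = currentUtterance + 1
}
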
Next, we must take some action when the speech of a text finishes. Let me show you first what I mean exactly, and then it will be easier to discuss:

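A sketch of it, assuming the progress view outlet is named pvSpeechProgress as mentioned right below:

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, didFinishSpeechUtterance utterance: AVSpeechUtterance) {
    // Total length of all texts spoken so far (+1 accounts for the line break character).
    spokenTextLengths = spokenTextLengths + (utterance.speechString as NSString).length + 1

    // Complementary progress update after each finished utterance.
    let progress = Float(spokenTextLengths * 100 / totalTextLength)
    pvSpeechProgress.progress = progress / 100.0

    // If that was the last utterance, bring the Speak button back.
    if currentUtterance == totalUtterances {
        animateActionButtonAppearance(false)
    }
}
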
Initially, we calculate the total length of all the texts spoken up to now. Note the +1 at the end of the line: if we don’t add it, the total progress calculation will be wrong, because the total length of the original text includes the line break characters, which are not processed during speech.

Next, we calculate the progress after a piece of text (an utterance) has been spoken. This will make the whole progress calculation and representation smoother. Note that this is not the main spot where the progress is calculated; instead, it works as a complementary calculation. As you see, we assign that progress to the pvSpeechProgress progress view.

Lastly, and regardless of the progress tracking functionality, we make sure to hide the Pause and Stop buttons once the utterance that was just spoken is the last one.

Now, the last delegate method where we’ll make use of the current range. This is the place where the most of the progress will be calculated, and here’s how it should be done:

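Sketched like this:

func speechSynthesizer(synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
    let progress = Float((spokenTextLengths + characterRange.location) * 100 / totalTextLength)
    pvSpeechProgress.progress = progress / 100.0
}
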
Note that we add the total spoken text lengths to the current range location, and use that value against the total text length to find the progress percentage. Then we set it to the progress view (note that the progress view’s progress must be between 0.0 and 1.0, that’s why we divide by 100).

You may run the app now, but if you do so you won’t see the progress view. That’s because its alpha value is initially set to zero. To fix this, simply go to the custom method where we animated the buttons’ alpha values, and add the progress view to the animation block as shown next:

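That is, one extra line in the animation block:

UIView.animateWithDuration(0.25, animations: { () -> Void in
    self.btnSpeak.alpha = speakButtonAlphaValue
    self.btnPause.alpha = pauseStopButtonsAlphaValue
    self.btnStop.alpha = pauseStopButtonsAlphaValue
    self.pvSpeechProgress.alpha = pauseStopButtonsAlphaValue
})
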
Feel free now to run the app. Let it speak, and watch the progress view growing as the speech moves forward.

Working in Settings View Controller

The SettingsViewController is the place where all the important utterance-related values are going to be modified in a graphical way. The next screenshot displays the final outcome of our work in this view controller:

As you can see in the above image, we’re going to change the values of the speech rate, pitch and volume using sliders. Any modification to them will be reflected in the speech after the Save button has been tapped. But that’s not all. There’s also going to be a picker view listing all the available speech voices, so the current voice can be changed by picking another one.

We’ll implement all of the above and we’ll see the details as we move along. The first and quite important task we have to do, is to load the current rate, pitch and volume values from the user defaults, and set them to the proper controls in the Settings view controller. Open the SettingsViewController.swift file, and at the top of the class declare the following variables:

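Something like:

let speechSettings: NSDictionary = NSUserDefaults.standardUserDefaults().dictionaryRepresentation()

var rate: Float!
var pitch: Float!
var volume: Float!
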
The speechSettings constant is declared and initialized at once. It’s an NSDictionary containing all the values currently stored in the user defaults dictionary. We also declare three float properties for the rate, pitch and volume. The next step is to initialize them in viewDidLoad:

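Inside viewDidLoad, using the same keys as before:

rate = speechSettings.objectForKey("rate") as! Float
pitch = speechSettings.objectForKey("pitch") as! Float
volume = speechSettings.objectForKey("volume") as! Float
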
Note that here I skip checking whether the values actually exist in the user defaults; I assume they’re there and nothing is nil. In a real application, however, you should really avoid what I do here and do all the proper checking.

If you examine the Settings View Controller scene in the storyboard file, you’ll notice that there are two prototype cells, but for now we’re only interested in the first one:

Notice that there are two labels there, right above the slider view: The left one will be used to display the name of the current setting (such as “Rate”), and the right one will be used to display the actual value. Also, the slider’s value will be properly set. These three subviews (two labels and the slider) have tag values so they can be recognized in code. These tag values are:

  1. Left label: Tag value = 10
  2. Right label: Tag value = 20
  3. Slider: Tag value = 30

With that in mind, let’s return to the SettingsViewController.swift file. In it you’ll find the most important and necessary tableview methods already defined, but there’s no logic implemented yet. So, let’s start adding some logic by specifying the total number of rows that will exist in the tableview:

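A sketch (adjust the signature to match how the starter project declares its tableview methods):

func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 3
}
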
Later we’ll change the above return value to 4, so we include the voices prototype cell. For now, we’re just good.

The rest of our work will take place in the tableView(_:cellForRowAtIndexPath:) method. The first step here is to make sure the current row corresponds to the rate, pitch or volume cell, to dequeue the prototype cell, and to “extract” the above controls from the cell. Let’s see what I mean up to this point:

As you can see in the snippet above, we check the current row number to make sure the cell corresponds to one of the three values of interest. Next, we dequeue the cell, and here I should make an important note: we use the dequeueReusableCellWithIdentifier(_:forIndexPath:) tableview method (notice the index path parameter) instead of dequeueReusableCellWithIdentifier(_:), because in the second case the cell’s subviews come back as nil.

Lastly, we get the two labels and the slider using their tag values.

Next, depending on the current index path row we are going to display the rate, pitch and volume values to the labels:

Two points to note here: First, there’s a local float variable named value declared right before the switch statement. As you can see in the code following the declaration, we store the actual value of each speech property in that variable; if you’re wondering why, it’s because we’ll use it to set the sliders’ values, as you’ll see pretty soon. The second point I’d like to highlight is how we assign each value to the valueLabel label, properly formatted using the NSString class. With the syntax above, the three values are displayed with two decimal places.

Now we must setup the sliders. In each case of the switch statement we’ll do two things at the same time: We’ll set the maximum and minimum values of each slider, and we’ll connect them with an action method that will be called every time their value gets changed. Here is that addition:

Lastly, we must set the real values to the sliders. We won’t do that in each case above, as that’s something common for all of them. What we’ll only do is to go after the switch statement and add the next couple of lines:

The condition above is super-important, because if we omit it the sliders are never going to work as expected. In the next part we’re going to implement their behavior and change the respective values, and without that condition the sliders will never follow our finger towards the direction we order them to move.

Changing Rate, Pitch and Volume Values

In the previous part we specified the handleSliderValueChanged(_:) as the action method that should be called when the sliders change value. As you noticed, we didn’t implement this method, but here’s the place that we’ll do that. Here’s its definition:

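An empty definition for now:

func handleSliderValueChanged(sender: UISlider) {

}
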
Now, depending on which slider changed value, we must update the proper property (rate, pitch or volume) appropriately. But how can we determine which slider it actually is?

Well, there are two solutions to that. The first one is to get the slider’s superview, which is the cell containing it. Then we can determine the cell’s index path using the tableview and finally decide which property the slider corresponds to.

The second solution is quite different, and it relies on subclassing the UISlider class. To make that clear, we can subclass UISlider and add a new property to that subclass. That property will be an identifier value, which we’ll use to distinguish our three sliders. Of course, we’ll also have to change the class of each slider from UISlider to the new one. Generally, subclasses can be really powerful, as they allow you to add properties and methods that don’t exist in the parent class. If you don’t use subclasses, it might be a nice idea to start doing so when it’s needed.

Let’s create our new subclass. Hit the Cmd + N in Xcode, and create a new Cocoa Touch Class file. Make it subclass of the UISlider class, and name it CustomSlider. Once you create it, open it so we can edit it.

As I said before, we’re going to declare just one property here, named sliderIdentifier. It’s going to be an Int value. Besides simply declaring it, we must add a couple of init methods, as they make up the minimum implementation required when subclassing:

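A sketch of the whole subclass:

import UIKit

class CustomSlider: UISlider {

    var sliderIdentifier: Int!

    override init(frame: CGRect) {
        super.init(frame: frame)
        sliderIdentifier = 0
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        sliderIdentifier = 0
    }
}
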
The second init method is required and can’t be omitted. As you see, the extremely simple implementation above will give the solution to our problem.

Open now the Main.storyboard file and go straight ahead to the Settings View Controller scene. There, select the slider in the prototype cell and open the Identity Inspector in the Utilities pane. In the Custom Class section, set the CustomSlider value in the Class field.


Now, let’s return to the SettingsViewController.swift file. Go to the definition of the above action method, and change its parameter type as shown next:

Before we implement it, let’s make an addition to the tableView(_:cellForRowAtIndexPath:) method. In here, we must set a value for the sliderIdentifier property of each slider, and also use the proper class for the slider when downcasting:

Now it’s easy to implement the action method; we’ll rely on the above identifier values:

Notice that we must reload the tableview once we change a value.

If you run the application now and go to the settings view controller, you’ll see the rate, pitch and volume values change when you move the sliders.

Applying Settings

All the work we did in the Settings view controller will be totally meaningless if we don’t save the changed values so they can be used in the ViewController class. So, here we’ll save everything in two easy steps.

The first step is to save all properties managed in the Settings (modified and not) to the user defaults. This will take place in the saveSettings(_:) action method which can be found in the SettingsViewController.swift file. This action method is called when the save bar button item gets tapped. Let’s see that:

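A sketch of it, again using the "rate", "pitch" and "volume" keys assumed earlier:

@IBAction func saveSettings(sender: AnyObject) {
    let userDefaults = NSUserDefaults.standardUserDefaults()
    userDefaults.setFloat(rate, forKey: "rate")
    userDefaults.setFloat(pitch, forKey: "pitch")
    userDefaults.setFloat(volume, forKey: "volume")
    userDefaults.synchronize()
}
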
Everything is easy in the above implementation. Note that we call the synchronize() method to update the user defaults with the new values.

The second step is to notify the ViewController class that the above values have changed, so the speech synthesizer uses them the next time it speaks some text. We’ll achieve this by using the delegation pattern.

In the SettingsViewController.swift file, go at the top of it right before the class definition. There, add the following protocol:

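Using the didSaveSettings() method name referenced later on:

protocol SettingsViewControllerDelegate {
    func didSaveSettings()
}
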
Next, in the SettingsViewController class declare a delegate object:

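var delegate: SettingsViewControllerDelegate!
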
Back in the saveSettings(_:) action method now, let’s add a couple of missing lines. As you’ll see in the following segment, we first make a call to the delegate method we just declared, and then we pop the settings view controller from the navigation stack so we return back to the first view controller:

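The two added lines, at the end of saveSettings(_:):

delegate.didSaveSettings()
navigationController?.popViewControllerAnimated(true)
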
Okay, now half the job has been done. The above call to the delegate method will have no effect at all if we don’t make the ViewController class conform to SettingsViewControllerDelegate and then implement that delegate method there.

Switch now to the ViewController.swift file. First of all, adopt our custom protocol:

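The class line becomes something like:

class ViewController: UIViewController, AVSpeechSynthesizerDelegate, SettingsViewControllerDelegate {
    // ... existing implementation ...
}
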
Now, there’s an intermediate stage before we implement the delegate method. We must make this class the delegate of the SettingsViewController, for that reason implement the next one:

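A sketch of it, using the segue identifier mentioned right below:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == "idSegueSettings" {
        let settingsViewController = segue.destinationViewController as! SettingsViewController
        settingsViewController.delegate = self
    }
}
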
By overriding the prepareForSegue(_:sender:) method, we can access the view controller that is about to be loaded (in this case the SettingsViewController), and this can be done through the destinationViewController of the segue that will be performed. Note that the identifier value of the segue is “idSegueSettings”, and I set it in the Interface Builder. Remember that this identifier can be any value you want, as long as it’s unique and no other segue with the same identifier exists. Anyway, in the above condition we finally set our class as the delegate for the settings view controller.

Now, we’re able to implement the delegate method. In it, we’ll load the saved values from the user defaults and we’ll set them to the respective properties of our class (rate, pitch, volume). With that, the next time the synthesizer will speak, all utterances will use the new settings. Here’s the implementation:

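Sketched with the same user defaults keys as before:

func didSaveSettings() {
    let userDefaults = NSUserDefaults.standardUserDefaults()

    rate = userDefaults.valueForKey("rate") as! Float
    pitch = userDefaults.valueForKey("pitch") as! Float
    volume = userDefaults.valueForKey("volume") as! Float
}
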
That’s all! Go now and run the app, change the various settings and let the synthesizer speak. Do many tests and see how each setting affects the produced speech.

Selecting a Different Voice

The default voice used to produce speech in iOS matches the current locale of the system (the language settings, etc.). That means that if you currently have the US locale applied, the US voice will speak; if you have the French locale, the French voice will speak. However, it’s possible to change the default voice and have your text spoken with a different voice and a different accent.

Our goal in this part is to make the demo application capable of using another voice, and we’ll achieve this by providing such a setting in the settings view controller. As of this writing, Apple supports 37 different voices. Programmatically speaking, we’re going to use another class named AVSpeechSynthesisVoice, and through it we’ll access the voices for all the supported locales.

There’s a class method called speechVoices() that returns all voices. You can see really easily what the supported locales are, if you simply add the following line to any viewDidLoad method you want:

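For example:

print(AVSpeechSynthesisVoice.speechVoices())
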
The first five results the above line returns are:

As you can guess, we’re going to rely on that method to fetch all the supported locales, make it possible to select one, and eventually allow the synthesizer to speak using another voice. From the returned data (like the output above) we’re going to use the language code. With that code we’ll determine the language name, and we’ll create a dictionary containing both the name and code for each language. Each dictionary will then be added to an array, so we can later access all of those dictionaries and display all the languages in the settings.

Let’s get started, as there are more than a few things to do here. Open the SettingsViewController.swift file, and declare the following array and integer variable at the top of the class:

As I said, we'll use the array to store all language names and codes matching the supported voices. The integer value will be used as an index for the selected voice language.

Now, let's create a custom method where we'll do everything I described earlier. Let's see it first, and then we'll talk about it:
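Here's a sketch of such a method; the method name and the arrVoiceLanguages array are hypothetical names standing in for whatever you declared above:

func prepareVoiceList() {
    // Build one dictionary per supported voice, holding its language code and a readable name.
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        let languageCode = voice.language
        let languageName = NSLocale.currentLocale().displayNameForKey(NSLocaleIdentifier, value: languageCode) ?? languageCode
        arrVoiceLanguages.append(["languageName": languageName, "languageCode": languageCode])
    }
}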

The code is pretty straightforward; the only thing I’d like to highlight is that we use the NSLocale class to get the language name using the code as shown above. Now that this method is implemented, we must call it, and we’ll do so in the viewDidLoad method:

Our next step is to display the data we just prepared. In the Settings view controller scene in the storyboard file, there’s a second prototype cell with a picker view in it. We’ll use this cell to display all voice languages, and that means that we have to deal again with the tableview methods.

First of all, change the number of the returned rows from 3 to 4:

Next, in the tableView(_:cellForRowAtIndexPath:) method add the following else, right before you return the cell:

You see that we make the settings view controller class the delegate and datasource of the picker view, as through the methods of those protocols we’re able to provide data to it and also be aware of the user selection. Before we continue we must fix the errors Xcode is currently showing simply by adopting the following two protocols:

Now let's implement the required picker view delegate and datasource methods. First, we must return the number of components the picker view will have (just one in our case), and how many rows that component will contain:
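A sketch of those two datasource methods (again using the hypothetical arrVoiceLanguages array):

func numberOfComponentsInPickerView(pickerView: UIPickerView) -> Int {
    return 1
}

func pickerView(pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
    return arrVoiceLanguages.count
}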

Now, let’s display the voice languages to the picker view:
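Using the hypothetical dictionary keys from earlier, the delegate method could look like this:

func pickerView(pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
    let voiceLanguage = arrVoiceLanguages[row]
    return voiceLanguage["languageName"]
}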

What we do here is really simple: we get the dictionary stored in the array at the index matching the row specified by the parameter value, and from this dictionary we "extract" and return the language name. That's all.

There's one last method missing, in which we must keep track of the user's selection when they tap on a row:
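A sketch, assuming the integer property declared earlier is named selectedVoiceLanguageIndex:

func pickerView(pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
    // Remember which voice language the user picked.
    selectedVoiceLanguageIndex = row
}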

To sum up, we've managed to get all the languages for the supported voices, display them in the picker view, and make it possible to select a row. But how are we going to let the synthesizer know about the selected voice?

The answer is to once again use the delegation pattern, and this time it's already implemented. We only need to add one line to the saveSettings(_:) method:
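That line could look like this (the "languageCode" key is an assumption):

NSUserDefaults.standardUserDefaults().setObject(arrVoiceLanguages[selectedVoiceLanguageIndex]["languageCode"], forKey: "languageCode")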

Note that this line should be added before the user defaults synchronization. With this single line, we manage to store in the user defaults the language code of the selected voice.

Now, let’s open the ViewController.swift file, where we need to make a couple of new additions. First of all, declare the following variable at the top of the class:

You can probably guess from its name what it's for. Next, head to the didSaveSettings() delegate method and add the following line:
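Assuming the same hypothetical "languageCode" key used when saving the settings:

preferredVoiceLanguageCode = NSUserDefaults.standardUserDefaults().stringForKey("languageCode") ?? ""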

As you see, we store in the preferredVoiceLanguageCode variable the value of the selected voice language code.

Lastly, in the speak(_:) action method we must tell the synthesizer about the selected voice. Actually, we're not going to set it on the synthesizer object; instead, we'll set the voice language code on each utterance object. In the for loop, add the next snippet:
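Assuming the loop variable is named utterance, the snippet is a single line:

// Use the stored language code to pick a matching voice for this utterance.
utterance.voice = AVSpeechSynthesisVoice(language: preferredVoiceLanguageCode)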

We’re ready. Go and test the app once again. As you’ll see, now you can pick another voice for the produced speech in the settings view controller. I’m pretty sure that you’ll find the result really interesting.

Bonus Material: Highlighting the Spoken Text

Normally, this tutorial would come to its end at this point. So, if you want to give the demo application a final try and stop reading, feel free to do so. On the other hand, if you want to see how to make the app more interesting and alive, keep reading, as I thought it would be really nice to have the spoken text highlighted during the speech process. There are cases where this feature is useful, and even though it sounds easy, it requires some amount of work. Note that this task has nothing to do with the text-to-speech process itself; instead we'll use another framework to achieve this goal: Text Kit.

If you're comfortable working with Text Kit (which was first introduced in iOS 7), then you'll be just fine with the rest of this part. If you're not familiar with Text Kit, then here is a link to an older tutorial of mine about it. I won't provide many details about the Text Kit specifics; I'll mostly describe the steps required to highlight the spoken text, so if there's something you don't understand beyond the short explanations I give, search the above link or the web for more information.

So, let's get started. Trust me, this is a part I find pretty interesting, as I enjoy playing with Text Kit, and I hope you will too. The first step, before writing any code, is to convert the plain text of the textview to attributed text. That's important and necessary, because we're going to apply changes to a textview property that is part of Text Kit, named textStorage. The text storage property of the textview is an object of the NSTextStorage class, and it's responsible for storing the textview's text along with its attributes. One such attribute is the font name, including information about whether the text is bold or italic (known as font traits). Another such attribute is the text's foreground color. You can even set custom attributes on an attributed text. I won't go any further into that, but know that we're going to access the text attributes to change the foreground color of the currently spoken word; therefore we need to use the text storage property, and in turn we need to work with attributed text.

With that, open the storyboard file in the Interface Builder. Go to the View Controller scene, and select the textview. Open the Utilities pane, and then the Attributes Inspector. There, click on the Text drop-down control and select Attributed instead of Plain, and you're done:

Now, the next step is to set the initial font of the textview text as an attribute. We'll do that before performing any text highlighting, because we'll need to revert the highlighted text back to its normal state, and of course, to its normal font. So, in the ViewController.swift file, let's write the next custom method:
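Here's a sketch of that method; the tvEditor outlet name and the Arial 18 font are assumptions, so adapt them to your own project:

func setInitialFontAttribute() {
    // The range covering the whole text of the textview.
    let rangeOfWholeText = NSMakeRange(0, (tvEditor.text as NSString).length)

    // Work on a mutable attributed copy of the text so we can add attributes to it.
    let attributedText = NSMutableAttributedString(string: tvEditor.text)
    attributedText.addAttribute(NSFontAttributeName, value: UIFont(name: "Arial", size: 18.0)!, range: rangeOfWholeText)

    // Replace the current contents of the text storage with the attributed version.
    tvEditor.textStorage.beginEditing()
    tvEditor.textStorage.replaceCharactersInRange(rangeOfWholeText, withAttributedString: attributedText)
    tvEditor.textStorage.endEditing()
}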

Here’s what takes place above:

  • First, we specify the range of the whole text. In a previous part I talked about the NSRange structure, but it's worth repeating here that the first parameter is the start of the range, while the second one is its length.
  • Next, we assign the text of the area specified by the range (actually the whole text) to a mutable attributed string, so it's possible to add attributes to it.
  • Using the addAttribute(_:value:range:) method of the attributed string object, we set the font attribute (NSFontAttributeName), specifying the font name and size, and the range this attribute applies to.
  • Lastly, we prepare the text storage for editing. We replace the current text with the attributed one, and we end by making any changes in the text storage valid.

The above method must be called in the viewDidLoad:

Now comes the interesting part: highlighting the words. Our goal here is to change the foreground color of the currently spoken word, making it orange while it's being spoken and turning it back to black after it has been spoken. All of our work will take place in the speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:) delegate method because, if you remember, it's the only place where we can get the range of the currently spoken word. At this point it's necessary to declare a new variable in the class:
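Something like this (the name is an assumption):

var previousSelectedRange: NSRange!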

In this variable we are going to store the range of the previously selected word, so we can make it black again after it has been spoken.

Let's go to the aforementioned method now and start adding some code. I'll present it in pieces, as that makes it easier to understand. There are also comments that will help you along.

So, initially we want to specify the range of the current word, not in the context of the current text, but in the context of all texts (utterances) that are about to be spoken. Once that range has been determined, we will use it to select the appropriate area in the textview:

Next, we must access the current attributes of the selected text, and keep a copy of the font attribute. We’ll need it later to make sure that the font won’t get changed. Besides finding the current attributes, we’ll also assign the selected text to a mutable attributed string, so we can add the foreground color attribute later:

Time to highlight the selected text, and of course to set the original font again. If we don’t do that, it will be replaced by the system font:

There’s a small detail we have to take care of here: If the selected word is out of the visible area of the screen, then it won’t be shown, therefore we should scroll the textview a bit:

The last step in this method is to update the text storage property of the textview, so any changes we made actually apply. During the editing session of the text storage property, we won't just replace the selected word with the new attributed string we set above; we will also check if there was another word highlighted previously that has already been spoken, and we'll change its foreground color back to normal (black). Note that we have already done everything you see below once before:

And one last, but important detail:

At this point our work is pretty much done. There's, however, one minor flaw that you'll only see if you run the application, but I'll tell you exactly what it is in advance: the last highlighted word (the last word in orange) doesn't get un-highlighted, and that must be fixed.

You can probably already imagine the solution, which is none other than doing something similar to the above, but outside that method. Next I give you one more custom method that fixes the problem I mentioned; the inline comments will help you understand each step:
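A sketch of such a method, reusing the assumptions made so far (tvEditor outlet, previousSelectedRange property):

func unselectLastWord() {
    // Nothing to do if no word was ever highlighted.
    guard let selectedRange = previousSelectedRange else { return }

    // Grab the current attributes of the last highlighted word and keep its font.
    let currentAttributes = tvEditor.attributedText.attributesAtIndex(selectedRange.location, effectiveRange: nil)
    let fontAttribute: AnyObject? = currentAttributes[NSFontAttributeName]

    // Rebuild that word with a black foreground color and its original font.
    let attributedWord = NSMutableAttributedString(string: tvEditor.attributedText.attributedSubstringFromRange(selectedRange).string)
    attributedWord.addAttribute(NSForegroundColorAttributeName, value: UIColor.blackColor(), range: NSMakeRange(0, attributedWord.length))
    attributedWord.addAttribute(NSFontAttributeName, value: fontAttribute!, range: NSMakeRange(0, attributedWord.length))

    // Write it back to the text storage and forget the stored range.
    tvEditor.textStorage.beginEditing()
    tvEditor.textStorage.replaceCharactersInRange(selectedRange, withAttributedString: attributedWord)
    tvEditor.textStorage.endEditing()
    previousSelectedRange = nil
}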

Lastly, let’s perform a small update in the speechSynthesizer(_:didFinishSpeechUtterance:) delegate method, where we’ll call the above one:

That’s all. Now, every time the synthesizer speaks, the spoken word will be highlighted with an orange color.


How To Make an iPhone Transmit an iBeacon Emitter And Working With iBeacons

How To Make an iPhone Transmit an iBeacon Emitter

iOS 7.0 introduced not only the ability to detect iBeacons, but also the ability to create iBeacons – for iPhones and iPads to broadcast their own beacon signal that can then be detected by other devices. To make this work, you add these two imports:

import CoreBluetooth
import CoreLocation

Now make your view controller (or other class) conform to the CBPeripheralManagerDelegate protocol, like this:

class ViewController: UIViewController, CBPeripheralManagerDelegate {

To make your beacon work, you need to create three properties: the beacon itself, plus two Bluetooth properties that store configuration and management information. Add these three now:

var localBeacon: CLBeaconRegion!
var beaconPeripheralData: NSDictionary!
var peripheralManager: CBPeripheralManager!

Finally the code: here are three functions you can use to add local beacons to your app. The first one creates the beacon and starts broadcasting, the second one stops the beacon, and the third one acts as an intermediary between your app and the iOS Bluetooth stack:

func initLocalBeacon() {
    if localBeacon != nil {
        stopLocalBeacon()
    }

    let localBeaconUUID = "5A4BCFCE-174E-4BAC-A814-092E77F6B7E5"
    let localBeaconMajor: CLBeaconMajorValue = 123
    let localBeaconMinor: CLBeaconMinorValue = 456

    let uuid = UUID(uuidString: localBeaconUUID)!
    localBeacon = CLBeaconRegion(proximityUUID: uuid, major: localBeaconMajor, minor: localBeaconMinor, identifier: "Your private identifier here")

    beaconPeripheralData = localBeacon.peripheralData(withMeasuredPower: nil)
    peripheralManager = CBPeripheralManager(delegate: self, queue: nil, options: nil)
}

func stopLocalBeacon() {
    peripheralManager.stopAdvertising()
    peripheralManager = nil
    beaconPeripheralData = nil
    localBeacon = nil
}

func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
    if peripheral.state == .poweredOn {
        peripheralManager.startAdvertising(beaconPeripheralData as? [String: Any])
    } else if peripheral.state == .poweredOff {
        peripheralManager.stopAdvertising()
    }
}

func peripheralManagerDidStartAdvertising(_ peripheral: CBPeripheralManager, error: Error?) {
    if let error = error {
        print("error: \(error.localizedDescription)")
    }
}

func peripheralManager(_ peripheral: CBPeripheralManager, willRestoreState dict: [String : Any]) {
    print("dict \(dict)")
}

Working With iBeacons

There are many iBeacon devices available; a quick Google search should help reveal them to you. But when Apple introduced iBeacon, they also announced that any compatible iOS device could act as an iBeacon. The list currently includes the following devices:

  • iPhone 4s or later
  • 3rd generation iPad or later
  • iPad Mini or later
  • 5th generation iPod touch or later

An iBeacon is nothing more than a Bluetooth Low Energy device that advertises information in a specific structure. Those specifics are beyond the scope of this tutorial, but the important thing to understand is that iOS can monitor for iBeacons that emit three values known as: UUID, major and minor.

UUID is an acronym for universally unique identifier, which is a 128-bit value that’s usually shown as a hex string like this: B558CBDA-4472-4211-A350-FF1196FFE8C8. In the context of iBeacons, a UUID is generally used to represent your top-level identity.

Major and minor values provide a little more granularity on top of the UUID. These values are simply 16 bit unsigned integers that identify each individual iBeacon, even ones with the same UUID.

For instance, if you owned multiple department stores you might have all of your iBeacons emit the same UUID, but each store would have its own major value, and each department within that store would have its own minor value. Your app could then respond to an iBeacon located in the shoe department of your Miami, Florida store.

ForgetMeNot Starter Project

Download the starter project here — it contains a simple interface for adding and removing items from a table view. Each item in the table view represents a single iBeacon emitter, which in the real world translates to an item that you don’t want to leave behind.

Build and run the app; you’ll see an empty list, devoid of items. Press the + button at the top right to add a new item as shown in the screenshot below:

To add an item, you simply enter a name for the item and the values corresponding to its iBeacon. You can find your iBeacon’s UUID by reviewing your iBeacon’s documentation – try adding it now, or use some placeholder values, as shown below:

Press Save to return to the list of items; you’ll see your item with a location of Unknown, as shown below:

You can add more items if you wish, or swipe to delete existing ones. UserDefaults persists the items in the list so that they’re available when the user re-launches the app.

On the surface it appears there’s not much going on; most of the fun stuff is under the hood. The unique aspect in this app is the Item model class which represents the items in the list.

Open Item.swift and have a look at it in Xcode. The model class mirrors what the interface requests from the user, and it conforms to NSCoding so that it can be serialized and deserialized to disk for persistence.

Now take a look at AddItemViewController.swift. This is the controller for adding a new item. It’s a simple UIViewController, except that it does some validation on user input to ensure that the user enters valid names and UUIDs.

The Add button at the bottom left becomes tappable as soon as txtName and txtUUID are both valid.

Now that you’re acquainted with the starter project, you can move on to implementing the iBeacon bits into your project!

Core Location Permissions

Your device won’t listen for your iBeacon automatically — you have to tell it to do this first. The CLBeaconRegion class represents an iBeacon; the CL class prefix indicates that it is part of the Core Location framework.

It may seem strange for an iBeacon to be related to Core Location since it’s a Bluetooth device, but consider that iBeacons provide micro-location awareness while GPS provides macro-location awareness. You would leverage the Core Bluetooth framework for iBeacons when programming an iOS device to act as an iBeacon, but when monitoring for iBeacons you only need to work with Core Location.

Your first order of business is to adapt the Item model for CLBeaconRegion.

Open Item.swift and add the following import to the top of the file:

import CoreLocation

Next, change the majorValue and minorValue definitions as well as the initializer as follows:

let majorValue: CLBeaconMajorValue
let minorValue: CLBeaconMinorValue

init(name: String, icon: Int, uuid: UUID, majorValue: Int, minorValue: Int) {
  self.name = name
  self.icon = icon
  self.uuid = uuid
  self.majorValue = CLBeaconMajorValue(majorValue)
  self.minorValue = CLBeaconMinorValue(minorValue)
}

CLBeaconMajorValue and CLBeaconMinorValue are both a typealias for UInt16, and are used for representing major and minor values in the CoreLocation framework.

Although the underlying data type is the same, this improves readability of the model and adds type safety so you don’t mix up major and minor values.

Open ItemsViewController.swift and add the Core Location import to the top of the file:

import CoreLocation

Add the following property to the ItemsViewController class:

let locationManager = CLLocationManager()

You’ll use this CLLocationManager instance as your entry point into Core Location.

Next, replace viewDidLoad() with the following:

override func viewDidLoad() {
  super.viewDidLoad()

  locationManager.requestAlwaysAuthorization()

  loadItems()
}

The call to requestAlwaysAuthorization() will prompt the user for access to location services if they haven’t granted it already. Always and When in Use are variants on location permissions. When the user grants Always authorization to the app, the app can start any of the available location services while it is running in the foreground or background.

The point of this app is to monitor for iBeacon regions at all times, so you’ll need the Always location permissions scope for triggering region events while the app is both in the foreground and background.

iOS requires that you set up a string value in Info.plist that will be displayed to the user when access to their location is required by the app. If you don’t set this up, location services won’t work at all — you don’t even get a warning!

Open Info.plist and add a new entry by clicking on the + that appears when you select the Information Property List row.

Fortunately, the key you need to add is in the pre-defined list shown in the dropdown list of keys — just scroll down to the Privacy section. Select the key Privacy – Location Always Usage Description and make sure the Type is set to String. Then add the phrase you want to show to the user to tell them why you need location services on, for example: “ForgetMeNot needs to know where you are”.

Build and run your app; once running, you should be shown a message asking you to allow the app access to your location:

Select ‘Allow’, and the app will be able to track your iBeacons.

Listening for Your iBeacon

Now that your app has the location permissions it needs, it’s time to find those beacons! Add the following class extension to the bottom of ItemsViewController.swift :

// MARK: - CLLocationManagerDelegate

extension ItemsViewController: CLLocationManagerDelegate {
}

This will declare ItemsViewController as conforming to CLLocationManagerDelegate. You’ll add the delegate methods inside this extension to keep them nicely grouped together.

Next, add the following line inside of viewDidLoad():

locationManager.delegate = self

This sets the CLLocationManager delegate to self so you’ll receive delegate callbacks.

Now that your location manager is set up, you can instruct your app to begin monitoring for specific regions using CLBeaconRegion. When you register a region to be monitored, those regions persist between launches of your application. This will be important later when you respond to the boundary of a region being crossed while your application is not running.

Your iBeacon items in the list are represented by the Item model via the items array property. CLLocationManager, however, expects you to provide a CLBeaconRegion instance in order to begin monitoring a region.

In Item.swift create the following helper method on Item:

func asBeaconRegion() -> CLBeaconRegion {
  return CLBeaconRegion(proximityUUID: uuid,
                                major: majorValue,
                                minor: minorValue,
                           identifier: name)
}

This returns a new CLBeaconRegion instance derived from the current Item.

You can see that the classes are similar in structure to each other, so creating an instance of CLBeaconRegion is very straightforward since it has direct analogs to the UUID, major value, and minor value.

Now you need a method to begin monitoring a given item. Open ItemsViewController.swift and add the following method to ItemsViewController:

func startMonitoringItem(_ item: Item) {
  let beaconRegion = item.asBeaconRegion()
  locationManager.startMonitoring(for: beaconRegion)
  locationManager.startRangingBeacons(in: beaconRegion)
}

This method takes an Item instance and creates a CLBeaconRegion using the method you defined earlier. It then tells the location manager to start monitoring the given region, and to start ranging iBeacons within that region.

Ranging is the process of discovering iBeacons within the given region and determining their distance. An iOS device receiving an iBeacon transmission can approximate the distance from the iBeacon. The distance (between transmitting iBeacon and receiving device) is categorized into 3 distinct ranges:

  • Immediate: Within a few centimeters
  • Near: Within a couple of meters
  • Far: Greater than 10 meters away

Note: The real distances for Far, Near, and Immediate are not specifically documented, but this Stack Overflow Question gives a rough overview of the distances you can expect.

By default, monitoring notifies you when the region is entered or exited regardless of whether your app is running. Ranging, on the other hand, monitors the proximity of the region only while your app is running.

You’ll also need a way to stop monitoring an item’s region after it’s deleted. Add the following method to ItemsViewController:

func stopMonitoringItem(_ item: Item) {
  let beaconRegion = item.asBeaconRegion()
  locationManager.stopMonitoring(for: beaconRegion)
  locationManager.stopRangingBeacons(in: beaconRegion)
}

The above method reverses the effects of startMonitoringItem(_:) and instructs the CLLocationManager to stop monitoring and ranging activities.

Now that you have the start and stop methods, it’s time to put them to use! The natural place to start monitoring is when a user adds a new item to the list.

Have a look at addBeacon(_:) in ItemsViewController.swift. This protocol method is called when the user hits the Add button in AddItemViewController and creates a new Item to monitor. Find the call to persistItems() in that method and add the following line just before it:

startMonitoringItem(item)

That will activate monitoring when the user saves an item. Likewise, when the app launches, the app loads persisted items from UserDefaults, which means you have to start monitoring for them on startup too.

In ItemsViewController.swift, find loadItems() and add the following line inside the for loop at the end:

startMonitoringItem(item)

This will ensure each item is being monitored.

Now you need to take care of removing items from the list. Find tableView(_:commit:forRowAt:) and add the following line inside the if statement:

stopMonitoringItem(items[indexPath.row])

This table view delegate method is called when the user deletes the row. The existing code handles removing it from the model and the view, and the line of code you just added will also stop the monitoring of the item.

At this point you’ve made a lot of progress! Your application now starts and stops listening for specific iBeacons as appropriate.

You can build and run your app at this point; but even though your registered iBeacons might be within range your app has no idea how to react when it finds one…time to fix that!

Acting on Found iBeacons

Now that your location manager is listening for iBeacons, it’s time to react to them by implementing some of the CLLocationManagerDelegate methods.

First and foremost is to add some error handling, since you’re dealing with very specific hardware features of the device and you want to know if the monitoring or ranging fails for any reason.

Add the following two methods to the CLLocationManagerDelegate class extension you defined earlier at the bottom of ItemsViewController.swift:

func locationManager(_ manager: CLLocationManager, monitoringDidFailFor region: CLRegion?, withError error: Error) {
  print("Failed monitoring region: \(error.localizedDescription)")
}
  
func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
  print("Location manager failed: \(error.localizedDescription)")
}

These methods will simply log any received errors as a result of monitoring iBeacons.

If everything goes smoothly in your app you should never see any output from these methods. However, it’s possible that the log messages could provide very valuable information if something isn’t working.

The next step is to display the perceived proximity of your registered iBeacons in real-time. Add the following stubbed-out method to the CLLocationManagerDelegate class extension:

func locationManager(_ manager: CLLocationManager, didRangeBeacons beacons: [CLBeacon], in region: CLBeaconRegion) {
    
  // Find the same beacons in the table.
  var indexPaths = [IndexPath]()
  for beacon in beacons {
    for row in 0..<items.count {
        // TODO: Determine if item is equal to ranged beacon
    }
  }
    
  // Update beacon locations of visible rows.
  if let visibleRows = tableView.indexPathsForVisibleRows {
    let rowsToUpdate = visibleRows.filter { indexPaths.contains($0) }
    for row in rowsToUpdate {
      let cell = tableView.cellForRow(at: row) as! ItemCell
      cell.refreshLocation()
    }
  }
}

This delegate method is called when iBeacons come within range, move out of range, or when the range of an iBeacon changes.

The goal of your app is to use the array of ranged iBeacons supplied by the delegate methods to update the list of items and display their perceived proximity. You’ll start by iterating over the beacons array, and then iterating over items to see if there are matches between in-range iBeacons and the ones in your list. Then the bottom portion updates the location string for visible cells. You’ll come back to the TODO section in just a moment.

Open Item.swift and add the following property to the Item class:

var beacon: CLBeacon?

This property stores the last CLBeacon instance seen for this specific item, which is used to display the proximity information.

Now add the following equality operator at the bottom of the file, outside the class definition:

func ==(item: Item, beacon: CLBeacon) -> Bool {
  return ((beacon.proximityUUID.uuidString == item.uuid.uuidString)
        && (Int(beacon.major) == Int(item.majorValue))
        && (Int(beacon.minor) == Int(item.minorValue)))
}

This equality function compares a CLBeacon instance with an Item instance to see if they are equal — that is, if all of their identifiers match. In this case, a CLBeacon is equal to an Item if the UUID, major, and minor values are all equal.

Now you'll need to complete the ranging delegate method with a call to the above helper method. Open ItemsViewController.swift and return to locationManager(_:didRangeBeacons:in:). Replace the TODO comment in the innermost for loop with the following:

if items[row] == beacon {
  items[row].beacon = beacon
  indexPaths += [IndexPath(row: row, section: 0)]
}

Here, you set the item's beacon property when you find a matching item and iBeacon. Checking that the item and beacon match is easy thanks to your equality operator!

Each CLBeacon instance has a proximity property which is an enum with values of far, near, immediate, and unknown.

Add the following method to Item:

func nameForProximity(_ proximity: CLProximity) -> String {
  switch proximity {
  case .unknown:
    return "Unknown"
  case .immediate:
    return "Immediate"
  case .near:
    return "Near"
  case .far:
    return "Far"
  }
}

This returns a human-readable proximity value from proximity which you’ll use next.

Still in Item, add the following method:

func locationString() -> String {
  guard let beacon = beacon else { return "Location: Unknown" }
  let proximity = nameForProximity(beacon.proximity)
  let accuracy = String(format: "%.2f", beacon.accuracy)
    
  var location = "Location: \(proximity)"
  if beacon.proximity != .unknown {
    location += " (approx. \(accuracy)m)"
  }
    
  return location
}

This generates a nice, neat string describing not only the proximity range of the beacon, but also the approximate distance.

Now it’s time to use that new method to display the perceived proximity of the ranged iBeacon.

Open ItemCell.swift and add the following to just below the lblName.text = item.name line of code:

lblLocation.text = item.locationString()

This displays the location for each cell’s beacon. And to ensure it shows updated info, add the following inside refreshLocation():

lblLocation.text = item?.locationString() ?? ""

refreshLocation() is called each time the locationManager ranges the beacon, which sets the cell’s lblLocation.text property with the perceived proximity value and approximate ‘accuracy’ taken from the CLBeacon.

This latter value may fluctuate due to RF interference even when your device and iBeacon are not moving, so don’t rely on it for a precise location for the beacon.

Now ensure your iBeacon is registered and move your device closer to or further away from the iBeacon. You'll see the label update as you move around, as shown below:

You may find that the perceived proximity and accuracy is drastically affected by the physical location of your iBeacon; if it is placed inside of something like a box or a bag, the signal may be blocked as the iBeacon is a very low-power device and the signal may easily become attenuated.

Keep this in mind when designing your application — and when deciding the best placement for your iBeacon hardware.


Saving and Loading Objects From Disk Memory

Saving and Loading List Items

Even though we built the basic functionality of the demo app in the previous part, we still must add two more important features so it works as expected: we must make it capable of saving the shopping list permanently to disk, and of loading the list from disk (if one exists) when the application is launched.

As our data is stored in a mutable array, we can easily write to and read from the disk using the methods that the NSMutableArray class provides. For doing both of the above, we are going to create two separate functions. Let's get started with the first one, which is responsible for saving the contents of the array to a file named shopping_list:
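A sketch of what such a function could look like, assuming the list lives in a mutable array property with the hypothetical name shoppingList:

func saveShoppingList() {
    // Standard way to get the path to the app's documents directory.
    let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
    let documentsDirectory = paths[0] as NSString
    let filePath = documentsDirectory.stringByAppendingPathComponent("shopping_list")

    // writeToFile(_:atomically:) persists the array's contents to disk.
    shoppingList.writeToFile(filePath, atomically: true)
}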

The first two lines consist of a standard piece of code that returns the path to the documents directory of the app. Once we know that, we form the path to the file, specifying the file name at the same time. Lastly, the key to the above function is the writeToFile(_:atomically:) method of the NSMutableArray class. This is the one that actually saves the array contents to the disk.

By having the above method implemented, we can go ahead and call it where it’s needed. If you think of what we did in the previous part, you can assume that we are going to call it in two places: When a new item is added to the list, and when an existing item is removed from the list.

First, go to the textFieldShouldReturn(textField:) delegate method of the textfield. Right before the return statement, make a call to the above method:

Nice, each new item will now be saved to disk too. Next, let's pay a visit to the removeItemAtIndex(index:) function, where we actually delete an item. As before, let's make a call to the save function, adding it as the last statement:

With all the above in place, we can be sure that any changes to our data will become permanent too.

Now we can proceed to implement the exact opposite functionality: loading the data. First, let's see the definition of the respective function:
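Again as a sketch, with shoppingList and the tblShoppingList tableview outlet being hypothetical names:

func loadShoppingList() {
    let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
    let documentsDirectory = paths[0] as NSString
    let filePath = documentsDirectory.stringByAppendingPathComponent("shopping_list")

    // Only attempt to load if the file actually exists.
    if NSFileManager.defaultManager().fileExistsAtPath(filePath) {
        if let savedList = NSMutableArray(contentsOfFile: filePath) {
            shoppingList = savedList
            tblShoppingList.reloadData()
        }
    }
}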

It's important to highlight the fact that we must always check if a file exists before we load its contents. In iOS, we do that by using the NSFileManager class exactly as shown above. If the file doesn't exist and the condition returns false, nothing happens at all. On the other hand, if the file exists, we initialise the shoppingList array with the contents of the file, and then we reload the tableview so it displays the loaded data.

As a last step, we must call this function. As we want the data to be loaded right after the app has been launched, we’ll do that in the viewDidLoad method:

The two pieces of functionality we added to the application in this part will become really helpful later, when we handle the notification actions without the app running in the foreground.



Creating Interactive Notifications in iOS 8

Creating Notification Actions

Several times up to this point I talked about notification actions, but always generally speaking. Now, it’s time to see what they really are.

An action is an object of the UIMutableUserNotificationAction class. This class is new in iOS 8, and provides various useful properties so an action can be properly configured. These are:

  • identifier: This is a string value that uniquely identifies an action among all the actions in an application. Obviously, you should never define two or more actions with the same identifier. Also, using this property we'll be able to determine which action the user chose when the notification appeared. We'll see that later.
  • title: The title is the text shown on the action button to the user. This can be either a simple or a localized string. Be careful to always set proper titles for your actions, so the user can instantly understand from the one or two words of the title what will happen by selecting it.
  • destructive: This is a bool value. When it is set to true, the respective button in the notification has a red background color. Note that this happens in banner mode only. Usually, actions regarding deletion, removal and anything else critical are marked as destructive, to increase the user's attention.
  • authenticationRequired: This property is also a bool value. When it is true, the user must authenticate to the device before the action is performed. It's extremely useful in cases where the action is critical enough that unauthorized access could damage the application's data.
  • activationMode: This is an enum property, and it defines whether the app should run in the foreground or in the background when the action is performed. The two possible values specifying each mode are: (a) UIUserNotificationActivationModeForeground, and (b) UIUserNotificationActivationModeBackground. In background mode, the app is given just a few seconds to perform the action.

While I was describing this demo application, I said that we’re going to create three distinct actions:

  1. An action to just make the notification go away, without anything else to be performed.
  2. An action to edit the list (actually to add a new item).
  3. An action to delete the entire list.

Let’s see how each one is written in code. For every action, we will set the values to all the aforementioned properties. The first one:
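A sketch of how the first one could be configured; the variable name and title are assumptions, while the identifier and flag values follow the description below:

let justInformAction = UIMutableUserNotificationAction()
justInformAction.identifier = "justInform"
justInformAction.title = "OK, got it"
justInformAction.activationMode = UIUserNotificationActivationMode.Background
justInformAction.destructive = false
justInformAction.authenticationRequired = false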

The identifier of this action is “justInform”. As you see, this action will be performed in the background, and as there's nothing dangerous about it, we set both the destructive and authenticationRequired properties to false.

The next one:
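Again as a sketch (the title and the destructive value are assumptions):

let editListAction = UIMutableUserNotificationAction()
editListAction.identifier = "editList"
editListAction.title = "Edit list"
editListAction.activationMode = UIUserNotificationActivationMode.Foreground
editListAction.destructive = false
editListAction.authenticationRequired = true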

This action requires the app to run in the foreground so we can edit the shopping list. Also, we don't want anybody else to mess with our list, so we set authenticationRequired to true.

And the third and last one:
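And a sketch for the last one (the title is an assumption):

let trashAction = UIMutableUserNotificationAction()
trashAction.identifier = "trashAction"
trashAction.title = "Delete list"
trashAction.activationMode = UIUserNotificationActivationMode.Background
trashAction.destructive = true
trashAction.authenticationRequired = true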

With this action we’ll allow the user to delete the entire shopping list without launching the application in the foreground. However, this is a dangerous action for the data, therefore we tell the app that it’s destructive and that authentication is required to proceed.

By looking at the above actions setup, you understand that configuring them is an easy task.

Now that we have done all the above, let’s add them to the setupNotificationSettings function we started to implement in the previous part:

Action Categories

When the actions of a notification have been defined, they can then be grouped together in categories. Actually, it's the categories that get registered in the settings, not the actions themselves, so you should always create them, unless of course you present a notification without actions. Usually, a category corresponds to a notification, so assuming that all notifications in an application support actions, there should be as many categories as there are notifications.

In this demo application, we are going to create just one notification, so we are going to have one category only. Programmatically speaking now, a category is an object of the UIMutableUserNotificationCategory class, which is also new in iOS 8. This class has just one property and one method. The property is a string identifier that uniquely identifies a category (similarly to actions), and the method is used to group the actions together.

Let's see a few more things about that method, starting with its signature (taken directly from the Apple documentation):
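In Swift it reads roughly like this:

func setActions(actions: [AnyObject]?, forContext context: UIUserNotificationActionContext)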

The first parameter concerns the actions that should be grouped into the category. It's an array containing all the actions, and the order of the action objects in the array specifies the order in which they'll appear in the notification.

The second parameter is quite important. The context is an enum type, and describes the context of the alert that the notification will appear into. There are two possible values:

  1. UIUserNotificationActionContextDefault: It corresponds to a full alert that appears at the center of the screen (when the device is unlocked).
  2. UIUserNotificationActionContextMinimal: It corresponds to a banner alert.

In the default context, the category can accept up to four actions, which will be displayed in the predefined order in the notification alert (at the screen center). In the minimal context, up to two actions can be set to appear in the banner alert. Notice that in the second case, you must choose the most important actions among all of them, so that those appear in the banner notification. In our implementation we'll specify the actions for both contexts.

As I said, the first parameter in the above method must be an array. For this reason, our initial step is to create two arrays with the actions for each context. Let’s go back to our function:
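For example (which actions are the "most important" for the minimal context is a judgment call, so treat this as a sketch):

let actionsDefault = [justInformAction, editListAction, trashAction]
let actionsMinimal = [editListAction, trashAction]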

Next, let’s create a new category for which we will set an identifier, and then we will provide the above arrays for the two contexts:
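A sketch, using a hypothetical identifier value:

let shoppingListReminderCategory = UIMutableUserNotificationCategory()
shoppingListReminderCategory.identifier = "shoppingListReminderCategory"
shoppingListReminderCategory.setActions(actionsDefault, forContext: UIUserNotificationActionContext.Default)
shoppingListReminderCategory.setActions(actionsMinimal, forContext: UIUserNotificationActionContext.Minimal)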

And… that’s all it takes to create a category regarding the actions of a notification.

Registering Notification Settings

In the last three parts we configured all the new features of the local notification, and now all that's left is to register everything in the settings. For this purpose, we will use the UIUserNotificationSettings class (new in iOS 8), and through the following init method we'll provide the types and the category of the notification:

The first parameter is the types we defined for the notification. The second parameter is an NSSet object containing the categories for all the existing notifications in the application. In this example we have just one category, but we'll create an NSSet object anyway.

Let’s continue the function implementation with that:

Now, we can create a new object of the UIUserNotificationSettings class and pass the required arguments:

Lastly, let’s write (register) the settings in the Settings app using the next line:
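Putting these last three steps together, a sketch could look like the following; notificationTypes is assumed to hold the types configured earlier, and in recent Swift versions the NSSet mentioned above bridges to a native Set:

let categoriesForSettings: Set<UIUserNotificationCategory> = [shoppingListReminderCategory]
let newNotificationSettings = UIUserNotificationSettings(forTypes: notificationTypes, categories: categoriesForSettings)
UIApplication.sharedApplication().registerUserNotificationSettings(newNotificationSettings)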

The first time the above code runs, it will create a new record for our application in the Settings app.

Now, before I present the whole setupNotificationSettings() function with all the previous additions, let me say one more thing. This function will be called in the viewDidLoad method, which means its content will be executed every time the app runs. However, since the notification settings are not going to change, it's pointless to set them again and again, so it would be wise to wrap all the above code in an if statement. In its condition, we'll check whether the notification types have been specified or not, and only in the latter case will the if body be executed. This translates to code as shown next:
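In sketch form (the body stands in for everything built in the previous parts):

let existingSettings = UIApplication.sharedApplication().currentUserNotificationSettings()
if existingSettings?.types == UIUserNotificationType.None {
    // Create the actions and the category, then register the settings,
    // exactly as shown in the previous parts.
}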

First, we get the existing notification settings using the currentUserNotificationSettings() method of the UIApplication class. This method returns a settings object, through which we can check the current value of the types property. Remember that this property is an enum type. If its value equals None, meaning that no notification types have been set, then we register the notification settings by doing everything covered in the previous parts; otherwise we do nothing.

With the above condition we avoid unneeded settings registration. Note, though, that if you want to modify any settings or add more actions, categories, etc. for new notifications, you should comment out the opening and closing lines of the if statement and run the app once, so the new additions are applied and you can test with the new settings. Then remove the comments to revert your code to its previous state.

With that, all the work in the notification details is over. Right next you can see the setupNotificationSettings() function fully implemented and in one part:

Don’t forget to call this function in the viewDidLoad body:

Handling Notification Actions

There is one last aspect of the local notification that we haven’t worked with yet, and that is to handle the actions that get received by the application when the user taps on the respective buttons. As usual, there are certain delegate methods that should be implemented.

Before we see what actions we'll handle and how, let me introduce you to a couple of other delegate methods that can come in handy when developing your own applications. Note that in this demo we won't really use them; we'll just print a message to the console. Also, make sure to open the AppDelegate.swift file now.

So, the first one regards the notification settings. This delegate method is called when the app is launched (either normally or due to a local notification's action) and contains all the settings configured for the notifications of your app. Right next you can see its definition. All we do is print the current notification types:
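A sketch of that implementation:

func application(application: UIApplication, didRegisterUserNotificationSettings notificationSettings: UIUserNotificationSettings) {
    print("Registered notification types: \(notificationSettings.types)")
}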

Despite the simple implementation above, you can use this method to access all kinds of settings supported by the UIUserNotificationSettings class. It can be useful when you need to check the existing settings and act depending on the values you find. Don't forget that users can change these settings through the Settings app, so never assume that the initial configuration made in code is still valid.

When you schedule a local notification, it will fire regardless of whether your application is currently running or not. Usually, developers imagine how the notifications will appear when the app is in the background or suspended, and the implementation focuses on that. However, it's your duty to handle a notification that fires while the app is running. Thankfully, the iOS SDK makes things pretty straightforward, as there's another delegate method that is called when the app is in the foreground:
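A sketch of it, simply logging the notification:

func application(application: UIApplication, didReceiveLocalNotification notification: UILocalNotification) {
    print("Received a local notification while running: \(notification.alertBody ?? "")")
}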

There will be cases where you won’t need to handle the notification if your app is running. However, in the opposite scenario, the above method is the place to deal with it and perform the proper actions.

Now let’s focus on the delegate method that is called when an action button is tapped by the user. Based on the identifier values we set to the actions, we’ll determine which one has been performed, and then we will make the app execute the proper part of code. We configured three actions in total:

  1. One to make the notification simply go away (identifier: justInform).
  2. One to add a new item to the list (identifier: editList).
  3. One to delete the entire list (identifier: trashAction).

From all the above, we don’t need to do anything when the first action is performed. However, we’ll handle the other two. Depending on the identifier value, we will post a notification (NSNotification) for each one, and then in the ViewController class we’ll observe for those notifications, and of course, we’ll handle them.

Let’s get started with the new delegate method:
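A sketch of it, under the assumption that the posted notification names match the observer function names used later in ViewController:

func application(application: UIApplication, handleActionWithIdentifier identifier: String?, forLocalNotification notification: UILocalNotification, completionHandler: () -> Void) {
    // Route based on the action identifier set earlier.
    if identifier == "editList" {
        NSNotificationCenter.defaultCenter().postNotificationName("modifyListNotification", object: nil)
    } else if identifier == "trashAction" {
        NSNotificationCenter.defaultCenter().postNotificationName("deleteListNotification", object: nil)
    }

    // Always let the system know we're done handling the action.
    completionHandler()
}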

In both cases we post a notification (an NSNotification), specifying a different name each time. Notice that at the end we call the completion handler, which is mandatory, so the system knows we've handled the received action. This is the most important delegate method when working with local notifications, as this is the place where you'll route your code depending on the action the user performed.

Now, let’s go back to the ViewController.swift file. There, let’s make that class observe for the notifications we set before. In the viewDidLoad method add the next lines:
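Something along these lines (the string-based selector form shown here predates the #selector syntax of later Swift versions):

NSNotificationCenter.defaultCenter().addObserver(self, selector: "modifyListNotification", name: "modifyListNotification", object: nil)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "deleteListNotification", name: "deleteListNotification", object: nil)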

The modifyListNotification() and deleteListNotification() are both custom functions that we’ll implement right next.

Let’s begin with the first one. What we want is to make the textfield the first responder when the app is launched due to the edit list action, so we’re just talking about one line of code only:
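Assuming the textfield outlet is named txtAddItem (a hypothetical name):

func modifyListNotification() {
    txtAddItem.becomeFirstResponder()
}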

With this, the keyboard will automatically be shown and the user will be able to type a new item immediately.

For the delete list action, we want to remove all existing objects from the array that contains them. So, we'll do exactly that, and then we'll save the (empty) array back to the file. Lastly, we'll reload the tableview data, so when the user launches the app, they'll find no items at all:
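A sketch, reusing the hypothetical names from the saving and loading part:

func deleteListNotification() {
    shoppingList.removeAllObjects()
    saveShoppingList()
    tblShoppingList.reloadData()
}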

With the above implemented, our demo application is finally ready!


Geofencing in iOS with Core Location Swift 3

Setting Up a Location Manager and Permissions

Before any geofence monitoring can happen, though, you need to set up a Location Manager instance and request the appropriate permissions.

Open GeotificationsViewController.swift and declare a constant instance of a CLLocationManager near the top of the class, as shown below:

class GeotificationsViewController: UIViewController {

  @IBOutlet weak var mapView: MKMapView!

  var geotifications = [Geotification]()
  let locationManager = CLLocationManager() // Add this statement

  ...
}

Next, replace viewDidLoad() with the following code:

override func viewDidLoad() {
  super.viewDidLoad()
  // 1
  locationManager.delegate = self
  // 2
  locationManager.requestAlwaysAuthorization()
  // 3
  loadAllGeotifications()
}

Let’s run through this method step by step:

  1. You set the view controller as the delegate of the locationManager instance so that the view controller can receive the relevant delegate method calls.
  2. You make a call to requestAlwaysAuthorization(), which invokes a prompt to the user requesting Always authorization to use location services. Apps with geofencing capabilities need Always authorization, due to the need to monitor geofences even when the app isn't running. Info.plist has already been set up with a message to show the user when requesting the user's location under the key NSLocationAlwaysUsageDescription.
  3. You call loadAllGeotifications(), which deserializes the list of geotifications previously saved to NSUserDefaults and loads them into a local geotifications array. The method also loads the geotifications as annotations on the map view.

When the app prompts the user for authorization, it will show NSLocationAlwaysUsageDescription, a user-friendly explanation of why the app requires access to the user’s location. This key is mandatory when you request authorization for location services. If it’s missing, the system will ignore the request and prevent location services from starting altogether.

Build and run the project, and you’ll see a user prompt with the aforementioned description that’s been set:

You’ve set up your app to request the required permission. Great! Click or tap Allow to ensure the location manager will receive delegate callbacks at the appropriate times.

Before you proceed to implement the geofencing, there’s a small issue you have to resolve: the user’s current location isn’t showing up on the map view! This feature is disabled, and as a result, the zoom button on the top-left of the navigation bar doesn’t work.

Fortunately, the fix is not difficult — you’ll simply enable the current location only after the app is authorized.

In GeotificationsViewController.swift, add the following delegate method to the CLLocationManagerDelegate extension:

extension GeotificationsViewController: CLLocationManagerDelegate {
  func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) {
    mapView.showsUserLocation = (status == .authorizedAlways)
  }
}

The location manager calls locationManager(_:didChangeAuthorization:) whenever the authorization status changes. If the user has already granted the app permission to use Location Services, this method will be called by the location manager after you've initialized the location manager and set its delegate.

That makes this method an ideal place to check if the app is authorized. If it is, you enable the map view to show the user’s current location.

Build and run the app. If you’re running it on a device, you’ll see the location marker appear on the main map view. If you’re running on the simulator, click Debug\Location\Apple in the menu to see the location marker:

In addition, the zoom button on the navigation bar now works. :]

Registering Your Geofences

With the location manager properly configured, the next order of business is to allow your app to register user geofences for monitoring.

In your app, the user geofence information is stored within your custom Geotification model. However, Core Location requires each geofence to be represented as a CLCircularRegion instance before it can be registered for monitoring. To handle this requirement, you’ll create a helper method that returns a CLCircularRegion from a given Geotification object.

Open GeotificationsViewController.swift and add the following method to the main body:

func region(withGeotification geotification: Geotification) -> CLCircularRegion {
  // 1
  let region = CLCircularRegion(center: geotification.coordinate, radius: geotification.radius, identifier: geotification.identifier)
  // 2
  region.notifyOnEntry = (geotification.eventType == .onEntry)
  region.notifyOnExit = !region.notifyOnEntry
  return region
}

Here’s what the above method does:

  1. You initialize a CLCircularRegion with the location of the geofence, the radius of the geofence and an identifier that allows iOS to distinguish between the registered geofences of a given app. The initialization is rather straightforward, as the Geotification model already contains the required properties.
  2. The CLCircularRegion instance also has two Boolean properties, notifyOnEntry and notifyOnExit. These flags specify whether geofence events will be triggered when the device enters and leaves the defined geofence, respectively. Since you’re designing your app to allow only one notification type per geofence, you set one of the flags to true while you set the other to false, based on the enum value stored in the Geotification object.

Next, you need a method to start monitoring a given geotification whenever the user adds one.

Add the following method to the body of GeotificationsViewController:

func startMonitoring(geotification: Geotification) {
  // 1
  if !CLLocationManager.isMonitoringAvailable(for: CLCircularRegion.self) {
    showAlert(withTitle:"Error", message: "Geofencing is not supported on this device!")
    return
  }
  // 2
  if CLLocationManager.authorizationStatus() != .authorizedAlways {
    showAlert(withTitle:"Warning", message: "Your geotification is saved but will only be activated once you grant Geotify permission to access the device location.")
  }
  // 3
  let region = self.region(withGeotification: geotification)
  // 4
  locationManager.startMonitoring(for: region)
}

Let’s walk through the method step by step:

  1. isMonitoringAvailable(for:) determines if the device has the required hardware to support the monitoring of geofences. If monitoring is unavailable, you bail out entirely and alert the user accordingly. showAlert(withTitle:message:) is a helper function in Utilities.swift that takes in a title and message and displays an alert view.
  2. Next, you check the authorization status to ensure that the app has also been granted the required permission to use Location Services. If the app isn't authorized, it won't receive any geofence-related notifications. However, in this case, you'll still allow the user to save the geotification, since Core Location lets you register geofences even when the app isn't authorized. When the user subsequently grants authorization to the app, monitoring for those geofences will begin automatically.
  3. You create a CLCircularRegion instance from the given geotification using the helper method you defined earlier.
  4. Finally, you register the CLCircularRegion instance with Core Location for monitoring.

With your start method done, you also need a method to stop monitoring a given geotification when the user removes it from the app.

In GeotificationsViewController.swift, add the following method below startMonitoring(geotification:):

func stopMonitoring(geotification: Geotification) {
  for region in locationManager.monitoredRegions {
    guard let circularRegion = region as? CLCircularRegion, circularRegion.identifier == geotification.identifier else { continue }
    locationManager.stopMonitoring(for: circularRegion)
  }
}

The method simply instructs the locationManager to stop monitoring the CLCircularRegion associated with the given geotification.

Now that you have both the start and stop methods complete, you’ll use them whenever you add or remove a geotification. You’ll begin with the adding part.

First, take a look at addGeotificationViewController(_:didAddCoordinate) in GeotificationsViewController.swift.

The method is the delegate call invoked by the AddGeotificationViewController upon creating a geotification; it’s responsible for creating a new Geotification object using the values passed from AddGeotificationsViewController, and updating both the map view and the geotifications list accordingly. Then you call saveAllGeotifications(), which takes the newly-updated geotifications list and persists it via NSUserDefaults.

Now, replace the method with the following code:

func addGeotificationViewController(controller: AddGeotificationViewController, didAddCoordinate coordinate: CLLocationCoordinate2D, radius: Double, identifier: String, note: String, eventType: EventType) {
  controller.dismiss(animated: true, completion: nil)
  // 1
  let clampedRadius = min(radius, locationManager.maximumRegionMonitoringDistance)
  let geotification = Geotification(coordinate: coordinate, radius: clampedRadius, identifier: identifier, note: note, eventType: eventType)
  add(geotification: geotification)
  // 2
  startMonitoring(geotification: geotification)
  saveAllGeotifications()
}

You’ve made two key changes to the code:

  1. You ensure that the value of the radius is clamped to the maximumRegionMonitoringDistance property of locationManager, which is defined as the largest radius in meters that can be assigned to a geofence. This is important, as any value that exceeds this maximum will cause monitoring to fail.
  2. You add a call to startMonitoring(geotification:) to ensure that the geofence associated with the newly-added geotification is registered with Core Location for monitoring.

At this point, the app is fully capable of registering new geofences for monitoring. There is, however, a limitation: As geofences are a shared system resource, Core Location restricts the number of registered geofences to a maximum of 20 per app.

While there are workarounds to this limitation (See Where to Go From Here? for a short discussion), for the purposes of this tutorial, you’ll take the approach of limiting the number of geotifications the user can add.
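One simple way to respect that cap, shown here as a hypothetical sketch rather than code from the starter project, is to check how many regions are already registered before allowing another geotification:

// Hypothetical helper: refuse to register a new geofence once the
// per-app limit of 20 monitored regions has been reached.
func canAddGeotification() -> Bool {
  return locationManager.monitoredRegions.count < 20
}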

 

Finally, let’s deal with the removal of geotifications. This functionality is handled in mapView(_:annotationView:calloutAccessoryControlTapped:), which is invoked whenever the user taps the “delete” accessory control on each annotation.

Add a call to stopMonitoring(geotification:) to mapView(_:annotationView:calloutAccessoryControlTapped:), as shown below:

func mapView(_ mapView: MKMapView, annotationView view: MKAnnotationView, calloutAccessoryControlTapped control: UIControl) {
  // Delete geotification
  let geotification = view.annotation as! Geotification
  stopMonitoring(geotification: geotification)   // Add this statement
  removeGeotification(geotification)
  saveAllGeotifications()
}

The additional statement stops monitoring the geofence associated with the geotification, before removing it and saving the changes to NSUserDefaults.

At this point, your app is fully capable of monitoring and un-monitoring user geofences. Hurray!

Build and run the project. You won’t see any changes, but the app will now be able to register geofence regions for monitoring. However, it won’t be able to react to any geofence events just yet. Not to worry—that will be your next order of business!

Reacting to Geofence Events

You’ll start by implementing some of the delegate methods to facilitate error handling – these are important to add in case anything goes wrong.

In GeotificationsViewController.swift, add the following methods to the CLLocationManagerDelegate extension:

func locationManager(_ manager: CLLocationManager, monitoringDidFailFor region: CLRegion?, withError error: Error) {
  print("Monitoring failed for region with identifier: \(region!.identifier)")
}

func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
  print("Location Manager failed with the following error: \(error)")
}

These delegate methods simply log any errors that the location manager encounters to facilitate your debugging.

Note: You’ll definitely want to handle these errors more robustly in your production apps. For example, instead of failing silently, you could inform the user what went wrong.
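For example, here is a hedged sketch of a slightly friendlier failure handler. It assumes the same showAlert(withTitle:message:) helper that GeotificationsViewController already uses:

func locationManager(_ manager: CLLocationManager, monitoringDidFailFor region: CLRegion?, withError error: Error) {
  // Surface the failure to the user instead of only logging it.
  let identifier = region?.identifier ?? "unknown"
  showAlert(withTitle: "Error", message: "Monitoring failed for geofence \(identifier): \(error.localizedDescription)")
}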

Next, open AppDelegate.swift; this is where you’ll add code to properly listen and react to geofence entry and exit events.

Add the following line at the top of the file to import the CoreLocation framework:

import CoreLocation

Ensure that the AppDelegate has a CLLocationManager instance near the top of the class, as shown below:

class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?

  let locationManager = CLLocationManager() // Add this statement
  ...
}

Replace application(_:didFinishLaunchingWithOptions:) with the following implementation:

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey : Any]? = nil) -> Bool {
  locationManager.delegate = self
  locationManager.requestAlwaysAuthorization()
  return true
}

You’ve set up your AppDelegate to receive geofence-related events. But you might wonder, “Why did I designate the AppDelegate to do this instead of the view controller?”

Geofences registered by an app are monitored at all times, including when the app isn’t running. If the device triggers a geofence event while the app isn’t running, iOS automatically relaunches the app directly into the background. This makes the AppDelegate an ideal entry point to handle the event, as the view controller may not be loaded or ready.
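As an aside, you could even detect that kind of background relaunch inside application(_:didFinishLaunchingWithOptions:). This check is purely illustrative and isn’t needed for this tutorial:

// iOS includes the .location key in the launch options when it relaunches
// the app in the background for a location event such as a geofence crossing.
if launchOptions?[.location] != nil {
  print("Relaunched in the background for a location event")
}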

Now you might also wonder, “How will a newly-created CLLocationManager instance be able to know about the monitored geofences?”

It turns out that all geofences registered by your app for monitoring are conveniently accessible by all location managers in your app, so it doesn’t matter where the location managers are initialized. Pretty nifty, right? :]
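To illustrate, here’s a purely hypothetical snippet; even a brand-new location manager reports the geofences your app registered elsewhere:

// monitoredRegions is shared app-wide, so any CLLocationManager instance
// sees the same set of registered geofences.
let freshManager = CLLocationManager()
for region in freshManager.monitoredRegions {
  print("Currently monitoring: \(region.identifier)")
}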

Now all that’s left is to implement the relevant delegate methods to react to the geofence events. Before you do so, you’ll create a method to handle a geofence event.

Add the following method to AppDelegate.swift:

func handleEvent(forRegion region: CLRegion!) {
  print("Geofence triggered!")
}

At this point, the method takes in a CLRegion and simply logs a statement. Not to worry—you’ll implement the event handling later.

Next, add the following delegate methods in the CLLocationManagerDelegate extension of AppDelegate.swift, as well as a call to the handleEvent(forRegion:) method you just created, as shown in the code below:

extension AppDelegate: CLLocationManagerDelegate {
  
  func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
    if region is CLCircularRegion {
      handleEvent(forRegion: region)
    }
  }
  
  func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
    if region is CLCircularRegion {
      handleEvent(forRegion: region)
    }
  }
}

As the method names aptly suggest, Core Location calls locationManager(_:didEnterRegion:) when the device enters a CLRegion and locationManager(_:didExitRegion:) when the device exits one.

Both methods hand you the CLRegion in question, which you need to check to ensure it’s a CLCircularRegion, since it could be a CLBeaconRegion if your app happens to be monitoring iBeacons, too. If the region is indeed a CLCircularRegion, you accordingly call handleEvent(forRegion:).

Note: A geofence event is triggered only when iOS detects a boundary crossing. If the user is already within a geofence at the point of registration, iOS won’t generate an event. If you need to query whether the device location falls within or outside a given geofence, Apple provides a method called requestState(for:).
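If you ever need that, a minimal sketch, not required for this tutorial, might look like this:

// After calling locationManager.requestState(for: region), Core Location
// reports the result asynchronously through this delegate method.
func locationManager(_ manager: CLLocationManager, didDetermineState state: CLRegionState, for region: CLRegion) {
  switch state {
  case .inside:  print("Already inside \(region.identifier)")
  case .outside: print("Outside \(region.identifier)")
  case .unknown: print("State not yet known for \(region.identifier)")
  }
}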

Now that your app is able to receive geofence events, you’re ready to give it a proper test run. If that doesn’t excite you, it really ought to, because for the first time in this tutorial, you’re going to see some results. :]

The most accurate way to test your app is to deploy it on your device, add some geotifications and take the app for a walk or a drive. However, it wouldn’t be wise to do so right now, as you wouldn’t be able to verify the print logs emitted by the geofence events with the device unplugged. Besides, it would be nice to get assurance that the app works before you commit to taking it for a spin.

Fortunately, there’s an easy way to do this without leaving the comfort of your home. Xcode lets you include a GPX file of hardcoded waypoints in your project that you can use to simulate test locations. Lucky for you, the starter project includes one for your convenience. :]

Open up TestLocations.gpx, which you can find in the Supporting Files group, and inspect its contents. You’ll see the following:

<?xml version="1.0"?>
<gpx version="1.1" creator="Xcode">
  <wpt lat="37.422" lon="-122.084058">
    <name>Google</name>
  </wpt>
  <wpt lat="37.3270145" lon="-122.0310273">
    <name>Apple</name>
  </wpt>
</gpx>

The GPX file is essentially an XML file that contains two waypoints: Google’s Googleplex in Mountain View and Apple’s Headquarters in Cupertino.

To begin simulating the locations in the GPX file, build and run the project. When the app launches the main view controller, go back to Xcode, select the Location icon in the Debug bar and choose TestLocations:

Back in the app, use the Zoom button on the top-left of the navigation bar to zoom to the current location. Once you get close to the area, you’ll see the location marker moving repeatedly from the Googleplex to Apple, Inc. and back.

Test the app by adding a few geotifications along the path defined by the two waypoints. If you added any geotifications earlier in the tutorial before you enabled geofence registration, those geotifications will obviously not work, so you might want to clear them out and start afresh.

For the test locations, it’s a good idea to place a geotification roughly at each waypoint. Here’s a possible test scenario:

  • Google: Radius: 1000m, Message: “Say Bye to Google!”, Notify on Exit
  • Apple: Radius: 1000m, Message: “Say Hi to Apple!”, Notify on Entry

Once you’ve added your geotifications, you’ll see a log in the console each time the location marker enters or leaves a geofence. If you activate the home button or lock the screen to send the app to the background, you’ll also see the logs each time the device crosses a geofence, though you obviously won’t be able to verify that behavior visually.

Note: Location simulation works both in iOS Simulator and on a real device. However, the iOS Simulator can be quite inaccurate in this case; the timings of the triggered events do not coincide very well with the visual movement of the simulated location in and out of each geofence. You would do better to simulate locations on your device, or better still, take the app for a walk!

Notifying the User of Geofence Events

You’ve made a lot of progress with the app. At this point, it simply remains for you to notify the user whenever the device crosses the geofence of a geotification—so prepare yourself to do just that.

To obtain the note associated with a triggering CLCircularRegion returned by the delegate calls, you need to retrieve the corresponding geotification that was persisted in NSUserDefaults. This turns out to be trivial, as you can use the unique identifier you assigned to the CLCircularRegion during registration to find the right geotification.

In AppDelegate.swift, add the following helper method at the bottom of the class:

func note(fromRegionIdentifier identifier: String) -> String? {
  let savedItems = UserDefaults.standard.array(forKey: PreferencesKeys.savedItems) as? [NSData]
  let geotifications = savedItems?.map { NSKeyedUnarchiver.unarchiveObject(with: $0 as Data) as? Geotification }
  let index = geotifications?.index { $0?.identifier == identifier }
  return index != nil ? geotifications?[index!]?.note : nil
}

This helper method retrieves the geotification note from the persistent store, based on its identifier, and returns the note for that geotification.

Now that you’re able to retrieve the note associated with a geofence, you’ll write code to trigger a notification whenever a geofence event is fired, using the note as the message.

Add the following statements to the end of application(_:didFinishLaunchingWithOptions:), just before the method returns:

application.registerUserNotificationSettings(UIUserNotificationSettings(types: [.sound, .alert, .badge], categories: nil))
UIApplication.shared.cancelAllLocalNotifications()

The code you’ve added prompts the user for permission to enable notifications for this app. In addition, it does some housekeeping by clearing out all existing notifications.

Next, replace the contents of handleEvent(forRegion:) with the following:

func handleEvent(forRegion region: CLRegion!) {
  // Show an alert if application is active
  if UIApplication.shared.applicationState == .active {
    guard let message = note(fromRegionIdentifier: region.identifier) else { return }
    window?.rootViewController?.showAlert(withTitle: nil, message: message)
  } else {
    // Otherwise present a local notification
    let notification = UILocalNotification()
    notification.alertBody = note(fromRegionIdentifier: region.identifier)
    notification.soundName = "Default"
    UIApplication.shared.presentLocalNotificationNow(notification)
  }
}

If the app is active, the code above simply shows an alert controller with the note as the message. Otherwise, it presents a local notification with the same message.

Build and run the project, and run through the test procedure covered in the previous section. Whenever your test triggers a geofence event, you’ll see an alert controller displaying the reminder note:

Send the app to the background by activating the Home button or locking the device while the test is running. You’ll continue to receive notifications periodically that signal geofence events:

And with that, you have a fully functional, location-based reminder app in your hands. And yes, get out there and take that app for a spin!

Note: When you test the app, you may encounter situations where the notifications don’t fire exactly at the point of boundary crossing.

This is because before iOS considers a boundary as crossed, there is an additional cushion distance that must be traversed and a minimum time period that the device must linger at the new location. iOS internally defines these thresholds, seemingly to mitigate the spurious firing of notifications in the event the user is traveling very close to a geofence boundary.

In addition, these thresholds seem to be affected by the available location hardware capabilities. From experience, the geofencing behavior is a lot more accurate when Wi-Fi is enabled on the device.


Distributing iOS Frameworks Using CocoaPods

Creating a CocoaPod

CocoaPods is a popular dependency manager for iOS projects: a tool for managing and versioning dependencies. Similar to a framework, a CocoaPod, or ‘pod’ for short, contains code, resources and so on, as well as metadata, dependencies and setup for libraries. In fact, CocoaPods are built as frameworks that are included in the main app.

Anyone can contribute libraries and frameworks to the public repository, which is open to other iOS app developers. Almost all of the popular third-party frameworks, such as Alamofire, React Native and SDWebImage, distribute their code as a pod.

Here’s why you should care: by making a framework into a pod, you give yourself a mechanism for distributing the code, resolving the dependencies, including and building the framework source, and easily sharing it with your organization or the wider development community.

Clean out the Project

Perform the following steps to remove the current link to ThreeRingControl.

  1. Select ThreeRingControl.xcodeproj in the project navigator and delete it.
  2. Choose Remove Reference in the confirmation dialog, since you’ll need to keep the files on disk to create the pod.

Install CocoaPods

If you’ve never used CocoaPods before, you’ll need to follow a brief installation process before going any further. Go to the CocoaPods Installation Guide and come back here when you’re finished. Don’t worry, we’ll wait!

Create the Pod

Open Terminal in the ThreeRingControl directory.

Run the following command:

pod spec create ThreeRingControl

This creates the file ThreeRingControl.podspec in the current directory. It’s a template that describes the pod and how to build it. Open it in a text editor.

The template contains plenty of comment descriptions and suggestions for the commonly used settings.

  1. Replace the Spec Metadata section with:
    s.name         = "ThreeRingControl"
    s.version      = "1.0.0"
    s.summary      = "A three-ring control like the Activity status bars"
    s.description  = "The three-ring is a completely customizable widget that can be used in any iOS app. It also plays a little victory fanfare."
    s.homepage     = "http://raywenderlich.com"
    

    Normally, the description would be a little more descriptive, and the homepage would point to a project page for the framework.

  2. Replace the Spec License section with the below code, as this iOS frameworks tutorial code uses an MIT License:
    s.license      = "MIT"
    
  3. You can keep the Author Metadata section as is, or set it up with how you’d like to be credited and contacted.
  4. Replace the Platform Specifics section with the below code, because this is an iOS-only framework:
    s.platform     = :ios, "10.0"
    
  5. Replace the Source Location section with the below code. When you’re ready to share the pod, this will be a link to the GitHub repository and the commit tag for this version.
    s.source       = { :path => '.' }
    
  6. Replace the Source Code section with:
    s.source_files = "ThreeRingControl", "ThreeRingControl/**/*.{h,m,swift}"
    
  7. And the Resources section with:
    s.resources    = "ThreeRingControl/*.mp3"
    

    This includes the audio resources in the pod’s framework bundle.

  8. Remove the Project Linking and Project Settings sections.
  9. Add the following line. This line helps the application project understand that this pod’s code was written for Swift 3.
    s.pod_target_xcconfig = { 'SWIFT_VERSION' => '3' }
    
  10. Remove all the remaining comments — the lines that start with #.

You now have a workable development Podspec.

Note: If you run pod spec lint to verify the Podspec in Terminal, it’ll show an error because the source was not set to a valid URL. If you push the project to GitHub and fix that link, it will pass. However, having the linter pass is not necessary for local pod development. The Publish the Pod section below covers this.

Use the Pod

At this point, you’ve got a pod ready to rock and roll. Test it out by implementing it in the Phonercise app.

Back in Terminal, navigate up to the Phonercise directory, and then run the following command:

pod init

This creates a new file named Podfile that lists all the pods the app uses, along with their versions and optional configuration information.

Open Podfile in a text editor.

Next, add the following line under the comment, # Pods for Phonercise:

pod 'ThreeRingControl', :path => '../ThreeRingControl'

Save the file.

Run this in Terminal:

pod install 

With this command, you’re searching the CocoaPods repository and downloading any new or updated pods that match the Podfile criteria. It also resolves any dependencies, updates the Xcode project files so it knows how to build and link the pods, and performs any other required configuration.

Finally, it creates a Phonercise.xcworkspace file. Use this file from now on — instead of the xcodeproj — because it has the reference to the Pods project and the app project.

Check it Out

Close the Phonercise and ThreeRingControl projects if they are open, and then open Phonercise.xcworkspace.

Build and run. Like magic, the app should work exactly the same as before. This ease of use is brought to you by these two facts:

  1. You already did the work to separate the ThreeRingControl files and use them as a framework, e.g. adding import statements.
  2. CocoaPods does the heavy lifting of building and packaging those files; it also takes care of all the business around embedding and linking.

Pod Organization

Take a look at the Pods project, and you’ll notice two targets:

  • Pods-Phonercise: the Pods project builds each individual pod as its own framework, and then combines them into one single framework: Pods-Phonercise.
  • ThreeRingControl: this replicates the same framework logic used for building it on its own. It even adds the music files as framework resources.

Inside the project navigator, you’ll see several groups. ThreeRingControl is under Development Pods. It’s a development pod because you defined it with the :path option in the app’s Podfile. These files are symlinked, so you can edit and develop this code side-by-side with the main app code.

Pods that come from a repository, such as those from a third party, are copied into the Pods directory and listed in a Pods group. Any modifications you make are not pushed to the repository and are overwritten whenever you update the pods.

Congratulations! You’ve now created and used your own CocoaPod, and you’re probably already thinking about what to pack into a pod first.

You’re welcome to stop here, congratulate yourself and move on to Where to Go From Here, but if you do, you’ll miss out on learning an advanced maneuver.

Publish the Pod

This section walks you through the natural next step of publishing the pod to GitHub and using it like a third party framework.

Create a Repository

If you don’t already have a GitHub account, create one.

Now create a new repository to host the pod. ThreeRingControl is the obvious best fit for the name, but you can name it whatever you want. Be sure to select Swift as the .gitignore language and MIT as the license.

Click Create Repository. From the dashboard page that follows, copy the HTTPS link.

Clone the Repository

Go back to Terminal. From the project’s folder, run the following commands to create a new directory named repo and move into it:

mkdir repo
cd repo

From there, clone the GitHub repository. Replace the URL below with the HTTPS link from the GitHub page.

git clone URL

This creates a local clone of the repository, complete with the README and LICENSE files that GitHub generated for you.

Add the Code to the Repository

Next, copy the files from the previous ThreeRingControl directory to the repo/ThreeRingControl directory.

Open the copied version of ThreeRingControl.podspec, and update the s.source line to:

s.source       = { :git => "URL", :tag => "1.0.0" }

Set the URL to be the link to your repository.

Make the Commitment

Now it gets real. In this step, you’ll commit and push the code to GitHub. Your little pod is about to enter the big kid’s pool.

Run the following commands in Terminal to commit those files to the repository and push them back to the server.

cd ThreeRingControl/
git add .
git commit -m "initial commit"
git push

Visit the GitHub page and refresh it to see all the files.

Tag It

You need to tag the repository so it matches the Podspec. Run these commands to set the tag and push it to the server:

git tag 1.0.0
git push --tags

Check your handiwork by running:

pod spec lint

The response you’re looking for is ThreeRingControl.podspec passed validation.

Update the Podfile

Change the Podfile in the Phonercise directory. Replace the existing ThreeRingControl line with:

pod 'ThreeRingControl', :git => 'URL', :tag => '1.0.0'

Replace URL with your GitHub link.

From Terminal, run this in the Phonercise directory:

pod update

Now the code pulls the framework from the GitHub repository, and it’s no longer a development pod!
