Core Data Delete Rules Swift 3

Delete Rules

What happens if a note is deleted? Should the category the note belongs to also be deleted? No. But what happens if a category is deleted? Should it be possible to have notes without a category?

This brings us to delete rules. Every relationship has a delete rule. A delete rule defines what happens when the record that owns the relationship is deleted.

Select the notes relationship of the Category entity and open the Data Model Inspector on the right. By default, the delete rule of a relationship is set to Nullify. Core Data supports four delete rules:

  • No Action
  • Nullify
  • Cascade
  • Deny

No Action Delete Rule

If the delete rule of a relationship is set to No Action, nothing happens. Let me illustrate this with an example. We have a category that contains several notes. If the category is deleted, the notes are not notified of this event. The notes on the other end of the relationship believe that they are still associated with the deleted category.

I have never had a need to use this delete rule in a project. In most situations, you want to take some action when a record is deleted. And that is where the other delete rules come into play.

Nullify Delete Rule

If the delete rule of a relationship is set to Nullify, the destination of the relationship is nullified when the record is deleted.

For example, if a category has several notes and the category is deleted, the relationships pointing from the notes to the category are nullified. This is the default delete rule and the delete rule you will find yourself using most often.
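Here is what that looks like in code. This is a minimal sketch, assuming Category and Note NSManagedObject subclasses generated from the data model described above:

import CoreData

// Nullify: deleting the category leaves its notes in place,
// but each note's "category" relationship is set to nil.
func deleteCategory(_ category: Category, in context: NSManagedObjectContext) throws {
  context.delete(category)
  try context.save()
}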

Cascade Delete Rule

The Cascade delete rule is useful if the data model includes one or more dependencies. Let me give you an example. If a note should always have a category, the deletion of a category should automatically delete the notes associated with that category. In other words, the deletion of the category cascades or trickles down to the notes linked to the category. Even though this may make sense on paper, the user probably won’t like it when you automatically delete their notes. The Deny delete rule is a better option in this scenario (see below).

If you are dealing with a Many-To-Many relationship, this is often not what you want. If a note can have several tags and a tag can be linked to several notes, deleting a tag should not result in the deletion of every note with that tag. The notes could be associated with other tags, for example.

Deny Delete Rule

Remember the previous example, in which the deletion of a category resulted in the deletion of every note that belonged to it? In that scenario, it may be better to apply the Deny delete rule.

Deny is another powerful and useful pattern. It is the opposite of the Cascade delete rule. Instead of cascading the deletion of a record, it prevents the deletion of the record.

For example, if a category is associated with several notes, the category can only be deleted if it is no longer tied to any notes. This configuration prevents the scenario in which notes are no longer associated with a category.
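In code, the Deny rule surfaces as a validation error when you try to save the deletion. A minimal sketch, assuming the same Category entity:

import CoreData

// Deny: the save fails as long as the category still has notes attached.
func tryToDeleteCategory(_ category: Category, in context: NSManagedObjectContext) {
  context.delete(category)
  do {
    try context.save()
  } catch {
    // Typically an NSValidationRelationshipDeniedDeleteError.
    context.rollback()
    print("Cannot delete a category that still has notes: \(error)")
  }
}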


GCD (Grand Central Dispatch) Threading for Swift 3

GCD Concepts

To understand GCD, you need to be comfortable with several concepts related to concurrency and threading.

Concurrency

In iOS a process or application is made up of one or more threads. The threads are managed independently by the operating system scheduler. Each thread can execute concurrently but it’s up to the system to decide if this happens and how it happens.

Single-core devices can achieve concurrency through time-slicing. They would run one thread, perform a context switch, then run another thread.

[Figure: Concurrency vs. parallelism]

Multi-core devices, on the other hand, execute multiple threads at the same time via parallelism.

GCD is built on top of threads. Under the hood it manages a shared thread pool. With GCD you add blocks of code or work items to dispatch queues and GCD decides which thread to execute them on.

As you structure your code, you’ll find blocks of code that can run simultaneously and some that should not. This lets you use GCD to take advantage of concurrent execution where it helps.

Note that GCD decides how much parallelism is required based on the system and available system resources. It’s important to note that parallelism requires concurrency, but concurrency does not guarantee parallelism.

Basically, concurrency is about structure while parallelism is about execution.

Queues

GCD provides dispatch queues, represented by DispatchQueue, to manage the tasks you submit and execute them in FIFO order, guaranteeing that the first task submitted is the first one started.

Dispatch queues are thread-safe which means that you can access them from multiple threads simultaneously. The benefits of GCD are apparent when you understand how dispatch queues provide thread safety to parts of your own code. The key to this is to choose the right kind of dispatch queue and the right dispatching function to submit your work to the queue.

Queues can be either serial or concurrent. Serial queues guarantee that only one task runs at any given time. GCD controls the execution timing. You won’t know the amount of time between one task ending and the next one beginning:

[Figure: Serial queue execution]

Concurrent queues allow multiple tasks to run at the same time. Tasks are guaranteed to start in the order they were added. Tasks can finish in any order and you have no knowledge of the time it will take for the next task to start, nor the number of tasks that are running at any given time.

See the sample task execution below:

[Figure: Concurrent queue task execution]

Notice how Task 1, Task 2, and Task 3 start quickly one after the other. On the other hand, Task 1 took a while to start after Task 0. Also notice that while Task 3 started after Task 2, it finished first.

The decision of when to start a task is entirely up to GCD. If the execution time of one task overlaps with another, it’s up to GCD to determine if it should run on a different core, if one is available, or instead to perform a context switch to run a different task.

GCD provides three main types of queues:

  1. Main queue: runs on the main thread and is a serial queue.
  2. Global queues: concurrent queues that are shared by the whole system. There are four such queues with different priorities: high, default, low, and background. The background priority queue is I/O throttled.
  3. Custom queues: queues that you create which can be serial or concurrent. These actually trickle down into being handled by one of the global queues.

When sending tasks to the global concurrent queues, you don’t specify the priority directly. Instead, you specify a Quality of Service (QoS) class property. This indicates the task’s importance and guides GCD in determining the priority to give to the task (see the sketch after the list below).

The QoS classes are:

  • User-interactive: This represents tasks that need to be done immediately in order to provide a nice user experience. Use it for UI updates, event handling and small workloads that require low latency. The total amount of work done in this class during the execution of your app should be small. This should run on the main thread.
  • User-initiated: This represents tasks that are initiated from the UI and can be performed asynchronously. It should be used when the user is waiting for immediate results, and for tasks required to continue user interaction. This will get mapped into the high priority global queue.
  • Utility: This represents long-running tasks, typically with a user-visible progress indicator. Use it for computations, I/O, networking, continuous data feeds and similar tasks. This class is designed to be energy efficient. This will get mapped into the low priority global queue.
  • Background: This represents tasks that the user is not directly aware of. Use it for prefetching, maintenance, and other tasks that don’t require user interaction and aren’t time-sensitive. This will get mapped into the background priority global queue.
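For example, you can target a global queue for each QoS class directly. A minimal sketch:

DispatchQueue.global(qos: .userInitiated).async {
  // work the user is actively waiting for
}

DispatchQueue.global(qos: .utility).async {
  // longer-running work, e.g. a download with a progress bar
}

DispatchQueue.global(qos: .background).async {
  // prefetching or maintenance the user never sees directly
}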

Synchronous vs. Asynchronous

With GCD, you can dispatch a task either synchronously or asynchronously.

A synchronous function returns control to the caller after the task is completed.

An asynchronous function returns immediately, ordering the task to be done but not waiting for it. Thus, an asynchronous function does not block the current thread of execution from proceeding on to the next function.
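A quick illustration of the difference, using a hypothetical serial queue (the label is illustrative):

let queue = DispatchQueue(label: "com.example.worker")

queue.sync {
  print("sync: the caller waits for this closure to finish")
}
print("printed only after the sync closure is done")

queue.async {
  print("async: runs later on the queue")
}
print("printed immediately; the async closure may not have started yet")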

Handling Background Tasks

 

DispatchQueue.global(qos: .userInitiated).async { // 1
  let overlayImage = self.faceOverlayImageFromImage(self.image)
  DispatchQueue.main.async { // 2
    self.fadeInNewImage(overlayImage) // 3
  }
}

Here’s what the code’s doing step by step:

  1. You move the work to a background global queue and run the work in the closure asynchronously. This lets viewDidLoad() finish earlier on the main thread and makes the loading feel more snappy. Meanwhile, the face detection processing is started and will finish at some later time.
  2. At this point, the face detection processing is complete and you’ve generated a new image. Since you want to use this new image to update your UIImageView, you add a new closure to the main queue. Remember – you must always access UIKit classes on the main thread!
  3. Finally, you update the UI with fadeInNewImage(_:) which performs a fade-in transition of the new googly eyes image.

Here’s a quick guide of how and when to use the various queues with async:

  • Main Queue: This is a common choice to update the UI after completing work in a task on a concurrent queue. To do this, you’ll code one closure inside another. Targeting the main queue and calling async guarantees that this new task will execute sometime after the current method finishes.
  • Global Queue: This is a common choice to perform non-UI work in the background.
  • Custom Serial Queue: A good choice when you want to perform background work serially and track it. This eliminates resource contention since you know only one task at a time is executing. Note that if you need the data from a method, you must inline another closure to retrieve it or consider using sync.

Delaying Task Execution

DispatchQueue allows you to delay task execution. Don’t use this to work around race conditions or other timing bugs by introducing delays; use it when you want a task to run at a specific time.

let delayInSeconds = 1.0 // 1
DispatchQueue.main.asyncAfter(deadline: .now() + delayInSeconds) { // 2
  let count = PhotoManager.sharedManager.photos.count
  if count > 0 {
    self.navigationItem.prompt = nil
  } else {
    self.navigationItem.prompt = "Add photos with faces to Googlyify them!"
  }
}

Here’s what’s going on above:

  1. You specify a variable for the amount of time to delay.
  2. After the specified delay, the closure runs asynchronously on the main queue, checks the photo count, and updates the prompt accordingly.

Handling the Readers-Writers Problem

GCD provides an elegant solution of creating a read/write lock using dispatch barriers. Dispatch barriers are a group of functions acting as a serial-style bottleneck when working with concurrent queues.

When you submit a DispatchWorkItem to a dispatch queue you can set flags to indicate that it should be the only item executed on the specified queue for that particular time. This means that all items submitted to the queue prior to the dispatch barrier must complete before the DispatchWorkItem will execute.

When the DispatchWorkItem‘s turn arrives, the barrier executes it and ensures that the queue does not execute any other tasks during that time. Once finished, the queue returns to its default implementation.

The diagram below illustrates the effect of a barrier on various asynchronous tasks:

[Figure: Dispatch barrier on a concurrent queue]

Notice how in normal operation the queue acts just like a normal concurrent queue. But when the barrier is executing, it essentially acts like a serial queue. That is, the barrier is the only thing executing. After the barrier finishes, the queue goes back to being a normal concurrent queue.

Use caution when using barriers in global background concurrent queues as these queues are shared resources. Using barriers in a custom serial queue is redundant as it already executes serially. Using barriers in a custom concurrent queue is a great choice for handling thread safety in atomic or critical areas of code.

You’ll use a custom concurrent queue to handle your barrier function and separate the read and write functions. The concurrent queue will allow multiple read operations simultaneously.

fileprivate let concurrentPhotoQueue =
  DispatchQueue(
    label: "com.raywenderlich.GooglyPuff.photoQueue", // 1
    attributes: .concurrent) // 2

This initializes concurrentPhotoQueue as a concurrent queue.

  1. You set up label with a descriptive name that is helpful during debugging. Typically you’ll use the reversed DNS style naming convention.
  2. You specify a concurrent queue.

Next, replace addPhoto(_:) with the following code:

func addPhoto(_ photo: Photo) {
  concurrentPhotoQueue.async(flags: .barrier) { // 1
    self._photos.append(photo) // 2
    DispatchQueue.main.async { // 3
      self.postContentAddedNotification()
    }
  }
}

Here’s how your new write function works:

  1. You dispatch the write operation asynchronously with a barrier. When it executes, it will be the only item in your queue.
  2. You add the object to the array.
  3. Finally you post a notification that you’ve added the photo. This notification should be posted on the main thread because it will do UI work. So you dispatch another task asynchronously to the main queue to trigger the notification.

To ensure thread safety with your writes, you also need to perform reads on the concurrentPhotoQueue queue. Since you need to return data from the function call, an asynchronous dispatch won’t cut it. In this case, sync would be an excellent candidate.

Use sync to keep track of your work with dispatch barriers, or when you need to wait for the operation to finish before you can use the data processed by the closure.

You need to be careful though. Imagine if you call sync and target the current queue you’re already running on. This will result in a deadlock situation.

Two (or sometimes more) items — in most cases, threads — are said to be deadlocked if they all get stuck waiting for each other to complete or perform another action. The first can’t finish because it’s waiting for the second to finish. But the second can’t finish because it’s waiting for the first to finish.

In your case, the sync call will wait until the closure finishes, but the closure can’t finish (it can’t even start!) until the currently executing closure finishes, and that closure never will, because it is stuck waiting on your sync call. This should force you to be conscious of which queue you’re calling from, as well as which queue you’re passing in.
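Here’s a minimal sketch of that trap, using a hypothetical serial queue:

let queue = DispatchQueue(label: "com.example.serial")

queue.async {
  // This closure is running on "queue"...
  queue.sync {
    // ...so this sync call waits for the enclosing closure,
    // which in turn is waiting for this one: a deadlock.
    print("never reached")
  }
}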

Here’s a quick overview of when and where to use sync:

  • Main Queue: Be VERY careful for the same reasons as above; this situation also has potential for a deadlock condition.
  • Global Queue: This is a good candidate to sync work through dispatch barriers or when waiting for a task to complete so you can perform further processing.
  • Custom Serial Queue: Be VERY careful in this situation; if you’re running in a queue and call sync targeting the same queue, you’ll definitely create a deadlock.

Here is the matching read, a computed photos property that performs its work with sync on the same queue:

var photos: [Photo] {
  var photosCopy: [Photo]!
  concurrentPhotoQueue.sync { // 1
    photosCopy = self._photos // 2
  }
  return photosCopy
}

Here’s what’s going on step-by-step:

  1. Dispatch synchronously onto the concurrentPhotoQueue to perform the read.
  2. Store a copy of the photo array in photosCopy and return it.

Dispatch Groups

With dispatch groups you can group together multiple tasks and either wait for them to be completed or be notified once they are complete. Tasks can be asynchronous or synchronous and can even run on different queues.

DispatchQueue.global(qos: .userInitiated).async { // 1
  var storedError: NSError?
  let downloadGroup = DispatchGroup() // 2
  for address in [overlyAttachedGirlfriendURLString, 
                  successKidURLString,
                  lotsOfFacesURLString] {
    let url = URL(string: address)
    downloadGroup.enter() // 3
    let photo = DownloadPhoto(url: url!) {
      _, error in
      if error != nil {
        storedError = error
      }   
      downloadGroup.leave() // 4
    }   
    PhotoManager.sharedManager.addPhoto(photo)
  }   
      
  downloadGroup.wait() // 5
  DispatchQueue.main.async { // 6
    completion?(storedError)
  }   
}

Here’s what the code is doing step-by-step:

  1. Since you’re using the synchronous wait method which blocks the current thread, you use async to place the entire method into a background queue to ensure you don’t block the main thread.
  2. This creates a new dispatch group.
  3. You call enter() to manually notify the group that a task has started. You must balance out the number of enter() calls with the number of leave() calls or your app will crash.
  4. Here you notify the group that this work is done.
  5. You call wait() to block the current thread while waiting for tasks’ completion. This waits forever which is fine because the photos creation task always completes. You can use wait(timeout:) to specify a timeout and bail out on waiting after a specified time.
  6. At this point, you are guaranteed that all image tasks have either completed or timed out. You then make a call back to the main queue to run your completion closure.

Dispatch groups are a good candidate for all types of queues. You should be wary of using dispatch groups on the main queue if you’re waiting synchronously for the completion of all work since you don’t want to hold up the main thread. However, the asynchronous model is an attractive way to update the UI once several long-running tasks finish, such as network calls.

Dispatching asynchronously to another queue and then blocking the work using wait is clumsy. Fortunately, DispatchGroup offers a better way: its notify(queue:work:) method tells you when all of the group’s enqueued tasks have completed, without blocking any thread.

 

// 1
var storedError: NSError?
let downloadGroup = DispatchGroup()
for address in [overlyAttachedGirlfriendURLString,
                successKidURLString,
                lotsOfFacesURLString] {
  let url = URL(string: address)
  downloadGroup.enter()
  let photo = DownloadPhoto(url: url!) {
    _, error in
    if error != nil {
      storedError = error
    }   
    downloadGroup.leave()
  }   
  PhotoManager.sharedManager.addPhoto(photo)
}   
    
downloadGroup.notify(queue: DispatchQueue.main) { // 2
  completion?(storedError)
}

Here’s what’s going on:

  1. In this new implementation you don’t need to surround the method in an async call since you’re not blocking the main thread.
  2. notify(queue:work:) serves as the asynchronous completion closure. It is called when there are no more items left in the group. You also specify that you want to schedule the completion work to be run on the main queue.

This is a much cleaner way to handle this particular job as it doesn’t block any threads.

Concurrency Looping

You might notice that there’s a for loop in there that cycles through three iterations and downloads three separate images. Your job is to see if you can run this for loop concurrently to try and speed things up.

This is a job for DispatchQueue.concurrentPerform(iterations:execute:). It works similarly to a for loop in that it executes different iterations concurrently. It is synchronous and returns only when all of the work is done.

Care must be taken when figuring out the optimal number of iterations for a given amount of work. Many iterations and a small amount of work per iteration can create so much overhead that it negates any gains from making the calls concurrent. The technique known as striding helps you out here. This is where for each iteration you do multiple pieces of work.
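A minimal striding sketch: sum a large array in chunks so each iteration pays the dispatch overhead once per chunk rather than once per element (the chunk size of 1,000 is arbitrary):

import Dispatch

let values = Array(0..<1_000_000)
let strideLength = 1_000
let chunkCount = (values.count + strideLength - 1) / strideLength
var partialSums = [Int](repeating: 0, count: chunkCount)

partialSums.withUnsafeMutableBufferPointer { sums in
  DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
    let start = chunk * strideLength
    let end = min(start + strideLength, values.count)
    // Each iteration handles a whole chunk; every chunk writes to its own slot.
    sums[chunk] = values[start..<end].reduce(0, +)
  }
}
let total = partialSums.reduce(0, +)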

When is it appropriate to use DispatchQueue.concurrentPerform(iterations:execute:)? You can rule out serial queues because there’s no benefit there – you may as well use a normal for loop. It’s a good choice for concurrent queues that contain looping, especially if you need to keep track of progress.

var storedError: NSError?
let downloadGroup = DispatchGroup()
let addresses = [overlyAttachedGirlfriendURLString,
                 successKidURLString,
                 lotsOfFacesURLString]
let _ = DispatchQueue.global(qos: .userInitiated)
DispatchQueue.concurrentPerform(iterations: addresses.count) {
  i in
  let index = Int(i)
  let address = addresses[index]
  let url = URL(string: address)
  downloadGroup.enter()
  let photo = DownloadPhoto(url: url!) {
    _, error in
    if error != nil {
      storedError = error
    }
    downloadGroup.leave()
  }
  PhotoManager.sharedManager.addPhoto(photo)
}
downloadGroup.notify(queue: DispatchQueue.main) {
  completion?(storedError)
}

The former for loop has been replaced with DispatchQueue.concurrentPerform(iterations:execute:) to handle concurrent looping.

Running this new code on the device will occasionally produce marginally faster results. But was all this work worth it?

Actually, it’s not worth it in this case. Here’s why:

  • You’ve probably created more overhead running the threads in parallel than just running the for loop in the first place. You should use DispatchQueue.concurrentPerform(iterations:execute:) for iterating over very large sets along with the appropriate stride length.

Cancelling Dispatch Blocks

Thus far, you haven’t seen code that allows you to cancel enqueued tasks. This is where dispatch block objects, represented by DispatchWorkItem, come into play. Be aware that you can only cancel a DispatchWorkItem before it reaches the head of a queue and starts executing.

Let’s demonstrate this by starting download tasks for several images from Le Internet then cancelling some of them.

var storedError: NSError?
let downloadGroup = DispatchGroup()
var addresses = [overlyAttachedGirlfriendURLString,
                 successKidURLString,
                 lotsOfFacesURLString]
addresses += addresses + addresses // 1
var blocks: [DispatchWorkItem] = [] // 2

for i in 0 ..< addresses.count {
  downloadGroup.enter()
  let block = DispatchWorkItem(flags: .inheritQoS) { // 3
    let index = Int(i)
    let address = addresses[index]
    let url = URL(string: address)
    let photo = DownloadPhoto(url: url!) {
      _, error in
      if error != nil {
        storedError = error
      }
      downloadGroup.leave()
    }
    PhotoManager.sharedManager.addPhoto(photo)
  }
  blocks.append(block)
  DispatchQueue.main.async(execute: block) // 4
}

for block in blocks[3 ..< blocks.count] { // 5
  let cancel = arc4random_uniform(2) // 6
  if cancel == 1 {
    block.cancel() // 7
    downloadGroup.leave() // 8
  }
}

downloadGroup.notify(queue: DispatchQueue.main) {
  completion?(storedError)
}

Here’s a step-by-step walk through the code above:

  1. You expand the addresses array to hold three copies of each image.
  2. You initialize a blocks array to hold dispatch block objects for later use.
  3. You create a new DispatchWorkItem. You pass in a flags parameter to specify that the block should inherit its Quality of Service class from the queue it is dispatched to. You then define the work to be done in a closure.
  4. You dispatch the block asynchronously to the main queue. For this example, using the main queue makes it easier to cancel select blocks since it’s a serial queue. The code that sets up the dispatch blocks is already executing on the main queue so you are guaranteed that the download blocks will execute at some later time.
  5. You skip the first three download blocks by slicing the blocks array.
  6. Here you use arc4random_uniform() to randomly pick a number between 0 and 1. It’s like a coin toss.
  7. If the random number is 1 you cancel the block. This can only cancel blocks that are still in a queue and haven’t begun executing. Blocks can’t be canceled in the middle of execution.
  8. Here you remember to remove the canceled block from the dispatch group.

Other GCD Fun

Semaphores

 

let url = URL(string: urlString)
let semaphore = DispatchSemaphore(value: 0) // 1
let _ = DownloadPhoto(url: url!) {
  _, error in
  if let error = error {
    XCTFail("\(urlString) failed. \(error.localizedDescription)")
  }   
  semaphore.signal() // 2
}
let timeout = DispatchTime.now() + .seconds(defaultTimeoutLengthInSeconds)
if semaphore.wait(timeout: timeout) == .timedOut { // 3
  XCTFail("\(urlString) timed out")
} 

Here’s how the semaphore works in the code above:

  1. You create a semaphore and set its start value. This represents the number of things that can access the semaphore without needing the semaphore to be incremented (note that incrementing a semaphore is known as signaling it).
  2. You signal the semaphore in the completion closure. This increments the semaphore count and signals that the semaphore is available to other resources that want it.
  3. You wait on the semaphore, with a given timeout. This call blocks the current thread until the semaphore has been signaled. A .timedOut result from this call means that the timeout was reached. In this case, the test is failed because it is deemed that the network should not take more than 10 seconds to return — a fair point!
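Beyond test timeouts, a semaphore created with a value greater than zero can also throttle how many tasks run at once. A minimal sketch (the limit of 3 is arbitrary):

let downloadSlots = DispatchSemaphore(value: 3) // at most 3 tasks in flight

for item in 0..<10 {
  DispatchQueue.global(qos: .utility).async {
    downloadSlots.wait()              // blocks once 3 tasks are already running
    defer { downloadSlots.signal() }  // frees a slot when this task finishes
    print("processing item \(item)")
  }
}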

Dispatch Sources

Dispatch sources are a particularly interesting feature of GCD. A dispatch source can basically be used to monitor for some type of event. Events can include Unix signals, file descriptors, Mach ports, VFS Nodes, and other obscure stuff.

When setting up a dispatch source, you tell it what type of events you want to monitor and the dispatch queue on which its event handler block should be executed. You then assign an event handler to the dispatch source.

Upon creation, dispatch sources start off in a suspended state. This allows for additional configuration steps to take place, for example setting up the event handler. Once you’ve configured your dispatch source, you should resume it to start processing events.

In this tutorial, you’ll get a small taste of working with dispatch sources by using it in a rather peculiar way: to monitor when your app is put into debug mode.

#if DEBUG // 1
  var signal: DispatchSourceSignal? // 2
  private let setupSignalHandlerFor = { (_ object: AnyObject) -> Void in // 3
    let queue = DispatchQueue.main
    signal =
      DispatchSource.makeSignalSource(signal: Int32(SIGSTOP), queue: queue) // 4
    signal?.setEventHandler { // 5
      print("Hi, I am: \(object.description!)")
    }
    signal?.resume() // 6
  }
#endif

The code is a little involved, so walk through it step-by-step:

  1. You compile this code only in DEBUG mode to prevent “interested parties” from gaining a lot of insight into your app. :] DEBUG is defined by adding -D DEBUG under Project Settings -> Build Settings -> Swift Compiler – Custom Flags -> Other Swift Flags -> Debug. It should be set already in the starter project.
  2. You declare a signal variable of type DispatchSourceSignal for use in monitoring Unix signals.
  3. You create a block assigned to the setupSignalHandlerFor global variable that you’ll use for one-time setup of your dispatch source.
  4. Here you set up signal. You indicate that you’re interested in monitoring the SIGSTOP Unix signal and handling received events on the main queue — you’ll discover why shortly.
  5. If the dispatch source is successfully created, you register an event handler closure that’s invoked whenever you receive the SIGSTOP signal. Your handler prints a message that includes the class description.
  6. All sources start off in the suspended state by default. Here you tell the dispatch source to resume so it can start monitoring events.

Add the following code to viewDidLoad() just below the call to super.viewDidLoad():

#if DEBUG
  _ = setupSignalHandlerFor(self)
#endif

This code invokes the dispatch source’s initialization code.

Build and run the app. Pause the program execution and resume the app immediately by tapping the pause then play buttons in Xcode’s debugger:

grand central dispatch tutorial

Check out the console. You should see something like this:

Hi, I am: <GooglyPuff.PhotoCollectionViewController: 0x7fbf0af08a10>

Your app is now debugging-aware! That’s pretty awesome, but how would you use this in real life?

You could use this to debug an object and display data whenever you resume the app. You could also give your app custom security logic to protect itself (or the user’s data) when malicious attackers attach a debugger to your application.

An interesting idea is to use this approach as a stack trace tool to find the object you want to manipulate in the debugger.

Think about that situation for a second. When you stop the debugger out of the blue, you’re almost never on the desired stack frame. Now you can stop the debugger at any time and have code execute at your desired location. This is very useful if you want to execute code at a point in your app that’s tedious to access from the debugger. Try it out!

Put a breakpoint on the print() statement inside the setupSignalHandlerFor block that you just added.

Pause in the debugger, then start again. The app will hit the breakpoint you added. You’re now deep in the depths of your PhotoCollectionViewController method. Now you can access the instance of PhotoCollectionViewController to your heart’s content. Pretty handy!

Note: If you haven’t already noticed which threads are which in the debugger, take a look at them now. The main thread will always be the first thread followed by libdispatch, the coordinator for GCD, as the second thread. After that, the thread count and remaining threads depend on what the hardware was doing when the app hit the breakpoint.

In the debugger console, type the following:

(lldb) expr object.navigationItem.prompt = "WOOT!"

The Xcode debugger can sometimes be uncooperative. If you get the message:

error: use of unresolved identifier 'self'

Then you have to do it the hard way to work around a bug in LLDB. First take note of the address of object in the debug area:

(lldb) po object

Then manually cast the value to the type you want:

(lldb) expr let $vc = unsafeBitCast(0x7fbf0af08a10, to: GooglyPuff.PhotoCollectionViewController.self)
(lldb) expr $vc.navigationItem.prompt = "WOOT!"

With this method, you can make updates to the UI, inquire about the properties of a class, and even execute methods — all while not having to restart the app to get into that special workflow state. Pretty neat.

 


Higher Order Functions: Map, filter, reduce and flatMap in Swift 3.0

Map

Loops over a collection and applies the same operation to each element in the collection.
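For example:

let numbers = [1, 2, 3, 4]
let squares = numbers.map { $0 * $0 } // [1, 4, 9, 16]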


Filter

Loops over a collection and returns an array that contains elements that meet a condition.
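For example:

let numbers = [1, 2, 3, 4, 5, 6]
let evens = numbers.filter { $0 % 2 == 0 } // [2, 4, 6]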


Reduce

Combines all items in a collection to create a single value.
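For example:

let numbers = [1, 2, 3, 4]
let sum = numbers.reduce(0, +) // 10
let sentence = ["Swift", "is", "fun"].reduce("") { $0.isEmpty ? $1 : $0 + " " + $1 } // "Swift is fun"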


FlatMap

When implemented on sequences : Flattens a collection of collections.
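For example:

let nested = [[1, 2], [3, 4], [5]]
let flat = nested.flatMap { $0 } // [1, 2, 3, 4, 5]

// In Swift 3, flatMap also drops nil results when the transform returns an optional:
let parsed = ["1", "two", "3"].flatMap { Int($0) } // [1, 3]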



Create Designable Classes of UITextField, UIButton, UILabel & UIImageView With FontAwesome

Prerequisites: FontAwesome & Swift 3

UITextField

Source

import UIKit
import FontAwesome_swift // the FontAwesome library; the module name depends on how you integrate it

// MARK: - Designable IconTextField Class
@IBDesignable final class IconTextField: UITextField {

  @IBInspectable var leftImageName: String!
  @IBInspectable var leftImageSize: CGFloat!
  @IBInspectable var rightImageName: String!
  @IBInspectable var rightImageSize: CGFloat!

  /* Initiate the view with icons */
  override func draw(_ rect: CGRect) {

    // Setup text field
    self.leftViewMode = UITextFieldViewMode.always
    self.leftView = UIView(frame: CGRect(x: 0, y: 0, width: 10, height: 50))

    // Setup left view
    if leftImageName != nil {
      let leftLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 38, height: 50))

      if leftImageSize != nil {
        leftLabel.font = UIFont.fontAwesome(ofSize: leftImageSize)
      } else {
        leftLabel.font = UIFont.fontAwesome(ofSize: 30)
      }

      leftLabel.text = " \(String.fontAwesomeIcon(code: leftImageName)!)"
      leftLabel.textColor = UIColor(colorLiteralRed: 80.0/255.0, green: 182.0/255.0, blue: 253.0/255.0, alpha: 1.0)

      self.leftView = leftLabel
    }

    // Setup right view
    if rightImageName != nil {
      let rightLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 20, height: 50))

      if rightImageSize != nil {
        rightLabel.font = UIFont.fontAwesome(ofSize: rightImageSize)
      } else {
        rightLabel.font = UIFont.fontAwesome(ofSize: 20)
      }

      rightLabel.text = String.fontAwesomeIcon(code: rightImageName)
      rightLabel.textColor = UIColor(colorLiteralRed: 80.0/255.0, green: 182.0/255.0, blue: 253.0/255.0, alpha: 1.0)

      self.rightViewMode = UITextFieldViewMode.always
      self.rightView = rightLabel
    }
  }
}


UIButton

Source

// MARK: - Designable FontAwesomeButton Class
@IBDesignable final class FontAwesomeButton: UIButton {

  @IBInspectable var IconName: String! {
    didSet {
      // Set button icon
      self.setTitle("\(String.fontAwesomeIcon(code: IconName)!) \(Text)", for: .normal)
    }
  }
  @IBInspectable var Text: String = ""
  @IBInspectable var NewLineText: String = ""
  @IBInspectable var ImageLeftPadding: Int16 = 0
  var ImagePaddingStr = ""

  var fontSize: CGFloat!

  /* Initiate the view with icon & text */
  override func draw(_ rect: CGRect) {

    // Setup view
    if fontSize != nil {
      self.titleLabel?.font = UIFont.fontAwesome(ofSize: fontSize)
    } else {
      self.titleLabel?.font = UIFont.fontAwesome(ofSize: 15)
    }

    /* Check whether it's with new line or normal */
    if NewLineText != "" {

      // Add padding to the image to make it centre
      if ImageLeftPadding > 0 {
        for _ in 1...ImageLeftPadding {
          ImagePaddingStr.append(" ")
        }
      }
      self.titleLabel?.numberOfLines = 0
      self.setTitle("\(ImagePaddingStr)\(String.fontAwesomeIcon(code: IconName)!)\n\(NewLineText)", for: .normal)

    } else {
      self.setTitle("\(String.fontAwesomeIcon(code: IconName)!) \(Text)", for: .normal)
    }
  }
}


UILabel

Source

// MARK: - Designable FontAwesomeLabel Class
@IBDesignable final class FontAwesomeLabel: UILabel {

  @IBInspectable var IconName: String!
  @IBInspectable var fontSize: CGFloat = 50

  /* Initiate the view with the icon */

  /* To do custom drawing involving a label, subclass UILabel and override awakeFromNib(). */
  override func awakeFromNib() {
    super.awakeFromNib()
    // Setup view
    self.font = UIFont.fontAwesome(ofSize: fontSize)
    self.text = String.fontAwesomeIcon(code: IconName)
  }
}


UIImageView

Source

// MARK: - Designable FontAwesomeImageView Class
@IBDesignable final class FontAwesomeImageView: UIImageView {

  /* Call the setup method whenever the inspectable variables are set */
  @IBInspectable var IconName: String! {
    didSet {
      SetCustomImage()
    }
  }
  @IBInspectable var Color: UIColor = UIColor.white
  @IBInspectable var BGColor: UIColor = UIColor.clear
  @IBInspectable var ImageSize: CGSize = CGSize(width: 60, height: 60) {
    didSet {
      if IconName != nil {
        SetCustomImage()
      }
    }
  }

  /* Initiate the view with the image icon */

  /* The UIImageView class does not draw its content using the draw(_:) method. Use image views only to present images. To do custom drawing involving images, subclass UIView directly and draw your image there. */
  func SetCustomImage() {
    self.image = UIImage.fontAwesomeIcon(code: IconName, textColor: Color, size: ImageSize)
  }
}



Write Mock Classes For Unit Testing Of UserDefaults, Core Data & URLSession In Swift 3 & Xcode 8

URLSession

Source Code

import Foundation

public protocol CLSession {
  func dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask
  func dataTask(with request: URLRequest, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask
}

extension URLSession: CLSession { }

public final class URLSessionMock: CLSession {

  var url: URL?
  var request: URLRequest?
  private let dataTaskMock: URLSessionDataTaskMock

  public convenience init?(jsonDict: [String: Any], response: URLResponse? = nil, error: Error? = nil) {
    guard let data = try? JSONSerialization.data(withJSONObject: jsonDict, options: []) else { return nil }
    self.init(data: data, response: response, error: error)
  }

  public init(data: Data? = nil, response: URLResponse? = nil, error: Error? = nil) {
    dataTaskMock = URLSessionDataTaskMock()
    dataTaskMock.taskResponse = (data, response, error)
  }

  public func dataTask(with url: URL, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask {
    self.url = url
    self.dataTaskMock.completionHandler = completionHandler
    return self.dataTaskMock
  }

  public func dataTask(with request: URLRequest, completionHandler: @escaping (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask {
    self.request = request
    self.dataTaskMock.completionHandler = completionHandler
    return self.dataTaskMock
  }

  final private class URLSessionDataTaskMock: URLSessionDataTask {

    typealias CompletionHandler = (Data?, URLResponse?, Error?) -> Void
    var completionHandler: CompletionHandler?
    var taskResponse: (Data?, URLResponse?, Error?)?

    override func resume() {
      DispatchQueue.main.async {
        self.completionHandler?(self.taskResponse?.0, self.taskResponse?.1, self.taskResponse?.2)
      }
    }
  }
}
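A hypothetical usage sketch: in a test you can hand the mock canned JSON and assert that its completion handler is called. Only CLSession and URLSessionMock come from the code above; the test class and names are illustrative:

import XCTest

final class URLSessionMockTests: XCTestCase {

  func testMockDeliversCannedData() {
    let mock = URLSessionMock(jsonDict: ["name": "Alice"])!
    let completionCalled = expectation(description: "completion handler called")

    let task = mock.dataTask(with: URL(string: "https://example.com/user")!) { data, _, error in
      XCTAssertNotNil(data)
      XCTAssertNil(error)
      completionCalled.fulfill()
    }
    task.resume() // the mock fires its completion handler asynchronously on the main queue

    waitForExpectations(timeout: 1)
  }
}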


UserDefaults

Source Code

class MockUserDefaults: UserDefaults {
  var loggedInUser = 0
  override func set(_ value: Any?, forKey defaultName: String) {
    if defaultName == "loggedInUser" {
      loggedInUser += 1
    }
  }
}
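A hypothetical usage sketch: inject the mock into the code under test and assert that the key was written. LoginManager and the test class below are illustrative, not part of the mock itself:

import XCTest

// Hypothetical object under test that writes through an injected UserDefaults.
final class LoginManager {
  private let defaults: UserDefaults
  init(defaults: UserDefaults = .standard) { self.defaults = defaults }

  func logIn(userId: Int) {
    defaults.set(userId, forKey: "loggedInUser")
  }
}

final class LoginManagerTests: XCTestCase {
  func testLogInWritesLoggedInUserKey() {
    let mockDefaults = MockUserDefaults(suiteName: "testing")!
    let manager = LoginManager(defaults: mockDefaults)
    manager.logIn(userId: 42)
    XCTAssertEqual(mockDefaults.loggedInUser, 1) // set(_:forKey:) was called once
  }
}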


CoreData

Source Code

// MARK: - Variables
// Core Data variables
var storeCoordinator: NSPersistentStoreCoordinator!
var managedObjectContext: NSManagedObjectContext!
var managedObjectModel: NSManagedObjectModel!
var store: NSPersistentStore!

// Managers
var apiManager: APIManager!
var dataManager: DataManager!

// MARK: - XCTest Methods

override func setUp() {
  super.setUp()
  // Put setup code here. This method is called before the invocation of each test method in the class.
  // Set managers
  apiManager = APIManager.shared
  dataManager = DataManager.shared

  /* Core Data mock object configuration */
  // -------- Start --------
  managedObjectModel = NSManagedObjectModel.mergedModel(from: nil)
  storeCoordinator = NSPersistentStoreCoordinator(managedObjectModel: managedObjectModel)

  do {
    store = try storeCoordinator.addPersistentStore(ofType: NSInMemoryStoreType, configurationName: nil, at: nil, options: nil)
  } catch {
    print(error.localizedDescription)
  }

  managedObjectContext = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
  managedObjectContext.persistentStoreCoordinator = storeCoordinator
  apiManager.context = managedObjectContext
  dataManager.context = managedObjectContext
  (UIApplication.shared.delegate as! AppDelegate).mockContext = managedObjectContext
  // -------- End --------
}

override func tearDown() {
  // Put teardown code here. This method is called after the invocation of each test method in the class.
  managedObjectContext = nil
  apiManager = nil
  dataManager = nil
  (UIApplication.shared.delegate as! AppDelegate).mockContext = nil

  // Check store removal
  do {
    try storeCoordinator.remove(store)
  } catch {
    XCTFail("couldn't remove persistent store: \(error)")
  }

  super.tearDown()
}

func testThatStoreIsSetUp() {
  XCTAssertNotNil(store, "no persistent store")
}



Create Reachability Using Swift 3 & Xcode 8

Source Code

import Foundation
import SystemConfiguration

// Create queue for the reachability callback
fileprivate let reachabilitySerialQueue = DispatchQueue(label: "com.kapilahuja.reachability.reachability")

// Variable to save reachability status and show a toast if not reachable
var isInternetReachable = false {
  didSet {
    if !isInternetReachable {
      ApplicationManager.sharedInstance.ShowInternetAlertToast()
    }
  }
}

/* Check connectivity */
func isInternetAvailable() {
  // Create network configuration & flags
  // ----- Start -----
  var zeroAddress = sockaddr_in()
  zeroAddress.sin_len = UInt8(MemoryLayout.size(ofValue: zeroAddress))
  zeroAddress.sin_family = sa_family_t(AF_INET)

  let defaultRouteReachability = withUnsafePointer(to: &zeroAddress) {
    $0.withMemoryRebound(to: sockaddr.self, capacity: 1) { zeroSockAddress in
      SCNetworkReachabilityCreateWithAddress(nil, zeroSockAddress)
    }
  }

  var flags = SCNetworkReachabilityFlags()
  if !SCNetworkReachabilityGetFlags(defaultRouteReachability!, &flags) {
    isInternetReachable = false
    return
  }
  // ----- End -----

  // Set network reachability callback on status change of the route & set a queue for it
  // ----- Start -----
  var context = SCNetworkReachabilityContext()
  if !SCNetworkReachabilitySetCallback(defaultRouteReachability!, callback, &context) {
    print(SCError())
  }

  if !SCNetworkReachabilitySetDispatchQueue(defaultRouteReachability!, reachabilitySerialQueue) {
    print(SCError())
  }
  // ----- End -----

  // Get network status & save it
  // ----- Start -----
  let isReachable = (flags.rawValue & UInt32(kSCNetworkFlagsReachable)) != 0
  let needsConnection = (flags.rawValue & UInt32(kSCNetworkFlagsConnectionRequired)) != 0

  isInternetReachable = (isReachable && !needsConnection)
  // ----- End -----
}

func callback(reachability: SCNetworkReachability, flags: SCNetworkReachabilityFlags, info: UnsafeMutableRawPointer?) {

  //guard info != nil else { return }

  // Get network status & save it
  // ----- Start -----
  let isReachable = (flags.rawValue & UInt32(kSCNetworkFlagsReachable)) != 0
  let needsConnection = (flags.rawValue & UInt32(kSCNetworkFlagsConnectionRequired)) != 0

  isInternetReachable = (isReachable && !needsConnection)
  // ----- End -----
}


How to use

Initialize

//Check Connectivity
isInternetAvailable()

Use

if isInternetReachable {

}
