Categories
iOS

How to detect when an iOS app crashes

Great post on StackOverflow about detecting when an app is killed due to an exception, a signal, etc.

Most crashes can be caught using a 3-fold approach:

  1. applicationWillTerminate:
  2. exception handler
  3. signal handler

This translates into using:

// installs HandleExceptions as the Uncaught Exception Handler
NSSetUncaughtExceptionHandler(&HandleExceptions);
// create the signal action structure
struct sigaction newSignalAction;
// initialize the signal action structure
memset(&newSignalAction, 0, sizeof(newSignalAction));
// set SignalHandler as the handler in the signal action structure
newSignalAction.sa_handler = &SignalHandler;
// set SignalHandler as the handler for SIGABRT, SIGILL and SIGBUS
sigaction(SIGABRT, &newSignalAction, NULL);
sigaction(SIGILL, &newSignalAction, NULL);
sigaction(SIGBUS, &newSignalAction, NULL);

and:

- (void)applicationWillTerminate:(UIApplication *)application {
   ...
}

void HandleExceptions(NSException *exception) {
    DebugLog(@"This is where we save the application data during an exception");
    // Save application data on crash
}

void SignalHandler(int sig) {
    DebugLog(@"This is where we save the application data during a signal");
    // Save application data on crash
}
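
For context, here is a minimal sketch of where this setup might live; a common choice is the app delegate’s application:didFinishLaunchingWithOptions:, so that the handlers are installed as early as possible. HandleExceptions and SignalHandler are the functions shown above:

#import <UIKit/UIKit.h>
#include <signal.h>
#include <string.h>

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // install the uncaught exception handler before anything else runs
    NSSetUncaughtExceptionHandler(&HandleExceptions);

    // route fatal signals to SignalHandler
    struct sigaction newSignalAction;
    memset(&newSignalAction, 0, sizeof(newSignalAction));
    newSignalAction.sa_handler = &SignalHandler;
    sigaction(SIGABRT, &newSignalAction, NULL);
    sigaction(SIGILL, &newSignalAction, NULL);
    sigaction(SIGBUS, &newSignalAction, NULL);

    return YES;
}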

My own answer to the same question did not go so far as to suggest intercepting signals, but that is fundamental.

As I said, really great tip!

Categories
iOS

Online resources to learn Apple’s Swift Language

A collection of resources to get started with the new Swift language.

Categories
Digilife

Apple introduces iOS 8 SDK and Xcode 6

Another post for InfoQ: “At its 2014 Worldwide Developer Conference, Apple announced its new mobile operating system, iOS 8, alongside new SDKs and development tools. New Apple software includes over 4000 new APIs, including all new frameworks such as HealthKit, HomeKit, and CloudKit, and enhancements to the platform gaming capabilities.”

Read it all.

Categories
Digilife

My latest contribution to InfoQ

In the last couple of months, I have been contributing to InfoQ as a News Editor for the Mobile topic. Though it may not be the most elegant behaviour, I will link some of my writing for them here. Here is my latest piece:

Android 4.1.1 Vulnerable to Reverse Heartbleed

Google announced last week that Android 4.1.1 is susceptible to the Heartbleed OpenSSL bug. While Android 4.1.1 is, according to Google, the only Android version vulnerable to Heartbleed, it remains in use in millions of smartphones and tablets. Android 4.1.1 devices have been shown to leak significant amounts of data…

Categories
Tutorials

How Instruments can be used to fix a graphics performance issue

Lately, I have been investigating an issue that showed up in a customer’s app.

My customer’s app is a sort of PDF viewer that also allows adding annotations. Such annotations are not stored in the PDF itself; instead, they are managed in a custom way and drawn on top of the PDF in a dedicated CATiledLayer-based view.

The issue was that after a couple of zoom-in/out operations, or alternatively after moving from one PDF to another, the CPU used to jump to 100% usage, even though no apparent operation was ongoing. This hampered the overall experience of the app a lot, since practically all graphics operations became extremely slow with the app stuck in that state. Curiously, other kinds of operations, e.g., downloading a file, were not slowed down significantly.

The issue had quite a trivial cause, due to some “bad” programming (meaning that some obvious rule was not respected), but the interesting part is how I came to understand what was going on.

Instruments was the main tool that came to the rescue, as you can imagine. The picture at the left shows the CPU Profiler tool output. You can see how the overall CPU usage goes to 100% at some point and stays there. The fundamental bits of information one can get from this output are the following:

  • there was something going wrong in the cleanup phase of a pthread lifecycle; knowing that the CATiledLayer used for optimised drawing uses threads, this hinted that something was not being handled correctly in the drawing phase; a CATiledLayer bug was hard to believe in, but still a possibility;
  • furthermore, (while the program was running) the “self” field showed that a great many calls were being made to practically all symbols under “pthread_dst_cleanup” and that those calls would not halt for any reason;
  • among the calls being made repeatedly, my attention was caught by those to FreeContextStack/PopContext/GetContextStack.

The last point was the key to understanding that something in the handling of the Core Graphics context stack was not being done correctly. So I set out to investigate the custom drawing code, and indeed what I found was a couple of unbalanced calls to UIGraphicsPushContext and UIGraphicsPopContext. Fixing this removed the CPU utilisation issue.
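
To illustrate the kind of fix involved, here is a minimal sketch of the balanced pattern; the drawing calls are hypothetical placeholders, the point being that every UIGraphicsPushContext is matched by exactly one UIGraphicsPopContext on every code path:

// CALayerDelegate drawing callback for a CATiledLayer-backed view
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    // make ctx the current UIKit context so UIKit drawing APIs work
    UIGraphicsPushContext(ctx);

    // ... hypothetical annotation drawing via UIKit ...
    [[UIColor redColor] setStroke];
    UIRectFrame(CGRectMake(10, 10, 100, 100));

    // balance the push before returning, on every path
    UIGraphicsPopContext();
}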

As I said, the issue was caused by incorrect programming, but nevertheless catching it was an interesting experience.

Categories
Tutorials

What’s new in iOS 7 User Interface – Part 2: Deference

In a previous post, I began describing the changes the introduction of iOS 7 brought to the iOS UI/UX dimension. In that post, I listed 4 main principles shaping the idea of the iOS flat UI:

  • Clarity
  • Deference
  • Depth
  • Detail

Here I will try to clarify what a “deferent” UI should be. Again in the New Oxford American Dictionary, I found deference defined as “humble submission and respect”. As Apple applies this concept to iOS, the subject of deference is meant to be the User Interface itself, while the object of deference is user content. This means that the User Interface should not get in the way of user content; the UI should not be prominent over content. Rather, it should exalt user content.

An example of deference given by Apple can be found in its Calendar app. Specifically, as you can see in the image below, look at the search bar. In the iOS 6 Calendar app, the search bar reduced the space available for user content. In iOS 7, the search bar as such disappears and is replaced by a magnifier icon; when you tap on it, the search field appears inside the navigation bar. The navigation bar itself changes its content to adapt to the new context by displaying two buttons: Today and Done.

Another example of deference is provided by the new Notes app. It must be said that the old Notes app was surely one of the worst in the Apple pack. Here again we find the trick of the disappearing search bar, which gives more space to content. But comparing the two screenshots, it becomes apparent that in the new Notes app content is king, while in the old one it was overshadowed by several UI elements: the typeface used for notes; the strong colors of both the background and the text; grid lines; and so on.

Looking at the Notes app, it is interesting to note that flat UI in iOS speak does not mean “no texture”. Indeed, the Notes app features a “realistic” (Apple’s wording) textured white background. It seems that what really matters is that “realistic” UI artifacts be “deferent”, as happens with the background in the Notes app.

Finally, a great example of deference, i.e., content over UI, is found in the new Weather app. As you can see in the comparison below, gone is the card-like appearance, whose only effect was some clutter and less usable space. Instead, we find a big background image representing the current conditions and big centered lettering specifying the current temperature; the larger available space allows for a new, more explicit textual representation of the current weather status and for one more hour in the detailed hourly forecast.

I hope I have made deference a bit easier to understand as a basic principle of the iOS 7 UI. In a future post, I will examine the next principle: depth.

Categories
Digilife

What’s new in iOS 7 User Interface – 1

The new iOS 7 has radically changed the way the user interface is conceived on iOS. Indeed, the change has been so radical that over the past few months many not-so-positive comments and much criticism could be found on the web. Now that iOS 7 has officially shipped, this is starting to change and many voices are expressing their true enthusiasm. The rush to update to iOS 7 does not seem to stop either, so it is already clear that iOS 7 will be a huge success in adoption.

In any case, apart from any criticism one may have regarding how iOS 7 looks and feels, it is clear that it marks the beginning of a new era for mobile interfaces. After several weeks of use and of learning through WWDC videos, I am going to summarize here what, as far as I understand, gives iOS 7 its distinctive character and makes it a radical change.

About being flat

You can hear a lot of talk about “flat UI” surrounding iOS 7, and in a sense iOS 7 is flat; but it is not just that. If the change brought by iOS 7 were just a move beyond skeuomorphism, it would not have been so radical a change, as the plethora of flat UI apps already existing for iOS 6 (and Android) proves. As an example, take the new Evernote release for iOS 7:

evernote for ios 7

It is surely a flat UI app, but does it seem like an iOS 7 app? Certainly not. Still, reviews of Evernote have been raving, and all of them (that I have read, that is) have praised Evernote for being a great iOS 7 app. To me, it looks more like an Android app. Even the most basic tenet of iOS 7 interface design – which is: content before all else – is not really respected, as the Evernote home screen shows. But this post is not just about Evernote; if you look at this link, you will find several more examples of iOS 7 apps that would feel at home on an Android phone (alongside others that are pretty good).

Basic Principles

So, let’s start with the basics. If you want to design a great iOS 7 UI, you will need to take four principles into consideration:

  • Clarity
  • Deference
  • Depth
  • Detail

It is not just about being flat. iOS 7 is about clarity, deference, depth, and detail.

Clarity

Clarity is a seemingly simple yet complex word. Clarity can be associated with the following qualities (as they are registered in the New Oxford American Dictionary):

  • coherence
  • intelligibility
  • easy to see or hear
  • transparency
  • purity

All of those qualities are there in the apps made by Apple for iOS 7, and you can clearly see throughout the whole iOS 7 redesign how those qualities are really a strong mark of the iOS 7 user experience.

Coherence: coherence is attained through several means, e.g., the now famous Jony Ive icon grid; a new color palette; a completely redesigned icon set; etc. Even the new system icons have been designed in a way that makes them look good with the new system font!

Intelligibility is the quality of something that can be understood. This is evident in the effort to give user content the most important role to play (see also the concept of deference), and even more, to present user content so that the most important information is the most evident. A great example of that is the Weather app.

Intelligibility also has to do with the clear separation between user content and active elements, i.e., UI controls, and with the choice of using text for buttons: nothing comes close to text when it comes to intelligibility, of course. It should be noted that the main point behind the decision to use purely textual buttons, i.e., the removal of button borders, is intelligibility: no borders means more space available for useful information.

Intelligibility also has to do with another key concept in the iOS 7 redesign: that of context, and of always being in context. This is so crucial that I will come back to it later (or in a later post).

Being easy to see or hear may appear close to being intelligible, and indeed it is, but it carries a more specific meaning. In particular, it is visible in the new iOS 7 feature that allows the user to change the system font size. This is attained through the use of text styles in place of the older font face qualifications (e.g., bold face): now we have headers, titles, and so on. And this is made possible by an overall redesign of the typography system in iOS 7.
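
Since text styles are an API as much as a design concept, here is a minimal sketch of what they look like in code; preferredFontForTextStyle: and the UIFontTextStyle constants are the iOS 7 API, while the labels are just for illustration:

#import <UIKit/UIKit.h>

// ask for fonts by semantic role rather than by face and size;
// the size returned reflects the user's preferred text size setting
UILabel *titleLabel = [[UILabel alloc] init];
titleLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleHeadline];

UILabel *bodyLabel = [[UILabel alloc] init];
bodyLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleBody];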

Sharpness: a noteworthy feature of iOS 7’s new typography is its adaptive rendering of type: at smaller sizes, a typeface is rendered with a kind of boldening correction; at larger sizes, it is rendered with a lightening correction. This makes for a great, sharp effect which is possibly not so easy to notice. Have a look at this picture, where the central column represents how a normal, regular typeface is rendered on iOS 7, compared to how the same typeface is rendered in plain and bold styles. This should be a clear example of sharpness.

As to transparency, this is one of the main tenets of iOS 7. Just look at the new Control Center, or at navigation bars and toolbars and the way they let you see through them a blurred image of your content. This is not just for fun: it gives a completely different look to the UI, which somehow “adapts” to user content and changes so that it appears more in tune with it. Another great example of coherence, by the way. But the main tenet behind the use of transparency is context, as the Control Center case makes clear.

Now, I will not make any attempt at defining or describing “purity”, the last quality I have listed for clarity. But is it not true that the iOS 7 home screen looks really pure, with all those whites and pastel colors?

This post has become long enough, so I will move on to the rest of the principles behind the iOS 7 redesign in a new post.

Enjoy, for now, the clarity brought by iOS 7.

Categories
Tutorials

Hitchhiker’s guide to MKStoreKit

In-App Purchase is one of those great features of the iOS ecosystem that I wish were easier to understand and implement in my apps. In the end, it is just machinery that will not really add value to my apps, and it would be great if the Apple motto “it just works” could be applied to StoreKit as well.

A very good read to start with is this tutorial on Ray Wenderlich’s blog, which explains all the steps required, from setting up your in-app purchase items in iTunes Connect to implementing MKStoreKit in your app.

Actually, so far I have found that the most convenient way to add In-App Purchase support to my apps is through Mugunth Kumar’s MKStoreKit. It makes things much easier and almost straightforward. On the other hand, MKStoreKit is presented by its author in a series of posts on his blog that are a bit sparse and somehow fail to give a quick view of the way you are supposed to make things work.

In this post, I am going to summarise the steps required to integrate MKStoreKit into your app for non-consumable, non-renewable items. I will assume that all the App Store paraphernalia has already been dealt with; but if you are just starting out with In-App Purchases, you may want to read the first part of the aforementioned tutorial.

So, getting to the meat of it, here is what you need to do to set up and use MKStoreKit in your app:

1. In MKStoreKitConfigs.h, define macros for all of your items, e.g.:

   #define kKlimtPictureSetId @"org.freescapes.jigsaw.klimt.pictureset"
   #define kKlimtAltPictureSetId @"org.freescapes.jigsaw.klimt.altpictureset"

2. Create an MKStoreKitConfigs.plist file where you list all of your items; it could look like the one shown in the picture below.

3. In your app delegate, call:

   [MKStoreManager sharedManager];

   in order to initialize MKStoreKit and give it time to retrieve info from the App Store while the app is initialising;
4. Whenever you want to check whether a feature has been bought, call:

   [MKStoreManager isFeaturePurchased:kKlimtPictureSetId]
5. When the user buys a feature, call:

   [[MKStoreManager sharedManager] buyFeature:kKlimtPictureSetId
                                   onComplete:...
                                  onCancelled:...];

6. To implement the required “restore purchases” functionality, call:

   [[MKStoreManager sharedManager] restorePreviousTransactionsOnComplete:^{
       [self handlePurchaseSuccess:nil];
   } onError:^(NSError *error) {
       [self handlePurchaseFailure:error];
   }];

That is all there is to it! Really straightforward and nice.
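
As a closing illustration, here is a minimal sketch of how the check from step 4 might gate content at runtime; loadKlimtPictureSet and offerKlimtPurchase are hypothetical methods of your own:

if ([MKStoreManager isFeaturePurchased:kKlimtPictureSetId]) {
    // already bought: unlock the content right away
    [self loadKlimtPictureSet];
} else {
    // not bought yet: show your purchase UI, which eventually
    // calls buyFeature:onComplete:onCancelled: from step 5
    [self offerKlimtPurchase];
}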

Categories
Digilife

Core Graphics Image Interpolation Performance

Recently, I have done quite a bit of Core Graphics programming and discovered that one key point in ensuring that an app performs well is choosing the right CGContext interpolation level. Interpolation, according to Wikipedia, “is a method of constructing new data points within the range of a discrete set of known data points.” Interpolating is what happens when you scale an image up, but also when you scale it down. In both cases, interpolation has a tremendous impact on the result of the scaling.

Core Graphics’ CGContext allows 4 different levels of interpolation: kCGInterpolationLow, kCGInterpolationMedium, kCGInterpolationHigh, and kCGInterpolationNone. Their effect is pretty trivial to describe: kCGInterpolationNone will give you the most jagged result of all, kCGInterpolationHigh the smoothest. What is less clear from the outside is what impact interpolation will have on your app’s performance.
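
For reference, here is a minimal sketch of how the interpolation level is chosen before drawing; drawScaled and its arguments are hypothetical, the two CG calls are the relevant part:

#import <UIKit/UIKit.h>

void drawScaled(CGContextRef ctx, CGImageRef image, CGRect targetRect) {
    // pick the speed/quality trade-off before drawing;
    // the setting applies to subsequent drawing into ctx
    CGContextSetInterpolationQuality(ctx, kCGInterpolationMedium);
    CGContextDrawImage(ctx, targetRect, image);
}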

So, I set up a benchmark test, and here are the results.

The test

In my benchmark test, I draw a set of 6 different 649×649 b&w bitmap images to build a set of 12 different jigsaw puzzle tiles. The bitmap images are scaled down by a factor of 5.0 (i.e., to 129.8×129.8) to make them fit on a 320×480 display (at retina display scale).

The outcomes

Run times were measured using Instruments, considering only the time spent inside the CGContextDrawImage function, so as to remove all effects not related to the interpolation itself.

The device used was a 4th gen iPod touch.

– kCGInterpolationLow: 969 msec

– kCGInterpolationMedium: 1690 msec

– kCGInterpolationHigh: 2694 msec

– kCGInterpolationNone: 545 msec

As you can see, there is a big factor between no interpolation and high interpolation (a factor of almost 5). The question arises whether this time is spent to any good purpose, so let’s take the visual result into account.

Visual Outcome

As to the visual outcome, if you compare High or Medium with Low or None, the difference is staggering. Also noticeable, IMO, is the difference between None and Low, while the difference between High and Medium is not particularly visible in this particular test.

Given the clear difference in run times, in my particular case I ended up using `kCGInterpolationMedium`.

kCGInterpolationNone

kCGInterpolationLow

kCGInterpolationMedium

kCGInterpolationHigh

Categories
Tutorials

iOS6: dynamic autorotation

One of the most intrusive changes brought by iOS6 is the way autorotation is handled in UIViewControllers. If you have an autorotating app for iOS5, you will need to change it to correctly support autorotation under iOS6. If you develop an app which is supposed to run both on iOS5 and iOS6, then you will have to handle autorotation in the old as well as the new way.

In this post, I am going to provide a simple solution to a problem which, from what I have seen around me, does not have an entirely trivial solution. The problem statement is the following: a view is allowed to autorotate only under certain conditions; otherwise, it is frozen (as far as autorotation is concerned).

E.g., you take a screenshot of your UI at a given moment, then display it, maybe applying some effects to it. If the device is rotated in this context, your overall UI will rotate accordingly, while the snapshot you took will not (it will still reflect the initial device orientation). Now, what you want is to freeze autorotation while the snapshot is shown. Another example: on top of your fully “elastic” UI, you display some piece of information which is not meant to autorotate. Again, what you want is to freeze autorotation while that piece of information is displayed.

Under iOS5, this was really straightforward, because each time an autorotation event is detected, UIKit sends your controllers the shouldAutorotateToInterfaceOrientation: message. There you have a chance to deny rotation to a specific interface orientation according to your own criteria.

Under iOS6, it is equally straightforward except for the unlucky naming of the cornerstone iOS6 autorotation method, namely shouldAutorotate. That name leads you (or at least led me) into thinking that you can decide there whether (and when) your view can autorotate. Wrong. The shouldAutorotate method actually serves an optimisation purpose: if your view controller’s shouldAutorotate returns NO, the framework will not forward any autorotation messages to it.

So, if you want to control the conditions under which your controllers autorotate, you will have to either leave that method undefined or define it to always return YES. The real “meat” of autorotation control thus lives in the supportedInterfaceOrientations method. E.g., it could be defined as:

[sourcecode]
- (NSUInteger)supportedInterfaceOrientations {
    if ([self canAutorotateNow])
        return UIInterfaceOrientationMaskAll;

    if (UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation))
        return UIInterfaceOrientationMaskLandscape;
    return UIInterfaceOrientationMaskPortrait;
}
[/sourcecode]

You see, the idea is to check whether autorotation is frozen at the moment supportedInterfaceOrientations is called; if it is, return as supported only the orientation corresponding to the current status bar orientation.
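
For completeness, here is a minimal sketch of the companion pieces under the same assumptions: shouldAutorotate always returns YES so that UIKit keeps querying supportedInterfaceOrientations, and canAutorotateNow is backed by a hypothetical flag of your own that you toggle around the frozen state:

[sourcecode]
// hypothetical flag, set to YES while the snapshot/overlay is shown
@property (nonatomic) BOOL autorotationFrozen;

- (BOOL)canAutorotateNow {
    // consulted by supportedInterfaceOrientations above
    return !self.autorotationFrozen;
}

- (BOOL)shouldAutorotate {
    // always YES: freezing is handled in supportedInterfaceOrientations
    return YES;
}
[/sourcecode]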