Mobile Device Testing with Mobile Labs deviceConnect

For the last several years, companies of all shapes & sizes have been building and maturing mobile development teams. Whether it’s native app or responsive web development, ramping up teams for mobile programming has become a priority for every modern organization. Until now, those efforts have focused on development while testing has been treated as an afterthought. However, the testing approach must also change in order to support these new mobile implementations! A focus on proper testing tools and best practices is essential to maintaining quality & customer experience. In other words, the testing approach must scale along with the development efforts in order to deliver high-quality applications and websites to a growing mobile audience.

For most organizations, manual testing is the primary option. This means testers have a library of devices to use in-hand for testing mobile websites and apps. As the device landscape has grown, the time and effort required to maintain a device library has grown as well, making this an unscalable approach. Additionally, these device libraries are not available (or must be replicated) to testers working remotely (e.g. offshore, work from home, etc.). In other words, manual testing has significant limitations particularly if teams are distributed across regions or countries.

Who’s Impacted?
Quality Assurance (QA) testers aren’t the only ones impacted by the limits of the manual testing approach. Designers, web developers, and native app developers all feel the pains of validating their designs and code on the variety of supported devices. Production support engineers have unique challenges because they must support not only the devices & platforms but also potentially more than one version of your applications. And of course, the business will inevitably want to see and deliver demos of the applications on one or more devices before the official release of a site or app. For each of these scenarios, a physical device library has been the only option, and it is clearly an unrealistic solution.

The Solution
Enter Mobile Labs. Mobile Labs deviceConnect is “an on-premise, private cloud mobile device testing platform.” In other words, it’s a cabinet that houses a set of physical devices (up to 48!), and provides network users with remote access into physical devices. These are not simulators or emulators, but the real devices! And because they are accessible (via remote control) only on the internal network, the process of remoting into the devices benefits from the network security already established in your organization.
Mobile Labs cart
In addition to providing individuals with access to devices, this tool includes support for automated testing and deployments as well as remote debugging on devices. The automated testing integration hooks are provided through Mobile Labs Trust and support tools like HP’s QuickTest Professional and Unified Functional Testing (UFT). You can use Selenium scripts to automate web testing for responsive and mobile-targeted web applications. As of the deviceConnect 6.0 release, Appium and Calabash are also supported for native application testing. There are also scripts for automating deployments to deviceConnect, which integrate well in most continuous integration environments. On the developer front, deviceBridge is a new product that lets developers connect their machines to devices in deviceConnect so they can run code on those devices. Currently, only iOS is supported, but Android is coming soon. This feature is a great addition in supporting the QA testing cycle and eliminating the “it works on my machine” excuse.

In the Wild
By now, you’re probably thinking, “this tool sounds pretty good on digital paper, but what’s the real story?” I’m glad you asked! We helped a Fortune 100 financial services client set up their 48-slot cart and implement processes, procedures, and device management around its usage. Their primary goals were to provide onshore & offshore resources with physical devices to test native & web apps while reducing the number of device libraries required across the organization.

While setting up the cart was fairly straightforward, developing the processes, governance, and training took more effort. You can buy a tool, but until you build procedures around it, you won’t realize its true value. We started by creating processes for supporting & maintaining the cart. In order to manage the potentially large volume of users, we first determined how we would grant access. Ultimately, we decided to open access to anyone who got manager approval. Limiting access would put up barriers to adoption, and since this was a new concept in the organization, we determined allowing anyone to request access would increase adoption rates. To support the anticipated adoption rates, we also developed a wiki-style introduction page with information about the tool as well as frequently asked questions and tips for using deviceConnect.

In the same vein as supporting new users, we also created a process for handling upgrades. Knowing that this cart would be utilized by people at all hours of the day, we came up with a strategy to coordinate outages with the QA group, and communicated those outages in a timely manner. Mobile Labs does a great job of releasing cart updates to support the latest operating system versions very quickly after they are released to the public. In the absence of a test-environment cart, we had to be prepared to move quickly & efficiently on deviceConnect upgrades in order to support the demand for new OS versions in the cart. All of this production support required close collaboration between the internal cart support team and the QA team.

Device management was the final piece needed to ensure success. Developing a strategy to manage the devices and OS versions supported by the organization was key to the cart’s success over the long term. Two major components to device management were naming someone as The Device Manager (aka a governance leader) and access to reliable analytics of mobile traffic on which to base decisions. The Device Manager was responsible for building a team of stakeholders from around the organization including QA, development, the business, and production support at a minimum. This provided a breadth of knowledge & input from across the various departments and user touch points. As an added bonus to supporting the cart’s usage, a good device management strategy also supported the overall organizational strategy around mobile device support.

Overall, people loved being able to test on a variety of devices from wherever they were located. We even had several cases where business owners were able to demo new, unreleased features to stakeholders without having to figure out how to project from an iPhone or Android phone. Managers were relieved at not having to manage a large device library. The number of tests executed increased and the cost per test decreased by supporting offshore testing teams. The business continues to see the value in the decreased overhead of device library management and an increased number of test cases being executed.

BlendConf 2013: Immediate Takeaways

I had a blast at BlendConf this past weekend, and am still marinating on all the cool information that I gleaned from it. I want to share some things that I am already putting into action.

But first, what is BlendConf?! I’m glad you asked. The 3-day conference is primarily aimed at web designers and developers, and has 3 main tracks: User Experience, Design, and Development. The first day (Thursday) is workshops. The second day (Friday) is the traditional track talks. The third day (Saturday) is the blend day, where experts in each track speak in a session that is not their specialty. For instance, a developer talking about open source development in the design track. It was a really great opportunity to get exposure to the various aspects of software (web) development, especially considering this was the inaugural year.

Practical Application from a Talk

The first comes from Amanda Costello’s talk on working with specialists, specifically people with PhDs and other highly specialized individuals. This struck me as particularly interesting as I currently work with many healthcare professionals who are highly educated and specialized (PhDs and MDs). Amanda had a lot of great tips and stories about some of her experiences working with people in higher education.

At one point, she talked about giving her stakeholders (the PhDs) homework which was a list of 4 really simple questions to help her get the information she needs to do her job. Keep in mind that she’s a content strategist, so her main goal is to extract the specialized content from her stakeholders, and get that content onto a website.

Here are the 4 questions:

  1. Who is it for?
  2. Who is it not for?
  3. Are there other sites like this? What makes it different?
  4. What is its name?

When I came into work on Monday after the conference, one of the first e-mails I received was a request for what questions we have around some app requirements where the stakeholders are PhDs and MDs. The app is a mobile app for a healthcare provider, and we recently discovered the requirements and scope had changed since we last talked about the app (about 2 months ago). When I started to get frustrated about what to ask (I don’t know what I don’t know), I thought, “What would Amanda do?” I immediately pulled out my conference notes, and pulled up the 4 questions, all of which had some direct relevance. I was able to put together a list of “comprehensive questions” (see above) so my stakeholders would be more prepared for the face-to-face conversation we need to have.

Thank you, Amanda!

Practical Abstraction From One Point of a Talk

The second major lesson that I am already benefitting from is being more intentional about what activities take up my time. Cameron Moll, founder of Authentic Jobs, spoke about Authenticity in Creativity. He had several powerful stories reinforcing his point about authenticity, but something stuck out beyond the obvious topic.

One of his points was to be skeptical of what technology is and what it is not. He applauded the fact that devices were banned/discouraged in the conference sessions, and talked at length about changing his technology usage habits to ensure his own personal authenticity (e.g. keeping his phone in his pocket at dinner). This made me really consider what I hold important and what takes my time. What activities steal my time from other more useful activities?

Afterwards, I did 2 things as a follow-up for myself. The first was to make a list of all the commitments I’ve made and activities I participate in (or want to). I prioritized & refined the list as I’m a chronic over-committer who needs to clear her plate regularly. The second thing has had a much broader reach. Every time I go to a website or open an app, I try to think consciously about what I’m doing. Is this really how I want to spend this time? Are there other things that are more important to me? Then I think of my list.

Since that session, I have played very little Candy Crush Saga and surfed Facebook a lot less.

Practical Sense of Greater Purpose

The final thing I have been pondering came from the keynote by Carl Smith. There really is a greater purpose in programming than simply meeting requirements or getting a paycheck. Carl talked in the keynote about a lot of things, in particular leaving a high-paying job to go do something he wanted to do. At the end of his talk, he quoted Invictus, the poem Nelson Mandela recited for fellow prisoners on Robben Island, and I came to a full realization. I am the master of my fate: I am the captain of my soul.

This theme resonated for the rest of the conference. In every session, I heard a consistent theme that we are building amazing things and changing the world one little bit at a time. Literally! One 0 and 1 at a time. I came away from the conference with a renewed purpose not to get mired down in politics or the muck of frustration. I have renewed determination to find my purpose and change the world one bit at a time.

And I was reminded of Bill Nye’s WWDC talk where he kept saying the phrase, “…we could, dare I say it, change the world!” That talk isn’t posted publicly, but this video will give you a sense of what I’m talking about.

I encourage you to remember your purpose, whatever it is, is greater than the politics you play. You could, in fact, change the world!

Testing a Location-Aware App

In order to support this post, I have published a demonstration project on GitHub. Feel free to clone it and follow along.

I recently posted about how to implement location tracking via iOS Location Services, and felt a follow-up would be useful. It’s one thing to make a location-aware app, and it’s quite another to see it work in the wild. Testing is very important here. But you don’t have time to drive around, you say? You don’t have the money to fly across the country to see how your app behaves in other regions? Fortunately, Apple has provided some tools and mechanisms to test location-aware apps from within the comforts of your own development environment.

With the release of iOS 5, location simulation was added to the corresponding development tools. As a result, there are a few ways to simulate locations.

  • Xcode: GPX files, schemes
  • Simulator (iOS 5+): manually set the location
  • UI Automation Instrument: load up various lat/long points via a script to simulate movement


GPX (GPS exchange) files are XML files that conform to the GPX schema, which allows interchanging of GPS data. GPS systems generate and consume this format, and so does Xcode! Creating a location is fairly simple if you have the latitude and longitude of a point. You can also create routes and a whole host of location files that can be used to simulate locations. The following is a simple GPX file that targets a location in Lincoln, Nebraska, USA.

<?xml version="1.0"?>
<gpx version="1.1" creator="Xcode">
      <wpt lat="40.828359" lon="-96.699257">
          <name>Lincoln, NE</name>
      </wpt>
</gpx>
Once you have added your GPX files to your Xcode project, there are 2 ways to utilize them. The first is by setting the default location in a scheme. If you have several locations you need to test regularly, you can create a scheme for each location to make it simpler to test your app in each location quickly. When you run your app under the custom scheme, Xcode will automatically simulate the app running in the location according to the scheme configuration. To do this:

  1. From the scheme menu, click New Scheme…
  new scheme menu

  2. Enter a name, and click OK. I used my project name + location. For example, CSLocationTestKit-Lincoln, NE.
  3. From the scheme menu, click Edit Scheme…
  4. In the Options tab, check Allow Location Simulation in the Core Location section.
  5. Select the GPX file for the Default Location. If you added a GPX file to your project, it should be displayed in the list for you to select. Alternatively, you can add an existing GPX file from this menu.
  set default location on the scheme

But what if I want to change the location in the middle of simulator testing? You can change the location in Xcode during runtime as well. After your app starts up, bring up Xcode and ensure the Debug pane is showing (the bottom view). Select the blue arrow to get a list of locations including those specified in GPX files in your project.

change location in debug view of xcode


To change the location in the simulator directly, click the Debug menu, and select Location. This can be done at runtime while debugging or while navigating through the simulator detached from any Xcode projects.

Debug menu then Location menu then click on custom location
The simulator options are vastly simpler and thus more limited. You can select Custom Location… to enter a set of coordinates, but this is less robust than using a GPX file. However, if you have a need to do some ad hoc testing of coordinates, this method is sufficient.

UI Automation Instrument

The final and arguably the most powerful method of testing a location-aware app is to use the UI Automation instrument. It is also the trickiest because Apple’s documentation of Instruments isn’t very explicit on how to use it for testing movement among locations. Once you get started though, it gets much simpler.

In my code example, I have a basic view that shows the current location coordinates. Once you click on the map button, the map view is displayed along with a button to turn region monitoring on (“Watch for the country club!”). I want to automate testing on this screen because I want to verify that my region monitoring logic works properly without having to go anywhere. My approach, for demonstration purposes, is to use a list of GPS coordinates to simulate a driving route from my current location to the Lincoln Country Club. This route can be extracted from a GPX file, and injected into a script to simulate movement. You can also simulate this movement by making the GPX file with the route your default location in the scheme, but I wanted to demonstrate changing location in the UI Automation instrument (via JavaScript). Note: I created this route in Google Maps, exported the KML, and then converted KML->GPX via GPSBabel.
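The heart of such a script is UIATarget’s setLocation call. As a rough sketch (this only runs inside the Automation instrument, and the coordinates and delay below are illustrative rather than the actual route from my GPX file):

```javascript
// a few lat/long points pulled from the route's GPX file
var route = [
    { latitude: 40.828359, longitude: -96.699257 },
    { latitude: 40.820500, longitude: -96.689900 },
    { latitude: 40.813200, longitude: -96.681700 }
];

var target = UIATarget.localTarget();
for (var i = 0; i < route.length; i++) {
    // move the simulated device to the next point on the route
    target.setLocation(route[i]);
    // pause between updates to mimic driving speed
    target.delay(2);
}
```

The app under test receives these as ordinary Core Location updates, so region monitoring logic fires exactly as it would on the road.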

To use the UI Automation instrument:

  1. Click and hold the Run button in Xcode, and select Profile (or click Product > Profile from the menu at the top)
  Run button long click results in profile being an option

  2. Instruments will open. Select Automation and click Profile
  Instrument selection screenshot

  3. Stop the recording that is automatically triggered by Instruments.
  4. In the Scripts section, click Add to create a new script. You can import existing scripts as well as call up scripts you’ve recently used/exported. I have included a sample script to test driving a route to a country club in Lincoln, NE. This script works best if you use the Lincoln, NE GPX file as the starting location.
  import scripts screenshot

  5. In Xcode, click the Profile button again to trigger a profiling restart using the script you just imported/created. Don’t forget you have to stop the recording once your script has run.

Now, you have automated the testing of your driving route! At this point, you can add other instruments to your session in order to track allocations, leaks, etc. You can also run any variety of scripts to test various movement scenarios. Automating this kind of testing can make regression testing much more efficient. And because the test scripts are written in JavaScript (and are fairly primitive), you can enlist the help of JavaScript programmers who might not know Objective-C very well (or not at all). The UI Automation JavaScript Reference is very helpful for creating these scripts. The only major caveat that I’ve uncovered is that the Automation instrument reads values from the accessibility fields. For instance, if you want to check a value on a label, you need to make sure the accessibilityValue property is set.
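For instance, to let a script verify the coordinates label in an app like my demo, the view controller needs to keep the label’s accessibilityValue in sync (coordinatesLabel is a hypothetical outlet name):

```objc
//whenever the label text changes, mirror it into the accessibility
//value so the Automation instrument can read it
self.coordinatesLabel.text = coordinateString;
self.coordinatesLabel.accessibilityValue = self.coordinatesLabel.text;
```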

Supplemental Recommended Viewing/Reading
Testing iPhone Location App with Automation by Plain Old Stan
Location Awareness Programming Guide from Apple
Session 500 at WWDC 2011: What’s New in Core Location – good basic intro to location services, particularly the new features that came with iOS 5
Session 518 at WWDC 2011: Testing Your Location-Aware Application – this is a must watch! It has a lot of good info on how to setup your environment to test different locations and even moving from location to location.
Session 303 at WWDC 2012: Staying on Track with Location Services

How to Use Location Services in iOS

If you do any mobile development, chances are pretty good that you’ll want to utilize location tracking. While Apple has a good bit of documentation on the topic, one thing is missing: pragmatic usage of location services. For instance, there’s no clear direction on when you should update the location in your app, and that will certainly vary from app to app depending on your use case. Beyond that, which location service should you use? Apple provides a couple of options, so which one is best for your app? Battery usage is always a concern when it comes to tracking a user’s location. Do you really need to use GPS or is a less precise (less power hungry) location via Wi-Fi sufficient? And how are you going to tell your users what you’re doing with their location? Many users are wary of sharing location information for privacy reasons, but some of those same people will also allow access to their locations if they understand exactly what you’re doing with that information.

The first place to start, before writing any code, should be the Location Awareness Programming Guide that Apple has provided. Apple does a great job of introducing developers to the concepts and services they’ve laid out so you can start making some of these decisions. Next, you should consider how you will use a user’s current location. Are you using it to search for things in his/her immediate area? Perhaps you are calculating distances or providing a sorting mechanism. Does it need to update constantly? Or can you get it once and reuse it throughout your app?

You’ll learn from Apple’s documentation that they have provided 3 different location services to track a user’s location.

  • Standard Location Service: uses GPS, cell, and Wi-Fi to determine location; most accurate; requires more power
  • Significant Change Location Service: uses cell signal only; low-powered option; iOS 4.0+
  • Region Monitoring Location Service: monitors boundary crossing for a defined region; iOS 4.0+

The standard location service is the most well known service. It is also the most power hungry, so be sure to evaluate whether you really need this level of accuracy. The significant change service is better suited for scenarios where you need to track the location constantly. If you have an app that is tracking a route, this is a great option because of its lower power requirements. Additionally, this can run even when the app is in the background. The region monitoring service will monitor a user’s location, and will dispatch notifications when the user enters or exits the specified region. This could be useful if you want to send notifications, change phone settings, or wake your app up when your user moves into a particular region. For more detail and demonstrations of these services, check the WWDC 2011 session 500 What’s New in Core Location.
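For a sense of how little the code differs between services, here is a minimal sketch of starting and stopping the significant change service (assuming a CLLocationManager ivar with its delegate already set):

```objc
//low-power tracking via cell towers; location updates arrive through
//the same delegate callbacks as the standard service
[_locationManager startMonitoringSignificantLocationChanges];

//and when you no longer need updates:
[_locationManager stopMonitoringSignificantLocationChanges];
```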

Which one you choose depends on what you’re doing with the location data. Using more than one service at the same time is not recommended since the services listen for location updates the same way. I recommend putting all location code in a single class so location tracking is simple to manage. You don’t want 5 instances of CLLocationManager draining battery by overusing GPS or other radios. Additionally, you can encapsulate the CLLocation work in one place. You could go so far as to employ the Singleton pattern, but that’s outside the scope of this post.

Once you identify the appropriate location service to meet your app’s requirements, there are a few other things you should consider:

  • What if the user turned off Location Services on his device?
  • What if the user denies location permissions to your app?
  • What if no location can be found (maybe the user is in a basement, parking deck, or the subway) after a minute, 2 min, 10 min, etc.?
  • Did you know MKMapView’s showsUserLocation attribute also triggers location tracking? If you’re using a map that has that attribute set to YES, you’ll need to be sure to set it to NO when the map view disappears

It’s really important to avoid letting your app get stuck in a state where it’s endlessly tracking the user’s location. Burning battery is a sure way to attract bad app reviews. You may find your app in this scenario if the phone can’t get a location accurate enough to meet the requirements set up by the CLLocationManager. You should set the desiredAccuracy to the highest level your app can withstand, and use the distanceFilter to get a location (do you really need the best location w/in 100 meters?). That still doesn’t guarantee the device will get a location. If your user is in a parking deck or a basement or maybe a building with really bad service, the device may never get a location that meets the accuracy and distance requirements. It’s a good idea to use NSTimers or the dispatcher to stop location tracking if a location can’t be acquired within a reasonable amount of time.

In the following code sample, I use a timer that allows the location manager to search for a location up to 10 seconds before I stop the service.

//my custom method to configure and kick off location tracking
- (void)beginUpdatingLocation {
    _locationManager.distanceFilter = 1000;
    _locationManager.desiredAccuracy = kCLLocationAccuracyBest;
    [_locationManager startUpdatingLocation];
    //give up after 10 seconds if no acceptable location arrives
    self.locationTimer = [NSTimer scheduledTimerWithTimeInterval:10.0 target:self selector:@selector(stopUpdatingLocation) userInfo:nil repeats:NO];
}

- (void)stopUpdatingLocation {
    [_locationManager stopUpdatingLocation];
    [self.locationTimer invalidate];
}

//listen for the new location
-(void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation{

    if (newLocation.coordinate.latitude != oldLocation.coordinate.latitude || newLocation.coordinate.longitude != oldLocation.coordinate.longitude) {
        self.currentLocation = newLocation;
        [self stopUpdatingLocation];
    }
}

As you can see, I’m using a distance filter of 1000m, which is a little more than .5 mile. That level of accuracy is acceptable for calculating distances in this app scenario, and it gives the location manager a slightly bigger field to hone in on. You also want to keep in mind the phone might return the same location it had before. In my code above, I allow the tracking to continue if I get the same location as before in case the phone lags in updating the location (a common occurrence).

The final consideration in implementing location services is user education. As developers, we frequently forget that users don’t automatically know what our app is supposed to do, and what it may be doing in the background. We must take any opportunity we can to inform our users if they have disabled features that our app depends on. Apple gives users full control over what apps have access to location services on the device, and we have to respect that.

Use the following bit of code to check to see if location services are enabled, and that the app has been granted permission to use location services.

if ([CLLocationManager locationServicesEnabled] &&
    [CLLocationManager authorizationStatus] != kCLAuthorizationStatusDenied) {
    //queue start location updates
    [self beginUpdatingLocation];
} else {
    //show an alert to let the user know why certain features may be unavailable since location services are disabled
}
If location services are important in your app, you’ll want to explain to users why those features aren’t working in the event they’ve disabled location services or simply prevented your app from using them. An alert is the simplest approach, but you can implement it however you wish. The goal is to educate your users. Additionally, you should set the purpose attribute on the CLLocationManager (available iOS 3.2-5.x) or add the NSLocationUsageDescription key (available iOS 6.x) to your Info.plist. The value of this property is what iOS displays whenever the user is prompted to grant permission for the app to use location services. This is another education opportunity you don’t want to miss.
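For the iOS 6 route, the key and your explanation go in the app’s Info.plist. The string below is just an example of the kind of explanation that helps users say yes; write one that fits your app:

```xml
<key>NSLocationUsageDescription</key>
<string>Your location is used to find results near you and sort them by distance.</string>
```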

All in all, the location services that Apple provides are more robust than ever. There are several options available for a variety of scenarios, so be sure to take a close look at your users’ needs and make the best decision for them. Ultimately, your users have control over whether your app gets access to their location, and it’s important to make it worth their while (and their battery life!).

Format for Displaying TODO in the Xcode Jump Bar

I can never remember the exact format to get TODOs to show up in the Xcode jump bar. And for some reason, I can never pick the right search terms to look it up. So here it is for my reference and yours! Note that this works when the comment is outside of the method. It used to work inside the method, but that has changed.

// TODO: finish the implementation!
- (void)someMethod {
    NSLog(@"method not implemented");
}

That will result in the jump bar displaying whatever is in the comment following TODO: like this:

jump bar that shows

iOS 4.x Background Color Quirk

Today’s iOS quirk is again related to UITableViews. This time, it’s the background color.

The Problem

I’m basically creating a form in a UITableView, and there are several button-type elements like segmented controls (actually they’re fake segmented controls due to customization needs) and a big search button. As a result, I want the UITableViewCells to be transparent so the table view formatting isn’t displayed. However, in iOS 4.x, all the obvious stuff that worked in iOS 5.x+ didn’t exactly work. I ended up with black corners around my buttons, as seen in the screenshot below.

iOS 4.x Broken Clear Color on Cell

The Solution

Very simple. Apparently, iOS 4.x doesn’t care what color you set the background of the UITableView to in Interface Builder (at least, not when it’s clear). You have to set the background color to clear in the code.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.myTableView.backgroundColor = [UIColor clearColor];
}

This results in the following:

iOS 4.x Fixed Clear Color on Cell

Scroll position, UITableViews, and You

Came across a scenario on an iOS app today that didn’t have a clear answer in a single place. I basically had to cobble together a solution for my problem, and wanted to document it here.

The Problem

I have a UITableView that I’ve developed in order to mimic a form. My “form” has a single UITextField in it, but this should work with any number of text fields in a UITableView. The text field is at the very bottom of my form, and I need to set the scroll position of the table when the keyboard is shown so the field remains in the user’s view.

The Solution: Pseudo Code

  • Make the controller conform to the UITextFieldDelegate protocol
  • Implement the textFieldDidBeginEditing method of the UITextFieldDelegate protocol to add the UITapGestureRecognizer
  • Create methods to respond to the keyboard display/hide notifications that will resize the scroll content and set the scroll position
  • Add notification observers to listen for keyboard display/hide notifications
  • Create a UITapGestureRecognizer that will be used to dismiss the keyboard when the user taps on the UITableView

The Solution: Actual Code

UITextFieldDelegate Protocol

Set your class to conform to the UITextFieldDelegate protocol, and implement the textFieldDidBeginEditing:(UITextField *)textField method. This is where you’re going to add your gesture recognizer whenever the user taps in the field so the view knows to change scroll position.
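A minimal sketch of that delegate method, assuming the _tap and _activeField ivars discussed elsewhere in this post:

```objc
- (void)textFieldDidBeginEditing:(UITextField *)textField {
    //remember the active field so the keyboard handler can scroll to its cell
    _activeField = textField;
    //listen for taps on the table so we can dismiss the keyboard
    [self.myTableView addGestureRecognizer:_tap];
}
```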

Keyboard Notifications

Create methods to respond to the display and hiding of the keyboard. The following code was somewhat ripped from Apple’s documentation on managing scroll position for keyboards. The trick I found was to use the UITableView’s scrollToRowAtIndexPath:atScrollPosition:animated: method to get the scroll position I really wanted.

-(void)keyboardWasShown:(NSNotification *)theNotification{
    //adjust the scroll position so the zip code field is in view when the keyboard shows up
    NSDictionary *info = [theNotification userInfo];
    CGSize keyboardSize = [[info objectForKey:UIKeyboardFrameBeginUserInfoKey] CGRectValue].size;

    //set the insets to account for the keyboard height
    UIEdgeInsets contentInsets = UIEdgeInsetsMake(0, 0, keyboardSize.height, 0);
    self.myTableView.contentInset = contentInsets;
    self.myTableView.scrollIndicatorInsets = contentInsets;
    UITableViewCell *cell = (UITableViewCell*) [[_activeField superview] superview];
    [self.myTableView scrollToRowAtIndexPath:[self.myTableView indexPathForCell:cell] atScrollPosition:UITableViewScrollPositionTop animated:YES];
}

-(void)keyboardWillBeHidden:(NSNotification *)theNotification{
    //reset the content insets
    UIEdgeInsets contentInsets = UIEdgeInsetsZero;
    self.myTableView.contentInset = contentInsets;
    self.myTableView.scrollIndicatorInsets = contentInsets;
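One caveat worth noting (my own addition, not something from the original code): the keyboard frame in the notification is reported in screen coordinates, so on a rotated (landscape) interface the raw width and height can be swapped. A common refinement is to convert the frame into your view’s coordinate space before reading its size:

    //convert the keyboard frame from screen coordinates into the table view's
    //coordinate space so the height is correct regardless of interface rotation
    CGRect keyboardFrame = [[info objectForKey:UIKeyboardFrameBeginUserInfoKey] CGRectValue];
    CGRect convertedFrame = [self.myTableView convertRect:keyboardFrame fromView:nil];
    CGSize keyboardSize = convertedFrame.size;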

Add observers to the notification center that listen for the keyboard shown and hidden events so you can adjust the scroll position. I did this in the init method of my class, but there are plenty of other places this code could live.

        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardWasShown:) name:UIKeyboardDidShowNotification object:nil];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardWillBeHidden:) name:UIKeyboardDidHideNotification object:nil];
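If you add the observers in init, remember to balance them with removal so the controller doesn’t receive notifications after it’s gone. The original code doesn’t show this, but a typical counterpart (assuming manual reference counting, as was common on iOS 4.x) looks like:

-(void)dealloc{
    //stop listening for keyboard notifications when the controller goes away
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc]; //omit this call under ARC
}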

Create the UITapGestureRecognizer, and set it on the UITableView of your class. I created an instance variable (_tap) so I could add/remove it at various places in my code.

_tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tableTouched:)];
[self.myTableView addGestureRecognizer:_tap];

And here’s the tableTouched: method. As I said before, I have only one UITextField in my table, and I created an ivar (_activeField) so I could reference it quickly within the code. I removed the UITapGestureRecognizer at this point because I don’t want the gesture to interfere with any otherwise normal table gestures.

-(void)tableTouched:(UITapGestureRecognizer *)sender{
    //dismiss the keyboard and remove the gesture so normal table gestures work again
    [_activeField resignFirstResponder];
    [self.myTableView removeGestureRecognizer:_tap];
}

Submitting to the Apple App Store

Besides having an app to submit, there are a few things that are imperative to remember when submitting an app to the Apple App Store. Fortunately, Apple has documented a LOT of these things.

Prepare the App

Obviously, the first step is to prepare the app for packaging. Once you’ve gone through testing (ad hoc distribution makes that simple) and are ready to package the app for submission, you’ll want to check out the developer documentation on how to prepare for the app submission (provisioning, signing, etc.).

Preparing for Submission

  1. You should use your company-approved Apple account with a subscription to the Apple Developer Program with Team Agent access to provision, sign, and submit the app to the App Store. A consulting firm can build and submit for you, but they must do so under your account with your certificates as you maintain legal rights to the application, submission, etc. (this is very evident when signing up for the Apple developer program).
  2. Start at the App Store Resource Center (which has info on both iOS and Mac app submissions)
  3. Read all of the App Store Submission Tips!
  4. Read through the App Store Review Guidelines
  5. Be sure to read the iTunes Connect Developer Guide as well. This was, by far, one of the most important documents that can be overlooked. Apps are submitted via iTunes Connect, which is a completely separate account from those on the developer program. The document includes specifications for screenshot sizes (you can’t submit if they’re not right), icon specifications, and more. This is a must read if you’ve never submitted an app before or if you want to verify the current requirements (they do change over time).

Submitting the App

  1. On the computer you will be submitting from, do a final trial run (debug) of the app on a device to ensure it builds and runs without crashing (a top reason for app rejection).
  2. Login to iTunes Connect, and create your new version. If you want to control the app release date, select the version release control option when prompted. Otherwise, the app will be available as soon as Apple approves it.
  3. Once you’ve completed the new version information (effectively, updating the app record), go into the Version Details of the version you’re submitting for update, and click “Ready to Upload Binary” (refer to the “Editing and Updating App Information” section of the iTunes Connect Developer Guide).

Potential Gotchas

  • An app can be updated (updating an existing, approved app) with a different codebase (or the same codebase with changes) under the following conditions:
    • Bundle IDs match
    • Certificates match (signed with the certificates from the same account)
    • If the app you’re updating is Universal, you must submit a universal app as an update. For example, you cannot submit an iPhone-only version to update a Universal app (which would be removing iPad functionality). However, you can submit an app update that upgrades an iPhone-only app to a Universal app.
    • Everything else can change at the version update (name, version, description, tags, screenshots, etc.)
  • Make sure you have access to an iTunes Connect account and that the developer program subscription is current!
  • Upgrading Xcode to the latest and greatest can resolve issues for developers, but it can cause issues for anyone not on the latest/greatest version if you edit the codebase in a different version of Xcode. If you want to upgrade, make sure your developer counterparts are ready to upgrade as well.
  • Make sure you can access iTunes Connect from your company network (or find a location where you can access it)! This is often blocked by network admins since it’s effectively connecting to iTunes.
  • Only the Team Agent for a company can login to iTunes Connect to do app submission, and the kicker is there is only one Team Agent allowed on an account. A Team Admin cannot submit apps via iTunes Connect (much less even login to iTunes Connect). And only the Team Agent can relinquish that role to another user in the developer program (managed through the Apple Developer portal).

Native or Mobile Web App?

When someone asks me, “should we go native or mobile web for our mobile app?” I immediately have one question: what type of interaction will your users have with your app? Will they be inputting their data, taking pictures, or searching through lists? Or will they be browsing content like an online catalog, wiki-type material, or data from a report? There are many factors to the native vs. mobile web discussion including target audience, application complexity, and many more. However, I submit that the type of content and interaction that you want to serve up should be at the heart of your native vs. mobile web decision.

The variety of user interactions in your app also has varying levels of complexity in navigation and thought process. The more complex the navigation can get, the more you should consider taking the native approach. As a user is working in your app, his navigation path may wind and weave into something more complex than you first imagined. This is where native mobile apps have a serious advantage. Simplifying navigation complexity is at the heart of good user experience, something that is imperative to a successful mobile presence. Native apps provide a higher level of usability at a lower cost because it is simpler to create clean and clear navigation using the components, methods, and philosophies pre-determined by the platform. Google and Apple are going to great lengths to provide developers with all the tools and guidelines they need to build consistent and standardized apps on their platforms. The result is that an iOS developer doesn’t necessarily have to create a lot of controls from scratch, nor does he have to determine the best way to lay out an intuitive navigation scheme. He simply follows the patterns and practices of the iOS platform, and he can quickly create a highly usable & performant mobile app. Even Android has caught up on user interface guideline standardization, enabling Android developers to create standards on a platform that is begging for common ground. While it may be true that native development can cost more upfront (learning the platform, etc.), the benefits of being able to quickly provide a highly usable and highly performant app will quickly make up for the costs.

But what about the costs of supporting multiple native platforms? Because the standards are very different amongst the platforms, you’re going to have to account for them whether you go native, mobile web app, or somewhere in between (hybrid). As a result, it is best to take advantage of pre-existing controls and best practices of the native platform if you can, as that will be where you can recover costs. For example, the presence of a back button on Android devices allows for a different type of navigation (and less code) than on iOS devices, which have only the home button. All navigation on iOS is done via gestures or navigation bars that all have to be coded for. However, these differences shouldn’t be ignored or shunned! You should embrace those differences because your audience has embraced them! They have already been trained on the standards and practices unique to the platform they’ve chosen. The more you can leverage the platform and what already comes with it, the easier it is to simplify your navigation and decrease your users’ learning curve. Again, there may be some additional costs up front, but in the long run, native is much better suited for applications with complex navigation and heavy user interaction (filling out forms, etc.).

On the flip side of the complexity scale, your mobile app may be more geared toward browsing type activities. Take Wikipedia, for example. They have taken the mobile web app approach (web app packaged via PhoneGap) because the purpose of their app is purely informational. Their content and infrastructure are already well suited for web content, and really all they needed was to fit the content into smaller devices and provide some basic mobile functionality. There’s nothing complex about browsing Wikipedia. The most complex thing you can do in their app is search! The static nature of content like that eliminates the need for a lot of custom development to make the app usable. Of course, you can make any web app just as usable as a native one if you have enough time. The HTML/CSS/JavaScript stack is extremely versatile, but it can be very time consuming to create highly usable interactive interfaces that work well across the various platforms and devices. You’ll probably need a UX (user experience) designer for either approach.

It’s important to note that a pure mobile web app is not the only option available for taking a mobile web approach. Tools like PhoneGap and other frameworks give us another option somewhere in between going 100% native vs. 100% (mobile) website because they take an app built in standard web technologies (HTML/JavaScript/CSS) and package it in a native wrapper. The wrapper PhoneGap provides also offers some basic hooks into the hardware and software features of the mobile device like access to the contacts list, calendar, GPS, and more. And PhoneGap isn’t the only option to consider. You can even create your own hybrid approach (Facebook, Netflix, LinkedIn) where some parts of the app are native (navigation) while others are web views (the newsfeed). On the other side of this hybrid approach is creating a native app that serves up web content via web views. This can be a very effective method of blending the performance of native with the cost savings of web to produce a very usable mobile application.

The honest answer to the native vs. mobile web app is not easy. It depends on your business needs, your customer needs, and how you want to interact with your audience on mobile devices. You may end up needing both at some level (a mobile site + a native app), which is becoming more and more common. If the vision for your mobile app includes a lot of user interaction (filling out forms, editing lists, making decisions, etc.), then you should really consider finding a way to go native even if it’s a hybrid approach. Yes, there’s some upfront cost to learning new platforms, but the time to develop a solid app with standardized platform behaviors is much shorter in the long run. Of course, you may not have the luxury of time or money to accommodate that upfront cost. You can still succeed in providing a solid user experience in the mobile web app approach as long as you keep things simple. Keep complex interactions out of the mix, and your mobile web app will serve your users just fine.


How To: Add .Mobi Files to the Kindle App on Android

I did some basic Google searching for this solution and found a lot of old and ridiculous info on how to do this. It’s actually a very simple solution!

  1. Connect your device to your computer
  2. Browse to the kindle folder on the device (typically under root)
  3. Drag the .mobi file from your computer to the kindle folder on your device

Voila! The Kindle app now has your .mobi file in its list.