A new Objective-C library for a new generation of APIs

By Greg Robbins and Tom Van Lenten, Software Engineers

Cross-posted with the Google Code Blog

Four years ago, we introduced an Objective-C library for Google Data APIs. At first, it supported a scant three services - Google Base, Calendar, and Spreadsheets. Perhaps more surprising is that it was written just for Mac applications; the iPhone SDK was still a year off. In the years since, the library has grown to support 16 APIs, and has been used in many hundreds of applications. In a fine example of unforeseen consequences, most of those applications run not on the Mac but on iOS.

The Google Data APIs were built on XML and the Atom Publishing Protocol, a reasonable industry standard for the time. But mobile, low-power, and bandwidth-limited computers are now the biggest audience for client software. Across the Internet, XML and AtomPub have given way to the lighter-weight JSON data interchange format.

Other fundamental changes have also shifted the API landscape. Password-based authentication is being supplanted by the more secure and flexible OAuth 2 standard. The number of APIs has grown dramatically, making it impractical to hand-craft data classes for all APIs and all languages. When services offer API improvements, developers want access to those changes as quickly as possible.

To support this evolving world, we are introducing a brand new library for Cocoa developers, the Google APIs Client Library for Objective-C. The library supports recent Google JSON APIs, including Tasks, Latitude, Books, URL Shortener, and many others. It is designed to make efficient use of the device’s processor and memory, so it’s a great fit for iOS applications.

The new library includes high-level Objective-C interfaces and data classes for each service, generated from the Google APIs Discovery Service. This lets the library model data not just as generic JSON dictionaries, but also with first-class Objective-C 2.0 objects. The classes include properties for each field, letting developers take advantage of Xcode’s code completion and syntax and type checking.

Here’s how easy it is to use the library and the new Google Books API to search for and print the titles of free ebooks by Samuel Clemens:

#import "GTLBooks.h"

GTLServiceBooks *service = [[GTLServiceBooks alloc] init];

GTLQueryBooks *query =
[GTLQueryBooks queryForVolumesListWithQ:@"Mark Twain"];
query.filter = kGTLBooksFilterFreeEbooks;

[service executeQuery:query
    completionHandler:^(GTLServiceTicket *ticket,
                        id object, NSError *error) {
      // callback
      if (error == nil) {
        GTLBooksVolumes *results = object;
        for (GTLBooksVolume *volume in results) {
          NSLog(@"%@", volume.volumeInfo.title);
        }
      }
    }];

The library supports Google’s partial response and partial update protocols, so even items of data-rich APIs can be retrieved and updated with minimal network overhead. It also offers a simple, efficient batch model, so many queries can be combined into one HTTP request and response, speeding up applications.
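As a rough sketch of how those two features look in code (assuming a service and query set up as in the example above; GTLBatchQuery, GTLBatchResult, and the fields property come from the library’s generated interfaces, and anotherQuery is hypothetical):

```objc
// Partial response: ask the server for only the fields we need.
GTLQueryBooks *query =
    [GTLQueryBooks queryForVolumesListWithQ:@"Mark Twain"];
query.fields = @"items(volumeInfo(title,authors))";

// Batching: combine several queries into one HTTP request.
GTLBatchQuery *batch = [GTLBatchQuery batchQuery];
[batch addQuery:query];
[batch addQuery:anotherQuery];  // anotherQuery is hypothetical

[service executeQuery:batch
    completionHandler:^(GTLServiceTicket *ticket,
                        id object, NSError *error) {
      // For a batch, the result object maps each request
      // to its individual result or error.
      GTLBatchResult *batchResult = object;
      NSLog(@"%@", batchResult.successes);
    }];
```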

When we introduced the previous Objective-C library, it was with this assertion: “When you trust your personal data to Google, it’s still your data. You’re free to edit it, to share it with others, or to download it and take it somewhere else entirely.” We hope the new library and Google’s growing collection of APIs help iOS and Mac developers to keep that principle meaningful for many years to come. You can start using the Google APIs Client Library for Objective-C by checking it out from our open-source project site and by subscribing to the discussion group.

Greg Robbins writes code to connect Mac and iOS apps to Internet services. He chases dogs in the morning, and bugs in the afternoon.

Tom Van Lenten is a Software Engineer on the Google Chrome team. He is also hooked on the Google Toolbox for Mac open source projects.

Posted by Scott Knaster, Editor

Thinking Like a Web Designer

[This post is by Roman Nurik, who is passionate about icons, with input from me and a bunch of the Framework engineers. —Tim Bray]

The number of people working on mobile apps, and specifically Android, is growing fast. Since modern mobile software development is a relatively new profession, the community is growing by drawing in experts from related domains, one of which is web design and development.

It turns out that familiarity with web UI development, particularly using modern HTML5 techniques, can be a great primer for Android UI development. The Android framework and SDK have many analogues to tools and techniques in the Web repertoire of HTML, CSS, and JavaScript.

In this blog post, we’ll walk through a few web development features and look for matches in the world of Android UI development.

Device resolutions and physical sizes

One of the most important aspects of both Android UI design and web design is support for multiple screen resolutions and physical sizes. Just as your web app needs to work on any physical display and inside any size browser window, your native app needs to run on a variety of form factors, ranging from 2.5” phones to 10” tablets to (possibly) 50” TVs.

Let’s look at some ways in which CSS and Android allow for flexible and adaptive layouts.

Providing custom layouts for different resolutions

CSS3 media queries allow developers to include additional stylesheets to target different viewport and screen configurations. For example, developers can provide additional style rules or override existing styles for mobile devices. Although the markup (layout hierarchy) remains the same, CSS3 has several sophisticated techniques for completely transforming the placement of elements with different stylesheets.

Android has long offered a similar mechanism in resource directory qualifiers. This extends to many different types of resources (layouts, images or ‘drawables’, styles, dimensions, etc.), so you can customize the view hierarchy as well as styling depending on device form factor. A base set of layouts for handsets can be extended for tablets by placing additional layouts in res/layout-xlarge or res/layout-sw600dp (smallest width 600 density-independent pixels) directories. Note that the latter syntax requires Android 3.2 or later.

Below is a CSS3 example of how one could hide a ‘left pane’ on smaller devices and show it on screens at least 600 pixels wide:

#leftPane {
  display: none;
}

@media screen and (min-device-width:600px) {
  #leftPane {
    display: block;
  }
}

The same could be accomplished on Android using multiple layout directories:


<!-- a single pane -->
<View android:id="@+id/main_pane" />

<!-- two panes -->
<LinearLayout android:orientation="horizontal">
    <View android:id="@+id/left_pane" />
    <View android:id="@+id/main_pane" />
</LinearLayout>

As a side note, if you plan on creating multi-pane layouts, consider using fragments, which help break up your screens into modular chunks of both layout and code.

There are also other neat ways of using resource directory qualifiers. For example, you could create values/dimens.xml and values-sw600dp/dimens.xml files specifying different font sizes for body text, and reference those values in your layouts by setting android:textSize="@dimen/my_body_text_size". The same could be done for margins, line spacing, or other dimensions to help manage whitespace on larger devices.
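For instance, a pair of dimens.xml files might look like this (the specific sp values are just illustrative):

```xml
<!-- res/values/dimens.xml (handsets) -->
<resources>
    <dimen name="my_body_text_size">16sp</dimen>
</resources>

<!-- res/values-sw600dp/dimens.xml (tablets) -->
<resources>
    <dimen name="my_body_text_size">20sp</dimen>
</resources>
```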

‘Holy grail’ layouts

Web developers have long dreamt of an easy way to build a ‘holy grail’ 5-pane layout (header/footer + 3 vertical columns). There are a variety of pre-CSS3 tricks including position:fixed, float:left, negative margins, and so on, to build such layouts but CSS3 introduced the flexible box module, which simplifies this tremendously.

Figure: An archetypal “holy grail” layout

It turns out that grail is pretty holy for Android tablet apps, too, and in particular for tablets held sideways in landscape mode. A good approach involves the use of LinearLayout, one of the simplest and most popular of the Android layouts.

LinearLayout has this neat way to stretch its children to fit the remaining space, or to distribute available space to certain children, using the android:layout_weight attribute. If a LinearLayout has two children with a fixed size, and another child with a nonzero layout_weight, that other child view will stretch to fill the remaining available space. For more on layout_weight and other ways to make layouts more efficient (like switching from nested LinearLayouts to RelativeLayout), check out Layout Tricks: Creating Efficient Layouts.

Let’s take a look at some example code for implementing such a ‘holy grail’ layout on Android and on the web:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- top pane -->
    <View android:id="@+id/top_pane"
        android:layout_width="match_parent"
        android:layout_height="50dp" />

    <LinearLayout android:id="@+id/middle_container"
        android:orientation="horizontal"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1">

        <!-- left pane -->
        <View android:id="@+id/left_pane"
            android:layout_width="300dp"
            android:layout_height="match_parent" />

        <!-- center pane -->
        <View android:id="@+id/center_pane"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="1" />

        <!-- right pane -->
        <View android:id="@+id/right_pane"
            android:layout_width="300dp"
            android:layout_height="match_parent" />

    </LinearLayout>

    <!-- bottom pane -->
    <View android:id="@+id/bottom_pane"
        android:layout_width="match_parent"
        android:layout_height="50dp" />

</LinearLayout>

Note: Android tablet apps in landscape will generally show an action bar as the top pane and will usually have neither a right nor bottom pane. Also note that the action bar layout is automatically provided by the framework as of Android 3.0, and thus you don’t need to worry about positioning it.

And here’s an example implementation using the CSS3 flexible box model; notice the similarities:

html, body { margin: 0; height: 100%; }

#container {
  height: 100%;
  display: -webkit-box; /* like LinearLayout */
  display: -moz-box;
  -webkit-box-orient: vertical; /* like android:orientation */
  -moz-box-orient: vertical;
}

#top, #bottom { height: 50px; }

#middle {
  -webkit-box-flex: 1; /* like android:layout_weight */
  -moz-box-flex: 1;
  display: -webkit-box;
  display: -moz-box;
  -webkit-box-orient: horizontal;
  -moz-box-orient: horizontal;
}

#left, #right { width: 300px; }

#center {
  -webkit-box-flex: 1;
  -moz-box-flex: 1;
}

<div id="container">
  <div id="top"></div>
  <div id="middle">
    <div id="left"></div>
    <div id="center"></div>
    <div id="right"></div>
  </div>
  <div id="bottom"></div>
</div>

Layered content

In CSS, with position:absolute, you can overlay your UI elements. On Android, you can use FrameLayout to achieve this. The child views in a frame layout are laid out on top of each other, with optional layout_gravity attributes indicating alignment with the parent frame layout.

Below is a contrived example of a FrameLayout with three children.

Figure: Example FrameLayout with three children (2 with top-left and 1 bottom-right alignment)

Figure: Isometric view of the example FrameLayout and its children.

The code for this example is as follows:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- bottom-most child, with bottom-right alignment -->
    <View android:layout_gravity="bottom|right"
        android:layout_width="100dp"
        android:layout_height="150dp" />

    <!-- middle child, with top-left alignment -->
    <View android:layout_gravity="top|left"
        android:layout_width="125dp"
        android:layout_height="175dp" />

    <!-- top-most child, with top-left alignment -->
    <!-- also stretched to fill vertically -->
    <View android:layout_gravity="top|left"
        android:layout_width="50dp"
        android:layout_height="match_parent" />

</FrameLayout>

Scrollable content

HTML, by default, flows in reading order and scrolls vertically. When content extends beyond the bottom of the browser, scrollbars automatically appear. Content panes can also be made individually scrollable using overflow:scroll or overflow:auto.

Android screen content isn’t scrollable by default. However, many content Views such as ListView and EditText offer scrolling, and any layout can be made scrollable by wrapping it in a ScrollView or HorizontalScrollView.

It’s also possible to add custom scrolling to views by using methods like View.scrollTo and helpers like Scroller in response to touch events. And for horizontal, snap-to-page-bounds scrolling, one can use the excellent new ViewPager class in the support library.

Below is an example of a ScrollView containing a single TextView child and the code needed to implement something like this.

Figure: A TextView inside a ScrollView, scrolled about half way.

<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- the scrollable content -->
    <TextView android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:text="The\nquick\nbrown\nfox\njumps\nover..." />

</ScrollView>


Custom layouts

Sometimes the positioning and layout behaviors you can achieve with CSS aren’t enough to achieve your desired layout. In those cases, developers fall back on JavaScript coupled with absolute positioning to place and size elements as needed.

Programmatically defined layouts are also possible on Android. In fact, they’re sometimes the most elegant and/or performant way of implementing a unique or otherwise tricky layout. Not happy with nesting LinearLayouts for implementing a 2x3 grid of navigation icons? Just extend ViewGroup and implement your own layout logic! (see an example DashboardLayout here). All the built-in layouts such as LinearLayout, FrameLayout, and RelativeLayout are implemented this way, so there generally aren’t the same performance implications with custom layouts as there are with scripted layout on the web.

Device densities

Web designers have long dealt with the reality that display densities vary, and that there wasn’t much they could do about it. This meant that for a long time web page graphics and UI elements had different physical sizes across different displays. Your 100px wide logo could be 1” wide on a desktop monitor or ¾” on a netbook. This was mostly OK, given that (a) pointing devices such as mice offered generally good precision in interacting with such elements and (b) browsers allowed visually-impaired users to zoom pages arbitrarily.

However, on touch-enabled mobile devices, designers really need to begin thinking about physical screen size, rather than resolution in pixels. A 100px wide button on a 120dpi (low density) device is ~0.9” wide while on a 320dpi (extra-high density) screen it’s only ~0.3” wide. You need to avoid the fat-finger problem, where a crowded space of small touch targets coupled with an imprecise pointing tool (your finger) leads to accidental touches. The Android framework tries really hard to take your layout and scale elements up or down as necessary to work around device-density differences and get a usable result on a wide range of them. This includes the browser, which scales a 160px <img> at 100% browser zoom up to 240px on a 240dpi screen, such that its physical width is always 1”.

Developers can achieve finer-grained control over this browser scaling by providing custom stylesheets and images for different densities, using CSS3 media query filters such as -webkit-max-device-pixel-ratio and <meta> viewport arguments such as target-densitydpi=device-dpi. For an in-depth discussion on how to tame this mobile browser behavior see this blog post: Pixel-perfect Android web UIs.

For native Android apps, developers can use resource directory qualifiers to provide different images per density (such as drawable-hdpi and drawable-mdpi). In addition, Android offers a special dimension unit called ‘density independent pixels’ (dp) which can (and should!) be used in layout definitions to offset the density factors and create UI elements that have consistent physical sizes across screens with different densities.

Features you don’t have out of the box

There are a few features that web designers and developers rely on that aren’t currently available in the Android UI toolkit.

Developers can defer to user-driven browser zooming and two-dimensional panning for content that is too small or too large for its viewport, respectively. Android doesn’t currently provide an out-of-the-box mechanism for two-dimensional layout zooming and panning, but with some extra legwork using existing APIs, these interactions are possible. However, zooming and panning an entire UI is not a good experience on mobile, and is generally more appropriate for individual content views such as lists, photos, and maps.

Additionally, vector graphics (generally implemented with SVG) are gaining in popularity on the Web for a number of reasons: the need for resolution independence, accessibility and ‘indexability’ for text-heavy graphics, tooling for programmatic graphic generation, etc. Although you can’t currently drop an SVG into an Android app and have the framework render it for you, Android’s version of WebKit supports SVG as of Android 3.0. As an alternative, you can use the very robust Canvas drawing methods, similar to HTML5’s canvas APIs, to render vector graphics. There are also community projects such as svg-android that support rendering a subset of the SVG spec.


Web developers have a number of different tools for frontend layout and styling at their disposal, and there are analogues for almost all of these in the world of Android UI engineering. If you’re wondering about analogues to other web- or CSS-isms, start a conversation out there in the Android community; you’ll find you’re not alone.


Tutorial: How To Extend UIControl And Build Your Own Custom Range Slider Component

You’ve probably seen situations in other apps, or perhaps even in your own, where two sliders were used to configure the minimum and maximum parameters of a setting.

If you have, then you have probably thought it would be great if there were a control with two scrubbers providing minimum/maximum settings.

I’ve come across an excellent tutorial showing step by step how to build a control that looks like it would fit right in with any other native controls.

What I like about this tutorial is that the author covers every step, showing how he extended UIControl to build his custom component, and ultimately provides the source code for us to look at.

The tutorial is by Mal Curtis, and you can find it in two parts here:

iOS Range Slider Part 1

iOS Range Slider Part 2

The source can be found on Github here:


A great guide if you would like to learn how to build your own drop-in custom iOS component.

©2011 iPhone, iOS 4, iPad SDK Development Tutorial and Programming Tips. All Rights Reserved.


Objective-C Comment Styles

A few weeks ago I wrote about Objective-C Indentation Styles. A reader (thanks Joe) commented and suggested a similar post that covered comment formats. That blog post follows…

Interface File Comments

When I create an interface file, here is the basic layout of my comments:

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *  AboutViewController.h
 *  Created by John on 6/10/11.
 *  Copyright 2011 iOSDeveloperTips.com. All rights reserved.
 *
 *  Long comments describing this object would go here...
 *  ...
 *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/

@interface AboutViewController : UIViewController
{
  UIButton          *backButton;    // Comment here
  NSMutableArray    *queueArray;    // Comment here
  NSMutableArray    *bucketArray;   // Comment here

  // To describe something in more detail, it would go here
  // searchResultsCount variable will hold ....
  // questionsResultsCount is calculated by ...
  int               searchResultsCount;     // Comment here
  int               questionsResultsCount;  // Comment here
}

@end

I’m not sure when it happened, but somewhere along the way I moved away from the traditional comment block shown below; I no longer use it for the header or inside the primary code sections. I must have tired of the look and made a small tweak to use ~ and – characters.

/*
 * To describe something in more detail, it would go here
 * searchResultsCount variable will hold ....
 * questionsResultsCount is calculated by ...
 */

Implementation File Comments

At the top of the implementation file, my header looks the same as the interface file. If there is a private interface (see the post Private Methods for more information), I follow the same layout:

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *  AboutViewController.m
 *  Created by John on 6/10/11.
 *  Copyright 2011 iOSDeveloperTips.com. All rights reserved.
 *
 *  Long comments describing this class would go here...
 *  ...
 *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/

#import "AboutViewController.h"

/*---------------------------------------------------------------
 * Private interface definitions
 *--------------------------------------------------------------*/
@interface AboutViewController (private)
- (BOOL) processSearch:(NSString *)string;
@end

For methods in the class, my comment format looks as follows:

/*---------------------------------------------------------------
 * Manage button press events
 *--------------------------------------------------------------*/
- (void)buttonPressed:(UIButton *)button
{
  // For one line comment
  if (button == backButton)
    ...

  // If more than one line of comments is needed,
  // I use this format...
}


Here is a template of what my code looks like with #pragma marks included:

/*~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *  AboutViewController.m
 *  Created by John on 6/10/11.
 *  Copyright 2011 iOSDeveloperTips.com. All rights reserved.
 *
 *  Long comments describing this object would go here...
 *  ...
 *~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*/

#import "AboutViewController.h"

/*---------------------------------------------------------------
 * Private interface definitions
 *--------------------------------------------------------------*/
@interface AboutViewController (private)
- (BOOL) processSearch:(NSString *)string;
@end

@implementation AboutViewController

#pragma mark - Private Methods

/*---------------------------------------------------------------
 * This method....
 *--------------------------------------------------------------*/
- (BOOL) processSearch:(NSString *)string
{
  ...
}

#pragma mark - Initialization

/*---------------------------------------------------------------
 * Initialization of ...
 * More comments here...
 *--------------------------------------------------------------*/
- (id) init
{
  ...
}

#pragma mark - Event Management

/*---------------------------------------------------------------
 * Manage button press events
 *--------------------------------------------------------------*/
- (void)buttonPressed:(UIButton *)button
{
  // For one line comment
  if (button == backButton)
    ...

  // If more than one line of comments is needed,
  // I use this format...
}

#pragma mark - Cleanup

/*---------------------------------------------------------------
 * Cleanup code
 *--------------------------------------------------------------*/
- (void)dealloc
{
  ...

  [super dealloc];
}

@end

The beauty of using #pragma mark is that within Xcode you can get a nice breakdown of the organization of your code:

You can view this dialog from within Xcode by clicking on the class status bar in the boxed area shown below:

Posting a Comment with Code

Please feel free to post code examples of your comment style. If you would like to have your code color highlighted as shown above, use the following format:

<pre lang="objc">

your code here

</pre>

What's New for Developers in Mac OS X Lion (Part 3)

This is the third and final article in my little series covering the new developer APIs in Lion. Check out parts one and two if you haven’t done so already:

  • Part 1: Major new features: Application persistence, automatic document saving, versioning, file coordination, Cocoa autolayout, full-screen apps, popover windows, sandboxing, push notifications.

  • Part 2: New frameworks: AV Foundation, Store Kit (in-app purchasing) and IMServicePlugIn. Changes in AppKit.

  • Part 3: Changes in Core Data, Core Foundation, Core Location, Foundation, QTKit, Quartz and Quartz Core.

Core Data

Formalized Concurrency Model

Working with Core Data on a background thread involves manually managing two separate managed object contexts and their associated object graphs, taking care that you do not pass managed objects from one thread/context to another. You should never access a managed object context outside the thread or dispatch queue that created it. This is still true in Lion, and the system now lets you formalize this agreement by creating a context with initWithConcurrencyType:NSConfinementConcurrencyType.

If you pass NSPrivateQueueConcurrencyType to initWithConcurrencyType:, the context creates and manages a private dispatch queue to operate on. To send any message to such a context, you must use the new methods performBlock: or performBlockAndWait:. The context will then execute the passed blocks on its own queue.

NSMainQueueConcurrencyType creates a context that is associated with the main dispatch queue and thus the main thread. Use such a context to link it to controllers and UI objects that are required to run on the main thread.
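A minimal sketch of the private-queue pattern (the coordinator variable and the “Note” entity here are hypothetical):

```objc
NSManagedObjectContext *context = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSPrivateQueueConcurrencyType];
[context setPersistentStoreCoordinator:coordinator];

[context performBlock:^{
    // This block executes on the context's private queue.
    NSFetchRequest *request =
        [NSFetchRequest fetchRequestWithEntityName:@"Note"];
    NSError *error = nil;
    NSArray *results = [context executeFetchRequest:request error:&error];
    NSLog(@"fetched %lu objects", (unsigned long)[results count]);
}];
```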

Support for Nested Managed Object Contexts

In earlier releases of OS X, a managed object context is always directly associated with a “parent” persistent store coordinator. In Lion, rather than having a persistent store coordinator, a context can also have another managed object context as its parent. This nested relationship between managed object contexts has two very useful use cases:

  1. Fetches and saves are mediated by the second context, which can execute these time-consuming operations in the background, thereby avoiding blocking the UI.

  2. Discardable edits: changes can be made in the second context and abandoned by simply throwing that context away.

Use the -setParentContext: method to assign a parent context to your managed object context.
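The discardable-edits case might be sketched like this (mainContext and the save condition are illustrative):

```objc
// scratchContext sits on top of the UI-facing mainContext.
NSManagedObjectContext *scratchContext = [[NSManagedObjectContext alloc]
    initWithConcurrencyType:NSMainQueueConcurrencyType];
[scratchContext setParentContext:mainContext];

// ... let the user edit objects fetched through scratchContext ...

if (userWantsToKeepEdits) {
    NSError *error = nil;
    [scratchContext save:&error];  // pushes the changes up to mainContext
} else {
    [scratchContext release];      // the edits simply vanish
}
```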

Ordered Relationships

To-many relationships in Core Data can now optionally have an order. This much-awaited feature should save many people the tedious work of manually maintaining an order attribute for the items in a relationship. See the isOrdered attribute of NSRelationshipDescription.

In code, ordered relationships are represented by the new NSOrderedSet class (see below). Note that ordered relationships are significantly less efficient than unordered ones. You should only use them if a relationship has an intrinsic order. Do not use ordered relationships just because you want to avoid sorting your fetch results.

External Storage for Attributes

Another area that required manual work was binary data: Core Data stores were often designed to hold references to large amounts of binary data (such as images) that were actually stored in external files. Developers had to generate unique filenames, create the files, store the filenames in the data store, and delete the files when the associated managed object was deleted.

In Lion, managed objects support optional external storage for attribute values. If you specify in the Core Data editor that the value of a certain attribute (typically a BLOB) may be stored externally, Core Data uses internal heuristics (probably based on the size of the BLOB) to decide whether to store the data directly in the database or in a separate external file. As a developer, you don’t have to care either way. See the allowsExternalBinaryDataStorage attribute of NSAttributeDescription. I think this is a great feature.

Fetch Requests

NSFetchRequest got a few new features in Lion as well.

Implement Your Own Incremental Stores

Two new classes, NSIncrementalStore and NSIncrementalStoreNode, allow you to add support for other non-atomic persistent stores besides the existing SQLite store. (Writing custom atomic persistent stores was already supported in earlier versions of OS X.)

One cool possibility would be to write a Core Data interface to a web service backend such as CouchDB.

The Core Data Release Notes offer the best overview of the new features in Apple’s documentation.

Core Foundation

With the CFStringGetHyphenationLocationBeforeIndex() function it is now possible to locate potential hyphenation points within a word. Since hyphenation is language-specific, you have to pass the locale of the string you are inspecting to the function. Do not forget to check beforehand whether the system supports hyphenation for that locale by calling CFStringIsHyphenationAvailableForLocale().
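Put together, a hyphenation lookup might look like this (the word and character index are arbitrary examples):

```objc
CFLocaleRef locale = CFLocaleCreate(kCFAllocatorDefault, CFSTR("en_US"));
if (CFStringIsHyphenationAvailableForLocale(locale)) {
    CFStringRef word = CFSTR("hyphenation");
    // Find a potential hyphenation point before character index 8.
    CFIndex breakIndex = CFStringGetHyphenationLocationBeforeIndex(
        word, 8, CFRangeMake(0, CFStringGetLength(word)),
        0, locale, NULL);
    // breakIndex is kCFNotFound if no suitable point exists.
}
CFRelease(locale);
```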

See the Core Foundation Release Notes for details on this and more changes in Core Foundation.

Core Location

The Core Location framework originated on iOS and was introduced to OS X with Snow Leopard. In Lion, Apple brought the header files up to par with iOS. They even added stuff like CLHeading that doesn’t make a lot of sense on a computer, but who knows when the first laptop with a built-in compass will come out? The same goes for the CLDeviceOrientation enum, which includes values for portrait, landscape, face up, and face down orientations.

The most useful addition to the framework seems to be region monitoring. Start the CLLocationManager by sending it a startMonitoringForRegion:desiredAccuracy: message to be alerted when the computer enters a specific area, or use startMonitoringSignificantLocationChanges to be notified whenever the computer’s location changes.


Foundation

As usual, the Foundation Release Notes for Lion give the best explanation of the new features in Apple’s documentation. It is unfortunate that these framework-specific release notes documents are so hard to discover, as they are not linked from either What’s New in Lion or the API Diffs.

Native JSON Support

With NSJSONSerialization, OS X gets native reading and writing support for JSON, which has become the de facto data format for web APIs.

The +JSONObjectWithData:options:error: class method takes JSON data that you received from a network request and converts it into a hierarchy of Foundation objects (NSDictionary, NSArray, NSString, NSNumber, NSNull). Two useful options, NSJSONReadingMutableContainers and NSJSONReadingMutableLeaves let you specify whether the resulting Foundation objects should be immutable (the default) or mutable. The class also offers a variant to read the data directly from an NSInputStream: +JSONObjectWithStream:options:error:.

To convert an object graph into UTF-8-encoded JSON, call the +dataWithJSONObject:options:error: class method. By default, the resulting JSON data will be as compact as possible. Passing NSJSONWritingPrettyPrinted as an option to the method allows you to generate more readable output.
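A quick sketch of both directions (the responseData variable stands in for data you received from a request):

```objc
NSError *error = nil;

// Reading: parse a JSON response into Foundation objects.
id parsed = [NSJSONSerialization JSONObjectWithData:responseData
                                            options:NSJSONReadingMutableContainers
                                              error:&error];

// Writing: serialize an object graph to pretty-printed JSON.
NSDictionary *payload = [NSDictionary dictionaryWithObject:@"Lion"
                                                    forKey:@"release"];
NSData *json = [NSJSONSerialization dataWithJSONObject:payload
                                               options:NSJSONWritingPrettyPrinted
                                                 error:&error];
```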

Key-Value Observing

The new method removeObserver:forKeyPath:context: allows you to remove a key-value observer more precisely by passing the same context pointer you used when adding the observer with addObserver:forKeyPath:options:context:. The old removeObserver:forKeyPath: method could remove the wrong observation when the same observer was registered multiple times for one key path with different contexts.


NSURLConnection

Before Lion, asynchronous NSURLConnections always required a running run loop, making the class a bit difficult to use with operation or dispatch queues. Queue support was added in Lion: setting an NSOperationQueue with the setDelegateQueue: method will deliver the delegate messages on the specified queue.

The new sendAsynchronousRequest:queue:completionHandler: convenience method will start an asynchronous connection and execute the completionHandler block on the specified queue once the request has finished either successfully or with an error. It frees the developer from implementing any NSURLConnectionDelegate methods if special handling of the connection, continuous progress reports etc. are not required. Very handy.
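In use, the convenience method looks roughly like this (the URL and the response handling are placeholders):

```objc
NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/api/resource"]];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response,
                                           NSData *data, NSError *error) {
    if (error == nil) {
        // process data here
    }
}];
```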

Linguistic Tagging

With the new NSLinguisticTagger class, developers can easily analyze natural-language text and identify different grammatical parts of speech. For example, given the sentence “Steve Jobs just resigned as CEO.” and configured to use the tag scheme NSLinguisticTagSchemeLexicalClass, NSLinguisticTagger can identify the words “Steve”, “Jobs” and “CEO” as nouns, the word “just” as an adverb, and the word “as” as a preposition.

Other possible tag schemes include NSLinguisticTagSchemeLemma to identify stem forms of words, NSLinguisticTagSchemeLanguage to identify the language and NSLinguisticTagSchemeScript to identify the script (Latin, Cyrillic, etc.).

The tagging works very well for English. Other languages might not support all tag schemes or might not be supported at all. Call the availableTagSchemesForLanguage: class method to determine the level of support for a language.

To do the analysis, create an instance of NSLinguisticTagger and pass it a string with setString:. Then call enumerateTagsInRange:scheme:options:usingBlock: to iterate over all tags the linguistic tagger found.
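Sketched out for the example sentence above:

```objc
NSString *sentence = @"Steve Jobs just resigned as CEO.";
NSLinguisticTagger *tagger = [[NSLinguisticTagger alloc]
    initWithTagSchemes:[NSArray arrayWithObject:NSLinguisticTagSchemeLexicalClass]
               options:0];
[tagger setString:sentence];
[tagger enumerateTagsInRange:NSMakeRange(0, [sentence length])
                      scheme:NSLinguisticTagSchemeLexicalClass
                     options:NSLinguisticTaggerOmitWhitespace |
                             NSLinguisticTaggerOmitPunctuation
                  usingBlock:^(NSString *tag, NSRange tokenRange,
                               NSRange sentenceRange, BOOL *stop) {
    // e.g. "Steve" -> noun, "just" -> adverb
    NSLog(@"%@: %@", [sentence substringWithRange:tokenRange], tag);
}];
```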

Ordered Sets

NSOrderedSet and NSMutableOrderedSet are new collection classes that combine the advantages of NSArray (ordered collection, fast index-based access) and NSSet (unique contents, fast access to objects). Note that NSOrderedSet inherits neither from NSArray nor from NSSet.

The class also offers two interesting methods, -(NSArray *)array and -(NSSet *)set. These return proxy objects for the underlying ordered set that act like an array or a set without actually being one. If the underlying ordered set is mutable, changes to it will directly pass through to the proxy objects, and these “immutable” collections will appear to outside code to be changing.
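A small illustration of the uniquing-plus-ordering behavior:

```objc
NSMutableOrderedSet *tags = [NSMutableOrderedSet orderedSet];
[tags addObject:@"cocoa"];
[tags addObject:@"lion"];
[tags addObject:@"cocoa"];  // duplicate; the set ignores it

// Order is preserved and contents are unique:
// [tags count] == 2, objectAtIndex:0 == @"cocoa"
NSArray *asArray = [tags array];  // proxy, not a copy
```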

Regular Expressions and Data Detectors

Two new Lion developer features that you may already know from iOS are built-in regular expression support and data detectors.

The NSRegularExpression class represents a regular expression that you can apply to a string. After creating a regex instance (+regularExpressionWithPattern:options:error:), you use the Block iterator method enumerateMatchesInString:options:range:usingBlock: to enumerate all matches in the specified string. Each match is represented by an NSTextCheckingResult instance.

To use regular expressions for searching and replacing text, use the stringByReplacingMatchesInString:options:range:withTemplate: method.

The class reference documentation for NSRegularExpression has a very detailed overview about the supported regex syntax.
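As a small illustration, here is a sketch that finds and replaces email-like matches (the pattern is just an example, not a robust email regex):

```objc
NSError *error = nil;
NSRegularExpression *regex = [NSRegularExpression
    regularExpressionWithPattern:@"\\b[a-z]+@[a-z]+\\.[a-z]{2,}\\b"
                         options:NSRegularExpressionCaseInsensitive
                           error:&error];
NSString *text = @"Contact us at info@example.com or sales@example.com.";

// Enumerate all matches; each one is an NSTextCheckingResult.
[regex enumerateMatchesInString:text
                        options:0
                          range:NSMakeRange(0, [text length])
                     usingBlock:^(NSTextCheckingResult *match, NSMatchingFlags flags, BOOL *stop) {
    NSLog(@"Found: %@", [text substringWithRange:[match range]]);
}];

// Search and replace with a template ($0 stands for the whole match).
NSString *redacted = [regex stringByReplacingMatchesInString:text
                                                     options:0
                                                       range:NSMakeRange(0, [text length])
                                                withTemplate:@"<$0>"];
```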

NSDataDetector is a subclass of NSRegularExpression that offers pre-configured regular expressions to identify patterns such as dates, addresses, phone numbers, or URLs.
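A short sketch of detecting links and phone numbers in a string (the sample text is made up):

```objc
NSError *error = nil;
NSDataDetector *detector = [NSDataDetector
    dataDetectorWithTypes:NSTextCheckingTypeLink | NSTextCheckingTypePhoneNumber
                    error:&error];
NSString *text = @"Visit http://example.com or call 555-123-4567.";

[detector enumerateMatchesInString:text
                           options:0
                             range:NSMakeRange(0, [text length])
                        usingBlock:^(NSTextCheckingResult *match, NSMatchingFlags flags, BOOL *stop) {
    if ([match resultType] == NSTextCheckingTypeLink) {
        NSLog(@"URL: %@", [match URL]);
    } else if ([match resultType] == NSTextCheckingTypePhoneNumber) {
        NSLog(@"Phone: %@", [match phoneNumber]);
    }
}];
```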


  • The NSUserDefaults API now has better performance when accessed concurrently from multiple threads.

  • NSURLRequest now supports HTTP pipelining with the setHTTPShouldUsePipelining: method. You can also specify a so-called network service type (setNetworkServiceType:) such as voice or video, supposedly to give the networking subsystem a hint about how to prioritize the request.

  • NSIndexSet got three new methods to enumerate the non-contiguous ranges in the set rather than every single index. See enumerateRangesUsingBlock:.

  • NSXMLParser now has a streaming mode where parsing of a document can begin before its entire contents have been downloaded. You had to use libxml2 directly if you wanted to do this before. See the initWithStream: method. In addition, NSXMLParser is now thread-safe.

  • NSBundle has a new method appStoreReceiptURL to get the location of the App Store receipt file without hard-coding relative paths in your bundle.

OpenGL 3.2

Lion now supports OpenGL 3.2. Developers should select a specific OpenGL profile to tell the system which version of OpenGL an app is designed for. The current options are kCGLOGLPVersion_3_2_Core for OpenGL 3.2 or kCGLOGLPVersion_Legacy, which provides the same functionality found in earlier versions of Mac OS X.

See the OpenGL Profiles section in Apple’s OpenGL Programming Guide for details.


QTKit

QTKit provides two new classes that make it easier to export movies in different formats. It also features new APIs for reading movie metadata without having to use the QuickTime C API, which is not available in 64-bit apps.

The new QTExportSession class (not yet documented) represents an export process that produces a transcoded output from a given QTMovie source. The properties of the exported movie are specified by an instance of QTExportOptions. The initializer for an export session is the self-explanatory initWithMovie:exportOptions:outputURL:error: method. After you have created the export session instance and set a delegate, send it a run message to start the export operation asynchronously. The session informs its delegate about success (exportSessionDidSucceed:) or failure (exportSession:didFailWithError:) of the operation, and reports progress regularly while the operation is running (exportSession:didReachProgress:).

The export settings in the QTExportOptions class are specified by instantiating the class with one of the available format constants: QTExportOptionsAppleM4VCellular, QTExportOptionsAppleM4V480pSD, QTExportOptionsAppleM4ViPod, QTExportOptionsAppleM4VAppleTV, QTExportOptionsAppleM4VWiFi, QTExportOptionsAppleM4V720pHD, QTExportOptionsQuickTimeMovie480p, QTExportOptionsQuickTimeMovie720p, QTExportOptionsQuickTimeMovie1080p, QTExportOptionsAppleM4A. It does not seem possible to configure the export options in a more granular manner. See the QTExportSession.h and QTExportOptions.h header files for details.

The new metadata reading capabilities are based on the QTMetadataItem class. The QTMovie and QTTrack classes have been extended by new methods that return these metadata items. The commonMetadata method returns an array of QTMetadataItem objects for each common metadata key for which a value for the current locale is available.

Metadata can be available in several formats, including QTMetadataFormatQuickTimeUserData, QTMetadataFormatQuickTimeMetadata, QTMetadataFormatiTunesMetadata and QTMetadataFormatID3Metadata. The availableMetadataFormats method returns an array of those formats available for the current movie or track while the metadataForFormat: method lets you retrieve the metadata for a specific format.


Quick Look

QLPreviewView is a new class that allows you to embed a Quick Look preview into your own view hierarchy. Its designated initializer is initWithFrame:style:, giving you the option of two styles, QLPreviewViewStyleNormal and QLPreviewViewStyleCompact. After creating the view, just assign the item to preview (which must implement the QLPreviewItem protocol) to its previewItem property and add the view to your view hierarchy just as you would with any other view.
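A minimal sketch, assuming documentURL is a file URL you want to preview and someSuperview is an existing view in your window (NSURL already conforms to the QLPreviewItem protocol):

```objc
#import <Quartz/Quartz.h>

QLPreviewView *previewView = [[QLPreviewView alloc]
    initWithFrame:NSMakeRect(0, 0, 400, 300)
            style:QLPreviewViewStyleNormal];
// NSURL conforms to QLPreviewItem, so a file URL can be previewed directly.
previewView.previewItem = (id<QLPreviewItem>)documentURL;
[someSuperview addSubview:previewView];
```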

Quartz Core

Core Animation

New Features from iOS

Several Core Animation classes gained new properties that were previously introduced in iOS 4. Examples:

  • CALayer now has contentsScale and rasterizationScale properties. Setting the new shouldRasterize property to YES can improve rendering performance in some cases (especially during animations). shadowPath allows you to modify the shape of a layer’s drop shadow.

  • CAShapeLayer: Use the strokeStart and strokeEnd properties to restrict the region where the shape layer’s path is rendered to a subregion of the total.

  • CAKeyframeAnimation has two new calculation modes for intermediate frames of a keyframe animation: kCAAnimationCubic and kCAAnimationCubicPaced.

    Intermediate frames are computed using a Catmull-Rom spline that passes through the keyframes. You can adjust the shape of the spline by specifying an optional set of tension, continuity, and bias values, which modify the spline using the standard Kochanek-Bartels form.

    Right. Got that?
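For the curious, the uniform Catmull-Rom interpolation underlying these modes can be sketched in plain C; this is the textbook formula, not Apple’s actual implementation:

```c
/* Uniform Catmull-Rom interpolation between keyframes p1 and p2
   (p0 and p3 are the neighboring keyframes); t runs from 0 to 1.
   At t = 0 the result is p1, at t = 1 it is p2. */
static double catmull_rom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
}
```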

Remote Layer Clients and Servers

The new (not yet documented) classes CARemoteLayerServer and CARemoteLayerClient seem to allow one process to render a layer tree that is then displayed by another process. I guess this was introduced for the new XPC interprocess communication framework.

Core Image Face Detection

One of the coolest new features of Lion is a public face detection API. Don’t confuse face detection (identifying the position and size of faces in an image) with face recognition (being able to tell if two faces show the same person or not). iPhoto can do the latter while the new API is “only” capable of the former.

Face detection is part of Core Image and therefore works on CIImage objects. The API is very easy to use, provided that you have converted the image you want to analyze into a CIImage. The source can either be a still image or a frame of a video.

The central class for face detection is CIDetector. Although it can only detect faces at the moment, Apple has designed the API in a generalized manner so that other detection algorithms can be easily added in the future. To detect faces, instantiate a CIDetector with the +detectorOfType:context:options: class method, passing CIDetectorTypeFace as the type. The context argument may be nil but you can improve performance if you pass in a CIContext instance that has already uploaded the image to be processed to the GPU. The options dictionary allows you to opt for higher accuracy or higher speed of the detection, depending on your needs.

Sending the detector instance a featuresInImage: message starts the detection process and returns an array of CIFaceFeature objects if any faces were found. Each face is described by its bounds as well as its leftEyePosition, rightEyePosition, and mouthPosition.
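A minimal sketch, assuming image is a CIImage you already created:

```objc
#import <QuartzCore/QuartzCore.h>  // Core Image lives in the QuartzCore umbrella on the Mac

CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray *features = [detector featuresInImage:image];
for (CIFaceFeature *face in features) {
    NSLog(@"Face bounds: %@", NSStringFromRect(NSRectFromCGRect([face bounds])));
    if ([face hasLeftEyePosition]) {
        NSLog(@"  left eye at (%f, %f)",
              [face leftEyePosition].x, [face leftEyePosition].y);
    }
}
```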

Screen Capture in iOS Apps

I occasionally come across the need to grab the contents of a view as an image. This is often the result of needing to perform some non-stock, animated transition between views, but there are a variety of reasons why this might be useful. Thanks to the Core Animation framework’s CALayer class, this is easy to do.

All UIView instances have an underlying instance of a CALayer. The layer is responsible for rendering the view’s contents and performing any view-related animations. CALayer defines a method called renderInContext: which allows you to render the layer, and its sublayers, into a given graphics context:

CGContextRef context = ...; // some graphics context
[viewToCapture.layer renderInContext:context];

Before you can access any layer-specific APIs, you’ll need to make sure you’re linking against the QuartzCore framework. Xcode’s default templates don’t link against this framework so you’ll need to select Target Name > Build Phases > Link Binary With Libraries and select QuartzCore.framework.

Additionally, you’ll need to add the following import to your code wherever you are calling the layer’s properties or methods:

#import <QuartzCore/QuartzCore.h>

With the necessary project configuration out of the way, the next question is: where do we get a graphics context into which we can render the view’s content? We can create one using UIKit’s UIGraphicsBeginImageContextWithOptions function, which creates a new bitmap-based graphics context for us.

UIGraphicsBeginImageContextWithOptions(CGSize size, BOOL opaque, CGFloat scale);

This function takes a CGSize (your view’s size), a BOOL indicating whether your view is opaque, and a CGFloat specifying the scale factor. If you’re rendering a fully opaque, rectangular view, you can pass YES for the opaque argument so the alpha channel can be discarded from the context.

Now that we’ve created a graphics context we can use the UIGraphicsGetCurrentContext() and UIGraphicsGetImageFromCurrentImageContext() functions to get a reference to this new context and retrieve the rendered image data from it. Finally, we’ll want to call the UIGraphicsEndImageContext() function to clean up this context and remove it from the stack. Putting all this together we end up with the following:

// Render the view's layer contents into the current graphics context
CGSize viewSize = viewToCapture.bounds.size;
UIGraphicsBeginImageContextWithOptions(viewSize, NO, 1.0);
[viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
// Read the UIImage object and clean up the context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
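For reuse, the steps above can be wrapped in a small UIView category; the category and method names here are our own invention, not UIKit API:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Hypothetical convenience category wrapping the capture steps.
@interface UIView (Capture)
- (UIImage *)capturedImage;
@end

@implementation UIView (Capture)
- (UIImage *)capturedImage
{
    // Render this view's layer into a bitmap context and return the result.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 1.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end
```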

To see this code in action I’ve put together a simple demo app. You can tap on Archer, Lana, or the background to capture the contents of the view and write the image to your Photo Library.

Note: Before running the demo be sure to open the “Photos” app on the Simulator so it can initialize its database or the images won’t be written. Enjoy!

Download Demo

How To Make a Catapult Shooting Game with Cocos2D and Box2D Part 1


This is a blog post by iOS Tutorial Team member Gustavo Ambrozio, a software engineer with over 20 years experience, over 3 years of iOS experience and founder of CodeCrop Software.

Create a cute-a-pult game with Cocos2D and Box2D!  :D


In this tutorial series we’ll build a cool catapult type game from scratch using Cocos2D and Box2D!

We’ll use the art created by Ray’s lovely and talented wife Vicki to create a game about catapults, acorns, dogs, cats, and angry squirrels.

In this tutorial series, you’ll learn:

  • How to use rotation joints

  • How to use weld joints

  • How to have the camera follow a projectile

  • How to use a collision’s impact force to decide whether it should eliminate an enemy

  • And tons more!

This tutorial series assumes that you’ve already gone through the Intro to Box2D with Cocos2D Tutorial: Bouncing Balls Tutorial or have equivalent knowledge. It also uses a bunch of concepts found on the How To Create A Breakout Game with Box2D and Cocos2D Tutorial (part 1 and part 2).

Getting Started

Fire up Xcode and click “Create a new Xcode project”. Under iOS, select cocos2d, choose the cocos2d_box2d template, and click Next.

Selecting the Cocos2D with Box2D template

On the next screen, give your project a name and fill in a company identifier. We’ll call the project “cute-a-pult” – and if you look at the artwork, I’ll bet you can guess why! :]

In the next step, select the folder where you want to store your project. You don’t have to create a new folder; Xcode will create one for you. I’d suggest enabling source control, but this is not necessary.

When you click Create, your project will be created and we’ll be ready to start coding!

You probably know what this template project is all about, but just for the fun of it, click Run and let’s see what it does:

The Box2D Sample Project that comes with Cocos2D

So it runs and you can play with a few blocks. Cool, but not nearly as cool as what we’re about to make!

I Will Follow Him (on Github)

There isn't an ocean too deep that will keep me away from GitHub!


If you want to follow me wherever I may go, check out this project’s GitHub page!

Yes, that’s right, for the first time in tutorial history (maybe I’m exaggerating a bit, I haven’t checked this in any way…) I’ll publish the entire tutorial files, with comments and tags for every step, on GitHub!

You might find this handy in case you get stuck somewhere or want to look at the code at a particular stage instead of typing it out yourself. Otherwise, keep following (and singing) along here! :]

The repository tag for this point in the tutorial is ProjectTemplate. You can download the project zip at this state here.

Cleaning Up

Before we get our hands dirty, let’s do some cleanup. Open HelloWorldLayer.h and remove the declaration of addNewSpriteWithCoords:(CGPoint)p.

Open HelloWorldLayer.mm and completely remove the implementations of these three methods:

  1. -(void) addNewSpriteWithCoords:(CGPoint)p (also remove from .h)

  2. - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event

  3. - (void)accelerometer:(UIAccelerometer*)accelerometer didAccelerate:(UIAcceleration*)acceleration

Now you’ll have a warning in this file, in the init method, because of a call to addNewSpriteWithCoords:(CGPoint)p. Let’s clean up the init method.

First, delete the line that enables the accelerometer as we won’t be using it in this project:

// delete this line!
self.isAccelerometerEnabled = YES;

At the end of the method, clean up everything that adds content to the scene:

// Delete these lines!
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"blocks.png" capacity:150];
[self addChild:batch z:0 tag:kTagBatchNode];

[self addNewSpriteWithCoords:ccp(screenSize.width/2, screenSize.height/2)];

CCLabelTTF *label = [CCLabelTTF labelWithString:@"Tap screen" fontName:@"Marker Felt" fontSize:32];
[self addChild:label z:0];
[label setColor:ccc3(0,0,255)];
label.position = ccp( screenSize.width/2, screenSize.height-50);

That’s it for the HelloWorldLayer.mm file. Let’s also remove the blocks.png file so we don’t keep any files that aren’t really necessary. Open the Resources group in the project navigator and remove the blocks.png file. When asked whether you want to remove the reference only or delete the file, don’t be shy and hit Delete, as we really don’t want this file anymore.

To make sure everything works, compile and run and you should see an empty scene:

An empty scene in Cocos2D

The repository tag for this point in the tutorial is CleanUpProject.

Adding Some Sprites

First let’s add the images we’ll use in the project. Again, I got these images from Vicki’s site but I had to modify them a bit for the purpose of the game. Basically, I changed the catapult arm to already include the cup and straightened it to make the physics easier.

So go ahead and download my modified files from GitHub.

Now expand this archive. This will create a new folder called “images”. Drag this folder to the Resources group of your project. Be sure to check the “Copy items into destination group’s folder” checkbox to copy these files into your project folder.

Adding images to your Xcode project

Now we’re ready to start adding our sprites to the scene and try to reproduce the final scene that Vicki designed:

The level design for our catapult level

Open the HelloWorldLayer.mm file and insert this code right after m_debugDraw->SetFlags(flags);

CCSprite *sprite = [CCSprite spriteWithFile:@"bg.png"];
sprite.anchorPoint = CGPointZero;
[self addChild:sprite z:-1];

sprite = [CCSprite spriteWithFile:@"catapult_base_2.png"];
sprite.anchorPoint = CGPointZero;
sprite.position = CGPointMake(181.0f, FLOOR_HEIGHT);
[self addChild:sprite z:0];

sprite = [CCSprite spriteWithFile:@"squirrel_1.png"];
sprite.anchorPoint = CGPointZero;
sprite.position = CGPointMake(11.0f, FLOOR_HEIGHT);
[self addChild:sprite z:0];

sprite = [CCSprite spriteWithFile:@"catapult_base_1.png"];
sprite.anchorPoint = CGPointZero;
sprite.position = CGPointMake(181.0f, FLOOR_HEIGHT);
[self addChild:sprite z:9];

sprite = [CCSprite spriteWithFile:@"squirrel_2.png"];
sprite.anchorPoint = CGPointZero;
sprite.position = CGPointMake(240.0f, FLOOR_HEIGHT);
[self addChild:sprite z:9];

sprite = [CCSprite spriteWithFile:@"fg.png"];
sprite.anchorPoint = CGPointZero;
[self addChild:sprite z:10];

Right now we’re just adding sprites that will not be part of the physics simulation. We add the sprites at the back of the scene first and work our way to the front.

The default anchor point of CCSprite is the middle of the sprite. I’m changing it to the bottom-left corner of the image to make it easier to place them.

You probably noticed that a lot of Y coordinates use the constant FLOOR_HEIGHT, which we have not defined yet. Since it’s used in many places, a constant makes it easy to adjust the floor height later. So let’s create this constant. At the top of the file, right after the PTM_RATIO definition, add this:

#define FLOOR_HEIGHT    62.0f

Let’s click Run to check out how we’re coming along. This is what I got:

Sprites added to level

w00t, it’s starting to look good already!

The repository tag for this point in the tutorial is BeforePhysics.

Adding the Catapult Arm

It’s time to start adding some physics to this world. Right after the code you added above is the template’s code to create the world borders. Let’s change this a bit to describe our world.

The default world is exactly the size of the iPhone’s screen. Since our scene’s width is twice this size, we’ll have to change our world’s width. To do this, multiply every instance of screenSize.width in the world boundary definitions by 2.0f.

Also, since our world’s floor is not at the bottom of the screen, change the bottom edge’s Y coordinates to FLOOR_HEIGHT/PTM_RATIO. The final code should be this:

// bottom
groundBox.SetAsEdge(b2Vec2(0,FLOOR_HEIGHT/PTM_RATIO), b2Vec2(screenSize.width*2.0f/PTM_RATIO,FLOOR_HEIGHT/PTM_RATIO));

// top
groundBox.SetAsEdge(b2Vec2(0,screenSize.height/PTM_RATIO), b2Vec2(screenSize.width*2.0f/PTM_RATIO,screenSize.height/PTM_RATIO));

// left
groundBox.SetAsEdge(b2Vec2(0,screenSize.height/PTM_RATIO), b2Vec2(0,0));

// right
groundBox.SetAsEdge(b2Vec2(screenSize.width*2.0f/PTM_RATIO,screenSize.height/PTM_RATIO), b2Vec2(screenSize.width*2.0f/PTM_RATIO,0));

Now let’s add the catapult arm. First let’s add some references to the Box2d body and fixture we’ll create as we’ll need this later on. Go to HelloWorldLayer.h and add this to the class’ instance variables:

b2Fixture *armFixture;
b2Body *armBody;

Now go back to the class implementation file and add this at the bottom of init:

// Create the catapult's arm
CCSprite *arm = [CCSprite spriteWithFile:@"catapult_arm.png"];
[self addChild:arm z:1];

b2BodyDef armBodyDef;
armBodyDef.type = b2_dynamicBody;
armBodyDef.linearDamping = 1;
armBodyDef.angularDamping = 1;
armBodyDef.position.Set(230.0f/PTM_RATIO, (FLOOR_HEIGHT + 91.0f)/PTM_RATIO);
armBodyDef.userData = arm;
armBody = world->CreateBody(&armBodyDef);

b2PolygonShape armBox;
b2FixtureDef armBoxDef;
armBoxDef.shape = &armBox;
armBoxDef.density = 0.3F;
armBox.SetAsBox(11.0f/PTM_RATIO, 91.0f/PTM_RATIO);
armFixture = armBody->CreateFixture(&armBoxDef);

If you already went through the previous Box2D tutorials on this site (and if you haven’t, I recommend going through those before continuing), most of this should be familiar to you.

We first load the catapult’s arm sprite and add it to the scene. Notice the z index. We used z indexes when we added the static sprites to the scene, so this z index puts the arm between the two sides of the catapult to look nice.

Notice also that we didn’t specify the sprite’s position. That’s because the tick method will set the sprite’s position to the Box2D body’s position.

Next we create the Box2D body as a dynamic body. The userData here is important because of what I mentioned in the last paragraph: it’s what lets the sprite follow the body. Also notice that the position is set to be above the FLOOR_HEIGHT we defined. That’s because we have to specify the position of the center of the body; with Box2D we can’t use the bottom-left corner.

Next comes the creation of the fixture that describes the body’s physical features. It will be a simple rectangle.

We set the fixture’s rectangle to be a bit smaller than the size of the sprite. This is because if you take a look at the sprite, you’ll see that the sprite is larger than the arm itself:

Making a fixture rectangle smaller than the sprite

In this picture, the black rectangle is the size of the sprite, and the red rectangle is the size we want for the fixture.

You can get these dimensions using any image editing software. Just remember to use the standard resolution image for this as the -hd version gets scaled to the standard size.

Run the app again and you’ll see the arm in a vertical position:

Catapult Arm added to level

So far so good – now let’s add some movement to this catapult!

Rotating with Joints

We need to somehow restrict the movement of the catapult so that it rotates around a point. The way you restrict motion of Box2D bodies relative to each other is via joints.

No Jay and Silent Bob, not that kind of joint!


There’s one particular joint that is perfect for this – a revolute joint. Think of this as pinning two bodies together at a particular point, but still allowing them to rotate.

So we can “pin” the catapult arm to the ground at the base of the catapult to get the effect we want!

Let’s try this out. Go to HelloWorldLayer.h and add this to the class’ instance variables:

b2RevoluteJoint *armJoint;

Now go back to the class implementation file and add this right after the catapult’s arm creation:

// Create a joint to fix the catapult to the floor.
b2RevoluteJointDef armJointDef;
armJointDef.Initialize(groundBody, armBody, b2Vec2(233.0f/PTM_RATIO, FLOOR_HEIGHT/PTM_RATIO));
armJointDef.enableMotor = true;
armJointDef.enableLimit = true;
armJointDef.motorSpeed = -1260;
armJointDef.lowerAngle = CC_DEGREES_TO_RADIANS(9);
armJointDef.upperAngle = CC_DEGREES_TO_RADIANS(75);
armJointDef.maxMotorTorque = 4800;

armJoint = (b2RevoluteJoint*)world->CreateJoint(&armJointDef);

When creating the joint you have to specify two bodies and the hinge point. You might be thinking: “shouldn’t the catapult’s arm attach to the base?” Well, in the real world, yes. But in Box2D, not necessarily. You could do this, but then you’d have to create another body for the base and add more complexity to the simulation.

Since the base would be static anyway, and in Box2D the hinge point doesn’t have to lie inside either of the two bodies, we can just use the groundBody that we already have.

The angle limits, combined with a motor make the catapult behave much like a catapult in the real world.

You’ll also notice we set a motor up on the joint, by setting “enableMotor”, “motorSpeed”, and “maxMotorTorque”. By setting the motor speed to negative, this makes the catapult arm want to continually rotate clockwise (kind of like a spring on a catapult).

However, we also enabled limits on the joint by setting “enableLimit”, “lowerAngle”, and “upperAngle”. This makes the arm stop rotating at the 9 degree angle (slightly bent to the left) and a 75 degree angle (pulled back a good bit to the left). This simulates the movement of the catapult arm we want.

In a while we’ll add another joint that will let you pull back the catapult. When we release this force the motor will make the arm fling forward, much like a real catapult would!

The motor speed value is in radians per second. Yes, not very intuitive, I know. What I did was to just try out different values until I had the desired effect. You can start out small and increase it until you get the desired speed for the arm. The maxMotorTorque parameter is the max torque the motor can apply. This value is also something that you can vary to see how things react. Again it will become clear soon.

Run the app and you’ll see the arm bent to the left now:

Catapult arm connected to ground via a revolute joint

The repository tag for this point in the tutorial is CatapultJoint.

Pulling The Catapult’s Leg (or Arm)

Ok, now it’s time to move this arm. To accomplish this we’ll use a mouse joint. If you went through Ray’s breakout game tutorial you already know what a mouse joint does.

But if you didn’t go through this, here it is, straight from Ray:

“In Box2D, a mouse joint is used to make a body move toward a specified point.”

That’s exactly what we want to do. So, first let’s declare the mouse joint’s variable on the HelloWorldLayer.h file:

b2MouseJoint *mouseJoint;

Then let’s add the touchesBegan method to our implementation file:

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (mouseJoint != nil) return;

    UITouch *myTouch = [touches anyObject];
    CGPoint location = [myTouch locationInView:[myTouch view]];
    location = [[CCDirector sharedDirector] convertToGL:location];
    b2Vec2 locationWorld = b2Vec2(location.x/PTM_RATIO, location.y/PTM_RATIO);

    if (locationWorld.x < armBody->GetWorldCenter().x + 50.0/PTM_RATIO)
    {
        b2MouseJointDef md;
        md.bodyA = groundBody;
        md.bodyB = armBody;
        md.target = locationWorld;
        md.maxForce = 2000;

        mouseJoint = (b2MouseJoint *)world->CreateJoint(&md);
    }
}

Again, quoting from Ray:

"When you set up a mouse joint, you have to give it two bodies. The first isn’t used, but
the convention is to use the ground body. The second is the body you want to move”.

The target is where we want the joint to pull our arm’s body. We have to convert the touch first to Cocos2D coordinates and then to Box2D world coordinates. We only create the joint if the touch is to the left of the arm’s body; the 50-pixel offset allows the touch to be a little to the right of the arm.

The maxForce parameter will determine the max force applied to the catapult’s arm to make it follow the target point. In our case we have to make it strong enough to counteract the torque applied by the motor of the revolute joint.

If you make this value too small, you won’t be able to pull the arm back because the motor is applying a large torque to it. In that case you can either decrease the maxMotorTorque specified in our revolute joint or increase the maxForce of the mouse joint.

I suggest you play with the maxForce of the mouse joint and the maxMotorTorque of the revolute joint to see what values work. Decrease the maxForce to 500, try it out, and you’ll see you can’t pull the arm. Then decrease the maxMotorTorque to 1000 and you’ll see that you can do it again. But let’s finish implementing this first…

You’ll notice that the groundBody variable is not declared yet. We created the body on the init method but we didn’t keep a reference to it. Let’s fix this real quick. Add this to the .h file:

b2Body *groundBody;

Then go back to the init method and change this line:

 b2Body* groundBody = world->CreateBody(&groundBodyDef);

to this:

groundBody = world->CreateBody(&groundBodyDef);

We now have to implement the touchesMoved so the mouse joint follows your touch:

- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (mouseJoint == nil) return;

    UITouch *myTouch = [touches anyObject];
    CGPoint location = [myTouch locationInView:[myTouch view]];
    location = [[CCDirector sharedDirector] convertToGL:location];
    b2Vec2 locationWorld = b2Vec2(location.x/PTM_RATIO, location.y/PTM_RATIO);

    mouseJoint->SetTarget(locationWorld);
}

This method is simple enough. It just converts the point to world coordinates and then changes the target point of the mouse joint to that point.

To finish it off we just have to release the arm by destroying the mouse joint. Let’s do this by implementing the touchesEnded method:

- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (mouseJoint != nil)
    {
        world->DestroyJoint(mouseJoint);
        mouseJoint = nil;
    }
}

Simple enough. Just destroy the joint and clear the variable. Try it out now: run the project and pull the arm back with your finger. When you let go, you’ll see the arm snap back very fast to its resting position.

Pulling the Catapult Arm

This is actually too fast. What controls this speed, as you remember, is the motorSpeed of the revolute joint and the maxMotorTorque applied. Let’s try to decrease the value of the motorSpeed and see what happens.

Go to the init method and try a few smaller values for yourself to get a sense of it. A value that worked well for me was -10. Change it to this value and you’ll see the speed is something that seems more natural for a catapult.

armJointDef.motorSpeed  = -10; //-1260;

The repository tag for this point in the tutorial is MouseJoint.

Ready, Aim, Fire!

You know what time it is – heavy ammunition time, heh heh! Or acorns, in this case.

We’ll create every bullet body at the beginning of the game and use them one by one. So we need a place to store them all. Go to the .h file and add these variables to our class:

NSMutableArray *bullets;
int currentBullet;

Go back to the implementation file and, before we forget, add this to the dealloc method:

[bullets release];

Add a method to create all the bullets above the init method:

- (void)createBullets:(int)count
{
    currentBullet = 0;
    CGFloat pos = 62.0f;

    if (count > 0)
    {
        // delta is the spacing between acorns
        // 62 is the position on the screen where we want the acorns to start appearing
        // 165 is the position on the screen where we want the acorns to stop appearing
        // 30 is the size of an acorn
        CGFloat delta = (count > 1)?((165.0f - 62.0f - 30.0f) / (count - 1)):0.0f;

        bullets = [[NSMutableArray alloc] initWithCapacity:count];
        for (int i=0; i<count; i++, pos+=delta)
        {
            // Create the bullet
            CCSprite *sprite = [CCSprite spriteWithFile:@"acorn.png"];
            [self addChild:sprite z:1];

            b2BodyDef bulletBodyDef;
            bulletBodyDef.type = b2_dynamicBody;
            bulletBodyDef.bullet = true;
            bulletBodyDef.position.Set(pos/PTM_RATIO, (FLOOR_HEIGHT + 15.0f)/PTM_RATIO);
            bulletBodyDef.userData = sprite;
            b2Body *bullet = world->CreateBody(&bulletBodyDef);
            bullet->SetActive(false);

            b2CircleShape circle;
            circle.m_radius = 15.0/PTM_RATIO;

            b2FixtureDef ballShapeDef;
            ballShapeDef.shape = &circle;
            ballShapeDef.density = 0.8f;
            ballShapeDef.restitution = 0.2f;
            ballShapeDef.friction = 0.99f;
            bullet->CreateFixture(&ballShapeDef);

            [bullets addObject:[NSValue valueWithPointer:bullet]];
        }
    }
}

Most of this method should be familiar to you by now. It creates a variable number of bullets, evenly spaced between the first squirrel and the catapult’s body.

One detail you might not have seen before is the bullet parameter of the bulletBodyDef. This tells Box2D that this will be a fast-moving body, so Box2D will be extra careful with it during the simulation.

The Box2D manual explains well why we need to mark these bodies as bullets:

“Game simulation usually generates a sequence of images that are played at some frame rate. This is called discrete simulation. In discrete simulation, rigid bodies can move by a large amount in one time step. If a physics engine doesn’t account for the large motion, you may see some objects incorrectly pass through each other. This effect is called tunneling.”

By default, Box2D uses continuous collision detection (CCD) to prevent dynamic bodies from tunneling through static bodies. This is done by sweeping shapes from their old position to their new positions. The engine looks for new collisions during the sweep and computes the time of impact (TOI) for these collisions. Bodies are moved to their first TOI and then halted for the remainder of the time step.

Normally CCD is not used between dynamic bodies. This is done to keep performance reasonable. In some game scenarios you need dynamic bodies to use CCD. For example, you may want to shoot a high speed bullet at a stack of dynamic bricks. Without CCD, the bullet might tunnel through the bricks.

We’ll now create a method to attach the bullet to the catapult. We’ll need two more class variables for this so let’s add them to the .h file:

b2Body *bulletBody;
b2WeldJoint *bulletJoint;

The bulletBody variable will keep track of the currently attached bullet body so we can track its movement later. The bulletJoint variable will keep a reference to the joint we’ll create between the bullet and the catapult’s arm.

Now go back to the implementation file and add the following right after createBullets:

- (BOOL)attachBullet
{
    if (currentBullet < [bullets count])
    {
        bulletBody = (b2Body*)[[bullets objectAtIndex:currentBullet++] pointerValue];
        bulletBody->SetTransform(b2Vec2(230.0f/PTM_RATIO, (155.0f+FLOOR_HEIGHT)/PTM_RATIO), 0.0f);
        bulletBody->SetActive(true);

        b2WeldJointDef weldJointDef;
        weldJointDef.Initialize(bulletBody, armBody, b2Vec2(230.0f/PTM_RATIO, (155.0f+FLOOR_HEIGHT)/PTM_RATIO));
        weldJointDef.collideConnected = false;

        bulletJoint = (b2WeldJoint*)world->CreateJoint(&weldJointDef);
        return YES;
    }

    return NO;
}

We first get the pointer to the current bullet (we’ll have a way to cycle through them later). The SetTransform method changes the position of the center of the body; the position in the code is the position of the tip of the catapult’s arm. We then set the bullet body to active so Box2D starts simulating its physics.

We then create a weld joint. A weld joint attaches two bodies at the position we specify in the Initialize method and doesn’t allow any movement between them from that point forward.

We set collideConnected to false because we don’t want to have collisions between the bullet and the catapult’s arm.

Notice that we return YES if there were still bullets available and NO otherwise. This will be useful later to check if the level is over because we ran out of bullets.

Let’s create another method to call all these initialization methods right after attachBullet:

- (void)resetGame
{
    [self createBullets:4];
    [self attachBullet];
}

Now add a call to this method at the end of the init method:

[self resetGame];

Run the project and you’ll see something weird:

Acorn attached at incorrect position

The acorn is a little off the mark. That’s because the position I set for the acorn to attach is the position for when the catapult’s arm is at 9 degrees, its resting angle. But at the end of the init method the catapult’s arm is still at the zero-degree angle, so the bullet gets attached in the wrong position.

To fix this we only have to give the simulation some time to bring the catapult’s arm to rest. So let’s change the call to resetGame in the init method to this:

[self performSelector:@selector(resetGame) withObject:nil afterDelay:0.5f];

This will make the call half a second later. We’ll have a better solution for this later on but for now it’ll do. If you run the project now you’ll see the correct result:

Catapult arm with acorn attached correctly

If we now pull the catapult’s arm and let go, the acorn won’t fly off, because it’s welded to the catapult. We need a way to release the bullet. To do this we just have to destroy the joint. But where and when should we destroy it?

The best way is to check for some conditions on the tick method that gets called on every simulation step.

First we need a way to know if the catapult’s arm is being released. Let’s add a variable to our class for this first:

BOOL releasingArm;

Now go back to the ccTouchesEnded method and add this condition right before we destroy the mouse joint:

if (armJoint->GetJointAngle() >= CC_DEGREES_TO_RADIANS(20))
releasingArm = YES;

This will set our release flag to YES only if the arm is at least at a 20-degree angle when it gets released. If we just pull the arm back a little bit, we won’t release the bullet.

Now add this to the end of the tick method:

// Arm is being released.
if (releasingArm && bulletJoint)
{
    // Check if the arm is back near its resting angle so we can release the bullet
    if (armJoint->GetJointAngle() <= CC_DEGREES_TO_RADIANS(10))
    {
        releasingArm = NO;

        // Destroy the joint so the bullet will be free
        world->DestroyJoint(bulletJoint);
        bulletJoint = nil;
    }
}


Pretty simple, right? We wait until the arm is almost at its resting position and then we release the bullet.

Run the project and you should see the bullet flying off very fast!

Acorn flying through the air

In my opinion that’s a little too fast. Let’s slow it down a bit by decreasing the max torque of the revolute joint.

Go back to the init method and decrease the max torque value from 4800 to 700. You can try out some other values to see what the effect is.

armJointDef.maxMotorTorque = 700; //4800;

Try again and ah much better – acorn flying action!

Gratuitous Camera Movement

One thing that would be nice is if we could make the scene move to follow the bullet, so we can see its whole flight.

We can do this easily by changing the position property of the scene. Add this code to the end of the tick method:

// Bullet is moving.
if (bulletBody && bulletJoint == nil)
{
    b2Vec2 position = bulletBody->GetPosition();
    CGPoint myPosition = self.position;
    CGSize screenSize = [CCDirector sharedDirector].winSize;

    // Move the camera.
    if (position.x > screenSize.width / 2.0f / PTM_RATIO)
    {
        myPosition.x = -MIN(screenSize.width * 2.0f - screenSize.width, position.x * PTM_RATIO - screenSize.width / 2.0f);
        self.position = myPosition;
    }
}

The condition checks if there’s a bullet that was attached but isn’t anymore, which means it must be in flight. We then get its position and check whether it’s past the middle of the screen. If it is, we change the position of the layer so the bullet stays in the middle of the screen.

Notice the minus sign in front of the MIN. We need it because the layer’s position has to be negative to make the scene move to the left.

Try it out. It’s pretty cool now!

Flying acorn with screen scrolling

Where To Go From Here?

The repository tag for this point in the tutorial is BulletCreation.

At this point, you should have a great start of a catapult game – the catapult works quite well! Stay tuned for the next part of the tutorial, where we expand this into a full game with enemies and collision detection!

In the meantime, if you have any questions or comments, join the forum discussion below!

This is a blog post by iOS Tutorial Team member Gustavo Ambrozio, a software engineer with over 20 years of experience, over 3 years of iOS experience, and founder of CodeCrop Software.