Wednesday, November 30, 2011

UIGestureRecognizer Tutorial in iOS 5: Pinches, Pans, and More!

Learn how to use UIGestureRecognizers to pinch, zoom, drag, and more!


If you need to detect gestures in your app, such as taps, pinches, pans, or rotations, it’s extremely easy with the built-in UIGestureRecognizer classes.


In this tutorial, we’ll show you how you can easily add gesture recognizers into your app, both within the Storyboard editor in iOS 5 and programmatically.


We’ll create a simple app where you can move a monkey and a banana around by dragging, pinching, and rotating with the help of gesture recognizers.


We’ll also demonstrate some cool extras like:


  • Adding deceleration for movement
  • Setting gesture recognizer dependencies
  • Creating a custom UIGestureRecognizer so you can tickle the monkey! :]

This tutorial assumes you are familiar with the basic concepts of ARC and Storyboards in iOS 5. If you are new to these concepts, you may wish to check out our ARC and Storyboard tutorials first.


I think the monkey just gave us the thumbs up gesture, so let’s get started! :]



Getting Started


Open up Xcode and create a new project with the iOS\Application\Single View Application template. For the Product Name enter MonkeyPinch, for the Device Family choose iPhone, and select the Use Storyboard and Use Automatic Reference Counting checkboxes, as shown below.


First, download the resources for this project and add the four files inside into your project. In case you’re wondering, the images for this tutorial came from my lovely wife’s free game art pack, and we made the sound effects ourselves with a mike :P


Next, open up MainStoryboard.storyboard, and drag an Image View into the View Controller. Set the image to monkey_1.png, and resize the Image View to match the size of the image itself by selecting Editor\Size to Fit Content. Then drag a second image view in, set it to object_bananabunch.png, and also resize it. Arrange the image views however you like in the view controller. At this point you should have something like this:


Adding image views in the storyboard editor


That’s it for the UI for this app – now let’s add a gesture recognizer so we can drag those image views around!


UIGestureRecognizer Overview


Before we get started, let me give you a brief overview of how you use UIGestureRecognizers and why they’re so handy.


In the old days before UIGestureRecognizers, if you wanted to detect a gesture such as a swipe, you’d have to handle every touch within a UIView yourself – in methods such as touchesBegan, touchesMoved, and touchesEnded. Each programmer wrote slightly different code to detect touches, resulting in subtle bugs and inconsistencies across apps.


In iOS 3.0, Apple came to the rescue with the new UIGestureRecognizer classes! These provide a default implementation for detecting common gestures such as taps, pinches, rotations, swipes, pans, and long presses. Not only do they save you a ton of code, they also make your apps behave consistently!


Using UIGestureRecognizers is extremely simple. You just perform the following steps:


  1. Create a gesture recognizer. When you create a gesture recognizer, you specify a callback method so the gesture recognizer can send you updates when the gesture starts, changes, or ends.
  2. Add the gesture recognizer to a view. Each gesture recognizer is associated with one (and only one) view. When a touch occurs within the bounds of that view, the gesture recognizer will look to see if it matches the type of touch it’s looking for, and if a match is found it will notify the callback method.

You can perform these two steps programmatically (which we’ll do later on in this tutorial), but it’s even easier to add a gesture recognizer visually with the Storyboard editor. So let’s see how it works and add our first gesture recognizer to this project!


UIPanGestureRecognizer


Still with MainStoryboard.storyboard open, look inside the Object Library for the Pan Gesture Recognizer, and drag it on top of the monkey Image View. This both creates the pan gesture recognizer and associates it with the monkey Image View. You can verify the connection by clicking on the monkey Image View, looking at the Connections Inspector, and making sure the Pan Gesture Recognizer is in the gestureRecognizers collection:


Checking that a gesture recognizer is in the view's gestureRecognizers collection


You may wonder why we associated it to the image view instead of the view itself. Either approach would be OK, it’s just what makes most sense for your project. Since we tied it to the monkey, we know that any touches are within the bounds of the monkey so we’re good to go. The drawback of this method is sometimes you might want touches to be able to extend beyond the bounds. In that case, you could add the gesture recognizer to the view itself, but you’d have to write code to check if the user is touching within the bounds of the monkey or the banana and react accordingly.


Now that we’ve created the pan gesture recognizer and associated it to the image view, we just have to write our callback method so we can actually do something when the pan occurs.


Open up ViewController.h and add the following declaration:



- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer;


Then implement it in ViewController.m as follows:



- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:self.view];
    recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
                                         recognizer.view.center.y + translation.y);
    [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}


The UIPanGestureRecognizer will call this method when a pan gesture is first detected, then continuously as the user continues to pan, and one last time when the pan is complete (usually when the user lifts their finger).


The UIPanGestureRecognizer passes itself as an argument to this method. You can retrieve the amount the user has moved their finger by calling the translationInView method. Here we use that amount to move the center of the monkey the same amount the finger has been dragged.


Note it’s extremely important to set the translation back to zero once you are done. Otherwise, the translation will keep compounding each time, and you’ll see your monkey rapidly move off the screen!


Note that instead of hard-coding the monkey image view into this method, we get a reference to the monkey image view by calling recognizer.view. This makes our code more generic, so that we can re-use this same routine for the banana image view later on.


OK, now that this method is complete let’s hook it up to the UIPanGestureRecognizer. Select the UIPanGestureRecognizer in Interface Builder, bring up the Connections Inspector, and drag a line from the selector to the View Controller. A popup will appear – select handlePan. At this point your Connections Inspector for the Pan Gesture Recognizer should look like this:


Connections for the pan gesture recognizer


Compile and run, and try to drag the monkey and… wait, it doesn’t work!


The reason this doesn’t work is that touches are disabled by default on views that normally don’t accept touches, like Image Views. So select both image views, open up the Attributes Inspector, and check the User Interaction Enabled checkbox.


Setting user interaction enabled for an image view


Compile and run again, and this time you should be able to drag the monkey around the screen!


Dragging the monkey around the screen with UIPanGestureRecognizer


Note that you can’t drag the banana. This is because each gesture recognizer is tied to one (and only one) view. So go ahead and add another gesture recognizer for the banana, by performing the following steps:


  1. Drag a Pan Gesture Recognizer on top of the banana Image View.
  2. Select the new Pan Gesture Recognizer, select the Connections Inspector, and drag a line from the selector to the View Controller and connect it to the handlePan method.

Give it a try and you should now be able to drag both image views across the screen. Pretty easy to implement such a cool and fun effect, eh?


Gratuitous Deceleration


In a lot of Apple apps and controls, when you stop moving something there’s a bit of deceleration as it finishes moving. Think about scrolling a web view, for example. It’s common to want to have this type of behavior in your apps.


There are many ways of doing this, but we’re going to do one very simple implementation for a rough but nice effect. The idea is we need to detect when the gesture ends, figure out how fast the touch was moving, and animate the object moving to a final destination based on the touch speed.


  • To detect when the gesture ends: The callback we pass to the gesture recognizer is potentially called multiple times – for example, when the gesture recognizer changes its state to began, changed, or ended. We can find out what state the gesture recognizer is in simply by looking at its state property.
  • To detect the touch velocity: Some gesture recognizers return additional information – you can look at the API reference to see what you can get. There’s a handy method called velocityInView that we can use on the UIPanGestureRecognizer!

So add the following to the bottom of the handlePan method:



if (recognizer.state == UIGestureRecognizerStateEnded) {

    CGPoint velocity = [recognizer velocityInView:self.view];
    CGFloat magnitude = sqrtf((velocity.x * velocity.x) + (velocity.y * velocity.y));
    CGFloat slideMult = magnitude / 200;
    NSLog(@"magnitude: %f, slideMult: %f", magnitude, slideMult);

    float slideFactor = 0.1 * slideMult; // Increase for more of a slide
    CGPoint finalPoint = CGPointMake(recognizer.view.center.x + (velocity.x * slideFactor),
                                     recognizer.view.center.y + (velocity.y * slideFactor));
    finalPoint.x = MIN(MAX(finalPoint.x, 0), self.view.bounds.size.width);
    finalPoint.y = MIN(MAX(finalPoint.y, 0), self.view.bounds.size.height);

    [UIView animateWithDuration:slideFactor*2 delay:0 options:UIViewAnimationOptionCurveEaseOut animations:^{
        recognizer.view.center = finalPoint;
    } completion:nil];

}


This is just a very simple method I wrote up for this tutorial to simulate deceleration. It takes the following strategy:


  • Figure out the length of the velocity vector (i.e. the magnitude)
  • If the length is < 200, then decrease the base speed, otherwise increase it.
  • Calculate a final point based on the velocity and the slideFactor.
  • Make sure the final point is within the view’s bounds
  • Animate the view to its final resting place, using the “ease out” animation option to slow down the movement over time.

Compile and run to try it out, you should now have some basic but nice deceleration! Feel free to play around with it and improve it – if you come up with a better implementation, please share in the forum discussion at the end of this article.


UIPinchGestureRecognizer and UIRotationGestureRecognizer


Our app is coming along great so far, but it would be even cooler if you could scale and rotate the image views by using pinch and rotation gestures as well!


Let’s add the code for the callbacks first. Add the following declarations to ViewController.h:



- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer;
- (IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer;


And add the following implementations in ViewController.m:



- (IBAction)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
}

- (IBAction)handleRotate:(UIRotationGestureRecognizer *)recognizer {
    recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, recognizer.rotation);
    recognizer.rotation = 0;
}


Just like we could get the translation from the pan gesture recognizer, we can get the scale and rotation from the UIPinchGestureRecognizer and UIRotationGestureRecognizer.


Every view has a transform applied to it, which you can think of as information on the rotation, scale, and translation that should be applied to the view. Apple provides a lot of built-in functions to make working with transforms easy, such as CGAffineTransformScale (to scale a given transform) and CGAffineTransformRotate (to rotate a given transform). Here we just use these to update the view’s transform based on the gesture.


Again, since we’re updating the view each time the gesture updates, it’s very important to reset the scale and rotation back to the default state so we don’t have craziness going on.


Now let’s hook these up in the Storyboard editor. Open up MainStoryboard.storyboard and perform the following steps:


  1. Drag a Pinch Gesture Recognizer and a Rotation Gesture Recognizer on top of the monkey. Then repeat this for the banana.
  2. Connect the selector for the Pinch Gesture Recognizers to the View Controller’s handlePinch method, and the selector for the Rotation Gesture Recognizers to the View Controller’s handleRotate method.

Compile and run (I recommend running on a device if possible because pinches and rotations are kinda hard to do on the simulator), and now you should be able to scale and rotate the monkey and banana!


Scaling and rotating the monkey and banana


Simultaneous Gesture Recognizers


You may notice that if you put one finger on the monkey, and one on the banana, you can drag them around at the same time. Kinda cool, eh?


However, you’ll notice that if you try to drag the monkey around, and in the middle of dragging bring down a second finger to attempt to pinch to zoom, it doesn’t work. By default, once one gesture recognizer on a view “claims” the gesture, no others can recognize a gesture from that point on.


However, you can change this by overriding a method in the UIGestureRecognizer delegate. Let’s see how it works!


Open up ViewController.h and mark the class as implementing UIGestureRecognizerDelegate as shown below:



@interface ViewController : UIViewController <UIGestureRecognizerDelegate>


Then switch to ViewController.m and implement one of the optional methods you can override:



- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}


This method tells the gesture recognizer whether it is OK to recognize a gesture if another (given) recognizer has already detected a gesture. The default implementation always returns NO – we switch it to always return YES here.
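If always returning YES is too permissive for your app, a slightly more selective variant (a sketch, not part of this tutorial’s final code) is to allow simultaneous recognition only between recognizers attached to the same view – e.g. pinch and rotate on the monkey, but not a pan on the monkey at the same time as a pan on the banana:

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // Only recognizers attached to the same view may run at the same time.
    return gestureRecognizer.view == otherGestureRecognizer.view;
}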


Next, open MainStoryboard.storyboard, and for each gesture recognizer connect its delegate outlet to the view controller.


Compile and run the app again, and now you should be able to drag the monkey, pinch to scale it, and continue dragging afterwards! You can even scale and rotate at the same time in a natural way. This makes for a much nicer experience for the user.


Programmatic UIGestureRecognizers


So far we’ve created gesture recognizers with the Storyboard editor, but what if you wanted to do things programmatically?


It’s just as easy, so let’s try it out by adding a tap gesture recognizer to play a sound effect when either of these image views are tapped.


Since we’re going to play a sound effect, we need to add the AVFoundation.framework to our project. To do this, select your project in the Project navigator, select the MonkeyPinch target, select the Build Phases tab, expand the Link Binary with Libraries section, click the Plus button, and select AVFoundation.framework. At this point your list of frameworks should look like this:


Adding the AVFoundation framework into the project


Open up ViewController.h and make the following changes:



// Add to top of file
#import <AVFoundation/AVFoundation.h>

// Add after @interface
@property (strong) AVAudioPlayer * chompPlayer;
- (void)handleTap:(UITapGestureRecognizer *)recognizer;


And then make the following changes to ViewController.m:



// After @implementation
@synthesize chompPlayer;

// Before viewDidLoad
- (AVAudioPlayer *)loadWav:(NSString *)filename {
    NSURL * url = [[NSBundle mainBundle] URLForResource:filename withExtension:@"wav"];
    NSError * error;
    AVAudioPlayer * player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    if (!player) {
        NSLog(@"Error loading %@: %@", url, error.localizedDescription);
    } else {
        [player prepareToPlay];
    }
    return player;
}

// Replace viewDidLoad with the following
- (void)viewDidLoad
{
    [super viewDidLoad];
    for (UIView * view in self.view.subviews) {

        UITapGestureRecognizer * recognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
        recognizer.delegate = self;
        [view addGestureRecognizer:recognizer];

        // TODO: Add a custom gesture recognizer too

    }

    self.chompPlayer = [self loadWav:@"chomp"];
}

// Add to bottom of file
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    [self.chompPlayer play];
}


The audio playing code is outside of the scope of this tutorial so we won’t discuss it (although it is incredibly simple).


The important part is in viewDidLoad. We cycle through all of the subviews (just the monkey and banana image views) and create a UITapGestureRecognizer for each, specifying the callback. We set the delegate of the recognizer programmatically, and add the recognizer to the view.


That’s it! Compile and run, and now you should be able to tap the image views for a sound effect!


UIGestureRecognizer Dependencies


It works pretty well, except there’s one minor annoyance. If you drag an object even a very slight amount, it will pan it and play the sound effect. But what we really want is to play the sound effect only if no pan occurs.


To solve this we could remove or modify the delegate callback to behave differently in the case where a tap and a pan coincide, but I wanted to use this case to demonstrate another useful thing you can do with gesture recognizers: setting dependencies.


There’s a method called requireGestureRecognizerToFail that you can call on a gesture recognizer. Can you guess what it does? ;]


Let’s try it out. Open MainStoryboard.storyboard, open up the Assistant Editor, and make sure that ViewController.h is showing there. Then control-drag from the monkey pan gesture recognizer to below the @interface, and connect it to an outlet named monkeyPan. Repeat this for the banana pan gesture recognizer, but name the outlet bananaPan.


Then simply add these two lines to viewDidLoad, right before the TODO:



[recognizer requireGestureRecognizerToFail:self.monkeyPan];
[recognizer requireGestureRecognizerToFail:self.bananaPan];


Now the tap gesture recognizer will only get called if no pan is detected. Pretty cool eh? You might find this technique useful in some of your projects.


Custom UIGestureRecognizer


At this point you know pretty much everything you need to know to use the built-in gesture recognizers in your apps. But what if you want to detect some kind of gesture not supported by the built-in recognizers?


Well, you could always write your own! Let’s try it out by writing a very simple gesture recognizer to detect if you try to “tickle” the monkey or banana by moving your finger several times from left to right.


Create a new file with the iOS\Cocoa Touch\Objective-C class template. Name the class TickleGestureRecognizer, and make it a subclass of UIGestureRecognizer.


Then replace TickleGestureRecognizer.h with the following:



#import <UIKit/UIKit.h>

typedef enum {
DirectionUnknown = 0,
DirectionLeft,
DirectionRight
} Direction;

@interface TickleGestureRecognizer : UIGestureRecognizer

@property (assign) int tickleCount;
@property (assign) CGPoint curTickleStart;
@property (assign) Direction lastDirection;

@end


Here we declare three properties for the state we need to track in order to detect this gesture:


  • tickleCount: How many times the user has switched the direction of their finger (while moving a minimum number of points). Once the user changes their finger’s direction three times, we count it as a tickle gesture.
  • curTickleStart: The point where the user started moving in this tickle. We’ll update this each time the user switches direction (while moving a minimum number of points).
  • lastDirection: The last direction the finger was moving. It will start out as unknown, and after the user moves a minimum amount we’ll see whether they’ve gone left or right and update this appropriately.


Of course, these properties are specific to the gesture we’re detecting – you’ll have your own if you’re making a recognizer for a different type of gesture, but you can get the general idea here.


Now switch to TickleGestureRecognizer.m and replace it with the following:



#import "TickleGestureRecognizer.h"
#import <UIKit/UIGestureRecognizerSubclass.h>

#define REQUIRED_TICKLES        2
#define MOVE_AMT_PER_TICKLE     25

@implementation TickleGestureRecognizer
@synthesize tickleCount;
@synthesize curTickleStart;
@synthesize lastDirection;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {   
UITouch * touch = [touches anyObject];
self.curTickleStart = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {

// Make sure we've moved a minimum amount since curTickleStart
UITouch * touch = [touches anyObject];
CGPoint ticklePoint = [touch locationInView:self.view];
CGFloat moveAmt = ticklePoint.x - curTickleStart.x;
Direction curDirection;
if (moveAmt < 0) {
curDirection = DirectionLeft;
} else {
curDirection = DirectionRight;
}
if (ABS(moveAmt) < MOVE_AMT_PER_TICKLE) return;

// Make sure we've switched directions
if (self.lastDirection == DirectionUnknown ||
(self.lastDirection == DirectionLeft && curDirection == DirectionRight) ||
(self.lastDirection == DirectionRight && curDirection == DirectionLeft)) {

// w00t we've got a tickle!
self.tickleCount++;
self.curTickleStart = ticklePoint;
self.lastDirection = curDirection;    

// Once we have the required number of tickles, switch the state to ended.
// As a result of doing this, the callback will be called.
if (self.state == UIGestureRecognizerStatePossible && self.tickleCount > REQUIRED_TICKLES) {
[self setState:UIGestureRecognizerStateEnded];
}
}

}

- (void)reset {
self.tickleCount = 0;
self.curTickleStart = CGPointZero;
self.lastDirection = DirectionUnknown;
if (self.state == UIGestureRecognizerStatePossible) {
[self setState:UIGestureRecognizerStateFailed];
}
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
[self reset];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
[self reset];
}

@end


There’s a lot of code here, but I’m not going to go over the specifics because frankly they’re not that important. The important part is the general idea of how it works: we’re implementing touchesBegan, touchesMoved, touchesEnded, and touchesCancelled, and writing custom code to look at the touches and detect our gesture.


Once we’ve found the gesture, we want to send updates to the callback method. You do this by switching the state of the gesture recognizer. Usually once the gesture begins, you want to set the state to UIGestureRecognizerStateBegan, send any updates with UIGestureRecognizerStateChanged, and finalize it with UIGestureRecognizerStateEnded.
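For reference, here’s a sketch (not part of MonkeyPinch) of how a continuous recognizer typically drives its state from the touch handlers, using the readwrite state that UIKit exposes through the UIGestureRecognizerSubclass.h header we imported above:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.state == UIGestureRecognizerStatePossible) {
        [self setState:UIGestureRecognizerStateBegan];   // callback fires with "began"
    } else {
        [self setState:UIGestureRecognizerStateChanged]; // callback fires with "changed"
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self setState:UIGestureRecognizerStateEnded];       // final callback
}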


But for this simple gesture recognizer, once the user has tickled the object, that’s it – we just mark it as ended. The callback will get called and we can implement the code there.


OK, now let’s use this new recognizer! Open ViewController.h and make the following changes:



// Add to top of file
#import "TickleGestureRecognizer.h"

// Add after @interface
@property (strong) AVAudioPlayer * hehePlayer;
- (void)handleTickle:(TickleGestureRecognizer *)recognizer;


And to ViewController.m:



// After @implementation
@synthesize hehePlayer;

// In viewDidLoad, right after TODO
TickleGestureRecognizer * recognizer2 = [[TickleGestureRecognizer alloc] initWithTarget:self action:@selector(handleTickle:)];
recognizer2.delegate = self;
[view addGestureRecognizer:recognizer2];

// At end of viewDidLoad
self.hehePlayer = [self loadWav:@"hehehe1"];

// Add at beginning of handlePan (gotta turn off pan to recognize tickles)
return;

// At end of file
- (void)handleTickle:(TickleGestureRecognizer *)recognizer {
    [self.hehePlayer play];
}


So you can see that using this custom gesture recognizer is as simple as using the built-in ones!


Compile and run and “he he, that tickles!”


Where To Go From Here?


Here is an example project with all of the code from the above tutorial.


Congrats, you now have tons of experience with gesture recognizers! I hope you use them in your apps and enjoy!


If you have any comments or questions about this tutorial or gesture recognizers in general, please join the forum discussion below!


UIGestureRecognizer Tutorial in iOS 5: Pinches, Pans, and More! is a post from: Ray Wenderlich

Tuesday, November 29, 2011

Building a Caterpillar Game with Cocos2D

This entry is part 1 of 2 in the series Build a Caterpillar Game with Cocos2D

In this series, we will be recreating the popular Atari game Centipede using the Cocos2D game engine for iOS. Centipede was originally developed by Atari and released in arcades in 1980. Since then, it has been ported to just about every platform imaginable. For our purposes, we will be calling the game Caterpillar.




Series Overview


This series will focus heavily on utilizing all that Cocos2D has to offer to create a complete game from start to finish. We will also be using some other tools such as Texture Packer to help us along the way. By the end of this series, you will have a fully functional Centipede clone containing graphics, simple animations, user interaction, artificial intelligence, game logic, and audio.


Caterpillar HD, the project this series teaches you to build, is a real game available on the iTunes App Store for free. So, the best way to see what this series is all about is to download the game and try it out for yourself!




Organization of the Series


This series is organized into 6 separate parts that will be released over the coming month or so.


  • Part 1 – We will be focused on getting your assets and Cocos2D project set up. I will show you how to use Texture Packer to prepare your assets as well as how to start a new Cocos2D project, load the assets, and start the title screen.
  • Part 2 – In this section, we will be setting up the game area. This will include getting all of the sprites into place and learning how to draw the game board.
  • Part 3 – We will be building our basic caterpillar and getting it to move across the screen.
  • Part 4 – This will be the most in-depth section. It will be all about the Caterpillar’s artificial intelligence and how it interacts with the world. Bring your thinking caps to this one.
  • Part 5 – At some point, we need to make the game playable by someone. This section focuses on player interaction and the missile object used to kill the caterpillar. After this section, it will really start feeling like a game.
  • Part 6 – In this wrap up of the series, we put on the polish with game audio, scoring, and restart conditions.



Step 1: Getting Cocos2D


Before you begin, you need to download and install a couple tools. The first is the Cocos2D game engine. You can obtain it from their website at http://cocos2d-iphone.org/download.


Once downloaded, you need to install their Xcode templates. This will make your life much easier in the future when you want to set up a project using Cocos2D. Here is their tutorial on how to do that. Don’t worry, this is really simple: just download the template tarball and then untar it in ~/Library/Developer/Xcode/Templates on your machine. The next time you open Xcode and create a new project, you should see a Templates category with several Cocos2D options.




Step 2: Texture Packer


Texture Packer is a fantastic tool that will take a set of textures, turn them into a single texture, and output a plist that tells Cocos2D how to use them. Having a single texture can save you quite a bit of disk space, load time, and complexity.


To get started, download Texture Packer from http://texturepacker.com. You can use the demo version for this tutorial but I strongly recommend purchasing this tool. It is well worth the money!




Importing Assets Into Texture Packer


Start by downloading the attachment for this tutorial. It contains both the standard and high definition versions of our images. Remember, the iPhone 3GS is free now, so there are still plenty of users not using retina display devices. Let’s not leave them out. ;)


Since we have two separate versions of our images, you will need to perform this process twice. Simply drag all of the images in the HD folder except title-hd.png and game-over-hd.png into Texture Packer. It will be clear later why we are not including these two images.




Exporting Assets Out Of Texture Packer


Texture Packer will automatically lay out the images for you and create a single image that is as small as it can possibly be. Note that Cocos2D requires texture dimensions to be powers of 2.


Now that the images have been laid out, click the Publish button at the top. Name the output caterpillar-hd. Make sure to clear the images from Texture Packer and repeat this process for all of the standard definition images in the sd folder and name their output caterpillar.


You should now see a total of 4 files: caterpillar-hd.png, caterpillar-hd.plist, caterpillar.png, and caterpillar.plist.




Step 3: Creating A New Cocos2D Project


Open Xcode and create a new Cocos2D application. This should appear in your new project menu after installing the templates mentioned above.



New Cocos2D Project

Name this project Caterpillar and Xcode will set up everything needed to start a basic project.




Step 4: The Game Scene


Cocos2D uses movie terminology to organize its objects (Director, Scene, etc…). The director is responsible for running and maintaining all of the scenes within the application.


Before we go any further, drag all of the caterpillar files that you created in the previous section into your project, as well as the few stragglers (title.png, title-hd.png, game-over.png, game-over-hd.png). Make sure to check the box to copy the files into your project directory.


By default, you are provided with a new scene and layer called HelloWorldLayer. Since we will be creating our own scene, we don’t need this in our project. Simply delete both the .h and .m files.


Create a new file that is a subclass of CCLayer called GameLayer. Paste in the following code for GameLayer.h.


#import "cocos2d.h"

@interface GameLayer : CCLayer {

}

@property(nonatomic, retain) CCSpriteBatchNode *spritesBatchNode;

+ (CCScene *) scene;

@end

This is basically the same content that was in the HelloWorldLayer, with names changed to GameLayer and the addition of the spritesBatchNode property.


In Cocos2D, a CCSpriteBatchNode allows us to group all of our sprites so that OpenGL ES displays them in a single call. OpenGL ES is essentially a state machine, and switching between states is often very costly, so you will want to do it as infrequently as possible. You can have Cocos2D draw all of your sprites without a CCSpriteBatchNode; however, each sprite is then drawn with its own OpenGL ES call, which hurts performance.


The scene method is simply a class-level method that returns a fully constructed instance of our game scene. It will only be called when telling the director to start our game. We will see the implementation of that in a later section.


Open up GameLayer.m and add the following code:


#import "GameLayer.h"

@implementation GameLayer

@synthesize spritesBatchNode = _spritesBatchNode;

+(CCScene *) scene {
    CCScene *scene = [CCScene node];

    GameLayer *layer = [GameLayer node];

    [scene addChild: layer];

    return scene;
}

- (void) dealloc {
    [_spritesBatchNode release];
    [super dealloc];
}

-(id) init {
    if( (self=[super init])) {

        [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];

        // 1.
        self.spritesBatchNode = [CCSpriteBatchNode batchNodeWithFile:@"caterpillar.png"];
        [self addChild:self.spritesBatchNode];
        // 2.
        [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"caterpillar.plist"];

        [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB565];
        // 3.
        CCSprite * background = [CCSprite spriteWithSpriteFrameName:@"background.png"];
        background.anchorPoint = ccp(0,0);
        [self.spritesBatchNode addChild:background];

        [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_Default];

    }
    return self;
}
@end

Starting with the scene method, we see all of the boilerplate code to initialize the main layer for this scene. We call the node method of our layer, which initializes it and returns it to the caller. Finally, the instantiated scene is returned. You will see code exactly like this in every scene that you create.


The init method is where we are going to be doing all of our setup for our game and loading the main spritesheet into memory.


  1. This is where we initialize our CCSpriteBatchNode object with our caterpillar.png sprite sheet file. It will also look for a file of the same name with a .plist extension in order to determine how to use the file.
  2. After the sprite sheet is loaded, we add all of the sprites to Cocos2D’s CCSpriteFrameCache. This caches the sprites so that when we want to use them over and over again, we don’t have to reload them from disk. I strongly encourage using the cache here, as it will drastically improve performance.
  3. Now we are able to fetch sprites out of the cache based on their original file names. This is thanks to the caterpillar.plist file informing Cocos2D of the mappings (I told you Texture Packer was handy). In this case, we fetch the background out of the cache and add it as a child to our layer at position 0,0 (the bottom left corner, in Cocos2D’s OpenGL-style coordinate system). This will display our game background.



Step 5: The Title Scene


Before we can begin playing our game, we need to present our title screen to the player. To do this, you must create another new file that is a subclass of CCLayer called TitleLayer.


The file TitleLayer.h is very straightforward. Add the following code:


#import "cocos2d.h"

@interface TitleLayer : CCLayer
+(CCScene *) scene;
@end

The only thing we added was the declaration for the scene method. Now, open up TitleLayer.m and add the following code:


#import "TitleLayer.h"
#import "GameLayer.h"
#import "CCTransition.h"

@implementation TitleLayer

+(CCScene *) scene {
    CCScene *scene = [CCScene node];

    TitleLayer *layer = [TitleLayer node];

    [scene addChild: layer];

    return scene;
}

-(id) init {
    if( (self=[super init])) {
        [CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB565];
        // 1
        CCSprite * background = [CCSprite spriteWithFile:@"title.png"];
        background.anchorPoint = ccp(0,0);
        [self addChild:background];

        // 2
        [[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:0 swallowsTouches:YES];
    }
    return self;
}

- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
    // 3
    [[CCDirector sharedDirector] replaceScene:[CCTransitionFade transitionWithDuration:.5 scene:[GameLayer scene] withColor:ccWHITE]];
    return YES;
}

@end

The code for this should look very similar to the GameLayer code that we discussed above. Here are the few key differences.


  1. This loads the background image for the title screen and displays it in the title scene’s main layer.
  2. In order for any layer in Cocos2D to accept touches, you must register it with the touch dispatcher – here via addTargetedDelegate:priority:swallowsTouches:. This causes the touch callbacks of the receiving delegate class to be invoked. In our case, we only care about the ccTouchBegan method.
  3. When the user taps the screen, this method will fire. Inside, we use the director to transition from the title scene to the game scene using a fade transition. You can see all of the different transition types inside of the CCTransition.h file, and swap in another one as shown below.


Step 6: Running The Project


If you try to build and run the application at this point, you will get an error. That’s because the AppDelegate is still trying to load the HelloWorldLayer that you deleted before. We need to modify the code and tell it to start with our TitleLayer upon application startup.


Open up AppDelegate.m and import the TitleLayer:


#import "TitleLayer.h"

Also, be sure to delete the import for the HelloWorldLayer. Next, navigate to around line 113 and change [HelloWorldLayer scene] to [TitleLayer scene].


[[CCDirector sharedDirector] runWithScene: [TitleLayer scene]];

Now, hit the Run button. If you pasted the code correctly, you should see something like this:



Landscape Screenshot

It appears that our game has been improperly oriented. This is an easy fix. By default, Cocos2D relies on the view controller that is displaying it to determine the proper orientation, and it’s currently set to landscape mode. Open up RootViewController.m and look in the “shouldAutorotateToInterfaceOrientation” method. Change the return statement of that method to this:


return ( UIInterfaceOrientationIsPortrait( interfaceOrientation ) );

This will simply tell the view controller and Cocos2D to only support portrait mode. Now, when you hit Run, the game will be properly oriented and will function as you might expect.



Title


Game



Conclusion


What we have now is the groundwork for our Cocos2D implementation of Centipede. It’s not much to look at right now, but know that this foundation is very important for our development going forward.




Next Time


In the next tutorial in this series, we will be setting up the interface elements including score, lives, and the sprout field.


Happy Coding!

iOS Framework: Introducing MKNetworkKit

How awesome would it be if a networking framework automatically took care of caching responses for you?

How awesome would it be if a networking framework automatically remembered your operations when your client is offline?

You favorite a tweet or mark a feed as read when you are offline, and the networking framework performs these operations when the device comes back online, all with no extra coding effort from you. Introducing MKNetworkKit.

What is MKNetworkKit?

MKNetworkKit is a networking framework written in Objective-C that is seamless, block based, ARC ready and easy to use.

MKNetworkKit is inspired by two other popular networking frameworks, ASIHTTPRequest and AFNetworking. Marrying the feature sets of both, MKNetworkKit throws in a bunch of new features. It also mandates that you write slightly more code than the other frameworks, in exchange for code clarity. With MKNetworkKit, it’s hard to write ugly networking code.

Features

Super light-weight

The complete kit is just 2 major classes and some category methods. This means, adopting MKNetworkKit should be super easy.

Single Shared Queue for your entire application.

Apps that depend heavily on Internet connectivity should optimize the number of concurrent network operations. Unfortunately, there is no networking framework that does this correctly. Let me give you an example of what can go wrong if you don’t optimize/control the number of concurrent network operations in your app.

Let’s assume that you are uploading a bunch of photos (think Color or Batch) to your server. Most mobile networks (3G) don’t allow more than two concurrent HTTP connections from a given IP address. That is, from your device, you cannot open more than two concurrent HTTP connections on a 3G network. EDGE is even worse: in most cases, you can’t open more than one connection. The limit is considerably higher (six) on traditional home broadband (Wifi). However, since your iDevice is not always connected to Wifi, you should be prepared for throttled/restricted network connectivity. In the common case, the iDevice is connected to a 3G network, which means you are restricted to uploading only two photos in parallel.

Now, it is not the slow upload speed that hurts. The real problem arises when you open a view that loads thumbnails of photos (say, on a different view) while these upload operations are running in the background. If you don’t properly control the queue size across the app, your thumbnail-loading operations will simply time out, which is not the right behavior. The right way to handle this is to prioritize the thumbnail-loading operations, or to wait until the uploads complete and then load the thumbnails. Either approach requires a single queue across the entire app. MKNetworkKit ensures this automatically by using a single shared queue for every instance of it. While MKNetworkKit is not itself a singleton, the shared queue is.
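Conceptually (this is a sketch of the idea, not MKNetworkKit’s actual source), it boils down to a single shared NSOperationQueue whose width tracks the current network type:

// One shared queue for the whole app; its width follows the network type.
NSOperationQueue *sharedQueue = [[NSOperationQueue alloc] init];
sharedQueue.maxConcurrentOperationCount = 2;  // 3G/EDGE/GPRS
// ...later, when Reachability reports a Wifi connection:
sharedQueue.maxConcurrentOperationCount = 6;  // Wifi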

Showing the Network Activity Indicator correctly

While many third-party classes “increment” and “decrement” a count of network calls to decide when to show the network activity indicator, MKNetworkKit builds on the single-shared-queue principle and shows the indicator automatically whenever an operation is running, by observing (KVO) the queue’s operationCount property. As a developer, you normally never have to worry about setting the network activity indicator manually again.

if (object == _sharedNetworkQueue && [keyPath isEqualToString:@"operationCount"]) {

    [UIApplication sharedApplication].networkActivityIndicatorVisible =
        ([_sharedNetworkQueue.operations count] > 0);
}

Auto queue sizing

Continuing the previous discussion, I mentioned that most mobile networks don’t allow more than two concurrent connections, so your queue size should be set to two when the current network is 3G. MKNetworkKit automatically handles this for you. When the network drops to 3G/EDGE/GPRS, it changes the number of concurrent operations to 2, and changes it back to 6 when the device connects to a Wifi network. With this technique in place, you will see a huge performance benefit when loading thumbnails (or multiple similar small requests) for a photo library from a remote server over 3G.

Auto caching

MKNetworkKit can automatically cache all your “GET” requests. When you make the same request again, MKNetworkKit calls your completion handler with the cached version of the response (if it’s available) almost immediately. It also makes a call to the remote server again, and after the server data is fetched, your completion handler is called a second time with the new response data. This means you don’t have to handle caching manually on your side. All you need to do is call one method:

[[MKNetworkEngine sharedEngine] useCache];

Optionally, you can override methods in your MKNetworkEngine subclass to customize your cache directory and the in-memory cache cost.
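A sketch of what such a subclass might look like; the selector names here (cacheDirectoryName, cacheMemoryCost) are taken from the MKNetworkEngine headers at the time of writing, so double-check them against your copy of the kit:

// Optional overrides in an MKNetworkEngine subclass to customize caching.
- (NSString *)cacheDirectoryName {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(
        NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    return [cachesDir stringByAppendingPathComponent:@"MyAppCache"]; // placeholder name
}

- (int)cacheMemoryCost {
    return 10; // number of responses to hold in the in-memory cache
}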

Operation freezing

With MKNetworkKit, you have the ability to freeze your network operations. When you freeze an operation and network connectivity is lost, it will be serialized automatically and performed once the device comes back online. Think of “drafts” in your twitter client.

When you post a tweet, mark that network call as freezable, and MKNetworkKit automatically takes care of freezing and restoring the request for you! The tweet gets sent later without you writing a single line of additional code. You can use this for other operations like favoriting a tweet, sharing a post from your Google Reader client, adding a link to Instapaper, and similar operations.

Performs exactly one operation for similar requests

When you load thumbnails (for a twitter stream, say), you might end up creating a new request for every avatar image. But in reality, you only need as many requests as there are unique URLs. With MKNetworkKit, every GET request you queue gets executed exactly once. (MKNetworkKit is also intelligent enough not to cache “POST” HTTP requests.)

Image Caching

MKNetworkKit can be seamlessly used for caching thumbnail images. By overriding a few methods, you can set how many images should be held in the in-memory cache and where in the Caches directory they should be saved. Overriding these methods is completely optional.

Performance

One word: SPEED. MKNetworkKit caching is seamless. It works like NSCache, except that when there is a memory warning, the in-memory cache is written to the Caches directory.

Full support for Objective-C ARC

You normally choose a new networking framework for new projects; MKNetworkKit is not meant to replace your existing framework (though you can, it’s quite a tedious job). On new projects you will almost always want to enable ARC, and as of this writing, MKNetworkKit is probably the only networking framework that is fully ARC ready. ARC-based memory management is usually an order of magnitude faster than non-ARC based memory management code.

How to use

OK, enough self-praise. Let us now see how to use the framework.

Adding the MKNetworkKit

  1. Drag the MKNetworkKit directory to your project.
  2. Add CFNetwork.framework and SystemConfiguration.framework.
  3. Include MKNetworkKit.h in your PCH file.
  4. Under your “Compile Sources” section, add -fno-objc-arc to the Reachability.m file.

You are done. Just 5 core files and there you go. A powerful networking kit.

Classes in MKNetworkKit

  1. MKNetworkOperation
  2. MKNetworkEngine
  3. Miscellaneous helper classes (Apple’s Reachability) and categories

I believe in simplicity. Apple has done the heavy lifting of writing the actual networking code. What a third-party networking framework should provide is elegant queue-based networking with optional caching. I believe that any third-party framework should have under 10 classes (whether it’s networking, a UIKit replacement, or whatever); more than that is bloat. The Three20 library is an example of bloat, and so is ShareKit. Maybe they’re good, but they’re still huge and bloated. ASIHttpRequest and AFNetworking are lean and lightweight, unlike RestKit. JSONKit is lightweight, unlike TouchJSON (or any of the TouchCode libraries). Maybe it’s just me, but I just can’t take it when more than a third of the source code lines in my app come from a third-party library.

The problem with a huge framework is the difficulty of understanding its internal workings and of customizing it to suit your needs (in case you need to). My frameworks (like MKStoreKit, for adding In App Purchases to your app) have always been super easy to use, and I believe MKNetworkKit is the same. To use MKNetworkKit, all you need to know are the methods exposed by the two classes MKNetworkOperation and MKNetworkEngine. MKNetworkOperation is similar to the ASIHttpRequest class: it is a subclass of NSOperation, and it wraps your request and response. You create an MKNetworkOperation for every network operation you need in your application.

MKNetworkEngine is a pseudo-singleton class that manages the network queue in your application. It’s a pseudo-singleton in the sense that, for simple requests, you use MKNetworkEngine methods directly, while for more powerful customization you subclass it; either way, every single request in any of its subclasses goes through one and only one shared queue. Every MKNetworkEngine subclass has its own Reachability object that notifies it of server reachability changes. You should consider creating a subclass of MKNetworkEngine for every unique REST server you use.

You can retain instances of your MKNetworkEngine in your application delegate, just like a Core Data managedObjectContext. When you use MKNetworkKit, you create MKNetworkEngine subclasses to logically group your network calls: all Yahoo-related methods go in one class, and all Facebook-related methods in another. We will now look at several examples of using this framework.

Example 1:

Let’s now create a “YahooEngine” that pulls currency exchange rates from Yahoo finance.

Step 1: Create a YahooEngine class as a subclass of MKNetworkEngine. The MKNetworkEngine init method takes a hostname and custom headers (if any). The custom headers are optional and can be nil. If you are writing your own REST server (unlike this case), you might consider adding the client app version and other miscellaneous data like a client identifier.

NSMutableDictionary *headerFields = [NSMutableDictionary dictionary];
[headerFields setValue:@"iOS" forKey:@"x-client-identifier"];

self.engine = [[YahooEngine alloc] initWithHostName:@"download.finance.yahoo.com"
                                 customHeaderFields:headerFields];

Note that while Yahoo doesn’t mandate sending x-client-identifier in the header, the sample code shown above sends it just to illustrate this feature.

Since the complete code is ARC, it’s up to you as a developer to own (strong reference) the Engine instance.

When you create an MKNetworkEngine subclass, the Reachability implementation is done automatically for you. So when your server goes down, or the hostname becomes unreachable due to some unforeseen circumstance, your requests will automatically be queued/frozen. For more information, read the section on freezing operations later in the page.

Step 2: Designing the Engine class (Separation of concerns)

Let’s now start writing the methods in YahooEngine to fetch exchange rates. The engine methods will be called from your view controller. A good design practice is to ensure that your engine class doesn’t expose URLs/HTTP headers to the calling class. Your view should not “know” about URL endpoints or the parameters needed. This means the parameters to the methods in your YahooEngine should be the currencies and the number of currency units. The return values could be a double that is the exchange rate factor and perhaps the timestamp of when it was fetched. Since operations are not performed synchronously, you should return these values through blocks. An example of this would be:

-(MKNetworkOperation*) currencyRateFor:(NSString*) sourceCurrency
                            inCurrency:(NSString*) targetCurrency
                          onCompletion:(CurrencyResponseBlock) completion
                               onError:(ErrorBlock) error;

MKNetworkEngine, the parent class defines three types of block methods as below.

typedef void (^ProgressBlock)(double progress);
typedef void (^ResponseBlock)(MKNetworkOperation* operation);
typedef void (^ErrorBlock)(NSError* error);

In our YahooEngine, we are using a new kind of block, CurrencyResponseBlock that returns the exchange rate. The definition looks like this.

typedef void (^CurrencyResponseBlock)(double rate);

In any normal application, you would define your own block types, similar to this CurrencyResponseBlock, for sending data back to your view controllers.

Step 3: Processing the data

Data processing – converting the data you fetch from your server, whether it’s JSON, XML, or binary plists – should be done in your engine. Again, relieve your controllers of this task. Your engine should send back data only as proper model objects, or arrays of model objects in the case of lists. Convert your JSON/XML to models in the engine. To ensure proper separation of concerns, your view controller should not “know” about the “keys” for accessing individual elements in your JSON.
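For instance, here is a sketch of engine-side parsing for the Yahoo query used in this example, assuming the f=sl1d1t1 request returns CSV along the lines of "USDINR=X",52.10,"11/29/2011","3:15pm"; the completion and error names are the block parameters from the method signature declared above, and the view controller only ever sees the final double:

[op onCompletion:^(MKNetworkOperation *completedOperation) {
    // Parse the CSV inside the engine; callers never see the raw format.
    NSArray *fields = [[completedOperation responseString]
                       componentsSeparatedByString:@","];
    double rate = ([fields count] > 1) ? [[fields objectAtIndex:1] doubleValue] : 0.0;
    completion(rate); // the CurrencyResponseBlock passed into the engine method
} onError:^(NSError *err) {
    error(err);
}];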


That concludes the design of your engine. Most networking frameworks don’t force you to follow this separation of concerns. We do, because we care for you :)


Step 4: Method implementation

We will now discuss the implementation details of the method that calculates your currency exchange.


Getting currency information from Yahoo is as simple as making a GET request.

I wrote a macro to format this URL for a given currency pair.

#define YAHOO_URL(__C1__, __C2__) [NSString stringWithFormat:@"d/quotes.csv?e=.csv&f=sl1d1t1&s=%@%@=X", __C1__, __C2__]

Methods you write in your engine class should do the following in order.

  1. Prepare your URL from the parameters.
  2. Create a MKNetworkOperation object for the request.
  3. Set your method parameters.
  4. Add completion and error handlers to the operation (The completion handler is the place to process your responses and convert them to Models.)
  5. Optionally, add progress handlers to the operation. (Or do this on the view controller)
  6. If your operation is a file download, set a download stream (normally a file) on it. This is again optional.
  7. When the operation completes, process the result and invoke the block method to return this data to the calling method.

This is illustrated in the following code

MKNetworkOperation *op = [self operationWithPath:YAHOO_URL(sourceCurrency, targetCurrency)
                                          params:nil
                                      httpMethod:@"GET"];

[op onCompletion:^(MKNetworkOperation *completedOperation) {

    DLog(@"%@", [completedOperation responseString]);

    // do your processing here
    completionBlock(5.0f);

} onError:^(NSError* error) {

    errorBlock(error);
}];

[self enqueueOperation:op];

return op;


The above code formats the URL and creates an MKNetworkOperation. After setting the completion and error handlers, it queues the operation by calling the superclass’s enqueueOperation method and returns a reference to it. Your view controller should own this operation and cancel it when the view is popped off the view controller hierarchy. So if you call the engine method in, say, viewDidAppear, cancel the operation in viewWillDisappear. Canceling an operation frees up the queue for operations in the subsequent view. (Remember, only two operations can run in parallel on a mobile network; canceling operations that are no longer needed goes a long way toward ensuring the performance and speed of your app.)
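A sketch of that ownership pattern; currencyOperation and engine here are hypothetical strong properties on the view controller, and the engine method is the one declared earlier:

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    self.currencyOperation = [self.engine currencyRateFor:@"USD"
                                               inCurrency:@"INR"
                                             onCompletion:^(double rate) {
                                                 // update the UI with the rate
                                             } onError:^(NSError *error) {
                                                 // show an error message
                                             }];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.currencyOperation cancel]; // frees a slot in the shared queue
}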


Your view controller can also (optionally) add progress handlers and update the user interface. This is illustrated below.



[self.uploadOperation onUploadProgressChanged:^(double progress) {

    DLog(@"%.2f", progress*100.0);
    self.uploadProgressBar.progress = progress;
}];


MKNetworkEngine also has convenience methods to create an operation with just a path. So the first line of the code above can also be written as



MKNetworkOperation *op = [self operationWithPath:YAHOO_URL(sourceCurrency, targetCurrency)];


Do note here that request URLs are automatically prefixed with the hostname you provided while initializing your engine class.


Creating a POST, DELETE, or PUT request is as easy as changing the httpMethod parameter. MKNetworkEngine has more convenience methods like this; read the header file for more.


Example 2:


Uploading an image to a server (TwitPic for instance).

Now let us go through an example of how to upload an image to a server. Uploading an image obviously requires the request to be encoded as multi-part form data. MKNetworkKit follows a pattern similar to ASIHttpRequest.

You call the addFile:forKey: method on MKNetworkOperation to “attach” a file as multi-part form data to your request. It’s that easy.

MKNetworkOperation also has a convenience method to add an image from an NSData pointer. That is, you can call the addData:forKey: method to upload an image to your server directly from an NSData pointer (think of uploading a picture taken with the camera directly).
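For instance, a hedged sketch of a TwitPic-style upload from inside an engine subclass – the path, file location, and the "media" form key are placeholders, not TwitPic’s actual API:

// Attach a file from disk as multi-part form data.
NSString *imagePath = @"/path/to/photo.png"; // placeholder
MKNetworkOperation *op = [self operationWithPath:@"api/upload"
                                          params:nil
                                      httpMethod:@"POST"];
[op addFile:imagePath forKey:@"media"];
[self enqueueOperation:op];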


Example 3:


Downloading files to a local directory (Caching)

Downloading a file from a remote server and saving it to a location on the user’s iPhone is super easy with MKNetworkKit.

Just set a download stream on the MKNetworkOperation and you are set.



[operation setDownloadStream:[NSOutputStream
outputStreamToFileAtPath:@"/Users/mugunth/Desktop/DownloadedFile.pdf"
append:YES]];


You can set multiple output streams on a single operation to save the same file to multiple locations (say, one to your cache directory and one to your working directory).


Example 4:


Image Thumbnail caching

For downloading images, you will probably need to provide an absolute URL rather than a path. MKNetworkEngine has a convenience method for this: just call operationWithURLString:params:httpMethod: to create a network operation with an absolute URL.

MKNetworkEngine is intelligent: it coalesces multiple GET calls to the same URL into one operation and notifies all the blocks when that one operation completes. This drastically improves the speed of fetching image URLs when populating thumbnails.


Subclass MKNetworkEngine and override the image cache directory and cache cost. If you don’t want to customize these two, you can call MKNetworkEngine methods directly to download images for you; I would actually recommend that.


Caching operations


MKNetworkKit can cache all your requests; you just need to turn on caching for your engine, as shown earlier. When a GET request is performed, if the response was previously cached, your completion handler is called with the cached response almost immediately. To know whether the response came from the cache, use the isCachedResponse method. This is illustrated below.



[op onCompletion:^(MKNetworkOperation *completedOperation) {

    if([completedOperation isCachedResponse]) {
        DLog(@"Data from cache");
    } else {
        DLog(@"Data from server");
    }

    DLog(@"%@", [completedOperation responseString]);
} onError:^(NSError* error) {

    errorBlock(error);
}];


Freezing operations


Arguably, the most interesting feature of MKNetworkKit is its built-in ability to freeze operations. All you need to do is mark your operation as freezable. Almost zero effort!



[op setFreezable:YES];


Freezable operations are automatically serialized when the network goes down and executed when connectivity is restored. Think of having the ability to favorite a tweet while you are offline, with the operation performed when you come back online later.

Frozen operations are also persisted to disk when the app enters the background. They will be performed automatically when the app resumes later.


Convenience methods in MKNetworkOperation


MKNetworkOperation exposes convenience methods like the following to get your response data in the format you need:


  1. responseData
  2. responseString
  3. responseJSON (Only on iOS 5)
  4. responseImage
  5. responseXML
  6. error

They come in handy when accessing the response after your network operation completes. When the format is wrong, these methods return nil. For example, trying to access responseImage when the actual response is an HTML page will return nil. The only method guaranteed to return the correct, expected response is responseData; use the other methods only when you are sure of the response type.
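For example, a small sketch of using the typed accessors defensively when you expect an image:

[op onCompletion:^(MKNetworkOperation *completedOperation) {
    UIImage *image = [completedOperation responseImage];
    if (!image) {
        // Not image data (perhaps an HTML error page). responseData is the
        // only accessor guaranteed to be non-nil here.
        DLog(@"Unexpected response: %@", [completedOperation responseString]);
    }
} onError:^(NSError *err) {
    DLog(@"%@", err);
}];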


Convenience macros


The DLog and ALog macros were stolen unabashedly from Stack Overflow, and I couldn’t find the source again. If you wrote them, let me know.


A note on GCD


I purposefully didn’t use GCD because network operations need to be stopped and re-prioritized at will. GCD, while more efficient than NSOperationQueue, cannot do this. I would recommend not using GCD-based queues for your network operations.


Documentation


The header files are commented, and I’m trying out headerdoc from Apple. Meanwhile, you can use and play around with (read: fork) the code.


Source Code


The source code for MKNetworkKit along with a demo application is available on Github.

MKNetworkKit on Github


Feature requests


Please don’t email me feature requests. The best way is to create an issue on Github.


Licensing


MKNetworkKit is licensed under MIT License


All of my source code can be used free of charge in your app, provided you add the copyright notices to your app. A little mention on one of your most obscure “about” pages will do.


Attribution free licensing available upon request. Contact me at mknetworkkit@mk.sg




Mugunth