Note from Ray: This is the fourteenth iOS 5 tutorial in the iOS 5 Feast! This tutorial is a free preview chapter from our new book iOS 5 By Tutorials. Enjoy!
This is a blog post by iOS Tutorial Team member Jacob Gundersen, an indie game developer and co-founder of Third Rail Games. Check out his latest app – Factor Samurai!
Core Image is a powerful framework that lets you easily apply filters to images, such as modifying the vibrance, hue, or exposure. It uses the GPU (or optionally the CPU) to process image data and is very fast. Fast enough to do real-time processing of video frames!
Core Image filters can be stacked together to apply multiple effects to an image or video frame at once. Stacked filters are efficient because Core Image collapses them into a single modified filter that is applied to the image, instead of processing the image through each filter one at a time.
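To make the chaining idea concrete, here's a minimal sketch (using CISepiaTone and CIHueAdjust; the imageURL variable is assumed to point at an image file). The output CIImage of one filter simply becomes the input of the next, and no pixels are actually processed until the final image is rendered by a CIContext:

```objc
// imageURL is assumed to be an NSURL pointing at an image file.
CIImage *source = [CIImage imageWithContentsOfURL:imageURL];

// First filter: sepia tone.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:source forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputIntensity"];

// Second filter: feed the sepia output straight into a hue adjustment.
CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
[hue setValue:[sepia outputImage] forKey:kCIInputImageKey];
[hue setValue:[NSNumber numberWithFloat:1.57] forKey:@"inputAngle"];

// Still no processing has happened: chained is just a "recipe".
CIImage *chained = [hue outputImage];
```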
Each filter has its own parameters and can be queried in code for information about the filter, its purpose, and its input parameters. The system can also be queried to find out which filters are available. At this time, only a subset of the Core Image filters available on the Mac are available on iOS. However, as more become available, the same API can be used to discover the new filters’ attributes.
In this tutorial, you will get hands-on experience playing around with Core Image. We’ll apply a few different filters, and you’ll see how easy it is to apply cool effects to images in real time!
Core Image Overview
Before we get started, let’s discuss some of the most important classes in the Core Image framework:
- CIContext. All of the processing of a Core Image filter is done in a CIContext. This is somewhat similar to a Core Graphics or OpenGL context.
- CIImage. This class holds the image data. It can be created from a UIImage, from an image file, or from pixel data.
- CIFilter. The filter class has a dictionary that defines the attributes of the particular filter that it represents. Examples of filters are vibrance filters, color inversion filters, cropping filters, and much more.
We’ll be using each of these classes as we create our project.
Getting Started
Open up Xcode and create a new project with the iOS\Application\Single View Application template. Enter CITest for the Product Name, select iPhone for the device family, and make sure that Use Automatic Reference Counting is checked (but leave the other checkboxes unchecked).
First things first, let’s add the Core Image framework. On the Mac this is part of the QuartzCore framework, but on iOS it’s a standalone framework. Go to the project container in the file view on the left hand side. Choose the Build Phases tab, expand the Link Binary With Libraries group and press the +. Navigate to the CoreImage framework and double-click on it.
From the resources for this tutorial, add image.png to your project. I like to put all the images and sound files into a Resources group. Your project won’t have that grouping. So, if you want to (it’s not necessary), control-click the image.png file once it’s added and choose New Group from Selection. Then click on the folder name to change it to Resources.
Now we’re going to hide the status bar. We need to add a key to our plist, add a line of code, and change a setting in the xib to accomplish this.
First the plist. Open SupportingFiles\CITest-Info.plist, control-click anywhere in the white space, and select Add Row. Set the key to Status bar is initially hidden and set the value to YES.
Next the .xib file. Open ViewController.xib and highlight the view. Bring up the Attributes Inspector and change the Status Bar value to None. With the status bar removed, we can reclaim the 20 pixels it normally takes up. In the Size Inspector, change the height of the view from 460 to 480.
While we’re in the .xib file, let’s add a UIImageView to our view object. Drag and drop one from the objects panel. The position and dimensions should roughly match the following image:
Also, open the Assistant Editor, make sure it’s displaying ViewController.h, and control-drag from the UIImageView to below the @interface. Set the Connection to Outlet, name it imgV, and click Connect.
Finally, we’ll add some code to AppDelegate.m. In the application:didFinishLaunchingWithOptions: method, add the following line before the return YES line:
```objc
[[UIApplication sharedApplication] setStatusBarHidden:YES];
```
Run your project, and you should see a plain gray screen with no status bar. The initial setup is complete – now onto Core Image!
Basic Image Filtering
We’re going to get started by simply running our image through a CIFilter and displaying it on the screen.
Every time we want to apply a CIFilter to an image, we need to do four things:
- Create a CIImage object. CIImage has initialization methods such as imageWithContentsOfURL:, imageWithData:, imageWithCVPixelBuffer:, and imageWithBitmapData:bytesPerRow:size:format:colorSpace:. You’ll most likely be working with imageWithContentsOfURL: most of the time.
- Create a CIContext. A CIContext can be CPU or GPU based.
- Create a CIFilter. When you create the filter, you configure a number of properties on it that depend on the filter you’re using.
- Get the filter output. The filter gives you an output image as a CIImage – you can convert this to a UIImage using the CIContext, as you’ll see below.
Let’s see how this works. Add the following code to ViewController.m inside viewDidLoad:
```objc
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"];
NSURL *fileNameAndPath = [NSURL fileURLWithPath:filePath];

CIImage *beginImage = [CIImage imageWithContentsOfURL:fileNameAndPath];
CIContext *context = [CIContext contextWithOptions:nil];

CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                              keysAndValues:kCIInputImageKey, beginImage,
                                            @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
CIImage *outputImage = [filter outputImage];

CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
[imgV setImage:newImg];
CGImageRelease(cgimg);
```
The first two lines create an NSURL object that holds the path to our image file.
Next we create our CIImage with the imageWithContentsOfURL: method and create the CIContext. The CIContext constructor takes an NSDictionary that specifies options including the color format and whether the context should run on the CPU or GPU. For this app the default values are fine, so we pass in nil for that argument.
Next we’ll create our CIFilter object. A CIFilter constructor takes the name of the filter, and a dictionary that specifies the keys and values for that filter. Each filter will have its own unique keys and set of valid values.
The CISepiaTone filter takes only two values: the kCIInputImageKey (a CIImage) and the @"inputIntensity", a float value between 0 and 1, wrapped in an NSNumber. Here we give that value 0.8. Most filters have default values that will be used if no values are supplied. One exception is the input CIImage, which must be provided, as there is no default.
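You don't have to take the 0-to-1 range on faith. As a quick sketch, you can ask the filter for its attributes dictionary and inspect the entry for the inputIntensity key, which reports the default and the slider range:

```objc
// Query CISepiaTone's attributes to confirm the inputIntensity range.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
NSDictionary *intensityInfo = [[sepia attributes] objectForKey:@"inputIntensity"];

NSLog(@"default: %@", [intensityInfo objectForKey:kCIAttributeDefault]);
NSLog(@"min: %@, max: %@",
      [intensityInfo objectForKey:kCIAttributeSliderMin],
      [intensityInfo objectForKey:kCIAttributeSliderMax]);
```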
Getting a CIImage back out of a filter is easy: just use the outputImage property.
Once we have an output CIImage, we need to convert it into a CGImage (which we can then convert to a UIImage or draw to the screen directly). This is where the context comes in. Calling createCGImage:fromRect: on the context with the supplied CIImage will produce a CGImageRef. This can then be used to create a UIImage by calling the imageWithCGImage: constructor.
Once we’ve converted it to a UIImage, we just display it in the image view we added earlier.
Compile and run the project, and you’ll see our image filtered by the sepia tone filter. Congratulations, you have successfully used CIImage and CIFilter!
Changing Filter Values
This is great, but this is just the beginning of what you can do with Core Image filters. Let’s add a slider and set it up so we can adjust the image settings in real time.
Open up ViewController.xib and drag a slider in below the image view. Make sure the Assistant Editor is visible and displaying ViewController.h, then control-drag from the slider down below the @interface. Set the Connection to Action, the name to changeValue, make sure that the Event is set to Value Changed, and click Connect.
While you’re at it let’s connect the slider to an outlet as well. Again control-drag from the slider down below the @interface, but this time set the Connection to Outlet, the name to amountSlider, and click Connect.
The screen layout should look sort of like this:
We need to rerun parts of the process every time we move the slider. However, we don’t want to redo the whole process, as that would be inefficient and slow. We’ll need to change a few things in our class so that we hold on to some of the objects we create in our viewDidLoad method.
The biggest thing we want to do is reuse the CIContext whenever we need it. If we recreate it each time, our program will run very slowly. The other things we can hold onto are the CIFilter and the CIImage that holds our beginning image. We’ll need a new CIImage for every output, but the image we start with will stay constant.
We need to add some instance variables to accomplish this task.
Add the following three instance variables to your private @implementation in ViewController.m:
```objc
@implementation ViewController {
    CIContext *context;
    CIFilter *filter;
    CIImage *beginImage;
}
```
Also, change the variables in your viewDidLoad method so they use the instance variables instead of declaring new local variables:
```objc
beginImage = [CIImage imageWithContentsOfURL:fileNameAndPath];
context = [CIContext contextWithOptions:nil];
filter = [CIFilter filterWithName:@"CISepiaTone"
                    keysAndValues:kCIInputImageKey, beginImage,
                                  @"inputIntensity", [NSNumber numberWithFloat:0.8], nil];
```
Now we’ll implement the changeValue method. What we’ll be doing in this method is altering the value of the @”inputIntensity” key in our CIFilter dictionary. Once we’ve altered this value we’ll need to repeat a few steps:
- Get the output CIImage from the CIFilter.
- Convert the CIImage to a CGImageRef.
- Convert the CGImageRef to a UIImage, and display it in the image view.
So replace the changeValue method with the following:
```objc
- (IBAction)changeValue:(UISlider *)sender {
    float slideValue = [sender value];
    [filter setValue:[NSNumber numberWithFloat:slideValue]
              forKey:@"inputIntensity"];

    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage
                                     fromRect:[outputImage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    [imgV setImage:newImg];
    CGImageRelease(cgimg);
}
```
You’ll notice that we’ve changed the variable type from (id)sender to (UISlider *)sender in the method definition. We know we’ll only be using this method to retrieve values from our UISlider, so we can go ahead and make this change. If we’d left it as (id), we’d need to cast it to a UISlider or the next line would throw an error. Make sure that the method declaration in the header file matches the changes we’ve made here.
We retrieve the float value from the slider. Our slider is set to the default values: min 0, max 1, default 0.5. These happen to be the right values for this CIFilter, how convenient!
The CIFilter has methods that will allow us to set the values for the different keys in its dictionary. Here, we’re just setting the @”inputIntensity” to an NSNumber object with a float value of whatever we get from our slider.
The rest of the code should look familiar, as it follows the same logic as our viewDidLoad method. We’re going to be using this code over and over again. From now on, we’ll use the changeValue method to render the output of a CIFilter to our UIImageView.
Compile and run, and you should have a functioning live slider that will alter the sepia value for our image in real time!
Getting Photos from the Photo Album
Now that we can change the values of the filter on the fly, things are starting to get really interesting! But what if we don’t care for this image of flowers? Let’s set up a UIImagePickerController so we can pull pictures out of the photo album and into our program so we can play with them.
We need to create a button that will bring up the photo album view, so open up ViewController.xib and drag in a button to the right of the slider.
Then make sure the Assistant Editor is visible and displaying ViewController.h, then control-drag from the button down below the @interface. Set the Connection to Action, the name to loadPhoto, make sure that the Event is set to Touch Up Inside, and click Connect.
Next switch to ViewController.m, and implement the loadPhoto method as follows:
```objc
- (IBAction)loadPhoto:(id)sender {
    UIImagePickerController *pickerC = [[UIImagePickerController alloc] init];
    pickerC.delegate = self;
    [self presentModalViewController:pickerC animated:YES];
}
```
The first line of code instantiates a new UIImagePickerController. We then set the delegate of the image picker to self (our ViewController).
We get a warning here. We need to set up our ViewController as a UIImagePickerControllerDelegate and UINavigationControllerDelegate, and then implement the methods in those delegate protocols. In ViewController.h, change the interface line like so:
```objc
@interface ViewController : UIViewController <UIImagePickerControllerDelegate,
                                              UINavigationControllerDelegate>
```
Now implement the following two methods:
```objc
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissModalViewControllerAnimated:YES];
    NSLog(@"%@", info);
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self dismissModalViewControllerAnimated:YES];
}
```
In both cases, we dismiss the UIImagePickerController. That’s the delegate’s job; if we don’t do it there, we just stare at the image picker forever!
The first method isn’t completed yet – it’s just a placeholder to log out some information about the chosen image. The cancel method just gets rid of the picker controller, and is fine as-is.
Compile and run, then tap the button, and it will bring up the image picker with the photos in your photo album. If you are running this in the simulator, you probably won’t have any photos. On the simulator, or on a device without a camera, you can use Safari to save images to your photo album: open Safari, find an image, tap and hold, and you’ll get a dialog to save that image. Next time you run our app, you’ll have it!
Here’s what you should see in the console after you’ve selected an image (something like this):
Note that it has an entry in the dictionary for the “original image” selected by the user. This is what we want to pull out and filter!
Now that we’ve got a way to select an image, how do we set our CIImage beginImage to use that image?
Simple, just change the delegate method to look like this:
```objc
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [self dismissModalViewControllerAnimated:YES];

    UIImage *gotImage = [info objectForKey:UIImagePickerControllerOriginalImage];
    beginImage = [CIImage imageWithCGImage:gotImage.CGImage];
    [filter setValue:beginImage forKey:kCIInputImageKey];
    [self changeValue:amountSlider];
}
```
We need to create a new CIImage from our selected photo. We can get the UIImage representation of the photo by finding it in the dictionary of values, under the UIImagePickerControllerOriginalImage key constant. Note it’s better to use a constant rather than a hardcoded string, because Apple could change the name of the key in the future. For a full list of key constants, see the UIImagePickerController Delegate Protocol Reference.
We need to convert this into a CIImage, but there is no method that converts a UIImage directly into a CIImage. However, there is an imageWithCGImage: method on CIImage, and we can get a CGImage from our UIImage via its CGImage property, so that’s exactly what we do!
We then set the key in the filter dictionary so that the input image is the new CIImage we just created.
The last line may seem odd. Remember how I pointed out that the code in the changeValue ran the filter with the latest value and updated the image view with the result?
Well, we need to do that again, so we can just call the changeValue method. Even though the slider value hasn’t changed, we can still use that method’s code to get the job done. We could break that code out into its own method, and if the app were more complex, we would, to avoid confusion. But in this case, calling changeValue serves our purpose. We pass in amountSlider so that it has the correct value to use.
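For reference, breaking that shared code out might look something like the sketch below (a hypothetical renderFilteredImage helper that isn't part of this tutorial's code; it uses the filter, context, and imgV members we already have). Both viewDidLoad and changeValue could then call it:

```objc
// Hypothetical helper: run the current filter and display the result.
- (void)renderFilteredImage {
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimg = [context createCGImage:outputImage
                                     fromRect:[outputImage extent]];
    [imgV setImage:[UIImage imageWithCGImage:cgimg]];
    CGImageRelease(cgimg);
}
```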
Compile and run, and now you’ll be able to update the image from your photo album!
What if we create the perfect sepia image – how do we hold on to it? We could take a screenshot, but we can do better than that. Let’s learn how to save our photos back to the photo album.
Saving to Photo Album
One thing you should know is that when you save a photo to the album, the save is a process that can continue even after you leave the app.
This can be a problem, because the GPU stops whatever it’s doing when we switch from one app to another. If the photo isn’t finished saving, it won’t be there when we go looking for it later!
The solution to this is to use a CPU-based CIContext for rendering. The default is the GPU, so we need to make a few changes to have the CIContext use the CPU. Another advantage of the CPU is that it is not limited by a maximum texture size the way the GPU is (use the context’s -inputImageMaximumSize and -outputImageMaximumSize methods to find the limits on the current device). The GPU gives better performance, but for saving a single photo the CPU is often the better choice; for video, the GPU is the way to go.
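For example, to check those texture limits on the current device, a sketch like this would do (run it on a real device, against a GPU-backed context):

```objc
// Create a default (GPU-backed) context and log its texture limits.
CIContext *gpuContext = [CIContext contextWithOptions:nil];
CGSize maxInput = [gpuContext inputImageMaximumSize];
CGSize maxOutput = [gpuContext outputImageMaximumSize];
NSLog(@"max input: %@, max output: %@",
      NSStringFromCGSize(maxInput), NSStringFromCGSize(maxOutput));
```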
To use the CPU in our context, we need to add options to our CIContext initialization. Remember that we passed in nil? Well, we’re going to change that:
```objc
context = [CIContext contextWithOptions:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                forKey:kCIContextUseSoftwareRenderer]];
```
Congratulations, now you’re using the CPU to perform the CI calculations. Note however that software rendering does not work on the simulator, so you’ll need to test on a device from now on.
Let’s add a new button to our app that will let us save the photo we are currently modifying with all the changes we’ve made. Open ViewController.xib, add a new button, and connect it to a new savePhoto method, like you did last time.
Then switch to ViewController.m, and implement the savePhoto method as follows:
```objc
- (IBAction)savePhoto:(id)sender {
    CIImage *saveToSave = [filter outputImage];
    CGImageRef cgImg = [context createCGImage:saveToSave
                                     fromRect:[saveToSave extent]];

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeImageToSavedPhotosAlbum:cgImg
                                 metadata:[saveToSave properties]
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        CGImageRelease(cgImg);
    }];
}
```
The AssetsLibrary framework needs to be added to get rid of the errors we are getting on this code. Go back to the project container, choose the Build Phases tab, expand the Link Binary With Libraries group and click the + button. Find the AssetsLibrary framework and add it.
Then add the following #import statement to the top of ViewController.m:
```objc
#import <AssetsLibrary/AssetsLibrary.h>
```
In this code block we:
- Get the CIImage output from the filter.
- Generate the CGImageRef.
- Save the CGImageRef to the photo library.
- Release the CGImage. That last step happens in a callback block so that it only fires after we’re done using it.
Compile and run the app (remember, on an actual device since you’re using software rendering), and now you can save that “perfect image” to your photo library so it’s preserved forever!
What About Image Metadata?
Let’s talk about image metadata for a moment. Image files taken on mobile phones have a variety of data associated with them, such as GPS coordinates, image format, and orientation. When we save our file to our photo library we want to preserve these attributes.
The metadata associated with a CIImage can be accessed via its properties method. In the previous code section, we pass the CIImage’s metadata into the metadata parameter of the library’s save method, so we’re good to go!
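If you're curious what that metadata actually contains, a one-line sketch will show you (assuming beginImage is the instance variable we created earlier):

```objc
// Dump the image's metadata dictionary (EXIF data, orientation, etc.) to the console.
NSLog(@"%@", [beginImage properties]);
```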
What Other Filters are Available?
The CIFilter API has 130 filters on Mac OS X, plus the ability to create custom filters. On iOS there are around 48, and more are being added over time. Currently there isn’t a way to build custom filters on iOS, but it’s possible that support will come.
In order to find out what filters are available, we can use the [CIFilter filterNamesInCategory:kCICategoryBuiltIn] method. This method will return an array of filter names. In addition, each filter has an attributes method that will return a dictionary containing information about that filter. This information includes the filter’s name, the kinds of inputs the filter takes, the default and acceptable values for the inputs, and the filter’s category.
Let’s put together a method for our class that will print all the information for all the currently available filters to the log. Add this method right above viewDidLoad:
```objc
- (void)logAllFilters {
    NSArray *properties = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
    NSLog(@"%@", properties);

    for (NSString *filterName in properties) {
        CIFilter *fltr = [CIFilter filterWithName:filterName];
        NSLog(@"%@", [fltr attributes]);
    }
}
```
This method simply gets the array of filter names from the filterNamesInCategory: method. It prints the list of names first. Then, for each name in the list, it creates that filter and logs its attributes dictionary.
Then call this method at the end of viewDidLoad:
```objc
[self logAllFilters];
```
Where To Go From Here?
Here is an example project with all of the code from the above tutorial.
That about covers the basics of using Core Image filters. It’s a pretty handy technique, and you should be able to use it to apply some neat filters to images quite quickly!
If you want to learn more about Core Image, check out our new book iOS 5 By Tutorials, where we have an additional chapter where we cover compositing filters, masking images, additional filter examples, and even face detection!
If you have any questions or comments on this tutorial or Core Image in general, please join the forum discussion below!
Beginning Core Image in iOS 5 Tutorial is a post from: Ray Wenderlich