
Thursday, November 10, 2011

iOS Lesson 02 - First Triangle (NeHe)

http://nehe.gamedev.net/tutorial/ios_lesson_02__first_triangle/50001/

After the last lesson, which laid out the basic app structure, we now start with the actual drawing in Lesson 02. Here's a screenshot of what we'll do:

Screenshot of a colored triangle in the iPhone Simulator

There are several steps performed in drawing geometry to the screen with OpenGL.
First we need to tell OpenGL what geometry to draw. This is usually a bunch of triangles, specified by 3 vertices each. OpenGL then invokes a vertex shader for every single vertex, which is used to transform the geometry according to rotations and translations, or for simple lighting. The resulting surfaces are then rasterized, which means OpenGL figures out which pixels are covered by each surface. For every covered pixel OpenGL now invokes a fragment shader (with multisampling this can happen more than once per pixel, hence the general term is fragment rather than pixel). The task of the fragment shader is to determine the fragment's color by applying colors, lighting, or mapping images onto the geometry.

Thus this lesson covers two things: how to draw a triangle and what a simple shader looks like.

It's probably best if you download the source code and have a look at it while we go through.

Uploading geometry

Geometry can be specified as several kinds of geometric primitive: GL_POINTS, GL_LINES or GL_TRIANGLES, where lines and triangles have several variations for drawing a chain or strip of them. Nevertheless, each of these primitives is made up of a set of vertices. A vertex is a point in three dimensional space with a coordinate system where x points to the right, y points upward, and z points out of the display toward the viewer's eye.

We will talk about that in much more depth when we come to the translations, rotations and projections lesson. For now it is important that we need 3 floats for the position x, y, and z, plus a 4th component w to facilitate linear transformations (this is due to matrix multiplications).

At the end of the vertex shader every vertex on the screen has to be within a cube ranging from -1.0 to 1.0 in all directions x,y, and z. We don't bother doing complex transformations yet and so we choose our positions to match these criteria.

We just store the vertices as a consecutive array of float values in our Lesson02::init method like this:

//create a triangle
std::vector<float> geometryData;
 
//4 floats define one vertex (x, y, z and w), first one is lower left
geometryData.push_back(-0.5); geometryData.push_back(-0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);
//we go counter clockwise, so lower right vertex next
geometryData.push_back( 0.5); geometryData.push_back(-0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);
//top vertex is last
geometryData.push_back( 0.0); geometryData.push_back( 0.5); geometryData.push_back(0.0); geometryData.push_back(1.0);
As you perhaps noticed, we are specifying the vertices in counter clockwise order. This helps OpenGL discard geometry that we cannot see anyway: by defining that only triangles with counter clockwise vertex order are facing towards us, the surface becomes one sided. Imagine turning this triangle around; as soon as you rotate it further than 90 degrees, the apparent order of the vertices flips!

Ok now we have an array of float values. How do we send them to OpenGL?

For that we employ a concept called Vertex Buffer Objects (VBOs). This basically means allocating a buffer in video memory and moving the geometry data over there. When we then draw our geometry, it already resides on the chip, reducing the bandwidth requirements per frame. This works fine for geometry that does not deform but stays static throughout the application's runtime. As soon as the topology of the geometry changes, the new data has to be copied to the buffer again. All other kinds of transformations like translation, rotation, and scaling can easily be done in the vertex shader without changing the buffer contents.

//generate an ID for our geometry buffer in the video memory and make it the active one
glGenBuffers(1, &m_geometryBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_geometryBuffer);
 
//send the data to the video memory
glBufferData(GL_ARRAY_BUFFER, geometryData.size() * sizeof(float), &geometryData[0], GL_STATIC_DRAW);


The first step is to generate such a buffer in video memory. This is done by the call to glGenBuffers, where the parameters are the number of buffers to allocate, and a pointer to the variable which will hold the id of the buffer. m_geometryBuffer is defined as an unsigned int in the header file.

Then we bind the buffer so that all following buffer operations are performed on our geometry buffer. GL_ARRAY_BUFFER means that we store plain vertex data in there; it is used for everything except index buffers, which are bound as GL_ELEMENT_ARRAY_BUFFER instead.

Finally, we send the geometry data to our VBO using glBufferData. The parameters are the type again, then how many bytes we want to copy, a pointer to the data, and a usage hint to allow some optimization of memory usage by the graphics chip. For static geometry this is usually GL_STATIC_DRAW, but it might be GL_DYNAMIC_DRAW if we were to upload new data regularly.

Adding color


Right now we just pass over geometry data, which would let us draw the triangle in a single solid color. But we can use the same mechanism for other per-vertex attributes as well; for instance, we can specify a color for each vertex.

Color values in most graphical applications are specified as intensities in the red, green, and blue channel. These range from 0 to 255 in most image formats, but as we deal with floating point numbers on the graphics chip, the intensities are specified in the range from 0 to 1 here.

The same concept as for the geometry data applies to the color data as well:


//create a color buffer, to make our triangle look pretty
std::vector<float> colorData;
 
//3 floats define one color value (red, green and blue) with 0 no intensity and 1 full intensity
//each color triplet is assigned to the vertex at the same position in the buffer, so first color -> first vertex
 
//first vertex is red
colorData.push_back(1.0); colorData.push_back(0.0); colorData.push_back(0.0);
//lower right vertex is green
colorData.push_back(0.0); colorData.push_back(1.0); colorData.push_back(0.0);
//top vertex is blue
colorData.push_back(0.0); colorData.push_back(0.0); colorData.push_back(1.0);
 
//generate an ID for the color buffer in the video memory and make it the active one
glGenBuffers(1, &m_colorBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_colorBuffer);
 
//send the data to the video memory
glBufferData(GL_ARRAY_BUFFER, colorData.size() * sizeof(float), &colorData[0], GL_STATIC_DRAW);

Drawing

Alright, now that our data is in video memory, we just need to display it. This goes hand in hand with telling OpenGL which shader program we want to use.

We won't go into detail about how the loading of a shader works as it is a mere technicality. The Shader class is pretty well documented though for those who want to know the details. Let me know in the forums if there's a need for further explanations.

//load our shader
m_shader = new Shader("shader.vert", "shader.frag");
 
if(!m_shader->compileAndLink())
{
    NSLog(@"Encountered problems when loading shader, application will crash...");
}
 
//tell OpenGL to use this shader for all coming rendering
glUseProgram(m_shader->getProgram());


What is basically happening in this bit of code, is that we create an object of our class Shader and tell it which files to use as input. As we covered earlier, there is a vertex shader and a fragment shader stage. They both get their own file which we'll look at in the next section, and then they are loaded, compiled and linked to a shader program which is a vertex shader and fragment shader pair. That happens in the compileAndLink() method, which returns false on failure.

Finally we tell OpenGL to use the shader program that we just generated by invoking glUseProgram.

That was easy. Now how do we plug the geometry and color data into this shader?

A vertex shader can cope with several input values per vertex, as stated earlier when we added the color buffer. These values are called vertex attributes. From within the shader these attributes are accessed by globally defining them as e.g. attribute vec4 position; which means we have an attribute called position that is specified as a vector of 4 float values.

From the application's point of view, we now need to be able to say that the data in our geometry buffer has to be assigned to an attribute called "position", and the color buffer maps to "color". We can use whatever name we like, but let's stick with those for this lesson.

//get the attachment points for the attributes position and color
m_positionLocation = glGetAttribLocation(m_shader->getProgram(), "position");
m_colorLocation = glGetAttribLocation(m_shader->getProgram(), "color");
 
//check that the locations are valid, negative value means invalid
if(m_positionLocation < 0 || m_colorLocation < 0)
{
    NSLog(@"Could not query attribute locations");
}
 
//enable these attributes
glEnableVertexAttribArray(m_positionLocation);
glEnableVertexAttribArray(m_colorLocation);


OpenGL indexes all attributes in a shader program when it is linked, and by calling glGetAttribLocation with the shader and the name of the attribute in the shader source code we can query that index and store it. m_positionLocation and m_colorLocation are ints, because any invalid attribute name yields a response of -1.

Lastly we have to enable passing data to these attributes through glEnableVertexAttribArray.

Up to now we have just added all steps to our application initialization method. But the actual drawing has to happen every frame. So in our Lesson02::draw method we now have to map our buffered data to the attributes and finally invoke the drawing.

//bind the geometry VBO
glBindBuffer(GL_ARRAY_BUFFER, m_geometryBuffer);
//point the position attribute to this buffer, being tuples of 4 floats for each vertex
glVertexAttribPointer(m_positionLocation, 4, GL_FLOAT, GL_FALSE, 0, NULL);


To map the data, we first bind the geometry buffer to make it active, as we did before when we populated it with data. Then we tell OpenGL that this buffer is the place to take the data for the position attribute from. glVertexAttribPointer does exactly that, and expects the attribute location, how many values make up one element (here 4 floats make one vertex), the data type (float), whether the values should be normalized (usually not), a stride that is used if we interleave data (we don't, so 0), and a last parameter which can be a pointer to an array that would be copied over to the graphics card every frame. But we can also pass NULL, which tells OpenGL to use the currently bound buffer contents.

//bind the color VBO
glBindBuffer(GL_ARRAY_BUFFER, m_colorBuffer);
//this attribute is only 3 floats per vertex
glVertexAttribPointer(m_colorLocation, 3, GL_FLOAT, GL_FALSE, 0, NULL);


The procedure for the color buffer looks the same.

Now we're done preparing everything and can draw our triangle:

//initiate the drawing process, we want a triangle, start at index 0 and draw 3 vertices
glDrawArrays(GL_TRIANGLES, 0, 3);


The call to glDrawArrays takes 3 arguments: the primitive we want to draw (can be one of GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, and GL_TRIANGLES), the offset to the first element in the buffer, and how many vertices we are going to draw.

If we were to add another triangle with 3 new vertices to both the geometry and color buffer and wanted to draw both, we would just increase the number of vertices from 3 to 6 and would get two triangles here!

Yay! We've actually drawn the first triangle to the screen!

OpenGL Shading Language

But we didn't look at what happens on the graphics chip yet. The heart of what is called the programmable graphics pipeline are the vertex and fragment shaders, which are specified in the OpenGL ES Shading Language, or GLSL. It looks pretty close to C, but has some different data types with a focus on vectors of 2, 3, or 4 components. For float data these vectors are vec2, vec3 and vec4.

We are not going to look at all the details of GLSL here, but focus on this simple example for now. A really good and extensive help for coding GLSL is pages 3 and 4 of the quick reference card, which can be found here: http://www.khronos.org/opengles/sdk/2.0/docs/reference_cards/OpenGL-ES-2_0-Reference-card.pdf

So what does our vertex shader shader.vert look like? We already talked about the attributes which serve as input:

//the incoming vertex' position
attribute vec4 position;
 
//and its color
attribute vec3 color;


There we see a 4-component vector for the position and a 3-component one for the color in action. But we actually don't need the color in the vertex shader but in the fragment shader. To pass values from the vertex to the fragment shader they have to be defined globally with the varying qualifier.
//the varying statement tells the shader pipeline that this variable
//has to be passed on to the next stage (so the fragment shader)
varying lowp vec3 colorVarying;
The name colorVarying might sound a bit weird, but it serves a demonstrational purpose. Remember that the vertex shader is called for each vertex, so 3 times for the triangle, but the fragment shader is executed for every covered pixel. The trick is that the value in a varying variable is linearly interpolated in each component across the triangle. This is why our resulting triangle has these smooth color gradients.

The keyword lowp stands for low precision. In GLSL we need to specify a precision for our variables: either highp, mediump or lowp. For color values precision is not crucial, so we're happy with low precision. For vertex positions you will usually use high precision to prevent weird artifacts at boundaries.

//the shader entry point is the main method
void main()
{
    colorVarying = color; //save the color for the fragment shader
    gl_Position = position; //copy the position
}

When the shader is executed, its main() method is called. In the main method we pass on the color value, and copy the position attribute into the predefined output variable gl_Position, which every vertex shader has to write; it specifies the position used for rasterization.

Finally, the fragment shader shader.frag takes the passed-on color value and stores it in the fragment shader specific output variable gl_FragColor. As its main method is invoked for every covered fragment, by setting the color for each of them we get a nicely colored shape on the screen.

//incoming values from the vertex shader stage.
//if the vertices of a primitive have different values, they are interpolated!
varying lowp vec3 colorVarying;
 
void main()
{
    //create a vec4 from the vec3 by padding a 1.0 for alpha
    //and assign that color to be this fragment's color
    gl_FragColor = vec4(colorVarying, 1.0);
}

gl_FragColor is a vec4 so we need to pad a value for alpha at the end. The alpha channel defines the opacity of the color. We want our triangle to be fully opaque, so 1.0.

Note: GLSL does not allow implicit type casting from int to float, meaning you cannot just write a "1" here but have to type "1.0"!

With that I conclude this lesson and leave it up to you to play around with the colors, the vertex positions, and adding more triangles. Make sure you understand these basic principles!

Cheers,
Carsten
Downloads:
Lesson 01-NeHe iOS OpenGL Tutorials

Friday, November 4, 2011

iOS Lesson 01 - Setting up GL ES (NeHe)

Hi all, welcome to the first tutorial in our new iOS series!
Preface

Before you start with the tutorial you should know that I'm fairly new to Objective-C and iOS, so if there's anything I'm doing wrong or that could be done better, let me know! I'll just use Objective-C to interact with the OS anyway; all the OpenGL related code will be in C++. I will try to explain most things as we come across them, but I will assume you have some basic programming knowledge and understand the fundamental concepts of object oriented programming, like classes and a class diagram like the one below.
This code is working with iOS 4 and 5, so even compatible with the iPhone 4S and whatever you have out there.
We are going to use XCode 4, so if you don't have that installed yet, go and grab it from the Mac AppStore! And if anybody read up to this point hoping they could develop iPhone apps from Windows or Linux, I'm sorry to disappoint you: you can't.
Note: Just owning a Mac and an iPhone/iPod/iPad does not mean that you can actually run your self-written applications on your device. Unfortunately, to run apps on a device, you have to be a registered Apple Developer, which costs $99 per year. If you don't want to spend that much money just to try it, you can still develop on the iOS Simulator until your game idea is polished enough to be sold on the AppStore :)
Note: What is OpenGL ES anyway? OpenGL is a standard which defines an API to access drawing and geometry rasterization functionality. On the desktop it has evolved to version 4.2 by now, providing access to the most recent features of new graphics cards, a lot of which deal with shaders. The OpenGL Shading Language (GLSL) is used to take control over specific parts of the graphics pipeline; we will see that in later tutorials. The ES stands for embedded systems, so this API is tailored to mobile phones, consoles, and anything that is not a full PC. GL ES is currently at version 2.0, and is quite similar to desktop OpenGL minus the really advanced functionality.
And I know this first lesson looks scary; it is probably the most boring lesson in this series, and we don't even see any cool stuff at the end... :( Nevertheless, the lesson contains a whole lot of valuable information and is important for understanding the interactions between the different parts of our framework. So be sure to read through or skim over it before moving on; you don't need to understand everything, as you can always come back later if you want the details. But in good old NeHe manner, I tried to explain every single bit of code!
Overview

Let's first have a look at all the things and classes we need. As I said before, we'll have some Objective-C classes (with purple lines), and some C++ classes (in cyan).

The starting point of the application is a main method, like in all C/C++ programs. From there we start a UIApplicationMain, which we'll configure with the InterfaceBuilder to be a UIWindow containing an EAGLView and using our Lesson01AppDelegate to process all events. The window is created by the UIApplicationMain automatically and will then display our view. The view contains our OpenGL context which you can think of as our access to a canvas where we can use OpenGL ES to draw to. Within the view, we will hook into the operating system's run loop, and redraw our frame as often as we can. This is required for later lessons with animation. The drawing itself is done by the draw method of a Lesson, or more precisely, of a Lesson01 object, because init() and draw() are abstract methods in Lesson.
Walkthrough

Okay, that was pretty high level, so now let's look at how this works step by step. Grab the code from here (or by cloning the git repository from http://code.google.com/p/nehe). Fire up XCode and open the project file Lesson01.xcodeproj.
Note: If you want to create your own OpenGL ES project later on, you can do so using the project wizard. The structure of that will be a bit more confusing, but it has the same components. The draw method would then be yourProjectNameViewController::drawFrame(). The lesson code here is simply cleaned up, and I separated our OpenGL code from the window.
You should see 3 folders in your project to the left: Lesson01, Frameworks and Products. Lesson01 contains all our code and has subfolders Supporting Files and Basecode. Frameworks contains all the Frameworks we intend to use, or which are needed by our project. Developers from other languages or operating systems call them libraries. Products finally lists all the applications that will be built, which is just our Lesson01.app right now.
In our source code we will mainly work in the LessonXX class in the following lessons, and the AppDelegate changes every time to create an object of our current lesson instance. But in this first lesson we will have a detailed look at the Basecode and some of the Supporting Files.
Let's approach the different files in the order they are traversed when the application runs. As we already mentioned, the birth of every program is its main method. It is found in Lesson01/Supporting Files/main.m

#import <UIKit/UIKit.h>

//standard main method, looks like that for every iOS app..
int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"Running app");
    int retVal = UIApplicationMain(argc, argv, nil, nil);
    [pool release];
    return retVal;
}


The method is rather uninformative and looks pretty much the same for every iPhone app. The NSAutoreleasePool is needed for Objective-C's reference-counted memory management; we print a log message, and then comes the important part: we hand over control to the UIApplicationMain method and pass our application's parameters. As soon as control comes back, we release the autorelease pool and end the program with the return value that came from UIApplicationMain.
The UIApplicationMain method runs, as the name implies, a user interface. In our project settings under Targets->Lesson01, tab Summary, our MainInterface is set to MainWindow. This tells our App, that we want it to display MainWindow.xib at startup. This file is under Supporting Files as well, and opens in the InterfaceBuilder.
In the new sidebar left of the editor, you will see the Placeholders and Objects. Among the Objects is our Lesson01AppDelegate and a Window with an embedded View. If you've never done any GUI programming you could imagine this like opening your favourite Office text processor (=window), and open a document (=view). Now you can switch between documents(views) filled with content (here UI elements like buttons, text fields, or an OpenGL ES canvas) without closing the whole program. But to be able to see any content, you need an open document (view).
The important thing to note about the setup of our window is not obvious unless you control-click (or right-click) Lesson01AppDelegate. There you see two defined outlets glView and window, which are connected to the View and the Window in the InterfaceBuilder. An outlet defines a slot, where a variable in the code of our app delegate contains a reference to the connected element in the InterfaceBuilder.

Lesson01AppDelegate

Which brings us to the first important code file: Lesson01AppDelegate.h

#import <UIKit/UIKit.h>

//we want to create variables of these classes, but don't need their implementation yet,
//so we just tell the compiler that they exist - called forward declaration
@class EAGLView;
class Lesson;

//This is our delegate class. It handles all messages from the device's operating system
@interface Lesson01AppDelegate : NSObject <UIApplicationDelegate> {
@private
    //we store a pointer to our lesson so we can delete it at program shutdown
    Lesson *lesson;
}

//we configure these variables in the interface builder (IB), thus they have to be declared as IBOutlet
//properties get accessor methods by synthesizing them in the source file (.mm)

//in this window we will embed a view which acts as OpenGL context
@property (nonatomic, retain) IBOutlet UIWindow *window;

//our main window, covering the whole screen
@property (nonatomic, retain) IBOutlet EAGLView *glView;

@end


In this Objective-C style class definition we first declare the interface in the header, before the implementation of the methods follows in the corresponding source (.mm for Objective-C++) file. As also seen in the class diagram, it has member variables for the window, our glView, and the lesson which will be created when the AppDelegate is initialized. The important thing to note is the definition of the properties as IBOutlets so they can be used in the InterfaceBuilder. An object of this class is automatically created when the application starts, because it is needed to handle the window's events. To be able to handle the window events, our AppDelegate implements the interface (or in Objective-C terminology: protocol) UIApplicationDelegate. This is done by adding it in < > brackets after the class name definition.
The first event the window triggers after it has been created is didFinishLaunchingWithOptions. The code is in Lesson01AppDelegate.mm

#import "Lesson01AppDelegate.h"

#import "EAGLView.h"
#include "Lesson01.h"

//now we implement all methods needed by our delegate
@implementation Lesson01AppDelegate

//
@synthesize window;
@synthesize glView;

//this method tells us, that our application has started and we can set up our OpenGL things,
//as the window is set up, and thus our glView is going to be displayed soon
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    //for the first lesson we don't need a depth buffer as we're not drawing any geometry yet
    [glView setDepthBufferNeeded:FALSE];
    
    //we create our lesson which contains the OpenGL code
    //(allocated with new -> has to be cleaned up with delete!)
    lesson = new Lesson01();
    
    //we tell our OpenGL view which lesson we want to use for rendering.
    [glView setLesson:lesson];
    
    return YES;
}


As you can see, here we configure our glView and create the lesson instance. One important thing is that we defined the member variable lesson to be a pointer to a Lesson instance, but as Lesson01 is derived from Lesson (see the class diagram), it offers the same interface and can be used as well.
Also note that we synthesize our two properties from the header; this auto-generates getter and setter methods for the otherwise private fields.
The remainder of Lesson01AppDelegate.mm deals with the other events that can be triggered from the window, and they are all related to becoming visible or being moved to the background. In the case of becoming visible, we tell our view to start refreshing repeatedly - and when we move to the background or close, we stop refreshing. Finally, if the AppDelegate gets disposed, its dealloc method is called where we clean up everything by freeing the memory that has been allocated. In Objective-C that is done with release, C++ pointers get created with new and deleted with delete.

- (void)applicationWillResignActive:(UIApplication *)application
{
    //the app is going to be suspended to the background,
    //so we should stop our render loop
    [glView stopRenderLoop];
}

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    //we could do something here when the application entered the background
}

- (void)applicationWillEnterForeground:(UIApplication *)application
{
    //we could start preparing stuff for becoming active again
}

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    //we're on stage! so let's draw some nice stuff
    [glView startRenderLoop];
}

- (void)applicationWillTerminate:(UIApplication *)application
{
    //before shutdown we stop the render loop
    [glView stopRenderLoop];
}

//dealloc is the destructor in Objective-C, so clean up all allocated things
- (void)dealloc
{
    [window release];
    [glView release];
    delete lesson;
    [super dealloc];
}

@end


Now let's finally move on to the classes involving OpenGL context creation and drawing!
EAGLView

As we already learned, the window and the delegate get created automatically when we run UIApplicationMain. When the window is created, it knows that it has to contain a view which is an EAGLView, because we defined the corresponding outlet in our delegate. So the window automatically creates the view, as it needs it to be displayed. Let's first look at the header EAGLView.h:

//forward declarations again
@class EAGLContext;
class Lesson;

// This class combines our OpenGL context (which is our access to all drawing functionality)
// with a UIView that can be displayed on the iOS device. It handles the creation and presentation
// of our drawing surface, as well as handling the render loop which allows for seamless animations.
@interface EAGLView : UIView {
@private
    // The pixel dimensions of the CAEAGLLayer.
    GLint framebufferWidth;
    GLint framebufferHeight;
    
    // These are the buffers we render to: the colorRenderbuffer will contain the color that we will
    // finally see on the screen, the depth renderbuffer has to be used if we want to make sure that
    // we always see only the closest object and not just the one that has been drawn most recently.
    // The framebuffer is a collection of buffers to use together while rendering, here it is either
    // just the color buffer, or color and depth renderbuffer.
    GLuint defaultFramebuffer, colorRenderbuffer, depthRenderbuffer;
    
    // The display link is used to create a render loop
    CADisplayLink* displayLink;
    
    // Do we need a depth buffer
    BOOL useDepthBuffer;
    
    // The pointer to the lesson which we're rendering
    Lesson* lesson;
    
    // Did we already initialize our lesson?
    BOOL lessonIsInitialized;
}

// The OpenGL context as a property (has autogenerated getter and setter)
@property (nonatomic, retain) EAGLContext *context;

// Configuration setters
- (void) setDepthBufferNeeded:(BOOL)needed;
- (void) setLesson:(Lesson*)newLesson;

//if we want OpenGL to repaint with the screens refresh rate, we use this render loop
- (void) startRenderLoop;
- (void) stopRenderLoop;
@end


The class is derived from a UIView, as it just specializes what this very view shall do. Being a subclass of UIView allows us to override several methods (or selectors in Objective-C), which we will see in the source file.
Note: The name EAGLView is not mandatory, but very common in iOS GL ES programs, because the context is called EAGLContext. EAGL probably stands for "Embedded AGL"  where AGL are Apple's OpenGL Extensions.
As we already mentioned, the EAGLView encapsulates our OpenGL ES context. The OpenGL context can be seen as the permission to use OpenGL calls for drawing. It also keeps track of all the states we set, like the current color or which image we're currently using as texture on our surfaces. The context is only useful in combination with a canvas we can draw to. This canvas is implemented in a construct called a framebuffer. It can consist of several layers of buffers storing different information. The two layers that we usually need are a color renderbuffer (render because we are going to render to it), and a depth renderbuffer. The color renderbuffer stores information pretty similar to what is stored in images like JPEG, namely a few bytes per color channel per pixel. This is what we will eventually see on the screen. The depth renderbuffer records for every pixel in our color buffer how far it is away from our screen, so that if we draw a house that is 10 units away and a person in front of it that is 5 units away, the person will definitely appear in front of the house, no matter which one is drawn first! This is called depth testing. The depth buffer is not meant to be displayed, though.
After this explanation most of the member variables of our EAGLView class should make sense: we store the width and height of our framebuffer (the color and depth buffer have to be the same size!), and we store IDs of the buffers which act as names or references in OpenGL.
Next is a CADisplayLink which allows us to hook into the system's run loop and request a redraw around 60 times a second. Then we have a switch to enable/disable the use of a depth buffer. As the buffer consumes valuable memory on the graphics chip, we should not set it up if we don't need it.
And obviously we need a pointer to our lesson so that we can invoke our draw() method, and a flag whether we already initialized it.
The context which we have talked about already for so long, is stored as an EAGLContext property.
Last but not least we have 4 methods with pretty self explanatory names which are used by the AppDelegate.
Before we can dive into the real code, EAGLView.mm begins with some initialization stuff:

#import <QuartzCore/QuartzCore.h>

#import "EAGLView.h"
#include "Lesson.h"

//declare private methods, so they can be used everywhere in this file
@interface EAGLView (PrivateMethods)
- (void)createFramebuffer;
- (void)deleteFramebuffer;
@end


//start the actual implementation of our view here
@implementation EAGLView

//generate getter and setter for the context
@synthesize context;

// We have to implement this method
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}


We first add the private methods to the class definition. If we didn't do this, we'd have to order the code so that every method is defined before it is used. Not putting the declarations of the private methods into the header file has the advantage of keeping the header clean, although it does not really improve the readability of the source.
Then we actually begin the implementation of our EAGLView, synthesize our context to get auto-generated getters and setters, and override the layerClass method of UIView. This needs to be done because our view does not behave like a standard UI element but paints onto a CAEAGLLayer (CA for CoreAnimation).

//our EAGLView is the view in our MainWindow which will be automatically loaded to be displayed.
//when the EAGLView gets loaded, it will be initialized by calling this method.
- (id)initWithCoder:(NSCoder*)coder
{
    //call the init method of our parent view
    self = [super initWithCoder:coder];
    
    //now we create the core animation EAGL layer
    if (!self) {
        return nil;
    }

    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    
    //we don't want a transparent surface
    eaglLayer.opaque = TRUE;
    
    //here we configure the properties of our canvas, most important is the color depth RGBA8 !
    eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithBool:FALSE], kEAGLDrawablePropertyRetainedBacking,
                                    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
                                    nil];
    
    //create an OpenGL ES 2 context
    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    
    //if this failed or we cannot set the context for some reason, quit
    if (!context || ![EAGLContext setCurrentContext:context]) {
        NSLog(@"Could not create context!");
        [self release];
        return nil;
    }

    //do we want to use a depth buffer?
    //for 3D applications we usually do, but this lesson only clears the
    //color buffer, so we leave it disabled
    useDepthBuffer = FALSE;
    
    //we did not initialize our lesson yet:
    lessonIsInitialized = FALSE;
    
    //default values for our OpenGL buffers
    defaultFramebuffer = 0;
    colorRenderbuffer = 0;
    depthRenderbuffer = 0;
    
    
    return self;
}


When our view is initialized, the method initWithCoder is executed. That is when we start setting up our context. First, we initialize our superclass UIView and check that everything went fine. Then we grab our view's layer as a CAEAGLLayer and make it drawable. (Don't ask me about details of this part.. ;) )
The next step is where we finally create our OpenGL context, and we require it to be version 2 (available since the iPhone 3GS and iPod Touch 3rd gen) by calling initWithAPI:kEAGLRenderingAPIOpenGLES2. If this was successful and we're able to make the context current (visually speaking: take the brush in our hand), we move on and set our member variables to default values.
Remember the paragraph about framebuffers? Let's create our canvas which will serve as playground in all following lessons!

//on iOS, all rendering goes into a renderbuffer,
//which is then copied to the window by "presenting" it.
//here we create it!
- (void)createFramebuffer
{
    //this method assumes, that the context is valid and current, and that the default framebuffer has not been created yet!
    //this works, because as soon as we call glGenFramebuffers the value will be > 0
    assert(defaultFramebuffer == 0);
    
    NSLog(@"EAGLView: creating Framebuffer");
    
    // Create default framebuffer object and bind it
    glGenFramebuffers(1, &defaultFramebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
    
    // Create color render buffer
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);


First we assert that we don't have a framebuffer already. Then we generate an ID for our framebuffer by calling OpenGL's function for this. This assures that the ID is unique, and every generated ID is greater than zero. After the ID is generated, we bind the framebuffer. OpenGL keeps track of the currently active object of most kinds, like the active framebuffer, the most recently set color, the active texture or the active shader program. This causes all subsequent framebuffer-related API calls to affect the currently bound framebuffer.
Next we do the same thing for our color renderbuffer: we generate a name and bind it.

//get the storage from iOS so it can be displayed in the view
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
    //get the frame's width and height
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
    
    //attach this color buffer to our framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);


This color renderbuffer is special. We want the colors that we draw to end up as the UIView's content; that's why we created the CAEAGLLayer earlier. Now we grab the layer and use it as renderbuffer storage. This way, the colors we draw end up in the UIView without the need to copy the buffer contents again. That is done by calling the renderbufferStorage method of our context. The neat thing about this is that we automatically get a buffer adjusted to the size of our view. The next two lines just query the width and height of our framebuffer.
glFramebufferRenderbuffer is very important. As we learned before, a framebuffer consists of several layers, called attachments. Here we tell OpenGL that our currently bound framebuffer has a color buffer attached to it, namely our color renderbuffer. The index in GL_COLOR_ATTACHMENT0 indicates that you can have several color attachments in one framebuffer, but that goes beyond the scope of this lesson.

//our lesson needs to know the size of the renderbuffer so it can work with the right aspect ratio
if(lesson != NULL)
{
    lesson->setRenderbufferSize(framebufferWidth, framebufferHeight);
}


As we just found out about the size of our actual render window, we need to pass this on to our lesson to make it actually render to the full screen size.

if(useDepthBuffer)
    {
        //create a depth renderbuffer
        glGenRenderbuffers(1, &depthRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
        //create the storage for the buffer, optimized for depth values, same size as the colorRenderbuffer
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
        //attach the depth buffer to our framebuffer
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
    }


If we requested a depth buffer, we perform pretty much the same steps as for the color renderbuffer, but this time we create the storage ourselves by calling glRenderbufferStorage with details on which kind of data we store (GL_DEPTH_COMPONENT16 means 16 bits per pixel) and how big the buffer is.

//check that our configuration of the framebuffer is valid
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}


Finally we make sure that our framebuffer is ready to be rendered to. This is best practice with framebuffer objects (FBOs for short), as there are many ways to get the configuration wrong :)

//deleting the framebuffer and all the buffers it contains
- (void)deleteFramebuffer
{
    //we need a valid and current context to access any OpenGL methods
    if (context) {
        [EAGLContext setCurrentContext:context];
        
        //if the default framebuffer has been set, delete it.
        if (defaultFramebuffer) {
            glDeleteFramebuffers(1, &defaultFramebuffer);
            defaultFramebuffer = 0;
        }
        
        //same for the renderbuffers, if they are set, delete them
        if (colorRenderbuffer) {
            glDeleteRenderbuffers(1, &colorRenderbuffer);
            colorRenderbuffer = 0;
        }
        
        if (depthRenderbuffer) {
            glDeleteRenderbuffers(1, &depthRenderbuffer);
            depthRenderbuffer = 0;
        }
    }
}


Everything we create has to be deleted again at some point. This is true for FBOs and their renderbuffers as well. As always we can only use OpenGL calls if we have a valid and current context, and we use glDeleteFramebuffers and glDeleteRenderbuffers for deletion.

//this is where all the magic happens!
- (void)drawFrame
{
    //we need a context for rendering
    if (context != nil)
    {
        //make it the current context for rendering
        [EAGLContext setCurrentContext:context];
        
        //if our framebuffers have not been created yet, do that now!
        if (!defaultFramebuffer)
            [self createFramebuffer];

        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        
        //we need a lesson to be able to render something
        if(lesson != nil)
        {
            //check whether we have to initialize the lesson
            if(lessonIsInitialized == FALSE)
            {
                lesson->init();
                lessonIsInitialized = TRUE;
            }
            
            //perform the actual drawing!
            lesson->draw();
        }
        
        //finally, get the color buffer we rendered to, and pass it to iOS
        //so it can display our awesome results!
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER];
    }
    else
        NSLog(@"Context not set!");
}


Now let's look at the heart of our app, the drawFrame method. This is the method that is called every time a frame is rendered, invoked by our display link. Here we actually make sure that we have a context, that the framebuffer is created and bound, and that the lesson is present and initialized. If all that is true, then we call lesson->draw(), which will be the method we will mostly focus on in the following lessons. The last lines are pretty interesting. After having called lesson->draw(), our renderbuffers now contain what we rendered. To actually tell the system that we have new information that shall be displayed, we bind the color buffer and ask our context to present it in [context presentRenderbuffer:GL_RENDERBUFFER].
We just mentioned the display link. Remember that we invoke startRenderLoop and stopRenderLoop from our AppDelegate to have the application refreshed periodically if it is active?

//our render loop just tells the iOS device that we want to keep refreshing our view all the time
- (void)startRenderLoop
{
    //check whether the loop is already running
    if(displayLink == nil)
    {
        //the display link specifies what to do when the screen has to be redrawn,
        //here we use the selector (method) drawFrame
        displayLink = [self.window.screen displayLinkWithTarget:self selector:@selector(drawFrame)];
        
        //by adding the display link to the run loop our draw method will be called 60 times per second
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        NSLog(@"Starting Render Loop");
    }
}


To begin updating our screen periodically, after making sure we haven't already set this up, we create a CADisplayLink for our screen, telling it what to do when the display needs to be redrawn. So we pass self as target and the selector drawFrame to the method displayLinkWithTarget. To actually force a refresh around 60 times per second, we need to add this displayLink to the system's run loop, which is done in the next line: we obtain the current run loop via NSRunLoop's static method currentRunLoop and pass it to the displayLink's addToRunLoop method. Now we're actually rendering! But how do we stop this whenever our app moves to the background?

//we have to be able to stop the render loop
- (void)stopRenderLoop
{
    if (displayLink != nil) {
        //if the display link is present, we invalidate it (so the loop stops)
        [displayLink invalidate];
        displayLink = nil;
        NSLog(@"Stopping Render Loop");
    }
}


To stop the displayLink we just call its invalidate method. This removes it from all run loops and releases it, so our pointer would be left dangling. That's why we set it to nil again; never ever leave pointers dangling!
We're almost done with the class EAGLView, there are just two simple setters left to implement, a destructor (the dealloc method in Objective-C), and a callback method for a windowing event.

//setter methods, should be straightforward
- (void) setDepthBufferNeeded:(BOOL)needed
{
    useDepthBuffer = needed;
}

- (void) setLesson:(Lesson*)newLesson
{
    lesson = newLesson;
    //if we set a new lesson, it is not yet initialized!
    lessonIsInitialized = FALSE;
}

//As soon as the view is resized or new subviews are added, this method is called,
//apparently the framebuffers are invalid in this case so we delete them
//and have them recreated the next time we draw to them
- (void)layoutSubviews
{
    [self deleteFramebuffer];
}

//cleanup our view
- (void)dealloc
{
    [self deleteFramebuffer];  
    [context release];
    [super dealloc];
}


The two setters should be easy to understand. The callback layoutSubviews gets invoked whenever something about our UIView changes, so either it was resized or new subviews have been added to it. In this case we have to create our renderbuffers again because they become invalid through such an event. For this we just delete the buffers, as we know that drawFrame creates the framebuffer before rendering, if it is not present.
The dealloc method cleans up everything that was created by this class, namely the framebuffers and the context. For the framebuffers we use our delete method; the context has to be released as it is managed by reference counting. Finally we call our superclass' dealloc method, which is UIView's in this case. In Objective-C you have to invoke the dealloc method of the superclass manually; in C++ the destructor of the superclass is called automatically.
Lesson
We eventually end up in the class that plays the key role in all our following tutorials. A lesson is responsible for drawing every single frame, and for initializing our OpenGL stuff like loading images and shaders or sending geometry data to the graphics chip. See Basecode/Lesson.h

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

//this is our general lesson class, providing the two most important methods init and draw
//which will be invoked by our EAGLView
class Lesson
{
public:
    //constructor
    Lesson();
    //the destructor always has to be virtual!
    virtual ~Lesson();
    
    //abstract methods init and draw have to be defined in derived classes
    virtual void init() = 0;
    virtual void draw() = 0;
    
    //we need to know the size of our drawing canvas (called renderbuffer here),
    //so this method just saves the parameters in the member variables
    virtual void setRenderbufferSize(unsigned int width, unsigned int height);
    
//all protected stuff will be visible within derived classes, but from nowhere else  
protected:
    //fields for the renderbuffer size
    unsigned int m_renderbufferWidth, m_renderbufferHeight;
};


The first two lines include the OpenGL header files which define the API. Then we define the class, which has a pretty simple interface: a constructor and a virtual destructor. In C++, every method that you are going to override in derived classes has to be declared virtual in the base class (the derived class then inherits the virtual qualifier). Remember that constructors are never virtual, but the destructor of a class meant to be subclassed should always be!
The init and draw methods define the interface for the core functionality. But as the class Lesson is just a generic interface, we don't want to implement these methods here but leave that to the derived classes (e.g. Lesson01). This is why the methods are not only virtual, but also set to zero. This is the C++ way of declaring a pure virtual method, which implicitly makes the class Lesson abstract. An abstract class cannot be instantiated, which prevents invalid use of the unimplemented methods.
We have two protected member variables for our renderbuffer size. Protected means they will be visible in derived classes, but nowhere else. We already saw in EAGLView::createFramebuffer that we called the method setRenderbufferSize to pass these values to the lesson, and here we receive this input.
The implementation in Basecode/Lesson.mm is quite simple. The constructor does nothing but set the renderbuffer size to zero, and the destructor does not need to clean up anything as we haven't allocated any further memory within the class.

#include "Lesson.h"

//Lesson constructor, set default values
Lesson::Lesson():
    m_renderbufferWidth(0),
    m_renderbufferHeight(0)
{
    
}

//Lesson destructor
Lesson::~Lesson()
{
    //cleanup here
}

//save the renderbuffer size in the member variables
void Lesson::setRenderbufferSize(unsigned int width, unsigned int height)
{
    m_renderbufferWidth = width;
    m_renderbufferHeight = height;
    
    glViewport(0, 0, m_renderbufferWidth, m_renderbufferHeight);
}


The first interesting OpenGL call happens in setRenderbufferSize. After storing the width and height in our member variables, we call glViewport(x, y, width, height). With this we tell OpenGL which part of our screen we are drawing to. We specify that we want the whole screen by starting at the lower-left corner (0, 0) and using the full width and height.
Lesson01
Now we know every piece of our application except the actual OpenGL part. This was necessary to get started; I promise from now on there will be less code and more explanations and images :)

//We derive our current lesson class from the general lesson class
class Lesson01 : public Lesson
{
public:
    //overwrite all important methods
    Lesson01();
    virtual ~Lesson01();
    
    virtual void init();
    virtual void draw();
};


For every lesson we will derive a new LessonXX class, putting in more and more features of OpenGL ES. This lesson's Lesson01.h is very simplistic, we just implement the necessary methods declared abstract in the superclass Lesson.
In this lesson we will start with the simplest of all operations: clearing the screen. You can find the code in Lesson01.mm. There we first define our constructor and destructor, although they don't have to do anything yet.

#include "Lesson01.h"

//lesson constructor
Lesson01::Lesson01()
{
    //initialize values
}

//lesson destructor
Lesson01::~Lesson01()
{
    //do cleanup
}


When we initialize our lesson, we set the color we wish to use to overwrite everything that was there before. Colors in OpenGL are usually specified as intensities of the color channels red, green, and blue (commonly known as RGB color). Each intensity is a floating point value between 0 (no intensity) and 1 (full intensity). Sometimes (e.g. in image formats like JPEG) the intensity is given as a value between 0 and 255. This additive color model allows us to describe every color that a computer monitor can display.

(Image taken from: http://en.wikipedia.org/wiki/File:RGB_farbwuerfel.jpg)
So let's define our clearing color to plain red. This means we need full intensity in our red channel, and no intensity in green and blue.

//initializing all OpenGL related things
void Lesson01::init()
{
    NSLog(@"Init..");
    
    //set the color we use for clearing our colorRenderbuffer to red
    glClearColor(1.0, 0.0, 0.0, 1.0);
}


What's that? There are four parameters to glClearColor? The last value specifies the alpha value (making it an RGBA color), which defines the opacity and will be 1 for most surfaces. The alpha value can be used to mix the current and new color of each pixel (called blending), but here we simply set each pixel to the RGBA tuple and don't use blending, so the value does not actually matter.
Now we also need to tell OpenGL that we want it to clear our color buffer. This is done with the command glClear(GL_COLOR_BUFFER_BIT). We will do this at the beginning of every frame, so it goes into the draw() method.
//drawing a frame
void Lesson01::draw()
{
    //clear the color buffer
    glClear(GL_COLOR_BUFFER_BIT);
    
    //everything should be red now! yay :)
}


Congratulations! You've just completed your first OpenGL application on iOS! Play around with the color a bit to make sure you understand the RGB color model, and next time we'll actually start drawing.
Stay tuned!
Carsten
Downloads:
   * Lesson01 source code
Forum for this lesson:
   * http://www.gamedev.net/topic/613789-nehe-ios-lesson-01-qa/

2011年10月19日星期三

Beginning OpenGL ES 2.0 with GLKit Part 1

Beginning OpenGL ES 2.0 with GLKit Part 1:
Sink your teeth into GLKit!

Note from Ray: This is the fourth iOS 5 tutorial in the iOS 5 Feast! This tutorial is a free preview chapter from our new book iOS 5 By Tutorials. Enjoy!

iOS 5 comes with a new set of APIs that makes developing with OpenGL much easier than it used to be.

The new set of APIs is collectively known as GLKit. It contains four main sections:

  • GLKView/GLKViewController. These classes abstract out much of the boilerplate code it used to take to set up a basic OpenGL ES project.
  • GLKEffects. These classes implement common shading behaviors used in OpenGL ES 1.0, to make transitioning to OpenGL ES 2.0 easier. They’re also a handy way to get some basic lighting and texturing working.
  • GLKMath. Prior to iOS 5, pretty much every game needed its own math library with common vector and matrix manipulation routines. Now with GLKMath, most of the common math routines are there for you!
  • GLKTextureLoader. This class makes it much easier to load images as textures to be used in OpenGL. Rather than having to write a complicated method dealing with tons of different image formats, loading a texture is now a single method call!

The goal of this tutorial is to get you quickly up-to-speed with the basics of using OpenGL with GLKit, from the ground up, assuming you have no experience whatsoever. We will build a simple app from scratch that draws a simple cube to the screen and makes it rotate around.

In the process, you’ll learn the basics of using each of these new APIs. It should be a good introduction to GLKit, whether you’ve already used OpenGL in the past, or if you’re a complete beginner!

Note that this tutorial slightly overlaps some of the other OpenGL ES 2.0 tutorials on this site. This tutorial does not assume you have read them first, but if you have you might want to skip over the sections you already know.



OpenGL ES 1.0 vs OpenGL ES 2.0


Open GL ES 2.0 - Y U Make Me Write Everything?

Before we get started, I wanted to mention that this tutorial will be focusing on OpenGL ES 2.0 (not OpenGL ES 1.0).

If you are new to OpenGL ES programming, here is the difference between OpenGL ES 1.0 and OpenGL ES 2.0:

  • OpenGL ES 1.0 uses a fixed pipeline, which is a fancy way of saying you use built-in functions to set lights, vertexes, colors, cameras, and more.
  • OpenGL ES 2.0 uses a programmable pipeline, which is a fancy way of saying all those built-in functions go away, and you have to write everything yourself.

“OMG!” you may think, “well why would I ever want to use OpenGL ES 2.0 then, if it’s just extra work?!” Although it does add some extra work, with OpenGL ES 2.0 you can create some really cool effects that wouldn’t be possible in OpenGL ES 1.0, such as this toon shader (via Imagination Technologies):

Toon Shader with OpenGL ES 2.0

Or even these amazing lighting and shadow effects (via Fabien Sanglard):

Lighting and Shadow Shaders with OpenGL ES 2.0

Pretty cool eh?

OpenGL ES 2.0 is only available on the iPhone 3GS+, iPod Touch 3G+, and all iPads. But the percentage of people with these devices is in the majority now, so it’s well worth using!

OpenGL ES 2.0 does have a bit of a higher learning curve than OpenGL ES 1.0, but now with GLKit the learning curve is much easier, because the GLKEffects and GLKMath APIs let you easily do a lot of the stuff that was built into OpenGL ES 1.0.

I’d say if you’re new to OpenGL programming, it’s probably best to jump straight into OpenGL ES 2.0 rather than trying to learn OpenGL ES 1.0 and then upgrading, especially now that GLKit is available. And this tutorial should help get you started with the basics! :]

Getting Started


OK, let’s get started!

Create a new project in Xcode and select the iOS\Application\Empty Application Template. We’re selecting the Empty Application template (not the OpenGL Game template!) so you can put everything together from scratch and get a better idea how everything fits together.

Set the Product Name to HelloGLKit, make sure the Device Family is set to iPhone, make sure “Use Automatic Reference Counting” is checked but the other options are not, and click Next. Choose a folder to save your project in, and click Create.

If you run your app, you should see a plain blank Window:

A blank window

The project contains almost no code at this point, but let’s take a quick look to see how it all fits together. If you went through the Storyboard tutorial earlier in this book, this should be a good review.

Open main.m, and you’ll see the first function that is called when the app starts up, unsurprisingly called main:


int main(int argc, char *argv[])
{
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class]));
    }
}


The last parameter to this method tells UIApplication which class to create an instance of and use as its delegate – in this case, a class called AppDelegate.

So switch over to the only class in the template, AppDelegate.m, and take a look at the method that gets called when the app starts up:


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    // Override point for customization after application launch.
    self.window.backgroundColor = [UIColor whiteColor];
    [self.window makeKeyAndVisible];
    return YES;
}


This programmatically creates the main window for the app and makes it visible. And that’s it! About as “from scratch” as you can get.

Introducing GLKView


To get started with OpenGL ES 2.0, the first thing we need to do is add a subview to the window that does its drawing with OpenGL. If you’ve done OpenGL ES 2.0 programming before, you know that there was a ton of boilerplate code to get this working – things like creating a render buffer and frame buffer, etc.

But now it’s nice and easy with a new GLKit class called GLKView! Whenever you want to use OpenGL rendering inside a view, you simply add a GLKView (which is a normal subclass of UIView) and configure a few properties on it.

You can then set a class as the GLKView’s delegate, and it will call a method on that class when it needs to be drawn. In this method you can put in your OpenGL commands!

Let’s see how this works. First things first – you need to add a few frameworks to your project to use GLKit. Select your HelloGLKit project in the Project Navigator, select the HelloGLKit target, select Build Phases, Expand the Link Binary With Libraries section, and click the Plus button. From the drop-down list, select the following frameworks and click Add:

  • QuartzCore.framework
  • OpenGLES.framework
  • GLKit.framework

Adding frameworks for GLKit to your Xcode project

Switch to AppDelegate.h, and at the top of the file import the header file for GLKit as follows:


#import <GLKit/GLKit.h>


Next switch to AppDelegate.m, and modify the application:didFinishLaunchingWithOptions to add a GLKView as a subview of the main window:


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    EAGLContext * context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; // 1
    GLKView *view = [[GLKView alloc] initWithFrame:[[UIScreen mainScreen] bounds]]; // 2
    view.context = context; // 3
    view.delegate = self; // 4
    [self.window addSubview:view]; // 5

    self.window.backgroundColor = [UIColor whiteColor];
    [self.window makeKeyAndVisible];
    return YES;
}


The lines marked with comments are the new lines – so let’s go over them one by one.

1. Create a OpenGL context. To do anything with OpenGL, you need to create a EAGLContext.

An EAGLContext manages all of the information iOS needs to draw with OpenGL. It’s similar to how you need a Core Graphics context to do anything with Core Graphics.

When you create a context, you specify what version of the API you want to use. Here, you specify that you want to use OpenGL ES 2.0. If it is not available (such as if the program were run on an iPhone 3G), the app would terminate.

2. Create a GLKView. This creates a new instance of a GLKView, and makes it as large as the entire window.

3. Set the GLKView’s context. When you create a GLKView, you need to tell it the OpenGL context to use, so we specify the one we already created.

4. Set the GLKView’s delegate. This sets the current class (AppDelegate) as the GLKView’s delegate. This means whenever the view needs to be redrawn, it will call a method named glkView:drawInRect: on whatever class you specify here. We will implement this inside the App Delegate shortly to contain some basic OpenGL commands to paint the screen red.

5. Add the GLKView as a subview. This line adds the GLKView as a subview of the main window.

Since we made the App Delegate the GLKView’s delegate, we need to mark it as implementing the GLKViewDelegate protocol. So switch to AppDelegate.h and modify the @interface line as follows:


@interface AppDelegate : UIResponder <UIApplicationDelegate, GLKViewDelegate>


One step left! Switch back to AppDelegate.m, add the following code right before the @end:


#pragma mark - GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(1.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}


The first line calls glClearColor to specify the RGB and alpha (transparency) values to use when clearing the screen. We set it to red here.

The second line calls glClear to actually perform the clearing. Remember that there can be different types of buffers, such as the render/color buffer we’re displaying, and others we’re not using yet such as depth or stencil buffers. Here we use the GL_COLOR_BUFFER_BIT to specify what exactly to clear – in this case, the current render/color buffer.

That’s it! Compile and run, and with just 7 lines of code we have OpenGL rendering to the screen!

Hello, GLKit!

Those of you who are completely new to OpenGL might not be very impressed, but those of you who have done this before will be very happy with the convenience here :]

GLKView Properties and Methods


We only set a few properties of GLKView here (context and delegate), but I wanted to mention the other properties and methods on GLKView that might be useful to you in the future.

This is an optional reference section and is for informational purposes only. If you want to keep coding away, feel free to skip to the next section! :]

context and delegate

We already covered these in the previous section, so I won’t repeat here.

drawableColorFormat

Your OpenGL context has a buffer it uses to store the colors that will be displayed to the screen. You can use this property to set the color format for each pixel in the buffer.

The default value is GLKViewDrawableColorFormatRGBA8888, which means 8 bits are used for each color component (red, green, blue, and alpha), so 4 bytes per pixel. This is nice because it gives you the widest possible range of colors to work with, which often makes your app look nicer.

But if your app can get away with a lower range of colors, you might want to switch this to GLKViewDrawableColorFormatRGB565, which makes your app consume fewer resources (memory and processing time).
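Switching formats is a one-line change. A sketch, assuming `view` is the GLKView we created earlier:

```objc
// 5 bits red, 6 bits green, 5 bits blue - half the memory of RGBA8888
view.drawableColorFormat = GLKViewDrawableColorFormatRGB565;
```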

drawableDepthFormat

Your OpenGL context can also optionally have another buffer associated with it called the depth buffer. This helps make sure that objects closer to the viewer show up in front of objects farther away.

The way it works by default is that OpenGL stores the depth of the closest object drawn so far at each pixel in this buffer. When it goes to draw a pixel, it checks the depth buffer to see if it has already drawn something closer to the viewer, and if so discards the new pixel. Otherwise, it writes the new depth to the depth buffer and the new color to the color buffer.

You can set this property to choose the format of the depth buffer. The default value is GLKViewDrawableDepthFormatNone, which means that no depth buffer is enabled at all.

But if you want this feature (which you usually do for 3D games), you should choose GLKViewDrawableDepthFormat16 or GLKViewDrawableDepthFormat24. The tradeoff is that with GLKViewDrawableDepthFormat16 your app will use fewer resources, but you might have rendering issues when objects are very close to each other.
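As with the color format, enabling a depth buffer is a single property assignment – a sketch, again assuming `view` is the GLKView created earlier:

```objc
// 16-bit depth buffer: uses less memory than the 24-bit format,
// but is more prone to artifacts when objects are very close together
view.drawableDepthFormat = GLKViewDrawableDepthFormat16;
```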

drawableStencilFormat

Another optional buffer your OpenGL context can have is the stencil buffer. This helps you restrict drawing to a particular portion of the screen. It’s often useful for things like shadows – for example, you might use the stencil buffer to make sure the shadows are only cast on the floor.

The default value for this property is GLKViewDrawableStencilFormatNone, which means there is no stencil buffer, but you can enable it by setting it to the only alternative – GLKViewDrawableStencilFormat8.

drawableMultisample

The last optional buffer you can set up through a GLKView property is the multisampling buffer. If you ever try drawing lines with OpenGL and notice “jagged lines”, multisampling can help with this issue.

Basically what it does is instead of calling the fragment shader one time per pixel, it divides up the pixel into smaller units and calls the fragment shader multiple times at smaller levels of detail. It then merges the colors returned, which often results in a much smoother look around edges of geometry.

Be careful about setting this because it requires more processing time and memory for your app. The default value is GLKViewDrawableMultisampleNone, but you can enable it by setting it to the only alternative – GLKViewDrawableMultisample4X.
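Turning it on looks like this (a sketch, with `view` again standing in for the GLKView created earlier):

```objc
// Smooth jagged edges by sampling 4x per pixel,
// at the cost of extra memory and GPU time
view.drawableMultisample = GLKViewDrawableMultisample4X;
```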

drawableHeight/drawableWidth

These are read-only properties that indicate the integer height and width of your various buffers. These are based on the bounds and contentSize of the view – the buffers are automatically resized when these change.

snapshot

This is a handy way to get a UIImage of the view’s current context.
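For example, you might grab a screenshot of the current frame like this (a sketch, assuming `view` is your GLKView):

```objc
// Capture the view's current rendered contents as a UIImage,
// e.g. to save or share a screenshot
UIImage *screenshot = [view snapshot];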

bindDrawable

OpenGL has yet another buffer called a frame buffer, which is basically a collection of all the other buffers we talked about (color buffer, depth buffer, stencil buffer etc).

Before your glkView:drawInRect is called, GLKit will bind to the frame buffer it set up for you behind the scenes. But if your game needs to change to a different frame buffer to perform some other kind of rendering (for example, if you’re rendering to another texture), you can use the bindDrawable method to tell GLKit to re-bind back to the frame buffer it set up for you.

deleteDrawable

GLKView and OpenGL take a substantial amount of memory for all of these buffers. If your GLKView isn’t visible, you might find it useful to deallocate this memory temporarily until it becomes visible again. If you want to do this, just use this method!

Next time the view is drawn, GLKView will automatically re-allocate the memory behind the scenes. Quite handy, eh?

enableSetNeedsDisplay and display

I don’t want to spoil the surprise – we’ll explain these in the next section! :]

Updating the GLKView


Let’s try to update our GLKView periodically, like we would in a game. How about we make the screen pulse from red to black, kind of like a “Red Alert” effect!

Go to the top of AppDelegate.m and modify the @implementation line to add two private variables as follows:


@implementation AppDelegate {
    float _curRed;
    BOOL _increasing;
}


And initialize these in application:didFinishLaunchingWithOptions:


_increasing = YES;
_curRed = 0.0;


Then go to the glkView:drawInRect method and update it to the following:


- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    if (_increasing) {
        _curRed += 0.01;
    } else {
        _curRed -= 0.01;
    }
    if (_curRed >= 1.0) {
        _curRed = 1.0;
        _increasing = NO;
    }
    if (_curRed <= 0.0) {
        _curRed = 0.0;
        _increasing = YES;
    }

    glClearColor(_curRed, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}


Every time drawInRect is called, it updates the _curRed value a little bit based on whether it’s increasing or decreasing. Note that this code isn’t perfect, because it doesn’t take into account how long it takes between calls to drawInRect. This means that the animation might be faster or slower based on how quickly drawInRect is called. We’ll discuss a way to fix this later in the tutorial.

Compile and run and… wait a minute, nothing’s happening!

By default, the GLKView only updates itself on an as-needed basis – i.e. when views are first shown, the size changes, or the like. However for game programming, you often need to redraw every frame!

We can disable this default behavior of GLKView by setting enableSetNeedsDisplay to false. Then, we can control when the redrawing occurs by calling the display method on GLKView whenever we want to update the screen.

Ideally we would like to synchronize the time we render with OpenGL to the rate at which the screen refreshes.

Luckily, Apple provides an easy way for us to do this with CADisplayLink! It’s really easy to use so let’s just dive in. First add this import to the top of AppDelegate.m:


#import <QuartzCore/QuartzCore.h>


Then add these lines to application:didFinishLaunchingWithOptions:


view.enableSetNeedsDisplay = NO;
CADisplayLink* displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
[displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];


Then add a new render function as follows:


- (void)render:(CADisplayLink*)displayLink {
    GLKView *view = (GLKView *)[self.window.subviews objectAtIndex:0];
    [view display];
}


Compile and run, and you should now see a cool pulsating “red alert” effect!

Introducing GLKViewController


You know that code we just wrote in that last section? Well, you can just forget about it, because there’s a much easier way to do it using GLKViewController :]

The reason I showed you how to do it with plain GLKView first was so you understand the point behind using GLKViewController – it saves you from writing that code, plus adds some extra neat features that you would have had to code yourself.

So let’s try out GLKViewController. Modify your application:didFinishLaunchingWithOptions to look like this:


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

    EAGLContext * context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    GLKView *view = [[GLKView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    view.context = context;
    view.delegate = self;
    //[self.window addSubview:view];

    _increasing = YES;
    _curRed = 0.0;

    //view.enableSetNeedsDisplay = NO;
    //CADisplayLink* displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
    //[displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    GLKViewController * viewController = [[GLKViewController alloc] initWithNibName:nil bundle:nil]; // 1
    viewController.view = view;                      // 2
    viewController.delegate = self;                  // 3
    viewController.preferredFramesPerSecond = 60;    // 4
    self.window.rootViewController = viewController; // 5

    self.window.backgroundColor = [UIColor whiteColor];
    [self.window makeKeyAndVisible];
    return YES;
}


Feel free to delete the commented lines – I just commented them out so it is easy to see what’s no longer needed. There are also four new lines (marked with comments):

1. Create a GLKViewController. This creates a new instance of a GLKViewController programmatically. In this case, it has no associated XIB.

2. Set the GLKViewController’s view. The root view of a GLKViewController should be a GLKView, so we set it to the one we already created.

3. Set the GLKViewController’s delegate. We set the current class (AppDelegate) as the delegate of the GLKViewController. This means that the GLKViewController will notify us each frame so we can run game logic, or when the game pauses (a nice built-in feature of GLKViewController we’ll demonstrate later).

4. Set the preferred FPS. The GLKViewController will call your draw method a certain number of times per second. This number gives a hint to the GLKViewController how often you’d like to be called. Of course, if your game takes a long time to render frames, the actual number may be lower than this.

The default value is 30 FPS. Apple’s guideline is to set this to whatever frame rate your app can reliably support, so that the frame rate stays consistent and doesn’t seem to stutter. This app is very simple, so it can easily run at 60 FPS, and we set it to that.

Also as an FYI, if you want to see the actual number of times the OS will attempt to call your update/draw methods, check the read-only framesPerSecond property.
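For example, you could log the rate the OS actually settled on right after setting your preference (a sketch, using the `viewController` variable from the code above):

```objc
viewController.preferredFramesPerSecond = 60;
// framesPerSecond is read-only: the OS picks the nearest rate
// it can actually deliver for the current display
NSLog(@"Actual frames per second: %ld", (long)viewController.framesPerSecond);
```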

5. Set the rootViewController. We want this view controller to be the first thing that shows up, so we add it as the rootViewController of the window. Note that we no longer need to add the view as a subview of the window manually, because it’s the root view of the GLKViewController.

Notice that we no longer need the code to run the render loop and tell the GLView to refresh each frame – GLKViewController does that for us in the background! So go ahead and comment out the render method as well.

Also remember that we set the GLKViewController’s delegate to the current class (AppDelegate), so let’s mark it as implementing GLKViewControllerDelegate. Switch to AppDelegate.h and replace the @interface line with the following:


@interface AppDelegate : UIResponder <UIApplicationDelegate, GLKViewDelegate, GLKViewControllerDelegate>


The final step is to update the glkView:drawInRect method, and add the implementation for GLKViewController’s glkViewControllerUpdate callback:


#pragma mark - GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(_curRed, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}

#pragma mark - GLKViewControllerDelegate

- (void)glkViewControllerUpdate:(GLKViewController *)controller {
    if (_increasing) {
        _curRed += 1.0 * controller.timeSinceLastUpdate;
    } else {
        _curRed -= 1.0 * controller.timeSinceLastUpdate;
    }
    if (_curRed >= 1.0) {
        _curRed = 1.0;
        _increasing = NO;
    }
    if (_curRed <= 0.0) {
        _curRed = 0.0;
        _increasing = YES;
    }
}


Note that we moved the code to change the current color from the draw method (where it didn’t really belong) to the update method (intended for game/app logic).

Also notice that we changed the amount the red color increments from a hardcoded value to a calculated value, based on the amount of time since the last update. This is nice because it guarantees the animation will always proceed at the same speed, regardless of the frame rate.

This is another of those convenient things GLKViewController does for you! We didn’t have to write special code to store the time since the last update – it did it for us! There are some other time-based properties, but we’ll discuss those later.

GLKViewController and Storyboards


So far, we’ve manually created the GLKViewController and GLKView because it was a simple way to introduce you to how they work. But you probably wouldn’t want to do it this way in a real app – it’s much better to leverage the power of Storyboards, so you can include this view controller anywhere you want in your app hierarchy!

So let’s do a little refactoring to accomplish that. First, let’s create a subclass of GLKViewController that we can use to contain our app’s logic. Create a new file with the iOS\Cocoa Touch\UIViewController subclass template, name the class HelloGLKitViewController, and enter GLKViewController as the subclass (you can type this in even though it’s not in the dropdown). Make sure Targeted for iPad and With XIB for user interface are both unselected, and create the file.

Open up HelloGLKitViewController.m, and start by adding a private category on the class to contain the instance variables we need, and a new property to store the context:


@interface HelloGLKitViewController () {
    float _curRed;
    BOOL _increasing;
}

@property (strong, nonatomic) EAGLContext *context;

@end

@implementation HelloGLKitViewController
@synthesize context = _context;


Then implement viewDidLoad and viewDidUnload as the following:


- (void)viewDidLoad
{
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
}

- (void)viewDidUnload
{
    [super viewDidUnload];

    if ([EAGLContext currentContext] == self.context) {
        [EAGLContext setCurrentContext:nil];
    }
    self.context = nil;
}


In viewDidLoad, we create an OpenGL ES 2.0 context (same as we did last time in the App Delegate) and squirrel it away. Our root view is a GLKView (we know this because we set it up this way in the Storyboard editor), so we cast it as one. We then set its context to the OpenGL context we just created.

Note that we don’t have to set the view controller as the view’s delegate – GLKViewController does this automatically behind the scenes.

In viewDidUnload, we just do the opposite to clean up. We have to make sure there are no references left to our context, so we check whether the current context is ours, and set it to nil if so. We also clear out our own reference to it.

At the bottom of the file, add the implementations of the glkView:drawInRect and update callbacks, similar to before:


#pragma mark - GLKViewDelegate

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(_curRed, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}

#pragma mark - GLKViewControllerDelegate

- (void)update {
    if (_increasing) {
        _curRed += 1.0 * self.timeSinceLastUpdate;
    } else {
        _curRed -= 1.0 * self.timeSinceLastUpdate;
    }
    if (_curRed >= 1.0) {
        _curRed = 1.0;
        _increasing = NO;
    }
    if (_curRed <= 0.0) {
        _curRed = 0.0;
        _increasing = YES;
    }
}


Note that the update method is called plain “update”, because now that we’re inside the GLKViewController we can simply override this method instead of having to set a delegate. Also, timeSinceLastUpdate is accessed via “self” rather than a passed-in view controller.

With this in place, let’s create the Storyboard. Create a new file with the iOS\User Interface\Storyboard template, choose iPhone for the device family, and save it as MainStoryboard.storyboard.

Open MainStoryboard.storyboard, and from the Objects panel drag a View Controller into the grid area. Select the View Controller, and in the Identity Inspector set the class to HelloGLKitViewController:

Setting the class of the View Controller

Also, select the View inside the View Controller, and in the Identity Inspector set the Class to GLKView.

To make this Storyboard run on startup, open HelloGLKit-Info.plist, control-click in the blank area, and select Add Row. From the dropdown select Main storyboard file base name, and enter MainStoryboard.

That pretty much completes everything we need, but we still have some old code in AppDelegate that we need to clean up. Start by deleting the _curRed and _increasing instance variables from AppDelegate.m. Also delete the glkView:drawInRect and glkViewControllerUpdate methods.

And delete pretty much everything from application:didFinishLaunchingWithOptions so it looks like this:


- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
return YES;
}


And modify the AppDelegate @interface to strip out the two GLKit delegates since we aren’t using those anymore:


@interface AppDelegate : UIResponder <UIApplicationDelegate>


That’s it! Compile and run your app, and you’ll see the “red alert” effect working as usual.

At this point, you’re getting pretty close to the setup you get when you choose the OpenGL Game template with the Storyboard option set (except it has a lot of other code in there you can just delete if you don’t need it). Feel free to choose that in the future when you’re creating a new OpenGL project to save a little time – but now you know how it works from the ground up!

GLKViewController and Pausing


Now that we’re all nicely set up in a custom GLKViewController subclass, let’s play around with one of the neat features of GLKViewController – pausing!

To see how it works, just add the following to the bottom of HelloGLKitViewController.m:


- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.paused = !self.paused;
}


Compile and run the app, and now whenever you tap the animation stops! Behind the scenes, GLKViewController stops calling your update method and your draw method. This is a really handy way to implement a pause button in your game.

In addition to that, GLKViewController has a pauseOnWillResignActive property that is by default set to YES. This means when the user hits the home button or receives an interruption such as a phone call, your game will be automatically paused! Similarly, it has a resumeOnDidBecomeActive property that is by default set to YES, which means when the user comes back to your app, it will automatically unpause. Handy, that!
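If those defaults don’t suit your game (say you want to keep rendering behind your own pause menu), you can override them inside your GLKViewController subclass, e.g. in HelloGLKitViewController’s viewDidLoad – a sketch:

```objc
// Keep running when the app resigns active,
// because we present our own pause UI instead
self.pauseOnWillResignActive = NO;
// Still unpause automatically when the app becomes active again
self.resumeOnDidBecomeActive = YES;
```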

We’ve covered almost every property of GLKViewController by now, except for the extra time info properties I discussed earlier:

  • timeSinceLastDraw gives you the elapsed time since the last call to the draw method. Note this might be different than timeSinceLastUpdate, since your update method takes time! :]
  • timeSinceFirstResume gives you the elapsed time since the first time GLKViewController resumed sending updates. This often means the time since your app launched, if your GLKViewController is the first thing that shows up.
  • timeSinceLastResume gives you the elapsed time since the last time GLKViewController resumed sending updates. This often means the last time your game was unpaused.

Let’s add some code to try these out. Add the following code to the top of touchesBegan:


NSLog(@"timeSinceLastUpdate: %f", self.timeSinceLastUpdate);
NSLog(@"timeSinceLastDraw: %f", self.timeSinceLastDraw);
NSLog(@"timeSinceFirstResume: %f", self.timeSinceFirstResume);
NSLog(@"timeSinceLastResume: %f", self.timeSinceLastResume);


Play around with it so you’re familiar with how they work. As you can see, these can be pretty convenient!

Where To Go From Here?


At this point, you know the basics of getting a project set up and running with GLKView and GLKViewController, and what they can do for you.

Stay tuned for the next tutorial, where we’ll finally start drawing something to the screen with OpenGL commands and the new GLKBaseEffect class!

This “Beginning OpenGL ES 2.0 with GLKit” series is one of the chapters in our new iOS 5 By Tutorials book. If you like what you see here, check out the book – there’s an entire second chapter on Intermediate OpenGL ES 2.0 with GLKit, above and beyond what we’re posting for free here! :]

If you have any questions or comments on this tutorial or on OpenGL ES 2.0 or GLKit in general, please join the forum discussion below!