Wednesday, July 15, 2015

Game programming in C# using OpenGL

Every computer has special graphics hardware that controls what you see on the screen, and OpenGL tells this hardware what to do. The Open Graphics Library is one of the oldest and most popular graphics libraries available to game creators. It was developed in 1992 by Silicon Graphics Inc. (SGI) and was used for GLQuake in 1997. The GameCube, Wii, PlayStation, and iPhone all use OpenGL or OpenGL-derived APIs.

The main alternative to OpenGL is Microsoft’s DirectX. DirectX is a larger collection of libraries that also covers sound and input; OpenGL corresponds roughly to the Direct3D library within DirectX. The latest version of DirectX is DirectX 12, which shipped with Windows 10. Xbox consoles use versions of DirectX 9, 10, or 11.
OpenGL is a C-style graphics library with no classes or objects. OpenGL is basically a large collection of functions. Internally, OpenGL is a state machine. Function calls alter the internal state of OpenGL, which then affects how it behaves and how it renders objects to the screen. Because it is a state machine, it is very important to carefully note which states are being changed.
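For example, a state that you set stays in effect until you change it. The following is a minimal sketch (using the SharpGL wrapper that is introduced later in this post, and assuming an OpenGL instance named gl) of how state calls affect everything drawn after them:

// Sketch only: assumes a SharpGL OpenGL instance named "gl" (see the SharpGL section below).
// Each call changes OpenGL's internal state, and it stays changed until it is set again.
gl.Enable(OpenGL.GL_DEPTH_TEST);      // depth testing is now on for all later draws
gl.Color(1.0f, 0.0f, 0.0f);           // the "current color" state is now red
gl.Begin(OpenGL.GL_TRIANGLES);
gl.Vertex(0.0f, 1.0f, 0.0f);          // these vertices all use the current (red) color
gl.Vertex(-1.0f, -1.0f, 0.0f);
gl.Vertex(1.0f, -1.0f, 0.0f);
gl.End();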
The basic unit in OpenGL is the vertex. A vertex is a point in space. Extra information can be attached to these points, such as texture coordinates, a weight, or a color, but the most important piece of information is the position.
Games spend a lot of their time sending OpenGL vertices or telling OpenGL to move vertices in certain ways. The game may first tell OpenGL that all the vertices it’s going to send are to be made into triangles. In this case, for every three vertices OpenGL receives, it will attach them together with lines to create a polygon, and it may then fill in the surface with a texture or color.
Modern graphics hardware is very good at processing vast numbers of vertices, making polygons from them, and rendering them to the screen. This process of going from vertex to screen is called the pipeline. The pipeline is responsible for positioning and lighting the vertices, as well as the projection transformation, which takes the 3D data and transforms it to 2D data so that it can be displayed on your screen. Sprites are 2D bitmaps that are drawn directly to a render target without using the pipeline for transformations, lighting, or effects. Sprites are commonly used to display information such as health bars, number of lives, or text such as scores. Some games, especially older games, are composed entirely of sprites.
Even modern 2D games are made using vertices. A 2D sprite is made up of two triangles forming a square, often referred to as a quad. The quad is given a texture and becomes a sprite. Two-dimensional games use a special projection transformation that ignores the depth information in the vertices, as it is not required for a 2D game.
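As a rough illustration, here is a sketch (using the SharpGL calls shown later in this post, and assuming an OpenGL instance named gl with a texture already bound) of a sprite quad built from two triangles that share an edge:

// Sketch only: draw a unit quad as two triangles, the way a 2D sprite is usually built.
gl.Begin(OpenGL.GL_TRIANGLES);
// First triangle: bottom-left, bottom-right, top-right.
gl.TexCoord(0.0f, 0.0f); gl.Vertex(0.0f, 0.0f, 0.0f);
gl.TexCoord(1.0f, 0.0f); gl.Vertex(1.0f, 0.0f, 0.0f);
gl.TexCoord(1.0f, 1.0f); gl.Vertex(1.0f, 1.0f, 0.0f);
// Second triangle: bottom-left, top-right, top-left (shares an edge with the first).
gl.TexCoord(0.0f, 0.0f); gl.Vertex(0.0f, 0.0f, 0.0f);
gl.TexCoord(1.0f, 1.0f); gl.Vertex(1.0f, 1.0f, 0.0f);
gl.TexCoord(0.0f, 1.0f); gl.Vertex(0.0f, 1.0f, 0.0f);
gl.End();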
The pipeline has become programmable: programs can be uploaded to the graphics card. The pipeline consists of a number of stages, and these stages run in parallel; many vertices can pass through a given stage at the same time, provided the hardware supports it. This is what makes graphics cards so much faster than the CPU for this kind of work. The stages are:
  1. Input stage. The CPU sends instructions (compiled shading-language programs) and geometry data (all the vertices’ properties, including position, color, and texture data) to the graphics processing unit, located on the graphics card.
  2. Vertex shading. Shaders are simple programs that describe the traits of either a vertex or a pixel. Vertex shaders describe the traits (position, texture coordinates, colors, etc.) of a vertex, while pixel shaders describe the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each vertex in a primitive (possibly after tessellation); thus one vertex in, one (updated) vertex out. Each vertex is then rendered as a series of pixels onto a surface (block of memory) that will eventually be sent to the screen.
  3. Geometry shading. Geometry shaders were added more recently than pixel and vertex shaders. A geometry shader takes a whole primitive (such as a strip of lines, points, or triangles) as input and runs once per primitive. Geometry shaders can emit new vertices, points, lines, and even whole primitives. They are used to create point sprites and dynamic tessellation, among other effects. Point sprites are a way of quickly rendering a lot of sprites; the technique is often used for particle systems to create effects like fire and smoke. Dynamic tessellation is a way to add more polygons to a piece of geometry; this can be used to increase the smoothness of low-polygon game models or to add more detail as the camera zooms in on the model.
  4. Tessellation shading: If a tessellation shader is present on the graphics processing unit and active, the geometry in the scene can be subdivided.
  5. Primitive setup: This is the process of creating the polygons from the vertex information. This involves connecting the vertices together according to the OpenGL states. Games most commonly use triangles or triangle strips as their primitives.
  6. Rasterization: This is the process of converting the polygons to pixels (also called fragments).
  7. Pixel shading: Pixel shaders are applied to each pixel that is sent to the frame buffer. Pixel shaders are used to create bump-mapping effects, specular highlights, and per-pixel lighting. Bump mapping is a method of giving a surface extra height information. Bump maps usually consist of an image in which each pixel represents a normal vector describing how the surface is to be perturbed; the pixel shader can use a bump map to give a model a more interesting texture that appears to have depth. Per-pixel lighting is a replacement for OpenGL’s default lighting equations, which work out the light for each vertex and then give that vertex an appropriate color; per-pixel lighting instead uses a more accurate model in which each pixel is lit independently. Specular highlights are areas on the model that are very shiny and reflect a lot of light.
  8. Frame buffer. The frame buffer is a piece of memory that holds what will be displayed on the screen for this particular frame. Blend settings decide how newly written pixels are blended with what has already been drawn; a minimal example of setting these blend states is sketched below.
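The blend settings mentioned in the last step are simply more OpenGL state. A minimal sketch (again using SharpGL calls and assuming an OpenGL instance named gl) of enabling standard alpha blending before drawing transparent geometry:

// Sketch only: standard alpha blending, so new pixels are mixed with what is
// already in the frame buffer according to their alpha value.
gl.Enable(OpenGL.GL_BLEND);
gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA);
// ... draw transparent geometry here ...
gl.Disable(OpenGL.GL_BLEND);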
OpenGL ES
OpenGL ES is a modern version of OpenGL for embedded systems. It is quite similar to recent versions of OpenGL but with a more restricted set of features. It is used on mobile devices such as Android phones, BlackBerry devices, and the iPhone, and it is also used in military hardware for heads-up displays on things like warplanes. OpenGL ES supports the programmable pipeline and has support for shaders.
WebGL
WebGL 2 is based on OpenGL ES 3.0 and provides an API for 3D graphics in the browser. It uses the HTML5 canvas element and is accessed through Document Object Model interfaces. Automatic memory management is provided as part of the JavaScript language. Early applications of WebGL include Zygote Body. In November 2012, Autodesk announced that it had ported most of its applications to the cloud, running on local WebGL clients; these applications included Fusion 360 and AutoCAD 360. WebGL is partially supported in IE 11. VRML (Virtual Reality Modeling Language) was an earlier, HTML-like text format for describing 3D scenes on the web; it had some academic popularity but never gained traction with general users.
SharpGL
SharpGL is a C# library that lets you use OpenGL in your .NET Framework based applications. SharpGL for WPF includes the core library as well as OpenGL controls for use in your WPF application. To get started:

1. Create a new WPF application in Visual Studio, then go to Tools > NuGet Package Manager > Package Manager Console and run:

PM> Install-Package SharpGL.WPF

2. Update MainWindow.xaml with the following to add the SharpGL namespace and place an OpenGL control in your grid:

<Window x:Class="WpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WpfApplication1"
        xmlns:sharpGL="clr-namespace:SharpGL.WPF;assembly=SharpGL.WPF"
        mc:Ignorable="d"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <sharpGL:OpenGLControl Name="openGLControl" OpenGLDraw="OpenGLControl_OpenGLDraw" OpenGLInitialized="OpenGLControl_OpenGLInitialized"
                               DrawFPS="True" Resized="OpenGLControl_Resized" />
    </Grid>
</Window>




3. Enable the OpenGL depth-testing functionality:

private void OpenGLControl_OpenGLInitialized(object sender, EventArgs args)
{
    // Get the OpenGL instance and enable depth testing.
    OpenGL gl = openGLControl.OpenGL;
    gl.Enable(OpenGL.GL_DEPTH_TEST);
}


4. Model-View Matrix: The first step is to “move” to the centre of the 3D scene. In OpenGL, when you’re drawing a scene, you tell it to draw at a “current” position with a “current” rotation, for example “move 21 units forward, rotate 45 degrees, then draw the cube”. The current position and current rotation are both held in a matrix. Matrices can represent translations (moves from place to place), rotations, and other geometrical transformations. You can use a single 4×4 matrix to represent any number of transformations in 3D space: you start with the identity matrix (the matrix that represents a transformation that does nothing at all), then multiply it by the matrix that represents your first transformation, then by the one that represents your second transformation, and so on. The combined matrix represents all of your transformations in one. The matrix used to hold this current move/rotate state is called the model-view matrix.
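As a conceptual sketch only (this is not part of the SharpGL sample; it uses System.Numerics.Matrix4x4 purely to illustrate the idea, and the class and Main are just there to make it a complete snippet), combining a translation and a rotation into a single matrix looks like this:

using System;
using System.Numerics;   // Matrix4x4, for illustration only; SharpGL manages its own matrices

class MatrixSketch
{
    static void Main()
    {
        // Start with the identity matrix, then multiply in each transformation.
        Matrix4x4 translate = Matrix4x4.CreateTranslation(-1.5f, 0.0f, -6.0f);
        Matrix4x4 rotate = Matrix4x4.CreateRotationY((float)(Math.PI / 4));   // 45 degrees
        Matrix4x4 modelView = Matrix4x4.Identity * rotate * translate;

        // modelView now represents both transformations in one matrix; the
        // multiplication order determines which transformation is applied first.
        Console.WriteLine(modelView);
    }
}

In the SharpGL code below, gl.LoadIdentity, gl.Translate, and gl.Rotate perform the same kind of sequence on OpenGL’s own model-view matrix.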

private void OpenGLControl_OpenGLDraw(object sender, EventArgs args)
{
    // Get the OpenGL instance that's been passed to us.
    OpenGL gl = openGLControl.OpenGL;

    // Clear the color and depth buffers, which hold the pixels and depth
    // values rendered in the previous frame.
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);

    // Reset the modelview matrix.
    gl.LoadIdentity();

    // Move the geometry into a fairly central position.
    gl.Translate(-1.5f, 0.0f, -6.0f);

    // Draw a pyramid. First, rotate the modelview matrix.
    gl.Rotate(rotatePyramid, 0.0f, 1.0f, 0.0f);

    // Start drawing triangles.
    gl.Begin(OpenGL.GL_TRIANGLES);
    gl.Color(1.0f, 0.0f, 0.0f);
    gl.Vertex(0.0f, 1.0f, 0.0f);
    // ... the remaining vertices appear in the full listing in step 6 ...
    gl.End();
}



5. Perspective: we set up the perspective with which we want to view the scene. By default, OpenGL draws things that are close at the same size as things that are far away (a style of 3D known as orthographic projection).
To make things that are further away look smaller, we need to tell it a little about the perspective we’re using. For this scene, we set the (vertical) field of view to 45°, pass in the width-to-height ratio of our window, and specify that we don’t want to see things that are closer than 0.1 units to our viewpoint or further away than 100 units.


private void OpenGLControl_Resized(object sender, EventArgs args)
{
    // Get the OpenGL instance.
    OpenGL gl = openGLControl.OpenGL;

    // Load and clear the projection matrix.
    gl.MatrixMode(OpenGL.GL_PROJECTION);
    gl.LoadIdentity();

    // Set a perspective projection using the aspect ratio of the window.
    gl.Perspective(45.0f, (float)gl.RenderContextProvider.Width / (float)gl.RenderContextProvider.Height,
        0.1f, 100.0f);

    // Switch back to the modelview matrix.
    gl.MatrixMode(OpenGL.GL_MODELVIEW);
}
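For a purely 2D game, as mentioned earlier, the perspective call could be replaced by an orthographic projection. A minimal sketch of what the resize handler might look like in that case (same SharpGL control, gl.Ortho instead of gl.Perspective):

private void OpenGLControl_Resized(object sender, EventArgs args)
{
    OpenGL gl = openGLControl.OpenGL;
    gl.MatrixMode(OpenGL.GL_PROJECTION);
    gl.LoadIdentity();
    // Orthographic projection: no foreshortening, so window coordinates map
    // directly onto the scene, which suits sprites and other 2D drawing.
    gl.Ortho(0.0, gl.RenderContextProvider.Width, gl.RenderContextProvider.Height, 0.0, -1.0, 1.0);
    gl.MatrixMode(OpenGL.GL_MODELVIEW);
}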





6. Update MainWindow.xaml.cs with the following:


using System;
using System.Windows;
using SharpGL;
namespace WpfApplication1
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
    
            InitializeComponent();
        }
        private void OpenGLControl_OpenGLDraw(object sender, EventArgs args)
        {
            //  Get the OpenGL instance that's been passed to us.
            OpenGL gl = openGLControl.OpenGL;
            //  Clear the color and depth buffers.
            gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
            //  Reset the modelview matrix.
            gl.LoadIdentity();
            //  Move the geometry into a fairly central position.
            gl.Translate(-1.5f, 0.0f, -6.0f);
            //  Draw a pyramid. First, rotate the modelview matrix.
            gl.Rotate(rotatePyramid, 0.0f, 1.0f, 0.0f);
            //  Start drawing triangles.
            gl.Begin(OpenGL.GL_TRIANGLES);
            gl.Color(1.0f, 0.0f, 0.0f);
            gl.Vertex(0.0f, 1.0f, 0.0f);
            gl.Color(0.0f, 1.0f, 0.0f);
            gl.Vertex(-1.0f, -1.0f, 1.0f);
            gl.Color(0.0f, 0.0f, 1.0f);
            gl.Vertex(1.0f, -1.0f, 1.0f);
            gl.Color(1.0f, 0.0f, 0.0f);
            gl.Vertex(0.0f, 1.0f, 0.0f);
            gl.Color(0.0f, 0.0f, 1.0f);
            gl.Vertex(1.0f, -1.0f, 1.0f);
            gl.Color(0.0f, 1.0f, 0.0f);
            gl.Vertex(1.0f, -1.0f, -1.0f);
            gl.Color(1.0f, 0.0f, 0.0f);
            gl.Vertex(0.0f, 1.0f, 0.0f);
            gl.Color(0.0f, 1.0f, 0.0f);
            gl.Vertex(1.0f, -1.0f, -1.0f);
            gl.Color(0.0f, 0.0f, 1.0f);
            gl.Vertex(-1.0f, -1.0f, -1.0f);
            gl.Color(1.0f, 0.0f, 0.0f);
            gl.Vertex(0.0f, 1.0f, 0.0f);
            gl.Color(0.0f, 0.0f, 1.0f);
            gl.Vertex(-1.0f, -1.0f, -1.0f);
            gl.Color(0.0f, 1.0f, 0.0f);
            gl.Vertex(-1.0f, -1.0f, 1.0f);
            gl.End();
            //  Reset the modelview.
            gl.LoadIdentity();
            //  Move into a more central position.
            gl.Translate(1.5f, 0.0f, -7.0f);
            //  Rotate the cube.
            gl.Rotate(rquad, 1.0f, 1.0f, 1.0f);
            //  Provide the cube colors and geometry.
            gl.Begin(OpenGL.GL_QUADS);
            gl.Color(0.0f, 1.0f, 0.0f);
            gl.Vertex(1.0f, 1.0f, -1.0f);
            gl.Vertex(-1.0f, 1.0f, -1.0f);
            gl.Vertex(-1.0f, 1.0f, 1.0f);
            gl.Vertex(1.0f, 1.0f, 1.0f);
            gl.Color(1.0f, 0.5f, 0.0f);
            gl.Vertex(1.0f, -1.0f, 1.0f);
            gl.Vertex(-1.0f, -1.0f, 1.0f);
            gl.Vertex(-1.0f, -1.0f, -1.0f);
            gl.Vertex(1.0f, -1.0f, -1.0f);
            gl.Color(1.0f, 0.0f, 0.0f);
            gl.Vertex(1.0f, 1.0f, 1.0f);
            gl.Vertex(-1.0f, 1.0f, 1.0f);
            gl.Vertex(-1.0f, -1.0f, 1.0f);
            gl.Vertex(1.0f, -1.0f, 1.0f);
            gl.Color(1.0f, 1.0f, 0.0f);
            gl.Vertex(1.0f, -1.0f, -1.0f);
            gl.Vertex(-1.0f, -1.0f, -1.0f);
            gl.Vertex(-1.0f, 1.0f, -1.0f);
            gl.Vertex(1.0f, 1.0f, -1.0f);
            gl.Color(0.0f, 0.0f, 1.0f);
            gl.Vertex(-1.0f, 1.0f, 1.0f);
            gl.Vertex(-1.0f, 1.0f, -1.0f);
            gl.Vertex(-1.0f, -1.0f, -1.0f);
            gl.Vertex(-1.0f, -1.0f, 1.0f);
            gl.Color(1.0f, 0.0f, 1.0f);
            gl.Vertex(1.0f, 1.0f, -1.0f);
            gl.Vertex(1.0f, 1.0f, 1.0f);
            gl.Vertex(1.0f, -1.0f, 1.0f);
            gl.Vertex(1.0f, -1.0f, -1.0f);
            gl.End();
            //  Flush OpenGL.
            gl.Flush();
            //  Rotate the geometry a bit.
            rotatePyramid += 3.0f;
            rquad -= 3.0f;
        }
        float rotatePyramid = 0;
        float rquad = 0;
        private void OpenGLControl_OpenGLInitialized(object sender, EventArgs args)
        {
            //  Enable the OpenGL depth testing functionality.
            OpenGL gl = openGLControl.OpenGL;
            gl.Enable(OpenGL.GL_DEPTH_TEST);
        }
        private void OpenGLControl_Resized(object sender, EventArgs args)
        {
            //  Get the OpenGL instance.
            OpenGL gl =  openGLControl.OpenGL;
            //  Load and clear the projection matrix.
            gl.MatrixMode(OpenGL.GL_PROJECTION);
            gl.LoadIdentity();
            // Calculate The Aspect Ratio Of The Window
            gl.Perspective(45.0f, (float)gl.RenderContextProvider.Width / (float)gl.RenderContextProvider.Height,
                0.1f, 100.0f);
            //  Load the modelview.
            gl.MatrixMode(OpenGL.GL_MODELVIEW);
        }
    }
}



OpenTK

The Open Toolkit (OpenTK) is an advanced, low-level C# library that wraps OpenGL, OpenCL, and OpenAL. It is suitable for games, scientific applications, and any other project that requires 3D graphics, audio, or compute functionality. SharpGL has a WPF control, while OpenTK only has a Windows Forms control, which means that in a WPF application you have to embed it in a WindowsFormsHost; a sketch of this is shown after the example below. On the other hand, OpenTK powers MonoTouch (iOS) and MonoDroid (Android). Here is an example of how to write a simple program in C# using the OpenTK library.

using System;
using System.Drawing;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL;
using OpenTK.Input;

namespace Examples.Tutorial
{
    class TestOpenTK
    {
        [STAThread]
        public static void Main()
        {
            using (var game = new GameWindow())
            {
                game.Load += (sender, e) =>
                {
                    // Set up settings, load textures, sounds.
                    game.VSync = VSyncMode.On;
                };
                game.Resize += (sender, e) =>
                {
                    GL.Viewport(0, 0, game.Width, game.Height);
                };
                game.UpdateFrame += (sender, e) =>
                {
                    // Add game logic, input handling.
                    if (game.Keyboard[Key.Escape])
                    {
                        game.Exit();
                    }
                };
                game.RenderFrame += (sender, e) =>
                {
                    // Render graphics.
                    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
                    GL.MatrixMode(MatrixMode.Projection);
                    GL.LoadIdentity();
                    GL.Begin(PrimitiveType.Quads);
                    GL.Color3(Color.MidnightBlue);
                    GL.Vertex3(0.0f, 1.0f, 0.0f);
                    GL.Color3(Color.SpringGreen);
                    GL.Vertex3(-1.0f, 0.0f, 0.0f);
                    GL.Color3(Color.Ivory);
                    GL.Vertex3(0.5f, 0.0f, 0.0f);
                    GL.Color3(Color.Beige);
                    GL.Vertex3(0.5f, 0.5f, 0.0f);
                    GL.End();
                    game.SwapBuffers();
                };
                // Run the game at 60 updates per second.
                game.Run(60.0);
            }
        }
    }
}
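As noted above, OpenTK’s control is a Windows Forms control. Here is a minimal sketch (assumptions: the OpenTK GLControl assembly and WindowsFormsIntegration are referenced, and “mainGrid” is a hypothetical Grid defined in your WPF window) of hosting it inside a WPF application:

// Sketch only: host OpenTK's Windows Forms GLControl inside a WPF window.
var glControl = new OpenTK.GLControl();
glControl.Paint += (sender, e) =>
{
    glControl.MakeCurrent();
    OpenTK.Graphics.OpenGL.GL.Clear(OpenTK.Graphics.OpenGL.ClearBufferMask.ColorBufferBit);
    // ... draw here ...
    glControl.SwapBuffers();
};
var host = new System.Windows.Forms.Integration.WindowsFormsHost { Child = glControl };
mainGrid.Children.Add(host);   // "mainGrid" is a hypothetical Grid from the XAML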





Geometric Primitive Types

  1. Points
    Specifies one point per vertex, so this is usually only used with GL.DrawArrays().
    n points require n vertices.
  2. Lines
    Two vertices form a line.
    n lines require 2n vertices.
  3. LineStrip
    The first vertex issued begins the line strip; every consecutive vertex marks a joint in the line.
    n line segments in a strip require n+1 vertices.
  4. LineLoop
    Same as LineStrip, but the very first and the last issued vertices are automatically connected by an extra line segment.
    n line segments in a loop require n vertices.
  5. Polygon
    Note that the first and the last vertex will be connected automatically, just like LineLoop.
    A polygon with n edges requires n vertices.
    Note: This primitive type should be avoided whenever possible, because the polygon will be split into triangles in the end anyway. Like quads, polygons must be planar or they will be displayed incorrectly. Another problem is that there can only be a single polygon in a begin-end block, which leads to multiple draw calls when drawing a mesh, or to using the extensions GL.MultiDrawElements or GL.MultiDrawArrays.
  6. Quads
    Quads are especially useful for working in 2D with bitmap images, since those are typically rectangular as well. Care has to be taken that the surface is planar, otherwise the split into triangles will become visible.
    n quads require 4n vertices.
  7. QuadStrip
    Like the triangle strip, the quad strip is a more compact representation of a sequence of connected quads.
    n quads in a strip require 2n+2 vertices.
  8. Triangles
    This way of representing a mesh offers the most control over how the triangles are ordered; a triangle always consists of 3 vertices.
    n triangles require 3n vertices.
    Note: It might look like an inefficient brute-force approach at first, but it has its advantages over TriangleStrip. Most of all, since you are not required to supply triangles in sequenced strips, it is possible to arrange triangles in a way that makes good use of the vertex caches. If the triangle you currently want to draw shares an edge with one of the triangles that has recently been drawn, you get two vertices, stored in the vertex cache, almost for free. This is basically the same as what stripification does, but you are not restricted to a certain direction and are not forced to insert degenerate triangles.
  9. TriangleStrip
    The idea behind this way of drawing is that if you want to represent a solid, closed object, most neighbouring triangles will share two vertices (an edge). You start by defining the initial triangle (3 vertices), and after that every new triangle requires only a single new vertex.
    n triangles in a strip require n+2 vertices.
    Note: While this primitive type is very useful for storing huge meshes (n+2 vertices per strip, as opposed to 3n for BeginMode.Triangles), the big disadvantage of TriangleStrip is that there is no command to tell OpenGL that you wish to start a new strip while inside the glBegin/glEnd block. Of course you can call glEnd() and start a new strip, but that costs API calls. A workaround to avoid exiting the begin/end block is to create two or more degenerate triangles (you can imagine them as lines) at the end of a strip and then start the next one, but this comes at the cost of processing triangles that will inevitably be culled and aren't visible. Especially when optimizing an object into a vertex-cache-friendly layout, it is essential to start new strips in order to reuse vertices from previous draws.
  10. TriangleFan
    A fan is defined by a centre vertex, which is reused for all triangles in the fan, followed by border vertices. It is very useful for representing convex n-gons consisting of more than 4 vertices and disc shapes, such as the caps of a cylinder; a sketch of a fan follows this list.
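As a small illustration of the last primitive type, here is a sketch (assuming the RenderFrame context of the OpenTK example above) that draws a disc as a triangle fan by issuing the centre vertex first and then walking around the circumference:

// Sketch only: a disc of radius 0.5 drawn as a triangle fan.
GL.Begin(PrimitiveType.TriangleFan);
GL.Vertex3(0.0, 0.0, 0.0);                      // centre vertex, shared by every triangle in the fan
const int segments = 32;
for (int i = 0; i <= segments; i++)             // <= so the fan closes on itself
{
    double angle = 2.0 * Math.PI * i / segments;
    GL.Vertex3(0.5 * Math.Cos(angle), 0.5 * Math.Sin(angle), 0.0);
}
GL.End();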