    Designing a Graphical User Interface (GUI) using OpenGL

    A brief introduction to the process of designing a Graphical User Interface (GUI) using OpenGL

    Posted by Simar Mann Singh on 08 Nov, 2018

    Introduction

    Computer graphics can be generated in many different ways. For example, we can work directly with a specification such as OpenGL, Direct3D or Vulkan, or we can use an existing game engine such as Unity3D, Frostbite, CryEngine or Unreal Engine, which provides the necessary functionality to render graphics. OpenGL is a flexible, robust and feature-rich specification that has been used extensively in the industry for many years. It was initially released in 1992.

    OpenGL is often referred to as an Application Programming Interface (API) offering a plethora of functions that can be used to render computer graphics. In reality, however, OpenGL is not an API but a specification defined by the Khronos Group, listing all the functions and documenting what each one does. It does not contain any implementation of these functions whatsoever. Every graphics hardware vendor, such as NVIDIA, Qualcomm, Intel or Broadcom, implements the functions independently, and one vendor's implementation may differ substantially from another's.

    In practice, OpenGL can still be treated as a 3D graphics API. It is an open, royalty-free standard, so any individual or company can use OpenGL to render graphics for private or commercial purposes without paying a royalty fee to the Khronos Group.

    Many versions of the OpenGL specification have been released so far; at the time of writing, the latest is OpenGL 4.6. Each graphics card supports OpenGL up to a specific version, and backward compatibility ensures that programs written against previously released versions of the specification continue to work with newer hardware.
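
    As a minimal sketch, the version actually supported by the driver can be queried at runtime roughly as follows (this assumes an OpenGL context is already current, for example one created with GLFW, and that function pointers have been loaded with a loader such as GLAD):

    #include <glad/glad.h>   // or another OpenGL function loader
    #include <cstdio>

    void printContextInfo() {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);   // available since OpenGL 3.0
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        std::printf("OpenGL %d.%d\n", major, minor);
        std::printf("Vendor:   %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
        std::printf("Renderer: %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    }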

    Support for extensions is a salient feature of the OpenGL specification. Developers can query the graphics card to check which extensions are supported and enable or disable the corresponding functionality in their code accordingly, for example:

    if (GL_ARB_extension_supported) {
        // Implement functions provided by the extension
    } else {
        // Do not implement functions provided by the extension
    }
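
    On a core profile (OpenGL 3.0 and later), such a check might be implemented roughly as sketched below; the context and loader setup is assumed, and the extension name in the usage note is only an example:

    #include <glad/glad.h>   // or another OpenGL function loader
    #include <cstring>

    bool hasExtension(const char* name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);      // number of supported extensions
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(
                glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
            if (std::strcmp(ext, name) == 0) {
                return true;
            }
        }
        return false;
    }

    // Usage:
    // if (hasExtension("GL_ARB_debug_output")) { /* use the extension */ }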

    Graphics Pipeline

    The sole objective of any graphics pipeline is to decide the color of each pixel on the screen. To accomplish that, the following sequence of steps is followed:

    1. Transform 3D to 2D

      To display 3D objects on a 2D screen, the 3D coordinates of each vertex are first transformed into 2D coordinates, which are then mapped onto the pixels of the screen (a sketch of this transformation follows the list below).

    2. Rasterization Stage

      The rasterization stage involves creating a vertex shader and a fragment shader, compiling them into a shader program, and thereafter applying advanced OpenGL techniques such as blending.

    3. Integrating everything

      The final step is to combine everything together, which results in the graphics being rendered on the screen.
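
    As a rough sketch of step 1, the widely used GLM maths library can take a vertex from 3D world space into clip space through model, view and projection matrices; the camera values below are illustrative assumptions, and the GPU subsequently performs the perspective divide and the viewport mapping onto screen pixels:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::vec4 toClipSpace(const glm::vec3& worldPos) {
        glm::mat4 model = glm::mat4(1.0f);                          // object -> world
        glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),  // camera position
                                      glm::vec3(0.0f),              // look-at target
                                      glm::vec3(0.0f, 1.0f, 0.0f)); // up direction
        glm::mat4 proj  = glm::perspective(glm::radians(45.0f),     // vertical field of view
                                           16.0f / 9.0f,            // aspect ratio
                                           0.1f, 100.0f);           // near and far planes
        return proj * view * model * glm::vec4(worldPos, 1.0f);     // clip-space position
    }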

    OpenGL Components

    The basic elements of OpenGL are:

    1. Primitives
    2. Shaders
    3. Lighting

    Primitives

    Primitives, often called prims, are the fundamental geometric objects that a computer graphics system can render on the screen. They are the ‘atomic’ graphics objects from which any complex figure or shape is built. Primitives can be either geometric primitives or texture primitives: geometric primitives consist of points, lines and polygons, whereas texture primitives include image bitmaps and texture mapping.
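
    For illustration, a minimal modern-OpenGL sketch of drawing a geometric primitive might look like the following: three points are uploaded to the GPU and rendered as a single triangle (a current context, a function loader and an already bound shader program are assumed, and error checking is omitted):

    float vertices[] = {
        -0.5f, -0.5f, 0.0f,   // bottom-left corner
         0.5f, -0.5f, 0.0f,   // bottom-right corner
         0.0f,  0.5f, 0.0f    // top corner
    };

    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // The first argument selects the primitive type: GL_TRIANGLES here,
    // GL_LINES or GL_POINTS for the other geometric primitives.
    glDrawArrays(GL_TRIANGLES, 0, 3);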

    Shaders

    Shaders are principally of two kinds: vertex shaders and fragment shaders. A vertex shader runs for every vertex of a primitive and works with attributes such as the positions of points, lines or curves, scale, width and so on; this attribute data can be hard-coded or supplied at runtime. A fragment shader decides the color of a pixel by describing the color scheme to be used, the exact color and the alpha channel (transparency), and it therefore runs for every fragment (roughly, every covered pixel) on the screen. Shaders cannot read each other's variables directly; instead, the vertex shader passes per-vertex data to the fragment shader through declared output/input variables, while the application supplies data to the shader stages through uniforms.
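
    A minimal sketch of this setup is given below: a vertex shader and a fragment shader are compiled and linked into a shader program, and a uniform is used to pass a colour from the application into the fragment shader. The shader sources and names are illustrative, and error checking is omitted:

    #include <glad/glad.h>   // or another OpenGL function loader

    const char* vertexSrc = R"(#version 330 core
        layout (location = 0) in vec3 aPos;     // per-vertex position attribute
        void main() { gl_Position = vec4(aPos, 1.0); })";

    const char* fragmentSrc = R"(#version 330 core
        uniform vec4 uColor;                    // supplied by the application
        out vec4 FragColor;
        void main() { FragColor = uColor; })";

    GLuint compileShader(GLenum type, const char* src) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);
        return shader;
    }

    GLuint buildProgram() {
        GLuint vs = compileShader(GL_VERTEX_SHADER, vertexSrc);
        GLuint fs = compileShader(GL_FRAGMENT_SHADER, fragmentSrc);
        GLuint program = glCreateProgram();
        glAttachShader(program, vs);
        glAttachShader(program, fs);
        glLinkProgram(program);
        glDeleteShader(vs);                     // no longer needed once linked
        glDeleteShader(fs);
        return program;
    }

    // Usage: activate the program and set the uniform before drawing.
    // glUseProgram(program);
    // glUniform4f(glGetUniformLocation(program, "uColor"), 1.0f, 0.5f, 0.2f, 1.0f);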

    Lighting

    Lighting is the process of describing the light sources used in a scene. A light source can be a spot light (essentially a point source of light), a surface light or an ambient light source. Lighting in a frame is characterized by the position of the light source relative to the subject, the color of the light source and the intensity with which it casts light on the subject. The final pixel color is calculated by taking all of these parameters into account: the object color described by the fragment shader, the position of the light source, its intensity and so on.
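
    As a rough sketch, a fragment shader combining a constant ambient term with simple diffuse (Lambertian) shading could look like the following; the uniform names and the lighting model are assumptions for illustration, and the shader would be compiled into a program in the same way as shown earlier:

    const char* lightingFragmentSrc = R"(#version 330 core
        in vec3 Normal;               // surface normal interpolated from the vertex shader
        in vec3 FragPos;              // fragment position in world space
        uniform vec3 lightPos;        // position of the light source
        uniform vec3 lightColor;      // colour/intensity of the light source
        uniform vec3 objectColor;     // base colour of the object
        out vec4 FragColor;
        void main() {
            vec3 ambient  = 0.1 * lightColor;               // ambient contribution
            vec3 norm     = normalize(Normal);
            vec3 lightDir = normalize(lightPos - FragPos);  // direction towards the light
            float diff    = max(dot(norm, lightDir), 0.0);  // Lambertian factor
            vec3 diffuse  = diff * lightColor;
            FragColor     = vec4((ambient + diffuse) * objectColor, 1.0);
        })";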