Computer Graphics





    





  1. 1. Graphics System Basics & Models  Book: Chapter 1 [Edward Angel, Interactive Computer Graphics]


  1. 2. Computer Graphics  Computer graphics: the use of computers to generate images  Concerned with all aspects of producing pictures or images using a computer.

  1. 3. Applications of Computer Graphics  Can be roughly divided into four major areas  Display of Information  Design  Simulation and Animation  User Interfaces
  1. 4. 1. Display of Information  Classic graphics techniques have long served as a medium for conveying information, alongside written and spoken language  Historical era:  The Babylonians displayed floor plans on stones  The Greeks displayed their architectural plans  Today, graphical representations are generated by architects and designers using computers.
  1. 5. Display of Information (Contd…)  Statisticians  Use CG to display plots/graphs of a data set  Extract information from these plots  Very useful for extracting info from large datasets  Medical Imaging  Graphics used in Computed Tomography (CT)  Magnetic Resonance Imaging (MRI)  Ultrasound  Data Visualization  Understanding data by placing it in a visual context
  1. 6. 2. Design  Many fields are concerned with design (engineering & architecture)  Given a set of specifications, a cost-effective and aesthetic design is sought using computer graphics  Starting about 40 years ago, Computer-Aided Design (CAD) today pervades many fields.
  1. 7. 3. Simulation and Animation  Simulation is the imitation of the operation of a real-world process or system over time  Flight simulators: train pilots  Safety and cost reduction  Architectural designs are tested under many weather conditions  Animation: the illusion of motion  Became popular after successful simulations  Artistic effects are achieved  Complete movies are made using CG  Photo-realistic images  Virtual Reality: replicates an environment that simulates physical presence in real or imagined worlds and lets the user interact with that world.
  1. 8. 3. Simulation & Animation (Virtual Reality)
  1. 9. 4. User Interfaces  Interaction with computers has increased  Desktops, tablets, smartphones  Use of the GUI has overtaken the CLI  Microsoft Windows, Mac OS, Linux  Android, iOS  Internet usage has increased  Webpages and applications are all graphical  Resources are accessed through graphical browsers  Interaction with UIs is so frequent that we have almost forgotten we are working with computer graphics.
  1. 10. A Graphics System  General view of a graphics system  Generally contains, 1. Input Devices 2. CPU 3. GPU (Graphics Processing Unit) 4. Memory 5. Frame Buffer 6. Output Devices
  1. 11. Pixels, Frame Buffer & Basic Terms  Pixel: short for picture element  The smallest addressable element of a display device  Basic unit of a digital image  Each pixel corresponds to a location or small area in the image  Raster: array of picture elements or pixels  Images seen on displays are rasters produced by the graphics system  A raster is a grid of x and y coordinates on a display space.
  1. 12. Pixels, Frame Buffer & Basic Terms (Contd…)  Frame Buffer: portion of memory where pixels are stored  Core element of a graphics system  Contains the bitmap that is driven to the video display  Resolution: number of pixels in the frame buffer  Resolution determines the detail that can be seen in the image  The higher the resolution, the sharper the image  Depth / Precision: number of bits used per pixel to determine its properties, such as color  1-bit-deep frame buffer: allows only two colors  8-bit-deep frame buffer: allows 2^8 (256) colors  16-bit (high color): 2^16 colors  24-bit (true color): 2^24 colors
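The resolution and depth figures above determine how much memory a frame buffer needs. A minimal sketch (the helper name `framebuffer_bytes` is ours, not from the slides):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed by a frame buffer: one bits_per_pixel slot per pixel."""
    total_bits = width * height * bits_per_pixel
    return total_bits // 8  # 8 bits per byte

# A 1920x1080 true-color (24-bit-deep) frame buffer:
print(framebuffer_bytes(1920, 1080, 24))  # 6220800 bytes, about 5.9 MiB
# Colors available at 8-bit and 24-bit depth:
print(2 ** 8, 2 ** 24)  # 256 16777216
```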
  1. 13. Pixels, Frame Buffer & Basic Terms (Contd…)  In simple systems, the frame buffer holds only the colored pixels that are displayed  In most systems, the frame buffer holds far more information, such as the depth information needed for creating images from 3D data  In these systems, the frame buffer comprises multiple buffers, one or more of which are color buffers that hold the colored pixels that are displayed  The terms frame buffer and color buffer can be used synonymously.
  1. 14. CPU and GPU  In a simple system there may be only one CPU  In early systems, the frame buffer was part of standard memory  The CPU is responsible for both normal and graphical processing  The main graphical processing of the CPU is to  Take graphical primitives from the application program  Like lines, polygons, circles  Assign values to the pixels in the frame buffer that best represent those entities  Rasterization: conversion of geometric entities to pixel colors and locations in the frame buffer (a.k.a. scan conversion).
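Scan conversion of a line can be illustrated with a simple DDA-style rasterizer; this is a teaching sketch, not the algorithm real hardware uses:

```python
def rasterize_line(x0, y0, x1, y1):
    """DDA-style scan conversion: step along the longer axis in unit
    increments and round the other coordinate to the nearest pixel."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    dx = (x1 - x0) / steps
    dy = (y1 - y0) / steps
    # int(v + 0.5) rounds halves up consistently (unlike Python's round()).
    return [(int(x0 + i * dx + 0.5), int(y0 + i * dy + 0.5))
            for i in range(steps + 1)]

print(rasterize_line(0, 0, 4, 2))  # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

A production rasterizer would use an integer-only method such as Bresenham's algorithm, but the idea is the same: a geometric primitive becomes a set of pixel locations.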
  1. 15. CPU & GPU (Contd…)  Today, all graphics systems are characterized by a special-purpose graphics processing unit (GPU)  GPU: a processing unit custom-tailored to carry out specific graphics functions  The GPU can be either on the same system board or on a separate graphics card  The frame buffer usually resides on the same board as the GPU  The frame buffer is accessed through the GPU
  1. 16. Output Devices  Cathode Ray Tube (CRT)  The most dominant type of display (until recently)  Basic operation: when an electron beam strikes the phosphor coating, light is emitted  Deflection plates: control the direction of the beam  Computer output is converted from digital (bits) to analog (voltage) by converters across the x and y deflection plates  When a sufficiently intense beam of electrons is directed at the phosphor, light appears on the CRT surface
  1. 17. Output Devices (Contd…)  Refresh rate: number of times per second the device retraces the same path/image  A CRT emits light for a short time (a few milliseconds)  To see a flicker-free image, the same image must be retraced  Old systems: refresh rate = frequency of the power system  60 Hz in the US and 50 Hz in much of the rest of the world  Raster systems (fundamental ways of displaying pixels)  Non-interlaced: pixels are displayed row by row at the refresh rate  Interlaced: odd and even rows are refreshed alternately, so each full frame is redrawn at 30 Hz on a 60 Hz display.
  1. 18. Output Devices (Contd…)  Colored CRTs  Phosphors of three different colors (red, green, blue)  Phosphors arranged in small groups  Phosphors in triangular groups are called triads  Have three electron beams  Shadow mask: a metal sheet with small holes  Used to ensure the excitation of the proper color phosphor.
  1. 19. Output Devices (Contd…)  Flat-panel technology  Flat panels are inherently raster based  The most used flat panels are LCD, LED and plasma  A generic flat-panel display has  Two outside plates, each containing a parallel grid of wires, oriented perpendicular to each other  A middle plate containing a different material depending on the technology.
  1. 20. Output Devices (Contd…)  Flat-panel display  By sending an electrical signal to the proper wire on each plate  An electric field is produced at the point of intersection of the two wires  The electric field is used to control the corresponding element on the middle plate  The electric field produced can be used,  in LED, to turn the corresponding diode on or off  in LCD, to control the polarization of liquid crystals to pass light  in plasma, to energize gases so they glow or not.
  1. 21. Input Devices  Input devices: devices used for input purposes  Common input devices: keyboard, mouse  Other input devices include the joystick, trackball and spaceball  Input devices (perspectives)  Physical device  Logical device (application / programmer perspective)  Their properties are specified in terms of “what they do” from the application perspective  For example: cout in C++ outputs a string; the output device could be a printer, a display/terminal or a disk file  Even the cout output could be input for another program.
  1. 22. Input Devices (Physical)  Two primary types of input devices  Keyboard devices  Pointing devices  Keyboard devices generally include physical keyboards or devices that return character codes  ASCII code is used to represent characters  ASCII assigns a single unsigned byte to each character  Internet applications use multiple bytes to represent each character  Mouse & trackball:  A mechanical mouse and a trackball work on the same principle  Motion of the ball is converted to signals by encoders.
  1. 23. Input Devices (Physical)  Signals from encoders might be interpreted as position (not necessarily)  The driver/program can interpret the signals as two velocities  The computer can integrate the velocities to obtain a position  When the ball rotates the position changes, otherwise not  In this mode, positioning is relative  Motion-sensing devices are known as relative positioning devices
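The velocity-integration idea behind relative positioning can be sketched as follows (the function name and the fixed time step are our assumptions, not from the slides):

```python
def integrate_motion(position, velocity_samples, dt=1.0):
    """Relative positioning: the driver reports velocities; the computer
    integrates them over time to recover a position."""
    x, y = position
    for vx, vy in velocity_samples:
        x += vx * dt
        y += vy * dt
    return (x, y)

# A ball at rest reports (0, 0): the position does not change.
print(integrate_motion((10, 10), [(0, 0), (0, 0)]))  # (10, 10)
# Two motion samples move the cursor relative to where it was.
print(integrate_motion((10, 10), [(3, 0), (2, 1)]))  # (15, 11)
```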
  1. 24. Input Devices (Physical)  Data tablets:  Absolute positioning  Position is determined using electromagnetic interactions between signals traveling through the wires and sensors in the stylus  Position-sensing devices are known as absolute positioning devices  READING: spaceball and joystick
  1. 25. Input Devices (Logical)  Logical input devices:  Addressing of physical input devices as abstract data types  ADT: a data type defined by its behavior from the user’s point of view  Two major characteristics describe the logical behavior of an input device: 1. The measurements that the device returns to the user program 2. The time when the device returns those measurements.
  1. 26. Input Devices (Logical)  Logical input devices:  String: returns a string of characters from a keyboard, file, etc.  Locator: returns a position (in x, y coordinates)  Pick: returns a segment name & pick identifier of the object pointed at by the user  Choice: represents a choice from a selection of several possibilities  Valuator: returns a real/analogue value, for example, to control some sort of analogue device  Stroke: a series of locations (tablets/touch inputs)
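The physical/logical split can be sketched as an abstract locator type: the application only sees "returns an (x, y) position", no matter which physical device produces it. The `Locator` class and the stand-in driver callbacks below are hypothetical:

```python
class Locator:
    """Logical locator device: defined purely by its behavior of
    returning an (x, y) position, regardless of the physical device."""
    def __init__(self, read_position):
        self._read_position = read_position  # physical-device callback
    def measure(self):
        return self._read_position()

# Two different physical devices behind the same logical interface.
mouse = Locator(lambda: (120, 45))    # stand-in for a mouse driver
tablet = Locator(lambda: (300, 200))  # stand-in for a data-tablet driver
print(mouse.measure(), tablet.measure())  # (120, 45) (300, 200)
```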
  1. 27. Input Modes  Input is provided in terms of two entities  Measure: the data returned from input devices  Ex: the data stream from a keyboard OR the location of the pointer from a mouse  Trigger: a physical input to signal the computer  Ex: pressing the Return (Enter) key / Esc key OR clicking a mouse button  The measure of a device can be obtained in three distinct modes  Each mode is defined by the relationship between measure & trigger.
  1. 28. Request Mode – Input Modes  The measure of the device is not returned to the program until the device is triggered  Ex: cin / scanf in the C++/C languages (input statements)  The program waits for the trigger when an input statement is encountered  This can take as long as the user wants  The measure is only returned upon trigger (e.g., upon pressing Enter/Return)
  1. 29. Sample Mode (Input Modes)  Sample mode: input is immediate; the measure is returned as soon as the function is encountered in the application  Position the device or enter data before the function call  The program retrieves the measure immediately from the buffer/file/location  For example, your application can obtain the location of the screen cursor at any point in time through the use of sample-mode input.
  1. 30. Event Mode – Input Modes  Case: multiple input devices, each with its own measure & trigger  For example: flight simulators with multiple inputs  Event mode:  The application program & devices work independently of each other  Each time a listed device is triggered, its measure + identifier is stored in an event queue  The application program retrieves input from the event queue whenever required  Event mode (callback approach):  Associate a function call (callback) with specific events  The OS queries the event queue and calls the associated function  An efficient approach in client-server scenarios.
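The queue and callback mechanics described above can be sketched like this (the names and the queue layout are our choices):

```python
from collections import deque

# Each triggered device appends (device_id, measure) to a shared queue;
# the application drains the queue whenever convenient.
event_queue = deque()
callbacks = {}  # callback variant: device id -> handler function

def trigger(device_id, measure):
    event_queue.append((device_id, measure))

def dispatch():
    """Drain the queue, invoking a callback when one is registered."""
    handled = []
    while event_queue:
        device_id, measure = event_queue.popleft()
        if device_id in callbacks:
            handled.append(callbacks[device_id](measure))
    return handled

callbacks["mouse"] = lambda pos: f"click at {pos}"
trigger("mouse", (40, 60))
trigger("mouse", (41, 61))
print(dispatch())  # ['click at (40, 60)', 'click at (41, 61)']
```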
  1. 31. Images: Physical & Synthetic  Elements of Image Formation  Basic entities of image formation  Objects  Viewers  Light
  1. 32. Object(s)  The object exists in space independent of any image-formation process and of any viewer  Computer graphics deals with synthetic objects  Objects are formed by specifying the positions of geometric primitives (basic shapes) in space, like triangles, polygons etc.  Mostly, a set of spatial positions (vertices) is used to define objects  For example: a line can be defined by two vertices  A triangle can be defined by three vertices.
  1. 33. Viewer(s)  To form an image, there must be someone or something viewing our objects, such as a human or a camera  It is the viewer that forms the image of our objects  Human visual system: the image is formed at the back of the eye  Camera: the image is formed in the film plane  Objects are usually seen from different perspectives.
  1. 34. Light  Be it physical or synthetic, images are incomplete without light  No light = dark objects = no image formation  Light is electromagnetic radiation  The electromagnetic spectrum includes radio, infrared and the visible spectrum  Visible spectrum: 350 – 780 nm  Around 520 nm: green  Near 450 nm: blue  Near 650 nm: red  Apart from recognizing which wavelength corresponds to which color, CG does not otherwise deal with the physics of light
  1. 35. Light Spectrum
  1. 36. Imaging Systems  Physical imaging systems help us understand imaging in computers  Pin-hole camera  Human visual system  Pin-hole: to understand the basic working principles of a camera  The human visual system is complex but obeys the physical principles of other imaging systems
  1. 37. Pin-hole Camera  A pinhole camera is a box with a small hole in the center of one side of the box  The film is on the side opposite the pinhole  The hole is so small that only a single ray of light can enter (assumption)  For example: for a point (x, y, z) in the scene, with the pinhole at the origin and the film plane at z = -d, similar triangles give  y_p = -y d / z  x_p = -x d / z (in top view)  (x_p, y_p, -d) is called the projection of (x, y, z)
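The similar-triangles projection can be checked numerically; a minimal sketch, assuming the pinhole sits at the origin and the film plane lies at z = -d:

```python
def pinhole_project(point, d):
    """Project a scene point through a pinhole at the origin onto the
    film plane z = -d (similar triangles: x_p = -d*x/z, y_p = -d*y/z)."""
    x, y, z = point
    return (-d * x / z, -d * y / z, -d)

# A point 10 units in front of the pinhole, film 2 units behind it.
print(pinhole_project((1.0, 2.0, 10.0), 2.0))  # (-0.2, -0.4, -2.0)
```

Note the sign flip in x_p and y_p: the image on the film is inverted, exactly as the next slides describe.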
  1. 38. Pin-hole Camera  Color: in the idealized model, the color of the image is the same as in the scene  Field/Angle of View: the extent of the observable world that is seen at any given moment  If h is the height of the camera (film) and d the distance to the pinhole, basic trigonometry gives the angle of view as θ = 2 tan⁻¹(h / 2d).
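The angle-of-view trigonometry, θ = 2 tan⁻¹(h / 2d), can be checked with a quick numeric example:

```python
import math

def angle_of_view(h, d):
    """Field of view of a pinhole camera with film height h at distance d:
    theta = 2 * atan(h / (2 * d))."""
    return 2 * math.atan(h / (2 * d))

# Film exactly twice as tall as its distance from the pinhole:
theta = angle_of_view(2.0, 1.0)
print(math.degrees(theta))  # 90.0
```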
  1. 39. Pin-hole Camera  Depth of field: every object in the angle of view is in focus, i.e., appears sharp  In an ideal pinhole camera the depth of field is infinite (assumed)  Disadvantages of the pinhole camera  It admits only a single ray – almost no light  The camera cannot be adjusted to have a different angle of view  By replacing the hole with a lens, these problems can be eliminated  With a proper lens, more light can be admitted (larger aperture)
  1. 40. Human Visual System  The human visual system is extremely complex  Light enters through the cornea and lens  The iris opens/shuts to adjust the amount of light  The image is formed at the retina (back of the eye)  Cells (rods and cones) are the sensors  They excite/respond when light (350-780 nm) enters the eye  Rods: 1 type; low-light sensors; night vision, not color sensitive  Cones: 3 types; responsible for color vision  Resolution of the visual system  Resolution: a measure of what size objects we can see  Technically: a measure of how close we can place two points while still perceiving them as distinct
  1. 41. Human Visual System  Brightness: an overall measure of how we react to the intensity of light  The HVS reacts differently to different wavelengths of light  The HVS is most sensitive to green and less sensitive to red & blue  The HVS reacts to three colors rather than the whole visible spectrum, due to the three types of cones  These colors are called primary colors  The primary colors are red, green and blue.
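The eye's uneven sensitivity is why common brightness formulas weight green most heavily. A sketch using the ITU-R BT.601 luma weights (the slides do not mention these specific coefficients; they are one standard choice):

```python
def luma(r, g, b):
    """Perceived-brightness estimate using the ITU-R BT.601 weights,
    which reflect the eye's greater sensitivity to green."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green looks brighter than equally intense pure blue or red.
print(luma(0, 255, 0) > luma(255, 0, 0) > luma(0, 0, 255))  # True
```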
  1. 42. Synthetic Camera Model  The paradigm of emulating image formation by an optical system in the computer is known as the Synthetic Camera Model  Basic principles:  As the object & viewer are independent of each other, the CG API should have separate functions for their specification  Images can be computed using simple geometric calculations, as in the pinhole camera.
  1. 43. Synthetic Camera Model  In a pinhole camera the image formed is flipped  In the computer, the image is kept upright by moving the image plane in front of the center of projection (virtual image plane)  A line called a projector is drawn from the center of lens/projection (COP) to the object/point  All projectors pass through the COP  This virtual image plane is called the projection plane
  1. 44. Synthetic Camera Model  There is always a limitation to the size of the image; in an optical system the field of view expresses this limitation  The Synthetic Camera Model places a window/rectangle in the projection plane to cope with the limitation  This window/rectangle through which a viewer at the COP sees the world is called the clipping window/rectangle  Given the following  Location of the COP  Orientation of the projection plane  Size of the clipping window  We can determine which objects will appear in the image
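The COP / projection-plane / clipping-window test can be sketched as one function; the parameter layout and window ordering are our assumptions, and the plane is assumed perpendicular to the z axis for simplicity:

```python
def visible_in_window(point, cop, d, window):
    """With the COP at `cop`, a projection plane at distance d, and a
    clipping rectangle (left, right, bottom, top) in that plane, decide
    whether the projection of `point` lands inside the window."""
    x, y, z = (p - c for p, c in zip(point, cop))
    if z == 0:
        return False  # point in the plane of the COP: no projection
    xp, yp = -d * x / z, -d * y / z  # pinhole-style projection
    left, right, bottom, top = window
    return left <= xp <= right and bottom <= yp <= top

print(visible_in_window((1, 2, 10), (0, 0, 0), 2, (-1, 1, -1, 1)))   # True
print(visible_in_window((10, 0, 10), (0, 0, 0), 2, (-1, 1, -1, 1)))  # False
```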
  1. 45. Programmer’s Interface  Users interact with the graphics system in different ways, e.g., using CAD modules or paint programs  Programmers interact with the GS using a graphics library (API)  Graphics API: the interface between the programmer and the graphics system, specified through a set of functions  Programmers don’t see hardware-related details  Software drivers interpret the output of the API into the form understood by the specific hardware.
  1. 46. Three Dimensional API  As per the synthetic camera model, a 3D API must provide functions to specify  Objects  Viewer  Light sources  Material properties  Objects are specified using vertices  Objects are usually specified using geometric primitives like lines, polygons, triangles etc.  Complex objects may involve multiple ways of specification.
  1. 47. Three Dimensional API  The camera can be defined in a variety of ways  APIs differ in camera selection and methods  Four types of specification for the camera  Position: camera location (COP)  Orientation: rotation of the camera about its axes  Focal length: size of the image / angle of view  Film plane: height & width  These specifications can be satisfied in various ways  The most used way is coordinate-system transformation  Transformations convert object positions from one coordinate system to another.
  1. 48. Three Dimensional API  Light sources  Light sources are specified by their  location, strength, color and direction  These properties are specified for each light source used  Material properties:  Characteristics or attributes of the objects  These attributes are specified when the objects are defined  Both light and material properties depend upon the light-material model the API supports.
  1. 49. The Modeling–Rendering Paradigm  Model: mathematical/geometrical description of shapes  Rendering: process of generating images from models  Modeling can be separated from rendering  Helpful in generating complex images  The file/data produced by the modeler is used by the renderer  This file could be a simple one, containing the scene information in a specific format
  1. 50. The Modeling–Rendering Paradigm  Different hardware and software at the two blocks  The modeler, as well as the renderer, is customizable  Use a different modeler with the same renderer  Use the same modeler with a different renderer  The most popular approach nowadays  Models, lights, cameras etc. are placed in a special data structure called a scene graph  The scene graph is then passed to a renderer or game engine.
  1. 51. Graphics Architectures  Early graphics systems: a general-purpose computer of von Neumann architecture  Single-processor system (single instruction execution)  Calligraphic CRT display  Generates a line segment by connecting two points  The host ran the application, computed the endpoints and sent them to the CRT  Information needed to be sent at high speed (to avoid flicker)  Refreshing was so demanding that displaying even a small image would burden an expensive computer.
  1. 52. Display Processors  The earliest special-purpose graphics systems  Introduced to offload the continuous refreshing of the display from the host  The display processor included instructions to display primitives on the CRT  The host generates the image (as instructions)  and sends it to the display processor  The display processor stores the program in its own memory (as a display file / display list)  The display processor runs the program iteratively
  1. 53. Pipeline Architecture  A process/operation is divided into several sequential stages  Ex: a + (b ∗ c) can be split into a multiplier stage and an adder stage  Pipelining increases the throughput of the computer.
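The a + (b ∗ c) example can be mimicked with two chained generator stages, a multiplier feeding an adder, so each stage handles a different datum of a stream. This is only a conceptual sketch of the dataflow, not real hardware pipelining:

```python
def multiplier(stream):
    """Stage 1: compute b*c for each incoming (a, b, c) tuple."""
    for a, b, c in stream:
        yield a, b * c

def adder(stream):
    """Stage 2: add a to the product coming out of stage 1."""
    for a, bc in stream:
        yield a + bc

data = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
print(list(adder(multiplier(data))))  # [7, 34, 79]
```

Once the pipeline is full, one result emerges per step, which is where the throughput gain of a hardware pipeline comes from.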
  1. 54. Graphics Pipeline  Objects (sets of primitives) are defined by vertices  Complex objects may contain millions of vertices  To make the imaging process fast we use a pipeline  The graphics pipeline consists of  Vertex Processing  Clipping and Primitive Assembly  Rasterization  Fragment Processing
  1. 55. 1. Vertex Processing  Two major functions of this block  Coordinate transformation  Compute color of each vertex
  1. 56. Clipping and Primitive Assembly  Vertices are assembled into primitives (shapes)  No camera can see the whole world  Clipping must be done  A clipping window/volume is considered  Clipping is done primitive by primitive  Output: the set of primitives that can appear in the image.
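Primitive-by-primitive clipping can be sketched as a simple accept/reject pass; real clippers also compute new vertices where edges cross the window boundary, which this sketch deliberately omits:

```python
def inside(v, window):
    """Is the 2D vertex v inside the (left, right, bottom, top) window?"""
    left, right, bottom, top = window
    x, y = v
    return left <= x <= right and bottom <= y <= top

def clip_primitives(primitives, window):
    """Primitive-by-primitive pass: keep a primitive if any of its
    vertices lies in the clipping window; reject it otherwise."""
    return [p for p in primitives if any(inside(v, window) for v in p)]

tris = [[(0, 0), (1, 0), (0, 1)],   # touches the unit window
        [(5, 5), (6, 5), (5, 6)]]   # entirely outside it
print(clip_primitives(tris, (0, 1, 0, 1)))  # only the first triangle survives
```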