API
API, an abbreviation of
application programming interface, is a set of routines, protocols, and tools for
building software applications. A good API makes it easier to develop a program
by providing all the building blocks. A programmer then puts the blocks
together.
Most operating
environments, such as MS-Windows, provide an API so that programmers can write
applications consistent with the operating environment. Although APIs are
designed for programmers, they are ultimately good for users because they
guarantee that all programs using a common API will have similar interfaces.
This makes it easier for users to learn new programs.
Direct3D
An API for manipulating
and displaying three-dimensional objects. Developed by Microsoft, Direct3D
provides programmers with a way to develop 3-D programs that can utilize
whatever graphics acceleration device is installed in the machine. Virtually all
3-D accelerator cards for PCs support Direct3D.
OpenGL
A 3-D graphics language
developed by Silicon Graphics. There are two main implementations: Microsoft OpenGL, developed by Microsoft
and Cosmo OpenGL, developed by Silicon
Graphics. Microsoft OpenGL is built into
Windows NT and is designed to improve performance on hardware that supports the
OpenGL standard. Cosmo OpenGL, on the other hand, is a software-only
implementation specifically designed for machines that do not have a graphics
accelerator.
This is an overview of the graphics pipeline in OpenGL:
Graphics
Pipeline
In 3D computer graphics,
the terms graphics pipeline and rendering pipeline most commonly refer to the way
in which the 3D mathematical information contained within objects and scenes
is converted into images and video. The graphics pipeline typically accepts
some representation of a three-dimensional primitive as input and produces a
2D raster image as output. OpenGL and Direct3D are two notable 3D graphics
standards, both describing very similar graphics
pipelines.
Stages of the graphics
pipeline
Per-vertex lighting and
shading
Geometry in the complete
3D scene is lit according to the defined locations of light sources,
reflectance, and other surface properties. Some (mostly older) hardware
implementations of the graphics pipeline compute lighting only at the vertices
of the polygons being rendered. The lighting values between vertices are then
interpolated during rasterization. Per-fragment or per-pixel lighting, as well
as other effects, can be done on modern graphics hardware as a
post-rasterization process by means of a shader program. Modern graphics
hardware also supports per-vertex shading through the use of vertex
shaders.
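The per-vertex scheme described above can be sketched with a simple diffuse (Lambertian) lighting term; the function names and the single directional light here are illustrative assumptions, not part of any particular API:

```python
def normalize(v):
    """Scale a vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir, intensity=1.0):
    """Diffuse term: light intensity scaled by the cosine of the angle
    between the surface normal and the direction toward the light."""
    return intensity * max(0.0, dot(normalize(normal), normalize(light_dir)))

# Lighting computed only at the vertices...
v0 = lambert((0, 0, 1), (0, 0, 1))   # normal faces the light -> 1.0
v1 = lambert((1, 0, 1), (0, 0, 1))   # normal tilted 45 degrees -> ~0.707

# ...then values between vertices are interpolated during rasterization
# (Gouraud shading), here at the midpoint of an edge:
midpoint = 0.5 * v0 + 0.5 * v1
```

Per-fragment lighting would instead evaluate a function like `lambert` once per pixel, using an interpolated normal, which is what a modern shader program does after rasterization.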
Clipping
Geometric primitives that
now fall completely outside of the viewing frustum will not be visible and are
discarded at this stage.
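A minimal sketch of this trivial-rejection test, assuming vertices in clip space where a point (x, y, z, w) lies inside the frustum when -w <= x, y, z <= w (the convention OpenGL uses); the helper names are illustrative:

```python
def outside_plane(vertex, axis, sign):
    """True if the vertex lies beyond one frustum plane,
    e.g. x > w (right plane) or -x > w (left plane)."""
    return sign * vertex[axis] > vertex[3]

def fully_outside(primitive):
    """A primitive is discarded when all of its vertices lie
    outside the same frustum plane."""
    for axis in range(3):        # x, y, z
        for sign in (1, -1):     # the + and - plane on each axis
            if all(outside_plane(v, axis, sign) for v in primitive):
                return True
    return False

triangle_visible = [(0.0, 0.0, 0.5, 1.0), (0.5, 0.2, 0.5, 1.0), (-0.3, 0.1, 0.5, 1.0)]
triangle_culled  = [(2.0, 0.0, 0.5, 1.0), (3.0, 0.2, 0.5, 1.0), (2.5, 0.1, 0.5, 1.0)]
```

Primitives that straddle a frustum boundary are not simply discarded; they are clipped against the planes, producing new vertices, which this sketch omits.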
Projection
Transformation
In the case of a
perspective projection, objects which are distant from the camera are made
smaller. This is achieved by dividing the X and Y coordinates of each vertex of
each primitive by its Z coordinate (which represents its distance from the
camera). In an orthographic projection, objects retain their original size
regardless of distance from the camera.
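The divide-by-Z step above can be sketched directly; the `focal_length` parameter is an illustrative assumption standing in for the full projection matrix a real pipeline would use:

```python
def perspective_project(vertex, focal_length=1.0):
    """Perspective: divide X and Y by Z, so distant points shrink."""
    x, y, z = vertex
    return (focal_length * x / z, focal_length * y / z)

def orthographic_project(vertex):
    """Orthographic: simply drop Z, leaving apparent size unchanged."""
    x, y, z = vertex
    return (x, y)

# The same point moved twice as far from the camera projects half as large:
near = perspective_project((2.0, 1.0, 2.0))   # -> (1.0, 0.5)
far  = perspective_project((2.0, 1.0, 4.0))   # -> (0.5, 0.25)
```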
Viewport
Transformation
The post-clip vertices are
transformed once again to be in window space. In practice, this transform is
very simple: applying a scale (multiplying by the window size) and a
bias (adding the offset of the screen origin). At this point, the vertices
have coordinates which directly relate to pixels in a
raster.
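The scale-and-bias described above can be sketched as follows, assuming post-clip coordinates are normalized device coordinates in [-1, 1] (the OpenGL convention); window size and origin are parameters:

```python
def viewport_transform(ndc_x, ndc_y, width, height, origin_x=0, origin_y=0):
    """Scale normalized device coordinates by half the window size
    and bias by the window origin to get pixel coordinates."""
    px = origin_x + (ndc_x + 1.0) * 0.5 * width
    py = origin_y + (ndc_y + 1.0) * 0.5 * height
    return (px, py)

center = viewport_transform(0.0, 0.0, 800, 600)    # -> (400.0, 300.0)
corner = viewport_transform(-1.0, -1.0, 800, 600)  # -> (0.0, 0.0)
```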
Scan Conversion or
Rasterisation
Rasterisation is the
process by which the 2D image space representation of the scene is converted
into raster format and the correct resulting pixel values are determined. From
now on, operations will be carried out on each single pixel. This stage is
rather complex, involving multiple steps often referred to as a group under the
name of pixel pipeline.
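One common way to perform this conversion is to test every pixel in a triangle's bounding box with edge functions; the sketch below assumes counter-clockwise winding and samples at pixel centers, and is only one of several rasterisation strategies:

```python
def edge(a, b, p):
    """Signed area test: positive if p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2):
    """Yield integer pixel coordinates whose centers fall inside
    the triangle v0-v1-v2 (counter-clockwise winding assumed)."""
    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if (edge(v0, v1, p) >= 0 and
                    edge(v1, v2, p) >= 0 and
                    edge(v2, v0, p) >= 0):
                yield (x, y)

pixels = list(rasterize((0, 0), (4, 0), (0, 4)))
```

The same edge functions also yield the barycentric weights used to interpolate per-vertex values (colors, texture coordinates) across the triangle, which is what the subsequent pixel-pipeline steps consume.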
Texturing, Fragment
Shading
At this stage of the
pipeline individual fragments (or pre-pixels) are assigned a color based on
values interpolated from the vertices during rasterization, from a texture in
memory, or from a shader program.
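The texture-lookup case can be sketched with nearest-neighbor sampling; the 2x2 `texture` array here is an illustrative stand-in for an image in memory, and the interpolated (u, v) coordinates would come from the rasterizer:

```python
texture = [
    [(255, 0, 0), (0, 255, 0)],    # row 0: red, green
    [(0, 0, 255), (255, 255, 0)],  # row 1: blue, yellow
]

def sample_nearest(tex, u, v):
    """Map texture coordinates u, v in [0, 1] to the nearest texel."""
    height, width = len(tex), len(tex[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return tex[y][x]

# A fragment whose interpolated UVs land in the upper-right of the texture:
fragment_color = sample_nearest(texture, 0.9, 0.1)  # -> (0, 255, 0), green
```

Real hardware typically uses bilinear or trilinear filtering rather than nearest-neighbor lookup, blending adjacent texels for smoother results.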
Display
The final colored pixels
can then be displayed on a computer monitor or other
display.