ST_INTERSECT of each element of two tables (polygons, lines)

I'm new to PostGIS and trying to solve the following problem:

I have two tables: one containing polygons, the other containing lines.

I need to find the IDs of all polygons lying on each line and write them into new columns. So for each intersection there should be one new column in the lines table holding the ID of the intersecting polygon. There will never be more than three polygons on a line.

This is the code I already have, but it only gives me the first polygon that intersects each line:

SELECT a.osm_id, b.osm_id
FROM lines a
LEFT JOIN polygons b ON ST_Intersects(b.way, a.way);

Working on the assumption that the polygons only ever sit at the ends of lines, your query should return each polygon from your diagram, but not in the format you have indicated. This makes me think that you have a small gap between your polygon and the end of the line. Note that ST_Intersects is an exact test with no distance tolerance, so even a tiny gap counts as no intersection.
Rather than using ST_Intersects alone to determine the relationship, you could use ST_Distance. Another option is to buffer the line (or its endpoints) slightly using ST_Buffer. The buffer is probably the best option performance-wise.

Here's an example of a query that returns the results in the format you indicated:

SELECT Line_ID, Intersected_Polygons_id_1, Intersected_Polygons_id_2
FROM Lines l
LEFT OUTER JOIN Polygons p1
  ON ST_Intersects(p1.Geom, ST_Buffer(ST_StartPoint(l.Geom), 0.05))
  -- or: ST_Distance(p1.Geom, ST_StartPoint(l.Geom)) <= 0.05
LEFT OUTER JOIN Polygons p2
  ON ST_Intersects(p2.Geom, ST_Buffer(ST_EndPoint(l.Geom), 0.05));
  -- or: ST_Distance(p2.Geom, ST_EndPoint(l.Geom)) <= 0.05

I have also included the ST_Distance versions of the join clauses as comments.

Edit: As per your comments, polygons may also exist along the line. There are a number of ways of writing this query, but unless you know the maximum number of intersections that can occur, producing output rows in the format you specified will be difficult. Here are a couple of options:

-- One row per intersection
SELECT line_id, polygon_id
FROM Lines l
LEFT OUTER JOIN Polygons p
  ON ST_Intersects(p.Geom, ST_Buffer(l.Geom, 0.05));

-- One row per line, with an array of intersecting polygon ids
SELECT line_id, array_agg(polygon_id)
FROM Lines l
LEFT OUTER JOIN Polygons p
  ON ST_Intersects(p.Geom, ST_Buffer(l.Geom, 0.05))
GROUP BY line_id;

-- Crosstab query
SELECT line_id,
       MAX(CASE WHEN r = 1 THEN polygon_id END) AS polygon_id_1,
       MAX(CASE WHEN r = 2 THEN polygon_id END) AS polygon_id_2,
       MAX(CASE WHEN r = 3 THEN polygon_id END) AS polygon_id_3,
       MAX(CASE WHEN r = 4 THEN polygon_id END) AS polygon_id_4,
       MAX(CASE WHEN r = 5 THEN polygon_id END) AS polygon_id_5
       -- ...
FROM (
  SELECT line_id, polygon_id,
         ROW_NUMBER() OVER (PARTITION BY line_id ORDER BY polygon_id) AS r
  FROM Lines l
  LEFT OUTER JOIN Polygons p
    ON ST_Intersects(p.Geom, ST_Buffer(l.Geom, 0.05))
) a
GROUP BY line_id;

Graph theory developed a topological and mathematical representation of the nature and structure of transportation networks. However, graph theory can be expanded for the analysis of real-world and complex transport networks by encoding them in an information system. In the process, a digital representation of the network is created, which can then be used for a variety of purposes, such as managing deliveries or planning the construction of transport infrastructure. This digital representation is highly complex since transportation data is often multi-modal, can span several local, national, and international jurisdictions and has different logical views depending on the particular user. Besides, while transport infrastructures are relatively stable components, vehicles are very dynamic elements.

It is thus becoming increasingly relevant to use a data model where a transportation network can be encoded, stored, retrieved, modified, analyzed and displayed. Obviously, Geographic Information Systems (GIS) are among the best tools to create, store and use network data models, which are an implicit part of many GIS. There are four basic application areas of network data models:

  • Topology. The core purpose of a network data model is to provide an accurate representation of a network as a set of links and nodes. Topology is the arrangement of nodes and links in a network and their relationships. Of particular relevance are the representations of location, direction, and connectivity, since different features can share a point, such as a street intersection connected to several lines. Even if graph theory aims at the abstraction of transportation networks, the topology of a network data model should be as close as possible to the real-world structure it represents. This is especially true for the usage of network data models in a GIS.
  • Cartography (annotations). Allows the visualization of a transport network for the purpose of reckoning and simple navigation, and serves to indicate the existence of a network. Different elements of the network can have a symbolism defined by some of their attributes. For instance, a highway link may be symbolized as a thick line with a label such as its number, while a street may be symbolized as an unlabeled simple line. The symbolized network can also be combined with other features, such as landmarks, to provide a better level of orientation to the user. This is commonly the case for road maps used by the general public.
  • Geocoding. Transportation network models can be used to derive a precise location, notably through a linear referencing system. For instance, the great majority of addresses are defined according to a number and a street. If address information is embedded in the attributes of a network data model, it becomes possible to use this network for geocoding and pinpoint the location of an address, or any location along the network, with reasonable accuracy.
  • Routing and assignment. Network data models may be used to find optimal paths and assign flows with capacity constraints in a network.
While routing is concerned with the specific behavior of a limited number of vehicles, traffic assignment is mainly concerned with the system-wide behavior of traffic in a transport network. This requires a topology in which the relationship of each link with other intersecting segments is explicitly specified. Impedance measures (e.g. distance) are also attributed to each link and will have an impact on the chosen path or on how flows are assigned in the network. Routing and traffic assignment at the continental level is generally simple, since small variations in impedance have limited consequences. Routing and traffic assignment in an urban area is much more complex, as it must consider stop signs, traffic lights, and congestion in determining the impedance of a route.
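As a sketch of how impedance determines a chosen path, the following example runs Dijkstra's algorithm, a standard least-impedance routing method, over a small link-node network. The node names and impedance values are invented for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a link-node topology.

    graph: dict mapping node -> list of (neighbor, impedance) links.
    Returns (total impedance, [node, ...]) for the cheapest path.
    """
    # Priority queue of (accumulated impedance, node, path so far)
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, impedance in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + impedance, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical network: impedances might reflect distance plus delays
network = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
    "D": [],
}
cost, route = shortest_path(network, "A", "D")
print(cost, route)  # 4.0 ['A', 'B', 'C', 'D']
```

Raising the impedance of link B-C (say, to model congestion) would shift the chosen route, which is exactly how urban assignment differs from the continental case described above.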

From B. Davis, GIS: A Visual Approach, ©1996. Reproduced by permission of Onword Press, Santa Fe, NM.

Many organizations have developed software packages for GIS analysis. The most widely used are ARC/INFO (Unix and Windows NT platforms) and ArcView (a smaller PC desktop version), marketed by the Environmental Systems Research Institute (ESRI) of Redlands, California, a company founded in 1969 by Jack and Laura Dangermond. Shane Murnion of Queens University in Belfast created a tutorial for using ARC/INFO. Another program, called GRASS, was developed by the Army Corps of Engineers; although they no longer support it, you can learn about this tool at Baylor University's GRASSLINKS.

The following flow diagram outlines a general system design for procedures and steps in a typical GIS data handling routine:

The objective in this operation is to assemble a data base that contains all the ingredients to manipulate, through models and other decision-making procedures, into a series of outputs for the problem-solving effort.

15-12: In the diagram above, one of the entries at the data input end is satellite imagery. Make a list of some satellite-derived observations that lead to information relevant to GIS analysis. `ANSWER <Sect15_answers.html#15-12>`__

Behind this table, on which we mount a map, are closely spaced electrical wires arranged in a grid that we can reference as x-y coordinates. The operator (here, Bill Campbell, Chief, NASA GSFC Code 935, who authored the chapter on GIS in the Landsat Tutorial Workbook) places a mobile puck with centered crosshairs over each point on the map and clicks a button to enter its position, along with a numerical code that records its attribute(s), into a computer database. He then moves to the next point, repeats the process, and enters a tie command.

Any map consists of points, lines, and polygons that locate spots or enclose patterns (information fields) that describe particular attributes or theme categories. Consider this situation that refers to several fields (e.g., different types of vegetation cover) separated by linear boundaries:

15-13: In the above diagram, by visual inspection, decide which of the two geocoding methods seems to be more accurate. `ANSWER <Sect15_answers.html#15-13>`__

From B.Davis, GIS: A Visual Approach, ©1996. Reproduced by permission of Onword Press, Santa Fe, NM.

Each point has a unique coordinate value. Two end or node points define a line. We specify a polygon by a series of connecting nodes (any two adjacent lines share a node), which must close (in this writer's experience with digitizing, the main pitfall is that some polygons don't close, and we must repeat or repair the process). We then identify each polygon by a proper code label (numerical or alphabetical). A look-up table associates the code characters with the attributes they represent. In this way, we can enter all the map fields that are large enough to be conveniently circumscribed, and their category values, into a digital database.
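The bookkeeping just described (closed node chains, code labels, and a look-up table) can be sketched in a few lines; the coordinates and attribute codes below are invented for illustration:

```python
def is_closed(nodes):
    # A digitized polygon is valid only if its node chain closes on itself
    return len(nodes) >= 4 and nodes[0] == nodes[-1]

def polygon_area(nodes):
    # Shoelace formula over the closed node chain
    area = 0.0
    for (x1, y1), (x2, y2) in zip(nodes, nodes[1:]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Look-up table associating code labels with the attributes they represent
lookup = {1: "deciduous forest", 2: "cropland"}

field = {"code": 2, "nodes": [(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)]}
assert is_closed(field["nodes"])  # catches the "polygon doesn't close" pitfall
print(lookup[field["code"]], polygon_area(field["nodes"]))  # cropland 12.0
```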

In the raster approach, we manually overlay or scan onto the map a grid array of cells of some specific size. As shown in the right panel (grid format, above), an irregular polygon then includes a number of cells completely contained within it. The system records these cells' locations within the grid and a relevant code number for each data element assigned to them. But some cells straddle field boundaries. A preset rule assigns each such cell to one or the other of the adjacent fields, usually according to the proportion of the cell each field occupies. The array of cells that comprise a field only approximates the field shape, but for most purposes the inaccuracy is tolerable for making calculations.

Generally, grid cells are larger than the enclosed pixels in pictorial map displays, but the cluster of pixels within a polygon approximates the shape of the field. The relation of cells to pixels makes this raster format well adapted to digital manipulation. The size of a cell depends partly on the internal variability of the represented feature or property. Smaller cells increase accuracy but also require more data storage. Note that multiple data layers referenced to the same grid cell share this spatial dimensionality but have different coded values for the various attributes associated with any given cell.
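As a minimal sketch of the raster approach, the following example assigns each cell by testing whether its center falls inside a hypothetical field boundary (a cheap stand-in for the proportional rule described above):

```python
def rasterize(field_contains, ncols, nrows, cell_size):
    """Assign each grid cell to the field or to its surroundings.

    As a simple proxy for the proportional rule, a cell is assigned
    to the field when its center point lies inside the field boundary.
    """
    grid = []
    for row in range(nrows):
        grid_row = []
        for col in range(ncols):
            cx = (col + 0.5) * cell_size
            cy = (row + 0.5) * cell_size
            grid_row.append(1 if field_contains(cx, cy) else 0)
        grid.append(grid_row)
    return grid

# Hypothetical field: the region below the slanted boundary x + y = 4
def inside(x, y):
    return x + y < 4.0

grid = rasterize(inside, 4, 4, 1.0)
for row in grid:
    print(row)
# The staircase of 1s only approximates the straight boundary; smaller
# cells would track it more closely at the cost of more storage.
```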

15-14: From a data handling viewpoint, particularly involving computer manipulations, which method of geocoding, vector or raster, should be easier to process? `ANSWER <Sect15_answers.html#15-14>`__

Data management is sensitive to storage retrieval methods and to file structures. A good management software package should be able to:

Scale and rotate coordinate values for “best fit” projection overlays and changes.

Convert (interchange) between polygon and grid formats.

Permit rapid updating, allowing data changes with relative ease.

Allow for multiple users and multiple interactions between compatible data bases.

Retrieve, transform, and combine data elements efficiently.

Search, identify, and route a variety of different data items, and score them with assigned weights, to facilitate proximity and routing analysis.

Perform statistical analysis, such as multivariate regression, correlations, etc.

Overlay one file variable onto another, i.e., map superpositioning.

Measure area, distance, and association between points and fields.

Model and simulate, and formulate predictive scenarios, in a fashion that allows for direct interactions between the user group and the computer program.
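Several of the capabilities above, notably map superpositioning and weighted scoring, reduce to cell-by-cell arithmetic on co-registered grids. A minimal sketch, with invented layer values and weights:

```python
def weighted_overlay(layers, weights):
    """Combine co-registered raster layers cell by cell with weights.

    layers: list of equally sized 2-D grids of scores.
    weights: one weight per layer.
    """
    nrows, ncols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * ncols for _ in range(nrows)]
    for layer, w in zip(layers, weights):
        for r in range(nrows):
            for c in range(ncols):
                out[r][c] += w * layer[r][c]
    return out

# Hypothetical suitability layers (scores 0-10) sharing one grid
slope_score = [[2, 8], [4, 6]]
soil_score = [[10, 2], [6, 6]]
suitability = weighted_overlay([slope_score, soil_score], [0.6, 0.4])
print(suitability)
```

Because every layer references the same grid cells, overlay is just per-cell arithmetic, which is the main reason the raster format is so well adapted to digital manipulation.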

Developing a GIS can be a costly, complex, and somewhat frustrating experience for the novitiate. We stress that data base design and encoding are major tasks that demand time, skilled personnel, and adequate funds. However, once developed, the information possibilities are exciting, and the intrinsic worth of the output more than compensates for the marginal costs of handling the various kinds of data. In plain language, GIS is a systematic, versatile, and comprehensive way to present, interpret, and recast spatial (geographic) data into intelligible output.

Views in ArcMap

ArcMap displays map contents in one of two views: data view and layout view.

Each view lets you look at and interact with the map in a specific way.

In ArcMap data view, the map is the data frame. The active data frame is presented as a geographic window in which map layers are displayed and used. Within a data frame, you work with GIS information presented through map layers using geographic (real-world) coordinates. These will typically be ground measurements in units such as feet, meters, or measures of latitude-longitude (such as decimal degrees). The data view hides all the map elements on the layout, such as titles, north arrows, and scale bars, and lets you focus on the data in a single data frame for tasks such as editing or analysis.

When you're preparing your map's layout, you'll want to work with your map in page layout view. A page layout is a collection of map elements (such as a data frame, map title, scale bar, north arrow, and a symbol legend) arranged on a page. Layouts are used for composing maps for printing or export to formats such as Adobe PDF.

The Layout view is used to design and author a map for printing, exporting, or publishing. You can manage map elements within the page space (typically, in inches or centimeters), add new map elements, and preview what your map will look like before exporting or printing it. Common map elements include data frames with map layers, scale bars, north arrows, symbol legends, map titles, text, and other graphic elements.

Development of the GIS generalization module

There is a need to develop powerful new algorithms, integrated with widely used GIS software packages such as ArcGIS, to perform generalization processes on an integrated spatial and attribute database. This is important for several reasons: it reduces database size for storage and processing; it enables certain types of analysis on small-scale digital maps and GIS databases; it supports the integration of multiple data sources with different scales and formats; a semi-automatic generalization module that takes only the input and output scales can run the generalization model without human error; and the standards of Egyptian maps need to be taken into consideration.

In the next section, a new technique is developed for performing some generalization tasks using the Visual Basic for Application (VBA) programming language, associated with the ArcGIS software package. These tasks are elimination, symbolization, grouping, and simplification. The proposed technique is capable of performing data abstraction and GIS database generalization. Many experiments are performed to evaluate the proposed technique using different GIS data sets that are developed separately in different projects from various sources with different scales and types.

Generalization Procedure for Polygon Features (GPPF):

The generalization procedure for polygon features is performed as follows:

1. Polygons are kept with the same feature definition

2. Polygons with an area less than (accuracy in mm × output scale denominator) will be Eliminated unless they are important, in which case they will be Symbolized (transformed to a point feature) and then Eliminated

3. Polygons separated by distances less than (accuracy in mm × output scale denominator) will be Grouped (Aggregated). If any of these features are important, they will also be Symbolized

4. Polygons that have segments shorter than (accuracy in mm × output scale denominator) will be Simplified

Figure (1) shows the flow chart of the adopted procedure.
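A minimal sketch of the elimination decision in step 2. The function name and sample values are invented, and it assumes the area threshold is the square of the minimum discernible ground length (accuracy in mm times the output scale denominator), an interpretation not stated explicitly in the procedure:

```python
def polygon_action(area_m2, important, accuracy_mm, scale_denominator):
    """Decide the generalization step for one polygon.

    ASSUMPTION: the minimum discernible ground length is the plotting
    accuracy (mm) times the output scale denominator, and its square
    serves as the area threshold.
    """
    min_length_m = accuracy_mm / 1000.0 * scale_denominator
    threshold_m2 = min_length_m ** 2
    if area_m2 >= threshold_m2:
        return "keep"
    return "symbolize" if important else "eliminate"

# Hypothetical values: 0.2 mm accuracy at 1:50,000 gives a 10 m
# ground length, so the area threshold is 100 square meters
print(polygon_action(50.0, False, 0.2, 50000))   # eliminate
print(polygon_action(50.0, True, 0.2, 50000))    # symbolize
print(polygon_action(250.0, False, 0.2, 50000))  # keep
```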

Generalization Procedure for Linear Feature (GPLF):

The generalization procedure for linear feature is performed as follows:

1. Lines are kept with the same feature definition

2. Lines shorter than (accuracy in mm × output scale denominator) will be Eliminated

3. Lines with segment lengths more than (accuracy in mm × output scale denominator) will be Simplified and Smoothed

Figure (2) shows the flow chart of the adopted procedure.
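The simplification step can be implemented with the Douglas-Peucker algorithm, a common choice for this task (the procedure above does not name a specific algorithm). A minimal sketch with invented coordinates and tolerance:

```python
def simplify(points, tolerance):
    """Douglas-Peucker line simplification.

    Drops vertices whose perpendicular offset from the chord between
    the kept endpoints is within the tolerance.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def offset(p):
        # Perpendicular distance from p to the chord (first, last)
        px, py = p
        dx, dy = x2 - x1, y2 - y1
        den = (dx * dx + dy * dy) ** 0.5
        if den == 0.0:
            return ((px - x1) ** 2 + (py - y1) ** 2) ** 0.5
        return abs(dy * px - dx * py + x2 * y1 - y2 * x1) / den

    dmax, index = max((offset(p), i) for i, p in enumerate(points[1:-1], 1))
    if dmax <= tolerance:
        return [points[0], points[-1]]
    # Keep the farthest vertex and recurse on the two halves
    left = simplify(points[:index + 1], tolerance)
    right = simplify(points[index:], tolerance)
    return left[:-1] + right

line = [(0, 0), (1, 0.05), (2, 0.1), (3, 2.0), (4, 0.0)]
print(simplify(line, 0.5))  # the collinear vertex (1, 0.05) is dropped
```

The tolerance here plays the role of the (accuracy × scale) threshold in the procedure: coarser output scales permit a larger tolerance, so more vertices are removed.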

GIS Generalization Module Development:

The comprehensive GIS module (Figure (3)) is developed using the Visual Basic for Applications (VBA) programming language on top of the ArcGIS ArcInfo software.


After reading this chapter, you'll be able to do the following:

  • Clear the window to an arbitrary color
  • Force any pending drawing to complete
  • Draw with any geometric primitive - points, lines, and polygons - in two or three dimensions
  • Turn states on and off and query state variables
  • Control the display of those primitives - for example, draw dashed lines or outlined polygons
  • Specify normal vectors at appropriate points on the surface of solid objects
  • Use vertex arrays to store and access a lot of geometric data with only a few function calls
  • Save and restore several state variables at once

Although you can draw complex and interesting pictures using OpenGL, they're all constructed from a small number of primitive graphical items. This shouldn't be too surprising - look at what Leonardo da Vinci accomplished with just pencils and paintbrushes.

At the highest level of abstraction, there are three basic drawing operations: clearing the window, drawing a geometric object, and drawing a raster object. Raster objects, which include such things as two-dimensional images, bitmaps, and character fonts, are covered in Chapter 8. In this chapter, you learn how to clear the screen and to draw geometric objects, including points, straight lines, and flat polygons.

You might think to yourself, "Wait a minute. I've seen lots of computer graphics in movies and on television, and there are plenty of beautifully shaded curved lines and surfaces. How are those drawn, if all OpenGL can draw are straight lines and flat polygons?" Even the image on the cover of this book includes a round table and objects on the table that have curved surfaces. It turns out that all the curved lines and surfaces you've seen are approximated by large numbers of little flat polygons or straight lines, in much the same way that the globe on the cover is constructed from a large set of rectangular blocks. The globe doesn't appear to have a smooth surface because the blocks are relatively large compared to the globe. Later in this chapter, we show you how to construct curved lines and surfaces from lots of small geometric primitives.
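The trade-off between facet count and smoothness can be quantified: for a circle approximated by a regular polygon of chords, the worst-case gap between curve and chord (the sagitta) shrinks rapidly as the number of segments grows. A small sketch (the function name is ours, not OpenGL's):

```python
import math

def max_chord_error(radius, n_segments):
    """Worst-case gap between a circle and its n-segment approximation.

    Each chord spans an angle of 2*pi/n, so the sagitta of the chord is
    r * (1 - cos(pi / n)).
    """
    return radius * (1.0 - math.cos(math.pi / n_segments))

# More, smaller facets make the approximation look smoother
for n in (8, 32, 128):
    print(n, max_chord_error(1.0, n))
```

This is why the blocky globe described above looks faceted: its "segments" are large relative to the sphere, so the gap between the true surface and the flat faces is visible.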

This chapter has the following major sections:

  • "A Drawing Survival Kit" explains how to clear the window and force drawing to be completed. It also gives you basic information about controlling the color of geometric objects and describing a coordinate system.
  • "Describing Points, Lines, and Polygons" shows you what the set of primitive geometric objects is and how to draw them.
  • "Basic State Management" describes how to turn on and off some states (modes) and query state variables.
  • "Displaying Points, Lines, and Polygons" explains what control you have over the details of how primitives are drawn - for example, what diameter points have, whether lines are solid or dashed, and whether polygons are outlined or filled.
  • "Normal Vectors" discusses how to specify normal vectors for geometric objects and (briefly) what these vectors are for.
  • "Vertex Arrays" shows you how to put lots of geometric data into just a few arrays and how, with only a few function calls, to render the geometry it describes. Reducing function calls may increase the efficiency and performance of rendering.
  • "Attribute Groups" reveals how to query the current value of state variables and how to save and restore several related state values all at once.
  • "Some Hints for Building Polygonal Models of Surfaces" explores the issues and techniques involved in constructing polygonal approximations to surfaces.

One thing to keep in mind as you read the rest of this chapter is that with OpenGL, unless you specify otherwise, every time you issue a drawing command, the specified object is drawn. This might seem obvious, but in some systems, you first make a list of things to draw. When your list is complete, you tell the graphics hardware to draw the items in the list. The first style is called immediate-mode graphics and is the default OpenGL style. In addition to using immediate mode, you can choose to save some commands in a list (called a display list ) for later drawing. Immediate-mode graphics are typically easier to program, but display lists are often more efficient. Chapter 7 tells you how to use display lists and why you might want to use them.

A Drawing Survival Kit

This section explains how to clear the window in preparation for drawing, set the color of objects that are to be drawn, and force drawing to be completed. None of these subjects has anything to do with geometric objects in a direct way, but any program that draws geometric objects has to deal with these issues.

Clearing the Window

Drawing on a computer screen is different from drawing on paper in that the paper starts out white, and all you have to do is draw the picture. On a computer, the memory holding the picture is usually filled with the last picture you drew, so you typically need to clear it to some background color before you start to draw the new scene. The color you use for the background depends on the application. For a word processor, you might clear to white (the color of the paper) before you begin to draw the text. If you're drawing a view from a spaceship, you clear to the black of space before beginning to draw the stars, planets, and alien spaceships. Sometimes you might not need to clear the screen at all; for example, if the image is the inside of a room, the entire graphics window gets covered as you draw all the walls.

At this point, you might be wondering why we keep talking about clearing the window - why not just draw a rectangle of the appropriate color that's large enough to cover the entire window? First, a special command to clear a window can be much more efficient than a general-purpose drawing command. In addition, as you'll see in Chapter 3, OpenGL allows you to set the coordinate system, viewing position, and viewing direction arbitrarily, so it might be difficult to figure out an appropriate size and location for a window-clearing rectangle. Finally, on many machines, the graphics hardware consists of multiple buffers in addition to the buffer containing colors of the pixels that are displayed. These other buffers must be cleared from time to time, and it's convenient to have a single command that can clear any combination of them. (See Chapter 10 for a discussion of all the possible buffers.)

You must also know how the colors of pixels are stored in the graphics hardware, in memory known as bitplanes. There are two methods of storage. Either the red, green, blue, and alpha (RGBA) values of a pixel can be directly stored in the bitplanes, or a single index value that references a color lookup table is stored. RGBA color-display mode is more commonly used, so most of the examples in this book use it. (See Chapter 4 for more information about both display modes.) You can safely ignore all references to alpha values until Chapter 6.

As an example, these lines of code clear an RGBA mode window to black:

   glClearColor(0.0, 0.0, 0.0, 0.0);
   glClear(GL_COLOR_BUFFER_BIT);

The first line sets the clearing color to black, and the next command clears the entire window to the current clearing color. The single parameter to glClear() indicates which buffers are to be cleared. In this case, the program clears only the color buffer, where the image displayed on the screen is kept. Typically, you set the clearing color once, early in your application, and then you clear the buffers as often as necessary. OpenGL keeps track of the current clearing color as a state variable rather than requiring you to specify it each time a buffer is cleared.

Chapter 4 and Chapter 10 talk about how other buffers are used. For now, all you need to know is that clearing them is simple. For example, to clear both the color buffer and the depth buffer, you would use the following sequence of commands:

   glClearColor(0.0, 0.0, 0.0, 0.0);
   glClearDepth(1.0);
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

In this case, the call to glClearColor() is the same as before, the glClearDepth() command specifies the value to which every pixel of the depth buffer is to be set, and the parameter to the glClear() command now consists of the bitwise OR of all the buffers to be cleared. The following summary of glClear() includes a table that lists the buffers that can be cleared, their names, and the chapter where each type of buffer is discussed.

void glClearColor (GLclampf red, GLclampf green, GLclampf blue, GLclampf alpha)

Sets the current clearing color for use in clearing color buffers in RGBA mode. (See Chapter 4 for more information on RGBA mode.) The red, green, blue, and alpha values are clamped if necessary to the range [0, 1]. The default clearing color is (0, 0, 0, 0), which is black.

void glClear (GLbitfield mask)

Clears the specified buffers to their current clearing values. The mask argument is a bitwise-ORed combination of the values listed in Table 2-1.

Table 2-1: Clearing Buffers

Before issuing a command to clear multiple buffers, you have to set the values to which each buffer is to be cleared if you want something other than the default RGBA color, depth value, accumulation color, and stencil index. In addition to the glClearColor() and glClearDepth() commands that set the current values for clearing the color and depth buffers, glClearIndex() , glClearAccum() , and glClearStencil() specify the color index , accumulation color, and stencil index used to clear the corresponding buffers. (See Chapter 4 and Chapter 10 for descriptions of these buffers and their uses.)

OpenGL allows you to specify multiple buffers because clearing is generally a slow operation, since every pixel in the window (possibly millions) is touched, and some graphics hardware allows sets of buffers to be cleared simultaneously. Hardware that doesn't support simultaneous clears performs them sequentially. The difference between

   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

and

   glClear(GL_COLOR_BUFFER_BIT);
   glClear(GL_DEPTH_BUFFER_BIT);

is that although both have the same final effect, the first example might run faster on many machines. It certainly won't run more slowly.

Specifying a Color

With OpenGL, the description of the shape of an object being drawn is independent of the description of its color. Whenever a particular geometric object is drawn, it's drawn using the currently specified coloring scheme. The coloring scheme might be as simple as "draw everything in fire-engine red," or might be as complicated as "assume the object is made out of blue plastic, that there's a yellow spotlight pointed in such and such a direction, and that there's a general low-level reddish-brown light everywhere else." In general, an OpenGL programmer first sets the color or coloring scheme and then draws the objects. Until the color or coloring scheme is changed, all objects are drawn in that color or using that coloring scheme. This method helps OpenGL achieve higher drawing performance than would result if it didn't keep track of the current color.

For example, the pseudocode

   set_current_color(red);
   draw_object(A);
   draw_object(B);
   set_current_color(green);
   set_current_color(blue);
   draw_object(C);

draws objects A and B in red, and object C in blue. The command on the fourth line that sets the current color to green is wasted.

Coloring, lighting, and shading are all large topics with entire chapters or large sections devoted to them. To draw geometric primitives that can be seen, however, you need some basic knowledge of how to set the current color; this information is provided in the next paragraphs. (See Chapter 4 and Chapter 5 for details on these topics.)

To set a color, use the command glColor3f(). It takes three parameters, all of which are floating-point numbers between 0.0 and 1.0. The parameters are, in order, the red, green, and blue components of the color. You can think of these three values as specifying a "mix" of colors: 0.0 means don't use any of that component, and 1.0 means use all you can of that component. Thus, the code

   glColor3f(1.0, 0.0, 0.0);

makes the brightest red the system can draw, with no green or blue components. All zeros makes black; in contrast, all ones makes white. Setting all three components to 0.5 yields gray (halfway between black and white). Here are eight commands and the colors they would set:

   glColor3f(0.0, 0.0, 0.0);   /* black   */
   glColor3f(1.0, 0.0, 0.0);   /* red     */
   glColor3f(0.0, 1.0, 0.0);   /* green   */
   glColor3f(1.0, 1.0, 0.0);   /* yellow  */
   glColor3f(0.0, 0.0, 1.0);   /* blue    */
   glColor3f(1.0, 0.0, 1.0);   /* magenta */
   glColor3f(0.0, 1.0, 1.0);   /* cyan    */
   glColor3f(1.0, 1.0, 1.0);   /* white   */

You might have noticed earlier that the routine to set the clearing color, glClearColor(), takes four parameters, the first three of which match the parameters for glColor3f(). The fourth parameter is the alpha value; it's covered in detail in "Blending" in Chapter 6. For now, set the fourth parameter of glClearColor() to 0.0, which is its default value.

Forcing Completion of Drawing

As you saw in "OpenGL Rendering Pipeline" in Chapter 1, most modern graphics systems can be thought of as an assembly line. The main central processing unit (CPU) issues a drawing command. Perhaps other hardware does geometric transformations. Clipping is performed, followed by shading and/or texturing. Finally, the values are written into the bitplanes for display. In high-end architectures, each of these operations is performed by a different piece of hardware that's been designed to perform its particular task quickly. In such an architecture, there's no need for the CPU to wait for each drawing command to complete before issuing the next one. While the CPU is sending a vertex down the pipeline, the transformation hardware is working on transforming the last one sent, the one before that is being clipped, and so on. In such a system, if the CPU waited for each command to complete before issuing the next, there could be a huge performance penalty.

In addition, the application might be running on more than one machine. For example, suppose that the main program is running elsewhere (on a machine called the client) and that you're viewing the results of the drawing on your workstation or terminal (the server), which is connected by a network to the client. In that case, it might be horribly inefficient to send each command over the network one at a time, since considerable overhead is often associated with each network transmission. Usually, the client gathers a collection of commands into a single network packet before sending it. Unfortunately, the network code on the client typically has no way of knowing that the graphics program is finished drawing a frame or scene. In the worst case, it waits forever for enough additional drawing commands to fill a packet, and you never see the completed drawing.

For this reason, OpenGL provides the command glFlush() , which forces the client to send the network packet even though it might not be full. Where there is no network and all commands are truly executed immediately on the server, glFlush() might have no effect. However, if you're writing a program that you want to work properly both with and without a network, include a call to glFlush() at the end of each frame or scene. Note that glFlush() doesn't wait for the drawing to complete - it just forces the drawing to begin execution, thereby guaranteeing that all previous commands execute in finite time even if no further rendering commands are executed.

There are other situations where glFlush() is useful.

  • Software renderers that build images in system memory and don't want to constantly update the screen.
  • Implementations that gather sets of rendering commands to amortize start-up costs. The aforementioned network transmission example is one instance of this.

A few commands - for example, commands that swap buffers in double-buffer mode - automatically flush pending commands onto the network before the swap can occur.

If glFlush() isn't sufficient for you, try glFinish() . This command flushes the network as glFlush() does and then waits for notification from the graphics hardware or network indicating that the drawing is complete in the framebuffer. You might need to use glFinish() if you want to synchronize tasks - for example, to make sure that your three-dimensional rendering is on the screen before you use Display PostScript to draw labels on top of the rendering. Another example would be to ensure that the drawing is complete before it begins to accept user input. After you issue a glFinish() command, your graphics process is blocked until it receives notification from the graphics hardware that the drawing is complete. Keep in mind that excessive use of glFinish() can reduce the performance of your application, especially if you're running over a network, because it requires round-trip communication. If glFlush() is sufficient for your needs, use it instead of glFinish() .

void glFinish (void) Forces all previously issued OpenGL commands to complete. This command doesn't return until all effects from previous commands are fully realized.

Coordinate System Survival Kit

Whenever you initially open a window or later move or resize that window, the window system will send an event to notify you. If you are using GLUT, the notification is automated; whatever routine has been registered with glutReshapeFunc() will be called. You must register a callback function that will

  • Reestablish the rectangular region that will be the new rendering canvas
  • Define the coordinate system to which objects will be drawn

In Chapter 3 you'll see how to define three-dimensional coordinate systems, but right now, just create a simple, basic two-dimensional coordinate system into which you can draw a few objects. Call glutReshapeFunc ( reshape ), where reshape() is the following function shown in Example 2-1.

Example 2-1 : Reshape Callback Function
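The listing for Example 2-1 does not survive in this copy of the text. A minimal reshape() consistent with the description that follows (glViewport() plus the three coordinate-system routines) would look like this; the casts are conventional GLUT usage, not required:

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Called by GLUT whenever the window is created, moved, or resized. */
void reshape(int w, int h)
{
   /* use the entire new window as the rendering canvas */
   glViewport(0, 0, (GLsizei) w, (GLsizei) h);

   /* set up a 2D coordinate system: (0, 0) at the lower left,
      (w, h) at the upper right */
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0.0, (GLdouble) w, 0.0, (GLdouble) h);
}
```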

The internals of GLUT will pass this function two arguments: the width and height, in pixels, of the new, moved, or resized window. glViewport() adjusts the pixel rectangle for drawing to be the entire new window. The next three routines adjust the coordinate system for drawing so that the lower-left corner is (0, 0), and the upper-right corner is ( w , h ) (See Figure 2-1).

To explain it another way, think about a piece of graphing paper. The w and h values in reshape() represent how many columns and rows of squares are on your graph paper. Then you have to put axes on the graph paper. The gluOrtho2D() routine puts the origin, (0, 0), all the way in the lowest, leftmost square, and makes each square represent one unit. Now when you render the points, lines, and polygons in the rest of this chapter, they will appear on this paper in easily predictable squares. (For now, keep all your objects two-dimensional.)

Figure 2-1 : Coordinate System Defined by w = 50, h = 50

Describing Points, Lines, and Polygons

This section explains how to describe OpenGL geometric primitives. All geometric primitives are eventually described in terms of their vertices - coordinates that define the points themselves, the endpoints of line segments, or the corners of polygons. The next section discusses how these primitives are displayed and what control you have over their display.

What Are Points, Lines, and Polygons?

You probably have a fairly good idea of what a mathematician means by the terms point , line, and polygon. The OpenGL meanings are similar, but not quite the same.

One difference comes from the limitations of computer-based calculations. In any OpenGL implementation, floating-point calculations are of finite precision, and they have round-off errors. Consequently, the coordinates of OpenGL points, lines, and polygons suffer from the same problems.

Another more important difference arises from the limitations of a raster graphics display. On such a display, the smallest displayable unit is a pixel, and although pixels might be less than 1/100 of an inch wide, they are still much larger than the mathematician's concepts of infinitely small (for points) or infinitely thin (for lines). When OpenGL performs calculations, it assumes points are represented as vectors of floating-point numbers. However, a point is typically (but not always) drawn as a single pixel, and many different points with slightly different coordinates could be drawn by OpenGL on the same pixel.


A point is represented by a set of floating-point numbers called a vertex. All internal calculations are done as if vertices are three-dimensional. Vertices specified by the user as two-dimensional (that is, with only x and y coordinates) are assigned a z coordinate equal to zero by OpenGL.

OpenGL works in the homogeneous coordinates of three-dimensional projective geometry, so for internal calculations, all vertices are represented with four floating-point coordinates ( x , y , z , w ). If w is different from zero, these coordinates correspond to the Euclidean three-dimensional point ( x/w , y/w , z/w ). You can specify the w coordinate in OpenGL commands, but that's rarely done. If the w coordinate isn't specified, it's understood to be 1.0. (See Appendix F for more information about homogeneous coordinate systems.)


In OpenGL, the term line refers to a line segment , not the mathematician's version that extends to infinity in both directions. There are easy ways to specify a connected series of line segments, or even a closed, connected series of segments (see Figure 2-2). In all cases, though, the lines constituting the connected series are specified in terms of the vertices at their endpoints.

Figure 2-2 : Two Connected Series of Line Segments


Polygons are the areas enclosed by single closed loops of line segments, where the line segments are specified by the vertices at their endpoints. Polygons are typically drawn with the pixels in the interior filled in, but you can also draw them as outlines or a set of points. (See "Polygon Details.")

In general, polygons can be complicated, so OpenGL makes some strong restrictions on what constitutes a primitive polygon. First, the edges of OpenGL polygons can't intersect (a mathematician would call a polygon satisfying this condition a simple polygon ). Second, OpenGL polygons must be convex , meaning that they cannot have indentations. Stated precisely, a region is convex if, given any two points in the interior, the line segment joining them is also in the interior. See Figure 2-3 for some examples of valid and invalid polygons. OpenGL, however, doesn't restrict the number of line segments making up the boundary of a convex polygon. Note that polygons with holes can't be described. They are nonconvex, and they can't be drawn with a boundary made up of a single closed loop. Be aware that if you present OpenGL with a nonconvex filled polygon, it might not draw it as you expect. For instance, on most systems no more than the convex hull of the polygon would be filled. On some systems, less than the convex hull might be filled.

Figure 2-3 : Valid and Invalid Polygons

The reason for the OpenGL restrictions on valid polygon types is that it's simpler to provide fast polygon-rendering hardware for that restricted class of polygons. Simple polygons can be rendered quickly. The difficult cases are hard to detect quickly. So for maximum performance, OpenGL crosses its fingers and assumes the polygons are simple.

Many real-world surfaces consist of nonsimple polygons, nonconvex polygons, or polygons with holes. Since all such polygons can be formed from unions of simple convex polygons, some routines to build more complex objects are provided in the GLU library. These routines take complex descriptions and tessellate them, or break them down into groups of the simpler OpenGL polygons that can then be rendered. (See "Polygon Tessellation" in Chapter 11 for more information about the tessellation routines.)

Since OpenGL vertices are always three-dimensional, the points forming the boundary of a particular polygon don't necessarily lie on the same plane in space. (Of course, they do in many cases - if all the z coordinates are zero, for example, or if the polygon is a triangle.) If a polygon's vertices don't lie in the same plane, then after various rotations in space, changes in the viewpoint, and projection onto the display screen, the points might no longer form a simple convex polygon. For example, imagine a four-point quadrilateral where the points are slightly out of plane, and look at it almost edge-on. You can get a nonsimple polygon that resembles a bow tie, as shown in Figure 2-4, which isn't guaranteed to be rendered correctly. This situation isn't all that unusual if you approximate curved surfaces by quadrilaterals made of points lying on the true surface. You can always avoid the problem by using triangles, since any three points always lie on a plane.

Figure 2-4 : Nonplanar Polygon Transformed to Nonsimple Polygon


Since rectangles are so common in graphics applications, OpenGL provides a filled-rectangle drawing primitive, glRect*() . You can draw a rectangle as a polygon, as described in "OpenGL Geometric Drawing Primitives," but your particular implementation of OpenGL might have optimized glRect*() for rectangles.

void glRect{sifd} ( TYPE x1 , TYPE y1 , TYPE x2 , TYPE y2 )
void glRect{sifd}v ( TYPE *v1 , TYPE *v2 ) Draws the rectangle defined by the corner points ( x1, y1 ) and ( x2, y2 ). The rectangle lies in the plane z = 0 and has sides parallel to the x - and y -axes. If the vector form of the function is used, the corners are given by two pointers to arrays, each of which contains an ( x, y ) pair.

Note that although the rectangle begins with a particular orientation in three-dimensional space (in the x-y plane and parallel to the axes), you can change this by applying rotations or other transformations. (See Chapter 3 for information about how to do this.)
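As a usage sketch (these coordinates are made up for illustration), both forms draw the same square:

```c
GLfloat corner1[2] = {25.0, 25.0};
GLfloat corner2[2] = {50.0, 50.0};

glRectf(25.0, 25.0, 50.0, 50.0);   /* scalar form */
glRectfv(corner1, corner2);        /* vector form: same rectangle */
```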

Curves and Curved Surfaces

Any smoothly curved line or surface can be approximated - to any arbitrary degree of accuracy - by short line segments or small polygonal regions. Thus, subdividing curved lines and surfaces sufficiently and then approximating them with straight line segments or flat polygons makes them appear curved (see Figure 2-5). If you're skeptical that this really works, imagine subdividing until each line segment or polygon is so tiny that it's smaller than a pixel on the screen.

Figure 2-5 : Approximating Curves

Even though curves aren't geometric primitives, OpenGL does provide some direct support for subdividing and drawing them. (See Chapter 12 for information about how to draw curves and curved surfaces.)

Specifying Vertices

With OpenGL, all geometric objects are ultimately described as an ordered set of vertices. You use the glVertex*() command to specify a vertex.

void glVertex{234}{sifd}[v] ( TYPE coords ) Specifies a vertex for use in describing a geometric object. You can supply up to four coordinates (x, y, z, w) for a particular vertex or as few as two (x, y) by selecting the appropriate version of the command. If you use a version that doesn't explicitly specify z or w, z is understood to be 0 and w is understood to be 1. Calls to glVertex*() are effective only between a glBegin() and glEnd() pair.

Example 2-2 provides some examples of using glVertex*() .

Example 2-2 : Legal Uses of glVertex*()
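The listing for Example 2-2 is missing here; judging from the description in the next paragraph, the four legal calls were of this form (dvect being a pointer to an array of three doubles):

```c
glVertex2s(2, 3);
glVertex3d(0.0, 0.0, 3.1415926535898);
glVertex4f(2.3, 1.0, -2.2, 2.0);
glVertex3dv(dvect);
```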

The first example represents a vertex with three-dimensional coordinates (2, 3, 0). (Remember that if it isn't specified, the z coordinate is understood to be 0.) The coordinates in the second example are (0.0, 0.0, 3.1415926535898) (double-precision floating-point numbers). The third example represents the vertex with three-dimensional coordinates (1.15, 0.5, -1.1). (Remember that the x, y , and z coordinates are eventually divided by the w coordinate.) In the final example, dvect is a pointer to an array of three double-precision floating-point numbers.

On some machines, the vector form of glVertex*() is more efficient, since only a single parameter needs to be passed to the graphics subsystem. Special hardware might be able to send a whole series of coordinates in a single batch. If your machine is like this, it's to your advantage to arrange your data so that the vertex coordinates are packed sequentially in memory. In this case, there may be some gain in performance by using the vertex array operations of OpenGL. (See "Vertex Arrays.")

OpenGL Geometric Drawing Primitives

Now that you've seen how to specify vertices, you still need to know how to tell OpenGL to create a set of points, a line, or a polygon from those vertices. To do this, you bracket each set of vertices between a call to glBegin() and a call to glEnd() . The argument passed to glBegin() determines what sort of geometric primitive is constructed from the vertices. For example, Example 2-3 specifies the vertices for the polygon shown in Figure 2-6.

Example 2-3 : Filled Polygon
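The listing for Example 2-3 is also missing; a sketch of the kind of code it contained, with five illustrative (made-up) vertex coordinates, is:

```c
glBegin(GL_POLYGON);
   glVertex2f(0.0, 0.0);
   glVertex2f(0.0, 3.0);
   glVertex2f(4.0, 3.0);
   glVertex2f(6.0, 1.5);
   glVertex2f(4.0, 0.0);
glEnd();
```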

Figure 2-6 : Drawing a Polygon or a Set of Points

If you had used GL_POINTS instead of GL_POLYGON, the primitive would have been simply the five points shown in Figure 2-6. Table 2-2 in the following function summary for glBegin() lists the ten possible arguments and the corresponding type of primitive.

void glBegin (GLenum mode ) Marks the beginning of a vertex-data list that describes a geometric primitive. The type of primitive is indicated by mode , which can be any of the values shown in Table 2-2.

Table 2-2 : Geometric Primitive Names and Meanings

GL_POINTS: individual points

GL_LINES: pairs of vertices interpreted as individual line segments

GL_LINE_STRIP: series of connected line segments

GL_LINE_LOOP: same as above, with a segment added between last and first vertices

GL_TRIANGLES: triples of vertices interpreted as triangles

GL_TRIANGLE_STRIP: linked strip of triangles

GL_TRIANGLE_FAN: linked fan of triangles

GL_QUADS: quadruples of vertices interpreted as four-sided polygons

GL_QUAD_STRIP: linked strip of quadrilaterals

GL_POLYGON: boundary of a simple, convex polygon

void glEnd (void) Marks the end of a vertex-data list.

Figure 2-7 shows examples of all the geometric primitives listed in Table 2-2. The paragraphs that follow the figure describe the pixels that are drawn for each of the objects. Note that in addition to points, several types of lines and polygons are defined. Obviously, you can find many ways to draw the same primitive. The method you choose depends on your vertex data.

Figure 2-7 : Geometric Primitive Types

As you read the following descriptions, assume that n vertices (v0, v1, v2, ..., vn-1) are described between a glBegin() and glEnd() pair.

GL_POINTS: Draws a point at each of the n vertices.

GL_LINES: Draws a series of unconnected line segments. Segments are drawn between v0 and v1, between v2 and v3, and so on. If n is odd, the last segment is drawn between vn-3 and vn-2, and vn-1 is ignored.

GL_LINE_STRIP: Draws a line segment from v0 to v1, then from v1 to v2, and so on, finally drawing the segment from vn-2 to vn-1. Thus, a total of n - 1 line segments are drawn. Nothing is drawn unless n is larger than 1. There are no restrictions on the vertices describing a line strip (or a line loop); the lines can intersect arbitrarily.

GL_LINE_LOOP: Same as GL_LINE_STRIP, except that a final line segment is drawn from vn-1 to v0, completing a loop.

GL_TRIANGLES: Draws a series of triangles (three-sided polygons) using vertices v0, v1, v2, then v3, v4, v5, and so on. If n isn't an exact multiple of 3, the final one or two vertices are ignored.

GL_TRIANGLE_STRIP: Draws a series of triangles (three-sided polygons) using vertices v0, v1, v2, then v2, v1, v3 (note the order), then v2, v3, v4, and so on. The ordering is to ensure that the triangles are all drawn with the same orientation so that the strip can correctly form part of a surface. Preserving the orientation is important for some operations, such as culling. (See "Reversing and Culling Polygon Faces.") n must be at least 3 for anything to be drawn.

GL_TRIANGLE_FAN: Same as GL_TRIANGLE_STRIP, except that the vertices are v0, v1, v2, then v0, v2, v3, then v0, v3, v4, and so on (see Figure 2-7).

GL_QUADS: Draws a series of quadrilaterals (four-sided polygons) using vertices v0, v1, v2, v3, then v4, v5, v6, v7, and so on. If n isn't a multiple of 4, the final one, two, or three vertices are ignored.

GL_QUAD_STRIP: Draws a series of quadrilaterals (four-sided polygons) beginning with v0, v1, v3, v2, then v2, v3, v5, v4, then v4, v5, v7, v6, and so on (see Figure 2-7). n must be at least 4 before anything is drawn. If n is odd, the final vertex is ignored.

GL_POLYGON: Draws a polygon using the points v0, ..., vn-1 as vertices. n must be at least 3, or nothing is drawn. In addition, the polygon specified must not intersect itself and must be convex. If the vertices don't satisfy these conditions, the results are unpredictable.

Restrictions on Using glBegin() and glEnd()

The most important information about vertices is their coordinates, which are specified by the glVertex*() command. You can also supply additional vertex-specific data for each vertex - a color, a normal vector, texture coordinates, or any combination of these - using special commands. In addition, a few other commands are valid between a glBegin() and glEnd() pair. Table 2-3 contains a complete list of such valid commands.

Table 2-3 : Valid Commands between glBegin() and glEnd()

glVertex*(): set vertex coordinates

glColor*(): set current color

glIndex*(): set current color index

glNormal*(): set normal vector coordinates

glTexCoord*(): set texture coordinates

glEdgeFlag*(): control drawing of edges

glMaterial*(): set material properties

glArrayElement(): extract vertex array data

glEvalCoord*(), glEvalPoint*(): generate coordinates

glCallList(), glCallLists(): execute display list(s)

No other OpenGL commands are valid between a glBegin() and glEnd() pair, and making most other OpenGL calls generates an error. Some vertex array commands, such as glEnableClientState() and glVertexPointer() , when called between glBegin() and glEnd() , have undefined behavior but do not necessarily generate an error. (Also, routines related to OpenGL, such as the glX*() routines, have undefined behavior between glBegin() and glEnd() .) These cases should be avoided, and debugging them may be more difficult.

Note, however, that only OpenGL commands are restricted; you can certainly include other programming-language constructs (except for calls, such as the aforementioned glX*() routines). For example, Example 2-4 draws an outlined circle.

Example 2-4 : Other Constructs between glBegin() and glEnd()
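The circle-drawing listing for Example 2-4 is not reproduced here; a sketch consistent with the note below (an ordinary C for loop computing each vertex with sin() and cos() inside a GL_LINE_LOOP) might be:

```c
#define PI 3.1415926535898   /* needs <math.h> for sin() and cos() */

GLint i;
GLint circle_points = 100;
GLdouble angle;

glBegin(GL_LINE_LOOP);
for (i = 0; i < circle_points; i++) {
   angle = 2 * PI * i / circle_points;   /* computed per vertex */
   glVertex2d(cos(angle), sin(angle));
}
glEnd();
```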

Note: This example isn't the most efficient way to draw a circle, especially if you intend to do it repeatedly. The graphics commands used are typically very fast, but this code calculates an angle and calls the sin() and cos() routines for each vertex; in addition, there's the loop overhead. (Another way to calculate the vertices of a circle is to use a GLU routine; see "Quadrics: Rendering Spheres, Cylinders, and Disks" in Chapter 11.) If you need to draw lots of circles, calculate the coordinates of the vertices once, save them in an array, and create a display list (see Chapter 7), or use vertex arrays to render them.

Unless they are being compiled into a display list, all glVertex*() commands should appear between some glBegin() and glEnd() combination. (If they appear elsewhere, they don't accomplish anything.) If they appear in a display list, they are executed only if they appear between a glBegin() and a glEnd() . (See Chapter 7 for more information about display lists.)

Although many commands are allowed between glBegin() and glEnd() , vertices are generated only when a glVertex*() command is issued. At the moment glVertex*() is called, OpenGL assigns the resulting vertex the current color, texture coordinates, normal vector information, and so on. To see this, look at the following code sequence. The first point is drawn in red, and the second and third ones in blue, despite the extra color commands.
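The code sequence in question is absent from this copy; it would have had roughly this shape (coordinates elided, colors chosen to match the description):

```c
glBegin(GL_POINTS);
   glColor3f(0.0, 1.0, 0.0);   /* green */
   glColor3f(1.0, 0.0, 0.0);   /* red */
   glVertex3f(...);            /* drawn red: last color before the vertex */
   glColor3f(1.0, 1.0, 0.0);   /* yellow */
   glColor3f(0.0, 0.0, 1.0);   /* blue */
   glVertex3f(...);            /* drawn blue */
   glVertex3f(...);            /* drawn blue */
glEnd();
```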

You can use any combination of the 24 versions of the glVertex*() command between glBegin() and glEnd() , although in real applications all the calls in any particular instance tend to be of the same form. If your vertex-data specification is consistent and repetitive (for example, glColor*, glVertex*, glColor*, glVertex*, ...), you may enhance your program's performance by using vertex arrays. (See "Vertex Arrays.")

Basic State Management

In the previous section, you saw an example of a state variable, the current RGBA color, and how it can be associated with a primitive. OpenGL maintains many states and state variables. An object may be rendered with lighting, texturing, hidden surface removal, fog, or some other states affecting its appearance.

By default, most of these states are initially inactive. These states may be costly to activate; for example, turning on texture mapping will almost certainly slow down the rendering of a primitive. However, the quality of the image will improve and look more realistic, due to the enhanced graphics capabilities.

To turn on and off many of these states, use these two simple commands:

void glEnable (GLenum cap )
void glDisable (GLenum cap ) glEnable() turns on a capability, and glDisable() turns it off. There are over 40 enumerated values that can be passed as a parameter to glEnable() or glDisable() . Some examples of these are GL_BLEND (which controls blending RGBA values), GL_DEPTH_TEST (which controls depth comparisons and updates to the depth buffer), GL_FOG (which controls fog), GL_LINE_STIPPLE (patterned lines), GL_LIGHTING (you get the idea), and so forth.

You can also check if a state is currently enabled or disabled.

GLboolean glIsEnabled (GLenum capability ) Returns GL_TRUE or GL_FALSE, depending upon whether the queried capability is currently activated.

The states you have just seen have two settings: on and off. However, most OpenGL routines set values for more complicated state variables. For example, the routine glColor3f() sets three values, which are part of the GL_CURRENT_COLOR state. There are five querying routines used to find out what values are set for many states:

void glGetBooleanv (GLenum pname , GLboolean * params )
void glGetIntegerv (GLenum pname , GLint * params )
void glGetFloatv (GLenum pname , GLfloat * params )
void glGetDoublev (GLenum pname , GLdouble * params )
void glGetPointerv (GLenum pname , GLvoid ** params ) Obtains Boolean, integer, floating-point, double-precision, or pointer state variables. The pname argument is a symbolic constant indicating the state variable to return, and params is a pointer to an array of the indicated type in which to place the returned data. See the tables in Appendix B for the possible values for pname . For example, to get the current RGBA color, a table in Appendix B suggests you use glGetIntegerv (GL_CURRENT_COLOR, params ) or glGetFloatv (GL_CURRENT_COLOR, params ). A type conversion is performed if necessary to return the desired variable as the requested data type.
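For example, a minimal query sketch (the four-element array matches the four RGBA components):

```c
GLfloat color[4];

/* fetch the current RGBA color into color[0..3] */
glGetFloatv(GL_CURRENT_COLOR, color);
```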

These querying routines handle most, but not all, requests for obtaining state information. (See "The Query Commands" in Appendix B for an additional 16 querying routines.)

Displaying Points, Lines, and Polygons

By default, a point is drawn as a single pixel on the screen, a line is drawn solid and one pixel wide, and polygons are drawn solidly filled in. The following paragraphs discuss the details of how to change these default display modes.

Point Details

To control the size of a rendered point, use glPointSize() and supply the desired size in pixels as the argument.

void glPointSize (GLfloat size ) Sets the width in pixels for rendered points; size must be greater than 0.0 and by default is 1.0.

The actual collection of pixels on the screen which are drawn for various point widths depends on whether antialiasing is enabled. (Antialiasing is a technique for smoothing points and lines as they're rendered; see "Antialiasing" in Chapter 6 for more detail.) If antialiasing is disabled (the default), fractional widths are rounded to integer widths, and a screen-aligned square region of pixels is drawn. Thus, if the width is 1.0, the square is 1 pixel by 1 pixel; if the width is 2.0, the square is 2 pixels by 2 pixels, and so on.

With antialiasing enabled, a circular group of pixels is drawn, and the pixels on the boundaries are typically drawn at less than full intensity to give the edge a smoother appearance. In this mode, non-integer widths aren't rounded.

Most OpenGL implementations support very large point sizes. The maximum size for antialiased points is queryable, but the same information is not available for standard, aliased points. A particular implementation, however, might limit the size of standard, aliased points to its maximum antialiased point size, rounded to the nearest integer value. You can obtain this floating-point value by using GL_POINT_SIZE_RANGE with glGetFloatv() .

Line Details

With OpenGL, you can specify lines with different widths and lines that are stippled in various ways - dotted, dashed, drawn with alternating dots and dashes, and so on.

Wide Lines

The actual rendering of lines is affected by the antialiasing mode, in the same way as for points. (See "Antialiasing" in Chapter 6.) Without antialiasing, widths of 1, 2, and 3 draw lines 1, 2, and 3 pixels wide. With antialiasing enabled, non-integer line widths are possible, and pixels on the boundaries are typically drawn at less than full intensity. As with point sizes, a particular OpenGL implementation might limit the width of nonantialiased lines to its maximum antialiased line width, rounded to the nearest integer value. You can obtain this floating-point value by using GL_LINE_WIDTH_RANGE with glGetFloatv() .

Note: Keep in mind that by default lines are 1 pixel wide, so they appear wider on lower-resolution screens. For computer displays, this isn't typically an issue, but if you're using OpenGL to render to a high-resolution plotter, 1-pixel lines might be nearly invisible. To obtain resolution-independent line widths, you need to take into account the physical dimensions of pixels.

With nonantialiased wide lines, the line width isn't measured perpendicular to the line. Instead, it's measured in the y direction if the absolute value of the slope is less than 1.0; otherwise, it's measured in the x direction. The rendering of an antialiased line is exactly equivalent to the rendering of a filled rectangle of the given width, centered on the exact line.

Stippled Lines

To make stippled (dotted or dashed) lines, you use the command glLineStipple() to define the stipple pattern, and then you enable line stippling with glEnable() .

void glLineStipple (GLint factor , GLushort pattern ) Sets the current stippling pattern for lines. The pattern argument is a 16-bit series of 0s and 1s, and it's repeated as necessary to stipple a given line. A 1 indicates that drawing occurs, and 0 that it does not, on a pixel-by-pixel basis, beginning with the low-order bit of the pattern. The pattern can be stretched out by using factor , which multiplies each subseries of consecutive 1s and 0s. Thus, if three consecutive 1s appear in the pattern, they're stretched to six if factor is 2. factor is clamped to lie between 1 and 255. Line stippling must be enabled by passing GL_LINE_STIPPLE to glEnable() ; it's disabled by passing the same argument to glDisable() .
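A typical setup for the pattern discussed in the next paragraph (a two-line sketch) is:

```c
glLineStipple(1, 0x3F07);
glEnable(GL_LINE_STIPPLE);
```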

With a repeat factor of 1 and the pattern 0x3F07 (which translates to 0011111100000111 in binary), a line would be drawn with 3 pixels on, then 5 off, 6 on, and 2 off. (If this seems backward, remember that the low-order bit is used first.) If factor had been 2, the pattern would have been elongated: 6 pixels on, 10 off, 12 on, and 4 off. Figure 2-8 shows lines drawn with different patterns and repeat factors. If you don't enable line stippling, drawing proceeds as if pattern were 0xFFFF and factor were 1. (Use glDisable() with GL_LINE_STIPPLE to disable stippling.) Note that stippling can be used in combination with wide lines to produce wide stippled lines.

Figure 2-8 : Stippled Lines

One way to think of the stippling is that as the line is being drawn, the pattern is shifted by 1 bit each time a pixel is drawn (or factor pixels are drawn, if factor isn't 1). When a series of connected line segments is drawn between a single glBegin() and glEnd() , the pattern continues to shift as one segment turns into the next. This way, a stippling pattern continues across a series of connected line segments. When glEnd() is executed, the pattern is reset, and - if more lines are drawn before stippling is disabled - the stippling restarts at the beginning of the pattern. If you're drawing lines with GL_LINES, the pattern resets for each independent line.

Example 2-5 illustrates the results of drawing with a couple of different stipple patterns and line widths. It also illustrates what happens if the lines are drawn as a series of individual segments instead of a single connected line strip. The results of running the program appear in Figure 2-9.

Figure 2-9 : Wide Stippled Lines

Example 2-5 : Line Stipple Patterns: lines.c
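The lines.c listing itself is not included here. Its core, reconstructed from memory of the published example and therefore only a sketch, draws each line with a different pattern via a hypothetical drawOneLine() helper that wraps glBegin(GL_LINES)/glEnd():

```c
/* hypothetical helper: draw one segment between two 2D points */
#define drawOneLine(x1, y1, x2, y2) \
   glBegin(GL_LINES); glVertex2f((x1), (y1)); \
   glVertex2f((x2), (y2)); glEnd();

glEnable(GL_LINE_STIPPLE);

glLineStipple(1, 0x0101);   /* dotted */
drawOneLine(50.0, 125.0, 150.0, 125.0);

glLineStipple(1, 0x00FF);   /* dashed */
drawOneLine(150.0, 125.0, 250.0, 125.0);

glLineStipple(1, 0x1C47);   /* dash/dot/dash */
drawOneLine(250.0, 125.0, 350.0, 125.0);

glDisable(GL_LINE_STIPPLE);
```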

Polygon Details

Polygons are typically drawn by filling in all the pixels enclosed within the boundary, but you can also draw them as outlined polygons or simply as points at the vertices. A filled polygon might be solidly filled or stippled with a certain pattern. Although the exact details are omitted here, filled polygons are drawn in such a way that if adjacent polygons share an edge or vertex, the pixels making up the edge or vertex are drawn exactly once - they're included in only one of the polygons. This is done so that partially transparent polygons don't have their edges drawn twice, which would make those edges appear darker (or brighter, depending on what color you're drawing with). Note that it might result in narrow polygons having no filled pixels in one or more rows or columns of pixels. Antialiasing polygons is more complicated than for points and lines. (See "Antialiasing" in Chapter 6 for details.)

Polygons as Points, Outlines, or Solids

A polygon has two sides - front and back - and might be rendered differently depending on which side is facing the viewer. This allows you to have cutaway views of solid objects in which there is an obvious distinction between the parts that are inside and those that are outside. By default, both front and back faces are drawn in the same way. To change this, or to draw only outlines or vertices, use glPolygonMode() .

void glPolygonMode (GLenum face , GLenum mode ) Controls the drawing mode for a polygon's front and back faces. The parameter face can be GL_FRONT_AND_BACK, GL_FRONT, or GL_BACK; mode can be GL_POINT, GL_LINE, or GL_FILL to indicate whether the polygon should be drawn as points, outlined, or filled. By default, both the front and back faces are drawn filled.

For example, you can have the front faces filled and the back faces outlined with two calls to this routine:
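Those two calls, omitted from this copy, would be:

```c
glPolygonMode(GL_FRONT, GL_FILL);   /* front faces filled */
glPolygonMode(GL_BACK, GL_LINE);    /* back faces outlined */
```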

Reversing and Culling Polygon Faces

By convention, polygons whose vertices appear in counterclockwise order on the screen are called front-facing. You can construct the surface of any "reasonable" solid - a mathematician would call such a surface an orientable manifold (spheres, donuts, and teapots are orientable; Klein bottles and Möbius strips aren't) - from polygons of consistent orientation. In other words, you can use all clockwise polygons, or all counterclockwise polygons. (This is essentially the mathematical definition of orientable .)

Suppose you've consistently described a model of an orientable surface but that you happen to have the clockwise orientation on the outside. You can swap what OpenGL considers the back face by using the function glFrontFace() , supplying the desired orientation for front-facing polygons.

void glFrontFace (GLenum mode) Controls how front-facing polygons are determined. By default, mode is GL_CCW, which corresponds to a counterclockwise orientation of the ordered vertices of a projected polygon in window coordinates. If mode is GL_CW, faces with a clockwise orientation are considered front-facing.

In a completely enclosed surface constructed from opaque polygons with a consistent orientation, none of the back-facing polygons are ever visible - they're always obscured by the front-facing polygons. If you are outside this surface, you might enable culling to discard polygons that OpenGL determines are back-facing. Similarly, if you are inside the object, only back-facing polygons are visible. To instruct OpenGL to discard front- or back-facing polygons, use the command glCullFace() and enable culling with glEnable() .

void glCullFace (GLenum mode ) Indicates which polygons should be discarded (culled) before they're converted to screen coordinates. The mode is either GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK to indicate front-facing, back-facing, or all polygons. To take effect, culling must be enabled using glEnable() with GL_CULL_FACE; it can be disabled with glDisable() and the same argument.
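For example, to discard back-facing polygons of a closed, consistently wound surface, you might call (a sketch):

```c
glCullFace(GL_BACK);       /* discard back-facing polygons */
glEnable(GL_CULL_FACE);    /* culling has no effect until enabled */
```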

In more technical terms, the decision of whether a face of a polygon is front- or back-facing depends on the sign of the polygon's area computed in window coordinates. One way to compute this area is

a = (1/2) Σ_{i=0}^{n-1} (x_i y_{i⊕1} − x_{i⊕1} y_i)

where x_i and y_i are the x and y window coordinates of the ith vertex of the n-vertex polygon, and i⊕1 is (i+1) mod n.

Assuming that GL_CCW has been specified, if a > 0, the polygon corresponding to that vertex is considered to be front-facing; otherwise, it's back-facing. If GL_CW is specified and if a < 0, then the corresponding polygon is front-facing; otherwise, it's back-facing.

Modify Example 2-5 by adding some filled polygons. Experiment with different colors. Try different polygon modes. Also enable culling to see its effect.

Stippling Polygons

By default, filled polygons are drawn with a solid pattern. They can also be filled with a 32-bit by 32-bit window-aligned stipple pattern, which you specify with glPolygonStipple() .

void glPolygonStipple (const GLubyte * mask ) Defines the current stipple pattern for filled polygons. The argument mask is a pointer to a 32 × 32 bitmap that's interpreted as a mask of 0s and 1s. Where a 1 appears, the corresponding pixel in the polygon is drawn, and where a 0 appears, nothing is drawn. Figure 2-10 shows how a stipple pattern is constructed from the characters in mask . Polygon stippling is enabled and disabled by using glEnable() and glDisable() with GL_POLYGON_STIPPLE as the argument. The interpretation of the mask data is affected by the glPixelStore*() GL_UNPACK* modes. (See "Controlling Pixel-Storage Modes" in Chapter 8 .)

In addition to defining the current polygon stippling pattern, you must enable stippling:
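The enabling call is the usual one:

```c
glEnable(GL_POLYGON_STIPPLE);
```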

Use glDisable() with the same argument to disable polygon stippling.

Figure 2-11 shows the results of polygons drawn unstippled and then with two different stippling patterns. The program is shown in Example 2-6. The reversal of white to black (from Figure 2-10 to Figure 2-11) occurs because the program draws in white over a black background, using the pattern in Figure 2-10 as a stencil.

Figure 2-10 : Constructing a Polygon Stipple Pattern

Figure 2-11 : Stippled Polygons

Example 2-6 : Polygon Stipple Patterns: polys.c

You might want to use display lists to store polygon stipple patterns to maximize efficiency. (See "Display-List Design Philosophy" in Chapter 7.)

Marking Polygon Boundary Edges

OpenGL can render only convex polygons, but many nonconvex polygons arise in practice. To draw these nonconvex polygons, you typically subdivide them into convex polygons - usually triangles, as shown in Figure 2-12 - and then draw the triangles. Unfortunately, if you decompose a general polygon into triangles and draw the triangles, you can't really use glPolygonMode() to draw the polygon's outline, since you get all the triangle outlines inside it. To solve this problem, you can tell OpenGL whether a particular vertex precedes a boundary edge; OpenGL keeps track of this information by passing along with each vertex a bit indicating whether that vertex is followed by a boundary edge. Then, when a polygon is drawn in GL_LINE mode, the nonboundary edges aren't drawn. In Figure 2-12, the dashed lines represent added edges.

Figure 2-12 : Subdividing a Nonconvex Polygon

By default, all vertices are marked as preceding a boundary edge, but you can manually control the setting of the edge flag with the command glEdgeFlag*() . This command is used between glBegin() and glEnd() pairs, and it affects all the vertices specified after it until the next glEdgeFlag() call is made. It applies only to vertices specified for polygons, triangles, and quads, not to those specified for strips of triangles or quads.

void glEdgeFlag (GLboolean flag )
void glEdgeFlagv (const GLboolean * flag ) Indicates whether a vertex should be considered as initializing a boundary edge of a polygon. If flag is GL_TRUE, the edge flag is set to TRUE (the default), and any vertices created are considered to precede boundary edges until this function is called again with flag being GL_FALSE.

As an example, Example 2-7 draws the outline shown in Figure 2-13.

Figure 2-13 : Outlined Polygon Drawn Using Edge Flags

Example 2-7 : Marking Polygon Boundary Edges
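The listing itself is not reproduced here; a sketch of how one triangle of such an outline might be drawn with edge flags (V0, V1, and V2 stand for vertex arrays assumed to be defined elsewhere):

```c
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_POLYGON);
    glEdgeFlag(GL_TRUE);
    glVertex3fv(V0);    /* V0 precedes a boundary edge */
    glEdgeFlag(GL_FALSE);
    glVertex3fv(V1);    /* V1 precedes a nonboundary (added) edge */
    glEdgeFlag(GL_TRUE);
    glVertex3fv(V2);
glEnd();
```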

Normal Vectors

A normal vector (or normal, for short) is a vector that points in a direction that's perpendicular to a surface. For a flat surface, one perpendicular direction is the same for every point on the surface, but for a general curved surface, the normal direction might be different at each point on the surface. With OpenGL, you can specify a normal for each polygon or for each vertex. Vertices of the same polygon might share the same normal (for a flat surface) or have different normals (for a curved surface). But you can't assign normals anywhere other than at the vertices.

An object's normal vectors define the orientation of its surface in space - in particular, its orientation relative to light sources. These vectors are used by OpenGL to determine how much light the object receives at its vertices. Lighting - a large topic by itself - is the subject of Chapter 5, and you might want to review the following information after you've read that chapter. Normal vectors are discussed briefly here because you define normal vectors for an object at the same time you define the object's geometry.

You use glNormal*() to set the current normal to the value of the argument passed in. Subsequent calls to glVertex*() cause the specified vertices to be assigned the current normal. Often, each vertex has a different normal, which necessitates a series of alternating calls, as in Example 2-8.

Example 2-8 : Surface Normals at Vertices
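A sketch of such alternating calls for a four-vertex polygon (n0 - n3 and v0 - v3 stand for normal and vertex arrays assumed to be defined elsewhere):

```c
glBegin(GL_POLYGON);
    glNormal3fv(n0);
    glVertex3fv(v0);
    glNormal3fv(n1);
    glVertex3fv(v1);
    glNormal3fv(n2);
    glVertex3fv(v2);
    glNormal3fv(n3);
    glVertex3fv(v3);
glEnd();
```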

void glNormal3{bsidf} (TYPE nx, TYPE ny, TYPE nz)
void glNormal3{bsidf}v (const TYPE *v) Sets the current normal vector as specified by the arguments. The nonvector version (without the v ) takes three arguments, which specify an ( nx, ny, nz ) vector that's taken to be the normal. Alternatively, you can use the vector version of this function (with the v ) and supply a single array of three elements to specify the desired normal. The b , s , and i versions scale their parameter values linearly to the range [-1.0,1.0].

There's no magic to finding the normals for an object - most likely, you have to perform some calculations that might include taking derivatives - but there are several techniques and tricks you can use to achieve certain effects. Appendix E explains how to find normal vectors for surfaces. If you already know how to do this, if you can count on always being supplied with normal vectors, or if you don't want to use the lighting facility provided by OpenGL, you don't need to read this appendix.

Note that at a given point on a surface, two vectors are perpendicular to the surface, and they point in opposite directions. By convention, the normal is the one that points to the outside of the surface being modeled. (If you get inside and outside reversed in your model, just change every normal vector from ( x, y, z ) to (- x, - y, - z )).

Also, keep in mind that since normal vectors indicate direction only, their length is mostly irrelevant. You can specify normals of any length, but eventually they have to be converted to having a length of 1 before lighting calculations are performed. (A vector that has a length of 1 is said to be of unit length, or normalized.) In general, you should supply normalized normal vectors. To make a normal vector of unit length, divide each of its x , y , z components by the length of the normal:

length = √(x² + y² + z²)

Normal vectors remain normalized as long as your model transformations include only rotations and translations. (See Chapter 3 for a discussion of transformations.) If you perform irregular transformations (such as scaling or multiplying by a shear matrix), or if you specify nonunit-length normals, then you should have OpenGL automatically normalize your normal vectors after the transformations. To do this, call glEnable() with GL_NORMALIZE as its argument. By default, automatic normalization is disabled. Note that automatic normalization typically requires additional calculations that might reduce the performance of your application.

Vertex Arrays

You may have noticed that OpenGL requires many function calls to render geometric primitives. Drawing a 20-sided polygon requires 22 function calls: one call to glBegin() , one call for each of the vertices, and a final call to glEnd() . In the two previous code examples, additional information (polygon boundary edge flags or surface normals) added function calls for each vertex. This can quickly double or triple the number of function calls required for one geometric object. For some systems, function calls have a great deal of overhead and can hinder performance.

An additional problem is the redundant processing of vertices that are shared between adjacent polygons. For example, the cube in Figure 2-14 has six faces and eight shared vertices. Unfortunately, using the standard method of describing this object, each vertex would have to be specified three times: once for every face that uses it. So 24 vertices would be processed, even though eight would be enough.

Figure 2-14 : Six Sides; Eight Shared Vertices

OpenGL has vertex array routines that allow you to specify a lot of vertex-related data with just a few arrays and to access that data with equally few function calls. Using vertex array routines, all 20 vertices in a 20-sided polygon could be put into one array and called with one function. If each vertex also had a surface normal, all 20 surface normals could be put into another array and also called with one function.

Arranging data in vertex arrays may increase the performance of your application. Using vertex arrays reduces the number of function calls, which improves performance. Also, using vertex arrays may allow non-redundant processing of shared vertices. (Vertex sharing is not supported on all implementations of OpenGL.)

Note: Vertex arrays are standard in version 1.1 of OpenGL but were not part of the OpenGL 1.0 specification. With OpenGL 1.0, some vendors have implemented vertex arrays as an extension.

There are three steps to using vertex arrays to render geometry.

Activate (enable) up to six arrays, each to store a different type of data: vertex coordinates, RGBA colors, color indices, surface normals, texture coordinates, or polygon edge flags.

Put data into the array or arrays. The arrays are accessed by the addresses of (that is, pointers to) their memory locations. In the client-server model, this data is stored in the client's address space.

Draw geometry with the data. OpenGL obtains the data from all activated arrays by dereferencing the pointers. In the client-server model, the data is transferred to the server's address space. There are three ways to do this:

Accessing individual array elements (randomly hopping around)

Creating a list of individual array elements (methodically hopping around)

Processing sequential array elements

The dereferencing method you choose may depend upon the type of problem you encounter.

Interleaved vertex array data is another common method of organization. Instead of having up to six different arrays, each maintaining a different type of data (color, surface normal, coordinate, and so on), you might have the different types of data mixed into a single array. (See "Interleaved Arrays" for two methods of solving this.)

Step 1: Enabling Arrays

The first step is to call glEnableClientState() with an enumerated parameter, which activates the chosen array. In theory, you may need to call this up to six times to activate the six available arrays. In practice, you'll probably activate only between one and four arrays. For example, it is unlikely that you would activate both GL_COLOR_ARRAY and GL_INDEX_ARRAY, since your program's display mode supports either RGBA mode or color-index mode, but probably not both simultaneously.

void glEnableClientState (GLenum array ) Specifies the array to enable. Symbolic constants GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_INDEX_ARRAY, GL_NORMAL_ARRAY, GL_TEXTURE_COORD_ARRAY, and GL_EDGE_FLAG_ARRAY are acceptable parameters.

If you use lighting, you may want to define a surface normal for every vertex. (See "Normal Vectors.") To use vertex arrays for that case, you activate both the surface normal and vertex coordinate arrays:
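The activating calls would look like this (a sketch):

```c
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
```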

Suppose that you want to turn off lighting at some point and just draw the geometry using a single color. You want to call glDisable() to turn off lighting states (see Chapter 5). Now that lighting has been deactivated, you also want to stop changing the values of the surface normal state, which is wasted effort. To do that, you call
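A sketch of the disabling call:

```c
glDisableClientState(GL_NORMAL_ARRAY);
```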

void glDisableClientState (GLenum array ) Specifies the array to disable. Accepts the same symbolic constants as glEnableClientState() .

You might be asking yourself why the architects of OpenGL created these new (and long!) command names, gl*ClientState() . Why can't you just call glEnable() and glDisable() ? One reason is that glEnable() and glDisable() can be stored in a display list, but the specification of vertex arrays cannot, because the data remains on the client's side.

Step 2: Specifying Data for the Arrays

There is a straightforward way by which a single command specifies a single array in the client space. There are six different routines to specify arrays - one routine for each kind of array. There is also a command that can specify several client-space arrays at once, all originating from a single interleaved array.

void glVertexPointer (GLint size , GLenum type , GLsizei stride ,
const GLvoid *pointer ) Specifies where spatial coordinate data can be accessed. pointer is the memory address of the first coordinate of the first vertex in the array. type specifies the data type (GL_SHORT, GL_INT, GL_FLOAT, or GL_DOUBLE) of each coordinate in the array. size is the number of coordinates per vertex, which must be 2, 3, or 4. stride is the byte offset between consecutive vertexes. If stride is 0, the vertices are understood to be tightly packed in the array.

To access the other five arrays, there are five similar routines:

void glColorPointer (GLint size , GLenum type , GLsizei stride ,
const GLvoid * pointer )
void glIndexPointer (GLenum type , GLsizei stride , const GLvoid * pointer )
void glNormalPointer (GLenum type , GLsizei stride ,
const GLvoid * pointer )
void glTexCoordPointer (GLint size , GLenum type , GLsizei stride ,
const GLvoid * pointer )
void glEdgeFlagPointer (GLsizei stride , const GLvoid * pointer )

The main differences among the routines are whether size and type are unique or must be specified. For example, a surface normal always has three components, so it is redundant to specify its size. An edge flag is always a single Boolean, so neither size nor type needs to be mentioned. Table 2-4 displays legal values for size and data types.

Table 2-4 : Vertex Array Sizes (Values per Vertex) and Data Types

Command              Sizes       Values for type Argument
glVertexPointer      2, 3, 4     GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glColorPointer       3, 4        GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FLOAT, GL_DOUBLE
glIndexPointer       1           GL_UNSIGNED_BYTE, GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glNormalPointer      3           GL_BYTE, GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glTexCoordPointer    1, 2, 3, 4  GL_SHORT, GL_INT, GL_FLOAT, GL_DOUBLE
glEdgeFlagPointer    1           no type argument (type of data must be GLboolean)

Example 2-9 uses vertex arrays for both RGBA colors and vertex coordinates. RGB floating-point values and their corresponding (x, y) integer coordinates are loaded into the GL_COLOR_ARRAY and GL_VERTEX_ARRAY.

Example 2-9 : Enabling and Loading Vertex Arrays: varray.c
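The listing is omitted here; a sketch of the enabling and loading steps such a program performs (the data values below are placeholders, not necessarily those of varray.c):

```c
static GLint vertices[] = { 25, 25,    100, 325,   175, 25,
                            175, 325,  250, 25,    325, 325 };
static GLfloat colors[] = { 1.00, 0.20, 0.20,   0.20, 0.20, 1.00,
                            0.80, 1.00, 0.20,   0.75, 0.75, 0.75,
                            0.35, 0.35, 0.35,   0.50, 0.50, 0.50 };

/* Step 1: activate the two arrays */
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);

/* Step 2: tell OpenGL where the data lives */
glColorPointer(3, GL_FLOAT, 0, colors);    /* 3 floats per color */
glVertexPointer(2, GL_INT, 0, vertices);   /* 2 ints per vertex */
```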


With a stride of zero, each type of vertex array (RGB color, color index, vertex coordinate, and so on) must be tightly packed. The data in the array must be homogeneous; that is, the data must be all RGB color values, all vertex coordinates, or all some other data similar in some fashion.

Using a stride of other than zero can be useful, especially when dealing with interleaved arrays. In the following array of GLfloats, there are six vertices. For each vertex, there are three RGB color values, which alternate with the (x, y, z) vertex coordinates.

Stride allows a vertex array to access its desired data at regular intervals in the array. For example, to reference only the color values in the intertwined array, the following call starts from the beginning of the array (which could also be passed as &intertwined[0] ) and jumps ahead 6 * sizeof (GLfloat) bytes, which is the size of both the color and vertex coordinate values. This jump is enough to get to the beginning of the data for the next vertex.

For the vertex coordinate pointer, you need to start from further in the array, at the fourth element of intertwined (remember that C programmers start counting at zero).
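A sketch of such an interleaved array and the two stride-based calls just described (the data values are placeholders):

```c
static GLfloat intertwined[] =
    { 1.0, 0.2, 1.0,   100.0, 100.0, 0.0,   /* r, g, b,  x, y, z */
      1.0, 0.2, 0.2,     0.0, 200.0, 0.0,
      1.0, 1.0, 0.2,   100.0, 300.0, 0.0,
      0.2, 1.0, 0.2,   200.0, 300.0, 0.0,
      0.2, 1.0, 1.0,   300.0, 200.0, 0.0,
      0.2, 0.2, 1.0,   200.0, 100.0, 0.0 };

/* colors: start at element 0, jump 6 floats to reach the next vertex */
glColorPointer(3, GL_FLOAT, 6 * sizeof(GLfloat), intertwined);
/* coordinates: start at the fourth element, same stride */
glVertexPointer(3, GL_FLOAT, 6 * sizeof(GLfloat), &intertwined[3]);
```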

Step 3: Dereferencing and Rendering

Until the contents of the vertex arrays are dereferenced, the arrays remain on the client side, and their contents are easily changed. In Step 3, contents of the arrays are obtained, sent down to the server, and then sent down the graphics processing pipeline for rendering.

There are three ways to obtain data: from a single array element (indexed location), from a sequence of array elements, and from an ordered list of array elements.

Dereference a Single Array Element

glArrayElement() is usually called between glBegin() and glEnd() . (If called outside, glArrayElement() sets the current state for all enabled arrays, except for vertex, which has no current state.) In Example 2-10, a triangle is drawn using the third, fourth, and sixth vertices from enabled vertex arrays (again, remember that C programmers begin counting array locations with zero).

Example 2-10 : Using glArrayElement() to Define Colors and Vertices

When executed, the latter five lines of code have the same effect as
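Assuming color and vertex arrays like those shown earlier (3 floats per color, 2 ints per vertex) are enabled and loaded, the dereferencing lines and the expansion they are equivalent to might look like this (a sketch):

```c
glBegin(GL_TRIANGLES);
    glArrayElement(2);
    glArrayElement(3);
    glArrayElement(5);
glEnd();

/* ...which behaves like: */
glBegin(GL_TRIANGLES);
    glColor3fv(colors + (2 * 3));
    glVertex2iv(vertices + (2 * 2));
    glColor3fv(colors + (3 * 3));
    glVertex2iv(vertices + (3 * 2));
    glColor3fv(colors + (5 * 3));
    glVertex2iv(vertices + (5 * 2));
glEnd();
```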

Since glArrayElement() is only a single function call per vertex, it may reduce the number of function calls, which increases overall performance.

Be warned that if the contents of the array are changed between glBegin() and glEnd() , there is no guarantee that you will receive original data or changed data for your requested element. To be safe, don't change the contents of any array element which might be accessed until the primitive is completed.

Dereference a List of Array Elements

glArrayElement() is good for randomly "hopping around" your data arrays. A similar routine, glDrawElements() , is good for hopping around your data arrays in a more orderly manner.

void glDrawElements (GLenum mode , GLsizei count , GLenum type ,
void * indices ) Defines a sequence of geometric primitives using count number of elements, whose indices are stored in the array indices . type must be one of GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT, indicating the data type of the indices array. mode specifies what kind of primitives are constructed and is one of the same values that is accepted by glBegin(); for example, GL_POLYGON, GL_LINE_LOOP, GL_LINES, GL_POINTS, and so on.

The effect of glDrawElements() is almost the same as this command sequence:
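In outline (ignoring the declared type of indices), the sequence is a loop over the supplied indices:

```c
glBegin(mode);
for (i = 0; i < count; i++)
    glArrayElement(indices[i]);
glEnd();
```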

glDrawElements() additionally checks to make sure mode , count , and type are valid. Also, unlike the preceding sequence, executing glDrawElements() leaves several states indeterminate. After execution of glDrawElements() , current RGB color, color index, normal coordinates, texture coordinates, and edge flag are indeterminate if the corresponding array has been enabled.

With glDrawElements() , the vertices for each face of the cube can be placed in an array of indices. Example 2-11 shows two ways to use glDrawElements() to render the cube. Figure 2-15 shows the numbering of the vertices used in Example 2-11.

Figure 2-15 : Cube with Numbered Vertices

Example 2-11 : Two Ways to Use glDrawElements()

Or better still, crunch all the indices together:
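A sketch of the crunched-together form, using one plausible numbering of the cube's eight vertices (the index values are an assumption, keyed to Figure 2-15):

```c
static GLubyte allIndices[] = { 4, 5, 6, 7,   1, 2, 6, 5,
                                0, 1, 5, 4,   0, 3, 2, 1,
                                0, 4, 7, 3,   2, 3, 7, 6 };

/* draw all six faces (24 indices) with a single call */
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, allIndices);
```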

Note: It is an error to encapsulate glDrawElements() between a glBegin() / glEnd() pair.

With both glArrayElement() and glDrawElements() , it is also possible that your OpenGL implementation caches recently processed vertices, allowing your application to "share" or "reuse" vertices. Take the aforementioned cube, for example, which has six faces (polygons) but only eight vertices. Each vertex is used by exactly three faces. Without glArrayElement() or glDrawElements() , rendering all six faces would require processing twenty-four vertices, even though sixteen vertices would be redundant. Your implementation of OpenGL may be able to minimize redundancy and process as few as eight vertices. (Reuse of vertices may be limited to all vertices within a single glDrawElements() call or, for glArrayElement() , within one glBegin() / glEnd() pair.)

Dereference a Sequence of Array Elements

While glArrayElement() and glDrawElements() "hop around" your data arrays, glDrawArrays() plows straight through them.

void glDrawArrays (GLenum mode , GLint first , GLsizei count ) Constructs a sequence of geometric primitives using array elements starting at first and ending at first + count -1 of each enabled array. mode specifies what kinds of primitives are constructed and is one of the same values accepted by glBegin(); for example, GL_POLYGON, GL_LINE_LOOP, GL_LINES, GL_POINTS, and so on.

The effect of glDrawArrays() is almost the same as this command sequence:
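In outline, the sequence walks the enabled arrays from first onward:

```c
glBegin(mode);
for (i = 0; i < count; i++)
    glArrayElement(first + i);
glEnd();
```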

As is the case with glDrawElements() , glDrawArrays() also performs error checking on its parameter values and leaves the current RGB color, color index, normal coordinates, texture coordinates, and edge flag with indeterminate values if the corresponding array has been enabled.

Interleaved Arrays

Earlier in this chapter (in "Stride"), the special case of interleaved arrays was examined. In that section, the array intertwined , which interleaves RGB color and 3D vertex coordinates, was accessed by calls to glColorPointer() and glVertexPointer() . Careful use of stride helped properly specify the arrays.

There is also a behemoth routine, glInterleavedArrays() , that can specify several vertex arrays at once. glInterleavedArrays() also enables and disables the appropriate arrays (so it combines both Steps 1 and 2). The array intertwined exactly fits one of the fourteen data interleaving configurations supported by glInterleavedArrays() . So to specify the contents of the array intertwined into the RGB color and vertex arrays and enable both arrays, call
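For an array of 3 float color components followed by 3 float vertex coordinates per vertex, the matching format constant is GL_C3F_V3F, so the call is (a sketch):

```c
glInterleavedArrays(GL_C3F_V3F, 0, intertwined);
```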

This call to glInterleavedArrays() enables the GL_COLOR_ARRAY and GL_VERTEX_ARRAY arrays. It disables the GL_INDEX_ARRAY, GL_TEXTURE_COORD_ARRAY, GL_NORMAL_ARRAY, and GL_EDGE_FLAG_ARRAY.

This call also has the same effect as calling glColorPointer() and glVertexPointer() to specify the values for six vertices into each array. Now you are ready for Step 3: Calling glArrayElement() , glDrawElements() , or glDrawArrays() to dereference array elements.

void glInterleavedArrays (GLenum format , GLsizei stride , void * pointer ) Initializes all six arrays, disabling arrays that are not specified in format , and enabling the arrays that are specified. format is one of 14 symbolic constants, which represent 14 data configurations; Table 2-5 displays format values. stride specifies the byte offset between consecutive vertexes. If stride is 0, the vertexes are understood to be tightly packed in the array. pointer is the memory address of the first coordinate of the first vertex in the array.

Note that glInterleavedArrays() does not support edge flags.

The mechanics of glInterleavedArrays() are intricate and require reference to Example 2-12 and Table 2-5. In that example and table, you'll see et, ec, and en, which are the boolean values for the enabled or disabled texture coordinate, color, and normal arrays, and you'll see st, sc, and sv, which are the sizes (number of components) for the texture coordinate, color, and vertex arrays. tc is the data type for RGBA color, which is the only array that can have non-float interleaved values. pc, pn, and pv are the calculated strides for jumping over individual color, normal, and vertex values, and s is the stride (if one is not specified by the user) to jump from one array element to the next.

The effect of glInterleavedArrays() is the same as calling the command sequence in Example 2-12 with many values defined in Table 2-5. All pointer arithmetic is performed in units of sizeof (GL_UNSIGNED_BYTE).

Example 2-12 : Effect of glInterleavedArrays(format, stride, pointer)

In Table 2-5, T and F are True and False. f is sizeof (GL_FLOAT). c is 4 times sizeof (GL_UNSIGNED_BYTE), rounded up to the nearest multiple of f.


What Is GIS?

Definition of a Geographic Information System: A system of hardware, software, and procedures designed to support the capture, management, manipulation, analysis, modeling and display of spatially-referenced data for solving complex planning and management problems that involve data which is spatially referenced to the earth.

A geographic information system (GIS) is a computer-based tool for mapping and analyzing things that exist and events that happen on earth. GIS technology integrates common database operations such as query and statistical analysis with the unique visualization and geographic analysis benefits offered by maps. These abilities distinguish GIS from other information systems and make it valuable to a wide range of public and private enterprises for explaining events, predicting outcomes, and planning strategies. The major challenges we face in the world today--overpopulation, pollution, deforestation, natural disasters--have a critical geographic dimension.

Components of a GIS:

A working GIS integrates five key components: hardware, software, data, people, and methods.

Hardware: Hardware is the computer on which a GIS operates. Today, GIS software runs on a wide range of hardware types, from centralized computer servers to desktop computers used in stand-alone or networked configurations.

Software: GIS software provides the functions and tools needed to store, analyze, and display geographic information. Key software components are

    • Tools for the input and manipulation of geographic information
    • A database management system (DBMS)
    • Tools that support geographic query, analysis, and visualization
    • A graphical user interface (GUI) for easy access to tools

    Data: Possibly the most important component of a GIS is the data. Geographic data and related tabular data can be collected in-house or purchased from a commercial data provider. A GIS will integrate spatial data with other data resources and can even use a DBMS, used by most organizations to organize and maintain their data, to manage spatial data.

    People: GIS technology is of limited value without the people who manage the system and develop plans for applying it to real-world problems. GIS users range from technical specialists who design and maintain the system to those who use it to help them perform their everyday work.

    Methods: A successful GIS operates according to a well-designed plan and business rules, which are the models and operating practices unique to each organization.

    How GIS Works:

    A GIS stores information about the world as a collection of thematic layers that can be linked together by geography. This simple but extremely powerful and versatile concept has proven invaluable for solving many real-world problems from tracking delivery vehicles, to recording details of planning applications, to modeling global atmospheric circulation.

    Geographic References: Geographic information contains either an explicit geographic reference, such as a latitude and longitude or national grid coordinate, or an implicit reference such as an address, postal code, census tract name, forest stand identifier, or road name. An automated process called geocoding is used to create explicit geographic references (multiple locations) from implicit references (descriptions such as addresses). These geographic references allow you to locate features, such as a business or forest stand, and events, such as an earthquake, on the earth's surface for analysis.

    Vector and Raster Models: Geographic information systems work with two fundamentally different types of geographic models--the "vector" model and the "raster" model. In the vector model, information about points, lines, and polygons is encoded and stored as a collection of x,y coordinates. The location of a point feature, such as a bore hole, can be described by a single x,y coordinate. Linear features, such as roads and rivers, can be stored as a collection of point coordinates. Polygonal features, such as sales territories and river catchments, can be stored as a closed loop of coordinates.

    The vector model is extremely useful for describing discrete features, but less useful for describing continuously varying features such as soil type or accessibility costs for hospitals. The raster model has evolved to model such continuous features. A raster image comprises a collection of grid cells rather like a scanned map or picture. Both the vector and raster models for storing geographic data have unique advantages and disadvantages. Modern GISs are able to handle both models.

    The GIS Process:

    • Input
    • Manipulation
    • Management
    • Query and Analysis
    • Visualization


    Input: Before geographic data can be used in a GIS, the data must be converted into a suitable digital format. The process of converting data from paper maps into computer files is called digitizing.

    Modern GIS technology can automate this process fully for large projects using scanning technology; smaller jobs may require some manual digitizing (using a digitizing table). Today many types of geographic data already exist in GIS-compatible formats. These data can be obtained from data suppliers and loaded directly into a GIS.

    Manipulation: It is likely that data types required for a particular GIS project will need to be transformed or manipulated in some way to make them compatible with your system. For example, geographic information is available at different scales (detailed street centerline files; less detailed census boundaries and postal codes at a regional level). Before this information can be integrated, it must be transformed to the same scale (degree of detail or accuracy). This could be a temporary transformation for display purposes or a permanent one required for analysis. GIS technology offers many tools for manipulating spatial data and for weeding out unnecessary data.

    Management: For small GIS projects it may be sufficient to store geographic information as simple files. However, when data volumes become large and the number of data users becomes more than a few, it is often best to use a database management system (DBMS) to help store, organize, and manage data. A DBMS is nothing more than computer software for managing a database.

    There are many different designs of DBMSs, but in GIS the relational design has been the most useful. In the relational design, data are stored conceptually as a collection of tables. Common fields in different tables are used to link them together. This surprisingly simple design has been so widely used primarily because of its flexibility and very wide deployment in applications both within and without GIS.

    Query and Analysis: Once you have a functioning GIS containing your geographic information, you can begin to ask questions such as:

    • Who owns the land parcel on the corner?
    • How far is it between two places?
    • Where is land zoned for industrial use?
    • Where are all the sites suitable for building new houses?
    • What is the dominant soil type for oak forest?
    • If I build a new highway here, how will traffic be affected?

    GIS provides both simple point-and-click query capabilities and sophisticated analysis tools to provide timely information to managers and analysts alike. GIS technology really comes into its own when used to analyze geographic data to look for patterns and trends and to undertake "what if" scenarios. Modern GISs have many powerful analytical tools, but two are especially important.

    Proximity Analysis:

    • How many houses lie within 100 m of this water main?
    • What is the total number of customers within 10 km of this store?
    • What proportion of the alfalfa crop is within 500 m of the well?

    To answer such questions, GIS technology uses a process called buffering to determine the proximity relationship between features.
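A toy sketch of the buffering idea, outside any GIS: with planar coordinates (all values invented), select the houses whose distance to a water main, modeled as a line segment, is at most 100 m. Testing distance against the buffer radius is equivalent to testing containment in the buffered geometry.

```python
from math import hypot

def dist_point_segment(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to segment (ax, ay)-(bx, by)."""
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))          # clamp projection to the segment
    return hypot(px - (ax + t * dx), py - (ay + t * dy))

# Hypothetical data: a water main from (0, 0) to (200, 0) and four houses.
houses = {"A": (50, 40), "B": (150, 120), "C": (210, 30), "D": (-80, 10)}
inside = sorted(name for name, (x, y) in houses.items()
                if dist_point_segment(x, y, 0, 0, 200, 0) <= 100)
print(inside)  # ['A', 'C', 'D']
```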

    Overlay Analysis:

    The integration of different data layers involves a process called overlay. At its simplest, this could be a visual operation, but analytical operations require one or more data layers to be joined physically. This overlay, or spatial join, can integrate data on soils, slope, and vegetation, or land ownership with tax assessment.
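As a minimal illustration of one overlay operation, here is a Python sketch that assigns each sample point the attribute of the (invented) soil polygon containing it, using the even-odd ray-casting rule; a real spatial join also intersects polygon layers with each other, which is considerably more involved.

```python
def contains(poly, x, y):
    """Even-odd rule point-in-polygon test (poly is a list of vertices)."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the ray's y
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# Hypothetical soil polygons and sample points.
soils = {"clay": [(0, 0), (4, 0), (4, 4), (0, 4)],
         "sand": [(4, 0), (8, 0), (8, 4), (4, 4)]}
samples = {"s1": (1, 1), "s2": (6, 2), "s3": (9, 9)}

# Spatial join: tag each point with the soil type of its containing polygon.
joined = {name: next((soil for soil, poly in soils.items()
                      if contains(poly, x, y)), None)
          for name, (x, y) in samples.items()}
print(joined)  # {'s1': 'clay', 's2': 'sand', 's3': None}
```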


    Visualization: For many types of geographic operation the end result is best visualized as a map or graph. Maps are very efficient at storing and communicating geographic information. While cartographers have created maps for millennia, GIS provides new and exciting tools to extend the art and science of cartography. Map displays can be integrated with reports, three-dimensional views, photographic images, and other output such as multimedia.

    Related Technology: GISs are closely related to several other types of information systems, but it is the ability to manipulate and analyze geographic data that sets GIS technology apart. Although there are no hard and fast rules about how to classify information systems, the following discussion should help differentiate GIS from desktop mapping, computer-aided design (CAD), remote sensing, DBMS, and global positioning systems (GPS) technologies.

    Desktop Mapping:

    A desktop mapping system uses the map metaphor to organize data and user interaction. The focus of such systems is the creation of maps: the map is the database. Most desktop mapping systems have more limited data management, spatial analysis, and customization capabilities. Desktop mapping systems operate on desktop computers such as PCs, Macintoshes, and smaller UNIX workstations.

    CAD:

    CAD systems evolved to create designs and plans of buildings and infrastructure. This activity required that components of fixed characteristics be assembled to create the whole structure. These systems require few rules to specify how components can be assembled and have very limited analytical capabilities. CAD systems have been extended to support maps but typically have limited utility for managing and analyzing large geographic databases.

    Remote Sensing and GPS:

    Remote sensing is the art and science of making measurements of the earth using sensors such as cameras carried on airplanes, GPS receivers, or other devices. These sensors collect data in the form of images, and remote sensing systems provide specialized capabilities for manipulating, analyzing, and visualizing those images. Lacking strong geographic data management and analytical operations, they cannot be called true GISs.

    Database management systems specialize in the storage and management of all types of data including geographic data. DBMSs are optimized to store and retrieve data and many GISs rely on them for this purpose. They do not have the analytic and visualization tools common to GIS.

    Why Geography Matters To Local Governments:

    State and local governments are increasingly required to streamline business practices while adhering to complex regulatory requirements. To do so, they must digest an immense amount of information to perform their duties in a fair and sound manner. Almost all of this information is in some way tied to a geographic element such as an address, parcel, postal code, Census block, or some other component.

    GIS technology provides a flexible set of tools to perform the diverse functions of government by providing the data management tools to help accomplish the gargantuan task of managing all this geographic-based information. More important, GIS technology makes data sharing among departments and other agencies easy so that the government can work as a single enterprise.

    Display hexagon grid to visualize Langton's ant

    I am looking to recreate the following image from this reference,

    using Mathematica's Polygon documentation under "Applications" as a starting point. I want to eventually use Mathematica to visualize the evolution of a multi-colored Langton's ant on a hexagonal grid (not too important). I am working to create the z = 0 row (shown in the above image as blue 0's) using Polygon and Graphics. I generate a hexagon using Mathematica's example with a Pi/6 rotation as follows:

    to create a polygon centered at the origin with side-length 1, rotated appropriately. I then look to create a row of these polygons evenly spaced so that their sides are touching, as in the above image. For this I am thinking that each center will be 2r away from the adjacent centers, where r is defined as the length from the center point to the center of a side and is Sqrt[3]/2 * t, where t is the side length as defined on Wikipedia. Therefore, I am trying to create hexagons where the ith hexagon is Sqrt[3] * i away from <0,0>. To accomplish this I have the following code

    which produces the following output

    I think that my math is "solid" here in how I want to lay out the polygons, but I cannot seem to get them into the right configuration. How can I get my hexagons to touch at the edges in a row, where I create each polygon based on where its center point should be (which I'd calculate from the side length of each hexagon)?

    Thank you in advance! I am not proficient in Mathematica so I believe my error to be how I'm programming but it could be that I've missed something obvious in the problem and my code is correct :)
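For what it's worth, the spacing math in the question checks out. Here is a small Python sketch (not Mathematica) that builds a row of hexagons of side t, rotated Pi/6 so a flat side faces each neighbor, with centers Sqrt[3]*t apart, and verifies that adjacent hexagons share an edge (two coincident vertices):

```python
from math import cos, sin, pi, sqrt

def hexagon(cx, cy, t=1.0):
    """Vertices of a regular hexagon of side t, rotated pi/6 (pointy-top)."""
    return [(cx + t * cos(pi / 6 + k * pi / 3),
             cy + t * sin(pi / 6 + k * pi / 3)) for k in range(6)]

t = 1.0
# Centers sqrt(3)*t apart along the row, i.e. twice the apothem sqrt(3)/2*t.
row = [hexagon(i * sqrt(3) * t, 0.0, t) for i in range(3)]

def shared(a, b, eps=1e-9):
    """Count vertices that two hexagons have in common (2 means a shared edge)."""
    return sum(1 for (x1, y1) in a for (x2, y2) in b
               if abs(x1 - x2) < eps and abs(y1 - y2) < eps)

print([shared(row[i], row[i + 1]) for i in range(2)])  # [2, 2]
```

So the layout formula is right; if the Mathematica version does not touch at the edges, the bug is likely in how the rotation or the center offset is applied, not in the geometry.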


    Specific heat capacity (J kg⁻¹ K⁻¹)

    Specific heat capacity is the amount of energy needed to change the temperature of a kilogram of a substance by 1 K.
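The definition implies Q = m · c · ΔT. A quick sanity check (using an approximate value for water, c ≈ 4184 J kg⁻¹ K⁻¹):

```python
# Energy to warm 2 kg of water by 3 K, with c for water assumed ~4184.
m, c, dT = 2.0, 4184.0, 3.0      # kg, J kg^-1 K^-1, K
Q = m * c * dT
print(Q)  # 25104.0 (joules)
```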

    Young's modulus: A measure of the stiffness of a substance. It provides a measure of how difficult it is to extend a material, with a value given by the ratio of tensile stress to tensile strain.

    Shear modulus: A measure of how difficult it is to deform a material. It is given by the ratio of the shear stress to the shear strain.

    Bulk modulus: A measure of how difficult it is to compress a substance. It is given by the ratio of the pressure on a body to the fractional decrease in volume.

    Vapour pressure: A measure of the propensity of a substance to evaporate. It is defined as the equilibrium pressure exerted by the gas produced above a substance in a closed system.



    Surveys and Expository Papers

      An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, August 1994. Abstract, PostScript (1,716k, 58 pages), PDF (516k, 58 pages), PostScript of classroom figures (1,409k, 37 pages). PDF of classroom figures (394k, 37 pages). This report is an exercise in trying to make a difficult subject as transparent and easy to understand as humanly possible. It includes sixty-six illustrations and as much intuition as I can provide. How could fifteen lines of pseudocode take fifty pages to explain?
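For readers who want to see those fifteen-or-so lines of pseudocode in executable form, here is a minimal, unpreconditioned CG solver in Python for a symmetric positive-definite system Ax = b. This is a textbook sketch, not code from the report, and the small test system is invented.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x (x = 0)
    d = r[:]                                  # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ad = [sum(A[i][j] * d[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(d[i] * Ad[i] for i in range(n))   # step length
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:                      # squared residual small enough
            break
        d = [r[i] + (rs_new / rs) * d[i] for i in range(n)]  # new direction
        rs = rs_new
    return x

# A small SPD example; the exact solution is [2, -2].
A = [[3.0, 2.0], [2.0, 6.0]]
b = [2.0, -8.0]
x_sol = conjugate_gradient(A, b)
print(x_sol)  # converges to [2.0, -2.0] (exact in two iterations here)
```

In exact arithmetic CG on an n-by-n system terminates in at most n iterations, which is why this 2-by-2 example converges in two steps.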

    Delaunay Mesh Generation

    Our book is a thorough guide to Delaunay refinement algorithms that are mathematically guaranteed to generate meshes with high quality, including triangular meshes in the plane, tetrahedral volume meshes, and triangular surface meshes embedded in three dimensions. It is also the most complete guide available to Delaunay triangulations and algorithms for constructing them. The book has its own web page with more details.

    • 1. Introduction.
    • 2. Two-dimensional Delaunay triangulations.
    • 3. Algorithms for constructing Delaunay triangulations.
    • 4. Three-dimensional Delaunay triangulations.
    • 5. Algorithms for constructing Delaunay triangulations in R^3.
    • 6. Delaunay refinement in the plane.
    • 7. Voronoi diagrams and weighted complexes.
    • 8. Tetrahedral meshing of PLCs.
    • 9. Weighted Delaunay refinement for PLCs with small angles.
    • 10. Sliver exudation.
    • 11. Refinement for sliver exudation.
    • 12. Smooth surfaces and point samples.
    • 13. Restricted Delaunay triangulations of surface samples.
    • 14. Meshing smooth surfaces and volumes.
    • 15. Meshing piecewise smooth complexes.

    Non-Delaunay Mesh Generation, Dynamic Meshing, and Physically-Based Computer Animation

      François Labelle and Jonathan Richard Shewchuk, Isosurface Stuffing: Fast Tetrahedral Meshes with Good Dihedral Angles, ACM Transactions on Graphics 26(3):57.1–57.10, August 2007. Special issue on Proceedings of SIGGRAPH 2007. PDF (color, 3,530k, 10 pages). The isosurface stuffing algorithm fills an isosurface with a mesh whose dihedral angles are bounded between 10.7° and 164.8°. We're pretty proud of this, because virtually nobody has been able to prove dihedral angle bounds anywhere close to this, except for very simple geometries. Although the tetrahedra at the isosurface must be uniformly sized, the tetrahedra in the interior can be graded. The algorithm is whip fast, numerically robust, and easy to implement because, like Marching Cubes, it generates tetrahedra from a small set of precomputed stencils. The angle bounds are guaranteed by a computer-assisted proof. If the isosurface is a smooth 2-manifold with bounded curvature, and the tetrahedra are sufficiently small, then the boundary of the mesh is guaranteed to be a geometrically and topologically accurate approximation of the isosurface. Unfortunately, the algorithm rounds off sharp corners and edges. (I think it will be extremely hard for anyone to devise an algorithm that provably obtains dihedral angle bounds of this order and conforms perfectly to creases.)

    Streaming Geometric Computation

      Martin Isenburg, Yuanxin Liu, Jonathan Shewchuk, and Jack Snoeyink, Streaming Computation of Delaunay Triangulations, ACM Transactions on Graphics 25(3):1049–1056, July 2006. Special issue on Proceedings of SIGGRAPH 2006. PDF (color, 9,175k, 8 pages). We compute a billion-triangle terrain representation for the Neuse River system from 11.2 GB of LIDAR data in 48 minutes using only 70 MB of memory on a laptop with two hard drives. This is a factor of twelve faster than the previous fastest out-of-core Delaunay triangulation software. We also construct a nine-billion-triangle, 152 GB triangulation in under seven hours using 166 MB of main memory. The main new idea in our streaming Delaunay triangulators is spatial finalization. We partition space into regions, and include finalization tags in the stream that indicate when no more points in the stream will fall in specified regions. Our triangulators certify triangles or tetrahedra as Delaunay when the finalization tags show it is safe to do so. This makes it possible to write them out early, freeing up memory to read more from the input stream. Because only the unfinalized parts of a triangulation are resident in memory, the memory footprint remains small.

    Finite Element Quality

      What Is a Good Linear Finite Element? Interpolation, Conditioning, Anisotropy, and Quality Measures, unpublished preprint, 2002. COMMENTS NEEDED! Help me improve this manuscript. If you read this, please send feedback. PostScript (5,336k, 66 pages), PDF (1,190k, 66 pages). Why are elements with tiny angles harmless for interpolation, but deadly for stiffness matrix conditioning? Why are long, thin elements with angles near 180° terrible in isotropic cases but perfectly acceptable, if they're aligned properly, for anisotropic PDEs whose solutions have anisotropic curvature? Why do elements that are too long and thin sometimes offer unexpectedly accurate PDE solutions? Why can interpolation error, discretization error, and stiffness matrix conditioning sometimes have a three-way disagreement about the aspect ratio and alignment of the ideal element? Why do scale-invariant element quality measures often lead to incorrect conclusions about how to improve a finite element mesh? Why is the popular inradius-to-circumradius ratio such an ineffective quality measure for optimization-based mesh smoothing? All is revealed here.

    Constrained Delaunay Triangulations

    Prior to my work below, the CDT had not been generalized to higher dimensions, and it can never be fully generalized because not every polyhedron has a constrained tetrahedralization (allowing no additional vertices). Here, however, I prove that there is an easily tested condition that guarantees that a polyhedron (or piecewise linear domain) in three or more dimensions does have a constrained Delaunay triangulation. (A domain that satisfies the condition is said to be edge-protected in three dimensions, or ridge-protected in general dimensions.)

    Suppose you want to tetrahedralize a three-dimensional domain. The result implies that if you insert enough extra vertices on the boundary of a polygon to recover its edges in a Delaunay tetrahedralization (in other words, if you make it be edge-protected) then you can recover the polygon's interior for free; that is, you can force the triangular faces of the tetrahedralization to conform to the polygon without inserting yet more vertices. This method of polygon recovery is immediately useful for mesh generation or the interpolation of discontinuous functions. (The result also fills a theoretical hole in my dissertation by showing that it is safe to delete a vertex from a constrained Delaunay tetrahedralization in the circumstances where my "diametral lens" algorithm does so.)

      Jonathan Richard Shewchuk and Brielin C. Brown, Fast Segment Insertion and Incremental Construction of Constrained Delaunay Triangulations, Computational Geometry: Theory and Applications 48(8):554–574, September 2015. PostScript (536k, 29 pages), PDF (310k, 29 pages). Conference version: Proceedings of the Twenty-Ninth Annual Symposium on Computational Geometry (Rio de Janeiro, Brazil), pages 299–308, Association for Computing Machinery, June 2013. PostScript (320k, 10 pages), PDF (213k, 10 pages). The most common way to construct a constrained Delaunay triangulation (CDT) in the plane is to first construct the Delaunay triangulation of the input vertices, then incrementally insert the input segments one by one. We give a randomized algorithm for inserting a segment into a CDT in expected time linear in the number of edges the segment crosses. We implemented it, and we show that it is faster than gift-wrapping for segments that cross many edges. We also show that a simple algorithm for segment location, which precedes segment insertion, is fast enough never to be a bottleneck in CDT construction. A result of Agarwal, Arge, and Yi implies that randomized incremental construction of CDTs by our segment insertion algorithm takes expected O(n log n + n log^2 k) time. We show that this bound is tight by deriving a matching lower bound. Although there are CDT construction algorithms guaranteed to run in O(n log n) time, incremental CDT construction is easier to program and competitive in practice. Note that the symposium paper studies only two-dimensional CDTs, whereas the journal article partly extends the analysis (albeit not the linear-time insertion algorithm) to three dimensions.

    By starting with a Delaunay (or regular) triangulation and incrementally inserting polygons one by one, you can construct the constrained Delaunay (or constrained regular) triangulation of a ridge-protected input in O(nv^(⌈d/2⌉+1) log nv) time, where nv is the number of input vertices and d is the dimensionality. In odd dimensions (including three dimensions, which is what I care about most) this is within a factor of log nv of worst-case optimal. The algorithm is likely to take only O(nv log nv) time in many practical cases. Aimed at both programmers and computational geometers. Discusses the general-dimensional case, but most useful in three dimensions.

    Surface Reconstruction

      Ravikrishna Kolluri, Jonathan Richard Shewchuk, and James F. O'Brien, Spectral Surface Reconstruction from Noisy Point Clouds, Symposium on Geometry Processing 2004 (Nice, France), pages 11–21, Eurographics Association, July 2004. PDF (color, 7,648k, 11 pages). Researchers have put forth several provably good Delaunay-based algorithms for surface reconstruction from unorganized point sets. However, in the presence of undersampling, noise, and outliers, they are neither "provably good" nor robust in practice. Our Eigencrust algorithm uses a spectral graph partitioner to make robust decisions about which Delaunay tetrahedra are inside the surface and which are outside. In practice, the Eigencrust algorithm handles undersampling, noise, and outliers quite well, while giving essentially the same results as the provably good Tight Cocone or Powercrust algorithms on "clean" point sets. (There is no theory in this paper, though.)

    Geometric Robustness

    To make robust geometric tests fast, I propose two new techniques (which can also be applied to other problems of numerical accuracy). First, I develop and prove the correctness of software-level algorithms for arbitrary precision floating-point arithmetic. These algorithms are refinements (especially with regard to speed) of algorithms suggested by Douglas Priest, and are roughly five times faster than the best available competing method when values of small or intermediate precision (hundreds or thousands of bits) are used. Second, I show how simple expressions (whose only operations are addition, subtraction, and multiplication) can be computed adaptively, trading off accuracy and speed as necessary to satisfy an error bound as quickly as possible. (This technique is probably applicable to any exact arithmetic scheme.) I apply these ideas to build fast, correct orientation and incircle tests in two and three dimensions, and to make robust the implementations of two- and three-dimensional Delaunay triangulation in Triangle and Pyramid. Detailed measurements show that in most circumstances, these programs run nearly as quickly when using my adaptive predicates as they do using nonrobust predicates.
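The adaptive idea can be caricatured in a few lines of Python: evaluate the 2-D orientation determinant in fast floating point, and fall back to exact rational arithmetic only when the result is smaller than a crude error bound. This is only a sketch under assumed constants; the paper's expansion arithmetic is far faster and its error bounds far tighter.

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the signed area of triangle abc: +1 ccw, -1 cw, 0 collinear."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Coarse error bound on the floating-point determinant (an assumed
    # constant; the paper derives much tighter, provably correct bounds).
    err = 1e-12 * (abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax)))
    if abs(det) > err:
        return (det > 0) - (det < 0)      # the fast path is trustworthy
    # Fallback: exact arithmetic with rationals (floats convert exactly).
    F = Fraction
    exact = (F(bx) - F(ax)) * (F(cy) - F(ay)) - (F(by) - F(ay)) * (F(cx) - F(ax))
    return (exact > 0) - (exact < 0)

print(orient2d(0.0, 0.0, 1.0, 0.0, 0.5, 1.0))   # 1  (counterclockwise turn)
print(orient2d(0.0, 0.0, 0.5, 0.5, 1.0, 1.0))   # 0  (exactly collinear)
```

The "adaptive" refinement in the work described above goes further: instead of restarting from scratch in exact arithmetic, it reuses the error already computed in the fast evaluation, adding precision only until the sign is certain.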

      Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates, Discrete & Computational Geometry 18(3):305–363, October 1997. PostScript (775k, 55 pages), PDF (556k, 55 pages). Also appears as Chapter 6 of my dissertation.