Buffering selected polygons within same feature class using ArcGIS Pro?


The limitation below applies only to ArcGIS Pro versions 1.0 - 1.2.

When editing in ArcMap, the workflow I used to create a polygon feature with a donut polygon around it in the same feature class was to:

  1. Digitize the polygon
  2. Use Buffer from the pulldown on the Editor menu to create a buffer polygon larger than the original
  3. Select the original polygon and use Clip from the pulldown on the Editor menu to discard any part of the buffer polygon that fell under it.

Is there an equivalent workflow available in ArcGIS Pro?

From what I have seen, the Buffer and Clip geoprocessing tools are still there under ANALYSIS, but these work at the feature class rather than the feature level.

@ChrisW pointed out that there is also a Clip under Modify Features that seems equivalent to (and better than) Editor | Clip but it is the Buffer on the Editor menu pulldown that seems to have gone missing.

I just tested ArcGIS Pro 1.3, and the Buffer tool that I was looking for can now be found:

In the Modify Features pane, Buffer creates polyline or polygon buffer features around selected features at a specified offset distance. The destination layer is specified by choosing a feature template. When you create buffers around two or more features, you can merge the resulting buffers into one nonoverlapping buffer by selecting Dissolve.
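The buffer-then-clip workflow above is easy to reason about for the simple case of an axis-aligned rectangle; a minimal plain-Python sketch (the coordinates are hypothetical, and buffering arbitrary polygons of course needs the GIS tools or a geometry library):

```python
def buffer_rect(rect, d):
    """Expand an axis-aligned rectangle (xmin, ymin, xmax, ymax) by distance d."""
    xmin, ymin, xmax, ymax = rect
    return (xmin - d, ymin - d, xmax + d, ymax + d)

def rect_area(rect):
    xmin, ymin, xmax, ymax = rect
    return (xmax - xmin) * (ymax - ymin)

original = (0.0, 0.0, 10.0, 5.0)       # step 1: the digitized polygon (hypothetical)
buffered = buffer_rect(original, 2.0)  # step 2: buffer larger than the original
# step 3: "clip" discards the part of the buffer under the original polygon,
# leaving only the donut ring; its area is the difference of the two areas
donut_area = rect_area(buffered) - rect_area(original)
print(donut_area)  # 76.0
```

The point is only that the donut is buffer minus original; in Pro the Modify Features Buffer plus Clip pair does the geometric work.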

Polygon Buffering

I have a polygon feature class that represents all communities of Germany.

Now I want to add a buffer to all "border" communities.

The buffer should only extend to the "outside" of Germany, so that the area of each "border" community is extended with "foreign" territory.

I made a freehand drawing to better illustrate my intention.

What would be the best way to achieve this?

I have an ArcGIS Standard License.

Or can you imagine an SQL-based approach (MSSQL) to the problem?
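One way to think about the task: buffer each border community, then erase everything inside the country outline, and merge the remainder back into the community. A toy grid-based sketch of that idea (communities as sets of cells, 4-neighbour dilation standing in for Buffer, set difference for Erase; the shapes are hypothetical):

```python
def dilate(cells, steps=1):
    """Grow a set of grid cells by its 4-neighbours (a crude stand-in for Buffer)."""
    out = set(cells)
    for _ in range(steps):
        out |= {(x + dx, y + dy) for x, y in out
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
    return out

# hypothetical layout: two communities making up the whole "country"
community_a = {(0, 0), (0, 1)}
community_b = {(1, 0), (1, 1)}
country = community_a | community_b

# outward-only buffer: dilate the border community, keep only cells
# that fall outside the country outline, then merge them back in
outside_ring = dilate(community_a) - country
extended_a = community_a | outside_ring
print(sorted(outside_ring))  # [(-1, 0), (-1, 1), (0, -1), (0, 2)]
```

The real-geometry equivalent would be Buffer on the community, Erase (or a difference overlay) against the dissolved country outline, and Merge/Union back onto the community polygon.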

by RichardFairhurst

I agree that the user can use the Project tool to convert from a Geographic Coordinate System to a Projected Coordinate System before using my tool. However, I am not familiar with the best projection choice for Germany, so I was not prepared to do that with this particular data. Anyway, using a Projected Coordinate System would be a requirement of the line network buffer tool I am building at this stage and that requirement will be specified in the help and error messages for the tool by the time I release it.

Thanks for the feedback on the buffers. I spent about 6 hours tweaking the results. Essentially, anywhere the cut between two communities bends, I drew that by hand. Bending cuts are not precisely measured to split between communities exactly 50/50 and are only eyeball accurate in those cases. But since these cuts are within waters (rivers and coastal areas) obviously controlled by Germany and not another country, I was not too concerned about getting better precision.

For straight-line cuts, the methods involved a mixture of tools, some of which require an Advanced license. I had to use the Polygon To Line tool to extract the full community buffer outline and each community's outline as a line. I had to delete all community boundaries that touched another community (easy to do if topology is perfect). The Feature Vertices To Points tool was used to extract points from the ends of each community line for LR processing. I also used the Feature To Polygon tool to combine a set of lines at a normal angle to the buffer outline with the buffer, to cut it into separate polygons. The Feature Vertices To Points tool was used to extract points from the midpoint of each community line, and the Spatial Join tool (Basic license) was used to transfer attributes from the midpoints to the cut-up polygon output of the Feature To Polygon tool.

The cut lines that were at a normal angle to the buffer outline were created with LR (linear referencing) tools and techniques, which work with a Basic license. With these tools I had to categorize the country outlines into groups that would build continuous, non-branching, non-looping portions of the country outline into routes, and then used the Create Routes tool to build simple routes. I used the Locate Features Along Routes tool to create an event table of the community outside end points on the routes. I used the Make Route Event Layer tool to create offset points at 2 km from the Germany outline. I used the Points To Line tool to create a line segment between the point on the line and the LR offset point.

The straight cut lines were all examined, and where the normal angle to the line was not the best fit to the buffer, I manually moved the offset end to a location that I liked. I would estimate I did this for about 15-20% of the cut lines. Moving the ends took longer than using the tools to get the 80% of correct cut lines, but it was necessary.
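The "offset at a normal angle" construction can be sketched without any LR tooling: given the endpoints of a boundary segment and an offset distance, compute the point displaced perpendicular to the segment (all coordinates below are hypothetical):

```python
import math

def normal_offset(p0, p1, origin, dist):
    """Offset `origin` by `dist` along the left-hand normal of segment p0->p1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length   # unit normal to the segment
    return (origin[0] + dist * nx, origin[1] + dist * ny)

# offset the end point of a horizontal boundary segment by 2 km (2000 m)
end_point = (10.0, 0.0)
offset_pt = normal_offset((0.0, 0.0), (10.0, 0.0), end_point, 2000.0)
print(offset_pt)  # (10.0, 2000.0)
```

The cut line is then just the segment from `end_point` to `offset_pt`, which is what Points To Line builds from the two point events.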


Joshua Stevens, Jennifer M. Smith, and Raechel A. Bianchetti (2012), Mapping Our Changing World, Editors: Alan M. MacEachren and Donna J. Peuquet, University Park, PA: Department of Geography, The Pennsylvania State University.

Adapted from DiBiase, David, The Nature of Geographic Information, with contributions by Jim Sloan and Ryan Baxter, John A. Dutton e-Education Institute, College of Earth and Mineral Sciences, The Pennsylvania State University.

This courseware module is part of Penn State's College of Earth and Mineral Sciences' OER Initiative.



arcgis.features.analysis.connect_origins_to_destinations(origins_layer, destinations_layer, measurement_type='DrivingTime', origins_layer_route_id_field=None, destinations_layer_route_id_field=None, time_of_day=None, time_zone_for_time_of_day='GeoLocal', output_name=None, context=None, gis=None, estimate=False, point_barrier_layer=None, line_barrier_layer=None, polygon_barrier_layer=None, future=False, route_shape='FollowStreets', include_route_layers=False)

The Connect Origins to Destinations task measures the travel time or distance between pairs of points. Using this tool, you can:

  • Calculate the total distance or time commuters travel on their home-to-work trips.

  • Measure how far customers are traveling to shop at your stores. Use this information to define your market reach, especially when targeting advertising campaigns or choosing new store locations.

  • Calculate the expected trip mileage for your fleet of vehicles. Afterward, run the Summarize Within tool to report mileage by state or other region.

You provide starting and ending points, and the tool returns a layer containing route lines, including measurements, between the paired origins and destinations.


origins_layer: Required layer. The starting point or points of the routes to be generated. See Feature Input.

destinations_layer: Required layer. The routes end at points in the destinations layer. See Feature Input.

measurement_type: Required string. The origins and destinations can be connected by measuring straight-line distance, or by measuring travel time or travel distance along a street network using various modes of transportation known as travel modes.

Valid values are the string StraightLine, which indicates that Euclidean distance is to be used as the distance measure, or a Python dictionary representing the settings for a travel mode.

When using a travel mode for the measurement_type, you need to specify a dictionary containing the settings for a travel mode supported by your organization. The code in the example section below generates a valid Python dictionary and then passes it as the value for the measurement_type parameter.

Supported travel modes: [‘Driving Distance’, ‘Driving Time’, ‘Rural Driving Distance’, ‘Rural Driving Time’, ‘Trucking Distance’, ‘Trucking Time’, ‘Walking Distance’, ‘Walking Time’]

origins_layer_route_id_field: Optional string. Specify the field in the origins layer containing the IDs that pair origins with destinations.

The ID values must uniquely identify points in the origins layer.

Each ID value must also correspond with exactly one route ID value in the destinations layer. Route IDs that match across the layers create origin-destination pairs, which the tool connects together.

Specifying origins_layer_route_id_field is optional when there is exactly one point feature in the origins or destinations layer. The tool will connect all origins to the one destination or the one origin to all destinations, depending on which layer contains one point.

destinations_layer_route_id_field: Optional string. Specify the field in the destinations layer containing the IDs that pair origins with destinations.

The ID values must uniquely identify points in the destinations layer.

Each ID value must also correspond with exactly one route ID value in the origins layer. Route IDs that match across the layers create origin-destination pairs, which the tool connects together.

Specifying destinations_layer_route_id_field is optional when there is exactly one point feature in the origins or destinations layer. The tool will connect all origins to the one destination or the one origin to all destinations, depending on which layer contains one point.
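The pairing semantics described above can be sketched in plain Python: route IDs that match across the two layers form origin-destination pairs, and a single origin (or destination) is connected to every point on the other side. The field values below are hypothetical:

```python
def pair_by_route_id(origins, destinations):
    """Pair origins with destinations on matching route IDs; if one side has a
    single point, connect it to every point on the other side."""
    if len(origins) == 1:
        return [(origins[0]["id"], d["id"]) for d in destinations]
    if len(destinations) == 1:
        return [(o["id"], destinations[0]["id"]) for o in origins]
    dest_by_route = {d["route_id"]: d["id"] for d in destinations}
    return [(o["id"], dest_by_route[o["route_id"]])
            for o in origins if o["route_id"] in dest_by_route]

origins = [{"id": "O1", "route_id": "A"}, {"id": "O2", "route_id": "B"}]
destinations = [{"id": "D1", "route_id": "B"}, {"id": "D2", "route_id": "A"}]
print(pair_by_route_id(origins, destinations))  # [('O1', 'D2'), ('O2', 'D1')]
```

This is only the matching logic; the task itself also computes the route geometry and measurements for each pair.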

time_of_day: Optional datetime.datetime. Specify whether travel times should consider traffic conditions. To use traffic in the analysis, set measurement_type to a travel mode object whose impedance_attribute_name property is set to travel_time and assign a value to time_of_day. (Travel modes with other impedance_attribute_name values don't support traffic.) The time_of_day value represents the time at which travel begins, or departs, from the origin points. The time is specified as datetime.datetime.

The service supports two kinds of traffic: typical and live. Typical traffic references travel speeds that are made up of historical averages for each five-minute interval spanning a week. Live traffic retrieves speeds from a traffic feed that processes phone probe records, sensors, and other data sources to record actual travel speeds and predict speeds for the near future.

The data coverage page shows the countries Esri currently provides traffic data for.

To ensure the task uses typical traffic in locations where it is available, choose a time and day of the week, and then convert the day of the week to one of the following dates from 1990:

Set the time and date as datetime.datetime.

For example, to solve for 1:03 p.m. on Thursdays, set the time and date to 1:03 p.m., 4 January 1990, and convert to datetime, e.g. datetime.datetime(1990, 1, 4, 13, 3).

To use live traffic when and where it is available, choose a time and date and convert to datetime.

Esri saves live traffic data for 12 hours and references predictive data extending 12 hours into the future. If the time and date you specify for this parameter is outside the 24-hour time window, or the travel time in the analysis continues past the predictive data window, the task falls back to typical traffic speeds.

Examples: from datetime import datetime

"time_of_day": datetime(1990, 1, 4, 13, 3) # 13:03, 4 January 1990. Typical traffic on Thursdays at 1:03 p.m.

"time_of_day": datetime(1990, 1, 7, 17, 0) # 17:00, 7 January 1990. Typical traffic on Sundays at 5:00 p.m.

"time_of_day": datetime(2014, 10, 22, 8, 0) # 8:00, 22 October 2014. If the current time is between 8:00 p.m., 21 Oct. 2014 and 8:00 p.m., 22 Oct. 2014, live traffic speeds are referenced in the analysis; otherwise, typical traffic speeds are referenced.

"time_of_day": datetime(2015, 3, 18, 10, 20) # 10:20, 18 March 2015. If the current time is between 10:20 p.m., 17 Mar. 2015 and 10:20 p.m., 18 Mar. 2015, live traffic speeds are referenced in the analysis; otherwise, typical traffic speeds are referenced.
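The fixed 1990 dates work because 1 January 1990 fell on a Monday, so each day of the week maps onto one date in that week; a small sketch of the conversion using only standard datetime arithmetic:

```python
from datetime import datetime, timedelta

def typical_traffic_datetime(weekday, hour, minute):
    """Map a weekday (0=Monday .. 6=Sunday) and a time of day onto the
    corresponding date in the week of 1 January 1990 (a Monday)."""
    return datetime(1990, 1, 1, hour, minute) + timedelta(days=weekday)

# Thursday at 1:03 p.m. -> 4 January 1990, 13:03
thursday = typical_traffic_datetime(3, 13, 3)
print(thursday)            # 1990-01-04 13:03:00
print(thursday.weekday())  # 3 (Thursday)
```

Passing such a 1990 datetime as time_of_day requests typical traffic; any other date requests live traffic where available.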

time_zone_for_time_of_day: Optional string. Specify the time zone or zones of the time_of_day parameter. Choice list: ['GeoLocal', 'UTC']

GeoLocal: refers to the time zone in which the origins_layer points are located.

UTC: refers to Coordinated Universal Time.

include_route_layers: Optional Boolean. When include_route_layers is set to True, each route from the result is also saved as a route layer item. A route layer includes all the information for a particular route, such as the stops assigned to the route as well as the travel directions. Creating route layers is useful if you want to share individual routes with other members of your organization. The route layers use the output feature service name provided in the output_name parameter as a prefix, and the route name generated as part of the analysis is added to create a unique name for each route layer.

Caution: Route layers cannot be created when the output is a feature collection. The task will raise an error if output_name is not specified (which indicates feature collection output) and include_route_layers is True.

The maximum number of route layers that can be created is 1,000. If the result contains more than 1,000 routes and include_route_layers is True, the task will only create the output feature service.

output_name: Optional string. If provided, the task will create a feature layer of the results. You define the name of the layer. If output_name is not supplied, the task will return a feature collection.

context: Optional string. Additional settings such as processing extent and output spatial reference. For connect_origins_to_destinations, there are two settings.

Extent (extent): A bounding box that defines the analysis area. Only those points in the origins_layer and destinations_layer that intersect the bounding box will be analyzed.

Output Spatial Reference (outSR): If the output is a feature service, the spatial reference will be the same as origins_layer; setting outSR for feature services has no effect. If the output is a feature collection, the features will be in the spatial reference of the outSR value, or the spatial reference of origins_layer when outSR is not specified.

gis: Optional. The GIS on which this tool runs. If not specified, the active GIS is used.

estimate: Optional Boolean. If True, the number of credits needed to run the operation will be returned as a float.

point_barrier_layer: Optional layer. Specify one or more point features that act as temporary restrictions (in other words, barriers) when traveling on the underlying streets.

A point barrier can model a fallen tree, an accident, a downed electrical line, or anything that completely blocks traffic at a specific position along the street. Travel is permitted on the street but not through the barrier. See Feature Input .

line_barrier_layer: Optional layer. Specify one or more line features that prohibit travel anywhere the lines intersect the streets.

A line barrier prohibits travel anywhere the barrier intersects the streets. For example, a parade or protest that blocks traffic across several street segments can be modeled with a line barrier. See Feature Input .

polygon_barrier_layer: Optional layer. Specify one or more polygon features that completely restrict travel on the streets intersected by the polygons.

One use of this type of barrier is to model floods covering areas of the street network and making road travel there impossible. See Feature Input .

future: Optional Boolean. If True, the result will be a GPJob object, and results will be returned asynchronously.

route_shape: Optional string. Specify the shape of the route that connects each origin to its destination when using a travel mode.

Values: FollowStreets or StraightLine

  • FollowStreets - The shape is based on the underlying street network. This option is best when you want to generate the routes between origins and destinations. This is the default value when using a travel mode.

  • StraightLine - The shape is a straight line connecting the origin-destination pair. This option is best when you want to generate spider diagrams or desire lines (for example, to show which stores customers are visiting). This is the default value when not using a travel mode.

The best route between an origin and its matched destination is always calculated based on the travel mode, regardless of which route shape is chosen.

Returns: dict with the following keys:

"routes_layer": layer (FeatureCollection)

"unassigned_origins_layer": layer (FeatureCollection)

"unassigned_destinations_layer": layer (FeatureCollection)

Labeling a feature class based on proximity to another

So I’ve got two feature classes, one simple polygon of a boundary around a specific area, and another line feature of street centerlines.

Anyway, I'd like to specifically label all the streets which touch (or are within a certain proximity to) the outer boundary of the polygon. This is mostly a learning exercise, but I've been trying and failing to write a label expression to accomplish this. Any ideas?

FWIW this is in ArcGIS Pro 2.4.

If it were me, I would probably label all the streets and then adjust the settings so that only the streets which are within/intersect your polygon are displayed. But that depends on whether it's just one polygon or many, and whether you are using data driven pages.

If you have an advanced license, use the Near tool to calculate the distance between your streets and polygons. Then use a label expression to add labels based on distance.

If you don't have Advanced, you could Select By Location and flag, in the attribute table, the streets that satisfy your query. Then add a label expression based on that flag.

Do you have any examples of labeling based on distance with an expression? I’ve been trying that and it hasn’t worked so far.

So going by memory here as I'm not at work right now, here's a few places I would start looking:

Select By Location: can you select from your streets layer only the ones that intersect or are within a certain distance of the boundary? (I know it'll do intersect, but that usually includes the fill; I would dig through the options there.) If that works, once you have them selected, click the field, then "Calculate Field", then type the value you want as a string ("thing") and it will fill them all out.

Similar concept, but to force it to think of the boundary only: I'd look for a Polygon To Line tool, and try Select By Location with the line (if it allows you to select within a certain distance) or with a buffer of the line (the buffer set to the distance you want).
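The distance- or flag-based label logic suggested in this thread boils down to a simple conditional; a plain-Python sketch of that logic (the field names NAME and NEAR_DIST, and the 100 m threshold, are assumptions — inside ArcGIS Pro this would live in an Arcade or Python label expression):

```python
def find_label(name, near_dist, threshold=100.0):
    """Label only streets within `threshold` of the boundary. The Near tool
    writes -1 to NEAR_DIST when nothing was found within its search radius."""
    if 0 <= near_dist <= threshold:
        return name
    return None  # returning nothing suppresses the label

print(find_label("Main St", 25.0))   # Main St
print(find_label("Back Rd", 450.0))  # None
```

If you went the Select By Location route instead, the same conditional would just test the flag field you calculated rather than a distance.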

More spatial analysis tools

Buffering is an important and often-used spatial analysis tool, but there are many others that can be used in a GIS and explored by the user.

Spatial overlay is a process that allows you to identify the relationships between two polygon features that share all or part of the same area. The output vector layer is a combination of the input features' information (see figure_overlay_operations).

Figure Overlay Operations 1: Spatial overlay with two input vector layers (a_input = rectangle, b_input = circle). The resulting vector layer is displayed in green.

Typical spatial overlay examples are:

  • Intersection: The output layer contains all areas where both layers overlap (intersect).
  • Union: The output layer contains all areas of the two input layers combined.
  • Symmetrical difference: The output layer contains all areas of the input layers except those areas where the two layers overlap (intersect).
  • Difference: The output layer contains all areas of the first input layer that do not overlap (intersect) with the second input layer.
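As a loose analogy, the four overlay operations behave exactly like Python's set algebra if you imagine each layer as the set of cells it covers (the two "layers" below are hypothetical):

```python
# each "layer" is the set of grid cells it covers
a_input = {1, 2, 3, 4}   # rectangle
b_input = {3, 4, 5, 6}   # circle

print(a_input & b_input)  # intersection: {3, 4}
print(a_input | b_input)  # union: {1, 2, 3, 4, 5, 6}
print(a_input ^ b_input)  # symmetrical difference: {1, 2, 5, 6}
print(a_input - b_input)  # difference: {1, 2}
```

Real overlay tools additionally split geometries and combine attribute tables, but the area logic is the same.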

Extending feature classes

Each feature class in the geodatabase is a collection of geographic features with the same geometry type (point, line, or polygon), the same attributes, and the same spatial reference. Feature classes can be extended as needed to achieve a number of objectives. Here are some of the ways that users extend feature classes using the geodatabase and why.

Hold a collection of spatially related feature classes or build topologies, networks, cadastral datasets, and terrains.

Manage a set of feature subclasses in a single feature class. This is often used on feature class tables to manage different behaviors on subsets of the same feature type.

Specify a list of valid values or a range of valid values for attribute columns. Use domains to help ensure the integrity of attribute values. Domains are often used to enforce data classifications (such as road class, zoning codes, and land-use classifications).

Build relationships between feature classes and other tables using a common key. For example, find the related rows in a second table based on rows selected in the feature class.
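The "find related rows by a common key" pattern described above is essentially a keyed lookup; a minimal sketch with hypothetical tables (a parcels feature class related to an owners table through OWNER_ID):

```python
# hypothetical attribute rows: a feature class and a related table sharing OWNER_ID
parcels = [{"parcel": "P1", "OWNER_ID": 7}, {"parcel": "P2", "OWNER_ID": 9}]
owners  = [{"OWNER_ID": 7, "name": "Ada"}, {"OWNER_ID": 9, "name": "Grace"}]

def related_rows(selected, table, key):
    """Return rows in `table` whose key matches any selected feature."""
    keys = {row[key] for row in selected}
    return [row for row in table if row[key] in keys]

selected = [p for p in parcels if p["parcel"] == "P1"]
print(related_rows(selected, owners, "OWNER_ID"))  # [{'OWNER_ID': 7, 'name': 'Ada'}]
```

A geodatabase relationship class formalizes this lookup (with cardinality and editing behavior) instead of leaving it to ad hoc joins.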

Model how features share geometry. For example, adjacent counties share a common boundary. Also, county polygons nest within and completely cover states.

Model transportation connectivity and flow. You must have the Network Analyst extension to ArcGIS Desktop installed.

Model utility networks and tracing.

Model triangulated irregular networks (TINs) and manage large lidar and sonar point collections. You must have the 3D Analyst extension to ArcGIS Desktop installed.

Integrate and maintain survey information for subdivisions and parcel plans as part of a continuous parcel fabric data model in the geodatabase. Also, make incremental accuracy improvements of the parcel fabric as new subdivision plans and parcel descriptions are entered.

Locate events along linear features with measurements.

Manage multiple cartographic representations and advanced cartographic drawing rules.

Manage a number of key GIS workflows for data management, for example, supporting long update transactions, historical archives, and multiuser editing. This requires the use of ArcSDE geodatabases.


Drainage basin delineations for selected USGS streamflow-gaging stations in Virginia (Drainage_Basin) vector digital data

Polygon feature class. Donald C. Hayes, Ute Wiegand.

Drainage Areas of Selected Streams in Virginia Open-File Report OFR 2006-1308

Digital dataset to represent official drainage basins for continuous-record streamflow-gaging stations, partial record streamflow-gaging stations of the U.S. Geological Survey (USGS), and other watercourse locations of interest. The dataset will be used for the update and publication of drainage areas to all USGS stations in Virginia.

Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Geological Survey. Although this Federal Geographic Data Committee-compliant metadata file is intended to document the data set in nonproprietary form, as well as in ARC/INFO format, this metadata file may include some ARC/INFO-specific terminology. 11-01-2006; publication date unknown.

Update frequency: As needed. Bounding coordinates: West -83.550878, East -75.383351, North 39.561497, South 36.057696. ISO 19119 topic category: inlandWaters. Keywords: drainage divide, watershed, drainage area, drainage basin, gaging station.

Geographic Names Information System

Contact: Jennifer L. Krstolic, U.S. Geological Survey Water Science Center; GIS Specialist, Surface Water Specialist. Mailing and physical address: 1730 E Parham Road, Richmond, Virginia, USA. Telephone: 1-800-648-1592, 804-261-2600, 804-261-2659. Email: [email protected]
Each drainage basin colored with a unique value renderer. State borders on top.
The source of these data was a coverage (WBD) that was built and cleaned. The cluster tolerance used was 1.2 meters and the snapping tolerance was 6 meters. As the Drainage_Basin dataset was constructed, topology rules were used in a personal geodatabase to maintain topological integrity. Feature classes involved in the topology included a temporary line feature class, the DrainageDivide line feature class, and the Drainage_Basin polygon feature class. Topology rules used are listed below:

  • Temporary lines Must Not Have Dangles
  • Temporary lines Must Not Intersect or Touch Interior
  • DrainageDivide Must Not Have Dangles
  • DrainageDivide Must Not Intersect or Touch Interior
  • DrainageDivide Must Be Covered by the Boundary of Drainage_Basin
  • Drainage_Basin Must Be Covered by the Boundary of DrainageDivide
  • Temporary lines Must Be Covered by the Feature Class of DrainageDivide
  • DrainageDivide Must Not Have Pseudos
  • DrainageDivide Must Be Single Part

At the time of publication, all active and discontinued continuous-record streamflow-gaging station drainage basins and many partial-record streamflow-gaging station drainage basins were included in this dataset. Additional drainage basins are included for watercourse locations of interest.

WBD lines were originally digitized using contours on USGS 1:24,000 quadrangles as a base. The resulting accuracy of the linework should be consistent with hand-drawn basins on paper maps that were subsequently digitized on-screen. Typically, accuracy is set at 1/2 the contour interval. In Virginia the largest contour interval is 40 feet; half of this interval is 20 feet, or 6 meters.

Virginia's 12-digit hydrologic unit boundaries. Edition: 3. Vector digital data. Hydrologic units, version 3 of sixth order units for Virginia.

These are the new national fifth and sixth order hydrologic units for Virginia. They have been created in compliance with the new Federal Standards for Delineation of Hydrologic Unit Boundaries (1 October 2004) and therefore differ from the existing sixth order (14-digit) hydrologic units of Virginia as developed by DCR and the USDA in 1995 from the previous standards. This dataset covers the whole state and is seamless with surrounding states' NWBD products. Revised first through fifth order units are obtainable from codes in this layer. The Virginia WBD was developed as part of a seamless hydrologic unit product for the nation, to be used for more detailed watershed planning work in the state than can be performed using lower order units. This becomes the official statewide sixth order hydrologic unit delineation for Virginia. Contact: Karl Huber, Virginia Dept. of Conservation & Recreation - DSWC, 203 Governor Street, Suite 206, Richmond, Virginia 23219-2094, USA. Voice: 804 371 7484. Fax: 804 371 2630. Email: [email protected]. Natural Resources Conservation Service (NRCS).

Watershed Boundary Dataset (vector digital data). The Watershed Boundary Dataset is being developed under the leadership of the Subcommittee on Spatial Water Data, which is part of the Advisory Committee on Water Information (ACWI) and the Federal Geographic Data Committee (FGDC). The USDA Natural Resources Conservation Service (NRCS), along with many other federal agencies and national associations, has representatives on the Subcommittee on Spatial Water Data. As watershed boundary geographic information systems (GIS) coverages are completed, statewide and national data layers will be made available via the Geospatial Data Gateway to everyone, including federal, state, and local government agencies, researchers, private companies, utilities, environmental groups, and concerned citizens. The database will assist in planning and describing water use and related land use activities. Publication date: 03-16-2005. WBD: watershed boundary in Virginia; used as a primary source for lines. U.S. Geological Survey.

National Elevation Dataset 1 raster digital data

accessed November 16, 2004 at Geospatial elevation data are utilized by the scientific and resource management communities for global change research, hydrologic modeling, resource monitoring, mapping, and visualization applications. Publication date: 2005. NED data were used for delineation of watersheds as a guide to on-screen digitizing of drainage areas and for checking boundaries previously digitized. No DEM-delineated lines were used as boundaries. EPA

Environmental Protection Agency's (EPA) Reach File (Version 3.0), known as RF3. The stream dataset shows the entire drainage network from small streams to large rivers. Digital vector data. Publication date: 05-01-2004. EPA RF3, a stream network, was used as a guide for on-screen digitizing and selection of WBD lines for inclusion in the Drainage_Basin dataset. USGS

7.5 minute quadrangles (digital image files). The images are in the NAD27 datum and UTM zone 17 or 18 projection. They have collars, but various colors were turned off (not displayed) to make the overlap area visible. Typically white and green were turned off to assist the display of brown contours during digitizing. Dates varied. The digital raster graphics (DRG) served as a guide for on-screen digitizing. The contours were used as a guide for drainage basin delineation. USGS

NWIS Site Information for streamflow-gaging stations (online database). Station location information is available through the online linkage. The locations of discontinued stations or partial-record stations are available through the USGS Virginia Water Science Center. NWIS gages: coordinates for each continuous-record streamflow-gaging station and partial-record streamflow-gaging station of the U.S. Geological Survey are available online in the NWIS database. These coordinates were used to plot outlet points for each drainage basin.

Basemap datasets were derived from the Virginia Watershed Boundary Dataset (WBD), USGS digital quadrangles (DRG), the National Water Information System (NWIS) site information, the National Elevation Dataset (NED), and the Environmental Protection Agency's (EPA) Reach File version 3.0 (RF3). The Virginia WBD (updated in 2005) served as the primary source of linework for this dataset. The contours within the USGS digital quadrangles were relied upon for basin delineation and checking the WBD linework. NED was used to delineate basins above selected point locations from NWIS. The NED-delineated basins and RF3 streams were used as additional guides for on-screen digitizing and selection of WBD lines for inclusion in the Drainage_Divide line feature class and Drainage_Basin polygon feature class.


DEM-basins Drainage_Divide Drainage_Basin

U.S. Geological Survey WSC, Richmond Virginia

To construct one stacked polygon, the following procedure was completed. Lines were selected from the WBD and the Drainage_Divide line feature class, then copied into a temporary line feature class. The lines from the WBD were checked for accuracy against the contours in the DRGs and modified when necessary. The WBD was modified to include tributaries or areas that had inadvertently been excluded, or when the DRG contour lines indicated a peak or ridge line that had not been followed. When one entire drainage basin was represented in the temporary feature class, topology was validated, errors were corrected, and topology was validated again. These lines were copied into a permanent line feature class, Drainage_Divide. Drainage_Divide is not stacked and stores the outlines of all basins one time only.

Lines stored in Drainage_Divide match all existing linework in the Drainage_Basin except for new linework for a polygon. Topology rules indicate a difference in linework when the only error that remains is 'Drainage_Divide Must be Covered by the Boundary of Drainage_Basin'. Polygons are constructed by selecting lines from the temporary feature class and choosing the 'construct features' option from the topology toolbar. This process creates one new polygon in the Drainage_Basin polygon feature class. This process was repeated for each basin, with lines copied from Drainage_Divide, the WBD, and with on-screen digitizing. For every polygon that was created, the boundary was constructed in the temporary line feature class, new lines copied into Drainage_Divide, topology rules run, errors corrected, and new polygon constructed.

Drainage area in square miles was compared between the NWIS published drainage areas for streamflow-gaging stations and the calculated areas from the digitized Drainage_Basin dataset.

Wednesday, December 3, 2014

Lab 4: Vector Analysis with ArcGIS

Goal and Background: The goal of this lab was to identify the most suitable bear habitat in Marquette County, MI, using various geoprocessing tools for vector analysis. These tools are found in ArcToolbox in ArcMap. Determining the most suitable habitat required specific datasets and a sequence of processing steps, described below.

Methods: For the first objective we had to turn the bear_locations_geog$ Excel sheet into an "event theme" in order to map the X,Y coordinates. To do this, the data were added as XY data and the x and y fields were chosen. A projected coordinate system was also needed for the coordinates. Once that was finished, we exported the new data into the lab4 geodatabase as a feature class.
The second objective was to determine the bear habitat. The Landcover, Streams, and Study Area feature classes were added to the map to help visualize the data. Using the new bear-locations feature class and the landcover, I spatially joined the two to create a new feature class called bear cover, which shows the land-cover type in which each bear was recorded.
The third objective was to determine how many bears were found near (within 500 meters of) a stream when they were recorded on the GPS. The first step was to create a 500-meter buffer around the streams. Then the bear locations were clipped to the buffer to see how many bears were near the streams. The result showed that well over 30% of the bears were near a stream when recorded.
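The buffer-and-clip step above amounts to counting points within a fixed distance of line features. A minimal, library-free sketch of that check (the lab itself used the Buffer and Clip tools in ArcToolbox) might look like this; all coordinates are hypothetical projected coordinates in meters:

```python
# Count the fraction of bear sightings within 500 m of any stream segment.
import math

def dist_point_to_segment(p, a, b):
    """Shortest distance from point p to segment ab (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so we measure to the segment, not the line.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def fraction_near_streams(bear_points, stream_segments, radius=500.0):
    near = sum(
        1 for p in bear_points
        if any(dist_point_to_segment(p, a, b) <= radius for a, b in stream_segments)
    )
    return near / len(bear_points)

# Hypothetical data: one stream segment, three bear sightings.
streams = [((0.0, 0.0), (1000.0, 0.0))]
bears = [(100.0, 200.0), (500.0, 450.0), (900.0, 800.0)]
print(fraction_near_streams(bears, streams))  # 2 of 3 points are within 500 m
```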
For the fourth objective we had to find suitable habitat areas for the bears based on the stream buffer and the habitat locations. These two feature classes were intersected to yield the best habitat within 500 meters of a stream.
The fifth objective was to find, for the Michigan DNR, which of its land the bear habitat was a part of. The dnr_mgmt feature class was clipped to the study area and then dissolved to remove the internal boundaries. Finally, the dissolved best-habitat layer was intersected with the dissolved dnr_mgmt land in the study area to create the best habitat on the dnr_mgmt land.
For the last objective, the DNR wanted to exclude all areas within 5 km of Urban or Built-Up land. I selected the Urban or Built-Up land type from the landcover and created a new layer from it, then buffered that layer by 5 km. Last, I used the Erase tool with the buffered urban land and the habitat on the dnr_mgmt land to produce the final suitable bear habitat on the dnr_mgmt land.
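The overlay chain in objectives four through six can be sketched with set algebra: Intersect behaves like set intersection and Erase like set difference. The lab itself ran the corresponding ArcToolbox tools on polygon feature classes; the grid-cell IDs below are made up purely to illustrate the order of operations.

```python
# Hypothetical grid-cell IDs standing in for polygon areas.
stream_buffer  = {1, 2, 3, 4, 5}   # cells within 500 m of a stream
bear_landcover = {2, 3, 4, 6, 7}   # cells in land-cover types with bear sightings
dnr_land       = {3, 4, 5, 6}      # DNR-managed land, clipped to the study area
urban_buffer   = {4, 8, 9}         # cells within 5 km of Urban/Built-Up land

best_habitat   = bear_landcover & stream_buffer   # Intersect: habitat near streams
habitat_on_dnr = best_habitat & dnr_land          # Intersect: restrict to DNR land
final_habitat  = habitat_on_dnr - urban_buffer    # Erase: drop urban-adjacent cells

print(sorted(final_habitat))  # -> [3]
```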

Results: The geoprocessing tools allowed me to produce this map of suitable bear habitat in Marquette County, Michigan. The data flow model documents the sequence of geoprocessing steps that produced the final map.

Field Methods

The aim of this project is to measure the height of trees on the University of Utah (UU) campus. A previous study (Kuhns, 2011) documented the locations and species of 61 trees on campus; that study will guide which trees are included in the field work. How tall are these trees, and are their heights above or below average for their species? Are there any other factors contributing to these heights, such as elevation, aspect, or Utah's climate?

METHODOLOGY/RESEARCH STRATEGY: The GPS locations and tree type of campus trees have already been documented. Using the University of Utah Tree Database (Kuhns, 2011) as a guide, do the following steps for each of the 61 trees (figure below from Henry, 2011):

  1. Locate a tree in the field using the tree database map
  2. One person holds the survey tape on the ground, as close to the base of the tree trunk as possible
  3. The second person walks the tape out in 25 ft increments, based on tree height, until the top of the tree can be seen
  4. Using the clinometer, measure the angle to the top of the tree
  5. Record the distance and the clinometer angle in the field book

These two measurements will be used with basic trigonometry (height from the measured distance and angle) to calculate tree height. These heights will be loaded into ArcGIS, where further analysis will be spatially investigated. A projected coordinate system is necessary because of the distance measurements, so WGS84 UTM zone 11 will suffice. Anticipated results include an average tree height for all trees whose species is native to a climate like Utah's. In addition, tree height might be below average where a tree's location receives limited sun exposure, on a north-facing aspect for example.
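The distance-and-angle measurements above convert to a height with the tangent of the clinometer angle, plus the height of the observer's eye above the ground. This is a minimal sketch; the 5 ft eye height and the sample readings are assumptions, not values from the project:

```python
# Tree height from a horizontal tape distance and a clinometer angle.
import math

def tree_height_ft(distance_ft, angle_deg, eye_height_ft=5.0):
    """Height = distance * tan(angle) + observer eye height (all in feet)."""
    return distance_ft * math.tan(math.radians(angle_deg)) + eye_height_ft

# Hypothetical field reading: 75 ft from the trunk, 40 degrees to the treetop.
print(round(tree_height_ft(75.0, 40.0), 1))
```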
