propagator.utils API Reference¶
Wrappers around Esri’s arcpy library.
This module contains basic file I/O, conversion, and spatial analysis functions to support the python-propagator library. These functions generally are simply wrappers around their arcpy counterparts. This was done so that, in the future, these functions could be replaced with calls to a different geoprocessing library and eventually wean the code base off of its arcpy dependency.
- Geosyntec Consultants, 2015.
Released under the BSD 3-clause license (see LICENSE file for more info)
Written by Paul Hobson (phobson@geosyntec.com)
-
class propagator.utils.Statistic¶
A namedtuple describing how a single column should be aggregated: the source column, the aggregation function, and the name of the result column. Instances of this are passed to rec_groupby.
- srccol¶ Alias for field number 0
- aggfxn¶ Alias for field number 1
- rescol¶ Alias for field number 2
-
-
propagator.utils.
update_status
()¶ Decorator that allows a function to accept additional keyword arguments related to printing status messages to stdout or as arcpy messages.
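A hedged sketch of how the decorator might be applied; the exact status-related keyword names that decorated functions then accept are not documented here and are assumed:
>>> from propagator import utils
>>> @utils.update_status()
... def delineate_floods(dem, zones):
...     ...  # geoprocessing steps go here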
-
class
propagator.utils.
RasterTemplate
(cellsize, xmin, ymin)¶ Georeferencing template for Rasters.
This mimics the attributes of the arcpy.Raster class enough that it can be used as a template to georeference numpy arrays when converting to rasters.
Parameters: cellsize : int or float
The width of the raster’s cells.
xmin, ymin : float
The x- and y-coordinates of the raster’s lower left (south west) corner.
See also
arcpy.Extent
Attributes
cellsize (int or float) The width of the raster’s cells.
extent (Extent) Yet another mock-ish class in which x and y are stored in extent.lowerLeft as an arcpy.Point.
-
classmethod
from_raster
(raster)¶ Alternative constructor to generate a RasterTemplate from an actual raster.
Parameters: raster : arcpy.Raster
The raster whose georeferencing attributes need to be replicated.
Returns: template : RasterTemplate
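A brief, hedged sketch of how a template might be built and then used to georeference a numpy array (some_array is a placeholder, and passing the template’s attributes to arcpy.NumPyArrayToRaster is an assumption about typical usage):
>>> import arcpy
>>> from propagator import utils
>>> template = utils.RasterTemplate(cellsize=4, xmin=1200000.0, ymin=3700000.0)
>>> # or, mirror the georeferencing of an existing raster
>>> template = utils.RasterTemplate.from_raster(arcpy.Raster('C:/data/dem.tif'))
>>> raster = arcpy.NumPyArrayToRaster(some_array, template.extent.lowerLeft,
...                                   template.cellsize, template.cellsize)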
-
-
class
propagator.utils.
EasyMapDoc
(*args, **kwargs)¶ The object-oriented map class Esri should have made.
Create this the same way you would create any other arcpy.mapping.MapDocument. But now, you can directly list and add layers and dataframes. See the two examples below.
Has layers and dataframes attributes that return all of the arcpy.mapping.Layer and arcpy.mapping.DataFrame objects in the map, respectively.
Examples
>>> # Adding a layer with the Esri version:
>>> import arcpy
>>> md = arcpy.mapping.MapDocument('CURRENT')
>>> df = arcpy.mapping.ListDataFrames(md)[0]
>>> arcpy.mapping.AddLayer(df, myLayer, 'TOP')
>>> # And now with an ``EasyMapDoc``:
>>> from propagator import utils
>>> ezmd = utils.EasyMapDoc('CURRENT')
>>> ezmd.add_layer(myLayer)
Attributes
mapdoc (arcpy.mapping.MapDocument) The underlying arcpy MapDocument that serves as the basis for this class. -
layers
¶ All of the layers in the map.
-
dataframes
¶ All of the dataframes in the map.
-
findLayerByName
(name)¶ Finds a layer in the map by searching for an exact match of its name.
Parameters: name : str
The name of the layer you want to find.
Returns: lyr : arcpy.mapping.Layer
The map layer or None if no match is found.
Warning
Group Layers are not returned.
Examples
>>> from propagator import utils
>>> ezmd = utils.EasyMapDoc('CURRENT')
>>> wetlands = ezmd.findLayerByName("wetlands")
>>> if wetlands is not None:
...     # do something with `wetlands`
-
add_layer
(layer, df=None, position='top')¶ Simply adds a layer to a map.
Parameters: layer : str or arcpy.mapping.Layer
The dataset to be added to the map.
df : arcpy.mapping.DataFrame, optional
The specific dataframe to which the layer will be added. If not provided, the data will be added to the first dataframe in the map.
position : str, optional (‘top’)
The position within df where the data will be added. Valid options are: ‘auto_arrange’, ‘bottom’, and ‘top’.
Returns: layer : arcpy.mapping.Layer
The successfully added layer.
Examples
>>> from propagator import utils
>>> ezmd = utils.EasyMapDoc('CURRENT')
>>> ezmd.add_layer(myLayer)
-
-
propagator.utils.
Extension
(*args, **kwds)¶ Context manager to facilitate the use of ArcGIS extensions
Inside the context manager, the extension will be checked out. Once the interpreter leaves the code block by any means (e.g., successful execution, raised exception), the extension will be checked back in.
Examples
>>> import propagator, arcpy
>>> with propagator.utils.Extension("spatial"):
...     arcpy.sa.Hillshade("C:/data/dem.tif")
-
propagator.utils.
OverwriteState
(*args, **kwds)¶ Context manager to temporarily set the overwriteOutput environment variable.
Inside the context manager, arcpy.env.overwriteOutput will be set to the given value. Once the interpreter leaves the code block by any means (e.g., successful execution, raised exception), arcpy.env.overwriteOutput will reset to its original value.
Parameters: overwrite : bool
Whether existing output should be overwritten (True) or not (False).
Examples
>>> import propagator
>>> with propagator.utils.OverwriteState(False):
...     # some operation that should fail if output already exists
-
propagator.utils.
WorkSpace
(*args, **kwds)¶ Context manager to temporarily set the workspace environment variable.
Inside the context manager, arcpy.env.workspace will be set to the given value. Once the interpreter leaves the code block by any means (e.g., successful execution, raised exception), arcpy.env.workspace will reset to its original value.
Parameters: path : str
Path to the directory that will be set as the current workspace.
Examples
>>> import propagator
>>> with propagator.utils.WorkSpace("path/to/workspace"):
...     # some operation relative to the current workspace
-
propagator.utils.
create_temp_filename
(filepath, filetype=None, prefix='_temp_', num=None)¶ Helper function to create temporary filenames to be used before the final output has been generated.
Parameters: filepath : str
The file path/name of what the final output will eventually be.
filetype : str, optional
The type of file to be created. Valid values: “Raster” or “Shape”.
prefix : str, optional (‘_temp_’)
The prefix that will be applied to filepath.
num : int, optional
A file “number” that can be appended to the very end of the filename.
Returns: str
Examples
>>> create_temp_filename('path/to/flooded_wetlands', filetype='shape')
path/to/_temp_flooded_wetlands.shp
-
propagator.utils.
check_fields
(table, *fieldnames, **kwargs)¶ Checks that fields are (or are not) in a table. If the check fails, a ValueError is raised.
Parameters: table : arcpy.mapping.Layer or similar
Any table-like that we can pass to arcpy.ListFields.
*fieldnames : str arguments
Optional string arguments whose existence in the table will be checked.
should_exist : bool, optional (False)
Whether we’re testing for absence (False) or existence (True) of the provided field names.
Returns: None
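A short sketch of how this might be called (the table and field names are illustrative):
>>> from propagator import utils
>>> # raise if 'GeoID' is missing from the table
>>> utils.check_fields("wetlands.shp", "GeoID", should_exist=True)
>>> # raise if a column we are about to add already exists
>>> utils.check_fields("wetlands.shp", "new_column", should_exist=False)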
-
propagator.utils.
result_to_raster
(*args, **kwargs)¶ Gets the actual arcpy.Raster from an arcpy.Result object.
Parameters: result : arcpy.Result
The Result object returned from some other geoprocessing function.
Returns: arcpy.Raster
See also
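A minimal sketch of typical usage; the Clip call is just an illustrative geoprocessing step that returns an arcpy.Result:
>>> import arcpy
>>> from propagator import utils
>>> result = arcpy.management.Clip("dem.tif", "#", "dem_clipped.tif", "mask.shp")
>>> raster = utils.result_to_raster(result)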
-
propagator.utils.
result_to_layer
(*args, **kwargs)¶ Gets the actual arcpy.mapping.Layer from an arcpy.Result object.
Parameters: result : arcpy.Result
The Result object returned from some other geoprocessing function.
Returns: arcpy.mapping.Layer
See also
-
propagator.utils.
load_data
(*args, **kwargs)¶ Loads vector and raster data from filepaths.
Parameters: datapath : str, arcpy.Raster, or arcpy.mapping.Layer
The (filepath to the) data you want to load.
datatype : str
The type of data you are trying to load. Must be either “shape” (for polygons) or “raster” (for rasters).
greedyRasters : bool (default = True)
Currently, arcpy lets you load raster data as a “Raster” or as a “Layer”. When greedyRasters is True, rasters loaded as type “Layer” will be forced to type “Raster”.
Returns: data : arcpy.Raster or arcpy.mapping.Layer
The data loaded as an arcpy object.
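A minimal sketch of loading both kinds of data (the file paths are illustrative):
>>> from propagator import utils
>>> wetlands = utils.load_data("wetlands.shp", "shape")
>>> dem = utils.load_data("dem.tif", "raster")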
-
propagator.utils.
add_field_with_value
(*args, **kwargs)¶ Adds a numeric or text field to an attribute table and sets it to a constant value. Operates in-place and therefore does not return anything.
Relies on arcpy.management.AddField.
Parameters: table : Layer, table, or file path
This is the layer/file that will have a new field created.
field_name : string
The name of the field to be created.
field_value : float or string, optional
The value of the new field. If provided, it will be used to infer the field_type parameter required by arcpy.management.AddField if field_type is itself not explicitly provided.
overwrite : bool, optional (False)
If True, an existing field will be overwritten. The default behavior will raise a ValueError if the field already exists.
**field_opts : keyword options
Keyword arguments that are passed directly to arcpy.management.AddField.
Returns: None
Examples
>>> # add a text field to shapefile (text fields need a length spec)
>>> utils.add_field_with_value("mypolygons.shp", "storm_event", "100-yr", field_length=10)
>>> # add a numeric field (doesn't require additional options)
>>> utils.add_field_with_value("polygons.shp", "flood_level", 3.14)
-
propagator.utils.
cleanup_temp_results
(*args, **kwargs)¶ Deletes temporary results from the current workspace.
Relies on arcpy.management.Delete.
Parameters: *results : str
Paths to the temporary results
Returns: None
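A short sketch of usage; the file names are illustrative and follow the create_temp_filename convention above:
>>> from propagator import utils
>>> utils.cleanup_temp_results(
...     "_temp_flooded_wetlands.shp",
...     "_temp_flooded_buildings.shp",
... )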
-
propagator.utils.
intersect_polygon_layers
(*args, **kwargs)¶ Intersect polygon layers with each other. Basically a thin wrapper around arcpy.analysis.Intersect.
Parameters: destination : str
Filepath where the intersected output will be saved.
*layers : str or arcpy.Mapping.Layer
The polygon layers (or their paths) that will be intersected with each other.
**intersect_options : keyword arguments
Additional arguments that will be passed directly to arcpy.analysis.Intersect.
Returns: intersected : arcpy.mapping.Layer
The arcpy Layer of the intersected polygons.
Examples
>>> from propagator import utils
>>> blobs = utils.intersect_polygon_layers(
...     "flood_damage_intersect.shp",
...     "floods.shp",
...     "wetlands.shp",
...     "buildings.shp",
... )
-
propagator.utils.
load_attribute_table
(*args, **kwargs)¶ Loads a shapefile’s attribute table as a numpy record array.
Relies on arcpy.da.TableToNumPyArray.
Parameters: input_path : str
Filepath to the shapefile or feature class whose table needs to be read.
*fields : str
Names of the fields that should be included in the resulting array.
Returns: records : numpy.recarray
A record array of the selected fields in the attribute table.
See also
Examples
>>> from propagator import utils
>>> path = "data/subcatchment.shp"
>>> catchments = utils.load_attribute_table(path, 'CatchID',
...                                          'DwnCatchID', 'Watershed')
>>> catchments[:5]
array([(u'541', u'571', u'San Juan Creek'),
       (u'754', u'618', u'San Juan Creek'),
       (u'561', u'577', u'San Juan Creek'),
       (u'719', u'770', u'San Juan Creek'),
       (u'766', u'597', u'San Juan Creek')],
      dtype=[('CatchID', '<U20'), ('DwnCatchID', '<U20'), ('Watershed', '<U50')])
-
propagator.utils.
groupby_and_aggregate
(*args, **kwargs)¶ Counts the number of distinct values of valuefield that are associated with each value of groupfield in a data source found at input_path.
Parameters: input_path : str
File path to a shapefile or feature class whose attribute table can be loaded with arcpy.da.TableToNumPyArray.
groupfield : str
The field name that would be used to group all of the records.
valuefield : str
The field name whose distinct values will be counted in each group defined by groupfield.
aggfxn : callable, optional
Function to aggregate the values in each group to a single value. This function should accept an itertools._grouper as its only input. If not provided, the number of unique values in the group will be returned.
Returns: counts : dict
A dictionary whose keys are the distinct values of groupfield and values are the number of distinct records in each group.
See also
Examples
>>> # compute total areas for each 'GeoID'
>>> wetland_areas = utils.groupby_and_aggregate(
...     input_path='wetlands.shp',
...     groupfield='GeoID',
...     valuefield='SHAPE@AREA',
...     aggfxn=lambda group: sum([row[1] for row in group])
... )
>>> # count the number of structures associated with each 'GeoID'
>>> building_counts = utils.groupby_and_aggregate(
...     input_path=buildingsoutput,
...     groupfield=ID_column,
...     valuefield='STRUCT_ID'
... )
-
propagator.utils.
rename_column
(*args, **kwargs)¶
-
propagator.utils.
populate_field
(*args, **kwargs)¶ Loops through the records of a table and populates the value of one field (valuefield) based on another field (keyfield) by passing the entire row through a function (value_fxn).
Relies on arcpy.da.UpdateCursor.
Parameters: table : Layer, table, or file path
This is the layer/file that will have a new field created.
value_fxn : callable
Any function that accepts a row from an arcpy.da.SearchCursor and returns a single value.
valuefield : string
The name of the field to be computed.
*keyfields : strings, optional
The other fields that need to be present in the rows of the cursor.
Returns: None
Note
In the row object, the valuefield will be the last item. In other words, row[0] will return the first value in *keyfields and row[-1] will return the existing value of valuefield in that row.
Examples
>>> # populate field ("Company") with a constant value ("Geosyntec")
>>> populate_field("wetlands.shp", lambda row: "Geosyntec", "Company")
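A hedged sketch of computing one field from another; the field names here are made up:
>>> # compute 'half_area' from an existing 'area' field
>>> populate_field("wetlands.shp", lambda row: row[0] * 0.5, "half_area", "area")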
-
propagator.utils.
copy_layer
(*args, **kwargs)¶ Makes copies of feature classes, shapefiles, and maybe rasters.
Parameters: existing_layer : str
Path to the data to be copied
new_layer : str
Path to where existing_layer should be copied.
Returns: new_layer : str
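A short sketch of usage (the paths are illustrative):
>>> from propagator import utils
>>> utils.copy_layer("wetlands.shp", "wetlands_backup.shp")
'wetlands_backup.shp'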
-
propagator.utils.
concat_results
(*args, **kwargs)¶ Concatenates (merges) several datasets into a single shapefile or feature class.
Relies on arcpy.management.Merge.
Parameters: destination : str
Path to where the concatenated dataset should be saved.
*input_files : str
Strings of the paths of the datasets to be merged.
Returns: arcpy.mapping.Layer
See also
join_results_to_baseline
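A short sketch of usage (the file names are illustrative):
>>> from propagator import utils
>>> merged = utils.concat_results(
...     "all_floods.shp",
...     "flood_scenario_1.shp",
...     "flood_scenario_2.shp",
... )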
-
propagator.utils.
update_attribute_table
(*args, **kwargs)¶ Update the attribute table of a feature class from a record array.
Parameters: layerpath : str
Path to the feature class to be updated.
attribute_array : numpy.recarray
A record array that contains the data to be written into layerpath.
id_column : str
The name of the column that uniquely identifies each feature in both layerpath and attribute_array.
*update_columns : str
Names of the columns in both layerpath and attribute_array that will be updated.
Returns: None
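A hedged sketch of the intended workflow; the column names are illustrative:
>>> from propagator import utils
>>> records = utils.load_attribute_table("subcatchments.shp", "CatchID", "risk")
>>> # ... modify the 'risk' values in the record array ...
>>> utils.update_attribute_table("subcatchments.shp", records, "CatchID", "risk")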
-
propagator.utils.
delete_columns
(layerpath, *columns)¶ Delete unwanted fields from an attribute table of a feature class.
Parameters: layerpath : str
Path to the feature class to be updated.
*columns : str
Names of the columns in layerpath that will be deleted.
Returns: None
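A short sketch of usage (the column names are illustrative):
>>> from propagator import utils
>>> utils.delete_columns("subcatchments.shp", "temp_col_1", "temp_col_2")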
-
propagator.utils.
spatial_join
(left, right, outputfile, **kwargs)¶
-
propagator.utils.
count_features
(layer)¶
-
propagator.utils.
query_layer
(inputpath, outputpath, sql)¶
-
propagator.utils.
intersect_layers
(input_paths, output_path, how='all')¶
-
propagator.utils.
get_field_names
(layerpath)¶ Gets the names of fields/columns in a feature class or table. Relies on arcpy.ListFields.
Parameters: layerpath : str, arcpy.Layer, or arcpy.table
The thing that has fields.
Returns: fieldnames : list of str
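A short sketch of usage; the table and the returned field names are illustrative:
>>> from propagator import utils
>>> utils.get_field_names("data/subcatchment.shp")
['FID', 'Shape', 'CatchID', 'DwnCatchID', 'Watershed']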
-
propagator.utils.
find_row_in_array
(array, column, value)¶ Find a single row in a record array.
Parameters: array : numpy.recarray
The record array to be searched.
column : str
The name of the column of the array to search.
value : int, str, or float
The value sought in column.
Returns: row : numpy.recarray row
The found row from array.
Raises: ValueError
An error is raised if more than one row is found.
Examples
>>> from propagator import utils
>>> import numpy
>>> x = numpy.array(
...     [
...         ('A1', 'Ocean', 'A1_x'), ('A2', 'Ocean', 'A2_x'),
...         ('B1', 'A1', 'None'), ('B2', 'A1', 'B2_x'),
...     ], dtype=[('ID', '<U5'), ('DS_ID', '<U5'), ('Cu', '<U5'),]
... )
>>> utils.find_row_in_array(x, 'ID', 'A1')
('A1', 'Ocean', 'A1_x')
-
propagator.utils.
rec_groupby
(array, group_cols, *stats)¶ Perform a groupby-apply operation on a numpy record array.
The returned record array has a column for each name in the group_cols argument, containing the group values, and a column for each rescol name in the stats argument, containing the aggregated output. Adapted from https://goo.gl/NgwOID.
Parameters: array : numpy.recarray
The data to be grouped and aggregated.
group_cols : str or sequence of str
The columns that identify each group
*stats : namedtuples or object
Any number of namedtuples or objects specifying which columns should be aggregated, how they should be aggregated, and what the resulting column name should be. The keys/attributes for these tuples/objects must be: “srccol”, “aggfxn”, and “rescol”.
Returns: aggregated : numpy.recarray
See also
Examples
>>> from collections import namedtuple
>>> from propagator import utils
>>> import numpy
>>> Statistic = namedtuple("Statistic", ("srccol", "aggfxn", "rescol"))
>>> data = numpy.array([
...     (u'050SC', 88.3, 0.0), (u'050SC', 0.0, 0.1),
...     (u'045SC', 49.2, 0.04), (u'045SC', 0.0, 0.08),
... ], dtype=[('ID', '<U10'), ('Cu', '<f8'), ('Pb', '<f8'),])
>>> stats = [
...     Statistic('Cu', numpy.max, 'MaxCu'),
...     Statistic('Pb', numpy.min, 'MinPb')
... ]
>>> utils.rec_groupby(data, ['ID'], *stats)
rec.array(
    [(u'045SC', 49.2, 0.04), (u'050SC', 88.3, 0.0)],
    dtype=[('ID', '<U5'), ('MaxCu', '<f8'), ('MinPb', '<f8')]
)
-
propagator.utils.
stats_with_ignored_values
(array, statfxn, ignored_value=None)¶ Compute statistics on arrays while ignoring certain values
Parameters: array : numpy.array (of floats)
The values to be summarized.
statfxn : callable
Function, lambda, or classmethod that can be called with array as the only input and returns a scalar value.
ignored_value : float, optional
The values in array that should be ignored.
Returns: res : float
Scalar result of statfxn. In the case that all values in array are ignored, ignored_value is returned.
Examples
>>> import numpy
>>> from propagator import utils
>>> x = [1., 2., 3., 4., 5.]
>>> utils.stats_with_ignored_values(x, numpy.mean, ignored_value=5)
2.5