# IronPyshp

- **Version**: 2.3.1
- **Home page**: [https://github.com/JamesParrott/IronPyShp](https://github.com/JamesParrott/IronPyShp)
- **Summary**: Pure Python read/write support for ESRI Shapefile format
- **Upload time**: 2024-05-20 22:13:39
- **Maintainer**: James Parrott
- **Author**: Joel Lawhead
- **Requires Python**: >=2.7
- **License**: MIT
- **Keywords**: gis, geospatial, geographic, shapefile, shapefiles
# IronPyShp

Generalises logic based on `bytes` instance checks (which rely on `str is bytes`, as is true in CPython 2) so that PyShp works 
correctly with unicode data in IronPython 2 (in which `str is not bytes`).  

Bonus: Preserves the order of fields in shape files in `Record.as_dict()` by setting `dict = collections.OrderedDict`.
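
For example, a minimal sketch (the shapefile path is hypothetical) of iterating a record's fields in their original dbf order via `Record.as_dict()`:

```
import shapefile

# Hypothetical path; any shapefile with a .dbf component will do.
sf = shapefile.Reader("shapefiles/blockgroups")
rec = sf.record(0)

# With IronPyshp, as_dict() returns a collections.OrderedDict, so iteration
# yields the fields in the order they appear in the .dbf file.
for fieldname, value in rec.as_dict().items():
    print(fieldname, value)
```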

- **Reluctant Iron Python 2 user**: [James Parrott](https://github.com/JamesParrott)
- **Version**: 2.3.1
- **Date**: 18 April, 2023
- **License**: [MIT](https://github.com/GeospatialPython/pyshp/blob/master/LICENSE.TXT)

# PyShp


The Python Shapefile Library (PyShp) reads and writes ESRI Shapefiles in pure Python.

![pyshp logo](http://4.bp.blogspot.com/_SBi37QEsCvg/TPQuOhlHQxI/AAAAAAAAAE0/QjFlWfMx0tQ/S350/GSP_Logo.png "PyShp")

- **Author**: [Joel Lawhead](https://github.com/GeospatialPython)
- **Maintainers**: [Karim Bahgat](https://github.com/karimbahgat)
- **Version**: 2.3.1
- **Date**: 28 July, 2022
- **License**: [MIT](https://github.com/GeospatialPython/pyshp/blob/master/LICENSE.TXT)

## Contents

- [Overview](#overview)
- [Version Changes](#version-changes)
- [The Basics](#the-basics)
	- [Reading Shapefiles](#reading-shapefiles)
		- [The Reader Class](#the-reader-class)
			- [Reading Shapefiles from Local Files](#reading-shapefiles-from-local-files)
			- [Reading Shapefiles from Zip Files](#reading-shapefiles-from-zip-files)
			- [Reading Shapefiles from URLs](#reading-shapefiles-from-urls)
			- [Reading Shapefiles from File-Like Objects](#reading-shapefiles-from-file-like-objects)
			- [Reading Shapefiles Using the Context Manager](#reading-shapefiles-using-the-context-manager)
			- [Reading Shapefile Meta-Data](#reading-shapefile-meta-data)
		- [Reading Geometry](#reading-geometry)
		- [Reading Records](#reading-records)
		- [Reading Geometry and Records Simultaneously](#reading-geometry-and-records-simultaneously)
	- [Writing Shapefiles](#writing-shapefiles)
		- [The Writer Class](#the-writer-class)
			- [Writing Shapefiles to Local Files](#writing-shapefiles-to-local-files)
			- [Writing Shapefiles to File-Like Objects](#writing-shapefiles-to-file-like-objects)
			- [Writing Shapefiles Using the Context Manager](#writing-shapefiles-using-the-context-manager)
			- [Setting the Shape Type](#setting-the-shape-type)
		- [Adding Records](#adding-records)
		- [Adding Geometry](#adding-geometry)
		- [Geometry and Record Balancing](#geometry-and-record-balancing)
- [Advanced Use](#advanced-use)
    - [Common Errors and Fixes](#common-errors-and-fixes)
        - [Warnings and Logging](#warnings-and-logging)
        - [Shapefile Encoding Errors](#shapefile-encoding-errors)
	- [Reading Large Shapefiles](#reading-large-shapefiles)
		- [Iterating through a shapefile](#iterating-through-a-shapefile)
		- [Limiting which fields to read](#limiting-which-fields-to-read)
		- [Attribute filtering](#attribute-filtering)
		- [Spatial filtering](#spatial-filtering)
	- [Writing large shapefiles](#writing-large-shapefiles)
		- [Merging multiple shapefiles](#merging-multiple-shapefiles)
		- [Editing shapefiles](#editing-shapefiles)
	- [3D and Other Geometry Types](#3d-and-other-geometry-types)
    	- [Shapefiles with measurement (M) values](#shapefiles-with-measurement-m-values)
		- [Shapefiles with elevation (Z) values](#shapefiles-with-elevation-z-values)
		- [3D MultiPatch Shapefiles](#3d-multipatch-shapefiles)
- [Testing](#testing)
- [Contributors](#contributors)


# Overview

The Python Shapefile Library (PyShp) provides read and write support for the
Esri Shapefile format. The Shapefile format is a popular Geographic
Information System vector data format created by Esri. For more information
about this format please read the well-written "ESRI Shapefile Technical
Description - July 1998" located at
[http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf](http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf).
The Esri document describes the shp and shx file formats. However a third
file format called dbf is also required. This format is documented on the web
as the "XBase File Format Description" and is a simple file-based database
format created in the 1960's. For more on this specification see: [http://www.clicketyclick.dk/databases/xbase/format/index.html](http://www.clicketyclick.dk/databases/xbase/format/index.html)

Both the Esri and XBase file formats are very simple in design and memory
efficient, which is part of the reason the shapefile format remains popular
despite the numerous ways to store and exchange GIS data available today.

Pyshp is compatible with Python 2.7-3.x.

This document provides examples for using PyShp to read and write shapefiles. However, 
many more examples are continually added to the blog [http://GeospatialPython.com](http://GeospatialPython.com),
and more can be found by searching for PyShp on [https://gis.stackexchange.com](https://gis.stackexchange.com). 

Currently the sample census blockgroup shapefile referenced in the examples is available on the GitHub project site at
[https://github.com/GeospatialPython/pyshp](https://github.com/GeospatialPython/pyshp). These
examples are straightforward and you can also easily run them against your
own shapefiles with minimal modification. 

Important: If you are new to GIS you should read about map projections.
Please visit: [https://github.com/GeospatialPython/pyshp/wiki/Map-Projections](https://github.com/GeospatialPython/pyshp/wiki/Map-Projections)

I sincerely hope this library eliminates the mundane distraction of simply
reading and writing data, and allows you to focus on the challenging and FUN
part of your geospatial project.


# Version Changes

## 2.3.1

### Bug fixes:

- Fix recently introduced issue where Reader/Writer closes file-like objects provided by user (#244)

## 2.3.0

### New Features:

- Added support for pathlib and path-like shapefile filepaths (@mwtoews). 
- Allow reading individual file extensions via filepaths.

### Improvements:

- Simplified setup and deployment (@mwtoews)
- Faster shape access when missing shx file
- Switch to named logger (see #240)

### Bug fixes:

- More robust handling of corrupt shapefiles (fixes #235)
- Fix errors when writing to individual file-handles (fixes #237)
- Revert previous decision to enforce geojson output ring orientation (detailed explanation at https://github.com/SciTools/cartopy/issues/2012)
- Fix test issues in environments without network access (@sebastic, @musicinmybrain). 

## 2.2.0

### New Features:

- Read shapefiles directly from zipfiles.
- Read shapefiles directly from urls.
- Allow fast extraction of only a subset of dbf fields through a `fields` arg.
- Allow fast filtering which shapes to read from the file through a `bbox` arg.

### Improvements:

- More examples and restructuring of README. 
- More informative Shape to geojson warnings (see #219).
- Add shapefile.VERBOSE flag to control warnings verbosity (default True).
- Shape object information when calling repr().
- Faster ring orientation checks, enforce geojson output ring orientation.

### Bug fixes:

- Remove null-padding at end of some record character fields.
- Fix dbf writing error when the number of record list or dict entries didn't match the number of fields.
- Handle rare garbage collection issue after deepcopy (https://github.com/mattijn/topojson/issues/120)
- Fix bug where records and shapes would be assigned incorrect record number (@karanrn)
- Fix typos in docs (@timgates)

## 2.1.3

### Bug fixes:

- Fix recent bug in geojson hole-in-polygon checking (see #205)
- Misc fixes to allow geo interface dump to json (eg dates as strings)
- Handle additional dbf date null values, and return faulty dates as unicode (see #187)
- Add writer target typecheck
- Fix bugs to allow reading shp/shx/dbf separately
- Allow delayed shapefile loading by passing no args
- Fix error with writing empty z/m shapefile (@mcuprjak)
- Fix signed_area() so ignores z/m coords
- Enforce writing the 11th field name character as null-terminator (only first 10 are used)
- Minor README fixes
- Added more tests

## 2.1.2

### Bug fixes:

- Fix issue where warnings.simplefilter('always') changes global warning behavior [see #203]

## 2.1.1

### Improvements:

- Handle shapes with no coords and represent as geojson with no coords (GeoJSON null-equivalent)
- Expand testing to Python 3.6, 3.7, 3.8 and PyPy; drop 3.3 and 3.4 [@mwtoews]
- Added pytest testing [@jmoujaes]

### Bug fixes:

- Fix incorrect geo interface handling of multipolygons with complex exterior-hole relations [see #202]
- Enforce shapefile requirement of at least one field, to avoid writing invalid shapefiles [@Jonty]
- Fix Reader geo interface including DeletionFlag field in feature properties [@nnseva]
- Fix polygons not being auto closed, which was accidentally dropped
- Fix error for null geometries in feature geojson
- Misc docstring cleanup [@fiveham]

## 2.1.0

### New Features:

- Added back read/write support for unicode field names. 
- Improved Record representation
- More support for geojson on Reader, ShapeRecord, ShapeRecords, and shapes()

### Bug fixes:

- Fixed error when reading optional m-values
- Fixed Record attribute autocomplete in Python 3
- Misc readme cleanup

## 2.0.0

The newest version of PyShp, version 2.0, introduced some major new improvements. 
A great thanks to all who have contributed code and raised issues, and for everyone's
patience and understanding during the transition period. 
Some of the new changes are incompatible with previous versions. 
Users of the previous version 1.x should therefore take note of the following changes
(Note: Some contributor attributions may be missing): 

### Major Changes:

- Full support for unicode text, with custom encoding, and exception handling. 
  - Means that the Reader returns unicode, and the Writer accepts unicode. 
- PyShp has been simplified to a pure input-output library using the Reader and Writer classes, dropping the Editor class. 
- Switched to a new streaming approach when writing files, keeping memory-usage at a minimum:
  - Specify filepath/destination and text encoding when creating the Writer. 
  - The file is written incrementally with each call to shape/record. 
  - Adding shapes is now done using dedicated methods for each shapetype. 
- Reading shapefiles is now more convenient:
  - Shapefiles can be opened using the context manager, and files are properly closed. 
  - Shapefiles can be iterated, have a length, and support the geo interface. 
  - New ways of inspecting shapefile metadata by printing. [@megies]
  - More convenient accessing of Record values as attributes. [@philippkraft]
  - More convenient shape type name checking. [@megies] 
- Add more support and documentation for MultiPatch 3D shapes. 
- The Reader "elevation" and "measure" attributes now renamed "zbox" and "mbox", to make it clear they refer to the min/max values. 
- Better documentation of previously unclear aspects, such as field types. 

### Important Fixes:

- More reliable/robust:
  - Fixed shapefile bbox error for empty or point type shapefiles. [@mcuprjak]
  - Reading and writing Z and M type shapes is now more robust, fixing many errors, and has been added to the documentation. [@ShinNoNoir]
  - Improved parsing of field value types, fixed errors and made more flexible. 
  - Fixed bug when writing shapefiles with datefield and date values earlier than 1900 [@megies]
- Fix some geo interface errors, including checking polygon directions.
- Bug fixes for reading from case sensitive file names, individual files separately, and from file-like objects. [@gastoneb, @kb003308, @erickskb]
- Enforce maximum field limit. [@mwtoews]


# The Basics

Before doing anything you must import the library.


	>>> import shapefile

The examples below will use a shapefile created from the U.S. Census Bureau
Blockgroups data set near San Francisco, CA and available in the git
repository of the PyShp GitHub site.

## Reading Shapefiles

### The Reader Class

#### Reading Shapefiles from Local Files

To read a shapefile create a new "Reader" object and pass it the name of an
existing shapefile. The shapefile format is actually a collection of three
files. You specify the base filename of the shapefile or the complete filename
of any of the shapefile component files.


	>>> sf = shapefile.Reader("shapefiles/blockgroups")

OR


	>>> sf = shapefile.Reader("shapefiles/blockgroups.shp")

OR


	>>> sf = shapefile.Reader("shapefiles/blockgroups.dbf")

OR any of the other 5+ formats which are potentially part of a shapefile. The
library does not care about file extensions. You can also specify that you only 
want to read some of the file extensions through the use of keyword arguments:


	>>> sf = shapefile.Reader(dbf="shapefiles/blockgroups.dbf")

#### Reading Shapefiles from Zip Files

If your shapefile is wrapped inside a zip file, the library is able to handle that too, meaning you don't have to worry about unzipping the contents: 


	>>> sf = shapefile.Reader("shapefiles/blockgroups.zip")

If the zip file contains multiple shapefiles, just specify which shapefile to read by additionally specifying the relative path after the ".zip" part:


	>>> sf = shapefile.Reader("shapefiles/blockgroups_multishapefile.zip/blockgroups2.shp")

#### Reading Shapefiles from URLs

Finally, you can use all of the above methods to read shapefiles directly from the internet, by giving a url instead of a local path, e.g.: 


	>>> # from a zipped shapefile on website
	>>> sf = shapefile.Reader("https://biogeo.ucdavis.edu/data/diva/rrd/NIC_rrd.zip")

	>>> # from a shapefile collection of files in a github repository
	>>> sf = shapefile.Reader("https://github.com/nvkelso/natural-earth-vector/blob/master/110m_cultural/ne_110m_admin_0_tiny_countries.shp?raw=true")

This will automatically download the file(s) to a temporary location before reading, saving you a lot of time and repetitive boilerplate code when you just want quick access to some external data.

#### Reading Shapefiles from File-Like Objects

You can also load shapefiles from any Python file-like object using keyword
arguments to specify any of the three files. This feature is very powerful and
allows you to load shapefiles from arbitrary storage formats, such as a protected url or zip file, a serialized object, or in some cases a database.


	>>> myshp = open("shapefiles/blockgroups.shp", "rb")
	>>> mydbf = open("shapefiles/blockgroups.dbf", "rb")
	>>> r = shapefile.Reader(shp=myshp, dbf=mydbf)

Notice in the examples above the shx file is never used. The shx file is a
very simple fixed-record index for the variable-length records in the shp
file. This file is optional for reading. If it's available PyShp will use the
shx file to access shape records a little faster but will do just fine without
it.
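
For example, a minimal sketch (paths are hypothetical) of reading shapes when only the shp component is available:

```
import shapefile

# Only the .shp component is supplied; no .shx index and no .dbf attributes.
with open("shapefiles/blockgroups.shp", "rb") as myshp:
    r = shapefile.Reader(shp=myshp)
    s = r.shape(7)  # still randomly accessible, just a little slower without shx
    print(s.shapeTypeName)
```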

#### Reading Shapefiles Using the Context Manager

The "Reader" class can be used as a context manager, to ensure open file
objects are properly closed when done reading the data:

    >>> with shapefile.Reader("shapefiles/blockgroups.shp") as shp:
    ...     print(shp)
    shapefile Reader
        663 shapes (type 'POLYGON')
        663 records (44 fields)

#### Reading Shapefile Meta-Data

Shapefiles have a number of attributes for inspecting the file contents.
A shapefile is a container for a specific type of geometry, and this can be checked using the 
shapeType attribute. 


	>>> sf = shapefile.Reader("shapefiles/blockgroups.dbf")
	>>> sf.shapeType
	5

Shape types are represented by numbers between 0 and 31 as defined by the
shapefile specification and listed below. It is important to note that the numbering system has
several reserved numbers that have not been used yet; therefore the numbers of
the existing shape types are not sequential:

- NULL = 0
- POINT = 1
- POLYLINE = 3
- POLYGON = 5
- MULTIPOINT = 8
- POINTZ = 11
- POLYLINEZ = 13
- POLYGONZ = 15
- MULTIPOINTZ = 18
- POINTM = 21
- POLYLINEM = 23
- POLYGONM = 25
- MULTIPOINTM = 28
- MULTIPATCH = 31
	
Based on this we can see that our blockgroups shapefile contains
Polygon type shapes. The shape types are also defined as constants in
the shapefile module, so that we can compare types more intuitively:


	>>> sf.shapeType == shapefile.POLYGON
	True

For convenience, you can also get the name of the shape type as a string:


	>>> sf.shapeTypeName == 'POLYGON'
	True
	
Other pieces of meta-data that we can check include the number of features 
and the bounding box area the shapefile covers:


	>>> len(sf)
	663
	>>> sf.bbox
	[-122.515048, 37.652916, -122.327622, 37.863433]
	
Finally, if you would prefer to work with the entire shapefile in a different
format, you can convert all of it to a GeoJSON dictionary, although you may lose
some information in the process, such as z- and m-values: 


	>>> sf.__geo_interface__['type']
	'FeatureCollection'

### Reading Geometry

A shapefile's geometry is the collection of points or shapes made from
vertices and implied arcs representing physical locations. All types of
shapefiles just store points. The metadata about the points determines how they
are handled by software.

You can get a list of the shapefile's geometry by calling the shapes()
method.


	>>> shapes = sf.shapes()

The shapes method returns a list of Shape objects describing the geometry of
each shape record.


	>>> len(shapes)
	663
	
To read a single shape, call the shape() method with the shape's index. The index
is the shape's count from 0, so to read the 8th shape record you would use
index 7.


	>>> s = sf.shape(7)
	>>> s
	Shape #7: POLYGON

	>>> # Read the bbox of the 8th shape to verify
	>>> # Round coordinates to 3 decimal places
	>>> ['%.3f' % coord for coord in s.bbox]
	['-122.450', '37.801', '-122.442', '37.808']

Each shape record (except Points) contains the following attributes. Records of
shapeType Point do not have a bounding box 'bbox'.


	>>> for name in dir(shapes[3]):
	...     if not name.startswith('_'):
	...         name
	'bbox'
	'oid'
	'parts'
	'points'
	'shapeType'
	'shapeTypeName'

  * `oid`: The shape's index position in the original shapefile.


		>>> shapes[3].oid
		3

  * `shapeType`: an integer representing the type of shape as defined by the
	  shapefile specification.


		>>> shapes[3].shapeType
		5

  * `shapeTypeName`: a string representation of the type of shape as defined by shapeType. Read-only. 


		>>> shapes[3].shapeTypeName
		'POLYGON'
		
  * `bbox`: If the shape type contains multiple points this tuple describes the
	  lower left (x,y) coordinate and upper right corner coordinate creating a
	  complete box around the points. If the shapeType is a
	  Null (shapeType == 0) then an AttributeError is raised.


		>>> # Get the bounding box of the 4th shape.
		>>> # Round coordinates to 3 decimal places
		>>> bbox = shapes[3].bbox
		>>> ['%.3f' % coord for coord in bbox]
		['-122.486', '37.787', '-122.446', '37.811']

  * `parts`: Parts simply group collections of points into shapes. If the shape
	  record has multiple parts this attribute contains the index of the first
	  point of each part. If there is only one part then a list containing 0 is
	  returned.


		>>> shapes[3].parts
		[0]

  * `points`: The points attribute contains a list of tuples containing an
	  (x,y) coordinate for each point in the shape.


		>>> len(shapes[3].points)
		173
		>>> # Get the 8th point of the fourth shape
		>>> # Truncate coordinates to 3 decimal places
		>>> shape = shapes[3].points[7]
		>>> ['%.3f' % coord for coord in shape]
		['-122.471', '37.787']

In most cases, however, if you need to do more than just type or bounds checking, you may want 
to convert the geometry to the more human-readable [GeoJSON format](http://geojson.org),
where lines and polygons are grouped for you:


	>>> s = sf.shape(0)
	>>> geoj = s.__geo_interface__
	>>> geoj["type"]
	'MultiPolygon'
	
The result from the shapes() method similarly supports converting to GeoJSON:


	>>> shapes.__geo_interface__['type']
	'GeometryCollection'

Note: In some cases, if the conversion from shapefile geometry to GeoJSON encounters any problems
or potential issues, a warning message will be displayed with information about the affected
geometry. To ignore or suppress these warnings, you can disable this behavior by setting the 
module constant VERBOSE to False: 


	>>> shapefile.VERBOSE = False
	

### Reading Records

A record in a shapefile contains the attributes for each shape in the
collection of geometries. Records are stored in the dbf file. The link between
geometry and attributes is the foundation of all geographic information systems.
This critical link is implied by the order of shapes and corresponding records
in the shp geometry file and the dbf attribute file.

The field names of a shapefile are available as soon as you read a shapefile.
You can call the "fields" attribute of the shapefile as a Python list. Each
field is a Python list with the following information:

  * Field name: the name describing the data at this column index.
  * Field type: the type of data at this column index. Types can be: 
       * "C": Characters, text.
	   * "N": Numbers, with or without decimals.
	   * "F": Floats (same as "N").
	   * "L": Logical, for boolean True/False values. 
	   * "D": Dates. 
	   * "M": Memo, has no meaning within a GIS and is part of the xbase spec instead.
  * Field length: the length of the data found at this column index. Older GIS
	   software may truncate this length to 8 or 11 characters for "Character"
	   fields.
  * Decimal length: the number of decimal places found in "Number" fields.

To see the fields for the Reader object above (sf) call the "fields"
attribute:


	>>> fields = sf.fields

	>>> assert fields == [("DeletionFlag", "C", 1, 0), ["AREA", "N", 18, 5],
	... ["BKG_KEY", "C", 12, 0], ["POP1990", "N", 9, 0], ["POP90_SQMI", "N", 10, 1],
	... ["HOUSEHOLDS", "N", 9, 0],
	... ["MALES", "N", 9, 0], ["FEMALES", "N", 9, 0], ["WHITE", "N", 9, 0],
	... ["BLACK", "N", 8, 0], ["AMERI_ES", "N", 7, 0], ["ASIAN_PI", "N", 8, 0],
	... ["OTHER", "N", 8, 0], ["HISPANIC", "N", 8, 0], ["AGE_UNDER5", "N", 8, 0],
	... ["AGE_5_17", "N", 8, 0], ["AGE_18_29", "N", 8, 0], ["AGE_30_49", "N", 8, 0],
	... ["AGE_50_64", "N", 8, 0], ["AGE_65_UP", "N", 8, 0],
	... ["NEVERMARRY", "N", 8, 0], ["MARRIED", "N", 9, 0], ["SEPARATED", "N", 7, 0],
	... ["WIDOWED", "N", 8, 0], ["DIVORCED", "N", 8, 0], ["HSEHLD_1_M", "N", 8, 0],
	... ["HSEHLD_1_F", "N", 8, 0], ["MARHH_CHD", "N", 8, 0],
	... ["MARHH_NO_C", "N", 8, 0], ["MHH_CHILD", "N", 7, 0],
	... ["FHH_CHILD", "N", 7, 0], ["HSE_UNITS", "N", 9, 0], ["VACANT", "N", 7, 0],
	... ["OWNER_OCC", "N", 8, 0], ["RENTER_OCC", "N", 8, 0],
	... ["MEDIAN_VAL", "N", 7, 0], ["MEDIANRENT", "N", 4, 0],
	... ["UNITS_1DET", "N", 8, 0], ["UNITS_1ATT", "N", 7, 0], ["UNITS2", "N", 7, 0],
	... ["UNITS3_9", "N", 8, 0], ["UNITS10_49", "N", 8, 0],
	... ["UNITS50_UP", "N", 8, 0], ["MOBILEHOME", "N", 7, 0]]

The first field of a dbf file is always a 1-byte field called "DeletionFlag", 
which indicates records that have been deleted but not removed. However, 
since this flag is very rarely used, PyShp currently will return all records  
regardless of their deletion flag, and the flag is also not included in the list of 
record values. In other words, the DeletionFlag field has no real purpose, and 
should in most cases be ignored. For instance, to get a list of all fieldnames:


	>>> fieldnames = [f[0] for f in sf.fields[1:]]

You can get a list of the shapefile's records by calling the records() method:


	>>> records = sf.records()

	>>> len(records)
	663

To read a single record call the record() method with the record's index:


	>>> rec = sf.record(3)
	
Each record is a list-like Record object containing the values corresponding to each field in
the field list (except the DeletionFlag). A record's values can be accessed by positional indexing or slicing.
For example in the blockgroups shapefile the 2nd and 3rd fields are the blockgroup id 
and the 1990 population count of that San Francisco blockgroup:


	>>> rec[1:3]
	['060750601001', 4715]

For simpler access, the fields of a record can also be accessed via the name of the field,
either as a key or as an attribute name. The blockgroup id (BKG_KEY) of the blockgroups shapefile 
can also be retrieved as:


    >>> rec['BKG_KEY']
    '060750601001'

    >>> rec.BKG_KEY
    '060750601001'
	
The record values can be easily integrated with other programs by converting the record to a field-value dictionary:


	>>> dct = rec.as_dict()
	>>> sorted(dct.items())
	[('AGE_18_29', 1467), ('AGE_30_49', 1681), ('AGE_50_64', 92), ('AGE_5_17', 848), ('AGE_65_UP', 30), ('AGE_UNDER5', 597), ('AMERI_ES', 6), ('AREA', 2.34385), ('ASIAN_PI', 452), ('BKG_KEY', '060750601001'), ('BLACK', 1007), ('DIVORCED', 149), ('FEMALES', 2095), ('FHH_CHILD', 16), ('HISPANIC', 416), ('HOUSEHOLDS', 1195), ('HSEHLD_1_F', 40), ('HSEHLD_1_M', 22), ('HSE_UNITS', 1258), ('MALES', 2620), ('MARHH_CHD', 79), ('MARHH_NO_C', 958), ('MARRIED', 2021), ('MEDIANRENT', 739), ('MEDIAN_VAL', 337500), ('MHH_CHILD', 0), ('MOBILEHOME', 0), ('NEVERMARRY', 703), ('OTHER', 288), ('OWNER_OCC', 66), ('POP1990', 4715), ('POP90_SQMI', 2011.6), ('RENTER_OCC', 3733), ('SEPARATED', 49), ('UNITS10_49', 49), ('UNITS2', 160), ('UNITS3_9', 672), ('UNITS50_UP', 0), ('UNITS_1ATT', 302), ('UNITS_1DET', 43), ('VACANT', 93), ('WHITE', 2962), ('WIDOWED', 37)]

If at a later point you need to check the record's index position in the original 
shapefile, you can do this through the "oid" attribute:


	>>> rec.oid
	3
	
### Reading Geometry and Records Simultaneously

You may want to examine both the geometry and the attributes for a record at
the same time. The shapeRecord() and shapeRecords() methods let you do just
that.

Calling the shapeRecords() method will return the geometry and attributes for
all shapes as a list of ShapeRecord objects. Each ShapeRecord instance has a
"shape" and "record" attribute. The shape attribute is a Shape object as
discussed in the first section "Reading Geometry". The record attribute is a
list-like object containing field values as demonstrated in the "Reading Records" section.


	>>> shapeRecs = sf.shapeRecords()

Let's read the blockgroup key and the population for the 4th blockgroup:


	>>> shapeRecs[3].record[1:3]
	['060750601001', 4715]

The result from the shapeRecords() method is a list-like object that can be easily converted
to GeoJSON through the _\_geo_interface\_\_:


	>>> shapeRecs.__geo_interface__['type']
	'FeatureCollection'

The shapeRecord() method reads a single shape/record pair at the specified index.
To get the 4th shape record from the blockgroups shapefile use the third index:


	>>> shapeRec = sf.shapeRecord(3)
	>>> shapeRec.record[1:3]
	['060750601001', 4715]
	
Each individual shape record also supports the _\_geo_interface\_\_ to convert it to a GeoJSON feature:


	>>> shapeRec.__geo_interface__['type']
	'Feature'
	

## Writing Shapefiles

### The Writer Class

PyShp tries to be as flexible as possible when writing shapefiles while
maintaining some degree of automatic validation to make sure you don't
accidentally write an invalid file.

PyShp can write just one of the component files such as the shp or dbf file
without writing the others. So in addition to being a complete shapefile
library, it can also be used as a basic dbf (xbase) library. Dbf files are a
common database format which is often useful as a standalone simple database
format. And even shp files occasionally have uses as a standalone format. Some
web-based GIS systems use a user-uploaded shp file to specify an area of
interest. Many precision agriculture chemical field sprayers also use the shp
format as a control file for the sprayer system (usually in combination with
custom database file formats).

#### Writing Shapefiles to Local Files

To create a shapefile you begin by initializing a new Writer instance, passing it
the file path and name to save to:


	>>> w = shapefile.Writer('shapefiles/test/testfile')
	>>> w.field('field1', 'C')
	
File extensions are optional when reading or writing shapefiles. If you specify
them PyShp ignores them anyway. When you save files you can specify a base
file name that is used for all three file types. Or you can specify a name for
one or more file types:


	>>> w = shapefile.Writer(dbf='shapefiles/test/onlydbf.dbf')
	>>> w.field('field1', 'C')
	
In that case, any file types not assigned will not be
saved; only the file types given file names will be saved. 

#### Writing Shapefiles to File-Like Objects

Just as you can read shapefiles from Python file-like objects, you can also
write to them:


	>>> try:
	...     from StringIO import StringIO
	... except ImportError:
	...     from io import BytesIO as StringIO
	>>> shp = StringIO()
	>>> shx = StringIO()
	>>> dbf = StringIO()
	>>> w = shapefile.Writer(shp=shp, shx=shx, dbf=dbf)
	>>> w.field('field1', 'C')
	>>> w.record()
	>>> w.null()
	>>> w.close()

	>>> # To read back the files you could call the "StringIO.getvalue()" method later.
	>>> assert shp.getvalue()
	>>> assert shx.getvalue()
	>>> assert dbf.getvalue()

	>>> # In fact, you can read directly from them using the Reader
	>>> r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)
	>>> len(r)
	1
	
	

#### Writing Shapefiles Using the Context Manager

The "Writer" class automatically closes the open files and writes the final headers once it is garbage collected.
In case of a crash and to make the code more readable, it is nevertheless recommended 
you do this manually by calling the "close()" method: 


	>>> w.close()

Alternatively, you can also use the "Writer" class as a context manager, to ensure open file
objects are properly closed and final headers written once you exit the with-clause:


	>>> with shapefile.Writer("shapefiles/test/contextwriter") as w:
	... 	w.field('field1', 'C')
	... 	pass
	
#### Setting the Shape Type

The shape type defines the type of geometry contained in the shapefile. All of
the shapes must match the shape type setting.

There are three ways to set the shape type: 
  * Set it when creating the class instance. 
  * Set it by assigning a value to an existing class instance. 
  * Set it automatically to the type of the first non-null shape by saving the shapefile.

To manually set the shape type for a Writer object when creating the Writer:


	>>> w = shapefile.Writer('shapefiles/test/shapetype', shapeType=3)
	>>> w.field('field1', 'C')

	>>> w.shapeType
	3

OR you can set it after the Writer is created:


	>>> w.shapeType = 1

	>>> w.shapeType
	1
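
A minimal sketch of the third option (the output path is hypothetical): when no shapeType is given, the Writer adopts the type of the first non-null shape that is written.

```
import shapefile

with shapefile.Writer('shapefiles/test/autoshapetype') as w:
    w.field('field1', 'C')
    print(w.shapeType)                      # None: not set yet
    w.point(1, 1)
    w.record('point1')
    print(w.shapeType == shapefile.POINT)   # True: inferred from the first shape
```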
	

### Adding Records

Before you can add records you must first create the fields that define what types of 
values will go into each attribute. 

There are several different field types, all of which support storing None values as NULL. 

Text fields are created using the 'C' type, and the third 'size' argument can be customized to the expected
length of text values to save space:


	>>> w = shapefile.Writer('shapefiles/test/dtype')
	>>> w.field('TEXT', 'C')
	>>> w.field('SHORT_TEXT', 'C', size=5)
	>>> w.field('LONG_TEXT', 'C', size=250)
	>>> w.null()
	>>> w.record('Hello', 'World', 'World'*50)
	>>> w.close()
	
	>>> r = shapefile.Reader('shapefiles/test/dtype')
	>>> assert r.record(0) == ['Hello', 'World', 'World'*50]

Date fields are created using the 'D' type, and values can be written as either 
date objects, lists, or YYYYMMDD formatted strings. 
Field length or decimal have no impact on this type:


	>>> from datetime import date
	>>> w = shapefile.Writer('shapefiles/test/dtype')
	>>> w.field('DATE', 'D')
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.record(date(1898,1,30))
	>>> w.record([1998,1,30])
	>>> w.record('19980130')
	>>> w.record(None)
	>>> w.close()
	
	>>> r = shapefile.Reader('shapefiles/test/dtype')
	>>> assert r.record(0) == [date(1898,1,30)]
	>>> assert r.record(1) == [date(1998,1,30)]
	>>> assert r.record(2) == [date(1998,1,30)]
	>>> assert r.record(3) == [None]

Numeric fields are created using the 'N' type (or the 'F' type, which is exactly the same). 
By default the fourth decimal argument is set to zero, essentially creating an integer field. 
To store floats you must set the decimal argument to the precision of your choice. 
To store very large numbers you must increase the field length size to the total number of digits 
(including comma and minus). 


	>>> w = shapefile.Writer('shapefiles/test/dtype')
	>>> w.field('INT', 'N')
	>>> w.field('LOWPREC', 'N', decimal=2)
	>>> w.field('MEDPREC', 'N', decimal=10)
	>>> w.field('HIGHPREC', 'N', decimal=30)
	>>> w.field('FTYPE', 'F', decimal=10)
	>>> w.field('LARGENR', 'N', 101)
	>>> nr = 1.3217328
	>>> w.null()
	>>> w.null()
	>>> w.record(INT=nr, LOWPREC=nr, MEDPREC=nr, HIGHPREC=-3.2302e-25, FTYPE=nr, LARGENR=int(nr)*10**100)
	>>> w.record(None, None, None, None, None, None)
	>>> w.close()
	
	>>> r = shapefile.Reader('shapefiles/test/dtype')
	>>> assert r.record(0) == [1, 1.32, 1.3217328, -3.2302e-25, 1.3217328, 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000]
	>>> assert r.record(1) == [None, None, None, None, None, None]

	
Finally, we can create boolean fields by setting the type to 'L'. 
This field can take True or False values, or 1 (True) or 0 (False). 
None is interpreted as missing. 


	>>> w = shapefile.Writer('shapefiles/test/dtype')
	>>> w.field('BOOLEAN', 'L')
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.null()
	>>> w.record(True)
	>>> w.record(1)
	>>> w.record(False)
	>>> w.record(0)
	>>> w.record(None)
	>>> w.record("Nonesense")
	>>> w.close()
	
	>>> r = shapefile.Reader('shapefiles/test/dtype')
	>>> r.record(0)
	Record #0: [True]
	>>> r.record(1)
	Record #1: [True]
	>>> r.record(2)
	Record #2: [False]
	>>> r.record(3)
	Record #3: [False]
	>>> r.record(4)
	Record #4: [None]
	>>> r.record(5)
	Record #5: [None]
	
You can also add attributes using keyword arguments where the keys are field names.


	>>> w = shapefile.Writer('shapefiles/test/dtype')
	>>> w.field('FIRST_FLD','C','40')
	>>> w.field('SECOND_FLD','C','40')
	>>> w.null()
	>>> w.null()
	>>> w.record('First', 'Line')
	>>> w.record(FIRST_FLD='First', SECOND_FLD='Line')
	>>> w.close()

### Adding Geometry

Geometry is added using one of several convenience methods. The "null" method is used
for null shapes, "point" is used for point shapes, "multipoint" is used for multipoint shapes, "line" for lines,
"poly" for polygons. 

**Adding a Null shape**

A shapefile may contain some records for which geometry is not available; these can be written using the "null" method. 
Because Null shape types (shape type 0) have no geometry, the "null" method is called without any arguments. 


	>>> w = shapefile.Writer('shapefiles/test/null')
	>>> w.field('name', 'C')

	>>> w.null()
	>>> w.record('nullgeom')

	>>> w.close()

**Adding a Point shape**

Point shapes are added using the "point" method. A point is specified by an x and
y value. 


	>>> w = shapefile.Writer('shapefiles/test/point')
	>>> w.field('name', 'C')
	
	>>> w.point(122, 37) 
	>>> w.record('point1')
	
	>>> w.close()

**Adding a MultiPoint shape**

If your point data allows for the possibility of multiple points per feature, use "multipoint" instead. 
These are specified as a list of xy point coordinates. 


	>>> w = shapefile.Writer('shapefiles/test/multipoint')
	>>> w.field('name', 'C')
	
	>>> w.multipoint([[122,37], [124,32]]) 
	>>> w.record('multipoint1')
	
	>>> w.close()
	
**Adding a LineString shape**

For LineString shapefiles, each shape is given as a list of one or more linear features. 
Each of the linear features must have at least two points. 
	
	
	>>> w = shapefile.Writer('shapefiles/test/line')
	>>> w.field('name', 'C')
	
	>>> w.line([
	...			[[1,5],[5,5],[5,1],[3,3],[1,1]], # line 1
	...			[[3,2],[2,6]] # line 2
	...			])
	
	>>> w.record('linestring1')
	
	>>> w.close()
	
**Adding a Polygon shape**

Similarly to LineString, Polygon shapes consist of multiple polygons, and must be given as a list of polygons.
The main difference is that polygons must have at least 4 points and the last point must be the same as the first. 
It's also okay if you forget to repeat the first point at the end; PyShp automatically checks and closes the polygons
if you don't.

It's important to note that for Polygon shapefiles, your polygon coordinates must be ordered in a clockwise direction.
If any of the polygons have holes, then the hole polygon coordinates must be ordered in a counterclockwise direction.
The direction of your polygons determines how shapefile readers will distinguish between polygon outlines and holes. 


	>>> w = shapefile.Writer('shapefiles/test/polygon')
	>>> w.field('name', 'C')

	>>> w.poly([
	...	        [[113,24], [112,32], [117,36], [122,37], [118,20]], # poly 1
	...	        [[116,29],[116,26],[119,29],[119,32]], # hole 1
	...         [[15,2], [17,6], [22,7]]  # poly 2
	...        ])
	>>> w.record('polygon1')
	
	>>> w.close()
		
**Adding from an existing Shape object**

Finally, geometry can be added by passing an existing "Shape" object to the "shape" method.
You can also pass it any GeoJSON dictionary or _\_geo_interface\_\_ compatible object. 
This can be particularly useful for copying from one file to another:


	>>> r = shapefile.Reader('shapefiles/test/polygon')

	>>> w = shapefile.Writer('shapefiles/test/copy')
	>>> w.fields = r.fields[1:] # skip first deletion field

	>>> # adding existing Shape objects
	>>> for shaperec in r.iterShapeRecords():
	...     w.record(*shaperec.record)
	...     w.shape(shaperec.shape)
	
	>>> # or GeoJSON dicts
	>>> for shaperec in r.iterShapeRecords():
	...     w.record(*shaperec.record)
	...     w.shape(shaperec.shape.__geo_interface__)
	
	>>> w.close()	
	

### Geometry and Record Balancing

Because every shape must have a corresponding record it is critical that the
number of records equals the number of shapes to create a valid shapefile. You
must take care to add records and shapes in the same order so that the record
data lines up with the geometry data. For example:

	
	>>> w = shapefile.Writer('shapefiles/test/balancing', shapeType=shapefile.POINT)
	>>> w.field("field1", "C")
	>>> w.field("field2", "C")
	
	>>> w.record("row", "one")
	>>> w.point(1, 1)
	
	>>> w.record("row", "two")
	>>> w.point(2, 2)
	
To help prevent accidental misalignment PyShp has an "auto balance" feature to
make sure when you add either a shape or a record the two sides of the
equation line up. This way if you forget to update an entry the
shapefile will still be valid and handled correctly by most shapefile
software. Autobalancing is NOT turned on by default. To activate it set
the attribute autoBalance to 1 or True:


    >>> w.autoBalance = 1
	>>> w.record("row", "three")
	>>> w.record("row", "four")
	>>> w.point(4, 4)
	
	>>> w.recNum == w.shpNum
	True

You also have the option of manually calling the balance() method at any time
to ensure the other side is up to date. When balancing is used
null shapes are created on the geometry side, or records
with a value of "NULL" for each field are created on the attribute side.
This gives you flexibility in how you build the shapefile.
You can create all of the shapes and then create all of the records or vice versa. 


    >>> w.autoBalance = 0
	>>> w.record("row", "five")
	>>> w.record("row", "six")
	>>> w.record("row", "seven")
	>>> w.point(5, 5)
	>>> w.point(6, 6)
	>>> w.balance()
	
	>>> w.recNum == w.shpNum
	True

If you do not use the autoBalance attribute or balance() method and forget to manually
balance the geometry and attributes, the shapefile will be viewed as corrupt by
most shapefile software.
	
### Writing .prj files
A .prj file, or projection file, is a simple text file that stores a shapefile's map projection and coordinate reference system to help mapping software properly locate the geometry on a map. If you don't have one, you may get confusing errors when you try to use the shapefile you created. The GIS software may complain that it doesn't know the shapefile's projection and refuse to accept it, it may assume the shapefile is the same projection as the rest of your GIS project and put it in the wrong place, or it might assume the coordinates are an offset in meters from latitude and longitude 0,0 which will put your data in the middle of the ocean near Africa. The text in the .prj file is a [Well-Known-Text (WKT) projection string](https://en.wikipedia.org/wiki/Well-known_text_representation_of_coordinate_reference_systems). Projection strings can be quite long, so they are often referenced using numeric codes called EPSG codes. The .prj file must have the same base name as your shapefile. So for example if you have a shapefile named "myPoints.shp", the .prj file must be named "myPoints.prj". 

If you're using the same projection over and over, the following is a simple way to create the .prj file assuming your base filename is stored in a variable called "filename":

```
	with open("{}.prj".format(filename), "w") as prj:
	    wkt = 'GEOGCS["WGS 84",'
	    wkt += 'DATUM["WGS_1984",'
	    wkt += 'SPHEROID["WGS 84",6378137,298.257223563]]'
	    wkt += ',PRIMEM["Greenwich",0],'
	    wkt += 'UNIT["degree",0.0174532925199433]]'
	    prj.write(wkt)
```

If you need to dynamically fetch WKT projection strings, you can use the pure Python [PyCRS](https://github.com/karimbahgat/PyCRS) module which has a number of useful features. 

# Advanced Use

## Common Errors and Fixes

Below we list some commonly encountered errors and ways to fix them. 

### Warnings and Logging

By default, PyShp chooses to be transparent and provide the user with logging information and warnings about non-critical issues when reading or writing shapefiles. This behavior is controlled by the module constant `VERBOSE` (which defaults to True). If you would rather suppress this information, you can simply set this to False: 


	>>> shapefile.VERBOSE = False

All logging happens under the namespace `shapefile`. So another way to suppress all PyShp warnings would be to alter the logging behavior for that namespace:


	>>> import logging
	>>> logging.getLogger('shapefile').setLevel(logging.ERROR)

### Shapefile Encoding Errors

PyShp supports reading and writing shapefiles in any language or character encoding, and provides several options for decoding and encoding text. 
Most shapefiles are written in UTF-8 encoding, PyShp's default encoding, so in most cases you don't have to specify the encoding. 
If you encounter an encoding error when reading a shapefile, this means the shapefile was likely written in a non-utf8 encoding. 
For instance, when working with English language shapefiles, a common reason for encoding errors is that the shapefile was written in Latin-1 encoding.
For reading shapefiles in any non-utf8 encoding, such as Latin-1, just 
supply the encoding option when creating the Reader class. 


	>>> r = shapefile.Reader("shapefiles/test/latin1.shp", encoding="latin1")
	>>> r.record(0) == [2, u'Ñandú']
	True
	
Once you have loaded the shapefile, you may choose to save it using another, more widely supported encoding such 
as UTF-8. Assuming the new encoding supports the characters you are trying to write, reading it back in 
should give you the same unicode string you started with. 


	>>> w = shapefile.Writer("shapefiles/test/latin_as_utf8.shp", encoding="utf8")
	>>> w.fields = r.fields[1:]
	>>> w.record(*r.record(0))
	>>> w.null()
	>>> w.close()
	
	>>> r = shapefile.Reader("shapefiles/test/latin_as_utf8.shp", encoding="utf8")
	>>> r.record(0) == [2, u'Ñandú']
	True
	
If you supply the wrong encoding and the string is unable to be decoded, PyShp will by default raise an
exception. If however, on rare occasion, you are unable to find the correct encoding and want to ignore
or replace encoding errors, you can specify the "encodingErrors" argument to be used by the decode method. This
applies to both reading and writing. 


	>>> r = shapefile.Reader("shapefiles/test/latin1.shp", encoding="ascii", encodingErrors="replace")
	>>> r.record(0) == [2, u'�and�']
	True



## Reading Large Shapefiles

Despite being a lightweight library, PyShp is designed to be able to read shapefiles of any size, allowing you to work with hundreds of thousands or even millions 
of records and complex geometries. 

### Iterating through a shapefile

As an example, let's load this Natural Earth shapefile of more than 4000 global administrative boundary polygons:


	>>> sf = shapefile.Reader("https://github.com/nvkelso/natural-earth-vector/blob/master/10m_cultural/ne_10m_admin_1_states_provinces?raw=true")

When first creating the Reader class, the library only reads the header information
and leaves the rest of the file contents alone. Once you call the records() and shapes() 
methods however, it will attempt to read the entire file into memory at once. 
For very large files this can result in a MemoryError. So when working with large files
it is recommended to use the iterShapes(), iterRecords(), or iterShapeRecords()
methods instead. These iterate through the file contents one at a time, enabling you to loop 
through them while keeping memory usage at a minimum. 


	>>> for shape in sf.iterShapes():
	...     # do something here
	...     pass
	
	>>> for rec in sf.iterRecords():
	...     # do something here
	...     pass
	
	>>> for shapeRec in sf.iterShapeRecords():
	...     # do something here
	...     pass

	>>> for shapeRec in sf: # same as iterShapeRecords()
	...     # do something here
	...     pass

### Limiting which fields to read

By default when reading the attribute records of a shapefile, pyshp unpacks and returns the data for all of the dbf fields, regardless of whether you actually need that data or not. To limit which field data is unpacked when reading each record and speed up processing time, you can specify the `fields` argument to any of the methods involving record data. Note that the order of the specified fields does not matter; the resulting records will list the specified field values in the order that they appear in the original dbf file. For instance, if we are only interested in the country and name of each admin unit, the following is a more efficient way of iterating through the file:


	>>> fields = ["geonunit", "name"]
	>>> for rec in sf.iterRecords(fields=fields):
	... 	# do something
	... 	pass
	>>> rec
	Record #4595: ['Birgu', 'Malta']
	
### Attribute filtering

In many cases, we aren't interested in all entries of a shapefile, but rather only want to retrieve a small subset of records by filtering on some attribute. To avoid wasting time reading records and shapes that we don't need, we can start by iterating only the records and fields of interest, check if the record matches some condition as a way to filter the data, and finally load the full record and shape geometry for those that meet the condition:


	>>> filter_field = "geonunit"
	>>> filter_value = "Eritrea"
	>>> for rec in sf.iterRecords(fields=[filter_field]):
	...     if rec[filter_field] == filter_value:
	... 		# load full record and shape
	... 		shapeRec = sf.shapeRecord(rec.oid)
	... 		shapeRec.record["name"]
	'Debubawi Keyih Bahri'
	'Debub'
	'Semenawi Keyih Bahri'
	'Gash Barka'
	'Maekel'
	'Anseba'

Selectively reading only the necessary data in this way is particularly useful for efficiently processing a limited subset of data from very large files or when looping through a large number of files, especially if they contain large attribute tables or complex shape geometries. 

### Spatial filtering

Another common use-case is that we only want to read those records that are located in some region of interest. Because the shapefile stores the bounding box of each shape separately from the geometry data, it's possible to quickly retrieve all shapes that might overlap a given bounding box region without having to load the full shape geometry data for every shape. This can be done by specifying the `bbox` argument to any of the record or shape methods:


	>>> bbox = [36.423, 12.360, 43.123, 18.004] # ca bbox of Eritrea
	>>> fields = ["geonunit","name"]
	>>> for shapeRec in sf.iterShapeRecords(bbox=bbox, fields=fields):
	... 	shapeRec.record
	Record #368: ['Afar', 'Ethiopia']
	Record #369: ['Tadjourah', 'Djibouti']
	Record #375: ['Obock', 'Djibouti']
	Record #376: ['Debubawi Keyih Bahri', 'Eritrea']
	Record #1106: ['Amhara', 'Ethiopia']
	Record #1107: ['Gedarif', 'Sudan']
	Record #1108: ['Tigray', 'Ethiopia']
	Record #1414: ['Sa`dah', 'Yemen']
	Record #1415: ['`Asir', 'Saudi Arabia']
	Record #1416: ['Hajjah', 'Yemen']
	Record #1417: ['Jizan', 'Saudi Arabia']
	Record #1598: ['Debub', 'Eritrea']
	Record #1599: ['Red Sea', 'Sudan']
	Record #1600: ['Semenawi Keyih Bahri', 'Eritrea']
	Record #1601: ['Gash Barka', 'Eritrea']
	Record #1602: ['Kassala', 'Sudan']
	Record #1603: ['Maekel', 'Eritrea']
	Record #2037: ['Al Hudaydah', 'Yemen']
	Record #3741: ['Anseba', 'Eritrea']

This functionality means that shapefiles can be used as a bare-bones spatially indexed database, with very fast bounding box queries for even the largest of shapefiles. Note that, as with all spatial indexing, this method does not guarantee that the *geometries* of the resulting matches overlap the queried region, only that their *bounding boxes* overlap. 
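
If you need exact matches, you can follow the bounding box query with a true geometry intersection test using a geometry library. A sketch using the optional shapely package (which is not part of PyShp):

```
from shapely.geometry import shape, box

# The same approximate bounding box of Eritrea as above.
bbox = [36.423, 12.360, 43.123, 18.004]
query = box(*bbox)

for shapeRec in sf.iterShapeRecords(bbox=bbox, fields=["geonunit", "name"]):
    # Convert the pyshp Shape to a shapely geometry via the geo interface and
    # keep only records whose geometry really intersects the query box.
    if shape(shapeRec.shape.__geo_interface__).intersects(query):
        print(shapeRec.record["name"])
```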



## Writing large shapefiles

Similar to the Reader class, the shapefile Writer class uses a streaming approach to keep memory 
usage at a minimum and allow writing shapefiles of arbitrarily large sizes. The library takes care of this under-the-hood by immediately 
writing each geometry and record to disk the moment they 
are added using shape() or record(). Once the writer is closed, exited, or garbage 
collected, the final header information is calculated and written to the beginning of 
the file. 

### Merging multiple shapefiles

This means that it's possible to merge hundreds or thousands of shapefiles, as 
long as you iterate through the source files to avoid loading everything into 
memory. The following example copies the contents of a shapefile to a new file 10 times:

	>>> # create writer
	>>> w = shapefile.Writer('shapefiles/test/merge')

	>>> # copy over fields from the reader
	>>> r = shapefile.Reader("shapefiles/blockgroups")
	>>> for field in r.fields[1:]:
	...     w.field(*field)

	>>> # copy the shapefile to writer 10 times
	>>> repeat = 10
	>>> for i in range(repeat):
	...     r = shapefile.Reader("shapefiles/blockgroups")
	...     for shapeRec in r.iterShapeRecords():
	...         w.record(*shapeRec.record)
	...         w.shape(shapeRec.shape)

	>>> # check that the written file is 10 times longer
	>>> len(w) == len(r) * 10
	True

	>>> # close the writer
	>>> w.close()

In this trivial example, we knew that all files had the exact same field names, ordering, and types. In other scenarios, you will have to additionally make sure that all shapefiles have the exact same fields in the same order, and that they all contain the same geometry type. 
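
A small sketch of such a sanity check before merging (the reader paths are hypothetical):

```
import shapefile

r1 = shapefile.Reader("shapefiles/blockgroups")
r2 = shapefile.Reader("shapefiles/blockgroups2")

# The field definitions (name, type, size, decimal) and the shape type
# must match before the two files can safely be merged.
assert r1.fields[1:] == r2.fields[1:], "field definitions differ"
assert r1.shapeType == r2.shapeType, "shape types differ"
```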

### Editing shapefiles

If you need to edit a shapefile you would have to read the 
file one record at a time, modify or filter the contents, and write it back out. For instance, to create a copy of a shapefile that only keeps a subset of relevant fields: 

	>>> # create writer
	>>> w = shapefile.Writer('shapefiles/test/edit')

	>>> # define which fields to keep
	>>> keep_fields = ['BKG_KEY', 'MEDIANRENT']

	>>> # copy over the relevant fields from the reader
	>>> r = shapefile.Reader("shapefiles/blockgroups")
	>>> for field in r.fields[1:]:
	...     if field[0] in keep_fields:
	...         w.field(*field)

	>>> # write only the relevant attribute values
	>>> for shapeRec in r.iterShapeRecords(fields=keep_fields):
	...     w.record(*shapeRec.record)
	...     w.shape(shapeRec.shape)

	>>> # close writer
	>>> w.close()

## 3D and Other Geometry Types

Most shapefiles store conventional 2D points, lines, or polygons. But the shapefile format is also capable
of storing various other types of geometries, including complex 3D surfaces and objects. 

### Shapefiles with measurement (M) values

Measured shape types are shapes that include a measurement value at each vertex, for instance
speed measurements from a GPS device. Shapes with measurement (M) values are added with the following
methods: "pointm", "multipointm", "linem", and "polygonm". The M-values are specified by adding a
third M value to each XY coordinate. Missing or unobserved M-values are specified with a None value,
or by simply omitting the third M-coordinate.


	>>> w = shapefile.Writer('shapefiles/test/linem')
	>>> w.field('name', 'C')
	
	>>> w.linem([
	...			[[1,5,0],[5,5],[5,1,3],[3,3,None],[1,1,0]], # line with one omitted and one missing M-value
	...			[[3,2],[2,6]] # line without any M-values
	...			])
	
	>>> w.record('linem1')
	
	>>> w.close()
	
Shapefiles containing M-values can be examined in several ways:

	>>> r = shapefile.Reader('shapefiles/test/linem')
	
	>>> r.mbox # the lower and upper bound of M-values in the shapefile
	[0.0, 3.0]
	
	>>> r.shape(0).m # flat list of M-values
	[0.0, None, 3.0, None, 0.0, None, None]

	
### Shapefiles with elevation (Z) values

Elevation shape types are shapes that include an elevation value at each vertex, for instance elevation from a GPS device. 
Shapes with elevation (Z) values are added with the following methods: "pointz", "multipointz", "linez", and "polyz". 
The Z-values are specified by adding a third Z value to each XY coordinate. Z-values do not support the concept of missing data,
but if you omit the third Z-coordinate it will default to 0. Note that Z-type shapes also support measurement (M) values added
as a fourth M-coordinate. This too is optional. 
	
	
	>>> w = shapefile.Writer('shapefiles/test/linez')
	>>> w.field('name', 'C')
	
	>>> w.linez([
	...			[[1,5,18],[5,5,20],[5,1,22],[3,3],[1,1]], # line with some omitted Z-values
	...			[[3,2],[2,6]], # line without any Z-values
	...			[[3,2,15,0],[2,6,13,3],[1,9,14,2]] # line with both Z- and M-values
	...			])
	
	>>> w.record('linez1')
	
	>>> w.close()
	
To examine a Z-type shapefile you can do:

	>>> r = shapefile.Reader('shapefiles/test/linez')
	
	>>> r.zbox # the lower and upper bound of Z-values in the shapefile
	[0.0, 22.0]
	
	>>> r.shape(0).z # flat list of Z-values
	[18.0, 20.0, 22.0, 0.0, 0.0, 0.0, 0.0, 15.0, 13.0, 14.0]

### 3D MultiPatch Shapefiles

Multipatch shapes are useful for storing composite 3-Dimensional objects. 
A MultiPatch shape represents a 3D object made up of one or more surface parts.
Each surface in "parts" is defined by a list of XYZM values (Z and M values optional), and its corresponding type is
given in the "partTypes" argument. The part type decides how the coordinate sequence is to be interpreted, and can be one 
of the following module constants: TRIANGLE_STRIP, TRIANGLE_FAN, OUTER_RING, INNER_RING, FIRST_RING, or RING.
For instance, a TRIANGLE_STRIP may be used to represent the walls of a building, combined with a TRIANGLE_FAN to represent 
its roof: 

	>>> from shapefile import TRIANGLE_STRIP, TRIANGLE_FAN
	
	>>> w = shapefile.Writer('shapefiles/test/multipatch')
	>>> w.field('name', 'C')
	
	>>> w.multipatch([
	...				 [[0,0,0],[0,0,3],[5,0,0],[5,0,3],[5,5,0],[5,5,3],[0,5,0],[0,5,3],[0,0,0],[0,0,3]], # TRIANGLE_STRIP for house walls
	...				 [[2.5,2.5,5],[0,0,3],[5,0,3],[5,5,3],[0,5,3],[0,0,3]], # TRIANGLE_FAN for pointed house roof
	...				 ],
	...				 partTypes=[TRIANGLE_STRIP, TRIANGLE_FAN]) # one type for each part
	
	>>> w.record('house1')
	
	>>> w.close()
	
For an introduction to the various multipatch part types and examples of how to create 3D MultiPatch objects see [this
ESRI White Paper](http://downloads.esri.com/support/whitepapers/ao_/J9749_MultiPatch_Geometry_Type.pdf). 


	
# Testing

The testing framework is pytest, and the tests are located in test_shapefile.py. 
This includes an extensive set of unit tests of the various pyshp features, 
and tests against various input data. Some of the tests that require 
internet connectivity will be skipped in offline testing environments. 
In the same folder as README.md and shapefile.py, from the command line run 
```
$ python -m pytest
``` 

Additionally, all the code and examples located in this file, README.md, 
are tested and verified with the builtin doctest framework.
A special routine for invoking the doctests is run when calling shapefile.py directly.
In the same folder as README.md and shapefile.py, from the command line run 
```
$ python shapefile.py
``` 
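
The doctests can also be invoked from a Python session using the standard library's doctest module. This is only a rough, hypothetical equivalent of the routine built into shapefile.py (which may set up its environment differently), and it assumes README.md has corrected line endings (see the note below) and that the sample shapefiles are present in the working directory:

```
import doctest

# A rough sketch: run the README examples as doctests and report the results.
results = doctest.testfile("README.md", module_relative=False)
print(results)  # TestResults(failed=..., attempted=...)
```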

On Linux/Mac and similar platforms you will need to run `$ dos2unix README.md` first in order
to correct the line endings in README.md.

# Contributors

```
Atle Frenvik Sveen
Bas Couwenberg
Ben Beasley
Casey Meisenzahl
Charles Arnold
David A. Riggs
davidh-ssec
Evan Heidtmann
ezcitron
fiveham
geospatialpython
Hannes
Ignacio Martinez Vazquez
Jason Moujaes
Jonty Wareing
Karim Bahgat
karanrn
Kyle Kelley
Louis Tiao
Marcin Cuprjak
mcuprjak
Micah Cochran
Michael Davis
Michal Čihař
Mike Toews
Miroslav Šedivý
Nilo
pakoun
Paulo Ernesto
Raynor Vliegendhart
Razzi Abuissa
RosBer97
Ross Rogers
Ryan Brideau
Tim Gates
Tobias Megies
Tommi Penttinen
Uli Köhler
Vsevolod Novikov
Zac Miller
```

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/JamesParrott/IronPyShp",
    "name": "IronPyshp",
    "maintainer": "James Parrott",
    "docs_url": null,
    "requires_python": ">=2.7",
    "maintainer_email": "james.parrott@proton.me",
    "keywords": "gis, geospatial, geographic, shapefile, shapefiles",
    "author": "Joel Lawhead",
    "author_email": "jlawhead@geospatialpython.com",
    "download_url": "https://files.pythonhosted.org/packages/f6/84/6670109454e64f4c8e4a97e47bfd1c714b5a1d71c05e08eab792f6dded07/IronPyshp-2.3.1.tar.gz",
    "platform": null,
    "description": "# IronPyShp\r\n\r\nGeneralises logic based on bytes instance checks (that rely on `str is bytes`, as is True in CPython 2), to allow PyShp to work \r\ncorrectly with unicode data in Iron Python 2 (in which `str is not bytes`).  \r\n\r\nBonus: Preserves the order of fields in shape files in `Record.as_dict()` by setting `dict = collections.OrderedDict`.\r\n\r\n- **Reluctant Iron Python 2 user**: [James Parrott](https://github.com/JamesParrott)\r\n- **Version**: 2.3.1\r\n- **Date**: 18 April, 2023\r\n - **License**: [MIT](https://github.com/GeospatialPython/pyshp/blob/master/LICENSE.TXT)\r\n\r\n# PyShp\r\n\r\n\r\nThe Python Shapefile Library (PyShp) reads and writes ESRI Shapefiles in pure Python.\r\n\r\n![pyshp logo](http://4.bp.blogspot.com/_SBi37QEsCvg/TPQuOhlHQxI/AAAAAAAAAE0/QjFlWfMx0tQ/S350/GSP_Logo.png \"PyShp\")\r\n\r\n- **Author**: [Joel Lawhead](https://github.com/GeospatialPython)\r\n- **Maintainers**: [Karim Bahgat](https://github.com/karimbahgat)\r\n- **Version**: 2.3.1\r\n- **Date**: 28 July, 2022\r\n- **License**: [MIT](https://github.com/GeospatialPython/pyshp/blob/master/LICENSE.TXT)\r\n\r\n## Contents\r\n\r\n- [Overview](#overview)\r\n- [Version Changes](#version-changes)\r\n- [The Basics](#the-basics)\r\n\t- [Reading Shapefiles](#reading-shapefiles)\r\n\t\t- [The Reader Class](#the-reader-class)\r\n\t\t\t- [Reading Shapefiles from Local Files](#reading-shapefiles-from-local-files)\r\n\t\t\t- [Reading Shapefiles from Zip Files](#reading-shapefiles-from-zip-files)\r\n\t\t\t- [Reading Shapefiles from URLs](#reading-shapefiles-from-urls)\r\n\t\t\t- [Reading Shapefiles from File-Like Objects](#reading-shapefiles-from-file-like-objects)\r\n\t\t\t- [Reading Shapefiles Using the Context Manager](#reading-shapefiles-using-the-context-manager)\r\n\t\t\t- [Reading Shapefile Meta-Data](#reading-shapefile-meta-data)\r\n\t\t- [Reading Geometry](#reading-geometry)\r\n\t\t- [Reading Records](#reading-records)\r\n\t\t- [Reading Geometry and Records Simultaneously](#reading-geometry-and-records-simultaneously)\r\n\t- [Writing Shapefiles](#writing-shapefiles)\r\n\t\t- [The Writer Class](#the-writer-class)\r\n\t\t\t- [Writing Shapefiles to Local Files](#writing-shapefiles-to-local-files)\r\n\t\t\t- [Writing Shapefiles to File-Like Objects](#writing-shapefiles-to-file-like-objects)\r\n\t\t\t- [Writing Shapefiles Using the Context Manager](#writing-shapefiles-using-the-context-manager)\r\n\t\t\t- [Setting the Shape Type](#setting-the-shape-type)\r\n\t\t- [Adding Records](#adding-records)\r\n\t\t- [Adding Geometry](#adding-geometry)\r\n\t\t- [Geometry and Record Balancing](#geometry-and-record-balancing)\r\n- [Advanced Use](#advanced-use)\r\n    - [Common Errors and Fixes](#common-errors-and-fixes)\r\n        - [Warnings and Logging](#warnings-and-logging)\r\n        - [Shapefile Encoding Errors](#shapefile-encoding-errors)\r\n\t- [Reading Large Shapefiles](#reading-large-shapefiles)\r\n\t\t- [Iterating through a shapefile](#iterating-through-a-shapefile)\r\n\t\t- [Limiting which fields to read](#limiting-which-fields-to-read)\r\n\t\t- [Attribute filtering](#attribute-filtering)\r\n\t\t- [Spatial filtering](#spatial-filtering)\r\n\t- [Writing large shapefiles](#writing-large-shapefiles)\r\n\t\t- [Merging multiple shapefiles](#merging-multiple-shapefiles)\r\n\t\t- [Editing shapefiles](#editing-shapefiles)\r\n\t- [3D and Other Geometry Types](#3d-and-other-geometry-types)\r\n    \t- [Shapefiles with measurement (M) values](#shapefiles-with-measurement-m-values)\r\n\t\t- 
[Shapefiles with elevation (Z) values](#shapefiles-with-elevation-z-values)\r\n\t\t- [3D MultiPatch Shapefiles](#3d-multipatch-shapefiles)\r\n- [Testing](#testing)\r\n- [Contributors](#contributors)\r\n\r\n\r\n# Overview\r\n\r\nThe Python Shapefile Library (PyShp) provides read and write support for the\r\nEsri Shapefile format. The Shapefile format is a popular Geographic\r\nInformation System vector data format created by Esri. For more information\r\nabout this format please read the well-written \"ESRI Shapefile Technical\r\nDescription - July 1998\" located at [http://www.esri.com/library/whitepapers/p\r\ndfs/shapefile.pdf](http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf)\r\n. The Esri document describes the shp and shx file formats. However a third\r\nfile format called dbf is also required. This format is documented on the web\r\nas the \"XBase File Format Description\" and is a simple file-based database\r\nformat created in the 1960's. For more on this specification see: [http://www.clicketyclick.dk/databases/xbase/format/index.html](http://www.clicketyclick.dk/databases/xbase/format/index.html)\r\n\r\nBoth the Esri and XBase file-formats are very simple in design and memory\r\nefficient which is part of the reason the shapefile format remains popular\r\ndespite the numerous ways to store and exchange GIS data available today.\r\n\r\nPyshp is compatible with Python 2.7-3.x.\r\n\r\nThis document provides examples for using PyShp to read and write shapefiles. However \r\nmany more examples are continually added to the blog [http://GeospatialPython.com](http://GeospatialPython.com),\r\nand by searching for PyShp on [https://gis.stackexchange.com](https://gis.stackexchange.com). \r\n\r\nCurrently the sample census blockgroup shapefile referenced in the examples is available on the GitHub project site at\r\n[https://github.com/GeospatialPython/pyshp](https://github.com/GeospatialPython/pyshp). These\r\nexamples are straight-forward and you can also easily run them against your\r\nown shapefiles with minimal modification. \r\n\r\nImportant: If you are new to GIS you should read about map projections.\r\nPlease visit: [https://github.com/GeospatialPython/pyshp/wiki/Map-Projections](https://github.com/GeospatialPython/pyshp/wiki/Map-Projections)\r\n\r\nI sincerely hope this library eliminates the mundane distraction of simply\r\nreading and writing data, and allows you to focus on the challenging and FUN\r\npart of your geospatial project.\r\n\r\n\r\n# Version Changes\r\n\r\n## 2.3.1\r\n\r\n### Bug fixes:\r\n\r\n- Fix recently introduced issue where Reader/Writer closes file-like objects provided by user (#244)\r\n\r\n## 2.3.0\r\n\r\n### New Features:\r\n\r\n- Added support for pathlib and path-like shapefile filepaths (@mwtoews). \r\n- Allow reading individual file extensions via filepaths.\r\n\r\n### Improvements:\r\n\r\n- Simplified setup and deployment (@mwtoews)\r\n- Faster shape access when missing shx file\r\n- Switch to named logger (see #240)\r\n\r\n### Bug fixes:\r\n\r\n- More robust handling of corrupt shapefiles (fixes #235)\r\n- Fix errors when writing to individual file-handles (fixes #237)\r\n- Revert previous decision to enforce geojson output ring orientation (detailed explanation at https://github.com/SciTools/cartopy/issues/2012)\r\n- Fix test issues in environments without network access (@sebastic, @musicinmybrain). 
\r\n\r\n## 2.2.0\r\n\r\n### New Features:\r\n\r\n- Read shapefiles directly from zipfiles.\r\n- Read shapefiles directly from urls.\r\n- Allow fast extraction of only a subset of dbf fields through a `fields` arg.\r\n- Allow fast filtering which shapes to read from the file through a `bbox` arg.\r\n\r\n### Improvements:\r\n\r\n- More examples and restructuring of README. \r\n- More informative Shape to geojson warnings (see #219).\r\n- Add shapefile.VERBOSE flag to control warnings verbosity (default True).\r\n- Shape object information when calling repr().\r\n- Faster ring orientation checks, enforce geojson output ring orientation.\r\n\r\n### Bug fixes:\r\n\r\n- Remove null-padding at end of some record character fields.\r\n- Fix dbf writing error when the number of record list or dict entries didn't match the number of fields.\r\n- Handle rare garbage collection issue after deepcopy (https://github.com/mattijn/topojson/issues/120)\r\n- Fix bug where records and shapes would be assigned incorrect record number (@karanrn)\r\n- Fix typos in docs (@timgates)\r\n\r\n## 2.1.3\r\n\r\n### Bug fixes:\r\n\r\n- Fix recent bug in geojson hole-in-polygon checking (see #205)\r\n- Misc fixes to allow geo interface dump to json (eg dates as strings)\r\n- Handle additional dbf date null values, and return faulty dates as unicode (see #187)\r\n- Add writer target typecheck\r\n- Fix bugs to allow reading shp/shx/dbf separately\r\n- Allow delayed shapefile loading by passing no args\r\n- Fix error with writing empty z/m shapefile (@mcuprjak)\r\n- Fix signed_area() so ignores z/m coords\r\n- Enforce writing the 11th field name character as null-terminator (only first 10 are used)\r\n- Minor README fixes\r\n- Added more tests\r\n\r\n## 2.1.2\r\n\r\n### Bug fixes:\r\n\r\n- Fix issue where warnings.simplefilter('always') changes global warning behavior [see #203]\r\n\r\n## 2.1.1\r\n\r\n### Improvements:\r\n\r\n- Handle shapes with no coords and represent as geojson with no coords (GeoJSON null-equivalent)\r\n- Expand testing to Python 3.6, 3.7, 3.8 and PyPy; drop 3.3 and 3.4 [@mwtoews]\r\n- Added pytest testing [@jmoujaes]\r\n\r\n### Bug fixes:\r\n\r\n- Fix incorrect geo interface handling of multipolygons with complex exterior-hole relations [see #202]\r\n- Enforce shapefile requirement of at least one field, to avoid writing invalid shapefiles [@Jonty]\r\n- Fix Reader geo interface including DeletionFlag field in feature properties [@nnseva]\r\n- Fix polygons not being auto closed, which was accidentally dropped\r\n- Fix error for null geometries in feature geojson\r\n- Misc docstring cleanup [@fiveham]\r\n\r\n## 2.1.0\r\n\r\n### New Features:\r\n\r\n- Added back read/write support for unicode field names. \r\n- Improved Record representation\r\n- More support for geojson on Reader, ShapeRecord, ShapeRecords, and shapes()\r\n\r\n### Bug fixes:\r\n\r\n- Fixed error when reading optional m-values\r\n- Fixed Record attribute autocomplete in Python 3\r\n- Misc readme cleanup\r\n\r\n## 2.0.0\r\n\r\nThe newest version of PyShp, version 2.0 introduced some major new improvements. \r\nA great thanks to all who have contributed code and raised issues, and for everyone's\r\npatience and understanding during the transition period. \r\nSome of the new changes are incompatible with previous versions. 
\r\nUsers of the previous version 1.x should therefore take note of the following changes\r\n(Note: Some contributor attributions may be missing): \r\n\r\n### Major Changes:\r\n\r\n- Full support for unicode text, with custom encoding, and exception handling. \r\n  - Means that the Reader returns unicode, and the Writer accepts unicode. \r\n- PyShp has been simplified to a pure input-output library using the Reader and Writer classes, dropping the Editor class. \r\n- Switched to a new streaming approach when writing files, keeping memory-usage at a minimum:\r\n  - Specify filepath/destination and text encoding when creating the Writer. \r\n  - The file is written incrementally with each call to shape/record. \r\n  - Adding shapes is now done using dedicated methods for each shapetype. \r\n- Reading shapefiles is now more convenient:\r\n  - Shapefiles can be opened using the context manager, and files are properly closed. \r\n  - Shapefiles can be iterated, have a length, and supports the geo interface. \r\n  - New ways of inspecting shapefile metadata by printing. [@megies]\r\n  - More convenient accessing of Record values as attributes. [@philippkraft]\r\n  - More convenient shape type name checking. [@megies] \r\n- Add more support and documentation for MultiPatch 3D shapes. \r\n- The Reader \"elevation\" and \"measure\" attributes now renamed \"zbox\" and \"mbox\", to make it clear they refer to the min/max values. \r\n- Better documentation of previously unclear aspects, such as field types. \r\n\r\n### Important Fixes:\r\n\r\n- More reliable/robust:\r\n  - Fixed shapefile bbox error for empty or point type shapefiles. [@mcuprjak]\r\n  - Reading and writing Z and M type shapes is now more robust, fixing many errors, and has been added to the documentation. [@ShinNoNoir]\r\n  - Improved parsing of field value types, fixed errors and made more flexible. \r\n  - Fixed bug when writing shapefiles with datefield and date values earlier than 1900 [@megies]\r\n- Fix some geo interface errors, including checking polygon directions.\r\n- Bug fixes for reading from case sensitive file names, individual files separately, and from file-like objects. [@gastoneb, @kb003308, @erickskb]\r\n- Enforce maximum field limit. [@mwtoews]\r\n\r\n\r\n# The Basics\r\n\r\nBefore doing anything you must import the library.\r\n\r\n\r\n\t>>> import shapefile\r\n\r\nThe examples below will use a shapefile created from the U.S. Census Bureau\r\nBlockgroups data set near San Francisco, CA and available in the git\r\nrepository of the PyShp GitHub site.\r\n\r\n## Reading Shapefiles\r\n\r\n### The Reader Class\r\n\r\n#### Reading Shapefiles from Local Files\r\n\r\nTo read a shapefile create a new \"Reader\" object and pass it the name of an\r\nexisting shapefile. The shapefile format is actually a collection of three\r\nfiles. You specify the base filename of the shapefile or the complete filename\r\nof any of the shapefile component files.\r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups\")\r\n\r\nOR\r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups.shp\")\r\n\r\nOR\r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups.dbf\")\r\n\r\nOR any of the other 5+ formats which are potentially part of a shapefile. The\r\nlibrary does not care about file extensions. 
You can also specify that you only \r\nwant to read some of the file extensions through the use of keyword arguments:\r\n\r\n\r\n\t>>> sf = shapefile.Reader(dbf=\"shapefiles/blockgroups.dbf\")\r\n\r\n#### Reading Shapefiles from Zip Files\r\n\r\nIf your shapefile is wrapped inside a zip file, the library is able to handle that too, meaning you don't have to worry about unzipping the contents: \r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups.zip\")\r\n\r\nIf the zip file contains multiple shapefiles, just specify which shapefile to read by additionally specifying the relative path after the \".zip\" part:\r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups_multishapefile.zip/blockgroups2.shp\")\r\n\r\n#### Reading Shapefiles from URLs\r\n\r\nFinally, you can use all of the above methods to read shapefiles directly from the internet, by giving a url instead of a local path, e.g.: \r\n\r\n\r\n\t>>> # from a zipped shapefile on website\r\n\t>>> sf = shapefile.Reader(\"https://biogeo.ucdavis.edu/data/diva/rrd/NIC_rrd.zip\")\r\n\r\n\t>>> # from a shapefile collection of files in a github repository\r\n\t>>> sf = shapefile.Reader(\"https://github.com/nvkelso/natural-earth-vector/blob/master/110m_cultural/ne_110m_admin_0_tiny_countries.shp?raw=true\")\r\n\r\nThis will automatically download the file(s) to a temporary location before reading, saving you a lot of time and repetitive boilerplate code when you just want quick access to some external data.\r\n\r\n#### Reading Shapefiles from File-Like Objects\r\n\r\nYou can also load shapefiles from any Python file-like object using keyword\r\narguments to specify any of the three files. This feature is very powerful and\r\nallows you to custom load shapefiles from arbitrary storage formats, such as a protected url or zip file, a serialized object, or in some cases a database.\r\n\r\n\r\n\t>>> myshp = open(\"shapefiles/blockgroups.shp\", \"rb\")\r\n\t>>> mydbf = open(\"shapefiles/blockgroups.dbf\", \"rb\")\r\n\t>>> r = shapefile.Reader(shp=myshp, dbf=mydbf)\r\n\r\nNotice in the examples above the shx file is never used. The shx file is a\r\nvery simple fixed-record index for the variable-length records in the shp\r\nfile. This file is optional for reading. If it's available PyShp will use the\r\nshx file to access shape records a little faster but will do just fine without\r\nit.\r\n\r\n#### Reading Shapefiles Using the Context Manager\r\n\r\nThe \"Reader\" class can be used as a context manager, to ensure open file\r\nobjects are properly closed when done reading the data:\r\n\r\n    >>> with shapefile.Reader(\"shapefiles/blockgroups.shp\") as shp:\r\n    ...     print(shp)\r\n    shapefile Reader\r\n        663 shapes (type 'POLYGON')\r\n        663 records (44 fields)\r\n\r\n#### Reading Shapefile Meta-Data\r\n\r\nShapefiles have a number of attributes for inspecting the file contents.\r\nA shapefile is a container for a specific type of geometry, and this can be checked using the \r\nshapeType attribute. \r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"shapefiles/blockgroups.dbf\")\r\n\t>>> sf.shapeType\r\n\t5\r\n\r\nShape types are represented by numbers between 0 and 31 as defined by the\r\nshapefile specification and listed below. 
It is important to note that the numbering system has\r\nseveral reserved numbers that have not been used yet, therefore the numbers of\r\nthe existing shape types are not sequential:\r\n\r\n- NULL = 0\r\n- POINT = 1\r\n- POLYLINE = 3\r\n- POLYGON = 5\r\n- MULTIPOINT = 8\r\n- POINTZ = 11\r\n- POLYLINEZ = 13\r\n- POLYGONZ = 15\r\n- MULTIPOINTZ = 18\r\n- POINTM = 21\r\n- POLYLINEM = 23\r\n- POLYGONM = 25\r\n- MULTIPOINTM = 28\r\n- MULTIPATCH = 31\r\n\t\r\nBased on this we can see that our blockgroups shapefile contains\r\nPolygon type shapes. The shape types are also defined as constants in\r\nthe shapefile module, so that we can compare types more intuitively:\r\n\r\n\r\n\t>>> sf.shapeType == shapefile.POLYGON\r\n\tTrue\r\n\r\nFor convenience, you can also get the name of the shape type as a string:\r\n\r\n\r\n\t>>> sf.shapeTypeName == 'POLYGON'\r\n\tTrue\r\n\t\r\nOther pieces of meta-data that we can check include the number of features \r\nand the bounding box area the shapefile covers:\r\n\r\n\r\n\t>>> len(sf)\r\n\t663\r\n\t>>> sf.bbox\r\n\t[-122.515048, 37.652916, -122.327622, 37.863433]\r\n\t\r\nFinally, if you would prefer to work with the entire shapefile in a different\r\nformat, you can convert all of it to a GeoJSON dictionary, although you may lose\r\nsome information in the process, such as z- and m-values: \r\n\r\n\r\n\t>>> sf.__geo_interface__['type']\r\n\t'FeatureCollection'\r\n\r\n### Reading Geometry\r\n\r\nA shapefile's geometry is the collection of points or shapes made from\r\nvertices and implied arcs representing physical locations. All types of\r\nshapefiles just store points. The metadata about the points determine how they\r\nare handled by software.\r\n\r\nYou can get a list of the shapefile's geometry by calling the shapes()\r\nmethod.\r\n\r\n\r\n\t>>> shapes = sf.shapes()\r\n\r\nThe shapes method returns a list of Shape objects describing the geometry of\r\neach shape record.\r\n\r\n\r\n\t>>> len(shapes)\r\n\t663\r\n\t\r\nTo read a single shape by calling its index use the shape() method. The index\r\nis the shape's count from 0. So to read the 8th shape record you would use its\r\nindex which is 7.\r\n\r\n\r\n\t>>> s = sf.shape(7)\r\n\t>>> s\r\n\tShape #7: POLYGON\r\n\r\n\t>>> # Read the bbox of the 8th shape to verify\r\n\t>>> # Round coordinates to 3 decimal places\r\n\t>>> ['%.3f' % coord for coord in s.bbox]\r\n\t['-122.450', '37.801', '-122.442', '37.808']\r\n\r\nEach shape record (except Points) contains the following attributes. Records of\r\nshapeType Point do not have a bounding box 'bbox'.\r\n\r\n\r\n\t>>> for name in dir(shapes[3]):\r\n\t...     if not name.startswith('_'):\r\n\t...         name\r\n\t'bbox'\r\n\t'oid'\r\n\t'parts'\r\n\t'points'\r\n\t'shapeType'\r\n\t'shapeTypeName'\r\n\r\n  * `oid`: The shape's index position in the original shapefile.\r\n\r\n\r\n\t\t>>> shapes[3].oid\r\n\t\t3\r\n\r\n  * `shapeType`: an integer representing the type of shape as defined by the\r\n\t  shapefile specification.\r\n\r\n\r\n\t\t>>> shapes[3].shapeType\r\n\t\t5\r\n\r\n  * `shapeTypeName`: a string representation of the type of shape as defined by shapeType. Read-only. \r\n\r\n\r\n\t\t>>> shapes[3].shapeTypeName\r\n\t\t'POLYGON'\r\n\t\t\r\n  * `bbox`: If the shape type contains multiple points this tuple describes the\r\n\t  lower left (x,y) coordinate and upper right corner coordinate creating a\r\n\t  complete box around the points. 
If the shapeType is a\r\n\t  Null (shapeType == 0) then an AttributeError is raised.\r\n\r\n\r\n\t\t>>> # Get the bounding box of the 4th shape.\r\n\t\t>>> # Round coordinates to 3 decimal places\r\n\t\t>>> bbox = shapes[3].bbox\r\n\t\t>>> ['%.3f' % coord for coord in bbox]\r\n\t\t['-122.486', '37.787', '-122.446', '37.811']\r\n\r\n  * `parts`: Parts simply group collections of points into shapes. If the shape\r\n\t  record has multiple parts this attribute contains the index of the first\r\n\t  point of each part. If there is only one part then a list containing 0 is\r\n\t  returned.\r\n\r\n\r\n\t\t>>> shapes[3].parts\r\n\t\t[0]\r\n\r\n  * `points`: The points attribute contains a list of tuples containing an\r\n\t  (x,y) coordinate for each point in the shape.\r\n\r\n\r\n\t\t>>> len(shapes[3].points)\r\n\t\t173\r\n\t\t>>> # Get the 8th point of the fourth shape\r\n\t\t>>> # Truncate coordinates to 3 decimal places\r\n\t\t>>> shape = shapes[3].points[7]\r\n\t\t>>> ['%.3f' % coord for coord in shape]\r\n\t\t['-122.471', '37.787']\r\n\r\nIn most cases, however, if you need to do more than just type or bounds checking, you may want \r\nto convert the geometry to the more human-readable [GeoJSON format](http://geojson.org),\r\nwhere lines and polygons are grouped for you:\r\n\r\n\r\n\t>>> s = sf.shape(0)\r\n\t>>> geoj = s.__geo_interface__\r\n\t>>> geoj[\"type\"]\r\n\t'MultiPolygon'\r\n\t\r\nThe results from the shapes() method similarly supports converting to GeoJSON:\r\n\r\n\r\n\t>>> shapes.__geo_interface__['type']\r\n\t'GeometryCollection'\r\n\r\nNote: In some cases, if the conversion from shapefile geometry to GeoJSON encountered any problems\r\nor potential issues, a warning message will be displayed with information about the affected\r\ngeometry. To ignore or suppress these warnings, you can disable this behavior by setting the \r\nmodule constant VERBOSE to False: \r\n\r\n\r\n\t>>> shapefile.VERBOSE = False\r\n\t\r\n\r\n### Reading Records\r\n\r\nA record in a shapefile contains the attributes for each shape in the\r\ncollection of geometries. Records are stored in the dbf file. The link between\r\ngeometry and attributes is the foundation of all geographic information systems.\r\nThis critical link is implied by the order of shapes and corresponding records\r\nin the shp geometry file and the dbf attribute file.\r\n\r\nThe field names of a shapefile are available as soon as you read a shapefile.\r\nYou can call the \"fields\" attribute of the shapefile as a Python list. Each\r\nfield is a Python list with the following information:\r\n\r\n  * Field name: the name describing the data at this column index.\r\n  * Field type: the type of data at this column index. Types can be: \r\n       * \"C\": Characters, text.\r\n\t   * \"N\": Numbers, with or without decimals.\r\n\t   * \"F\": Floats (same as \"N\").\r\n\t   * \"L\": Logical, for boolean True/False values. \r\n\t   * \"D\": Dates. \r\n\t   * \"M\": Memo, has no meaning within a GIS and is part of the xbase spec instead.\r\n  * Field length: the length of the data found at this column index. Older GIS\r\n\t   software may truncate this length to 8 or 11 characters for \"Character\"\r\n\t   fields.\r\n  * Decimal length: the number of decimal places found in \"Number\" fields.\r\n\r\nTo see the fields for the Reader object above (sf) call the \"fields\"\r\nattribute:\r\n\r\n\r\n\t>>> fields = sf.fields\r\n\r\n\t>>> assert fields == [(\"DeletionFlag\", \"C\", 1, 0), [\"AREA\", \"N\", 18, 5],\r\n\t... 
[\"BKG_KEY\", \"C\", 12, 0], [\"POP1990\", \"N\", 9, 0], [\"POP90_SQMI\", \"N\", 10, 1],\r\n\t... [\"HOUSEHOLDS\", \"N\", 9, 0],\r\n\t... [\"MALES\", \"N\", 9, 0], [\"FEMALES\", \"N\", 9, 0], [\"WHITE\", \"N\", 9, 0],\r\n\t... [\"BLACK\", \"N\", 8, 0], [\"AMERI_ES\", \"N\", 7, 0], [\"ASIAN_PI\", \"N\", 8, 0],\r\n\t... [\"OTHER\", \"N\", 8, 0], [\"HISPANIC\", \"N\", 8, 0], [\"AGE_UNDER5\", \"N\", 8, 0],\r\n\t... [\"AGE_5_17\", \"N\", 8, 0], [\"AGE_18_29\", \"N\", 8, 0], [\"AGE_30_49\", \"N\", 8, 0],\r\n\t... [\"AGE_50_64\", \"N\", 8, 0], [\"AGE_65_UP\", \"N\", 8, 0],\r\n\t... [\"NEVERMARRY\", \"N\", 8, 0], [\"MARRIED\", \"N\", 9, 0], [\"SEPARATED\", \"N\", 7, 0],\r\n\t... [\"WIDOWED\", \"N\", 8, 0], [\"DIVORCED\", \"N\", 8, 0], [\"HSEHLD_1_M\", \"N\", 8, 0],\r\n\t... [\"HSEHLD_1_F\", \"N\", 8, 0], [\"MARHH_CHD\", \"N\", 8, 0],\r\n\t... [\"MARHH_NO_C\", \"N\", 8, 0], [\"MHH_CHILD\", \"N\", 7, 0],\r\n\t... [\"FHH_CHILD\", \"N\", 7, 0], [\"HSE_UNITS\", \"N\", 9, 0], [\"VACANT\", \"N\", 7, 0],\r\n\t... [\"OWNER_OCC\", \"N\", 8, 0], [\"RENTER_OCC\", \"N\", 8, 0],\r\n\t... [\"MEDIAN_VAL\", \"N\", 7, 0], [\"MEDIANRENT\", \"N\", 4, 0],\r\n\t... [\"UNITS_1DET\", \"N\", 8, 0], [\"UNITS_1ATT\", \"N\", 7, 0], [\"UNITS2\", \"N\", 7, 0],\r\n\t... [\"UNITS3_9\", \"N\", 8, 0], [\"UNITS10_49\", \"N\", 8, 0],\r\n\t... [\"UNITS50_UP\", \"N\", 8, 0], [\"MOBILEHOME\", \"N\", 7, 0]]\r\n\r\nThe first field of a dbf file is always a 1-byte field called \"DeletionFlag\", \r\nwhich indicates records that have been deleted but not removed. However, \r\nsince this flag is very rarely used, PyShp currently will return all records  \r\nregardless of their deletion flag, and the flag is also not included in the list of \r\nrecord values. In other words, the DeletionFlag field has no real purpose, and \r\nshould in most cases be ignored. For instance, to get a list of all fieldnames:\r\n\r\n\r\n\t>>> fieldnames = [f[0] for f in sf.fields[1:]]\r\n\r\nYou can get a list of the shapefile's records by calling the records() method:\r\n\r\n\r\n\t>>> records = sf.records()\r\n\r\n\t>>> len(records)\r\n\t663\r\n\r\nTo read a single record call the record() method with the record's index:\r\n\r\n\r\n\t>>> rec = sf.record(3)\r\n\t\r\nEach record is a list-like Record object containing the values corresponding to each field in\r\nthe field list (except the DeletionFlag). A record's values can be accessed by positional indexing or slicing.\r\nFor example in the blockgroups shapefile the 2nd and 3rd fields are the blockgroup id \r\nand the 1990 population count of that San Francisco blockgroup:\r\n\r\n\r\n\t>>> rec[1:3]\r\n\t['060750601001', 4715]\r\n\r\nFor simpler access, the fields of a record can also accessed via the name of the field,\r\neither as a key or as an attribute name. 
The blockgroup id (BKG_KEY) of the blockgroups shapefile \r\ncan also be retrieved as:\r\n\r\n\r\n    >>> rec['BKG_KEY']\r\n    '060750601001'\r\n\r\n    >>> rec.BKG_KEY\r\n    '060750601001'\r\n\t\r\nThe record values can be easily integrated with other programs by converting it to a field-value dictionary:\r\n\r\n\r\n\t>>> dct = rec.as_dict()\r\n\t>>> sorted(dct.items())\r\n\t[('AGE_18_29', 1467), ('AGE_30_49', 1681), ('AGE_50_64', 92), ('AGE_5_17', 848), ('AGE_65_UP', 30), ('AGE_UNDER5', 597), ('AMERI_ES', 6), ('AREA', 2.34385), ('ASIAN_PI', 452), ('BKG_KEY', '060750601001'), ('BLACK', 1007), ('DIVORCED', 149), ('FEMALES', 2095), ('FHH_CHILD', 16), ('HISPANIC', 416), ('HOUSEHOLDS', 1195), ('HSEHLD_1_F', 40), ('HSEHLD_1_M', 22), ('HSE_UNITS', 1258), ('MALES', 2620), ('MARHH_CHD', 79), ('MARHH_NO_C', 958), ('MARRIED', 2021), ('MEDIANRENT', 739), ('MEDIAN_VAL', 337500), ('MHH_CHILD', 0), ('MOBILEHOME', 0), ('NEVERMARRY', 703), ('OTHER', 288), ('OWNER_OCC', 66), ('POP1990', 4715), ('POP90_SQMI', 2011.6), ('RENTER_OCC', 3733), ('SEPARATED', 49), ('UNITS10_49', 49), ('UNITS2', 160), ('UNITS3_9', 672), ('UNITS50_UP', 0), ('UNITS_1ATT', 302), ('UNITS_1DET', 43), ('VACANT', 93), ('WHITE', 2962), ('WIDOWED', 37)]\r\n\r\nIf at a later point you need to check the record's index position in the original \r\nshapefile, you can do this through the \"oid\" attribute:\r\n\r\n\r\n\t>>> rec.oid\r\n\t3\r\n\t\r\n### Reading Geometry and Records Simultaneously\r\n\r\nYou may want to examine both the geometry and the attributes for a record at\r\nthe same time. The shapeRecord() and shapeRecords() method let you do just\r\nthat.\r\n\r\nCalling the shapeRecords() method will return the geometry and attributes for\r\nall shapes as a list of ShapeRecord objects. Each ShapeRecord instance has a\r\n\"shape\" and \"record\" attribute. The shape attribute is a Shape object as\r\ndiscussed in the first section \"Reading Geometry\". The record attribute is a\r\nlist-like object containing field values as demonstrated in the \"Reading Records\" section.\r\n\r\n\r\n\t>>> shapeRecs = sf.shapeRecords()\r\n\r\nLet's read the blockgroup key and the population for the 4th blockgroup:\r\n\r\n\r\n\t>>> shapeRecs[3].record[1:3]\r\n\t['060750601001', 4715]\r\n\r\nThe results from the shapeRecords() method is a list-like object that can be easily converted\r\nto GeoJSON through the _\\_geo_interface\\_\\_:\r\n\r\n\r\n\t>>> shapeRecs.__geo_interface__['type']\r\n\t'FeatureCollection'\r\n\r\nThe shapeRecord() method reads a single shape/record pair at the specified index.\r\nTo get the 4th shape record from the blockgroups shapefile use the third index:\r\n\r\n\r\n\t>>> shapeRec = sf.shapeRecord(3)\r\n\t>>> shapeRec.record[1:3]\r\n\t['060750601001', 4715]\r\n\t\r\nEach individual shape record also supports the _\\_geo_interface\\_\\_ to convert it to a GeoJSON feature:\r\n\r\n\r\n\t>>> shapeRec.__geo_interface__['type']\r\n\t'Feature'\r\n\t\r\n\r\n## Writing Shapefiles\r\n\r\n### The Writer Class\r\n\r\nPyShp tries to be as flexible as possible when writing shapefiles while\r\nmaintaining some degree of automatic validation to make sure you don't\r\naccidentally write an invalid file.\r\n\r\nPyShp can write just one of the component files such as the shp or dbf file\r\nwithout writing the others. So in addition to being a complete shapefile\r\nlibrary, it can also be used as a basic dbf (xbase) library. Dbf files are a\r\ncommon database format which are often useful as a standalone simple database\r\nformat. 
And even shp files occasionally have uses as a standalone format. Some\r\nweb-based GIS systems use an user-uploaded shp file to specify an area of\r\ninterest. Many precision agriculture chemical field sprayers also use the shp\r\nformat as a control file for the sprayer system (usually in combination with\r\ncustom database file formats).\r\n\r\n#### Writing Shapefiles to Local Files\r\n\r\nTo create a shapefile you begin by initiating a new Writer instance, passing it\r\nthe file path and name to save to:\r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/testfile')\r\n\t>>> w.field('field1', 'C')\r\n\t\r\nFile extensions are optional when reading or writing shapefiles. If you specify\r\nthem PyShp ignores them anyway. When you save files you can specify a base\r\nfile name that is used for all three file types. Or you can specify a name for\r\none or more file types:\r\n\r\n\r\n\t>>> w = shapefile.Writer(dbf='shapefiles/test/onlydbf.dbf')\r\n\t>>> w.field('field1', 'C')\r\n\t\r\nIn that case, any file types not assigned will not\r\nsave and only file types with file names will be saved. \r\n\r\n#### Writing Shapefiles to File-Like Objects\r\n\r\nJust as you can read shapefiles from python file-like objects you can also\r\nwrite to them:\r\n\r\n\r\n\t>>> try:\r\n\t...     from StringIO import StringIO\r\n\t... except ImportError:\r\n\t...     from io import BytesIO as StringIO\r\n\t>>> shp = StringIO()\r\n\t>>> shx = StringIO()\r\n\t>>> dbf = StringIO()\r\n\t>>> w = shapefile.Writer(shp=shp, shx=shx, dbf=dbf)\r\n\t>>> w.field('field1', 'C')\r\n\t>>> w.record()\r\n\t>>> w.null()\r\n\t>>> w.close()\r\n\r\n\t>>> # To read back the files you could call the \"StringIO.getvalue()\" method later.\r\n\t>>> assert shp.getvalue()\r\n\t>>> assert shx.getvalue()\r\n\t>>> assert dbf.getvalue()\r\n\r\n\t>>> # In fact, you can read directly from them using the Reader\r\n\t>>> r = shapefile.Reader(shp=shp, shx=shx, dbf=dbf)\r\n\t>>> len(r)\r\n\t1\r\n\t\r\n\t\r\n\r\n#### Writing Shapefiles Using the Context Manager\r\n\r\nThe \"Writer\" class automatically closes the open files and writes the final headers once it is garbage collected.\r\nIn case of a crash and to make the code more readable, it is nevertheless recommended \r\nyou do this manually by calling the \"close()\" method: \r\n\r\n\r\n\t>>> w.close()\r\n\r\nAlternatively, you can also use the \"Writer\" class as a context manager, to ensure open file\r\nobjects are properly closed and final headers written once you exit the with-clause:\r\n\r\n\r\n\t>>> with shapefile.Writer(\"shapefiles/test/contextwriter\") as w:\r\n\t... \tw.field('field1', 'C')\r\n\t... \tpass\r\n\t\r\n#### Setting the Shape Type\r\n\r\nThe shape type defines the type of geometry contained in the shapefile. All of\r\nthe shapes must match the shape type setting.\r\n\r\nThere are three ways to set the shape type: \r\n  * Set it when creating the class instance. \r\n  * Set it by assigning a value to an existing class instance. 
\r\n  * Set it automatically to the type of the first non-null shape by saving the shapefile.\r\n\r\nTo manually set the shape type for a Writer object when creating the Writer:\r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/shapetype', shapeType=3)\r\n\t>>> w.field('field1', 'C')\r\n\r\n\t>>> w.shapeType\r\n\t3\r\n\r\nOR you can set it after the Writer is created:\r\n\r\n\r\n\t>>> w.shapeType = 1\r\n\r\n\t>>> w.shapeType\r\n\t1\r\n\t\r\n\r\n### Adding Records\r\n\r\nBefore you can add records you must first create the fields that define what types of \r\nvalues will go into each attribute. \r\n\r\nThere are several different field types, all of which support storing None values as NULL. \r\n\r\nText fields are created using the 'C' type, and the third 'size' argument can be customized to the expected\r\nlength of text values to save space:\r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/dtype')\r\n\t>>> w.field('TEXT', 'C')\r\n\t>>> w.field('SHORT_TEXT', 'C', size=5)\r\n\t>>> w.field('LONG_TEXT', 'C', size=250)\r\n\t>>> w.null()\r\n\t>>> w.record('Hello', 'World', 'World'*50)\r\n\t>>> w.close()\r\n\t\r\n\t>>> r = shapefile.Reader('shapefiles/test/dtype')\r\n\t>>> assert r.record(0) == ['Hello', 'World', 'World'*50]\r\n\r\nDate fields are created using the 'D' type, and can be created using either \r\ndate objects, lists, or a YYYYMMDD formatted string. \r\nField length or decimal have no impact on this type:\r\n\r\n\r\n\t>>> from datetime import date\r\n\t>>> w = shapefile.Writer('shapefiles/test/dtype')\r\n\t>>> w.field('DATE', 'D')\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.record(date(1898,1,30))\r\n\t>>> w.record([1998,1,30])\r\n\t>>> w.record('19980130')\r\n\t>>> w.record(None)\r\n\t>>> w.close()\r\n\t\r\n\t>>> r = shapefile.Reader('shapefiles/test/dtype')\r\n\t>>> assert r.record(0) == [date(1898,1,30)]\r\n\t>>> assert r.record(1) == [date(1998,1,30)]\r\n\t>>> assert r.record(2) == [date(1998,1,30)]\r\n\t>>> assert r.record(3) == [None]\r\n\r\nNumeric fields are created using the 'N' type (or the 'F' type, which is exactly the same). \r\nBy default the fourth decimal argument is set to zero, essentially creating an integer field. \r\nTo store floats you must set the decimal argument to the precision of your choice. \r\nTo store very large numbers you must increase the field length size to the total number of digits \r\n(including comma and minus). \r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/dtype')\r\n\t>>> w.field('INT', 'N')\r\n\t>>> w.field('LOWPREC', 'N', decimal=2)\r\n\t>>> w.field('MEDPREC', 'N', decimal=10)\r\n\t>>> w.field('HIGHPREC', 'N', decimal=30)\r\n\t>>> w.field('FTYPE', 'F', decimal=10)\r\n\t>>> w.field('LARGENR', 'N', 101)\r\n\t>>> nr = 1.3217328\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.record(INT=nr, LOWPREC=nr, MEDPREC=nr, HIGHPREC=-3.2302e-25, FTYPE=nr, LARGENR=int(nr)*10**100)\r\n\t>>> w.record(None, None, None, None, None, None)\r\n\t>>> w.close()\r\n\t\r\n\t>>> r = shapefile.Reader('shapefiles/test/dtype')\r\n\t>>> assert r.record(0) == [1, 1.32, 1.3217328, -3.2302e-25, 1.3217328, 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000]\r\n\t>>> assert r.record(1) == [None, None, None, None, None, None]\r\n\r\n\t\r\nFinally, we can create boolean fields by setting the type to 'L'. \r\nThis field can take True or False values, or 1 (True) or 0 (False). \r\nNone is interpreted as missing. 
\r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/dtype')\r\n\t>>> w.field('BOOLEAN', 'L')\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.record(True)\r\n\t>>> w.record(1)\r\n\t>>> w.record(False)\r\n\t>>> w.record(0)\r\n\t>>> w.record(None)\r\n\t>>> w.record(\"Nonesense\")\r\n\t>>> w.close()\r\n\t\r\n\t>>> r = shapefile.Reader('shapefiles/test/dtype')\r\n\t>>> r.record(0)\r\n\tRecord #0: [True]\r\n\t>>> r.record(1)\r\n\tRecord #1: [True]\r\n\t>>> r.record(2)\r\n\tRecord #2: [False]\r\n\t>>> r.record(3)\r\n\tRecord #3: [False]\r\n\t>>> r.record(4)\r\n\tRecord #4: [None]\r\n\t>>> r.record(5)\r\n\tRecord #5: [None]\r\n\t\r\nYou can also add attributes using keyword arguments where the keys are field names.\r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/dtype')\r\n\t>>> w.field('FIRST_FLD','C','40')\r\n\t>>> w.field('SECOND_FLD','C','40')\r\n\t>>> w.null()\r\n\t>>> w.null()\r\n\t>>> w.record('First', 'Line')\r\n\t>>> w.record(FIRST_FLD='First', SECOND_FLD='Line')\r\n\t>>> w.close()\r\n\r\n### Adding Geometry\r\n\r\nGeometry is added using one of several convenience methods. The \"null\" method is used\r\nfor null shapes, \"point\" is used for point shapes, \"multipoint\" is used for multipoint shapes, \"line\" for lines,\r\n\"poly\" for polygons. \r\n\r\n**Adding a Null shape**\r\n\r\nA shapefile may contain some records for which geometry is not available, and may be set using the \"null\" method. \r\nBecause Null shape types (shape type 0) have no geometry the \"null\" method is called without any arguments. \r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/null')\r\n\t>>> w.field('name', 'C')\r\n\r\n\t>>> w.null()\r\n\t>>> w.record('nullgeom')\r\n\r\n\t>>> w.close()\r\n\r\n**Adding a Point shape**\r\n\r\nPoint shapes are added using the \"point\" method. A point is specified by an x and\r\ny value. \r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/point')\r\n\t>>> w.field('name', 'C')\r\n\t\r\n\t>>> w.point(122, 37) \r\n\t>>> w.record('point1')\r\n\t\r\n\t>>> w.close()\r\n\r\n**Adding a MultiPoint shape**\r\n\r\nIf your point data allows for the possibility of multiple points per feature, use \"multipoint\" instead. \r\nThese are specified as a list of xy point coordinates. \r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/multipoint')\r\n\t>>> w.field('name', 'C')\r\n\t\r\n\t>>> w.multipoint([[122,37], [124,32]]) \r\n\t>>> w.record('multipoint1')\r\n\t\r\n\t>>> w.close()\r\n\t\r\n**Adding a LineString shape**\r\n\r\nFor LineString shapefiles, each shape is given as a list of one or more linear features. \r\nEach of the linear features must have at least two points. \r\n\t\r\n\t\r\n\t>>> w = shapefile.Writer('shapefiles/test/line')\r\n\t>>> w.field('name', 'C')\r\n\t\r\n\t>>> w.line([\r\n\t...\t\t\t[[1,5],[5,5],[5,1],[3,3],[1,1]], # line 1\r\n\t...\t\t\t[[3,2],[2,6]] # line 2\r\n\t...\t\t\t])\r\n\t\r\n\t>>> w.record('linestring1')\r\n\t\r\n\t>>> w.close()\r\n\t\r\n**Adding a Polygon shape**\r\n\r\nSimilarly to LineString, Polygon shapes consist of multiple polygons, and must be given as a list of polygons.\r\nThe main difference is that polygons must have at least 4 points and the last point must be the same as the first. 
\r\nIt's also okay if you forget to repeat the first point at the end; PyShp automatically checks and closes the polygons\r\nif you don't.\r\n\r\nIt's important to note that for Polygon shapefiles, your polygon coordinates must be ordered in a clockwise direction.\r\nIf any of the polygons have holes, then the hole polygon coordinates must be ordered in a counterclockwise direction.\r\nThe direction of your polygons determines how shapefile readers will distinguish between polygon outlines and holes. \r\n\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/polygon')\r\n\t>>> w.field('name', 'C')\r\n\r\n\t>>> w.poly([\r\n\t...\t        [[113,24], [112,32], [117,36], [122,37], [118,20]], # poly 1\r\n\t...\t        [[116,29],[116,26],[119,29],[119,32]], # hole 1\r\n\t...         [[15,2], [17,6], [22,7]]  # poly 2\r\n\t...        ])\r\n\t>>> w.record('polygon1')\r\n\t\r\n\t>>> w.close()\r\n\t\t\r\n**Adding from an existing Shape object**\r\n\r\nFinally, geometry can be added by passing an existing \"Shape\" object to the \"shape\" method.\r\nYou can also pass it any GeoJSON dictionary or _\\_geo_interface\\_\\_ compatible object. \r\nThis can be particularly useful for copying from one file to another:\r\n\r\n\r\n\t>>> r = shapefile.Reader('shapefiles/test/polygon')\r\n\r\n\t>>> w = shapefile.Writer('shapefiles/test/copy')\r\n\t>>> w.fields = r.fields[1:] # skip first deletion field\r\n\r\n\t>>> # adding existing Shape objects\r\n\t>>> for shaperec in r.iterShapeRecords():\r\n\t...     w.record(*shaperec.record)\r\n\t...     w.shape(shaperec.shape)\r\n\t\r\n\t>>> # or GeoJSON dicts\r\n\t>>> for shaperec in r.iterShapeRecords():\r\n\t...     w.record(*shaperec.record)\r\n\t...     w.shape(shaperec.shape.__geo_interface__)\r\n\t\r\n\t>>> w.close()\t\r\n\t\r\n\r\n### Geometry and Record Balancing\r\n\r\nBecause every shape must have a corresponding record it is critical that the\r\nnumber of records equals the number of shapes to create a valid shapefile. You\r\nmust take care to add records and shapes in the same order so that the record\r\ndata lines up with the geometry data. For example:\r\n\r\n\t\r\n\t>>> w = shapefile.Writer('shapefiles/test/balancing', shapeType=shapefile.POINT)\r\n\t>>> w.field(\"field1\", \"C\")\r\n\t>>> w.field(\"field2\", \"C\")\r\n\t\r\n\t>>> w.record(\"row\", \"one\")\r\n\t>>> w.point(1, 1)\r\n\t\r\n\t>>> w.record(\"row\", \"two\")\r\n\t>>> w.point(2, 2)\r\n\t\r\nTo help prevent accidental misalignment PyShp has an \"auto balance\" feature to\r\nmake sure when you add either a shape or a record the two sides of the\r\nequation line up. This way if you forget to update an entry the\r\nshapefile will still be valid and handled correctly by most shapefile\r\nsoftware. Autobalancing is NOT turned on by default. To activate it set\r\nthe attribute autoBalance to 1 or True:\r\n\r\n\r\n    >>> w.autoBalance = 1\r\n\t>>> w.record(\"row\", \"three\")\r\n\t>>> w.record(\"row\", \"four\")\r\n\t>>> w.point(4, 4)\r\n\t\r\n\t>>> w.recNum == w.shpNum\r\n\tTrue\r\n\r\nYou also have the option of manually calling the balance() method at any time\r\nto ensure the other side is up to date. When balancing is used\r\nnull shapes are created on the geometry side or records\r\nwith a value of \"NULL\" for each field is created on the attribute side.\r\nThis gives you flexibility in how you build the shapefile.\r\nYou can create all of the shapes and then create all of the records or vice versa. 
\r\n\r\n\r\n    >>> w.autoBalance = 0\r\n\t>>> w.record(\"row\", \"five\")\r\n\t>>> w.record(\"row\", \"six\")\r\n\t>>> w.record(\"row\", \"seven\")\r\n\t>>> w.point(5, 5)\r\n\t>>> w.point(6, 6)\r\n\t>>> w.balance()\r\n\t\r\n\t>>> w.recNum == w.shpNum\r\n\tTrue\r\n\r\nIf you do not use the autoBalance() or balance() method and forget to manually\r\nbalance the geometry and attributes the shapefile will be viewed as corrupt by\r\nmost shapefile software.\r\n\t\r\n### Writing .prj files\r\nA .prj file, or projection file, is a simple text file that stores a shapefile's map projection and coordinate reference system to help mapping software properly locate the geometry on a map. If you don't have one, you may get confusing errors when you try and use the shapefile you created. The GIS software may complain that it doesn't know the shapefile's projection and refuse to accept it, it may assume the shapefile is the same projection as the rest of your GIS project and put it in the wrong place, or it might assume the coordinates are an offset in meters from latitude and longitude 0,0 which will put your data in the middle of the ocean near Africa. The text in the .prj file is a [Well-Known-Text (WKT) projection string](https://en.wikipedia.org/wiki/Well-known_text_representation_of_coordinate_reference_systems). Projection strings can be quite long so they are often referenced using numeric codes call EPSG codes. The .prj file must have the same base name as your shapefile. So for example if you have a shapefile named \"myPoints.shp\", the .prj file must be named \"myPoints.prj\". \r\n\r\nIf you're using the same projection over and over, the following is a simple way to create the .prj file assuming your base filename is stored in a variable called \"filename\":\r\n\r\n```\r\n\twith open(\"{}.prj\".format(filename), \"w\") as prj:\r\n\t    wkt = 'GEOGCS[\"WGS 84\",'\r\n\t    wkt += 'DATUM[\"WGS_1984\",'\r\n\t    wkt += 'SPHEROID[\"WGS 84\",6378137,298.257223563]]'\r\n\t    wkt += ',PRIMEM[\"Greenwich\",0],'\r\n\t    wkt += 'UNIT[\"degree\",0.0174532925199433]]'\r\n\t    prj.write(wkt)\r\n```\r\n\r\nIf you need to dynamically fetch WKT projection strings, you can use the pure Python [PyCRS](https://github.com/karimbahgat/PyCRS) module which has a number of useful features. \r\n\r\n# Advanced Use\r\n\r\n## Common Errors and Fixes\r\n\r\nBelow we list some commonly encountered errors and ways to fix them. \r\n\r\n### Warnings and Logging\r\n\r\nBy default, PyShp chooses to be transparent and provide the user with logging information and warnings about non-critical issues when reading or writing shapefiles. This behavior is controlled by the module constant `VERBOSE` (which defaults to True). If you would rather suppress this information, you can simply set this to False: \r\n\r\n\r\n\t>>> shapefile.VERBOSE = False\r\n\r\nAll logging happens under the namespace `shapefile`. So another way to suppress all PyShp warnings would be to alter the logging behavior for that namespace:\r\n\r\n\r\n\t>>> import logging\r\n\t>>> logging.getLogger('shapefile').setLevel(logging.ERROR)\r\n\r\n### Shapefile Encoding Errors\r\n\r\nPyShp supports reading and writing shapefiles in any language or character encoding, and provides several options for decoding and encoding text. \r\nMost shapefiles are written in UTF-8 encoding, PyShp's default encoding, so in most cases you don't have to specify the encoding. 
\r\nIf you encounter an encoding error when reading a shapefile, this means the shapefile was likely written in a non-utf8 encoding. \r\nFor instance, when working with English language shapefiles, a common reason for encoding errors is that the shapefile was written in Latin-1 encoding.\r\nFor reading shapefiles in any non-utf8 encoding, such as Latin-1, just \r\nsupply the encoding option when creating the Reader class. \r\n\r\n\r\n\t>>> r = shapefile.Reader(\"shapefiles/test/latin1.shp\", encoding=\"latin1\")\r\n\t>>> r.record(0) == [2, u'\u00d1and\u00fa']\r\n\tTrue\r\n\t\r\nOnce you have loaded the shapefile, you may choose to save it using another more supportive encoding such \r\nas UTF-8. Assuming the new encoding supports the characters you are trying to write, reading it back in \r\nshould give you the same unicode string you started with. \r\n\r\n\r\n\t>>> w = shapefile.Writer(\"shapefiles/test/latin_as_utf8.shp\", encoding=\"utf8\")\r\n\t>>> w.fields = r.fields[1:]\r\n\t>>> w.record(*r.record(0))\r\n\t>>> w.null()\r\n\t>>> w.close()\r\n\t\r\n\t>>> r = shapefile.Reader(\"shapefiles/test/latin_as_utf8.shp\", encoding=\"utf8\")\r\n\t>>> r.record(0) == [2, u'\u00d1and\u00fa']\r\n\tTrue\r\n\t\r\nIf you supply the wrong encoding and the string is unable to be decoded, PyShp will by default raise an\r\nexception. If however, on rare occasion, you are unable to find the correct encoding and want to ignore\r\nor replace encoding errors, you can specify the \"encodingErrors\" to be used by the decode method. This\r\napplies to both reading and writing. \r\n\r\n\r\n\t>>> r = shapefile.Reader(\"shapefiles/test/latin1.shp\", encoding=\"ascii\", encodingErrors=\"replace\")\r\n\t>>> r.record(0) == [2, u'\ufffdand\ufffd']\r\n\tTrue\r\n\r\n\r\n\r\n## Reading Large Shapefiles\r\n\r\nDespite being a lightweight library, PyShp is designed to be able to read shapefiles of any size, allowing you to work with hundreds of thousands or even millions \r\nof records and complex geometries. \r\n\r\n### Iterating through a shapefile\r\n\r\nAs an example, let's load this Natural Earth shapefile of more than 4000 global administrative boundary polygons:\r\n\r\n\r\n\t>>> sf = shapefile.Reader(\"https://github.com/nvkelso/natural-earth-vector/blob/master/10m_cultural/ne_10m_admin_1_states_provinces?raw=true\")\r\n\r\nWhen first creating the Reader class, the library only reads the header information\r\nand leaves the rest of the file contents alone. Once you call the records() and shapes() \r\nmethods however, it will attempt to read the entire file into memory at once. \r\nFor very large files this can result in MemoryError. So when working with large files\r\nit is recommended to use instead the iterShapes(), iterRecords(), or iterShapeRecords()\r\nmethods instead. These iterate through the file contents one at a time, enabling you to loop \r\nthrough them while keeping memory usage at a minimum. \r\n\r\n\r\n\t>>> for shape in sf.iterShapes():\r\n\t...     # do something here\r\n\t...     pass\r\n\t\r\n\t>>> for rec in sf.iterRecords():\r\n\t...     # do something here\r\n\t...     pass\r\n\t\r\n\t>>> for shapeRec in sf.iterShapeRecords():\r\n\t...     # do something here\r\n\t...     pass\r\n\r\n\t>>> for shapeRec in sf: # same as iterShapeRecords()\r\n\t...     # do something here\r\n\t...     
pass\r\n\r\n### Limiting which fields to read\r\n\r\nBy default when reading the attribute records of a shapefile, pyshp unpacks and returns the data for all of the dbf fields, regardless of whether you actually need that data or not. To limit which field data is unpacked when reading each record and speed up processing time, you can specify the `fields` argument to any of the methods involving record data. Note that the order of the specified fields does not matter, the resulting records will list the specified field values in the order that they appear in the original dbf file. For instance, if we are only interested in the country and name of each admin unit, the following is a more efficient way of iterating through the file:\r\n\r\n\r\n\t>>> fields = [\"geonunit\", \"name\"]\r\n\t>>> for rec in sf.iterRecords(fields=fields):\r\n\t... \t# do something\r\n\t... \tpass\r\n\t>>> rec\r\n\tRecord #4595: ['Birgu', 'Malta']\r\n\t\r\n### Attribute filtering\r\n\r\nIn many cases, we aren't interested in all entries of a shapefile, but rather only want to retrieve a small subset of records by filtering on some attribute. To avoid wasting time reading records and shapes that we don't need, we can start by iterating only the records and fields of interest, check if the record matches some condition as a way to filter the data, and finally load the full record and shape geometry for those that meet the condition:\r\n\r\n\r\n\t>>> filter_field = \"geonunit\"\r\n\t>>> filter_value = \"Eritrea\"\r\n\t>>> for rec in sf.iterRecords(fields=[filter_field]):\r\n\t...     if rec[filter_field] == filter_value:\r\n\t... \t\t# load full record and shape\r\n\t... \t\tshapeRec = sf.shapeRecord(rec.oid)\r\n\t... \t\tshapeRec.record[\"name\"]\r\n\t'Debubawi Keyih Bahri'\r\n\t'Debub'\r\n\t'Semenawi Keyih Bahri'\r\n\t'Gash Barka'\r\n\t'Maekel'\r\n\t'Anseba'\r\n\r\nSelectively reading only the necessary data in this way is particularly useful for efficiently processing a limited subset of data from very large files or when looping through a large number of files, especially if they contain large attribute tables or complex shape geometries. \r\n\r\n### Spatial filtering\r\n\r\nAnother common use-case is that we only want to read those records that are located in some region of interest. Because the shapefile stores the bounding box of each shape separately from the geometry data, it's possible to quickly retrieve all shapes that might overlap a given bounding box region without having to load the full shape geometry data for every shape. This can be done by specifying the `bbox` argument to any of the record or shape methods:\r\n\r\n\r\n\t>>> bbox = [36.423, 12.360, 43.123, 18.004] # ca bbox of Eritrea\r\n\t>>> fields = [\"geonunit\",\"name\"]\r\n\t>>> for shapeRec in sf.iterShapeRecords(bbox=bbox, fields=fields):\r\n\t... 
\tshapeRec.record\r\n\tRecord #368: ['Afar', 'Ethiopia']\r\n\tRecord #369: ['Tadjourah', 'Djibouti']\r\n\tRecord #375: ['Obock', 'Djibouti']\r\n\tRecord #376: ['Debubawi Keyih Bahri', 'Eritrea']\r\n\tRecord #1106: ['Amhara', 'Ethiopia']\r\n\tRecord #1107: ['Gedarif', 'Sudan']\r\n\tRecord #1108: ['Tigray', 'Ethiopia']\r\n\tRecord #1414: ['Sa`dah', 'Yemen']\r\n\tRecord #1415: ['`Asir', 'Saudi Arabia']\r\n\tRecord #1416: ['Hajjah', 'Yemen']\r\n\tRecord #1417: ['Jizan', 'Saudi Arabia']\r\n\tRecord #1598: ['Debub', 'Eritrea']\r\n\tRecord #1599: ['Red Sea', 'Sudan']\r\n\tRecord #1600: ['Semenawi Keyih Bahri', 'Eritrea']\r\n\tRecord #1601: ['Gash Barka', 'Eritrea']\r\n\tRecord #1602: ['Kassala', 'Sudan']\r\n\tRecord #1603: ['Maekel', 'Eritrea']\r\n\tRecord #2037: ['Al Hudaydah', 'Yemen']\r\n\tRecord #3741: ['Anseba', 'Eritrea']\r\n\r\nThis functionality means that shapefiles can be used as a bare-bones spatially indexed database, with very fast bounding box queries for even the largest of shapefiles. Note that, as with all spatial indexing, this method does not guarantee that the *geometries* of the resulting matches overlap the queried region, only that their *bounding boxes* overlap. \r\n\r\n\r\n\r\n## Writing large shapefiles\r\n\r\nSimilar to the Reader class, the shapefile Writer class uses a streaming approach to keep memory \r\nusage at a minimum and allow writing shapefiles of arbitrarily large sizes. The library takes care of this under-the-hood by immediately \r\nwriting each geometry and record to disk the moment they \r\nare added using shape() or record(). Once the writer is closed, exited, or garbage \r\ncollected, the final header information is calculated and written to the beginning of \r\nthe file. \r\n\r\n### Merging multiple shapefiles\r\n\r\nThis means that it's possible to merge hundreds or thousands of shapefiles, as \r\nlong as you iterate through the source files to avoid loading everything into \r\nmemory. The following example copies the contents of a shapefile to a new file 10 times:\r\n\r\n\t>>> # create writer\r\n\t>>> w = shapefile.Writer('shapefiles/test/merge')\r\n\r\n\t>>> # copy over fields from the reader\r\n\t>>> r = shapefile.Reader(\"shapefiles/blockgroups\")\r\n\t>>> for field in r.fields[1:]:\r\n\t...     w.field(*field)\r\n\r\n\t>>> # copy the shapefile to writer 10 times\r\n\t>>> repeat = 10\r\n\t>>> for i in range(repeat):\r\n\t...     r = shapefile.Reader(\"shapefiles/blockgroups\")\r\n\t...     for shapeRec in r.iterShapeRecords():\r\n\t...         w.record(*shapeRec.record)\r\n\t...         w.shape(shapeRec.shape)\r\n\r\n\t>>> # check that the written file is 10 times longer\r\n\t>>> len(w) == len(r) * 10\r\n\tTrue\r\n\r\n\t>>> # close the writer\r\n\t>>> w.close()\r\n\r\nIn this trivial example, we knew that all files had the exact same field names, ordering, and types. In other scenarios, you will have to additionally make sure that all shapefiles have the exact same fields in the same order, and that they all contain the same geometry type. \r\n\r\n### Editing shapefiles\r\n\r\nIf you need to edit a shapefile you would have to read the \r\nfile one record at a time, modify or filter the contents, and write it back out. 
### Editing shapefiles

If you need to edit a shapefile you have to read the file one record at a time, modify or filter the contents, and write it back out. For instance, to create a copy of a shapefile that only keeps a subset of relevant fields:

	>>> # create writer
	>>> w = shapefile.Writer('shapefiles/test/edit')

	>>> # define which fields to keep
	>>> keep_fields = ['BKG_KEY', 'MEDIANRENT']

	>>> # copy over the relevant fields from the reader
	>>> r = shapefile.Reader("shapefiles/blockgroups")
	>>> for field in r.fields[1:]:
	...     if field[0] in keep_fields:
	...         w.field(*field)

	>>> # write only the relevant attribute values
	>>> for shapeRec in r.iterShapeRecords(fields=keep_fields):
	...     w.record(*shapeRec.record)
	...     w.shape(shapeRec.shape)

	>>> # close writer
	>>> w.close()

## 3D and Other Geometry Types

Most shapefiles store conventional 2D points, lines, or polygons. But the shapefile format is also capable of storing various other types of geometries, including complex 3D surfaces and objects.

### Shapefiles with measurement (M) values

Measured shape types are shapes that include a measurement value at each vertex, for instance speed measurements from a GPS device. Shapes with measurement (M) values are added with the following methods: "pointm", "multipointm", "linem", and "polygonm". The M-values are specified by adding a third M value to each XY coordinate. Missing or unobserved M-values are specified with a None value, or by simply omitting the third M-coordinate.

	>>> w = shapefile.Writer('shapefiles/test/linem')
	>>> w.field('name', 'C')
	
	>>> w.linem([
	...			[[1,5,0],[5,5],[5,1,3],[3,3,None],[1,1,0]], # line with one omitted and one missing M-value
	...			[[3,2],[2,6]] # line without any M-values
	...			])
	
	>>> w.record('linem1')
	
	>>> w.close()
	
Shapefiles containing M-values can be examined in several ways:

	>>> r = shapefile.Reader('shapefiles/test/linem')
	
	>>> r.mbox # the lower and upper bound of M-values in the shapefile
	[0.0, 3.0]
	
	>>> r.shape(0).m # flat list of M-values
	[0.0, None, 3.0, None, 0.0, None, None]
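Because `shape.m` holds exactly one value per vertex, it lines up with `shape.points`. A minimal sketch (the variable names are ours) for inspecting each coordinate together with its measurement:

```
import shapefile

with shapefile.Reader('shapefiles/test/linem') as r:
    shape = r.shape(0)
    # shape.points holds one (x, y) pair per vertex; shape.m holds one M-value per vertex
    for (x, y), m in zip(shape.points, shape.m):
        print(x, y, m)
```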
### Shapefiles with elevation (Z) values

Elevation shape types are shapes that include an elevation value at each vertex, for instance elevation from a GPS device. Shapes with elevation (Z) values are added with the following methods: "pointz", "multipointz", "linez", and "polyz". The Z-values are specified by adding a third Z value to each XY coordinate. Z-values do not support the concept of missing data, but if you omit the third Z-coordinate it will default to 0. Note that Z-type shapes also support measurement (M) values added as a fourth M-coordinate. This too is optional.

	>>> w = shapefile.Writer('shapefiles/test/linez')
	>>> w.field('name', 'C')
	
	>>> w.linez([
	...			[[1,5,18],[5,5,20],[5,1,22],[3,3],[1,1]], # line with some omitted Z-values
	...			[[3,2],[2,6]], # line without any Z-values
	...			[[3,2,15,0],[2,6,13,3],[1,9,14,2]] # line with both Z- and M-values
	...			])
	
	>>> w.record('linez1')
	
	>>> w.close()
	
To examine a Z-type shapefile you can do:

	>>> r = shapefile.Reader('shapefiles/test/linez')
	
	>>> r.zbox # the lower and upper bound of Z-values in the shapefile
	[0.0, 22.0]
	
	>>> r.shape(0).z # flat list of Z-values
	[18.0, 20.0, 22.0, 0.0, 0.0, 0.0, 0.0, 15.0, 13.0, 14.0]

### 3D MultiPatch Shapefiles

Multipatch shapes are useful for storing composite 3-Dimensional objects. A MultiPatch shape represents a 3D object made up of one or more surface parts. Each surface in "parts" is defined by a list of XYZM values (Z and M values optional), and its corresponding type is given in the "partTypes" argument. The part type decides how the coordinate sequence is to be interpreted, and can be one of the following module constants: TRIANGLE_STRIP, TRIANGLE_FAN, OUTER_RING, INNER_RING, FIRST_RING, or RING. For instance, a TRIANGLE_STRIP may be used to represent the walls of a building, combined with a TRIANGLE_FAN to represent its roof:

	>>> from shapefile import TRIANGLE_STRIP, TRIANGLE_FAN
	
	>>> w = shapefile.Writer('shapefiles/test/multipatch')
	>>> w.field('name', 'C')
	
	>>> w.multipatch([
	...				 [[0,0,0],[0,0,3],[5,0,0],[5,0,3],[5,5,0],[5,5,3],[0,5,0],[0,5,3],[0,0,0],[0,0,3]], # TRIANGLE_STRIP for house walls
	...				 [[2.5,2.5,5],[0,0,3],[5,0,3],[5,5,3],[0,5,3],[0,0,3]], # TRIANGLE_FAN for pointed house roof
	...				 ],
	...				 partTypes=[TRIANGLE_STRIP, TRIANGLE_FAN]) # one type for each part
	
	>>> w.record('house1')
	
	>>> w.close()
	
For an introduction to the various multipatch part types and examples of how to create 3D MultiPatch objects see [this ESRI White Paper](http://downloads.esri.com/support/whitepapers/ao_/J9749_MultiPatch_Geometry_Type.pdf).
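When a multipatch file is read back, the vertices of all parts come back as one flat list. Assuming the returned shape exposes the usual `parts` start offsets together with a matching `partTypes` list (an assumption, not something demonstrated above), a sketch like the following splits the vertices back into their surface parts:

```
import shapefile

with shapefile.Reader('shapefiles/test/multipatch') as r:
    shape = r.shape(0)
    # part start offsets, with the total vertex count appended as an end sentinel
    offsets = list(shape.parts) + [len(shape.points)]
    for part_type, start, end in zip(shape.partTypes, offsets[:-1], offsets[1:]):
        print(part_type, shape.points[start:end])
```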
# Testing

The testing framework is pytest, and the tests are located in test_shapefile.py. This includes an extensive set of unit tests of the various pyshp features, and tests against various input data. Some of the tests that require internet connectivity will be skipped in offline testing environments. In the same folder as README.md and shapefile.py, from the command line run
```
$ python -m pytest
```

Additionally, all the code and examples located in this file, README.md, are tested and verified with the built-in doctest framework. A special routine for invoking the doctests is run when shapefile.py is called directly. In the same folder as README.md and shapefile.py, from the command line run
```
$ python shapefile.py
```

Linux/Mac and similar platforms will need to run `$ dos2unix README.md` in order to correct line endings in README.md.

# Contributors

```
Atle Frenvik Sveen
Bas Couwenberg
Ben Beasley
Casey Meisenzahl
Charles Arnold
David A. Riggs
davidh-ssec
Evan Heidtmann
ezcitron
fiveham
geospatialpython
Hannes
Ignacio Martinez Vazquez
Jason Moujaes
Jonty Wareing
Karim Bahgat
karanrn
Kyle Kelley
Louis Tiao
Marcin Cuprjak
mcuprjak
Micah Cochran
Michael Davis
Michal Čihař
Mike Toews
Miroslav Šedivý
Nilo
pakoun
Paulo Ernesto
Raynor Vliegendhart
Razzi Abuissa
RosBer97
Ross Rogers
Ryan Brideau
Tim Gates
Tobias Megies
Tommi Penttinen
Uli Köhler
Vsevolod Novikov
Zac Miller
```
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Pure Python read/write support for ESRI Shapefile format",
    "version": "2.3.1",
    "project_urls": {
        "Download": "https://pypi.org/project/IronPyShp",
        "Homepage": "https://github.com/JamesParrott/IronPyShp"
    },
    "split_keywords": [
        "gis",
        " geospatial",
        " geographic",
        " shapefile",
        " shapefiles"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6d2bdb482138fe5911deec2647259711d2cc4490790a9377531996a819e4ebef",
                "md5": "140fe7bd94eedb0e64599380a27fce5f",
                "sha256": "d5271bf3b0495bed5e66e35fe9febf1a706f6e781dd7d9fce65ca81f4541bc4f"
            },
            "downloads": -1,
            "filename": "IronPyshp-2.3.1-py2.py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "140fe7bd94eedb0e64599380a27fce5f",
            "packagetype": "bdist_wheel",
            "python_version": "py2.py3",
            "requires_python": ">=2.7",
            "size": 48069,
            "upload_time": "2024-05-20T22:13:34",
            "upload_time_iso_8601": "2024-05-20T22:13:34.557128Z",
            "url": "https://files.pythonhosted.org/packages/6d/2b/db482138fe5911deec2647259711d2cc4490790a9377531996a819e4ebef/IronPyshp-2.3.1-py2.py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f6846670109454e64f4c8e4a97e47bfd1c714b5a1d71c05e08eab792f6dded07",
                "md5": "d95d18fd8c3516c3cdba8c3cbc7d1d37",
                "sha256": "d459b53fb1c13c27f1f700a1a977a1222d051b4393cf5409187f3c165ee61825"
            },
            "downloads": -1,
            "filename": "IronPyshp-2.3.1.tar.gz",
            "has_sig": false,
            "md5_digest": "d95d18fd8c3516c3cdba8c3cbc7d1d37",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=2.7",
            "size": 1737087,
            "upload_time": "2024-05-20T22:13:39",
            "upload_time_iso_8601": "2024-05-20T22:13:39.424622Z",
            "url": "https://files.pythonhosted.org/packages/f6/84/6670109454e64f4c8e4a97e47bfd1c714b5a1d71c05e08eab792f6dded07/IronPyshp-2.3.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-20 22:13:39",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "JamesParrott",
    "github_project": "IronPyShp",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "ironpyshp"
}
        