Monday, October 31, 2016

Navigation Maps

Introduction
     The lab for this week consisted of creating two navigation maps for the Priory, a 120-acre parcel of wooded land a couple of minutes south of the University of Wisconsin - Eau Claire. The maps created for this lab will be used to navigate those 120 acres of woods. The first map uses a projected coordinate system and the other uses a geographic coordinate system.

Methods
     The professor of the class obtained the data used to create these maps from the United States Geological Survey (USGS). The students then used this data to create two different maps. The first map created was the one using the projected coordinate system. The projection used is Universal Transverse Mercator (UTM) Zone 15. UTM is a worldwide projection system that breaks the world up into 60 zones; by doing this, each zone experiences less distortion. The system uses meters as its linear unit. To create the map, an aerial image of the Priory was added to help identify real-world features while conducting the navigation exercise. The second layer contains contour lines, which indicate the change in elevation of the land. These contours were originally given in 2-foot intervals but were switched to 2-meter intervals for two reasons. First, the linear unit of the projection is meters, so matching the contour interval to the projection units keeps the elevation information consistent with the rest of the map. Second, the 2 ft. intervals were too detailed and made the map difficult to read; the 2-meter intervals spread out the contour lines, making the map easier to read. Another feature added to this map was an outline of the navigation area being used, which will hopefully help keep the students within the study area. The final and most difficult part of this exercise was creating a grid that overlaid the entire map. The challenge of the grid was finding the right amount of detail without cluttering the map. For the projected map, a 50-meter interval was used to separate the grid cells.
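     As a rough illustration of how the grid spacing works out, the short Python sketch below generates evenly spaced 50 m grid ticks across a map extent. This is only a minimal sketch with made-up UTM Zone 15 extent values; the actual grid was built with ArcMap's grid tools rather than by script.

# Minimal sketch (not the ArcMap grid workflow) of evenly spaced grid ticks.
# The extent values below are hypothetical, not the Priory's real coordinates.
def grid_ticks(minimum, maximum, interval):
    """Return tick positions from minimum to maximum at a fixed interval."""
    ticks = []
    value = minimum
    while value <= maximum:
        ticks.append(value)
        value += interval
    return ticks

# Hypothetical extent in meters (easting and northing), 50 m grid interval.
eastings = grid_ticks(617_000, 617_600, 50)
northings = grid_ticks(4_957_000, 4_957_600, 50)
print(len(eastings), len(northings))  # number of vertical and horizontal grid lines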
     The second map used the GCS North American 1983 (NAD 83) geographic coordinate system (GCS). This coordinate system is designed to represent the North American and Pacific plates. All of the same steps used in the first map were applied to this map; however, the units of this GCS are decimal degrees, so the grid had to be converted to decimal degrees, with an interval of 0.001 decimal degrees separating the cells. Both maps contain a north arrow, a scale bar, a watermark (simply the name of the creator of the map), the source of the data, and the coordinate system used.
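     For a sense of what a 0.001-degree grid cell covers on the ground, the sketch below converts that interval to approximate meters. The latitude of roughly 44.8 degrees north and the meters-per-degree constant are assumptions used only for illustration.

import math

# Rough ground distance covered by one grid cell of 0.001 decimal degrees
# near the Priory (about 44.8 degrees north latitude is an assumption here).
latitude = math.radians(44.8)
meters_per_degree_lat = 111_320          # approximate; varies slightly with latitude
meters_per_degree_lon = 111_320 * math.cos(latitude)

print(round(0.001 * meters_per_degree_lat, 1))  # ~111 m of northing per cell
print(round(0.001 * meters_per_degree_lon, 1))  # ~79 m of easting per cell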

Discussion
      The projected map is posted below as Figure 1. It will be the easier and more useful of the two maps: because of the projected coordinate system, the features are more accurate and experience less distortion. Both maps use white contour lines because of the nature of the aerial imagery; white lines are simply easier to see against it than black ones.
(Figure 1: An image of the map of the Priory using the UTM projection.)

     The second map, which uses the geographic coordinate system, is portrayed in Figure 2. When comparing the two maps, it is easy to see that the second is more stretched out and narrower than the projected map. This distortion will make navigating slightly more difficult.

(Figure 2: A map of the Priory using the geographic coordinate system.)



Tuesday, October 25, 2016

Survey Using Distance and Azimuth

Introduction
     The point of this lab is to use survey equipment to find the exact locations of trees in the field. The distance/azimuth surveying method is an excellent way to record the locations of objects without using GPS units, and it is also a good backup plan for when equipment fails in the field. The study area for this project was Putnam Park, a park that runs through the University of Wisconsin - Eau Claire lower campus.

Methods
     Equipment
     - Laser Distance Finder
     - Distance Finder
     - Compass
     - GPS Unit
     - Notebook and Pencil

     The compass is a device that can find north and measure the azimuth from north. The laser distance finder is the most high-tech device that was used; it can measure the distance from the user to an object, the azimuth of that object, and a variety of other things. The distance finder is a two-part device consisting of a transmitter and a receiver: the transmitter sends out a signal that is picked up by the receiver, which measures the distance the signal traveled. The notebook and pencil were used to take notes in the field.
     The distance/azimuth survey method is very convenient when there is a lack of advanced technology. It is a simple matter of setting up a survey area, then using a simple GPS unit or a smartphone, if there is service, to get the location of the origin point. Once the origin point is set, find north; north will be the starting point, or 0 degrees, for measuring the azimuth. The next step is to locate the object that needs to be mapped and use the compass or laser distance finder to get its azimuth, the number of degrees away from north. Then find the distance of the object from the origin point using the laser distance finder or the distance finder. With the latitude and longitude of the origin point, the azimuth, and the distance of the object from the origin point, it is possible to create a map of the tree locations without GPS. Ten tree locations were taken from three different origin points. The northernmost and middle origin points used the laser distance finder to find the locations of the trees, while the last location used the compass and distance finder. The attributes that were collected include the origin latitude and longitude, the distance of each tree from the origin point in meters, the azimuth of each tree in degrees, the diameter at breast height (DBH) in centimeters, and the tree type. A portion of the data is displayed in Figure 1 below.
(Figure 1: A table containing all of the attribute data needed to map out the locations of trees in Putnam Park.)
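     The geometry behind the method can also be written out directly. The sketch below is a minimal illustration, assuming a simple flat-earth offset and placeholder coordinates rather than the actual Putnam Park measurements; it only shows how an origin point, an azimuth, and a distance combine to give an object's position.

import math

def tree_position(origin_lat, origin_lon, azimuth_deg, distance_m):
    """Approximate a tree's lat/lon from an origin point, an azimuth measured
    clockwise from north, and a distance in meters. Uses a simple flat-earth
    offset, which is adequate for distances of a few tens of meters."""
    azimuth = math.radians(azimuth_deg)
    north_offset = distance_m * math.cos(azimuth)   # meters toward north
    east_offset = distance_m * math.sin(azimuth)    # meters toward east
    lat = origin_lat + north_offset / 111_320
    lon = origin_lon + east_offset / (111_320 * math.cos(math.radians(origin_lat)))
    return lat, lon

# Placeholder values, not the actual survey measurements.
print(tree_position(44.795, -91.500, azimuth_deg=120.0, distance_m=25.0))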

      The data was collected by three different groups and compiled into a Google Docs spreadsheet. This data was then downloaded and imported into ArcMap. The Bearing Distance to Line tool was used to create a feature class of lines running from each origin point to the location of each tree. The other tool that was used was the Feature Vertices to Points tool, which created a point at the end of each line representing the location of a tree. Figure 2 below is an image of the data flow model that was used to create the features used in the map.

(Figure 2: A data flow model that shows the progression of the data from the original Google Docs table through the Bearing Distance to Line tool and the Feature Vertices to Points tool.)
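     For reference, a scripted version of that data flow might look roughly like the sketch below. The paths and field names are assumptions, and in the lab the tools were run from the ArcToolbox dialogs rather than from Python, so this is only a sketch of the same two geoprocessing steps.

import arcpy

# Assumed paths and field names; these are not the actual lab file locations.
table = r"C:\data\tree_survey.csv"            # downloaded Google Docs table
lines = r"C:\data\survey.gdb\tree_lines"
points = r"C:\data\survey.gdb\tree_points"

# Build a line from each origin point out along the recorded azimuth/distance.
arcpy.BearingDistanceToLine_management(
    table, lines,
    x_field="origin_lon", y_field="origin_lat",
    distance_field="distance_m", distance_units="METERS",
    bearing_field="azimuth", bearing_units="DEGREES",
    spatial_reference=arcpy.SpatialReference(4326))  # WGS 84

# Place a point at the end of each line, i.e., at each tree.
arcpy.FeatureVerticesToPoints_management(lines, points, "END")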

Results
     There was one major issue that occurred in this project: the second origin point's latitude was incorrect, so its features appeared in the parking lot about 20 meters north of their actual location. This problem was remedied by trial and error; the latitude was lowered slightly to try to place the features in the proper location. Figure 3 is a map that shows the incorrect positioning of the second origin point.
(Figure 3: A map of the incorrect positioning of the second origin points group of trees.)

This map shows a group of points and lines north of the park, on the edge of the parking lot. The actual location was in the middle of the other two groups of lines and points. There are a few reasons the group of trees for the second origin point may not have been in the correct location. The most likely is the ridge that forms to the south of the study area interfering with the GPS device that was used, which is another reason this method is so important: the only technical device that was used failed one time out of three. Another possibility is that the latitude was written down incorrectly, but this is less likely because numerous different people all wrote down the same latitudes and longitudes. Figure 4 below shows the same area with a more accurate latitude for the second origin point.
(Figure 4: An image of the study area with a corrected origin point for the second origin point.)
Unfortunately, even with the better location of the second group, it is still not accurate. The longest line running north should be to the right of the path going through the woods, not to the left. So while this method can give a general idea of where objects are located relative to one another, it is subject to the accuracy of the origin point. Other factors that may lead to inaccurate results include human error; an improper reading of one of the devices will lead to wrong results.

Conclusion
   Overall this method is excellent if all that is needed is a representation of where things are in relation to other things. If absolute accuracy in measuring the location of objects is needed, however, this method is subject to many types of errors and inconsistencies that will throw off the results. Even so, it can be very useful in a bind.


















Tuesday, October 18, 2016

Sandbox Survey: Visualizing and Refining your Terrain Survey

Introduction
     In the previous lab a landscape was created in a sandbox so that a group of students could survey the sand terrain. The edges of the box were considered sea level, meaning areas above sea level were measured as positive values and measurements below sea level were negative. A grid was created so that each measurement could be located within the sandbox. A more detailed account of how this survey was conducted can be found in the previous blog post labelled Creating a Digital Elevation Model.
     The term data normalization refers to the cleaning of data. Making sure all of the Excel cells are in the proper format, making sure that there are no spaces in the field titles, and making sure that reserved symbols are not used are but a few of the challenges faced when normalizing data. Data normalization is important for this lab because all of the data was collected in the field, and the way the data was recorded could have changed slightly during the surveying process. This data needs to be cleaned and put into the same format throughout.
     The data points will be mapped out using ArcMap and ArcScene. The data is in an Excel table, which will be mapped in ArcMap using the Add XY Data function. These points will then be converted to a raster, and various types of interpolation will be performed to show the elevation model in different ways.

Methods  
     To turn the X,Y,Z data points into an elevation model, the first step is to create a geodatabase where all of the data will be stored. Add the Excel table with the surveyed data points to the geodatabase, then add the points using the Add XY Data function. The data will then be converted to a point feature class. With the new point feature class, the points are converted to continuous data by means of five different interpolation methods: IDW, Natural Neighbor, Kriging, Spline, and TIN. IDW, or Inverse Distance Weighted, interpolation starts from the sample points; the farther a location is from a sample point, the less weight that sample carries in the estimated elevation. The next method of interpolation is Natural Neighbor, which estimates each location by taking a weighted value of the sample points surrounding it. The third method that was used is Kriging, a geostatistical method that uses the z-values of the sample points and the statistical relationships among them to determine the elevation surface. The fourth interpolation method is called Spline, which fits a mathematical function that minimizes overall surface curvature, making the surface look smoother. The final method is the TIN, or Triangulated Irregular Network. TINs are created by triangulating the points (technically called vertices), which are then connected by edges, creating a network of triangles. All of these interpolation functions have a 3D version in the 3D Analyst extension, and the 3D versions were used to create the continuous data, or rasters.
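     To make the IDW idea concrete, here is a minimal sketch of inverse distance weighting in plain Python. It is not the 3D Analyst implementation, and the sample values are made up; it only shows how nearer samples dominate the estimate.

def idw(sample_points, x, y, power=2):
    """Estimate elevation at (x, y) from (sx, sy, sz) sample points using
    inverse distance weighting: closer samples get larger weights."""
    numerator = 0.0
    denominator = 0.0
    for sx, sy, sz in sample_points:
        dist_sq = (x - sx) ** 2 + (y - sy) ** 2
        if dist_sq == 0:
            return sz                  # exactly on a sample point
        weight = 1.0 / dist_sq ** (power / 2)
        numerator += weight * sz
        denominator += weight
    return numerator / denominator

# Toy samples in (x, y, elevation_cm) form, not the real sandbox data.
samples = [(1, 1, -4.0), (1, 2, -2.5), (2, 1, 3.0), (2, 2, 6.5)]
print(round(idw(samples, 1.5, 1.5), 2))  # estimate at the cell center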
     After the points were interpolated into rasters they were exported into ArcScene. The rasters were saved as PNGs at 150 dpi. For each interpolation the orientation needed to be set to that interpolation's orientation, which allowed the data to be displayed properly in ArcScene. As another step to make the data look smoother, the display setting (under the Display tab) was changed to Bilinear Interpolation. This helps smooth out the pixels in the PNGs that were added to ArcScene; it does not change the effects of the original interpolation, it just smooths the pixels in the PNG. The color scheme that was used was Red to Blue Dark. This color scheme really helps to differentiate the high elevation points from the low elevation points. As an aesthetic side note, the color scheme makes the lower elevations look like water.

Results 
     Figure 1 shows the IDW interpolation. This method turned out the worst. The problem was not with the interpolation itself but with the way the data was collected. When the survey was conducted, most of the points were collected in the center of each grid cell; however, when the data was recorded, only part of each XY location was written down. For example, if a point was collected at Y 3.1 and X 3.2, it was not recorded that way; it was recorded as Y 3.1 and X 3. The mistake was consistent across the grid, so all of the points collected between Y3 and Y4 lined up in a perfectly straight line. Looking at Figure 1, it is easy to see that a grid pattern has formed. This was caused by the distance weighting that IDW uses: the farther from a sample point, the less influence that sample has. Because the points ended up on the grid lines instead of in the middle of the cells, where they were supposed to be, this grid pattern appears.
(Figure 1: an IDW Interpolation of the Landscape)

     The second method, Natural Neighbor, is shown in Figure 2. Natural Neighbor was one of the more accurate representations of the actual landscape. Its method of weighting all of the nearby sample points gave it a fairly accurate result. This is probably the best method for counteracting the mistake that was made in the data collection: it took all of the "middle" samples that were mistakenly placed on the edges of the grid cells and weighted them, giving the interpolation a realistic look.
(Figure 2: A Natural Neighbor Interpolation of the Landscape)

     The third method is Kriging, which used the z-values of the sample points to create a 3D rendering of the landscape. Figure 3 shows an image of this interpolation. This image was the first one created, and a different color scheme was used: Red to Green Dark. This method ended up giving the surface a dried-dirt look that was not very representative of the landscape, which is due mostly to the data collection issue described above; otherwise Kriging should have provided an excellent representation. This method really struggled to form the three hills in the top part of the image. Those hills are better represented in the Natural Neighbor and IDW interpolations.
(Figure 3: The Kriging Interpolation of the Landscape)

     The fourth interpolation was the Spline; Figure 4 shows an image of the Spline interpolation. The Spline did the best job of capturing the terrain in the top part of the image but misinterpreted parts of it. The ridge running from the middle of the map to the top of the map was supposed to be the west side of a river; none of the other interpolations were able to detect this feature in the landscape. However, this interpolation struggled with the ridges at the bottom of the image, which were clearer in the Natural Neighbor interpolation.
(Figure 4: The Spline Interpolation of the Landscape)

     The last method of creating a 3D model was the TIN, shown in Figure 5. The TIN looks very similar to the Natural Neighbor interpolation. It picked up the ridges at the bottom of the image as well as the plateau in the top left section, and it did a decent job of detecting the valley between the ridges at the bottom of the image. Overall, this method was as good as the Natural Neighbor.
(Figure 5: TIN Interpolation of the Landscape)

Conclusion
     This survey relates to other field-based methods in that it gave excellent insight into how to collect data in the field, where an employer will expect the survey to be done properly. It made the group think critically about the best way to gather the data and where sea level should be set. This method differs from actual field work in that, in the field, sea level is already determined.
     I do not believe that this level of detail will always be possible in the field, because areas may be inaccessible due to natural features or because the land is owned by another person or corporation. Also, if the area being surveyed is very large, there may not be enough time to visit every desired location. And even if all areas do get surveyed, it may come at the cost of areas with large elevation changes not being surveyed in the detail they require.
     Interpolation can be used with any type of continuous data; precipitation and temperature are two classic examples. An interesting one that the ArcGIS Help discusses is using IDW to map out the likelihood of customers returning to a retail store. Because of the distance weighting used in the interpolation, the surface shows that the farther a customer is from the store, the less likely they are to shop there.



















Tuesday, October 11, 2016

Creating a Digital Elevation Model

Introduction
     Sampling is a shortcut method of measuring the whole of a population or, in this case, an area. A small portion of the whole is collected in order to make inquiries and informed decisions about the entire population.
     There are three main techniques of sampling: random, systematic, and stratified. Random sampling grabs data from the entire population at random, meaning that there is no bias in the sample collection. Systematic sampling uses a systematic approach to gather data evenly or regularly throughout a data set. Finally, there is stratified sampling, which breaks the population down into separate known groups; data points are then collected within each group.
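     As a quick illustration of the difference between the three techniques, the sketch below draws ten samples from one hundred candidate locations in each of the three ways. The split into a "plain" and a "ridge" stratum is made up purely for illustration.

import random

locations = list(range(100))            # candidate sample locations

# Random: any location may be picked, with no structure.
random_sample = random.sample(locations, 10)

# Systematic: every tenth location, evenly spaced through the set.
systematic_sample = locations[::10]

# Stratified: split the area into known groups and sample within each;
# here the hypothetical "ridge" stratum gets more points than the "plain".
plain, ridge = locations[:80], locations[80:]
stratified_sample = random.sample(plain, 4) + random.sample(ridge, 6)

print(random_sample, systematic_sample, stratified_sample, sep="\n")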
     The objective of this lab is to create a landscape containing various features, collect elevation samples from the landscape, and then turn those samples into a digital elevation model using ESRI software.

Methods
     For this project a stratified sampling method was chosen because it gave more flexibility over where larger numbers of samples could be taken. For instance, the landscape created contained a plain and a mountain ridge. The plain stretched for quite a ways and its elevation did not change, so few samples were needed in this area. The mountain ridge, however, had drastic changes in elevation, so many points in a relatively small area were needed to capture the shape of the ridge. This same flexibility is not possible with the random and systematic approaches. With a random sample it is possible that only one point would be taken on the ridge while dozens were taken on the plain, which would have greatly distorted the shape of the mountain ridge. A similar problem occurs with the systematic approach, where areas of the landscape may be skipped and the landforms will not look as they did in real life.
     The landscape was created in a sandbox in which certain features were required: a ridge, a hill, a depression, a valley, and a plain. The materials used to create the landscape and measure the elevations were sand, tacks, string, a yardstick, a ruler, and a notebook and pen. The sand was used to create the features in the landscape, the tacks were used to create the grid needed to perform the stratified sampling, the string was used to maintain "sea level" across the middle of the landscape, the yardstick and ruler were used to measure the elevations, and the notebook and pen were used to record the elevation samples. Image 1 shows the landscape that was used to create the digital elevation model.
(Image 1: The landscape)

     The sampling scheme was set up in the following way. First, the sandbox was measured so that evenly spaced grid cells could be created. The sandbox used was a square of about 111 cm by 111 cm, so a tack was pushed into the sandbox frame every 11 cm to create a 10 by 10 grid over the sandbox. Unfortunately, the sampling process took so long that a proper collection was not possible for every grid cell. The decision was made to drop the 10th row of the Y axis; this area was part of the plain, meaning there was little elevation change, so the loss of this row was deemed acceptable.
     Sea level was chosen to be at the top edge of the sandbox, purely for convenience. Having sea level below the rim of the sandbox would have been a nightmare to try to measure. Instead, the group decided that if sea level needed to be lowered in the future, a simple, equal subtraction from all of the elevations down to the new sea level would be easier. Image 2 shows the string method in use, and Image 3 shows the yardstick being used to measure the elevation.

(Image 2: Using a string to measure at sea level in middle of the sandbox.)


(Image 3: Yard stick in play.)

     To record the data, an origin point was determined, and every cell to the right of and up from that point was numbered up by one, until the Y axis reached 9 and the X axis reached 10. Certain areas in each cell needed more data points collected than others. These points were labelled in the following way: (3.1, 1), (3.2, 1), (3.3, 1). This allowed large elevation changes running east to west to be captured. The process could be switched so that the Y value carried the .1, .2, .3, which would indicate elevation change running north to south. To actually measure these points, a string was stretched tightly across the sandbox, and a yardstick was used to measure all elevations below sea level, which were recorded as negative numbers. For elevations above sea level, the ruler was used to provide a flat surface extending from the high point so that the yardstick could measure it; these were recorded as positive numbers. This method was chosen simply because it seemed the easiest.
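     Treating those labels as fractional cell indices, a measurement's position in centimeters can be recovered from the 11 cm tack spacing. The sketch below is only an assumption about how the group's notation maps onto distances; it is not part of the actual workflow.

CELL_SIZE_CM = 11  # tack spacing from the grid setup described above

def label_to_cm(x_label, y_label):
    """Convert a grid label such as (3.2, 1) into approximate centimeter
    coordinates from the origin corner. Treating the label as a fractional
    cell index is an assumption about the notation."""
    return x_label * CELL_SIZE_CM, y_label * CELL_SIZE_CM

print(label_to_cm(3.2, 1))   # (35.2, 11) cm from the origin corner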

Results
   The sample ended up being 145 elevation points spread out across the landscape. The lowest point in the landscape was -15 cm, the highest was 16 cm, the standard deviation was 6.13 cm, and the average elevation was -3.05 cm. The range from the highest point to the lowest was 31 cm. However, sea level may still be changed, in which case all of the elevations would be lowered by 3 cm. This sampling method seemed to work the best because it was the best way to focus more data collection in one area than in another, which was a necessity for this assignment. The sampling technique remained the same throughout the sampling process. Two problems were encountered during sampling: the sea level issue, which was discussed in detail earlier, and the switching of the .1, .2, .3 between the Y and X axes depending on the direction of the elevation change, which was touched on briefly earlier. The latter was solved by going back and double-checking the areas that had the more in-depth collections. Figure 1 shows a portion of the table containing the elevation points that were collected during this assignment.
(Figure 1 Elevation Table)
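     The summary statistics above are straightforward to reproduce once the table is in digital form. The sketch below uses a short placeholder list in place of the full 145-point table, so the printed numbers will not match the reported values exactly.

import statistics

# Placeholder list; the real table holds all 145 surveyed elevations in cm.
elevations = [-15, -9, -3.5, -3.05, 0, 4, 10, 16]

print("min:", min(elevations))
print("max:", max(elevations))
print("range:", max(elevations) - min(elevations))
print("mean:", round(statistics.mean(elevations), 2))
print("std dev:", round(statistics.pstdev(elevations), 2))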

Conclusion
     How does your sampling relate to the definition of sampling and the sampling methods out there?
     The sampling method used was stratified sampling. Only a small subset of the landscape's elevations was measured, with the points allocated among known groups (areas of high and low relief), which fits the definition of stratified sampling.
    Why use sampling in spatial situations?
     Sampling took a small portion of the total elevation information and will, hopefully, allow for the creation of a digital elevation model. It allows very large areas to be studied in shorter amounts of time, for the same reason media companies use samples of larger populations to create political polls.
    How does this activity relate to sampling spatial data over larger areas?
    The methods described above could be used to collect data points over significantly larger areas.
     Using the numbers you gathered, did your survey perform an adequate job of sampling the area you were tasked to sample? How might you refine your survey to accommodate the sampling density desired.
     Yes, the sample size was large enough to create an accurate model of the actual terrain. The sampling method used was, hopefully, accurate and will lead to a well-detailed digital elevation model.
















Tuesday, October 4, 2016

Completed Hadleyville Cemetery

Introduction
The Introduction for this project can be found below under the post Data Collection Proposal for the Hadleyville Cemetery, which was posted Wednesday, September 7th, 2016.

Study Area
Where is the study area located?
-The location of the cemetery is in the western part of Eau Claire County, in the state of Wisconsin. A locator map of where in Wisconsin this is can be found below in figure 1.2. The landscape around it is hilly with scattered tree clusters and farmland. 

What time of the year was the data collected?
This data was collected in mid-September, which is the fall season for Wisconsin.

Methods
What combination of geospatial tools did the class use in order to conduct the survey? Why?
- The class at first started out using a survey-grade GPS unit to geocode the exact location of each headstone, create an attribute table to record all legible information on the tombstones, and take pictures of each headstone. Other tools that were used included the Inspire, RedEdge, and Phantom UAVs. These UAVs were able to take aerial photos of the cemetery to be used as a basemap. Lastly, each group of students had a pen and notebook that they used to create hard copies of all of the information on the headstones.


How did accuracy balance with the time involved to gather the data?
-Due to the large amount of time it took to gather data with the GPS unit, the class ended up not using it. In its place, the aerial photo taken by the Phantom had sufficient detail to make digitizing each individual tombstone possible in ArcMap. While this method saved hours of work, the accuracy of each tombstone location decreased. The UAV photo is accurate to within centimeters, but once it is in raster format in ArcMap, the accuracy depends on how far the software was zoomed in on each tombstone when the digitizing took place.

How was data recorded? List the different methods and state why a pure digital approach is not always best. What media types are being used for data collection? Formats?
The Inspire drone used a camera to take pictures of the cemetery that would later be compiled into one image using the software Pix4DMapper. The survey-grade GPS unit has the ability to compile a list of attributes for each point that is taken. The attributes included the name of the deceased, year of birth, year of death, and the legibility of the tombstone. This data would later be transferred over to ESRI's ArcMap, where it could be overlaid on the Inspire UAV imagery.

  This method of data retrieval can at times be the best and quickest way to gather missing data; however, there is one very large drawback to this method: signal interference. The class ran into this problem after an hour of data collection. In the southwest corner of the cemetery there are a few very large trees located directly over a handful of tombstones. This prevented the survey-grade GPS from getting a strong enough signal to record the locations of those particular tombstones. The foliage also created problems for the UAVs. When the data from the drone was being compiled into one image, the program was unable to create a proper image of that area. The foliage in the southwest corner created large shadows, because the collection occurred at 4:00 PM, which put the sun directly behind those trees. These shadows led to the software being unable to compile the data correctly, and the imagery from the Inspire was very distorted in that same corner.

   Another issue that occurred with the GPS unit was the time it took for each tombstone to be properly collected and have all of its attributes added. Each tombstone took three to four minutes to collect. Also, before each group could get started, the previous group had to walk them through all of the steps to properly operate the GPS unit. Due to these setbacks the GPS unit was not used; in its place, the aerial photo taken by the Phantom had sufficient detail to make digitizing each individual tombstone possible in ArcMap.


What equipment failures occurred if any? What was done to remedy the situation?
The equipment failures were covered in the previous paragraphs. The steps the class took to remedy the stated problems were to first create a hard copy diagram of the cemetery and give it a standard coding system that the whole class would use. The system was simple: the row furthest to the west was row one, the second row from the west was row two, and so forth. To find the exact location of a particular tombstone within each row, a letter system was added: the tombstone at the southernmost point of each row was labeled "A," and the further north along the row a tombstone was, the further down the alphabet it was labeled. So row 1, tombstone 5 was labeled "1E." The class used this method to label all of the tombstones in the cemetery along with the attributes of each tombstone. With these locations the class was able to digitize each of the tombstones and add the attribute data to each location.
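   A tiny sketch of that labeling scheme, just to make the convention explicit (the function name is made up for illustration):

import string

def tombstone_label(row, position):
    """Build a label like '1E' from a row number (counted from the west) and
    a position counted from the south end of the row (1 = 'A')."""
    return f"{row}{string.ascii_uppercase[position - 1]}"

print(tombstone_label(1, 5))   # '1E' -- row 1, fifth tombstone from the south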

How was the hard copy data transferred towards use in the GIS?
- Each group of students collected a portion of the information on the tombstones. This data was compiled into an online Google Doc, which contained a field with the ID values that were given to each tombstone. At the same time, during the digitization process, two fields were created: one a raster field, and the other a PointID field containing the same ID values as the attribute information in the Google Doc. Once the digitization was done, the table could be joined to the points using the PointID fields. The raster field allows an image to be stored for each tombstone; Figure 1.1 shows an image of the attribute table.
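   A scripted version of that join might look roughly like the sketch below. The paths and field names are assumptions, and the same join could just as easily be made through the Joins and Relates dialog in ArcMap.

import arcpy

# Assumed paths and field names; not the actual project geodatabase.
points = r"C:\data\cemetery.gdb\tombstone_points"       # digitized tombstones
table = r"C:\data\cemetery.gdb\tombstone_attributes"    # exported Google Doc table

# Attach the attribute fields to each digitized point by matching PointID.
arcpy.JoinField_management(points, "PointID", table, "PointID")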
 
   A couple of issues did occur during the normalization of the data. The biggest was whether or not each group counted a family memorial stone as a tombstone. Those that did had extra tombstones on their sheets, and those that did not were missing those memorial stones. It took a combined effort, with every member going through their notes, to create an accurate count of tombstones. In retrospect it would have been a good idea to get a total count of all tombstones before the project began.

   The last major issue the class ran into was getting pictures of each tombstone. A website was found that has the name and an image of every tombstone; the website is linked below in this post.

How did you combine the UAV data with your survey data?
- The UAV data served as the basemap and allowed for the digitization of each tombstone.

Results 

 (Figure 1.1: An image of the attribute table for the tombstones of the Hadleyville Cemetery. In the middle of the table is an image of a tombstone; an image of each tombstone is attached to each feature in this feature class.)

(Figure 1.2: The final map of the Hadleyville Cemetery. In the bottom left corner of the map is a locator map showing where in Wisconsin Eau Claire County is, as well as the location of Hadleyville Cemetery.)

How did data collection methods transfer into time spent creating the GIS? What was done to remedy the situation?
- The most time-consuming part of this project was collecting tombstones with the GPS unit, which was remedied by digitizing. The second most time-consuming issue was compiling each person's individual data into one collective dataset, which was accomplished using Google Docs, which allows many people to work on a single document at the same time.

   The errors in this project mostly lie in tombstones that are not perfectly legible and in digitizing that is not perfectly accurate. There is little that can be done to rectify the legibility of a tombstone, but the digitizing can be made as good as possible by taking a little extra time to zoom in on each individual tombstone.

What might have been done to facilitate data collection in terms of equipment and refining the method?
- The only way to make this better would have been to spend more time as a class figuring out how to collect the data as well as what data should be collected. For example, had the class broken the cemetery up into sections and assigned every group a section to collect, there would have been less chance of missing data or collecting the same data twice. Another thing that could have made this easier would have been clarifying earlier on that this was a class project, not just a group project.

Conclusion 
How did the methods transfer to the overall objectives of the project?
- The methods did a very good job overall of meeting the objectives of this project. Every tombstone was collected, and a database with all of the information for each tombstone is available and can be easily updated.

How did the mixed formats of data collection relate to the accuracy and expediency of the survey?
- There was very little mixing of formats. The UAV imagery was collected using UTM Zone 15, and all of the digitization was done in the same coordinate system. The limitations in accuracy come down to the skill of the technician digitizing the tombstones correctly and the accuracy of the UAV imagery, which again is accurate down to the centimeter.

Are the potential sources of error negligible and does the final product overall provide something better than the original situation?
 Yes, the errors are negligible and the county now has an excellent source of information on the graves of everyone buried in Hadleyville.

Describe the overall success of the survey, and speculate on how this GIS will be of use for continued record keeping.
This project was a success. The coordination could have been better early on, but the final product is something to be proud of. It will be very easy to maintain and update in the future: every time a new grave is added to this cemetery, simply digitize a new location for it and fill in the attributes for the new grave. The only drawback to this system is that if a grave is added in the middle of a row, the PointIDs for that entire row will have to shift to accommodate the new grave. This is a minor problem that only takes minutes to correct.