Sunday, May 17, 2015

UAS Flight

Introduction

This lab focused on data collection using an unmanned aerial system (UAS). Unfortunately, we were not able to individually set up a flight path and collect data in this exercise, which makes writing about it difficult. Originally I had hoped to capture NIR reflectance values for the Priory, but unfavorable weather conditions made this impossible. Therefore, this post focuses on the steps necessary to set up a UAS mission using the Mission Planner software.

Methods

When starting the project it is important to note environmental conditions such as temperature, cloud cover, and wind speed/direction. Today it was in the low 50s with 100 percent stratus cloud cover and winds at 4 miles per hour with much higher gusts. If environmental conditions are unfavorable, the project should be postponed. Before beginning the pre-flight check, the project must be set up in the Mission Planner software. This program allows extensive customization, such as camera footprint width, flying height, time between image captures, and flight speed. It is important to select the best settings for the job at hand. If you are collecting data for precision agriculture, you should select a camera lens with a very narrow view to eliminate off-nadir image capture. If you are collecting data for a grassy field, you could use a much wider camera lens and in turn fly fewer flight lines to obtain the necessary data (Figure 1).

Figure 1  Mission Planner user interface. As you can see, a rectangular box is drawn on the screen. The software will then create flight paths that will ensure the entire study area is covered. Flight lines are based on camera width and flying height.
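The relationship between lens angle, flying height, and the number of flight lines can be sketched with a little trigonometry. This is a back-of-the-envelope illustration, not Mission Planner's actual algorithm, and the numbers are made up:

```python
import math

def flight_lines(area_width_m, altitude_m, fov_deg, sidelap=0.6):
    """Estimate the number of parallel flight lines needed to cover a study
    area, given the camera's horizontal field of view and the flying height.
    The ground footprint grows with altitude and lens angle; sidelap is the
    fraction of overlap between adjacent lines."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    spacing = footprint * (1 - sidelap)  # distance between adjacent lines
    return math.ceil(area_width_m / spacing)

# A narrow lens at the same flying height needs more lines than a wide one:
narrow = flight_lines(400, altitude_m=100, fov_deg=40)   # 14 lines
wide = flight_lines(400, altitude_m=100, fov_deg=80)     # 6 lines
```

This is why the narrow precision-agriculture lens mentioned above costs extra flight lines: each pass covers a thinner strip of ground.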

Next, we draw the desired flight path in the Mission Planner software. By drawing a rectangular box over the study area, Mission Planner takes into account the camera lens width and the other variables selected above and creates an appropriate flight path. After the flight path has been determined we are able to start the pre-flight check.
The pre-flight check is the most important aspect of any UAS mission. If it is not taken seriously, a malfunction in any part of the system can lead to costly damage to the UAS and time spent repairing it. The pre-flight check involves checking various boxes in the Mission Planner program, including tests for electrical connections, frame connections, motor connections, secure props, a secure battery, and a secure antenna. After the checklist has been completed the UAS is powered up (Figure 2).
Figure 2  UAV that was used to collect data in the visible spectrum.

The UAS is manually lifted off the ground. Once it reaches a safe height, the operator can switch into autopilot mode. When switched into autopilot, the Mission Planner software takes over and flies the predetermined flight lines. If there is a malfunction anywhere in the UAS, the operator can switch back to manual control and land the UAV safely. After the flight has been completed, the UAV lands automatically. Upon landing, the post-flight check is completed. This involves noting the flight time, recording whether any problems occurred, making sure the data was properly collected, and safely shutting down the UAV.


Results

Due to heavy winds the UAS quickly traveled off the desired path, and the mission was aborted shortly after takeoff. Therefore, very little data was collected. Pictured below are a few aerials that were captured in the NIR spectrum. NIR is highly reflected by healthy vegetation due to the cell structure of the plant. Therefore, areas that appear white in an NIR image are dense, healthy vegetation, while areas that appear gray or black are less healthy. NIR images are commonly given a false color. In a false-color NIR image the NIR band is assigned the color red, the red portion of the spectrum is assigned green, and the green portion is assigned blue. This turns the image a pinkish color where healthy vegetation occurs. Figure 3 below is the original image collected from the UAS. Unfortunately, I was unable to create a false-color infrared image because the images were in .jpg format. Also, due to time and weather constraints, we were unable to process the data any further. Given the opportunity, we would have been able to combine all the images collected into one much larger image, similar to Landsat data (Figure 3).

Figure 3  NIR image collected by the UAV. This image shows areas of healthy vegetation as white while areas with little or no vegetation appear black. The area in the upper right of the image is the parking lot of the Priory and the area in the right of the image is a deciduous tree.
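The band reassignment described above (NIR to red, red to green, green to blue) is simple to sketch with NumPy. The toy 2x2 arrays here are made up for illustration, not values from our imagery:

```python
import numpy as np

def false_color_ir(nir, red, green):
    """Stack single-band arrays into a false-color composite:
    NIR -> red channel, red -> green channel, green -> blue channel.
    Healthy vegetation (high NIR reflectance) shows up red/pink."""
    return np.dstack([nir, red, green])

# Toy 2x2 bands scaled 0-255; high NIR at [0,0] marks healthy vegetation.
nir = np.array([[200, 50], [180, 30]], dtype=np.uint8)
red = np.array([[40, 120], [50, 110]], dtype=np.uint8)
green = np.array([[60, 100], [70, 90]], dtype=np.uint8)
composite = false_color_ir(nir, red, green)  # shape (2, 2, 3)
```

The vegetated pixel ends up with a strong red channel and weak green/blue, which is exactly the pink cast seen in standard false-color infrared products.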
If we were to process the images further, we would be able to create a string of images that overlap each other sufficiently. We could then mosaic the images together, using a histogram matching technique to create a seamless image for analysis. A process like this is very common in precision agriculture.
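A minimal version of the histogram matching step could look like the sketch below. It maps each source pixel value to the reference value with the closest cumulative frequency; a production mosaicking workflow would do this per band with feathering at the seams:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source pixel values so their cumulative distribution matches
    the reference image's, a simple way to even out brightness across
    overlapping aerial frames before mosaicking."""
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source value, find the reference value at the same CDF level.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[np.searchsorted(s_vals, source)]

# A dark frame matched against a brighter neighbor is pulled up to its range:
dark = np.array([[10, 20], [30, 40]])
bright = np.array([[100, 150], [200, 250]])
matched = match_histogram(dark, bright)
```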

Conclusion

As previously stated, no data was collected except a few aerial images. Therefore, we were unable to do any NDVI calculations to determine where vegetation was the most healthy, or anything of the sort. Overall, based on some of the horror stories associated with UAV crashes, this mission was not a total failure: both UAVs returned to the lab in one piece.

Sunday, May 10, 2015

Navigation with a GPS


Introduction

    As a continuation of last week's exercise, this week's task was to create a GPS navigation course that will be used by the next field methods class. To accomplish this we plotted points on the navigation map created in exercise 3, navigated to them using techniques learned last week, and entered the points into a Trimble Juno GPS device. Our goal was to make a course that will be difficult for next year's class. To do this we navigated around the building on the Priory grounds a few times, which makes keeping an accurate pace count very difficult. However, if the students use a map with a decent aerial image they should have no problem locating where to enter the woods to find each of the points.

Study Area

The Priory is located three miles south of the UW- Eau Claire campus. It is 112 acres of multipurpose land that is used as a dormitory for UW- Eau Claire students and a day care for the children of university employees. It contains areas of dense underbrush, steep valleys, and a section with rows of evergreen trees. There are also a few small ponds located in the northeastern corner of the grounds, near Highway 94.
Figure 1  The Priory is located 3 miles south of the UW- Eau Claire campus.

Methods

Originally, we plotted five points throughout the navigation boundary that we were to navigate to using the skills learned during the previous exercise. After locating our first point, we decided to move the location slightly to the north to make it more difficult for the next class. After finding a suitable location we collected a point using the Trimble Juno GPS unit. After the point was collected, we navigated to the next point, and after finding a suitable location for the tree marker we collected another point. This process continued for all five points throughout the study area. As you can see below, the locations of the actual GPS points are slightly different from the original points we planned to collect (Figure 2). After returning to the Cartography lab, we imported the feature class of collected points back into the geodatabase.
Figure 2  The original locations of the GPS points versus the actual locations of the collected points. Some points were moved slightly to make a more difficult course for next year's class.


Figure 3  The first of the points marked for next year's class. The four other trees have similar markings.

Results

The following table contains the points and their x,y coordinates in meters in the UTM Zone 15N coordinate system (Figure 4). Each of the points was mapped in ArcMap and given an appropriate label (Figure 5). Due to time concerns we did not collect the points in the order that next year's students will have to follow; we simply located the closest point and set the GPS point.
Figure 4  Table showing the point name, id, and the xy coordinates based on the UTM Zone 15 projection.
 
Figure 5  The location of the points collected overlaid on an aerial.


Conclusion

The points were spread out over a very large area, which will make it difficult for next year's students to find the locations accurately. One thing that benefits the students is that there is very little elevation change in this area compared to other parts of the Priory. This should make the process go much faster, since they will not be required to climb up and down 50-degree slopes to reach their points.


Sunday, May 3, 2015

Field Navigation part 1

 

Introduction

     This week's exercise is aimed at navigating to five predetermined points around the Priory using a compass and the navigation map created in exercise three. This archaic technique is very useful when navigating in areas where GPS signal is weak or nonexistent, such as forests with dense crown cover.

Study Area

      Formerly St. Bede's Monastery, the Priory was purchased by the UW-Eau Claire Foundation in 2011 and is currently leased to the University of Wisconsin- Eau Claire for expanded educational purposes. It is located about three miles south of the City of Eau Claire, WI (Figure 1). It is currently used as a day care center for professors with young children and as a dormitory for UW- Eau Claire students. The Priory consists of three buildings totaling around 80,000 square feet of space on a 112-acre forested lot. The land is heavily wooded and contains a combination of flat terrain and very steep ridges with slopes up to 63 degrees (Figure 2).
Figure 1  Locational map of the Priory. The Priory is located three miles south of Eau Claire in Eau Claire County, WI.
 
Figure 2  Original navigation map that was used in this exercise. It contains 5 foot contour intervals, an elevation model that was set to 50% transparent, and an underlying aerial image.
In many areas magnetic declination would need to be considered. However, in the study area the magnetic declination is only about negative 1.4 degrees. Areas on the East Coast have magnetic declinations of up to 20 degrees, which could cause huge problems in the field if not taken into account.
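Correcting a compass reading for declination is a one-line adjustment. Using the common sign convention (east declination positive, west negative), a true bearing is the magnetic bearing plus the declination:

```python
def magnetic_to_true(magnetic_bearing_deg, declination_deg):
    """Convert a compass (magnetic) bearing to a true bearing.
    Declination is signed: east positive, west negative, so a westerly
    declination like the Priory's subtracts from the compass reading."""
    return (magnetic_bearing_deg + declination_deg) % 360

# With the Priory's roughly -1.4 degree (west) declination, a compass
# reading of 90 degrees corresponds to a true bearing of about 88.6:
true_bearing = magnetic_to_true(90.0, -1.4)
```

At -1.4 degrees the correction is smaller than normal compass-handling error over short legs, which is why it could be safely ignored in this exercise.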

Methods 

      The methods associated with this exercise were quite simple. To start, we plotted the points, in order, on the navigation map based on the UTM coordinates provided by the professor. We then connected the points, in the order they were to be visited, with straight lines (Figure 3). We measured the distance on the map and converted it to real-world distance using the scale bar, then converted that into the number of paces it would take to reach each point. Next we set the compass on the map, set the orienteering arrow to geographic north, aligned the magnetic needle within the orienteering arrow, and followed the direction-of-travel arrow. After arriving at the first point we repeated the process: set the compass on the straight line to the next point, set the orienteering arrow to geographic north, aligned the magnetic arrow inside the orienteering arrow, and followed the direction-of-travel arrow. An example of the process is shown in Figure 4.
 
Figure 3 Navigation map showing our plotted points and the straight line from one point to the next.
 
Figure 4  Cartoon showing the basic process of setting up the compass. Here, the edge of the compass is laid along the start/destination line, the compass dial is set to true north on the map, the true north arrow and the magnetic arrow are aligned, and the direction-of-travel arrow is followed while counting paces.
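The map-distance-to-pace-count conversion described in the methods can be sketched as follows; the map scale and pace length below are hypothetical, not the actual values from our navigation map:

```python
def paces_needed(map_distance_cm, map_scale, pace_length_m):
    """Convert a distance measured on the map to a real-world pace count.
    map_scale is the denominator of the representative fraction
    (e.g. 5000 for a 1:5000 map)."""
    ground_m = map_distance_cm * map_scale / 100  # cm on map -> m on ground
    return round(ground_m / pace_length_m)

# e.g. 6 cm measured on a 1:5000 map with a 1.5 m double-pace:
paces = paces_needed(6, 5000, 1.5)  # 300 m on the ground -> 200 paces
```

Any error in the assumed pace length scales linearly with leg distance, which is why the count drifts badly on long legs and rough terrain, as noted in the conclusion below.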
 

Results

Seen below are two of the photos that were taken at the predetermined locations, which were marked by bright pink tape wrapped around birch trees (Figures 5 and 6). These photos were taken to provide evidence that the points were actually located. We were able to locate all five points, in order, with relative ease. The most difficult aspect of this exercise was scaling the valleys present in the Priory.

Figure 5  One of the navigation points as marked by the bright pink tape.

Figure 6  Another of the navigation points.


Conclusion

As previously stated, this lab was fairly easy. We were able to locate all five points in under an hour and a half. Keeping the pace count correct was sometimes difficult because we had to navigate around dense brush and very steep slopes. Therefore, the pace count was used as more of an estimate of where we were than as an actual measurement. There were a few cases where our pace count was off by 20 or 30 paces. This is not a problem when searching for bright pink tags on birch trees, but it could have been a problem if navigating to something less noticeable. In addition, being in such a small geographic area made the looseness of the pace count less of a problem; if we were navigating in a national forest that spans hundreds of miles, our location would have ended up far from the actual points. Another important factor is that the predetermined points were actually located off the map boundary, which made it difficult to mark them accurately at the start. We also did not have information about the slope of the area or an aerial image that could have been used to locate the points more easily.

Sunday, April 26, 2015

Creating a Topographic Survey

 

Introduction

   In this lab we are tasked with creating a topographic survey of the UWEC campus mall using two different surveying techniques. The first method is plotting points using the Tesla RTK with the HIPER SR attached (Figure 1). For the second method we used the Tesla RTK and a Topcon Total Station (Figure 2) to collect elevation points throughout the campus mall. These methods will then be compared to determine which is more time efficient and produces more accurate data.

 
Figure 1  Topcon HIPER SR GPS unit used in conjunction with the Topcon Tesla RTK to collect elevation points throughout the UWEC campus mall.


Figure 2  Topcon Total Station used in many surveying applications. This unit uses distance, vertical angle, and azimuth to create topographic points.

 

Study Area

   The study area for this lab, as previously stated, is the UWEC campus mall (Figure 3). It is located between the new W.R. Davies Center and Schofield Hall. This area has relatively flat terrain that gradually slopes toward the west/southwest. The mall itself ranges from 241.043 meters above sea level to 237.923 meters above sea level over an 84-meter distance. There is also a creek that runs east/west through the campus mall. This area has the greatest elevation change, ranging from 235.884 meters to 238.627 meters above sea level over a 22-meter distance.


Figure 3  The campus mall on the University of Wisconsin- Eau Claire campus. 



Methods

  The first portion of this lab was designated for survey collection using the Topcon Tesla RTK and HIPER SR attachment. For this process, a project was set up on the Tesla RTK device. Then some basic parameters were set, including the coordinate system and the height of the HIPER SR. The Tesla RTK was then connected to the HIPER SR over a MiFi connection. After the connection was established we were ready to start collecting points. All that was required was to level the HIPER SR and hit save on the RTK to store the point in the job folder. After 100 points were collected we saved the job as a .txt file on the RTK, connected it to a computer, and imported the file into a geodatabase (Figure 1).

   The next step was to go into the field and collect elevation points using the Tesla RTK and Total Station. This process was significantly more time consuming than collection with the combination above. The first step was to set up a job similar to the one above. Then we needed to set up an occupancy point and backsight, which provide an azimuth reference for the Total Station to use while collecting points. The occupancy point and backsight are set up similarly to the process above: we leveled the Tesla RTK/HIPER SR and saved the point into the job file. From there, we set up the Total Station directly over the occupancy point, measured its height from the ground, and leveled it. We then connected the Total Station to the RTK, input the occupancy point and backsight into the RTK, and lased to the prism, which was set up on the backsight point. After the Total Station was set up with the occupancy point and backsight we collected 64 points around the campus mall. After the points were collected we saved the job as a .txt file, as above, and imported it into the elevation geodatabase (Figure 1).
Figure 4  Example of the .txt file that was imported from the Tesla RTK device. This file was then converted into a point feature class with x,y,z coordinates.
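Before building the feature class, the exported file has to be read into rows of coordinates. Assuming a simple comma-separated Name,X,Y,Z layout (the real Tesla RTK export may order its columns differently), the parsing step could be sketched like this:

```python
import csv

def read_survey_points(path):
    """Parse a survey .txt export into (name, x, y, z) tuples.
    The Name,X,Y,Z column order here is an assumption for illustration;
    check the actual export format before relying on it."""
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            name, x, y, z = row[0], float(row[1]), float(row[2]), float(row[3])
            points.append((name, x, y, z))
    return points
```

Each tuple then maps directly onto the x, y, and z fields that the Create Feature Class From XY Table tool expects.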

   We then used the Create Feature Class From XY Table tool (Figure 5) in ArcMap to create a point feature class with the x, y, and z coordinates for each of the .txt files and added them to the ArcMap display (Figure 6).

Figure 5  Create Feature Class From XY Table tool, used to convert each of the individual .txt files into survey points.



Figure 6  Locations of each of the surveyed points collected  in the lab. The green points are those that were collected using the Tesla RTK and HIPER SR, while the blue points were collected using the Tesla RTK and Total Station combination. Also shown are the occupancy point and backsight that were required by the total station for accurate placement of the x,y points.

   This next step was optional, but I felt it provided a more accurate representation of the topography of the UWEC campus. To do this I created a creek feature class in the elevation geodatabase and digitized the banks of Little Niagara Creek (Figure 7).

Figure 7  Little Niagara Creek was digitized and used as a hard breakline while creating the TIN and digital elevation model. Breaklines are commonly used in areas where steep elevation changes occur, such as ridgetops or shorelines.

   I then created TINs for each of the elevation datasets collected. In order to create the most accurate representation of the topography, I set the elevation points as the input for the TIN and set the digitized creek polygon as a hard breakline (Figure 8). The breakline is used to define interruptions in the surface, such as the banks of the creek. In this case, I erased the creek from the TIN because we were unable to collect points from within the creek, and including it would not give the best representation of the topography.

Figure 8  Creating a TIN is the first step in creating a 3D elevation model. In this example the elevation points surrounding Little Niagara Creek were input as x,y,z coordinates and the digitized polygon of Little Niagara Creek was input as a hardclip to remove that area from the TIN.
   After the TIN was created I used the TIN to Raster tool, as pictured below, to create a continuous elevation surface (Figure 9). The TINs are shown in Figures 11 and 12 for the area surrounding Little Niagara Creek and in Figures 16 and 17 for the campus mall area, while the continuous raster surfaces are shown in Figures 13/14 and 18/19 respectively.

Figure 9  After the TIN was created we use the TIN to Raster tool to create a continuous raster. 
   The same process was also run for the points collected with the Total Station. However, no breaklines were set because there were no significant interruptions in the elevation that needed to be taken into account.

Results

   The first five figures below are the results of the elevation survey using the Tesla RTK and HIPER SR combination. This method provided easier data collection and was overall more accurate than the Total Station.

Figure 10  Elevation points collected using the Tesla RTK and HIPER SR. These points, along with the creek breaklines, were used to create the TIN pictured below.



Figure 11  2D TIN created using the elevation points and creek breakline.


Figure 12  3D TIN displayed in ArcScene. The vertical exaggeration in this image is set to 3.9 to make the elevation change more noticeable.



Figure 13  Continuous elevation raster created by using the TIN to Raster tool.



Figure 14  3D representation of the continuous raster elevation model.

   The next five figures represent the data collection that was done using the Total Station. This method was much more time intensive and did not produce accurate output: all of the data points appear to be shifted about 40 meters to the east. The points in the northwest corner of the map should align with the mulched planter ledges located in the very northwest corner of the map.
Figure 15  Data points that were collected using the Total Station. This map also includes the location of the occupancy point and backsight that were used to reference the location of the total station.


Figure 16 2D TIN constructed using the elevation points from the Total Station collection. 


Figure 17 3D TIN that was created using the Create TIN tool in ArcMap.



Figure 18 2D continuous raster surface showing the elevation values. This raster was created using the TIN to Raster tool, as shown in Figure 9. 



Figure 19  3D representation of the TIN to Raster tool. 

Data Collection Comparison

   This next section compares the HIPER SR collection with data collection using the Total Station. The two main factors I am going to consider are accuracy and time consumption.

Accuracy

   The Tesla RTK and HIPER SR combination collected very accurate x,y,z locations; the majority of points were located within 0.008 meters. In addition, the points were placed in the right location on the ArcMap display. While there was no direct way of determining the accuracy of the points collected by the Total Station, the overall shape of the TIN and DEM suggests the elevation values are fairly accurate. However, there is no way of comparing the data since it is not located in the correct place. If it were, it would be possible to compare the elevation data collected in the field with elevations derived from lidar data, which is extremely accurate.

Time Consumption

   The Tesla RTK and HIPER SR combination was also much less time consuming. All that was required was to start a new project, set a few parameters (such as the coordinate system the data is collected in), and begin collecting. It was easy to move from one location to the next, and a huge plus of this method is that it can be done by one person. All in all, collecting 100 points took only about 90 minutes.
   The Tesla RTK and Total Station combination was very time consuming. The majority of this time was spent setting up the occupancy point and backsight required by the Total Station. It also took a significant amount of time to get the Total Station set up directly over the occupancy point and leveled. To get the occupancy point and backsight we had to use the HIPER SR to obtain an accurate GPS location.

Conclusion

   Overall, it seems unnecessary to use the Total Station to collect elevation data when you have a method that takes much less time and is just as accurate, if not more so. Therefore, if I had to set up a survey project I would not use the Total Station; instead, I would stick to the Tesla and HIPER SR combination, which worked very well.



Sunday, April 5, 2015

Conducting a Distance Azimuth Survey

Introduction

    The purpose of this lab is to become familiar with creating a survey using direction and distance. Sometimes technology fails in the field: a GPS may run out of batteries or be unable to acquire a signal under heavy forest cover, for example. In these cases it is important to be able to collect data points using semi-archaic methods. Therefore, we used distance and azimuth measurements to plot the location of vehicles, and some associated attributes, in the Phillips parking lot (Figure 1). The Phillips lot is the most commonly used parking lot on campus and is constantly full. It contains about 90 faculty parking spots, 40 guaranteed faculty spots, 144 student parking spots, and 19 spots reserved for visiting future Blugolds.

Figure 1 Aerial image of Phillips Lot located on the University of Wisconsin- Eau Claire campus.

Methods

    To begin this exercise we had to decide on some features to collect. Since we had to collect 100 points, we decided to survey the one thing that was readily available: vehicles. We set up the TruPulse 360 laser on the corner of the parking lot and started collecting the horizontal distance in meters and azimuth to individual vehicles parked in the Phillips lot. We also collected two attributes for each vehicle: the type and the color. After collecting data for 68 different vehicles from the first location we were unable to collect any more, so we moved to a second location outside Phillips Hall near the "art" by the entrance to the courtyard (Figure 2). After the data was collected for all 100 vehicles we came into the lab and entered it into an Excel table (Table 1).

Figure 2  TruPulse 360 setup used to complete the vehicle survey.


Table 1  Sample of attributes collected using the TruPulse 360 distance/azimuth finder.

    The next step in the process was to use the Bearing Distance to Line tool to create lines to each of the vehicles based on distance and direction. At first, I had some difficulty getting the tool to work correctly. After some frustration I decided to take another approach: I altered the Excel table to include only the x,y coordinates of the TruPulse laser points, the horizontal distance, and the azimuth (Table 2). After this new table was created I imported it into the distance azimuth geodatabase and added it to the ArcMap display. We then opened the Bearing Distance to Line tool and input the variables (Figure 3).

Table 2  Altered table that was input into the Bearing Distance to Line tool.


Figure 3  Bearing Distance to Line tool interface.


    After the tool was finished running it left us with lines based on direction and distance to each vehicle we surveyed (Figure 4).

Figure 4  Lines created showing the location of cars based on distance and azimuth.
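The geometry the Bearing Distance to Line tool applies can be reproduced in a few lines. This sketch converts one distance/azimuth pair into an endpoint, measuring azimuth in degrees clockwise from north as the tool does (the coordinates below are made up):

```python
import math

def endpoint(x0, y0, distance_m, azimuth_deg):
    """Compute the x,y of a surveyed feature from the instrument location,
    horizontal distance, and azimuth (degrees clockwise from north).
    With azimuth measured from north, east displacement uses sin and
    north displacement uses cos."""
    az = math.radians(azimuth_deg)
    return x0 + distance_m * math.sin(az), y0 + distance_m * math.cos(az)

# A car lased 50 m due east of the laser (azimuth 90 degrees):
x, y = endpoint(0.0, 0.0, 50.0, 90.0)
```

Running this over every row of the altered table would yield the same line endpoints the tool produced.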


    After the line feature class was created, I joined the original table containing all the vehicle attributes to the altered table that was used in the tool (Table 3). This allows us to create a map showing where the different vehicle types and colors are located.
Table 3  Original table joined to the altered table in order to map the attributes of each survey point collected.


    After the distance azimuth lines were created and the tables were joined together, I ran the Feature Vertices to Points tool (Figure 5) to create a point feature class of all the vehicle locations (Figure 5).
Figure 5 Feature Vertices to Points tool interface used to create point locations of each of the vehicles surveyed.

Figure 5 Point locations of each vehicle surveyed.


    Combining the lines and points into one map, we can show the color and type of vehicle present at each location. As you can see from the graphs below, 25% of the vehicles surveyed are silver (Graph 1) and 54% are cars (Graph 2).
Figure 6  Map showing the color and type of each vehicle surveyed.

    Then, I ran the Summarize tool in ArcMap to produce a table with a count field for each of the vehicle types and colors. I imported those summary tables into Excel and created graphs showing the number of vehicles by color (Graph 1) and the number of vehicles by type (Graph 2).
Graph 1 Number of vehicles surveyed based on color.


Graph 2  Number of vehicles surveyed based on vehicle type.
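The same counts can be tallied outside ArcMap with Python's collections.Counter; the sample records below are hypothetical, not our actual survey data:

```python
from collections import Counter

# Hypothetical sample of surveyed vehicles as (type, color) pairs:
vehicles = [("car", "silver"), ("truck", "black"), ("car", "red"),
            ("car", "silver"), ("van", "white")]

# Count occurrences of each vehicle type and each color.
by_type = Counter(v_type for v_type, _ in vehicles)
by_color = Counter(color for _, color in vehicles)
```

Counter.most_common() would then give the same ranked breakdown as the Summarize tables that fed the graphs above.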

 Discussion

    As previously stated, surveying with distance and azimuth is a fairly good method when extremely accurate results aren't necessary. After the data was collected I noticed two major errors that affected the location of the survey points. The first was the inexact x,y coordinates used for the laser position. Not having exact x,y coordinates shifts the recorded location of the TruPulse laser away from its actual position. Figure 6 shows the location of the x,y coordinates that were input into the Bearing Distance to Line tool compared to the actual position of the laser.
   
Figure 6 Actual location of TruPulse 360 compared to the GPS location.
 Another source of error associated with azimuth is the magnetic declination at a particular location. Magnetic declination is defined as the angular difference between magnetic north (the direction a compass needle points) and geographic north (the direction toward the geographic North Pole). Some areas of the world have up to 30 degrees of magnetic declination. However, in Eau Claire, WI the magnetic declination is only 1.05 degrees to the west. Therefore, in this distance azimuth survey all of the plotted points must be rotated 1.05 degrees to the west. This error can be seen in Figure 7, where the points are plotted east of where they were actually located.

Figure 7  Errors associated with the TruPulse 360 that are caused by magnetic declination



    While the location of the x,y coordinates or the magnetic declination may be causing errors in the data, there may also be other factors causing the points to not show up in the correct locations. I recently noticed, while collecting GPS data for another class, that this particular aerial image may not be orthorectified properly. During this GPS collection the points collected on the east side of Phillips Hall were constantly being placed about two feet west of the sidewalk even though we were collecting the points from the middle of the sidewalk. If the aerial is not orthorectified accurately, the survey data collected in this lab may actually be correct.

Conclusion

    In conclusion, there are many highly scientific ways of plotting survey points, such as survey grade GPS units. However, there are certain situations that high tech equipment will not work or may not be needed. In this case it is very important to understand how to use simpler methods to achieve a fairly accurate survey. A basic survey, as done above, can also be conducted using only a compass for direction and paces to estimate the distance to each individual point. Obviously, this method would not be nearly as accurate as using a distance finder. Therefore, one needs to know the overall project accuracy in order to determine which method is necessary.


Sunday, March 15, 2015

ArcPad Data Collection II


Introduction


 This week's exercise is a continuation of a previous exercise in which we set up a microclimate geodatabase in ArcMap 10.2.2. This week we will take that geodatabase and deploy it onto a handheld Trimble Juno GPS unit (Figure 1). We will then go throughout the University of Wisconsin-Eau Claire campus collecting different climate variables using the Kestrel 3000 Wind Meter (Figure 2).
 


Figure 1  Trimble Juno GPS unit that was used to collect each of the weather variables for every point.







Figure 2  Kestrel 3000 Wind Meter that was the source of the microclimate data. This Kestrel
provides temperature, wind speed, wind chill, humidity, and dew point.


     As previously stated, we will be using a geodatabase that was created in a previous exercise. Creating a geodatabase before going into the field is standard protocol that helps ensure accurate data collection. This geodatabase has eight different variables that will be collected at each point: temperature at the surface, temperature 2 meters above the surface, dew point, wind chill, wind speed, wind direction, relative humidity, and ground cover. To ensure each variable is recorded correctly on the GPS, we set up domains for each. All temperature-related variables (temperature, wind chill, dew point) were given a range between -30 and 60 degrees Fahrenheit. The wind speed attribute ranges from 0 to 50 mph, and wind direction ranges from 0 to 360 degrees. Ground cover is set up with coded values for each ground cover type we expect to see in our study area. These domain ranges help prevent the user from entering data incorrectly, for example by adding an extra 0 to a temperature value. Incorrect data entry can be a huge problem in the field, especially when environmental conditions aren't favorable.
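The domain ranges described above amount to a simple validity check on each record. A hypothetical sketch in Python (the field names are made up for illustration; the real checks live in the geodatabase domains):

```python
# Hypothetical domain ranges mirroring the geodatabase setup described above.
DOMAINS = {
    "temp_surface_f": (-30, 60),
    "temp_2m_f": (-30, 60),
    "dew_point_f": (-30, 60),
    "wind_chill_f": (-30, 60),
    "wind_speed_mph": (0, 50),
    "wind_dir_deg": (0, 360),
}

def validate(record):
    """Return the fields whose values fall outside their domain range,
    e.g. a temperature with an accidental extra zero typed in."""
    errors = []
    for field, (lo, hi) in DOMAINS.items():
        value = record.get(field)
        if value is not None and not lo <= value <= hi:
            errors.append(field)
    return errors
```

A record with a surface temperature of 520 (an extra zero) would fail the check, while one of 52 would pass.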

Study Area


     In this exercise we are going to collect microclimate data across the UW-Eau Claire campus (Figure 3). UWEC sits at the heart of the city of Eau Claire, WI on the banks of the Chippewa River and contains 28 major buildings on 333 acres. For this exercise we will mainly stay within the main courtyard of campus. March temperatures usually range between 0 and 50 degrees Fahrenheit, and relative humidity values commonly range from 20 to 60 percent during the spring months, significantly lower than during the summer. UWEC always seems to be windy, especially on the walking bridge over the Chippewa River. Depending on the day, wind speeds usually range from 0 mph to 20 mph, but 50 mph gusts are frequent during large storms. The majority of the ground cover at UWEC is grass, blacktop parking lots, or concrete sidewalks; however, since it is March, a large portion of campus still has snow covering the ground.









Figure 3  The UW-Eau Claire campus where we collected the microclimate data. This map shows the
different zones that each group was assigned and the location of each point that was collected.
 

Methods

     Since this exercise was a direct continuation of last week's exercise, we used very similar methods to obtain our climate data. We started by creating a uniform geodatabase with a set of domains that each group then deployed to its Trimble Juno GPS unit. We then split the UWEC campus into five sections, and each group was tasked with collecting as many data points as possible. For each point we collected the following variables:
  • Temperature at surface
  • Temperature two meters above surface
  • Wind speed
  • Wind chill
  • Relative humidity
  • Dew point
  • Ground Cover
  • Any associated notes that would be helpful.
     When each group finished collecting points, we all met back in the lab to combine our data points into one feature class. To do this, we opened the file manager and navigated to the folder on the GPS unit that contained the microclimate data, then copied that folder and pasted it into the desired workspace folder. We then used the ArcPad Data Manager, as we did last week, and ran the Get Data From ArcPad feature, which converts the data collected in ArcPad back into feature classes that can be used in ArcMap. After each group imported its data into an ArcMap-friendly format, we used the Merge tool to combine all the individual feature classes into one feature class that could be used to create interpolations for each variable.
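The merge step is conceptually simple: append every group's records into one collection while remembering where each record came from. A hypothetical plain-Python sketch (standing in for the ArcMap Merge tool, with made-up field names):

```python
def merge_feature_classes(*groups):
    """Conceptual equivalent of the ArcMap Merge tool: combine each group's
    point records into one list, tagging each record with its source group."""
    merged = []
    for name, records in groups:
        for rec in records:
            merged.append({**rec, "source_group": name})
    return merged
```

Tagging each point with its source group makes it easy to trace an odd value back to the group (and GPS unit) that collected it.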

     After the feature class with every point was created, we ran the Inverse Distance Weighted (IDW) spatial interpolation to create a raster surface for each variable covering the entire study area. I decided to use IDW because it is an exact interpolator, meaning the surface honors the measured value at each sample point. IDW also treats each sample point as a local extreme, which can give the surface a "bull's-eye" appearance around individual points.
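Conceptually, the IDW estimate at any location is a weighted average of the sample values, with weights falling off as an inverse power of distance. A minimal sketch in plain Python (not the ArcGIS implementation, and ignoring options like search radius) might look like:

```python
import math

def idw(sample_points, x, y, power=2):
    """Inverse Distance Weighted estimate at (x, y) from
    sample_points = [(px, py, value), ...].  IDW is an exact interpolator:
    at a sample location it returns that sample's own value."""
    num = den = 0.0
    for px, py, value in sample_points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return value  # exactness at the sample point itself
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den
```

Because every sample fully dominates its immediate neighborhood, a single bad reading (such as an erroneous 0-degree entry) pulls the whole surface around it, which is exactly the "bull's-eye" effect seen in the results below.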

Results

     After the individual feature classes were merged together we were then able to run a geostatistical interpolation to create surface maps for each of the variables. Figure 4, below, shows the table of the main feature class after being merged.







Figure 4  Screenshot of the table after each individual feature class was merged together to form the main feature class that will be used to create the interpolated surface. As you can see there were some values for temperature at two meters that were left null, which had an impact on the interpolated surface.

     If there were any ground cover types that were not included in the original geodatabase, such as the canoe pictured below (Figure 5), we were to select the ground cover type of "other" and input a note that provided more detail as to what the exact ground cover was.






Figure 5  An example of a ground cover type that was not originally set in the domains of the geodatabase. For this specific point we set the ground cover to "other" and entered more information in the notes field.
 
 
     After running the IDW interpolation we were left with a series of surface rasters. Figure 6 is the raster surface that was created based on the temperature values collected two meters above the land surface. As you can see, there are two really cold points located along the western edge of the study area and randomly located hot spots spread throughout the study area.
 







Figure 6  IDW interpolated temperature two meters above the surface. Also shown are the ground covers associated with each point.
 
     We also created a surface for the surface temperature values (Figure 7). This surface has much more variation than the map above, with only a small portion of the area assigned intermediate temperatures. 
 







Figure 7  Surface temperature surface created using the IDW interpolation. The temperature surface is compared to the ground cover located at each point.
 
     Figure 8 shows the surface created based on relative humidity values collected around campus. There is a large area with high relative humidity located along Putnam Drive. Compared to the surface temperature map above, the areas with high relative humidity correlate to areas that have colder surface temperatures. 
 







Figure 8  Relative humidity raster created using IDW interpolation.
 

     Figure 9 is a raster surface created from the dew point values collected by the Kestrel 3000. Dew point is the temperature to which the air must cool for its water vapor to begin condensing. The dew point map below seems to have a strong correlation with the relative humidity map above: areas with high relative humidity also have higher dew point temperatures. This makes sense, because moister air needs to cool much less before condensation begins.








Figure 9  Dew point raster created using the IDW interpolation. Dew point is the temperature to which the air must cool for condensation to occur. The dew point raster and the relative humidity raster appear to be directly correlated.
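The relationship between dew point, temperature, and relative humidity can be approximated with the Magnus formula. The sketch below (in Celsius, using the commonly cited constants a = 17.62 and b = 243.12) is illustrative only; it is not how the Kestrel computes its reading:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) from air temperature and relative
    humidity using the Magnus formula."""
    a, b = 17.62, 243.12
    alpha = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * alpha / (a - alpha)
```

At 100 percent relative humidity the dew point equals the air temperature, and it falls as the air dries out, which is why the dew point and relative humidity surfaces track each other so closely.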



 
     Figure 10 shows the wind speed values collected around the UWEC campus. As expected, the area along Putnam Drive had very low wind speeds, since it sits at the bottom of the campus hill, while areas on top of the hill had higher overall wind speeds than lower campus. The walking bridge spanning the Chippewa River also showed higher wind speeds, which was expected as well. 
 




Figure 10  Wind speed surface created using IDW interpolation. This map is an accurate representation of the different areas of campus: the high wind speeds in the southwestern portion of the map correspond to upper campus, and the high wind speeds on the walking bridge across the Chippewa River will be familiar to anyone who has ever crossed it.
 


     Figure 11 is a surface raster of the wind chill values collected from the Kestrel 3000. Since wind chill is a function of temperature and wind speed, I combined the wind chill raster with the associated wind speed points. One would expect areas with higher wind speeds to have lower wind chill values, but this was not always the case: certain points actually had wind chill values higher than their surface temperature values.  
 





Figure 11 Wind chill surface combined with wind speed values at each location.
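For reference, the National Weather Service wind chill formula shows why those readings were suspect: it is only defined for temperatures at or below 50 degrees F and winds above 3 mph, and within that range it always yields a value below the air temperature. A sketch (the fallback behavior outside the valid range is my own choice):

```python
def wind_chill_f(temp_f, wind_mph):
    """NWS wind chill (deg F).  The formula is only defined for
    temperatures <= 50 F and wind speeds > 3 mph; outside that range
    this sketch simply returns the air temperature."""
    if temp_f > 50 or wind_mph <= 3:
        return temp_f
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v
```

So any Kestrel reading that reported a wind chill warmer than the air temperature, or a wind chill at all for a 52-degree point, points to instrument or data entry error rather than a real condition.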

 

Discussion 

     There were many sources of error in this microclimate modeling project. First, since the data was collected over a two-hour period late in the evening, the temperature-related variables changed significantly between the time we started collecting data points and the time we finished. For a more accurate microclimate model, the data would need to be collected at every point simultaneously so that changing environmental conditions could not skew the results. That is possible over larger areas served by multiple weather stations, but not on a small-scale microclimate project like this one.
 
     The instruments used for collecting the data also seemed rather inaccurate, which produced incorrect climate readings. There were several occurrences in which the wind chill was actually warmer than the surface temperature at that specific location, which is not possible. I also noticed that some areas had surface temperatures warmer than the temperature at two meters, and others where the surface was colder. Although one would expect surface temperatures to be colder where there is snow on the ground, I saw no real pattern associated with ground cover.
 
     Another source of error that directly impacted the interpolations was incorrect data entry into the Trimble Juno GPS unit. For instance, as you can see in Figure 12, several temperature-at-two-meters values were never collected. This gave the central part of the study area a more generalized interpolation, because there were fewer data points to influence the model. There were also two points that were given a value of 0 degrees, which had a huge impact on the interpolation: since these two points were large outliers, they muted the effect of the true extremes, so the lowest correctly collected temperature did not appear as cold as it actually was.
 
Figure 12  Errors that had impacts on the raster surface created. Several temperature at two meter values were left null; these points did not have a significant impact on the overall surface raster. However, two points had temperature values of zero, and these had huge impacts on the overall surface raster.
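The cleanup applied here can be expressed as a simple filter over the point records before re-running the interpolation (the field and function names are hypothetical; in practice the points were removed interactively in ArcMap):

```python
def clean_points(points, field, suspect_values=(0,)):
    """Drop records with null values or known-erroneous entries (e.g. the
    two temperature points mistakenly entered as 0 degrees) before
    re-running the IDW interpolation."""
    return [p for p in points
            if p.get(field) is not None and p[field] not in suspect_values]
```

Only records with a real, plausible value for the chosen field survive the filter, so the re-run interpolation is no longer dragged toward the bad entries.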

 
      Since these two errors greatly reduced the accuracy of the interpolation, I removed the erroneous data points and ran the interpolation a second time. Figure 13 appears to provide a much more accurate representation of the temperatures that were actually collected. As you can see, there is a string of cold points in the southern portion of the study area. These points coincide with Putnam Drive, at the bottom of the UWEC campus hill. Since this area sits on the north face of a steep slope, it does not receive much sunlight in the winter months and was still covered with several inches of snow, which could have lowered the temperatures there.
 
Figure 13  After the erroneous values were removed we were left with a much more uniform looking raster surface that better models the actual temperature around UWEC.

 
 
     Similarly to Figure 12 above, there was a data entry error in the Wind Chill field as well (Figure 14). One data point was given a wind chill of zero degrees even though that point had a 52-degree surface temperature and only 2 mph winds. Since this was an obvious data entry error, I went into ArcMap, removed the point from the dataset, and ran the IDW interpolation a second time (Figure 15). 


Figure 14  Erroneous wind chill value that was set to zero degrees. This point had a huge impact on the overall raster surface.

 
 
     As you can see, there is a significant difference between the two wind chill interpolations. Figure 14, above, has a cold spot that makes everywhere else appear much warmer than it actually is. After removing the erroneous data point, the surface shows more of the true variation in wind chill and better represents the values experienced around campus (Figure 15). 
 
Figure 15  After the erroneous wind chill value was removed we were left with a much more accurate surface model.

 

Conclusion

     In conclusion, this exercise was very similar to last week's. The main differences were that we covered a larger area and were required to merge each group's data into one main feature class, which is a very simple process. Although we set up domains before collecting the data to prevent erroneous entries, we still made several errors while collecting in the field. These errors can have huge impacts on the resulting surface rasters, so it is very important to be as careful as possible when entering data into the Trimble Juno.