Our research focuses on predicting the potential spread of the Pin-tailed Whydah (Vidua macroura) in the Continental US, the Antilles, and Hawaii. Our predictive models suggest that while the whydah has been reported in Puerto Rico and southern California, there are many more locations in the southern US, the Antilles, Hawaii and California that appear suitable for this invasive brood parasitic bird.
During the fall semester, I attended the Student Conference on Conservation Science - New York (SCCS-NY). The three-day conference was held at the ever-inspiring American Museum of Natural History. The conference highlighted work by undergraduates, grad students, and post-docs, and provided a very welcoming atmosphere for networking and sharing research. Recently, the AMNH posted a synopsis of the conference. Videos of all talks should be available soon.
At the conference, I gave a 5-minute speed talk on the potential distribution of a recently introduced parasitic bird, the Pin-tailed Whydah. I received valuable feedback from a panel of 5 researchers that will help improve future conference talks. Also, it was exciting to talk through ideas for future research on this invasive species with colleagues during a 20-minute Q&A session that followed my speed talk. At the end of the conference, I was fortunate enough to receive an award for best speed talk!
The SCCS-NY group has just announced that the 2017 conference will run from October 11-13. I encourage anyone thinking about presenting their work as a poster or talk to apply. In addition to the many mentors in attendance and thoughtfully prepared workshops, the conference provides a welcoming environment to get to know future colleagues. As many attendees are at various stages of starting careers in Conservation Science, it was great to meet so many researchers I will be sure to see at SCCS-NY 2017.
As some of you may know, when I'm not working on my doctoral research in quantitative ecology, I'm likely out riding my bike. I have enjoyed two-wheeled transportation since I was young--occasionally competing in races and once biking across the USA with Bike & Build--though most often these days, I'm commuting through the streets of NYC. I also follow professional cycling, and with the 103rd edition of the Tour de France underway, I thought it appropriate to post a Science article that highlights the work of Jim Papadopoulos of Northeastern University. His recent work tackles a foundational, yet contentious, question: what are the physical forces that keep a bike upright? Jim and colleagues have published equations that elucidate the physics behind bike riding. They followed up this theoretical work with experiments demonstrating the many variables in a bicycle's design that contribute to its stability. They're currently working on novel bike designs that take advantage of these modifications for an even smoother ride.
I want to detail the process of converting a line shapefile to a polygon using both R and QGIS. I found a line shapefile of Everglades National Park (ENP) and wanted to end up with a raster in which all cells within the park boundary are given a value of 1, like this:
First, I imported the line shapefile of ENP into QGIS and converted it to a polygon using VECTOR >> GEOMETRY TOOLS >> LINES TO POLYGON. The resulting polygon was a bit messy: certain portions of the park were inverted. To the rescue--the node editing tool (pictured below). I selected the problematic nodes that caused the automated process to invert certain regions, then dragged and added nodes where necessary until the final shapefile looked exactly like the originally imported line shapefile.
library(rgdal)
library(raster)

## Import the new shapefile
everglades_boundary <- readOGR(dsn = ".", layer = "everglades_newest")

## Create a raster with the extent of the shapefile, and set the desired resolution/projection
everglades_boundary_raster <- raster(ext = extent(everglades_boundary),
                                     crs = projection(suitability_map_as_raster_20_sec),
                                     res = 0.005555556)

## Set all values of the raster to 1
values(everglades_boundary_raster) <- 1

## If you plot the raster it will look like a square (that's fine for now!)
plot(everglades_boundary_raster)

## Here, use the very helpful mask() function to retain only raster cells within the park boundaries
everglades_boundary_outline <- raster::mask(everglades_boundary_raster, everglades_boundary)
plot(everglades_boundary_outline)
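If you'd rather skip the QGIS step, the lines-to-polygon conversion can also be sketched in R with sp. This is a minimal example under two assumptions: the layer name `everglades_lines` is hypothetical, and the shapefile contains a single closed ring (a multi-part boundary would need one Polygon per ring, and messy geometry like the inverted regions above would still need manual cleanup):

```r
library(rgdal)
library(sp)

# Read the original line shapefile (layer name is hypothetical)
everglades_lines <- readOGR(dsn = ".", layer = "everglades_lines")

# coordinates() on a SpatialLines object returns a nested list of
# coordinate matrices; pull the first (and here, only) ring
ring <- coordinates(everglades_lines)[[1]][[1]]

# Build a polygon from the closed ring, keeping the original projection
everglades_poly <- SpatialPolygons(
  list(Polygons(list(Polygon(ring)), ID = "enp")),
  proj4string = CRS(proj4string(everglades_lines))
)
plot(everglades_poly)
```
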
While taking Dr. Rob Anderson's excellent Zoogeography course through CUNY's Graduate Center, I created distribution models for the Common Myna. The Myna has a widespread distribution, and my goal was to sample background points from accessible environments in order to create a model in MaxEnt. With guidance from members of Dr. Anderson's lab, I developed a way to sample background points from user-defined buffers around each locality. This method aims to avoid the overestimation of a species' dispersal ability that plagues some background-sampling approaches for widespread species, such as sampling from a minimum convex polygon. Here is a PDF of the steps outlined below:
1. To start, you should have a data.frame that includes all localities for your focal species. Then, convert that data.frame of occurrences to a spatial points data.frame:
library(sp)

# convert the data.frame of occurrences to a spatial points data.frame
occurrences_spdf <- SpatialPointsDataFrame(coords = occurrences.data.frame,
                                           data = occurrences.data.frame,
                                           proj4string = CRS("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0"))

# check to ensure points and projection are correct
plot(occurrences_spdf)
2. Open QGIS, a free, open-source GIS application. In my experience with large occurrence datasets, the R function that performs the same operation (rgeos::gBuffer) is slow; the same process in QGIS takes seconds.
3. Import a single environmental raster and your spatial points data.frame of occurrences into QGIS.
4. Open the buffer dialogue box (VECTOR >> GEOPROCESSING TOOLS >> BUFFER(S)). The settings pictured here will create 5-degree buffers around each point in your spatial points data.frame. Check "dissolve buffer results" to merge the buffers into one continuous layer.
5. With all layers combined, you should have your climate raster, your layer of occurrences, and your vector of buffers around each point.
6. Now, you can either crop your one environmental layer here, or do the rest of the cropping in R. I cropped one raster in QGIS to make sure everything went as planned, though this process in R would be quick. To crop in QGIS:
a. Open the clipper dialogue box.
b. Set your climate raster as the input, browse to where you would like the output file saved, select "mask" as the clipping mode, and pick your newly created buffer layer.
library(rgdal)

# Then, you can read the buffered layer into R and crop all environmental layers to it
buffered_region <- readGDAL("data/buffer_layer.tif")
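The steps above can also be completed entirely in R. Here is a sketch of two optional refinements, under stated assumptions (the `data/worldclim` directory and the use of 5-degree buffers are hypothetical, and rgeos warns when buffering unprojected longitude/latitude coordinates): for smaller datasets the buffering itself can be done with rgeos::gBuffer, and the clip can then be applied to a whole stack of environmental layers at once.

```r
library(rgeos)
library(raster)

# Buffer entirely in R: 5-degree buffers around each occurrence,
# dissolved into a single geometry (byid = FALSE merges overlapping buffers)
buffered_region <- gBuffer(occurrences_spdf, width = 5, byid = FALSE)

# Stack all environmental rasters (file paths are hypothetical) and clip
# them in one go: crop to the buffer's extent, then mask out cells that
# fall outside the buffers themselves
env_layers  <- stack(list.files("data/worldclim", pattern = "\\.tif$",
                                full.names = TRUE))
env_cropped <- crop(env_layers, extent(buffered_region))
env_masked  <- mask(env_cropped, buffered_region)
```

The resulting masked stack can then be used directly as the background-sampling region for MaxEnt.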
At the end of each year of graduate school, I've made it a practice to take stock of the new skills I've acquired and the areas I look forward to developing in the coming year.
Beginning graduate school with experience as a field ecologist, I did not appreciate how important quantitative analyses would be over the course of my PhD. Within the first few months of my program, I began tinkering with R: writing bits of code and making programs for simple calculations. Of course, just starting out, I felt a bit lost because I knew how critical statistical programming would be, but had no idea where to begin.
I made a commitment to learn R and began by taking the wonderful Intro to R course led by Dr. Jose Anadon of Queens College (highly recommended if you're in the Doctoral Consortium in NYC!) Then, I followed this with biostatistics taught through an R framework. At this point, my confidence level as an R programmer had grown, as had my enjoyment!
Of course, learning R is never over, and the more I learn, the more questions I have--an exciting loop of knowledge acquisition, question asking, and skill sharing. Recently, I have been experimenting with RStudio's app development framework, Shiny, in an effort to give my research projects a more public-facing component. All the programming is done in R, and the Shiny developers have made it so that R scripts are converted into HTML for easy creation of websites and apps. Shiny allows users who are unfamiliar with R to interact with our programs by plugging in their own values for different parameters and moving sliders to manipulate statistical models. If you have some experience with R and an interest in app development, I strongly recommend viewing their 3-hour tutorial.
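To give a flavor of how little code a working app requires, here is a minimal sketch of the slider-driven interaction described above (the app itself is illustrative, not one of my research tools): the `ui` object describes the page, the `server` function recomputes the plot whenever the slider moves, and `shinyApp()` ties them together.

```r
library(shiny)

# Minimal Shiny app: a slider controls the sample size of a histogram
ui <- fluidPage(
  titlePanel("Hello Shiny"),
  sliderInput("n", "Number of random draws:",
              min = 10, max = 1000, value = 100),
  plotOutput("hist")
)

server <- function(input, output) {
  # renderPlot() re-runs automatically whenever input$n changes
  output$hist <- renderPlot({
    hist(rnorm(input$n), main = paste(input$n, "draws from N(0, 1)"))
  })
}

shinyApp(ui = ui, server = server)
```

Running this script in RStudio launches the app in a browser window, with no HTML or JavaScript written by hand.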
Once you have explored Shiny, your next stop should be Shiny Dashboard. This is another R framework that builds from Shiny, and allows for additional user interface configuration.
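A bare-bones skeleton shows how the pieces fit together (the tab and box names here are placeholders): Shiny Dashboard replaces `fluidPage()` with a `dashboardPage()` split into a header, a collapsible sidebar menu, and a body of tabs and boxes, while the server logic stays ordinary Shiny.

```r
library(shiny)
library(shinydashboard)

# Skeleton dashboard: header, sidebar menu, and a body with one box
ui <- dashboardPage(
  dashboardHeader(title = "Demo dashboard"),
  dashboardSidebar(sidebarMenu(
    menuItem("Overview", tabName = "overview")
  )),
  dashboardBody(tabItems(
    tabItem(tabName = "overview",
            box(title = "A box", plotOutput("hist")))
  ))
)

server <- function(input, output) {
  output$hist <- renderPlot(hist(rnorm(100)))
}

shinyApp(ui, server)
```
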
Recently, I've used bits and pieces of spare time to develop my own application, which I'll post along with source code here in a few weeks!