I have broad interests that span the intersection of data science, ecology and conservation, but here are a couple of specific things I’ve been working on recently:
MAKING PREDICTIONS FROM SATELLITE IMAGERY
Earth observation satellites have captured a huge amount of imagery of the Earth in recent decades. I am interested in combining this imagery with data science approaches such as machine learning to understand and predict the future of biodiversity in the face of global change. I am particularly interested in forest ecosystems, due to their high biodiversity value (approximately 70% of terrestrial species live in forests) and the large amounts of carbon they sequester.
I am especially interested in the resilience, resistance and elasticity of ecosystems; in our ability to predict forest loss using early warning signals; and in the presence of tipping points in ecosystems.
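As a minimal sketch of the early-warning-signal idea: as an ecosystem approaches a tipping point it recovers from perturbations more slowly ("critical slowing down"), which shows up as rising variance and rising lag-1 autocorrelation in a monitored time series (e.g. a vegetation index from satellite imagery). The simulation below is hypothetical (an AR(1) process whose recovery rate is made to decay), not any particular dataset or published method:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def rolling_ews(series, window):
    """Rolling variance and lag-1 autocorrelation -- two classic
    early-warning indicators of critical slowing down."""
    var, ac1 = [], []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        var.append(np.var(w))
        ac1.append(lag1_autocorr(w))
    return np.array(var), np.array(ac1)

# Hypothetical system drifting towards a tipping point: the AR(1)
# coefficient phi rises towards 1, i.e. recovery from noise slows down.
rng = np.random.default_rng(0)
n = 500
phi = np.linspace(0.1, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(0.0, 0.1)

var, ac1 = rolling_ews(x, window=100)
# Both indicators trend upward as the system loses resilience.
```

In practice the hard part is distinguishing such trends from ordinary variability and observation noise, which is exactly where the prediction question gets interesting.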
SPECIES INTERACTION NETWORKS
All species interact with other species, and these interactions combine to form complex networks of connections. I use these species interaction networks (such as food webs) to understand how biodiversity is responding to global change and to predict what will happen to it in the future.
Species interaction networks are an incredibly powerful tool for understanding community-scale biodiversity responses, because they allow simultaneous consideration of both the individual species in a community and the structure of the community as a whole. Moreover, networks are highly amenable to analysis using data science methods.
In particular, I am interested in predicting the future of ecological communities under global change, across space and time. This might mean predicting the consequences of an invasive species arriving, or predicting how well a community will function as the distributions of its constituent species shift under climate change. I’m also interested in broadly understanding the impacts of stressors on network structure and functioning; understanding the processes that give rise to network structure; and developing new software tools to analyse ecological networks (see software).
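To make the network framing concrete, here is a toy sketch (my own illustration, with a hypothetical four-species food web, not data from any real community): a food web as a binary adjacency matrix, a basic structural metric (connectance, C = L / S²), and a simple secondary-extinction cascade of the kind used to ask "what happens if this species is lost?":

```python
import numpy as np

# Hypothetical food web: A[i, j] = 1 if species i eats species j.
# Species: 0 = plant, 1 = herbivore, 2 = omnivore, 3 = top predator.
A = np.array([
    [0, 0, 0, 0],  # plant eats nothing (basal resource)
    [1, 0, 0, 0],  # herbivore eats plant
    [1, 1, 0, 0],  # omnivore eats plant and herbivore
    [0, 1, 1, 0],  # predator eats herbivore and omnivore
])

S = A.shape[0]            # species richness
L = int(A.sum())          # number of feeding links
connectance = L / S**2    # C = L / S^2

def extinction_cascade(A, primary_removals):
    """Remove the given species, then iteratively drop any consumer
    whose prey have all been lost (secondary extinctions).
    Returns the set of surviving species indices."""
    S = A.shape[0]
    basal = {i for i in range(S) if A[i].sum() == 0}  # need no prey
    alive = set(range(S)) - set(primary_removals)
    changed = True
    while changed:
        changed = False
        for i in list(alive):
            if i not in basal and not any(A[i, j] for j in alive):
                alive.remove(i)
                changed = True
    return alive

# Losing the basal plant collapses this toy web entirely,
# while losing the top predator leaves the rest intact.
after_plant_loss = extinction_cascade(A, [0])
after_predator_loss = extinction_cascade(A, [3])
```

Real analyses of course use far larger webs, weighted interactions and dynamical models, but the data structure is the same.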
AUTOMATING BIODIVERSITY MONITORING
Camera traps and bioacoustics are powerful ways to monitor biodiversity, but they rely on manual labelling of images or audio files. Algorithms have been developed to automate this labelling, but they still require large labelled datasets to train on. I’m interested in data augmentation approaches to reduce the required size of these training datasets: if training data are supplemented with transformed or synthetic data, can we improve the performance of labelling algorithms for tasks such as species counting or species identification? And, if so, which methods work best and how much data is needed? Answering these questions is essential for putting camera-trap and bioacoustic monitoring in the hands of those who don’t have the resources to run large citizen science projects to label imagery or audio files.
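A minimal sketch of what "supplementing training data with transformed data" means in practice. This is a generic illustration with numpy (a hypothetical 64×64 RGB frame, not any specific camera-trap pipeline); real workflows would typically use a library such as torchvision or albumentations and many more transforms:

```python
import numpy as np

def augment(image, rng):
    """Return simple label-preserving variants of one image
    (pixel values assumed in [0, 1]): the original, a horizontal
    flip, a brightness shift, and added Gaussian pixel noise."""
    variants = [image]
    variants.append(image[:, ::-1])                  # horizontal flip
    variants.append(np.clip(image * 1.2, 0.0, 1.0))  # brightness jitter
    noisy = image + rng.normal(0.0, 0.02, image.shape)
    variants.append(np.clip(noisy, 0.0, 1.0))        # Gaussian noise
    return variants

rng = np.random.default_rng(42)
image = rng.random((64, 64, 3))   # stand-in for one camera-trap frame
augmented = augment(image, rng)   # 4 training examples from 1 image
```

Each variant keeps the same label (the same animal is present), which is what lets a small labelled dataset stretch further; the open question in the text is which transforms help most, and by how much.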