Right Whale Auto-Detection

Hybrid profit model, Hybrid IP model, Validation Phase, Eager to add new members
We use deep neural networks to identify and verify right whale calls so that researchers, grad students, and shipping vessels don't have to.

The Problem

Boat strikes account for a plurality of whale deaths[1]. The North Atlantic right whale is no exception, and due to its status as an endangered species[2], it is at risk of extinction. We believe that sophisticated tools and techniques to detect these animals can and should exist - not only for scientific data collection, but also for real-time alerts to nearby boat operators and land stations.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=
[2] http://www.iucnredlist.org/details/41712/0

Our Proposal

Our team won the global Fishackathon with a project called PoachStopper[1], which consisted of a hardware component and a software component: a deep neural network that could not only identify that a boat motor was present in given audio, but also fingerprint which motor it was hearing - down to the individual boat. Such networks can be re-trained on new labeled data, which we already have via the University Pierre et Marie Curie's 8th DCLDE Challenge[2], the Scripps Institution of Oceanography, the Right Whale Listening Network[3], and our budding relationship with the Cornell Lab.

We propose to continue the work we started with PoachStopper. If we are awarded the initial prize, we will re-train our neural network on whale sounds and provide a user interface for researchers to submit audio for auto-detection (both static files and real-time streams such as Orcasound[4]). If we are awarded the grand prize, we will create a prototype of the PoachStopper hardware and conduct a pilot deployment.

[1] https://www.youtube.com/watch?v=zdEjvMr0srE&feature=youtu.be
[2] http://sabiod.univ-tln.fr/DCLDE/
[3] http://www.listenforwhales.org/Page.aspx?pid=430
[4] http://orcasound.net/
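The network itself is not reproduced here, but the front end of any such audio classifier is a time-frequency representation of the signal. A minimal sketch of that first stage, in pure NumPy - `log_spectrogram` and all parameter values are illustrative, not our production pipeline:

```python
import numpy as np

def log_spectrogram(audio, frame_len=256, hop=128):
    """Compute a log-magnitude spectrogram: the standard input
    representation fed to audio classification networks."""
    frames = [audio[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(audio) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mags)  # shape: (num_frames, frame_len // 2 + 1)

# Example: a synthetic 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = log_spectrogram(tone)

# Frequency resolution is sr / frame_len = 31.25 Hz per bin, so the
# energy peak should land in the bin nearest 440 Hz.
peak_bin = int(np.argmax(spec.mean(axis=0)))
expected_bin = round(440 * 256 / sr)
```

In the real system, spectrogram patches like these would be the inputs on which the network is re-trained with labeled whale-call audio.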

We Assume that...

Right whales can be identified at the species level using bioacoustic techniques; this much is well established.

It is also possible to automatically verify that a sound is a right whale call with a high (>95%) degree of accuracy.

Furthermore, it may be possible to fingerprint an individual whale, enabling more detailed - yet still passive - tracking.

Most organizations that provide data and infrastructure will be willing to work with us.
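The >95% accuracy assumption only means something when measured against held-out labeled audio. A minimal sketch of how such a claim would be checked - the labels and predictions below are invented for illustration, and `detection_metrics` is a hypothetical helper, not part of our codebase:

```python
def detection_metrics(labels, preds):
    """Accuracy, precision and recall for a binary call detector.
    labels/preds: sequences of 0 (no call) / 1 (right whale call)."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(labels),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Illustrative held-out labels vs. detector output on ten clips
labels = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
preds  = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
m = detection_metrics(labels, preds)
```

Reporting precision and recall alongside accuracy matters here: calls are rare relative to background noise, so accuracy alone can look high even for a detector that misses most calls.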

Constraints to Overcome

Re-training the network is straightforward, but the result is useless if it doesn't fit into the overall ecosystem (both natural and scientific). Part of our team's core mission is to be a value-add to existing data and infrastructure rather than reinventing or duplicating work. The challenges, then, are in gathering and analyzing the data, and in connecting with the organizations that provide the infrastructure.

The ocean can be considered the biggest collection of unstructured data in existence, and data collection and annotation remain our biggest challenge. Luckily, we have been able to tap into our network and relationships to obtain the data we do have. From there, it will be a lot of outreach work: connecting with existing infrastructure providers and industry pioneers.

Current Work

Our goal is to create an open source deep neural network that lets people auto-detect marine fauna in static and streaming audio, a user interface that gives people access to the system, and a pre-trained model that people can deploy to their own hardware. Due to the "all hands on deck" mandate from organizations like CLF[1], our primary focus will be the North Atlantic right whale.

[1] https://www.clf.org/blog/all-hands-on-deck-save-endangered-north-atlantic-right-whale/
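Supporting both static files and real-time streams such as Orcasound largely comes down to windowing: the same per-window detector runs over either source. A sketch of that idea - `energy_detector` is a stand-in threshold for illustration only, where the real system would apply the trained network to each window:

```python
import numpy as np

def windows(stream, win=4096, hop=2048):
    """Yield overlapping windows from any iterable of audio samples,
    so one detector handles files and live streams alike."""
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) >= win:
            yield np.array(buf[:win])
            buf = buf[hop:]  # keep the overlap for the next window

def energy_detector(window, threshold=0.1):
    """Stand-in for the trained network: flags high-energy windows."""
    return float(np.mean(window ** 2)) > threshold

# Example: two seconds of silence followed by a loud burst
stream = [0.0] * 8192 + [0.5] * 8192
flags = [energy_detector(w) for w in windows(stream)]
```

Because `windows` consumes an iterable one sample at a time, the same loop can wrap a decoded audio file or a network socket delivering a hydrophone feed.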

Current Needs

We have the technical abilities, but we will need mentoring and consultation from subject matter experts in both marine science and bioacoustics. Luckily, we have such people in our network. We also have AWS credits from the Fishackathon, so we have ample computing resources.