Yellowstone National Park and the Wild Livelihoods Business Coalition, in collaboration with Grizzly Systems and several academic institutions, have partnered to deploy low-power, AI-infused monitoring devices that capture acoustic and visual data for behavioral research and for tracking the presence and distribution of wolves across the Greater Yellowstone Ecosystem. Accurate population and occupancy estimates play a vital role in shaping state and federal management policies. Using a variety of artificial intelligence algorithms, scientists can efficiently analyze large audio, video, and still-image data sets to find and then study wolf communication behavior.
Because wolf vocalizations carry over relatively long distances, AI-infused autonomous recording units (ARUs) integrated into camera traps can serve as low-cost tools to enhance existing census efforts. Better approaches to livestock-conflict deterrence may also be possible through playbacks of scientifically validated wolf or guardian-dog vocalizations triggered by the AI devices. Finally, audio and video educational tools can be created via an end-to-end software platform that responsibly showcases the biodiversity of some regions in order to encourage other regions to follow suit.
Passive Acoustic Monitoring (PAM) has emerged as a cost-effective and noninvasive technique for wolf surveys, providing detection probabilities exceeding those attained through camera trapping. We are building ARUs with classifiers for real-time detection, as well as ML models for post-processing analysis of the behavioral functions of wolf vocalizations. While bioacoustic monitoring is not a novel concept, the advent of advanced AI algorithms has opened up new possibilities to reduce costs and enhance researcher productivity in telemetry monitoring (for more information, see Using machine learning to decode animal communication). The Greater Yellowstone region holds realistic, lower-cost potential for bioacoustic research because of the long-term knowledge already gained from radio collaring, flight surveys, camera traps, and field surveys. As such, this collaborative research project aims to collect 24x7x365 bioacoustic data at pre-determined locations in the GYE that can be archived, much like genetic data, and used later for research on any species that vocalizes below 12 kHz.
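To make the real-time detection side concrete, the sketch below scores fixed-length audio windows with a pretrained classifier; the sample rate follows from the 12 kHz archival band (Nyquist), while the model file name, window length, and threshold are illustrative assumptions rather than the project's actual configuration.

```python
# Minimal sketch of an on-device detection loop, assuming a classifier exported
# as a TorchScript file ("wolf_detector.pt" is a hypothetical name).
import numpy as np
import torch
import torchaudio

SAMPLE_RATE = 24_000   # >= 2 x 12 kHz, preserving the full archived band (Nyquist)
WINDOW_SECONDS = 5.0   # length of each acoustic window scored by the model (assumed)
THRESHOLD = 0.8        # detection score above which a window is flagged (assumed)

detector = torch.jit.load("wolf_detector.pt")  # hypothetical exported model
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

def score_window(samples: np.ndarray) -> float:
    """Return the model's wolf-signal probability for one audio window."""
    spec = mel(torch.from_numpy(samples).float().unsqueeze(0))  # (1, n_mels, frames)
    with torch.no_grad():
        logits = detector(spec.unsqueeze(0))                    # (1, 2)
    return torch.softmax(logits, dim=-1)[0, 1].item()

def run(stream):
    """Iterate over fixed-length windows from an audio source and yield detections."""
    for window in stream:  # each item: 1-D float array of SAMPLE_RATE * WINDOW_SECONDS samples
        if score_window(window) >= THRESHOLD:
            yield window   # hand off for saving, uplink, or a playback trigger
```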
Some Initial Findings
- Wolves predominantly vocalize during nighttime hours
- Wolves increase daytime vocalizations during the winter breeding season
- Wolves rapidly modulate their howls during "stressful" situations (e.g., inter-pack conflict)
- Wolves respond to coyote vocalizations, but do not silence the coyotes
- Individual wolves can be identified by the pitch of their howl
- Female wolves play a significant role in how a pack communicates
Collaboration Partners
A Little about the Technology
Supervised Wolf Bioacoustic Detection
There is extensive precedent for applying ML to supervised bioacoustic detection tasks; examples include a sperm whale click detector, a humpback detector, and a model that detects and classifies birdsong, among many others. Employing similar methods, we can train a convolutional neural network (CNN), either from scratch or from pretrained weights, to classify an acoustic window as non-signal or wolf signal, depending on the absence or presence of a wolf vocalization in that window.
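A minimal sketch of such a classifier is shown below, assuming log-mel spectrogram inputs; the layer sizes, two-class output, and input shape are illustrative assumptions rather than the deployed model.

```python
# Sketch of a binary (non-signal vs. wolf-signal) spectrogram classifier.
import torch
import torch.nn as nn

class WolfHowlCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse frequency and time dimensions
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time_frames) log-mel spectrogram
        return self.classifier(self.features(spec).flatten(1))

# Example: score one window (64 mel bands x 235 time frames is an arbitrary shape).
model = WolfHowlCNN()
window = torch.randn(1, 1, 64, 235)
probs = torch.softmax(model(window), dim=-1)  # [P(non-signal), P(wolf signal)]
```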
Gabe, a high school intern, annotating a wolf chorus howl for our machine learning algorithm
Supervised Wolf Chorus Counting
To our knowledge, there are no prior attempts at automated acoustic counting of overlapping signals, though several approaches look promising. We are training models (e.g., an LSTM-CRF) to predict the number of overlapping spectral elements at fine timescales using open-source data, assessing each model's ability to generalize to new datasets, and training a model to predict the number of wolves in a chorus from human annotations of how many wolves are vocalizing concurrently.
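As an illustration of the frame-level counting idea, the sketch below uses a bidirectional LSTM to predict a per-frame overlap count from spectrogram frames; the count cap, hidden size, and omission of a CRF smoothing layer are simplifying assumptions rather than the project's actual model.

```python
# Sketch of frame-level overlap counting, assuming each spectrogram frame is labeled
# with the number of wolves vocalizing at that instant (0..MAX_WOLVES). A CRF layer
# for temporally smoothing the per-frame predictions is omitted for brevity.
import torch
import torch.nn as nn

MAX_WOLVES = 8  # assumed cap on concurrent howlers in a chorus

class ChorusCounter(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, MAX_WOLVES + 1)  # counts 0..MAX_WOLVES

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, time_frames, n_mels); returns per-frame count logits
        out, _ = self.lstm(spec)
        return self.head(out)  # (batch, time_frames, MAX_WOLVES + 1)

# One simple chorus-level estimate: the largest per-frame count predicted anywhere.
model = ChorusCounter()
chorus = torch.randn(1, 500, 64)
per_frame_counts = model(chorus).argmax(dim=-1)  # (1, 500)
estimated_pack_size = int(per_frame_counts.max())
```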
Unsupervised Wolf Source Separation
Building on previous work in source separation, particularly the unsupervised MixIT training algorithm used to separate overlapping birdsong mixtures, we can attempt to separate wolf choruses into estimates of the individuals present in the chorus. Although the approach is not functionally limited in the number of sources it can handle, it is unclear how the model will perform as the number of concurrently vocalizing wolves increases.
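To make the training idea concrete, the sketch below shows a simplified mixture-invariant (MixIT) objective: two unlabeled field mixtures are summed, the model separates the sum into M sources, and the loss is minimized over all assignments of sources back to the two original mixtures. A mean-squared-error reconstruction term stands in for the SNR-based loss used in the original MixIT work, and the separator interface is a hypothetical placeholder.

```python
# Simplified sketch of the MixIT (mixture-invariant training) objective.
import itertools
import torch

def mixit_loss(separator, mix1: torch.Tensor, mix2: torch.Tensor) -> torch.Tensor:
    """mix1, mix2: (batch, samples) unlabeled mixtures drawn from field recordings."""
    mom = mix1 + mix2                 # mixture of mixtures fed to the model
    sources = separator(mom)          # (batch, M, samples) estimated sources (hypothetical API)
    m = sources.shape[1]
    best = None
    # Try every binary assignment of the M estimated sources to the two mixtures
    # and keep the assignment with the lowest reconstruction error.
    for assign in itertools.product([0, 1], repeat=m):
        a = torch.tensor(assign, dtype=sources.dtype, device=sources.device)
        est1 = (sources * (1 - a).view(1, m, 1)).sum(dim=1)   # sources assigned to mix1
        est2 = (sources * a.view(1, m, 1)).sum(dim=1)         # sources assigned to mix2
        loss = ((est1 - mix1) ** 2).mean() + ((est2 - mix2) ** 2).mean()
        best = loss if best is None else torch.minimum(best, loss)
    return best
```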
Unsupervised Meaning Discovery in Wolf Vocalizations
The CETI project has shown that machine learning models, even with little or no prior understanding of a species' vocal repertoire, can be used to reveal meaningful units in its sounds. The approach in the paper "Approaching an Unknown Communication System by Latent Space Exploration and Causal Inference", with modifications for wolf vocalizations, is promising.
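A very rough sketch of the latent-space idea is shown below: fixed-length vocalization segments are embedded, projected into a low-dimensional space, and clustered so that recurring clusters can be inspected as candidate units. PCA and k-means are simple stand-ins for the paper's latent-space and causal-inference machinery, and the embedding source, dimensionality, and cluster count are placeholders.

```python
# Rough sketch of unsupervised unit discovery in wolf vocalizations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def discover_units(embeddings: np.ndarray, n_units: int = 10) -> np.ndarray:
    """embeddings: (n_segments, dim) vectors for candidate vocalization segments."""
    latent = PCA(n_components=16).fit_transform(embeddings)           # reduced latent space
    return KMeans(n_clusters=n_units, n_init=10).fit_predict(latent)  # cluster id per segment
```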
Related Scientific Research
- Acoustic Identification of Wild Gray Wolves, Canis lupus, Using Low Quality Recordings
- Citizen science contribution to national wolf population monitoring: what have we learned?
- Tracking cryptic animals using acoustic multilateration: A system for long-range wolf detection
- Testing a New Passive Acoustic Recording Unit to Monitor Wolves
- Bioacoustic Detection of Wolves: Identifying Subspecies and Individuals by Howls
- Singing in a wolf chorus: structure and complexity of a multicomponent acoustic behaviour
- The potential for acoustic individual identification in mammals
- The contribution of source filter theory to mammal vocal communication research
- Cross Modal Perception of Body Size in Domestic Dogs (Canis familiaris)
- Size communication in domestic dog, Canis familiaris, growls
- Wolf Howling Is Mediated by Relationship Quality Rather Than Underlying Emotional Stress
- Not afraid of the big bad wolf: calls from large predators do not silence mesopredators
- Acoustic analysis of wolf howls recorded in Apennine areas with different vegetation covers
- Visualizing sound: counting wolves by using a spectral view of the chorus howling
- Wolf howls encode both sender- and context-specific information
- Individually distinct vocalizations in timber wolves, Canis lupus
- Automated identification of avian vocalizations with deep convolutional neural networks
- Chorus Howling by Wolves: Acoustic Structure, Pack Size and the Beau Geste Effect
- Timber wolf howling playback studies: Discrimination of pup from adult howls
- Radiographic analysis of canine vocal tract anatomy and its implications for human language origins
- Voice-Sensitive Regions in the Dog and Human Brain Are Revealed by Comparative fMRI
- Does size matter? Examining the drivers of mammalian vocalizations
- Long-duration, false-colour spectrograms for detecting species in large audio data-sets
- Recognition of familiarity on the basis of howls: a playback experiment in a captive group of wolves
Donate Financially to the Project
Yellowstone National Park's Wolf Project Team appreciates your interest in financially supporting the Cry Wolf Bioacoustics project. All donations go through Yellowstone Forever, the official non-profit of Yellowstone National Park. To ensure that your funds go to the Cry Wolf Project, click on the Donate Now button below. Put "For Dr Dan Stahler and The Cry Wolf Project" in the optional comments field. Thank you!