Our technology enables the development of systems that process audio signals and learn to recognize patterns, grounded in research on human auditory neuroscience. Our platform simulates networks of active elements, modeled as nonlinear dynamical systems, that mimic small populations of neurons in the brain. When stimulated with sound, these networks self-organize, learn, and recognize patterns. We are building intelligent, robust, and flexible systems for processing music, speech, and other sounds.
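As an illustration of the kind of dynamics involved, the sketch below simulates a small bank of driven nonlinear oscillators (in the Hopf normal form commonly used in this line of research) tuned to a gradient of natural frequencies. All parameter values, the stimulus, and the simple Euler integration are illustrative assumptions for this sketch, not the platform's actual implementation.

```python
import numpy as np

# A bank of damped Hopf-type oscillators, each with state z obeying
#   dz/dt = z * (alpha + i*2*pi*f + beta*|z|^2) + s(t),
# tuned to a gradient of natural frequencies f and driven by a sinusoid.
alpha, beta = -1.0, -1.0            # damping and amplitude saturation (illustrative)
freqs = np.linspace(1.0, 8.0, 15)   # natural frequencies in Hz: the "gradient"
f_stim = 4.0                        # stimulus frequency in Hz

dt, T = 1e-4, 10.0                  # small step keeps Euler integration stable
z = np.zeros(len(freqs), dtype=complex)
for n in range(int(T / dt)):
    t = n * dt
    s = 0.25 * np.exp(1j * 2 * np.pi * f_stim * t)   # complex sinusoidal input
    dz = z * (alpha + 1j * 2 * np.pi * freqs + beta * np.abs(z) ** 2) + s
    z = z + dt * dz

amps = np.abs(z)
print(freqs[np.argmax(amps)])       # prints 4.0: the matching oscillator resonates most
```

The oscillator whose natural frequency matches the stimulus settles at a large amplitude, while detuned oscillators stay near rest, which is the resonance behavior that lets such networks pick out frequency patterns in sound.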
The GrFNN API was developed through a collaboration between Oscilloscape and the Music Dynamics Laboratory at the University of Connecticut. Development was supported in part by funding from the National Science Foundation (BCS-1027761) and the Air Force Office of Scientific Research (FA9550-12-10388).
A MATLAB version is available on GitHub.