Funded Research
Funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the New Media Initiative (2007-2010)

My group is working with research groups headed by Sid Fels and Eric Vatikiotis-Bateson to develop performances of new stage works involving live singers interacting with gesture-controlled speech and facial synthesis. Three types of speech synthesis will be used: formant, acoustic tube modelling, and articulatory. Data generated by the CyberGloves will control virtual faces based on data acquired through principal component analysis.

Video examples
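As a rough sketch of the PCA idea behind the facial control (synthetic data and NumPy's SVD, purely illustrative and not the project's pipeline), captured facial poses can be reduced to a few principal components, and low-dimensional glove values can then be mapped onto component weights:

```python
import numpy as np

# Hypothetical illustration: reduce motion-capture face data to a few
# principal components so a handful of glove values can drive a virtual face.
rng = np.random.default_rng(0)

# Synthetic stand-in for captured data: 500 frames of 30 3-D facial markers,
# flattened to 90-dimensional pose vectors.
frames = rng.normal(size=(500, 90))

# Principal component analysis via SVD of the mean-centred data.
mean_pose = frames.mean(axis=0)
centred = frames - mean_pose
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

k = 5                              # keep only the leading components
basis = components[:k]             # (k, 90) orthonormal directions
scale = singular_values[:k] / np.sqrt(len(frames))  # typical excursion per component

def synthesize_pose(weights):
    """Map k controller values (e.g. from a glove) to a full facial pose."""
    return mean_pose + (weights * scale) @ basis

# Example: a glove gesture expressed as component weights in [-1, 1].
pose = synthesize_pose(np.array([0.8, -0.2, 0.1, 0.0, 0.5]))
print(pose.shape)  # (90,) -> 30 markers x 3 coordinates
```

Because the components are ordered by the variance they explain, a small number of glove-controlled weights can cover most of the observed facial motion.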
My group worked with Sid Fels (UBC Electrical Engineering) to rewrite and develop the sound synthesis and gesture routines that he and Hinton (U of Toronto) used in their Glove Talk project. This project developed and refined a gesture-controlled performance system for improvising digitally synthesized sound in real time in concert and stage performances. Performers use glove controllers to create speech, song, and electroacoustic timbres by manipulating a software model of the vocal tract, and also control the processing and multi-channel diffusion of sound from other acoustic and digital instruments.
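To illustrate how a few continuous controller values can drive a software vocal model, here is a minimal source-filter formant synthesizer (formant synthesis being one of the three methods named above). The formant frequencies, bandwidths, and pitch are illustrative assumptions, not the project's actual synthesis code:

```python
import numpy as np
from scipy.signal import lfilter

# Minimal source-filter sketch: an impulse-train glottal source is passed
# through two-pole resonators tuned to vowel formant frequencies.
fs = 16000          # sample rate in Hz
f0 = 110            # fundamental (pitch) in Hz
dur = 0.5           # seconds

# Glottal source: a crude impulse train, adequate for illustration.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

def resonator(x, freq, bandwidth):
    """Filter x through a two-pole resonator centred at freq Hz."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [1.0 - r]   # rough gain normalization
    return lfilter(b, a, x)

# Cascade resonators at the first three formants of an /a/-like vowel.
signal = source
for freq, bw in [(700, 80), (1220, 90), (2600, 120)]:
    signal = resonator(signal, freq, bw)

signal /= np.abs(signal).max()   # normalize for playback
```

In a live setting the formant frequencies and bandwidths would be updated continuously from the glove data rather than fixed, which is what makes this family of models attractive for gestural control.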
UBC Toolbox: Max/MSP/Jitter modules
Information and downloads on the MuSET site
Funded by the UBC Faculty of Arts Instructional Technology Fund, 2003
This is a joint project with Profs. Keith Hamel (Music) and Nancy Nisbet (Visual Arts) to develop sophisticated bpatchers that can be used by Arts students who have minimal knowledge of the Max/MSP/Jitter programming language. Approximately thirty bpatchers have been created by Hamel, Pritchard, and Sawatsky; they are used by Profs. Hamel and Pritchard in interactive works and in music and visual arts courses.