New technique
Essentially, FM synthesis approximates musical sounds by attempting to reproduce the manner in which their strength varies with frequency. A limitation of this technique is that it can only match the sound of an instrument as it is measured at one point in space. Now a new technique, called digital waveguide synthesis, has been developed at CCRMA by Associate Professor (Research) Julius O. Smith III. The new technique substantially improves sound quality by modeling the sound-generating processes that take place in instruments themselves.
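For readers unfamiliar with the older technique, the core of FM synthesis can be sketched in a few lines: one oscillator (the modulator) wobbles the phase of another (the carrier), producing a rich spectrum from very little computation. The function names and parameter values below are illustrative, not taken from any particular implementation.

```python
import math

def fm_sample(t, carrier_hz=440.0, mod_hz=220.0, index=2.0):
    """One sample of a basic FM tone: a carrier sinusoid whose phase
    is modulated by a second oscillator (illustrative parameters)."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

# Render a short tone at 44,100 samples per second.
RATE = 44100
tone = [fm_sample(n / RATE) for n in range(RATE // 10)]  # 0.1 second
```

Note that the sound is shaped entirely by the modulation index and frequency ratio; nothing in the formula models a physical instrument, which is the limitation waveguide synthesis addresses.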
"Waveguide synthesis is the key to duplicating the sounds of actual instruments," Chowning said.
Smith, an electrical engineer who began playing in rock bands when he was 14, settled as a graduate student on a project to improve the simulation of a violin.
The brute force way to approach this problem, he explained, is to break down each string into hundreds of segments, or samples, and then solve the equations of motion for each of these points on the string, about 44,000 times per second. That produces a realistic simulation of how the string vibrates and so produces sound. But it requires more calculations per second than the special digital signal processing (DSP) chips designed for this purpose can handle.
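The brute-force approach described above can be sketched as a standard finite-difference simulation of the wave equation: every interior point of the string is updated at every time step. All values here (segment count, stability constant, pluck shape) are illustrative assumptions, not figures from Smith's work.

```python
N = 200        # string segments ("samples" along the string)
c2 = 0.49      # (wave speed * dt / dx)^2; must be <= 1 for stability

# "Pluck" the string with a triangular initial displacement, at rest.
y = [min(i, N - 1 - i) / (N / 2) for i in range(N)]
y_prev = y[:]

def step(y, y_prev):
    """Advance every interior point one time step using the standard
    finite-difference form of the wave equation; endpoints stay fixed."""
    y_next = [0.0] * N
    for i in range(1, N - 1):
        y_next[i] = 2 * y[i] - y_prev[i] + c2 * (y[i-1] - 2 * y[i] + y[i+1])
    return y_next, y

for _ in range(100):   # a real-time run would need ~44,000 steps per second
    y, y_prev = step(y, y_prev)
```

Each step costs on the order of N operations, and roughly 44,000 steps are needed per second of sound, which is what put this approach out of reach of the DSP chips of the day.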
Smith got the idea for a simpler approach in 1985 while listening to a colleague on a shuttle bus going to a computer music conference. The colleague was discussing his work on reverberators, systems in which signals bounce around in a cavity without losing much energy. That train of thought started Smith thinking about an approach that initially ignores the frictional losses in a string and then adds them back at a later point.
By starting with an "ideal" string that, when plucked, vibrates forever, Smith was able to reduce the number of computations that it takes to calculate the position of the string by a factor of 100 to 1,000, making it possible to run the simulation using current DSP chips.
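The "ideal string" shortcut can be sketched as a circulating delay line: since a lossless string just carries its shape back and forth forever, the whole simulation collapses to reading and recirculating one sample per tick, with the frictional losses reintroduced as a single multiply. This is essentially the structure popularized as Karplus-Strong synthesis, a close relative of Smith's waveguide approach; the values below are illustrative.

```python
from collections import deque

N = 100   # delay-line length sets the pitch (sample rate / N Hz)

# "Pluck" the ideal string: load the delay line with an initial shape.
line = deque((min(i, N - 1 - i) / (N / 2) for i in range(N)), maxlen=N)

def next_sample(loss=1.0):
    """Read one output sample and recirculate it. With loss=1.0 the
    'ideal' string rings forever; a value just under 1.0 reintroduces
    the frictional losses that were ignored at first."""
    s = line[0]
    line.append(s * loss)   # appending evicts the oldest sample
    return s

ideal = [next_sample() for _ in range(300)]   # lossless: repeats exactly
```

One read and one write per output sample, instead of hundreds of equation-of-motion updates, is roughly where the factor of 100 to 1,000 in savings comes from.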
The digital waveguide approach is not limited to stringed instruments. Because the sound propagation in tubular instruments like flutes, clarinets and trumpets is very similar to what happens with a string, by adding a simple simulation of what takes place at the mouthpiece or reed, Smith was able to simulate the sounds of these instruments as well.
In addition, Perry Cook, a CCRMA research associate working with Smith, has modified this technique so that it convincingly reproduces the singing voice. Essentially, the method works for most long-and-thin sound generators, including the human larynx and string, wind and brass instruments.
Not only does the waveguide approach recreate the sounds of these instruments more closely but, because its underlying algorithms are based on a physical model, it is straightforward to add performance nuances such as vibrato, or the emotional colorations created by varying breath pressure in a woodwind or changing bow speed on a string, Smith said.
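The point about performance nuances can be illustrated simply: in a physical model, vibrato is just a slow periodic wobble applied to one of the model's control parameters, here shown as pitch. The depth and rate values are typical illustrative figures, not from the article.

```python
import math

def vibrato_pitch(t, base_hz=440.0, depth_hz=6.0, rate_hz=5.0):
    """Instantaneous pitch with vibrato: a slow sinusoidal modulation
    applied directly to a physical control parameter (the pitch)."""
    return base_hz + depth_hz * math.sin(2 * math.pi * rate_hz * t)

RATE = 44100
pitches = [vibrato_pitch(n / RATE) for n in range(RATE)]  # one second
```

The same one-line modulation idea applies to breath pressure or bow speed, which is why such nuances fall out of the physical model almost for free.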
http://news.stanford.edu/pr/94/940607Arc4222.html