Many synthesizers do not use samples. Physical modeling oscillators have been available in good synthesis engines for a long time. These generate sound from a physics model of real acoustic instrument designs and materials, dynamically reacting the way real instruments do. They used to require dedicated synthesizers but are now buried in the long list of oscillator models available on general-purpose synthesizers.
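For a concrete flavor of the technique, the classic Karplus-Strong algorithm models a plucked string as a noise burst circulating through a delay line with a lowpass filter in the feedback path. Here's a minimal illustrative sketch (function name and parameter values are mine, not from any particular synthesizer):

```python
import numpy as np

def karplus_strong(freq, duration, sr=44100, damping=0.996):
    """Karplus-Strong plucked string: a noise burst fed through a
    delay line with a two-point lowpass in the feedback path."""
    n = int(sr * duration)
    delay = int(sr / freq)                    # delay-line length sets the pitch
    buf = np.random.uniform(-1.0, 1.0, delay) # initial excitation: the "pluck"
    out = np.empty(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # two-point average is a simple lowpass; damping sets the decay time
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

tone = karplus_strong(440.0, 2.0)  # two seconds of a plucked A4-ish string
```

The pitch falls out of the delay-line length and the decay out of the damping term, so "tuning" and "material" are just numbers in the model.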
I used to own a Korg Z1, which had one of the first physical modeling synthesis engines. While it could produce relatively realistic acoustic sounds, that is not what people ended up using these types of synthesizers for. Unlike a physical acoustic instrument, where the basic geometry, material properties, and other parameters are fixed, most parameters of the synthesized physical models could be dynamically modulated and manipulated like any other synthesis parameter.
Modulating the fundamental physics properties of an acoustic instrument at audio rates produces some really interesting timbres and effects that are not reproducible using any other type of synthesis or (obviously) acoustic instruments. Consequently, no one ended up using them for realistic acoustic sounds; it was much more interesting to use the synthesis engine to do physically impossible manipulation of the acoustic model to generate novel sounds.
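To make "physically impossible manipulation" concrete, here is the same hypothetical Karplus-Strong sketch with the string's damping coefficient (roughly, its energy loss, which a real string's material fixes forever) wobbled by an audio-rate sine wave; the modulation rate and depth are arbitrary illustrative values:

```python
import numpy as np

def ks_damping_mod(freq, duration, sr=44100, mod_rate=70.0, mod_depth=0.05):
    """Karplus-Strong variant whose feedback damping is modulated at an
    audible rate -- something no physical string can do."""
    n = int(sr * duration)
    delay = int(sr / freq)
    buf = np.random.uniform(-1.0, 1.0, delay)  # noise-burst excitation
    out = np.empty(n)
    # per-sample damping swinging between ~0.90 and ~1.00
    damping = 0.95 + mod_depth * np.sin(2 * np.pi * mod_rate * np.arange(n) / sr)
    for i in range(n):
        out[i] = buf[i % delay]
        buf[i % delay] = damping[i] * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

buzzy = ks_damping_mod(220.0, 2.0)  # an A3 "string" whose losses pulse at 70 Hz
```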
These days almost everything in a synthesizer engine is based on modeling, thanks to the inexorable increase in available computing power. Nonetheless, there is still considerable convenience and economy in using acoustic instruments or samples for many purposes. Just because we can create incredibly detailed and realistic physical models doesn't mean it is worth the effort, and they often have terrible UIs. I could see this being an area where AI could do a lot.
Omnisphere is a good example of where the term "sample-based" isn't entirely accurate (ignoring its modeled oscillators). While it does have a large and excellent sample library, many of the oscillator engines use the samples as spectral feedstock rather than as sounds to be played per se. The raw sample is not identifiable even though it imparts a characteristic quality to how the oscillator sounds. (Omnisphere is also massively popular because it is an excellent synthesis engine with an unusually good ease-of-use-to-power ratio. Still one of my all-time favorites.)
Problem: We don't really know what "realistic" quality means. Is it the timbre? Is it the spectral dynamics? Is it a set of recognisable behaviours? Does playing versus only listening to a simulated instrument make a difference to the perception of "realism"? Sure, we can do naive A-B tests on audio, but they turn out to show unexpected or "wrong" results.
Not that I'd encourage anyone to get involved in academia in its current degenerate state, but for someone really passionate about this, I'd say this postdoc Diemo is heading at IRCAM looks like one of the more fun, interesting, and challenging projects around right now. FWIW there's a similar programme running at Edinburgh with Stefan Bilbao and Rod Selfridge, testing the quality of PDE synthesis and AI discovery of parametric control.