2. Acoustics

Category: Acoustics

Technique: Apply multiple reverb times/pre-delays
Requires: Convolution reverb, multiple Impulse Responses (IRs)
Effect: Places instruments in their own spaces

Technique: Mic bleed/phasing referencing
Requires: Stems, guide or reference tracks
Effect: Creates a more effective impression of multiple instruments in the same space

Technique: More players = less reverb
Requires: Reverb, guide tracks
Effect: Simulates reverb absorption

Technique: Use convolution reverb for early reflections, plate reverb for tails
Requires: Convolution & plate reverb
Effect: Blends the separate reverbs together and provides the room tails

Technique: EQ reverb applied to different instruments where necessary
Requires: Reverb & EQ
Effect: Simulates reverb absorption

Technique: Use the same Impulse Responses (IRs) across different libraries where possible
Requires: Convolution reverb, suitable impulse responses
Effect: Helps to place the various instruments in the same acoustic space

Technique: Apply chamber reverb to a sampled instrument
Requires: Appropriate recording & playback equipment
Effect: Allows the user to record their own live acoustic space with the virtual instruments

Technique: Use a stereo delay in conjunction with reverb and panning
Requires: Stereo delay, reverb
Effect: Recreates the sense of early reflections moving around the room to the other mics

______________________________________________________________________________________________________

2. Acoustics - Summary

Along with compositional practice and instrument design, few outside disciplines have affected music performance quite as much as acoustics. For centuries, architects sought to develop public spaces which would accentuate and reinforce a given performance, ranging from sacred choral works in cathedrals to orchestras in concert halls. As sample libraries, particularly large-scale orchestral libraries, become more common and significantly more complex and detailed, it is perhaps no surprise that the space in which a given library was recorded has become something of a major selling point. It is worth noting that the issue of emulating real orchestral spaces is not confined to virtual renditions of ensembles. In Performing Nostalgia on Record: How Virtual Orchestras and YouTube Ensembles Have Problematised Classical Music, Eve Klein notes how, even with the advent of recording technologies and the possibilities they provide, recording engineers will typically aim to replicate how the acoustics would sound at a live performance:

The concert hall or the opera house are classical music’s most esteemed performance locations and classical music practitioners have fixated on “faithfully” emulating this performance environment on recordings throughout the last century rather than embracing the broader creative compositional palette offered by recording technologies… with the skill of the engineer being measured by how effectively they could recreate the perception of a great concert hall performance within ‘less compliant acoustics’  - (Klein, 2015)

Trying to impart a specific sense of a realistic acoustic space and reverberation within a virtual instrument can prove to be difficult but is a task made much easier thanks to advances in artificial reverbs, primarily that of Convolution Reverbs and Impulse Responses. This, in conjunction with the use of delay and the individual mic positions provided with a library, can be used to simulate both a given acoustic space for an entire ensemble as well as positioning specific instruments and groups within that space and relative to each other.

Given that orchestral sampled instruments are virtual and ‘in the box’, it can be assumed that any audio processing applied to them will also be virtual and software-based, although the same principles are generally applicable to both methods. As such, it is worth briefly considering the perceived effectiveness of this kind of reverb. By carrying out a series of listening tests, Pätynen and Lokki compared the sound of an orchestra in real acoustic spaces to that of simulated acoustics using Impulse Responses. Their results found that, allowing for several influencing factors, “at best, the auralizations were considered as good as the real hall” (Pätynen & Lokki, 2010).
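The core of a convolution reverb can be sketched in a few lines: the dry signal is convolved with a measured impulse response, so every input sample triggers a scaled copy of the room's response. The sketch below is illustrative only (real plugins use FFT-based partitioned convolution for efficiency), and the toy impulse response is invented for demonstration.

```python
def convolve(dry, ir):
    """Direct convolution of a dry signal with an impulse response (IR).

    Each input sample 'fires' a scaled copy of the measured room response;
    the output length is len(dry) + len(ir) - 1.
    """
    wet = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += x * h
    return wet

# Toy IR: a direct spike followed by two decaying reflections.
ir = [1.0, 0.0, 0.5, 0.0, 0.25]
dry = [1.0, 0.0, 0.0, 0.0]   # a single impulse standing in for the instrument
wet = convolve(dry, ir)       # the impulse now carries the room's decay
```

Passing a unit impulse through the convolution simply reproduces the IR itself, which is why an IR fully characterises a linear acoustic space.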

Acoustics 2a)

As each section of an orchestra occupies a different position in the space, using different reverbs and reverb times for each section can be effective in giving the impression that the instruments are situated in different positions. Each section will have a separate pre-delay time depending on where it is situated and how close it is to the audience. Instruments near the front of the stage will be heard primarily via the direct signal reaching the audience's position, represented by the microphones, whereas instruments placed towards the back of the stage will be heard more in terms of the early reflections reverberating from the acoustic boundaries of the space.

In general, the three sets of parameters for situating virtual instruments can be summarised as follows:

For depth (near to far): EQ, volume, reverb (direct sound vs. early reflections)

For left and right: panning, delay

For overall space: reverb early reflections, room tail.
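The delay and pre-delay values implied by these positions follow directly from the speed of sound (roughly 343 m/s at room temperature). A short arithmetic sketch, with the distances invented purely for illustration:

```python
# Speed of sound at roughly 20 degrees C; distances below are invented examples.
SPEED_OF_SOUND = 343.0  # metres per second

def delay_ms(distance_m):
    """Time (in milliseconds) for sound to travel the given distance."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Direct sound from an instrument 12 m from the main microphone position:
direct = delay_ms(12.0)          # roughly 35 ms
# First reflection arriving via a side wall (total path of 20 m):
reflection = delay_ms(20.0)      # roughly 58 ms
# Pre-delay: the gap between the direct sound and the first reflection.
predelay = reflection - direct   # roughly 23 ms
```

An instrument further back on the stage shortens this gap, which is why reducing pre-delay helps push a virtual instrument towards the rear of the space.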


Figure 12. Volume (in dB) & Reverb time (in ms) relation for a given space (Acoustic & MIDI Orchestration for the Contemporary Composer, 2007, p150)

Zone 1: Reverberation time 2.6 seconds; filter setting: flat
Zone 2: Reverberation time 2.3 seconds; filter setting: damp -2 dB around 100 Hz (the slightly shorter reverb time and the filter on the low end of the spectrum give a bit more clarity to bass instruments)
Zone 3: Reverberation time 2.9 seconds; filter setting: damp -3 dB around 200 Hz
Zone 4: Reverberation time 3.1 seconds; filter setting: damp -3 dB around 150 Hz

Figure 13. An example of using both various reverberation times in addition to applying EQ filtering to more accurately position instruments virtually (Acoustic & MIDI Orchestration for the Contemporary Composer, 2007, p150)


Figure 14. Some convolution reverbs provide impulse responses already recorded for specific instrument placements within a space. This effect can be simulated manually by using individual reverb times set for each individual instrument and/or section.


Acoustics 2b)

One of the primary characteristics of live orchestral recordings is microphone ‘bleed’ and phasing. This is a result of the many microphones typically used in an orchestral recording session and the layout of the instruments in relation to the microphone positions. While a particular microphone may be used for close mic’ing a string section, for example, it will invariably pick up the sound of other instruments to varying degrees. This effect is difficult to simulate, but one method is to send subtle amounts of different sections to the reverbs of other sections using sends and returns within the DAW. This is another instance where stem recordings of an orchestral track can be extremely beneficial, serving as a general guide to how much bleed from each section reaches the microphones of other sections, both in terms of the direct sound and the reverberations. To achieve this, solo each individual stem (representing a separate microphone) for the instrument concerned, assess how much of the other instruments can be heard through the recording, and attempt to mimic this as closely as possible. Depending on how much of one instrument can be heard on a given microphone, varying levels of its signal can then be sent to the shared reverbs and groups of other instruments.
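The send-and-return bleed routing described above can be sketched as a simple mixing matrix. All section names, levels, and the two-sample 'stems' below are invented for illustration; in practice the levels would be set by ear against the reference stems:

```python
# Hypothetical two-sample 'stems' per section; real stems would be audio.
stems = {"strings": [1.0, 0.5], "brass": [0.8, 0.2]}

# bleed[src][dst]: proportion of the source section leaking into the
# destination section's microphone/reverb bus (set by ear in practice).
bleed = {"brass": {"strings": 0.1}}   # 10% of the brass reaches the strings mic

def mix_with_bleed(stems, bleed):
    """Return per-section buses: each bus is its own stem plus scaled bleed."""
    buses = {}
    for dst, stem in stems.items():
        bus = list(stem)
        for src, other in stems.items():
            if src == dst:
                continue
            level = bleed.get(src, {}).get(dst, 0.0)
            for i, sample in enumerate(other):
                bus[i] += sample * level
        buses[dst] = bus
    return buses

buses = mix_with_bleed(stems, bleed)
# The strings bus now carries a faint copy of the brass signal, so any
# reverb fed from that bus reverberates a little of the brass as well.
```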

Acoustics 2c)

As a general rule, the larger the orchestra, the shorter the overall reverberation time, as the players themselves absorb the sound. This acoustic absorption becomes more prevalent simply through the presence of more people in the space, and applies both to the number of players on the stage and to audience members. It can also serve as a practical mixing principle, as instruments will often be obscured by reverb times which are too long.

Put simply, a lone instrument performing in a large acoustic space will have a very long reverb time. A full orchestral ensemble in that same space will significantly reduce the reverb, and even more so if an audience is present.


Figure 15. A comparison of the Impulse Response (IR) of a sound with and without the orchestra present (Dammerud, 2009)
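The 'more players = less reverb' rule can be quantified with Sabine's equation, RT60 being roughly 0.161 · V / A, where V is the room volume in cubic metres and A the total absorption in metric sabins (square metres of perfect absorption). The hall volume, surface absorption, and per-person absorption figures below are rough, assumed values for illustration only:

```python
# Sabine's equation: RT60 is roughly 0.161 * V / A.
def rt60(volume_m3, absorption_m2):
    return 0.161 * volume_m3 / absorption_m2

# All figures below are rough, assumed values for illustration.
HALL_VOLUME = 12000.0       # m^3, a mid-sized concert hall
SURFACE_ABSORPTION = 700.0  # m^2, the empty hall's walls, seats, etc.
PER_PERSON = 0.45           # m^2 of extra absorption per person (rough)

empty = rt60(HALL_VOLUME, SURFACE_ABSORPTION)                        # longest
orchestra = rt60(HALL_VOLUME, SURFACE_ABSORPTION + 90 * PER_PERSON)
full_house = rt60(HALL_VOLUME,
                  SURFACE_ABSORPTION + (90 + 1500) * PER_PERSON)     # shortest
```

With these assumed figures, adding 90 players trims a couple of tenths of a second from the reverb time, while filling the hall with an audience shortens it dramatically, matching the principle above.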

Acoustics 2d)

If using multiple reverbs or reverb settings, be it multiple convolution reverbs for different instrument sections or different reverb types, one method of helping to ‘glue’ the sounds together is to use a single reverb as a final stage for the overall room space, commonly referred to as the reverb ‘tails’: the reverberations which occur after the direct sound and early reflections. For example, a convolution reverb can be used to provide the early reflections, while an algorithmic reverb afterwards can help blend the different reverbs together and provide the room tails. This can be applied as a master reverb across all instruments or sections with a short reverb time, or can be fed the output of the other convolution reverbs.


Figure 16. A diagrammatic representation of the reverb setup used. This approach allows individual instruments and sections to have their own reverberation times and responses. The instrument tracks at the top (1st Violins, 2nd Violins, etc.) are grouped into auxiliary groups (AUXs) whose outputs are fed to separate reverb instances. This allows a wide array of routing options and the ‘sharing’ of reverbs across different groups, which in turn allows microphone bleed to be simulated. In the example above, a small amount of the Strings AUX group is sent to the Brass group (Brass AUX), which is in turn fed to its own early-reflections reverb (Brass ERs).
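The routing in Figure 16 can be sketched in miniature: per-section early-reflection IRs are convolved separately, then summed into a single shared 'tail' stage. The IR values and the toy exponential tail below are invented stand-ins for real convolution and algorithmic reverbs:

```python
def convolve(sig, ir):
    """Direct convolution: each sample fires a scaled copy of the IR."""
    out = [0.0] * (len(sig) + len(ir) - 1)
    for i, x in enumerate(sig):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

def tail_reverb(sig, decay=0.5, taps=3):
    """Toy stand-in for an algorithmic tail: decaying echoes of the input."""
    out = [0.0] * (len(sig) + taps)
    for i, x in enumerate(sig):
        for t in range(taps + 1):
            out[i + t] += x * (decay ** t)
    return out

# Per-section early reflections (invented IRs), convolved separately...
strings_er = convolve([1.0], [0.0, 0.6, 0.3])
brass_er   = convolve([1.0], [0.0, 0.4, 0.2])
# ...then summed and fed into one shared tail stage, gluing them together.
summed = [a + b for a, b in zip(strings_er, brass_er)]
wet = tail_reverb(summed)
```

Because both sections pass through the same final stage, their distinct early reflections decay with an identical tail, which is the 'glue' effect described above.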

Acoustics 2e)

The frequency response of a reverberated sound is similar to that of the direct sound of an instrument in that frequency attenuation occurs: the higher frequencies of the reverberated sound will be reduced more than the lower frequencies, something artificial reverbs often fail to mimic. When applying reverb, use filters and/or reduce the amount applied in order to simulate both a room full of musicians and the attenuation of the higher frequencies present. Many reverb plugins, particularly convolution reverbs, include a built-in EQ filter for reducing the high or low end of the resulting reverb; however, a third-party EQ can be applied in the same manner.
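The high-frequency damping described above amounts to low-pass filtering the reverb return. A minimal sketch using a one-pole low-pass filter (y[n] = y[n-1] + a * (x[n] - y[n-1])); the cutoff frequency and sample rate are illustrative assumptions:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=48000.0):
    """Darken a signal with a one-pole low-pass filter.

    The coefficient a maps the cutoff to a per-sample smoothing amount;
    lower cutoffs give a duller, more absorbed-sounding result.
    """
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in signal:
        y += a * (x - y)          # y[n] = y[n-1] + a * (x[n] - y[n-1])
        out.append(y)
    return out

# A unit impulse through the filter: the response is smeared and, above
# the (illustrative) 4 kHz cutoff, attenuated, mimicking air absorption.
darkened = one_pole_lowpass([1.0, 0.0, 0.0, 0.0], cutoff_hz=4000.0)
```

In a DAW this corresponds to inserting an EQ with a high-shelf or low-pass band on the reverb return, after the reverb itself.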

Acoustics 2f)

When combining libraries recorded in different spaces, check whether the hall or room a library was recorded in has an impulse response available for commercial use in a convolution reverb (e.g. the scoring stage used for EastWest's Symphonic Orchestra is included as an impulse response within the QL Spaces reverb plugin). This can help impart the sense that the other libraries and instruments are in the same space. For this to work, the same impulse response must be used throughout: different acoustic spaces each react with their own characteristics and sound, so sharing the same impulse response across varied instruments helps situate them in the same space more easily.

Acoustics 2g)

To impart a unique or 'live' aspect to a sampled part, or as an alternative to an artificial reverb, the chamber reverb method can be another option.

Taken from the practice used by recording studios, particularly in the early days of recording, this involves playing back an audio signal from a speaker (or speakers) into a reverberant space, which is then re-recorded.

 


Figure 17. A recording setup for capturing a chamber reverb. 

Acoustics 2h)

Use a stereo delay in combination with a reverb to simulate positioning. Reverb gives the impression of space, but a stereo delay can impart both a sense of space and a more accurate impression of positioning than panning alone. Keeping the delay largely panned to one side, but with a small amount still applied to the opposite side, helps give the impression that the sound is 'bouncing' around the room and reflecting to the other side, in addition to suggesting microphone ‘bleed’.
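The panned delay described above can be sketched as follows: a mono source is placed mostly to the right, with a delayed, quieter copy sent mostly to the opposite channel to suggest a reflection arriving at the far side of the room. All levels and the delay length are invented for illustration:

```python
def stereo_delay(mono, delay_samples, near=0.8, far=0.3):
    """Place a mono source mostly right, with a delayed copy mostly left.

    All levels and the delay length are illustrative, not measured values.
    """
    n = len(mono) + delay_samples
    left, right = [0.0] * n, [0.0] * n
    for i, x in enumerate(mono):
        right[i] += x * near                       # direct sound, panned right
        left[i]  += x * (1.0 - near)               # small direct component left
        left[i + delay_samples]  += x * far        # 'reflection' on the far side
        right[i + delay_samples] += x * far * 0.3  # faint residue on the near side
    return left, right

left, right = stereo_delay([1.0, 0.0], delay_samples=2)
```

The faint delayed residue left on the near-side channel is what suggests the reflection continuing to move around the room, and doubles as a hint of microphone bleed.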