1. Balance, Panning & Positioning

Category: Balance, Panning & Positioning

Aim | Technique | Requires | Effect
Achieve more realistic and accurate balances within instruments & sections | Reference comparative audio stems | Individual audio stems from the same recording or performance | Instrument and section volumes are relative to each other, as they would be in a live recording
Maintain consistent dynamics across all instruments | Maintain a standard benchmark of velocity levels to dynamics when contrasting or combining relative volumes | Individual audio stems from the same recording or performance | Instrument and section volumes are relative to each other, as they would be in a live recording
Ensure that dynamic and timbral changes are accurate to individual instrument registers | | | Instrument and section volumes are relative to each other, as they would be in a live recording
Control the apparent closeness of the instrument to the listener | Use EQ to help position instruments | EQ Processing | Gives the impression of distance
Position instruments and groups within a space | Use Stereo Imaging & Panning | Stereo Imaging & Panning plugins/controls | Provides greater control over positioning instruments within a space

______________________________________________________________________________________________________

1. Balance, Panning & Positioning - Summary

 

In attempting to replicate the sound of any instrument, the success or failure of the endeavour relies largely on the user's knowledge of that instrument. This knowledge invariably goes beyond how the instrument itself sounds or is used in composition and orchestration; it also covers how the instrument is played, its general characteristics, and how it sounds acoustically within a given space.

When attempting to emulate an ensemble, particularly an orchestra, these demands are compounded further by how each instrument sounds in relation to the others. Music educator and composer Alan Belkin, in his treatise Artistic Orchestration, notes:

Computer simulation of the orchestra is of course a useful tool, and its quality is constantly increasing. But to do a really convincing simulation requires that one already know, in some detail, how the passage must sound; most nonprofessional simulations are poorly balanced and woefully lacking in refinement. - (Belkin, 2008, p4)

It is notable that, of all the issues, difficulties and pitfalls that can arise when trying to emulate the sound of an orchestra, such as unrealistic or 'robotic' performances, poor sound quality and unconvincing articulations, the two Belkin specifically mentions are being 'poorly balanced' and 'woefully lacking in refinement'. Whilst individual instruments and small ensembles, sampled or otherwise, may sound highly effective on their own, one of the most telling aspects of an artificial orchestra is blend and balance: how each section sounds relative to another. Composers and orchestrators face this issue with real orchestras as well; a large part of the art of orchestration lies in writing music that balances coherently and effectively in the first place, and this becomes all the more apparent in simulations of the orchestra.

This issue becomes all the more problematic where composers begin their practice writing with or for virtual instruments and learn to ignore these problems and limitations. As Goss observes:

developing composers may become accustomed to the limitations, and start to abandon the realities… From the perspective of a professional concert music composer, the process of making a sound set sound good means embracing its limitations. Unfortunately, this limits the type of music one can effectively compose. - (Goss, 2013)

On the issue of blend and internal balance within an orchestra, it is worth briefly exploring how orchestral recordings have dealt with this in the past. For centuries, ongoing developments and refinements were made to the orchestra relating to balance and blend, with section sizes changing and evolving over time. With the advent of recording, although changes had to be made to the overall layout of the orchestra to compensate for the poor quality and sensitivity of the earliest recording equipment, the main aim was largely to stay true to the sound of a real orchestra as heard in a concert hall. This standard sound remained largely idiomatic for subsequent orchestral recordings as recording equipment and techniques developed. This changed, however, with the work of Bernard Herrmann, one of the most prolific and critically acclaimed film composers of the twentieth century. Having first come across the practice in the world of radio broadcasting, he was the first to make use of the recording opportunities now available to manipulate the sounds for his scores, which often featured both orchestral and electronic instruments. Herrmann used the ability to overdub and record instruments separately to create new blends and balances previously unachievable. In the BBC documentary series '20th Century Greats', an episode dedicated entirely to Herrmann explains:

Hollywood composers of the 1930s led by Max Steiner and Erich Korngold more or less imported lock, stock and barrel the orchestral sound of 19th century Vienna into their film scores… But Herrmann realised that for film, which was a one-off, this was nonsense… thanks to close miking in the sound studio, he could bring together instruments that couldn't possibly be heard together in the concert hall - (Goodall, 2004)

For the very first time, the traditional balances of the orchestra were being deliberately defied. This technique of manipulating recording practice to achieve otherwise impossible sounds, textures and balances continues in modern film scores to this day. Inevitably, this raises an important, but perhaps largely subjective, issue for the virtual orchestrator. As sample libraries are largely nothing more than recordings, this option of manipulation and audio trickery is always available, and so too is the question of whether or not to approach the sound of the orchestra as 'authentic' and purist. In his online article 'Records and Reality: How Music Sounds in Concert Halls', Robert E. Greene states:

Almost all records are made with the microphones closer to the performers than the audience would be. The sound very close to the performers is also an aspect of the absolute sound of live music. But the sound that the composer and the performers intend for us to hear is the sound at audience locations, and the sound the audience would hear is presumably what we should be trying to hear at home from our audio systems - (Greene, 1985)

The use of sampled orchestral libraries therefore offers the composer a choice: to try to emulate the balance, sound, blend and panning/positioning of a live orchestra as heard from the audience's perspective, or to ignore these conventions and historical practices, as other recording technologies have allowed. A further issue arises which concerns balance, blend and performance alike. One of the key contributing factors to the homogeneous sound of, for example, an entire string section of an orchestra is that each player must react and adapt to the other players, in terms of timing, articulation and tuning, but most noticeably in terms of volume. However, as many orchestral sample libraries feature instrument sections recorded separately, these can sound separate and disjointed when played together.

It is important to note, however, that if the ultimate aim of using orchestral samples is for the parts to later be replaced by a live ensemble, or if the samples are being used as a means of learning orchestration in and of itself, attention to realistic performances can be vitally important.

Balance 1a)

The use of reference material can be extremely useful across a variety of disciplines, from production, mixing and mastering to the volume balances and performance emulation required for orchestral simulations. Reference material is significantly more useful where it closely matches the work concerned in terms of orchestral forces, genre, articulations and performances, and the acoustic space to be emulated. In the case of virtual simulation this can be taken further by using the individual stems of a live recording session. By comparing stems of individual sections and separate microphone positions, and contrasting the balance and blend between each, the user can more accurately analyse the relative balances of volume, panning and positioning.

Two such sources of orchestral recording stems, provided specifically for the purpose of MIDI orchestration balancing and/or mixing, can be found with Mike Verta's Virtuosity masterclass and Thinkspace Education's Mixing Cinematic Music series (the former available only through purchasing the masterclass, the latter free by subscribing via email). These stems contain not only individual sections but also individual microphone positions for specific instruments and groupings. If the user wishes to emulate a realistic or live orchestral balance, it is important to balance not only the contrast and relative power between instrument sections, but also to ensure that each individual instrument's volume level is accurate and matched with the other players.


Figure 1. Orchestral stems of a live orchestra recording session provided with Mike Verta's Virtuosity Masterclass, inside Pro Tools 11. Multiple microphone positions are provided for several of the instruments. Orchestral stems can be used to mimic various aspects of a live recording, from the volume balances and positioning to the amount of microphone spill and reverb times. These stems were used as guidelines and references when creating the larger ensemble pieces for the Listening Tests and Original Compositions.
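
Where such stems are available, a rough level comparison between a reference stem and the equivalent mock-up stem can also be made numerically rather than purely by ear. The following is a minimal Python sketch, assuming the numpy and soundfile libraries are available and that both stems have been bounced to WAV files; the file names are hypothetical.

import numpy as np
import soundfile as sf

def rms_dbfs(path):
    # Return the overall RMS level of an audio file in dBFS.
    audio, _ = sf.read(path, always_2d=True)    # shape: (samples, channels)
    mono = audio.mean(axis=1)                   # fold to mono for a rough level check
    rms = np.sqrt(np.mean(np.square(mono)))
    return 20 * np.log10(max(rms, 1e-12))       # guard against log(0) on silent files

# Hypothetical file names: a stem from a live session and the equivalent mock-up bounce.
reference_level = rms_dbfs("reference_violins_stem.wav")
mockup_level = rms_dbfs("mockup_violins_render.wav")

# A positive offset suggests the mock-up section should come up by roughly that many dB.
print(f"reference {reference_level:.1f} dBFS, mock-up {mockup_level:.1f} dBFS, "
      f"offset {reference_level - mockup_level:+.1f} dB")

A simple RMS figure of this kind only indicates average level, not perceived loudness or blend, so it serves as a starting point for balancing by ear rather than a replacement for it.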

Noise source | dB | Peak dB
Single musicians:
Violin/viola (near left ear) | 85-105 | 116
Violin/viola | 80-90 * | 104
Cello | 80-104 * | 112
Acoustic bass | 70-94 * | 98
Clarinet | 68-82 * | 112
Oboe | 74-102 * | 116
Saxophone | 75-110 * | 113
Flute | 92-105 * | 109
Flute (near right ear) | 98-114 | 118
Piccolo | 96-112 * | 120
Piccolo (near right ear) | 102-118 * | 126
French horn | 92-104 * | 107
Trombone | 90-106 * | 109
Trumpet | 88-108 * | 113
Harp | 90 | 111
Timpani and bass drum | 74-94 * | 106
Percussion (high-hat near left ear) | 68-94 | 125
Percussion | 90-105 | 123-134
Singer | 70-85 * | 94
Soprano | 105-110 | 118
Choir | 86 | No data
Normal piano practice | 60-90 * | 105
Loud piano | 70-105 * | 110
Keyboards (electric) | 60-110 * | 118
Several musicians:
Chamber music (classical) | 70-92 * | 99
Symphonic music | 86-102 * | 120-137
* measured at 3 m

Figure 2. An estimate of the volume levels of each orchestral instrument. Though approximate, this table can be used to help gauge and illustrate the relative volume differences between instruments, e.g. that a French horn should be louder than a Clarinet, etc. (soundadvice.com, 2007)

Where individual orchestral stems are not available, complete recordings or performances of similar reference works can be used to gauge the appropriate balance and overall sound. For example, with the proprietary Virtual Orchestra used for live shows on Broadway, Bianchi notes how they would:

use the overture to Mozart’s the marriage of Figaro to establish balance in the string sounds, woodwinds and timpani, and the prelude to Bizet’s Carmen to adjust the large orchestral tutti sound which includes full brass and percussion - (Bianchi, 1998, p1)

While the use of complete audio tracks can be extremely beneficial, it may at times be more prudent to use orchestral stems of a given recording in order to more accurately compare and contrast each of the sections individually.

Balance 1b)

Given the nature of MIDI, dynamics and volume changes are typically tied to specific values or continuous controllers, most commonly CC1 (Modulation), CC11 (Expression) or CC7 (Volume). Adopting a uniform guideline or benchmark relating certain velocity ranges to specific performance dynamics can help maintain an overall balance across instruments; for example, a velocity range of around 80 to 90 might correspond to the dynamic mezzo-forte (mf). As different libraries and developers use different numbers of velocity layers, dynamics and control options, such a benchmark can be difficult to maintain when using a variety of libraries within the same piece. This should be kept in mind when trying to emulate orchestral section balances, where the velocity and volume differences should be compared and matched as closely as possible.

Dynamic | ppp | pp | p | mp | mf | f | ff | fff
Logic Pro 9 dynamics | 16 | 32 | 48 | 64 | 80 | 96 | 112 | 127
Sibelius 5 dynamics | 20 | 39 | 61 | 71 | 84 | 98 | 113 | 127
Sibelius 5 attacks | 15 | 30 | 50 | 60 | 75 | 90 | 105 | 119

Figure 3. Examples of assigning MIDI velocity ranges/values to specific dynamic markings. As different libraries use different velocity levels, these will be relative to each library (Wikipedia.org, 2018)
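
To illustrate how such a benchmark might be held constant across libraries, the following is a minimal Python sketch using the Logic Pro 9 values from Figure 3 as the reference mapping; the per-library offsets and library names are purely hypothetical and would need to be set by ear for each library.

# A single benchmark mapping dynamic markings to MIDI velocities,
# here using the Logic Pro 9 values from Figure 3.
DYNAMIC_TO_VELOCITY = {
    "ppp": 16, "pp": 32, "p": 48, "mp": 64,
    "mf": 80, "f": 96, "ff": 112, "fff": 127,
}

# Hypothetical per-library trims, in velocity steps, to keep e.g. 'mf' sounding
# comparable when libraries use different numbers of velocity layers.
LIBRARY_OFFSET = {
    "library_a_strings": 0,
    "library_b_brass": -6,
}

def velocity_for(dynamic: str, library: str) -> int:
    # Return a velocity for a dynamic marking, clamped to the usable MIDI range.
    value = DYNAMIC_TO_VELOCITY[dynamic] + LIBRARY_OFFSET.get(library, 0)
    return max(1, min(127, value))

print(velocity_for("mf", "library_b_brass"))   # -> 74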

 

Balance 1c)

Internal volume balances within sections, e.g. flutes with clarinets or violins with violas, and balances between different sections, e.g. woodwinds with strings, are vitally important. It is also important to note, however, that instruments reach different dynamics and timbres when playing in different registers. Balances between different instruments across dynamic ranges must be taken into consideration, e.g. ensuring that a brass section playing forte is louder than the other sections, or that a violin section playing pianissimo is still louder than a flute section playing at the same dynamic, along with the fact that a flute playing in its lowest range will be quieter than when playing in its highest register. As Mike Verta states during his online masterclass:

Have an idea orchestrationally of the actual real world volume of a flute playing forte versus three trumpets in the exact same range…knowing the real difference in volume between those is maybe the lynch-pin upon which most of this [the practice of orchestral simulations] hinges. - (Verta, 2013) 

This practice is described by Bianchi and Smith in relation to their Virtual Orchestra (VO), used for live performances, often to accompany or enhance real instruments and singers. Bianchi notes:

the VO sound level must reproduce as closely as possible the sound level output of each instrumental section for a given playing effort. This is easily done by cataloguing the sound level of a typical pit orchestra playing sectional chords at mezzo forte and setting the velocities and volume control to emulate the same sound level. The object of this exercise is to assure the conductor that the VO will respond correctly in sound level since the entire orchestral score has been translated to MIDI using velocities and volumes commensurate with the loudness marks in the score. - (Bianchi, 1998, p1) 

Another related issue is that of maintaining realistic dynamic ranges for an instrument in a given register. This is largely a matter of the construction of the instrument and the skill of the player; however, general guidelines can be found in various orchestration manuals and resources.


Figure 4. An example of a diagrammatic representation of the dynamic range of flutes and oboes in relation to register. Whilst somewhat subjective and difficult to judge accurately, the general guidelines can typically be relied on. It is worth noting that other texts or resources may refer to this in other ways, such as the Essential Dictionary of Orchestration's 'Dynamic Contour' of an instrument; these all refer to the same thing.


Figure 5. Another example of approximate volume levels of instruments, including the dynamic ranges for each. This can be another important factor to consider, as the dynamic ranges of different instruments can alter significantly. Furthermore, the register which an instrument is playing in can also influence the dynamic range and how loud or soft it is capable of reaching. (Sundstrup, 2008, p91)

 

1. Panning & Positioning

The research article Orchestral Seating in Modern Performance (Smith, 2009) explores the various changes and developments made to the orchestral seating arrangement and the various historical influences that affected it. The article provides an overview of how different arrangements have changed over time, and can provide a guideline of the positioning of instruments, as was the case for several of the works used in the Original Compositions.

The most common modern orchestral placement and seating has been in place, in broadly similar form, for the past few hundred years. Although the familiar arrangement, with the string section spread across the front, the brass to the rear of the orchestra, the woodwinds in the centre between the strings and brass, and the percussion to the left, has become one of the most common setups, it can and often does vary. Depending on the forces required or the piece being played, the seating and arrangement of instruments and groups can change drastically from what may be assumed of a typical full orchestra. Panning is typically dealt with at the final, 'mixing' stage of a track; the majority of developers and libraries do not provide a separate pan control within the sample player itself, leaving the user to control panning in their DAW. However, on the assumption that users will wish to emulate the original orchestral seating of each instrument, many libraries are now recorded with the close mics placed, and panned, in relation to where the instrument sits within the orchestra.


Figure 6. Default panning position of the first (Close) mic in EastWest's Hollywood Strings library. In this case, the close mic of the 1st Violins is automatically panned to where they would sit in a typical orchestra. Many recent orchestral libraries have pre-panned microphone positions.

This means that when the 'close' mic is enabled for an instrument or ensemble, the sound is automatically panned to the position in the stereo field where that instrument would sit in a live orchestral recording or performance. While this feature reduces the number of parameters the user needs to manage, the same effect can easily be simulated manually using a variety of options.


Figure 7. Typical layout of a full orchestra and choir (ia33.org, 2012). Whilst this may be the most common arrangement, as with volume balances it often varies widely depending on the instruments, the material, and the creative decisions of the composer and/or conductor.
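
Where a library provides no pre-panned close mics, a seating plan such as the one in Figure 7 can be approximated manually. The following is a minimal Python sketch of a constant-power pan law on a -100 to +100 scale; the seat positions listed are rough, hypothetical approximations rather than measured values.

import numpy as np

# Hypothetical seat positions on a -100 (hard left) to +100 (hard right) scale.
SEAT_PAN = {
    "violins_1": -70, "violins_2": -35, "violas": 15,
    "cellos": 50, "basses": 75, "flutes": -10,
    "horns": -40, "trumpets": 35, "timpani": 60,
}

def constant_power_gains(pan: float) -> tuple[float, float]:
    # Return (left, right) gains for a pan position between -100 and +100.
    angle = (pan / 100 + 1) * np.pi / 4        # map -100..+100 onto 0..pi/2
    return np.cos(angle), np.sin(angle)        # equal power: L^2 + R^2 == 1

def pan_mono(signal: np.ndarray, pan: float) -> np.ndarray:
    # Turn a mono signal into a stereo pair placed at the given pan position.
    left, right = constant_power_gains(pan)
    return np.stack([signal * left, signal * right], axis=1)

# Example: place a mono 1st-violin patch at its seat.
mono = np.zeros(48000)                         # stand-in for one second of audio at 48 kHz
stereo = pan_mono(mono, SEAT_PAN["violins_1"])

A constant-power law keeps the perceived level roughly stable as a source is moved across the field, which is why most DAW pan pots behave this way by default.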

Balance 1d)

As sound travels, air absorption, early reflections and a reduced proportion of direct sound combine to remove high-frequency content. As a result, instruments positioned further from the audience have less high-frequency content than those placed at the front of the orchestra. EQ that reduces the highs (and lows) of an instrument can therefore help to 'push' it further back into the sound space, e.g. giving the impression that the French horns sit behind the string section by making their high-frequency content less defined and audible than that of the strings. The same applies to reverb: the higher frequencies of an instrument's reflections continue to be attenuated as the sound reverberates against multiple surfaces and travels through the air.

In his online tutorial on Template balancing, Mike Verta states;

Z depth is the one you can't get around, and that's close and far. Because even in our hypothetical example of an orchestra in one room… that z space, that depth, [is] an important part of making the presentation believable. Even one microphone on an orchestra in mono picks up that people are… - (Verta)

It should be noted, however, that the effectiveness of this approach can be undermined where a sampled instrument was recorded with a close microphone that captured detail which could only be heard from that position. For example, the added detail of bow rosin on a violin must be considered if the user wishes to place the instrument further back in the space. Narrow, targeted EQ cuts on these sounds may help to minimise this close, direct-sound character.

If you want to place something at the back of the mix, it not only needs to be quieter than the up‑front sounds: it also needs to have less top end, to emulate the way air absorbs high frequencies. You may also want to roll off some low‑end below 150 to 200Hz, to enhance the illusion of distance - (SoundonSound, 2008) 


Figure 8. In the above example, the first audio segment is the dry vocal sample. The second contains reverb, and the third has the reverb applied to the sample along with a shelving EQ to reduce the low and high frequencies to 'push' the singer further back in space (the Z depth).
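
The 'push back' described above and in the Sound on Sound quote can be sketched very simply in Python with scipy: a gentle low-pass to soften the top end and a low cut below roughly 180 Hz, followed by a level drop. The corner frequencies, filter orders and the push_back name are illustrative assumptions, not a prescribed setting.

import numpy as np
from scipy.signal import butter, lfilter

def push_back(audio: np.ndarray, sample_rate: int = 48000,
              high_cut_hz: float = 6000.0, low_cut_hz: float = 180.0) -> np.ndarray:
    # Apply a gentle low-pass and a low cut to suggest greater distance.
    # First-order low-pass: softens the high-frequency detail lost over distance.
    b_lp, a_lp = butter(1, high_cut_hz / (sample_rate / 2), btype="low")
    # Second-order high-pass: removes weight below ~180 Hz to enhance the illusion.
    b_hp, a_hp = butter(2, low_cut_hz / (sample_rate / 2), btype="high")
    distant = lfilter(b_lp, a_lp, audio)
    return lfilter(b_hp, a_hp, distant)

# Example: a burst of noise standing in for a close-miked horn recording.
close = np.random.randn(48000)
far = push_back(close) * 0.7    # a level drop usually accompanies the EQ change

In practice a reverb send would follow this stage, as in Figure 8, so that the duller, quieter signal also carries more reflected than direct sound.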

Balance 1e)

As noted above, the seating and arrangement of instruments and groups can change drastically from what may be assumed of a typical full orchestra, depending on the forces used or the piece being played. Furthermore, if the user wishes to defy convention, following the tradition of custom seating or the practice of manipulating recording technology for 'unrealistic' placements, panning becomes an artistic and creative matter rather than one of replication. Regardless of how the user wishes to place the instruments, however, there are various levels and methods of panning: from using microphone positions recorded in situ and provided by the developer, to applying panning within the DAW or within the instrument itself. One additional method, which offers a further layer of control, is to use a stereo imager or positioner to place an instrument more accurately or strategically within a given space. This gives another form of control over panning and is particularly helpful where an instrument is provided only as a stereo recording: using such a plugin, the instrument's stereo field can be narrowed and then positioned in a more controlled manner. This can be further enhanced or manipulated at a later stage with mastering software that allows panning or control of the stereo image within set frequency bands, for example tightening and panning the lower bass frequencies so that they come from only one part of the orchestra's stereo field.


Figure 9. An example of an orchestral layout in relation to Panning positions applied from -100 (extreme Left) to +100 (extreme Right) in a typical DAW (audiorecording.me, 2011)


Figure 10. iZotope's free Ozone Imager stereo imaging plugin.
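
In the absence of a dedicated imager plugin such as the one in Figure 10, the narrowing-and-placing step can be sketched with simple mid/side arithmetic. The following Python sketch is illustrative only; the width and balance values are assumptions, and a real imager offers far finer (and frequency-dependent) control.

import numpy as np

def narrow_and_place(stereo: np.ndarray, width: float = 0.4,
                     balance: float = -0.5) -> np.ndarray:
    # Scale stereo width (0 = mono, 1 = unchanged) and tilt the image
    # towards one side (balance -1 = hard left, +1 = hard right).
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) * 0.5
    side = (left - right) * 0.5 * width          # shrink the side signal to narrow the image
    new_left = (mid + side) * min(1.0, 1.0 - balance)
    new_right = (mid - side) * min(1.0, 1.0 + balance)
    return np.stack([new_left, new_right], axis=1)

# Example: narrow a stereo violin patch and nudge it left of centre.
violins = np.random.randn(48000, 2)              # stand-in for a stereo recording
placed = narrow_and_place(violins, width=0.4, balance=-0.3)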

A special mention must be made of French horns. Because the instrument is played with its bell facing away from the audience, much less of its direct sound is heard, particularly in relation to the balance between direct sound and the reflections reverberating in the space. By contrast, trumpets are played with the bell aimed towards the audience and so have a much clearer direct sound in comparison to their early reflections.

Balance 1f)

For an additional level of control, panning can be applied to instruments in stages: first to individual/solo tracks, then to the overall instrument section or group. The panning applied to a solo instrument works at a micro level within its group, while the group panning places the section in the wider orchestral setting. This can be achieved using subgroups or auxiliaries, where the outputs of the individual instruments are summed to one output which becomes the overall 'section' control.


Figure 11. A graphical representation of using multiple stages of panning, from individual instruments to subgroups. The same principle also applies when making iterative, micro-level changes to volume/balancing and reverb. The violin group places the overall sectional panning within the context of the orchestra, while the individual instrument track panning positions each player within the section.
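
This staged routing can be sketched as two passes of the same pan law: small pans inside the section are summed to a bus, and a second pan then places the bus as a whole. The following Python sketch is a simplified illustration; the desk names, pan values and stand-in audio are hypothetical.

import numpy as np

def pan_gains(pan: float) -> tuple[float, float]:
    # Constant-power (left, right) gains for a pan position between -100 and +100.
    angle = (pan / 100 + 1) * np.pi / 4
    return np.cos(angle), np.sin(angle)

def pan_stereo(stereo: np.ndarray, pan: float) -> np.ndarray:
    # Tilt an existing stereo signal using the constant-power gains
    # (both channels sit at roughly -3 dB when the pan is centred).
    left_gain, right_gain = pan_gains(pan)
    return np.stack([stereo[:, 0] * left_gain, stereo[:, 1] * right_gain], axis=1)

# Stage 1: small pans spreading the desks *within* the violin section, summed to a bus.
desk_pans = {"desk_1": -20, "desk_2": 0, "desk_3": 20}            # hypothetical desks
desks = {name: np.random.randn(48000, 2) for name in desk_pans}   # stand-in audio
section_bus = sum(pan_stereo(audio, desk_pans[name]) for name, audio in desks.items())

# Stage 2: the summed violin group is panned as a whole to its seat in the orchestra.
violin_group = pan_stereo(section_bus, -60)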

An example of the importance of placing individual sections is discussed in the Virtual Orchestra paper:

By far the most difficult sections are the first and second violins. Regardless of which loudspeaker is used, several of them are deployed for each section in a way that generates the time smear associated with multiple violins bowing together but spaced significantly apart in the pit. The “humanizing” algorithm in the MIDI controller generates minor statistical errors such as faulty intonation and staggered starts. However the acoustical time smear is extremely important for violin section realism - (Campbell and Bianchi, p672, 2008)