d&b has recently unveiled – doubtlessly ahead of the others – its digital system for coverage and tonal uniformity, designed for use with its J, V and Y systems. This new feature is part of the ArrayCalc V8 prediction software and uses the DSP of the D80 and D20 amplifiers. It is also free.
Objective: to compensate for the natural attenuation of sound, to smooth the contour of sound pressure levels and the tonal response inherent in the coupling between enclosures and, finally, to correct the effects of temperature and humidity on the propagation of the high end of the frequency spectrum.
We attended a demonstration organized by d&b at the Zenith in Paris for about a hundred French sound engineers and system engineers, plus the owners of rental companies: Frédéric André of Fa Music and Daniel Dollé of Silence, to name just a few.
Additionally, there were some friendly party-crashers of the caliber of Alex Maggi and XaXa Gendron, bringing along their expertly-trained ears.
A great group of people listened with great interest to the use of a DSP no longer merely as a simple correction tool – for filtering and protection of transducers via the usual presets – but as an active guidance and tonal-uniformity system for the array as a whole. Other brands have already tried this, with varying levels of success. Now it’s d&b’s turn to offer this sort of assistance to systems that are already available and operational.
Tim & his slides at the Zenith in Paris!
The long theoretical introduction was very well presented by Tim Frühwirth – responsible for product support and promotion at d&b – and detailed a few priorities chosen by the German brand to achieve the desired effect. The keystone is indeed the D80, whose output power is now familiar to all of us, but perhaps not the processing power of its DSP, which goes far beyond the needs of a preset.
It is this unprecedented processing capacity that d&b exploits, and it explains the recent birth of the D20, with the same DSP engine as its predecessor. The operating principle of Array Processing requires each amplifier channel – and thus each DSP module ahead of it – to drive one single enclosure. This doubles the number of D80s required to power the same array.
But there is no longer a place for “too good”!
Now let’s see the effects – or, rather, improvements – on what already works without the help of DSP. As Tim said, somewhat humorously, we are all familiar with the attenuation of sound, which is 6 dB with each doubling of distance for a point source system and about 3 dB for a line source.
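The attenuation figures Tim quoted can be sketched as a tiny helper; this is a minimal illustration of the textbook doubling rule, not anything from d&b’s software:

```python
import math

def spl_drop(distance, ref_distance=1.0, db_per_doubling=6.0):
    """SPL attenuation (dB) relative to a reference distance.

    db_per_doubling: ~6 dB for a point source, ~3 dB for a line source
    in its near field (the figures quoted in the talk).
    """
    return db_per_doubling * math.log2(distance / ref_distance)

# Doubling the distance costs exactly one 'db_per_doubling' step:
point = spl_drop(2.0)                       # 6.0 dB for a point source
line = spl_drop(2.0, db_per_doubling=3.0)   # 3.0 dB for a line source
```

Four doublings (1 m to 16 m) thus cost a point source 24 dB but a near-field line source only 12 dB, which is the whole appeal of the format.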
The first idea explored by d&b is therefore to fight this reduction, in order to give the farthest listeners a sound that is obviously slightly less powerful but spectrally consistent with what is heard in the front rows.
A second research priority concerns smoothing the curves of sound pressure level and tonal response – in other words, smoothing out the bumps inherent, for example, in low-end coupling or in proximity to the enclosures in the HF, without wasting the power that has been selectively removed, but rather redirecting it to where it is useful.
Another priority concentrates on more effective correction of the effects of temperature and humidity on the propagation of the high end.
As if that were not enough, the engineers at d&b have added the possibility of freely defining three different zones – such as the pit, the orchestra and the balcony – to treat them separately, with the option, for example, of “turning off” the sound in an unoccupied upper ring of bleachers.
The last aspect they worked on is the possibility of automatically homogenizing the output to a standardized response, for example so that too much difference is not perceived between the main arrays and the lateral reinforcement hangs.
A correction is also incorporated into the Array Processing to account for diffraction effects generated between the enclosures themselves.
Once the excesses and deficiencies are identified, the algorithm performs a kind of leveling or, rather, moves the energy from where it is in surplus to areas where it is lacking.
Finally, d&b is working to develop a new feature in its software that would allow the exclusion of a specific zone, while cautioning that what can be achieved depends inextricably on the nature of the array, its position in relation to the zone to be avoided, as well as its size and geometry. Such caution is to the manufacturer’s credit, because the less you touch the sound, the better. Of course, Array Processing takes into account the subs of the J, V and Y series flown at the top of the arrays but, in order not to prolong the processing time, intervention is limited to aligning their frequency response and phase relative to the array of mid-high boxes.
The dance of the line array
To offer such flexibility of intervention on the wavefront, d&b initially attempted dynamic time alignment of each module as a function of frequency – an electronic way of virtually moving one enclosure forward or backward relative to another – but soon gave up in the face of the undesirable effects it induced, which called into question the very principle of the line array.
A representation of the temporal shift, performed by a bank of FIR filters, that would be necessary to smooth the response at each of the three frequencies depicted on these three graphs – frequencies very close to one another. A purely mathematical approach; suffice it to say that it is impossible to achieve and will not work.
The solution was found by considering the array as a single whole, whose deformation resembles that of a living being, like a dancer or a swimmer during a dive. The column of enclosures therefore serves to ensure the optimization of the sound projection by shifting… without moving, through a combination of interdependent FIR and IIR filters from module to module and from frequency to frequency.
In the low end, this “movement” is important, since each sound source covers a large part of the listening area. At the top of the spectrum, by contrast, where each source covers only a very limited area, the algorithm changes its mode of operation.
This virtual ballet – a kind of invisible morphing that still manages to maintain coherence between all the elements of a line array – costs the user 5.9 ms of computation time, added to the 0.3 ms inherent in the D80 and D20 themselves, for a total of 6.2 ms of latency, or 2.15 meters. That is still acceptable if the microphones, console and effects lined up in the signal chain don’t already make it too much.
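The latency-to-distance conversion above is just the speed of sound times the delay. A quick check, assuming warm indoor air at roughly 347 m/s (the figure that reproduces the 2.15 m quoted; at the textbook 20 °C value of 343 m/s it would be about 2.13 m):

```python
# Converting the total Array Processing latency into an equivalent distance.
latency_s = 5.9e-3 + 0.3e-3      # AP processing + native D80/D20 latency, seconds
speed_of_sound = 347.0           # m/s, assumed warm indoor air (~26 °C)

equivalent_distance = latency_s * speed_of_sound
print(f"{latency_s * 1e3:.1f} ms of latency = {equivalent_distance:.2f} m of air")
```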
Then comes the most important setting, the one that determines the amount of intervention and efficacy required of the DSP to obtain the desired result. The scale ranges from -11 to +11. Minus 11 corresponds to the most prudent choice, the most discreet and, especially, one where the action of the algorithms will have the least impact on headroom. The left side of the dial therefore bears the sweet name “Power”.
On the right, however, the processing intervenes more aggressively; it sacrifices more level but attains the desired effect better – at least, that is how d&b sees it – so they’ve given it a name imbued with humor: “Glory”. Power and Glory. The choice of a scale up to 11 derives from the amps of the band in the film “Spinal Tap”… in short, humor on all levels.
The central ‘0’ position of this slider is in no way a bypass of the Array Processing, but the central position of the effect. To stop the action of the Array Processing, simply tick the “bypass” checkbox in R1, which is found in all the amplifiers. The extent of the intervention is selected during the creation of a preset, keeping an eye on the Realizer display. Any attempt to do something resembling sorcery will be sanctioned with a red indication on the display, even before your ears bring you back to sanity.
Where the big sound is!
After a quick refreshment break, the demonstration proper commenced, in complete objectivity and honesty. A main array of 12 V8s plus a side array of 6 Y8s was flown at stage right of the Zenith; two stacks of 3 V-Subs directly below the V array provided the low-end reinforcement.
The choice was dictated by the high propensity of these two systems to show their “limits” in this room: the V’s attenuation over distance due to its small size, and therefore the differences between the V and the Y.
Let us be clear: both of these systems work very well and even better in a room well known for its very solid acoustics, and the “defects” of which we speak are those that would be produced by any enclosure of any brand in such circumstances.
They played for us a male voice reading a text loop through the V array only, enabling us – at the cost of the rather amusing transhumance of 100 pairs of ears along the aisles of the Zenith – to get a good feel for the natural attenuation from the pit, almost directly underneath the array, to the top of the bleachers nearly 65 meters from the speakers.
In this first case, the preset and the equalization are standard and flat. Only the mechanical setup is designed to best distribute energy between high and low, while trying to avoid points of excessive energy accumulation.
Once everyone got to the top of the bleachers, the insertion of the Array Processing resulted in an obvious rise in level, brilliance, precision and, frankly, a perceived shortening of the distance separating us from the enclosures.
The preset for this demo was designed to attenuate only 2 dB for each doubling of distance, correct the absorption of the high end and smooth the frequency contour wherever one stands – and this is precisely what happens. We were given the chance to listen many times to the system in Traditional mode and with Array Processing, and we can only take off our hats to this new system. It works.
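As a rough worked example of what that 2 dB-per-doubling target buys: taking the ~65 m quoted for the top of the bleachers and assuming, purely for illustration, a front-row reference at 5 m (a figure not given in the article), the back rows recover several dB relative to a natural line-source roll-off:

```python
import math

near, far = 5.0, 65.0                 # metres; 5 m is an assumed reference
doublings = math.log2(far / near)     # ~3.7 doublings of distance

natural_drop = 3.0 * doublings        # ~3 dB/doubling line-source roll-off
processed_drop = 2.0 * doublings      # the demo preset's 2 dB/doubling target
gain_at_back = natural_drop - processed_drop   # level recovered at the back
```

Under these assumptions the natural drop is about 11.1 dB, the processed drop about 7.4 dB – roughly 3.7 dB regained at the top of the bleachers.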
A quick walk through the stands confirmed this first impression. Wherever we went, the sound was consistent, accurate in the high end and made the V – a very well designed system and already close enough – resemble the J system even more.
Only a very small defect in the 800 Hz to 1.5 kHz range – call it a slight restraint, digging a little into this part of the spectrum – betrayed the insertion of the Array Processing. That said, to notice it, you had to go down into the pit closer to the enclosures, even right up to the line marked on the floor with white gaffer tape, where the output of the first enclosure showed few side effects. Upon careful inspection – which is what we were there for – we could perceive slight variations in timbre depending on where we stood, but no more pronounced than the usual defects that occur from the coupling between modules in a line array. The balance is largely favorable.
Uhhh, would you like some nice guitar to go with the voice?
After this introductory audio material, we were offered a track by the late Chris Jones, “Roadhouses and Automobiles”, a signal more similar to what the speakers would normally reproduce and very interesting for the depth and tone of the voice and the cleanliness and richness of the guitar – in a word, a beautiful, very well chosen recording, but also perfect for highlighting the slightest defect. The positive impression we had with the simple voice remained the same with music.
The next test consisted of isolating an entire zone. In our case, the first zone was the top tier of seating at the Zenith, which is usually masked by sound-absorbent curtains during low attendance; a second test did the same to the pit, to simulate, for example, hosting a symphony orchestra.
They proposed for this purpose an attenuation of approximately 11 dB.
Again, it worked very well and the sound seemed to disappear, as if the HF band in a 3-way system had been cut, or at least attenuated very heavily. The crossover area into the covered zones was free of significant defects and spread across two or three rows of seats. The influence on the response in the areas where the SPL was not lowered, however, was somewhat greater.
We found the same defect in the midrange, which made the sound a little less natural, overprocessed and less fluid. Keep in mind that these differences were acceptable between bypass and preset; they were audible in the A/B comparison from the pit when the upper tribune was excluded, and vice-versa. It is also interesting to note the strange sensation when we would suddenly lose the upper tier and, above all, the sum of the reflections we are used to. This absence quickly became disconcertingly appealing, as the sound seemed cleaner. Of course, we are talking about an empty room, but the fact remains that the Array Processing algorithm offers cleaning properties that are quite new and very interesting.
The final test was perhaps the most difficult, and the one that left me a little disappointed. It consisted of giving an array of six little Y8s – equipped with two 8-inch woofers and a 1.4-inch driver – an output that would couple with that of the 12 V8s. Although configured with a passive crossover network like the Y, the V8 is equipped with two 10-inch woofers, an 8-inch midrange and two 1.4-inch drivers – obviously a whole different animal. More specifically, this was about covering the usual loss of foundation, roundness and fullness of the “big” box when leaving its coverage zone and entering that of the lateral array.
Tim first made us listen to the transition using standard presets and a careful phase setting. Then we repeated the walk around the Zenith with the Array Processing preset engaged. Again, the result was good – very good – and one could almost speak of “the mouse that roared”, as the six small enclosures put on their best V impression and proved the capability of Array Processing to give a common sound signature to the three series J, V and Y. However, I was less convinced by the phase coupling between the two arrays, which seemed less successful than in Traditional mode. Certainly, the transition passed nicely, without any real break, when leaving one system and entering the coverage of the other. Nevertheless, between 300 and 800 Hz, slight interference could be heard, which took away part of the magic. Clearly, the benefits outweigh this fault, especially for untrained ears, but this is probably an area where the technicians at d&b could improve further.
Implementation
The implementation process is simple once one has perfectly mastered the “classic” operation of a line array – namely, the most precise possible knowledge of the dimensions of the room, the bleachers and the type of performance one is looking for, as this obviously affects the nature of the array: its length, its height and the splay angles between the modules. It is fallacious – and d&b is quite clear on this point – to expect to make up for a botched mechanical setting or an insufficient number of enclosures by using Array Processing, especially since this powerful algorithm does not hesitate to suck up headroom as soon as one asks it to perform a miracle.
Realizer. Do not go into the red!
Once ArrayCalc has been given all the information it needs and a new Array Processing window has been opened for the V8, the software presents a number of options and settings that the user can manipulate to create new “dynamic” presets that, in a way, replace the usual fixed ones. This step is surprisingly fast and uses a very well thought-out visual interface, which includes a sort of safeguard meter, called the “Realizer”, that one should always keep an eye on.
As long as it stays green, all is well and the requested intervention will have little effect on the overall performance. Yellow announces that you are starting to exaggerate, but it is still quite acceptable. Orange indicates that the user is approaching foolishness, which red will eventually confirm he has achieved, if his ears haven’t already. We all know the tendency to give a tweak over here and another over there; this meter bar is therefore quite an important idea.
To sum up, the goal is to establish a flawless design, installation and mechanical setup that take the desired target into account and, if and only if these conditions are met, to begin using this processor to improve on what Mother Nature cannot do alone.
The analytical power of the Array Processing is quite impressive: each target point, spread across the listening plane, is spaced 20 cm from its neighbor and is, in a way, connected via an invisible thread to each of the enclosures that make up the array. A prediction calculation is performed as many times as necessary. Where it gets interesting is that, for each point – and there are many in a room when they are spaced 20 cm apart – the Array Processing repeats the same calculation for each of 24 frequencies within each octave. Add a zero, since there are ten octaves, and that makes 240 predictions, multiplied by the number of enclosures hanging.
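To get a feel for the order of magnitude, here is the arithmetic spelled out. The 20 cm grid and the 24-frequencies-per-octave over ten octaves come from the text; the listening-plane dimensions and the 12-box count are assumptions chosen only to match the demo array:

```python
# Order-of-magnitude count of the predictions behind one AP preset.
grid_spacing = 0.2                      # metres between target points (from the text)
plane_w, plane_d = 60.0, 30.0           # assumed listening-plane size, metres

points = round(plane_w / grid_spacing) * round(plane_d / grid_spacing)

freqs = 24 * 10                         # 24 per octave over ten octaves = 240
boxes = 12                              # e.g. the 12 V8s of the demo array

total_predictions = points * freqs * boxes
print(f"{points:,} points x {freqs} frequencies x {boxes} boxes "
      f"= {total_predictions:,} predictions")
```

Even this modest hypothetical room yields well over a hundred million individual predictions, which puts the speed of the preset calculation into perspective.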
This mass of data is then stored in a matrix and used to create each Array Processing preset for the room. The AP will also give J, V or Y systems a standardized frequency response down to 140 Hz, at which point, necessarily, the laws of physics and the dimensions of the speakers regain the upper hand. These presets are calculated very, very fast. Stunning.
Extended conclusion (I promise, they don’t pay me by the word!)
We can’t deny it, a new era in pro-audio is beginning and, doubtlessly, other brands will soon follow the example of d&b, offering their solutions for improving coverage and frequency uniformity. Just like it would not even occur to a car manufacturer today to offer a car without ABS, to a camera manufacturer to offer cameras without DSP correction for the optics, or to an aircraft manufacturer to offer planes without electronic controls, it seems clear that electronic coverage assistance will invade our industry.
The advantage of what d&b offers is that it improves from the top: lines of well-built enclosures that are coupled and flown with care and that already sound good. We are therefore talking about a performance enhancement, not a prerequisite for implementation; this is where the strength of Array Processing lies: it is optional. During this first listening – which we will soon repeat under conditions closer to actual operation – we were won over, surprised and convinced of the usefulness of this option; and if the technicians are, the decision makers will soon be too, despite the extra cost required to deploy the AP.
Of prime importance among the positive aspects is the reduction of the naturally higher sound pressure near the system, which can cause fatigue and injury for part of the audience. This is timely, considering the current renegotiation of France’s 105 dBA decree. It will be possible to reduce levels near the stage without further disadvantaging the audience at the rear.
These positive aspects also include coverage that is patently more uniform and, especially, less energy wasted in the rest of the room, where it generates modes that reduce intelligibility, and fewer hot spots through better management of sound pressure levels.
It also finally allows real management of the effects related to temperature and humidity.
The negative aspects do exist but, above all, it is important to remember that – just as one tree does not make a forest – one listening session, as focused as it may be, is not enough to forge a final opinion on a complex process which has many variables. It may take time and some updates to reap all the benefits.
I would like to tip my hat one last time to d&b for their courageous decision and honesty. True, it was about letting us “hear” what the Array Processing does, but leaving the setting on “Glory 11” all the time, for example, highlighted quite well some of the sonic inflections that betray the presence of the algorithm.
Similarly, engaging the AP’s zone “exclusion” while having placed an audience of fine ears right where the sound should remain the same is a courageous step because, again, it allows one to hear precisely the price to be paid for cleaning up elsewhere. Others might have had us go listen to the audio where it disappears as if by magic… instead, Tim Frühwirth actually told us about the existing faults of the main/side coupling in the lower midrange before starting the demo. If this is not honesty, it certainly resembles it.
d&b still need to make some progress in order to achieve neutrality in the 800-1500 Hz band, as well as to recover full fluidity and naturalness in extreme cases of exclusion zones with heavy use of “Glory 11” – the most effective setting, but also the one with the greatest impact on performance. Some work also remains to be done on the overlap between two processed arrays, to avoid the current interference in the lower midrange.
In this regard, I look forward to listening to a complete system with left/right arrays, lateral reinforcements and, at least, some front-fills. It is obviously difficult to make processed line arrays coexist with fixed enclosures, like lip fills. Undoubtedly, strategies exist that involve locking or linking this or that part of the algorithm. As of now, delays have not yet been considered in the implementation of an AP system, but d&b is working on that. Finally, I hope to hear how the algorithm behaves outdoors, when the air masses themselves are in motion.
d&b is working hard, and what has been presented to us is certainly the first draft of an algorithm that will only continue to evolve and improve – although it is already well executed, and incredibly fast.
As if to demonstrate this came the announcement that future versions will also take into account what is happening behind the array, outside the coverage area, perhaps to avoid the user being surprised by the extra energy induced by the processing.
What is certain is that the Array Processing works, and it marks a milestone in the technological advancement of d&b. The benefits it brings considerably outweigh the few defects.
It will be up to you to make the best use of it, but leave the cape and the magic wand at the warehouse because here, more than ever, too much is definitely not a good thing. We tend to say that sound is always a compromise. With Array Processing, this is not entirely true – and that’s saying a lot.