
Array Processing. The democratization of sound by d&b, demonstrated at the Zenith

This article is also available in French.

Text and Photos: Ludovic Monchat


d&b has recently – and doubtless ahead of the others – unveiled its digital system for coverage and tonal uniformity, designed for use with its J, V and Y systems. This new feature is part of the ArrayCalc V8 prediction software and uses the DSP of the D80 and D20 amplifiers. It is also free.

Objective: to compensate for the natural attenuation of sound, to smooth the contour of sound pressure levels and the tonal response inherent to coupling between enclosures and, finally, to correct the effects of temperature and humidity on the propagation of the high end of the frequency spectrum.
We attended a demonstration organized by d&b at the Zenith, in Paris, for about a hundred French sound engineers and system engineers, plus the owners of rental companies: Frédéric André of Fa Music and Daniel Dollé of Silence, to name just a few.
Additionally, there were some friendly party-crashers of the caliber of Alex Maggi and XaXa Gendron, bringing along their expertly-trained ears.

From left to right: Tim Frühwirth of d&b Germany, who conducted the demo; Didier Lubin, known as Lulu, the head of d&b France; and, finally – with his back turned and his eternal smile – Daniel Dollé of the rental company Silence (commonly known in France by his nickname “Shitty”, which his anglophone friends probably avoid using).

A great group of people listened with great interest to the use of a DSP no longer merely as a simple correction tool – for filtering and protection of transducers via the usual presets – but as an active guidance and tonal uniformity system for the array as a whole. Other brands have already tried this, with varying levels of success. Now it’s d&b’s turn to offer this sort of assistance to its systems that are already available and operational.

Tim & his slides at the Zenith in Paris!

Tim Frühwirth – responsible for product support and promotion at d&b – during his very educational explanation of Array Processing: a full hour of theory on slides before unleashing the dogs.
The small FoH position, set up like the system at stage right, included a Midas console plus two PCs: one to control ArrayCalc and R1, and one to play sound clips. Recognizable by his white head overlooking the audience is Didier Lubin of d&b France; Eva, Xavier and Pierre also dedicated themselves to the success of this afternoon.

The long theoretical introduction, very well presented by Tim Frühwirth – responsible for product support and promotion at d&b – detailed a few priorities chosen by the German brand in order to achieve the desired effect. The keystone is indeed the D80, whose output power is now familiar to all of us, but perhaps not the processing power of its DSP, which goes far beyond the needs of a preset.
It is this unprecedented processing capacity that d&b exploits, and this explains the recent birth of the D20, with the same DSP engine as its predecessor. The operating principle of Array Processing requires each amplifier channel – thus, each DSP module ahead of it – to drive only a single enclosure. It therefore doubles the number of D80s required to power the same array.

The power required for this demo was provided by the D80, d&b’s secret weapon, along with the D20. It was also a great way for On-Off – who participated in the event – to show off their Touring racks.

But there is no longer a place for “too good”!

A graph showing the typical behavior of a line source facing a parterre followed by two tiers of bleachers. Two frequencies are indicated: 4 kHz, shown in bold, and 250 Hz, in a dashed line. Ideally the two should remain parallel lines, which is far from the case in reality. Note in particular the different accumulations of energy in the bass, depending on the curvature of the array. We also see the attenuation of level as a function of distance.

Now let’s see the effects – or, rather, improvements – on what already works without the help of DSP. As Tim said, somewhat humorously, we are all familiar with the attenuation of sound, which is 6 dB with each doubling of distance for a point source system and about 3 dB for a line source.
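For readers who want to see the arithmetic behind those rates of decay, here is a minimal, purely illustrative Python sketch – the function name, reference distance and printed table are ours, not something from d&b or ArrayCalc:

```python
import math

def spl_drop(distance_m, ref_m=1.0, db_per_doubling=6.0):
    """Attenuation in dB relative to ref_m, assuming a constant loss per
    doubling of distance: 6 dB for an ideal point source (spherical
    spreading), roughly 3 dB for an ideal line source."""
    return db_per_doubling * math.log2(distance_m / ref_m)

for d in (1, 2, 4, 8, 16, 32, 64):
    print(f"{d:>2} m   point source: -{spl_drop(d, 1.0, 6.0):4.1f} dB"
          f"   line source: -{spl_drop(d, 1.0, 3.0):4.1f} dB")
```

At 64 meters – roughly the depth of the Zenith – the idealized figures are about -36 dB for a point source and -18 dB for a line source, which is exactly the gap Array Processing sets out to manage.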

The first idea explored by d&b is therefore to fight against this reduction in order to give the farthest listeners a sound that, obviously, is slightly less powerful but spectrally compatible with the sound that is heard in the front rows.

A second priority of the research concerns smoothing the curves of sound pressure level and tonal response. In other words, to smooth out the bumps inherent, for example, to the coupling in the low end or to the proximity to enclosures in the HF, without wasting the power that has been selectively removed, but rather by donating it to where it is useful.

Another priority is concentrated on achieving more effective correction of the effects of temperature and humidity on the propagation of the high end.
As if that were not enough, the engineers at d&b have added the possibility to freely divide the coverage into three separate zones – like the pit, the orchestra and the balcony – and to treat them independently, with the possibility, for example, of “turning off” the sound in an unoccupied upper ring of bleachers.

The last aspect they have worked on is the possibility to automatically homogenize the output to a standardized response, for example, in order not to perceive too much difference between the main arrays and the lateral reinforcement hangs.
A correction is also incorporated into the Array Processing to account for diffraction effects generated between the enclosures themselves.

Schematic representation of the behavior of a line source, with the usual defects at two frequencies: 4 kHz (bold) and 250 Hz (dashed). There is an accumulation of energy at 250 Hz in two places and a clear surplus of 4 kHz on the floor.
Schematic representation of the behavior of a line source once these faults are corrected via Array Processing. The 4 kHz is now equivalent wherever one is in the listening area, and the same goes for the 250 Hz, whose peaks have disappeared. Magic!!

Once the excesses and deficiencies are identified, the algorithm performs a kind of leveling or, rather, moves the energy from where it is in surplus to the areas where it is lacking.



Finally, d&b is working to develop a new feature in its software that would allow the exclusion of a specific zone, while cautioning that what can be achieved depends inextricably on the nature of the array, its position in relation to the zone to be avoided, as well as its size and geometry. Such caution is a credit to the manufacturer’s honor, because the less you touch the sound, the better. Of course, Array Processing takes into account the subs of the J, V and Y series flown at the top of the arrays but, in order not to prolong the processing time, intervention is limited to alignment of the frequency response and phase, relative to the array of mid-high boxes.

The dance of the line array

To be able to offer such flexibility for intervention on the wavefront, d&b initially attempted the coup of dynamic temporal alignment of each module as a function of the frequency, an electronic method of virtually moving forward or backward one enclosure relative to another, but soon gave up, in the face of undesirable effects induced by bringing back into question the very principle of the line array.


A representation of the temporal shift performed by a bank of FIR filters that would be necessary to smooth the response at each of the three frequencies depicted on these three graphs – frequencies very close to one another. A purely mathematical approach. Suffice it to say that it is impossible to achieve and it will not work.


The solution was found by considering the array as a single whole, whose deformation resembles that of a living being, like a dancer or a swimmer during a dive. The column of enclosures therefore serves to ensure the optimization of the sound projection by shifting… without moving, through a combination of interdependent FIR and IIR filters from module to module and from frequency to frequency.

The principle of Array Processing does not work on each enclosure individually but considers them as a whole and works on the entire array. Take the case of the distribution of the frequency 200 Hz. Two large peaks degrade performance in the first third of the floor and in the stands.

In the low end, this “movement” is important since each sound source covers a large part of the listening area. At the top of the spectrum, by contrast, where each source covers only a very limited area, the algorithm changes its mode of operation.

This virtual ballet – a kind of invisible morphing that still manages to maintain coherence between all the elements of a line array – costs the user 5.9 ms of computation time, added to the 0.3 ms inherent in the D80 and D20 themselves, for a total of 6.2 ms of latency, or 2.15 meters. That is still acceptable, provided the microphones, console and effects already lined up in the signal chain haven’t made it too much.
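As a quick sanity check on those figures, a short Python snippet; the 343 m/s speed of sound is our assumption for air at around 20 °C, not a figure quoted by d&b:

```python
# Quick check of the latency figures quoted above.
ap_latency_ms = 5.9           # Array Processing computation time
amp_latency_ms = 0.3          # latency inherent to the D80 / D20
total_ms = ap_latency_ms + amp_latency_ms        # 6.2 ms in total
speed_of_sound_m_s = 343.0                       # assumed, ~20 °C air
print(f"{total_ms:.1f} ms of latency ≈ "
      f"{speed_of_sound_m_s * total_ms / 1000:.2f} m of air path")
# -> about 2.13 m; the 2.15 m quoted corresponds to slightly warmer air (~347 m/s)
```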

This is a representation of how the engineers at d&b have managed to “bring the array to life” in order to level the frequency response and smooth out defects, here at 229 Hz.
The virtual curvature of the line array that is necessary to evenly distribute a signal at 2 kHz throughout the room.

The enclosures chosen by d&b to provide the best possible overview of the capabilities of Array Processing: a main array of 12 V8s, plus 6 small Y8s for the side seats. Placed on the floor were 6 V-Subs in two stacks of three to complete the bottom end of the system.

Then comes the most important setting, the one that determines the amount of intervention and efficacy required of the DSP to obtain the desired result. The scale ranges from -11 to +11. Minus 11 corresponds to the most prudent choice, the most discreet and, especially, one where the action of the algorithms will have the least impact on headroom. The left side of the dial therefore bears the sweet name “Power”.

On the right, however, it intervenes more aggressively; it sacrifices more level but better attains the desired effect – at least that is how d&b sees it – so they’ve given it a name imbued with humor: “Glory”. Power and Glory. The choice of a scale that goes up to 11 derives from the amps of the band in the film “Spinal Tap”… in short, humor on all levels.

The central ‘0’ position of this slider is in no way a bypass of the Array Processing, but the central position of the effect. To stop the action of Array Processing, simply engage, via R1, the “bypass” checkbox found in all the amps. The extent of the intervention is selected during the creation of a preset, keeping an eye on the Realizer display. Any attempt to do something that resembles sorcery will be sanctioned with a red indication on the display, even before your ears bring you back to sanity.

Where the big sound is!

The beginning of the listening session. Note the projected labels to remind even the most distracted of us which preset is active in the D80 and which enclosures are in use. Here it is the standard V8 setting, with Array Processing inactive. This allowed us to get a feel for the attenuation with each doubling of distance from the system before the DSP steps in, even with a line array.

After a quick refreshment break, the demonstration proper commenced in complete objectivity and honesty. A main array of 12 V8s plus a side array of 6 Y8s were flown at stage right of the Zenith; two stacks of 3 V-Subs directly below the V array provided the low-end reinforcement.

The choice was dictated by the propensity of these two systems to show their “limits” in this room: the attenuation over distance for the V, given its modest size, and hence the differences between the V and the Y.

Let us be clear: both of these systems work very well and even better in a room well known for its very solid acoustics, and the “defects” of which we speak are those that would be produced by any enclosure of any brand in such circumstances.

Xavier Cousyn of d&b France, after passing the presentation over to Tim.

They played for us a male voice reading a text loop through the V array only, enabling us – at the cost of the rather amusing transhumance of 100 pairs of ears along the aisles of the Zenith – to get a good feel for the natural attenuation from the pit, almost directly underneath the array, to the top of the bleachers nearly 65 meters from the speakers.

In this first case, the preset and the equalization are standard and flat. Only the mechanical setup is designed to best distribute the energy between top and bottom, while trying to avoid points of excessive energy accumulation.

Xavier Cousyn during the presentation of the AP at the Zenith, in front of a panel of “decibel gunslingers” who sit in silent wonder… well, not for long.
Here we go: we all head, as one, to the last row, listening intently to the effects of distance on an unprocessed array as we progressively add meters between our ears and the enclosures.

Two prized customers together: XaXa Gendron (left) and Alex Maggi (right). With these two around, the monitors are surely perfect and your ears will be happy. Though not directly concerned with Array Processing, they still came to hear what happens… and to hit the buffet ;0)

Once everyone got to the top of the bleachers, the insertion of the Array Processing resulted in an obvious rise in level, brilliance, precision and, frankly, a perceived shortening of the distance separating us from the enclosures.
The preset for this demo was designed to attenuate only 2 dB for each doubling of distance, correct the absorption of the high end and smooth the frequency contour wherever one stands, and this is precisely what happens. We were given the chance to listen to the system many times in Traditional mode and with Array Processing, and we can only take off our hats to this new system. It works.
A quick walk through the stands confirmed this first impression. Wherever we went, the sound was consistent, accurate in the high end and made the V – a very well designed system and already close enough – resemble the J system even more.

Only a very small defect in the 800 Hz to 1.5 kHz range – call it a slight restraint, digging a little into this part of the spectrum – betrayed the insertion of Array Processing. That being said, to notice it, you had to go down into the pit, closer to the enclosures, even right up to the line marked on the floor with white gaffer tape, where the output of the first enclosure showed few side effects. Upon careful inspection – which is what we were there for – we could perceive slight variations in timbre depending on where we stood, but no more pronounced than the usual defects that occur from the coupling between modules in a line array. The balance is largely favorable.

All the guests climb back to the top of the bleachers to listen to the -2 dB preset and its beneficial effects in limiting the natural loss due to distance, homogenizing the response and correcting for the loss of high frequencies due to temperature and/or humidity.

Uhhh, would you like some nice guitar to go with the voice?

After this introductory audio material, we were offered a track by the late Chris Jones, “Roadhouses and Automobiles”, a signal more similar to what the speakers would normally reproduce and very interesting for the depth and tone of the voice and the cleanliness and richness of the guitar – in a word, a beautiful, very well chosen recording, but also perfect for highlighting the slightest defect. The positive impression we had on the voice alone remained the same with music.
The next test consisted of isolating an entire zone. In our case, the first zone was the top tier of seating at the Zenith, which is usually masked by sound-absorbing curtains during low-attendance shows, followed by a second test doing the same to the pit in order to simulate, for example, hosting a symphony.
They proposed for this purpose an attenuation of approximately 11 dB.

Tim Frühwirth facing his audience of sound and system engineers.

Again, it worked very well and the sound seemed to disappear, as if the HF band had been cut in a 3-way system, or at least very strongly attenuated. The crossover into the covered zones was free of significant defects and occurred across two or three rows of seats. The influence on the response in the areas where the SPL was not lowered, however, was somewhat greater.

We found the same defect in the midrange, and that made the sound a little more physiological, overprocessed and less fluid. Keep in mind that these differences were acceptable between bypass and preset; they were audible in the A/B comparison from the pit when the upper tribune was excluded, and vice versa. It’s interesting to note, also, the strange sensation when we would suddenly lose the upper tier and, above all, the sum of the reflections we are used to. This absence quickly became disconcertingly appealing, as the sound seemed cleaner. Of course, we are talking about an empty room, but the fact remains that the Array Processing algorithm offers cleaning properties that are quite new and very interesting.

The final test was perhaps the most difficult, and the one that left me a little disappointed. It consisted of giving an array of six little Y8s – equipped with two 8-inch woofers and a 1.4-inch driver – an output that would couple with that of the 12 V8s. Although configured with a passive crossover network like the Y, the V8 is nonetheless equipped with two 10-inch woofers, an 8-inch midrange and two 1.4-inch drivers – obviously a whole different animal. More specifically, this was about covering the usual loss of foundation, roundness and fullness of the “big” box when leaving its coverage zone and entering that of the lateral array.
Tim first made us listen to the transition using the standard presets and a careful phase setting. Then we repeated the walk around the Zenith with the Array Processing preset engaged. Again, the result was good – very good – and one could almost speak of the “mouse that roared”, as the six small enclosures put on their best V impression and proved the capability of Array Processing to give a common sound signature to the three series, J, V and Y. However, I was less convinced by the phase coupling between the two arrays, which seemed less successful than in Traditional mode. Certainly, the transition passed nicely and without any real break when leaving one system and entering the coverage of the other. Nevertheless, between 300 and 800 Hz, slight interference could be heard, and it took away part of the magic. Clearly, the benefits outweigh this fault, especially for untrained ears, but this is probably an area where the technicians at d&b could improve further.

Implementation

The implementation process is simple, provided one has perfectly mastered the “classic” operation of a line array, namely using the most precise possible knowledge of the dimensions of the room, the bleachers and the type of performance one is looking for, as this obviously affects the nature of the array – its length, its height and the splay angles between modules. It is fallacious – and d&b is quite clear on this point – to expect to make up for a botched mechanical setting or an insufficient number of enclosures by using Array Processing, especially since this powerful algorithm does not hesitate to eat up headroom as soon as one asks it to perform a miracle.

Realizer. Do not go into the red!

This is how the Array Processing window looks. Few requests, but a beefy effect. All is well; the Realizer display shows three yellow bars here, so it is an entirely congruent request.

Once ArrayCalc has been given all the information it needs and a new Array Processing window has been opened for the V8, the software presents a number of options and settings that the user can manipulate to create new “dynamic” presets that, in a way, replace the usual fixed ones. This step is surprisingly fast and uses a very well thought-out visual interface, which includes a sort of safeguard meter, called the “Realizer”, that one should always keep an eye on.
As long as it stays green, all is well and the requested intervention will have little effect on the overall performance. When it turns yellow, this announces that you are starting to exaggerate, though it is still quite acceptable. Orange indicates that the user is approaching foolishness, which the red will eventually confirm he has achieved, if his ears haven’t already. We all know the tendency we have to give a tweak over here and another over there; this meter bar is therefore quite an important idea.

This is a very, very simplified illustration of how Array Processing “ponders” the potential at each of the points spaced 20 cm from each other – an enormous resolution. Imagine the number of points in a large room, multiply it by the number of enclosures and then multiply the result by 240!!

To sum it up, the goal is to establish a flawless design, installation and mechanical setup that takes into account the desired target; if, and only if, these conditions are met, can we begin to use this processor to improve on what Mother Nature cannot do alone.

The analytical power of Array Processing is quite impressive: each target point, spaced 20 cm from its neighbors across the listening plane, is seen, in a way, as connected via an invisible thread to each of the enclosures that make up the array. A prediction calculation is performed as many times as necessary. Where it gets interesting is that, for each point – and there are many in a room when they are spaced 20 cm apart – Array Processing repeats the same calculation for each of 24 frequencies within each octave. Add a zero, since there are ten octaves, and that makes 240 predictions, multiplied by the number of enclosures hanging.
This mass of data is then stored in a matrix and used to create each Array Processing preset for the room. The AP will also give J, V or Y systems a standardized frequency response down to 140 Hz, at which point, necessarily, the laws of physics and the dimensions of the speakers regain the upper hand. The speed at which these presets are calculated is very, very fast. Stunning.
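To give a feel for the scale of that matrix, here is a minimal back-of-the-envelope Python sketch of the counting just described; the listening-plane area and the 12-box hang are hypothetical examples chosen by us, not figures from d&b:

```python
# Back-of-the-envelope count of the predictions described above.
# The listening-plane area and enclosure count are hypothetical examples.
freqs_per_octave = 24
octaves = 10
frequencies = freqs_per_octave * octaves          # 240 frequencies per point

point_spacing_m = 0.20                            # target points every 20 cm
listening_area_m2 = 60 * 40                       # hypothetical 60 m x 40 m listening plane
target_points = int(listening_area_m2 / point_spacing_m ** 2)   # 60,000 points

enclosures = 12                                   # e.g. a 12-box main hang
predictions = frequencies * target_points * enclosures
print(f"{frequencies} frequencies x {target_points:,} points x {enclosures} boxes "
      f"= {predictions:,} predictions")
```

Even with these modest example numbers, the count runs well into the hundreds of millions of predictions, which puts the speed of the preset calculation into perspective.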

Extended conclusion (I promise, they don’t pay me by the word!)

We can’t deny it, a new era in pro-audio is beginning and, doubtlessly, other brands will soon follow the example of d&b, offering their solutions for improving coverage and frequency uniformity. Just like it would not even occur to a car manufacturer today to offer a car without ABS, to a camera manufacturer to offer cameras without DSP correction for the optics, or to an aircraft manufacturer to offer planes without electronic controls, it seems clear that electronic coverage assistance will invade our industry.

The advantage of what d&b offers is that it supplements from the top: lines of well-built enclosures that are coupled and flown with care, and that already sound good. We are therefore talking about a performance enhancement and not a prerequisite for implementation; this is where the strength of Array Processing lies: its optional aspect. During this first listening – which we will soon repeat under conditions closer to actual operation – we were won over, surprised and, at the same time, convinced of the usefulness of this option; and if the technicians are, the decision makers soon will be too, despite the extra cost required to deploy AP.

One request among many others: here, to maintain a certain response, without drop-off, over the pit – say the first 15 meters of the system’s coverage – with the normal attenuation of 3 dB per doubling of distance elsewhere. The Processing Emphasis is set to “Glory 11”. The Realizer indicates “OK”.
Here is the result with an almost uniform response right up to about 14 kHz wherever the listener stands, in terms of distance from the system. Amazing.

Of prime importance among the positive aspects is the reduction of the naturally higher sound pressure near the system, which can cause fatigue and injury for part of the audience. This is timely, given the current renegotiation of France’s 105 dBA decree. It will be possible to reduce levels without further disadvantaging the audience at the rear.
These positive aspects also include coverage that is patently more uniform and, especially, less energy wasted in the rest of the room, where it generates modes that reduce intelligibility, and fewer hot spots through better management of sound pressure levels.
It also finally allows real management of the effects related to temperature and humidity.

A more complex request to meet. This involves letting the level drop by 3 dB for every doubling of distance in the pit and in the upper tier, while gouging a -6 dB hole in the lower stands. The differentiation between the three areas is defined using the three zones – Front, Central and Rear – and the gain and distance settings, with the 0-point located at the system. The choice of “Glory 11” causes the three orange bars in the Realizer display, indicating the strain that this preset puts on the processor and the risk of a loss of quality.

The negative aspects do exist but, above all, it is important to remember that – just as one tree does not make a forest – one listening session, as focused as it may be, is not enough to forge a final opinion on a complex process which has many variables. It may take time and some updates to reap all the benefits.
I would like to tip my hat one last time to d&b for their courageous decision and honesty. True, it was about letting us “hear” what the Array Processing does, but leaving the setting on “Glory 11” all the time, for example, highlighted quite well some of the sonic inflections that betray the presence of the algorithm.

The result of the request to attenuate by 6 dB in the first tier, while leaving the rest without intervention. Note that, in fact, even when nothing specific is asked of it, Array Processing smoothes the response, homogenizes the sound signature of the different models and raises the level of the high end at the farthest throws. In the upper plot, the response predictions for the short throw (blue), the medium distance (red) and the longest throw (green) show the drop-off at the top of the spectrum at long range and, on the contrary, the excess high end nearest the system, among other defects common to all systems. In the lower plot, the overall improvement needs no comment. Note, however, where the level of the “red” zone sits: 6 dB lower than the green (farthest) zone. Some defects at 200 Hz demonstrate the difficulty the algorithm has in performing this action, which, however, seems to be one of its characteristics when the slider is placed on “Glory 11”.
The same attenuation request, but this time the Processing Emphasis parameter is set to “Power 11”; that is to say, a less thorough action that gives up less headroom. The point of inflection goes back to 400 Hz, along with a less tortured contour. It is also less linear and there is a discrepancy in the high end between the zones; this shows the relative freedom allowed by the algorithm. Meanwhile, the central zone remains significantly attenuated, as planned.

Similarly, engaging the AP’s zone “exclusion” while having placed its audience of fine ears right where the sound should remain the same is a courageous step because, again, it allows one to hear precisely the price to be paid for cleaning up elsewhere. Others might have proposed that we go listen where the audio disappears, as if by magic… instead, Tim Frühwirth actually told us about the existing faults of the main/side coupling in the lower midrange before starting the demo. If this is not honesty, it certainly resembles it.

All the guests came back down to listen to the preset -2 dB close to the array, after observing the positive effects at long distance. Remember: this is a correction to limit the natural losses due to the distance, to ensure uniform performance in the usual critical zones and to correct for the high-end loss that results from temperature and/or humidity.

d&b still need to make some progress in order to achieve neutrality in the 800-1500 Hz band, as well as to recover full fluidity and naturalness in extreme cases of exclusion zones with heavy use of “Glory 11” – the most effective setting, but also the one with the greatest impact on performance. Some work also remains to be done on the overlap between two processed arrays, to avoid the current interference in the lower midrange.
In this regard, I look forward to listening to a complete system with left/right arrays, lateral reinforcements and, at least, some front fills. It is obviously difficult to make line arrays that reshape the sound in this way coexist with fixed enclosures, like lip fills. Undoubtedly, strategies exist that involve locking or linking this or that part of the algorithm. As of now, delays have not yet been considered in the implementation of an AP system, but d&b is working on that. Finally, I hope to hear how the algorithm behaves outdoors, when the air masses themselves are in motion.

The alignment of the responses of the three systems that can be processed by Array Processing, shown on this plot – along with, of course, the pressure each can generate and the bass extension, enclosure by enclosure.

d&b is working hard, and what has been presented to us is certainly the first draft of an algorithm that will only continue to evolve and improve – although it is already well done, as it is incredibly fast.
As if to demonstrate this, it was announced that future versions will also take into account what happens behind the array, outside the coverage area, perhaps to avoid surprising the user with the extra energy induced by the processing.

What is certain is that Array Processing works, and it marks a milestone in the technological advancement of d&b. The benefits it brings considerably outweigh the few defects.

It will be up to you to make the best use of it, but leave the cape and the magic wand at the warehouse because, here more than ever, too much is definitely not a good thing. We tend to say that sound is always a compromise. With Array Processing, this is not entirely true anymore, and that’s saying a lot.

 
