Qupe-Eurovision challengers
#11
(15-09-2016, 04:55 PM)The_Metallurgist Wrote: when i set up the project, rough balanced, no processing, i got -12LUFS PL. that's a pathetic 12dB of dynamic left before we even touch it. this sort of value would normally be seen AFTER MASTERING....

Well, that's how it should be. The recording engineer sets each individual channel's gain to the nominal operating level of the recording equipment, channel by channel. Emphasis on the words 'individual' and 'per channel'. In high-quality equipment this operating level is usually (almost always) +4 dBu = 0 VU = 1.228 V RMS.
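
To put numbers on that reference level: dBu is referenced to 0.775 V RMS, so the conversion is a one-liner. A quick Python sketch of my own, not something from the original post:

```python
def dbu_to_volts(dbu):
    """Convert a dBu level to volts RMS (0 dBu = 0.775 V RMS)."""
    return 0.775 * 10 ** (dbu / 20)

print(round(dbu_to_volts(4), 3))  # 1.228 -- the +4 dBu nominal level
print(dbu_to_volts(0))            # 0.775
```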

When a snare hit, for example, produces this nominal RMS voltage, it will also momentarily produce a much higher peak level, let's say 18 dB higher. So the equipment needs to handle levels well above nominal, momentarily and without distortion. This margin above the nominal level is called headroom. Most high-quality equipment can handle peak levels of at least +24 dBu.

Now, let's enter the digital domain. High-quality converters are also calibrated (or can be calibrated) to a certain nominal, or reference, operating level. For 24-bit converters this is usually -24, -20 or -18 dBFS RMS, depending on the standard in use. Let's assume we use -24 dBFS RMS. The snare hit comes in at +4 dBu (RMS) and translates to -24 dBFS RMS at the converter. It peaks 18 dB higher, so the peak translates to -6 dBFS. Perfect. No clipping, and the payload level of the signal was nominal. Pop the champagne. This process is then applied to every individual channel, of course.
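
The analog-to-digital alignment above is just a constant offset. A small sketch (my own illustration of the post's assumed +4 dBu = -24 dBFS calibration, not an official formula):

```python
def dbu_to_dbfs(level_dbu, cal_dbu=4.0, cal_dbfs=-24.0):
    """Map an analog level (dBu) to a digital level (dBFS), given that the
    converter is aligned so that cal_dbu lands at cal_dbfs."""
    return level_dbu + (cal_dbfs - cal_dbu)

print(dbu_to_dbfs(4.0))         # -24.0: the nominal RMS level
print(dbu_to_dbfs(4.0 + 18.0))  # -6.0: the snare peak, 18 dB above nominal
```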

Now, if these tracks were roughly summed together, they would probably clip and sound horrible. Just as they should, because they are individual tracks recorded at nominal level.

The mixing engineer wants to mix the tracks in the box (in the DAW on her home computer). She wants to produce a 24-bit, EBU R128-compliant -23 LUFS stereo mix, so she has to calibrate her monitoring system accordingly. The EBU recommends that a -18 dBFS RMS pink-noise test signal should produce 82 dB(A) SPL per speaker, a recommendation based on rigorous empirical testing. There are other recommendations as well, though.

After the calibration she test-listens to some records. Her head explodes, as they sound way too loud. They are 16-bit records mastered for 16-bit CDs and MP3s, at somewhere between -12 and -9 dBFS RMS, because in the 16-bit world the nominal level is -10 dBFS RMS for -10 dBV = 0.316 V RMS. At her new calibration level they play at well over 90 dB SPL. She realizes her 24-bit production environment's calibration is simply incompatible with the 16-bit consumer listening environment, by definition. Just the way it's supposed to be, and she has to find a way to level-match them. The quickest and easiest way is to take the level down roughly 10 dB in her favourite media player. She could also buy a cheap separate USB sound card for the operating system's audio, use the 24-bit production system (DAW, audio interface) solely for production, and switch between the two with a separate monitoring controller with level-matched +4 and -10 inputs, or whatever. There are millions of ways to achieve this; she figures out the one that suits her needs best, and now she can A/B-compare her 24-bit production system with 16-bit consumer records, level matched.
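
The level-matching itself is simple arithmetic. Treating the records' RMS figures as roughly comparable to LUFS (a loose assumption of mine; the two scales measure differently), the trim she needs is just the difference:

```python
def level_match_trim_db(record_level_db, mix_level_db=-23.0):
    """Gain (dB) to apply to a loud consumer record so it plays back at
    roughly the same loudness as a -23 LUFS production mix."""
    return mix_level_db - record_level_db

print(level_match_trim_db(-12.0))  # -11.0: turn the record down ~11 dB
print(level_match_trim_db(-9.0))   # -14.0
```

Hence the "roughly 10 dB" rule of thumb in the paragraph above.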

She loads the tracks into her DAW and hits the space bar. Once again her head explodes, as they burst out way too loud. But how can this be? Were they recorded too loud, or hot? No: each individual track is already at nominal level, so their overall level has to come down so that she can produce a mixDOWN, a sum of these tracks, which is again at the nominal level by itself.

Then why not record the individual tracks at even lower gain levels than -24 dBFS RMS, so that the sum of the tracks 'automatically' produces a nominal-level mix? Because the noise floor per channel is (practically) constant. In subtractive mixing, where you only ever pull faders down from their original level, the only thing being subtracted is that noise floor, which of course is the goal. And of course each recording channel's preamp, possible EQ, compressor and so on, and the converter, are designed to provide their best frequency and phase (or transient) response at the nominal operating level. From a technical point of view, this is the recording engineer's 'one and only job': to set the gain levels and equipment to match the predefined reference level, or to go as hot as possible without clipping the tape or converter input if no reference level is predefined. The audible 'sound' is of course a subjective matter of microphone choices, placement, possible EQ and compression, the room, and so on.
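
Why nominal-level tracks must come down at the mix bus falls out of the summing math: for uncorrelated signals, powers add, so the summed level rises by 10*log10(n). A back-of-the-envelope sketch (my own; real tracks are partially correlated, so this is only an approximation):

```python
import math

def summed_rms_dbfs(track_dbfs, n_tracks):
    """Approximate RMS of n uncorrelated tracks, each at track_dbfs:
    powers add, so the level rises by 10*log10(n)."""
    return track_dbfs + 10 * math.log10(n_tracks)

# 24 tracks, each at the -24 dBFS RMS nominal level:
print(round(summed_rms_dbfs(-24.0, 24), 1))  # -10.2 dBFS RMS
# ...and with 18 dB crest-factor peaks on top, the sum would clip hard.
```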

After thinking things through she sets the monitoring level back to the EBU recommendation, loads the tracks into the DAW once again, pulls all the faders down, and then hits the space bar. Finally taking advantage of her high-headroom 24-bit audio gear and the DAW's 64-bit floating-point mixing engine, she gradually opens the faders and sets the panning until the rough mix starts to sound right. It goes fairly easily, as she isn't constantly watching the level meters. She has faith in the smart engineers who designed the gear, because she knows she's working 'by the book'. The mix starts to sound good, and she checks the levels: -26 LUFS. It should be -23 LUFS. She takes the master up by 3 dB, and now it's at the magical (in Europe) -23 LUFS. Hurrah! But her ears start to hurt, as the audible level is now too loud. Once again she thinks it over before going on the Internet to rant about loudness, and simply takes her monitoring system down by the same 3 dB. There was nothing wrong with the system; she just had to fine-tune her own monitoring level to fit her personal preference, the current material, the room acoustics and so on.

Now the rough mix is more or less ready, and she compares it (level matched) to the reference track for the first time, which happens to be an international commercial contemporary pop song. It's a 'return to earth' call. The pop hit just sounds better in every aspect; it feels like it has more of everything, yet is very clean and pleasing. Even the high-frequency transients sound better, although the metering shows it has only half the microdynamics. But why? Because it was done by the best in the field, who probably applied every established mixing technique developed since the 1950s to make it sound as good as possible to the human ear. Her mix just sounds honky and weak. Off to the Internet to rant about loudness and bash the recording engineer! No.

She returns to her mix and starts to apply all those dirty tricks from the YouTube tutorials. Smiley EQ; transient-enhancing, serial, parallel and levelling compression; limiting; saturation; side-chaining; emphasis processing; delays; reverbs; stereo imaging; creative effects; subgrouping; automation and so on. She does all the hard work that requires action and subjective decisions. Now she thinks she has the best mix ever, still working at the -23 LUFS level. She compares it to the pop hit, and it survives the test. At least it's not 'that' bad. In the metering she sees that the new mix actually has less microdynamics, but, to be honest, it still sounds better. That's because the material just doesn't need more than 10 dB of headroom. Our human brain will 'invent' the missing parts when the mix is right, giving emphasis to the transients and so on.

Now she wants to 'publish', or in other words, put the track on the Internet. The 24-bit mixdown will sound quiet, and the audio quality will suffer greatly from encoding, unless she does something. Off to the Internet to rant about loudness, and that everyone in the world should calibrate their $200 laptops, Logitech desktop speakers, phones, earbuds, car stereos and so on to -23 LUFS, even though they are only 16-bit systems. No. She finds out about a process called mastering, or technically premastering, in which the program material is conformed to the publishing medium. She already knows that in the 16-bit domain a widely used level is -10 dBFS RMS for -10 dBV, and that in Europe there are regulations requiring portable and handheld electronic devices to operate at such low voltages that levels much lower than this won't produce decent listening levels on those devices. So she puts a limiter on the master bus and brings the average level up to around -10 dBFS RMS, hoping it won't affect the sound too much. And it does not, as her mix is already solid enough.

She sees in some video on the Internet that she should leave some headroom for something called inter-sample peaks. She uses an inter-sample (true) peak meter and discovers that her 'true' peak levels go a little over 0 dBFS. She tests the MP3- or AAC-encoded file, and the peaks go even higher on the meter. This is impossible! Nothing can go over 0!!1 Well, these peak readings are predictions of what will happen to the audio in the D/A conversion during playback, and such peaks can occur, but almost all consumer D/A chips, even the cheapest, are designed and built to handle inter-sample peak levels cleanly up to about +2 dBFS, and her master is just a hint over zero, so she doesn't have to worry, even though the readings look a bit scary on the meter. Some ridiculously loud metal and EDM masters can easily produce inter-sample peaks higher than this.
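
Where those over-0 readings come from can be demonstrated by oversampling, which is roughly what a true-peak meter does internally. A sketch of my own using NumPy FFT zero-padding as the band-limited interpolator: a sine at a quarter of the sample rate with a 45-degree phase offset has sample peaks at about -3 dBFS, but its reconstructed waveform peaks at 0 dBFS.

```python
import numpy as np

def true_peak(x, oversample=4):
    """Estimate the inter-sample peak by band-limited (FFT zero-pad)
    oversampling of the whole signal."""
    n = len(x)
    spec = np.fft.rfft(x)
    padded = np.zeros(oversample * n // 2 + 1, dtype=complex)
    padded[:len(spec)] = spec
    y = np.fft.irfft(padded, n=oversample * n) * oversample
    return np.max(np.abs(y))

n = 4800
t = np.arange(n)
x = np.sin(2 * np.pi * t / 4 + np.pi / 4)  # fs/4 sine, 45-degree phase

print(round(float(np.max(np.abs(x))), 3))  # 0.707: sample peak, ~-3 dBFS
print(round(float(true_peak(x)), 3))       # ~1.0: the inter-sample peak
```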

Now her master track is on YouTube, Spotify and iTunes. And it plays at an audibly different level on each service. That's because all of these services use their own internal loudness normalization, based on ITU-R BS.1770. The goal, of course, is to match the audible level of the tracks within the service, regardless of their RMS or peak levels. iTunes uses basically ReplayGain +/- 0 dB (-15 LUFS short term), YouTube RG +4.5 (-10.5 LUFS-S) and Spotify RG +6 (-9 LUFS-S), Spotify being the loudest. Going over the Spotify level won't, of course, buy any extra 'loudness impact' on Spotify, and a Spotify-optimized master will play 6 dB quieter on iTunes, and so on. The main point here is that the production shouldn't pay too much attention to LUFS metering at the mastering stage, as long as the result sounds good and suits the publishing service, or is close enough. The idea of the scale is to give broadcasters and streaming services a way to match level differences in delivered audio material more accurately. One can of course optimize masters for these platforms for that 'extra' that 'as loud as possible without going over' might achieve. The end user ultimately decides the listening level.
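
Loudness normalization on these services amounts to a playback gain derived from the measured loudness. Using the 2016-era targets quoted above as assumed inputs (they have changed since):

```python
def playback_gain_db(master_lufs, service_target_lufs):
    """Gain a loudness-normalizing service applies at playback so that
    every track lands on the service's loudness target."""
    return service_target_lufs - master_lufs

# Targets as quoted in the post:
targets = {"iTunes": -15.0, "YouTube": -10.5, "Spotify": -9.0}
for service, target in sorted(targets.items()):
    # How each service would turn down a Spotify-optimized -9 LUFS master:
    print(service, playback_gain_db(-9.0, target))
```

This reproduces the "6 dB quieter on iTunes" figure from the paragraph above.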

The only production situation where the use of the LUFS scale is 'mandatory' is live production, where the broadcaster expects the incoming stream to be at, for example, -23 LUFS, and then processes it further to, say, -9 dBFS RMS for FM radio, usually with hardware preset for -23 LUFS program input. LUFS meters aren't of any use during recording either. The -23 or -24 LUFS standards can be useful when mixing music, but only if the mixing engineer has to explicitly deliver material at such a level.

It can be good practice to mix at -23 LUFS, for example, but by no means does this mean that the mix has to have excessive microdynamics, or use every available decibel of the headroom. By definition, the purpose of the generous headroom in a 24-bit production system is to provide a fail-safe against clipping, not to serve as the main register for the top snare mic.

About the raw tracks: I hear very little processing, if any, in them. The snare, for example, consistently has a good amount of peak over RMS, around 15-20 dB, and is nowhere near clipping. On the drum tracks there are some artifacts that sound like sloppy editing with Logic Pro's Flex Time slicing algorithm, but this is just a hunch.

About that overly loud supposed CLA mix: I think it's actually quite impressive that it still sounds like music, and, after the initial shock and turning the volume down, it was quite good. I listened to the latest Muse album on consumer gear, and only after listening to it in the studio did I realize that it is loud as hell, around -6 LUFS, and still, to my ears, very good.

I listened to your mix a couple of times. Generally it's good; my comments are just about the dynamics. I set the initial listening level at the start, and gradually had to turn the volume down as your mix builds energy. Your first verse is around -20 LUFS short term, and the final chorus hits a little over -12 LUFS. I think the difference is just too great; this is no longer in the category of micro- or macrodynamics, it's just a volume change.

Also, during the first verse you're averaging around -20 dBFS and your snare is peaking somewhere between -5 and -2 dBFS, which would indicate that there is very little compression, if any, in the source track or your mix during that part, unless you purposely compressed the original snare down and then used a transient enhancer to bring the transient back, which I don't think you've done. So I wouldn't say the source tracks are overly processed. In my opinion they are mostly very clean, with no EQ or compression.
Reply
#12
(15-09-2016, 04:55 PM)The_Metallurgist Wrote: garbage in, garbage out. that sounds critical, not very complementary, but that's what we have here - it's hypercompressed stock.

<snip>

i will also add, that i wouldn't mind betting the compression they deployed had completely the wrong parameters - nothing sounded good to me despite being 24bit? you don't upload songs for others to mix that is over compressed like this, or contains automation for that matter, unless you're ignorant of the mixing process, or simply bone idle and can't be arsed to switch it off before the print.

<snip>

there's a post you should read which goes to show how much ignorance actually exists here, and how loudness impresses, especially when it has a name to it (suckers!):

Whoa -- time out! I'm sorry, but I feel I have to make it clear that I find the tone of your post unacceptable. I'm happy for you to have strong views about mixing issues such as dynamic range, and I fully defend your right to express them, whether I agree with them or not. What I take exception to is your choice of language, because it risks undermining two things fundamental to the Discussion Zone.

1) We are all blessed to be allowed, free of charge (free of email registration, even!), to download and work with multitracks from many talented musicians. If I were a member of Qupe, the negativity of your comments would certainly make me think twice about contributing to the multitrack library. Whatever you think about the 'quality' of any multitrack in the library, each one offers something that we can all learn from, so there's no sense in any of us antagonising those musicians who kindly agree to support this educational resource. If you feel strongly that less processed tracks would help you mix this production more successfully, then try addressing a polite request directly to the band via their web site.

2) Being personally disparaging and dismissive of other DZ members is totally contrary to the spirit of this forum, and I have very little patience for it. I don't care how misguided you think they are. As far as I'm concerned, every user in this forum should be entitled to a basic level of civility from other users that precludes their being labelled ignorant suckers. It's tough opening up one's own mixing work to feedback, and I created this forum to provide a supportive environment for that. There is plenty of room on the Internet for the 'school of hard knocks' -- just not here.

I have said to you on a number of other occasions that I respect your opinions (and your right to them), but I also ask that you respect the intended spirit of this learning environment and maintain a positive and supportive tone.

Many thanks in advance for your understanding,

Mike Senior
(Discussion Zone Admin)
#13
(16-09-2016, 06:16 AM)kapu Wrote: Now the rough mix is somewhat ready and she compares it (level matched) to the reference track for the first time, which happens to be an international commercial contemporary pop song. It's a 'return to earth' call....She returns to her mix, and starts to apply all those dirty tricks from those YouTube tutorials. Smiley eq, transient enhancing compression, serial, parallel, levelling compression, limiting, saturation, side chaining, emphasis processing, delays, reverbs, stereo imaging, creative effects, subgrouping, automation etc etc, she does all the hard work that requires actions and subjective decisions. Now she thinks she has the best mix ever, and still working at the -23 LUFS level. And she compares it to the pop hit, and it endures the test.

Kapu, that was an excellent article ... it captures the real world workflow perfectly.

One aspect often missed in the search for "authenticity" is that many of the processing "tricks" are aimed at making the mix seem closer to live.

A recorded track removes two degrees of freedom: (a) performance dynamics (most listening environments have a noise floor that is, if one is lucky, <25 dB below RMS); and (b) most playback environments are frequency-constrained (lows and/or highs disappear) -- yet the mix must still sound good.

All of the most common mix techniques -- compression, distortion, transient enhancement -- suggest more dynamic range than is present (for example, a particular vocal line distorting, suggesting how loudly it was originally sung). Moving true lows into the low-mids exploits the missing-fundamental effect; ducking/pumping exploits how the ear reacts to loud transients at a concert or club. Pushing energy into the 5 kHz region suggests that there is more happening above it than many systems might reproduce.
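
The missing-fundamental trick mentioned above is easy to verify numerically: a tone built only from harmonics 2f-5f contains no spectral energy at f at all, yet listeners still perceive the pitch at f. A small NumPy sketch of my own:

```python
import numpy as np

n, k0 = 1000, 10          # signal length and the fundamental's FFT bin
t = np.arange(n)
# Build the tone from harmonics 2..5 only -- the fundamental is absent:
x = sum(np.sin(2 * np.pi * k0 * h * t / n) for h in range(2, 6))

mags = np.abs(np.fft.rfft(x)) / (n / 2)   # normalized bin magnitudes
print(round(float(mags[k0]), 6))      # 0.0: no energy at the fundamental
print(round(float(mags[2 * k0]), 6))  # 1.0: full energy at the 2nd harmonic
```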

I think we find these treatments pleasing precisely because they emulate, at lower volume, how we hear sounds at high volume. They are familiar auditory responses.

Anyway! This was meant to be a brief comment. My point is that we shouldn't assume mix processing is there to take away from the performance; rather, it can act to better preserve the essential musical elements despite the constraints inherent in the transmission medium.

I'm always in awe of how the great mixing engineers assemble mixes that translate perfectly across all shapes and sizes of playback systems. Definitely an art, rather than a science, there.
All sound is a distortion of silence / soundcloud.com/jeffd42
#14
(16-09-2016, 06:28 PM)jeffd42 Wrote:
(16-09-2016, 06:16 AM)kapu Wrote: Now the rough mix is somewhat ready and she compares it (level matched) to the reference track for the first time, which happens to be an international commercial contemporary pop song. It's a 'return to earth' call....She returns to her mix, and starts to apply all those dirty tricks from those YouTube tutorials. Smiley eq, transient enhancing compression, serial, parallel, levelling compression, limiting, saturation, side chaining, emphasis processing, delays, reverbs, stereo imaging, creative effects, subgrouping, automation etc etc, she does all the hard work that requires actions and subjective decisions. Now she thinks she has the best mix ever, and still working at the -23 LUFS level. And she compares it to the pop hit, and it endures the test.

Kapu, that was an excellent article ... it captures the real world workflow perfectly.

I agree with jeffd42 -- thanks, Kapu, for an excellent post; I learned quite a lot.
If I had the needed technical knowledge, skill, talent, experience, ability to hold a pen, time etc., I would have written it for you. But as I have none of those, I'm just happy to preorder your forthcoming book. Where can I place my order?

#15
It's certainly interesting, but at the same time I find it quite sad if this is indeed the commonplace process. It sounds more like an instruction manual for tuning a race car than for producing music. Surely the pros are intuitive enough to take a piece of music as far as they can based on their own ideas and creative processes, without being puppets of the industry. Nothing wrong with being the trendsetter, is there? As for all the numbers and technical jargon, how important are they really if we get the sound we're after? Isn't that the job of the mastering engineer? I guess this is why I only mix music for fun and enjoyment and not for a living. Based on my skill set and practices, I'm not likely to any time soon.

Thanks for sharing this, kapu. I mightn't sleep tonight now that I know this. :)

Dave
#16
Okay, one doesn't absolutely have to obey the 'rules' to get things done. But audio and music production can be described as a process, and as in every process, there is good practice and bad practice.

Behind all of this is a huge amount of science, dating back as far as ancient Greece. Based on this (constantly evolving) science, all sorts of audio-production-related organizations (ITU, AES, EBU etc.) publish recommendations (papers) on how the science should be applied in the real world. Before that, of course, the wise elders of these guilds regularly have fierce debates about this stuff -- at an AES convention, for example. Audio engineers then adopt these recommendations into their workflows, and manufacturers design and build equipment to fit them. It just makes it easier for everyone, even the 'not scientists', to get the most out of their equipment and the best possible (technical) quality with the tools available.

There is of course an ideal solution for every individual situation, but following the 'standards' gets us very close to the optimum to begin with. In fact, in real life, the only thing one has to do is decide which standard to follow, and then possibly set up room correction for the monitor speakers. Modern auto-calibrating digital speaker systems even do this for you. Then you can get 'creative'. Or you can get creative right away and hope it will sound good.

Eventually I calibrated my speakers according to EBU R128 and set up room correction for a nearly flat frequency response. I had to spend some money on tech stuff, like a calibration microphone and an SPL meter, and this felt just awful. This meant that material at -23 LUFS-I, or music peaking at -18 LUFS-S, should be optimal with these settings. But for me this was too loud, and I took my speakers down a few dB. Later I found out this recommendation is primarily for large main-field monitors in big, acoustically 'perfect' control rooms, and that the EBU recommends something like 3-6 dB lower for near- and mid-field monitors in smaller rooms, to be set by ear. Just as I did. Damn, they have everything covered. So everything was perfect. Except that almost all contemporary records sounded too sharp, or plainly just bad, to my ear, because they were mastered to sound bright and sparkling on consumer-level equipment. Once again I had to train my ear to get used to the treble, and to start mixing at lower levels, because I wanted to make mixes that sound good in the ears and on the systems of other people, so that I could some day get paid. I still think commercial masters sound horrible on my system, but if my mix feels all right, it can be mastered to match that commercial frequency response. I realize this is a property of my system, not a fault of the commercial masters, as they are made for consumer-level audio equipment. I'm getting used to it, and in other people's opinion my stuff gets better day by day. At least I hope so. But that's because I'm conforming to the sound generally accepted as good, not the other way around. And I can still enjoy those dynamic 24-bit 'HD tracks' on my system in peace. Nowadays I only use the calibration level when dialing in initial tones for soloed individual channels, and then take the master monitor level down a further 6-12 dB for the actual mixing.
At this very low level it is easier for me to judge levels, panning, possible resonance build-ups and so on, and my ear doesn't get 'bored', which in my case usually leads to some weird EQ decisions. This method does tend to leave my reverbs at too high a level, though, as many here have told me in feedback.

I don't think there is anything wrong with going solely by ear, but I wasn't happy with my results, and starting to learn more about the tech side was a giant leap in a better direction. There are of course stories of those legendary American hit producers and mixing engineers who just make pure gold by ear, but after reading some books and interviews it turns out that they tend to know a lot of the pure tech stuff, or hired separate engineers to set up their systems. Knowledge of this stuff, or a correct setup, doesn't automatically make mixes great; it only gives a good starting point and saves a lot of time and trouble. As jeffd42 stated, the tracks usually need treatment to turn them into a better-sounding mix, even if the individual tracks sound good to begin with.

But back to the original point about Qupe's tracks. Level- and 'tech'-wise I think they're at OK levels, and could even have been recorded a bit hotter. The tone or 'sound' of the tracks is of course a subjective matter, and there are some pops and artefacts due to editing, but as mixing engineers our job here is to make the most of them. Honestly, these tracks represent the real-life situation very closely. I usually get much, much worse material than this. In real life, almost always at least some tracks have wrong names, there might be three different instruments edited onto the same track, and so on. Some tracks are missing, and no one has heard from the original recording engineer in two weeks. If you ask me, this site teaches us to get used to initial material that is too good. And for free. ^_^
#17
(17-09-2016, 12:41 PM)kapu Wrote: This site teaches us to get used to too good initial material, if you ask me. And for free. ^_^

Hah! That's the first time I've had that criticism! :) There are plenty of tricky recordings if you search -- some of the Mix Rescue ones are certainly challenging, I can vouch for that...

#18
Digging this mix!
#19
So I've downloaded the mix, loaded it into Reaper, and compared it to Olli's mix. When both versions are level matched I prefer Olli's: his version is more balanced frequency-wise, has more punch, and overall sounds more like a rock mix than Metallurgist's. Sure, the Metallurgist mix has more dynamic range, has more effects going on throughout the track, and its builds are more exaggerated, but overall there are minimal extremes in its frequency balance; it's mainly just midrange.

In the Metallurgist mix the intro guitar is being drowned out by the audio watermark. It's really distracting, and it also gives me the impression that he thinks his mix is going to be stolen. You can do whatever you want, but PLEASE REFRAIN FROM USING THE WATERMARK; it really is unnecessary and hints at distrust of the honesty of the other members of this site (sorry for the mini rant, on with my comments :D). The piano in the final chorus is quite overbearing and competes with the vocals for space in the mix; try automating it down so that it sits under the vocal but over the guitars, and work from there.

I agree with Kapu's statement that the loudness throughout is actually unnecessary; the final chorus has no extra punch, just the initial shock of being hit with a bunch of volume, which is quelled by turning down your volume knob. I will say, however, that the effects do add to the mix, and the reverb on the vocals is a nice touch. Still, I prefer Olli's mix.

Cheers,
Dcp
"Mixing is way more art and soul than science. We don't really know what we're doing. We do it because we love music! It's the love of music first." -- Eddie Kramer

Gear list: Focusrite Scarlett 18i20, Mbox Mini w/Pro Tools Express, Reaper, Various plugins, AKG K240 MKii, Audio Technica ATH M50x, Yorkville YSM 6
#20
Mike, as a raw newbie here I wanted to thank you for your great efforts to educate and inform aspiring mixers such as myself. I'm enjoying the experience of trying something new in a field I've been involved with all my life as a musician on the other side of the glass.
Many thanks once more.