James May: 'Hold The Line' kapu mix
#1
Did a new mix. Tried to preserve the original performance as much as I could. A few automation points here and there.


Attachment: james_may-__on_the_line-kapu-mix.m4a (Download: 10.38 MB)


Reply
#2
Sounds like a record: smooth, balanced and natural. I like the fact that the mixer has not forced his own imprint on this mix; it sounds like the artist's vision, with a natural focus on the vox.

The moving cello in the intro feels like an added effect.
Reply
#3
(22-09-2016, 07:32 AM)Olli H Wrote: Sounds like a record: smooth, balanced and natural. I like the fact that the mixer has not forced his own imprint on this mix; it sounds like the artist's vision, with a natural focus on the vox.

The moving cello in the intro feels like an added effect.

Thanks. The cello 'move' in the intro just comes from the performance, as the cellos are panned off-centre; the one on the right starts and the one on the left carries on. But yes, thinking about it now, it might sound a bit artificial, or 'too good' and inorganic.

I started working with the broadcast and film post-production approach and used the dialogue, or in this case the lead vocal, as an 'anchor' around which everything else 'revolves', based on the idea that we humans, for some reason, like to hear the human voice at a pretty much constant level. There is quite heavy processing on the vocals, so it's nice to hear they still sound natural.

I think the instrument performance has a nice natural intensity build-up, so I set the levels so that near the end the instruments might be slightly overpowering, and in the beginning maybe a bit too quiet. I tried to even this out with slight levelling and parallel compression, then used EQ to make room for the vocals as much as possible. Good to hear this, at least in some manner, turned out to be a good mix, and it sounds the way I mostly intended. Getting better, getting better. ^_^
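
If one wanted to sanity-check that 'constant level' anchor with numbers, a sketch like this would do it; the stem loading, the names and the window length are all hypothetical, not part of my actual workflow:

Code:
# Minimal sketch of checking how constant a vocal stem's level really is:
# measure RMS in consecutive windows and look at the spread. The mono array
# 'vocal' and its sample rate 'sr' are stand-ins for a real stem.
import numpy as np

def windowed_rms_db(signal, sr, window_s=3.0):
    """RMS level in dBFS for consecutive non-overlapping windows."""
    n = int(sr * window_s)
    levels = []
    for start in range(0, len(signal) - n + 1, n):
        chunk = signal[start:start + n]
        rms = float(np.sqrt(np.mean(chunk ** 2)))
        levels.append(20 * np.log10(max(rms, 1e-12)))
    return np.array(levels)

sr = 48000
vocal = 0.1 * np.random.randn(sr * 30)        # stand-in for a real vocal stem
levels = windowed_rms_db(vocal, sr)
print(f"level spread over the song: {levels.max() - levels.min():.1f} dB")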
Reply
#4
Interesting approach. I think there's some point in it, but maybe not for every genre. Typically it bothers me in the car if I have to turn the volume up at the beginning of a song to hear the lyrics and then down at the end if it gets too loud. In my silent studio room I don't notice that issue so easily. Do you adjust the constant level just by ear or with the help of meters?

Although most of the time I'm not listening with meters, I still have a VU meter on my vox bus to catch issues that my tired ears don't. I've been using it with a quite slow release, so that the needle isn't moving too fast, but I'm not sure if that's reasonable. Broadcasting companies seem to have some kind of standard for spoken vox. Was it -23 LUFS? I guess it must be the momentary setting?
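
Roughly the slow-needle behaviour I mean, as a sketch; the attack and release times are picked by feel here, not taken from any standard:

Code:
# One-pole envelope follower with separate attack/release, i.e. a crude meter
# with slow ballistics: the release is much longer than the attack, so the
# reading drifts down slowly like a damped needle. Time constants are guesses.
import numpy as np

def slow_meter_db(signal, sr, attack_s=0.3, release_s=2.0):
    a_att = np.exp(-1.0 / (sr * attack_s))
    a_rel = np.exp(-1.0 / (sr * release_s))
    env = 0.0
    readings = np.empty(len(signal))
    for i, x in enumerate(signal):
        sq = x * x
        coef = a_att if sq > env else a_rel
        env = coef * env + (1.0 - coef) * sq
        readings[i] = 10 * np.log10(max(env, 1e-12))  # mean square in dB
    return readings

sr = 48000
x = 0.1 * np.random.randn(sr * 5)                     # five seconds of noise
print(f"final needle position: {slow_meter_db(x, sr)[-1]:.1f} dB")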
Reply
#5
(22-09-2016, 01:35 PM)Olli H Wrote: Interesting approach. I think ...

You're right, this type of mix won't work in an environment with a good amount of background noise, such as a car.

I try to work by ear mostly, still using meters too as supportive guides. I trust that the system level calibration provides me sufficient headroom, so I initially try to set the levels with the DAW faders just by ear. After the initial setup I just check the meters (RME DIGICheck) to see that I'm somewhere around -20 dBFS RMS-ish and LUFS-ish, and thus good to go further and start tweaking details, usually in mono with the instrument soloed, checking every now and then how it fits in the bigger picture.

After this part I hit the dim to take the monitors down 12 dB, so that I can hear only the character sound from each channel/instrument, and start readjusting the levels and panning of the full mix. Then come possible effects and automation, but I mostly post almost static mixes here, as the automation part is the most laborious one. Even if it is usually the most important aspect, with this site it's usually not worth it, as I try to focus here on the overall sound in a 'how the h*ll did he get this sound out of these tracks' manner, and then try to imitate those mixes, still following my own sense of aesthetics.
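
(For the record, the -12 dB dim is just a fixed gain on the monitor path; as plain arithmetic, with no DAW or TotalMix specifics assumed:)

Code:
# dB-to-linear conversion for the monitor dim: -12 dB is about a quarter
# of the amplitude.
dim_db = -12.0
factor = 10 ** (dim_db / 20)
print(f"{dim_db:.0f} dB dim = x{factor:.3f} amplitude")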
Reply
#6
(22-09-2016, 02:15 PM)kapu Wrote: You're right, this type of mix won't work in an environment with a good amount of background noise, such as a car.

Actually I meant the opposite: I think this is a car-friendly mixing style. The essential vox stays steadily at the same level.
Reply
#7
(22-09-2016, 02:28 PM)Olli H Wrote: Actually I meant the opposite: I think this is a car-friendly mixing style. The essential vox stays steadily at the same level.

Ah, OK. I was thinking of the purist perspective, where you set the listening level according to the lead vocal, but then the instruments in the beginning might get lost in the car's background noise or something.

About LUFS metering in music production: I personally have come to think that the integrated LUFS measurement should not be paid any attention during the mixing or sound engineering process itself. In production, working by ear at one of the commonly recognized calibration levels will 'always' result in a good signal-to-noise ratio while retaining adequate headroom in 24-bit production, from a technical point of view, if that's what you're thinking. Anyway, at least I can't tell a thing about the material's overall dynamics even with very good metering; at best I can make some vague guesses. Of course, when listening to music at a fixed speaker level, I can tell that this mix is louder than the last one, and the meters show that it's 2 dB louder, but that's about it. ^_^
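
The back-of-the-envelope arithmetic behind that headroom claim, in case anyone wants to check it (theoretical numbers only):

Code:
# Rough numbers behind 'calibration level gives both SNR and headroom':
# ~6 dB per bit of theoretical dynamic range, so mixing around -20 dBFS RMS
# in 24-bit leaves 20 dB to full scale and ~124 dB above the quantization floor.
BIT_DEPTH = 24
dynamic_range_db = 6.02 * BIT_DEPTH           # ~144.5 dB theoretical
mix_level_dbfs = -20.0                        # typical calibrated mix level
headroom_db = 0.0 - mix_level_dbfs
margin_above_floor_db = dynamic_range_db + mix_level_dbfs
print(f"headroom: {headroom_db:.0f} dB, "
      f"margin above floor: {margin_above_floor_db:.1f} dB")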
Reply
#8
(23-09-2016, 02:40 PM)kapu Wrote: Ah, OK. I was thinking of the purist perspective, where you set the listening level according to the lead vocal, but then the instruments in the beginning might get lost in the car's background noise or something.

I'm far from a purist. The history of rock'n'roll is a fight against rules made by audio purists. But I'm willing to listen to reasonable arguments. And I'm listening to technical details ONLY when I'm mixing; otherwise I'm listening to the song, and most often the song for me is the vox. In my case the analytical listening ruins the experience.

(23-09-2016, 02:40 PM)kapu Wrote: About LUFS metering in music production: I personally have come to think that the integrated LUFS measurement should not be paid any attention during the mixing or sound engineering process itself. In production, working by ear at one of the commonly recognized calibration levels will 'always' result in a good signal-to-noise ratio while retaining adequate headroom in 24-bit production, from a technical point of view, if that's what you're thinking. Anyway, at least I can't tell a thing about the material's overall dynamics even with very good metering; at best I can make some vague guesses. Of course, when listening to music at a fixed speaker level, I can tell that this mix is louder than the last one, and the meters show that it's 2 dB louder, but that's about it. ^_^

I agree about integrated loudness. Using integrated loudness during mixing would be quite impossible, because it has to be calculated from the whole song. I guess it's something that is essential only to broadcasters and online playback systems, to calculate a common loudness for the songs in a playlist.
But I do have a growing reference library (currently about 150 hastily chosen songs) in my DAW, and I have calibrated each song to -16 LUFS integrated loudness. It's quite handy when all the references (from 50's rock'n'roll to modern heavy metal) are directly at roughly the same level. And with the reference bus fader I can adjust the references to my mixing level if needed.
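
If someone wants to do the same calibration in one batch, something like this sketch would do it with ffmpeg's loudnorm filter; the folder names are placeholders, and single-pass loudnorm is rough but enough for reference-level matching:

Code:
# Batch-normalize a reference folder to -16 LUFS integrated with ffmpeg's
# loudnorm filter. Requires ffmpeg on PATH; folder names are placeholders.
import pathlib
import subprocess

SRC = pathlib.Path("references")
DST = pathlib.Path("references_minus16lufs")
DST.mkdir(exist_ok=True)

for f in sorted(SRC.glob("*.wav")):
    subprocess.run([
        "ffmpeg", "-y", "-i", str(f),
        "-af", "loudnorm=I=-16:TP=-1.0",   # target -16 LUFS, -1 dBTP ceiling
        "-ar", "48000",                    # loudnorm resamples internally
        str(DST / f.name),
    ], check=True)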

Personally I prefer to use RMS metering with a needle, just to see that I'm somewhere around -18 to -20 dB. RMS must be technically quite near the more modern "Momentary Loudness" metering; they are just two different perspectives on the same issue, and for mixing RMS is surely accurate enough. And I like the slow meter; it suits my mind, which is becoming slower day by day.

Although my mixes here are quite loud, that's because I'm sending the pseudo-mastered version. (It's not bad to have, and to develop, some mastering skills.) During mixing I have plenty of headroom and quite a lot of dynamics.


Reply
#9
(23-09-2016, 05:41 PM)Olli H Wrote: I'm far from a purist. The history of rock'n'roll ...

Can't remember the exact values, but I think LU meters are basically K-filtered RMS with different, longer (slower) integration times. I believe there's some gating applied too. I use RME's TotalMix and DIGICheck for monitoring and metering, in the same way as you use your setup. TotalMix is basically a separate DSP monitoring mixer used in RME gear, which allows me to put the DAW and the computer operating system audio on separate busses and then level match them. It also provides a bunch of other cool features, like true mono and true side summing into the centre speaker, Lt/Rt fold-down of surround audio without a change in levels, etc., and these can be controlled from a desktop remote controller totally independently of the software. DIGICheck then provides all kinds of metering 'modules' for the audio passing through TotalMix, from simple digital peak meters to a highly customizable spectrum analyzer and correlation meters, every imaginable and fully customizable level meter, pure digital bitstream and noise statistics, etc. Very useful and time-saving tools.
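
To make the 'K-filtered RMS' point concrete, here's a mono sketch built on the published 48 kHz BS.1770 pre-filter coefficients; no gating and no channel weighting, so it's a simplification:

Code:
# Mono momentary loudness as 'K-filtered RMS': two BS.1770 pre-filter biquads
# (published 48 kHz coefficients) followed by a 400 ms mean-square window.
# Drop the filters and the same window gives plain RMS.
import numpy as np
from scipy.signal import lfilter

SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HP_B = [1.0, -2.0, 1.0]
HP_A = [1.0, -1.99004745483398, 0.99007225036621]

def momentary_lufs(x, sr=48000, window_s=0.4):
    z = lfilter(HP_B, HP_A, lfilter(SHELF_B, SHELF_A, x))
    ms = np.mean(z[-int(sr * window_s):] ** 2)        # last 400 ms
    return -0.691 + 10 * np.log10(max(float(ms), 1e-12))

def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A 997 Hz sine at -18 dBFS RMS: the two readings land almost on top of each
# other, which is the 'RMS is close to momentary loudness' point from above.
sr = 48000
t = np.arange(sr) / sr
sine = 10 ** (-18 / 20) * np.sqrt(2) * np.sin(2 * np.pi * 997 * t)
print(f"RMS {rms_dbfs(sine):.2f} dBFS vs "
      f"momentary {momentary_lufs(sine, sr):.2f} LUFS")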

I use Spotify (Premium) for reference tracks. I wrote a UNIX Bash script for batch EBU R128, ReplayGain and RMS analysis of files using the ffmpeg and sox binaries. In Spotify I generated a radio playlist including music from all sorts of genres and eras and recorded the bitstream for a couple of hours. Then I sliced the songs into separate files and batch analyzed them. The common factor for every song that had been normalized downwards was a ReplayGain value of +6, which is basically the 'same' as -9 LUFS short-term, indicating that this is the limit. I took the computer audio bus in TotalMix down by 12 dB, resulting in a situation where music (the beefy parts) from Spotify 'always' plays around -23 LUFS and peaks at -21 LUFS short-term. I can then A/B Spotify and the DAW, level matched, from the TotalMix remote controller. Later I did the analysis for YouTube and iTunes too.
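
My original was a Bash script around the ffmpeg and sox binaries, but the batch part could be sketched in Python like this (ffmpeg on PATH; the folder name is a placeholder):

Code:
# Run ffmpeg's ebur128 filter over each captured file and pull the integrated
# loudness from its log output; the last 'I: ... LUFS' line is the summary.
import pathlib
import re
import subprocess

for f in sorted(pathlib.Path("captured_songs").glob("*.wav")):
    proc = subprocess.run(
        ["ffmpeg", "-nostats", "-i", str(f),
         "-af", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    hits = re.findall(r"I:\s*(-?\d+(?:\.\d+)?)\s*LUFS", proc.stderr)
    print(f"{f.name}: {hits[-1] if hits else '?'} LUFS integrated")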

I also use the pseudo-mastering mentality: as we are uploading encoded files, it is very reasonable to cut away as much of the unnecessary headroom as possible before encoding.

About integrated program loudness, and why I think it's not a good way of normalizing music. Some highly personal opinions coming up, and I am bracing myself for a certain type of storm, but you mentioned something about listening to reasonable arguments, so I'll try to conjure up something.

As you stated, it would be pointless to try to work towards some integrated loudness based on metering, as measuring it mid-mix would be impossible. Integrated loudness normalization works very well on the broadcasting platform, where there simply aren't enough human resources to check and correct the material. The actual normalization is mostly done by automated offline processing, based on the assumption that the production is done by professionals and the sound is OK, but the level might need a final adjustment. This is in fact quite a pragmatic and 'crude' part of post-production, and it is done by automation on music streaming services too.

When I used integrated normalization with music, I came to the conclusion that the loudness 'experience' between different styles and genres could have huge, and for me unwanted, differences, even if they measure the same integrated loudness. This is of course understandable because of the different styles of composition, arrangement, overall tone of the instrumentation and so on. In other words, the meanest rock song might be anchored basically to -23 throughout, with just a slight level build in the final chorus, while the forte fortissimo in the grand finale of a classical piece might hit -18 and sound hugely more powerful, as might the most intense part of the lead solo of an acoustic jazz trio.

I think the audible loudness of this climax point, regardless of genre, should be the common reference point of the loudness 'experience', so that no matter what this level is, or what the overall dynamics of the piece are, it would be the same for all material. This would of course mean that generally louder music like rock and EDM would be relatively louder in integrated measurements, which, in my opinion, would actually be kind of natural, because these genres have a loud nature and should be experienced louder. This is the way loudness normalization is done for music in most streaming services, probably because most people do find it the most pleasant and natural way of listening to music, and I find it kind of odd that there are some people arguing strongly against it. Maybe it's the mastering engineers who have based their income on producing those -5 LUFS hitting masters, or the investors who bought shares in classical or jazz record labels and want the streaming services to switch to high-headroom integrated normalizing, as contemporary pop would start to sound weak and bad in comparison.
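
To make the 'anchor to the climax' idea concrete, here's how I'd sketch the measurement; this is only my reading of the idea, not any service's actual method, and there's no K-weighting here:

Code:
# Normalize by the loudest short-term window ('the climax') instead of the
# integrated loudness: find the loudest 3 s window and compute the gain that
# puts it at a common target. Window, hop and target are illustrative choices.
import numpy as np

def max_short_term_db(x, sr, window_s=3.0, hop_s=0.1):
    n, hop = int(sr * window_s), int(sr * hop_s)
    best = -np.inf
    for start in range(0, len(x) - n + 1, hop):
        ms = np.mean(x[start:start + n] ** 2)
        best = max(best, 10 * np.log10(max(float(ms), 1e-12)))
    return best

def climax_gain_db(x, sr, target_db=-18.0):
    return target_db - max_short_term_db(x, sr)

sr = 48000
dynamic_piece = 0.02 * np.random.randn(sr * 20)   # stand-in for a quiet track
print(f"apply {climax_gain_db(dynamic_piece, sr):+.1f} dB to hit the target")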

And then there is the human ingenuity. It's actually a small miracle how the most popular contemporary pop sound has fairly quickly evolved characteristics almost perfectly opposite to what the LU measurement emphasizes: the bass is an ultra-low drone tone, so it passes the K-curve 'unnoticed'; the beats are composed of super-short transienty samples, which slip through the slower integration time windows; the vocals are done in a rapping style, and longer notes are edited choppy for the same reason; and all the legato elements are filtered to a very narrow bandwidth. This type of sound then slips through the LU measurement algorithm and sounds very impressive when it explodes out of the charts and playlists. And most certainly the young producers have developed this style by intuition, to fight a loudness normalization they probably don't even know exists, just trying to achieve as loud a sound as possible because it is perceived as better. And the best thing is that these normalization algorithms were partly developed to battle the loudness war in the first place. :)
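
The bass part of that claim is easy to show with the same pre-filter as in the earlier sketch: two sines at identical RMS level, 30 Hz and 997 Hz, read very differently:

Code:
# Two sines at the same -18 dBFS RMS through the BS.1770 pre-filter (same
# published 48 kHz coefficients as above): the 30 Hz tone reads several dB
# lower, i.e. low bass costs little on the LU meter.
import numpy as np
from scipy.signal import lfilter

SHELF = ([1.53512485958697, -2.69169618940638, 1.19839281085285],
         [1.0, -1.69065929318241, 0.73248077421585])
HP = ([1.0, -2.0, 1.0], [1.0, -1.99004745483398, 0.99007225036621])

def k_loudness_db(x):
    z = lfilter(*HP, lfilter(*SHELF, x))
    return -0.691 + 10 * np.log10(np.mean(z[len(z) // 2:] ** 2))  # skip warm-up

sr = 48000
t = np.arange(sr * 2) / sr
for freq in (30, 997):
    sine = 10 ** (-18 / 20) * np.sqrt(2) * np.sin(2 * np.pi * freq * t)
    print(f"{freq:>4} Hz at -18 dBFS RMS reads {k_loudness_db(sine):.1f} LUFS")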
Reply
#10
Thanks for the detailed post. I read it a couple of times, and I have a strong illusion that I understood everything; I also tend to agree with everything.

I guess RMS is better suited for checking the optimal audio levels for analog gear and for the converters' sweet spot, if there's such a thing. And modern LU metering is probably trying to mimic the human listening experience. If so, then RMS would in theory be the better meter during the mixing process, but in practice there's probably no relevant audible difference as long as one leaves a big enough headroom. But I'm not trying to be an expert in this area.

I might try that Spotify trick. So far I have used the excellent Finnish library system, where I can find almost any CD from big enough stars. I can order it online and fetch it after a couple of days from the nearest library. But sometimes "a couple of days" is an eternity.

If one follows Spotify's loudness standard in (pseudo) mastering, then it's safe not to have a peak-to-loudness ratio higher than 10-12 dB. If the peak-to-loudness ratio is, for example, around 16 dB, then Spotify must either use its own limiting or leave the music 4 dB quieter than the others. I think I've read somewhere that they limit with a not-so-good limiter if the material is too dynamic. So in that sense, targeting an integrated loudness of -10 to -12 LUFS is a pretty safe choice. (This is a layman's interpretation of technical material I've read on the web.)
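
The same arithmetic spelled out; the target and ceiling numbers here are just my illustrative guesses, not Spotify's published values:

Code:
# If playback is normalized to a loudness target and true peaks are capped,
# a track whose peak-to-loudness ratio exceeds (ceiling - target) must either
# be limited or left below the target. Both numbers below are assumptions.
playback_target_lufs = -14.0
peak_ceiling_dbfs = -1.0

def shortfall_db(track_lufs, track_peak_dbfs):
    """dB of limiting (or lost level) when raising the track to the target."""
    gain = playback_target_lufs - track_lufs
    overshoot = (track_peak_dbfs + gain) - peak_ceiling_dbfs
    return max(overshoot, 0.0)

# Dynamic track: -18 LUFS integrated with -1 dBFS peaks (PLR = 17 dB, while
# the target/ceiling pair above allows 13 dB) -> 4 dB short.
print(f"{shortfall_db(-18.0, -1.0):.0f} dB must be limited away or left quiet")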

If a storm of your (our) opinions comes, I think I'll just go behind the nearest tree and watch from there what happens. :) It's just too difficult for me to write in a foreign language about technical issues where I don't claim to be an expert. So thank you, and good luck!
Reply