"Away" with mix notes
#1
Here is my mix.

What was done.

XY Channels
1. Split the XY mic wave into 2 separate mono channels.
2. Time align the XY channels. XY channel 1 was shifted 4 samples behind channel 2.
3. Gain match the XY channels. XY channel 2 was reduced to -0.01 dB.
4. Linear phase EQ both XY channels together on the XY sub bus.
12 dB/oct low shelf at 22.5 Hz, -5 dB.
6 dB/oct high shelf at 2328 Hz, +1.11 dB.
5. Add a reverb tail to the XY mics. Find the correct predelay and room width, then find the best reverb time: one that matches the natural ambience captured in the XY mics. The XY mics captured a good amount of early reflections, so I did not use the ERs on the reverb plugin.
6. EQ the reverb tail.
6 dB/oct high-pass at 104 Hz.
6 dB/oct low-pass at 3762.7 Hz.
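
As a sketch, steps 1-3 can be expressed in a few lines of plain Python. The toy buffer and function names are mine, not from any DAW or plugin:

```python
def split_stereo(interleaved):
    """Split [L0, R0, L1, R1, ...] into two mono lists (step 1)."""
    return interleaved[0::2], interleaved[1::2]

def delay_samples(mono, n):
    """Shift a channel n samples later, keeping its original length (step 2)."""
    return ([0.0] * n + list(mono))[:len(mono)]

def apply_gain_db(mono, db):
    """Scale a channel by a dB amount, negative = attenuate (step 3)."""
    g = 10.0 ** (db / 20.0)
    return [s * g for s in mono]

# Steps 2-3 from the notes, on a toy buffer:
stereo = [0.5, 0.5, -0.25, -0.25, 0.1, 0.1, 0.3, 0.3, -0.4, -0.4, 0.2, 0.2]
ch1, ch2 = split_stereo(stereo)
ch1 = delay_samples(ch1, 4)      # channel 1 shifted 4 samples behind channel 2
ch2 = apply_gain_db(ch2, -0.01)  # channel 2 trimmed by 0.01 dB
```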

Close Mic L and R Channels
7. Align the close mic L and R channels to the XY channels.
8. Gain match the close mic channels to the XY channels.
9. Pan match the close mic channels to the XY channels.
10. Linear phase EQ: -3.02 dB at 1428.1 Hz, Q 1.09. This was done to attenuate the midrange build-up when all 4 mics were combined.

I will post another clip soon with my experimental sonic optimization. Will explain when I post the new clip. Thanks for listening.

UPDATE: Sonic Optimization
What is Sonic Optimization? Sonic Optimization is the audio processing that eliminates the ghost frequencies that blur the instrument image of a recording. It could be room modes ringing, mic capsule resonance, or any amount of miscellaneous information in a digital recording that has the speaker wasting energy that could be used for the musical frequencies. When this is done correctly, the instrument image and separation should become sharper without changing the tonal character of the music.

Let me know if you can hear the difference.


.mp3    Away ALX Mix.mp3 --  (Download: 9.94 MB)


.mp3    Away ALX Mix Sonic Optimized.mp3 --  (Download: 9.7 MB)


#2
Why time align and gain match the XY Mics?
I only have earbuds to listen and mix with at the moment. Take everything I say with a grain of salt.
-
Mix the song, not the tracks (<-pithy)
#3
(10-03-2017, 10:43 PM)RoyMatthews Wrote: Why time align and gain match the XY Mics?

I find that it helps the instrument imaging. Try the settings I suggested on the mono XY channels, then listen to the original stereo wave and see if you can hear the difference. Listening in mono should really make the difference stand out. Although the mics are usually close together, if the mic distances from the side walls are not perfectly the same, it can create a smeared sound. Every time I do this on mic pairs, the image gets sharper. Sometimes I get lucky and do not have to change anything.
#4
Just curious.

Yeah it would have a noticeable effect in mono because coincident stereo arrays like XY (and M/S and Blumlein) rely on volume difference between the capsules to create the stereo image so gain matching the level difference between mics would focus the signal to the center of the image. But it would also change the stereo image.
Time aligning shouldn't (in theory) make much of a difference because the sound should hit both mics at the same time (if properly set up). Maybe these mics' capsules weren't as close to each other as they're supposed to be.
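
For what it's worth, the size of the image shift caused by a level change can be roughly estimated with the tangent panning law. This is a simplified psychoacoustic model for illustration only; the speakers at +/-30 degrees and the function name are my assumptions:

```python
import math

# Tangent-law estimate of where a source appears between two speakers at
# +/-30 degrees, given only the L-over-R level difference. A simplified
# psychoacoustic model for illustration, not a measurement.

def apparent_angle_deg(level_diff_db, speaker_angle_deg=30.0):
    """Apparent source angle (degrees off center) for an L/R level difference in dB."""
    r = 10.0 ** (-level_diff_db / 20.0)  # right gain relative to left
    t = (1.0 - r) / (1.0 + r) * math.tan(math.radians(speaker_angle_deg))
    return math.degrees(math.atan(t))

print(round(apparent_angle_deg(0.0), 1))  # 0.0: equal levels, dead center
print(round(apparent_angle_deg(6.0), 1))  # a 6 dB tilt pulls the image well off center
```

By this model a 0.01 dB trim moves the image by only a tiny fraction of a degree, which is why gain matching mostly shows up as focus rather than an obvious pan shift.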

That said if it works it works.
I only have earbuds to listen and mix with at the moment. Take everything I say with a grain of salt.
-
Mix the song, not the tracks (<-pithy)
#5
(11-03-2017, 12:14 AM)RoyMatthews Wrote: Just curious.

Yeah it would have a noticeable effect in mono because coincident stereo arrays like XY (and M/S and Blumlein) rely on volume difference between the capsules to create the stereo image so gain matching the level difference between mics would focus the signal to the center of the image. But it would also change the stereo image.
Time aligning shouldn't (in theory) make much of a difference because the sound should hit both mics at the same time (if properly set up). Maybe these mics' capsules weren't as close to each other as they're supposed to be.

That said if it works it works.

You are correct. I find that most mics are not set up perfectly. 4 samples @ 44.1 kHz is equal to about 1.2 inches, and that is not very much to be off if the room is more than 24 ft in any direction. It is very hard to get the mics set up perfectly. Close to ideal is good enough if technology can perfect it for you. Sometimes nothing needs to be done, and sometimes I hear a fuzziness that I know is coming from the gain being just a little off. When that does not tighten things up, I start to shift the channels to see if the image improves. Most of the time it does the trick. Thanks for listening.
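
That 1.2-inch figure is easy to verify, assuming a speed of sound of about 343 m/s:

```python
# Quick check of the "4 samples at 44.1 kHz is about 1.2 inches" figure.
# Assumes sound travels ~343 m/s (dry air at 20 C).

SPEED_OF_SOUND = 343.0  # meters per second

def samples_to_inches(n_samples, sample_rate=44100):
    """Distance sound travels during n_samples at the given sample rate."""
    meters = (n_samples / sample_rate) * SPEED_OF_SOUND
    return meters / 0.0254  # meters to inches

print(round(samples_to_inches(4), 2))  # -> 1.22
```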

#6
I just want to say thanks, ALX, for the detailed notes. I tried the sample alignment you suggested; it was helpful in figuring out how I wanted to approach it. The 'sonic optimization' details are also nice. They confirmed what I felt before I saw your post: I don't normally do that much EQ, but on this track it felt like it was helping, not hurting.
#7
(10-03-2017, 08:33 PM)ALX Wrote: What was done.

XY Channels
1. Split XY Mic wave into 2 separate mono channels.
2. Time Align XY channels. XY channel 1 was shifted 4 samples behind ch

Can we hear a 4 sample alignment out of 44100 samples per second?

Is it important anyway, given that it's the overall mix that matters and what our vision necessitates, rather than an academic approach in this instance?

Quote:3. Gain Match XY channels. XY channel 2 was reduced to -0.01 dB.

Is the human ear capable of hearing a -0.01dBFS change, either subjectively or objectively?

Let's reflect for a moment. How much does the head need to move either closer to, or further away from the sound source (in stereo, it's 2 speakers) to make a 0.01dB difference without even touching the fader?
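
For a rough answer, the free-field inverse-square law gives the head movement needed for a 0.01 dB change. The point source, the absence of room reflections, and the 2 m listening distance are my own example assumptions:

```python
# Free-field inverse-square estimate: how far must a listener move toward a
# point source for the level to rise by delta_db? Assumes no room
# reflections; the 2 m listening distance below is an example figure.

def move_for_db(distance_m, delta_db):
    """Meters to move closer for a delta_db increase in level."""
    return distance_m * (1.0 - 10.0 ** (-delta_db / 20.0))

mm = move_for_db(2.0, 0.01) * 1000.0
print(round(mm, 1))  # -> 2.3 (millimeters)
```

In other words, under these assumptions a couple of millimeters of head movement already produces a 0.01 dB change.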

Quote:4. Linear Phase EQ both XY channels together on the XY sub bus.
12 dB/oct low shelf at 22.5 Hz, -5 dB.

I suggest that firstly there's no issue with bass below 22Hz that needs solving. But even if there was, would a shelf at this level do anything?

Quote:5. Add reverb tail to XY mics.

Why would you want to do that?

In what way will this help the listener with their subjective illusion of space in which the musicians are performing?

Furthermore, you're adding material on top of the stuff which is already in abundance. It creates an additional cloud of excess, even with EQ.

Quote:Find correct predelay and room width.

Why?

Quote:Next find the best Reverb time. One that matches the natural ambience captured in the XY mics. The XY mics captured a good amount of early reflection so I did not use the ER on the reverb plugin.

We have a room with ambiance which the microphones have captured. There's plenty of it in the close mics too. The mics aren't merely capturing ERs, but LRs and, importantly, the RT60. It's a small, reverberant room, I think we can all agree?

Controversially, all you are doing here is putting this small room inside a larger one, resulting in a significant ambiance clash while trying to defy the laws of nature (acoustically). Small rooms don't have a long RT60, as you have implied in your selection of reverb tail. All the psychoacoustic spatial cues which define the room in the recordings still remain. This makes it highly ambiguous to my ears, a fake. Fake is a distraction from the performance.

I am even asking myself if trying to change the illusion of space (impossible) is what this performance needs. It's a performance which appeals perhaps more to the musical academic than the public in general, we could say. They aren't interested in ear candy, just the performance itself.

In the real world, the brain is super capable of assessing space, and from a very early age. The eyes confirm to the brain, what the ears have conveyed and we don't think anything of it, it's second nature. Take the eyes away and the brain doesn't suddenly abandon this natural trait, as you are persuading me to do here.

The fundamental reason we have reverb emulations, is so we can place instruments which were recorded in a dry space, missing all those spatial cues which define room size, into the illusion of a performance. You can't shoehorn one wet room inside a bigger emulation, nature doesn't work that way.

Quote:UPDATE: Sonic Optimization

This is the responsibility of the recording engineer! Different forum! :P

Great opportunity for discussion. And thanks for the interesting listen.
#8
(14-04-2017, 02:57 PM)Max Headroom Wrote:
(10-03-2017, 08:33 PM)ALX Wrote: What was done.

XY Channels
1. Split XY Mic wave into 2 separate mono channels.
2. Time Align XY channels. XY channel 1 was shifted 4 samples behind ch

Can we hear a 4 sample alignment out of 44100 samples per second?

Is it important anyway, given that it's the overall mix that matters and what our vision necessitates, rather than an academic approach in this instance?

Quote:3. Gain Match XY channels. XY channel 2 was reduced to -0.01 dB.

Is the human ear capable of hearing a -0.01dBFS change, either subjectively or objectively?

Let's reflect for a moment. How much does the head need to move either closer to, or further away from the sound source (in stereo, it's 2 speakers) to make a 0.01dB difference without even touching the fader?

Quote:4. Linear Phase EQ both XY channels together on the XY sub bus.
12 dB/oct low shelf at 22.5 Hz, -5 dB.

I suggest that firstly there's no issue with bass below 22Hz that needs solving. But even if there was, would a shelf at this level do anything?

Quote:5. Add reverb tail to XY mics.

Why would you want to do that?

In what way will this help the listener with their subjective illusion of space in which the musicians are performing?

Furthermore, you're adding material on top of the stuff which is already in abundance. It creates an additional cloud of excess, even with EQ.

Quote:Find correct predelay and room width.

Why?

Quote:Next find the best Reverb time. One that matches the natural ambience captured in the XY mics. The XY mics captured a good amount of early reflection so I did not use the ER on the reverb plugin.

We have a room with ambiance which the microphones have captured. There's plenty of it in the close mics too. The mics aren't merely capturing ERs, but LRs and, importantly, the RT60. It's a small, reverberant room, I think we can all agree?

Controversially, all you are doing here is putting this small room inside a larger one, resulting in a significant ambiance clash while trying to defy the laws of nature (acoustically). Small rooms don't have a long RT60, as you have implied in your selection of reverb tail. All the psychoacoustic spatial cues which define the room in the recordings still remain. This makes it highly ambiguous to my ears, a fake. Fake is a distraction from the performance.

I am even asking myself if trying to change the illusion of space (impossible) is what this performance needs. It's a performance which appeals perhaps more to the musical academic than the public in general, we could say. They aren't interested in ear candy, just the performance itself.

In the real world, the brain is super capable of assessing space, and from a very early age. The eyes confirm to the brain, what the ears have conveyed and we don't think anything of it, it's second nature. Take the eyes away and the brain doesn't suddenly abandon this natural trait, as you are persuading me to do here.

The fundamental reason we have reverb emulations, is so we can place instruments which were recorded in a dry space, missing all those spatial cues which define room size, into the illusion of a performance. You can't shoehorn one wet room inside a bigger emulation, nature doesn't work that way.

Quote:UPDATE: Sonic Optimization

This is the responsibility of the recording engineer! Different forum! :P

Great opportunity for discussion. And thanks for the interesting listen.

Max Headroom,
Those are all very good points you brought up. I could spend days explaining the concepts behind the decisions that were made mixing this piece. AES is a very good site to learn about new trends and theories in the professional audio industry. All of the suggestions I made make the source cleaner or more musical. The only thing that matters at the end of the day is making the source cleaner and more musical. All of my suggestions are very easy to try, and if you try them I am sure you will hear an improvement in the recording. Adding the reverb tail is not as easy to try, so I have attached a dry version. I feel the version with the tail added sounds more musical than without.

Also, 2L and Sonos Luminus are two very highly regarded labels that use this technique, as well as many others. Technology allows us to do more than we were able to when I attended sound engineering school 20-plus years ago. My mixing knowledge went to another level when I started looking at digital audio as information converted to music.

I will give a couple of general explanations into my methods.

Example 1: Digital audio is information –
It does not have the capabilities of the human ear attached to a brain (a supercomputer) to filter or focus on the most essential information. It records everything and gives everything back to you. You have to isolate the essential information. For example, very good AD converters can capture information as low as 5 Hz. Most speakers can only play reference-level bass above 60 Hz. If a speaker cannot audibly reproduce those sub frequencies, this is wasted current that an audio system could be using to reproduce what the speaker can actually play. That is why you should always cut sub lows, even if it is at 10 Hz or lower. Cut what you can without hurting the music. With that said, if you cut too much, it messes with the harmonics of the upper tones and makes the bass sound hollow and weak. I am always trying to find ways to preserve the original digital information and cut any unnecessary ones and zeros.
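
A sub cut like this can be sketched as a simple one-pole (6 dB/oct) high-pass. This is a generic RC-style filter of my own for illustration, not any particular plugin:

```python
import math

# A generic one-pole RC-style high-pass (6 dB/oct), the gentlest way to
# roll off sub-low content. Function and parameter names are illustrative.

def one_pole_highpass(samples, cutoff_hz, fs=44100):
    """Apply a 6 dB/oct high-pass at cutoff_hz to a list of samples."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for s in samples:
        prev_out = alpha * (prev_out + s - prev_in)
        prev_in = s
        out.append(prev_out)
    return out

# A constant (0 Hz) input decays away; content above the cutoff passes
# nearly untouched.
blocked = one_pole_highpass([1.0] * 44100, 10.0)
print(abs(blocked[-1]) < 1e-6)  # True: DC removed after one second
```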

Example 2: Using a low shelf vs. a high pass filter –
Steep filters can flip phase up to 180 degrees. Shelves do not mangle phase in the same way; they are much gentler on phase if you are only cutting 6 dB or less. When phase is mangled, it blurs transient information.
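
To put rough numbers on that claim, here is a comparison of the low-frequency phase shift of first- and second-order high-pass prototypes against a first-order low shelf. These are idealized analog prototypes of my own choosing; real EQ implementations vary:

```python
import cmath
import math

def hp1_phase_deg(f, fc):
    """Phase of a 1st-order (6 dB/oct) high-pass at frequency f (fc = corner)."""
    s = 1j * (f / fc)
    return math.degrees(cmath.phase(s / (s + 1)))

def hp2_phase_deg(f, fc, q=0.707):
    """Phase of a 2nd-order (12 dB/oct) high-pass at frequency f."""
    s = 1j * (f / fc)
    return math.degrees(cmath.phase(s * s / (s * s + s / q + 1)))

def lowshelf_phase_deg(f, fc, gain_db):
    """Phase of a 1st-order low shelf at frequency f."""
    a = 10.0 ** (gain_db / 40.0)
    s = 1j * (f / fc)
    return math.degrees(cmath.phase((s + a) / (s + 1.0 / a)))

# Well below the corner, the 2nd-order high-pass approaches 180 degrees of
# phase shift, while a -5 dB shelf never strays far from 0 degrees.
print(round(hp2_phase_deg(0.01, 1.0), 1))            # close to 180
print(round(lowshelf_phase_deg(1.0, 1.0, -5.0), 1))  # much smaller
```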

Example 3: Aligning XY Mics –
Both the X and the Y mics pick up the whole room: L and R information in both mics. You may not be able to hear a difference in the close capture when sample aligning; that is not the goal. You are aligning the XY mics to align the reverb captured from the room. You are listening for a smooth and clear reverb tail decay.
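
A minimal way to find that offset between two channels is brute-force cross-correlation over a small lag window. This is illustrative only; real alignment tools use sub-sample interpolation and windowed analysis:

```python
import random

# Brute-force cross-correlation: find the lag (in samples) that best lines
# up channel b with channel a. A positive result means b is later than a.

def best_lag(a, b, max_lag=16):
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Demo: delay a noise signal by 4 samples and recover the offset.
random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
y = [0.0] * 4 + x[:-4]
print(best_lag(x, y))  # -> 4
```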

Hope this gives a better understanding of "Why".

ALX



.mp3    Away ALX Mix Sonic Optimized No Reverb.mp3 --  (Download: 9.7 MB)


.mp3    Away ALX Mix Sonic Optimized.mp3 --  (Download: 9.7 MB)


#9
I have to be honest, but the things you say, like "all of the suggestions I made make the source cleaner and more musical" and "the only thing that matters at the end of the day is making the source cleaner and more musical", are all kind of subjective.

I, personally, have tried what you suggested and then asked the questions in my post above. My results were different, but not necessarily better. I do feel like adjusting the timing and volume of the individual microphones in a certain mic array can create a different sound, but not necessarily a better sound, especially when those mics get summed closer to mono, as you do by time aligning. Your opinions differ, which is fine and cool, but they aren't necessarily more right.

Don't just suppose that people haven't done what you said because you don't like the answer. I have out of curiosity and for the purposes of education and discussion. Can I hear these changes? Yeah, kinda. Are they better? I dunno but I doubt it because everything else changes.

In the end you posit that your choices are "an improvement in the recording" and "more musical" all subjective. Which is fine and great and I'm all for deconstructing recordings and techniques to get a different sound but that doesn't make them any "more musical" or "an improvement in the recording" than if I ran them through a distortion pedal.

In the end, if presented with stereo micing of a group, that image becomes paramount to the structure and perspective of the picture of the music. Is it the best image? Not necessarily, but it's as much of a picture as an engineer can take these days. We can argue AES or whatever, but XY has long been established. I have to say that I think you're overthinking it. Maxine presented a lot of good points that you gloss over, putting the onus back on her. I will say that I have done the changes, beyond reverb, that you mentioned and heard differences, but I can't say they were 'better or more musical'.

I have no opinion or care when it comes to personal preference. I do have an opinion if it's bandied about as being right or science. This is a place with a lot of people who are of a lot of different levels of experience and sometimes all it takes is someone speaking in big, technical terms to confuse people and for them to take it with them for a long time until they learn differently.

I can talk stereo micing all day; I've done it for over 20 years on some level. Let's not get into a dick-waving contest and try to impress people. Let's try to teach people. You can time align stereo mic arrays, but that changes the perspective. You can change the levels of mics in a stereo array, but that changes the perspective. And ultimately I don't know if you should. That was sorted by the engineer, using techniques settled decades before.

In a "classical" piece like this one, if you do something like time align or gain change the mics, then say so. That doesn't make it better. Let the listener decide. Let the mixer who's learning decide. Hell, in any case, say so, so we who listen can learn. But don't present it as right, and don't worry so much if someone presses you on it.

I think all those changes are, somewhat, audible. Are they better? I think that's really subjective.

Sorry to rant. Kinda had a bit of vodka and getting a bit sure of myself. I'm sure it doesn't make all that much sense. But the feeling is there. Ha.

Cheers

I only have earbuds to listen and mix with at the moment. Take everything I say with a grain of salt.
-
Mix the song, not the tracks (<-pithy)
#10
(17-04-2017, 03:10 AM)RoyMatthews Wrote: I have to be honest, but the things you say, like "all of the suggestions I made make the source cleaner and more musical" and "the only thing that matters at the end of the day is making the source cleaner and more musical", are all kind of subjective.

I, personally, have tried what you suggested and then asked the questions in my post above. My results were different, but not necessarily better. I do feel like adjusting the timing and volume of the individual microphones in a certain mic array can create a different sound, but not necessarily a better sound, especially when those mics get summed closer to mono, as you do by time aligning. Your opinions differ, which is fine and cool, but they aren't necessarily more right.

Don't just suppose that people haven't done what you said because you don't like the answer. I have out of curiosity and for the purposes of education and discussion. Can I hear these changes? Yeah, kinda. Are they better? I dunno but I doubt it because everything else changes.

In the end you posit that your choices are "an improvement in the recording" and "more musical" all subjective. Which is fine and great and I'm all for deconstructing recordings and techniques to get a different sound but that doesn't make them any "more musical" or "an improvement in the recording" than if I ran them through a distortion pedal.

In the end, if presented with stereo micing of a group, that image becomes paramount to the structure and perspective of the picture of the music. Is it the best image? Not necessarily, but it's as much of a picture as an engineer can take these days. We can argue AES or whatever, but XY has long been established. I have to say that I think you're overthinking it. Maxine presented a lot of good points that you gloss over, putting the onus back on her. I will say that I have done the changes, beyond reverb, that you mentioned and heard differences, but I can't say they were 'better or more musical'.

I have no opinion or care when it comes to personal preference. I do have an opinion if it's bandied about as being right or science. This is a place with a lot of people who are of a lot of different levels of experience and sometimes all it takes is someone speaking in big, technical terms to confuse people and for them to take it with them for a long time until they learn differently.

I can talk stereo micing all day; I've done it for over 20 years on some level. Let's not get into a dick-waving contest and try to impress people. Let's try to teach people. You can time align stereo mic arrays, but that changes the perspective. You can change the levels of mics in a stereo array, but that changes the perspective. And ultimately I don't know if you should. That was sorted by the engineer, using techniques settled decades before.

In a "classical" piece like this one, if you do something like time align or gain change the mics, then say so. That doesn't make it better. Let the listener decide. Let the mixer who's learning decide. Hell, in any case, say so, so we who listen can learn. But don't present it as right, and don't worry so much if someone presses you on it.

I think all those changes are, somewhat, audible. Are they better? I think that's really subjective.

Sorry to rant. Kinda had a bit of vodka and getting a bit sure of myself. I'm sure it doesn't make all that much sense. But the feeling is there. Ha.

Cheers
Thank you for the reply. I am very happy to share information; that is why I posted mix notes. I have no tolerance for trolls who like to challenge information that they have not discovered yet. Also, Max Headroom has not posted a single mix on this site, and the tone of the questions was not that of inquiry. I agree that clean and musical are subjective; I should have added "to me" at the end of my statements. My mix notes are not rules, just information that anyone can use if they find that it works for them. If not, move on, no need to challenge. I am sure at some point there were people who challenged time aligning drum spot mics to the overhead mics. Now that technology makes it very easy to do, it has become standard practice in popular music. We should all be pushing the envelope and trying new things, especially the younger engineers. If we keep doing the same old things, we will keep getting the same old results.

ALX