"Away" with mix notes
#11
Just to chime in, the notes were great and helpful. I tried the time alignment and felt it didn't improve things. What I think I can hear is some sort of phase mismatch between the XY and close mics, so I can see why time aligning would be tried. I tried it for every track, and also tried phase rotation on every track. Nothing really jumped out at me and said "I'm fixed," except flipping the phase on one of the close mics and not using the other close mic. The XY pair is perfectly fine as-is and would be easy to just solo and leave up, so I thought XY plus one close mic was best. But that's no fun; we are learning to challenge ourselves here, and I wanted each mic to have a voice. So I listened to each close mic, chose one to be winds and the other to be strings, and tried to pan them to roughly match their place in the XY room image. Total guesses.
Filtered each: strings with a little more low end, winds with a little more highs, then blended those into the XY.
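In case anyone wants to experiment with that kind of split outside a DAW, here is a minimal Python sketch of the idea: gentle complementary filtering of the two close mics, constant-power panning, and a blend into the XY bed. The corner frequencies, pan positions, and levels are made-up illustration values, not the settings used in this mix.

```python
# Sketch only: complementary filtering + panning of two close mics into an XY bed.
# Filter corners, pan angles, and gains are illustrative guesses, not mix settings.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100

def tilt(x, kind, cutoff_hz):
    """Gentle 2nd-order high- or low-pass to favour one end of the spectrum."""
    sos = butter(2, cutoff_hz, btype=kind, fs=fs, output='sos')
    return sosfilt(sos, x)

def pan(mono, angle):
    """Constant-power pan; angle in [-1, 1] from hard left to hard right."""
    theta = (angle + 1) * np.pi / 4
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=-1)

def blend(xy_stereo, strings_close, winds_close):
    """Blend filtered, panned close mics (mono arrays) into the stereo XY bed."""
    strings = tilt(strings_close, 'lowpass', 4000)   # keep more low end
    winds = tilt(winds_close, 'highpass', 300)       # keep more highs
    mix = xy_stereo.copy()
    mix += 0.5 * pan(strings, -0.4)   # strings a bit left of centre
    mix += 0.5 * pan(winds, +0.4)     # winds a bit right of centre
    return mix
```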

The other thing I try never to do (similar to how I feel about time alignment, which is nearly always the ugliest option IMO) I did here: tons of narrow EQs, boosting and fishing for pronounced frequencies, then ducking them. I guess this is sometimes needed for modal rooms or comb filtering, but I know some in the pop/Auto-Tune crowd do this stuff no matter what, which I find simply annoying. I did all these tricks for my first few years, but then I realized that whatever benefit it gives is also sucking the soul away. Only on the rare occasions when I run out of options do I try it anymore.

And one important thing to understand: analog can fix bad sounds. No EQ needed; just run it through the right piece of gear and it will clip (so things can be mixed louder with less limiting needed), compress, even tame the modal resonances, and add a little harmonic life (distortion). A lot of home guys won't ever believe it, but IMO that's the biggest reason the pros have that sound that is so elusive. That, and tons more tracks, automation, and talent. Without analog you can waste years getting everything else perfectly right and still never get what they get in three hours, because they have the tools and the teamwork. Even if the mixing engineer and producer never touch analog, you can be pretty much assured the mastering engineer isn't some guy down the street; he has many hits, tons of gear, and uses analog constantly, many times a day.
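For anyone curious what the "clip it and add a little harmonic life" part does in isolation, a soft clipper is easy to approximate digitally. This is only a crude stand-in for analog gear, and the drive value is an arbitrary example:

```python
# Rough digital stand-in for analog-style saturation: a tanh soft clipper.
# It shaves peaks (so the track can sit louder with less limiting) and adds
# odd harmonics. "drive" is an illustrative knob, not a real unit's setting.
import numpy as np

def soft_clip(x, drive=2.0):
    """Soft-clip a float signal in [-1, 1]; higher drive = more saturation."""
    return np.tanh(drive * x) / np.tanh(drive)

# Example: a full-scale 100 Hz sine keeps its peak near 1.0 while its RMS rises,
# so the crest factor drops and odd harmonics appear.
fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 100 * t)
clipped = soft_clip(sine, drive=3.0)
print("peak:", clipped.max(), "rms:", np.sqrt(np.mean(clipped ** 2)))
```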
#12
(21-04-2017, 01:23 AM)mixogen Wrote:

Mixogen,
Most of what you said is correct. Most of the best mix engineers are going totally in the box to save time and to have perfect recall for edits and tweaks. I think you have very good intentions with your comment. I have attached a clip of my mix that fades into your mix. Listen to the imaging in the clip. I peak-matched the two clips; I know I should have RMS-matched them, but this was quicker and easier. Try to ignore the EQ difference and focus on the imaging of the instruments. My clip plays first, then Mixogen's clip, and the sequence repeats so you get another listen without hitting the play button again. Pay close attention to the attack on the bass plucks and the detail in the flute on the left. This is no jab at you, Mixogen; I thought this was the best way to illustrate what I am hearing.
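For anyone matching clips the same way, the only difference between peak matching and RMS matching is which statistic you derive the gain from. A tiny illustrative sketch (not ALX's actual process):

```python
# Match clip B's level to clip A, either by peak or by RMS.
# Peak matching is quick; RMS matching is usually fairer for loudness comparisons.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def match_level(a, b, mode="rms"):
    """Return b scaled so its peak or RMS matches a's."""
    if mode == "peak":
        gain = np.max(np.abs(a)) / np.max(np.abs(b))
    else:
        gain = rms(a) / rms(b)
    return b * gain
```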

ALX



Attachment: Imaging Compare.mp3 (Download: 621.91 KB)


#13
(21-04-2017, 12:18 PM)ALX Wrote:

The imaging on your mix is better from a technical standpoint, but IMHO Mixogen has captured more of the feeling of being in a room with the musicians. ALX, I appreciate your dedication to pushing forward and away from tradition to get results, and I time-align my snare mics to my overheads and sometimes even to the other close mics on a drum kit. But the whole idea of a set of room mics like this is to capture the sound of the ensemble as a whole, and part of that is having things not quite as centered as they would be if they were recorded as a direct source.

Remember you aren't producing pop rock here; this is a live performance of an avant-garde classical piece recorded mainly with stereo pairs, and it's not meant to sound as direct and pure as something recorded with spot mics. You've made something technically perfect, but in doing so you've lost some of the humanity of the source. It's a small detail, but in the end you have to wonder, "Am I making it better, or am I making it different?"

Take what I have to say with the biggest vat of salt possible. I haven't mixed this and really don't have a lot of experience mixing this kind of material (I mostly do metal and rock); this is just my reaction to the snippet you posted.

Cheers,
Doug
"Mixing is way more art and soul than science. We don’t really know what we’re doing. We do it because we love music! It’s the love of music first." – Eddie Kramer

Gear list: Focusrite Scarlett 18i20, Mbox Mini w/Pro Tools Express, Reaper, Various plugins, AKG K240 MKii, Audio Technica ATH M50x, Yorkville YSM 6
#14
(21-04-2017, 02:18 PM)dcp10200 Wrote:

You are correct. In the clip I posted, my part has the reverb tail turned off so that you are only hearing the mic capture; his clip has reverb added. My goal was to show that aligning improves the imaging, which you stated you were able to hear. One thing is for certain: knowing your end user is very important. Clarity, dynamics, and imaging are everything to the guy who has a listening room with $100,000 reference speakers, $20,000 monoblock amps, and premium cabling. Check out the Audiogon forum to see what they are discussing; they are on another level when it comes to audio enthusiasts. Thanks for all the comments. I definitely understand your point of view.

ALX

#15
Well, before you go too far with this ideology, it should be understood that the AES guys don't always understand things.
The monoblock guys won't appreciate it much either! (Edit: I hope I don't sound rude here; I'm thankful we can have this discussion!)

Think of a close tom mic and a close snare mic. Those two mics BOTH pick up the snare AND the tom (both mics pick up tons of bleed).
When you set up the mics you listen and make sure the snare sounds good and the tom sounds good. It doesn't have to be perfect phase; it just needs to sound great. Leave all the mics on during tracking, and if everything sounds great, you have a good enough phase relationship.

After tracking you can decide to time-align the snare to the overheads and the toms to the overheads. If you do this, notice how the snare's tone and the toms' tone change. That fat, full thing is gone; often the toms now sound terrible and the snare sounds like crap because of the tom bleed. Zoom in on the tom track and notice that the snare transient is no longer aligned with the snare close mic. There's nothing you can do: your tom bleed is EQ-ing the snare track in a nasty way, and vice versa. THE FIX: gate the snare and toms. It sounds much better, but that fatness you had is now gone and it's weaker, so you go looking for samples to replace or assist things. This works because we gate, so only one mic at a time is on (and the blend with the overheads isn't usually at the same volume, i.e. the close mic is often louder and jumps over the OH).
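If you do want to experiment with the alignment step itself, the usual approach is to estimate the delay between a close mic and the overheads with a cross-correlation and then shift by that many whole samples. A rough sketch, assuming mono NumPy arrays already trimmed to the same region; it is sample-accurate only, with no sub-sample interpolation:

```python
# Estimate and apply the close-mic-to-overhead delay via cross-correlation.
# Assumes mono float arrays of equal length covering the same hit; whole-sample
# shifts only, which is part of why alignment never lands exactly back "in phase"
# at high frequencies.
import numpy as np

def estimate_delay(close, overhead, max_lag=2000):
    """Return the lag (in samples) by which 'close' leads 'overhead'."""
    corr = np.correlate(overhead, close, mode="full")
    lags = np.arange(-len(close) + 1, len(overhead))
    window = np.abs(lags) <= max_lag
    return lags[window][np.argmax(corr[window])]

def shift(x, lag):
    """Delay (lag > 0) or advance (lag < 0) a signal by whole samples."""
    if lag > 0:
        return np.concatenate([np.zeros(lag), x[:-lag]])
    if lag < 0:
        return np.concatenate([x[-lag:], np.zeros(-lag)])
    return x

# usage: snare_aligned = shift(snare_close, estimate_delay(snare_close, overheads))
```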

Now imagine an XY pair. One mic is closer to the left side, one to the right side. The bass will reach one mic before it reaches the other, and whatever instrument is on the other side of the bassist will reach the other mic first. That's just the way it is, same as close tom/close snare. But with XY the distance is much shorter, so the high frequencies are what you have to watch out for when adjusting time.
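To put rough numbers on "the distance is much shorter": the inter-capsule delay depends on the spacing and the source angle. The distances below are guesses for illustration, not measurements from this session:

```python
# Back-of-the-envelope arrival-time differences, in samples at 44.1 kHz.
# All distances and angles are illustrative guesses, not measurements.
import math

SPEED_OF_SOUND = 343.0   # m/s, room temperature
FS = 44100               # samples per second

def delay_samples(path_difference_m):
    """Extra path length converted to a delay in samples."""
    return path_difference_m / SPEED_OF_SOUND * FS

# XY pair: capsules ~3 cm apart, source 45 degrees off axis
print(delay_samples(0.03 * math.sin(math.radians(45))))   # about 2.7 samples

# Close mic roughly 2 m nearer the player than the main pair
print(delay_samples(2.0))                                  # about 257 samples
```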

At a sample rate of 44.1 kHz:

A frequency of 5,512.5 Hz has exactly 8 samples to represent the actual wave shape when it was captured. At this frequency, if you move one of the XY tracks you have a 1-in-8 chance of landing in perfect phase; if you are only 1 sample off you are 45 degrees out of phase, and if you are 4 samples off you are at a null. I chose that frequency because the math is easy: each sample represents 12.5% of the wave shape.

Now, the rest of the frequencies around 5k are not easy math. The samples for 5k or 6k or 10k or 15k are all fractions of a repeating wave, so each cycle is going to give you a different peak and null. This means you will probably never find perfect phase again when you shift the file over. The vast majority of the high frequencies will end up out of phase by something like 12.5% of a cycle or much more, in a random, weird way.
This shows up as highs that sound out of phase, non-harmonic, ugly, dull, smeared, and simply unnatural. And that's what I heard when I tried it on this project; the bass imaging was more centered, but it just didn't seem worth it.

Now if you look at the lows, time aligning doesn't mess things up, because the capture allows so many more samples per cycle.
A 44 Hz wave has roughly 1,000 samples per cycle. If you are off by a few samples it won't be noticeable at all; you can dial the phase in to well under 1% of a cycle and line things up in a very detailed way.
Trying to line up the higher frequencies as well as they were when they were captured just isn't going to happen. The original capture came from the full-resolution, continuous waveforms, so the tracking engineer could get it better than we ever can after the fact.
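To make those numbers concrete, here is the phase error a one-sample shift introduces at a few frequencies, using the same 44.1 kHz arithmetic as above (phase = 360 × f × shift / fs degrees):

```python
# Phase error caused by shifting a track by whole samples, at 44.1 kHz.
# Same arithmetic as the 5,512.5 Hz and 44 Hz examples above.
FS = 44100

def phase_error_deg(freq_hz, shift_samples=1):
    """Phase offset (degrees) that a shift of N samples introduces at one frequency."""
    return 360.0 * freq_hz * shift_samples / FS

for f in (44, 440, 1000, 5512.5, 10000, 15000):
    print(f"{f:>8} Hz: {phase_error_deg(f):6.1f} degrees per sample of shift")
```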

Anyway, this gives me an idea for this track.
Maybe one could time-align the XY pair and low-pass it at around 150 Hz? That would effectively turn the XY into a bass mic and leave the close mics to cover the high end.

This might work!
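A minimal sketch of that crossover idea, assuming the XY tracks are already time-aligned; the 150 Hz corner and filter slopes are just the guesses from the post, not tested values:

```python
# Sketch of the "XY as a bass mic" idea: low-pass the (already time-aligned)
# XY pair around 150 Hz and leave the close mics to carry everything above it.
# Corner frequency and slopes are illustrative, not tested values.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100
CROSSOVER_HZ = 150

def split_duties(xy_aligned, close_mics_sum):
    """Return a mix where the XY supplies the lows and the close mics the highs."""
    lp = butter(4, CROSSOVER_HZ, btype='lowpass', fs=FS, output='sos')
    hp = butter(4, CROSSOVER_HZ, btype='highpass', fs=FS, output='sos')
    lows = sosfiltfilt(lp, xy_aligned, axis=0)       # stereo XY, lows only
    highs = sosfiltfilt(hp, close_mics_sum, axis=0)  # close mics, highs only
    return lows + highs
```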




#16
(24-04-2017, 09:44 PM)mixogen Wrote:

As I said before, if it works for you, use it; if not, move on. This thread has helped me realize that my knowledge has grown far beyond the average mix engineer's. I think my Benchmark DAC2, B&W 805n speakers, and 300-watt mono amps, which let me hear much deeper into the music, helped that growth. Money well spent for sure.

ALX