Updated - Movin to Santa Fe with Fruition - Mix 5
#11
Just sending one guitar/channel to Santa Fe doesn't make for a coherent mix. Humans have two ears, and depth perception depends on the brain being able to triangulate signals. If you pan things so wide that each (widely panned) instrument is effectively reaching the ears from one source, you end up with a flat, one-dimensional 'mix'.
Reply
#12
I listened to your 5th mix.
Hmm, okay, I'm really biased here! In my head I hear the song as much more intimate, and it's all about her vocal. That said, the balance between the elements is fine, except that I would want to hear the lead vocal half a dB louder. Also, when she comes in and sings the first words, her voice should really catch the listener 100%, so vocals up there! You're after that "empty warehouse sound", okay, but I think the overall feel could benefit if you put less reverb at least on the organ (perhaps some chorus instead) and also less reverb on the rattles of the snare. Right now it seems that the whole band plays in one row deep inside a warehouse, okay, except for the bass guitar. The kick could also benefit from some sub in the 60 Hz region to feel warmer.

Just some thoughts, cheers!
Reply
#13
(25-03-2019, 12:07 PM)AndyGallas Wrote: I listened to your 5th mix.
Hmm, okay, I'm really biased here! In my head I hear the song as much more intimate, and it's all about her vocal. That said, the balance between the elements is fine, except that I would want to hear the lead vocal half a dB louder. Also, when she comes in and sings the first words, her voice should really catch the listener 100%, so vocals up there! You're after that "empty warehouse sound", okay, but I think the overall feel could benefit if you put less reverb at least on the organ (perhaps some chorus instead) and also less reverb on the rattles of the snare. Right now it seems that the whole band plays in one row deep inside a warehouse, okay, except for the bass guitar. The kick could also benefit from some sub in the 60 Hz region to feel warmer.

Just some thoughts, cheers!

When I approached this song and listened to the individual tracks, what most defined the environment for me was the first guitar (panned right in my mix). The reverb embedded in that guitar track pretty much dictated how the rest of the ensemble would be presented. I went through several levels of intimacy for the guitars and the lead vocal. In the mix with the most intimate lead vocal, a very good friend and respected mixer told me he felt the vocal was disconnected from the mix, and I had to agree. As a result, I worked very diligently on finding the right environment to place her voice into: mostly early reflections and limited reverb with a relatively short decay of 2.08 seconds, in a narrow and diffuse field (so as to limit interference with the background vocals and drum effects). Hardly an empty warehouse. And I feel it holds on to the intimacy while connecting her to the rest of the ensemble.

The supplied chamber and room tracks were also a very good clue as to how to present this. Did you refer to them or use them in your approach? The drum reverb is less than half a second in a medium studio. As for the organ, yes, it is defined by the added reverb, which is panned opposite the dry mono track. I know many mixes treated this very differently than I did, but I like the way it lays back, stays highly defined, and never gets in the way of the guitars and vocals.

I'm not saying it is right, but it is pleasing to my ear and fulfills what I think is the character called for in the song. There is a melancholia about it, almost a surrender to fate, that I wanted to try to instantiate.

Thanks for your listen and comments.
PreSonus Studio One DAW
[email protected]
Reply
#14
(25-03-2019, 11:59 AM)Mark M.O.T.D. Wrote: Just sending one guitar/channel to Santa Fe doesn't make for a coherent mix. Humans have two ears, and depth perception depends on the brain being able to triangulate signals. If you pan things so wide that each (widely panned) instrument is effectively reaching the ears from one source, you end up with a flat, one-dimensional 'mix'.

There is a technique called LCR which places elements only hard left, center, or hard right, with nothing in between. It has been a mainstay of mixing for years, and a lot of depth perception can result from it. I did not use it fully in this mix, as my guitars are panned 75% left and right. The rest of the illusion is created by reverb and early-reflection panning.

Depth perception in a stereo field is generated by modifying the various ratios of direct to indirect sound, along with panning, level, EQ, delay and reverb. With all those variables there is a lot you can do and a lot you can screw up. There is also mid-side processing, which is relatively new tech (originally a microphone technique using an omni and a figure-8 mic) and was used in the mastering of my mixes for this song. Take away the panning (i.e., fold to mono) and the front-to-back depth should still be there; I try very hard to have my mono mixes translate my environment accurately. However, not all mixes are required to maintain any sense of reality, so exaggerating the separation of instruments is often a goal of spatial treatments: enhancing the clarity of individual parts, which is easier to do in ballads and simple ensemble arrangements.
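In case it helps to picture what I mean by juggling those ratios, here's a toy numpy sketch (nothing from my actual session; the signals, gains and numbers are made up purely for illustration) of how lowering the direct level, raising the reverb share, and constant-power panning combine into a crude front-to-back cue:

```python
import numpy as np

SR = 44100  # sample rate

def place_in_depth(dry, distance, pan, reverb_ir):
    """Crude distance cue: quieter, wetter as 'distance' (0..1) grows.
    'pan' runs -1 (hard left) .. +1 (hard right); returns an (N, 2) stereo array."""
    # Level drops and the direct/reverb ratio shifts toward reverb with distance.
    direct_gain = 1.0 - 0.6 * distance
    wet_gain = 0.1 + 0.8 * distance
    wet = np.convolve(dry, reverb_ir)[: len(dry)] * wet_gain
    mono = dry * direct_gain + wet
    # Constant-power pan law keeps perceived loudness steady across positions.
    angle = (pan + 1.0) * np.pi / 4.0          # 0 .. pi/2
    left, right = mono * np.cos(angle), mono * np.sin(angle)
    return np.stack([left, right], axis=1)

# Stand-ins: a decaying noise burst as the "guitar", decaying noise as the "room".
rng = np.random.default_rng(0)
guitar = rng.standard_normal(SR) * np.exp(-np.linspace(0, 6, SR))
room_ir = rng.standard_normal(SR // 2) * np.exp(-np.linspace(0, 8, SR // 2)) * 0.05

close_guitar = place_in_depth(guitar, distance=0.1, pan=-0.75, reverb_ir=room_ir)
far_guitar = place_in_depth(guitar, distance=0.8, pan=+0.75, reverb_ir=room_ir)
```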

While a frequency-following dynamic EQ can be part of the equation, it is by no means definitive of all the possibilities of environment creation.

Thanks for your comments.
PreSonus Studio One DAW
[email protected]
Reply
#15
While I agree with everything you've said so far, MitC, I would disagree that M/S is relatively new tech. Heck, Alan Blumlein's stereo patent mentions the concept back in the '30s. The Fairchild 670 had the Lat/Vert option, which is basically M/S compression. It's how vinyl records work.
Essentially anything that widens or narrows a stereo image is working on the principle of mid/side sum and difference. Yes, there are more plug-ins taking advantage of M/S and the technique has grown in popularity, but I believe that's due more to it being simpler and more efficient to implement in the digital world, as opposed to the hoops one had to go through in the analog world. Decoding an M/S microphone pair without a matrix took up three channels, which then had to be bussed to stereo if you wanted to compress or EQ the image. Sometimes I would just compress the mid microphone signal, though the results were never as dramatic as I hoped. There's a certain narrow zone where the interplay of mid and side signals works to create a stereo image, and by compressing too much I'd just get out-of-phase side signals as opposed to stereo. I didn't have as good a grasp on it then as I do now.

I don't bother much now, but you can M/S any stereo signal these days if you have a plugin with an M/S matrix in it. I know it's in Logic, and I'd reckon there must be a free one out there for any platform. Just put the matrix on to split the signal into mid and side, then whatever plugin you'd like to process with, then the matrix again to decode back to stereo.
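If anyone wants to see what the matrix is actually doing, here's a little toy sketch in Python (not any particular plugin, just the sum/difference math) that encodes a stereo buffer to mid/side, nudges the side up to widen the image, and decodes back:

```python
import numpy as np

def ms_encode(left, right):
    """Sum/difference matrix: mid carries what's common, side what's different."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Inverse matrix: back to left/right."""
    return mid + side, mid - side

# Toy stereo signal: a tone common to both channels plus uncorrelated noise.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
rng = np.random.default_rng(1)
left = np.sin(2 * np.pi * 220 * t) + 0.2 * rng.standard_normal(sr)
right = np.sin(2 * np.pi * 220 * t) + 0.2 * rng.standard_normal(sr)

mid, side = ms_encode(left, right)
side *= 1.5                                  # "process the image": boosting the side widens it
new_left, new_right = ms_decode(mid, side)
```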

Sorry, that's a lot of words, and I hope I'm fairly correct in what I'm mentioning. I've just long been fascinated with M/S, and sometimes it comes across as 'voodoo' to some people (not saying you); I've had people argue that it isn't even stereo. M/S with a cardioid-pattern mid microphone is mathematically the same as XY, depending on the ratio of mid and side signals.

I'm sure you know all this but maybe someone else will happen upon it and find it interesting.
Reply
#16
(25-03-2019, 05:30 PM)RoyMatthews Wrote: While I agree with everything you've said so far, MitC, I would disagree that M/S is relatively new tech. Heck, Alan Blumlein's stereo patent mentions the concept back in the '30s. The Fairchild 670 had the Lat/Vert option, which is basically M/S compression. It's how vinyl records work.
Essentially anything that widens or narrows a stereo image is working on the principle of mid/side sum and difference. Yes, there are more plug-ins taking advantage of M/S and the technique has grown in popularity, but I believe that's due more to it being simpler and more efficient to implement in the digital world, as opposed to the hoops one had to go through in the analog world. Decoding an M/S microphone pair without a matrix took up three channels, which then had to be bussed to stereo if you wanted to compress or EQ the image. Sometimes I would just compress the mid microphone signal, though the results were never as dramatic as I hoped. There's a certain narrow zone where the interplay of mid and side signals works to create a stereo image, and by compressing too much I'd just get out-of-phase side signals as opposed to stereo. I didn't have as good a grasp on it then as I do now.

I don't bother much now, but you can M/S any stereo signal these days if you have a plugin with an M/S matrix in it. I know it's in Logic, and I'd reckon there must be a free one out there for any platform. Just put the matrix on to split the signal into mid and side, then whatever plugin you'd like to process with, then the matrix again to decode back to stereo.

Sorry, that's a lot of words, and I hope I'm fairly correct in what I'm mentioning. I've just long been fascinated with M/S, and sometimes it comes across as 'voodoo' to some people (not saying you); I've had people argue that it isn't even stereo. M/S with a cardioid-pattern mid microphone is mathematically the same as XY, depending on the ratio of mid and side signals.

I'm sure you know all this but maybe someone else will happen upon it and find it interesting.

My statement on the newness of M/S processing is only a reaction to the inclusion of it in just about every plugin these days. In the analogue world in which we grew up, M/S was not an easy get and took a lot of patching to accomplish.
PreSonus Studio One DAW
[email protected]
Reply
#17
That's fair.
Reply
#18
(25-03-2019, 07:12 PM)RoyMatthews Wrote: That's fair.

I'm not sure about the analogy to vinyl. I was taught that vertical/lateral was left/right encoding, used in association with the RIAA curve (which limits the excursion of low frequencies and boosts high frequencies as noise reduction), and that this is why the low end is ingrained in the center of music mixes: the medium can't encode high-energy low frequencies anywhere except where the energy can be distributed equally across the left/right and up/down movement of the needle, within the limited space of the grooves on a spinning disc. Disc mastering back in the day was all about putting as much material on a disc as possible while avoiding over-modulation.
PreSonus Studio One DAW
[email protected]
Reply
#19
Here's what I read. [https://flypaper.soundfly.com/produce/vi...s-for-you/] Keep in mind this was a quick search. I can't speak to the accuracy of the site, and frankly I haven't had much experience with vinyl albums in years.

"Until very late in the 1950s, all records were mono (one channel). On a mono record, the stylus tracks lateral movement (side-to-side squiggles) only — the groove is a consistent depth. When record companies saw a market for two-channel stereo recordings, the vertical dimension of the groove (up-and-down hills and valleys) was right there for the taking!

But the most obvious solution — to use “lateral” as one channel and “vertical” as the other — presented at least two problems. For one, it would make stereo records incompatible with mono turntables (you’d only get one side instead of a summation of both). Another challenge: Large vertical excursions (hills) make for difficult tracking. The last thing we want is a ramp that launches the stylus right out of the groove!

Ultimately, a brilliant solution was conceived, albeit one requiring a bit of clever mathematics. With a little pencil-pushing, “left” (L) and “right” (R) channels can be rearranged as “mid” (M) and “side” (S).

“Mid” contains everything that’s the same in both L and R, and “side” contains everything that’s different between them. To encode into mid-side: (L+R)/2 = M and (L-R)/2 = S. To decode back into stereo, M+S = L and M-S = R. It might seem strange, but it works!

If the stereo signal is converted into mid/side for cutting, both aforementioned problems are solved. The “mid” channel moves in the lateral side-to-side dimension, resulting in perfect compatibility on a mono player. Remember: M = (L+R)/2. The “side” signal moves in the vertical dimension, and as long as bass frequencies are kept in-phase and close to the middle of the stereo field, there will be few large vertical excursions (read: “launch ramps”).

If you’ve ever encountered a Fairchild 670 stereo compressor (or one of the many plugin emulations), you may have noticed that its mid-side mode is called “LAT/VERT.” This stands for “lateral/vertical” and betrays the Fairchild’s early history as a disc-mastering compressor!

Even still, some rare types of stereo signal — phase issues and hard-panned bass frequencies, chiefly — can be challenging for your mastering engineer to cut. But a good engineer will have a few tricks up their sleeve to deal with even these rare circumstances."
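To tie that back to the matrix math above: fold the decoded stereo back down to mono and you get exactly the mid (lateral) signal, which is why a mono cartridge reading only side-to-side motion still hears the whole mix. A tiny sanity check with made-up numbers:

```python
import numpy as np

# Toy stereo sample values.
left = np.array([0.5, -0.2, 0.8])
right = np.array([0.1, 0.4, -0.3])

mid = (left + right) / 2.0    # cut as lateral (side-to-side) motion
side = (left - right) / 2.0   # cut as vertical (up-and-down) motion

# Decode back to stereo, then fold that stereo signal down to mono.
dec_left, dec_right = mid + side, mid - side
mono_folddown = (dec_left + dec_right) / 2.0

# The mono fold-down is exactly the lateral (mid) signal.
assert np.allclose(mono_folddown, mid)
```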
Reply
#20
(25-03-2019, 07:48 PM)RoyMatthews Wrote: Here's what I read. [https://flypaper.soundfly.com/produce/vi...s-for-you/] Keep in mind this was a quick search. I can't speak to the accuracy of the site, and frankly I haven't had much experience with vinyl albums in years.

"Until very late in the 1950s, all records were mono (one channel). On a mono record, the stylus tracks lateral movement (side-to-side squiggles) only — the groove is a consistent depth. When record companies saw a market for two-channel stereo recordings, the vertical dimension of the groove (up-and-down hills and valleys) was right there for the taking!

But the most obvious solution — to use “lateral” as one channel and “vertical” as the other — presented at least two problems. For one, it would make stereo records incompatible with mono turntables (you’d only get one side instead of a summation of both). Another challenge: Large vertical excursions (hills) make for difficult tracking. The last thing we want is a ramp that launches the stylus right out of the groove!

Ultimately, a brilliant solution was conceived, albeit one requiring a bit of clever mathematics. With a little pencil-pushing, “left” (L) and “right” (R) channels can be rearranged as “mid” (M) and “side” (S).

“Mid” contains everything that’s the same in both L and R, and “side” contains everything that’s different between them. To encode into mid-side: (L+R)/2 = M and (L-R)/2 = S. To decode back into stereo, M+S = L and M-S = R. It might seem strange, but it works!

If the stereo signal is converted into mid/side for cutting, both aforementioned problems are solved. The “mid” channel moves in the lateral side-to-side dimension, resulting in perfect compatibility on a mono player. Remember: M = (L+R)/2. The “side” signal moves in the vertical dimension, and as long as bass frequencies are kept in-phase and close to the middle of the stereo field, there will be few large vertical excursions (read: “launch ramps”).

If you’ve ever encountered a Fairchild 670 stereo compressor (or one of the many plugin emulations), you may have noticed that its mid-side mode is called “LAT/VERT.” This stands for “lateral/vertical” and betrays the Fairchild’s early history as a disc-mastering compressor!

Even still, some rare types of stereo signal — phase issues and hard-panned bass frequencies, chiefly — can be challenging for your mastering engineer to cut. But a good engineer will have a few tricks up their sleeve to deal with even these rare circumstances."

Thanks. I learned something. So many limitations that vinyl created, and as a direct result they shaped modern music's presentation. Outside of disc mastering, M/S was always a way of post-producing a stereo image when I was doing analogue recording, and even then it was rarely used, maybe owing to the rarity of figure-8 mics. No, I did not have Neumanns, or 414s, or the like. Plus the patching and phase flipping: a real pain in the butt. Now M/S is an enhancement on almost every mastering chain, not for compatibility but for sweetening and driving width. A completely different approach, yet some chains retain the ability to filter the low end so it remains mono-fied. That is the deference to a convention born from years of needing to place the low end in the middle just so the medium could convey it. Imagine what popular music would be like today if that constraint had never been there.
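To make that low-end handling concrete, here is a rough sketch (emphatically not my actual mastering chain, just the general idea, with a made-up cutoff and signal) of keeping the bass mono-fied by high-passing only the side channel before decoding back to stereo:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_the_lows(left, right, sr=44100, cutoff_hz=150.0):
    """Classic 'elliptical EQ' idea: everything below the cutoff stays in the mid,
    so the low end folds to the center while the top keeps its width."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    side = sosfilt(sos, side)            # strip the lows from the side channel only
    return mid + side, mid - side        # decode back to left/right

# Toy example: a hard-panned 60 Hz tone gets pulled back toward the middle.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
bass_left = np.sin(2 * np.pi * 60 * t)   # bass only in the left channel
bass_right = np.zeros(sr)
new_left, new_right = mono_the_lows(bass_left, bass_right, sr)
```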

I did a jazz mix on here a while back and placed the bass off to the side because that's where it seemed to be on the stage and sound field. I caught hell for it! I did not change it, however.
PreSonus Studio One DAW
[email protected]
Reply