Dark Ride: 'Hammer Down'
#91
(02-05-2023, 01:09 AM)mikej Wrote: Heh, I don't think it's actually that easy to explain phase by writing it out in a forum post after all, but I'll still give it a go.

Speaker cones move in and out and rest at what we could call a center position.

If I have a 1 kHz sine wave at -6 dB on one track, the speaker will move in and out at a frequency of 1 kHz.  If I duplicate it to another track I will get a 0 dB 1 kHz sine wave, as adding two identical signals of equal volume gives a 6 dB increase in volume (I hope I've got that right).  The speaker will move in and out, but a bit further (louder).  If I invert the polarity of one track, i.e. effectively put the signal 180 degrees out of phase, the two signals will cancel each other out, and we'll get silence: one track is moving the speaker in, and the other track is pushing the speaker out by the same amount. Sliding one of the tracks left and right will increase or decrease the volume.  This is basically why we ensure snare top and bottom mic signals are lined up and in phase, so the signals don't cancel each other out, in simple terms.
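To put numbers on that, here's a minimal numpy sketch (the 48 kHz sample rate and the -6 dB level are arbitrary choices):

Code:
import numpy as np

sr = 48000                            # sample rate, arbitrary choice
t = np.arange(sr) / sr                # one second of sample times
x = 10 ** (-6 / 20) * np.sin(2 * np.pi * 1000 * t)   # 1 kHz sine, -6 dB peak

def peak_db(s):
    return 20 * np.log10(np.max(np.abs(s)))

print(peak_db(x))              # about -6.0 (one track alone)
print(peak_db(x + x))          # about  0.0 (duplicated: a 6 dB increase)
print(np.max(np.abs(x - x)))   # 0.0 (inverted polarity cancels to silence)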

*This is polarity switching, and polarity ≠ phase. Polarity is a function of negative and positive in sound pressure and voltage. Phase is a function of time. One has nothing to do with the other. After you switch the polarity, I can still make it out of phase by moving it.

EQ works by effectively delaying the signal at a specific frequency and mixing it back in with the original signal (at a specific volume).  This creates the phase shift.  This signal combined in phase could be an EQ 'boost', and out of phase could be an EQ 'cut'.
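A toy illustration of that delay-and-mix idea, using a one-sample delay (deliberately crude, just to show the principle; the gain g is an arbitrary choice):

Code:
import numpy as np
from scipy.signal import freqz

sr = 48000
g = 0.5                        # how much of the delayed copy to mix back in

# y[n] = x[n] + g * x[n-1]: the original plus a one-sample-delayed copy
w, h = freqz([1.0, g], fs=sr)

print(20 * np.log10(np.abs(h[0])))           # about +3.5 dB at DC (in phase)
print(20 * np.log10(np.abs(h[-1])))          # about -6 dB near 24 kHz (opposed)
print(np.degrees(np.angle(h[len(h) // 2])))  # about -27 degrees mid-band

Real EQs use feedback and longer delays to place the boost or cut at a chosen frequency, but it's the same delay-and-mix trick.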

*This, according to my understanding, is latency, referring to the time it takes for the EQ to react. And since it involves time, phase can happen here. It is also the reason why a boost or cut creates phase shift equally, though we don't hear them equally; it's easier to detect on a boost.


Well, think about it: a processor by definition will only give us the final product after processing the signal. This is the principal difference between a processor and an effect. An effect gives us both the original and the processed signal, but a processor gives us only the processed signal. In order to do that, an EQ must take time, no matter how small, to receive the signal in its original form, process it (boost/cut) and spit out the final product. All that takes time, and whatever time it takes will delay the signal that much, thus creating phase in the process. The more processors one uses, the more phase one makes. I suspect it is the same for compressors/gates/limiters as well, since they are processors. So the less we use the better.


So I think you will only hear 'problems' with the resulting phase shift if there is another similar correlated signal present, where the frequencies will be added, or taken away, as in the example above.  So you might possibly hear phase shift (frequencies being added or taken away) when, for example, you EQ a snare drum top mic (frequencies could be added or subtracted when the EQ'd top mic signal combines with the bottom mic signal).

[Ok, so there is a filter type called an 'all-pass' filter.  This effectively creates a 180 degree phase shift at a particular frequency.  This is how a phaser effect works: this filter is swept up and down the frequency range, and the filtered signal is combined with the original.  This creates the 'phasing' sound.]
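Here's a rough sketch of one (static) stage of that idea: two cascaded first-order all-pass filters hit 180 degrees at a chosen frequency, and mixing back with the dry signal carves a notch there. The coefficient formula is one common textbook form; the numbers are arbitrary:

Code:
import numpy as np
from scipy.signal import freqz

sr = 48000
fc = 1000.0                            # notch/center frequency, arbitrary
k = np.tan(np.pi * fc / sr)
a1 = (k - 1) / (k + 1)                 # first-order all-pass coefficient

# One stage H(z) = (a1 + z^-1) / (1 + a1 z^-1): flat magnitude,
# -90 degrees of phase shift at fc.  Two cascaded stages: -180 at fc.
b = np.polymul([a1, 1.0], [a1, 1.0])   # cascade numerator
a = np.polymul([1.0, a1], [1.0, a1])   # cascade denominator

# 50/50 mix of dry and all-passed signal: 0.5*(1 + B/A) = 0.5*(A + B)/A
w, h = freqz(0.5 * (a + b), a, worN=4096, fs=sr)
print(w[np.argmin(np.abs(h))])         # about 1000 Hz: the notch

Sweep fc up and down with an LFO and you get the classic phaser swoosh.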

So, my thinking is that although phase with regards to EQ has an effect on correlated signals, e.g. snare top and bottom mics, or overheads and kick in/out mics, you will be making EQ decisions based on what you actually hear.  This is why I don't generally worry about phase as such when it comes to EQ.

*Phase happens everywhere, but we hear it most clearly (most destructive/constructive) in the low end. And yeah, man, we just have to live with it and try to minimize it as best we can, right?

Minimum phase EQs minimize phase shift, and linear phase EQs I think have no phase shift at all, so that is why you might choose to use these on multi-mic'd signals.  I think they work by somehow delaying the original signal to match the 'delayed' signal, hence the added latency.  This is about where my knowledge runs out and I'd have to look it up :).
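As a quick illustration of the linear-phase idea, a symmetric FIR filter (used here as a stand-in for a linear-phase EQ) delays every frequency by the same amount, so there is latency but no relative phase shift:

Code:
import numpy as np
from scipy.signal import firwin, group_delay

# A symmetric (linear-phase) FIR low-pass: every frequency is delayed by the
# same (ntaps - 1) / 2 samples, so there is overall latency but no relative
# phase shift between frequencies.
ntaps = 101
taps = firwin(ntaps, cutoff=2000, fs=48000)
w, gd = group_delay((taps, [1.0]), fs=48000)
print(gd[:5])          # all about 50.0 samples: constant delay everywhere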
#92
(02-05-2023, 10:46 PM)SonicTramp Wrote: *This is polarity switching, and polarity ≠ phase. Polarity is a function of negative and positive in sound pressure and voltage. Phase is a function of time. One has nothing to do with the other. After you switch the polarity, I can still make it out of phase by moving it.

Not quite.  Inverting polarity and shifting the phase of a signal by 180 degrees will have the same result.  Try it.
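It's a two-minute experiment in numpy (pure 1 kHz sine, arbitrary sample rate):

Code:
import numpy as np

sr, f = 48000, 1000
n = np.arange(sr)
sine = np.sin(2 * np.pi * f * n / sr)

inverted = -sine                                    # polarity flip
shifted = np.sin(2 * np.pi * f * n / sr + np.pi)    # 180 degree phase shift

print(np.allclose(inverted, shifted))               # True: identical for a sine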

Quote:*This, according to my understanding, is latency, referring to the time it takes for the EQ to react. And since it involves time, phase can happen here. It is also the reason why a boost or cut creates phase shift equally, though we don't hear them equally; it's easier to detect on a boost.

No, I don't mean latency as in the additional delay introduced by a plugin.  The delay I am referring to above is how EQs actually work.

I think you are slightly misunderstanding :).
#93
(02-05-2023, 11:03 PM)mikej Wrote: Not quite.  Inverting polarity and shifting the phase of a signal by 180 degrees will have the same result.  Try it.
My understanding is that essentially SonicTramp is right. Phase is relative. You can invert the polarity of a single speaker or signal with no difference in the sound (theoretically speaking; real world, YMMV). It won't make any difference. When multiple signals interact they can be in phase or out of phase (by however many degrees over time), which introduces constructive or destructive interference in the signals.
'Phase' switches on audio equipment are a bit of a misnomer, but the name makes sense because they switch the polarity of the audio in service of keeping it in relative phase with other signals in the audio path.

I don't think I'm explaining this well and my brain no work well right now but that's the pedantic gist.

Here's a quick link on the subject:
https://ethanwiner.com/EQPhase.html

In the end, don't worry about EQ phase shift.
#94
Quote:Not quite.  Inverting polarity and shifting the phase of a signal by 180 degrees will have the same result.  Try it.

*No need to try; I know it works that way. But switching polarity can only flip it 180 degrees and nowhere else, not, say, 156 degrees, whereas phase can happen at any time. We all know that there is partial cancellation/addition in phase, not exactly at 180 all the time. That's the principal difference. Phase is not either/or, but polarity is. That is why they came up with those definitions, so we can distinguish them correctly. By definition, polarity is a function of negative/positive in sound pressure and voltage. Phase is a function of time. They, clearly, are not related.
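A quick numpy sketch of that point (1 kHz sine, arbitrary sample rate): a phase offset can land at any angle and give partial addition or cancellation, while flipping polarity only ever gives you the 180 degree case.

Code:
import numpy as np

sr = 48000
n = np.arange(sr)
x = np.sin(2 * np.pi * 1000 * n / sr)

def sum_level_db(deg):
    # sum x with a copy of itself shifted by `deg` degrees of phase
    shifted = np.sin(2 * np.pi * 1000 * n / sr + np.radians(deg))
    return 20 * np.log10(np.max(np.abs(x + shifted)))

for deg in (0, 90, 156, 180):
    print(deg, round(sum_level_db(deg), 1))
# 0   -> +6.0   full addition
# 90  -> +3.0   partial addition
# 156 -> -7.6   deep partial cancellation
# 180 -> about -300 (silence, down at floating-point noise)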


Quote:No, I don't mean latency as in the additional delay introduced by a plugin.  The delay I am referring to above is how EQs actually work.

I think you are slightly misunderstanding :).

*Latency is a synonym for delay. In telecommunications, low latency is associated with a positive user experience (UX) (this is why we prefer low latency), while high latency is associated with poor UX. In computer networking, latency is an expression of how much time it takes for a data packet (aka a signal) to travel from one designated point to another.
#95
All pedantry accepted. I know what I wrote :D.

Cheers!

Edit, just to add: in my original post you might have missed the word 'effectively'...
#96
Thanks, Roy, for posting the link and the better explanation of polarity/phase. Through both of you I understand a lot more now and, almost, know what to do.

I read the article (stuck at the first paragraph) but still have some questions; perhaps one of you can explain and clear some of the fog. Thanks in advance.

In Ethan Winer's first paragraph, last sentence:

"Therefore analog equalizers work by intentionally shifting phase, and then combining the original signal with the shifted version. In fact, without phase shift they would not work at all!"

1. "intentionally". Does that mean the engineers didn't have to? I wonder the reason behind it. Destructive phase is detrimental to a mix and constructive phase is nothing but fader up, an easier move than, say, creating phase to get it. I don't think it's intentional. They just cannot go around it is more like it. Electronic components by themselves create latency=phase. Once any of them is installed, phase happens. There are hundreds of them inside any analogue EQ. This statement, imo, is false.
2. "then combined...". This is not how a processor works. I can always ask: why would I want to have both? Why waste so much resources to give me something I don't want to start with?

These statements erode the trust I reserve for the writer in general. But I'll give it the benefit of the doubt and will be happy if someone can explain. Thanks in advance.
#97
The short answer to your questions would be:

1. Yes, intentionally, as that's how [analog] EQ works.
2. Yes, this is how [analog] EQ works.

The longer answer is:

EQ is made of delay: https://www.youtube.com/watch?v=uyZyyQgxL0c

(This probably explains it better than my attempt in the previous post...)
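And a small scipy sketch of the 'combine the shifted copy with the original' idea, in the same spirit (the one-pole low-pass here is my own stand-in for the shifted version; the corner frequency and the two gains are arbitrary). The very same filtered copy gives a boost when mixed back with a positive sign and a cut with a negative sign, which is why the EQ wants both signals:

Code:
import numpy as np
from scipy.signal import freqz

sr = 48000
fc = 200.0                            # shelf corner frequency, arbitrary
c = np.exp(-2 * np.pi * fc / sr)      # one-pole low-pass coefficient

# The shifted copy here is a one-pole low-pass of the input (any filter
# delays/phase-shifts what it passes).  The whole "EQ" is just:
#     out = in + g * lowpassed(in)
for g in (1.0, -0.5):                 # g > 0: combine in phase (boost)
                                      # g < 0: combine opposed (cut)
    b = [1.0 + g * (1.0 - c), -c]     # numerator of 1 + g*(1-c)/(1 - c*z^-1)
    a = [1.0, -c]
    w, h = freqz(b, a, fs=sr)
    print(g, round(20 * np.log10(np.abs(h[0])), 1))   # level at DC
# 1.0  -> +6.0 dB (low shelf boost)
# -0.5 -> -6.0 dB (low shelf cut)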

Hope that helps?
#98
Also, "Destructive phase" isn't detrimental to a mix. Destructive and Constructive interference are just physics terms and not a statement of quality. They're the same thing just some parts of a signal interact with another part and that can lead to an increase or decrease of certain frequencies.
Electronic components don't necessarily cause any phase issues. A signal can run through a device just fine; phase isn't an issue when it's simply traveling through the circuit. Phase comes into play when an EQ boosts or cuts a particular frequency band, because of the phase difference between the signal in the EQ section and the original signal. Like Mike said, that's how EQs work.
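The 'summed with another signal' part is easy to see with a two-mic style sketch in numpy (the 1 ms offset is an invented number, roughly a 34 cm path difference; the circular delay is just to keep the FFT math tidy):

Code:
import numpy as np

sr = 48000
rng = np.random.default_rng(0)
src = rng.standard_normal(sr)        # stand-in for a broadband source

# The second "mic" hears the same source 1 ms later (about 34 cm further away)
delay = int(0.001 * sr)              # 48 samples
mix = src + np.roll(src, delay)      # circular delay keeps the FFT math exact

spec = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(sr, 1 / sr)
band = (freqs > 300) & (freqs < 700)
print(freqs[band][np.argmin(spec[band])])   # 500.0: the first comb notch

Notches repeat at 1500 Hz, 2500 Hz and so on; that's the comb filtering you fight when two mics pick up one source at different distances.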
#99
Thanks for trying, but somehow my questions are not quite answered. I have to chew on it for a while and plan another... attack. One of the ways to get to the truth, imo. A quick look at what transistors do tells me I am on solid ground as far as electronic components themselves creating delay = latency = phase. The question is how many of them, on average, go into an analog EQ. I still hold that they couldn't avoid it even if they wanted to, for delay is inherent in any processor. I take his statement as someone who tripped and said "I meant to do that". Transistors, among other things, are processors. Well, I guess this is what Neil deGrasse Tyson meant when he said, and I find myself in the same place, "I know enough to know that I am right but not enough to know that I am wrong." I just want to know more, and the only way for me to do it is to ask and hold my ground until it's no longer solid.
In the end, the only way we can deal with processing audio is through volume and time. And any 'delay' in an EQ is happening at electrical speeds.

If a signal is processed at the same time, there is no phase issue. Phase comes into play when a signal is summed relative to another signal, be it another track, sound bouncing off different walls, or the frequency band in an EQ summed with the original signal. It's the only way it works. It's the way microphone patterns work. It's the way our ears work together to locate sound in space.