Posted: October 19, 2012 06:00 pm
Forum Addict ++
Group: Trusted Members
Member No.: 3,558
Joined: November 06, 2005
I'm looking at matched filtering and I read that it uses convolution. So I went off and looked up convolution. Now I understand that convolution finds the amount of overlap between two functions, which is good for matched filtering: I can clearly see that one function would be the signal the system is looking for, and the other would be the input signal.
The thing I don't understand in the equation (as seen here): what are t and tau?
I want to say that t is the offset between the two functions, but I have no idea what tau does.
If I'm correct about t... in a matched filter with incoming signals, calling the signal-match function f and the incoming signal g, would one say that the signal-match function doesn't require an offset, and that the offset is used to 'move' the incoming signal through the filter (into line with the signal-match function)?
I've looked at three or four websites and none of them describe t and tau properly.
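To make the "offset" idea concrete, here is a minimal sketch (my own illustration, not from any of those sites) of a discrete matched filter: the template stays put, and t slides the incoming signal past it. The function and variable names are just made up for the example.

```python
# Sketch: a matched filter scores how well the known template lines up
# with the signal at each offset t (a sliding dot product).
def matched_filter(template, signal):
    out = []
    # t is the offset by which the signal is 'moved' past the template
    for t in range(len(signal) - len(template) + 1):
        out.append(sum(template[k] * signal[t + k] for k in range(len(template))))
    return out

template = [1, -1, 1]               # the signal the system is looking for
signal = [0, 0, 1, -1, 1, 0]        # incoming signal: template buried at offset 2
scores = matched_filter(template, signal)
peak = scores.index(max(scores))    # offset where the template best matches
```

The peak of `scores` lands at offset 2, exactly where the template sits inside the signal, which is the sense in which only one of the two functions needs an offset.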
Posted: October 19, 2012 11:35 pm
Forum Addict ++
Group: Cleanup Taskforce
Member No.: 6,662
Joined: October 08, 2006
The integral is with respect to tau, i.e., every other variable is fixed and the only thing varying inside the integral is tau. You set t from the outside, and then when you compute the integral, that value of t stays fixed whilst tau is varied from -inf to inf. That gives you a single number corresponding to the value of t you chose. You can repeat this for any other t, so you end up with a series of numbers where each point represents the convolution at that value of t.
Posted: October 19, 2012 11:59 pm
Forum Addict ++
Member No.: 73
Joined: July 24, 2002
*tau, not tou
The digital (discrete time) form is actually easier to visualize. Suppose you take a number (representing the value of your function at time t = 0) and put it into a queue. Then at t = Ts, you take another sample, and so on for every t = n*Ts. Now you've got numbers going into a queue, and after N samples you remove one each time you add one, so there's always a total of N numbers in the queue.
This is a powerful representation now. Suppose you summed all the samples in the queue. You get the average (well, the average * N, so divide by N for the actual average). If you examine the output while varying the input, you'll see you've created an FIR (finite impulse response) filter: if the first sample had a value of 1.0 and all others were 0, then while that "one" is travelling through the queue, the average holds at 1/N for N samples. You get the same total energy (1/N height * N samples = 1 * 1 area), but spread out, so it goes "slower".
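That queue-and-average picture can be sketched in a few lines (a toy illustration of the idea, not production DSP code):

```python
from collections import deque

# Moving-average FIR as a fixed-length queue: each new sample pushes the
# oldest one out, and the output is the mean of whatever is in the queue.
N = 4
queue = deque([0.0] * N, maxlen=N)

def step(x):
    queue.append(x)               # oldest sample falls off automatically
    return sum(queue) / N

impulse = [1.0] + [0.0] * 7       # a single "one" entering the filter
response = [step(x) for x in impulse]
# while the impulse is inside the queue the average holds at 1/N (0.25),
# then drops back to zero once the "one" has passed through
```

The response is 1/N high for exactly N samples: same area as the input impulse, but spread out.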
Suppose that you do the same averaging trick, but now you do a weighted average. Maybe you've assigned the weights to a particularly useful windowing function, something that starts out zero, goes up slowly, until the samples in the middle are heavily weighted, then tapers off to zero again on the other side. Now as an impulse travels through, the average rises and falls gradually, so that instead of a flat top, you get a humpy shape: slowly-varying signals contain little high-frequency energy, which means you've achieved a very useful low-pass filter. If you now send sine waves through this filter, you can observe the phase shift and attenuation at various frequencies, and see that it performs nicely indeed.
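The weighted version is the same loop with per-tap weights. A sketch using a Hann window as the "starts at zero, peaks in the middle, tapers off" shape (the window choice is just one example of such a function):

```python
import math

# Weighted FIR: each position in the queue gets a weight from a window
# that rises from zero, peaks in the middle, and tapers back to zero.
N = 8
weights = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def fir(signal):
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for k in range(N):           # weighted sum over the queue contents
            if t - k >= 0:
                acc += weights[k] * signal[t - k]
        out.append(acc)
    return out

impulse = [1.0] + [0.0] * (N - 1)
response = fir(impulse)              # the impulse response traces out the window
```

Send an impulse through and the output traces out the window itself: a smooth hump instead of the flat top, which is exactly the low-pass behaviour described above.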
If you imagine that this process is going on, but with continuous time rather than buckets-O'-signal, you have (analog) convolution.
Answering questions is a tricky thing to practice. Not because of the difficulty of formulating or locating answers, but because of the human inability to ask the right questions; a skill that, were one to possess it, would put them in the "answering" category.