
> Gibbs Effect With Power?
Shocker
Posted: August 25, 2015 09:01 pm

Hi,

I understand that the power spectral density of a sine wave decreases as the frequency is increased. I've just MATLAB'd the Gibbs effect, but I also understand that regardless of how many harmonics are included, there will always be overshoot and ringing. So I'm curious about combining the two to get a good approximation to a real-world signal. Am I thinking along the correct lines? Is this possible? And if so, how should I go about doing it?
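
For reference, a minimal sketch along the lines of what I ran (the sample and harmonic counts are arbitrary):

[code]
% Partial Fourier sums of a unit square wave: the ~9% overshoot
% (Gibbs) stays put no matter how many harmonics are included.
t = linspace(0, 1, 20000);          % one period
for N = [9 99 999]                  % highest odd harmonic included
    s = zeros(size(t));
    for n = 1:2:N
        s = s + (4/pi) * sin(2*pi*n*t) / n;
    end
    fprintf('N = %4d: peak = %.4f (ideal 1.0000)\n', N, max(s));
end
[/code]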


I'm curious about doing this because I want to know what a 300MHz signal would look like if it were created in an ideal environment. I also want to see what it looks like if I were to filter out the harmonics above some nth order.



Thanks
Sch3mat1c
Posted: August 25, 2015 11:37 pm

Eh..?

Where does this come from:
"the power spectral density of a sine wave decreases as the frequency is increased"
?


A literal sine wave (v(t) = A sin(w_0 t)) has a PSD consisting of a pair of delta functions at w = +/- w_0. They're infinitely tall and infinitesimally thin, and those aspects are independent of frequency.

Gibbs' phenomenon is not an aspect of sinusoidal signals or of uniformly convergent harmonic series. It is a property of the Fourier series / spectrum of a function with a step discontinuity (which, for the periodic case, includes square and sawtooth waves).

What kind of 300MHz signal would you be looking to create? What do you mean by "300MHz"? The period? The bandwidth?

Tim


--------------------
Answering questions is a tricky subject to practice. Not due to the difficulty of formulating or locating answers, but due to the human inability to ask the right questions; a skill that, were one to possess it, would put them in the "answering" category.
Shocker
Posted: August 26, 2015 03:13 pm

Maybe I've interpreted things incorrectly. In Howard Johnson's book he speaks of taking the power spectral density of the output of a D-type flip flop, and he shows a graph in which the spectral density rolls off significantly past the 'knee frequency' (at -6dB), which is about 10 times the fundamental frequency. The graph shows the amplitude in dBV decreasing as the harmonic number increases.

I therefore translated this to mean that as frequency increases, the power decreases.


I'm looking to create a square pulse train where the on phase lasts 3.33ns (i.e., one period of 300MHz).

I.e.:
[attached image: sketch of the intended pulse train]


Without reading much into the Gibbs phenomenon, I fully expected the percentage of overshoot to decay as more harmonics were added, and I speculated that the power of each harmonic needed to be altered for that to happen. Obviously I was wrong about this.

Is there a way to achieve what I'm trying to do?
Sch3mat1c
Posted: August 26, 2015 09:32 pm

Your terminology still sounds a bit haphazard, so I'm going to run over some definitions first:

Since we're talking digital domain, all signals can be assumed to be square in some sense. That's fine.

Normally when you speak of pulses, you're referring specifically to the "on time" aspect: the repetition rate (time between rising edges) might vary, or the pulses may come asynchronously or randomly, so it's not meaningful to speak in terms of a periodic function. Contrast that with a square wave (where the time between rising edges, and between falling edges, is reasonably constant) or pulse width modulation (the edges or pulse centers are periodic, but the edges vary with respect to each other).

The "on" state of a pulse might be high or low, positive or negative; this is merely a digital convention, whether the signal is active-high or active-low, and what voltage ranges "high" and "low" are defined as. Remember, digital is a subset of analog, produced by applying these definitions.

Frequency: this is the rate at which an event repeats. For a square wave, it's the reciprocal of the time between successive occurrences of the same edge (rising to rising, say). It isn't meaningful or accurate to speak of the frequency of a single pulse as F = 1/Tpw; that's a time measured between different edges, i.e., a pulse width, not a repetition rate. (The pulse width does have consequences for the PSD, though.)

Now, harmonics, Fourier, pulse width, rise time, and so on:

If you have a square wave, then you have several aspects of it to consider:
- Frequency (periodic rate)
- Rise and fall time
- Pulse width or duty cycle (symmetry)
- Overshoot, undershoot, droop, etc. (long time constants; how flat are the top and bottom nominally-constant-voltage portions)
- Overshoot, undershoot, ringing, etc. (short time constants; how rounded is the edge)

The theoretical ideal of a square wave (50% duty cycle, zero rise time) has harmonic amplitudes given as Vpk * (4/pi) / N, for all odd N.
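
If you want to check that numerically, here's a quick sketch (the sample count is arbitrary):

[code]
% Check the (4/pi)/N amplitude law against the FFT of an ideal
% 50%-duty square wave (Vpk = 1).
M = 4096;                             % samples per period
x = [ones(1, M/2), -ones(1, M/2)];    % one period
X = abs(fft(x)) / (M/2);              % bin n+1 holds harmonic n's amplitude
for n = 1:2:9
    fprintf('n = %d: FFT %.4f, (4/pi)/n = %.4f\n', n, X(n+1), (4/pi)/n);
end
[/code]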

This is equivalent to a pulse train of alternating deltas (i.e., infinitely tall, infinitesimally thin pulses) sent through a filter. This type of function is an eigenfunction of the Fourier transform (a complicated and usually poorly explained concept, but in this case I mean: a graph of the function, and a graph of its Fourier transform, look identical, give or take a change of horizontal or vertical scale). So its transform is also a train of deltas: in other words, harmonics of constant amplitude.

Sent through a -20dB/dec filter, the harmonics drop off at a 1/N rate, forming a square wave. (Which is easy enough to see from the definition of the delta function: integrating it creates a constant step change. An integrator is a 1/N filter at all frequencies, and can be approximated by a lowpass filter whose cutoff frequency is well below the fundamental.)
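
To illustrate, a sketch with a one-pole lowpass standing in for the -20dB/dec filter (the cutoff value is an arbitrary choice):

[code]
% A train of alternating deltas, run through a one-pole lowpass whose
% cutoff sits far below the fundamental, comes out (nearly) square.
M = 1000;                              % samples per period
P = 20;                                % number of periods
d = zeros(1, M*P);
d(1:M:end)     =  1;                   % +delta at each period start
d(M/2+1:M:end) = -1;                   % -delta half a period later
alpha = 1e-4;                          % cutoff << fundamental: ~integrator
y = filter(alpha, [1, -(1-alpha)], d); % y(k) = (1-alpha)*y(k-1) + alpha*d(k)
plot(y / max(abs(y)));                 % note the droop; it shrinks with alpha
[/code]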

You could imagine a square wave of asymmetric duty cycle as somewhere in between a train of deltas and a proper square wave. What I'm getting at here is, a train of deltas has all harmonics (not odd only). So you should expect that a duty cycle other than 50% has all harmonics, if in varying proportion.
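
A quick check (25% duty is an arbitrary choice; note that harmonics at multiples of 4 happen to null out at exactly that duty):

[code]
% FFT of a 25%-duty pulse train: even harmonics show up too.
M = 4096;
x = [ones(1, M/4), zeros(1, 3*M/4)];   % one period, 25% duty
X = abs(fft(x)) / (M/2);
fprintf('harmonics 1..6: ');
fprintf('%.3f ', X(2:7));
fprintf('\n');
[/code]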

Triangle wave:

When you do the same calculation for a triangle, you get odd harmonics as 1/N^2 instead, which shouldn't be a surprise, as a triangle is the integral of a square.
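
Same check as before, now against (8/pi^2)/N^2 for a +/-1 triangle:

[code]
% Odd harmonics of a triangle wave fall as 1/N^2, with amplitude
% (8/pi^2)/N^2 for a +/-1 triangle.
M = 4096;
t = (0:M-1) / M;
x = 1 - 4*abs(t - 0.5);               % one period: -1 at t=0, +1 at t=0.5
X = abs(fft(x)) / (M/2);
for n = 1:2:7
    fprintf('n = %d: FFT %.5f, (8/pi^2)/n^2 = %.5f\n', n, X(n+1), (8/pi^2)/n^2);
end
[/code]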

Trapezoid wave:

What happens when you clip a triangle, or add ramps to a square wave (giving it finite rise time)?

Well, you know that if the rise time is zero, the harmonics go as 1/N, and if it's 100% (a full triangle), they go as 1/N^2. This must be some kind of intermediate case. How to tell? As it turns out, there is a break frequency where the rolloff goes from 1/N to 1/N^2. What frequency? It's on the order of 1/risetime.
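
A sketch showing the corner (the 2% rise time is an arbitrary choice; the edges are made by smearing the square's edges into linear ramps with a boxcar):

[code]
% Trapezoid spectrum: 1/N rolloff breaking toward 1/N^2 past a corner
% on the order of 1/risetime (here, around harmonic N ~ M/(pi*L)).
M  = 8192;
sq = [ones(1, M/2), -ones(1, M/2)];       % ideal square, one period
L  = round(0.02 * M);                     % rise time = 2% of the period
k  = ones(1, L) / L;                      % boxcar: smears edges into ramps
x  = real(ifft(fft(sq) .* fft(k, M)));    % circular convolution
X  = abs(fft(x)) / (M/2);
loglog(1:2:999, X(2:2:1000) + eps);       % odd harmonics only
xlabel('harmonic N'); ylabel('amplitude');
[/code]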

What if the square wave is additionally rounded, peaked, etc.?

Rounding occurs due to further attenuation. One would suppose a real signal is always band limited in an absolute sense; not that the harmonics simply end (a brick wall filter), but that they decrease quickly, perhaps exponentially or hyperbolically, past some cutoff frequency. This would be consistent with quantum physics, which says you can't have quanta of higher energy than the energy scale you're working with.

Silicon apparently rolling off in the <1THz range implies about 4meV, which is probably doing pretty well against the ~26meV thermal energy and 1.1eV band gap. Black body radiation cuts off exponentially above the peak emission frequency: at room temperature, the average thermal energy is 26meV, the peak is at 16THz (2.5 times the average, which tells you that most of the particles are in lower energy states, but the most intense emission comes from the few in higher energy states), and beyond 32 or 64THz (> 264meV) there's essentially nothing. (That tail still matters for semiconductor purposes: it's just enough to loosen a few parts-per-trillion of electrons across undoped silicon's 1100meV gap, so just because it's small doesn't mean it's fully zero!)

So one should expect that, for practical or physical reasons, bandwidth is necessarily limited; but the crossover from "oh, these harmonics are pretty strong" to "well, we can stop counting here, this is useless" is a mathematically foggy range. For single precision calculations, you can stop counting by the millionth or so harmonic for square waves, or perhaps some thousands or fewer for an exponentially limited signal.

So, to synthesize a nice square wave:

Try going as 1/N until some break frequency, then 1/N^2 beyond that, then as 1/exp(F) past the ultimate frequency. (The usual way to add an exponential cutoff is to multiply by 1 / (1 + exp(x)), scaled appropriately.) One catch: phase shift needs to go along with this, otherwise the interference pattern will be all wrong (and you'll still get weird edges). It might be better to simply start with the wave in the time domain, apply the filters as difference equations (i.e., discrete-time differential equations), then DFT to find the spectrum.
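
A sketch of that recipe, summing in the frequency domain (the break and cutoff harmonics are arbitrary choices, and I've left the phase at zero, i.e., a zero-phase filter, which keeps the edges symmetric):

[code]
% Synthesize a "nice" square wave: 1/N to a break harmonic,
% 1/N^2 beyond, with a smooth logistic cutoff at the top end.
Nb = 15;                            % break harmonic (~1/risetime); assumed
Nc = 101;                           % ultimate cutoff harmonic; assumed
t  = linspace(0, 1, 8192);
y  = zeros(size(t));
for n = 1:2:301
    A = (4/pi) / n;                 % ideal square-wave amplitude
    if n > Nb
        A = A * (Nb / n);           % extra 1/N past the break
    end
    A = A / (1 + exp((n - Nc)/5));  % logistic cutoff, width ~5 harmonics
    y = y + A * sin(2*pi*n*t);      % zero phase: symmetric edges
end
plot(t, y);
[/code]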

The rub about the exponential cutoff is this: you could approximate it as a series of filter poles, but the ideal series is infinite. Which means you're trading infinite harmonics (with some cutoff profile) for infinite filter poles: no free lunch.

A final note about scales:

Scale doesn't matter at all. So you want a 3.33ns pulse; so what? It's no different, structurally, from a 1s pulse, or a 1fs one.

It's almost always better to reduce units to ratios wherever possible, and keep the scale factors on the outside. The math is easier, the numbers are more accurate (although floating point more-or-less addresses that for the most part), and you get more insight into how variables relate to one another.

As a simple example, consider an RC filter: you can pass around R and C as-is, but pretty soon you'll tire of it, and wish to express everything as tau = R*C instead. Saves a keystroke, right? As you get into bigger filters and bigger analyses, you want to use more groupings and ratios. You might find it's even better to go with expressions of the form f/f_0, where f_0 = 1 / (2*pi*R*C). You're apparently dividing twice, which looks like more work, but you're removing more dimensions (the expression f/f_0 is a pure ratio), which makes things better.
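
For instance (component values are placeholders):

[code]
% An RC lowpass two ways: raw R and C, versus the ratio f/f0.
% Same curve; the normalized form carries no units.
R  = 1e3;  C = 1e-9;                      % placeholder values
f0 = 1 / (2*pi*R*C);
f  = logspace(log10(f0/100), log10(f0*100), 200);
H_raw  = 1 ./ (1 + 1i*2*pi*f*R*C);        % depends on R, C, f separately
x      = f / f0;                          % dimensionless frequency
H_norm = 1 ./ (1 + 1i*x);                 % depends only on the ratio
fprintf('max difference: %g\n', max(abs(H_raw - H_norm)));
semilogx(x, 20*log10(abs(H_norm)));       % -3dB lands exactly at f/f0 = 1
xlabel('f / f0'); ylabel('|H| (dB)');
[/code]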

And those ratios, in turn, tend to be most significant (i.e., in the range of 1.0) around characteristic points (like the cutoff frequency). So as you graph different terms or groupings, you see where important things happen, too.

And you can take interesting snippets of equations, like 1/(1+x), and combine them with different scales and offsets to get interesting curves, or compose with other functions, like exp(x) (again with options for different scales inside and out of each function), to get further interesting shapes. (The function 1 / (1 + exp(x)) pops up frequently in statistical mechanics: it is the exponential cutoff for energy levels in many situations, like the electrons in a crystal (Fermi-Dirac statistics). So it's pretty cool.)

Tim


--------------------
Answering questions is a tricky subject to practice. Not due to the difficulty of formulating or locating answers, but due to the human inability to ask the right questions; a skill that, were one to possess it, would put them in the "answering" category.