Pulse Signal and Its Representation
We introduce the pulse signal, a function that will appear frequently in later sections.
Definition
A pulse signal at time 0 is written as,
$$\delta(t)$$
It is defined as the limit of a family of probability distributions that are centered at zero and have total probability one.
That is to say,
$$\delta(t) = \lim_{\epsilon \to 0} \mathrm{distribution}_\epsilon(t)$$
where ϵ is a width parameter that controls how concentrated the distribution is around t = 0.
For example,
$$\delta(t) := \lim_{\epsilon \to 0} \mathrm{rect}_\epsilon(t)$$
Where,
$$\mathrm{rect}_\epsilon(t) := \begin{cases} \dfrac{1}{2\epsilon} & \lvert t\rvert \le \epsilon \\[4pt] 0 & \text{otherwise} \end{cases} \qquad \epsilon > 0$$
rectϵ is also called the gate function; it is the uniform distribution on [−ϵ, ϵ].
Or, using the Gaussian distribution,
$$\delta(t) := \lim_{\epsilon \to 0} \frac{1}{\sqrt{2\pi\epsilon^2}}\, e^{-\frac{t^2}{2\epsilon^2}}$$
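As a quick numerical illustration (a sketch of my own, not part of the original notes; the helper names rect_eps and gauss_eps and the sample values of ϵ are assumptions for this example), both families concentrate their mass near t = 0 and their peak grows without bound as ϵ shrinks.

```python
# Sketch of the two nascent-delta families used above (names are my own).
import numpy as np

def rect_eps(t, eps):
    # Uniform density of height 1/(2*eps) on [-eps, eps].
    return np.where(np.abs(np.asarray(t, dtype=float)) <= eps, 1.0 / (2.0 * eps), 0.0)

def gauss_eps(t, eps):
    # Zero-mean Gaussian density with standard deviation eps.
    return np.exp(-np.asarray(t, dtype=float) ** 2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)

for eps in (1.0, 0.1, 0.01):
    print(f"eps={eps}: rect_eps(0)={float(rect_eps(0.0, eps)):.2f}, "
          f"gauss_eps(0)={float(gauss_eps(0.0, eps)):.2f}, "
          f"gauss_eps(0.5)={float(gauss_eps(0.5, eps)):.2e}")
```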
Whenever you need to prove a property of the pulse signal, keep in mind that δ is the limit of a probability distribution.
There is more than one way to define the δ signal. We use the probability-distribution limit as the definition; it can also be defined via test functions.
Obviously, δ(0)=+∞, and δ(t)=0 for t≠0.
We often draw the δ(t) signal as an upward-pointing arrow at t = 0 labeled 1, where the 1 indicates its area (weight) rather than an amplitude.
In a physical sense, the dimension is,
$$[\delta(t)] = \frac{1}{[t]}$$
Properties
Even
δ(t)=δ(−t). This is clear because each distributionϵ in the defining family (rectϵ or the Gaussian) is an even function of t; in particular, δ(t)=δ(−t)=0 for t≠0.
Unit Area
$$\int_{-\infty}^{+\infty} \delta(t)\, dt = 1$$
Because it is the limit of probability distributions, each of which integrates to 1, the area under the curve is 1.
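A small numerical check (my own sketch; the grid and ϵ values are arbitrary choices): the area under each nascent delta stays approximately 1 as ϵ shrinks, so the limit retains unit area.

```python
# Unit-area check for the rectangular and Gaussian nascent deltas.
import numpy as np

t = np.linspace(-5.0, 5.0, 200001)   # fine grid covering the support
dt = t[1] - t[0]

for eps in (1.0, 0.1, 0.01):
    rect = np.where(np.abs(t) <= eps, 1.0 / (2.0 * eps), 0.0)
    gauss = np.exp(-t**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    # Riemann sums approximating the integral over the whole real line.
    print(f"eps={eps}: area(rect)={np.sum(rect) * dt:.4f}, "
          f"area(gauss)={np.sum(gauss) * dt:.4f}")
```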
Selective
$$\int_{-\infty}^{+\infty} \delta(t - t_0)\, f(t)\, dt = f(t_0)$$
Consider,
$$\int_{-\infty}^{+\infty} \delta(t)\, f(t)\, dt = \mathbb{E}_{t \sim \delta(t)}\left[f(t)\right] = f(0)$$
The expectation collapses to f(0) because the limiting distribution concentrates all of its mass at t = 0; selectivity for a general t0 then follows by the substitution t → t + t0.
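The sifting behaviour can also be checked numerically (a sketch of my own; the test function f and the values of t0 and ϵ are arbitrary): replacing δ(t − t0) by a narrow Gaussian makes the integral approach f(t0).

```python
# Numerical check of the selective (sifting) property.
import numpy as np

def f(t):
    # An arbitrary smooth test function.
    return np.cos(t) + 0.5 * t**2

t0 = 1.3
t = np.linspace(-10.0, 10.0, 400001)
dt = t[1] - t[0]

for eps in (0.5, 0.05, 0.005):
    # Narrow Gaussian standing in for delta(t - t0).
    nascent = np.exp(-(t - t0)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    approx = np.sum(nascent * f(t)) * dt
    print(f"eps={eps}: integral={approx:.6f}, f(t0)={f(t0):.6f}")
```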
Derivative
$$\int_{-\infty}^{+\infty} f(t) \left(\frac{d}{dt}\right)^{\!n} \delta(t - t_0)\, dt = (-1)^n \left.\left(\frac{d}{dt}\right)^{\!n} f(t)\right|_{t = t_0}$$
This follows easily from integration by parts. Consider,
$$
\begin{aligned}
\int_{-\infty}^{+\infty} f(t)\,\frac{d}{dt}\delta(t-t_0)\,dt
&= \int_{-\infty}^{+\infty} f(t)\, d\delta(t-t_0) \\
&= \Bigl(f(t)\,\delta(t-t_0)\Bigr)\Big|_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} \frac{d f(t)}{dt}\,\delta(t-t_0)\, dt \\
&= -\left.\frac{d f(t)}{dt}\right|_{t=t_0}
\end{aligned}
$$
Applying integration by parts recursively n times gives the general result.
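A numerical sketch of the n = 1 case (my own illustration; the test function and parameters are arbitrary): the derivative of a narrow Gaussian stands in for δ′(t − t0), and integrating it against f(t) approaches −f′(t0).

```python
# Numerical check of the derivative property for n = 1.
import numpy as np

def f(t):
    return np.sin(2.0 * t)

def f_prime(t):
    return 2.0 * np.cos(2.0 * t)

t0 = 0.4
t = np.linspace(-10.0, 10.0, 400001)
dt = t[1] - t[0]

for eps in (0.5, 0.05, 0.005):
    gauss = np.exp(-(t - t0)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    d_gauss = -(t - t0) / eps**2 * gauss      # d/dt of the nascent delta
    approx = np.sum(d_gauss * f(t)) * dt
    print(f"eps={eps}: integral={approx:.6f}, -f'(t0)={-f_prime(t0):.6f}")
```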
Scale
$$\delta(\alpha t) = \frac{1}{\lvert \alpha\rvert}\,\delta(t)$$
This follows from the definition. Assume α>0.
We first calculate the CDF of the scaled variable αt, where t is distributed according to distributionϵ,
$$F(t') = P(\alpha t \le t') = P\!\left(t \le \frac{t'}{\alpha}\right) = \int_{-\infty}^{t'/\alpha} \mathrm{distribution}_\epsilon(t)\, dt$$
Because the derivative of the CDF is the PDF,
$$p(t') = \frac{1}{\alpha}\, \mathrm{distribution}_\epsilon\!\left(\frac{t'}{\alpha}\right)$$
Taking the limit ϵ→0,
$$\delta(\alpha t) = \frac{1}{\alpha}\,\delta(t)$$
If α<0, we can utilize the evenness of the delta function.
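The scaling rule can also be checked numerically (my own sketch; α, the test function f, and the ϵ values are arbitrary): integrating f(t) against a nascent delta evaluated at αt approaches f(0)/|α|.

```python
# Numerical check of the scaling property delta(alpha*t) = delta(t)/|alpha|.
import numpy as np

def f(t):
    return np.exp(-t) * np.cos(3.0 * t)

alpha = -2.5
t = np.linspace(-10.0, 10.0, 400001)
dt = t[1] - t[0]

for eps in (0.5, 0.05, 0.005):
    # Nascent delta evaluated at alpha*t instead of t.
    scaled = np.exp(-(alpha * t)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    approx = np.sum(scaled * f(t)) * dt
    print(f"eps={eps}: integral={approx:.6f}, f(0)/|alpha|={f(0.0) / abs(alpha):.6f}")
```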
Decomposition
Suppose a polynomial P(x) can be decomposed into a product of distinct linear factors $p_i = x - \zeta_i$.
That is,
$$P(x) = \prod_{i=0}^{n} p_i$$
Then,
$$\delta(P(x)) = \sum_{i=0}^{n} \frac{\delta(x - \zeta_i)}{\lvert P'(\zeta_i)\rvert}$$
This is hard to prove directly, so we use an indirect argument.
Consider,
$$\int_{-\infty}^{+\infty} \delta(P(x))\, f(x)\, dx$$
Because δ(P(x)) is zero everywhere except near the roots ζᵢ of P, we can reduce the integral to,
$$\sum_{i=0}^{n} \int_{\zeta_i - \epsilon_i}^{\zeta_i + \epsilon_i} \delta(P(x))\, f(x)\, dx$$
We can substitute if we let ϵᵢ → 0; that is, we may treat P(x) as locally linear around each ζᵢ, so,
$$dx = \frac{1}{\lvert P'(\zeta_i)\rvert}\, dP(x)$$
So,
$$
\begin{aligned}
\int_{-\infty}^{+\infty} \delta(P(x))\, f(x)\, dx
&= \sum_{i=0}^{n} \int_{\zeta_i - \epsilon_i}^{\zeta_i + \epsilon_i} \delta(P(x))\, f(x)\, dx \\
&= \sum_{i=0}^{n} \int_{\zeta_i - \epsilon_i}^{\zeta_i + \epsilon_i} \frac{1}{\lvert P'(\zeta_i)\rvert}\, \delta(P(x))\, f(x)\, dP(x) \\
&= \sum_{i=0}^{n} \frac{f(\zeta_i)}{\lvert P'(\zeta_i)\rvert}
\end{aligned}
$$
Which is equivalent to,
$$\delta(P(x)) = \sum_{i=0}^{n} \frac{\delta(x - \zeta_i)}{\lvert P'(\zeta_i)\rvert}$$
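Here is a numerical sketch of the decomposition rule (my own illustration, using P(x) = x² − 1, which has simple roots at ±1 with |P′(±1)| = 2; the test function f is arbitrary).

```python
# Numerical check of delta(P(x)) for P(x) = x**2 - 1.
import numpy as np

def f(x):
    return x**3 + 2.0 * x + 5.0

def P(x):
    return x**2 - 1.0

x = np.linspace(-5.0, 5.0, 400001)
dx = x[1] - x[0]

# Right-hand side: sum over roots of f(zeta_i) / |P'(zeta_i)|, with P'(x) = 2x.
expected = f(1.0) / 2.0 + f(-1.0) / 2.0

for eps in (0.1, 0.01, 0.001):
    # Narrow Gaussian applied to P(x) stands in for delta(P(x)).
    nascent = np.exp(-P(x)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    approx = np.sum(nascent * f(x)) * dx
    print(f"eps={eps}: integral={approx:.6f}, expected={expected:.6f}")
```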
Integration
If we integrate δ(t), we get a step function,
By integrating, we always mean a definite integral from −∞ up to a variable upper limit, so that integrating a function yields another function.
$$u(t) := \int_{-\infty}^{t} \delta(\tau)\, d\tau$$
Or, alternatively,
$$u(t) = \begin{cases} 0 & \text{if } t < 0 \\ 1 & \text{if } t \ge 0 \end{cases}$$
The step function will also appear frequently in later sections.
It is obvious that,
$$\mathrm{rect}_\epsilon(t) = \frac{1}{2\epsilon}\bigl(u(t + \epsilon) - u(t - \epsilon)\bigr)$$
More precisely,
$$u(t) = \begin{cases} 0 & \text{if } t < 0 \\ 1 & \text{if } t > 0 \\ \text{undefined} & \text{if } t = 0 \end{cases}$$
The value at 0 depends on whether we are integrating δ(t−0) or δ(t+0).
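As a closing numerical sketch (my own illustration; the grid and ϵ values are arbitrary), the running integral of a nascent delta is a smoothed step that sharpens into u(t) as ϵ shrinks.

```python
# Running integral of a Gaussian nascent delta: a smoothed step approaching u(t).
import numpy as np

t = np.linspace(-5.0, 5.0, 200001)
dt = t[1] - t[0]

for eps in (1.0, 0.1, 0.01):
    nascent = np.exp(-t**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    u_eps = np.cumsum(nascent) * dt              # integral from -inf up to t
    # Sample the smoothed step to the left of, near, and to the right of the jump.
    points = (-1.0, -0.05, 0.05, 1.0)
    values = [float(u_eps[np.searchsorted(t, p)]) for p in points]
    print(f"eps={eps}: " + ", ".join(f"u({p})~{v:.3f}" for p, v in zip(points, values)))
```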