The concept of poles and zeros is also generalized to linear time-varying (LTV) systems. Concerning the stability and behavior of LTV systems, several definitions of poles or eigenvalues have been proposed [37], depending on how the LTV system is characterized. The notion of time-varying poles in this paper is founded on the time-varying autoregressive model. Parametric models for LTI systems can be generalized to LTV ones by allowing the model parameters to vary in time. The AM-FM signal, *x*[ *n*], is modeled by a time-varying autoregressive (TVAR) model of order *M*:

x[n]=-\sum_{m=1}^{M}a_{m}[n]\,x[n-m]+\nu[n].

(8)

{*a*_{m}[ *n*], *m* = 1, …, *M*} are the time-varying parameters, and *ν*[ *n*] is the zero-mean innovation process, also referred to as the modeling error. In the most general case of this model, the parameters are completely uncorrelated at each time sample. Each time sample of *x*[ *n*] would then be represented by *M* unknown coefficients; hence, this is not a practical approach. Based on a common practical assumption, the non-stationary signal is regarded as approximately locally stationary, or quasi-stationary. This assumption implies that the parameters of the TVAR model are correlated, and the coefficients are assumed constant over subintervals of the total time span, referred to as segments. This model is called a block-stationary AR model [34]. For multicomponent AM-FM signals whose IAs and IFs are slowly time-varying or piecewise-constant, this segmentation strategy is applicable. By virtue of this assumption, a real multicomponent AM-FM signal over its support is considered a superposition of shorter-duration signals with constant frequencies. These intervals can generally have various lengths, and different methods, from fixed-length windowing to adaptive segmentation algorithms, have been introduced to determine the borders of the segments [23]. In the proposed method, the segmentation is performed adaptively from the perspective of TVAR parameter estimation.
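To make the connection between TVAR parameters and instantaneous frequency concrete, the following minimal Python sketch (illustrative, not from the paper; the sampling and signal frequencies are assumed values) fits an order-1 AR model to a constant-frequency complex exponential and reads the frequency off the angle of the resulting pole:

```python
import numpy as np

# A complex sinusoid x[n] = exp(j*omega*n) exactly satisfies the AR(1)
# recursion x[n] = -a_1 x[n-1] with a_1 = -exp(j*omega), so the AR pole
# lies on the unit circle at the signal frequency.
fs = 1000.0                       # sampling frequency in Hz (assumed value)
f0 = 50.0                         # sinusoid frequency in Hz (assumed value)
n = np.arange(200)
x = np.exp(1j * 2 * np.pi * f0 / fs * n)

# Least-squares estimate of the coefficient in eq. (8) with nu[n] = 0;
# np.vdot conjugates its first argument.
a1 = -np.vdot(x[:-1], x[1:]) / np.vdot(x[:-1], x[:-1])
pole = -a1                        # root of 1 + a1 * z^{-1}
f_est = np.angle(pole) * fs / (2 * np.pi)
print(f_est)                      # close to 50.0
```

In the block-stationary setting of the next subsection, the same idea is applied per segment with *M* poles instead of one.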

### 4.1 Segmentation procedure

The entire signal of *N* samples is segmented into *L* blocks with various lengths:

x_{\ell}[n]=x[n],\quad n_{\ell-1}\le n<n_{\ell},

(9)

where *ℓ* = 1, …, *L* and *n*_{0} = 0. The TVAR coefficients are assumed constant within each segment. The mean square error in the *ℓ*th segment is given by

J_{\ell}=\frac{1}{n_{\ell}-n_{\ell-1}}\sum_{n=n_{\ell-1}}^{n_{\ell}-1}\left|x[n]+\sum_{m=1}^{M}a_{\ell,m}\,x[n-m]\right|^{2},

(10)

where {*a*_{ℓ, m}, *m* = 1, …, *M*} are the TVAR coefficients of the *ℓ*th segment. The boundaries of each segment are determined such that the error *J*_{ℓ} remains below a specified threshold. The segmentation algorithm operates as follows. At the start of each stage, the length of the current segment (say *ℓ*) is set to the minimum possible length, equal to the order of the TVAR model, i.e., *n*_{ℓ} = *n*_{ℓ-1} + *M*. The TVAR coefficients, *a*_{ℓ, m}, are estimated by the recursive least squares (RLS) technique, and the error in (10) is computed. If it is still greater than the pre-specified threshold, the length of the segment is increased by one sample, and the calculations are repeated. This procedure continues with one-sample increments until the error falls below the threshold. At that point, the boundaries and the length of the current segment are fixed, and the procedure starts over at the next time sample to establish the next segment. The algorithm proceeds through the entire signal in this manner and stops at the end of the data batch. The question arises of how the threshold should be set and how it affects the accuracy of the IF estimation; this issue is examined separately in the following subsection. Once the TVAR parameters are estimated, the corresponding time-varying poles, denoted by {*ξ*_{k}[ *m*], *k* = 1, …, *M*}, are obtained by applying the *Z*-transform of (8) with respect to *n*.
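A compact Python sketch of this procedure is given below. It is illustrative only: the paper specifies RLS but not its settings, so the initialization constant `delta` and the unit forgetting factor are assumptions, and the segment error is recomputed from eq. (10) with the current coefficient estimates after every sample:

```python
import numpy as np

def adaptive_segments(x, M, threshold, delta=1e2):
    """Grow each segment one sample at a time until the error J_ell of
    eq. (10) falls below `threshold`; returns the boundaries n_ell."""
    N = len(x)
    bounds = [M]                   # the first regressor needs M past samples
    n0 = M
    while n0 < N:
        P = delta * np.eye(M)      # RLS inverse-correlation matrix (assumed init)
        c = np.zeros(M)            # one-step predictor; a_{ell,m} = -c[m-1]
        n = n0
        while n < N:
            phi = x[n - M:n][::-1]             # regressor [x[n-1], ..., x[n-M]]
            k = P @ phi / (1.0 + phi @ P @ phi)
            c = c + k * (x[n] - c @ phi)       # RLS coefficient update
            P = P - np.outer(k, phi @ P)
            n += 1
            # mean square error of eq. (10) with the current coefficients
            J = np.mean([(x[t] - c @ x[t - M:t][::-1]) ** 2
                         for t in range(n0, n)])
            if n - n0 >= M and J < threshold:
                break              # segment boundary found
        bounds.append(n)
        n0 = n
    return bounds
```

For a noiseless sinusoid, which satisfies an exact order-2 AR recursion, the segment closes as soon as the RLS estimate has converged well enough for the error to settle below the threshold; signals whose parameters drift within a candidate segment keep the error high and the segment growing.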

### 4.2 Error analysis

It is worth examining the relation between the error caused by segmentation and the error of the IF estimate. This analysis leads to the selection of a reliable error threshold in the adaptive segmentation procedure. Let us consider a discrete-time AM-FM component:

x[n]=A[n]e^{j\theta[n]}.

(11)

The error in the instantaneous phase imposed by the TVAR modeling in each segment, denoted by *ε*_{θ}, induces an error in the signal:

\hat{x}[n]=A[n]e^{j(\theta[n]+\epsilon_{\theta}[n])}.

(12)

For very small phase errors, the following approximation follows from the second-order Maclaurin series of the exponential:

e^{j(\theta[n]+\epsilon_{\theta}[n])}\approx\left(1-\frac{\epsilon_{\theta}^{2}[n]}{2}+j\epsilon_{\theta}[n]\right)e^{j\theta[n]}.

(13)

Substituting this approximation into (12) and subtracting the result from (11), we obtain the modeling error

e[n]=x[n]-\hat{x}[n]=\epsilon[n]A[n]e^{j\theta[n]},

(14)

where

\epsilon[n]=\frac{\epsilon_{\theta}^{2}[n]}{2}-j\epsilon_{\theta}[n].

(15)

The error *e*[ *n*], whose instantaneous amplitude is the absolute modeling error, is also an AM-FM signal:

\left|e[n]\right|=\left|\epsilon[n]A[n]\right|.

(16)

The error in phase is thus transduced into an error in amplitude. Let us define a time-varying threshold, denoted by *η*[ *n*], such that |*e*[ *n*] | is kept below it, i.e., |*e*[ *n*] | < *η*[ *n*]. Substituting (16) for |*e*[ *n*] |, the following inequality holds:

\left|\epsilon[n]\right|<\frac{\eta[n]}{\left|A[n]\right|}.

(17)

So, the absolute phase error depends on the signal envelope. This means that for a fixed threshold, where *η*[ *n*] is constant over the entire signal, larger phase errors can occur where the IA becomes smaller. Therefore, the threshold should vary adaptively, adjusted to the envelope of the observed signal. In other words, the locally normalized error for each segment is a proper threshold. Since the IA evolves slowly, its mean or minimum value over the segment can be used for normalization. The normalized threshold is denoted by \bar{\eta} for brevity:

\bar{\eta}=\frac{\eta[n]}{\text{mean}\left\{\left|A[n]\right|\right\}}.

(18)

Thus, in practice, inequality (17) is applied in the following form:

\left|\epsilon[n]\right|<\bar{\eta}.

(19)

The square of |*ε*[ *n*] | is obtained from (15):

\left|\epsilon[n]\right|^{2}=\frac{\epsilon_{\theta}^{4}[n]}{4}+\epsilon_{\theta}^{2}[n].

(20)
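As a quick numerical sanity check (illustrative Python, not part of the derivation), the magnitude given by (20) can be compared with the exact relative error |1 − e^{jε_θ}| that (15) approximates, for a small assumed phase error:

```python
import numpy as np

eps_theta = 1e-3                  # small phase error (assumed value)
# |epsilon[n]| from eq. (20), using the truncated-series form of eq. (15)
approx = np.sqrt(eps_theta**4 / 4 + eps_theta**2)
# exact relative error |1 - exp(j*eps_theta)| without the series truncation
exact = np.abs(1 - np.exp(1j * eps_theta))
print(approx, exact)              # both ~1e-3, agreeing to leading order
```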

Substituting this relation into inequality (19) and solving the resulting quadratic inequality for the squared phase error yields a bound:

\left|\epsilon_{\theta}[n]\right|^{2}<2\left(\sqrt{1+\bar{\eta}^{2}}-1\right).

(21)

As \bar{\eta}\to 0, the right-hand side of the above inequality approaches {\bar{\eta}}^{2}. By keeping the phase error (*ε*_{θ}) under control, the error of the IF is consequently controlled as well. By definition (2), the IF is the derivative of the instantaneous phase, which becomes a difference equation in discrete time:

\omega[n]=\frac{\theta[n]-\theta[n-1]}{T_{\mathrm{s}}},

(22)

where *ω*[ *n*] = 2*π* *f*[ *n*] is the instantaneous frequency in radians per second, and *T*_{s} denotes the sampling period. In the worst case, the maximum phase errors of two consecutive instants accumulate. Thus, the maximum IF error is 2*ε*_{θ}*f*_{s}. For example, if \bar{\eta}=10^{-3}, then from (21), the maximum phase error is almost 10^{-3}, and the absolute error of the IF is at most 0.2% of the sampling frequency. This error can be controlled by the choice of \bar{\eta}. However, a smaller threshold leads to wider segments, in which the assumption of constant frequency is no longer respected. Our experiments verified that the condition of piecewise-constant frequency for slowly varying IFs is satisfied for \bar{\eta} on the order of 10^{-3} to 10^{-2}.
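The numbers in this example can be reproduced directly (illustrative Python; `eta` is the example value of 10^{-3} used above):

```python
import numpy as np

eta = 1e-3                                  # normalized threshold bar-eta
bound_sq = 2 * (np.sqrt(1 + eta**2) - 1)    # right-hand side of eq. (21)
max_phase_err = np.sqrt(bound_sq)           # ~ eta for small eta
max_if_err = 2 * max_phase_err              # worst case of eq. (22), in units of f_s
print(max_phase_err, max_if_err)            # ~1e-3 and ~2e-3, i.e., 0.2% of f_s
```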