Moisés Soto-Bajo, Andrés Fraguela Collar, Javier Herrera-Vega
Nikola Tesla said: “If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.” Unfortunately, this is a hieroglyph, and we are still looking for its Rosetta Stone.
Frequency is a fundamental concept in science. However, in spite of its seemingly simple meaning, its mathematical foundation is not as straightforward as it may seem at first glance. A naive interpretation of the different mathematical concepts for modeling frequency can be misleading as their actual meanings essentially differ from the intuitive notion that they are supposed to represent.
This circumstance should be taken into account to develop and apply appropriate signal analysis and processing tools. We discuss this topic to draw the attention of the mathematical and engineering community to this point, which is often overlooked.
Frequency is a central concept in science and a keystone in signal processing. It describes the oscillatory behavior of signals, which is usually argued to be the manifestation of some of their key features, depending on their nature.
Hence, a mathematically rigorous definition of frequency, tightly linked to a meaningful physical or phenomenological interpretation, is highly desirable and critical. Nevertheless, beyond the intuitive notion arising from the study of simple vibratory waves, this search is intricate and slippery. This is not surprising at all; as stated in [12], “The term ‘instantaneous frequency’ is somewhat of an oxymoron.” It is even paradoxical [1].
In this regard, it is interesting to retrieve the discussion in [11] about the modeling of “real-world” signals. Concepts such as frequency, time limited, or band limited are helpful mathematical abstractions, but they are actually meaningless in practice.
What do we exactly mean/understand by “frequency”? How is it related to oscillations? Is it appropriate for answering the key questions that arise in any discipline?
The concept of “frequency” is almost ubiquitous in signal processing and in the many disciplines that make extensive use of it, and it is present in the most common methodologies for analyzing signals. Applications of frequency analysis cover a broad spectrum: artifact removal in functional near-infrared spectroscopy neuroimaging (Mayer waves, heartbeat, or breathing), limb movement monitoring in poststroke rehabilitation patients, fault detection in induction motors and other devices through acoustic wave analysis, and economic and financial time series analysis. In bioengineering, the analysis of bioelectric signals is a major task.
Especially interesting is the case of neurophysiology. Electroencephalographic (EEG) signals are supposed to be the superposition or juxtaposition of brain rhythms, which are the manifestation of the collective and synchronous activity of millions of neurons and depend on many factors. Consequently, from the perspective of signal processing, the basic goal is to decompose EEG into rhythms or to analyze these “components” or different “oscillatory-behaved” manifestations. Frequency-based analyses are at the heart of this kind of analysis, and many techniques and procedures, ultimately resulting in automated analytical algorithms, have been developed to process EEG signals.
Brain rhythms are characterized by frequency and amplitude ranges. They are oscillatory signals with a close-to-uniform oscillation rate, which fluctuates within a specific range. However, there is no consensus about the limits of the frequency bands (see Figure 1).
Figure 1. An EEG signal (a) with two possible decompositions into brain rhythms: (b) $\delta$ in dark blue and $\theta$ in light blue, (c) $\alpha$ in red and $\beta$ in orange, and (d) $\gamma$ in green.
This is not a rigorous definition, but a “naive” description. Despite being central, the use of frequency is merely intuitive. In practice, it strongly depends on the methodology used. Because of the lack of firmly established and consensual definitions, EEG analysis turns out to be a difficult task, far from being completely understood. How can these fuzzy notions be appropriately modeled mathematically? This is a bewildering and challenging question.
The customary answer is: The signal is a relation between the independent variable, let us say time, and the main magnitude (voltage difference). The spectral representation of the signal is another function that assigns to each frequency the amplitude of the basic oscillation at that frequency, that is, the strength with which this harmonic takes part in the composition of the signal (see Figure 2). Clearly, the first description is elementary; it is completely linked to perception or measurement. However, the second one is less intuitive and depends strongly on the frequency concept itself and on the analytical procedure performed.
Figure 2. An EEG signal (a) with (b) two possible spectral amplitudes (in absolute values). (c) At the bottom, a horizontal zoom is shown.
According to this description, rhythms need to be understood in terms of the chosen spectral representation, and they depend on the corresponding concept of frequency. However, it is clear that any analysis should respect the neuroscientist’s paradigm (or the corresponding expert’s, according to each type of phenomenon), in terms of oscillations, and the subsequent components in which signals are split should be meaningful to the specialists.
Undoubtedly, signal processing borrowed the concept of frequency from physics and mathematics, specifically from classical Fourier analysis. The basic sinusoidal functions are $\sin(2\pi\omega t)$ and $\cos(2\pi\omega t)$, or $e^{2\pi i \omega t}$. Here, $t$ represents time and $\omega$ frequency. Physically, $\omega$ is the rotation rate (in hertz) of the corresponding rotary motion around a resting state or equilibrium point: it is the rate of change of the phase, measuring the angular displacement in cycles per unit of time. Frequency measures the oscillatory speed: $|\omega|$ is the number of cycles performed per unit of time. This interpretation is naturally restricted to pure sinusoidal signals [1].
In harmonic analysis, we seek to somehow represent general functions in terms of basic sinusoidal functions. These representations are defined by the set of frequencies involved and the corresponding amplitudes accompanying the sinusoidal components, and they take the form of sequences of coefficients or functions, depending on the nature of the frequency set (discrete or continuous). These amplitudes are the response in frequency of the signal or the spectral representation of the function. Hence, the representation process is split into two parts: the analysis, which consists in obtaining the representative given by the corresponding amplitudes, and the synthesis, which consists in recovering the signal from its representation.
Classical Fourier analysis consists of two main areas: the Fourier series and the Fourier transform.
The Fourier series theory deals with periodic signals and provides an effective way of analyzing and synthesizing them. The most remarkable result is the Plancherel theorem: the properly normalized trigonometric system $\{e^{2\pi i k\,\cdot/L}/\sqrt{L} : k \in \mathbb{Z}\}$ is an orthonormal basis. Consequently, any $L$-periodic function $f$ is written as the corresponding Fourier series, where the information of $f$ is encoded into its Fourier coefficients, which, due to orthonormality, are given by (put $b - a = L$) \[ \hat{f}(k) = \frac{1}{\sqrt{L}} \int_a^b f(t)\, e^{-2\pi i k t / L}\, dt \tag{1} \] and satisfy the Parseval identity: \[ \int_a^b |f(t)|^2\, dt = \sum_{k \in \mathbb{Z}} |\hat{f}(k)|^2. \tag{2} \]
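These objects are easy to explore numerically. The following sketch (an illustrative Python computation; the two-harmonic test signal, sample count, and period are arbitrary choices) approximates the coefficients (1) by a Riemann sum and checks the Parseval identity (2):

```python
import numpy as np

# Illustrative sketch of (1) and (2): approximate the Fourier coefficients
# of an L-periodic signal by a Riemann sum and check Parseval's identity.
# The two-harmonic test signal below is an arbitrary choice.
L = 2.0
a = 0.0
b = a + L
t = np.linspace(a, b, 4096, endpoint=False)
dt = (b - a) / t.size
f = 3.0 * np.exp(2j * np.pi * 1 * t / L) + 1.5 * np.exp(-2j * np.pi * 4 * t / L)

def fourier_coeff(k):
    # (1/sqrt(L)) * integral_a^b f(t) e^{-2 pi i k t / L} dt, via a Riemann sum
    return np.sum(f * np.exp(-2j * np.pi * k * t / L)) * dt / np.sqrt(L)

coeffs = {k: fourier_coeff(k) for k in range(-8, 9)}
energy_time = np.sum(np.abs(f) ** 2) * dt                 # int_a^b |f|^2 dt
energy_freq = sum(abs(c) ** 2 for c in coeffs.values())   # sum_k |f_hat(k)|^2
```

Since this test signal contains only the harmonics $k = 1$ and $k = -4$, all remaining coefficients vanish (up to floating-point error) and the two sides of (2) agree.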
Analogously, the Fourier transform deals with nonperiodic signals. For an integrable function $f$, the Fourier transform is given by \[ \hat{f}(\xi) = \int_{-\infty}^{\infty} f(t)\, e^{-2\pi i \xi t}\, dt. \tag{3} \]
The Plancherel identity reads as \[ \int_{-\infty}^{\infty} |f(t)|^2\, dt = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\, d\xi. \tag{4} \]
In the first case, the set of frequencies is $\mathbb{Z}$ (the set of integers), we represent periodic functions, the analysis process consists in computing the sequence of Fourier coefficients, and the synthesis process consists in summing the Fourier series. In the second case, the set of frequencies is $\mathbb{R}$ (the real numbers), we represent nonperiodic functions, and the analysis and synthesis processes are performed by the Fourier and inverse Fourier transforms, respectively.
These objects extend the concept of frequency from pure sinusoidal functions to more general signals. Now, frequency is a parameter that indexes the set of responses of the signal when it is faced against a pure oscillation: the Fourier coefficient $\hat{f}(k)$ or the Fourier transform value $\hat{f}(\xi)$. It is also the “addition index” when recovering the signal.
But this is achieved at some price. In this framework, the exact meaning of “frequency” is hidden behind the variables $k$ and $\xi$. It turns out that frequency is no longer an oscillation speedometer (although it is supposed to be). This is the point we want to highlight here, and it goes to the deep nature of time–frequency analysis, since it is inherent to the central concept of frequency itself.
Some eloquent symptoms of this are easy to observe. One of the main tasks in signal processing is the identification of the characteristic oscillations of signals and of where they occur (time–frequency localization). First of all, it is worth noting that, in real applications, one deals with finite signals, so the discrete Fourier transform is applied.
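In that discrete setting, “frequency” becomes a bin index tied to the record length. A minimal sketch (the sampling rate, record length, and test tone are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: a finite record of N samples is analyzed with the
# discrete Fourier transform; bin k corresponds to the frequency k/(N*dt),
# so "frequency" here is an index determined by the record itself.
fs = 100.0                               # samples per unit time (assumed)
N = 200                                  # record length: L = N/fs = 2 time units
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 5.0 * t)          # a 5 Hz tone, exactly on a bin here
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N, d=1.0 / fs)   # bin frequencies k/(N*dt)
peak_bin = int(np.argmax(np.abs(X)))
peak_freq = freqs[peak_bin]              # recovers 5.0 for this record
```

The 5-Hz tone is recovered exactly only because it falls on a bin of this particular record; the computation that follows shows what happens otherwise.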
The analysis is mostly performed by computing inner products with basic harmonics. For instance, for any $\xi, \eta \in \mathbb{R}$, \[ \int_a^b e^{2\pi i(\eta-\xi)t}\, dt = LC\, \frac{\sin(\pi L(\eta-\xi))}{\pi L(\eta-\xi)} \tag{5} \] where $C = e^{\pi i(\eta-\xi)(a+b)}$ and $|C| = 1$. This can be interpreted as the $k$th Fourier coefficient (corresponding to the frequency $\xi = k/L$, ignoring the constant $\sqrt{L}$) of a pure tone of frequency $\eta$, or, equally, as the Fourier transform (evaluated at $\xi$) of this windowed (on $[a, b]$) pure tone of frequency $\eta$.
This implies that, when computed on a finite interval, basic tones interfere with each other across the whole spectrum of frequencies, excluding a discrete sequence of pairings in harmonic consonance, given by the relation $(\eta-\xi)(b-a) \in \mathbb{Z}$. Therefore, a harmonic taking part in the composition of the signal is not only traced back to the corresponding coefficient, but it leaves a spurious trail through the entire spectral representation, except at its consonant frequencies. Moreover, this harmonic consonance relation depends not only on the pair of frequencies but also on the length $L = b - a$ of the sample window, which is in general uncorrelated with the signal or even with the phenomenon itself. This circumstance could have been favorable when modeling finite vibrating strings, but it does not represent the general case.
The previous computation also shows how the Fourier transform spreads entirely over the whole spectrum, distributing the energy completely. This is exactly the opposite of being condensed at $\eta$, as would be expected. This behavior is quite problematic, since we are usually interested in studying signals on finite intervals. It clearly manifests the fact that the Fourier transform was designed to handle signals defined on $\mathbb{R}$. You can restrict the support, but you cannot “fool” the Fourier transform.
Representing the other side of the same coin are the Fourier coefficients, where orthogonality relations are obtained at the price of neglecting dissonant frequencies: $(\eta-\xi)(b-a) \notin \mathbb{Z}$. The Fourier series scatters spectrum energy, giving rise to a completely spurious spectrum with no physical meaning at all (see Figure 3).
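The leakage described by (5) is easy to reproduce numerically. The following sketch (the window and the frequencies are arbitrary choices) compares a brute-force inner product with the sinc profile of (5), for a dissonant and for a consonant pair:

```python
import numpy as np

# Numerical check of (5): on a finite window [a, b], a pure tone of frequency
# eta correlates with every "dissonant" reference frequency xi, following a
# sinc profile in eta - xi; the correlation vanishes only when
# (eta - xi)(b - a) is an integer. The window below is an arbitrary choice.
a, b = 0.3, 2.8
L = b - a
t = np.linspace(a, b, 200001)
dt = t[1] - t[0]

def inner(eta, xi):
    # trapezoidal approximation of  integral_a^b e^{2 pi i (eta - xi) t} dt
    y = np.exp(2j * np.pi * (eta - xi) * t)
    return dt * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def sinc_model(eta, xi):
    # right-hand side of (5); note np.sinc(x) = sin(pi x) / (pi x)
    d = eta - xi
    C = np.exp(1j * np.pi * d * (a + b))
    return L * C * np.sinc(L * d)

leak = inner(3.0, 3.7)                  # dissonant pair: nonzero leakage
model = sinc_model(3.0, 3.7)
consonant = inner(3.0, 3.0 + 2.0 / L)   # (eta - xi) L = 2: orthogonal pair
```

The dissonant pair leaves a sizable residue far from the tone’s own frequency, while the consonant pair is numerically orthogonal, exactly as (5) predicts.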
Figure 3. (a) A square pulse signal (in blue) and (c) an action potential-type signal, with (b), (d) their respective spectral representations. In red, a pure sinusoidal tone $\sin(2\pi t/4)$ of frequency $\omega = 1/4$ has been added to each.
Moreover, consonant frequencies are also troublesome. Sometimes harmonics represent different and independent features, sometimes not. Several harmonics can contribute altogether to reshape a specific waveform, where it is not every component but the joint admixture that provides a meaningful profile. Furthermore, a recurrent compound waveform could have an intrinsic “frequency” related to that of its components to a greater or lesser extent (see Figure 4).
Figure 4. (a) A square pulse signal (in blue) and (c) an action potential-type signal, with (b), (d) their respective spectral representations. In red, a pure sinusoidal tone $0.2\sin(2\pi\, 6t)$ of frequency $\omega = 6$ has been added to each.
But why should we worry about dissonant frequencies? Does it make sense at all to distinguish between real frequencies and spurious ones? How should independent or cooperative components be defined, classified, and clustered? If one is only interested in decomposing and synthesizing signals, there is no major problem. But if the purpose is to obtain meaningful components, explanatory of the underlying constitutive mechanisms of the phenomenon, the previous concerns are justified.
Discordant frequencies are not superfluous at all. In the EEG case, they can occur naturally, as the superposition of independent, perhaps asynchronous and uncoordinated, but concurrent, contributing oscillations. Orthogonality enables energy additivity, but it does not necessarily imply independence or uncorrelatedness. Frequency analysis can definitely distort the phenomenological nature of the signal. It is mathematically correct, but misinformative.
Other parallel ideas, shaped like uncertainty principles, are well known [5]. In the framework of Fourier analysis, which was conceived for other purposes of a completely different nature, these drawbacks cannot be avoided.
Harmonic analysis is a classical discipline that has turned out to be the cornerstone of many others in pure and applied sciences. Concerning time–frequency localization, many techniques have been developed: fast Fourier transforms, band-limited functions, and prolate spheroidal wave functions, among many others. These tools have been successfully applied in many contexts. However, in spite of their remarkable properties, there are also disadvantages in their use. In any case, we want to focus here on the fact that all of these techniques depend on the concepts of frequency that we have just explained and, consequently, share their shortcomings.
Beyond the classical harmonic framework, much effort has been devoted to developing alternative tools and procedures for capturing the main time–frequency features of a signal. The point is to identify the pairs of time windows and frequency bands in which characteristic oscillations of the signal occur. These different approaches can be clustered under the common name time–frequency analysis [5]. In contrast to classical harmonic analysis, time–frequency analysis treats (or attempts to treat) the time and frequency variables equally, as primary concepts. However, frequency is not actually an absolute primary concept; it is the way the frequency variable appears in each transform (and only that) which accurately determines its precise meaning, not the name with which it was coined.
Many techniques have been developed with this flavor: the short-time Fourier transform; the spectrogram, ambiguity function, Wigner–Ville distribution, and other quadratic time–frequency representations; and Gabor frames, among others. These tools have been successfully applied to many processing tasks, such as feature extraction, separation of signal components, or signal compression [5].
Typically, time–frequency techniques define a local frequency spectrum that is supposed to report the strength of each oscillation. They borrow their foundational idea from Fourier analysis: response in frequency is obtained by facing up an averaged portion of the signal (via window functions) to the exponential kernel, which encodes the oscillation speed. Consequently, despite their success, the deficiencies of these approaches are inherited; they cannot overcome the limits that Fourier analysis imposes.
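The common core of these methods can be sketched in a few lines: a “local spectrum” is the Fourier transform of a windowed slice of the signal. In this illustrative short-time Fourier transform (the window length, hop size, and test signal are assumptions, not canonical values), the window length fixes the frequency resolution:

```python
import numpy as np

# Illustrative short-time Fourier transform: local spectra are Fourier
# transforms of windowed slices. Window length and hop size are arbitrary
# choices that fix the time/frequency resolution trade-off.
fs = 200.0
N = 800
t = np.arange(N) / fs
# a simple nonstationary signal: 10 Hz for two time units, then 25 Hz
x = np.where(t < 2.0, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 25 * t))

win_len, hop = 128, 64
window = np.hanning(win_len)
frames = [np.abs(np.fft.rfft(x[s:s + win_len] * window))
          for s in range(0, x.size - win_len + 1, hop)]
stft = np.array(frames)                        # shape: (n_frames, n_bins)
freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)   # bin spacing fs/win_len

peak_early = freqs[np.argmax(stft[0])]    # dominant bin while at 10 Hz
peak_late = freqs[np.argmax(stft[-1])]    # dominant bin while at 25 Hz
```

Note that the early 10-Hz segment is reported at the nearest bins of the grid (multiples of fs/win_len) rather than at exactly 10 Hz: the bin lattice, not the signal, dictates the answer.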
Presently, many symptoms are actually well known (although the diagnosis perhaps not so well). Two main principles are ubiquitous in time–frequency analysis: 1) the smoothness–decay duality: if one representation is smooth, the other one decays; and 2) the uncertainty principles: both representations cannot be simultaneously well localized.
At first sight, it seems reasonable that sudden jumps produce corresponding manifestations at high frequencies. But consider the square signal given by $\mathrm{sign}(\sin(2\pi t))$. How would you describe its periodicity, its oscillation, in terms of frequency? How do you think a neurophysiologist would do it? Now compute its Fourier transform and compare.
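A numerical answer to the exercise (one period of the square wave, uniformly sampled at an arbitrarily chosen resolution): a signal performing exactly one oscillation per unit of time is reported as infinitely many odd harmonics, with magnitudes $2/(\pi k)$ decaying slowly.

```python
import numpy as np

# Sampling one period of sign(sin(2 pi t)) and computing its discrete Fourier
# coefficients: the single "oscillation per unit time" is smeared over all
# odd harmonics k with magnitudes 2/(pi k). The resolution N is arbitrary.
N = 1 << 16
t = np.arange(N) / N                     # one period, uniformly sampled
x = np.sign(np.sin(2 * np.pi * t))
c = np.fft.fft(x) / N                    # c[k] approximates the kth coefficient
mags = np.abs(c[:8])
# mags[1] ~ 2/pi, mags[3] ~ 2/(3 pi), even harmonics (numerically) vanish
```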
The uncertainty principle has been well studied [5]. Some relevant examples are the Heisenberg–Pauli–Weyl inequality, the Donoho–Stark uncertainty principle, Lieb’s inequalities, the radar uncertainty principle, the Wigner–Ville distribution uncertainty principles, the positivity of smoothed Wigner–Ville distributions, the Balian–Low theorem, and density theorems for Gabor frames. They make the existence of an absolute and ideal concept of instantaneous frequency (or spectrum) impossible: there is no finite energy function concentrated on an arbitrarily small interval whose Fourier transform is also concentrated on an arbitrarily small band.
Remarkable exceptions to the uncertainty principle are the Wilson bases, the Malvar bases, and the local Fourier bases [5], [6]. However, because of their structure as orthogonal bases of windowed Fourier series, they suffer from the same shortcomings. The window smooths the edge effects, but it also totally reshapes the sinusoidal waveforms, which are intimately related to the notion of oscillation.
Another case of interest is wavelet theory. Wavelets have been applied with great success in signal processing and many other disciplines. There are wavelets well localized in time–frequency, very suitable for time–frequency analysis [6]. Nevertheless, not all that glitters is gold. In this case, the concept of frequency is in some way replaced by the concept of scale or resolution. The scale spectrum grows geometrically, and the mesh refines according to the scale. Hence, “dissonant” features (with scale and position outside the lattice) will be scattered through several spurious components. On the other hand, there is considerable freedom when choosing the waveform for wavelets; they do not need to have a sinusoidal appearance at all.
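To make the scale/position lattice concrete, here is one analysis step of a Haar wavelet transform, hand-rolled as an illustrative sketch (not a production implementation): the signal splits into local averages (a coarser scale) and local differences (details indexed by position).

```python
import numpy as np

# One step of a Haar wavelet analysis: local averages give the coarser-scale
# approximation, local differences give position-indexed details. This
# hand-rolled version only illustrates the scale/position idea.
def haar_step(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse approximation
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return approx, detail

x = np.array([4.0, 4.0, 2.0, 2.0, 1.0, 3.0, 0.0, 0.0])
approx, detail = haar_step(x)

# orthonormality gives perfect reconstruction and energy preservation
rec = np.empty_like(x)
rec[0::2] = (approx + detail) / np.sqrt(2.0)
rec[1::2] = (approx - detail) / np.sqrt(2.0)
```

Orthonormality yields perfect reconstruction and energy additivity, just as in (2); a feature falling between two dyadic positions would instead be spread over several detail coefficients, the wavelet analogue of dissonance.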
The instantaneous frequency (IF) is also worth mentioning. It has a long history, but its own nature has been quite controversial, despite successful applications, especially in the analysis of nonstationary signals. The IF is supposed to measure a local time-varying frequency, following the spectral frequency peak. It plays the role of a variable frequency in an amplitude–phase representation (the analytic signal) that locally best fits the signal [1], [7].
The definition and physical interpretation of the IF, as well as the amplitude–phase decomposition, the monocomponent signals, etc., are far from being completely clarified, and doubts persist. The suitability of this mathematical object, assumed to possess some “natural” properties, is actually conditioned on the fulfillment of some arguable requirements. Another serious drawback is the nonuniqueness of amplitude–phase-type representations and their physical meanings, only partially solved by Gabor [1]. Also, the IF usually deviates from the expected frequency band, even attaining meaningless negative values [7].
It is clear that the whole spectrum cannot be represented by a single number, so the IF is only appropriate for monocomponent signals (whatever that means). Consequently, multicomponent signals need to be decomposed [1]. Many algorithms have been proposed for this decomposition. One of the most successful is the empirical mode decomposition (EMD). The resulting intrinsic mode functions (IMFs) are supposed to be monocomponent and thus suitable to possess a meaningful IF. This is the so-called Hilbert–Huang transform, and the resulting time–frequency–amplitude/energy distribution is called the Hilbert spectrum [8].
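The classical recipe behind this can be sketched as follows: build the analytic signal (here with an FFT-based discrete Hilbert transform, a common implementation) and differentiate its unwrapped phase. The linear chirp is an illustrative “monocomponent” signal, and the boundary trimming is an ad hoc choice, since the discrete analytic signal is unreliable near the record edges:

```python
import numpy as np

# IF via the analytic signal: FFT-based discrete Hilbert transform, then the
# derivative of the unwrapped phase. The chirp and all parameters below are
# illustrative choices; edge samples are discarded as boundary-affected.
fs = 1000.0
N = 2000
t = np.arange(N) / fs
phase = 2 * np.pi * (5.0 * t + 2.5 * t ** 2)   # true IF: 5 + 5 t
x = np.cos(phase)

X = np.fft.fft(x)
h = np.zeros(N)
h[0] = 1.0
h[1:N // 2] = 2.0        # double the positive frequencies
h[N // 2] = 1.0          # Nyquist bin (N even)
analytic = np.fft.ifft(X * h)

inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # IF estimate per sample
true_if = 5.0 + 5.0 * (t[:-1] + 0.5 / fs)            # true IF at midpoints
err = np.max(np.abs(inst_freq[200:-200] - true_if[200:-200]))
```

For such a clean chirp, the estimate tracks the true IF of $5 + 5t$ closely in the interior of the record; for multicomponent or noisy signals it quickly loses meaning, which is precisely what motivates decompositions like EMD.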
The Hilbert spectrum seems to provide a more appropriate notion of IF than previous methodologies, although it is not exempt from drawbacks. At each IMF, the IF is not homogeneous and may not remain within a band, so possibly many vibrating modes seem to coexist. Consequently, IMFs are not necessarily meaningful. Furthermore, the mathematical foundation of all of these data-driven techniques (especially the so-called sifting process in EMD) turns out to be very challenging, and most of the confidence they generate comes from numerical experiments and particular data analyses [2], [7], [8].
Another interesting approach for defining and computing IFs is the synchrosqueezed wavelet transform, a reallocation method based on wavelet analysis [4]. In the so-called adaptive harmonic model, a sinusoidal representation with amplitude and frequency modulation is considered. In this case, the IF is always positive, and the mathematical treatment is more easily handled. The mathematical interpretation of the IF is straightforward, but the physical one is not, nor is the decomposition into intrinsic-mode-type functions, which obviously relies on the time–frequency representation provided by wavelet analysis.
Concerning the multicomponent splitting problem, and the “one or two frequencies” question, see [3], [9], [10], and [14]. A recent review on nonlinear time–frequency analysis can be found in [13].
In many types of signals (for example, in electrocardiography), specific waveforms take place repeatedly. These patterns are not strictly periodic but appear in the form of modulated basic shapes. This led Wu [12] to introduce the adaptive nonharmonic model, where signals are composed of nonsinusoidal oscillation patterns with time-varying amplitudes and arguments, called wave-shape functions. They are model-dependent periodic outlines modulated by an intrinsic intrawave frequency. Hence, the IF is defined as the derivative of the generalized phase functions.
There are still important drawbacks to overcome, such as the nonuniqueness of the representation, the identification of suitable basic shape patterns, the computation of the amplitude and phase functions, or the splitting into different mutually interfering waveforms. Still, this is a very auspicious proposal, since the concept of frequency is linked again with repetition: it accounts for the number of repetitions of the waveform per unit time, and it also measures the instantaneous speed with which each one is performed. Note the subtlety underlying this notion. Here, the IF does not attempt to capture a kind of absolute oscillation speed, but the local relative rhythm with which the corresponding waveform is traced.
A priori, the wave shape is an intrinsic feature, presumably the product of the underlying mechanism of signal generation, which could be estimated from previous knowledge about the phenomenon. Consequently, this method is able to capture the essential signal dynamics in a phenomenologically meaningful fashion. In short, the information on morphology, speed, and intensity is unmixed by using the wave shape, amplitude functions, and IF.
It is not easy to rigorously “define” what frequency means or even should mean. Time–frequency analysis has been successfully and firmly established. However, some problems arise, which could seem counterintuitive (see the musical score metaphor in [5]) or be deemed necessary (the uncertainty principles in quantum mechanics). Other recent attempts, such as the IF or the wave-shape functions, are quite promising, but some perplexing features still remain unresolved.
Probably, there is not an absolute answer; it could be application dependent. What is certain is that, in any event, solutions must be phenomenologically meaningful to allow a deep comprehension of studied phenomena, and experts in each specific field will have the last word.
This is definitely an old topic, but it is still in full force. There is a real need for new strategies that fill the gaps between theory and practice and between real problems and mathematical modeling. New ideas will produce more fruitful notions of frequency, applicable in many disciplines.
The authors thankfully acknowledge CONACYT financial support of MSB through the “Cátedras CONACYT para Jóvenes Investigadores 2016” program and also funding of JHV from CONACYT under postdoctoral scholarship 741147. This article was prepared in the context of the research project A1-S-36879, “Estudio teórico de las soluciones periódicas y de problemas inversos en sistemas de reacción difusión y elípticos que aparecen en los modelos matemáticos de generación y propagación de la actividad eléctrica en el corazón y el cerebro.” The authors thankfully acknowledge the computer resources, technical advice, and support provided by Laboratorio Nacional de Supercómputo del Sureste de México, a member of the CONACYT national laboratories, under project 202104097C.
The authors acknowledge the anonymous referees for their useful comments and suggestions, which have improved this text.
Moisés Soto-Bajo (moises.soto@fcfm.buap.mx) received his Ph.D. degree in mathematics from Universidad Autónoma de Madrid in 2012. He was awarded a Cátedra CONACYT position through the CONACYT program for young researchers, commissioned at Centro Multidisciplinario de Modelación Matemática y Computacional, Benemérita Universidad Autónoma de Puebla, 72570 Puebla, Mexico. He has been a level I researcher with the Sistema Nacional de Investigadores of CONACYT since 2016. His research interests include functional analysis, Fourier and wavelet analysis, shift-invariant subspace theory, mathematical epidemiology, and mathematical modeling of brain and heart electrical activity.
Andrés Fraguela Collar (fraguela@fcfm.buap.mx) received his B.S. and M.S. degrees from the University of Havana (Cuba), his Ph.D. degree from Lomonosov State University, and his Habilitation in science from the M.V. Lomonosov University and the V.A. Steklov Mathematical Institute of the Russian Academy of Sciences. He is currently a professor of applied mathematics at the Benemérita Universidad Autónoma de Puebla, 72570 Puebla, Mexico. He is a recipient of the Distinguished Visitor Award from the Complutense University of Madrid and an associate researcher at the International Center of Theoretical Physics of Trieste (Italy), and he received the State Prize for Science and Technology of the State of Puebla, Mexico. His research focuses on theoretical results in several branches of analysis and differential equations and their applications in epidemiology and medicine, including the analysis of normal and abnormal electrical activity of the brain and the heart and its correlation with the corresponding electrical signals, using inverse problem methodologies.
Javier Herrera-Vega (vega@fcfm.buap.mx) received his Ph.D. degree in computer science from the National Institute of Astrophysics, Optics and Electronics. He was a postdoctoral researcher at Centro Multidisciplinario de Modelación Matemática y Computacional and is with Benemérita Universidad Autónoma de Puebla, 72570 Puebla, Mexico. He has been a level C member of the Sistema Nacional de Investigadores since 2020. His research focuses on the processing and analysis of biomedical signals, mainly from neuroimaging modalities like electroencephalography and functional near-infrared spectroscopy.
[1] B. Boashash, “Estimating and interpreting the instantaneous frequency of a signal. I. Fundamentals,” Proc. IEEE, vol. 80, no. 4, pp. 520–538, Apr. 1992, doi: 10.1109/5.135376.
[2] B. Boashash, “Estimating and interpreting the instantaneous frequency of a signal. II. Algorithms and applications,” Proc. IEEE, vol. 80, no. 4, pp. 540–568, Apr. 1992, doi: 10.1109/5.135378.
[3] A. Cicone, S. Serra-Capizzano, and H. Zhou, “One or two frequencies? The iterative filtering answers,” 2021, arXiv:2111.11741.
[4] I. Daubechies, J. Lu, and H.-T. Wu, “Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool,” Appl. Comput. Harmon. Anal., vol. 30, no. 2, pp. 243–261, Mar. 2011, doi: 10.1016/j.acha.2010.08.002.
[5] K. Gröchenig, Foundations of Time-Frequency Analysis. New York, NY, USA: Springer Science & Business Media, 2001.
[6] E. Hernández and G. Weiss, A First Course on Wavelets. Boca Raton, FL, USA: CRC Press, 1996.
[7] N. E. Huang, Z. Wu, S. R. Long, K. C. Arnold, X. Chen, and K. Blank, “On instantaneous frequency,” Adv. Adaptive Data Anal., vol. 1, no. 2, pp. 177–229, Apr. 2009, doi: 10.1142/S1793536909000096.
[8] N. E. Huang and S. S. P. Shen, Hilbert-Huang Transform and Its Applications, vol. 16, 2nd ed. Singapore: World Scientific, 2014.
[9] V. Lostanlen, A. Cohen-Hadria, and J. P. Bello, “One or two frequencies? The scattering transform answers,” in Proc. 28th IEEE Eur. Signal Process. Conf. (EUSIPCO), 2021, pp. 2205–2209, doi: 10.23919/Eusipco47968.2020.9287216.
[10] G. Rilling and P. Flandrin, “One or two frequencies? The empirical mode decomposition answers,” IEEE Trans. Signal Process., vol. 56, no. 1, pp. 85–95, Jan. 2008, doi: 10.1109/TSP.2007.906771.
[11] D. Slepian, “On bandwidth,” Proc. IEEE, vol. 64, no. 3, pp. 292–300, Mar. 1976, doi: 10.1109/PROC.1976.10110.
[12] H.-T. Wu, “Instantaneous frequency and wave shape functions (I),” Appl. Comput. Harmon. Anal., vol. 35, no. 2, pp. 181–199, Sep. 2013, doi: 10.1016/j.acha.2012.08.008.
[13] H.-T. Wu, “Current state of nonlinear-type time–frequency analysis and applications to high-frequency biomedical signals,” Current Opinion Syst. Biol., vol. 23, pp. 8–21, Oct. 2020, doi: 10.1016/j.coisb.2020.07.013.
[14] H.-T. Wu, P. Flandrin, and I. Daubechies, “One or two frequencies? The synchrosqueezing answers,” Adv. Adaptive Data Anal., vol. 3, no. 1, pp. 29–39, Apr. 2011, doi: 10.1142/S179353691100074X.
Digital Object Identifier 10.1109/MSP.2023.3257505