Approximate entropy
Concept in statistics
In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, consider two series of data:
: Series A: (0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, ...), which alternates 0 and 1.
: Series B: (0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, ...), whose values are either 0 or 1, each chosen randomly with probability 1/2.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series A is perfectly regular: knowing a term has the value of 1 enables one to predict with certainty that the next term will have the value of 0. In contrast, series B is randomly valued: knowing a term has the value of 1 gives no insight into what value the next term will have.
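This can be checked directly on the prefixes shown above. The following is a small illustrative script (not part of any ApEn library); mean and variance here are the population versions:

```python
# Both series have identical mean and variance, so moment
# statistics cannot tell the regular series from the random one.
series_a = [0, 1] * 8                                        # first 16 terms of series A
series_b = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1]  # first 16 terms of series B

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # population variance: average squared deviation from the mean
    mu = mean(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

print(mean(series_a), mean(series_b))          # 0.5 0.5
print(variance(series_a), variance(series_b))  # 0.25 0.25
```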
Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures. However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise, so it is not practical to apply these methods to experimental data. ApEn was first proposed (under a different name) by Aviad Cohen and Itamar Procaccia as an approximate algorithm to compute an exact regularity statistic, Kolmogorov–Sinai entropy, and was later popularized by Steve M. Pincus. ApEn was initially used to analyze chaotic dynamics and medical data, such as heart rate, and its applications later spread to finance, physiology, human factors engineering, and climate sciences.
Algorithm
A comprehensive step-by-step tutorial with an explanation of the theoretical foundations of Approximate Entropy is available. The algorithm is:
; Step 1: Assume a time series of data u(1), u(2), \ldots, u(N). These are N raw data values from measurements equally spaced in time.
; Step 2: Let m \in \mathbb{Z}^+ be a positive integer, with m \leq N, which represents the length of a run of data (essentially a window). Let r \in \mathbb{R}^+ be a positive real number, which specifies a filtering level. Let n = N - m + 1.
; Step 3: Define \mathbf{x}(i) = \big[u(i), u(i+1), \ldots, u(i+m-1)\big] for each i where 1 \leq i \leq n. In other words, \mathbf{x}(i) is an m-dimensional vector that contains the run of data starting with u(i). Define the distance between two vectors \mathbf{x}(i) and \mathbf{x}(j) as the maximum of the distances between their respective components, given by
:: \begin{align} d[\mathbf{x}(i),\mathbf{x}(j)] & = \max_k \big(|\mathbf{x}(i)_k - \mathbf{x}(j)_k| \big) \\ & = \max_k \big(|u(i+k-1) - u(j+k-1)| \big) \end{align}
: for 1 \leq k \leq m.
; Step 4: Define a count C^m_i as
:: C_i^m(r) = {(\text{number of } j \text{ such that } d[\mathbf{x}(i),\mathbf{x}(j)] \leq r) \over n}
: for each i where 1 \leq i, j \leq n. Note that since j takes on all values between 1 and n, the match will be counted when j = i (i.e. when the test subsequence, \mathbf{x}(j), is matched against itself, \mathbf{x}(i)).
; Step 5: Define
:: \phi^m(r) = {1 \over n} \sum_{i=1}^{n} \log(C_i^m(r))
: where \log is the natural logarithm, for the fixed m, r, and n set in Step 2.
; Step 6: Define approximate entropy (\mathrm{ApEn}) as
:: \mathrm{ApEn}(m,r,N)(u) = \phi^m(r) - \phi^{m+1}(r)
; Parameter selection: Typically, choose m = 2 or m = 3, whereas r depends greatly on the application.
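The embedding in Step 3 and the counting in Step 4 can be sketched in a few lines of Python; embed, chebyshev, and counts are illustrative names, not part of any standard library:

```python
def embed(u, m):
    # Step 3: all m-dimensional runs x(i) = [u(i), ..., u(i+m-1)]
    return [u[i:i + m] for i in range(len(u) - m + 1)]

def chebyshev(x_i, x_j):
    # Step 3: distance = maximum componentwise difference
    return max(abs(a - b) for a, b in zip(x_i, x_j))

def counts(u, m, r):
    # Step 4: C_i^m(r), the fraction of runs within r of x(i),
    # with the self-match (j = i) deliberately included
    x = embed(u, m)
    n = len(x)
    return [sum(1 for x_j in x if chebyshev(x_i, x_j) <= r) / n for x_i in x]

c = counts([85, 80, 89] * 17, 2, 3)  # the 51-sample series from the Example below
print(c[0], c[2])                    # 0.34 0.32, i.e. 17/50 and 16/50
```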
An implementation on Physionet, which is based on Pincus, uses d[\mathbf{x}(i), \mathbf{x}(j)] < r instead of d[\mathbf{x}(i), \mathbf{x}(j)] \le r in Step 4. While a concern for artificially constructed examples, this is usually not a concern in practice.
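The strict-inequality variant only matters when distances land exactly on r. A hypothetical sketch (apen is an illustrative helper, not the PhysioNet code) using a series whose neighbouring points sit exactly r apart:

```python
import math

def apen(u, m, r, strict):
    # ApEn computed with either d <= r (the definition in Step 4)
    # or the strict d < r variant used by some implementations.
    within = (lambda d: d < r) if strict else (lambda d: d <= r)
    def phi(mm):
        n = len(u) - mm + 1
        x = [u[i:i + mm] for i in range(n)]
        c = [
            sum(1 for x_j in x if within(max(abs(a - b) for a, b in zip(x_i, x_j)))) / n
            for x_i in x
        ]
        return sum(math.log(v) for v in c) / n
    return phi(m) - phi(m + 1)

u = [0, 3] * 10                     # neighbouring points sit exactly r = 3 apart
print(apen(u, 2, 3, strict=False))  # 0.0: with <=, every pair of runs matches
print(apen(u, 2, 3, strict=True))   # small positive: only exact repeats match
```

For typical noisy measurements, ties at exactly r are vanishingly rare, which is why the choice is usually harmless in practice.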
Example

Consider a sequence of N=51 samples of heart rate equally spaced in time:
: S_N = \{85, 80, 89, 85, 80, 89, \ldots\}
Note the sequence is periodic with a period of 3. Let's choose m=2 and r=3 (the values of m and r can be varied without affecting the result).
Form a sequence of vectors:
:\begin{align} \mathbf{x}(1) & = [u(1) \ u(2)] = [85 \ 80] \\ \mathbf{x}(2) & = [u(2) \ u(3)] = [80 \ 89] \\ \mathbf{x}(3) & = [u(3) \ u(4)] = [89 \ 85] \\ \mathbf{x}(4) & = [u(4) \ u(5)] = [85 \ 80] \\ & \ \ \vdots \end{align}
Distance is calculated repeatedly as follows. In the first calculation,
:\ d[\mathbf{x}(1), \mathbf{x}(1)]=\max_k |\mathbf{x}(1)_k - \mathbf{x}(1)_k|=0 which is less than r .
In the second calculation, note that |u(2)-u(3)| > |u(1)-u(2)|, so
:\ d[\mathbf{x}(1), \mathbf{x}(2)] = \max_k |\mathbf{x}(1)_k - \mathbf{x}(2)_k| = |u(2)-u(3)| = 9
which is greater than r. Similarly,
:\begin{align} d[\mathbf{x}(1) &, \mathbf{x}(3)] = |u(2)-u(4)| = 5 > r \\ d[\mathbf{x}(1) &, \mathbf{x}(4)] = |u(1)-u(4)| = |u(2)-u(5)| = 0 \\ & \vdots \end{align}
The result is a total of 17 terms \mathbf{x}(j) such that d[\mathbf{x}(1), \mathbf{x}(j)] \le r. These include \mathbf{x}(1), \mathbf{x}(4), \mathbf{x}(7), \ldots, \mathbf{x}(49). In these cases, C^m_i(r) is
:\ C_1^2(3) = \frac{17}{50}
:\ C_2^2(3) = \frac{17}{50}
:\ C_3^2(3) = \frac{16}{50}
:\ C_4^2(3) = \frac{17}{50}
:\ \cdots
Note in Step 4, 1 \leq i \leq n for \mathbf{x}(i) . So the terms \mathbf{x}(j) such that d[\mathbf{x}(3), \mathbf{x}(j)] \leq r include \mathbf{x}(3), \mathbf{x}(6), \mathbf{x}(9),\ldots,\mathbf{x}(48), and the total number is 16.
At the end of these calculations, we have
:\phi^2 (3) = {1 \over 50} \sum_{i=1}^{50}\log(C_i^2(3))\approx-1.0982
Then we repeat the above steps for m=3. First form a sequence of vectors:
:\begin{align} \mathbf{x}(1) & = [u(1) \ u(2) \ u(3)] = [85 \ 80 \ 89] \\ \mathbf{x}(2) & = [u(2) \ u(3) \ u(4)] = [80 \ 89 \ 85] \\ \mathbf{x}(3) & = [u(3) \ u(4) \ u(5)] = [89 \ 85 \ 80] \\ \mathbf{x}(4) & = [u(4) \ u(5) \ u(6)] = [85 \ 80 \ 89] \\ & \ \ \vdots \end{align}
By calculating distances between the vectors \mathbf{x}(i), \mathbf{x}(j), 1 \le i \le 49, we find the vectors satisfying the filtering level have the following characteristic:
:d[\mathbf{x}(i), \mathbf{x}(i+3)] = 0
Therefore,
:\ C_1^3(3) = \frac{17}{49}
:\ C_2^3(3) = \frac{16}{49}
:\ C_3^3(3) = \frac{16}{49}
:\ C_4^3(3) = \frac{17}{49}
:\ \cdots
At the end of these calculations, we have
:\phi^3(3) = {1 \over 49} \sum_{i=1}^{49} \log(C_i^3(3)) \approx -1.0982
Finally,
: \mathrm{ApEn} = \phi^2(3) - \phi^3(3) \approx 0.000010997
The value is very small, implying the sequence is regular and predictable, which is consistent with the observation.
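The numbers in this example can be verified with a short script, a direct transcription of Steps 3 to 5 (phi is an illustrative name):

```python
import math

def phi(u, m, r):
    # Steps 3-5: embed into m-dimensional runs, count Chebyshev
    # matches within r (self-match included), average the logs.
    n = len(u) - m + 1
    x = [u[i:i + m] for i in range(n)]
    c = [
        sum(1 for x_j in x if max(abs(a - b) for a, b in zip(x_i, x_j)) <= r) / n
        for x_i in x
    ]
    return sum(math.log(v) for v in c) / n

u = [85, 80, 89] * 17                    # the 51-sample periodic series
print(round(phi(u, 2, 3), 4))            # -1.0982
print(round(phi(u, 3, 3), 4))            # -1.0982 (they differ before rounding)
print(abs(phi(u, 2, 3) - phi(u, 3, 3)))  # magnitude ~1.1e-05
```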
Python implementation
import math


def approx_entropy(time_series, run_length, filter_level) -> float:
    """
    Approximate entropy

    >>> import random
    >>> regularly = [85, 80, 89] * 17
    >>> print(f"{approx_entropy(regularly, 2, 3):e}")
    1.099654e-05
    >>> randomly = [random.choice([85, 80, 89]) for _ in range(17*3)]
    >>> 0.8 < approx_entropy(randomly, 2, 3) < 1
    True
    """

    def _maxdist(x_i, x_j):
        return max(abs(ua - va) for ua, va in zip(x_i, x_j))

    def _phi(m):
        n = len(time_series) - m + 1
        x = [
            [time_series[j] for j in range(i, i + m)]
            for i in range(n)
        ]
        counts = [
            sum(1 for x_j in x if _maxdist(x_i, x_j) <= filter_level) / n
            for x_i in x
        ]
        return sum(math.log(c) for c in counts) / n

    return abs(_phi(run_length + 1) - _phi(run_length))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
MATLAB implementation
- Fast Approximate Entropy from MATLAB Central
- approximateEntropy
Interpretation
The presence of repetitive patterns of fluctuation in a time series renders it more predictable than a time series in which such patterns are absent. ApEn reflects the likelihood that similar patterns of observations will not be followed by additional similar observations. A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
Advantages
The advantages of ApEn include:
- Lower computational demand. ApEn can be designed to work for small data samples (small N) and can be applied in real time.
- Less effect from noise. If data is noisy, the ApEn measure can be compared to the noise level in the data to determine what quality of true information may be present in the data.
Limitations
The ApEn algorithm counts each sequence as matching itself to avoid the occurrence of \log(0) in the calculations. This step might introduce bias in ApEn, which causes ApEn to have two poor properties in practice:
- ApEn is heavily dependent on the record length and is uniformly lower than expected for short records.
- It lacks relative consistency. That is, if ApEn of one data set is higher than that of another, it should, but does not, remain higher for all conditions tested.
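The role of the self-match behind this bias can be seen directly: without it, a template vector with no other run within r would give C_i^m(r) = 0, making \log(0) undefined; with it, every C_i^m(r) is at least 1/n, which inflates the match fractions and pulls ApEn down for short records. A minimal sketch (match_fractions is an illustrative helper):

```python
def match_fractions(u, m, r, include_self):
    # Step 4 counts: fraction of runs within r of each x(i),
    # optionally excluding the self-match that ApEn keeps on purpose.
    x = [u[i:i + m] for i in range(len(u) - m + 1)]
    n = len(x)
    return [
        sum(
            1 for j, x_j in enumerate(x)
            if (include_self or j != i)
            and max(abs(a - b) for a, b in zip(x_i, x_j)) <= r
        ) / n
        for i, x_i in enumerate(x)
    ]

u = [1.0, 5.0, 2.0, 9.0, 3.0, 7.0]  # no two runs lie within r of each other
print(min(match_fractions(u, 2, 0.5, include_self=True)))   # 0.2: self-match keeps C_i > 0
print(min(match_fractions(u, 2, 0.5, include_self=False)))  # 0.0: log(0) would be undefined
```

Sample entropy (SampEn) was introduced precisely to drop the self-match and reduce this bias.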
Applications
ApEn has been applied to classify electroencephalography (EEG) in psychiatric diseases, such as schizophrenia, epilepsy, and addiction.
References
- (2020-01-22). "Analyzing changes in the complexity of climate in the last four decades using MERRA-2 radiation data". Scientific Reports.
- (June 2019). "Approximate Entropy and Sample Entropy: A Comprehensive Tutorial". Entropy.
- "PhysioNet".
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.