Fast Computation of Entropy

Reducing computation time for entropy estimation in m-dimensional space

The most popular definitions of entropy, such as approximate entropy and sample entropy, are computationally expensive and may require a large amount of time when applied to long series or to a large number of signals. The computationally intensive part is the similarity check between points in the m-dimensional space. We propose algorithms that compute entropy fast. All algorithms return exactly the values prescribed by the metric's definition; no approximation techniques are used.
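
For concreteness, the sketch below shows the brute-force similarity check that dominates the cost: every pair of m-dimensional template vectors extracted from a series x is compared under the Chebyshev (maximum) norm against a tolerance r. The function and variable names are illustrative only and are not taken from the publications listed below.

    # Brute-force counting kernel: compare every pair of m-dimensional
    # template vectors; a pair is "similar" if the Chebyshev distance is <= r.
    import numpy as np

    def count_similar_pairs(x, m, r):
        n = len(x) - m + 1                      # number of template vectors
        templates = np.array([x[i:i + m] for i in range(n)])
        count = 0
        for i in range(n - 1):
            for j in range(i + 1, n):           # O(n^2) pairwise comparisons
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    count += 1
        return count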

The key idea is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary for those comparisons that are known a priori to fail the similarity check. The number of avoided comparisons is shown to be very large, resulting in a correspondingly large reduction in execution time.
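
The sketch below illustrates one simple way such early rejection can work; it is a simplified illustration of the idea under assumed names, not the exact algorithm of the publications below. If the template vectors are sorted by their first coordinate, then as soon as that coordinate alone differs by more than r, the similarity check is guaranteed to fail for the current candidate and for all remaining ones, so they can be skipped without being examined.

    # Early-rejection sketch: sort templates by their first coordinate and
    # stop the inner loop as soon as that coordinate alone rules out a match.
    import numpy as np

    def count_similar_pairs_fast(x, m, r):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        templates = templates[np.argsort(templates[:, 0])]   # sort by first coordinate
        count = 0
        for i in range(n - 1):
            for j in range(i + 1, n):
                # The first coordinates already differ by more than r, so the
                # Chebyshev distance must exceed r; every later j fails as well.
                if templates[j, 0] - templates[i, 0] > r:
                    break
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    count += 1
        return count

Both functions return the same count, since sorting does not change which pairs satisfy the similarity check; the gain comes solely from skipping comparisons that are known in advance to fail.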

 

Contact: manis -at- cs -dot- uoi -dot- gr

Selected Publications:

- George Manis, Md. Aktaruzzaman, and Roberto Sassi, “Low computational cost for sample entropy,” Entropy, MDPI, vol. 20, art. 61, 2018 [link]

- George Manis, “Fast computation of approximate entropy,” Computer Methods and Programs in Biomedicine, Elsevier, vol. 91, no. 1, pp. 48–54, Jul. 2008 [link]

 
