Regularisation
Definition: A family of algebraic operations that introduce a small perturbation to a tensor in order to alter its determinant; consequently, regularization changes the condition number of a matrix. Its applications are vast, e.g. enabling or stabilizing matrix inversion, altering the prominence of some dimensions of the tensor over others, or increasing differentiability (e.g. smoothing). By far the most popular approach is Tikhonov regularization, which perturbs the main diagonal isotropically and has a single parameter, but many other regularization schemes exist for different purposes. In fNIRS, regularization is commonly used to improve the stability and accuracy of the image-reconstruction algorithm by introducing additional constraints. It is often applied when solving the ill-posed inverse problem of image reconstruction from noisy data, where it reduces the effect of these error sources on the final reconstructed image. Another form used in fNIRS analysis is sparsity regularization, which adds a penalty term proportional to the L1 norm of the changes in hemoglobin concentration; this penalty encourages a sparse solution with many zero or near-zero coefficients, which can help identify the brain regions that are most active.
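A minimal sketch of the two ideas above, using a hypothetical ill-conditioned matrix built with NumPy (the matrix, data, and the value of the regularization parameter `lam` are illustrative assumptions, not taken from any fNIRS pipeline): Tikhonov regularization adds an isotropic perturbation to the main diagonal before inversion, lowering the condition number, while a soft-threshold step illustrates how an L1 penalty drives small coefficients to zero.

```python
import numpy as np

# Hypothetical ill-conditioned sensitivity matrix A and noisy measurements y
# (stand-ins for a Jacobian and fNIRS data; values are illustrative only).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 8), 5)        # nearly collinear columns
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.01 * rng.standard_normal(8)

lam = 1e-2                                         # single Tikhonov parameter

# Tikhonov: perturb the main diagonal of A^T A isotropically, then solve.
AtA = A.T @ A
x_tik = np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ y)

# The diagonal shift lowers the condition number of the matrix being inverted.
print(np.linalg.cond(AtA))                         # large
print(np.linalg.cond(AtA + lam * np.eye(5)))       # smaller

# Sparsity (L1) flavor: soft-thresholding zeroes out small coefficients,
# the basic proximal step behind L1-penalized solvers.
x_sparse = np.sign(x_tik) * np.maximum(np.abs(x_tik) - lam, 0.0)
print(x_sparse)                                    # near-zero entries become 0
```

Adding `lam` to every eigenvalue of the symmetric positive-definite matrix `AtA` shrinks the ratio of largest to smallest eigenvalue, which is exactly the condition-number improvement the definition describes.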
Alternative definition:
Synonym:
References: https://doi.org/10.1063/5.0015512
https://doi.org/10.1117/1.JBO.26.5.056001
Related terms: Moore-Penrose Pseudoinverse