# Comparison of LR-Approximation and H-arithmetic

In this program, different H-arithmetic and low-rank approximation techniques are compared for a given application, clustering, and H-function.

Currently, H-matrix multiplication (`approx-mm`) and H-LU factorization (`approx-lu`) are implemented.

## H-arithmetic

HLR implements the following H-arithmetic versions:

- **eager**: after an update is computed, it is immediately applied to the destination block, followed by a low-rank truncation if the destination is a low-rank block.
- **accu.**: all updates are accumulated per level in a low-rank matrix from root to leaves, with accumulated updates shifted down to sub-blocks; all accumulated updates are applied to the leaf blocks in a final step.
- **lazy**: for each leaf block, all updates are collected as a list of the corresponding products (or matrices), and the sum of all updates is applied at once in a final step.

See also Arithmetic for more information.
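To illustrate the difference between the eager and lazy strategies, here is a minimal dense-matrix sketch in Python/NumPy. It is not HLR's actual API; the matrices, ranks, and the `truncate` helper are hypothetical stand-ins. Eager arithmetic truncates after every single update, while lazy arithmetic sums all updates first and truncates once, which by the Eckart-Young theorem can only reduce (never increase) the final truncation error.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate(M, k):
    """Classical low-rank truncation: best rank-k approximation via SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

n, k = 64, 8
# a destination low-rank block and a list of low-rank updates A_i @ B_i
C = truncate(rng.standard_normal((n, n)), k)
updates = [(rng.standard_normal((n, k)), rng.standard_normal((k, n)))
           for _ in range(4)]

# eager: apply each update immediately, truncating after every addition
C_eager = C.copy()
for A, B in updates:
    C_eager = truncate(C_eager + A @ B, k)

# lazy: collect all updates and apply their sum in one final truncation
C_lazy = truncate(C + sum(A @ B for A, B in updates), k)

# exact (untruncated) result for reference
C_exact = C + sum(A @ B for A, B in updates)

print("eager error:", np.linalg.norm(C_exact - C_eager))
print("lazy  error:", np.linalg.norm(C_exact - C_lazy))
```

The trade-off mirrored here is the one the comparison measures: eager truncation keeps intermediate ranks small (cheaper per step) at the cost of accumulated truncation error, while lazy evaluation defers all truncation to a single, more accurate final step.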

## Low-Rank Approximation

The following low-rank approximation techniques are available in HLR and used in the comparison:

- **SVD**: Singular Value Decomposition
- **RRQR**: Rank-Revealing QR
- **RandLR/SVD**: Randomized LR and Randomized SVD
- **ACA**: Adaptive Cross Approximation
- **Lanczos**: Lánczos Bidiagonalization

See also Approximation for more information.
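As a small illustration of two of the listed techniques, the following Python/NumPy sketch (not HLR code; matrix sizes, rank, and oversampling are assumed for the example) compares the optimal SVD truncation with a basic randomized SVD on a matrix with decaying singular values:

```python
import numpy as np

rng = np.random.default_rng(1)

# build a test matrix with quickly decaying singular values
n, k = 128, 10
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)               # geometric decay
M = (U * s) @ V.T

# SVD: optimal rank-k approximation (Eckart-Young)
Us, ss, Vts = np.linalg.svd(M, full_matrices=False)
M_svd = (Us[:, :k] * ss[:k]) @ Vts[:k]

# randomized SVD: project onto a random subspace (with small
# oversampling), then take an exact SVD of the small projected matrix
p = 5                                  # oversampling parameter
Q, _ = np.linalg.qr(M @ rng.standard_normal((n, k + p)))
Ub, sb, Vtb = np.linalg.svd(Q.T @ M, full_matrices=False)
M_rand = (Q @ Ub[:, :k] * sb[:k]) @ Vtb[:k]

print("SVD     error:", np.linalg.norm(M - M_svd))
print("RandSVD error:", np.linalg.norm(M - M_rand))
```

The SVD gives the smallest possible rank-k error but touches the full matrix; the randomized variant only needs matrix-vector products with a thin random block, which is why it (like ACA and Lanczos bidiagonalization) is attractive when forming the full block is the dominant cost.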

## Comparison

Below you can choose between different settings for the H-function, application, etc., and view precomputed results. The figure shows the relative runtime, error, and memory consumption of the different H-arithmetic and low-rank approximation techniques, with the baseline defined by classical H-arithmetic (eager updates with SVD-based low-rank truncation).

If an arithmetic or approximation algorithm is missing from the comparison, it either failed with an error (e.g., did not converge) or its runtime or approximation quality was beyond any practical value.

Please note that, depending on the values, the x-axis uses a logarithmic scale.