This ticket explores the possibility of employing batching and vectorisation techniques (i.e. using dedicated datatypes from `System.Numerics`) to speed up the interpretation of symbolic expression trees.

Batching consists of allocating a small buffer for each instruction and performing each operation on the whole buffer at once, instead of on an individual value for each row of the dataset.
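
The sketch below illustrates this idea under some assumptions: the opcode set, instruction layout, method names, and batch size of 64 are made up for illustration and are not taken from the existing interpreter.

```csharp
using System;

// Hypothetical opcodes and instruction layout, for illustration only.
enum OpCode { Variable, Constant, Add, Mul }

struct Instruction {
    public OpCode OpCode;
    public int Arg1, Arg2;   // child instruction indices (or the column index for Variable)
    public double Value;     // constant value for Constant
}

static class BatchedInterpreter {
    const int BatchSize = 64; // assumed batch size

    // Evaluates a prefix-linearised expression tree for BatchSize consecutive rows.
    // 'columns' holds the dataset columns; 'startRow' is the first row of the batch.
    public static double[] EvaluateBatch(Instruction[] code, double[][] columns, int startRow) {
        var buffers = new double[code.Length][];
        // Children are stored after their parent, so iterate backwards:
        // child buffers are always filled before the parent instruction uses them.
        for (int i = code.Length - 1; i >= 0; i--) {
            var buf = buffers[i] = new double[BatchSize];
            switch (code[i].OpCode) {
                case OpCode.Constant:
                    for (int k = 0; k < BatchSize; k++) buf[k] = code[i].Value;
                    break;
                case OpCode.Variable:
                    Array.Copy(columns[code[i].Arg1], startRow, buf, 0, BatchSize);
                    break;
                case OpCode.Add: {
                    double[] a = buffers[code[i].Arg1], b = buffers[code[i].Arg2];
                    for (int k = 0; k < BatchSize; k++) buf[k] = a[k] + b[k]; // one pass over the whole buffer
                    break;
                }
                case OpCode.Mul: {
                    double[] a = buffers[code[i].Arg1], b = buffers[code[i].Arg2];
                    for (int k = 0; k < BatchSize; k++) buf[k] = a[k] * b[k];
                    break;
                }
            }
        }
        return buffers[0]; // result buffer of the root instruction
    }
}
```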

Vectorisation additionally involves using SIMD (Single Instruction, Multiple Data) CPU instructions to speed up batch processing.

=== Managed (C#) interpreter

Batch processing using the `Vector<double>` struct from `System.Numerics` allows us to achieve a 2-3x speed improvement compared to the standard linear interpreter.
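
As a minimal sketch of how such a vectorised buffer operation could look (the class and method names below are illustrative, not the actual implementation), an element-wise addition of two instruction buffers with `Vector<double>` processes `Vector<double>.Count` elements per iteration:

```csharp
using System.Numerics;

static class VectorOps {
    // Adds two instruction buffers element-wise using SIMD instructions.
    // Vector<double>.Count is the hardware SIMD width (e.g. 4 doubles with AVX2).
    public static void Add(double[] a, double[] b, double[] result) {
        int width = Vector<double>.Count;
        int k = 0;
        for (; k <= result.Length - width; k += width) {
            var va = new Vector<double>(a, k);   // load 'width' elements starting at index k
            var vb = new Vector<double>(b, k);
            (va + vb).CopyTo(result, k);         // one SIMD add, then store back into the buffer
        }
        for (; k < result.Length; k++)           // scalar tail for the remaining elements
            result[k] = a[k] + b[k];
    }
}
```

If the batch size is chosen as a multiple of `Vector<double>.Count`, the scalar tail loop is never executed and each opcode reduces to a tight SIMD loop; `Vector.IsHardwareAccelerated` can additionally be checked at runtime to fall back to the scalar code path on hardware without SIMD support.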

=== Native interpreter

A symbolic expression tree interpreter in native code (C++) can offer a significant speed advantage due to more mature compiler backends (MSVC, GCC) and optimisations such as auto-vectorisation and loop unrolling.