# Changes between Initial Version and Version 4 of Ticket #2958

Timestamp: 11/08/18 13:11:09

This ticket explores the possibility of employing batching and vectorisation techniques (i.e. using dedicated datatypes from System.Numerics) to speed up the interpretation of symbolic expression trees. Batching consists of allocating a small buffer for each instruction and performing operations on the whole buffer (instead of on individual values for each row in the dataset). Vectorisation additionally involves using SIMD (Single Instruction, Multiple Data) CPU instructions to speed up batch processing.

=== Managed (C#) interpreter

Batch processing using the Vector class in System.Numerics allows us to achieve a 2-3x speed improvement compared to the standard linear interpreter.

=== Native interpreter

A tree interpreter in native code (C++) can offer a significant speed advantage due to more mature compiler backends (msvc, gcc) and features like auto-vectorization and loop unrolling. Preliminary results show a 5-10x speed improvement compared to the linear tree interpreter. We should also investigate the potential benefit of integrating fast math libraries such as [https://github.com/dpiparo/vdt vdt] ('''v'''ectorize'''d''' ma'''t'''h) to increase computation speed.
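The batching idea described above can be sketched as follows. This is a minimal hypothetical illustration, not HeuristicLab's actual implementation: the instruction layout, opcode names, and the simplified operand resolution (each binary opcode reads the buffers of the two preceding instructions) are assumptions. Each instruction of a postfix program owns a small buffer of `BATCH_SIZE` values, and every opcode runs a fixed-trip loop over that buffer; such simple loops are easy for backends like msvc and gcc to auto-vectorize.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Number of rows processed per batch; one cache-friendly buffer per instruction.
constexpr std::size_t BATCH_SIZE = 64;

enum class OpCode { Var, Const, Add, Mul };

struct Instruction {
    OpCode op = OpCode::Const;
    double value = 0.0;             // constant value (OpCode::Const)
    const double* column = nullptr; // dataset column (OpCode::Var)
    double buf[BATCH_SIZE];         // per-instruction result buffer
};

// Evaluate a postfix program for rows [start, start + BATCH_SIZE).
// NOTE: for simplicity, binary opcodes take their operands from the two
// instructions immediately before them; a real interpreter would track
// explicit operand indices on a stack.
void EvaluateBatch(std::vector<Instruction>& code, std::size_t start) {
    for (std::size_t pc = 0; pc < code.size(); ++pc) {
        Instruction& in = code[pc];
        switch (in.op) {
            case OpCode::Var:   // load a slice of the dataset column
                for (std::size_t i = 0; i < BATCH_SIZE; ++i)
                    in.buf[i] = in.column[start + i];
                break;
            case OpCode::Const: // broadcast the constant over the buffer
                for (std::size_t i = 0; i < BATCH_SIZE; ++i)
                    in.buf[i] = in.value;
                break;
            case OpCode::Add: {
                const double* a = code[pc - 2].buf;
                const double* b = code[pc - 1].buf;
                for (std::size_t i = 0; i < BATCH_SIZE; ++i)
                    in.buf[i] = a[i] + b[i];
                break;
            }
            case OpCode::Mul: {
                const double* a = code[pc - 2].buf;
                const double* b = code[pc - 1].buf;
                for (std::size_t i = 0; i < BATCH_SIZE; ++i)
                    in.buf[i] = a[i] * b[i];
                break;
            }
        }
    }
    // The buffer of the last instruction now holds the batch results.
}
```

The key contrast with a row-by-row linear interpreter is that the opcode dispatch (the `switch`) happens once per instruction per batch instead of once per instruction per row, and the inner loops contain no branches, which is what enables auto-vectorization.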