{\rtf1\ansi\ansicpg1252\deff0\nouicompat\deflang3079{\fonttbl{\f0\fnil\fcharset0 Arial;}{\f1\fnil\fcharset161{\*\fname Arial;}Arial Greek;}{\f2\fnil\fcharset1 Cambria Math;}{\f3\fnil\fcharset2 Symbol;}} {\colortbl ;\red0\green0\blue255;} {\*\generator Riched20 6.3.9600}{\*\mmathPr\mmathFont2\mwrapIndent1440 }\viewkind4\uc1 \pard\qc\b\fs28 Statistical Tests View\b0\par \pard\fs18\par \par The Statistical Tests View offers tools for checking the normality of the data, testing for differences between groups, and estimating effect sizes.\par \par \par \ul\fs22 Normality Check\par \ulnone\fs18\par \pard\qj Normality of the data is checked using ALGLIB's Jarque-Bera-test [4], which requires at least 5 samples. The test checks whether sample data is normally distributed (H\sub 0\nosupersub ) or not (H\sub 1\nosupersub ). The table shows the calculated p-values. If a p-value is below the significance level \f1\lang1032\'e1\f0\lang3079 (default: 0.05), it is very unlikely that the data is normally distributed:\par \par p \f2\u8804?\f0\lang3079 \f1\lang1032\'e1\f0\lang3079 : reject H\sub 0\nosupersub , data is not normally distributed\par p > \f1\lang1032\'e1\f0\lang3079 : do not reject H\sub 0\nosupersub , data is consistent with a normal distribution\par \par A green check is only given if all groups are normally distributed. 
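The decision rule above can be sketched in Python. This is an illustrative aside only: the view itself uses ALGLIB's Jarque-Bera implementation, while SciPy's stats.jarque_bera is assumed here as a stand-in.

```python
# Illustrative sketch of the normality decision rule described above.
# Assumption: SciPy's jarque_bera as a stand-in for ALGLIB's implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_sample = rng.normal(loc=0.0, scale=1.0, size=200)   # should pass
skewed_sample = rng.exponential(scale=1.0, size=200)       # should fail

alpha = 0.05  # significance level (the view's default)
for name, sample in (("normal", normal_sample), ("skewed", skewed_sample)):
    statistic, p_value = stats.jarque_bera(sample)
    if p_value <= alpha:
        verdict = "reject H0: not normally distributed"
    else:
        verdict = "do not reject H0: consistent with normality"
    print(name, round(p_value, 4), verdict)
```

The exponential sample is heavily skewed, so its p-value falls far below the significance level, illustrating the rejection branch.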
This test should be given special attention if the t-test is used afterwards, as the t-test requires normally distributed sample data (except for large sample sizes, see central limit theorem [5]).\par \pard\par \par \ul\fs22 Statistical Testing between Groups\par \ulnone\fs18\par \pard\qj The statistical tests view provides three different tests: \par \pard{\pntext\f3\'B7\tab}{\*\pn\pnlvlblt\pnf3\pnindent0{\pntxtb\'B7}}\fi-360\li720\qj T-test\par {\pntext\f3\'B7\tab}Mann-Whitney-U-test\par {\pntext\f3\'B7\tab}Kruskal-Wallis-test\par \pard\qj\par The first two tests are provided by the ALGLIB package (see ALGLIB documentation [0, 1]) while the Kruskal-Wallis test is a C# port of R's kruskal.test() function [2].\par \par The p-values calculated by all three tests can be interpreted as follows: The null hypothesis H\sub 0\nosupersub states that the distributions of the compared groups are equal while H\sub 1\nosupersub states the opposite (the samples are from populations with different distributions). \par All tests, similar to the Jarque-Bera-test, calculate a p-value. If this value is below the significance level \f1\lang1032\'e1\f0\lang3079 (default: 0.05), the null hypothesis is rejected, which means that the groups do not have the same distribution:\par \par p \f2\u8804?\f0\lang3079 \f1\lang1032\'e1\f0\lang3079 : reject H\sub 0\nosupersub , the groups do not come from the same distribution\par p > \f1\lang1032\'e1\f0\lang3079 : do not reject H\sub 0\nosupersub , the groups are consistent with having the same distribution\par \par \pard\par \ul\fs22 Statistical Testing of all Groups\par \ulnone\fs18\par \pard\qj Statistical hypothesis testing across all groups is performed using the Kruskal-Wallis-test [3]. A green check is given if at least one group differs. Note that if H\sub 0\nosupersub gets rejected, this means that at least one group has a different underlying distribution than at least one other group. The test provides no information about which group(s) are different. 
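The all-groups step can be sketched as follows. This is an illustration only: the view uses the C# port of R's kruskal.test(), while SciPy's stats.kruskal is assumed here as a stand-in, and the three groups are made-up data.

```python
# Illustrative sketch of the all-groups Kruskal-Wallis step.
# Assumption: SciPy's kruskal as a stand-in for the view's C# port of
# R's kruskal.test(); the groups below are synthetic illustration data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=1.0, size=20)
group_b = rng.normal(loc=0.0, scale=1.0, size=20)
group_c = rng.normal(loc=2.0, scale=1.0, size=20)  # shifted group

h_statistic, p_value = stats.kruskal(group_a, group_b, group_c)
alpha = 0.05
if p_value <= alpha:
    print("reject H0: at least one group differs", round(p_value, 4))
else:
    print("do not reject H0", round(p_value, 4))
```

Because group_c is shifted by two standard deviations, the test rejects H\sub 0\nosupersub here; it does not say which group is the different one, which is exactly why the pairwise follow-up below is needed.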
Therefore, if H\sub 0\nosupersub gets rejected, pairwise tests should be performed to answer that question. \par \pard\par \par \ul\fs22 Pairwise Statistical Tests\par \ulnone\fs18\par \pard\qj For pairwise comparisons, both the Mann-Whitney-U-test and the t-test are used. They also require group sizes of at least 5 samples. As mentioned before, the t-test requires the groups to be normally distributed (though it does not require the groups to have equal variance; this is called a two-sample unpooled test) while the Mann-Whitney-U-test is a non-parametric test that can be used for samples with unknown distribution.\par In addition to the raw p-values, corrected p-values are calculated using the Bonferroni-Holm correction [7] (implemented as described at [8]) to control the familywise error rate. \par Additionally, the effect size is calculated using Cohen's d and Hedges' g. The effect size describes the strength of a phenomenon [6]. Higher values indicate a stronger effect. Statistical hypothesis tests may, especially with large sample sizes, lead to significant results even though there is no relevant difference. In such a case the effect size is small, hinting that the result may lack practical relevance. 
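The correction and effect size steps can be sketched as follows. This is a minimal Python illustration, not the view's C# implementation: the standard Holm step-down procedure and the textbook pooled-standard-deviation form of Cohen's d (with the usual small-sample correction for Hedges' g) are assumed here.

```python
# Illustrative sketch of the post-hoc steps described above.
# Assumptions: standard Holm step-down adjustment; pooled-SD Cohen's d;
# Hedges' g via the common small-sample correction factor 1 - 3/(4n - 9).
import numpy as np

def holm_bonferroni(p_values):
    # Process p-values in ascending order; adjusted p at rank j is
    # (m - j) * p, made monotone by a running maximum and clipped to 1.
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        candidate = min((m - rank) * p[idx], 1.0)
        running_max = max(running_max, candidate)
        adjusted[idx] = running_max
    return adjusted

def cohens_d(a, b):
    # Standardized mean difference using the pooled standard deviation.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def hedges_g(a, b):
    # Cohen's d with a small-sample bias correction.
    n = len(a) + len(b)
    correction = 1.0 - 3.0 / (4.0 * n - 9.0)
    return correction * cohens_d(a, b)
```

For example, raw pairwise p-values of 0.01, 0.04 and 0.03 become 0.03, 0.06 and 0.06 after the Holm adjustment: only the first comparison stays significant at the 0.05 level, even though the other two raw p-values were below it.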
Conversely, these values are also often used for sample size planning: lower values of d and g mean that a larger sample size is needed to detect the effect reliably.\par \pard\sa200\sl276\slmult1\qj\par \par \ul\fs22 References:\par \ulnone\fs18 [0] {{\field{\*\fldinst{HYPERLINK http://www.alglib.net/hypothesistesting/studentttest.php }}{\fldrslt{http://www.alglib.net/hypothesistesting/studentttest.php\ul0\cf0}}}}\f0\fs18\par [1] {{\field{\*\fldinst{HYPERLINK http://www.alglib.net/hypothesistesting/mannwhitneyu.php }}{\fldrslt{http://www.alglib.net/hypothesistesting/mannwhitneyu.php\ul0\cf0}}}}\f0\fs18\par [2] {{\field{\*\fldinst{HYPERLINK https://stat.ethz.ch/R-manual/R-devel/library/stats/html/kruskal.test.html }}{\fldrslt{https://stat.ethz.ch/R-manual/R-devel/library/stats/html/kruskal.test.html\ul0\cf0}}}}\f0\fs18\par [3] {{\field{\*\fldinst{HYPERLINK http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance }}{\fldrslt{http://en.wikipedia.org/wiki/Kruskal%E2%80%93Wallis_one-way_analysis_of_variance\ul0\cf0}}}}\f0\fs18\par [4] {{\field{\*\fldinst{HYPERLINK http://www.alglib.net/hypothesistesting/jarqueberatest.php }}{\fldrslt{http://www.alglib.net/hypothesistesting/jarqueberatest.php\ul0\cf0}}}}\f0\fs18\par [5] {{\field{\*\fldinst{HYPERLINK http://en.wikipedia.org/wiki/Central_limit_theorem }}{\fldrslt{http://en.wikipedia.org/wiki/Central_limit_theorem\ul0\cf0}}}}\f0\fs18\par [6] {{\field{\*\fldinst{HYPERLINK http://en.wikipedia.org/wiki/Effect_size }}{\fldrslt{http://en.wikipedia.org/wiki/Effect_size\ul0\cf0}}}}\f0\fs18\par [7] {{\field{\*\fldinst{HYPERLINK http://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method }}{\fldrslt{http://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method\ul0\cf0}}}}\f0\fs18\par \pard\sa200\sl276\slmult1 [8] {{\field{\*\fldinst{HYPERLINK http://www.mathworks.com/matlabcentral/fileexchange/28303-bonferroni-holm-correction-for-multiple-comparisons 
}}{\fldrslt{http://www.mathworks.com/matlabcentral/fileexchange/28303-bonferroni-holm-correction-for-multiple-comparisons\ul0\cf0}}}}\f0\fs18\par \pard\sa200\sl276\slmult1\qj\fs22\lang7\par \pard\sa200\sl276\slmult1\par }