Conflicting results with multcompare when using the Kruskal-Wallis test on multiple groups

I have 6 groups (named A to F) of continuous data, and most of the groups follow a non-normal distribution. I've plotted the values using a boxplot with notch 'on' and applied a Kruskal-Wallis test, which confirmed that the groups did not all come from the same distribution. I then used multcompare to check the significance of each of the group pairs. The data is in fdata, the group names in fgroups:
% Box plot of all groups (notched, red dots mark outliers)
boxplot(fdata, 'Notch', 'on', 'Symbol', 'r.');
% Kruskal-Wallis test across all groups ('on' displays the table and figure)
[p, tbl, stats] = kruskalwallis(fdata, fgroups, 'on');
disp(tbl);
% Pairwise post-hoc comparisons; each row of c is
% [group1 group2 lowerCI estimate upperCI p-value]
c = multcompare(stats, 'Display', 'on');
[ncomp, nccol] = size(c);
disp(' ');
disp(' Comparing groups - showing only significant differences')
for j = 1:ncomp
    if c(j, nccol) <= 0.05    % last column holds the p-value
        % Use stats.gnames - the group names in the order multcompare
        % indexes them - rather than the per-observation fgroups array
        disp(['   Group ' stats.gnames{c(j,1)} ' to ' stats.gnames{c(j,2)} ...
              ' - p = ' num2str(c(j, nccol))]);
    end
end
Both the printout and the plot of the mean rank sums showed that groups B, D & F were not significantly different. However, looking at the boxplot of group D it was clear that its notches did not overlap with those of groups B & F, which would indicate that D is significantly different from B & F. When I separated out B, D & F and analysed them as a group, multcompare then gave (what I assume to be) the correct answer: D was significantly different from B & F (although B & F were not different from each other).
So what is going on? I note that the plot shows that multcompare is analyzing the 'mean rank sum' and is using all of the groups to calculate the rank (instead of the ranks between the pairs of groups?). Obviously when you have fewer groups you are going to have a different rank sum and thus a different answer, which doesn't seem right.
Of course, it may be that I'm using multcompare incorrectly - please advise.
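The pooled-rank behaviour the question points to is easy to demonstrate. A minimal sketch (Python here for brevity, with made-up numbers; the helper mean_ranks is illustrative, not part of any library): Kruskal-Wallis-style mean ranks come from ranking all observations together, so dropping a group changes every remaining group's mean rank, and can change the apparent separation between two groups that were never touched.

```python
# Illustration (hypothetical data): mean ranks in a Kruskal-Wallis-style
# analysis are computed from the POOLED ranking of all observations, so
# removing a group changes the mean ranks of the groups that remain.

def mean_ranks(groups):
    """Pool all observations, rank them (1 = smallest, ties averaged),
    and return the mean rank of each group."""
    pooled = sorted(x for g in groups for x in g)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    return [sum(rank[x] for x in g) / len(g) for g in groups]

A = [1, 10]
B = [5, 6]
C = [2, 3, 4]

print(mean_ranks([A, B, C]))  # [4.0, 5.5, 3.0] - A and B look separated
print(mean_ranks([A, B]))     # [2.5, 2.5]      - drop C and they coincide
```

With C present, A and B have mean ranks 4.0 and 5.5; remove C and both collapse to 2.5. This is exactly the order/composition dependence the question describes: the comparison between two groups depends on which other groups are in the pool.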

Answers (3)

davidwriter
davidwriter on 24 Dec 2016
Thank you for your reply.
Since I posted I've read up on the problems involved in doing multiple comparisons of non-parametric data, and the effect that I observed is well known - the results can depend on the composition of the individual data sets.
The Kruskal-Wallis test only tells you whether the data sets come from the same distribution; sorting out the differences between the sets requires a more sensitive test than multcompare (even with the HSD correction). In the end I switched to R and settled on the Conover-Iman test with the Benjamini-Yekutieli adjustment. This turned out to be less sensitive to the composition of the groups and gave consistent results.
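For anyone wanting to reproduce the Benjamini-Yekutieli adjustment mentioned above outside R, the p-value adjustment itself is simple to compute directly. A minimal sketch (Python; the function name by_adjust is my own, and this implements only the BY adjustment, not the Conover-Iman test):

```python
from math import fsum

def by_adjust(pvals):
    """Benjamini-Yekutieli adjusted p-values (controls the false discovery
    rate under arbitrary dependence). Returned in the input order."""
    m = len(pvals)
    c_m = fsum(1.0 / k for k in range(1, m + 1))  # harmonic-sum penalty
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running_min = 1.0
    # step-up: walk from the largest p-value down, enforcing monotonicity
    for pos in range(m - 1, -1, -1):
        i = order[pos]
        q = pvals[i] * m * c_m / (pos + 1)
        running_min = min(running_min, q)
        adj[i] = min(running_min, 1.0)
    return adj

print(by_adjust([0.01, 0.04, 0.30]))  # [0.055, 0.11, 0.55]
```

Note how conservative BY is: even the smallest raw p-value (0.01) ends up above 0.05 after adjustment for just three tests, because of the extra harmonic-sum factor that plain Benjamini-Hochberg lacks.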

Jake
Jake on 6 May 2019
To avoid confusion, this is not an issue with MATLAB's multcompare. Testing the medians via boxplot notches (which should only ever be treated as a rough guide!) does not correct for multiple comparisons and can therefore appear to show significance. By default, multcompare applies a correction for multiple comparisons, which makes those differences not significant. When the user removes groups ("When I separated out B, D & F and analysed them as a group"), the correction is relaxed, because it now only has to account for 3 comparisons, which then allows the result to be significant.
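Jake's point can be made concrete with a little arithmetic: 6 groups give C(6,2) = 15 pairwise comparisons, but separating out B, D & F leaves only C(3,2) = 3, so any family-wise correction is far less strict in the second case. A sketch of the Bonferroni version of that arithmetic (multcompare's default Tukey-Kramer correction scales similarly with the number of groups, though not by this exact formula):

```python
from math import comb

alpha = 0.05  # desired family-wise error rate

# Per-comparison threshold under a Bonferroni-style correction:
# it tightens rapidly as the number of groups (and hence pairs) grows.
for k in (6, 3):
    n_pairs = comb(k, 2)               # number of pairwise comparisons
    print(k, n_pairs, alpha / n_pairs)  # 6 -> 15 pairs; 3 -> 3 pairs
```

A raw pairwise p-value of, say, 0.01 clears the 3-group threshold (0.05/3 ≈ 0.0167) but not the 6-group one (0.05/15 ≈ 0.0033), which is exactly the flip in significance the original poster observed.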

Tom Lane
Tom Lane on 10 Dec 2016
It's sad but true that there can be an overall difference according to one test, another test might not declare specific differences to be significant, and a test of one type (Kruskal-Wallis) might not match a test of another type (test of medians via boxplot notches). If you suspect a bug and can share your data, I'd be willing to look into it.
