So, I did some more digging. It turns out that the first time I run my simulation with the cell array allocation after a fresh start of MATLAB, there is no slowdown:
Done in 54.6 seconds.
However, just running the code 9 more times after that, while still doing the cell allocation, shows a clear slowdown:
Done in 54.6 seconds. (first time)
Done in 60.7 seconds.
Done in 66.0 seconds.
Done in 72.8 seconds.
Done in 83.6 seconds.
Done in 88.8 seconds.
Done in 95.2 seconds.
Done in 103.4 seconds.
Done in 106.0 seconds.
Done in 113.3 seconds.
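For reference, the timing pattern above comes from a loop along these lines (a minimal sketch: `run_simulation` and the array sizes are placeholders for my actual code, not the real simulation):

```matlab
% Hypothetical reproduction harness: run the same simulation repeatedly
% and print the wall-clock time of each run.
nRuns = 10;
for k = 1:nRuns
    tStart = tic;
    % Preallocation style being compared (toggle one of these lines):
    data = cell(1, 1e6);       % cell-array version (shows the slowdown)
    % data = zeros(1e6, 10);   % multidimensional version (stable timing)
    run_simulation(data);      % placeholder for the actual simulation
    fprintf('Done in %.1f seconds.\n', toc(tStart));
    clear data                 % free the large array between runs
end
```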
Then, switching back to a multidimensional array and running the code:
Done in 58.1 seconds.
This is consistent across many repetitions. Now switching back to the cell array:
Done in 124.4 seconds.
So, the slowdown seems persistent. This must have something to do with the way MATLAB allocates and frees large cell arrays. The link I posted (https://nl.mathworks.com/matlabcentral/answers/331930-can-anyone-explain-why-matlab-gets-slower-and-slower-until-restart-if-large-cell-or-struct-arrays-ar) describes a similar problem, but there the cell array is actually used in the script. Here, it is only allocated, never used.
As discussed in the comments on that page, this might have to do with Windows memory handling. Is this a known issue? How can I avoid it? Could MATLAB avoid it by changing the way it allocates cell arrays?