What is the fastest way to load many large files and then reuse that data
I have upwards of 200 .csv files that are around 500 MB each. Each file contains a one-line text header and 10 columns of numeric data with many, many rows. Columns 2-4 are identical in all files, so I only need to load them once, from any one file. From all of the files, I need columns 5-8 only. The files are all in one folder with a systematic naming convention, if that helps at all.

What is the fastest way to do this the first time? I've tried importdata, textscan, and readmatrix, and each was either unable to do what I describe above or still too slow. Once the data is loaded, I'll do some manipulation and save it as a .mat file to work on later. Am I right that saving as .mat will produce the fastest load times in the future?
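One possible approach, sketched below under the stated assumptions (one header line, 10 numeric columns, all files in one folder). It builds import options once and reuses them for every file, so the column selection is not re-detected 200 times; the folder path and variable names are hypothetical. On the .mat question: -v7 MAT-files generally load fastest, but variables larger than 2 GB require the -v7.3 (HDF5-based) format.

```matlab
folder = "C:\data";                        % hypothetical folder
files  = dir(fullfile(folder, "*.csv"));
firstFile = fullfile(folder, files(1).name);

% Columns 2-4 are identical in every file: read them once, from one file.
optsOnce = detectImportOptions(firstFile, "NumHeaderLines", 1);
optsOnce.SelectedVariableNames = optsOnce.VariableNames(2:4);
common = readmatrix(firstFile, optsOnce);

% Detect import options once, restrict to columns 5-8, reuse for all files.
opts = detectImportOptions(firstFile, "NumHeaderLines", 1);
opts.SelectedVariableNames = opts.VariableNames(5:8);

% Preallocate, then read only the selected columns from each file.
data = cell(numel(files), 1);
for k = 1:numel(files)
    data{k} = readmatrix(fullfile(folder, files(k).name), opts);
end

% Save for later sessions. With ~200 files this will exceed 2 GB,
% which forces the -v7.3 format; for smaller results, plain -v7
% (the default) typically loads faster.
save(fullfile(folder, "alldata.mat"), "common", "data", "-v7.3");
```

If you have Parallel Computing Toolbox, the read loop is embarrassingly parallel and can be changed to a parfor; alternatively, tabularTextDatastore with SelectedVariableNames handles the whole folder in one object and reads lazily, which may matter if the selected columns still don't fit in memory at once.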