# Way of conserving memory when extracting data from CSV

Mate 2u on 12 Apr 2013
Hi everybody, I have a few questions. I have some HUGE CSV files which I need in MATLAB for analysis. The CSV itself has 5 columns. The columns of relevance are:
Column 1 is the date, running from early 2007 until mid-2011, in the form mm/dd/yyyy.
Column 3 is the respective price.
Column 5 is the number of trades.
The questions I have are these:
1) How can I extract these 3 columns into a matrix in MATLAB without using too much memory (bear in mind that some of these CSV files have around 60 million rows)? Is there a way to decrease the memory MATLAB allocates for each element of the matrix? Please help with code.
2) How can I extract all the information into a non-string matrix (for analysis) for a specific year, e.g. only for 2009? So I would need to store in the matrix all information for 2009 (bearing in mind the memory limitations in 1).
Thanks so much.
per isakson on 13 Apr 2013
And price will never exceed
>> intmax('uint32')
ans =
4294967295
cents ????

per isakson on 13 Apr 2013
Edited: per isakson on 13 Apr 2013
Something like this will do it:
function mate2u
    % Preallocate compact integer arrays for 60 million rows. (This sketch
    % never fills them; the loop below only displays the converted chunks.)
    day_number = zeros( 60*1e6, 1, 'uint16' );  % day_number = 1 for 1/1/2007
    price      = zeros( 60*1e6, 1, 'uint32' );  % price in 1/100 of cents
    volume     = zeros( 60*1e6, 1, 'uint16' );  % number of trades
    pivot_day  = datenum( '1/1/2007', 'mm/dd/yyyy' );
    chunk_size = 10;                            % for real files, choose e.g. 5*1e6
    fid = fopen( 'mate2u.txt' );
    while not( feof( fid ) )
        % read a chunk: date string, skip time, price, skip symbol, trades
        cac = textscan( fid, '%s%*s%f32%*s%u16', chunk_size, 'Delimiter', ',' );
        uint16( datenum( cac{1}, 'mm/dd/yyyy' ) - pivot_day )  % days since pivot
        uint32( cac{2}*10000 )                                 % price as integer
        cac{3}                                                 % number of trades
    end
    fclose( fid );
end
where mate2u.txt is
04/29/2008,38:52.0,71.35,CTN08,2
04/29/2008,38:53.0,71.35,CTN08,2
04/29/2008,38:56.0,71.35,CTN08,3
04/29/2008,38:56.0,71.35,CTN08,1
04/29/2008,38:56.0,71.35,CTN08,1
04/29/2008,38:57.0,71.35,CTN08,1
prints to command window
ans =
484
484
484
484
484
484
ans =
713500
713500
713500
713500
713500
713500
ans =
2
2
3
1
1
1
>>
per isakson on 13 Apr 2013
Firstly, make some experiments with the [{}Code] button.
Secondly:
• convert the script to a function (I've done it in my answer)
• step through my code with the debugger and analyze what it does
• notice that the twenty lines are indeed printed in the command window - two chunks of ten entries each
• the prices are hurt by the single precision "%f32" - you could change f32 to f64
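The precision point can be checked directly. This snippet (the price value is illustrative, not from the thread) compares the scaled integer obtained from a single-precision value against the double-precision one:

```matlab
% single keeps only ~7 significant digits, so scaling a price by 10000
% can land on a different integer than the double-precision value would.
p = 1234.5678;                                 % an illustrative price
fprintf( 'f32: %u\n', uint32( single(p)*10000 ) )
fprintf( 'f64: %u\n', uint32( p*10000 ) )
```

For the small prices in the sample file the rounding usually comes out the same, but for larger prices the %f32 read can be off by whole units after scaling.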

Image Analyst on 12 Apr 2013
What are the classes of each column? Are they all 8-byte (64-bit) doubles? For example, the number of trades might fit in a 4-byte integer, and most of the floating-point numbers could probably be single instead of double. By retrieving the file a line at a time and using sscanf() you can place each value into the smallest type of variable that is appropriate for that number. For example, assuming no stock price is over $655.35, you could read in the number and multiply by 100 so that all stock prices are in cents rather than dollars. That way you can use a 16-bit unsigned integer instead of a 32-bit single.
I don't have the toolboxes, but perhaps the Financial Toolbox or the Fixed Point Designer may have efficient ways of handling numbers like prices of stocks.
Like Matt said, perhaps you don't need all 60 million rows in memory at once - hopefully you can process it in chunks.
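A minimal sketch of this line-at-a-time approach, parsing each line with sscanf into compact integer types (the file name and the 5-column layout are assumed from the question; the time and symbol columns are skipped):

```matlab
fid = fopen( 'mate2u.txt' );
n = 0;
priceCents = zeros( 1e6, 1, 'uint16' );   % prices in whole cents; size as needed
numTrades  = zeros( 1e6, 1, 'uint16' );
tline = fgetl( fid );
while ischar( tline )
    % one line is: mm/dd/yyyy, mm:ss.s (skipped), price, symbol (skipped), trades
    vals = sscanf( tline, '%d/%d/%d,%d:%f,%f,%*[^,],%d' );
    n = n + 1;
    priceCents(n) = uint16( round( vals(6)*100 ) );   % price as whole cents
    numTrades(n)  = uint16( vals(7) );                % number of trades
    tline = fgetl( fid );
end
fclose( fid );
```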
Image Analyst on 13 Apr 2013
For example, maybe someone asks about 2010 prices, so you scan the file line by line, throwing away data if it belongs to any year other than 2010. Only if the year is 2010 do you put it into your array. Other years just go into single variables, which you re-use (overwrite) because you parsed with sscanf. So on a line-by-line basis you will have variables thisPrice, thisDay, thisVolume, thisYear, and only when thisYear = 2010 do you add thisPrice, thisDay, thisVolume to priceArray, dayArray, volumeArray.
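The filtering pass described above might be sketched like this, using the variable names from the comment (the file name is assumed from the question; for real 60-million-row files, preallocate the output arrays instead of growing them):

```matlab
targetYear = 2010;
fid = fopen( 'mate2u.txt' );
dayArray = []; priceArray = []; volumeArray = [];
tline = fgetl( fid );
while ischar( tline )
    % parse one line of mm/dd/yyyy,time,price,symbol,trades
    vals = sscanf( tline, '%d/%d/%d,%d:%f,%f,%*[^,],%d' );
    thisYear   = vals(3);
    thisPrice  = vals(6);
    thisVolume = vals(7);
    if thisYear == targetYear                          % keep only the requested year
        thisDay = datenum( [thisYear, vals(1), vals(2)] );
        dayArray(end+1,1)    = thisDay;                %#ok<*AGROW>
        priceArray(end+1,1)  = thisPrice;
        volumeArray(end+1,1) = thisVolume;
    end
    tline = fgetl( fid );
end
fclose( fid );
```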