Image format conversions for processing
Hello, all. I have an RGB image of class uint8 with dimensions m-by-n-by-3 that I need to convert to double with dimensions m-by-n for processing. After processing I will end up with another double image of size m-by-n. If I need to convert this back to its original color format, how best can I approach this? Should I separate the color channels, convert each channel to grayscale and then to double, and work with that? Then how do I convert that channel back to colored uint8 so that I can later use the cat function on the three channels? Thanks very much.
I was not sure whether to post this as a comment on a related post or as a new question, so I did both and will remove whichever is inappropriate. Thank you, guys.
Accepted Answer
Image Analyst
on 26 May 2021
Alex:
If you want a double, you can use double() on an image. Or im2double().
If you want a gray scale image, you can either call rgb2gray() or take one of the color channels
rgbImage = imread(fullFileName);
[rows, columns, numberOfColorChannels] = size(rgbImage)
if numberOfColorChannels == 3
    grayImage = rgb2gray(rgbImage); % Method 1
    % grayImage = rgbImage(:, :, 1); % Method 2: take the red channel instead
else
    grayImage = rgbImage; % It's already gray scale
end
Once you've thrown away color information, you can't get it back. If you need it, be sure to save your original RGB image or read it back in from disk.
If you want your double image to be back in uint8 format you can pass it in to uint8. If it's not in the same range as uint8 you need to decide how to scale it. You can either divide by the max and multiply by 255
gray8 = uint8(255 * grayImage / max(grayImage(:)));
or you can scale the min to zero and the max to 1 with mat2gray() or rescale():
gray8 = uint8(255 * mat2gray(grayImage)); % Method 1
gray8 = uint8(rescale(grayImage, 0, 255)); % Method 2
You can convert a gray scale image into a gray scale RGB image by concatenating your gray scale image so that it is in every color channel. It will be a 3-D RGB image though it will look gray.
rgbImage2 = cat(3, grayImage, grayImage, grayImage);
You could also use ind2rgb() if you want
rgbImage2 = ind2rgb(grayImage, yourColorMap);
21 Comments
MatlabEnthusiast
on 26 May 2021
Edited: MatlabEnthusiast
on 26 May 2021
I saved the original image, and I rescaled the gray image according to this snippet, for example:
Now how can I add the color information alone back to the gray image, gray8, assuming I am done processing it? I tried using imread with a colormap output, but the colormap turns up empty.
gray8 = uint8(255 * mat2gray(grayImage)); % Method 1
Image Analyst
on 27 May 2021
What is "the color information"? What colors do you want to apply to each gray level? How about jet(256):
cmap = jet(256); % or hot(256) or winter(256) or whatever.
rgbImage = ind2rgb(gray8, cmap);
Or you can get a colormap from your original
[indexedImage, cmap] = rgb2ind(rgbImage, 256); % rgb2ind needs the number of colors
% Note: the indexed image IS NOT a normal gray scale image.
% It will not look like the grayscale image you get from rgb2gray(rgbImage)
% because each pixel value is an index into a custom color map that gets created,
% rather than a lightness/brightness value.
% You can apply the colormap to the indexed image to get an RGB image
% that will be somewhat similar, but not exactly like, the original RGB image.
rgbImage = ind2rgb(indexedImage, cmap);
Note, that colormap you got from rgb2ind() will not work well with a grayscale image you get from rgb2gray().
MatlabEnthusiast
on 27 May 2021
I would prefer to use the original image color map.
[indexedImage, cmap] = rgb2ind(rgbImage, 256);
The approach in the snippet above seems to give a color map. Is it possible to convert the indexed image obtained above to double for processing, then convert the resulting double image back to an indexed image so that I can re-apply the map? If so, how can it be achieved?
Image Analyst
on 27 May 2021
No, it is not possible. The indexed image is not a normal image. You can see by showing it:
imshow(indexedImage, [], 'Colormap', jet(256));
colorbar;
Why do you think you want to, or need to, do that? It does not make sense.
Like I said, if you convert the image to gray scale you'll get a lightness image that looks like you'd expect, and you can process that if you want, but you cannot somehow apply color to it unless you save the colors somehow, or save the original image and just replace your processed image with the original (basically throwing away all your manipulations).
If you want, you can convert to LAB or HSV color space and manipulate the L or V channel only and then apply the original hue and saturation channels. Like
hsvImage = rgb2hsv(rgbImage);
grayImage = hsvImage(:, :, 3); % Extract V image.
% Now process grayImage somehow.
% Now put manipulated gray image back in as the V channel
hsvImage(:,:,3) = grayImage;
% Now convert back to RGB
rgbImage2 = hsv2rgb(hsvImage);
I'm attaching some examples where I did a similar thing with the Hue channel to change the hue of the pixels.
MatlabEnthusiast
on 27 May 2021
Alright, thanks. I am going to look into these options in detail to see what works best for me. I want to observe the change my manipulations make on the original image, and how much color affects the image. It will not be just one manipulation. For example, if I rescale each image pixel and then re-apply the color, what changes does that imply? Or another instance: if I use the MATLAB image writer to compress some images to some formats, how much are they changed? Can I re-obtain an image that, even if not exactly the same as the original, looks slightly similar? That's what I want :). I sure do hope it makes a little sense to you too now xd.
MatlabEnthusiast
on 27 May 2021
Also, I have seen several MATLAB applications that manipulate image size, but for some reason they all do it in double or grayscale and do not bother to re-apply the color. I don't know why. So why bother doing that to the image if it does not matter what it looks like in the end? Say, some compression, encoding and decoding algorithms.
Image Analyst
on 27 May 2021
alex, I'm still not sure what you want to do. You can surely rescale the image in intensity using rescale(). You can rescale the size using imresize(). Neither of those will change the hue or saturation, though of course rescaling intensity will make it brighter or darker so technically the color is changed.
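To make the distinction concrete, here is a minimal sketch (the variable names are just placeholders for your own images):

```matlab
% Change the intensity range without changing the image size:
stretched = rescale(grayImage, 0, 255);   % remaps [min, max] to [0, 255]
% Change the image size without remapping intensities:
half = imresize(rgbImage, 0.5);           % half the rows and columns
```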
I don't know what those other programs did. Maybe they didn't need the color so they just threw it away (ignored it by converting to black and white). There may be a reason why they converted from integer to floating point. Some functions, like std() and conv2(), require it.
How "similar" the image will look to the original after you process it totally depends on what you did to it of course. It could look identical or totally unrecognizable -- it just depends on what you do (which I have no idea of because you have not told us).
MatlabEnthusiast
on 27 May 2021
Oh yeah, you are right; this makes a lot of sense. What I am going to "try" to do is check the image for similarities between pixels (using any of the various measure functions, e.g. least squares; I will look for other alternatives). Then render very similar pixels (according to a threshold value I choose) almost unnecessary and take them off the image. Then I will try to add the color back to the remainder of the image, and see how well it looks and how much of a difference I made. Which approach is best for this sort of application, in your professional opinion?
Image Analyst
on 28 May 2021
I don't understand. OK, check the image - I get that. But what does "check the similarities" mean? What are the similarities? Do you have a second image, like the processed one? Okay, maybe you are going to check the original image and the processed one for similarities or differences, but what metrics are you going to use? There are tons of things that could be measured in each image (mean, stddev, entropy, etc.) but which do you want to compare? And there are also comparative metrics like psnr() and immse() and ssim(). Those could also be used to assess similarity.
Then you say "Then render very similar ... some pixels almost unnecessary". I have no idea what that means. What does "render" mean to you there? And what defines the pixels that are "almost unnecessary"? Do you have some algorithm that classifies pixels into necessary and unnecessary pixels?
Then you say "take them off the image". Again, what does that mean? You can't take them off the image unless they are on the outer edge and are a rectangular region, so in essence you'd be cropping the image. But you can't remove pixels interior to the image because an image must be rectangular - it can't have "holes" in it or have ragged edges. You can change them, like make them black or white or grayscale or something, but they have to still be in the image.
Then you say you'll add color to "the remainder of the image". What remainder? Like I said, unless you cropped off rectangular blocks on the side, there will be no remainder. And how is the "remainder of the image" not still color? Did you cast that part to gray scale? If so, how are you going to "add color"? If you want the original color you'll just have to replace the gray scale image with the original color image (thus undoing all of your prior processing).
So, I'm now more in the dark than ever.
MatlabEnthusiast
on 28 May 2021
OK, metrics to be used: yes. psnr() is one of them, and after looking into ssim() I will use that too. And yes, I have a processed image and the original image; those are the ones I want to compare. In fact, long story short: after using those metrics on the processed image, what I am wondering is whether there is a way to use the color information of the original image to color the processed image, which is of double type after processing. It is M-by-N, but the original is an M-by-N-by-3 true color image. If this information is enough for you to offer a solution, then let's ignore the information that put you in the dark more than ever. But yes, I will try to measure things like entropy between different parts of the image.
Image Analyst
on 28 May 2021
Like I showed you already, if you want, you can apply the hue and saturation to your intensity/grayscale image. If that's not what you want to do then say why not and what you'd like to do instead. Obviously something is going to be changed, and the color consists of three components (R, G, and B or H, S and V), so what components do you want to be the same and what do you want to allow to be changed?
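As a minimal sketch of that idea (assuming processedImage is your M-by-N double result already scaled into [0, 1]): keep the original hue and saturation channels and swap your processed result in as the V channel.

```matlab
hsvImage = rgb2hsv(rgbImage);        % original M-by-N-by-3 truecolor image
hsvImage(:, :, 3) = processedImage;  % replace V with the processed double image
recolored = hsv2rgb(hsvImage);       % back to RGB; hue and saturation unchanged
recolored8 = im2uint8(recolored);    % cast to uint8 if you need the original class
```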
MatlabEnthusiast
on 28 May 2021
Thank you very much. Let me do that now; probably after some hours I will update you with the current situation.
MatlabEnthusiast
on 29 May 2021
Hello @Image Analyst, thank you very much. Since all comments were helpful, I will just accept the original answer, if that's acceptable. However, if you do not mind, I would like to know what exactly L, a and b actually mean in the Lab approach, as well as h, s and v in the HSV approach, and the roles they play in the image.
Thank you very much.
Image Analyst
on 29 May 2021
L and V are basically intensity - like as if you'd converted the color image to grayscale. The LAB and HSV color spaces are like if you took all the possible colors (all possible combinations of RGB) and plotted them in a 3-D coordinate system, like you'd see with the colorcloud() function. Now L or V is the Z axis and is the darkness-to-lightness value. The hues (colors of the rainbow) go around the clock (angle), while the saturation or chroma goes in and out from the central intensity axis. Saturation or chroma is like how "pure" the color is. Closer to the L (V) axis is a more pastel/neutral color, while far away from the axis is a more vivid/pure color. So for the red hue it would go gray -> pink -> red, and for blue: gray -> sky blue -> royal blue.
HSV and LAB are sort of the same, except that LAB is a Cartesian coordinate system (x, y, z) while HSV is a cylindrical coordinate system (angle = hue, radius = saturation, Z = value). Experts will find plenty to nitpick in my explanations, but that's basically it, and I don't want to go into excruciatingly correct details for a beginner who's just trying to learn this stuff for the first time. It can be very confusing even for experts.
Just google images for LAB color space or HSV color space and you'll see some nice renderings.
See my attached demos to create some renderings.
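To see what the H, S and V numbers mean, you can also convert a couple of hand-made single-pixel "images" (the values below are just illustrative):

```matlab
% A pure red pixel: hue ~0 (red), saturation 1 (vivid), value 1 (bright).
redHSV = rgb2hsv(reshape([1 0 0], 1, 1, 3))
% A pastel pink: same hue as red, lower saturation, same value.
pinkHSV = rgb2hsv(reshape([1 0.6 0.6], 1, 1, 3))
```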
MatlabEnthusiast
on 29 May 2021
Edited: MatlabEnthusiast
on 29 May 2021
First time, very confusing indeed. After a few readings it began to make some more sense, although I have not yet grasped the whole concept. I am going to have to google first, as suggested. Now, using a simple analogy: assuming a true color image held about 10 MB on a disk, I would expect the L (from LAB) and the V (from HSV) to take up most of this space? Roughly (preferably as a percentage), how much would the (A, B) and (S, V) take of this? Is this composition constant, or does it vary in the following cases:
- assuming the image is simply rescaled, using imresize for example?
- if the image is compressed but the scale is not tampered with, say using WinZip or something, how are these values affected? Just a simple explanation like the one above will do. For instance, in the samples you attached, to change color you only played with the H value. I am looking into how that works (the mapping of values to colors in these spaces), as in RGB (where the scale runs from 0-255).
And yeah, after that I think I have no more questions concerning this topic for the time being (until something more confusing comes up :) ). Again, thanks for your time.
Image Analyst
on 29 May 2021
alex, images are generally stored as RGB, not in another color space. But if you did store the double values for LAB or HSV, each color channel would take up the same space: basically the number of rows times the number of columns times the number of bytes per double (8). L or V does not take up any more space than the others.
MatlabEnthusiast
on 29 May 2021
Edited: MatlabEnthusiast
on 29 May 2021
Does that mean that whatever operations are conducted on L or V basically have no effect whatsoever on the storage space the image needs when converted back to RGB?
Also, what if operations were conducted on all channels, say dividing H, S and V by a constant? What would that imply, versus dividing only V by the same constant, since V is the "intensity" of the image?
Walter Roberson
on 29 May 2021
Multiplying or dividing any of H, S, or V by a constant only affects the storage requirement in the sense that you might need to switch between an integer representation and a floating point representation. It does not change the "information" (in the sense that "information" is defined in mathematics), but it might change the representation.
Just like you might have the integers 0, 1, 2, 3 . In theory you could store those as 2 bits each, 00, 01, 10, 11, but most of the time that will not be convenient (because most chips these days do not have ways of addressing individual bits.) You would be more likely to store them as 8 bit integers, 00000000 00000001 00000010 00000011 . If you know that each entry can only be one of the four numbers, you have at most two bits of information stored in the 8 bit representation. And if you wrote 0.0 1.0 2.0 3.0 that would imply you were storing in a floating point representation, which takes a minimum of 32 bits (single precision) even though the "information" content is still only two bits.
Likewise if you take the integers 0 to 255 and divide by 255, then you need a floating point representation. The information content would be at most 8 bits, but the representation would need at least 32 bits (single precision).
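You can see this representation cost directly in MATLAB with whos (a quick sketch):

```matlab
x8 = uint8(0:255);       % 256 values at 1 byte each -> 256 bytes
xd = double(x8) / 255;   % same information, but 8 bytes each -> 2048 bytes
whos x8 xd               % compare the Bytes column
```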
Image Analyst
on 30 May 2021
To build on what Walter says, if you're working on compression schemes you would store in a special format, not full 8-bit integer. So sometimes compression schemes (which I'm not an expert on) will compress different color channels by different amounts and then store them with different amounts of space allocated to the different color channels. "422" seems to be a common scheme. See the tutorial https://www.elotek.com/wp-content/uploads/An-Introduction-to-Video-Compression-final.pdf
Walter Roberson
on 30 May 2021
Note that 420 and 422 color compression schemes are "lossy" compression schemes. They throw out information that matters less to the human eye.
MatlabEnthusiast
on 30 May 2021
Thank you guys very much. This has made most of the basics much clearer now.