


What does an unprocessed RAW file look like?


I know people use fancy software like Lightroom or Darktable to post-process their RAW files. But what if I don't? What does the file look like, just, y'know, RAW?










  • 17

    The matrix. It looks like the matrix.

    – Hueco
    Feb 21 at 3:52

  • 2

    Related: RAW files store 3 colors per pixel, or only one? and Why are Red, Green, and Blue the primary colors of light?, which explains how digital camera sensors mimic the way our eyes/brains perceive color that, in a sense, does not actually exist the way we often assume it does.

    – Michael C
    Feb 21 at 5:48

  • 3

    @Hueco, a Bayer matrix, perhaps.

    – Mark
    Feb 21 at 21:28

  • I've moved the discussion about how best to handle this as a canonical question to chat. Let's please continue it there so that we don't have noise in the comments, whatever the decision ends up being.

    – AJ Henderson
    Feb 22 at 1:39


















raw image-processing file-format






edited Feb 23 at 4:09

asked Feb 21 at 2:49 by mattdm










4 Answers


















163














There's a tool called dcraw which reads various RAW file types and extracts pixel data from them — it's actually the original code at the very bottom of a lot of open source and even commercial RAW conversion software.



I have a RAW file from my camera, and I've used dcraw in a mode which tells it to create an image using literal, unscaled 16-bit values from the file. I converted that to an 8-bit JPEG for sharing, using perceptual gamma (and scaled down for upload). That looks like this:



dcraw -E -4



Obviously the result is very dark, although if you click to expand, and if your monitor is decent, you can see some hint of something.



Here is the out-of-camera color JPEG rendered from that same RAW file:



out-of-camera JPEG



(Photo credit: my daughter using my camera, by the way.)



Not totally dark after all. The details of where exactly all the data is hiding are best covered by an in-depth question, but in short, we need a curve which expands it over the range of darks and lights available in an 8-bit JPEG on a typical screen.
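As a rough sketch of what such a curve does — assuming 16-bit linear samples and a simple 1/2.2 power-law gamma; real converters use more elaborate, per-camera tone curves:

```python
# Raw sensor samples are linear in light, but 8-bit JPEGs on typical
# screens expect gamma-encoded values.  This toy curve maps a 16-bit
# linear sample to an 8-bit display value; the 1/2.2 exponent is a
# simplification of the usual sRGB-style encoding.

def linear16_to_display8(value, gamma=2.2):
    """Map a linear 16-bit sample (0..65535) to an 8-bit display value."""
    normalized = value / 65535.0            # linear light, 0.0 .. 1.0
    encoded = normalized ** (1.0 / gamma)   # perceptual gamma encoding
    return round(encoded * 255)

# An 18% "middle grey" is linearly quite dark, but the curve pushes it
# up toward the middle of the 8-bit range -- which is why an unscaled
# render looks almost black even though the scene wasn't.
print(linear16_to_display8(0))                  # 0
print(linear16_to_display8(int(0.18 * 65535)))  # 117 -- near mid-range
print(linear16_to_display8(65535))              # 255
```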



Fortunately, the dcraw program has another mode which converts to a more "useful" but still barely-processed image. This adjusts the level of the darkest black and brightest white and rescales the data appropriately. It can also set white balance automatically or from the camera setting recorded in the RAW file, but in this case I've told it not to, since we want to examine the least processing possible.



There's still a one-to-one correspondence between photosites on the sensor and pixels in the output (although again I've scaled this down for upload). That looks like this:



dcraw -d -r 1 1 1 1



Now, this is obviously more recognizable as an image — but if we zoom in on this (here, so each pixel is actually magnified 10×), we see that it's all... dotty:



10× zoom and crop



That's because the sensor is covered by a color filter array — tiny little colored filters the size of each photosite. Because my camera is a Fujifilm camera, this uses a pattern Fujifilm calls "X-Trans", which looks like this:



10× xtrans



There are some details about the particular pattern that are kind of interesting, but overall it's not super-important. Most cameras today use something called a Bayer pattern (which has less green and repeats every 2×2 rather than 6×6). Why more green? The human eye is more sensitive to light in that range, and so using more of the pixels for that allows more detail with less noise.
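For illustration, a Bayer layout can be modeled as a repeating 2×2 tile. The RGGB phase below is one common variant (the exact corner assignment varies by camera, which is an assumption here), but the color proportions are always the same:

```python
from collections import Counter

# Model a Bayer color filter array as a repeating 2x2 tile.  The RGGB
# phase used here is one common variant; actual cameras differ in which
# corner holds which color, but the proportions do not change.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def bayer_color(row, col):
    """Filter color covering the photosite at (row, col)."""
    return BAYER_TILE[row % 2][col % 2]

# Tally filter colors over a 6x6 patch: half green, a quarter each of
# red and blue -- hence the green-heavy look of a colorized raw mosaic.
counts = Counter(bayer_color(r, c) for r in range(6) for c in range(6))
print(counts["G"], counts["R"], counts["B"])  # 18 9 9
```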



So, anyway, here's a 1:1 (one pixel in the image is one pixel on the screen) section of the out-of-camera JPEG:



1:1 view crop of out-of-camera image



... and here's the same area from the quick-grayscale conversion above. You can see the stippling from the X-trans pattern:



1:1 crop of the dcraw -d -r 1 1 1 1 version



We can actually take that and colorize the pixels so those corresponding to green in the array are mapped to levels of green instead of gray, red to red, and blue to blue. That gives us:



1:1 with xtrans colorization



... or, for the full image:



full image from dcraw -d -r 1 1 1 1 with xtrans colorization



The green cast is very apparent, which is no surprise because there are 2½× more green pixels than red or blue. Each 3×3 block has two red pixels, two blue pixels, and five green pixels. To counteract this, I made a very simple scaling program which turns each of those 3×3 blocks into a single pixel. In that pixel, the green channel is the average of the five green pixels, and the red and blue channels are the average of the corresponding two red and blue pixels. That gives us:



xtrans colorized, naïve block demosaicking



... which actually isn't half bad. The white balance is off, but since I intentionally decided to not adjust for that, this is no surprise. Hitting "auto white-balance" in an imaging program compensates for that (as would have letting dcraw set that in the first place):



xtrans colorized, naïve block demosaicking + auto-levels



Detail isn't great compared to the more sophisticated algorithms used in cameras and RAW processing programs, but clearly the basics are there. Better approaches average values around each pixel rather than going by big blocks, and since color usually changes gradually in photographs, this works pretty well. They also have clever tricks to reduce edge artifacts, noise, and other problems. This process is called "demosaicing", because the pattern of colored filters looks like a tile mosaic.
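The naïve block averaging described above can be sketched in a few lines. The tiny mosaic and the 3×3 filter layout below are invented for illustration; they only reproduce the 5-green/2-red/2-blue count of an X-Trans block, not the real X-Trans arrangement:

```python
# Naive block demosaicking: collapse each 3x3 tile of single-color
# samples into one RGB pixel by averaging the samples under each
# filter color.

def block_average(mosaic, cfa):
    """mosaic: 2D list of samples; cfa: same-shape grid of 'R'/'G'/'B'.
    Returns a 2D list (one third the size each way) of (r, g, b) tuples."""
    out = []
    for br in range(0, len(mosaic), 3):
        row_out = []
        for bc in range(0, len(mosaic[0]), 3):
            sums = {"R": 0.0, "G": 0.0, "B": 0.0}
            counts = {"R": 0, "G": 0, "B": 0}
            for r in range(br, br + 3):
                for c in range(bc, bc + 3):
                    sums[cfa[r][c]] += mosaic[r][c]
                    counts[cfa[r][c]] += 1
            row_out.append(tuple(sums[k] / counts[k] for k in "RGB"))
        out.append(row_out)
    return out

# One made-up 3x3 block with the X-Trans count: 5 green, 2 red, 2 blue.
cfa = [["G", "R", "G"],
       ["B", "G", "B"],
       ["G", "R", "G"]]
mosaic = [[10, 200, 20],
          [50, 30, 70],
          [40, 220, 50]]
print(block_average(mosaic, cfa))  # [[(210.0, 30.0, 60.0)]]
```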



I suppose this view (where I didn't really make any decisions, and the program didn't do anything automatically smart) could have been defined as the "standard default appearance" of a RAW file, thus ending many internet arguments. But there is no such standard — no rule says that this particular "naïve" interpretation is special.



And, this isn't the only possible starting point. All real-world RAW processing programs have their own ideas of a basic default state to apply to a fresh RAW file on load. They've got to do something (otherwise we'd have that dark, useless thing at the top of this post), and usually they do something smarter than my simple manual conversion, which makes sense, because that gets you better results anyway.








  • 4





    Beautiful picture. And great answer.

    – Marc.2377
    yesterday



















8














It's a really really big grid of numbers. Everything else is processing.






– WolfgangGroiss





















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – AJ Henderson
    2 days ago



















5














I know it's already been answered quite well by mattdm, but I just thought you might find this article interesting.



In case the link goes down, here is a summary:



The human eye is most sensitive to colors in the green wavelength region (which coincides with the fact that our sun emits most intensely in the green region).



The camera's sensor (a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) chip) is sensitive only to light intensity, not to color.



Optical filters are used to filter out different wavelengths of light. For example, a green pass filter will only let green light through.



Optical filters used in digital cameras are the size of the individual pixel sensors, and are arranged in a grid to match the sensor array. Red, green and blue (sort of like our cone cells) filters are used. However, because our eyes are more sensitive to green, the Bayer array filter has 2 green pixel filters for each red and blue pixel.
The Bayer array has green filters forming a checkerboard like pattern, while red and blue filters occupy alternating rows.



Getting back to your original question: what does an unprocessed RAW file look like?



It looks like a black-and-white checkered lattice of the original image.



The fancy software for post-processing the RAW files first applies the Bayer pattern, assigning each pixel the color of its filter. It looks more like the actual image after this, with color in the correct intensity and locations. However, there are still artifacts of the RGB grid from the Bayer filter, because each pixel is only one color.



There are a variety of methods for smoothing out the color coded RAW file. Smoothing out the pixels is similar to blurring though, so too much smoothing can be a bad thing.



Some of the demosaicing methods are briefly described here:



Nearest Neighbor: the value of a pixel (a single color) is copied to its differently-colored neighbors and the colors are combined. No "new" colors are created in this process, only colors that were originally perceived by the camera sensor.



Linear Interpolation: for example, averages the two adjacent blue values and applies the average blue value to the green pixel in between the adjacent blue pixels. This can blur sharp edges.



Quadratic and Cubic Interpolation: similar to linear interpolation, these are higher-order approximations of the in-between color. They use more data points to generate better fits: linear looks at only two, quadratic at three, and cubic at four to generate an in-between color.



Catmull-Rom Splines: similar to cubic, but takes into consideration the gradient of each point to generate the in-between color.



Half Cosine: used as an example of an interpolation method, it creates half cosines between each pair of like-colors and has a smooth inflected curve between them. However, as noted in the article, it does not offer any advantage for Bayer arrays due to the arrangement of the colors. It is equivalent to linear interpolation but at higher computational cost.
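As a sketch of the simplest of these methods, here is linear interpolation along one sensor row. The positions and values are invented for illustration:

```python
# Linear interpolation of a missing color channel along one sensor row:
# a pixel with no blue sample of its own gets the mean of the two
# nearest known blue samples.

def interpolate_missing(samples):
    """samples: list of known values, with None where this channel is
    missing.  Fill each interior None with the mean of its immediate
    neighbors (assumes no two missing samples are adjacent, as in the
    alternating samples of a Bayer row)."""
    filled = list(samples)
    for i, v in enumerate(samples):
        if v is None and 0 < i < len(samples) - 1:
            filled[i] = (samples[i - 1] + samples[i + 1]) / 2
    return filled

# Blue samples known at even positions, missing under the green filters.
print(interpolate_missing([100, None, 120, None, 80]))
# [100, 110.0, 120, 100.0, 80]
```

Note how the interpolated values smooth over any sharp transition between the known samples — exactly the edge-blurring drawback mentioned above.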



Higher end post-processing software has better demosaicing methods and clever algorithms. For example, they can identify sharp edges or high contrast changes and preserve their sharpness when combining the color channels.






– jreese




























    -5














    I think a lot of people imagine that raw files are simply an array of pixel values straight out of the camera sensor. There are cases where this is really true, and you have to supply some information about the sensor in order to let the software interpret the image. But many consumer cameras produce "raw files" that actually more or less conform to the TIFF file specification (in some cases, the colours may be off). You can try this by simply changing the file extension to ".tif" and seeing what happens when opening the file. I think some of you will see a good picture, but not everyone, because different camera brands solve this differently.



    A TIFF file instead of a "real raw file" is a good solution. A TIFF file can have 16 bits per colour channel, which is enough for all the cameras I know of.
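One way to check this claim on your own files is to look for the TIFF byte-order header that TIFF-based containers start with. This is a sketch; the filename in the usage comment is a placeholder for one of your own raw files:

```python
# Check whether a file starts with a TIFF header.  Several raw formats
# (e.g. Canon CR2, Nikon NEF, Adobe DNG) reuse the TIFF container, so
# their first four bytes are a TIFF byte-order mark.  Passing this
# check does not make the file a fully conforming TIFF, as the
# comments below discuss.

def looks_like_tiff(path):
    """True if the file begins with a little- or big-endian TIFF header."""
    with open(path, "rb") as f:
        header = f.read(4)
    # Little-endian TIFF: b"II*\x00"; big-endian TIFF: b"MM\x00*".
    return header in (b"II*\x00", b"MM\x00*")

# Point this at one of your own raw files, e.g.:
# print(looks_like_tiff("IMG_0001.CR2"))
```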



    Edit: I wonder why this answer got downvoted. The answer is essentially correct (with the reservation that camera manufacturers don't have to use TIFF structures, but many of them do).



    As for the expectation of an array of pixels straight out of the sensor: it is not ridiculous to expect something like that, because that is how a lot of sensors outside the consumer camera market work. In those cases, you have to provide a separate file that describes the sensor.



    By the way, the word "RAW" is used because it is supposed to mean that we get the unprocessed sensor data. But it's reasonable that camera manufacturers use a structured format instead of truly raw files; this way the photographer doesn't have to know the exact sensor details.






    – Ulf Tennfors
















    • 2





      I have had file recovery applications that "recovered" .cr2 files as TIFFs. Those files would not open using any application that can work with TIFFs. Changing the file extensions back to .cr2 made them perfectly usable .cr2 files.

      – Michael C
      Feb 21 at 18:50






    • 4





      That's not to say that RAW files aren't often actually using TIFF format containers — that's absolutely correct. It's just that the thing you're seeing probably isn't the "RAW" data in the sense I'm looking for.

      – mattdm
      Feb 21 at 19:26






    • 2





      OK, to clarify: the file does use structures from the TIFF file format. But since it doesn't do exactly as the TIFF specification says, it is not a strict TIFF file. The point is that a TIFF library could be used to read the file; one doesn't have to build everything from scratch in order to read it.

      – Ulf Tennfors
      Feb 21 at 19:39






    • 2





      One kind of needs to do something from scratch in order to do something useful with the file, though. Otherwise you get the almost-all-dark splotchy grayscale image I led my answer with.

      – mattdm
      Feb 21 at 20:08






    • 4





      No one with any sense would expect a RAW file to be data straight off the sensor without any metadata of any kind like time, camera information, etc. The fact that TIFF-like formats are useful for structuring the data isn't really important, nor does it decrease the conceptual principle that that data is "straight" off the sensor without post-processing.

      – whatsisname
      Feb 21 at 22:52














    I have a RAW file from my camera, and I've used dcraw in a mode which tells it to create an image using literal, unscaled 16-bit values from the file. I converted that to an 8-bit JPEG for sharing, using perceptual gamma (and scaled down for upload). That looks like this:



    dcraw -E -4



    Obviously the result is very dark, although if you click to expand, and if your monitor is decent, you can see some hint of something.



    Here is the out-of-camera color JPEG rendered from that same RAW file:



    out-of-camera JPEG



    (Photo credit: my daughter using my camera, by the way.)



    Not totally dark after all. The details of where exactly all the data is hiding are best covered by an in-depth question, but in short, we need a curve which expands the data over the range of darks and lights available in an 8-bit JPEG on a typical screen.
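    In code terms, that curve is essentially gamma encoding. Here's a minimal sketch (plain Python, with a simple power curve standing in for a real tone curve) of why the linear data looks so dark until it's remapped:

    ```python
    def encode_srgb_like(raw16, gamma=2.2):
        """Map linear 16-bit sensor values to 8-bit display values.

        A simple power curve; real sRGB adds a linear toe segment, but
        this shows the point: a linear value like 1000/65535 (~1.5%)
        is nearly black if displayed linearly, yet becomes a visible
        dark gray once perceptual gamma is applied.
        """
        out = []
        for v in raw16:
            linear = v / 65535.0               # normalize to 0..1
            encoded = linear ** (1.0 / gamma)  # perceptual gamma
            out.append(round(encoded * 255))   # quantize to 8 bits
        return out

    # ~1.5% linear maps to about 38/255 after the curve, whereas a
    # straight linear scaling would give round(0.0153 * 255) = 4.
    print(encode_srgb_like([0, 1000, 65535]))  # [0, 38, 255]
    ```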



    Fortunately, the dcraw program has another mode which converts to a more "useful" but still barely-processed image. This adjusts the level of the darkest black and brightest white and rescales the data appropriately. It can also set white balance automatically or from the camera setting recorded in the RAW file, but in this case I've told it not to, since we want to examine the least processing possible.



    There's still a one-to-one correspondence between photosites on the sensor and pixels in the output (although again I've scaled this down for upload). That looks like this:



    dcraw -d -r 1 1 1 1



    Now, this is obviously more recognizable as an image — but if we zoom in on this (here, so each pixel is actually magnified 10×), we see that it's all... dotty:



    10× zoom and crop



    That's because the sensor is covered by a color filter array — tiny little colored filters the size of each photosite. Because my camera is a Fujifilm camera, this uses a pattern Fujifilm calls "X-Trans", which looks like this:



    10× xtrans



    There are some details about the particular pattern that are kind of interesting, but overall it's not super-important. Most cameras today use something called a Bayer pattern (which has less green and repeats every 2×2 rather than 6×6). Why more green? The human eye is more sensitive to light in that range, and so using more of the pixels for that allows more detail with less noise.
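    As a sketch, the repeating Bayer tile can be expressed as a tiny lookup. This assumes a generic RGGB layout; real cameras vary in which corner of the 2×2 tile each color occupies:

    ```python
    # One 2x2 tile of a generic RGGB Bayer pattern; the full sensor
    # mask just repeats this tile, so each photosite's color depends
    # only on (row % 2, col % 2).
    BAYER_TILE = [["R", "G"],
                  ["G", "B"]]

    def bayer_color(row, col):
        """Color of the filter over a given photosite (RGGB variant)."""
        return BAYER_TILE[row % 2][col % 2]

    # Count colors over one repeat to see the two-greens ratio:
    counts = {"R": 0, "G": 0, "B": 0}
    for r in range(2):
        for c in range(2):
            counts[bayer_color(r, c)] += 1
    print(counts)  # {'R': 1, 'G': 2, 'B': 1}
    ```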



    So, anyway, here's a 1:1 (one pixel in the image is one pixel on the screen) section of the out-of-camera JPEG:



    1:1 view crop of out-of-camera image



    ... and here's the same area from the quick-grayscale conversion above. You can see the stippling from the X-trans pattern:



    1:1 crop of the dcraw -d -r 1 1 1 1 version



    We can actually take that and colorize the pixels so those corresponding to green in the array are mapped to levels of green instead of gray, red to red, and blue to blue. That gives us:
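    That colorization step can be sketched in a few lines: each gray value is routed into the one RGB channel its filter corresponds to, with the other two channels left at zero. The tiny mask in the demo is hypothetical; the real one would come from the camera's CFA layout:

    ```python
    def colorize(gray, mask):
        """Turn a grayscale CFA image into RGB by routing each pixel's
        value into the channel of its color filter.

        gray: 2-D list of 0..255 values; mask: 2-D list of "R"/"G"/"B"
        (whatever CFA layout the camera uses -- X-Trans, Bayer, etc.).
        """
        channel = {"R": 0, "G": 1, "B": 2}
        rgb = []
        for grow, mrow in zip(gray, mask):
            out_row = []
            for value, color in zip(grow, mrow):
                pixel = [0, 0, 0]               # start black
                pixel[channel[color]] = value   # light up one channel
                out_row.append(tuple(pixel))
            rgb.append(out_row)
        return rgb

    print(colorize([[200, 100]], [["G", "R"]]))
    # [[(0, 200, 0), (100, 0, 0)]]
    ```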



    1:1 with xtrans colorization



    ... or, for the full image:



    full image from dcraw -d -r 1 1 1 1 with xtrans colorization



    The green cast is very apparent, which is no surprise: there are 2½ times as many green pixels as red or blue. Each 3×3 block has two red pixels, two blue pixels, and five green pixels. To counteract this, I made a very simple scaling program which turns each of those 3×3 blocks into a single pixel. In that pixel, the green channel is the average of the five green pixels, and the red and blue channels are the averages of the corresponding two red and two blue pixels. That gives us:
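    That block-averaging can be sketched like this (plain Python; the mask shown is an X-Trans-like arrangement made up for illustration, with five green, two red, and two blue filters per 3×3 block):

    ```python
    def block_demosaic_3x3(gray, mask):
        """Naive demosaic: collapse each 3x3 CFA block into one RGB
        pixel by averaging the values behind each color of filter."""
        h, w = len(gray), len(gray[0])
        out = []
        for by in range(0, h - 2, 3):
            row = []
            for bx in range(0, w - 2, 3):
                sums = {"R": 0.0, "G": 0.0, "B": 0.0}
                counts = {"R": 0, "G": 0, "B": 0}
                for dy in range(3):
                    for dx in range(3):
                        c = mask[by + dy][bx + dx]
                        sums[c] += gray[by + dy][bx + dx]
                        counts[c] += 1
                row.append(tuple(round(sums[c] / counts[c])
                                 for c in ("R", "G", "B")))
            out.append(row)
        return out

    # A made-up 3x3 block with an X-Trans-like layout (5 G, 2 R, 2 B):
    mask = [["G", "B", "G"],
            ["R", "G", "R"],
            ["G", "B", "G"]]
    gray = [[10, 30, 10],
            [50, 10, 50],
            [10, 30, 10]]
    print(block_demosaic_3x3(gray, mask))  # [[(50, 10, 30)]]
    ```

    The whole 3×3 block becomes one pixel, so the output is a third of the size in each dimension; the trade is resolution for a quick, honest average of each channel.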



    xtrans colorized, naïve block demosaicking



    ... which actually isn't half bad. The white balance is off, but since I intentionally decided not to adjust for it, this is no surprise. Hitting "auto white-balance" in an imaging program compensates for that (as would letting dcraw set it in the first place):



    xtrans colorized, naïve block demosaicking + auto-levels



    Detail isn't great compared to the more sophisticated algorithms used in cameras and RAW processing programs, but clearly the basics are there. Better approaches average the values around each pixel rather than working on big blocks, and since color usually changes gradually in photographs, this works pretty well. They also use clever tricks to reduce edge artifacts, noise, and other problems. This process is called "demosaicing", because the pattern of colored filters looks like a tile mosaic.



    I suppose this view (where I didn't really make any decisions, and the program didn't do anything automatically smart) could have been defined as the "standard default appearance" of a RAW file, thus ending many internet arguments. But there is no such standard, and no rule that this particular "naïve" interpretation is special.



    And this isn't the only possible starting point. All real-world RAW processing programs have their own ideas of a basic default state to apply to a fresh RAW file on load. They've got to do something (otherwise we'd have that dark, useless thing at the top of this post), and they usually do something smarter than my simple manual conversion, which makes sense, because it gets better results.








    edited Feb 22 at 3:32

























    answered Feb 21 at 2:49









    mattdm








      Beautiful picture. And great answer.
      – Marc.2377, yesterday














    It's a really, really big grid of numbers. Everything else is processing.






      Comments are not for extended discussion; this conversation has been moved to chat.
      – AJ Henderson, 2 days ago
















    answered Feb 21 at 8:20









    WolfgangGroiss




    I know it's already been answered quite well by mattdm, but I just thought you might find this article interesting.



    In case the link goes down, here is a summary:



    The human eye is most sensitive to colors in the green wavelength region (coinciding with the fact that our sun emits most intensely in the green region).



    The camera's sensor (a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor) is sensitive only to light intensity, not to color.



    Optical filters are used to filter out different wavelengths of light. For example, a green pass filter will only let green light through.



    Optical filters used in digital cameras are the size of the individual pixel sensors, and are arranged in a grid to match the sensor array. Red, green and blue filters (sort of like our cone cells) are used. However, because our eyes are more sensitive to green, the Bayer array filter has 2 green pixel filters for each red and blue pixel.
    The Bayer array has green filters forming a checkerboard-like pattern, while red and blue filters occupy alternating rows.



    Getting back to your original question: what does an unprocessed RAW file look like?



    It looks like a black-and-white checkered lattice of the original image.



    The software for post-processing the RAW files first applies the Bayer pattern: it assigns each pixel the color of its filter. It looks more like the actual image after this, with color in the correct intensity and locations. However, there are still artifacts of the RGB grid from the Bayer filter, because each pixel carries only one color.



    There are a variety of methods for smoothing out the color coded RAW file. Smoothing out the pixels is similar to blurring though, so too much smoothing can be a bad thing.



    Some of the demosaicing methods are briefly described here:



    Nearest Neighbor: The value of a pixel (single color) is applied to its other colored neighbors and the colors are combined. No "new" colors are created in this process, only colors that were originally perceived by the camera sensor.



    Linear Interpolation: averages, for example, the two adjacent blue values and applies that average as the blue value of the green pixel in between them. This can blur sharp edges.



    Quadratic and Cubic Interpolation: similar to linear interpolation, but higher-order approximations of the in-between color. They use more data points to generate better fits: linear looks at two, quadratic at three, and cubic at four points to generate an in-between color.



    Catmull-Rom Splines: similar to cubic, but takes into consideration the gradient of each point to generate the in-between color.



    Half Cosine: used as an example of an interpolation method, it creates half cosines between each pair of like-colors and has a smooth inflected curve between them. However, as noted in the article, it does not offer any advantage for Bayer arrays due to the arrangement of the colors. It is equivalent to linear interpolation but at higher computational cost.
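    The linear-interpolation idea above can be sketched in one dimension: known samples of one color alternate with gaps, and each gap is filled with the average of its two neighbors (real demosaicing does this in 2-D, per channel, which is where the edge blurring comes from):

    ```python
    def interpolate_missing(row):
        """1-D sketch of linear interpolation across a CFA row.

        `row` alternates known samples of one color with None where a
        different filter sat; each missing value becomes the average
        of its two neighbors. (Assumes known values at both ends.)
        """
        out = list(row)
        for i, v in enumerate(row):
            if v is None:
                out[i] = (row[i - 1] + row[i + 1]) / 2  # neighbor mean
        return out

    print(interpolate_missing([100, None, 140, None, 160]))
    # [100, 120.0, 140, 150.0, 160]
    ```

    A sharp edge (say 100 next to 250) gets an invented midpoint of 175, which is exactly the blurring the article warns about.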



    Higher end post-processing software has better demosaicing methods and clever algorithms. For example, they can identify sharp edges or high contrast changes and preserve their sharpness when combining the color channels.






        edited Feb 22 at 13:51





















        answered Feb 21 at 18:19









        jreese


















            I think a lot of people imagine that raw files are simply an array of pixel values straight off the camera sensor. There are cases where this really is the case, and then you have to supply some information about the sensor so the software can interpret the image. But most consumer cameras produce "raw files" that actually more or less conform to the TIFF file specification (in some cases, the colours may be off). One can try simply changing the file extension to ".tif" and seeing what happens when opening the file. I think some of you will see a good picture, but not everyone, because different camera brands solve this differently.



            A TIFF file instead of a "real raw file" is a good solution. A TIFF file can have 16 bits per colour. That's enough for all cameras I know.



            Ed: I wonder why this answer got downvoted. The answer is essentially correct (with reservation for the fact that camera manufacturers don't have to use TIFF structs, but many of them do).



            About the part about array of pixels straight out of the sensor, it is not ridiculous to expect something like that. Because that is how a lot of sensors outside the consumer camera market works. In these cases, You have to provide a separate file that describes the sensor.



            By the way, the word "RAW" is used because it should mean that we get the unprocessed sensor data. But it's reasonable that the camera manufacturers use a structured format instead of raw files for real. This way the photographer doesn't have to know the exact sensor data.
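You can check the "TIFF structures" claim yourself by looking at a raw file's first bytes: a TIFF container always begins with a byte-order mark. A minimal sketch (Python; the two magic constants are the standard TIFF header values, everything else is illustrative):

```python
# Sketch: does a file begin with a TIFF byte-order mark?
# Every TIFF-structured file starts with b"II*\x00" (little-endian)
# or b"MM\x00*" (big-endian); raw formats such as CR2, NEF and DNG
# reuse this container even though they are not strict TIFFs.

TIFF_MAGIC = (b"II*\x00", b"MM\x00*")

def looks_like_tiff(header: bytes) -> bool:
    return header[:4] in TIFF_MAGIC

def file_looks_like_tiff(path) -> bool:
    with open(path, "rb") as f:
        return looks_like_tiff(f.read(4))

print(looks_like_tiff(b"II*\x00" + 12 * b"\x00"))   # True  (TIFF container)
print(looks_like_tiff(b"not a tiff header"))        # False (own format)
```

A positive result only tells you the container is TIFF-shaped; as the comments below note, the image data inside may still need raw-specific decoding before it is viewable.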






            answered Feb 21 at 18:33 by Ulf Tennfors










            • 2





              I have had file recovery applications that "recovered" .cr2 files as TIFFs. Those files would not open using any application that can work with TIFFs. Changing the file extensions back to .cr2 made them perfectly usable .cr2 files.

              – Michael C
              Feb 21 at 18:50






            • 4





              That's not to say that RAW files aren't often actually using TIFF format containers — that's absolutely correct. It's just that the thing you're seeing probably isn't the "RAW" data in the sense I'm looking for.

              – mattdm
              Feb 21 at 19:26






            • 2





              OK, to clarify: the file does use structures from the TIFF file format. But since it doesn't follow the TIFF specification exactly, it is not a strict TIFF file. The point is that a TIFF library can be used to read the file; one doesn't have to build everything from scratch.

              – Ulf Tennfors
              Feb 21 at 19:39






            • 2





              One kind of needs to do something from scratch in order to do something useful with the file, though. Otherwise you get the almost-all-dark splotchy grayscale image I lead my answer with.

              – mattdm
              Feb 21 at 20:08






            • 4





              No one with any sense would expect a RAW file to be data straight off the sensor without any metadata of any kind like time, camera information, etc. The fact that TIFF-like formats are useful for structuring the data isn't really important, nor does it decrease the conceptual principle that that data is "straight" off the sensor without post-processing.

              – whatsisname
              Feb 21 at 22:52















