Why is CS5 default color mode 8 bit?


  • Photoshop: Why is CS5 default color mode 8 bit?

    Someone sent my blog a good question. I didn't have a good answer, so I bring it to you.

    Why does Photoshop make 8-Bit its default? While it does save the settings if you create a new file in 16-Bit, there does not seem to be anywhere you can specify "Open All Images in 16-Bit" in the same way you can specify "Open All Images in Adobe RGB (1998)". Then, if a file is in sRGB or 8-Bit, you get a warning dialog box.
    Last edited by artofretouching; 07-23-2011, 10:05 AM.

  • #2
    Re: Why is CS5 default color mode 8 bit?

    I do not know the answer, but that does not stop me from making a guess.

    I suspect that traditionally many images that find their way into Photoshop have been acquired as 8 bit in the first instance. I am not sure there is any real advantage in converting from a small colour space to a larger one, so perhaps that goes some way towards answering the question.

    Many digital cameras, however, can shoot RAW, so I think it makes a lot more sense to utilise the additional bit depth and import as 16 bit, perhaps in ProPhoto RGB. Once your image is open in Camera Raw you can specify that it be opened in Photoshop as 16 bit, and AFAIK those settings will persist for all images until you change them.



    • #3
      Re: Why is CS5 default color mode 8 bit?

      While I was thinking the same thing, the simple fact that I have access to Photoshop shows some sign of "professional" in my title. That does not mean I will necessarily remember to convert some designer's supplied JPG to 16-Bit before fixing it.

      So, I guess it goes back to the original question: Why isn't there a "Convert 8-Bit to 16-Bit" option in the preferences? We all know that Photoshop's Preferences are filled with 20+ years of legacy nonsense. No reason they can't, or even shouldn't, add this as an option.



      • #4
        Re: Why is CS5 default color mode 8 bit?

        Artofretouching, 16 bit is not a color space; it is the bit depth of the pixels in an image. Converting an 8 bit image to 16 bit is, for all practical purposes, a waste of time, memory, and disk space. An analogy: someone starts with the number 5.6781, rounds it down to 5.0000, and asks you to double it. Wanting your work to be very accurate, you add decimal places and make the number 5.00000000. Your result is not any more accurate. If you start with an 8 bit file, much of the information contained in a file that was converted from 16 bit to 8 bit has already been lost, and there is nothing you can do to restore it by converting back to 16 or 32 bit. You are just adding lots of zeros.
        When you are working with RAW files, software like Adobe Camera Raw offers you the option to open or save the files as 16 or 8 bit.
        Regards, Murray
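        Murray's rounding analogy maps directly onto what an 8-to-16 bit conversion does to pixel values. A minimal sketch (assuming Photoshop's 16 bit mode uses a 0..32768 scale, i.e. 32769 levels):

```python
# Promote 8-bit levels (0..255) to a Photoshop-style 16-bit scale
# (0..32768) and back. The roundtrip is lossless but gains nothing:
# only 256 of the 32769 available slots are ever occupied.

def to_16bit(v8):
    """Map an 8-bit level onto the 0..32768 scale."""
    return round(v8 * 32768 / 255)

def to_8bit(v16):
    """Map a 0..32768 level back to 0..255."""
    return round(v16 * 255 / 32768)

levels8 = range(256)
promoted = [to_16bit(v) for v in levels8]

# Every original value survives the roundtrip unchanged...
assert [to_8bit(p) for p in promoted] == list(levels8)
# ...but no new detail appears: still just 256 distinct values,
# "lots of zeros" in Murray's phrase.
assert len(set(promoted)) == 256
```

        The scaling itself is reversible; what it cannot do is recreate the detail that was thrown away when the file was first reduced to 8 bit.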



        • #5
          Re: Why is CS5 default color mode 8 bit?

          Hi Artofretouching
          Here is a way to have all images you bring into Photoshop converted to 16 bit mode, if you really don't want to make the bit depth change manually.

          First create an Action whose sole function is to change to 16 bit depth.

          Use File > Scripts > Script Events Manager...

          Set the Open Document event to use your Action to change to 16 bit depth.

          With this enabled, all images should come into Photoshop and be converted to 16 bit depth, no matter what bit depth they were originally.

          I personally don't use this approach, because the preferred approach (IMHO) would be to flag the user with a choice of bit depth, similar to the prompt for mismatched color spaces. You may not always want all images converted to 16 bit depth.

          Murray, you stated:

          Originally posted by mistermonday View Post
          Converting an 8 bit image to 16 bit for all practical purposes is a waste of time, memory, and disk space
          Hmmm, I really appreciate your numerous posts and learn a lot from them yet in this case I respectfully disagree.

          Converting an 8 bit image to 16 bit greatly reduces the additional cumulative quantization errors from further layer blends, etc., that would occur if you stayed in 8 bit mode and proceeded with editing.

          Here is an easy example.

          I start with an 8 bit image in the ProPhoto RGB color space where all pixels have R=G=B (gray values with no color) and the image has an equal number of pixels at each of the values from 0 to 255. This produces the following histogram:

          [attached screenshot: starting histogram, perfectly uniform]

          While leaving it in 8 bit mode I change to Lab Color Mode and then back to RGB mode. The updated histogram looks like this:

          [attached screenshot: histogram after the 8 bit Lab roundtrip, with spikes and gaps]

          The above roundtrip to Lab Color Mode created an uneven histogram where some values are doubled up and some are not used at all. This type of operation creates the potential for increased banding.

          Now, I run a similar experiment where I take the same starting image, convert to 16 bit first, change to Lab Color Mode and then back to RGB mode and then finally convert back to 8 bit mode. Note that the histogram is again perfectly uniform. I have done this with actual images and it produced less banding:

          [attached screenshot: histogram after the 16 bit roundtrip, again uniform]

          Conclusion: you get better results by converting to 16 bit mode, even for 8 bit images. Similar experiments may not be as clean as this one, yet in every case I find you get better results and fewer cumulative 8 bit math errors by converting to 16 bit first.
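          John's experiment can be approximated numerically. The sketch below uses a gamma-2.2 roundtrip as a stand-in for the RGB-to-Lab-and-back conversion (an assumption: the real Lab math differs, but any nonlinear transform shows the same quantization behaviour). It counts how many of the 256 gray levels survive the trip with an 8 bit versus a 16 bit intermediate:

```python
# Roundtrip a nonlinear transform (gamma 2.2 here, standing in for the
# Lab conversion) and count how many of the 256 input levels survive.
GAMMA = 2.2

def surviving_levels(intermediate_max):
    """Forward-transform each 8-bit level, quantize to the intermediate
    scale (255 = 8 bit, 32768 = Photoshop-style 16 bit), invert, and
    collect the distinct 8-bit results."""
    out = set()
    for x in range(256):
        y = round((x / 255) ** GAMMA * intermediate_max)
        z = round((y / intermediate_max) ** (1 / GAMMA) * 255)
        out.add(z)
    return len(out)

used_8bit = surviving_levels(255)
used_16bit = surviving_levels(32768)

# Far fewer levels survive the 8-bit roundtrip; the missing ones are
# the histogram gaps (and doubled-up spikes) described above.
assert used_8bit < used_16bit
print(used_8bit, used_16bit)
```

          The exact counts depend on the transform, but the pattern is the claim in question: the wider intermediate scale absorbs the rounding, so almost every original level comes back intact.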



          • #6
            Re: Why is CS5 default color mode 8 bit?

            A couple of supplementary questions, folks. I'm still using CS3 and note that a number of PS functions are disabled in 16 bit. Is that still the case with CS5? All my retouching is produced for publicity, ads, etc., and has a shelf life of months. Would I gain anything by working in 16 bit, given the downside of working with larger files? I must say smoother grads have definite appeal.
            R.



            • #7
              Re: Why is CS5 default color mode 8 bit?

              Originally posted by artofretouching View Post
              While I was thinking the same thing, the simple fact that I have access to Photoshop shows some signs of "professional" in my title.....
              It seems I have unintentionally caused offence; if that is the case then my apologies. I have to disagree that having Photoshop shows signs of being professional. Early versions of PS are relatively cheap to purchase, and the marketing of the application leads many to the conclusion that PS is the only way to go, pro or not.

              So, I guess it goes back to the original question: Why isn't there a "Convert 8-Bit to 16-Bit" option in the preferences?
              Perhaps consider asking Adobe themselves either directly or through their forum?

              We all know that Photoshop's Preferences are filled with 20+ years of legacy nonsense. No reason they can't, or even shouldn't, add this as an option.
              Not all of us know; I have to admit I am one of those not aware of the so-called legacy nonsense. Would you care to comment further on this? Who knows, your wish for a Convert 8 to 16 bit feature may even appear in CS8-10.



              • #8
                Re: Why is CS5 default color mode 8 bit?

                Originally posted by John Wheeler View Post
                Hi Artofretouching
                Murray, you stated:

                Converting an 8 bit image to 16 bit greatly reduces the additional cumulative quantization errors from further layer blends, etc., that would occur if you stayed in 8 bit mode and proceeded with editing.
                John, you may notice that I used the word "practical". I am very familiar with quantization error, gaps in histograms, the effects of blending, levels adjustments, and calculations on 8 bit vs 16 bit images. However, there is a significant difference between the practical and the theoretical / mathematical. Your example with Lab is actually a perfect example of my argument. If you read the book LAB COLOR: The Canyon Conundrum by Dan Margulis you will find a very detailed explanation of why.

                I personally do all my capture in 14 bit and all subsequent work in 16 and 32 bit, and I only convert to 8 bit for output and certain distribution. The problem with starting from an 8 bit jpg is that it has likely already been compromised (assuming it is not vector or created art): first by truncation to 8 bit, and second by jpg compression, which usually has a much more destructive impact. Yes, conversion to 16 bit may help reduce effects caused by subsequent post processing, but that will depend a lot on the specific image and what processing is done. It is a little like applying a coat of lacquer to a car that has just been through a two-day desert sandstorm.

                Today one reads and hears much about the wisdom of working in 16 bit, and that is generally good advice. But we should all be aware that automatically converting every 8 bit jpg to 16 bit is not necessarily a useful practice.

                Regards, Murray



                • #9
                  Re: Why is CS5 default color mode 8 bit?

                  Originally posted by mistermonday View Post
                  John, you may notice that I used the word "practical". I am very familiar with quantization error, gaps in histograms, the effects of blending, levels adjustments, and calculations on 8 bit vs 16 bit images. However, there is a significant difference between the practical and the theoretical / mathematical. Your example with Lab is actually a perfect example of my argument. If you read the book LAB COLOR: The Canyon Conundrum by Dan Margulis you will find a very detailed explanation of why.

                  I personally do all my capture in 14 bit and all subsequent work in 16 and 32 bit, and I only convert to 8 bit for output and certain distribution. The problem with starting from an 8 bit jpg is that it has likely already been compromised (assuming it is not vector or created art): first by truncation to 8 bit, and second by jpg compression, which usually has a much more destructive impact. Yes, conversion to 16 bit may help reduce effects caused by subsequent post processing, but that will depend a lot on the specific image and what processing is done. It is a little like applying a coat of lacquer to a car that has just been through a two-day desert sandstorm.

                  Today one reads and hears much about the wisdom of working in 16 bit, and that is generally good advice. But we should all be aware that automatically converting every 8 bit jpg to 16 bit is not necessarily a useful practice.

                  Regards, Murray
                  I totally agree with what you have stated/quoted. IMHO, if 8 bit images are all one is provided and can access to begin with, converting to 16 bit before additional processing does help keep a non-ideal situation from getting worse, from a practical and not just theoretical standpoint (based on my own practical experience).

                  Back to the OPs original question. Here is a link for requesting features or reporting bugs:
                  https://www.adobe.com/cfusion/mmform...&name=wishform

                  Adobe certainly could provide such a feature, yet as with all businesses they work on a prioritized basis, I am sure.

                  There are certainly features within Photoshop with an old legacy that are not greatly needed anymore. However, I greatly appreciate that Adobe maintains those features from a backwards compatibility standpoint.

                  Artofretouching, you were not clear whether you just wanted that feature or wanted alternatives that achieve the same result. I provided one path in a previous post using the Script Events Manager. Another approach, which works for JPG and TIFF files, is to have them open through ACR; within ACR you can set the defaults to open in Photoshop at the desired bit depth and color space. Hope the additional information is helpful.



                  • #10
                    Re: Why is CS5 default color mode 8 bit?

                    Originally posted by artofretouching View Post
                    So, I guess it goes back to the original question: Why isn't there a "Convert 8-Bit to 16-Bit" option in the preferences?
                    There is.

                    Edit > Color Settings, then change your Working Spaces > RGB to ProPhoto RGB. Then set your Camera Raw import options the way John describes above.

                    You can also use Image > Mode to set the bit depth to 16-bit on a case-by-case basis.

                    Originally posted by artofretouching View Post
                    We all know that Photoshop's Preferences are filled with 20+ years of legacy nonsense. No reason they can't, or even shouldn't, add this as an option.
                    Originally posted by Tony W View Post
                    Not all of us know, I have to admit I am one of those that is not aware of the so called legacy nonsense, would you care to comment further on this? Who knows your wish for Convert 8 to 16 bit feature may even appear in CS8-10 .
                    Well, actually, there is such legacy stuff. It wasn't nonsense at the time, nor is a good bit of it nonsense now -- if you're in pre-press, which is what Photoshop was originally designed for. Photoshop morphed into a photographer's tool later in its life but didn't start that way.

                    One example is this very 8-bit vs 16-bit default that we're discussing. That might make sense for Elements, but not for Photoshop proper. AdobeRGB 16-bit should be the default, if not ProPhoto RGB.

                    Another example is Photoshop's memory handling. After a horrendously painful time with file I/O at 8 GB of RAM, I upgraded to 16 GB. That helps a tremendous amount overall, but Photoshop still caches several GB of data on disk when there is enough free RAM to store it there. I allow Photoshop 70% of my RAM, but it still prefers to keep too much on disk, which makes it much slower when that data needs to be read back into memory.

                    There are other areas, too, like most of the "secret" and "professional" performance-enhancing settings we all read about and/or take for granted. We set them manually without really thinking about it; we just know they have to be set. A lot of those settings should be defaults in Photoshop proper.
                    Last edited by RobertAsh; 07-23-2011, 03:19 PM.



                    • #11
                      Re: Why is CS5 default color mode 8 bit?

                      Interesting reading Murray's and John's comments. Not wanting to take the thread off track, but maybe I can add a little fuel to the discussion with my opinion, FWIW, and in the process add to my knowledge.

                      Firstly, I suspect Adobe may have thought it was not appropriate to add this functionality to Preferences due to the prevalence of 8 bit JPEGs (AFAIK, until JPEG 2000, all were 8 bit). If that is the case, could it be for fear of potentially making images worse by converting from 8 to 16 bit?

                      There seems to have been a lot of debate about editing 8 bit as 16 bit: does it make sense, does it improve things, does it have the potential to degrade them, etc.

                      My current view is based on experience to date and is also coloured by the views of those 'experts' I respect. One comment that struck me, from a respected Photoshop guru, was along the lines of 'converting 8 bit files to 16 bits is a voodoo manoeuvre that will gain no improvement'. I think it may have been Katrin Eismann.

                      When I started using Photoshop I used to convert 8 bits to 16 bits in the belief that I was gaining something. My revised view is that gains may be possible, but there are associated risks that need to be understood. I have seen/introduced posterisation in my own images that could be attributable to editing an original 8 bit file as 16 bit.

                      So this is my current thinking. Of course, I accept that my thinking may be out of date, plain wrong, or just a load of c**p, and I would welcome different views.

                      If an image is available at anything over 8 bit then I will edit in 16 bit. This includes scanning and acquiring from a DSLR. In my case I thought my DSLR was 12 bit, but it is actually 12 bit compressed, which I believe equates to a real bit depth of only about 9.5 bits!

                      There is a problem when converting 8 to 16 bit that may not be seen, or that we may not even be aware of. The original 8 bit image has 256 levels; when converted to 16 bit it has 32,769 possible levels (I think PS is actually 15 bit plus one level, hence 32,769 rather than 65,536), of which 32,000+ go unused. This must mean there are huge gaps in the histogram. AFAIK the PS histogram of a 16 bit image is displayed as an 8 bit view, so the gaps are probably not apparent. Depending on the editing steps taken, it is conceivable that the gaps will increase even more as the 8 bit level information is spread over the new 16 bit levels.
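                      Tony's gap arithmetic can be checked directly. Assuming a simple proportional mapping onto Photoshop's 0..32768 scale, the 256 original levels land roughly 128 slots apart:

```python
# Spread 256 8-bit levels across the 0..32768 (32769-level) scale and
# measure the holes between the occupied slots.
occupied = sorted({round(v * 32768 / 255) for v in range(256)})
gaps = [b - a for a, b in zip(occupied, occupied[1:])]

assert len(occupied) == 256      # one slot per original level
assert set(gaps) == {128, 129}   # ~128-step holes between neighbours
print(32769 - len(occupied))     # empty slots hidden by the 8-bit histogram view
```

          A 256-bin histogram display cannot show holes narrower than its own bins, which is consistent with the gaps being invisible in Photoshop's histogram panel.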

                      Robert, thanks for the info. It seems to me that generally Adobe does not remove much, if anything, from previous releases. On balance I think that is probably a good idea: it keeps customers happy and brand loyal by maintaining legacy features.



                      • #12
                        Re: Why is CS5 default color mode 8 bit?

                        Originally posted by mistermonday View Post
                        However, there is a significant difference between the practical and theoretical / mathematical. Your example with LAB is actually a perfect example of my argument. If you read the book LAB COLOR: The Canyon Conundrum by Dan Margulis you will find a very detailed explanation of why.
                        Well, the practical and theoretical example in the book should be subject to close scrutiny! It's a pretty awful quality reproduction, for one (low line screen, not very good paper); it's far from a high quality reproduction compared with, say, a high line screen print job, let alone a high end inkjet or contone reproduction. So can you convert back and forth dozens of times (the first conversion is really the one that dumps the additional data, I'll point out) and not see anything at this kind of low quality repro? In this case, yes. But what happens as you continue to edit the image, or find you need a far higher end output device? The banding usually seen shows up in smooth gradients (sky, chrome car bumpers, etc.). Do you see anything remotely like that in Dan's one example? Nope.

                        The proof here is not all-encompassing by a long shot! And in terms of Dan's belief system about high bit files (which virtually every scanner and pro level camera provides), why dump the data? The famous "prove the need for high bit" debate is one in which the goal posts move any time Dan feels the need, as well described here (by an imaging color scientist, no less):

                        http://www.brucelindbloom.com/index....nMargulis.html

                        Years ago, when I provided Dan a real world image that showed visual on-screen degradation in 8 bits per color that didn't show in high bit, he changed the goal posts again and began a new campaign to dismiss ProPhoto RGB! IMHO, he's not too interested in scientific concepts, but in flat earth concepts.

                        So the best thing I can say here is, ignore Dan’s examples!

                        Getting back to the OP: yes, the default for a new doc is 8 bits per channel. But the dialog is sticky, meaning that once you set up a new doc as you desire, it should stick to the color space and bit depth you asked for. If the clean, out-of-the-box default were, say, ProPhoto RGB in 16-bit, people who want a clean out-of-the-box sRGB 8-bit would question why.



                        • #13
                          Re: Why is CS5 default color mode 8 bit?

                          Originally posted by Tony W View Post
                          ...If an image is available at anything over 8 bit then I will edit in 16 bit. This includes scanning and acquiring from a DSLR. In my case I thought my DSLR was 12 bit, but it is actually 12 bit compressed, which I believe equates to a real bit depth of only about 9.5 bits!

                          There is a problem when converting 8 to 16 bit that may not be seen, or that we may not even be aware of. The original 8 bit image has 256 levels; when converted to 16 bit it has 32,769 possible levels (I think PS is actually 15 bit plus one level, hence 32,769 rather than 65,536), of which 32,000+ go unused. This must mean there are huge gaps in the histogram. AFAIK the PS histogram of a 16 bit image is displayed as an 8 bit view, so the gaps are probably not apparent. Depending on the editing steps taken, it is conceivable that the gaps will increase even more as the 8 bit level information is spread over the new 16 bit levels.

                          Robert, thanks for the info. It seems to me that generally Adobe does not remove much, if anything, from previous releases. On balance I think that is probably a good idea: it keeps customers happy and brand loyal by maintaining legacy features.
                          You're right. Also, part of keeping customers happy in high-end production environments that rely on your product means not introducing unexpected behavior. High-end environments have a lot of automation, standardization, and reliance on products behaving absolutely consistently. People write scripts, actions, workflows, etc. that make assumptions about the environment they're running in, and anything that changes could conceivably mess up something for an important customer, or a group of them. That's one reason it's always important to read release notes.

                          Regarding compression, it depends on which compression algorithm(s) your camera provides. Nikon has visually lossless compression that allows you to keep full 14-bit image quality, or normal compression that conceivably could affect image quality. I use lossless in all my Nikons.

                          Also, there is no way to have 9.5 bits; you have either 9 or 10, because a bit is either there or not. It cannot be halfway there.

                          Regarding your histogram question: you typically don't actually have that many gaps in the histogram.

                          One reason is that 8 bits give you 256 levels per pixel or per dot, not just 256 levels overall. When pixels, or especially dots, get close enough together they become visually indistinguishable. So two dots side by side at 256 levels each present the eye with 256 * 256 = 65,536 different color variants. At 1440 dpi or 2880 dpi, with good software interpolation algorithms, you get literally millions of possible color combinations even from 8 bits, because the print dots are so close together. Nifty!
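                          The spatial-mixing idea is easy to quantify in a small sketch (an assumption: simple averaging stands in for how the eye blends adjacent print dots). Two side-by-side 8-bit dots offer 256 * 256 = 65,536 combinations, and even the blended tones alone nearly double the 256 single-dot levels:

```python
from itertools import product

# All ordered pairs of 8-bit tones two adjacent dots could take:
pairs = list(product(range(256), repeat=2))
assert len(pairs) == 256 * 256   # 65,536 combinations, as in the post

# If the eye blurs each pair into its average, the blended tones give
# 511 distinct perceived levels (sums 0..510, halved) instead of 256.
perceived = {(a + b) / 2 for a, b in pairs}
assert len(perceived) == 511
```

                          Averaging over larger neighbourhoods of dots multiplies the perceived tonal resolution further, which is the effect dithering and halftoning exploit.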

                          That's one reason Dan Margulis' point is valid for so many cases, though that's not the reason he gives in his book.

                          That's also why such beautiful prints can be made from 8-bit files. What matters in the end is not the exact numbers in the technical specs; what matters is the clever ways people have developed of getting the most out of specs that look so unimpressive at first glance.



                          • #14
                            Re: Why is CS5 default color mode 8 bit?

                            Originally posted by RobertAsh View Post
                            .......Regarding compression, it depends on which compression algorithm(s) your camera provides. Nikon has visually lossless compression that allows you to keep full 14-bit image quality, or normal compression that conceivably could affect image quality. I use lossless in all my Nikons....Also, there is no way to have 9.5 bits; you have either 9 or 10, because a bit is either there or not. It cannot be halfway there.
                            You are right, my bad! For some reason I typed 9.5 bits, probably because (apart from not thinking!!) Nikon only offers lossy compression on the 12 bit RAW acquired by the D90. I understand this equates to a true bit depth of around 9 or 10 bits.

                            I knew I should have stumped up the additional cash for the D300 when I made the move to digital - I just did not investigate thoroughly enough.

                            While I would prefer the potential benefits of a true 12 or 14 bit capture, the D90 is a fine camera for the price and very capable. Too true that beautiful prints have been, and are being, made from 8 bit captures.



                            • #15
                              Re: Why is CS5 default color mode 8 bit?

                              Originally posted by Tony W View Post
                              I knew I should have stumped up the additional cash for the D300 when I made the move to digital - I just did not investigate thoroughly enough.

                              While I would prefer to have the potential benefits of a true 12 or 14 bit capture the D90 is a fine camera for the price and very capable. Too true that beautiful prints have been and are being made from 8 bit captures
                              Well, I can't get too proud. A key reason I upgraded from the D200 my wife bought me to the D300, instead of buying a D80 (the D90 wasn't out quite yet), is that my wife was ragging on me, wondering out loud why on earth I was seriously considering "spending $1000 for a plastic camera", as she put it. I didn't know she took buying me cameras that seriously. It was only later that I found out about the other extras the D300 had. Glad I went with it.

                              I did make a departure recently and bought the D7000, though -- that camera is the bomb! Really remarkable. The image quality and high ISO performance are world-class. With the right post-processing you can make its images look like they were taken with a 4x5-inch camera. From a $1200 camera. My D300 is now my backup camera. And I'm not the only one, judging from a few posts I've read from others who did the same thing.

                              It's so good that LeicaRumors.com posted a video awhile back in which one of Leica's spokes-photogs compared the Leica M9... not to the D3x or D3s he had on his bench... but to the D7000 he picked up from right beside those two. He even conceded that from a technology spec standpoint the D7000 was superior and did a lot more, then went on to try to tell people why they ought to buy the $8000 camera that did less instead of the $1200 camera that did more. Really amazing.

                              Yes, you're right about the 8-bit images. That's one thing I really like about Dan Margulis: the guy understands things like the diminishing returns of a higher bit count. He is a real genius and has truly earned his market-leading reputation. I plan to take his one-week workshop in the next couple of years if at all possible.

                              Anyone will always have corner cases where their techniques don't work. Anyone. But the large majority of what Margulis is saying is valid, well-explained and just plain works. Certainly for me. And he provides the best explanations I've personally seen for most of the topics he addresses in his book.
                              Last edited by RobertAsh; 07-23-2011, 08:07 PM.

