Working with some halftone images today gave me a couple of ideas. I've tried them with mixed success, and wanted to throw them out there for you to experiment with and discuss.
Take a good high-res scan of a halftone image (it must be in RGB or grayscale mode, no GIF or JPEG, and it must be an original scan, not a conversion from another source). Make sure your image is level according to the dots (not according to the edges or the pixels). Count the halftone dots per inch (or count the dots per quarter inch and multiply by 4). Don't accidentally count pixels. This is your screen value.
The screen is a grid, and each square of that grid is analogous to a pixel, so resize the image (using resample; I like Smooth Bicubic) so that the PPI equals the screen value (i.e., if you counted 100 dots in one inch, resize to 100 ppi). The idea is to turn one dot into one pixel.
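Here's a rough sketch of that resize step in Python with Pillow, just to make the math concrete. The scan PPI, screen value, and filenames are hypothetical, and Pillow doesn't have a "Smooth Bicubic" filter, so plain BICUBIC stands in for it here.

```python
# Minimal sketch: shrink a halftone scan so one screen dot maps to one pixel.
# SCAN_PPI and SCREEN_LPI are assumptions -- plug in your own measured values.
from PIL import Image

SCAN_PPI = 1200      # resolution the original was scanned at (assumed)
SCREEN_LPI = 100     # halftone dots per inch you counted on the print (assumed)

img = Image.open("halftone_scan.tif")   # hypothetical filename

# One halftone dot should become one pixel, so scale by screen / scan ratio.
scale = SCREEN_LPI / SCAN_PPI
new_size = (round(img.width * scale), round(img.height * scale))

# BICUBIC is the closest stock Pillow filter to "Smooth Bicubic".
small = img.resize(new_size, resample=Image.Resampling.BICUBIC)
small.save("halftone_descreened.tif", dpi=(SCREEN_LPI, SCREEN_LPI))
```

Any error in the measured screen value (or a scan that isn't perfectly level) means the pixel grid and the dot grid won't line up, which is exactly the kind of mismatch that produces moiré.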
How'd it turn out?
I'm getting mixed results, as I said. I think the answer is here somewhere, but I keep introducing moire, so I'm thinking there's a math error somewhere.