Edited on Wed Jan-24-07 06:53 PM by jmowreader
In the very old days, before computers were used in the graphic arts, you prepared photos for printing on a stat camera.
The stripper I used to work for had been a printer since before the Korean War (when he got drafted, he was sent to work as an engraver), so I foolishly asked him how to make a color separation on a camera. We spent the next three days separating color on the camera, and I got pretty good at making basic seps.
To separate a photo on a camera, you take four photos of the original photo through filters and screens. The screen is a sheet of acetate with rows of little dots all over it. The number of dots per inch determines the coarseness of the separation--how coarse or fine you want it depends on the paper, the press and the process you're running--and the angle is determined by the color of ink that's going to be on the plate.
Cyan is normally run at either 15 or 105 degrees. Black is ALWAYS run at 45. Magenta is usually run at 75. And yellow, because it's the least-visible color, is at an off-angle: either 0 or 90. Notice that the other three colors are offset 30 degrees from one another; yellow is only at a 15-degree offset. This causes moire, but you don't see it because screened yellow ink is hard to see.
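If you want to check the spacing logic for yourself, here's a little Python sketch using the typical angles above (the dictionary and function names are mine, and a real RIP lets you change all of these, as the next paragraph shows):

```python
# Typical process-color screen angles from the text, in degrees.
SCREEN_ANGLES = {"cyan": 15, "magenta": 75, "yellow": 0, "black": 45}

def angular_offset(a, b):
    """Smallest offset between two screen angles.
    A halftone dot grid repeats every 90 degrees, so work modulo 90."""
    d = abs(a - b) % 90
    return min(d, 90 - d)

# Cyan, magenta and black all sit 30 degrees apart -- the spacing
# that keeps their overlapping grids from making a visible moire.
for a, b in [("cyan", "magenta"), ("cyan", "black"), ("magenta", "black")]:
    print(a, "vs", b, "=", angular_offset(SCREEN_ANGLES[a], SCREEN_ANGLES[b]))

# Yellow is only 15 degrees from cyan: that IS a moire-prone offset,
# but you don't see it because yellow is so faint.
print("yellow vs cyan =", angular_offset(SCREEN_ANGLES["yellow"], SCREEN_ANGLES["cyan"]))
```

Run it and the three strong colors all come back 30 degrees apart, while yellow lands 15 off its neighbor.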
(Those are typical angles. We fucked with my Scitex system's screening system for about a month when we got it--we switched the cyan and magenta angles, rotated all four screens six degrees, altered the dot shape on the yellow dot, and got a screening set that was just perfect for my presses.)
Now for the filters: each separation negative contains only the parts of the photo needed for that color. Therefore, you must use a filter over your lens that removes all but that color of light from the photo. Now the shit gets weird, and here we've got to go WAY deeper into subtractive color theory than you really wanted to delve.
How any printed piece actually works is pretty simple: the light you use to see the image goes through the colorants, hits the paper, and reflects back up into your eyes. The colorants absorb part of the light...the part that comes back, your eye melds into a coherent whole.
Cyan ink removes red light. Therefore, to shoot the cyan negative you put a red filter over the lens. Magenta ink removes green light, so a green filter is used. Yellow ink removes blue light, so a blue filter is used.
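That ink-to-filter pairing is just a one-minus relationship, and you can sketch it in a couple of lines (the function name is mine, and this deliberately ignores black generation and undercolor removal, which any real separation also does):

```python
def naive_cmy(red, green, blue):
    """Naive subtractive separation on a 0.0-1.0 scale: each ink's
    density is the complement of the light that ink is there to absorb.
    Cyan eats red, magenta eats green, yellow eats blue."""
    return (1.0 - red, 1.0 - green, 1.0 - blue)  # (cyan, magenta, yellow)

# A pure red patch needs no cyan at all (cyan would kill the red light)
# but full magenta and yellow to absorb the green and blue.
print(naive_cmy(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0)
```

Same logic as the camera: the red filter passes only the light cyan ink would absorb, so whatever exposes the negative is exactly what the cyan plate should NOT print.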
(Right now you're thinking "but there are SEVEN colors of light that, when properly combined, make all the colors one can possibly see." There are, and when we separate color we ignore four of them. There's a reason the real world is so much more vivid than the colors in a picture book.)
To shoot the black negative in a traditional separation, you use a neutral-density filter that just removes about half of the light, in all the colors, from the image. The black neg is just a ghosted halftone. You can tell in an instant the difference between a computer-generated black neg and a camera-generated one: the camera-generated one is a lot more detailed.
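As a rough model of what that ND-filtered exposure amounts to (this is my own simplification -- equal channel weights and a 50% ghost, not real densitometry):

```python
def ghosted_black(red, green, blue, ghost=0.5):
    """Crude sketch of the camera-made black plate described above.
    The ND filter cuts all colors of light equally, so the plate is
    just a dimmed ("ghosted") rendering of how dark the image is.
    'ghost' is the fraction of full black the plate can reach --
    an assumed parameter standing in for the filter's density."""
    lightness = (red + green + blue) / 3.0  # equal weights: a simplification
    return ghost * (1.0 - lightness)        # black ink coverage, 0.0-1.0

print(ghosted_black(0.0, 0.0, 0.0))  # solid shadow -> 0.5, half-strength black
print(ghosted_black(1.0, 1.0, 1.0))  # white -> 0.0, no black ink at all
```

Because the ND filter dims every color alike, every detail in the original survives into the black neg, which is why the camera version looks so much more detailed than a computed one.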
All right, how do you get rid of the screen? Start by scanning at a really high, but irrational, DPI--my favorite setting is 763dpi. For an album cover you're looking at rigs like the Screen Cezanne, some really huge Scitex flatbeds (Scitex sold to Creo, and Kodak has it now), or one of the big Fuji pro scanners. You probably won't find those in a Kinko's, but you might have a printing plant in town that has one. We're talking $40,000 for a scanner.
Once you have this ungodly huge scan, open it in Photoshop, zoom in on a moderately dense area, then run Gaussian Blur. The little Gaussian Blur window will come up. Set the blur radius just barely high enough to make the dots go completely away. Back down one and see if they come back at all. If they don't, back down one more time and see if they come back. If they do, bump it back up one and hit OK. Once the blur has done its thing, resample the image to 300dpi.

What you have done is kinda weird. You have sharpened the scan by throwing part of it--the rough edges--away. It sounds weird, looks weird, feels weird and probably IS weird, but it works, so don't complain very loudly.

(I'd better edit here and tell you that the irrational DPI you scanned at is critical. When you resample to 300dpi, Photoshop has to throw away more dots in some places than in others, and it is this, not just the sampling down, that sharpens the image. If you scan at 1200dpi and blur it, when you sample down you'll get a smaller yet still fuzzy image. Scan at a weird DPI and downsample, and it clears things right up. You can do this to make photos larger too: upsample to the final image dimensions at 763dpi, blur out the artifacts, and resample to 300. It will look a hell of a lot better than it has a right to.)
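The arithmetic behind that parenthetical is easy to check (the function name is mine; "irrational" here just means the scan DPI isn't a clean multiple of the output DPI):

```python
def resample_ratio(scan_dpi, target_dpi=300):
    """Source pixels per output pixel when downsampling."""
    return scan_dpi / target_dpi

# 763 dpi doesn't divide evenly into 300, so every output pixel is
# built from a different fractional mix of source pixels -- that
# uneven averaging is what scrubs out the blurred halftone dots.
print(resample_ratio(763))    # ~2.543: non-integer, shifts at every pixel
print(resample_ratio(1200))   # 4.0: clean 4x4 block averaging, fuzz survives
```

That's the whole trick: 1200-to-300 is an even 4:1, so the resampler averages identical blocks everywhere and the fuzz comes through intact; 763-to-300 never lines up, so the leftover dot structure gets chewed apart.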