An interesting look at where digital photography may be headed.
http://www.americanscientist.org/issues/pub/computational-photography
The digital camera has brought a revolutionary shift in the nature of photography, sweeping aside more than 150 years of technology based on the weird and wonderful photochemistry of silver halide crystals. Curiously, though, the camera itself has come through this transformation with remarkably little change. A digital camera has a silicon sensor where the film used to go, and there's a new display screen on the back, but the lens and shutter and the rest of the optical system work just as they always have, and so do most of the controls. The images that come out of the camera also look much the same—at least until you examine them microscopically.
But further changes in the art and science of photography may be coming soon. Imaging laboratories are experimenting with cameras that don't merely digitize an image but also perform extensive computations on the image data. Some of the experiments seek to improve or augment current photographic practices, for example by boosting the dynamic range of an image (preserving detail in both the brightest and dimmest areas) or by increasing the depth of field (so that both near and far objects remain in focus). Other innovations would give the photographer control over factors such as motion blur. And the wildest ideas challenge the very notion of the photograph as a realistic representation. Future cameras might allow a photographer to record a scene and then alter the lighting or shift the point of view, or even insert fictitious objects. Or a camera might have a setting that would cause it to render images in the style of watercolors or pen-and-ink drawings.
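The dynamic-range idea is easy to sketch in code. The snippet below is a minimal, illustrative take on exposure fusion, assuming a stack of aligned shots of the same scene taken at different exposures; the function name and the simple "distance from mid-gray" weighting are my own assumptions for clarity, not a method described in the article.

    # Illustrative sketch only: merge an exposure bracket by weighting each pixel
    # toward the shot where it is neither blown out nor crushed to black.
    import numpy as np

    def fuse_exposures(images):
        """Blend a list of aligned exposures (float arrays scaled to [0, 1])."""
        stack = np.stack(images).astype(np.float64)
        # Pixels near 0.5 are well exposed; pixels near 0 or 1 carry little detail.
        weights = 1.0 - 2.0 * np.abs(stack - 0.5)
        weights = np.clip(weights, 1e-6, None)
        # Weighted average across the stack keeps detail in both shadows and highlights.
        return (weights * stack).sum(axis=0) / weights.sum(axis=0)

The result is a single frame that preserves usable detail in regions that no individual exposure could capture on its own, which is the effect the paragraph above describes.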
<snip>
For some purposes a hand-rendered illustration can be clearer and more informative than a photograph, but creating such artwork requires much labor, not to mention talent. Raskar's camera attempts to automate the process by detecting and emphasizing the features that give a scene its basic three-dimensional structure, most notably the edges of objects. Detecting edges is not always easy. Changes in color or texture can be mistaken for physical boundaries; to the computer, a wallpaper pattern can look like a hole in the wall. To resolve this visual ambiguity Raskar et al. exploit the fact that only physical edges cast shadows. They have equipped a camera with four flash units surrounding the lens. The flash units are fired sequentially, producing four images in which shadows delineate changes in contour. Software then accentuates these features, while other areas of the image are flattened and smoothed to suppress distracting detail. The result is reminiscent of a watercolor painting or a drawing with ink and wash.
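The multi-flash depth-edge trick can be sketched compactly. The code below is a rough illustration under simplifying assumptions (grayscale float images, flashes placed exactly left, right, above, and below the lens, and a crude directional step test standing in for the authors' actual edge traversal); the names, threshold, and sign conventions are assumptions, not details taken from the article.

    # Rough sketch: find depth edges from several flash images by looking for
    # the cast shadows that appear next to true object boundaries.
    import numpy as np

    def depth_edges(flash_images, directions, threshold=0.3):
        """flash_images: list of 2-D float arrays; directions: (dy, dx) offsets,
        one per flash, pointing from the lens toward that flash unit."""
        stack = np.stack(flash_images).astype(np.float64)
        composite = stack.max(axis=0) + 1e-6      # brightest-of-all, largely shadow-free
        edge_map = np.zeros_like(composite, dtype=bool)
        for img, (dy, dx) in zip(flash_images, directions):
            ratio = img / composite                # cast shadows show up dark here
            # A sharp drop along the flash direction marks a lit-to-shadow step,
            # i.e. a likely depth edge (sign convention is an assumption).
            step = ratio - np.roll(ratio, shift=(dy, dx), axis=(0, 1))
            edge_map |= step < -threshold
        return edge_map

    # Example call for flashes to the left, right, above, and below the lens:
    # edges = depth_edges([img_l, img_r, img_t, img_b],
    #                     [(0, -1), (0, 1), (-1, 0), (1, 0)])

The detected edges would then be accentuated and drawn over a flattened, smoothed version of the image, matching the final step the passage describes.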