I think a lot about resolution when I consider the disparities
between machinic and biological vision. The whole dirty pixels
project was in a way negotiating those ideas. I remember how puzzled
I felt when I saw one of Mark's 8x10/large-format hand prints - the
impossible detail the 8x10 negative produces from ordinary scenes
(i.e. non-microscopic or telescopic) - just because my expectation of
sharpness was preconditioned by film, TV, video and 35mm print
resolution.
I had Mark and Haru take 8 by 10 photos of the face and reverse of
the embroidery of my computer desktop - and I remember the shock of
seeing them after they were printed (on a non-digital system no
longer available in NZ for prints of that size). It was exactly that
sharpness I had been missing. As a serious abuser of Photoshop's
unsharp mask, the visual quality of the image from that huge negative
really made me question my ideas about photography. The photo of the
needlepoint has, at one level, a very limited amount of detail (640
by 480 pixels/stitches) and at another, vastly more (the visual
quality of the stitches, bits of dangling thread, the canvas and pins
holding it square).
There are (some fairly vexed) parallels between photographic grain
(molecules as pixels), digital resolution (CCDs) and the eye - where
you could make an argument for rods and cones as the grain or
structure of what we see. And the eye, of course, even when working
according to spec, only has a very small area of focussed colour
vision (about 2 degrees of our 200 degree angle of vision), where the
cones are predominant.
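To put those two figures in rough proportion - a back-of-envelope sketch, using only the 2-degree and 200-degree numbers from above and treating the field of view as one linear dimension:

```python
# Back-of-envelope: how small is the zone of focused colour vision?
# (the 2-degree and 200-degree figures are from the text above)
foveal_deg = 2    # approx. angular width of sharp, cone-dominated vision
field_deg = 200   # approx. total horizontal field of view

linear_fraction = foveal_deg / field_deg
print(f"sharp colour vision: {linear_fraction:.0%} of the field, linearly")
```

So the sharp colour zone is about 1% of the field in one dimension - the eye's "full resolution" sensor is tiny, and the rest is filled in by movement and memory.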
What is interesting about the potential of machine vision is how
commercial production values oscillate between
portability/convenience and image quality. But video etc. doesn't
exist as an image without a machine to make its magnetic info
resemble an image...
In the camera-eye analogy, film is a long way from the biological
eye, because those little pictures formed at the focal plane of the
camera aren't what we see. Our brains don't apprehend little
pictures, but the pattern of stimulation on our retina, which gets
turned into electrical impulses - the parallel with video is much
stronger, I think.
Back to your question: 'Is an "uber sight" practical for our
primitive brains?' This is such an interesting question - I seem to
remember a phrase from the Bible, 'the eye never has enough of
seeing' (Ecclesiastes 1:8).
When I think of models of vision, I always come back to that amazing
scene in Blade Runner, where Deckard puts a photo into some kinda TV/
video machine, and is able to zoom and pan and derive ever more
detail from this one kinda haphazard snapshot. It's almost an image
of infinite resolution. I was struck by this story in the New
Scientist that promises something maybe a little like that:
"In an ordinary digital camera, a sensor behind the lens records the
light level that hits each pixel on its surface. If the light rays
reaching the sensor are not in focus, the image will appear blurry.
Now, Pat Hanrahan and his team at Stanford University have figured
out how to adjust the light rays after they have reached the camera.
They inserted a sheet of 90,000 lenses, each just 125 micrometres
across, between the camera's main lens and the image sensor. The
angle of the light rays that strike each microlens is recorded, as
well as the amount of light arriving along each ray.
Software can then be used to adjust these values for each microlens
to reconstruct what the image would have looked like if it had been
properly focused. That also means any part of the image can be
refocused - not just the main subject."
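The refocusing trick that quote describes can be sketched numerically: treat the microlens data as a stack of sub-aperture views, then shift each view in proportion to its distance from the central viewpoint and average. This is a minimal shift-and-sum sketch in NumPy - the array layout and the `alpha` parameter are my assumptions for illustration, not details from the article or the Stanford camera:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-sum (a sketch, not the
    Stanford implementation).

    lightfield: 4D array of shape (U, V, H, W) - one HxW
        sub-aperture image per viewpoint (u, v), as recoverable
        from the microlens array.
    alpha: refocus parameter; shifting each view relative to the
        central viewpoint moves the virtual focal plane.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture image in proportion to its
            # offset from the central viewpoint, then accumulate.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` the views are simply averaged, reproducing the ordinary photograph; other values of `alpha` refocus after the fact, which is exactly the "adjust the light rays after they have reached the camera" move - and, loosely, the Deckard move.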
Interesting how the camera is touted for security purposes (a la
Blade Runner). Infinitely-focussing panopticon, anyone?