The photo at the top of this page is 205 degrees wide. If you were there, both ends of the picture would be behind you. Yet it looks plausible, because it satisfies many of your visual expectations. Lines you would expect to be straight look straight, things you know should be vertical are vertical, and everything seems to be a reasonable size and shape. Painters know very well how to exploit such expectations in order to create believable pictures, even of scenes that could not exist in reality, and to make us see space, even though the picture is flat. This aspect of their art is called perspective.
Perspective is about trying to depict what we actually see. Leonardo da Vinci called that “natural perspective”, and famously declared that it would never be captured on a piece of paper. He was basically right, but that has not stopped generations of artists from trying to do it anyhow. In the process they have invented some useful conventions and clever tricks, and created many remarkable images. Now, with software partly based on painters’ methods, photographers can join the struggle for natural perspective.
Photographers normally deal with images of real scenes, captured objectively by lenses that faithfully perform the rectilinear perspective projection. That projection gave Renaissance painters a powerful new way of depicting space, and is the foundation of the modern language of perspective. Nevertheless, relatively few photos give a really convincing sense of space, and many photographers prefer just to create interesting flat patterns. One reason for this is that until recently photographers could not do much to control perspective. They could only match lens focal length to camera-to-subject distance, resulting in a very predictable, and so rather boring, “photographic perspective”. But in the digital age, we can alter the relationship between distance and field of view that is built into our lenses. In fact, we can completely separate an image from the lens that took it. By measuring and correcting the geometrical effects of the lens and camera, we can recover an ideal spherical image of the subject. There is no easy way to view such an image, but there are many possible ways to convert it back into a viewable flat picture. I think of that as re-photographing the subject using a software lens. And software lenses are not limited to simulating what a glass lens could do.
One thing glass lenses cannot do is capture really wide fields of view without obvious distortion. Rectilinear lenses grossly over-expand the outer parts of wide images; fish-eye lenses grossly compress those areas, and moreover bend most straight lines into curves. But now we have panorama stitching software that can extract ideal spherical images from photos taken with any lens, combine those seamlessly into a larger spherical image, and render that into flat views in a great variety of ways. One possible result is an image with a really big field of view and believable perspective, like the picture above.
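The over-expansion and compression can be made concrete with the standard idealized projection formulas: a rectilinear lens maps an angle θ off the optical axis to an image radius r = f·tan θ, whose radial stretch grows as sec²θ, while an ideal equidistant fish-eye maps r = f·θ, which compresses the edges instead. A small sketch (these are the textbook models, not the exact behavior of any particular lens):

```python
import math

def rectilinear_radius(theta, f=1.0):
    # Rectilinear (perspective) projection: r = f * tan(theta).
    return f * math.tan(theta)

def fisheye_radius(theta, f=1.0):
    # Ideal equidistant fish-eye projection: r = f * theta.
    return f * theta

# Compare image radii, and the rectilinear radial stretch sec^2(theta),
# at a few off-axis angles.
for deg in (10, 45, 70):
    t = math.radians(deg)
    print(deg, round(rectilinear_radius(t), 3),
          round(fisheye_radius(t), 3),
          round(1.0 / math.cos(t) ** 2, 2))
```

At 10 degrees the two mappings are nearly identical, but by 70 degrees the rectilinear radius is more than twice the fish-eye radius, and the local rectilinear stretch is over eightfold, which is why wide rectilinear images look bloated at the edges.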
That picture was re-photographed with Panini-Pro from a partial spherical panorama stitched with PTGui from 3 sets of 30 photos taken at 3 exposure levels. It is not important that I took the photos with a Nikon F 24mm lens on a Canon EOS 7D camera, because the image would look much the same no matter what camera and lens I used to capture the raw data. This is an image of a real scene, but it was made with a virtual camera. And although it is undeniably a photograph, the perspective is not “photographic”. It is my own conception of what I might have seen from that spot if my visual field of view were 205 degrees instead of 150 degrees, as well as I could approximate it with the software tools available to me. (The scene is the Girard Point Bridge on I-95, wrapped in cloth for re-painting, as seen from the drawbridge at the West end of the Philadelphia Navy Yard.)
This kind of virtual photography uses the mathematical muscle of panorama stitching software, but it doesn’t require that you stitch a panorama. The images below show how the same process — extracting the ideal spherical image and re-photographing it — can convert a fish-eye snapshot into a convincing perspective view.
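The first half of that process, recovering directions on the sphere from fish-eye pixels, can be sketched as follows. This assumes an ideal equidistant fish-eye (real lenses deviate somewhat, which is why stitching software fits a measured lens curve):

```python
import math

def fisheye_pixel_to_direction(u, v, f):
    """Map a pixel offset (u, v) from the fish-eye image center to a
    unit direction vector on the sphere, assuming an ideal equidistant
    fish-eye (r = f * theta). The z axis points along the optical axis."""
    r = math.hypot(u, v)
    theta = r / f            # angle off the optical axis
    phi = math.atan2(v, u)   # azimuth around the axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

The image center maps straight ahead, and a pixel at radius f·π/2 lands a full 90 degrees off-axis. Once every pixel has a direction on the sphere, any flat view can be rendered by running some other projection in reverse over that spherical image.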
It was done roughly as follows. I snapped a photo with a Sigma 8mm fish-eye lens, trying for a good point of view but making no effort to level the camera. I then loaded it into Panini-Pro and set the lens parameters (focal length and curve shape) to match the lens. Next I rotated the spherical image (by adjusting pitch and roll angles) until all vertical elements of the scene were vertical on screen, using the shift controls to keep the image framed nicely; this is equivalent to leveling the camera, and is critical to a good result. Then I adjusted the Panini compression parameter for a convincing perspective in the horizontal direction, and scaled the image down by about 8% vertically to give the people more plausible shapes (the Panini projection tends to exaggerate height). Finally I used Panini-Pro’s spot flattening tools to “squeeze” bulging areas near the floor and ceiling into more pleasing shapes, touched up yaw, pitch, zoom and framing, and saved the view. The whole process took two or three minutes.
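The Panini compression parameter has a simple published form. What follows is a sketch of the general Pannini projection as described by Sharpless, German and Burchard (2010), not the code inside Panini-Pro itself; here d is the compression parameter, with d = 0 reducing to the rectilinear projection and d = 1 giving the classic Panini projection:

```python
import math

def panini_project(lon, lat, d=1.0):
    """Project a spherical direction (longitude and latitude in
    radians) to flat image coordinates with the general Panini
    projection. d = 0 is rectilinear; d = 1 is the classic Panini."""
    s = (d + 1.0) / (d + math.cos(lon))
    return s * math.sin(lon), s * math.tan(lat)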
My guide throughout was the visual appearance of the image, not any theoretical rules of perspective. I was working more like a painter than an engineer.