Image Gallery

All images and animations are Copyright (c) University of Manchester,
and may not be reproduced without permission.

Photometric Reconstruction (2000-2001)

Photometric reconstruction is the process of estimating the illumination and surface reflectance properties of an environment, given a geometric model of the scene and a set of photographs of its surfaces. For mixed-reality applications, such data is required if synthetic objects are to be correctly illuminated, or if synthetic light sources are to be used to re-light the scene. Current methods for estimating such data are limited in the practical situations to which they can be applied: the geometric and radiometric models of the scene provided by the user must be complete, and the position (and in some cases the intensity) of the light sources must be specified a priori.

In this work we have developed a novel algorithm that overcomes these constraints, allowing photometric data to be reconstructed in less restricted situations. This is achieved through the use of "virtual light sources" which mimic the effect of direct illumination from unknown luminaires, and of indirect illumination reflected off unknown geometry. The intensities of these virtual light sources and the surface material properties are estimated using an iterative algorithm that matches calculated radiance values to those observed in the photographs. Below, we show results for real scenes that demonstrate the quality of the reconstructed data and its use in off-line mixed-reality applications.
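The page does not give the details of the iterative estimation, but its flavour can be sketched as an alternating least-squares fit under a simple Lambertian assumption: observed radiance is modelled as albedo times a weighted sum of virtual light intensities, and the algorithm alternates between solving for the intensities and for the albedos. All names, the transfer terms, and the synthetic data below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative model:  L[i] ~ rho[surf[i]] * sum_j F[i, j] * I[j]
# where F[i, j] is a precomputed geometry-dependent transfer term,
# rho are per-surface diffuse albedos, and I are virtual light intensities.

rng = np.random.default_rng(0)
n_samples, n_surfaces, n_lights = 200, 4, 3

F = rng.uniform(0.0, 1.0, (n_samples, n_lights))   # transfer terms
surf = rng.integers(0, n_surfaces, n_samples)      # surface id per sample
true_rho = rng.uniform(0.2, 0.9, n_surfaces)
true_I = rng.uniform(0.5, 2.0, n_lights)
L_obs = true_rho[surf] * (F @ true_I)              # "photograph" radiances

rho = np.ones(n_surfaces)
I = np.ones(n_lights)
for _ in range(200):
    # Solve for light intensities with albedos held fixed (linear least squares).
    A = rho[surf, None] * F
    I, *_ = np.linalg.lstsq(A, L_obs, rcond=None)
    # Solve for each albedo with intensities held fixed (closed form per surface).
    shading = F @ I
    for s in range(n_surfaces):
        m = surf == s
        rho[s] = np.dot(shading[m], L_obs[m]) / np.dot(shading[m], shading[m])

err = np.max(np.abs(rho[surf] * (F @ I) - L_obs))
```

Note that a bilinear model of this kind only determines the albedos and intensities up to a common scale factor; the fitted radiances, however, should match the observations closely.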

S. Gibson, T.L.J. Howard, R.J. Hubbold, "Image-Based Photometric Reconstruction for Mixed Reality", SIGGRAPH 2001 Sketches and Applications Program, Los Angeles CA, USA, August 2001.

S. Gibson, T.L.J. Howard, R.J. Hubbold, "Flexible Image-Based Photometric Reconstruction using Virtual Light Sources", Computer Graphics Forum (Proceedings of Eurographics 2001) 19(3), Manchester, UK, September 2001 (to appear).

We start with high dynamic range background images, captured with a digital camera. The right image was captured under artificial lighting, and the left image under a mixture of natural and artificial lighting.
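The page does not describe how the high dynamic range images were assembled; a common approach is a weighted average over a stack of differently exposed photographs. The sketch below assumes a linear camera response (real pipelines first recover the response curve), and the hat-shaped weighting and shutter times are illustrative choices, not details from this work.

```python
import numpy as np

def assemble_hdr(images, exposures):
    """Merge exposures into a radiance map.

    images: list of float arrays with pixel values in [0, 1]
    exposures: matching shutter times in seconds
    Assumes a linear camera response (an illustrative simplification).
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-range pixels most
        num += w * img / t                   # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Tiny synthetic check: one scene radiance imaged at two shutter speeds.
radiance = np.array([[0.05, 0.4], [1.2, 3.0]])
exposures = [0.25, 1.0 / 60.0]
images = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]
hdr = assemble_hdr(images, exposures)
```

The weighting down-ranks pixels near black or white, so saturated regions in one exposure are filled in from the others.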
Using a geometric model, built using computer vision and photogrammetry techniques, we estimate the photometric properties of the scene (the surface materials, textures, and the lighting distribution). These can then be used to generate completely synthetic renderings.
These images show completely synthetic renderings of the reconstructed scene, without surface texture detail.
We can use the reconstructed data to render the scene from novel viewpoints and illuminate the surfaces using synthetic light sources.
We can also introduce synthetic objects into the scene.
Finally, here are two MPEG animations showing the results of this work.