Universität Bielefeld - Sonderforschungsbereich 360
Self-Consistency: A methodology for designing and
assessing computer vision algorithms
Yvan G. Leclerc
SRI International, Menlo Park, U.S.A.
Wednesday, June 9th, 1999
4 p.m. c.t., Lecture Hall 10 (Hörsaal 10)
Contrary to a traditional idea about human perception, our perceptual
inferences are not constant. Indeed, they change all the time as we move
around a static world because new vantage points provide new information
about the world. What is remarkable, however, is that our perceptual
inferences almost never contradict each other. For example, if, at some
point in time, we infer that object A is behind object B or that a dark area
on the ground is a shadow, then it is almost never the case that inferences
based on new observations will contradict this. How is it possible to make
an inference at one point in time that is almost certain not to be
contradicted by new observations? In this talk I will present a
methodology, called self-consistency, that can be used as a principle for
designing computer vision algorithms to have the property that inferences
based on new observations do not contradict inferences based on previous
observations. The methodology can also be used to assess the performance of
current computer vision algorithms. I will describe the application of this
methodology to algorithms for shape from shading and line drawings, and 3-D
reconstruction from multiple images. The latter will be discussed in
detail, with examples demonstrating that the methodology can be used to
reliably distinguish real changes in shape from apparent changes in
shape due to errors in the shape reconstruction algorithm.
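The core idea of that last application — comparing independent reconstructions of the same scene against the algorithm's own discrepancy distribution, and calling a change "real" only when it exceeds what algorithm error alone would explain — can be illustrated with a small toy sketch. This is not Leclerc's actual procedure: the `reconstruct` stand-in, the Gaussian noise model, and the 99th-percentile threshold are all assumptions made purely for illustration.

```python
import random

# Hypothetical stand-in for a reconstruction algorithm: returns a depth
# estimate corrupted by zero-mean Gaussian noise. In the self-consistency
# methodology, the estimates would instead come from reconstructions based
# on independently chosen subsets of images.
def reconstruct(true_depth, noise_sigma, rng):
    return true_depth + rng.gauss(0.0, noise_sigma)

def self_consistency_distribution(true_depth, noise_sigma, n_pairs, rng):
    """Empirical distribution of discrepancies between independent
    reconstructions of the SAME static scene."""
    return [abs(reconstruct(true_depth, noise_sigma, rng)
                - reconstruct(true_depth, noise_sigma, rng))
            for _ in range(n_pairs)]

def is_real_change(depth_before, depth_after, discrepancies, quantile=0.99):
    """Flag a change as real only if the discrepancy between the two
    reconstructions exceeds the chosen quantile of the algorithm's
    own self-consistency distribution."""
    threshold = sorted(discrepancies)[int(quantile * (len(discrepancies) - 1))]
    return abs(depth_before - depth_after) > threshold

rng = random.Random(0)
disc = self_consistency_distribution(10.0, 0.05, 2000, rng)

# Static scene: discrepancies are pure algorithm error, so the test
# should fire only rarely (about 1% of the time at quantile=0.99).
false_alarms = sum(
    is_real_change(reconstruct(10.0, 0.05, rng),
                   reconstruct(10.0, 0.05, rng), disc)
    for _ in range(500))

# Scene whose depth genuinely changed by 1.0 unit, far above the noise.
detections = sum(
    is_real_change(reconstruct(10.0, 0.05, rng),
                   reconstruct(11.0, 0.05, rng), disc)
    for _ in range(500))

print(f"false-alarm rate: {false_alarms / 500:.3f}")
print(f"detection rate:   {detections / 500:.3f}")
```

In the sketch, the static-scene trials almost always stay below the threshold while the genuinely changed scene is flagged every time, mirroring the claim that self-consistency separates real shape changes from reconstruction error.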
Biography
Yvan G. Leclerc is a Senior Computer Scientist at the Artificial
Intelligence Center of SRI International, which he joined in 1985. He
received his Bachelor's in Electrical Engineering (Honours) in 1977,
his Master of Engineering in 1980, and his Ph.D. in 1989, all from
McGill University. He has worked in various areas of computer vision,
including the development of methods for: edge detection; calibration
of color images; interactive matching of long smooth curves to edges
in images; partitioning images and grouping image regions via global
optimization; recovering the three-dimensional shape and material
properties of objects from such diverse imagery as a single shaded
image, a line drawing, and multiple calibrated images. Recently, he
has also been working in the area of high-speed, network-based terrain
visualization systems.
Anke Weinberger, 1999-05-10