Our visual systems are remarkably adept at deriving the shape and material properties of surfaces even when only a single image of a surface is available. This ability implies that a single image contains potent information about both surface shape and material. From a computational perspective, however, the problem of deriving surface shape and material is formally ill-posed: any given image could be produced by many different combinations of shape, material, and illumination. Early computational models required prior knowledge of two of the three scene variables to derive the third. Such models are biologically implausible, however, because our visual systems must extract all relevant scene variables from images simultaneously. This review describes recent progress in understanding how the visual system solves this problem by identifying complex forms of image structure that support its ability to simultaneously derive the shape and material properties of surfaces from images.

  • Article Type: Review Article