
Abstract

Our visual systems are remarkably adept at deriving the shape and material properties of surfaces even when only one image of a surface is available. This ability implies that a single image of a surface contains potent information about both surface shape and material. However, from a computational perspective, the problem of deriving surface shape and material is formally ill-posed: any given image could arise from many combinations of shape, material, and illumination. Early computational models required prior knowledge about two of the three scene variables to derive the third. However, such models are biologically implausible because our visual systems are tasked with extracting all relevant scene variables from images simultaneously. This review describes recent progress in understanding how the visual system solves this problem by identifying complex forms of image structure that support its ability to simultaneously derive the shape and material properties of surfaces from images.
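To make the ambiguity concrete, the sketch below writes a single image intensity as the output of a simple forward model. This is a minimal illustration assuming Lambertian-style shading; the symbols I, n, ρ, and L are ours, not the review's.

```latex
% A minimal sketch of the forward (image-formation) model, assuming
% Lambertian-style shading; the symbols I, n, rho, and L are
% illustrative and are not taken from the review itself.
\[
  I(\mathbf{x}) \;=\; \rho(\mathbf{x}) \int_{\Omega}
      L(\boldsymbol{\omega})\,
      \max\!\bigl(0,\ \mathbf{n}(\mathbf{x})\cdot\boldsymbol{\omega}\bigr)\,
      d\boldsymbol{\omega}
\]
% One measured quantity, the image intensity I(x), jointly constrains
% three unknown scene variables:
%   n(x)      -- surface shape (the local normal),
%   rho(x)    -- material (diffuse reflectance),
%   L(omega)  -- illumination arriving from direction omega.
% Many different assignments of n, rho, and L reproduce the same I,
% which is why the inverse problem is ill-posed; fixing two of the
% three variables in advance, as early models did, makes the third
% recoverable.
```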
