Document type: Bachelor's thesis (Treball de fi de grau)
Please always use this identifier to cite or link to this item: https://hdl.handle.net/2445/47243
Mapes de profunditat a partir de l’anàlisi d’imatges
Abstract
In this paper we aim to obtain the depth map of a scene using a single image from a single viewpoint. Estimating the depth of a scene is a useful tool with applications in several problems within 3D modeling and computer vision, e.g. simulating the effect of semi-transparent elements such as fog or smoke, or recreating a blur effect in chosen areas of a scene. Especially for this last reason, obtaining depth maps of different types of scenes becomes even more important. Currently, the Microsoft Kinect [7] depth camera is one of the most reliable methods for obtaining depth maps.
This device, however, has several limitations that considerably reduce its scope.
Our approach aims to provide an analytical alternative for situations in which the depth camera does not work (outdoors, in the presence of daylight, and so on).
We want to know how far we can estimate the depth of a scene using a single image and the visual features we manage to extract from analyzing it. We use classifiers to find useful correlations between the visual features and the depth information provided by Kinect. This classification training helps us find the most relevant features for estimating a depth map. Thus, we answer the two main questions behind this paper: what is the maximum performance attainable by analyzing visual features, and which of these features provide the best results.
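The pipeline the abstract describes — training a learner to map per-patch visual features to Kinect-measured depth, then ranking which features matter most — could be sketched as follows. This is a hypothetical illustration, not the thesis implementation: the data is synthetic, the feature set is invented, and a random-forest regressor stands in for whichever classifier the work actually uses.

```python
# Hypothetical sketch of the approach described in the abstract: fit a
# learner mapping per-patch visual features to depth values, then use
# its feature importances to rank which visual cues best predict depth.
# All data below is synthetic; shapes and feature meanings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in for features extracted from image patches
# (e.g. texture energy, vertical position, gradient statistics).
n_patches, n_features = 500, 4
X = rng.normal(size=(n_patches, n_features))

# Synthetic "Kinect" ground-truth depth: by construction it depends
# mostly on features 0 and 1, so those should rank as most relevant.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=n_patches)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank features by importance: most depth-relevant cue first.
importances = model.feature_importances_
ranking = np.argsort(importances)[::-1]
print("feature ranking (most relevant first):", ranking.tolist())
```

In this toy setup the ranking recovers features 0 and 1 as the most informative, mirroring the thesis's goal of identifying which extracted visual features contribute most to depth estimation.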
Description
Bachelor's theses in Computer Engineering (Treballs Finals de Grau d'Enginyeria Informàtica), Faculty of Mathematics, Universitat de Barcelona. Year: 2013. Advisor: Oriol Pujol Vila
Citation
TORRALBA GARCÍA, Antonio. Mapes de profunditat a partir de l'anàlisi d'imatges. [accessed: 26 February 2026]. [Available at: https://hdl.handle.net/2445/47243]