Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/96133
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Pujol Vila, Oriol | -
dc.contributor.author | Girones Dezsènyi, Marc | -
dc.date.accessioned | 2016-03-04T12:22:09Z | -
dc.date.available | 2016-03-04T12:22:09Z | -
dc.date.issued | 2013-06-20 | -
dc.identifier.uri | http://hdl.handle.net/2445/96133 | -
dc.description | Final Degree Project in Computer Engineering (Treballs Finals de Grau d'Enginyeria Informàtica), Facultat de Matemàtiques, Universitat de Barcelona, Year: 2013, Advisor: Oriol Pujol Vila | ca
dc.description.abstract | Motion capture, motion tracking or mocap are the terms used to describe the process of recording movement and translating that movement onto a digital model; it is used in military, entertainment, sports and medical applications. In film making it refers to recording the actions of human actors and using that information to animate digital character models in 3D animation. Unfortunately, most motion capture systems on today's market are prohibitively expensive for educational institutions and small businesses. My goal is to develop a relatively low-cost, competitive motion capture system, so in this research I look for an answer to the question of how an optical mocap player can be built with a single camera. Current motion capture methods use passive markers that are attached to different body parts of the subject and are therefore intrusive in nature. In applications such as pathological human movement analysis, these markers may introduce an unknown artifact into the motion and are, in general, cumbersome. The other key challenge is therefore to produce a system that allows marker-less tracking in real time, or near real time. My ultimate objective is to build a visual system that integrates the components mentioned above and can display the user's movement on an avatar in virtual space. Finally, I will propose a validation system to validate the user's movements, evaluating individual postures as well as complete movements. Moreover, I want to go further and show the result in different ways, such as from different viewing angles, with different meshes, or with an illuminated mesh. | ca
dc.format.extent | 54 p. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | eng | ca
dc.rights | report (memòria): cc-by-sa (c) Marc Girones Dezsènyi, 2015 | -
dc.rights | source code (codi): GPL (c) Marc Girones Dezsènyi, 2015 | -
dc.rights.uri | http://creativecommons.org/licenses/by-sa/3.0/es | -
dc.rights.uri | http://www.gnu.org/licenses/gpl-3.0.ca.html | -
dc.source | Treballs Finals de Grau (TFG) - Enginyeria Informàtica | -
dc.subject.classification | Processament digital d'imatges | cat
dc.subject.classification | Visualització tridimensional | cat
dc.subject.classification | Programari | cat
dc.subject.classification | Treballs de fi de grau | cat
dc.subject.classification | Reconeixement de formes (Informàtica) | cat
dc.subject.classification | Visió per ordinador | cat
dc.subject.other | Digital image processing | eng
dc.subject.other | Three-dimensional display systems | eng
dc.subject.other | Computer software | eng
dc.subject.other | Bachelor's theses | eng
dc.subject.other | Pattern recognition systems | eng
dc.subject.other | Computer vision | eng
dc.title | Motion capture with Kinect | eng
dc.type | info:eu-repo/semantics/bachelorThesis | eng
dc.rights.accessRights | info:eu-repo/semantics/openAccess | eng
Appears in Collections: Treballs Finals de Grau (TFG) - Enginyeria Informàtica

Files in This Item:
File | Description | Size | Format
src.zip | Source code | 397.1 MB | zip
memoria.pdf | Report | 2.33 MB | Adobe PDF


This item is licensed under a Creative Commons License.