DSpace
  • DSpace @University of Tartu
  • Faculty of Science and Technology (Loodus- ja täppisteaduste valdkond)
  • Institute of Technology (Tehnoloogiainstituut)
  • Robotics and Computer Engineering - Master's theses

Garment retexturing using Kinect V2.0

Avots_MA2017.pdf (37.02Mb)
Date
2017
Author
Avots, Egils
Abstract
This thesis describes three new garment retexturing methods for FitsMe virtual fitting room applications, using data from the Microsoft Kinect II RGB-D camera. The first method is an automatic technique for garment retexturing from a single RGB-D image and the infrared information obtained from Kinect II. First, the garment is segmented out of the image using GrabCut or depth segmentation. Texture-domain coordinates are then computed for each garment pixel from the normalized 3D information, and shading is finally applied to the new colors taken from the texture image. The second method addresses 2D-to-3D garment retexturing: a segmented garment on a mannequin or person is matched to a new source garment and retextured, producing augmented images in which the source garment is transferred to the mannequin or person. The problem is divided into garment boundary matching, based on point set registration with Gaussian mixture models, followed by interpolation of the inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. The final contribution of this thesis is a novel method for increasing the texture quality of a 3D model of a garment, using the same Kinect frame sequence from which the model was created. First, a structured mesh is obtained by wrapping the 3D model to a base model with defined seams and a texture map. The frames are then matched to the newly created model, and ray casting maps the color values of the Kinect frames onto the UV map of the 3D model.
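The first method's pipeline (segment the garment, compute normalized texture coordinates per pixel, reapply shading) can be sketched minimally in NumPy. This is an illustrative sketch, not the thesis implementation: a simple depth threshold stands in for GrabCut/depth segmentation, a bounding-box normalization stands in for the normalized 3D information, and the luminance-scaling shading model is an assumption. All function and parameter names are hypothetical.

```python
import numpy as np

def retexture(rgb, depth, texture, depth_max=1.5):
    """Illustrative single-image garment retexturing sketch.

    1. Segment the garment with a depth threshold (the thesis uses
       GrabCut or depth segmentation).
    2. Map each garment pixel to texture-domain coordinates by
       normalizing its position within the garment's bounding box.
    3. Modulate the new texture color by the original luminance
       so the shading of the photographed garment is preserved.
    """
    mask = depth < depth_max                       # garment pixels
    ys, xs = np.nonzero(mask)
    # Normalized texture-domain coordinates in [0, 1]
    u = (xs - xs.min()) / max(np.ptp(xs), 1)
    v = (ys - ys.min()) / max(np.ptp(ys), 1)
    th, tw = texture.shape[:2]
    tex_px = texture[(v * (th - 1)).astype(int),
                     (u * (tw - 1)).astype(int)]
    # Shading: scale the new color by the original pixel luminance
    lum = rgb[ys, xs].mean(axis=1, keepdims=True) / 255.0
    out = rgb.copy()
    out[ys, xs] = (tex_px * lum).astype(np.uint8)
    return out, mask
```

Pixels outside the depth mask are left untouched; only the segmented garment region receives the new texture, dimmed or brightened according to the original image's shading.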
URI
http://hdl.handle.net/10062/56983
Collections
  • Robotics and Computer Engineering - Master's theses [90]

DSpace software copyright © 2002-2016  DuraSpace
Contact Us | Send Feedback
Theme by Atmire NV