by Prof. Fernand S. Cohen, Director of the Imaging and Computer Vision Center, Drexel University, Philadelphia, USA.
Image-based 3D shape reconstruction is essential for 3D modeling in recognition, virtual reality, video-game generation, 3D animation, and the creation of 3D virtual fitting rooms – the sought-after application in this talk. In this work, we propose a novel reconstruction method that uses a generic 3D model and two canonical 2D images to obtain a personalized 3D point model. The generic model consists of a massive set of 3D points that lack saliency, topological meaning, and relational interconnections; the same is true of the points in the two canonical images. We propose finding a small set of interconnected, ordered, intrinsically salient points (control points) residing on the silhouette of the generic model's projections onto the two image spaces, as well as their counterparts on the canonical images. Points are automatically given saliency, order, and interconnections through Loop subdivision operating on these control points and their equivalent points on the canonical images. The generic model is transformed into a personalized one by morphing its points according to the equivalent points on the canonical images. The control points on the canonical images are obtained automatically using an active shape model (ASM) or a convolutional neural network (CNN). This reconstruction method is convenient, simple, and efficient, and achieves an average geometric error below 0.5% over 700 diverse body models from the CAESAR dataset. We then apply a contour-based articulated model to the personalized 3D model for pose recovery. The evolution of the model through the articulation process is captured in a video clip. 3D transformations are found and applied to the parts of the 3D model by minimizing, for each independently moving part, the error between the frontal-projection body-region points and the target points from the image, with sub-resolution recovery errors.
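The core morphing idea – displacing the dense generic-model points according to how their control points move toward the control points detected on the canonical images – can be illustrated with a minimal sketch. The function name, the inverse-distance weighting, and the 2D setup here are illustrative assumptions only; the actual method drives the dense points through Loop subdivision of the control points rather than this simple blend.

```python
import numpy as np

def morph_points(model_pts, src_ctrl, dst_ctrl, eps=1e-8):
    """Illustrative morph: displace each dense model point by an
    inverse-distance-weighted blend of the control-point displacements
    (src_ctrl -> dst_ctrl). Shapes: model_pts (n, d), src/dst_ctrl (k, d)."""
    disp = dst_ctrl - src_ctrl                       # (k, d) per-control displacement
    # Distance from every model point to every source control point: (n, k)
    dist = np.linalg.norm(model_pts[:, None, :] - src_ctrl[None, :, :], axis=2)
    w = 1.0 / (dist + eps)                           # nearer controls weigh more
    w /= w.sum(axis=1, keepdims=True)                # normalize weights per point
    return model_pts + w @ disp                      # blended displacement per point

# Sanity check: if all control points shift uniformly, every model point follows.
src = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])
dst = src + np.array([1.0, 0.0])
pts = np.array([[1.0, 1.0], [0.5, 0.5]])
morphed = morph_points(pts, src, dst)
```

In this uniform-shift case, the weighted blend of identical displacements reproduces the shift exactly, so `morphed` equals `pts + [1, 0]`; with non-uniform control motion, points interpolate smoothly between the displacements of nearby controls.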
Once the personalized 3D model is obtained and the articulated model is in place, the algorithm simulates how garments virtually appear on the reconstructed person under different poses. Different types of clothes (tight or loose) are mapped onto the personalized 3D model using different schemes. This creates a virtual in-home fitting room for garment fitting – a timely, useful, and attractive application for online shopping, since it allows garments to be tried on at home under different poses before ordering them. The model can be updated as often as necessary to accommodate changes in physique, weight, aging, etc. All of this requires little of the user beyond taking two images and a video of pose articulations with a camera or smartphone.
Fernand S. Cohen received his B.Sc. degree in Physics from the American University in 1978, and M.Sc. and Ph.D. degrees in Electrical Engineering from Brown University, Providence, RI, in 1980 and 1983, respectively. In 1984 he joined the Robotics Research Center at the University of Rhode Island, where he was responsible for the center's vision research from 1986 to 1987. In 1985 he received a Research Excellence Award from the College of Engineering, University of Rhode Island, and in 1986 he was invited by the French government (Mission Scientifique) to tour research laboratories and universities. In 1987 he joined the Department of Electrical and Computer Engineering at Drexel University as the George Beggs Chair Associate Professor. He is currently Professor of Electrical and Computer Engineering, is affiliated with the School of Biomedical Engineering, Science and Health Systems, and serves as Director of the Imaging and Computer Vision Center (ICVC). In the summer of 1994 he was a visiting professor at the National Institute for Research in Computer Science and Automation (INRIA) in Sophia Antipolis, France. He received the Tom Moore Teaching Award from the ECE Department, Drexel University, in May 2003, and a CNRS (Centre National de la Recherche Scientifique) fellowship in the summer of 2005. He has worked in the areas of computer vision and sensor networks, as well as in early cancer detection using ultrasound and optical probes, and has published extensively in these areas over the last three decades. He has published over 150 journal and refereed conference papers, has graduated 15 Ph.D. students, and has received over 10 million dollars in funding from NIH, NSF, and NSA. Several of his papers have over 200 citations each. He has been a keynote speaker at many conferences, most recently in Bangkok, Barcelona, and Morocco.
His research interests include pattern recognition, computer vision, medical image processing, computational methods, sensor networks, and applied stochastic processes. He has been an IEEE Senior Member since 1996.