I was checking out this cool app called Morfo. According to their product description -

Use Morfo to quickly turn a photo of your friend's face into a talking, dancing, crazy 3D character! Once captured, you can make your friend say anything you want in a silly voice, rock out, wear makeup, sport a pair of huge green cat eyes, suddenly gain 300lbs, and more.

So if you take a normal 2D image of steve jobs & feed it to this app it converts it into a 3D model of that image & the user can interact with it.


My questions are as follows:

  1. How are they doing this?
  2. How is this possible on an iPad?
  3. Isn't it computationally intensive to render and convert a 2D image into 3D?

Any pointers, or links to websites or Objective-C libraries that do this, would be very much appreciated.

UPDATE: The demo of this product shows that Morfo uses a template mechanism to do the conversion, i.e. after a 2D image is fed in, one needs to set the boundaries of the face, where the eyes are located, and the size and length of the lips. Then it goes off to convert it into a 3D model. How is this part done? What frameworks or libraries might they be using?
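
For concreteness, the "template" shown in the demo can be thought of as a small set of user-adjusted landmark parameters. The struct below is purely a hypothetical illustration (the name FaceTemplate and its fields are mine, not anything from Morfo) of the kind of data such a template might carry before the photo is fitted onto a stock head model:

    /* Hypothetical sketch: the handful of 2D landmarks a user adjusts in a
     * template-based approach. None of these names come from Morfo. */
    typedef struct {
        float faceLeft, faceTop, faceRight, faceBottom; /* face boundary in photo coords */
        float leftEyeX, leftEyeY;                       /* eye centres                   */
        float rightEyeX, rightEyeY;
        float mouthWidth, mouthHeight;                  /* size and length of the lips   */
    } FaceTemplate;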


1 Answer

This is a broad question, but I can point you in the right direction of how 3D rendering works. Trust me, this is a huge subject with decades of work behind it and too much to put here. I'm not sure how up to speed you are on 3D rendering techniques, so I will give you a basic idea of texturing and point you to a good set of tutorials.

  1. How are they doing this?
    The idea is that in 3D rendering, 3D models can be textured with a 2D image known as a texture map. You use a 2D image and wrap it around a 3D model, be that a simple primitive like a sphere or a cube, or something more advanced such as the classic teapot or the model of a human head, etc. A texture can be taken from anywhere: I have used the camera feed in the past to texture meshes with the video from the camera stream, and I have used photos from the camera, which is how they're doing it. So this is how the face is rendered onto the 3D model (see the texturing sketch after this list).

  2. Is this efficient?
    On iOS and most mobile devices, 3D rendering uses hardware acceleration via OpenGL ES. In regard to your question, this is really fast, depending on how you implement your render code.
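
As a rough illustration of the texturing idea above, the snippet below uploads a decoded RGBA photo as an OpenGL ES texture and defines the kind of vertex (3D position plus UV) a head mesh would use. This is a minimal sketch, not Morfo's code: it assumes you already have a GL context set up and the photo's pixel data in memory, and the names createFaceTexture and TexturedVertex are mine.

    #include <OpenGLES/ES2/gl.h>   /* iOS OpenGL ES 2.0 header */

    /* Upload a decoded RGBA photo (width*height*4 bytes) as a texture that
     * can later be wrapped around a 3D head mesh. */
    GLuint createFaceTexture(const void *pixels, int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* Basic filtering, and clamping so the face does not tile. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }

    /* Each vertex of the head mesh pairs a 3D position with a 2D texture
     * coordinate (UV) saying which point of the photo lands on it. */
    typedef struct {
        GLfloat position[3]; /* x, y, z on the model */
        GLfloat texCoord[2]; /* u, v into the photo  */
    } TexturedVertex;

Because the heavy lifting (rasterising the textured triangles) happens on the GPU, drawing a mesh textured this way is cheap even on a tablet, which is what point 2 above is getting at.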

The way it uses the mapping (the scale/rotate template in the video), as mentioned by anticyclope, allows you to make the texture fit the model and also place the eyes, which is part of their render code.
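
To make that mapping step concrete: if the head mesh is fixed and its eye and mouth vertices sit at known texture coordinates, then the user's scaling, rotation and placement of the template just defines a transform from photo pixels into UV space. The helper below is my own hedged sketch of such a transform, not code from Morfo or any library; it assumes the template has been scaled by a factor scale and rotated by angle about its centre (cx, cy) over the photo.

    #include <math.h>

    typedef struct { float u, v; } UV;

    /* Map a point given in photo pixel coordinates into 0..1 UV texture
     * space, undoing the user's scale/rotation of the template. */
    UV photoPointToUV(float px, float py,
                      float cx, float cy, float scale, float angle,
                      float photoWidth, float photoHeight)
    {
        /* Undo the template's rotation and scale about its centre... */
        float dx = px - cx, dy = py - cy;
        float c = cosf(-angle), s = sinf(-angle);
        float x = (c * dx - s * dy) / scale + cx;
        float y = (s * dx + c * dy) / scale + cy;

        /* ...then normalise into texture coordinates. */
        UV uv = { x / photoWidth, y / photoHeight };
        return uv;
    }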

So if you want to pick this up, I recommend reading Jeff LaMarche's tutorial series "OpenGL ES: From the Ground Up" as a primer:

http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html

Second to that, I have read about four books on OpenGL ES, covering both general design and platform specifics. I recommend this book:

http://www.amazon.co.uk/iPhone-Programming-Developing-Graphical-Applications/dp/0596804822/ref=sr_1_1?ie=UTF8&qid=1331114559&sr=8-1

