
Is the app taking more than one photo? It wasn't clear from the blog post. AFAIU, to have any depth perception you need to take more than one photo: calculate the baseline (the distance the phone moved, analogous to pupil distance), match image features between the two or more images, then use the amount of movement between the matching features to calculate the depth.
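Roughly, the matching-and-triangulation step would be something like this (a minimal OpenCV sketch; the ORB matcher and the focal-length/baseline values are my own assumptions, not Google's actual pipeline):

    import cv2
    import numpy as np

    # Sketch of depth-from-motion: match features between two frames,
    # then convert each match's disparity into a depth estimate.
    img1 = cv2.imread("shot1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("shot2.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    f, b = 1000.0, 0.05  # focal length (px) and baseline (m): assumed values
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        disparity = np.hypot(x2 - x1, y2 - y1)
        if disparity > 0:
            depth = f * b / disparity  # nearer features move more between frames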

As described, you then map the depth to an alpha value and composite a blurred copy of the image, at varying blur strengths, over the original.
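In other words, something like this (again just a sketch, assuming OpenCV and a precomputed depth map; a real implementation would vary the blur strength with depth rather than blend in one fixed blur):

    import cv2
    import numpy as np

    # Depth-weighted compositing: far pixels take the blurred copy,
    # near pixels keep the sharp original.
    orig = cv2.imread("shot.jpg").astype(np.float32)
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    blurred = cv2.GaussianBlur(orig, (0, 0), sigmaX=8)  # blur strength: illustrative

    alpha = (depth - depth.min()) / (np.ptp(depth) + 1e-6)  # normalize to [0, 1]
    alpha = alpha[..., None]  # broadcast over the color channels

    out = alpha * blurred + (1 - alpha) * orig
    cv2.imwrite("lens_blur.jpg", out.astype(np.uint8))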

Since you're able to apply the blur after taking the photo, that would mean the Google Camera always takes more than one photo.

A cool feature would also be to animate the transition from no blur to full DOF blur as a short clip, or to use the depth information for effects other than blur, like selective coloring or other filters.



Yes, you have to move the camera upwards while it takes a series of photos. It is not working from a single photo.


I just used it. It seems to use the movement to calculate the depth, but the initial image isn't blurred or mutated in any way except according to the depth it has calculated.


Yes, the app requires you to move the camera after your initial shot, during which it takes additional pictures to calculate depth.


Using the app, it feels like it knows how much you're moving it, which suggests it's not so much taking more 'pictures' as building a depth map for the first image it takes.


>AFAIU to have any depth perception you need to take more than one photo

No, you don't. It's very tricky, but doable with one:

http://www.cs.cornell.edu/~asaxena/learningdepth/

http://www.cs.cornell.edu/~asaxena/reconstruction3d/



