I was wondering today what advanced inputs and outputs we could automatically train neural nets on.
I realised 3D models could be used to render the “rotated around the model” photographs that are the input to photo-to-model systems like Autodesk’s 123D Catch.
But instead, this system would try to compute a 3D model from the “photographs” – and it would also have the original 3D model to check the result against for accuracy!
That’s what could make the process automatic: real-world photo sets are no use for this, because there’s no ground-truth model to check the reconstruction’s accuracy against.
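As a concrete picture of that “check for accuracy” step, here’s a minimal sketch – my own illustration, assuming the reconstruction and the original model have both been voxelised onto the same grid (the function name is mine):

```python
import numpy as np

def voxel_iou(reconstruction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Score a reconstruction against the original 3D model.

    Both arguments are boolean occupancy grids of the same shape; the
    score is intersection-over-union (1.0 = a perfect match).
    """
    intersection = np.logical_and(reconstruction, ground_truth).sum()
    union = np.logical_or(reconstruction, ground_truth).sum()
    return float(intersection) / float(union) if union else 1.0
```

A score like that could feed straight back in as a training signal – something no real-world photo set can ever provide, because you never have the true model.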
Why would AI be better than what we have now?
Well, MOST of these “3D model from photographs” systems start with a solid blob of virtual material and “cut out” the silhouette of the first picture from it. Then the next photo is selected, the angle it has moved from the last photo is computed, the 3D block is rotated by the same amount, and that photo’s silhouette is cut from what’s left of the model. Do this 15 to 20 times around the model and you get a 3D object!
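For anyone who wants to see that loop concretely, here’s a minimal NumPy/SciPy sketch – my own illustration, assuming orthographic turntable views, perfectly known angles and pre-extracted silhouette masks (a real photogrammetry pipeline handles perspective, calibration and registration on top of all this):

```python
import numpy as np
from scipy.ndimage import rotate

def carve_visual_hull(silhouettes, angles_deg, size=64):
    """Silhouette carving with an idealised turntable camera.

    silhouettes: list of (size, size) boolean masks, indexed [vertical, horizontal].
    angles_deg:  the turntable angle each photo was taken at, in degrees.
    """
    # Start with a solid block of "virtual material" (axes: z, y, x).
    volume = np.ones((size, size, size), dtype=bool)
    for sil, angle in zip(silhouettes, angles_deg):
        # Rotate the block so this photo's view looks straight down the x axis.
        view = rotate(volume.astype(np.uint8), angle, axes=(1, 2),
                      reshape=False, order=0).astype(bool)
        # Cut away every voxel whose projection falls outside the silhouette.
        view &= sil[:, :, np.newaxis]
        # Rotate back and intersect with what's left of the model.
        volume &= rotate(view.astype(np.uint8), -angle, axes=(1, 2),
                         reshape=False, order=0).astype(bool)
        # Note: even the nearest-neighbour resampling here nibbles away a few
        # voxels per pass, a small taste of the error build-up described below.
    return volume
```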
The issue is concave parts – they just come out flat, because a dent never changes the model’s silhouette, so the carving process can’t “see” it.
Also, adding more pictures actually REDUCES the model’s accuracy. Every inaccuracy in calculating the angle of the model in each picture means the silhouette gets chipped away from slightly the wrong place, removing LOTS of material that should stay in – so too many pictures leave you with a tiny stick of virtual material!
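To put a made-up but illustrative number on that: say each badly registered photo wrongly carves away just 3% of the true model. Carving can only ever remove material, so the losses multiply rather than average out:

```python
# Surviving fraction of the true model after n photos, if each photo
# wrongly removes ~3% of it (illustrative figure, not measured data).
for n in (5, 10, 20, 40):
    print(n, round(0.97 ** n, 2))  # -> 0.86, 0.74, 0.54, 0.3
```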
Hopefully the AI would take texture parallax into account to calculate how deep concave areas go, and follow solid regions from photo to photo to make sure they’re not erased by a rogue photo.
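The parallax cue I mean is the standard stereo relation – texture features on nearer surfaces shift further between views than features deeper inside a concavity. A toy version, assuming an idealised pinhole camera (the function and parameter names are mine):

```python
def depth_from_parallax(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Pinhole-stereo depth: Z = f * B / d.

    baseline_m:   how far the camera moved between the two views (metres)
    focal_px:     focal length expressed in pixels
    disparity_px: how far a texture feature shifted between the views (pixels)
    """
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 px between views 0.1 m apart, seen through an
# 800 px focal length, sits about 2 m from the camera:
print(depth_from_parallax(0.1, 800, 40))  # 2.0
```

A point at the bottom of a concavity shows less disparity than the rim around it – exactly the depth information the silhouette method throws away.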
But – I’m not sure neural networks are even capable of this type of process?
submitted by /u/SarahC
May 29, 2017 at 02:16AM