b3ta.com links

True, though those algorithms are improving all the time. But yes, the mesh will usually need a lot of cleaning and work to make it properly solid, so it can be printed out, for example.

As for normals, hopefully you can see on this model that there are some! They were all calculated after capture, though. There weren't many holes in the data, but there was a huge amount of noise thanks to the incredibly shiny material. Cleaning that up did introduce holes, which then had to be filled using the software's algorithms, which is why the model looks rough (in the actual sense of the word) in places.
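For anyone wondering what "calculated after capture" means in practice: a common way to get per-vertex normals from a raw triangle mesh is to average the face normals around each vertex. A minimal numpy sketch (not the actual software's algorithm, just the standard textbook approach):

```python
import numpy as np

def vertex_normals(verts, faces):
    """Per-vertex normals via area-weighted averaging of face normals.

    verts: (V, 3) float array of vertex positions
    faces: (F, 3) int array of triangle indices
    """
    normals = np.zeros_like(verts, dtype=float)
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # Cross product of two edges gives a face normal whose length is twice
    # the triangle's area, so summing these weights each face by its size.
    face_n = np.cross(v1 - v0, v2 - v0)
    for i in range(3):
        np.add.at(normals, faces[:, i], face_n)
    # Normalise to unit length (guarding against isolated vertices)
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lens > 0, lens, 1.0)
```

Noisy data makes this much worse in practice, since every jittered vertex perturbs the normals of all its neighbouring faces, which is partly why shiny-material noise shows up so visibly in the shading.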

We got this model using about 30 image stereo-pairs. My colleagues here have got this method (depending on the source object) to output geometry that's accurate to less than 20 microns, better than pretty much any laser scanner. We're looking at buying a light field camera to play with!
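The basic geometry behind a stereo pair, for the curious: for a rectified pinhole pair, depth is focal length times baseline over disparity. The numbers below are purely illustrative, not the rig described here:

```python
# Depth from a rectified stereo pair: Z = f * B / d (pinhole camera model).
# All values are hypothetical, chosen just to show the arithmetic.
focal_px = 4000.0    # focal length in pixels
baseline_m = 0.10    # separation between the two camera positions, metres
disparity_px = 80.0  # pixel shift of a matched feature between the images

depth_m = focal_px * baseline_m / disparity_px  # 5.0 metres
```

This also shows why sub-pixel feature matching matters so much for accuracy: a small error in disparity translates directly into a depth error.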
(Fri 8 Feb 2013, 13:11)
Spatial res on the light field cameras I have seen is unimpressive.
Probably best off sticking with a high-res camera + robot-type rig.

I wasn't getting decent lighting, unfortunately - it was all flat. I believe it is there, though. The software at my end is the likely culprit.

Got any papers btw? I'm fairly interested in this stuff but haven't followed it for a couple of years.
(Fri 8 Feb 2013, 13:24)
It takes a while for the lighting to 'switch on' - was the model fully loaded? Lots of pink specularity...?

Funny you should say that about high-res... our first model used the full resolution of the images, and we got a very dense point cloud but with a lot of noise and holes. Running the same images downsampled 2x, we got a sparser point cloud but cleaner data and fewer holes - probably because feature detection works better on low-res images than on high-res ones where everything is 'smeared out' over more pixels.
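The 2x downsampling step is simple enough to sketch. A crude version just averages each 2x2 block of pixels; real pipelines usually blur first (e.g. a Gaussian pyramid) to avoid aliasing, so treat this as an illustration rather than what the software actually does:

```python
import numpy as np

def downsample_2x(img):
    """Halve image resolution by averaging each 2x2 pixel block.

    A naive box filter; a Gaussian pyramid is the usual choice in
    practice, but the averaging effect on noise is the same idea.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop odd edge rows/cols
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
```

Averaging four pixels roughly halves uncorrelated per-pixel noise, which fits the observation that the downsampled run gave cleaner matches.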

And I'm not sure what you mean by a 'robot' rig - we just used a tripod and shifted the camera round! We did use a Kinect to make a very rough 3D model first, and then used an algorithm in development here to predict the best camera positions for the object (though I doubt it would matter much with a fairly 2D, symmetrical object like this). A couple of colleagues are actually building a robot rig which would take pics from all the best spots...
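For a sense of what "shifting the camera round a tripod" amounts to geometrically: the simplest baseline is cameras evenly spaced on a ring around the object, all pointing inwards. A toy sketch (the view-planning algorithm mentioned above is far smarter than this; this is just the naive fallback):

```python
import math

def ring_positions(n, radius):
    """n camera positions evenly spaced on a horizontal ring of the
    given radius around the origin (where the object sits).

    Returns a list of (x, y, z) tuples; each camera would be aimed
    back at the origin.
    """
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n),
             0.0)
            for k in range(n)]
```

A proper view planner instead scores candidate positions by how much unseen or poorly-covered surface they would add, which is why a rough Kinect model of the object is useful as input.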

As for papers, I try not to get involved in the maths/technical side, but Google Scholar should throw up a load - try searching for 'photogrammetry' or 'structure from motion'. I get the gist of most of them, but they tend to lose me as soon as they start on the maths...
(Tue 12 Feb 2013, 13:28)