b3ta.com links

Not sure, and getting out of my comfort zone, but I don't think it's using voxels. It's a point cloud, so the points are distributed arbitrarily rather than on a 3D grid... does that make sense?

Fairly simple to mesh a point cloud - could you then use that?
(Fri 8 Feb 2013, 12:28)
CSG* isn't meshes,
it's things like spheres, cylinders and cubes, combined with union, intersection and difference. For example you can define a cube and take a spherical chunk out of it. Which tickles my maths gland.

*Constructive Solid Geometry
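A toy sketch of the idea using signed distance functions (my own illustration, not any particular library's API - all the function names here are made up): each primitive returns a negative value inside the solid and positive outside, and the boolean operations are just min/max.

```python
import math

# Signed distance functions: negative inside the solid, positive outside.
def sphere(p, center, r):
    return math.dist(p, center) - r

def cube(p, center, half):
    # Axis-aligned cube: max of per-axis distances to the faces.
    return max(abs(p[i] - center[i]) - half for i in range(3))

# CSG combinators (hypothetical helper names)
def union(a, b):        return min(a, b)
def intersection(a, b): return max(a, b)
def difference(a, b):   return max(a, -b)

# A cube with a spherical chunk taken out of one corner.
def shape(p):
    return difference(cube(p, (0, 0, 0), 1.0), sphere(p, (1, 1, 1), 0.8))

print(shape((0, 0, 0)) < 0)  # centre of the cube is still inside: True
print(shape((1, 1, 1)) < 0)  # corner eaten by the sphere: False
```

Same trick works for rendering via raymarching, since the combined function is still a (conservative) distance bound.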
(Fri 8 Feb 2013, 12:33)
It's not that simple to mesh a point cloud properly.
You can get a reasonable first approximation, but there are cases where most of the algorithms break and manual intervention is required. Worth the effort if you want to render without holes or analyse the data, though.
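One cheap "first approximation" (a sketch of my own, not the poster's method): if the cloud is roughly a height field, triangulate the xy projection with a 2D Delaunay triangulation. It falls apart exactly where the thread says - overhangs, multiple z values per (x, y) - which is why real scans need more robust meshing.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy height-field point cloud: z = f(x, y) sampled at scattered points.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = np.exp(-(xy ** 2).sum(axis=1))
points = np.column_stack([xy, z])

# Delaunay on the xy projection gives a surface triangulation. Fine for
# height fields; breaks as soon as the surface folds back over itself.
tri = Delaunay(xy)
faces = tri.simplices  # (n_triangles, 3) indices into `points`
print(faces.shape[1])  # 3 vertices per triangle
```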

Adding normals and reflectance info to turn the points into surfels is an alternative that sometimes works well for rendering; unfortunately, you still get problems with holes where the samples are too far apart.
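The usual way to get those per-point normals (a minimal sketch, assuming nothing about the software actually used): PCA of each point's local neighbourhood - the eigenvector with the smallest eigenvalue of the covariance matrix approximates the surface normal. A point plus a normal (plus optionally a radius and reflectance) is the standard surfel.

```python
import numpy as np

def estimate_normal(points, i, k=8):
    # PCA of the k nearest neighbours: the direction of least variance
    # (smallest eigenvalue) approximates the surface normal at point i.
    d = np.linalg.norm(points - points[i], axis=1)
    nbrs = points[np.argsort(d)[:k]]
    cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return v[:, 0]              # eigenvector for the smallest eigenvalue

# Sanity check: points on the z = 0 plane should get a normal of ±z.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
n = estimate_normal(pts, 0)
print(abs(n[2]))  # ~1.0
```

The sign of the normal is ambiguous from PCA alone; real pipelines orient them consistently afterwards (e.g. towards the camera position).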

If you want to go all the way to crazy you could just take hundreds of pictures and go for light field rendering. No geometry at all, just clever warp & blend operations.
(Fri 8 Feb 2013, 12:51)
True, though those algorithms are improving all the time - but yes, the mesh will usually need a lot of cleaning and work to make it properly solid, so it can be printed out, for example.

As for normals, hopefully you can see on this model that there are some! They were all calculated after capture though. There weren't many holes in the data, but huge amounts of noise thanks to the incredibly shiny material. Cleaning that up did introduce holes, which then required filling using the software's algorithms, which is why the model looks rough (in the actual sense of the word) in places.

We got this model using about 30 image stereo-pairs. My colleagues here have got this method (depending on the source object) to output geometry that's accurate to less than 20 microns, better than pretty much any laser scanner. We're looking at buying a light field camera to play with!
(Fri 8 Feb 2013, 13:11)
Spatial res on the light field cameras I have seen is unimpressive.
Probably best off sticking with a high res camera + robot type rig.

I wasn't getting decent lighting, unfortunately - it was all flat. I believe it's there, though; software at my end is the likely culprit.

Got any papers btw? I'm fairly interested in this stuff but haven't followed it for a couple of years.
(Fri 8 Feb 2013, 13:24)
It takes a while for the lighting to 'switch on' - was the model fully loaded? Lots of pink specularity...?

Funny you should say that about high-res... our first model used the full resolution of the images, and we got a very dense point cloud but with a lot of noise and holes. Running the same images downsampled 2x, we got a sparser point cloud but cleaner data and fewer holes - probably because feature detection worked better on the low-res images than on the high-res ones, where everything was 'smeared out' over more pixels.
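The 2x pre-shrink described above amounts to something like this (a hedged stand-in - the actual pipeline's resampling filter isn't stated, so plain 2x2 block averaging is assumed here):

```python
import numpy as np

def downsample2x(img):
    # Average each 2x2 block: halves resolution and suppresses per-pixel
    # noise, which can make 'smeared-out' features easier to detect.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]  # crop odd rows/cols
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
small = downsample2x(img)
print(small.shape)  # (2, 2)
```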

And not sure what you mean by 'robot' rig - we just used a tripod and shifted the camera round! We did use a Kinect to make a very rough 3D model first, and then used an algorithm in development here to predict the best camera positions for the object (though I doubt it would matter much with a fairly flat, symmetrical object like this). A couple of colleagues are actually building a robot rig which would take pics from all the best spots...

As for papers, I try not to get involved in the maths/technical side, but Google Scholar should throw up a load - try searching for just 'photogrammetry' or 'structure from motion'. I get the gist of most of them, but they tend to lose me as soon as they start with the maths...
(Tue 12 Feb 2013, 13:28)