Following a first experiment, I wanted to share it with the community to gauge interest and gather feedback before continuing or enhancing the development. I might spend more time on it if people are interested!
To give some context, MESH works on pictures gathered from the internet to produce 3D models through photogrammetry. We have to deal with what we have, which is often not that great: differences in exposure, lighting, contrast, time of day, quality, shadows, etc. We focus on processing terrible databases rather than on optimizing the scanning (which would be far easier, but we can't!).
I had a try at a fast scanning process, which consists of taking a video around an object with no intention of making a good video. Basically, I just started recording on my camera (Olympus Stylus) while moving it around the object. I didn't look at the screen or check the focus. I did a pretty bad job as a cameraman, but that was the point!
Above are two frames from the resulting video where you can spot some obvious issues. I used Matlab to extract the frames in a basic fashion, but it could be improved in many ways. If you take one frame out of every 20, for example, you might land on a blurry one or on something blocking the view (like below). I am planning to add a selection step, but one step at a time! The other way would be to extract frames exhaustively, i.e. every single one, and sort them by hand, but I'm a lazy physicist..
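To illustrate the selection step I have in mind, here is a minimal sketch in Python (I used Matlab in practice, so this is not my actual code). The idea: instead of blindly taking every 20th frame, look at a small window around each nominal pick and keep the sharpest frame in it. The `sharpness` scores are assumed to come from some blur metric computed per frame (variance of a Laplacian filter is a common choice); here they are just toy numbers.

```python
def select_frames(sharpness, stride=20, window=5):
    """Pick roughly one frame per `stride` frames, but within
    +/- `window` frames of each nominal pick, keep the sharpest
    candidate instead of whatever lands on the stride boundary.
    Ties are broken toward the nominal pick."""
    picks = []
    n = len(sharpness)
    for center in range(0, n, stride):
        lo = max(0, center - window)
        hi = min(n, center + window + 1)
        best = max(range(lo, hi),
                   key=lambda i: (sharpness[i], -abs(i - center)))
        picks.append(best)
    return picks

# Toy example: 60 frames, where frame 20 (a stride boundary) is
# badly blurred but frame 22 nearby is sharp.
scores = [1.0] * 60
scores[20] = 0.1   # blurry frame right on the stride boundary
scores[22] = 2.0   # a nearby sharp frame
print(select_frames(scores, stride=20, window=5))  # [0, 22, 40]
```

With OpenCV, a plausible per-frame score would be `cv2.Laplacian(gray, cv2.CV_64F).var()` on each grayscale frame, but any sharpness metric would slot into the same selection loop.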
Also, depending on the speed and the movement, it might be possible to average frames to increase image quality, which I didn't do in this case.
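The averaging idea is simple to sketch: if consecutive frames are well aligned (a big "if" with a handheld camera; in practice they would first need registration), averaging N of them keeps the static scene while independent sensor noise drops roughly as 1/sqrt(N). A toy NumPy demo with a synthetic "scene":

```python
import numpy as np

def average_frames(frames):
    """Average a stack of aligned frames. Independent noise is
    reduced by ~1/sqrt(N) while the static scene is preserved."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy demo: the same flat "scene" corrupted by independent noise
rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

single_err = (frames[0] - scene).std()            # ~10
avg_err = (average_frames(frames) - scene).std()  # ~10/sqrt(16) = ~2.5
print(single_err > 3 * avg_err)  # True
```

For real video this would only pay off on slow, smooth passes where the frames can be aligned first; on fast motion the averaging would just smear the object.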
Once the frames were extracted for both objects, I did no further processing. We usually apply enhancement to pictures taken from the internet and sometimes get better results with it, but it was nice for once to work with pictures that all share the same format. So, straight into the photogrammetry software, in this case VisualSFM and Photoscan. Photoscan worked better, which is not always true for us (the reader should not forget that most of our work is based on collected images!); VisualSFM still managed to reconstruct most of the object.
Here is the model of the stone:
And here is the boat:
Note that the boat was a bit more difficult since you have the canal, the trees and the pillars around it, while the stone occupies the entire field of view. Also, due to the canal, it was impossible to take a shot from the front of the boat, as I didn't feel like swimming in winter!
As you can see, both models could be improved with post-processing, but as an experimental process the focus was on the video. If anyone wants to join our adventure and refine the models, we are open to collaboration.
The stone model was quite straightforward; the first results were impressive. The object is a bit difficult as it contains many holes (which is the point of these stones, by the way). We could think about extracting more frames around the holes, but as you can see in the first pictures, the light was not ideal for seeing the inside features. The boat was more problematic: at first, only half of it was modeled. I tried to select the "best pictures" (without anything in front), which resulted in a worse model. It turns out the best result was achieved by including some bad ones (even out-of-focus frames..).
I tried to use some markers in Photoscan, which worked but didn't really improve the model. Also, these markers had to be manually corrected in each image, which greatly increases the time investment. Masks didn't work well either, but I only did a quick job there.
In the end, I was quite happy with the stone (minus the unmodeled holes leaving holes in the model.. tricky), and it appears that fast scanning can be worth exploiting, even though it would require further work. If it is of interest to anybody, I would be happy to share the frame-selection code once I have spent time on it.