Automating Original Image Capture For Photogrammetry
3D data creation is part of a growing trend in the use of computational imaging techniques within cultural heritage digitization shops. In particular, institutions such as the Minneapolis Institute of Art (MIA), the Smithsonian, and the University of Virginia Library have adopted photogrammetry in their regular operations.
Use cases for 3D data abound. The data can be used to create digital models for display and manipulation in various viewers, 3D printed, or repurposed in VR environments. Virtual models can also serve as teaching tools, support conservation condition assessments of objects over time, and open new lines of inquiry and digital scholarship around such data sets.
One of the current bottlenecks in the multi-step workflow that leads to the creation of original 3D data is the capture stage. In the case of photogrammetry, automating original 2D image capture under controlled shooting conditions is one way to not only scale up data creation but also make the data more accurate and easier for 3D post-processing software to work with.
As we recently began to build out our own 3D capture capabilities at the University of Connecticut Library's Digital Production Lab, we decided to look at existing automated systems with an eye towards customizing a rig that would best fit our space, budget, and anticipated requirements. Working with ace systems integrator Michael Ulsaker, we recently co-designed and installed this structure in our studio:
Salient features include an automated 360-degree spin turntable and a camera column that can be programmed to seamlessly control movement along the X, Y, and Z axes during a given shooting session. Together, the turntable and camera are driven by an integrated combination of five stepper motors.
All of this movement is coordinated through a linked pair of Cognisys StackShot 3X modules. Each module, which in essence acts as a programmable logic controller, has a touchscreen and a clean GUI for the Cognisys software.
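The modules are programmed through Cognisys's own interface rather than scripted externally, but the arithmetic behind a programmed move is straightforward. The following minimal Python sketch shows how a desired turntable rotation translates into stepper motor steps; the steps-per-revolution, microstepping, and gear-ratio values are illustrative assumptions, not our rig's actual specifications.

```python
# Minimal sketch of the step arithmetic behind a programmed turntable move.
# The steps-per-revolution, microstepping, and gear-ratio figures below are
# illustrative assumptions, not the actual specs of our rig or the StackShot 3X.

MOTOR_STEPS_PER_REV = 200   # typical 1.8-degree stepper motor (assumed)
MICROSTEPPING = 16          # driver microstep setting (assumed)
GEAR_RATIO = 10             # turntable gear reduction (assumed)

def degrees_to_steps(degrees: float) -> int:
    """Convert a desired turntable rotation into motor steps."""
    steps_per_turntable_rev = MOTOR_STEPS_PER_REV * MICROSTEPPING * GEAR_RATIO
    return round(steps_per_turntable_rev * degrees / 360.0)

if __name__ == "__main__":
    # A 15-degree increment gives 24 shots per full 360-degree spin.
    print(degrees_to_steps(15))   # -> 1333 with the assumed values
```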
Successful photogrammetry requires a 2D image set of an object taken from overlapping look angles. This needs to be done comprehensively across a subject's entire surface in order to give post-processing software a greater opportunity to create 3D data from the original. Turntables are a good capture solution in this scenario, as they help control the needed overlap from shot to shot and permit consistent stationary lighting to be built into the overall design. Beyond object movement, however, there remains the need to reposition the camera to a different look angle above the subject after each 360-degree spin for optimal capture coverage. This is where precisely programmed turntable rotation and camera movement come together to create high quality source imaging:
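One way to picture the resulting shot list is as a set of rings over a hemisphere: a handful of camera elevations, each paired with a full turntable spin at a fixed azimuth increment. Here is a minimal Python sketch of such a capture plan; the elevation angles and shots-per-spin count are illustrative assumptions rather than our production settings.

```python
# A minimal sketch of a hemispheric capture plan: several camera elevations,
# each paired with a full turntable spin at a fixed azimuth increment.
# The ring angles and shot counts below are illustrative assumptions.

from itertools import product

ELEVATIONS_DEG = [0, 20, 40, 60]   # camera look angles above the subject (assumed)
SHOTS_PER_SPIN = 24                # 360 / 24 = 15-degree turntable increments (assumed)

def capture_plan():
    """Yield (elevation, azimuth) pairs covering a hemisphere around the object."""
    azimuths = [i * 360.0 / SHOTS_PER_SPIN for i in range(SHOTS_PER_SPIN)]
    for elevation, azimuth in product(ELEVATIONS_DEG, azimuths):
        yield elevation, azimuth

if __name__ == "__main__":
    plan = list(capture_plan())
    print(len(plan), "shots")   # 96 shots with the assumed values
    print(plan[:3])             # first few (elevation, azimuth) pairs
```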
The net result is a series of hemispheric image sets, viewed here in Agisoft PhotoScan, where each individual 2D image capture is represented by a blue rectangle around the generated 3D model:
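PhotoScan's Professional edition also exposes these post-processing steps through a built-in Python console, so the alignment and reconstruction stages can themselves be scripted. The sketch below outlines that pipeline under the assumption of a PhotoScan 1.x API; the file paths are placeholders, and keyword arguments may differ slightly between versions.

```python
# A minimal PhotoScan (now Agisoft Metashape) Python scripting sketch of the
# post-processing stage described above. Assumes the Professional edition's
# built-in Python console and a PhotoScan 1.x API; paths are placeholders and
# keyword arguments may differ slightly between versions.

import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Add the 2D image set captured from overlapping look angles.
chunk.addPhotos(glob.glob("/path/to/captures/*.jpg"))

# Align cameras, then build the dense cloud, mesh, and texture.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud()
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

# Export the finished model for upload to an online viewer such as Sketchfab.
chunk.exportModel("/path/to/output/model.obj")
doc.save("/path/to/output/project.psx")
```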
Once exported from post-processing software, the model can then be uploaded to an online viewer site like Sketchfab, where it can be shared more broadly with the online world.
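Sketchfab offers a v3 data API for exactly this step. The following sketch shows a model upload with the requests library; the endpoint and field names follow Sketchfab's documented model-upload call, but the API token, file path, and metadata values here are placeholders.

```python
# A minimal sketch of uploading an exported model to Sketchfab through its
# v3 data API with the requests library. The endpoint and field names follow
# Sketchfab's documented model-upload call; the token, path, and metadata
# values here are placeholders.

import requests

SKETCHFAB_API_TOKEN = "YOUR_API_TOKEN"              # placeholder
MODEL_ENDPOINT = "https://api.sketchfab.com/v3/models"

def upload_model(path, name, description=""):
    """Upload a model file (e.g. a zipped OBJ) and return its Sketchfab uid."""
    headers = {"Authorization": "Token {}".format(SKETCHFAB_API_TOKEN)}
    data = {"name": name, "description": description}
    with open(path, "rb") as model_file:
        files = {"modelFile": model_file}
        response = requests.post(MODEL_ENDPOINT, headers=headers,
                                 data=data, files=files)
    response.raise_for_status()
    return response.json()["uid"]

if __name__ == "__main__":
    uid = upload_model("/path/to/output/model.zip", "Polaroid test model")
    print("Processing at https://sketchfab.com/models/" + uid)
```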
Though the Polaroid test model presented common photogrammetric challenges, like specular highlights from its more reflective surfaces and self-occluded areas along the bellows, this initial trial, exported straight from PhotoScan, was promising nonetheless. A second test, this time using a small gift-store duck with a terracotta-like surface, was something the software handled more elegantly and was able to make watertight.
After our initial test phase concludes, we hope to begin work on aspects of the Connecticut Archaeology Center's bone collection and selections from the Department of Ecology and Evolutionary Biology's Biodiversity Research Collections, both of which are housed nearby on campus.