June 26, 2020

Small Object Photogrammetry Through Focus Stacking

Small object photogrammetry rig

Woodshedding in a basement

Back in March, during our last on-site work week before the COVID-19 shutdown at UConn, I shot and took home as much useful raw image data as possible from our lab's automated capture system for later photogrammetric post-processing. I was also able to check out a small assortment of gear that I thought I could combine with my own resources at home to create a purpose-built photogrammetry rig in the basement.

The resulting kit features an inexpensive popup light tent, an old lazy Susan turntable, and a Cognisys Stackshot 3X macro rail package. I've attached the macro rail to a Manfrotto 405 3-way geared pan-and-tilt head that I screwed onto a spare tripod I normally keep behind the seat of my truck. A Canon 5D III, my current backup camera, sits atop the Manfrotto head and is mated to a Zeiss Milvus 50mm f/1.4 ZE lens.

With this setup, I have been further customizing a photogrammetry technique for small archaeological lithics that I first encountered in the 2016 paper, A Simple Photogrammetry Rig for the Reliable Creation of 3D Artifact Models in the Field: Lithic Examples from the Early Upper Paleolithic Sequence of Les Cottés (France). The paper's first author, Samantha Porter, has also produced an excellent three-part video series that goes into extensive, articulate detail on the specialized post-processing required for the raw data acquired from such a system.

A problem explained

Generally speaking, depth of field becomes shallower as the camera-to-subject distance shortens. And most camera/lens combinations eventually become diffraction limited when stopped down beyond a certain aperture. This combination of factors works against the straightforward capture of sharp, high-resolution images of small objects, images that could otherwise serve to create richer photogrammetric geometry and texture.

Test lithic, 2.75cm long, personal collection of author

Coincidentally, pieces like lithics often possess distinctive surface geometry and visible detail that researchers highly value. But these artifacts can be characteristically small as well. With the system I was spinning up at home, one thing I wanted to explore was how to gather more useful object detail during initial image acquisition than is normally possible with standard single-shot camera capture. In turn, I wanted to see how incorporating additional photographic techniques might help refine an established photogrammetric pipeline.

Series of images demonstrating a six-image focus stack of a Tachinid fly. The first two images illustrate the typical depth of field of a single image at f/10, while the third image is the composite of six images. Photo credit: Muhammad Mahdi Karim

Devising a solution

In the 2D imaging world, digital macro photographers have traditionally employed a method known as focus stacking, or z-stacking, to effectively achieve greater depth of field. A series of photos is first taken of an object at different focal points, covering the range between the nearest and furthest points of the object relative to the camera position. From the resulting image set, or focus bracket, the sharpest regions of each photo's overlapping depth of field are then computationally merged, or focus stacked, into a single composite image whose depth of field is far greater than that of any of its component frames.
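To make the merging step a bit more concrete, here is a minimal sketch of one common focus stacking approach: choosing, pixel by pixel, the frame whose local Laplacian response (a rough proxy for sharpness) is strongest. It assumes the bracketed frames are already aligned, and the folder name is hypothetical; dedicated tools like Photoshop and Helicon Focus use considerably more sophisticated alignment and blending than this.

```python
# focus_stack_sketch.py -- a minimal, assumption-laden sketch of focus stacking.
# Assumes the bracketed frames are already aligned (a real pipeline would
# register them first) and merges by picking, per pixel, the frame whose
# local Laplacian response (a proxy for sharpness) is strongest.
import glob
import cv2
import numpy as np

def sharpness_map(img_bgr, blur=5, ksize=5):
    """Absolute Laplacian of the blurred grayscale image = local sharpness."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (blur, blur), 0)
    lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=ksize)
    return np.abs(lap)

def focus_stack(paths):
    frames = [cv2.imread(p) for p in sorted(paths)]
    sharp = np.stack([sharpness_map(f) for f in frames])   # (N, H, W)
    winner = np.argmax(sharp, axis=0)                      # sharpest frame index per pixel
    stack = np.stack(frames)                               # (N, H, W, 3)
    rows, cols = np.indices(winner.shape)
    return stack[winner, rows, cols]                       # composite image

if __name__ == "__main__":
    # Hypothetical file layout: one folder per focus bracket.
    composite = focus_stack(glob.glob("bracket_01/*.tif"))
    cv2.imwrite("bracket_01_stacked.tif", composite)
```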

Cognisys Stackshot 3X macro rail and Manfrotto 405 3-way pan-and-tilt head

A common method of focus bracketing involves moving the entire camera/lens combination from one focal point to the next through the use of a macro focus rail. With the Stackshot 3X system, not only can this movement be very precisely calculated and executed, but a given photo sequence can also be programmatically shot with a single touchscreen gesture.

From the controller's Auto-Distance mode, the user can enter both start and stop positions and can also specify the distance of travel for each step in a given series. In my case, to best estimate an appropriate travel distance, I first calculated the depth of field for my specific camera, focal length, f-stop, and subject distance using the approximation DoF ≈ 2u²Nc/f², where u is the subject distance, N the f-number, c the circle of confusion, and f the focal length. Actually, I didn't do that manually. Instead, I used DOFMaster's online calculator and came up with a value of 0.96cm for my particular camera and lens setup shooting at f/11 from a distance of 21.5cm.

Stackshot 3X Auto-Distance mode settings

An in-focus overlap of roughly 50% between adjacent images is a good initial guideline for creating a successful focus stack in post-processing. The calculated depth of field was therefore divided by 2 to estimate 50% coverage. This value was then converted into millimeters and entered into the Stackshot 3X controller's distance (of rail travel) setting.
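For anyone who wants to sanity-check those numbers without an online calculator, here is a small sketch of the underlying arithmetic. It uses the standard thin-lens near/far limit formulas (the approximation above is their simplification), assumes a 0.030mm circle of confusion for a full-frame sensor, and treats the rail travel value as a hypothetical placeholder; with these inputs it lands close to the roughly 1cm figure quoted above.

```python
# dof_step_sketch.py -- rough depth-of-field and rail-step arithmetic.
# Assumes a 0.030 mm circle of confusion (a common full-frame value);
# DOFMaster and other calculators may use slightly different constants.

def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.030):
    """Near/far limits and total depth of field from the thin-lens formulas."""
    h = f_number * coc_mm * (subject_mm - focal_mm)
    near = subject_mm * focal_mm**2 / (focal_mm**2 + h)
    far = subject_mm * focal_mm**2 / (focal_mm**2 - h)
    return near, far, far - near

if __name__ == "__main__":
    # Illustrative values from the text: 50 mm lens, f/11, 21.5 cm subject distance.
    near, far, dof = dof_mm(focal_mm=50, f_number=11, subject_mm=215)
    step = dof / 2                      # ~50% in-focus overlap between adjacent frames
    travel = 40.0                       # hypothetical start-to-stop rail travel, in mm
    print(f"DoF = {dof:.1f} mm ({dof / 10:.2f} cm)")
    print(f"Rail step for ~50% overlap = {step:.1f} mm")
    print(f"Frames needed for {travel:.0f} mm of travel = {int(travel // step) + 1}")
```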

Next, the beginning and end positions of the series were assigned by using the controller to move the Canon 5D III along the rail while monitoring the camera's live view for correct focus through a tethered laptop. Once that was done, the controller automatically calculated and displayed the number of steps required to best meet all entered parameters. From that point on, I could shoot each successive focus bracket by simply hitting the touchscreen's start button. Following each completed series, I would then manually rotate the object 10 degrees on the lazy Susan and commence another bracket from the controller.

After testing both Photoshop and Helicon Focus, I chose Photoshop to focus stack each focus-bracketed series. Its consistent alignment and blending abilities, plus its capacity to work in conjunction with Lightroom (where the camera raw files were initially organized by bracket and batch edited), were key attributes in Photoshop's favor. The resulting composite images were saved out as TIFF files, which were eventually imported into Agisoft Metashape Pro photogrammetry software, where 3D data was created from the focus-stacked 2D TIFFs.

To date, test objects have included pieces measuring as little as 2.71cm across their longest dimension; see the above illustrations for one such example. For lithics this small and in the shape of thin projectile points, 4 shots per bracket were normally captured. This resulted in a total of 144 shots per complete 360-degree spin, derived from 36 incremental 10-degree rotations of the lazy Susan. Since a block of kneadable eraser was used to support the lithic first by its base, then by its tip, separate 360-degree rotations were conducted for each object orientation to guarantee full look-angle coverage and to avoid occlusions caused by the support. This, factored in with the previously noted requirements for focus stacking, resulted in 288 total raw images being shot of the piece.

Rendered results: non-stacked vs. focus stacked

Here is a comparison. First, the same lithic previously rendered in 3D from 144 total photographs taken from two look angles per object orientation. This example was shot at a greater camera-to-subject distance and without focus stacking (as displayed in Metashape Pro's model view):

Non-focus stacked version of 2.75cm long test lithic

And here is the same object rendered in 3D from 288 raw images taken from one look angle per object orientation. In this case, focus-bracketed images were focus-stacked by Photoshop into 72 composite TIFFs:

Focus stacked version of 2.75cm long test lithic

The following zoomed-in views of the above tandem renderings include a measured 1mm feature that offers a better sense of the object's small scale and of the comparative level of perceivable detail between the non-focus stacked and focus stacked versions:

Non-focus stacked version of test lithic with measured 1mm detail
Focus stacked version of test lithic with measured 1mm detail

Decimated and compressed versions of these two examples may also be viewed as interactive 3D models on Sketchfab. Feel free to zoom in and pan around that same 1mm detail from the preceding illustrations to better compare the two 3D renderings:

Test Lithic 20200320 Non-focus stacked, Scaled by michael.bennett1 on Sketchfab


Test Lithic 20200614 Focus-Stacked and Scaled by michael.bennett1 on Sketchfab

This combination of focus stacking and small object photogrammetry has similarly been employed by Santella and Milner (2017), using examples from paleontology, and by Olkowicz et al. (2019) in their recent study of rock fracture morphology. But it can be applied to other item types as well... for instance, coins and tokens.

This token for Rhode Island's Newport Bridge brought back childhood memories of Del's, the absolutely terrifying old Jamestown Bridge, and summer day trips from my parents' Central Massachusetts home to Beavertail Lighthouse. At 28mm in diameter, the brass piece is similar in size to the lithic examined earlier and likewise demonstrates the benefits of the same choreographed photography and software techniques described herein.

Newport Bridge Token 20200620 by michael.bennett1 on Sketchfab

Final thoughts

As we look toward the future, it will be interesting to see whether Canon and other manufacturers eventually include in-camera focus bracketing and stacking in some of their models, much as Olympus and Panasonic have already done. Though it could introduce an undesirable workflow black box, this feature could eliminate the complexities of focus rails, additional file management, and the separate program hand-offs outlined here for small objects. If done with accuracy, precision, and a level of transparency, such in-camera processing holds the promise of significantly streamlining the creation of the high-quality source images that are so crucial to building exceptional 3D geometry and visible detail.

June 10, 2019

Mitigating Silver Mirroring Through Cross Polarization

Silver Mirroring. Documentation Photography, Mauro J. Mazzini 2011.

Silver mirroring is, according to the American Institute for Conservation, "a natural chemical process that affects photographic materials containing silver over time. It results in a metallic sheen over the surface of the photograph, typically affecting the darker areas of a photograph most." As can be seen in the silver gelatin print above, image details are obscured and overall spatial continuity is disrupted from highlight to shadow.

Colloquially known as silvering or silvering out, the phenomenon is commonly understood to be caused by silver in the print's binder initially oxidizing over time to silver ions. These ions can then migrate upward from the gelatin layer to the print's surface and subsequently transform to silver sulfide. A number of possible conservation treatments have been explored, including the use of saliva as a mild enzymatic solution applied by swab to the photograph's affected areas.

Digitizing prints of this type can be challenging. When illuminated, metals produce almost nothing but randomly polarized (aka unpolarized) direct reflections. This type of direct reflection can cause the fogging effect associated with silver mirroring. However, because this property is consistent among metals, certain studio lighting and filtering techniques can be employed to consistently improve the image quality of captured prints. One such method is cross polarization.

Cross polarization begins with managing the linearity of a polarized light source. In the case of copy photography, that normally means two lights positioned on either side of a table that holds the scene to be shot.

Mounted Pair of Polarizer Sheets

Here, mounted sheets of neutral gray linear polarizer film are positioned in front of two strobes that are part of the UConn Digital Imaging Lab's large automated X-Y table system. Once fired, the flash light that will eventually fall on the scene will be polarized according to the linearity of the film sheet in front of each head.

Polarizer Sheet Mounting

Both sheets are positioned so that their linearities are in the same direction. As a result, the large panoramic photo print on the table can now be illuminated with polarized light in a manipulated way.

Remember, metals primarily produce direct reflections. Through controlled polarization of the light source, however, those direct reflections can also be polarized with the same linearity. If we then use a circular polarizer filter on the camera lens and orient its linearity 90 degrees to that of the polarized lights, the result is a nearly complete blockage of directly reflected light known as extinction. In this way, the metallic sheen from silver mirroring can be mitigated during capture.
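As a brief aside on why the 90-degree orientation matters: for an ideal pair of polarizers, the fraction of already-polarized light transmitted by the second filter follows Malus's law, I = I₀cos²θ, which falls to zero when the two linearities are crossed at 90 degrees. Here is a tiny sketch of that falloff, assuming ideal, lossless filters (real filters have a finite extinction ratio):

```python
# malus_sketch.py -- transmitted fraction of polarized light through a second
# (analyzing) polarizer, per Malus's law, assuming ideal lossless filters.
import math

for angle_deg in (0, 30, 45, 60, 80, 90):
    fraction = math.cos(math.radians(angle_deg)) ** 2
    print(f"{angle_deg:3d} deg between linearities -> {fraction:.3f} of direct reflection passed")
# At 90 degrees the directly reflected (polarized) component is extinguished,
# while the randomly polarized diffuse component is largely unaffected.
```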

Zeiss Milvus 50mm f/1.4 Mounted With B+W Circular Polarizer Filter
Cross-Polarization Schematic: Unpolarized White Light = Flash > Polarizer 1 = Polarizer Sheet Film Linearity > Polarizer 2 = Circular Polarizer Filter Linearity > Straight Line = Cross-Polarization Extinction Factor (Click Image for Animation). Source: https://micro.magnet.fsu.edu/primer/java/polarizedlight/filters/index.html

Here are a few examples of the benefits of the technique as illustrated through two panoramic photos that we recently shot in the lab. The first set comes from a 1909 photo of Hartford's Bushnell Park looking towards the Connecticut State Capitol building in the distance...

Bushnell Park and Capitol Building, Hartford, Connecticut. Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.

The version on the left was shot with no filtering of either the lights or the camera. The cross-polarized version on the right only hints at the technique's effect. In turn, here are a couple of 100% zooms that better exemplify the original silver mirroring and its mitigation...

Bushnell Park Detail. Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.
Hartford Capitol Building Detail. Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.
Bushnell Park Detail. Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.

The following group portrait from the UConn Archives' Grabowski Collection also contains areas affected by silver mirroring, though to a lesser degree. Note that the phenomenon's presence is mainly observable in the photo's shadows and darker areas, which are symptomatic traits. Yet even in this somewhat mild case, cross-polarization can clearly help reveal obscured contrast and detail.

Grabowski Collection Panorama. Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.
Grabowski Collection Photograph Detail, Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.
Grabowski Collection Photograph Detail, Left Image: No Polarization Filters. Right Image: Cross Polarization Technique Employed.

If we return for a moment to the mechanism of cross-polarization and the notion of extinction, questions still remain. What are we seeing in these shots on the right? Why aren't they mostly black?

Up until this point, we've focused on silver mirroring and its ties to the direct reflection properties of metals. However, light reflected from a print is mainly a combination of two components: direct reflection and diffuse reflection. Diffuse reflection is basically light scattered at many angles. Another of its characteristics is that it is randomly polarized (unpolarized). As a result, cross-polarization has little effect on diffuse reflections.

This variance between the two reflection types allows us to systematically separate them. In essence, cross-polarization can selectively reduce problematic direct reflections (e.g. silver metal) while leaving diffuse reflections mostly alone (non-metallic print surface). As illustrated above, this allows us to digitally capture historical photographs that exhibit silver mirroring with newfound detail.

Sources:

Chen, J.-J. (2001). Documenting Photographs: A Sample Book, Retrieved June 7, 2019, from http://paulmessier.com/pm/pdf/papers/documenting_photographs_chen.pdf

How it Works: Visible Light Linear Polarizer - American Polarizers, Inc. (n.d.). Retrieved June 7, 2019, from https://www.apioptics.com/about-api/resources/visible-light-linear-polarizer/

Hunter, F., Fuqua, P., & Biver, S. (2012). Light science & magic: An introduction to photographic lighting. Waltham, MA: Focal Press/Elsevier.

Molecular Expressions Microscopy Primer: Light and Color - Polarization of Light: Interactive Java Tutorial. (n.d.). Retrieved June 15, 2019, from https://micro.magnet.fsu.edu/primer/java/polarizedlight/filters/index.html

Müller V. (1995) Polarization-Based Separation of Diffuse and Specular Surface-Reflection. In: Sagerer G., Posch S., Kummert F. (eds) Mustererkennung 1995. Informatik aktuell. Springer, Berlin, Heidelberg

PMG Silver Mirroring - Wiki. (n.d.). Retrieved June 9, 2019, from http://www.conservation-wiki.com/wiki/PMG_Silver_Mirroring

Silver mirroring - Wiki. (n.d.). Retrieved June 9, 2019, from https://www.conservation-wiki.com/wiki/Silver_mirroring

Umeyama, S., & Godin, G. (2004). Separation of diffuse and specular components of surface reflection by use of polarization and statistical analysis of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5), 639–647. https://doi.org/10.1109/TPAMI.2004.1273960

Weaver, G. (n.d.). Conservation of a gelatin silver print by August Sander. Retrieved June 9, 2019, from https://gawainweaver.com/news/News-conservation-gelatin-silver-print-august-sander/

April 26, 2019

Flex Snakes and Other Things Wild: Photographing Maurice Sendak's Book Dummies

Dummy for ‘Where the Wild Horses Are’ (1955), The Maurice Sendak Collection. Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Library. © The Maurice Sendak Foundation, Inc.

At the UConn Library Digital Production Lab, my student photographers and I are currently in the midst of digitizing approximately 12,000 pieces of Maurice Sendak's artwork on deposit at the Library's Archives and Special Collections. Among the objects we've had the pleasure of photographing is a group of Sendak's hand-crafted book dummies. With these pieces, the illustrator worked through a number of preliminary art and design ideas that fed into his final, well-known commercial publications.

Dummy for ‘Where the Wild Things Are’ (1963), The Maurice Sendak Collection. Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Library. © The Maurice Sendak Foundation, Inc.

Capturing such fragile, miniature books is challenging. Custom supports and careful handling are key elements to a successful shoot. Here's an example of how we staged one of the objects for overhead camera capture:

Camera Stand and Dummy for ‘Where the Wild Things Are’ (1963), The Maurice Sendak Collection. Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Library. © The Maurice Sendak Foundation, Inc.

In this instance, the verso pages of Where The Wild Things Are are gently held back by a thin archival flex snake. The flex snake is a bit like a nylon tube sock with shot weights inside. In use, one can subtly vary its effect with gentle precision, which makes the tool well-suited for this particular application.

Flattening the recto page is a custom-cut 12"x12" piece of 4.5mm TruVue Optium Acrylic. This lightweight, museum-grade material is easy to handle, scratch resistant, and highly transparent. It is normally used for displaying framed artwork in galleries. Unlike most conventional (iron-containing) glass, it absorbs little light and imparts minimal color cast to objects behind it.

Even so, when imaging pieces in the lab through acrylic, we batch apply custom white balance values and slightly compensate normal exposure settings in post-processing. Guiding these adjustments are data sampled from measurable color targets photographed through the same acrylic as the artwork. This, in turn, helps us objectively achieve high-level FADGI image quality scores that compare well with shots taken without the clear medium.

Custom Support for Shooting Dummy for ‘The Birthday Party’ (1957), The Maurice Sendak Collection. Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Library. © The Maurice Sendak Foundation, Inc.

For the even smaller and more fragile The Birthday Party dummy we chose to support the front board and recto pages with an angled stack of conservation bag weights. Once again, the flex snake is used to apply resistance to the turned pages, so the verso side can be lightly flattened by acrylic and accurately photographed.

Dummy for ‘The Birthday Party’ (1957), The Maurice Sendak Collection. Archives & Special Collections at the Thomas J. Dodd Research Center, University of Connecticut Library. © The Maurice Sendak Foundation, Inc.

The overall speed of the entire capture workflow is mostly predicated on the individual handling needs of each particular dummy book, so the effort is necessarily deliberate, slow, and careful. Following FADGI 4-star imaging guidelines for the spatial resolution of two-dimensional art, we're shooting these pieces at 600ppi. What this reveals in the end are the books' finer material details as archival objects, while also casting new light on Sendak's own creative process.

January 16, 2019

Night Visions Through The Canon EOS R

Benton Courtyard Statuary

I recently picked up a new Canon EOS R camera. It's my first experience with a full-frame mirrorless system, so admittedly I'm a little late to a party that, up to this point, has been led by manufacturers other than Canon. However, after a couple of months of hands-on use in the field, I can confidently join the rising chorus and say that this feels like an irreversible step forward in camera technology. Among a number of novel attributes, I'd like to touch on one of the EOS R's more salient mirrorless features that I've really grown fond of.

After owning the original Canon 5D, followed over time by the Mark II and III, I decided to skip the fourth iteration of this solid line and make the leap from a camera body with an optical viewfinder to one with an electronic viewfinder, or EVF. In short, the EOS R's viewfinder is killer. Beyond a number of other systemic enhancements that the mirrorless design affords in terms of autofocus point coverage and accuracy, the EVF grants a live view of the soon-to-be-exposed scene based upon the camera's current shutter speed, aperture, and ISO settings. What this translates into, among other things, is the ability to compose and shoot in extremely low light. The viewfinder no longer simply acts as a mirrored reflection of the world in front of you but instead offers a depiction of how the camera sensor is capturing and interpreting light coming through the lens. As a result, what the EVF displays is an accurate representation of how an image will eventually be exposed once the shutter is triggered. This gives the photographer detailed, real-time visual feedback and compositional control before a shot is fired. And because the sensor's signal can be amplified through higher ISO settings, and the EVF seamlessly reacts to such changes, the effect is something akin to night vision when the system is paired with fast glass.

Along with the image above, here are a few additional shots from a week ago that I took while walking across campus on my way home for the day. A steady mist was falling, and heavy fog blanketed the evening. Normally these are tough conditions to grab critical focus and to dial in a decent, balanced exposure. The camera, however, easily excelled in both regards through my use of the EVF while simultaneously toggling a few control dials all within easy reach. No need to fire, slowly chimp the shot, adjust focus or exposure, and then re-compose.  Instead, I was handling a tool that was really working with me as I was moving and seeing.

Benton Courtyard Bench

Because I like to selectively push and pull tones in post for creative effect, particularly when working in black and white, some of the resulting shadow regions in these processed shots were intentionally clipped in areas.  However, when I initially brought the images into Lightroom for editing, I was pleased to note that the camera raw files had ample dynamic range to work with.  This was particularly appreciated in images such as these night shots which contained both very bright highlights and deep shadows.  Is the EOS R's available dynamic range as broad as that found in medium format digital camera systems or even other DSLR competitors?  Most likely not.  But it is certainly usable.

Tree Near Gulley Hall

In summary, my early experiences in using a mirrorless system have fostered a growing sense that I'm peering into photography's future.  So far I like what I'm seeing.

Oak At Wilbur Cross
August 1, 2018

Automating Original Image Capture For Photogrammetry

Polaroid Land Camera Model 95, 2D and 3D Views

3D data creation is part of a growing trend in the use of computational imaging techniques within cultural heritage digitization shops. In particular, operational adoption of photogrammetry has been witnessed at such institutions as the Minneapolis Institute of Art (MIA), the Smithsonian, and the University of Virginia Library.

3D data use cases abound. For instance, the data can be used to create digital models for display and manipulation in various viewers, 3D printed, or re-purposed in VR environments. Additionally, virtual models can be employed as teaching tools, used in conservation condition assessments of objects through time, and leveraged to open new lines of inquiry and digital scholarship around such data sets.

One of the current bottlenecks in the multi-step workflow that leads to the creation of original 3D data is the capture stage. In the case of photogrammetry, automating original 2D image capture under controlled shooting conditions is one way to begin not only to scale up data creation but also to make the data more accurate and easier for 3D post-processing software to work with.

As we recently began to build out our own 3D capture capabilities at the University of Connecticut Library's Digital Production Lab, we decided to look at existing automated systems with an eye towards customizing a rig that would best fit our space, budget, and anticipated requirements. In collaboration with ace systems integrator, Michael Ulsaker, this is the structure that we recently co-designed and installed in our studio:

University of Connecticut Digital Production Lab 3D Capture System

Salient features include an automated 360-degree spin turntable and a camera column that can be programmed to seamlessly control movements along the X, Y, and Z axes during a given shooting session. Both the turntable and the camera are driven by an integrated combination of five stepper motors.

Cognisys NEMA 17 Stepper Motor and Canon 5D II

All of this movement is coordinated through a linked pair of Cognisys Stackshot 3X modules. Each module, which in essence acts like a programmable logic controller, has a haptic touchscreen and a nice GUI to the Cognisys software.

Cognisys Stackshot 3X Controllers

Successful photogrammetry requires a 2D image set of an object shot from overlapping look angles. This needs to be done in a comprehensive manner across a subject's entire surface in order to give post-processing software a greater opportunity to create 3D data from the original. Turntables are a good capture solution in this scenario, as they help control the needed overlap from shot to shot and permit consistent, stationary lighting to be built into the overall design. Beyond object movement, however, there remains the need to reposition the camera to different look angles above the subject for each 360-degree spin to achieve optimal coverage. This is where precisely programmed turntable rotation and camera movement come together to create high quality source imaging:

The net results are a series of hemispheric image sets, viewed here in Agisoft Photoscan, where each individual 2D image capture is represented by a blue rectangle around the generated 3D model:

Model View, Agisoft Photoscan
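As a rough illustration of how turntable increments and camera look angles multiply into one of these hemispheric image sets, here is a small sketch that enumerates a shot list. The elevation angles and the 10-degree turntable increment are illustrative placeholders, not necessarily our production settings.

```python
# shot_list_sketch.py -- enumerate turntable/camera positions for one hemispheric pass.
# The camera elevation angles and the 10-degree turntable increment are illustrative only.
from itertools import product

turntable_steps = range(0, 360, 10)       # 36 stops per full rotation
camera_elevations = (15, 35, 55, 75)      # hypothetical look angles above the table, in degrees

shots = list(product(camera_elevations, turntable_steps))
print(f"{len(camera_elevations)} look angles x {len(turntable_steps)} stops = {len(shots)} images per pass")

for elevation, rotation in shots[:3]:     # peek at the first few positions
    print(f"camera at {elevation:2d} deg, turntable at {rotation:3d} deg")
```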

Once exported from post-processing software, the model can then be uploaded to an online viewer site like Sketchfab where it may be shared more broadly to the online world.

Polaroid Test by michael.bennett1 on Sketchfab

Though the Polaroid Test model presented common photogrammetric challenges, like specular highlights from its more reflective surfaces and self-occluded areas along the bellows, this initial trial, exported straight from Photoscan, was promising nonetheless. A second test, this time using a small gift-store duck with a terracotta-like surface, was something the software handled more elegantly and made watertight.

UConn Duck Test by michael.bennett1 on Sketchfab

After our initial test phase concludes, we hope to eventually begin work on aspects of the Connecticut Archaeology Center's bone collection and selections from the department of Ecology and Evolutionary Biology's Biodiversity Research Collections, both of which are housed nearby on campus.

May 11, 2017

Early Morning Visit from Ursus americanus

American Black Bear Sow with Cubs

Last week, I woke up to what has become a yearly springtime visit. This particular season brought a mother and three young black bear cubs to the homestead. After emerging from one of the trails that I've cut through our woodlot, they slowly made their way across the still-untilled garden and past the shed. Eventually the quartet landed in a clearing where I have vague plans to build a hoop house.

Fresh greens appeared to be on the family's morning menu...

Black Bear Cubs
Black Bear Sow and Cub

After a while, though, eating their greens became a bore to the cubs. Climbing a nearby black cherry tree proved to be much more fun.

Black Bear Cubs Climbing Tree

All of these images were taken on May 3rd at around 5:45AM from a slightly opened upstairs bedroom window with a Canon 5D III and 300mm f/4L combination, shot at f/5.0, ISO 20000 in dim light.

December 14, 2016

Using Humidification and Electrostatic Force in the Handling and Digitizing of 19th Century Latin American Newspapers

Often when scoping out digitization projects, devising complementary conservation treatments that assist in digital capture is one of the more challenging aspects of overall workflow design. And so it has been the case with our recent efforts at UConn Library on a set of 19th century Latin American newspapers from the University's archives and special collections.

Over time, the thin, pulp-based, acidic periodicals have become brittle, broken, and in some instances creased in ways that obscure the print. Humidification is a common conservation technique that can be used to relax these creases in order to once again flatten the paper and allow for subsequent digital capture of hidden text. In the following 40-second time-lapse video, my colleague and UConn Library Conservator, Carole Dyal, employs her lab's humidification dome to start the initial prep work on a two-page spread. The entire process in real time takes approximately 15 minutes per sheet.

Once the pages are humidified, they are sandwiched between layers of polyester webbing and blotter on the lab's large work tables. On top of this goes a heavy 1/2-inch piece of plexiglass, which acts as an even surface weight across the sheets below. The before-and-after transformation of the paper that results from this careful process is quite remarkable and greatly assists my lab's subsequent photographic efforts toward creating archival-quality page images.

Brittle paper that has ultimately broken presents its own complications, often in the form of a jigsaw puzzle. Fortunately, one of the unique design features of the digital production lab's X-Y table is a controllable electrostatic surface that can be used to temporarily hold down folded paper remnants during shooting. These remnants can be particularly problematic at newspaper folds, where broken page sections can "spring" into the air and become difficult to lay flat against their corresponding halves.

UConn Digital Production Lab X-Y Table System

In practice, the table's electrostatic force is first fine-tuned through an adjustable controller. With such brittle and lightweight paper, it is important to regulate the table's downward pull in order to avoid damaging the fragile pages:

Electrostatic Table Controller

From there, the photographer can turn the electrostatic force completely on or off with a foot switch that is cabled to the controller:

Electrostatic Table Foot Switch

Fragile, oversized materials like old newspapers are often best transported to and from the deactivated table surface with a heavier, alkaline carrier base. For this project .010 inch folder stock, cut to size, is being used.

Once on the table, a problematic fold can be addressed with selective application of electrostatic force coupled with some gentle, manual pressure. If you listen carefully, you can hear me hit the stomp box on the floor to activate the system's foot switch. The foot switch allows for effective hands-free control, which enables the photographer to coordinate the table's downward pull in combination with their own physical manipulation of the paper.

Here's a closer look at the same technique in action. What was once a broken line of text is again made legible and ready for overhead camera capture...

Turning over page 1 in order to expose page 2 for shooting presents a similar handling issue...

In this instance, the paper break is harder to fit together elegantly, as the delicate page edges are a bit more ragged on this side. However, the text is at least made readable through the process, even if not perfectly in line.

Individual images are then taken of each page at 400ppi. In between shots, the X-Y table (not the newspaper) is moved along its Y-axis to position the next page directly under the overhead camera. In this way, manual handling of such fragile material is minimized, and the capture process becomes more automated, faster, and more precise.

Finally, a word about the periodicals and their cultural significance. They were originally published in the Bolivian port city of Antofagasta just prior to the 1879-1883 War of the Pacific, which pitted an alliance of Bolivia and Peru against Chile.

Antofagasta, Chile, https://goo.gl/maps/8Zdc8sAbECH2

Chile would go on to win the conflict and capture the city, which has remained important through time not only for its access to the sea but also for its proximity to the region's rich mineral resources. Printed during a period of regional strife and subsequent transition, the newspapers in their digital format will soon offer researchers new, more detailed access to this important chapter in Latin American history.

October 4, 2016

Digitizing Large Format Aerial Photography Transparencies: Part II

In Part I of this post, I summarized the workflow steps employed in digitizing 9x9 large format film from a 2002 aerial survey of Connecticut. In Part II, I'd like to take a deeper dive into the resulting image and compare it with similar aerials taken through time that used different film stocks and digital technologies.

The 2002 image that I converted in our lab was taken above Watertown, CT. In addition, the USGS hosts Watertown aerial photos from 2008 and 2012. Here's a look at these three surveys in succession:

Though they all have the same 1:1 aspect ratio, only the 2008 and 2012 images were shot at the same scale. So, let's zoom in to a particular area of common interest among the three, resize them all to a comparable resolution and do a more precise and balanced visual examination. On the 2002 survey, I've outlined part of the playing fields on The Taft School's campus that appear in all of the aerials:

Taft School Campus Area of Detail
Here's a look at the area of detail through time:

While digitizing the 2002 aerial photo, I also included the film's edge information, which, like many types of film stock, can contain a multitude of coded technical information. In this case, "Wild 15/4" can be found along the film's upper border.

This indicates that a Leica aerial camera was employed in the original photography. Additionally, the dark manner in which water is rendered throughout the town and the clear separation of conifer and deciduous foliage suggest the use of black and white infrared (B&W IR) film. Digitized at 14,000px along the long edge, the film's grain is clearly discernible. But so are Taft's football field end-zone lettering and pole vault area:

2002 Taft School Football Field

For the 2008 survey, color film stock was the choice for the day's flight, which occurred on April 3rd of that year. Though the film's edge information is cropped out of this USGS-sourced image, the file downloads from the USGS site as part of a zip-archived data set. Within this bundle is an invaluable XML file containing interesting technical and process metadata on the image. There you can learn that the USGS outsourced the original shoot to AeroMetric, Inc., who used a Zeiss Intergraph aerial photography film system. Interestingly enough, the metadata's <procdesc> field also notes that the image was dodged. Perhaps this contributed to some of the grain in the final product. Here's Taft's football field once again. Though the sun's angle differs between shots, observe how the track's lane markers and bleacher seats can act almost like comparative resolution targets in the 2002 and 2008 images:

2008 Taft School Football Field

By 2012, the Watertown survey was a born-digital asset. According to the metadata in the shoot's bundled data set, the USGS contracted with Kentucky's Photo Science, Inc. for the work. As an interesting aside, Photo Science and AeroMetric, along with Watershed Sciences, merged the following year to form Quantum Spatial. Once again, a Zeiss/Intergraph imaging system was employed in the 2012 flight. However, this time the device carries the "Digital Mapping Camera (DMC)" designation in its name. As a result, the view from 10,000 feet was not projected onto large format film but was instead recorded by a mosaic of digital sensors, most likely DALSA CCD chips. The solely digital workflow resulted in a very clean signal:

2012 Taft School Football Field

Today's state-of-the-art aerial camera rigs include Leica Geosystems' 250MP DMC III, which can capture 25,000+ pixels across a single CMOS chip, and Vexcel Imaging's UltraCam Eagle II, a system originally co-developed with Microsoft that can use a variety of interchangeable lenses on its multiple CCD sensor arrays.

Finally, if this tale hasn't gotten you fired up yet about aerial imaging, then try Leica's DMC III marketing video below. It's like a great pre-game locker room speech, not necessarily for Taft's football finest, but for those looking to further hone Occam's razor in the name of supreme image-making geekery!

September 30, 2016

Digitizing Large Format Aerial Photography Transparencies: Part I

One of the challenges (and rewards) of managing a digital production lab for a university research library is working with the wide assortment of analog formats that are collected within its archives, special collections, and map library holdings. For instance, we've recently begun conversion work on a 2002 aerial survey of Connecticut that was originally shot on 9"x9" positive black and white film.

Aerial photo transparencies are commonly turned into contact prints soon after the film is developed. And indeed, we have a large collection of these prints that we've digitized over the years at UConn. This type of reflective media can be converted in a couple of ways: you can either scan or digitally photograph the prints at a sufficient spatial resolution. The Federal Agencies Digitization Guidelines Initiative (FADGI) suggests 6,000 pixels across the long dimension of the image area.

When tasked with digitizing original transparencies, however, certain challenges arise. Unlike reflective media, light needs to be shone evenly through transmissives, with a light-sensing device placed on the side opposite the illumination source, in order to capture an image. Aerial photography service bureaus, for example, employ expensive, specialized large format film scanners that can handle the film's actual 10"x10" physical size (both cut and rolled), its lighting needs, and its high spatial resolution requirements.

Leica DSW700 Scanning Workstation, from http://www.gis.si/en/storitve/historicni-aeroposnetki

As a general rule, photographic film contains considerably more visual detail than derivative prints made from it. And indeed, FADGI recommends a considerably greater spatial resolution for digitizing film versus reflective prints in this format: 10,000 pixels across the long dimension for aerial transmissives. So the promise is there for some striking image data if you can engineer a suitable conversion process that is sensitive to both the format's particular handling needs and its visual potential.

For my own initial thinking on workflow architecture, the demonstrated design concepts behind both high resolution multi-shot camera backs and DIY Arduino-controlled film scanners seemed like good theoretical entry points. In addition, I wanted to leverage and re-purpose gear that I already had in the lab. So, I thought, let's start with one of the same light boxes that we use for single-shot medium format film conversion. But instead of using a regular stationary copy stand, let's put the light box on the lab's new X-Y table. Then, let's program the table's movements and the camera's controls to create automated, high resolution mosaics of a given 9x9 aerial transmissive. Finally, let's see if the resulting image tiles can be merged into a single high resolution image of the entire piece of film. If that proves successful, then we'll be able to determine whether or not the image meets FADGI's 10,000-pixel guideline and also better understand the entire workflow's potential for production.
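Before committing to a shooting pattern, a bit of back-of-the-envelope arithmetic helps confirm whether a given camera can cover the film at the target resolution, and in how many tiles. Here is a rough sketch of that planning step; the sensor pixel counts are the published 5DsR values, while the 20% overlap and the use of FADGI's 10,000-pixel minimum as the target are my own assumptions.

```python
# mosaic_plan_sketch.py -- rough planning for tiled capture of a 10"x10" transparency.
# Sensor pixel counts are the published 5DsR values; overlap and target ppi are assumptions.
import math

film_w_in, film_h_in = 10.0, 10.0        # physical size of the 9x9 aerial film (10"x10" actual)
target_px_long = 10000                   # FADGI guideline: 10,000 px across the long dimension
sensor_w_px, sensor_h_px = 8688, 5792    # Canon 5DsR
overlap = 0.20                           # assumed 20% overlap between adjacent tiles

ppi = target_px_long / max(film_w_in, film_h_in)          # required capture resolution
tile_w_in = sensor_w_px / ppi * (1 - overlap)             # net (non-overlapping) tile footprint
tile_h_in = sensor_h_px / ppi * (1 - overlap)

cols = math.ceil(film_w_in / tile_w_in)
rows = math.ceil(film_h_in / tile_h_in)
print(f"Required resolution: {ppi:.0f} ppi")
print(f"Mosaic pattern: {cols} x {rows} = {cols * rows} tiles at {overlap:.0%} overlap")
```

With these placeholder inputs the arithmetic happens to land on a 2x3 grid; shooting at a higher capture resolution, as described below, is where the same calculation shows when an extra row or column becomes necessary.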

Here's what the concept looked like in staging:

9x9 Aerial Photography Film on Top of Kaiser Prolite Scan Lightbox
Masking Light Outside of Transparency
Ready to Go...

Initial shooting was done with a 50MP Canon 5DsR camera and 100mm macro lens combo. The tandem geometries of the camera's aspect ratio and the 10"x10" (actual) size of the 9x9 format meant that I would make the most efficient use of the rig when shooting mosaics in a 2x3 pattern, for a total of six images per transparency. Here's a video of the overhead camera's view of the automated system during a shoot in this configuration:

Image tiles were auto-imported directly into Lightroom off of the tethered camera as they were captured. From there, they took a quick trip en masse to Photoshop for final composite merging. What resulted was an image roughly 14,000 pixels across the long dimension, captured as 16-bit data, which left plenty of latitude for any needed tone adjustments to more fully express the image's dynamic range. This was encouraging stuff!

In Part II of this post, I'll take a closer look at this file and compare it with other aerial photographs of the same region of Connecticut taken over time with different imaging technologies.

June 29, 2016

An Automated X-Y Table System for Photo Stitching: Initial Impressions

X-Y Table

I recently took delivery of an automated X-Y table system for our digital production studio at UConn's Homer Babbidge Library. It marked the completion of a process that began back in February when I had first contacted Michael Ulsaker of Ulsaker Studio with my initial specs for a custom rig. The details of how we worked out the final design, its components, and its system integration and automation are topics that I'll be briefly speaking about at Stanford's Cultural Heritage Imaging Professionals Conference next month in the Bay Area.

X-Y tables are used in digital capture to create overlapping image tiles of items that cannot be photographed at a sufficient spatial resolution in a single shot. One of the system's main attributes is that once the analog original is placed on the table surface, it doesn't need to be manually moved from shot to shot. Instead, the entire support surface moves programmatically beneath a stationary overhead camera. This results in much less handling by the photographer, far greater throughput, and less wear on the mostly old, mostly oversized, and sometimes brittle formats that require this capture technique.

Additionally, since the table's programmable logic controller can mathematically calculate a consistent percentage of overlap among adjacent image tiles, the resulting shots are very precisely positioned. This, in turn, helps Photoshop's Photomerge algorithm do faster, more accurate image stitching in final post-processing.
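To get a feel for the arithmetic such a controller performs, here is a small sketch that turns a desired overlap percentage into X and Y step distances and a total shot count. The map size and 400ppi figure come from this post; the sensor pixel counts, orientation, and 25% overlap are hypothetical placeholders.

```python
# xy_step_sketch.py -- step distances for tiled capture on an X-Y table.
# Map size and 400 ppi come from this post; the sensor pixel counts, landscape
# orientation, and 25% overlap are hypothetical placeholders.
import math

map_w_in, map_h_in = 49.0, 36.0
ppi = 400
sensor_w_px, sensor_h_px = 8688, 5792     # hypothetical camera
overlap = 0.25

tile_w_in = sensor_w_px / ppi             # footprint of one shot on the table
tile_h_in = sensor_h_px / ppi
step_x_in = tile_w_in * (1 - overlap)     # how far the table moves between columns
step_y_in = tile_h_in * (1 - overlap)     # ...and between rows

cols = math.ceil((map_w_in - tile_w_in) / step_x_in) + 1
rows = math.ceil((map_h_in - tile_h_in) / step_y_in) + 1
print(f"Tile footprint: {tile_w_in:.1f} x {tile_h_in:.1f} inches")
print(f"Step: {step_x_in:.1f} in X, {step_y_in:.1f} in Y -> {cols} x {rows} = {cols * rows} shots")
```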

Here's a short video that I shot of the newly installed automated system in action. As can be seen, the setup leverages design aspects from the field of robotics and integrates them into studio photography. The 1949 map of New Haven County being captured here is roughly 36" x 49":

After a subsequent trip to Photoshop, the 400ppi image tiles are automatically aligned, blended, and stitched into a new unified image by the software. The map's edges are in fact not all cut straight, which the composite image accurately depicts...

1949 Map of New Haven County
© Michael J. Bennett