What Is (or Isn't) Good Enough? Part 2
In Part 1 of this series, I began a broad inquiry into some of the longer-term ramifications of recent trends in cultural heritage digital capture. For instance, what results when speed and outsourcing sit at the absolute forefront of memory institutions' decision making? What is "good enough" by today's standards? Or, perhaps more interesting, what comes of that which isn't good enough?
Last month, in his New Yorker article "The Artful Accidents of Google Books," Kenneth Goldsmith examined some of the more startling phenomena that can creep into sped-up endeavors like Google's mass-digitization initiative. He found a growing group of artists and digital-artifact collectors leveraging the residual errors, from both physical capture and post-processing algorithms, that such speed and less-than-optimal quality control can bring. Here are just a few examples:
Google Earth has also had its share of anomalies...
Though Goldsmith's main focus was on what can be done creatively with such raw material, he does go on to make the point that, "Because of the speed and volume with which Google is executing the project, the company can’t possibly identify and correct all of the disturbances in what is supposed to be a seamless interface. There’s little doubt that generations to come will be stuck with both these antique stains and workers’ hands."
Both the Google Books and Google Earth examples are similar in that they exhibit what we perceive as errors. Their errors, however, differ in origin and in remediation. Google Books image problems derive mostly from capture errors related to excessive workflow speed, while Google Earth errors are mostly algorithm errors in its texture-mapping implementation. Algorithms can be fixed through code tweaks, and such tweaks can often be applied globally, solving many individual problems in a single batch. Capture errors, on the other hand, are mostly unique and require individual re-capture, along with all of the prep, set-up, and re-processing steps that entails. Hence, Goldsmith's assumption that we will be "stuck" with the technicians' latex-clad fingers through time is most likely accurate. That can be a tough realization for a memory institution involved in such outsourced initiatives. Fingers and institutional ownership stamps don't go well together. Throw into this equation patrons' time-honored trust in the authority and comprehensiveness of libraries' and archives' collections, and you have a possible friction point.
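To make that distinction concrete, here is a minimal sketch in Python of the two remediation paths. Every name in it is hypothetical; it assumes only that raw captures are retained and that capture defects can be flagged.

```python
# Hypothetical sketch (all names invented) of the two remediation paths:
# an algorithm error is fixed once in code and re-applied to every stored
# capture in a batch, while a capture error must be queued for re-scanning.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PageImage:
    item_id: str
    raw_capture: bytes          # original scan, assumed to be retained
    has_capture_defect: bool    # e.g., a technician's hand in frame


def corrected_post_process(raw: bytes) -> bytes:
    """Stand-in for a repaired processing step (deskew, texture mapping, etc.)."""
    return raw  # real correction logic would go here


def remediate(pages: List[PageImage],
              post_process: Callable[[bytes], bytes]) -> List[str]:
    """Re-derive every page from its raw capture; return items needing re-scan."""
    needs_recapture = []
    for page in pages:
        if page.has_capture_defect:
            # No code tweak can recover content hidden at capture time.
            needs_recapture.append(page.item_id)
        else:
            # Algorithm errors are fixed globally by re-running the
            # corrected step over the retained raw captures.
            _ = post_process(page.raw_capture)
    return needs_recapture
```

The asymmetry is the point: the first branch produces a work queue of physical objects, the second is just a batch job.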
Or not? Certainly the argument can be made that this is simply the price you pay for "good enough," and that the overall good of Google's efforts outstrips the bad that is bound to rear its head when the law of averages plays out across the massive scale of the project. However, as Paul Conway noted to the audience at the IS&T Archiving 2013 conference last April, most mass-digitization images are good enough, unless you are the unlucky scholar who needs to read and cite the content from one of the pages with fingers all over them.
One of the common shibboleths of this work is that re-capture of archival objects is always an option if you need it. Reality can inconveniently say otherwise. Large-scale initiatives such as Google Books can be closed-loop systems engineered for volume-level, linear, one-time processing. Once an item is in the system, it can be difficult to replace single page elements, because the existing workflow architectures aren't necessarily built to deal retrospectively with isolated (but numerous?) exceptions. Additionally, many archival items are incredibly brittle and fragile, so the opportunity you have today to digitally convert such objects may be the best, or the last, chance you will ever get. It is often wise to make the most of that opportunity if you can.
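As a purely hypothetical illustration of that kind of closed-loop architecture (this does not describe Google's actual pipeline), consider a volume-level, one-pass ingest sketch: the unit of work is the whole volume, and no per-page entry point survives for retroactively swapping in a corrected scan.

```python
# Hypothetical one-pass, volume-level ingest pipeline; every name is invented.
# The structural point: outputs are sealed per volume, so replacing a single
# corrected page later means re-running or hand-patching the whole volume.

from typing import Dict, List


def post_process(scan: bytes) -> bytes:
    return scan  # image cleanup stand-in


def run_ocr(pages: List[bytes]) -> str:
    return ""  # full-text stand-in


def build_delivery_package(volume_id: str, pages: List[bytes], text: str) -> Dict[str, object]:
    return {"id": volume_id, "pages": pages, "fulltext": text}


def ingest_volume(volume_id: str, page_scans: List[bytes]) -> Dict[str, object]:
    """One-time, linear processing of an entire volume."""
    pages = [post_process(scan) for scan in page_scans]
    text = run_ocr(pages)
    # Note there is no per-page handle here and no replace_page() step;
    # the delivery package is the end of the line.
    return build_delivery_package(volume_id, pages, text)
```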
With all that said, there is no doubt that it remains imperative that we work to scale up digital content creation to meet expanding usability and digital scholarship needs. However, faced with the broadening fidelity requirements that such scholarship increasingly demands, the ever-improving display devices that allow for deeper examination, and the large costs of digital preservation, we have to strike a balance. In a world of ever-tightening budgets, it is difficult to justify the preservation of avoidable errors.