Digital Humanities 2011 and the elephant in the tent

July 15th, 2011 § 2 comments § permalink

I couldn’t keep up with all that was going on at the Digital Humanities 2011 conference at Stanford last month, but I thoroughly enjoyed it, learned from it, and found myself thinking about unexpected connections while trying to make sense of it. The three key things I kept thinking about are: scale, materiality, and agency. The semi-processed notes that follow are not so much a record of the conference as they are an experiment to see if a succession of points might hint at an arc.

The paradox of Digital Humanities is that it is a term that attracts more interesting people than it can stably support through careful definition of a coherent field, clear identification of an object of study, or singular commitment to a fixed methodology. The theme of this year’s Stanford conference, the “big tent,” was meant to be welcoming to all, and I think on the ground it succeeded quite well. It wasn’t an occasion of anxious definitional boundary-drawing. People seemed willing to do what they were doing with confidence not only in their own work, but in its acceptance by others within the big tent. This is a strength not to be underestimated, I would like to think, and yet it could be difficult to see from a distance, outside the tent altogether.

A conference is a difficult thing to claim as an object of knowledge. There were more than 300 participants and usually four concurrent sessions. There is a substantive 417-page book of abstracts (closer to full papers than the term “abstracts” might imply; here in PDF), and also a #dh11 Twitter hashtag that I did not manage to keep up with. Amid the divergent particulars of papers, posters, and projects, common themes or problematics kept emerging.

  • David Rumsey’s opening keynote showed us implicitly how unsatisfying many of our existing tools are through a masterful demonstration of the kind of digital experience that would enable exploration across multiple levels of scale, seamlessly going from small map scales to large ones, from thumbnails to full screen, from close reading to distant and back, without artificially drawing sharp distinctions between macro-scale discovery, broad analytic purposes, careful examination of detail, and speculative browsing.
  • I understand that a perennial question at this conference has been one or another version of “What is text?” in the light of digital text encoding practices. Statistical text mining approaches are getting a lot of attention now, and perhaps put these questions in a new light. Tastes differ in such things, as do philosophical commitments, but I thought generally the people I heard wore their humanities practice pretty easily and confidently, whatever their methodology. They were quite willing to explore how texts and meaning are contextually produced, not assuming a positivistic mechanism or scientistic magic somehow inherent in digital bits, even as they were in some cases interested in looking across different levels of scale. (I won’t say much directly about “distant reading” here, but recommend this recent post by conference co-organizer Matt Jockers, whose presentation at the conference was quite impressive.) The conference wasn’t quite what one might imagine of digital humanities solely from reading about “culturomics” in the popular press. In the midst of the conference participants, what’s interesting is not so much what’s new as how much a horizon of the humanities can go without saying among people with relatively diverse backgrounds and occupational and disciplinary commitments.
  • I heard the conclusion of a paper Julia Flanders presented on behalf of herself and Jacqueline Wernimont, trying out the idea that textual markup could conceivably be a practice of philosophical exploration of “possible worlds.” An insightful comment during the question period suggested an analogy to Rumsey’s layering of historic maps, which Julia enthusiastically endorsed. To draw out just one of the implications of that comment: it might be helpful to people outside digital humanities to understand the humanistic character of practices like text encoding and database modeling by analogy to cartography. We can value the work of cartographers without getting particularly confused by the fact that the map is not the territory, and we can appreciate the cultural and rhetorical as well as technical “making” that goes into cartography. And we can accept that maps of different scales have different purposes, without pretending that differently scaled maps necessarily invalidate each other.
  • The panel on materiality was well attended and energetic. Jean-Francois Blanchette, Johanna Drucker, and Matt Kirschenbaum all spoke well about the material basis of technology, against an antihistorical techno-fantasy of disembodied bits. A couple of Blanchette’s slides reminded us of the shipping container industry as analogue and material support for digital technology, and that image seemed to give a particularly vivid ethical and environmental grounding to our sense of all having a stake in materiality. Matt Kirschenbaum talked about the materiality of born-digital archives, the physicality and historical boundedness of hardware, and the importance of engagement with archives, archivists, and materiality in digital humanities scholarship.
  • Johanna Drucker spoke brilliantly at a remarkable pace in what I understood to be advocacy of theoretically mature critical intellectual practice. I won’t pretend to be able to summarize her argument adequately, but toward the end of her talk she included theoretical gestures reminding us of the non-self-identicality of cultural objects, and of the idea of parallax, which I perhaps half-understood as looking at an object simultaneously from different perspectives. And I felt like I was hearing these gestures simultaneously from at least two perspectives: I had glimpsed these ideas before, and was intrigued by them, because something about them sounds right to me. Yet I was also hearing these gestures with sympathy for people who might be impatient of theory, and who might take non-self-identicality in particular as simply a logical offense.
  • Fred Gibbs gave a superb, understated presentation based on his work with Dan Cohen using text mining and visualization techniques to explore questions about Victorian intellectual history. He argued for starting with simple questions, simple tools, skepticism, modest results, and scholarly transparency in making code and data available. He was not cheerleading for his methods, but testing them, playing with them, and evaluating them. He showed how the same underlying question might lead him to produce different visualization graphs depending on whether the source data was just book titles or Google’s full-text n-grams. In the context of my conference experience Fred’s presentation turned into an especially significant highlight, for two reasons. 

First, his practice sounded to me like a pragmatic historian’s version of what I understood Johanna Drucker to be calling for. Gibbs produced and then wondered about three graphs of the “same” phenomenon that turned out not to be conclusively the same, for reasons that bear further exploration. I don’t know if this is what non-self-identicality means, but I appreciated that Gibbs is not sitting still, debating which side to pick in a false dichotomy between historical research object and digital method; he’s productively moving between imperfect method and uncertainly mediated object in an active hermeneutic process that he is transparently implicated in and willing to share, and he is unapologetic that the process isn’t over and its ultimate results cannot yet be fully and comfortably spoken for, because there is still something to learn.

Also, Gibbs was quite frank about the nature of his use of digital tools of inquiry in this process. He doesn’t need particularly fancy and complicated cyberinfrastructure to ask some of his questions, he just needs to be free to write short scripts and see results in quick cycles of exploration. I started to wonder whether having that kind of practice in mind, and then asking questions about scaling up, shouldn’t happen more often. The risk of creating sophisticated, methodologically committed digital tools in anticipation of supporting future scholarship is that unless there is the flexibility for quick iteration and change, complex tools might end up silencing considerable fields of potential evidence by hard-coding initial presumptions that will be expensive or difficult to change.
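
To make that risk concrete by contrast, here is a minimal sketch of the kind of quick, disposable exploratory script Gibbs described. It is my reconstruction rather than his actual code, and it assumes a hypothetical local file “titles.csv” with two columns, year and title:

```python
# A minimal sketch of a quick exploratory script (a reconstruction, not
# Gibbs's actual code). Assumes a hypothetical two-column CSV: year, title.
import csv
from collections import Counter

def term_share_by_year(path, term):
    """Return the share of each year's titles that contain the term."""
    hits, totals = Counter(), Counter()
    term = term.lower()
    with open(path, newline="", encoding="utf-8") as f:
        for year, title in csv.reader(f):
            totals[year] += 1
            if term in title.lower():
                hits[year] += 1
    return {year: hits[year] / totals[year] for year in sorted(totals)}

if __name__ == "__main__":
    for year, share in term_share_by_year("titles.csv", "science").items():
        print(year, round(share, 4))
```

The point of a script like this is not its sophistication but its turnaround time: change a line, rerun, look again.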

  • I made the most of an opportunity to find out what would bring digital poets, a community I’m interested in but don’t know much about, to the digital humanities conference. It seemed to me that the digital poesis folks, much like Gibbs, are using technology critically and experimentally, fiddling with knobs to see what happens, and adjusting based on what they find. John Cayley’s exploration of language through Google searches, looking for short sequences of words that appear together in prior usage (but not too often) to incorporate into poems, catalyzed discussion around Google, which was perhaps a somewhat underacknowledged elephant in the big tent throughout much of the conference. Cayley understands himself to be simultaneously exploring the never-fully-accessible expansive world of language on the web and also the imperfect, essential mediating tool of the search engine. He drives his own exploratory process in a manner not so different from Gibbs’s. He doesn’t use Google’s n-gram data; as a matter of principle and method he insists on getting his search counts from “The Mouth,” his term for the simple Google web search box. Cayley spoke of chafing under Google’s terms of service and observed that Google didn’t want him to be a robot. For his purposes, it’s only as a robot, or a potential fast-typing equivalent of a robot, that he effectively has any agency at the interface. Debating culturomics on all sides can make it hard to see the immediacy of this sublime paradox. But it’s not a question for some speculative future. It matters now.
  • Plenty of presentations, papers, and posters involved various kinds of digital work in modeling, visualizing, organizing, and presenting human cultural materials. (My own poster [PDF], too, relates to a project that fits this general description.) Many people working in digital humanities understand that visualization tools and interface design are critically important, and there’s much more work to be done. But I found myself wondering how design necessarily looks different from the perspectives of anticipated future use and immediate active inquiry. Often, with “end users” in mind, we assume that applications on the web or elsewhere bind interfaces and data pretty tightly, and that a primary issue is the quality and clarity of the interface. It does not always go without saying that any interface can be a bad interface for scholarship if it’s the only one. The Linked Open Data movement is part of an answer to moving beyond this. The focus there still is mostly on the data publishing side, understandably. But on the demand side, there is not nearly enough exploration yet of paths toward researcher- or reader-driven (as opposed to user-tested) tools for inquiry.
  • Perhaps the most materially significant statement about the future of humanities research at network scale came during a reception honoring the establishment of the HathiTrust Research Center, which will provide computational research access (in some manner to be determined) to the full-text collections of millions of books in the digital collections of HathiTrust. This is an important and exciting initiative, but I got the sense that the vision is still well out ahead of what one would want in a community of practice able to provide informed feedback at appropriate scale.

Google’s Jon Orwant, who has put considerable work into engaging with the digital humanities community in the past few years, explained that the HTRC will support “nonconsumptive” research, meaning computational research on the aggregate of millions of digital volumes, including those in copyright, without providing full-text access that could be in potential violation of copyright. For rights management reasons and also for material engineering reasons, the research architecture will move the computation to the data. That is, the vision of the future here is not one in which major data providers give access to data in big downloadable chunks for reuse and querying in other contexts, but one in which researchers’ queries are somehow formalized in code that the data provider’s servers will run on the researcher’s behalf, presumably also producing economically sized result sets. The economic logic makes sense, given bandwidth limitations and data collections growing at the scale of petabytes. But it seems likely there are intellectual consequences here for research in the humanities that aren’t yet easy to envision. For ordinary humanities researchers who are already not in a position to fix or route around the interfaces thought up by those of us who in good faith imperfectly make such things, what does it mean to announce that the way of the world will be to “move the computation to the data”? I can imagine many people in the humanities thinking: there were websites before, there will be websites after, what difference will this make to me?

If “moving computation to the data” is going to work as an active humanities research practice, the expressiveness of computation as inquiry will have to move toward researchers and readers. Whatever we mean by “computation,” that is, can’t be locked up in an interface that tightly binds computation and data. We readers already need (and for the most part do not have) our own agents and our own data, our own algorithms for testing, validating, calibrating, and recording our interaction with the black boxes of external infrastructure. The black boxes, too, are time-bound artifacts of culture, and must be read, within and without.
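
As a thought experiment only, here is a purely hypothetical sketch of what the researcher’s side of “moving the computation to the data” might look like. The endpoint, job format, and operation name are all invented for illustration, since the actual HTRC interfaces were still to be determined:

```python
# Purely hypothetical: ship a small, declarative description of a computation
# to the data provider, which runs it beside the corpus and returns a compact
# result. The endpoint, job format, and operation name are all invented.
import json
import urllib.request

def submit_job(endpoint, job):
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(job).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

job = {
    "collection": "full-corpus",            # invented identifier
    "operation": "term_counts_by_year",     # invented operation name
    "params": {"term": "telegraph", "from": 1840, "to": 1900},
}
# result = submit_job("https://example.org/research-jobs", job)
```

Whatever an inquiry needs that cannot be said in a job description like this is silently out of reach, which is the expressiveness worry above in miniature.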

The pedagogy of collaboration for self-herding autodidacts

August 8th, 2010 § 7 comments § permalink

It’s an unlikely—even an absurd—premise for a summer institute: twelve people will come together for a week to see if they can make some kind of digital tool that will be useful in the humanities. No one will know in advance what tool they will build, what technologies they will draw on, or who will take responsibility for which part of the project. It’s not that the organizers will surprise them—the organizers also don’t have any of this information, because there are no advance decisions. All the details will be decided by the group only after they meet. And many of them have never met each other before.

Having had the privilege of participating in One Week | One Tool, an NEH-funded summer institute organized by the Center for History and New Media at George Mason University (July 25–July 31, 2010), I’ll add my voice to those who have said it was an extraordinary success. Sunday night we met. Monday we heard from CHNM staff about their work, and then we started brainstorming tool ideas. On Tuesday we narrowed the ideas, voted, revoted, selected one, chose teams and leaders, and got to work. By Saturday we had an early version of a workable tool, a name, a logo, a site to promote it, and a provisional plan for the future. We were prepared to launch Anthologize.

With much attention these days to the institutionalization of Digital Humanities, whatever it is taken to mean, one might reasonably ask how the One Week | One Tool project functions as pedagogy. Aspiring digital humanists sometimes ask what they need to know, which technological specialization they should learn first. Databases? Web design? GIS? XML? Ruby? PHP? Microsoft Word macros? The One Week project set out to teach none of these. In terms of the pedagogy of specific technologies within the humanities, One Week | One Tool was proposed, as Dan Cohen reports, as a summer institute seemingly about nothing in particular.

Tom Scheinfeldt has written that the Center for History and New Media judges tools by their use, and undoubtedly CHNM wanted the tool created by One Week | One Tool to be widely used. Yet after more than a year of preparation they began the week not having the slightest idea what the group would come up with, and without establishing an overt mechanism for ensuring that the group’s decision would be one that CHNM would be happy with.

If they had had their own clear vision in advance of what the tool should be, all sorts of responsible decision-making would have immediately followed from that. They would have wanted to investigate carefully what technologies were best for development, and they would have looked for people with specialized strengths in just those technologies. They would have prepared pedagogical materials thoughtfully focused on the purposes and technologies relevant to the tool.

But they did none of this. It may seem irresponsible, but it wasn’t. Judging by outcomes, it’s clear that almost magically, somehow, and consistent with CHNM’s own track record, they managed to demonstrate an approach that worked much better than an early-risk-minimization strategy would have. It’s a tribute to the vision of both CHNM and the NEH Office of Digital Humanities that this experiment happened at all. If it was a crazy idea, it was “crazy like a fox,” maybe, in the words of Boone Gorges (YouTube video at 3:26).

There are many thoughtful accounts of the project and its process that describe the team’s effective rapid self-development in terms of risk, trust, humility, and leadership. When I first heard of the initiative, my own experience with project management made me skeptical, fascinated, and hopeful. I was especially skeptical about the idea of deferring the decision about the tool until Tuesday afternoon, two days into the week, and months after the applications were invited and the participants chosen. I would never have imagined setting up a project in that way.

But in retrospect it now seems obvious how brilliant and essential that structure was. It could well have been just as brilliant had our group of twelve been skipped over for any other set of a dozen applicants. By advertising that the One Week | One Tool project had no plan of its own, I now imagine, the call must have ensured that applicants were likely to be simultaneously fearless, pragmatic, and capable of humility and trust. Had we maximized our own personal risk management strategies, we would not have applied. We were committing ourselves not just to learning something, but to creating something together that our names would go on. Our success or failure would be dependent on eleven strangers (for all we knew, and this was often the case in fact) who happened for some reason to be attracted to the same opportunity, and who matched whatever inscrutable criteria the people at CHNM might have had in mind.

The pedagogy of One Week | One Tool was grounded in tacit values that are recognizably characteristic of people who are drawn to Digital Humanities, and yet much of that culture is not necessarily overtly tied to technology at all. There is a kind of geeky communitarian anarchy, a tropism toward the values captured in the phrase “rough consensus and running code,” that lends itself to a paradoxical kind of pedagogy: self-taught lessons in group dynamics for a team of pragmatic collaborative autodidacts. With the right group, or the right expectations and balance of uncertainties, twelve people can all be simultaneously service-oriented and capable of exercising leadership, flexibly and as needed in pursuit of a common goal.

However intensely production-focused One Week was, and however use-focused its resulting tool, as a pedagogical intervention it raises some important questions for which the answers don’t seem at all obvious yet. Was this a pioneering laboratory experiment under exceedingly rare, carefully prepared conditions? What would it take for its lessons to be replicable in other contexts? I would be very interested to see this dimension of the project discussed further. Much as I love the superb folks at CHNM and the great work they do, and as impressive as their marketing savvy is, their successes don’t seem to be aimed primarily at burnishing their unique brand for its own sake.

If their pedagogy is going to be successful, their working assumptions will have to appear to the rest of us much less unusual and their achievements less radically innovative and unexpected in contrast to common practice. There is power in the premise that there are many latent groups of a dozen people ready to imagine themselves into existence to get something useful done.

As for trust

May 28th, 2010 § 1 comment § permalink

How knowledge
  slips by in-
     formation faith
in danger and rumors
  of the credulous
     like you
        would
          not
          believe.

Genealogies of old newspapers

May 13th, 2010 § 9 comments § permalink

Even before the Internet disrupted their environment in ways that are still unfolding, newspapers were complicated things, at once periodical publications, businesses, and devices of social organization and communication. The names of the best-known newspapers carry an aura of institutional solidity — the New York Times, the Wall Street Journal — but the history of newspapers includes many locales, many more papers, some of them short-lived, and many changes in ownership, editorial leadership, and political stance. Mergers and renamings have left their stamp on names like Star-Ledger, Journal-Constitution, Post-Gazette. We cite historical newspapers by name and date, usually ignoring the complexities of daily variations in editions and other irregular publication patterns that made newspapers awkward misfits in book-oriented bibliographic contexts long before digital media added new complications. Editorial page writers and historians have often employed without apology the convenient social fiction that a newspaper is a continuous identity of singular agency, judging that a more precise account would be hopelessly unwieldy. But they have been in a position to know how much of a fiction it is.

Ten years ago I had the good fortune to participate in the preparation of the Encyclopedia of Chicago. The editors sought to supplement the alphabetical entries with a number of new maps, tables, and charts. One of my colleagues prepared several charts to visualize highlights of the history of daily papers in English, in other languages, and in the metropolitan region beyond Chicago. The scope was limited to dailies to keep the research and visualizations manageable.

Last year the Library of Congress’s wonderful Chronicling America project announced a major release of its data on the Web, with a web-friendly API (Application Programming Interface) to provide access to their data, including links between data. Anything with a three-letter acronym can sound complicated, but the essential idea of their API is quite simple and familiar: every major kind of data resource has a bookmarkable Web address, and the document found at that URL can have a structure suited to its content and links to related resources. Their API is intelligently organized to serve human readers and also, importantly, to provide information to machines built by others, which can serve even more readers at one remove from the Library of Congress team, providing services the Library doesn’t have to anticipate in advance.
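
As a small illustration, here is a sketch of reading one bibliographic title record through that API. The URL pattern, appending .json to a title’s address, follows the documented convention; the example LCCN and the name of the successor-link field are my assumptions and may not match the live data exactly:

```python
# A sketch of fetching a single Chronicling America title record as JSON.
# The LCCN below and the "succeeding_titles" field name are assumptions.
import json
import urllib.request

def fetch_title(lccn):
    url = "https://chroniclingamerica.loc.gov/lccn/%s.json" % lccn
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

record = fetch_title("sn83030214")  # an example LCCN
print(record.get("name"), record.get("start_year"), record.get("end_year"))
for successor in record.get("succeeding_titles", []):
    print("succeeded by:", successor)
```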

Chronicling America has more than a million newspaper pages digitized through the National Digital Newspaper Program (NDNP), and will grow to as much as 20 million pages. It also has approximately 140,000 bibliographic title records gathered from libraries across the U.S. There is a lot there. As a way of exercising my uneven digital humanities skills and engaging particular topical interests, I started, as a personal project, speculatively exploring a small subset of just the bibliographic records. So here’s a report on playing around:

Interestingly, one of the metadata fields in these bibliographic records is a pointer to other records representing the successor to the paper described. Making use of the friendly API, I downloaded about 1,700 bibliographic records for newspapers associated with Chicago to start to play with a subset of data more systematically than was possible by searching for single titles and traversing the links in a browser. [Brief technical note: this exploration was hacked together iteratively using Python, SPARQL queries, and Graphviz.]
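
Concretely, the record-to-graph step can be sketched along these lines, consistent with that workflow though not the actual scripts; the record field names here are assumptions:

```python
# Emit a Graphviz DOT file from downloaded records (a dict mapping LCCN to
# parsed JSON): one oval node per bibliographic record, one arrow per
# successor link. The "name" and "succeeding_lccns" fields are assumptions.
def records_to_dot(records):
    lines = ["digraph succession {", "  node [shape=oval];"]
    for lccn, rec in records.items():
        label = (rec.get("name") or lccn).replace('"', "'")
        lines.append('  "%s" [label="%s"];' % (lccn, label))
        for succ in rec.get("succeeding_lccns", []):
            if succ in records:  # keep edges within the downloaded subset
                lines.append('  "%s" -> "%s";' % (lccn, succ))
    lines.append("}")
    return "\n".join(lines)

# Render with, e.g.: dot -Tpng succession.dot -o succession.png
```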

If we represent each bibliographic record as a node (an oval), and we draw arrows between these nodes to indicate which record is succeeded by another record, we get a directed graph that at first glance seems to amount to a genealogical chart for a newspaper, the skeleton of a narrative that exists in no single bibliographic record, but emerges from linkages across a small subset of records. For example, we see that the Western Herald became the Prairie Herald in two quick steps in the late 1840s.

That’s an easy one, and there are many like it among the 177 graphs with two or more nodes based on the Chicago records I extracted. There are also a small number of more dense graphs with multiple branches representing mergers and renamings, like the relationships between Swedish papers over a 75-year span (click through for a larger image):

It matters, though, that these nodes really are bibliographic records, and not newspapers. The linkages between records are imperfect. Some “successor” relationships associate related records that are not actual successor publications, but predecessors or coexistent publications that ended up merged. Catalogers created these records and their links for the sake of aiding discovery of library materials by researchers in particular contexts. Somehow the Southtown Economist ended up tangled in a bibliographic network that looks more complicated than the underlying publication history seems to have been:

Still, if we understand the origins of the data and are willing to revise our understanding of what the arrows mean, we can infer some outlines of stories that suggest a tension between neighborhood identities and centralizing business considerations.

Because of the diverse origins of these records, there are also duplicates and inconsistencies. The Chicago Tribune, for example, has many bibliographic records describing what researchers would consider the same paper, and the successor relationships recorded in metadata don’t bring these together in a single connected graph:



Behind each of these records is an additional set of holdings records, and beyond that a set of institutional contexts, drawers of microfilm or even shelves of bound paper. These records were created to serve discovery in the research process. They weren’t meant to be graphed and read like this, exactly.

The exercise of looking at these graphs makes me wonder about the many stories and important forgotten histories that must be undiscovered in the millions of pages of old news. But it also makes me think about how the mechanics of a research process intended to lead eventually to interpretation is already necessarily a process of interpretation from the beginning. Metadata is created to serve discovery, but once created, it becomes evidence, and how it serves as evidence is beyond the control of its catalogers and creators.

I can imagine network graphs like these provoking three different kinds of story-construction simultaneously, in different dimensions (none of them new, they have long histories). We can look at the metadata as historical data. Even before we get to see a page of newsprint, we can use aggregated metadata not solely for the sake of discovery, but to look toward history, constructing provisional stories. Yet to appropriately discount and trust the data in this way, we need also to read the graphs with an eye to quality control, to look through the metadata to the circumstances of its production and aggregation, envisioning how what we’re seeing is the evidence of disparate and evolving library processes converging at network scale over spans of decades. And finally, we can look at these graphs as maps offering a field of prospective narratives of possible research paths. Any one of these nodes may have once been a card in a catalog. Each is now its own web page, each is attached to holdings records. To find what we are looking for we may have to traverse a long chain of such records, reading and filtering, judging what’s likely to be the main path and what will be a distraction.

Without further comment, here are a few more interesting graphs:


Considering the source

March 11th, 2010 § 0 comments § permalink

I’m looking forward to attending the Great Lakes THATCamp unconference later this month. What follows is an extended discussion of some of the things I have been thinking about. It traces a path from command-line tools to speculation about the nature of evidence in humanities research, especially from the perspective of history and of representing historical sources in digital form.

I’m currently in the midst of a project that involves what might be called hack-enhanced editing (aspiring toward inquiry-based hacking), preparing a digital collection of tens of thousands of articles that were translated and organized in an idiosyncratic paper database back in the 1930s, the Chicago Foreign Language Press Survey. (Images available at the Internet Archive; transcription project site not available, yet.)

With regular expressions, short scripts, bits of XPath and XSLT, and some venerable tools like ‘make’ and ‘grep’ (now approaching its fourth decade), I have gradually been building up tools and workflows to check quality and normalize some data-like elements across tens of thousands of transcribed articles. It’s not a complete Programming Historian approach, but I would be glad to share a few relevant geeky parts of this process if there’s interest, and hear ideas or learn methods from others (where else but a THATCamp?), although it’s not necessary to get into too many details for the sake of discussing the more general questions that this can lead to.
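
For a flavor of it, here is a representative sketch, not the project’s actual code, of one such small tool: tally the date-like strings found across transcribed files, so that the full range of variation is visible before deciding what, or whether, to normalize:

```python
# Tally date-like strings across transcribed article files (a sketch; the
# pattern is illustrative, not the project's real rule set).
# Usage: python date_variants.py path/to/transcriptions
import re
import sys
from collections import Counter
from pathlib import Path

MONTH_STYLE = re.compile(r"[A-Z][a-z]+\.? \d{1,2}, (?:18|19)\d\d")

variants = Counter()
for path in Path(sys.argv[1]).glob("**/*.txt"):
    variants.update(MONTH_STYLE.findall(path.read_text(encoding="utf-8")))

# The frequency table surfaces the range of variation in the wild.
for form, count in variants.most_common(25):
    print(count, form)
```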

There’s a kind of productive bootstrapping Catch-22 involved in editing in this way. It’s not possible to make informed decisions about certain aspects of how to structure the digital representation until the content is available in an initial transcription. We can’t decide what ought to be normalized, or how practical that will be to attempt, until the full range of variation is known. Exploring that variation is a matter of creating small automated tools to probe latent structures and data values. So editing is not a merely technical activity, and it’s also not a matter of searching to find information, as if the resource were a transparent carrier of historical data. Editing is more a process of asking, in all sorts of ways, what is this thing, what could imaginably go wrong with it, and in whose judgment would it count as wrong?

Technical details aside, “what is this source?” is a fundamental question in many contexts, including just about any browser tab. With a queryable database, it seems to me that there may be little overt difference between querying a set of data for quality-control purposes, observing patterns and seeking inconsistencies and errors to be edited out, and performing essentially the same query to explore possible historical hypotheses about the data. Some “data errors” might themselves amount to historical evidence.
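
A sketch against a hypothetical schema shows how literally the same query can be read both ways: an article dated to an impossible year flags a transcription error to be fixed, while a genuine spike in one language’s coverage is a pattern worth historical attention in its own right:

```python
# One query, two readings (the schema and database file are hypothetical):
# the language-by-year distribution serves quality control and inquiry alike.
import sqlite3

conn = sqlite3.connect("press_survey.db")  # hypothetical database
rows = conn.execute(
    """SELECT language, substr(date, 1, 4) AS year, COUNT(*)
       FROM articles
       GROUP BY language, year
       ORDER BY language, year"""
)
for language, year, n in rows:
    print(language, year, n)
```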

We tend to think of searching as an activity directed at finding documents, representations of documents, or information. But in practice a certain amount of searching is better described as querying, not simply to find what a database can point to, but to size up the database itself, to better understand the nature of its mediation. Any interaction with an information source, digital or otherwise, involves a certain amount of figuring out what its limits are, what it ignores or takes for granted, what kind of processes produced it. When I can’t find what I’m looking for, I need to be prepared to be intellectually engaged at many different levels, not knowing in advance which will turn out to be relevant. It could be that I mistyped a term; it could be that the evidence I thought would exist is somewhere other than where I’m looking for it; it could be that the imagined evidence simply does not exist, or perhaps never existed. Making sense of a list of search results is not just a matter of searching and finding. It can in itself lead to provisional hypotheses about historical processes.

These are not new observations within digital humanities, but I think there is more to be done to keep drawing out the humanities continuities amid the perpetually ascribed novelties of the digital. What is the database when it’s understood as itself a kind of historically rooted document? And can we constructively make databases that accept their own document-like nature and don’t presume to evade history altogether? (I’m eager to see pragmatic Linked Data thrive as grounded documents, and I’m deeply skeptical of dreams of a Giant Global Graph if it is thought of as a kind of atemporal central vat containing a slurry of deracinated triples — standardized assertions of fact — which it doesn’t necessarily need to be.)

This line of thought makes me wonder if a language of evidence could help clarify issues that get muddied in item-focused battles over originals and digital surrogates, vexations over authority and authenticity, and perceptions of innovation in visualization. Historical inquiry has always looked past single documents toward pattern, with an understanding that the pattern often is not a property of any single document alone. Evidence has never just been in discrete items and their metadata; the quality of evidence depends on the quality of the questions we ask of it.

When we talk about search, it is convenient to make a simplifying presumption that what we are doing is looking for items already known by their type. But inquiry is a hermeneutic venture in which a set of questions is iteratively refined through the resistance of the world to answering them as initially stated. That resistance itself can be evidence.

Beginning

March 10th, 2010 § 0 comments § permalink

It seems late. Isn’t blogging dead yet?

In any case, there’s a small backlog of miscellaneous things I have thought would suit a blog, generally relating to working with digital data and methods in the humanities.

I don’t promise to be timely, to post regularly, or to avoid an accumulation of miscellaneous incoherence.

This is an individually maintained blog. Opinions expressed here are my own, at best, and subject to change.