Cliff Wulfman on Skunks, Shmoos, and the Future of DH

[The following slides and presentation notes are from Cliff Wulfman’s talk, “Thinking Big,” which took place Thursday, April 2, 2015 in Stafford Auditorium on the campus of Dickinson College. The Digital Humanities Advisory Committee thanks Dr. Wulfman for his permission to share them–PSB].

Slide01

I want to thank Chris and Patrick for inviting me to speak with you this afternoon.  I’m a close reader by training and inclination, so I can’t start a talk like this without “problematizing” our terms:

“Successful Digital Humanities Project Development”

Indeed, I’m going to use those terms as the framework for exploring these five steps, though not in syntactic order.

1. DIGITAL: Let’s begin with the term digital, and its verbal derivation, digitize.

Slide03

The term digital is, of course, treacherously polysemous.  It has become a metonym for the discrete values modern computers use to represent information, and so to digitize is to represent information by means of discrete values.  Digital data is simply information stored as ordered sequences of discrete states.  These ordered sequences are often called files or streams, and they come in many varieties, but at the most basic level they are all the same: audio files, image files, text files are all just sequences of bits.
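[A minimal illustration, not from the talk: in Python, text and any other file content reduce to the same thing, ordered sequences of bytes, that is, bits. The file name below is a stand-in. –Ed.]

```python
# Text, image, and audio data all reduce to ordered sequences of
# discrete states: bytes, and ultimately bits.
text_bytes = "December 1910".encode("utf-8")
print(list(text_bytes))              # [68, 101, 99, ...] -- one number per byte
print(format(text_bytes[0], "08b"))  # the letter 'D' as a bit pattern: 01000100

# Any file read in binary mode -- an image, an audio recording -- is
# just another byte sequence. "example.bin" is a stand-in file name.
with open("example.bin", "wb") as f:
    f.write(text_bytes)
with open("example.bin", "rb") as f:
    assert f.read() == text_bytes
```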

So the digital in digital humanities refers to the binary representation of information as bits.  It does not, in other words, connote numerical or mathematical so much as it does symbolic, or semiotic.

Slide04

It is about representability.

So digital humanities is not equivalent to statistical humanities, although the showiest face of digital humanities is the visualization of maps, graphs, and trees derived from the application of social-science methods to texts and to phenomena of interest to historians of various types, literary and otherwise. The rhetorical impact of these visualizations is undeniable, but at bottom they are simply a way of displaying quantitative information, and computation is not equivalent to quantification. Computation also entails the application of procedural logic and heuristics: using an encoded knowledge base and a reasoning algorithm, for example, to diagnose an illness from a set of symptoms.
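[A toy sketch, not Wulfman's example: a tiny hand-built knowledge base and a one-line reasoning step that "diagnoses" conditions from symptoms. The rules are invented placeholders; the point is procedural logic rather than quantification. –Ed.]

```python
# Hypothetical knowledge base: condition -> symptoms that suggest it.
RULES = {
    "flu":       {"fever", "cough", "aches"},
    "cold":      {"cough", "sneezing"},
    "allergies": {"sneezing", "itchy eyes"},
}

def diagnose(symptoms: set[str]) -> list[str]:
    """Return every condition whose required symptoms are all present."""
    return [cond for cond, required in RULES.items() if required <= symptoms]

print(diagnose({"fever", "cough", "aches"}))  # ['flu']
```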

Nor is digital humanities equivalent to making web pages.

Slide05

For scholars in the humanities, in most cases, web sites are akin to publications: they constitute the presentation of research, not the research itself.  So in almost all cases, creating a web site does not constitute a digital humanities project.

At the same time, the World Wide Web has evolved from a collection of lightly encoded text files, linked by hypertext and transferred via the HTTP protocol, into a network of data and services. So creating a trove of carefully prepared data in machine-readable format — a digital edition encoded in the schema of the Text Encoding Initiative, for example, or a biographical dictionary encoded using the standards of linked open data — does constitute a digital humanities project.
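[To make "machine-readable" concrete, here is a minimal, hypothetical sketch: a TEI-flavored fragment (simplified, not a complete TEI document) parsed as structured data with Python's standard library, so a machine can act on the encoding rather than merely display it. –Ed.]

```python
import xml.etree.ElementTree as ET

# A simplified, TEI-flavored fragment (real TEI uses a namespace and a header).
tei = """
<text>
  <body>
    <p>On or about <date when="1910-12">December 1910</date>
    human character changed.</p>
  </body>
</text>
"""

root = ET.fromstring(tei)
for date in root.iter("date"):
    # The encoding exposes the date as data, not just as display text.
    print(date.get("when"), "-", date.text)   # 1910-12 - December 1910
```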

So the first step to successful digital humanities project development is understanding what it means for something to be digital.

Slide06

2. PROJECT: Next: Defining a project.

Slide07

As a researcher, you may already have disciplinary knowledge and traditional practice guiding and constraining your conception and realization of a project. What makes a scholarly or academic project a digital humanities project?

Defining a project isn’t always straightforward in the humanities.

Slide08

These endeavors are not always product-oriented; even when they are, the product is frequently intangible: an idea, an argument, an analysis, a method, a critique. I'm leaving aside articles and monographs as direct products of research: they are secondary instruments of dissemination.

Sometimes there is a tangible product, though: editions, transcriptions, databases, instruments for research and analysis.

When thinking in terms of a project, then, it is important to learn to think strategically:

Slide09

Think about the outcomes you want to achieve, and why they are important: what will the consequences of this work be?

Think about the resources your work will require. Particular materials, in particular forms? Tools for accomplishing specific tasks?  Whose time and attention will you be drawing upon, and for how long?

How difficult is your project? What are the risk factors: what sorts of things might go wrong, what sorts of events might interfere with the successful completion of your project? What are your contingency plans? Can your project produce partial successes, or is it all or nothing? (Not a good idea.)

Try to organize your project into phases, each of which has its own success criteria, and each of which builds on the preceding phases.

If it sounds like I’m telling you to learn to think like an engineer, I am.

3. HUMANITIES:

Slide10

Earlier, I talked about what it means for something to be digital. Chiseling a definition of the term digital is easy; sharpening the meaning of the term humanities is much, much more difficult – so difficult and contentious, in fact, that I’m not going to address it directly at all, other than to suggest it has more to do with subject-matter than method.  Instead, just as I have tried to complicate the popular conflation of digital humanities with social science, I want to take this opportunity to distinguish digital humanities from digital librarianship.  Once again, these endeavors often overlap significantly, but they are different.

From one perspective, a library is a hoard of physical artifacts whose principal function is to be looked at. Seen from that perspective, digitization is an image-making activity: rendering surfaces on which drawings and inscriptions appear into sequences of bits that a computer can use to produce a reflection of that surface. From another perspective, a library is a gathering of texts whose principal function is to be read. From this perspective digitization is a linguistic activity: rendering words or other symbols into sequences of bits that a computer can use to create linguistic symbols that can be analyzed and compared.

It is the scholar’s privilege to regard the library from the latter perspective; it is the librarian’s burden to view it from the former, and in large measure the job of libraries is conservative digital photo-duplication: not creating a digital library so much as digitizing an existing one.

Thus the work of the digital scholar depends on that of the digital librarian, and in some aspects overlaps considerably with it, but it is not the same work. Likewise the work of the information scientist; the software engineer; the computer scientist (all different sorts of work, often done by different people).

This is part of the reason the digital humanities are so often hyped as being collaborative: quite often, work in DH requires knowledge and expertise from a variety of fields. Bringing in many different perspectives necessarily brings in many different priorities and points of view, cutting across traditional academic disciplines while remaining focused on humanities questions.

So, step three in developing a successful digital humanities project is to conceptualize your work in the context of an interdisciplinary framework of humanistic endeavor.

Slide11

4. SUCCESSFUL: Defining success isn’t always straightforward in the humanities, and in research in general.

Slide12

I’m going to hazard the following measure of a good DH Project:

“a good DH project uses domain knowledge and intellectual labor to create digital objects that can be curated and shared with others through standard formats and services.”

That last criterion (accessibility) strongly implicates the World Wide Web, but it needn't always involve it. And it certainly doesn't necessitate a whizzy web site.

Slide13

But defining success is a useful discipline nonetheless. For one thing, it can help you focus your work by articulating specific outcomes you want to achieve.

What specific goals do you expect to meet with this work?  A full and compelling argument?  An insightful biography?  A meticulous accounting of an event, or an object, or an archive?  If there are products of your work, what are they?  On what basis can you or others evaluate their quality, their success or failure?

Of course, this kind of outcome-orientation isn’t appropriate at all stages of research, but the point at which you can articulate goals and deliverables is the point at which research becomes a project.

Slide14

Defining successful outcomes also helps to organize time and effort.  Most of us know the value of setting intermediate goals and deadlines; organizing these around success criteria can help make them realistic.

Let me give you some examples (this is a highly opinionated list) of “Bad (or Meh) DH Projects”:

Slide16

Slide17

Slide18

Slide19

Now another, equally opinionated, list of “Good (or Exemplary) DH Projects”:

Slide20

The Text Creation Partnership's effort to improve the OCR of eighteenth-century typography is a good DH project. Good DH projects are those whose products or outcomes can be used in multiple ways by others.

EXEMPLARY PROJECTS

The Valley of the Shadow is one of the first digital humanities projects.

Slide22

Begun in 1993 by Ed Ayers and Will Thomas at UVA, it is an electronic archive of two communities in the American Civil War: Augusta County, Virginia, and Franklin County, Pennsylvania. The Valley Web site includes encoded, searchable newspapers; population, agricultural, manufacturing, and slave-owner census data; and tax records. It also contains letters and diaries, images, maps, church records, and military rosters.

What makes it particularly important, to my mind, is that it was designed not as a showcase but as a working research tool.

Ayers and Thomas published a web-based hypertext article that explicitly uses hypertext and full-text encoded archival material to make an argument.

The Shelley-Godwin Archive is another exemplary archival project.

Slide23

It features transcriptions of manuscripts that are deeply encoded to allow users to study the composition history of the materials.

Mapping the Republic of Letters is another.

Slide24

Based at Stanford, this project gathers metadata about the networks of correspondence among the luminaries of the Age of Enlightenment and uses it to produce wonderful visualizations of those networks.

5. DEVELOPMENT: So how do you go about doing this? How do you develop a DH project?

Slide25

Talk with people.

We’ve already talked about the almost inherently collaborative nature of the digital humanities.  There simply is not (not yet, anyway) a strong, documented track record of digital humanities methods and approaches; they are in any case highly interdisciplinary and under rapid evolution.

The proliferation of DH centers at universities testifies to the urgency researchers feel to acquire new competencies as part of their academic work. So seek out others in your field who have already had some experience, and ask them how they did it; seek out colleagues in other fields to talk with you about methodologies and approaches.

Climb the steep hill.

Slide27

This is really important. Ask yourself if you are willing to take the time to learn something new, different, and possibly outside your comfort zone.

Be prepared to acquire a more than superficial understanding of computational practices and methods.  Not that you have to become a master programmer; but you should understand the fundamentals of programming and computer science: data structures and algorithms; inputs and outputs.

Just as you would not undertake a professional study of Homer without learning Greek, learn the language of computer engineering: how could I represent the objects of my study in machine-readable forms? Can I develop models of things and events? How might I manipulate those representations? Could I describe procedures, techniques, tricks for analyzing them, generating them, enhancing them, expressing them in different forms?

Deploy project-oriented thinking.

Slide28

In developing your project, employ the project-oriented strategic thinking we discussed earlier:  Try to lay out your project as a series of incremental steps and accomplishments.

Be flexible.

Unless your project is very straightforward and extremely well defined, it is likely to change in response to external events (funding, personnel) and internal evolution (discoveries made in the course of the project).

But, don’t just go chasing rabbits down the rabbit-hole. It’s very tempting to let the scope of your project expand over time as you learn about new things, see someone’s nifty tool, and so on.

Scope creep sinks projects.

At the same time, though, don’t hobble your imagination or your ambition based on what you can see from here, today.

Don’t be afraid to think big.

Slide29

Let me share with you a little thought experiment.  A few months ago I was asked to speak on a conference panel entitled “Modernism and Big Data.”

The so-called “digital humanities” are at this early stage of engagement as much a series of considered poses, or deliberative positions, as anything else.  So to hold a panel on “Modernism and Big Data” was to propose a consideration of “Humanism as Big Science,” to position ourselves, to imagine ourselves, as big scientists asking big questions, knowing all the while that we were “playing pretend”.

In what follows, I am going to pretend that the collective textual remnants of the late 19th and early 20th centuries have all been processed into a machine-readable textual corpus. We don’t have it now, but it is not so far-fetched to imagine that we will be able to capture a significant portion of the written record, at least that portion already under institutional control in libraries and archives. It wasn’t all that long ago that the Google Books project seemed absolutely preposterous.

And besides, we’re just playing.

Slide30

Big Science asks big questions, such as “what is the nature of matter?” The immensity of the question and the value of obtaining an answer (both practical value and intellectual value) drive research, collaboration, funding — they provide the energy that turns the wheels of research.

Perhaps, in this big-science fantasy we're indulging ourselves in for the moment, we can imagine what such a Big Question might be, and speculate on what sort of engine posing it might awaken. In our context I can imagine no bigger question than Raymond Williams' question, “When was Modernism?”

This seems a reasonable — and somewhat preposterous — Big Question to start with.  But we could just as easily ask something just as grandiose, like “WHAT was Modernism?”  — answering which is a precondition to answering the “When?” question, or “WHERE was Modernism?”

Slide31

These questions share the playful, tantalizing precision of Virginia Woolf's famous aphorism from “Mr Bennett and Mrs Brown.”

Less often quoted is her qualification. Nevertheless, let’s succumb to temptation and take Woolf’s assertion at face value.  How would we go about proving or disproving her hypothesis? Could the immensity of Big Data help us, and if so, how?

So, in Woolf’s spirit, and since one must be arbitrary, let us call our Big Science endeavor…

Slide32

We’re talking Big Science here – REALLY BIG – like the Manhattan Project, or the search for the Higgs boson. So let’s keep playing dress-up and imagine an alternative reality where the Institutions of Power actually thought these questions were as important as finding out whether a subatomic particle actually exists or not, or how to blow up the planet. That is, we would have access to REALLY BIG RESOURCES, with really big expectations.

What would it mean for us, institutionally and professionally, to address ourselves collectively to answering such a question?  What would happen to the current models of promotion and tenure, department composition, teaching, publication? Who would have to be involved?

We would inevitably want some Theorists.

Slide33

We want to describe a state change: for some definition of human character, we want to be able to say that before some point (the “December 1910 Moment”), human character was in state H and after that point it was in state H′.

We might then call Modernism a function which, when applied to Human Character H, transforms it to H′.

As with so much theory, the discussion quickly becomes highly arcane.  So I’m going to leave the theorists to do their thing for the moment and turn to the Empiricists.

Slide34

They’re the ones who get to play with the big toys, the big machines, the big data. Sometimes they get to play pirate, or skunk – more about that in a minute. Theirs is the linear-accelerator model: build a ginormous machine that produces humungous amounts of data, and then search that data for traces. Here the ginormous machine is history, which has left a humungous data trail of artifacts and documents in its wake.

How might the Empiricists use that Big Data to locate the December 1910 Moment?

Well, statistical topic modeling seems pretty tantalizing. If Woolf’s hypothesis is correct, we should expect to find topics after the December 1910 moment that do not exist before that moment. The simple existence of the moment doesn’t explain what caused the change: that is, it doesn’t explain what the Modernism function is. That’s the problem with History: it isn’t testable. You can’t change the factors in some equation and re-run events to see how the factors affect them.
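[A hedged sketch of what such an experiment might look like, using scikit-learn's topic modeling. The corpora, parameters, and the two-way split at the "December 1910 Moment" are placeholders, not real data or the method of any actual project. –Ed.]

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_topics(docs, n_topics=5, n_words=8):
    """Fit a topic model to a corpus and return the top words per topic."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    words = vec.get_feature_names_out()
    return [[words[i] for i in topic.argsort()[-n_words:]]
            for topic in lda.components_]

# Hypothetical corpora split at the cutoff; real work would use thousands
# of documents per side.
before = ["...full texts published before December 1910..."]
after  = ["...full texts published after December 1910..."]

# Woolf's hypothesis predicts topics in `after` with no counterpart in `before`.
print(top_topics(before))
print(top_topics(after))
```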

Slide35

The Empiricists include scholars like Greg Crane, who asks what you do with a million books; Brewster Kahle of the Internet Archive, who asks us to imagine capturing the entire human record in digital form; and Stephen Ramsay, who articulates the Screwmeneutical Imperative to subvert the academic orthodoxies and ideologies of method and form an anarchic version of The December 1910 Project, a “community of practice” that valorizes Roland Barthes’s playful writerly text.

Right about now, you’re maybe getting a little tired of playing dress-up. But before we pooh-pooh these visionary questions, let’s recall the remarkable thing Google did with its Google Books project. Sure: it isn’t perfect, it leaves lots of things out, and its texts are really, really dirty.

But this is how *big* works. It isn’t small acts of perfection: perfectly crafted editions, for example. Big works through iterative refinement, each iteration changing the state of things in such a way as to open opportunities for further refinement. Unattended OCR is the holy grail: a machine that can read printed text as well as a trained human being. We don’t have it yet, so today the results of unattended OCR are dirty.

But OCR algorithms continue to improve. In fact, the principal value of first-generation digitization projects like Google Books is the *page capture*. If those pages were photographed well, the OCR can always be re-run, and over time the cost of processing and re-processing will decline.

So we must develop research methods that tolerate noise, while at the same time anticipating improvements in the accuracy of text recognition.

Slide36

The larger message I’m trying to convey is this one. The most valuable part of the December 1910 Project is the social and institutional infrastructure that supports, promotes, protects, and preserves human effort. Put your emphasis on the stuff that machines need but can’t do. The most expensive, most valuable part of digital humanities work is the work done by trained human beings. That’s the work that can’t be re-processed cheaply, no matter how little you pay graduate students. Don’t treat it lightly! Don’t stick it in a Word document and forget about it. Spend some time thinking about the best ways to capture that intellectual work so that it can be re-used in today’s scholarly world: that may not be a verbal argument published in a scholarly monograph, but a data set – a formal marshalling of evidence – represented in a way that can be taken up by reasoning machines as well as reasoning people.

Don’t become slaves to the machine: hack the machine, or partner with people who can. Make the machine work for you by giving it information it can use.

Give it highly crafted, machine-actionable metadata: not just the usual library metadata – names, titles, dates of publication and so on.

Slide37

We will need granular structured analyses of complex pages, like those in newspapers and magazines: not slabs of undifferentiated text, but pages that have been decomposed into their structural regions, and multi-page articles that have been joined together into discrete wholes. Much of this work can now be automated, but it still needs human assistance.
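[A hypothetical sketch of that kind of granular page structure in code; an illustration only, not any project's actual schema. –Ed.]

```python
from dataclasses import dataclass

@dataclass
class Region:
    page: int    # page number within the issue
    kind: str    # e.g. "headline", "body", "image", "advertisement"
    text: str

@dataclass
class Article:
    title: str
    regions: list[Region]   # ordered, possibly from non-contiguous pages

    def full_text(self) -> str:
        # Rejoin a multi-page article into a discrete whole.
        return "\n".join(r.text for r in self.regions if r.kind == "body")

# A hypothetical article whose body continues from page 1 to page 14.
article = Article("Lake", [
    Region(1, "headline", "LAKE"),
    Region(1, "body", "First column of the poem..."),
    Region(14, "body", "...continued at the back of the issue."),
])
print(article.full_text())
```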

Give the machine descriptions of nuanced relations and assertions that it can read.

Slide38

Statements in first-order predicate logic are a start.  Here is a portion of a graph describing the publication of Bayard Boysen’s “Lake” in the first issue of Broom, a description that captures the complex relationships among abstract entities (“the magazine Broom”, “a poem called ‘Lake’”) and concrete realities – a copy of the first issue of Broom, housed in Firestone Library, and a set of electronic files that embody various representations of it. These sorts of assertions – encoded in some sort of standard schema, like RDF – are the raw material of the knowledge base the so-called “semantic web” promises to become. There are lots of problems with the semantic web, just as there are problems with Google Books, but it is for now by far the best place to start putting our scholarly effort.
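[A hedged sketch of such assertions using the Python rdflib library; the URIs and vocabulary below are invented placeholders, not Blue Mountain's actual schema. –Ed.]

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")   # hypothetical vocabulary
g = Graph()

# Abstract entities and concrete realities, linked by explicit assertions.
lake = URIRef("http://example.org/works/lake")
broom_issue1 = URIRef("http://example.org/magazines/broom/issue1")
copy_at_firestone = URIRef("http://example.org/items/broom-issue1-firestone")

g.add((lake, EX.title, Literal("Lake")))
g.add((lake, EX.creator, Literal("Bayard Boysen")))
g.add((lake, EX.publishedIn, broom_issue1))
g.add((broom_issue1, EX.issueOf, Literal("Broom")))
g.add((copy_at_firestone, EX.exemplarOf, broom_issue1))
g.add((copy_at_firestone, EX.heldBy, Literal("Firestone Library")))

print(g.serialize(format="turtle"))
```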

Slide39

I want to conclude with a nod to three pioneers of computer science: Vannevar Bush, Douglas Engelbart, and J. C. R. Licklider. At the dawn of the computer age, these men, all three engineers and administrators, each had a vision of the computer that was profoundly humanistic. Bush’s Memex, often cited as the precursor to the World Wide Web, was a machine that enabled people to link and track the vastness of human knowledge more efficiently.

Doug Engelbart, inventor of the mouse and a variety of other ground-breaking technologies, saw in computers the possibility of augmenting the human intellect.

J. C. R. Licklider, who directed the Information Processing Techniques Office of the Defense Department’s Advanced Research Projects Agency, from which the Internet sprang, envisioned a “human-computer symbiosis” in which humans and machines partner to extend the reach of human thinking and decision-making.

For each of them, the computer was not an enormous calculating machine, but an empowering system that people could engage to increase the store of human knowledge. If you can develop projects that participate in, extend, and augment this vision, they will indeed be successful digital humanities projects.

Which brings us to skunks.

Slide40

I read with great pleasure and sympathy Bethany Nowviskie’s blog post entitled ‘a skunk in the library’. Nowviskie traces the term to Lockheed in the 1940s, where it was used to describe a “rogue team” of engineers who functioned outside the usual corporate culture in order to accomplish special things, and she applies it to the Scholars’ Lab at UVa, which she directs.

Nowviskie mentions parenthetically that the engineers took the term “skunkworks” from Al Capp’s Li’l Abner, but she doesn’t pursue the allusion, staying with the meaning that has evolved from the Lockheed appropriation: a group of elite creatives who get special license to do wonderful, innovative things. Following this etymology, those creative people are the skunks. And who wouldn’t want to be a skunk? These skunks are like the kids in the Gifted and Talented program: they may be misfits, some of them, but they’re precious and special, and they smell bad only to Department Chairs, who don’t savor liberty and innovation.

The thing is, that’s not how things were in the hillbilly hamlet of Dogpatch, and I want to conclude with that.  (I also want to claim the right to use the term “hillbilly”, as I was born and bred in West Virginia and am proud to be called one.)

In the world of Li’l Abner, the “Skonk Works” was a toxic chemical factory on the outskirts of Dogpatch, where the lone operator, “Big Barnsmell,” crafted a mysterious concoction called ‘skonk oil’ by brewing dead skunks and old shoes in a still.  Dozens of Dogpatch residents died every year of the toxic fumes.

According to Ben Rich, the second director of the Lockheed Skunk Works, the group got its name because the original facility was located next to a toxic-smelling plastics factory, and one of the engineers likened their own secretive operation to the factory in the Al Capp cartoon.

Slide41

So there are several things to think about here.  First, the skunks aren’t in charge.  They aren’t the workers in the “Skonk Works”; they are the raw material.  Second, the work of the skunk works isn’t benign “creative innovation”; it is industrial pollution.  Nowviskie acknowledges the unease occasioned by use of the term “skunkworks”: “there’s a level of honesty and self-awareness involved in not calling them snuggly bunnies.”

There’s a larger story here about papering over the toxic effects of the digital revolution, literally, as in the waste byproducts of microchip manufacture, and figuratively in the effects of automation on an underclass of workers (the denizens of Dogpatch) and the fact that the Lockheed Martin operation designed war planes.  These bunnies are not snuggly at all, and they aren’t even amusingly off-beat: they are fodder for a noxious process of commodification.

I’m afraid that to expect academia to work like Lockheed Martin, or like Silicon Valley start-ups, or even like a forward-looking library, is naïve. From what I’ve seen, the skunks are the graduate students, the adjuncts, and the alt-acs who do the work but don’t get the credit; who build the intellectual playgrounds Steve Ramsay describes but aren’t allowed inside.  To call them skunks is to give them a roguish tang; in fact, they risk becoming that other legendary Al Capp creature …

Slide41

The Shmoo, which exists to be a commodity: delicious to eat, and eager to be eaten.

The Digital Humanities, Big Data: these highfalutin terms promise much, and we can fantasize about the opportunities they open up, the roles they may let us play, the discoveries they may enable. But let’s not allow our dress-up fantasies to become wish-fulfillment. Higher Education is in crisis; intellectualism is in decline; graduate education is in a death spiral. Let’s not pretend that DH is going to solve all these problems: even more, let’s not let DH become part of the problem.

Thank you.

Mellon Digital Humanities Seminar: Clifford Wulfman (4-7:30 PM, Thursday, April 2, 2015) @ Dickinson College, Carlisle PA

DHAC PRESENTS

“Thinking Big: Five Steps to Successful Digital Project Development”


Clifford Wulfman is the Coordinator of Library Digital Initiatives at Princeton University and co-founder of Princeton’s new Center for Digital Humanities. He has been involved with the Perseus Digital Library and the Modernist Journals Project, and is currently the Director of the Blue Mountain Project, an NEH-funded initiative digitizing European art periodicals. In April, Dr. Wulfman will be on the campus of Dickinson College to talk about his work and training, digital libraries, and the future of digital humanities.

This event is FREE and open to the public, and dinner will be provided.


Mellon DH Fund supports the creation and development of Eighteenth-Century Poets Connect

by Jacob Sider Jost (assistant professor, English)

In the fall of 2014 I applied for a semester-long grant to pay a student assistant, Mary Naydan, to complete a spreadsheet listing the works of 133 English poets active during the years 1730-1740.  This was a continuation of a project begun by Mary and me using a Dickinson summer student-faculty collaborative research grant in the summer of 2012.  Over the course of the fall, Mary logged 64 hours of research, gleaning bibliographical and biographical information about poets from a range of online sources (particularly the Oxford Dictionary of National Biography, the English Short Title Catalogue, and Eighteenth-Century Collections Online).  Although not able to complete the full roster of poets in the time funded by the grant, she did complete approximately half of them—an impressive one poet an hour.


I am confident that as with our work in the summer after her sophomore year, this experience will stand Mary in good stead as she looks ahead to graduate study in English or an allied field.  While Mary was in the digital archive researching eighteenth-century poets, our digital humanities postdoc Patrick Belk took the lead on organizing the data in a way that would be not only useful but publicly accessible, building a Drupal database to hold the data stored in our spreadsheets.


This database is accessible at http://dh.dickinson.edu/18cpc/.

Thanks to Patrick and Mary, my goal of documenting, visualizing, and analyzing networks of print and patronage for eighteenth-century poetry has moved significantly closer to realization, with the unexpected benefit that, through Dr. Belk’s implementation of Drupal, the project, while still in progress, is available to me and other researchers online.



With that said, further research work and technical tinkering remain to be done.  My current timetable is as follows: this semester, Dr. Belk will smooth out the remaining problems with our data, not all of which imported successfully from our Excel spreadsheets into the online database.  Over the summer, I will finish entering the biographical and bibliographical data from the 60 or so poets who remain undocumented.  By the end of summer 2015, Eighteenth-Century Poets Connect will be complete as a publicly accessible online database documenting the poetic culture of Britain in the 1730s.  In the fall of 2015, Dr. Belk and I will work together to find the visualizations and other tools of analysis that make this data most useful, and it will be the work of the following year, 2015-16, for me to write an article for peer-reviewed publication discussing my findings.

Jacob Sider Jost

Mycenae Lower Town Excavations and 3-D Reconstruction

Prof. Christofilis Maggidis sends along this report on his work documenting the Lower Town at Mycenae. The 3-D scanning and reconstruction work was partly funded by Dickinson Digital Humanities grants over the summers of 2013 and 2014.

The archaeological investigation of the Lower Town of Mycenae (2001 to date) aims to reveal the relationship between the citadel/palace of Mycenae and the surrounding settlement, and to show land development and public works (fortification walls, roads, bridges, dams, irrigation, terracing). The Geometric settlement (houses, workshops, silos, graves) dates to the 9th-8th century BC; these Geometric ruins are the first and only ones discovered so far at Mycenae since Schliemann’s excavations in 1874, and they establish the cultural continuity of the site in the transition from the end of the Bronze Age, after the collapse of the Mycenaean world, to the historical period of the Early Iron Age. The underlying Late Mycenaean urban settlement (fortification walls, gates, houses, storerooms, dams, etc.) dates to the 13th century BC. This is the first time that the very existence of the Lower Town has been archaeologically established.

The Mycenae GIS database built by the Dickinson team includes geology, terrain and topography (based on digitized Hellenic Military Geographical Service topographic maps, geological maps, satellite photos, and Total Station points), geophysical survey (subsurface architectural features detected by remote sensing), architectural remains, archaeological contexts, features, and finds (embedded and catalogued by date, accession number, material number, layer/context number, geodetic coordinates, grid-square and locus, photos and drawings).

The GIS geodatabase further integrates a 3-D digital reconstruction of the Lower Town. This comprehensive 3-D digital model of the site will constitute an interactive learning tool, but also a pioneering and dynamic digital publication platform with a powerful database, incorporating and illustrating the architectural development of the buildings with all successive construction or modification phases, their finds, and their surroundings.

The site scanner shoots millions of georeferenced points from many different angles and locations to compose a highly accurate (5 mm) georectified ground plan.

This past summer, all excavated architectural structures of the palatial workshops at Mycenae, including buildings, walls, floors, deposits, gates, and roads, were scanned with a 3-D terrestrial laser scanner, leased for a period of three weeks from the Democritus University of Thrace (Prof. Nikolaos Lianos). The site scanner shoots millions of georeferenced points from many different angles and locations to compose a highly accurate (5 mm) georectified ground plan, thousands of cross-sections of orthogonal axial tomography (like a CAT scan), and a ‘walk-through’ rotating 3-D model of the site, which forms the basis for the 3-D digital reconstruction of the workshops. Next year, the plan is to scan and photograph from the air the whole valley of the Lower Town, with the cyclopean walls of the citadel in the backdrop, using a camera-equipped drone, in order to digitally recreate the precise terrain and background for the 3-D town/citadel model, which will then form the basis for the georeferenced 3-D digital model of the ancient landscape.

For more information about the Mycenae Lower Town Excavations, see here and here.

Showcase of 2015 Boot Camp Projects

Photos from the Digital Projects Showcase (Jan. 29, 2015) in the HUB, Social Hall East (photos by Chris Francese). For Tony Moore’s article on this year’s DBC at Dickinson, click here.

Two New Historical Simulations

Three Dickinson students, Shayna Solomon, Patrick Schlee, and Edwin Padilla, working with Todd Bryant of Academic Technology, and Ed Webb, Associate Professor of Political Science and International Studies, have created two historical scenarios using the game “Civilization V” and the software “ModBuddy.” Todd Bryant passes along this report.

The first scenario is an updated and expanded version of a mod created by Todd Bryant in “Civilization IV” covering Europe and the Americas in 1492. The second scenario was developed from scratch and covers Europe and Africa beginning in 1876. To the greatest extent possible, each scenario accurately depicts the size of empires, geography, resources, diplomatic relations, military strength, scientific progress, and religion for the major civilizations of the time period. In addition to accurately recreating variables already within the game, the students used XML to change the underlying database of the game to create additional resources, military units, and social policies. Additional logic was also built using the scripting language Lua to include the Atlantic slave trade in the 1492 mod and Rinderpest, the Berlin Conference, and malaria for the Africa/Europe 1876 mod.

1492: Aztecs: Playable civilizations include the Aztecs, the Quiche Maya, the Tarascans of Michoacan, the Incas, the Songhai, Morocco, Spain, Portugal, France, England, the Netherlands, Venice, and the Papal State.

A more detailed description of the 1492 mod, as well as links to download it, can be found here; for the Africa/Europe 1876 mod, here. The students also wrote an extensive ReadMe file for each mod describing the research on which it was based and explaining the decisions they made due to the limitations of the game, as well as important historical factors that could not be included. Both ReadMe files are published online and included within each download: the 1492 ReadMe file is available here, and the 1876 Africa/Europe ReadMe here.

Two of the students, Shayna Solomon and Patrick Schlee, worked primarily as researchers on the project. Shayna focused on the Africa/Europe 1876 mod while Patrick worked on 1492. Although they spent most of their time conducting research and learning the variables used in the game, they also used the ModBuddy software to design the very extensive maps and to modify some of the XML that stores changes to variables in the database.

Edwin Padilla was in charge of the technical aspects of both mods. This included learning the database structure underlying the game and how to write database queries in XML to make changes to variables in the database when the mod is loaded. He also learned a scripting language, Lua, which he used to introduce new logical elements to the game, including the Atlantic slave trade, Rinderpest, and malaria.

The students in charge of research, Shayna Solomon and Patrick Schlee, gained a great deal of experience working with primary historical sources covering a very broad range of topics. They learned to analyze these documents for inaccuracy and historical bias, and to understand how the game’s variables interact, in order to create as accurate a simulation as possible.

Edwin Padilla, in charge of the technical aspects of both mods, learned two new languages, Lua and XML. He also became familiar with the differences between writing code to create a program from scratch and using an API to modify someone else’s code. Finally, he worked with stakeholders, including the other two students and Professor Webb, who were largely unfamiliar with the possibilities and limitations of coding via the API, to determine overall project goals and set priorities.

All three of these students can now point to a very public project in the rapidly evolving areas of games in education and the digital humanities, one that showcases their individual skills as well as their work as members of a team.

Publication

The mods have been submitted to submrge.org (a Harrisburg University website tracking the use of commercial games in education), CivFanatics.com (a web forum for the Civilization game series and mods), and Steam (a mainstream game and mod distributor).

Direct Download Links:

Via CivFanatics:

Colonization of Africa – http://forums.civfanatics.com/showthread.php?t=538103

1492 – http://forums.civfanatics.com/showthread.php?p=13652882

Via Steam:

Colonization of Africa – http://steamcommunity.com/sharedfiles/filedetails/?id=336750907

1492 – http://steamcommunity.com/sharedfiles/filedetails/?id=379961933

This work was carried out in the summer of 2014, and funded by the Andrew W. Mellon Foundation Digital Humanities Grant, administered by Dickinson’s Digital Humanities Advisory Committee.

Dickinson President’s Report

The annual report from Dickinson’s President Nancy A. Roseman and the senior staff is out, and I wanted to highlight the statements there on technology, scholarship, and learning, which nicely sum up the approaches being taken at Dickinson. President Roseman begins with her vision for the academic program, a statement which concludes with the following:

Lastly, we will seek new ways to leverage our work in the digital humanities, highlighting the value of technology to enhance, not replace, our high-touch, intensely collaborative approach to education.

Provost and Dean of the College Neil Weissman expands on this as follows:

Finally, technology. Despite all the talk of “disruption” and the threat of displacement of residential education epitomized by MOOCs, computing makes the liberal arts taught through direct student-faculty contact more, not less, germane. Rather than being replaced, liberal learning is enriched by technology as a tool. Each year, select Dickinson faculty in the Willoughby Institute for Teaching with Technology explore approaches to pedagogy ranging from the use of tablet computers in the classroom to new models of commentary on Greek and Latin texts. Supported by a $700,000 grant from the Mellon Foundation, faculty are investigating digital approaches to the humanities. Another Mellon award has made possible a Central Pennsylvania Consortium faculty project on “blended learning” through the use of technology.

Students and faculty at Dickinson are fortunate to have strong administrative support for digital initiatives. Watch this space for details about some exciting faculty-driven and student-faculty collaborative projects, and news from the recently completed Digital Boot Camp.

DH Boot Camp Poster Session January 29, 2015

Here is the merry band of Dickinson students who came back to campus a week early to participate in the second annual Digital Boot Camp.

2015 DH Boot Camp participants met in the Waidner-Spahr Library at Dickinson the week of January 12, 2015.

Led by Mellon Postdoctoral Fellow Patrick Belk, the eleven students completed online tutorials at home the week of January 5, and convened on campus for further instruction and to work on their own projects. Other instructors included Michael D’Aprix, Daniel Plehkov, Leah Orr, and Don Sailer. Topics included ArcGIS, Drupal, XML, and discussions of metadata and other DH principles (full schedule here). Most of the projects they are working on represent collaborations with faculty, departments, or student organizations on campus.

Make sure to stop by the digital poster session, at which the students will show off what they have accomplished in this intense period of work and discovery.

What: Digital Boot Camp Poster Session

When: Thursday January 29 12:00-1:15 p.m.

Where: HUB Social Hall East

Here is a list of the students and their projects (still works in progress):

Masculinity in Advertising
Victoria DeLaney
Sophomore
English, Spanish

Mapping Sustainability at Dickinson College
Jackie Goodwin
Sophomore
Environmental Studies, Sociology

Cultural Mapping: A Documentation of Yarmouth Maine
Wesley Lickus
Sophomore
Environmental Science

The Peddler
Nick Bailey
Junior
International Business, Management

Maryland Folklore Project
Andrew McGowan
Junior
Biochemistry, Molecular Biology

EDDC Archive: Digital Library for the English Department
Harris Risell
Junior
English

Renaissance Music Database
Maurice Royce
Junior
Computer Science, Mathematics

Student Curation at the Trout
Anna Leistikow
Senior
International Studies

Education Reform
Melissa Pesantes
Senior
Italian Studies, Anthropology

Mapping the Aeneid
Katherine Purington
Senior
Classical Studies

Exploring the Invisible Universe
Olivia Wilkins
Senior
Chemistry, Mathematics

The Digital Boot Camp @ Dickinson was made possible by a generous grant from The Andrew W. Mellon Foundation. It was supported by members of the Digital Humanities Advisory Committee (or DHAC), and Archives & Special Collections, Waidner-Spahr Library. The students, instructors, and organizers taking part in this year’s boot camp would like to thank the following people: Dan Confer, Ryan Burke, Jim Ciarrocca, Chuck Steel, Maureen Dermott, Meredith Brozik, Tricia Contino, Dottie Warner, and Malinda Triller Doran.

Lincoln’s Writings: Multimedia Edition

The Multi-Media Edition of Lincoln’s Writings at the House Divided Project offers 150 of Abraham Lincoln’s most teachable documents, organized around five major themes and designed to provide key alignments with the Common Core State Standards.


In addition to transcripts, there are audio recordings of readings by the wonderful Todd Wronski of Dickinson’s Theatre and Dance Department. My favorite feature is the contextual paragraph included at the beginning of each document, written by Civil War historian and House Divided director Prof. Matthew Pinsker. Here, for example, is his lead-in to the Emancipation Proclamation.

Context: The Emancipation Proclamation of January 1, 1863 culminated more than eighteen months of heated policy debates in Washington over how to prevent Confederates from using slavery to support their rebellion. Lincoln drafted his first version of the proclamation in mid-July 1862, following passage of the landmark Second Confiscation Act, though he did not make his executive order public until September 22, 1862, after the Union victory at Antietam. The January 1st proclamation then promised to free enslaved people in Confederate states (with some specific exceptions for certain –but not all– areas under Union occupation) and authorized the immediate enlistment of black men in the Union military. The proclamation did not destroy slavery everywhere, but it marked a critical turning point in the effort to free slaves. (By Matthew Pinsker)

Prof. Pinsker also offers a 12-minute close reading of the text of the document itself. And there is a bibliography, along with excerpts from other historians writing about how they understand the document. Check out this excellent use of the web to richly annotate key historical documents!

 

Digital Humanities at MLA 2015 (Vancouver)

The digital humanities is well represented at this weekend’s 130th annual Modern Language Association Convention (Vancouver, BC; January 8-11). A simple keyword search of the 2015 Program displays 43 sessions that match the criteria “All text: digital humanities”; 6 sessions match “All text: DH,” and 32 sessions are listed under the program’s Subject heading:

General Literature–Electronic Technology (Teaching, Research, and Theory).

Because it reminded me of Chris’s thoughtful (and provoking) post two days ago on Desmond Schmidt’s article, I first want to call attention to, and share the link for, a session held yesterday: 204. Text Tools in the (Digital) Humanities (Friday, 9). Here David Hoover makes a case for “plain text” alternatives to XML, one that also focuses on inter-operability and shares some of the concerns in Schmidt’s article that Chris discussed Thursday. Abstracts of all three papers for session 204 are posted at 204 Abstracts. The top-most abstract is Hoover’s paper, titled “The Promise of the Plain: Plain Text and Plain Tools in the Digital Humanities.”

I won’t even try to touch briefly on all 43 sessions, but another that caught my attention, and that I wanted to share because it looked interesting, was this morning’s roundtable: 448. Disrupting the Digital Humanities (Saturday, 10). Last night while browsing the program, I paused at this one in particular because I saw that participants included Sean Michael Morris (presiding) and Jesse Stommel (final speaker), who are co-directors of Hybrid Pedagogy, an online blog/peer-reviewed journal that I follow. According to the program’s session description:

All too often, defining a discipline becomes more an exercise of exclusion than inclusion. This roundtable rethinks how we map disciplinary terrain by directly confronting the gatekeeping impulse of many academic disciplines. Participants investigate the edges and open the digital humanities more fully to its fringes and outliers.

For papers featured at this morning’s roundtable discussion, go here: DisruptingDH.

Sessions, abstracts, and (some) papers from DH-related events at this year’s MLA can be found through links in the Full Program. Relative to other methodologies and content areas, the digital humanities remains the annual mega-conference’s MVP (Most Visible Player), or, as Pannapacker called it, “The Thing,” five years running. The 204 and 448 sessions, moreover, give a good idea of the kinds of wide-ranging approaches being taken. DH at this year’s MLA, from textual analysis and close reading to LOL cats and critical queer theory, is thriving, and scholars in languages and literature are doing some pretty meaningful work across diverse areas of research.