Your Personnel Committee Has Questions

The following derives from the SCS 2020 panel "Evaluating Scholarship, Digital and Traditional," organized by the Digital Classics Association and Neil Coffee (University at Buffalo, SUNY). I would like to thank Neil and my fellow presenters for a stimulating session (fairly well attended and lively, given that it was in the very first slot of the conference, 8:00 a.m. on Friday, January 3!).

The lack of regular procedures and opportunities for peer review of digital work poses a serious threat to the future of digitally based scholarship and publication in the academy. The absence of routine peer review is already acting as a brake, limiting the time and energy that scholars with a healthy regard for their own professional futures will spend on digital work. Even those committed to the ideals of openness, access, and collaboration that draw us all to this area often don’t fully commit because of the lack of serious peer review, and I put myself in this category. What kind of amazing open digital projects would be created if scholars could get the same recognition for this kind of work as they get for books and journal articles? Think of digital humanities as a faucet turned three-quarters off by the disincentive resulting from the lack of regular peer review. The NEH and the Mellon Foundation have done much not only to finance expensive projects, but also to create structures of evaluation and prestige around digital humanities. But what if Mellon and the NEH turn to different priorities in the future? Much of what impedes the progress of digital humanities is beyond the power of any individual to change: the dominance of legacy print publishing houses and print journals, the conservative nature of graduate training, the expense required to mount an effective digital project, the scarcity of grant money. These are intractable structures and economic facts. What can an individual scholar do but wait patiently for things to change? My message today is that there are two things we can do immediately as individual scholars: initiate conversations with personnel committees at our institutions about evaluating digital scholarship, independent of our own personnel reviews; and review a project ourselves through the SCS digital project review series.

I served on my institution’s personnel committee for two recent years and participated in reviews that included digital scholarship (2016–18). Before that, for four years I chaired a committee that distributed DH funding from a Mellon Foundation grant, and in the process I helped to evaluate proposals from many different fields (2012–16). These grants were mostly quite small, typically a few thousand dollars for a course release, the labor of a student assistant, or the purchase of some software. I have managed a medium-sized digital classics project myself for about ten years, and I currently chair the group that produces the SCS digital project reviews (2017–pres.). Depending on the day, then, I’m involved both as a gatecrasher (advocating for the acceptance of digital scholarship, my own or others’) and as a gatekeeper (turning a critical eye to digital scholarship). My experience leads me to a certain optimism: DH scholars can make the case for acceptance, and succeed in the academic personnel process, if they consider the legitimate needs of institutions to evaluate and assess their faculty.

How should personnel committees approach evaluating digital scholarship? Sam Huskey and his colleagues at the University of Oklahoma (OU) arrived at two lists of evaluation criteria. The first list gives the essentials: conference presentations or publications related to the project, the use of accepted coding standards, openness of data, and a strategy for data preservation. The second list gives the optional elements, the nice-to-haves: grant support, collaboration, contribution to the field, pedagogical applications, and evidence of adoption and use. This framing is an unquestioned advance. We can argue about the relative importance of each item, and about whether some items might be moved from one list to the other; contribution to the field, for example, seems like it might be an essential. But the powerful OU formulation deserves to be adopted, adapted to local conditions, and widely used. If there is a problem with the OU approach, it is that some of its central elements (coding standards, data preservation, open data, the advisability of collaboration) derive from preoccupations within the DH community, priorities that may not be shared by personnel committees. Other aspects of the OU criteria, like pedagogical applications and contribution to the field, are things that the committee undoubtedly wants to know, but cannot simply find out from the candidate alone. Independent peer review is the only real solution. Committees routinely consult outside experts at tenure reviews and full professor reviews, and having the OU lists as a way to prompt reviewers on what to address is a huge help.

I want to come at the problem from a different angle, not from the perspective of the evaluators, but from that of the candidate. How should we best present our work to the committee? How can we persuade the persuadable and placate those who are not completely implacable? I applied to be on this panel because the personnel committee at my institution happens to have drafted guidelines on how it would like to be addressed regarding digital work, and I think its list of questions is a good one. It is more diffuse than the OU lists of criteria, but it has the merit of coming from non-specialists. I suspect that the questions they formulated are representative of the kinds of questions many other non-specialist committees would have.

The Dickinson guidelines are based on work by Todd Presner and were developed after a consulting visit he made to campus. They deal with topics such as platforms and technical requirements (how do I as a committee member actually examine your work?); user experience (how might a user use the tool or progress through the site?); scholarly context (what kind of research did you do, and what is the scholarly argument? Who is the audience? Where does this fit in the scholarly landscape?); the relationship of the project to teaching and service (has it been used in courses or other contexts within the institution?); impact (how have others used the project?); the life cycle of the project (how has it evolved? What are the future plans? What about data preservation?); and the definition of roles within a project (what, specifically, did you do, and what did other team members do? What kinds of new technologies did you have to learn?). The Dickinson document covers some of the same issues as the OU document, but poses them as questions that can be used as a guide to create a persuasive story about your project, its life, and its value.

Good rhetoric means knowing your audience: its desires, its fears, and its values. Assuming they are trying to do their jobs, which I think is generally a fair assumption, most committees want above all to make the right decision, and to avoid having to render a verdict on the quality of academic work entirely on their own. This is something they correctly feel unqualified to do, and something they do not have the time even to attempt. The job of evaluation is hard enough, given the field-specific criteria for length, venues, genres, and styles of scholarship (to say nothing of field-specific pedagogy and the hard-to-assess complexities of institutional service). DH adds yet another element of field-specific complexity, one that is perplexing and anxiety-producing given the stakes and the potential downsides of making a bad decision. Allaying that anxiety is as much the task of the candidate as making the case for one’s own work.

There is no denying that all this discussion and rhetorical framing takes effort. The dispersed, evolving nature of DH imposes added burdens on DH scholars to explain and justify their field, their work, and their chosen modes of publication. This is not fair: one does not have to explain the desire to write a journal article. On the other hand, seen from the committee’s perspective, it is undeniable that there are people who publish on the internet mainly as a way of avoiding the hassles, scrutiny, and compromises of peer review. We all know that there are projects that, for whatever reason, are poorly conceived, vaporous, or over-ambitious. Some projects are methodologically blameless, but not terribly interesting or useful to anyone but the scholar who decided to undertake them. There are projects that neglect the manifest needs of their potential audience, projects that have no clear sense of audience at all, projects that mainly repackage material readily available elsewhere, projects that needed a lot more work before their authors got distracted, projects that were good in their day but have fallen into neglect, tools that produce error-filled results, tools that mislead and mystify rather than elucidate, and tools that over-promise or don’t explain clearly how they work and what they are for. It is not unreasonable for committees to be wary.

Of course, the same intellectual flaws and more can be seen in print scholarship, mutatis mutandis. The immaturity of digital scholarship creates problems of assessment for those not familiar with the medium. The professional apparatus of evaluation is underdeveloped. But the intellectual values are not different: relevance, usefulness, contribution, and significance. As Greg emphasizes, in many cases DH is actually truer to the core missions of humanistic scholarship than much of what is produced by print culture. Finding and articulating that common ground and those shared values is the surest strategy when speaking to a traditionally minded personnel committee. The act of engaging in this dialogue about evaluation on an institutional level will also, I believe, have salutary effects on digital projects themselves, as they come to better articulate their purpose and place in the intellectual landscape. If you have trouble talking about the purpose of your project in these terms, it may be time to rethink the aims and methods of the project.

In the end, a lively culture of public peer review will be the single most important factor in making it easier for personnel committees to distinguish confidently between the good, the better, and the best. A question arises, however: when it comes to a DH project, who are the peers, really? Is the proper context for public evaluation of digital scholarship the traditional academic discipline, the emerging DH discipline itself, the average user rating, or some combination of these? Several review projects have arisen within DH and attempt to sidestep traditional disciplinary identities, the latest being Reviews in Digital Humanities, the first issue of which is dated January 2020. But even these folks admit ominously in their "About" text that “similar endeavors have been largely unsuccessful in the past.” Perhaps print journals could pick up the slack? The journal American Quarterly announced a digital review series in American Studies with some fanfare in 2016, but it has, as far as I can see, produced only one actual review. Print journals in general seem like a strange venue for such reviews. Reading a print review of a digital project is a bit like looking at a stuffed bear at the zoo. It fails to satisfy.

I would argue that professional associations like SCS have a valuable role to play. Many of the projects in question aim to help students and scholars of traditional academic subjects, and the associations themselves are collections of field experts in those subjects. The scholars there may be less than current in DH methods, but they can certainly evaluate how useful a project is to students and scholars in their own disciplines. The professional associations typically publish print journals, but they are not themselves print journals. They all have stable websites and, more importantly, stable organizations.

The SCS publishes about one digital project review per month, but honestly it has been hard to identify willing reviewers. Guidelines for digital project reviews are posted on the SCS website, and they are pretty straightforward; much of their content was borrowed from the Bryn Mawr Classical Review and tweaked to apply to digital projects. The trick will be getting more people in the associations interested in doing the work. In our own field, BMCR chokes our in-boxes (that faucet is 100% open), while reviews of digital projects are few and far between. It is difficult to find qualified scholars to write digital project reviews without the offer of something tangible, like a book, in return. BMCR has, it seems, essentially given up on digital projects.

In the dispiriting landscape of failed DH review efforts, the SCS series has been modestly successful. But, honestly, I worry about the uniformly positive tone of the digital classics project reviews the SCS has published so far. We need critical reviews. DH has a culture of mutual support, collaboration, and generosity, which is wonderful if you are involved but damaging to the credibility of the field as a whole in the long run. I urge you to volunteer to review a digital project for the SCS, to apply the standards being discussed in this panel, and not to hold back. Be the peer review you want to see in the world. If everyone in this room commits to doing a single review in 2020, we will set an example that the entire DH field will notice, and pave the way for a golden age of digital classics to come.

References

“Evaluation of Digital Scholarship at Dickinson.” Memo, December 20, 2013. https://www.dickinson.edu/download/downloads/id/4510/evaluation_of_digital_scholarship_at_dickinson

Cohen, Daniel J., and Tom Scheinfeldt, eds. Hacking the Academy: New Approaches to Scholarship and Teaching from Digital Humanities. Ann Arbor: University of Michigan Press, 2013. www.jstor.org/stable/j.ctv65swj3.

Presner, Todd. “How to Evaluate Digital Scholarship.” Journal of Digital Humanities 1.4 (Fall 2012). http://journalofdigitalhumanities.org/1-4/how-to-evaluate-digital-scholarship-by-todd-presner/
