Visualizing the Classics

Anvil Academic and Dickinson College Commentaries are pleased to announce the availability of a $1,000 prize for the best scholarly visualization of data in the field of classical studies submitted during 2013. Two runners-up will be awarded prizes of $500 each.

 

Submissions must include:

  • one or more visual representations of data that involves some linguistic component (Latin, Greek, or another ancient language of the Greco-Roman world), but may also include physical, geospatial, temporal, or other data;
  • a research question and narrative argument that describes the conclusions drawn from the data and the visualization; and
  • the source data itself.

Submissions in any and all sub-fields of classical studies, including pedagogical approaches, are welcome from any individual or team. The three winning submissions will be published by Anvil under a Creative Commons license (CC-BY-ND). The visualizations themselves and the narratives that accompany them will be published on Anvil’s website. The source data may be published there as well; though in any case the source data must be in some published form and included, even if only via link, with the submission.

Submissions will be evaluated by the panel of reviewers listed below on the criteria of scholarly contribution, effectiveness of the visualization, accuracy and relevance of the data, and the cogency of the conclusions drawn. Existing digital projects are welcome to submit entries, which must be formatted in a way that can be republished by Anvil, as described above.

Please contact Fred Moody (fmoody@anvilacademic.org) or Chris Francese (francese@dickinson.edu) with any questions.

Deadline for submission: December 31, 2013, to fmoody@anvilacademic.org; only submissions in electronic form will be considered.

Panel of reviewers:

John Bodel, W. Duncan MacMillan II Professor of Classics and Professor of History, Brown University

Alison Cooley, Reader & Deputy Head, Department of Classics & Ancient History, University of Warwick

Gregory Crane, Professor of Computer Science, Tufts University, and Humboldt Professor, Universität Leipzig

Lin Foxhall, Professor of Greek Archaeology and History, Head of School, School of Archaeology and Ancient History, University of Leicester

Chris Francese, Professor of Classical Studies, Dickinson College

Jonathan Hall, Phyllis Fay Horton Distinguished Service Professor in the Humanities and Professor of History and Classics, University of Chicago

Dominique Longrée, Professor of Classics, University of Liège and Saint-Louis University, Brussels

Andrew M. Riggsby, Professor of Classics and Art History, University of Texas at Austin

Greg Woolf, Professor of Ancient History, University of St. Andrews

 

Image: Pont du Gard, Dimitris Kilymis (cc)

Classical Commentary DIY

A guest post from Peter Sipes, who has been using the DCC core Latin vocabulary in the process of creating texts for his students. Perhaps you might like to do the same? Peter explains exactly how (and why) he does it . . .

Over the last few years of teaching Latin to homeschoolers, I’ve found that I need to make a lot of my own materials. It’s not that what’s available is poor quality: quite the contrary. I worked for the publisher Bolchazy-Carducci for the better part of five years, and use their books when it makes sense. I’m also a great admirer of Hans Ørberg’s Lingua Latina series. There is a lot of high-quality material available for beginners and people studying Golden Age Latin literature. Once you get away from those two sweet spots, however, the supply of student materials quickly dries up.

Since I want to make sure my students are aware of a broad range of Latin, I present post-Classical literature. I’ve taught selections from the Vulgate three times, and just finished up with a second go-around of Thomas More’s Utopia. These are wonderful texts, but they require me to make my own materials for students. My aim is fluent reading, and so I like for everything—text, notes, and vocabulary—to be on one page, as in Clyde Pharr’s well-known text of the Aeneid.

With some serious tweaking of process over the last few years, I have finally nailed down a good workflow. The big breakthrough was the publication of the DCC Core Latin Vocabulary—before that I never knew which words to assume students knew. Worse, I always felt like I was trying to reinvent the wheel.

Now that I’ve got it down, I’ve been slowly sharing my work, which you can find free of charge at this site. Here’s how I make my DIY commentaries on my laptop using free tools. Follow along with me as I make a student handout for chapter 44 of the Gesta Romanorum (entitled de invidia).

1. Select your text. I like thelatinlibrary.com (graphic 1). The text is fairly clean, and—this is important to me—there’s little formatting on it. Perseus has high-quality text, but it has a lot of formatting you’ve got to get out.

1. latin library home

 

2. gr copy and paste

2. Open up your word processor. I use OpenOffice. Whatever you use, it needs good table support. Set up a new text document with a 2×3 table (graphic 3). The next few steps are tricky, but your students will want you to go through the trouble to get line numbering.

3. 2x3 table

3. Highlight the two cells in the top row (graphic 4). Merge them (graphic 5). Now for the tricky part: highlight that cell and split it vertically into three new cells (graphic 6). Highlight the two left cells—but not the one on the right. Merge those two cells.

4. top two cells highlighted
5. highlight top cell
6. split cells

4. Push the cell divider to the right so that the upper left cell is really wide in comparison to the one on the upper right (graphic 7). The upper left cell will house the text. The upper right will house the line numbers. Steps 2, 3, and 4 will seem pretty odd right now. Just follow the screen caps.

7. finished table

5. Copy (graphic 2) and paste the text selected in step 1 into the upper left cell (graphic 8). At this point you’re pretty much ready to get to the real work. I like to double space the text and apply some gentle formatting—but that’s wholly optional. Here are my preferences for formatting:
a. Double space the text. Students need room to write and mark up.
b. Get the title out of the text cell. It offends me aesthetically—no other good reason.
c. Rag right alignment. I come from print. Old biases die hard.
d. Indent the paragraphs. Same reason.

8. text pasted in

6. If your text is longer than half of a page, you need to divide it into sections. Repeat steps 2–5 to make new pages. Better yet: cut and paste the table you’ve already got. It’s best to divide long texts at this step rather than after you’ve developed the vocabulary list. You can tweak the text on each page a bit after this step, but it is a tedious and error-prone process to do more than that.

7. Copy the text from your text cell into a new blank document (graphic 9). Perform a find and replace to turn spaces into returns (graphic 10). At this point, you should have a list of words, one word per line, in the order of the text (graphic 11).

9. text prepared for find and replace
10. find and replace dialog box
11. spaces replaced
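
If you are comfortable with a little scripting, this step can be automated. Here is a minimal Python sketch of the same transformation (the file names are placeholders for wherever you saved the text):

    # One word per line: the scripted equivalent of the find and replace above.
    # "gesta_44.txt" and "wordlist.txt" are placeholder file names.
    with open("gesta_44.txt", encoding="utf-8") as f:
        text = f.read()

    # Splitting on whitespace is the same as turning every space into a return.
    with open("wordlist.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(text.split()))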

8. Scrolling up, look for things you may not want on separate lines—usually names. I find that things to fix pop out much faster if I’m going against the flow of the text. Example: M., Tullius and Cicero should be on the same line.

9. At the top, select all text and sort alphabetically (graphic 12). Once again finding myself at the bottom of the list, I scroll back up looking for and deleting duplicates. But that is optional. Sometimes a quote mark or a parenthesis will cause its attached word to float to the top (graphic 13). Delete the offending punctuation and re-sort. This isn’t optional.

12. sort dialog box
13. quotes ruin alpha order
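
Script-minded readers can fold step 9 into a few lines of Python as well. This sketch strips stray punctuation before sorting, so quote marks and parentheses never get the chance to ruin the alphabetical order; the set of characters to strip is a guess and may need adjusting for your text:

    # Strip punctuation that clings to words, sort alphabetically, drop duplicates.
    import string

    with open("wordlist.txt", encoding="utf-8") as f:
        words = [w.strip().strip(string.punctuation + "“”‘’«»") for w in f]

    # Case-insensitive sort, so capitalized names file in with everything else.
    unique = sorted({w for w in words if w}, key=str.lower)

    with open("wordlist_sorted.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(unique))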

10. Open up the DCC Top 1,000 and scroll down both the Top 1,000 and the newly generated word list. When a word on the Top 1,000 appears, delete it from the word list (graphic 14). For the most part, you can probably guess the contents of the Top 1,000—but be careful until you know the list better. There are surprises.

14. top 1000 and wordlist
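
Here too a script can make a first pass, if you keep the Top 1,000 as a text file with one headword per line. Note that it only removes exact matches; inflected forms like the bello and bella discussed below still require human judgment, so treat the output as a starting point rather than a finished list:

    # Remove words that exactly match a headword on the DCC core list.
    # "core.txt" is a placeholder: the Top 1,000, one headword per line.
    with open("core.txt", encoding="utf-8") as f:
        core = {line.strip().lower() for line in f if line.strip()}

    with open("wordlist_sorted.txt", encoding="utf-8") as f:
        words = [line.strip() for line in f if line.strip()]

    # Inflected forms usually will not match their dictionary headword,
    # so review the survivors by hand, as described below.
    kept = [w for w in words if w.lower() not in core]

    with open("low_frequency.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(kept))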

Sometimes there will be words that are obviously derived from one of the Top 1,000 (graphic 15). I don’t delete them, since I would rather offer too much help than too little—but you might want to delete these derived terms to encourage student vocabulary-building strategies. In the text I’m preparing, artifex is clearly derived from ars. Even though a student should be able to guess the meaning of artifex based on ars, I don’t chance it. I’d rather the gloss be at hand so as not to interrupt the flow of reading more than necessary.

15. derived terms

Sometimes there will be words that are potential hits to the Top 1,000. In our text, we have bello and bella on the list. Since alphabetization removed the words from their contexts, you need to go check. Are bello and bella versions of bellum, -i (war) or bellus, -a, -um (beautiful)? One is on the list. The other is not. Fortunately this doesn’t happen too often. In this case, both bello and bella are derived from bellum, -i (war), which is on the Top 1,000 list. Out they go. On occasion, forms like obtulit (offero) and sustulit (tollo) show up and land somewhat out of alphabetical order—remember to get them out too (graphic 16).

16. offero - obtulit

11. Once I’ve thrown out the Top 1,000 from the raw vocabulary list, I format the raw vocabulary list a bit (graphic 17): I set the type to single spacing; turn the point size down a little; and get rid of excess space before or after paragraphs. Copy the list of lower frequency words and then paste it into the left cell below the text (graphic 18).

17. low frequency vocab list
18. low freq in handout

12. Turn your raw word list into an actual glossary. Principal parts, noun stems, definitions—the whole lot. I add macrons to this list of vocabulary (on a Mac the Hawaiian keyboard is a godsend—option + vowel = vowel with macron: graphic 19). I like to use my paper dictionary, but Wiktionary (graphic 21) and Perseus will both tell you where the macrons go. If I’m in a pinch for time, I rely on my memory. Though I prefer macrons, they are optional. If the word list spills off the bottom of the page a little at this point, don’t worry.

19. switching to hawaiian
21. wiktionary sample

13. Cut and paste the vocabulary list into two columns of equal length (graphic 20).

20. vocabulary added

14. Add in the line numbers in the skinny cell on the top right (graphic 22). This will take some patience and some fiddling with the paragraph spacing to make it turn out right. Using a soft return (shift + enter) may take some of the pain out of the procedure. For the example I put in every line number, which I usually don’t.

22. line numbers

15. Write the notes in the two remaining cells on the bottom. In an ideal world, it wouldn’t matter much what level the notes are at. Students will use them when they need them and ignore them when they don’t. Of course, this isn’t an ideal world. On average, I’d rather err on the side of too much help, since my aim is reading fluency. I tend to gloss over style and rhetoric in the notes and go for morphology when writing for beginning students, as is the case in our example. I probably also err on the side of an overly conversational style in the notes as well, but that’s what works for me. In the example, I point out the present participles quite frequently, since we haven’t come to them yet.

16. Find the cell boundary properties. Turn all cell boundaries white or 0 pt. It makes the handout look more professional.

17. Export as PDF and upload to scribd.com or Google Drive—accounts are free, and the more material openly shared the better. What’s even better is that Scribd allows for revisions to be posted on uploaded documents. I exported the OpenOffice file to a Word format and have uploaded it here. Feel free to tinker with the file to see what I’ve done. The final PDF is on scribd.com.

18. Read with students and enjoy!

–Peter Sipes (sipes23@gmail.com)

Greek Core Vocabulary: A Sight Reading Approach

Image: Crystian Cruz, via Flickr (http://bit.ly/13HaBAU)

(This is a slightly revised version of a talk given by Chris Francese on January 4, 2013 at the American Philological Association Meeting, at the panel “New Adventures in Greek Pedagogy,” organized by Willie Major.)

Not long ago, in the process of making some websites of reading texts with commentary on classical authors, I became interested in high-frequency vocabulary for ancient Greek. The idea was straightforward: define a core list of high-frequency words that would not be glossed in running vocabulary lists to accompany texts designed for fluid reading. I was fortunate to be given a set of frequency data from the TLG by Maria Pantelia, with the sample restricted to authors up to AD 200, in order to avoid distortions introduced by church fathers and Byzantine texts. So I thought I had it made. But I soon found myself in quicksand, slowly drowning in a morass infested with hidden, nasty predators, until Willie Major threw me a rope, first via his published work on this subject, and then with his collaboration in creating what is now a finished core list of around 500 words, available free online. I want to thank Willie for his generosity, his collegiality, his dedication, and for including me on this panel. I also received very generous help, data infusions, and advice on our core list from Helma Dik at the University of Chicago, for which I am most grateful.

What our websites offer that is new, I believe, is the combination of a statistically-based yet lovingly hand-crafted core vocabulary, along with handmade glosses for non-core words. The idea is to facilitate smooth reading for non-specialist readers at any level, in the tradition of the Bryn Mawr Commentaries, but with media—sound recordings, images, etc. Bells and whistles aside, however, how do you get students to actually absorb and master the core list? Rachel Clark has published an interesting paper on this problem at the introductory level of ancient Greek that I commend to you. There is also of course a large literature on vocabulary acquisition in modern languages, which I am going to ignore completely. This paper is more in the way of an interim report from the field about what my colleague Meghan Reedy and I have been doing at Dickinson to integrate core vocabulary with a regime based on sight reading and comprehension, as opposed to the traditional prepared translation method. Consider this a provisional attempt to think through a pedagogy to go with the websites. I should also mention that we make no great claim to originality, and have taken inspiration from some late nineteenth century teachers who used sight reading, in particular Edwin Post.

In the course of some mandated assessment activities it became clear that the traditional prepared translation method was not yielding students who could pick their way through a new chunk of Greek with sufficient vocabulary help, which is our ultimate goal. With this learning goal in mind we tried to back-design a system that would yield the desired result, and have developed a new routine based around the twin ideas of core vocabulary and sight reading. Students are held responsible for the core list, and they read and are tested at sight, with the stipulation that non-core words will be glossed. I have no statistics to prove that our current regime is superior to the old way, but I do know it has changed substantially the dynamics of our intermediate classes, I believe for the better.

Students’ class preparation consists of a mix of vocabulary memorization for passages to be read at sight in class the next day, and comprehension/grammar worksheets on other passages (ones not normally dealt with in class). Class itself consists mainly of sight translation, and review and discussion of previously read passages, with grammar review as needed. Testing consists of sight passages with comprehension and grammar questions (like the worksheets), and vocabulary quizzes. Written assignments focus on textual analysis as well as literal and polished literary translation.

The concept (not always executed with 100% effectiveness, I hasten to add) is that for homework students focus on relatively straightforward tasks they can successfully complete (the vocabulary preparation and the worksheets). This preserves class time for the much more difficult and higher-order task of translation, where they need to be able to collaborate with each other, and where we’re there to help them—point out word groups and head off various types of frustration. It’s a version, in other words, of the flipped classroom approach, a model of instruction associated with math and science, where students watch recorded lectures for homework and complete their assignments, labs, and tests in class. More complex, higher-order tasks are completed in class, more routine, more passive ones, outside.

There are many possible variations of this idea, but the central selling point for me is that it changes the set of implicit bargains and imperatives that underlie ancient language instruction, at least as we were practicing it. Consider first vocabulary: in the old regime we said essentially: “know for the short-term every word in each text we read. I will ask you anything.” In the new regime we say, “know for the long-term the most important words. The rest will be glossed.” When it comes to reading, we used to say or imply, “understand for the test every nuance of the texts we covered in class. I will ask you any detail.” In the new system we say, “learn the skills to read any new text you come across. I will ask for the main points only, and give you clues.” What about morphology? The stated message was, “You should know all your declensions and conjugations.” The unspoken corollary was “But if you can translate the prepared passage without all that you will still pass.” With the new method, the daily lived reality is, “If you don’t know what endings mean you will be completely in the dark as to how these words are related.” When it comes to grammar and syntax, the old routine assumed they should know all the major constructions as abstract principles, but with the tacit understanding that this is not really likely to be possible at the intermediate level. The new method says, “practice recognizing and identifying the most common grammatical patterns that actually occur in the readings. Unusual things will be glossed.” More broadly, the underlying incentive of our usual testing routines was always, “Learn an English translation of assigned texts and you’ll be in pretty good shape.” This has now changed to: “know core vocabulary and common grammar cold and you’ll be in pretty good shape.”

Now, every system has its pros and cons. The cons here might be a) that students don’t spend quite as much time reading the dictionary as before, so their vocabulary knowledge is not as broad or deep as it should be; b) that the level of attention to specific texts is not as high as in the traditional method; and c) that not as much material can be covered when class work is done at sight. The first of these (not enough dictionary time) is a real problem in my view that makes this method not really suitable at the upper levels. At the intermediate level the kind of close reading that we classicists value so much can be accomplished through repeated exposure in class to texts initially encountered at sight, and through written assignments and analytical papers. The problem of coverage is alleviated somewhat by the fact that students encounter as much or more in the original language than before, thanks to the comprehension worksheets, which cover a whole separate set of material.

On the pro side, the students seem to like it. Certainly their relationship to grammar is transformed. They suddenly become rather curious about grammatical structures that will help them figure out what the heck is going on. With the comprehension worksheets the assumption is that the text makes some kind of sense, rather than what used to be the default assumption, that it’s Greek, so it’s not really supposed to make that much sense anyway. While the students are still mastering the core vocabulary, one can divide the vocabulary of a passage into core and non-core items, holding the students responsible only for core items. Students obviously like this kind of triage, since it helps them focus their effort in a way they acknowledge and accept as rational. The key advantage to a statistically based core list in my view is really a rhetorical one. It helps generate buy-in. The problem is that we don’t read enough to really master the core contextually in the third semester. Coordinating the core with what happens to occur in the passages we read is the chief difficulty of this method. I would argue, however, that even if you can’t teach them the whole core contextually, the effort to do so crucially changes the student’s attitude to vocabulary acquisition, from “how can I possibly ever learn this vast quantity of ridiculous words?” to “Ok, some of these are more important than others, and I have a realistic numerical goal to achieve.” The core is a possible dream, something that cannot always be said of the learning goals implicit in the traditional prepared translation method at the intermediate level.

The question of how technology can make all this work better is an interesting one. Prof. Major recently published an important article in CO that addresses this issue. In my view we need a vocabulary app that focuses on the DCC core, and I want to try to develop that. We need a video Greek grammar along the lines of Khan Academy that will allow students to absorb complex grammatical concepts by repeated viewings at home, with many, many examples, annotated with chalk and talk by a competent instructor. And we need more texts that are equipped with handmade vocabulary lists that exclude core items, both to facilitate reading and to preserve the incentive to master the core. And this is where our project hopes to make a contribution. Thank you very much, and I look forward to the discussion period.

–Chris Francese

HANDOUT:

Greek Core Vocabulary Acquisition: A Sight Reading Approach

American Philological Association, Seattle, WA

Friday January 4, 2013

Panel: New Adventures in Greek Pedagogy

Christopher Francese, Professor of Classical Studies, Dickinson College francese@dickinson.edu

References

Dickinson College Commentaries: http://dcc.dickinson.edu/

Latin and Greek texts for reading, with explanatory notes, vocabulary, and graphic, video, and audio elements. Greek texts forthcoming: Callimachus, Aetia (ed. Susan Stephens); Lucian, True History (ed. Stephen Nimis and Evan Hayes).

DCC Core Ancient Greek Vocabulary http://dcc.dickinson.edu/vocab/greek-alphabetical

About 500 of the most common words in ancient Greek, the lemmas that generate approximately 65% of the word forms in a typical Greek text. Created in the summer of 2012 by Christopher Francese and collaborators, using two sets of data: 1. A subset of the comprehensive Thesaurus Linguae Graecae database, including all texts in the database up to AD 200, a total of 20.003 million words (of which the period AD 100–200 accounts for 10.235 million). 2. The corpus of Greek authors at Perseus Chicago, which at the time our list was developed was approximately 5 million words.

Rachel Clark, “The 80% Rule: Greek Vocabulary in Popular Textbooks,” Teaching Classical Languages 1.1 (2009), 67–108.

Wilfred E. Major, “Teaching and Testing Classical Greek in a Digital World,” Classical Outlook 89.2 (2012), 36–39.

Wilfred E. Major, “It’s Not the Size, It’s the Frequency: The Value of Using a Core Vocabulary in Beginning and Intermediate Greek,” CPL Online 4.1 (2008), 1–24. http://www.camws.org/cpl/cplonline/files/Majorcplonline.pdf

 

 

Read Iliad 1.266-291, then answer the following in English, giving the exact Greek that is the basis of your answer:

 

  1. (lines 266-273)  Who did Nestor fight against, and why did he go?

 

who                                                                                                                                  

why                                                                                                                                  

 

  2. (lines 274-279) Why should Achilles defer to Agamemnon, in Nestor’s view?

 

                                                                                                                                        

                                                                                                                                        

  3. (lines 280-284) What is the meaning and difference between κάρτερος and φέρτερος as Nestor explains it?

 

                                                                                                                                        

                                                                                                                                        

  4. (lines 285-291) What four things does Achilles want, according to Agamemnon?

                                                                                                                                        

                                                                                                                                        

Find five prepositional phrases, write them out and translate, noting the line number and the case that each preposition takes.

1.                                                                                                                    

2.                                                                                                                    

3.                                                                                                                    

4.                                                                                                                    

5.                                                                                                                    

 

Find five verbs in the imperative mood, write them out and translate, noting the line number and tense of each.

1.                                                                                                                    

2.                                                                                                                    

3.                                                                                                                    

4.                                                                                                                    

5.                                                                                                                    

The Scholarly Edition Goes Social

Latin lolcat by Laura Gibbs

Ok, so you’re the scholarly textual edition. You’re a venerable and useful genre. You’ve got some years on you, but you still look good. You have a lot of friends, even some fans, and people respect you. But you were born too early to understand this whole social media craze. You want to be connected, and it’s good to keep in touch with your family. But why do people seem to feel the need to be constantly sharing all this quotidian detail? Many people you really admire won’t have anything to do with social media. And yet, it feels lame to be left behind. After all, you’ve still got it, you’re still relevant, right? Question is, scholarly edition, should you break down and join Facebook?

It is in fact your destiny to embrace social media, according to a new article by a team of researchers published in the December issue of Literary & Linguistic Computing: “Toward Modeling the Social Edition: An Approach to Understanding the Electronic Scholarly Edition in the Context of New and Emerging Social Media.” The authors, Ray Siemens, Meagan Timney, Cara Leitch, Corina Koolen, and Alex Garnett, are associated with the Electronic Textual Cultures Lab at the University of Victoria, British Columbia. The article itself is behind a paywall, but a pre-print version is available here.

They propose that digital textual editions have gone through three phases so far, and are about to enter a fourth. The early stages of digitization (in the 1980s) made possible the “dynamic text,” in which readers could search, retrieve, and analyze in a way impossible in print media, treating the text with the flexibility of a database. This sped up all kinds of academic tasks. Shortly thereafter (in the 1990s) arose the “hypertextual edition,” which uses linking to give access to the various types of apparatus (textual, critical) that sometimes accompany print scholarly editions, and to even more in the way of images, parallel texts, and other linked resources. The third phase saw the development of a combination of the first two, the “dynamic edition,” in which the user can both interact with the text itself, change it, slice and dice it, and have access to various scholarly annotation and apparatus via hypertext. One promise of the dynamic edition, which they admit is not fully realized in practice yet, is that algorithmic processes can be used to start to automate some of the scholarly activities of textual scholarship. If we can “automate the process of formalizing the associations we take for granted in current editions,” they write, “such an edition has the ability, in effect, to annotate itself.”

The fourth phase, into which we are currently hurtling, is characterized by the application of social media tools and crowd sourcing to scholarly editorial practices. Siemens and collaborators point out that social tools enlarge the knowledge-building community beyond the traditional realm of academic scholars, and tap into the category of citizen scholars, not affiliated with academic institutions, in addition to the usual pools of academic labor. Siemens et al. identify five new modes of engagement with digital objects using social tools:

  1. Collaborative annotation (e.g. Diigo, digress.it).
  2. User-derived content (the Library of Congress Flickr stream, NINES).
  3. Folksonomy tagging, in which users add metadata in the form of keyword tags for shared content (English Broadside Ballad Archive, Flickr, Twitter, Del.icio.us).
  4. Community bibliography, in which users collect and catalogue references by means of academic citations (Zotero, reddit, StumbleUpon).
  5. Text analysis, which involves “algorithmically facilitated search, retrieval, and critical processes.” (E.g. the open source electronic role-playing game for educational use called Ivanhoe, based on the Walter Scott novel).

But beyond the various tools involved, they claim to identify a fundamental shift in the sociology of knowledge that drives the fourth phase. They see an inevitable move from the editor as a single, quasi-omniscient authority to the editor as a kind of impresario who can “facilitate interaction among the communities of practice they serve.” This community building is the essential thing that current self-contained digital editions do not do. The new social edition editor does not set himself or herself up as the arbiter of text and annotation, no matter how dynamic. These new editors coordinate contributions from many sources and oversee “living” editions.

At this point the rhetoric of the article begins to evoke the Reformation, with an added touch of Marxist revolutionary idealism. The old-style print-based scholarly editor is a “mediator” between the text and reader, “determines and shapes what is important to the reader,” and “exerts immense control over what the reader can engage.” The new social edition undermines these self-appointed authority figures that come between text and reader, thus “challenging current notions of personal and institutional authority, and the systems in which they are perpetuated.”

But in my view it is far too simple to say that the expert editor must now simply yield to, and facilitate, the crowd. For one thing, the use of the word “edition” in this discussion is misleading, and blurs distinctions between very different types of intellectual labor, some amenable to crowd-sourcing, some not. On the one hand there is textual editing in the strict sense: the examination, transcription, and collation of archival documents to produce a readable and reliable text with reports of variant readings. The people who do this kind of work are hardly constricting interpretive possibilities. They are making material available to the community, often at considerable risk to their eyesight and domestic happiness. This is not the same thing as annotation, the equipping of texts with relevant information about their historical and literary contexts (which can be much more ideologically loaded), and linguistic explanations (which need to take into account very specific audiences). A third distinct area is the application of digital tools in computational analysis of textual data and the crafting of interpretive perspectives on that basis.

The article lumps all this together in the notion of “edition,” but in each area there is a different dynamic at work when it comes to the relationship between the expert scholar and a reading, and potentially contributing, community. And more importantly this relationship varies markedly with different types of texts, something ignored completely in the article. Take annotation, for example. Classic texts with highly developed academic cultures surrounding them, like Thomas More’s Utopia, do not readily elicit crowd annotation. We know this because it’s being tried at the site Open Utopia. The user-generated comments are not numerous or impressive, and much of the material represents the work of its editor, Stephen Duncombe, an associate professor at NYU, who published a book based on the site. My own experience trying to develop a wiki community around Caesar’s Gallic War yielded similarly unimpressive results.

By strong contrast, in the case of a set of contemporary texts with little or no existing scholarly commentary, the novels of Thomas Pynchon, elaborate fan wikis have developed which comprehensively annotate just about every page of his extremely long novels. Like the burgeoning and sometimes hilarious electronic literary genre of Amazon.com product reviews, crowd-sourced commentary and annotation successfully grows up to fill a vacuum of trusted information, not to replace trusted expert-made resources.

The same can be said of other types of editorial labor. Nobody wants to reinvent the wheel. The fascinating thing about the social media and self-publishing revolution is not that citizen scholars can now seize the tools of production and dethrone the academics (as desirable as that might in some cases be), but that independent scholars can now contribute in their own ways, and serve new audiences with new texts and new genres of edition. In my field there are many examples, including Evan Milner’s massive archive of textual, video, and audio Latin materials, Laura Gibbs’ excellent work with fables and proverbs, and, delightfully, her new genre of the Latin lolcat, a combination of proverb text and feline image. There are innovative pedagogical texts being edited and published outside the normal channels by Justin Schwamm and Peter Sipes, among others. Then there are the apps being created by non-academic computer programmers such as Nick Kallen, Paul Hudson, and Harry Schmidt, apps that deliver Latin and Greek texts with the tools to read them. These are resources that people want, but academics will never be rewarded for making, and publishers generally won’t bother with. Social media means we all benefit from this new energy.

The “social edition” is thus not a box created and overseen by an academic impresario, and filled with content by a crowd of lesser contributors. It is a totally unpredictable new thing, driven by the creativity and desire for credited publication on the part of highly trained, but non-tenure track, scholars. Rather than distributing traditional academic labor, social media enlarges the pool of publishing scholars. Rather than prompting the re-making of old scholarly editions, it identifies and fills needs that the academic establishment can’t even see, much less satisfy.

So my advice, scholarly edition, is not to worry, and to do what feels right. Find the mix of social media and good old-fashioned expert editorial authority that works in each case. Stop worrying about the trends, and think hard about the users and what they need.

–Chris Francese

3 types of publication that classical studies needs

Glancing over the latest issue of a certain classics journal that came to my door, and seeing nothing terribly interesting or new, I got to thinking . . . The web has made it possible to publish scholarly work in new ways, and that’s certainly what DCC is trying to do. Classical commentary is one of the oldest genres out there. What are some other types of scholarship that classicists could usefully embrace in the digital realm? How can we leverage digital media to make progress? Herewith, three suggestions. I’d love to hear more!

1. Critical reflections on pedagogy and descriptions of innovative teaching technique using digital tools. Pedagogy discussions in our field happen predominantly in informal venues like listservs and at conferences. The online journal Teaching Classical Languages (http://tcl.camws.org/) is a leader in making these important and interesting discussions more widely available and subjecting them to some peer review. What if we could do that not just with a traditional article, but with video, audio, and ancillary materials provided?

2. Distant Reading, à la Moretti’s Graphs, Maps, Trees (which “argues heretically that literature scholars should stop reading books and start counting, graphing, and mapping them instead”). What can statistical analysis of classical texts, and the graphical display of that data, show us that is new and interesting? There is not much of this yet in classics as far as I know, but digital tools are making it more possible. Publishing it in digital form would allow for full publication of data and many more illustrations/visualizations than in traditional print media. Related to this but broader is . . .

3. Visualization projects (infographics etc.) made by scholars and conveying scholarly perspectives on the ancient world. These could be literary, or come from archaeologists and historians. Here again, as far as I am aware there is not much happening at the moment (but I’m not an archaeologist). Ramsay MacMullen did some fascinating work along these lines with inscriptional evidence. What can be done with coin hoards, word counts, anything countable that relates to the ancient world?

–Chris Francese

How principal are Greek principal parts?

I just finished adding the principal parts to the DCC ancient Greek core vocabulary list, something I meant to do last summer, but which got lost in the shuffle. So that’s done, and up. Phew. Anybody who has tried to learn ancient Greek knows what a big hurdle the principal parts are: absolutely essential, but a beastly task of brute memorization. I am here to say that, as one who focuses more on Latin than on Greek, I have to re-learn some of them on a regular basis if I want to read (or teach) Greek well. This is not the fun, life-affirming, profound, aesthetically enriching part of Greek. This is the boot camp, the weight-lifting one must do to get there.

The idea behind principal parts is to put in your hands, and hopefully in your brain, all the different stems of a verb, so that (theoretically) any inflected form can be derived from, or traced back to, one of them. But of course it’s not quite that simple.

On the one hand, some verb forms and related things are extremely common, but not really directly derivable from the principal parts as they are traditionally presented. εἰκός, for example, is a very common participial form meaning “likely, plausible” that is not immediately apparent from the principal parts of ἔοικα. It’s in the dictionary, of course, but somewhat buried in the entry on ἔοικα.

On the other hand, many Greek verbs have principal parts whose stems are only very rarely employed. πέφασμαι, for example, is a perfect tense principal part of a very common verb, φαίνω. But forms derived from it are rare. πέφαγκα, another perfect form listed by Smyth among the “principal” parts, is very rare indeed, with only seven attestations in the TLG, almost all of those from late antique grammarians and lexica. I guarantee you will never encounter it outside a grammar book.

Part of the problem here is that our apparatus for learning ancient Greek is largely derived from the big, comprehensive, scientific grammars of the 19th century, and thus has a tendency toward completism rather than conveying what is most essential. This is a general problem that affects more than the issue of principal parts.

Enter into this picture the database, specifically the TLG and its lemmatizer tool. This is the tool that attempts to determine from what dictionary headword (or lemma) a given form derives. I have complained elsewhere about the impotence of existing lemmatizers when it comes to determining the meaning of homographs: forms that are spelled the same but derive from different lemmas, or forms derived from a single lemma but which could have more than one grammatical function. This is a serious and as yet unsolved problem when it comes to asking a computer to analyze a given chunk of Greek or Latin. And the homograph problem also substantially compromises frequency data based on machine-analyzed large corpora of Greek and Latin.

But one thing at which the lemmatizers are extraordinarily good (theoretically flawless) is telling how many occurrences of a certain word form there are in a given corpus. And by examining that data you can get, in most cases, a very accurate picture of how common the forms derived from a particular stem or principal part of a Greek verb are. In other words, the TLG Lemma Search (which is what I have been working with in making the principal parts lists for our site) helps us see more clearly than has ever been possible which principal parts of each verb are the most important, and which very common forms lie slightly outside the traditional lists of principal parts. It has the potential to make principal parts lists far more informative and helpful to the language learner even than the information found in Smyth, LSJ, or any of the current textbooks.

I can think of a couple of ways in which TLG lemmatizer data could be used to enhance the presentation of Greek principal parts. One could, for example, have a second list of, say, the five most statistically common forms of a given verb. In the case of πάρειμι, for example, that would be the following (with the total raw occurrences in TLG as of today):

παρόντος (8587), παρόν (5406), παρόντα (4920), παρόντων (4442), παρόντι (3451)

In fact the top 10 or so are all participial. παρών παροῦσα παρόν: that’s what I call a principal part!

Another way to do it would be to print in bold the principal part from which the most forms derive, or even use a couple of different font sizes to reflect how commonly used each principal part is. For σῴζω, save, the figures are (roughly) as follows: σῴζω (8600), σώσω (1300), ἔσωσα (5500), σέσωκα (400), σέσωσμαι (700), ἐσώθην (8800). Interesting to see the aorist passive stem beat out the present stem. The top vote-getters in terms of forms are σωθῆναι, ἔσωθεν, σώζεται/σῴζεται, σῶσαι, and σῶσον.
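
To make the idea concrete, here is a minimal Python sketch of both presentation schemes: rank the individual forms by frequency, and total the counts under each principal part to decide which stems deserve bold type. The per-form counts below are invented for illustration; real numbers would come from lemmatizer output such as the TLG’s:

    # Frequency-weighted principal parts. Each tuple is
    # (attested form, principal part it derives from, corpus count).
    # The counts are made up for illustration, not real TLG data.
    from collections import defaultdict

    forms = [
        ("σωθῆναι", "ἐσώθην", 3100),
        ("σῴζεται", "σῴζω", 2200),
        ("σῶσαι", "ἔσωσα", 1800),
        ("σῶσον", "ἔσωσα", 1500),
        ("σέσωσται", "σέσωσμαι", 300),
    ]

    # Idea 1: list the most common individual forms of the verb.
    top_forms = sorted(forms, key=lambda t: t[2], reverse=True)[:5]
    print("commonest forms:", [f for f, _, _ in top_forms])

    # Idea 2: total the counts per principal part; the ranking could
    # drive bolding or font size in a printed list.
    totals = defaultdict(int)
    for _, part, count in forms:
        totals[part] += count
    for part, total in sorted(totals.items(), key=lambda t: t[1], reverse=True):
        print(part, total)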

People who are better at Greek and spend more time with large corpora and their analysis than I do have probably thought of all this long ago, and there may be some principal parts lists that incorporate some of this data. If so, I would love to hear about it.

Before closing I should give a huge thank you to Prof. Stephen Nimis from Miami University of Ohio and his collaborator Evan Hayes, whose principal parts list in their edition of Lucian’s A True Story (soon to be re-published on our site with extra features) was of great assistance as I was making our list. And I should mention here also the crucial help I have had all along with our Greek list from the great Wilfred Major, of Louisiana State University.

 

 

 

Rafael Alvarado and the future of DCC

Last month DCC benefited from an outstanding day of consulting with Rafael Alvarado, Associate Director of the SHANTI program at the University of Virginia, as well as a lecturer in Anthropology and Media Studies there. A career digital humanist, he has divided his time between building software and organizations that support the scholarly use of technology and studying digital technology as a cultural form. His consulting business is called Ontoligent Design (Twitter @ontoligent), and his blog is called The Transducer.

Some of his key recommendations were to make DCC a citable scholarly resource, in conformity with widely accepted standards of citation in digital humanities; to consider making use of comments by readers; to make the site more friendly to tablet devices like the iPad; to create print and e-book versions of all commentaries; and to continue making innovative use of geographical tools to enhance the reader experience. As a sort of promissory note to follow up on some of his excellent suggestions, I have written a new lead “about” text that I think concisely expresses what is different and important about our project. Certain aspects of this are in the future, but not that far in the future:

DCC publishes scholarly commentaries on classical texts intended to provide an effective reading and learning experience for classicists at all levels. Though they are born digital, the commentaries will also be available in print and e-book formats. In contrast to other projects that conceive of classical texts as a database, or foreground hypertext—focusing on chunking or linking the text—DCC aims at a readerly approach, and one firmly grounded in the needs of readers, teachers, and students. Texts are presented in a clean, readable format, with custom-authored notes, specially selected images and maps, and original audio and video content. Core vocabulary lists of the most common Latin and Greek words are provided, and all words not in the core lists are fully and accurately defined in running vocabulary lists that accompany each section of text. DCC commentaries are citable scholarly resources, licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Many thanks to Raf for his insightful critique and help in framing the central ideas behind the project. In other news, Prof. Ariana Traill of the University of Illinois at Urbana-Champaign has joined the editorial board. Prof. Traill is planning to work with some of her students in laying the groundwork for a future edition of the Advanced Placement selections from the Aeneid. Eric Casey of Sweet Briar College has agreed to take on substantial editing duties for our forthcoming Greek commentaries (see below). To recognize the large amount of work this represents we decided to split the editorial board along the lines of what the Bryn Mawr Classical Review does, into Senior and Associate Editors, with Eric and me as senior.

Stephen Nimis of Miami University of Ohio, who has produced a series of print-on-demand commentaries on Greek texts with Evan Hayes (the latest being some Plutarch), has offered us all his content to re-make in our format, and has offered to help create printed versions of our existing content through his distribution system. The first Nimis-Hayes commentary we will take on will be Lucian’s True History, which Prof. Casey will edit. Susan Stephens of Stanford has a well-advanced digital edition of Callimachus’ Aetia that ran into some technical problems, and she has agreed to let us put it in our series, with her continued help. This is a very exciting collaboration, with outstanding content that should raise the profile of DCC. Another very welcome addition will be Bret Mulligan’s edition of Nepos’ Life of Hannibal, which is largely done but in need of final editing and equipping with vocabulary lists and maps. So that makes three new commentaries, basic content largely complete, that we will try to equip with the various DCC enhancements this spring and summer. We are growing, and I am very pleased to see DCC developing as a kind of aggregator and editor of high quality online classical commentary.
–Chris Francese

Ovid, Amores Book 1

The DCC edition of Ovid’s Amores Book I, with notes and essays by William Turpin, is now up and ready to be used: http://dcc.dickinson.edu/

This is the first non-pilot, freshly authored and created digital edition in our series. I think it shows off nicely what can be done to enhance the reading experience of a classical text in the digital realm.

In addition to the notes, features include:

  • essays on each poem by William Turpin, with bibliography
  • images/illustrations for all poems chosen and annotated by Chris Francese
  • audio recordings for 1.1 and 1.5 by Meghan Reedy
  • vocabulary lists that gloss words not in the 1,000-word DCC core Latin vocabulary
  • an annotated Google Earth map of all places mentioned in the text, created by Dickinson student Merri Wilson

I am tremendously grateful to all who contributed time and advice and ideas. The list of acknowledgments will give an idea of how many people helped. Please let me know if you have any thoughts or suggestions.

–Chris Francese

Beyond PowerPoint

I think my favorite session at the recent Visual Learning conference at Carleton was the one on presentation and pedagogical modes. Despite its obvious utility, all of us who teach or give talks feel slightly oppressed by PowerPoint. Edward Tufte’s famous critique of PowerPoint as contributing to the Columbia disaster is extreme, but we all suffer, I think, from a twin PowerPoint dread: on the one hand, it seems to drive us to a mechanical, deadening style of speaking (“next slide please; as you can see from the outline . . .”); on the other hand, the desire not to be boring makes us want to use all the bells and whistles PowerPoint provides. The less said about those the better.

But how are we to escape? The folks at the Viz conference had some ideas.

Robert Smythe of Temple University introduced a Japanese presentation mode known as Pecha Kucha, which was new to me. It uses PowerPoint as a base, but with the following limiting rules: you are allowed 20 slides, which show for exactly 20 seconds each. These slides do not contain text (though there may be photographs that include some text). You, the speaker, talk for the 6’40” available as the slides roll by. That’s it, very basic: “no nuts, no chocolate sauce, no whipped cream,” as Robert put it.

20 seconds gives the audience time to think, to absorb an image, to contemplate. But the slides move you along, and the speaker can’t ramble. It was invented, apparently, by Japanese architects who found that when people are passionate about a project they tend to go on too long. Robert emphasized that this is not so much for teachers as for students giving research presentations. Robert has his students work without a script, without notes: just narrate the show.

The crucial beauty of this system is what it does to the speaker. Unlike PowerPoint, which brings out the bureaucrat in all of us, Pecha Kucha allows for an idiosyncratic voice to emerge, and encourages storytelling. Images are rich with implications. Pecha Kucha forces us to interpret them, to fill in the blanks. There are no fades, no transitions, no rotating, flying text, just images that drive us to connect them and make sense out of them. The emerging sense is deeply personal, and results in a much more genuine connection between speaker and audience. The example that Robert played for us from his own class, a research assignment about post-war Europe, bore this out nicely. The speaker was almost giddy in communicating her research by explicating the images.

Robert does five per semester, so the students get gradually better at this rather strange type of communication. I have always said that college curricula way under-emphasize public speaking. Here is a way for students to find their own voices at the podium at a much younger age than most of us do. And the best part: it’s fun. People have been known to organize Pecha Kucha nights as entertainment.

Tamara Carley, a PhD candidate in Environmental Sciences at Vanderbilt University, gave a fascinating demonstration of the pedagogical uses of Prezi. After a geology lecture, students are asked to go out and find images to illustrate the main concepts (they can also use professor-supplied charts, etc.), then put it all with their notes into a Prezi canvas that shows the relationships between the concepts and details as the student understands it. It’s a blank canvas. The only requirement is that the composition needs to make sense to the student, and the student needs to be able to explain why it makes sense.

Prezi has a zooming feature that makes it handle differences of scale beautifully. You can zoom back to see the mega level (say, a whole art movement for instance), the macro level (a particular artist), and the micro level (a single work). The student receives new information and works it into their own “mind map” with various levels, including all sorts of verbal, graphic, and video elements as needed. As it gets more and more elaborate, the composition is evaluated three times per semester, and ends up being in lieu of a final paper. In Tamara’s case this would traditionally be on a single mineral. With this format the final project can be on a broader variety of things, while still having substantial amounts of detail if you drill down.

Here again, the students use presentation technology to create their own meaning and organization out of given facts, not simply repackage what others are saying. Both Pecha Kucha and Prezi used in this fashion pretty much require that the student invest the material with his or her own voice and perspective, a goal which seems well worth the trouble of adjusting routines to accommodate these new techniques.

What if you want to just use traditional PowerPoint, but do it well? Doug Foxgrover, Carleton’s Communication and Training Coordinator, gave a diverting history of presentation technology, based partly on Nancy Duarte’s history of visual aids, Slide:ology. He brought along as props a 1920s vintage lantern projector with some very cool glass slides, and an overhead projector. He gave a hilariously bad PowerPoint presentation, which he offers in his classes and asks the students to critique. Ideally, he argued (echoing keynoter Scott McCloud), you want to show and tell at the same time. Foxgrover’s laws of PowerPoint are three in number: 1. Text must be readable, and not much of it, please. 2. Show only what you want others to see. 3. Time your visuals to complement your talk. His laws of graphic design for PowerPoint were also three: 1. Make your objects as simple as possible, but not simpler. 2. Use contrast to draw attention, alignment of text to avoid drawing attention. 3. Choose legible type for the screen.

All in all a fascinating panel. Thanks to all three presenters, and to the sponsors of the conference!

–Chris Francese

Comics and Visual Communication: Scott McCloud at Carleton Viz Conference

Comic artist and theorist Scott McCloud, author of Understanding Comics: The Invisible Art (1993), spoke at the recent conference Visual Learning: Transforming the Liberal Arts, at Carleton College in Northfield, Minnesota.

Of the many fascinating points in his keynote speech about the techniques of visual communication and learning, one was a critique of the way presentation software is commonly used, with outline slides that statically reproduce a series of points that a speaker is making. McCloud’s active principle, brilliantly put into practice in his own show, is synchronization: “When I’m telling you, I’m showing you. When I’m done telling you, I’m not showing you anymore.” Cognitive load time, the time it takes to “get” what you are looking at, is very quick, and continuing to display words or images long after their moment has passed is deadening. Wordy, over-dense slides, he points out, are a legacy of print culture. The mind is quick, predisposed to fill in gaps, to create meaning and narrative from small, disparate pieces of visual information. This means that “visual rhetoric” can be very powerful. But we have not as yet figured out how that visual rhetoric can best be employed. This is one area he plans to explore in his future work.

McCloud wants to figure out how to use visual culture, including comics but also the whole history of visual culture back to ancient Egyptian tomb paintings, Roman triumphal columns, and medieval stained glass windows, to try to create the visual rhetoric of the web. His main interest is finding a way to make comics work effectively on the web, and he had many fascinating examples of innovative and effective web-based comics. His sense of joyous experimentation in search of the right use of the medium was very inspiring to me as I work on finding ways to use the web to enhance classical commentary.

One of his interesting observations regarding comics is that comic strips (3 or 4 panels) have transferred quite well to the web, but that long form graphic novels (think Persepolis and Maus) have not. In his view this is because people have an in-built desire for immersion, to lose themselves in fictional worlds, and that this is simply not readily possible on a computer screen. Books allow us that immersion, that forgetting of the medium known as the proscenium arch phenomenon, in a way that screens do not.

Speaking of computer screens, McCloud was full of scorn for the preservation of upright rectangles of traditional comic pages in the digital realm. The sideways rectangle, wider than it is tall, is the more natural shape, based on the geometry of our two eyes. Theater stages and movie screens are shaped this way, as is the open print book—comic artists and web designers are foolish to ignore this, he says.

Another key point, and one quite relevant to the DCC, it seems to me, had to do with the relationship between text and image. “Form and content,” he said, “must never apologize for one another.” That is, to create an effective visual narrative, you have to believe both in the message and in the form. You can’t dress up boring or lame content by adding pretty visuals, or it will just fall flat. By the same token, you shouldn’t simply add illustrations to a great text, because they will seem like afterthoughts, appendages. When creating graphic novels of existing stories the best ones (he singled out City of Glass, based on a Paul Auster story) are true adaptations that honor the potentialities both of visual art and of the word. As we come to think at DCC of ways to use the visual to enhance the comprehension and enjoyment of Latin and Greek texts, all these reflections are highly relevant.

One more super cool idea I picked up: it is believed that there are six and only six primary facial expressions that express emotions across cultures: joy, surprise, fear, sadness, disgust, and anger. These can be combined: anger + joy = cruelty. A nifty piece of software called The Grimace Project allows the schematic mixing of these, a primitive analogue to the comic artist’s craft. Love, McCloud believes, is best conveyed by using a mixture of joy, surprise, and about 10% sadness, recognition that as wonderful as the emotion is, it is destined not to last. The Grimace Project, by the way, has been helpful to children on the Asperger’s spectrum in learning about the visual expression of emotion and social cues.

McCloud’s 2005 TED talk may give you a flavor of what a treat it was to listen to him. Thanks, Carleton (and the Mellon Foundation), for sponsoring a truly great conference. Future posts will provide details on some of the regular sessions.

–Chris Francese