Visualizing the Classics

Anvil Academic and Dickinson College Commentaries are pleased to announce the availability of a $1,000 prize for the best scholarly visualization of data in the field of classical studies submitted during 2013. Two runners-up will be awarded prizes of $500 each.

 

Submissions must include:

  • one or more visual representations of data that involves some linguistic component (Latin, Greek, or another ancient language of the Greco-Roman worlds), but may also include physical, geospatial, temporal, or other data;
  • a research question and narrative argument that describes the conclusions drawn from the data and the visualization; and
  • the source data itself.

Submissions in any and all sub-fields of classical studies, including pedagogical approaches, are welcome from any individual or team. The three winning submissions will be published by Anvil under a Creative Commons license (CC-BY-ND). The visualizations themselves and the narratives that accompany them will be published on Anvil’s website. The source data may be published there as well; in any case the source data must be available in some published form and included with the submission, even if only via a link.

Submissions will be evaluated by the panel of reviewers listed below on the criteria of scholarly contribution, effectiveness of the visualization, accuracy and relevance of the data, and the cogency of the conclusions drawn. Existing digital projects are welcome to submit entries, which must be formatted in a way that can be republished by Anvil, as described above.

Please contact Fred Moody (fmoody@anvilacademic.org) or Chris Francese (francese@dickinson.edu) with any questions.

Deadline for submission: December 31, 2013, to fmoody@anvilacademic.org; only submissions in electronic form will be considered.

Panel of reviewers:

John Bodel, W. Duncan MacMillan II Professor of Classics and Professor of History, Brown University

Alison Cooley, Reader & Deputy Head, Department of Classics & Ancient History, University of Warwick

Gregory Crane, Professor of Computer Science, Tufts University, and Humboldt Professor, Universität Leipzig

Lin Foxhall, Professor of Greek Archaeology and History, Head of School, School of Archaeology and Ancient History, University of Leicester

Chris Francese, Professor of Classical Studies, Dickinson College

Jonathan Hall, Phyllis Fay Horton Distinguished Service Professor in the Humanities and Professor of History and Classics, University of Chicago

Dominique Longrée, Professor of Classics, University of Liège and Saint-Louis University, Brussels

Andrew M. Riggsby, Professor of Classics and Art History, University of Texas at Austin

Greg Woolf, Professor of Ancient History, University of St. Andrews

 

Image: Pont du Gard, Dimitris Kilymis (cc)

Classical Commentary DIY

A guest post from Peter Sipes, who has been using the DCC core Latin vocabulary to create texts for his students. Perhaps you might like to do the same? Peter explains exactly how (and why) he does it . . .

Over the last few years of teaching Latin to homeschoolers, I’ve found that I need to make a lot of my own materials. It’s not that what’s available is poor quality: quite the contrary. I worked for the publisher Bolchazy-Carducci for the better part of five years, and I use their books when it makes sense. I’m also a great admirer of Hans Ørberg’s Lingua Latina series. There is a lot of high-quality material available for beginners and people studying Golden Age Latin literature. Once you get away from those two sweet spots, however, the supply of student materials quickly dries up.

Since I want to make sure my students are aware of a broad range of Latin, I present post-Classical literature. I’ve taught selections from the Vulgate three times, and just finished up with a second go around of Thomas More’s Utopia. These are wonderful texts, but they require me to make my own materials for students. My aim is fluent reading, and so I like for everything—text, notes, and vocabulary—to be on one page, as in Clyde Pharr’s well-known text of the Aeneid.

With some serious tweaking of my process over the last few years, I have finally nailed down a good workflow. The big breakthrough was the publication of the DCC Core Latin Vocabulary—before that I never knew which words to assume students knew. Worse, I always felt like I was trying to reinvent the wheel.

Now that I’ve got it down, I’ve been slowly sharing my work, which you can find free of charge at this site. Here’s how I make my DIY commentaries on my laptop using free tools. Follow along with me as I make a student handout for chapter 44 of the Gesta Romanorum (entitled de invidia).

1. Select your text. I like thelatinlibrary.com (graphic 1). The text is fairly clean, and—this is important to me—there’s little formatting on it. Perseus has high-quality text, but it has a lot of formatting you’ve got to get out.

1. latin library home

 

2. gr copy and paste

2. Open up your word processor. I use Open Office. Whatever you use, it needs good table support. Set up a new text document with a 2×3 table (graphic 3). The next few steps are tricky, but your students will want you to go through the trouble to get line numbering.

3. 2x3 table

3. Highlight the two cells in the top row (graphic 4). Merge them (graphic 5). Now for the tricky part: highlight that cell and split it vertically into three new cells (graphic 6). Highlight the two left cells—but not the one on the right. Merge those two cells.

4. top two cells highlighted5. highlight top cell6. split cells

4. Push the cell divider to the right so that the upper left cell is really wide in comparison to the one on the upper right (graphic 7). The upper left cell will house the text. The upper right will house the line numbers. Steps 2, 3, and 4 will seem pretty odd right now. Just follow the screen caps.

7. finished table

5. Copy (graphic 2) and paste the text selected in step 1 into the upper left cell (graphic 8). At this point you’re pretty much ready to get to the real work. I like to double space the text and apply some gentle formatting—but that’s wholly optional. Here are my preferences for formatting:
a. Double space the text. Students need room to write and mark up.
b. Get the title out of the text cell. It offends me aesthetically—no other good reason.
c. Rag right alignment. I come from print. Old biases die hard.
d. Indent the paragraphs. Same reason.

8. text pasted in

6. If your text is longer than half of a page, you need to divide it into sections. Repeat steps 2–5 to make new pages. Better yet: cut and paste the table you’ve already got. It’s best to divide long texts at this step rather than after you’ve developed the vocabulary list. You can tweak the text on each page a bit after this step, but it is a tedious and error-prone process to do more than that.

7. Copy the text from your text cell into a new blank document (graphic 9). Perform a find and replace to turn spaces into returns (graphic 10). At this point, you should have a list of words, one word per line, in the order of the text (graphic 11).

9. text prepared for find and replace10. find and replace dialog box11. spaces replaced

8. Scrolling up, look for things you may not want on separate lines—usually names. I find that things to fix pop out much faster if I’m going against the flow of the text. Example: M., Tullius, and Cicero should all be on one line.

9. At the top, select all text and sort alphabetically (graphic 12). Once again finding myself at the bottom of the list, I scroll back up looking for and deleting duplicates. But that is optional. Sometimes a quote mark or a parenthesis will cause its attached word to float to the top (graphic 13). Delete the offending punctuation and re-sort. This isn’t optional.

12. sort dialog box13. quotes ruin alpha order
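For long texts, the split-sort-dedupe routine of steps 7-9 can also be done in a few lines of Python rather than in the word processor. This is a minimal sketch, with a one-line sample standing in for the pasted text; the exact punctuation set to strip is an assumption:

```python
import string

# A short sample in place of the full pasted text.
text = 'Erat quidam "artifex," et bella gerebat. (Erat enim miles.)'

# Step 7: one word per line -- split on whitespace.
words = text.split()

# Step 9: strip surrounding punctuation so quotes and parentheses don't
# ruin the alphabetical order, then lowercase, sort, and drop duplicates.
punct = string.punctuation + "\u201c\u201d\u2018\u2019"  # ASCII + curly quotes
unique = sorted({w.strip(punct).lower() for w in words if w.strip(punct)})

print("\n".join(unique))
```

The set comprehension handles deduplication automatically, which is the one part of the manual process that is genuinely tedious by hand.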

10. Open up the DCC Top 1,000 and scroll down both the Top 1,000 and the newly generated word list. When a word on the Top 1,000 appears, delete it from the word list (graphic 14). For the most part, you can probably guess the contents of the Top 1,000—but be careful until you know the list better. There are surprises.

14. top 1000 and wordlist
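The winnowing against the core list can likewise be scripted once the Top 1,000 is available as plain text. This Python sketch uses a tiny stand-in core set and word list (both assumptions, not the real DCC data):

```python
# Hypothetical stand-ins: in practice core would hold the full DCC
# Top 1,000 lemmas and word_list the output of steps 7-9.
core = {"ars", "bellum", "et", "sum", "miles"}
word_list = ["artifex", "bella", "enim", "erat", "et", "gerebat", "miles", "quidam"]

# Keep only words not on the core list. Inflected forms such as
# "bella" and "erat" survive this filter and still need the manual
# lemma check described below.
low_frequency = [w for w in word_list if w not in core]
print(low_frequency)
```

Note that an exact-match filter only catches dictionary forms; the inflected "potential hits" discussed next are exactly what it misses.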

Sometimes there will be words that are obviously derived from one of the Top 1,000 (graphic 15). I don’t delete them, since I would rather offer too much help than too little—but you might want to delete these derived terms to encourage student vocabulary building strategies. In the text I’m preparing, artifex is clearly derived from ars. Even though a student should be able to guess the meaning of artifex based on ars, I don’t chance it. I’d rather the gloss be at hand so as not to interrupt the flow of reading more than necessary.

15. derived terms

Sometimes there will be words that are potential hits to the Top 1,000. In our text, we have bello and bella on the list. Since alphabetization removed the words from their contexts, you need to go check. Are bello and bella forms of bellum, -i (war) or bellus, -a, -um (beautiful)? One is on the list. The other is not. Fortunately this doesn’t happen too often. In this case, both bello and bella are forms of bellum, -i (war), which is on the Top 1,000 list. Out they go. On occasion, forms like obtulit (from offero) and sustulit (from tollo) show up, alphabetized far from their dictionary headwords—remember to get them out too (graphic 16).

16. offero - obtulit
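The bello/bella check is really a lemma lookup, and it can be made explicit in code. In this Python sketch the lemma table and core set are illustrative assumptions made up for the example, not the real Top 1,000:

```python
# Toy lemma table, hand-made for illustration; a real check would
# consult a dictionary, Wiktionary, or Perseus.
lemma_of = {
    "bello": "bellum", "bella": "bellum",      # both forms of bellum, -i (war)
    "obtulit": "offero", "sustulit": "tollo",  # perfects far from their headwords
}
core_lemmas = {"bellum", "offero", "tollo"}    # assumed to be on the Top 1,000

# A form comes off the word list only if its lemma is on the core list.
to_delete = [f for f in ["bello", "bella", "obtulit", "sustulit"]
             if lemma_of[f] in core_lemmas]
print(to_delete)
```

The point of the sketch is the decision rule: membership is tested on the lemma, never on the surface form, which is why alphabetized perfects like obtulit have to be resolved before they can be thrown out.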

11. Once I’ve thrown out the Top 1,000 from the raw vocabulary list, I format the raw vocabulary list a bit (graphic 17): I set the type to single spacing; turn the point size down a little; and get rid of excess space before or after paragraphs. Copy the list of lower frequency words and then paste it into the left cell below the text (graphic 18).

17. low frequency vocab list18. low freq in handout

12. Turn your raw word list into an actual glossary. Principal parts, noun stems, definitions—the whole lot. I add macrons to this list of vocabulary (on a Mac the Hawaiian keyboard is a godsend—option + vowel = vowel with macron: graphic 19). I like to use my paper dictionary, but Wiktionary (graphic 21) and Perseus will both tell you where the macrons go. If I’m in a pinch for time, I rely on my memory. Though I prefer macrons, they are optional. If the word list spills off the bottom of the page a little at this point, don’t worry.

19. switching to hawaiian21. wiktionary sample

13. Cut and paste the vocabulary list into two columns of equal length (graphic 20).

20. vocabulary added

14. Add in the line numbers in the skinny cell on the top right (graphic 22). This will take some patience and some fiddling with the paragraph spacing to make it turn out right. Using a soft return (shift + enter) may take some of the pain out of the procedure. For the example I put in every line number, which I usually don’t.

22. line numbers

15. Write the notes in the two remaining cells on the bottom. In an ideal world, it wouldn’t matter much what level the notes are at. Students will use them when they need them and ignore them when they don’t. Of course, this isn’t an ideal world. On average, I’d rather err on the side of too much help, since my aim is reading fluency. I tend to gloss over style and rhetoric in the notes and go for morphology when writing for beginning students, as is the case in our example. I probably also err on the side of an overly conversational style in the notes as well, but that’s what works for me. In the example, I point out the present participles quite frequently, since we haven’t come to them yet.

16. Find the cell boundary properties and turn all cell boundaries white or 0 pt. It makes the handout look more professional.

17. Export as PDF and upload to scribd.com or Google Drive—accounts are free, and the more material openly shared the better. What’s even better is that Scribd allows revisions to be posted on uploaded documents. I exported the Open Office file to Word format and have uploaded it here. Feel free to tinker with the file to see what I’ve done. The final PDF is on scribd.com.

18. Read with students and enjoy!

–Peter Sipes (sipes23@gmail.com)

Greek Core Vocabulary: A Sight Reading Approach

Crystian Cruz, via Flickr (http://bit.ly/13HaBAU)

(This is a slightly revised version of a talk given by Chris Francese on January 4, 2013 at the American Philological Association Meeting, at the panel “New Adventures in Greek Pedagogy,” organized by Willie Major.)

Not long ago, in the process of making some websites of reading texts with commentary on classical authors, I became interested in high-frequency vocabulary for ancient Greek. The idea was straightforward: define a core list of high frequency words that would not be glossed in running vocabulary lists to accompany texts designed for fluid reading. I was fortunate to be given a set of frequency data from the TLG by Maria Pantelia, with the sample restricted to authors up to AD 200, in order to avoid distortions introduced by church fathers and Byzantine texts. So I thought I had it made. But I soon found myself in quicksand, slowly drowning in a morass infested with hidden, nasty predators, until Willie Major threw me a rope, first via his published work on this subject, and then with his collaboration in creating what is now a finished core list of around 500 words, available free online. I want to thank Willie for his generosity, his collegiality, his dedication, and for including me on this panel. I also received very generous help, data infusions, and advice on our core list from Helma Dik at the University of Chicago, for which I am most grateful.

What our websites offer that is new, I believe, is the combination of a statistically-based yet lovingly hand-crafted core vocabulary, along with handmade glosses for non-core words. The idea is to facilitate smooth reading for non-specialist readers at any level, in the tradition of the Bryn Mawr Commentaries, but with media—sound recordings, images, etc. Bells and whistles aside, however, how do you get students to actually absorb and master the core list? Rachel Clark has published an interesting paper on this problem at the introductory level of ancient Greek that I commend to you. There is also of course a large literature on vocabulary acquisition in modern languages, which I am going to ignore completely. This paper is more in the way of an interim report from the field about what my colleague Meghan Reedy and I have been doing at Dickinson to integrate core vocabulary with a regime based on sight reading and comprehension, as opposed to the traditional prepared translation method. Consider this a provisional attempt to think through a pedagogy to go with the websites. I should also mention that we make no great claim to originality, and have taken inspiration from some late nineteenth century teachers who used sight reading, in particular Edwin Post.

In the course of some mandated assessment activities it became clear that the traditional prepared translation method was not yielding students who could pick their way through a new chunk of Greek with sufficient vocabulary help, which is our ultimate goal. With this learning goal in mind we tried to back-design a system that would yield the desired result, and have developed a new routine based around the twin ideas of core vocabulary and sight reading. Students are held responsible for the core list, and they read and are tested at sight, with the stipulation that non-core words will be glossed. I have no statistics to prove that our current regime is superior to the old way, but I do know it has changed substantially the dynamics of our intermediate classes, I believe for the better.

Students’ class preparation consists of a mix of vocabulary memorization for passages to be read at sight in class the next day, and comprehension/grammar worksheets on other passages (ones not normally dealt with in class). Class itself consists mainly of sight translation, and review and discussion of previously read passages, with grammar review as needed. Testing consists of sight passages with comprehension and grammar questions (like the worksheets), and vocabulary quizzes. Written assignments focus on textual analysis as well as literal and polished literary translation.

The concept (not always executed with 100% effectiveness, I hasten to add) is that for homework students focus on relatively straightforward tasks they can successfully complete (the vocabulary preparation and the worksheets). This preserves class time for the much more difficult and higher-order task of translation, where they need to be able to collaborate with each other, and where we’re there to help them—point out word groups and head off various types of frustration. It’s a version, in other words, of the flipped classroom approach, a model of instruction associated with math and science, where students watch recorded lectures for homework and complete their assignments, labs, and tests in class. More complex, higher-order tasks are completed in class; more routine, more passive ones, outside.

There are many possible variations of this idea, but the central selling point for me is that it changes the set of implicit bargains and imperatives that underlie ancient language instruction, at least as we were practicing it. Consider first vocabulary: in the old regime we said essentially: “know for the short-term every word in each text we read. I will ask you anything.” In the new regime we say, “know for the long-term the most important words. The rest will be glossed.” When it comes to reading, we used to say or imply, “understand for the test every nuance of the texts we covered in class. I will ask you any detail.” In the new system we say, “learn the skills to read any new text you come across. I will ask for the main points only, and give you clues.” What about morphology? The stated message was, “You should know all your declensions and conjugations.” The unspoken corollary was “But if you can translate the prepared passage without all that you will still pass.” With the new method, the daily lived reality is, “If you don’t know what endings mean you will be completely in the dark as to how these words are related.” When it comes to grammar and syntax, the old routine assumed they should know all the major constructions as abstract principles, but with the tacit understanding that this is not really likely to be possible at the intermediate level. The new method says, “practice recognizing and identifying the most common grammatical patterns that actually occur in the readings. Unusual things will be glossed.” More broadly, the underlying incentive of our usual testing routines was always, “Learn an English translation of assigned texts and you’ll be in pretty good shape.” This has now changed to: “know core vocabulary and common grammar cold and you’ll be in pretty good shape.”

Now, every system has its pros and cons. The cons here might be a) that students don’t spend quite as much time reading the dictionary as before, so their vocabulary knowledge is not as broad or deep as it should be; b) that the level of attention to specific texts is not as high as in the traditional method; and c) that not as much material can be covered when class work is done at sight. The first of these (not enough dictionary time) is a real problem in my view that makes this method not really suitable at the upper levels. At the intermediate level the kind of close reading that we classicists value so much can be accomplished through repeated exposure in class to texts initially encountered at sight, and through written assignments and analytical papers. The problem of coverage is alleviated somewhat by the fact that students encounter as much or more in the original language than before, thanks to the comprehension worksheets, which cover a whole separate set of material.

On the pro side, the students seem to like it. Certainly their relationship to grammar is transformed. They suddenly become rather curious about grammatical structures that will help them figure out what the heck is going on. With the comprehension worksheets the assumption is that the text makes some kind of sense, rather than what used to be the default assumption, that it’s Greek, so it’s not really supposed to make that much sense anyway. While the students are still mastering the core vocabulary, one can divide the vocabulary of a passage into core and non-core items, holding the students responsible only for core items. Students obviously like this kind of triage, since it helps them focus their effort in a way they acknowledge and accept as rational. The key advantage to a statistically based core list in my view is really a rhetorical one. It helps generate buy-in. The problem is that we don’t read enough to really master the core contextually in the third semester. Coordinating the core with what happens to occur in the passages we happen to read is the chief difficulty of this method. I would argue, however, that even if you can’t teach them the whole core contextually, the effort to do so crucially changes the student’s attitude to vocabulary acquisition, from “how can I possibly ever learn this vast quantity of ridiculous words?” to “Ok, some of these are more important than others, and I have a realistic numerical goal to achieve.” The core is a possible dream, something that cannot always be said of the learning goals implicit in the traditional prepared translation method at the intermediate level.

The question of how technology can make all this work better is an interesting one. Prof. Major recently published an important article in CO that addresses this issue. In my view we need a vocabulary app that focuses on the DCC core, and I want to try to develop that. We need a video Greek grammar along the lines of Khan Academy that will allow students to absorb complex grammatical concepts by repeated viewings at home, with many, many examples, annotated with chalk and talk by a competent instructor. And we need more texts that are equipped with handmade vocabulary lists that exclude core items, both to facilitate reading and to preserve the incentive to master the core. And this is where our project hopes to make a contribution. Thank you very much, and I look forward to the discussion period.

–Chris Francese

HANDOUT:

Greek Core Vocabulary Acquisition: A Sight Reading Approach

American Philological Association, Seattle, WA

Friday January 4, 2013

Panel: New Adventures in Greek Pedagogy

Christopher Francese, Professor of Classical Studies, Dickinson College francese@dickinson.edu

References

Dickinson College Commentaries: http://dcc.dickinson.edu/

Latin and Greek texts for reading, with explanatory notes, vocabulary, and graphic, video, and audio elements. Greek texts forthcoming: Callimachus, Aetia (ed. Susan Stephens); Lucian, True History (ed. Stephen Nimis and Evan Hayes).

DCC Core Ancient Greek Vocabulary http://dcc.dickinson.edu/vocab/greek-alphabetical

About 500 of the most common words in ancient Greek, the lemmas that generate approximately 65% of the word forms in a typical Greek text. Created in the summer of 2012 by Christopher Francese and collaborators, using two sets of data:  1. A subset of the comprehensive Thesaurus Linguae Graecae database, including all texts in the database up to AD 200, a total of 20.003 million words (of which the period AD 100–200 accounts for 10.235 million). 2. The corpus of Greek authors at Perseus Chicago, which at the time our list was developed was approximately 5 million words.
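The “approximately 65% of the word forms” figure is a cumulative-coverage calculation over the frequency data. A minimal Python sketch, with made-up lemma counts (transliterated here) standing in for the real TLG numbers, shows how such a figure is computed:

```python
# Made-up lemma counts standing in for the TLG frequency data.
counts = {"ho": 900, "kai": 600, "de": 400, "einai": 300,
          "autos": 200, "legein": 100, "anthropos": 50, "polis": 25}

total = sum(counts.values())
covered = 0
core = []
# Take lemmas in descending frequency until they first account for
# at least 65% of all word tokens in the corpus.
for lemma, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    if covered / total >= 0.65:
        break
    core.append(lemma)
    covered += n

print(len(core), "lemmas cover", round(100 * covered / total), "% of tokens")
```

Because frequency distributions of words are heavily skewed, a small number of top lemmas covers a large share of the tokens, which is exactly why a 500-word core can reach the coverage quoted above.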

Rachel Clark, “The 80% Rule: Greek Vocabulary in Popular Textbooks,” Teaching Classical Languages 1.1 (2009), 67–108.

Wilfred E. Major, “Teaching and Testing Classical Greek in a Digital World,” Classical Outlook 89.2 (2012), 36–39.

Wilfred E. Major, “It’s Not the Size, It’s the Frequency: The Value of Using a Core Vocabulary in Beginning and Intermediate Greek”  CPL Online 4.1 (2008), 1–24. http://www.camws.org/cpl/cplonline/files/Majorcplonline.pdf

 

 

Read Iliad 1.266-291, then answer the following in English, giving the exact Greek that is the basis of your answer:

 

  1. (lines 266-273)  Who did Nestor fight against, and why did he go?

 

who                                                                                                                                  

why                                                                                                                                  

 

  2. (lines 274-279) Why should Achilles defer to Agamemnon, in Nestor’s view?

 

                                                                                                                                        

                                                                                                                                        

  3. (lines 280-284) What is the meaning of, and the difference between, κάρτερος and φέρτερος as Nestor explains them?

 

                                                                                                                                        

                                                                                                                                        

  4. (lines 285-291) What four things does Achilles want, according to Agamemnon?

                                                                                                                                        

                                                                                                                                        

Find five prepositional phrases, write them out and translate, noting the line number, and the case that each preposition takes.

1.                                                                                                                    

2.                                                                                                                    

3.                                                                                                                    

4.                                                                                                                    

5.                                                                                                                    

 

Find five verbs in the imperative mood, write them out and translate, noting the line number and tense of each.

1.                                                                                                                    

2.                                                                                                                    

3.                                                                                                                    

4.                                                                                                                    

5.                                                                                                                    

The Scholarly Edition Goes Social

Latin lolcat by Laura Gibbs

Ok, so you’re the scholarly textual edition. You’re a venerable and useful genre. You’ve got some years on you, but you still look good. You have a lot of friends, even some fans, and people respect you. But you were born too early to understand this whole social media craze. You want to be connected, and it’s good to keep in touch with your family. But why do people seem to feel the need to be constantly sharing all this quotidian detail? Many people you really admire won’t have anything to do with social media. And yet, it feels lame to be left behind. After all, you’ve still got it, you’re still relevant, right? Question is, scholarly edition, should you break down and join Facebook?

It is in fact your destiny to embrace social media, according to a new article by a team of researchers published in the December issue of Literary & Linguistic Computing: “Toward Modeling the Social Edition: An Approach to Understanding the Electronic Scholarly Edition in the Context of New and Emerging Social Media.” The authors, Ray Siemens, Meagan Timney, Cara Leitch, Corinna Koolen, and Alex Garnett, are associated with the Electronic Textual Cultures Lab at the University of Victoria, British Columbia. The article itself is behind a pay wall, but a pre-print version is available here.

They propose that digital textual editions have gone through three phases so far, and are about to enter a fourth. The early stages of digitization (in the 1980s) made possible the “dynamic text,” in which readers could search, retrieve, and analyze in a way impossible in print media, treating the text with the flexibility of a database. This sped up all kinds of academic tasks. Shortly thereafter (in the 1990s) arose the “hypertextual edition,” which uses linking to give access to the various types of apparatus (textual, critical) that sometimes accompany print scholarly editions, and to even more in the way of images, parallel texts, and other linked resources. The third phase saw the development of a combination of the first two, the “dynamic edition,” in which the user can both interact with the text itself, change it, slice and dice it, and have access to various scholarly annotation and apparatus via hypertext. One promise of the dynamic edition, which they admit is not fully realized in practice yet, is that algorithmic processes can be used to start to automate some of the scholarly activities of textual scholarship. If we can “automate the process of formalizing the associations we take for granted in current editions,” they write, “such an edition has the ability, in effect, to annotate itself.”

The fourth phase, into which we are currently hurtling, is characterized by the application of social media tools and crowd sourcing to scholarly editorial practices. Siemens and collaborators point out that social tools enlarge the knowledge-building community beyond the traditional realm of academic scholars, and tap into the category of citizen scholars, not affiliated with academic institutions, in addition to the usual pools of academic labor. Siemens et al. identify five new modes of engagement with digital objects using social tools:

  1. Collaborative annotation (e.g. Diigo, digress.it).
  2. User-derived content (the Library of Congress Flickr stream, NINES).
  3. Folksonomy tagging, in which users add metadata in the form of keyword tags for shared content (English Broadside Ballad Archive, Flickr, Twitter, Del.icio.us).
  4. Community bibliography, in which users collect and catalogue references by means of academic citations (Zotero, reddit, StumbleUpon).
  5. Text analysis, which involves “algorithmically facilitated search, retrieval, and critical processes” (e.g. the open-source electronic role-playing game for educational use called Ivanhoe, based on the Walter Scott novel).

But beyond the various tools involved, they claim to identify a fundamental shift in the sociology of knowledge that drives the fourth phase. They see an inevitable move from the editor as a single, quasi-omniscient authority to the editor as a kind of impresario who can “facilitate interaction among the communities of practice they serve.” This community building is the essential thing that current self-contained digital editions do not do. The new social edition editor does not set himself or herself up as the arbiter of text and annotation, no matter how dynamic. These new editors coordinate contributions from many sources and oversee “living” editions.

At this point the rhetoric of the article begins to evoke the Reformation, with an added touch of Marxist revolutionary idealism. The old-style print-based scholarly editor is a “mediator” between the text and reader, “determines and shapes what is important to the reader,” and “exerts immense control over what the reader can engage.” The new social edition undermines these self-appointed authority figures that come between text and reader, thus “challenging current notions of personal and institutional authority, and the systems in which they are perpetuated.”

But in my view it is far too simple to say that the expert editor must now simply yield to, and facilitate, the crowd. For one thing, the use of the word “edition” in this discussion is misleading, and blurs distinctions between very different types of intellectual labor, some amenable to crowdsourcing, some not. On the one hand there is textual editing in the strict sense: the examination, transcription, and collation of archival documents to produce a readable and reliable text with reports of variant readings. The people who do this kind of work are hardly constricting interpretive possibilities. They are making material available to the community, often at considerable risk to their eyesight and domestic happiness. This is not the same thing as annotation, the equipping of texts with relevant information about their historical and literary contexts (which can be much more ideologically loaded) and linguistic explanations (which need to take into account very specific audiences). A third distinct area is the application of digital tools in computational analysis of textual data, and the crafting of interpretive perspectives on that basis.

The article lumps all this together in the notion of “edition,” but in each area a different dynamic is at work in the relationship between the expert scholar and a reading, and potentially contributing, community. More importantly, this relationship varies markedly with different types of texts, something the article ignores completely. Take annotation, for example. Classic texts with highly developed academic cultures surrounding them, like Thomas More’s Utopia, do not readily elicit crowd annotation. We know this because it is being tried at the site Open Utopia. The user-generated comments there are neither numerous nor impressive, and much of the material represents the work of its editor, Stephen Duncombe, an Associate Professor at NYU, who published a book based on the site. My own experience trying to develop a wiki community around Caesar’s Gallic War yielded similarly unimpressive results.

In sharp contrast, in the case of a set of contemporary texts with little or no existing scholarly commentary, the novels of Thomas Pynchon, elaborate fan wikis have developed which comprehensively annotate just about every page of his extremely long novels. Like the burgeoning and sometimes hilarious electronic literary genre of Amazon.com product reviews, crowdsourced commentary and annotation grows up to fill a vacuum of trusted information, not to replace trusted expert-made resources.

The same can be said of other types of editorial labor. Nobody wants to reinvent the wheel. The fascinating thing about the social media and self-publishing revolution is not that citizen scholars can now seize the tools of production and dethrone the academics (as desirable as that might in some cases be), but that independent scholars can now contribute in their own ways, and serve new audiences with new texts and new genres of edition. In my field there are many examples, including Evan Milner’s massive archive of textual, video, and audio Latin materials, and Laura Gibbs’ excellent work with fables and proverbs, including, delightfully, her new genre of the Latin lolcat, a combination of proverb text and feline image. There are innovative pedagogical texts being edited and published outside the normal channels by Justin Schwamm and Peter Sipes, among others. Then there are the apps being created by non-academic computer programmers such as Nick Kallen, Paul Hudson, and Harry Schmidt, apps that deliver Latin and Greek texts with the tools to read them. These are resources that people want, but that academics will never be rewarded for making, and that publishers generally won’t bother with. Social media means we all benefit from this new energy.

The “social edition” is thus not a box created and overseen by an academic impresario, and filled with content by a crowd of lesser contributors. It is a totally unpredictable new thing, driven by the creativity and desire for credited publication on the part of highly trained, but non-tenure track, scholars. Rather than distributing traditional academic labor, social media enlarges the pool of publishing scholars. Rather than prompting the re-making of old scholarly editions, it identifies and fills needs that the academic establishment can’t even see, much less satisfy.

So my advice, scholarly editors, is not to worry, but to do what feels right. Find the mix of social media and good old-fashioned expert editorial authority that works in each case. Stop worrying about the trends, and think hard about the users and what they need.

–Chris Francese

Latin Core Spreadsheet

Peter Sipes, benevolus amicus noster (our kindly friend) apud Google+, has kindly made available a Google spreadsheet of the DCC Latin Core Vocabulary. Check it out, and download it. He uses it for those occasions when he is working without an internet connection. I wonder what else he is doing with the list? Perhaps a guest blog post is in order. Peter?

The core vocabularies have been on my back burner while I have been finishing up a book project of the dead tree variety while on leave from Dickinson for the fall ’12 semester. But I hope to return very soon to consideration of the semantic groupings in particular. My Dickinson colleague Meghan Reedy pointed out some flaws in the groupings on the Latin side, and we need to get that sorted before she and I move forward on our grand project: a poster that will visually represent the core according to its associated LASLA data, expressing visually each lemma’s frequency, semantic group, and relative commonness in poetry and prose.

In the meantime, if you will be at the meetings of the (soon-to-be-renamed) American Philological Association in Seattle, please stop by the Greek pedagogy session and hear my fifteen minute talk about a way to use the DCC Greek core vocabulary in an intermediate sequence based around sight reading and comprehension, as opposed to the traditional prepared translation method.

Here is the whole line-up:

Friday January 4, 8:30 AM – 11:00 AM Washington State Convention Center Room 604

NEW ADVENTURES IN GREEK PEDAGOGY
Wilfred E. Major, Louisiana State University, Organizer
The papers on this panel each offer guidance and new directions for teaching beginning and intermediate Greek. First is a report on the 2012 College Greek Exam. Following are a new way to teach Greek accents, and a new way to sequence declensions, tenses and conjugations in beginning classes. Then we get a look at a reader in development that makes authentic ancient texts accessible to beginning students, and finally a way to make sight reading the standard method of reading in intermediate Greek classes.

Albert Watanabe, Louisiana State University
The 2012 College Greek Exam (15 mins.)

Wilfred E. Major, Louisiana State University
A Better Way to Teach Greek Accents (15 mins.)

Byron Stayskal, Western Washington University
Sequence and Structure in Beginning Greek (15 mins.)

Georgia L. Irby, The College of William and Mary
A Little Greek Reader: Teaching Grammar and Syntax with Authentic Greek (15 mins.)

Christopher Francese, Dickinson College
Greek Core Vocabulary Acquisition: A Sight Reading Approach (15 mins.)

3 types of publication that classical studies needs

Glancing over the latest issue of a certain classics journal that came to my door, and seeing nothing terribly interesting or new, I got to thinking . . . The web has made it possible to publish scholarly work in new ways, and that’s certainly what DCC is trying to do. Classical commentary is one of the oldest genres out there. What are some other types of scholarship that classicists could usefully embrace in the digital realm? How can we leverage digital media to make progress? Herewith, three suggestions. I’d love to hear more!

1. Critical reflections on pedagogy and descriptions of innovative teaching technique using digital tools. Pedagogy discussions in our field happen predominantly in informal venues like listservs and at conferences. The online journal Teaching Classical Languages (http://tcl.camws.org/) is a leader in making these important and interesting discussions more widely available and subjecting them to some peer review. What if we could do that not just with a traditional article, but with video, audio, and ancillary materials provided?

2. Distant reading, à la Moretti’s Graphs, Maps, Trees (which “argues heretically that literature scholars should stop reading books and start counting, graphing, and mapping them instead”). What can statistical analysis of classical texts, and the graphical display of that data, show us that is new and interesting? There is not much of this yet in classics as far as I know, but digital tools are making it more possible. Publishing it in digital form would allow for full publication of data and many more illustrations and visualizations than traditional print media can accommodate. Related to this but broader is . . .

3. Visualization projects (infographics etc.) made by scholars and conveying scholarly perspectives on the ancient world. These could be literary, or come from archaeologists and historians. Here again, as far as I am aware there is not much happening at the moment (but I’m not an archaeologist). Ramsay MacMullen did some fascinating work along these lines with inscriptional evidence. What can be done with coin hoards, word counts, anything countable that relates to the ancient world?
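The counting imagined in the last two items can start very small: before any statistics or graphing, a plain tally of word forms in a text is the raw material. A minimal sketch in Python (the sample sentence stands in for a real corpus; a serious version would need to handle lemmatization and non-ASCII characters such as macrons or Greek):

```python
import re
from collections import Counter

def word_frequencies(text):
    """Tally raw word forms in a chunk of text (case-folded, punctuation stripped)."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words)

# Placeholder text: the opening of Caesar's Gallic War
sample = "Gallia est omnis divisa in partes tres, quarum unam incolunt Belgae"
freqs = word_frequencies(sample)
print(freqs.most_common(3))
```

A `Counter` like this is already enough to feed a bar chart or a comparative table across authors; the interesting scholarly work begins in deciding what to count.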

–Chris Francese

How principal are Greek principal parts?

I just finished adding the principal parts to the DCC ancient Greek core vocabulary list, something I meant to do last summer, but which got lost in the shuffle. So that’s done, and up. Phew. Anybody who has tried to learn ancient Greek knows what a big hurdle the principal parts are: absolutely essential, but a beastly task of brute memorization. I am here to say that, as one who focuses more on Latin than on Greek, I have to re-learn some of them on a regular basis if I want to read (or teach) Greek well. This is not the fun, life-affirming, profound, aesthetically enriching part of Greek. This is the boot camp, the weight-lifting one must do to get there.

The idea behind principal parts is to put in your hands, and hopefully in your brain, all the different stems of a verb, so that (theoretically) any inflected form can be derived from, or traced back to, one of them. But of course it’s not quite that simple.

On the one hand, some verb forms and related things are extremely common, but not really directly derivable from the principal parts as they are traditionally presented. εἰκός, for example, is a very common participial form meaning “likely, plausible” that is not immediately apparent from the principal parts of ἔοικα. It’s in the dictionary, of course, but somewhat buried in the entry on ἔοικα.

On the other hand, many Greek verbs have principal parts whose stems are only very rarely employed. πέφασμαι, for example, is a perfect tense principal part of a very common verb, φαίνω. But forms derived from it are rare. πέφαγκα, another perfect form listed by Smyth among the “principal” parts is very rare indeed, with only seven attestations in the TLG, almost all of those from late antique grammarians and lexica. I guarantee you will never encounter it outside a grammar book.

Part of the problem here is that our apparatus for learning ancient Greek is largely derived from the big, comprehensive, scientific grammars of the 19th century, and thus has a tendency toward completeness rather than toward conveying what is most essential. This is a general problem, and it affects more than just the issue of principal parts.

Enter into this picture the database, specifically the TLG and its lemmatizer tool. This is the tool that attempts to determine from what dictionary head word (or lemma), a given form derives. I have complained elsewhere about the impotence of existing lemmatizers when it comes to determining the meaning of homographs–forms that are spelled the same but derive from different lemmas, or forms derived from a single lemma, but which could have more than one grammatical function. This is a serious and as yet unsolved problem when it comes to asking a computer to analyze a given chunk of Greek or Latin. And the homograph problem also substantially compromises frequency data based on machine-analyzed large corpora of Greek and Latin.

But one thing at which the lemmatizers are extraordinarily good (theoretically flawless) is telling how many occurrences of a certain word form there are in a given corpus. And by examining that data you can get, in most cases, a very accurate picture of how common the forms derived from a particular stem or principal part of a Greek verb are. In other words, the TLG Lemma Search (which is what I have been working with in making the principal parts lists for our site) helps us see more clearly than has ever been possible which principal parts of each verb are the most important, and which very common forms lie slightly outside the traditional lists of principal parts. It has the potential to make principal parts lists far more informative and helpful to the language learner than the information found in Smyth, LSJ, or any of the current textbooks.

I can think of a couple ways in which TLG lemmatizer data could be used to enhance the presentation of Greek principal parts. One could, for example, have a second list of, say, the five most statistically common forms of a given verb. In the case of πάρειμι, for example, that would be the following (with the total raw occurrences in TLG as of today):

παρόντος (8587), παρόν (5406), παρόντα (4920), παρόντων (4442), παρόντι (3451)

In fact the top 10 or so are all participial. παρών παροῦσα παρόν: that’s what I call a principal part!

Another way to do it would be to print in bold the principal part from which the most forms derive, or even use a couple different font sizes to reflect how commonly used each principal part is. For σῴζω, save, the figures are (roughly) as follows σῴζω (8600) σώσω (1300), ἔσωσα (5500), σέσωκα (400), σέσωσμαι (700), ἐσώθην (8800). Interesting to see the aorist passive stem beat out the present stem. The top vote-getters in terms of forms are σωθῆναι, ἔσωθεν, σώζεται/σῴζεται, σῶσαι, and σῶσον.
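Neither presentation would be hard to generate mechanically once the lemmatizer counts are harvested. A hypothetical sketch, using the rough σῴζω figures above as stand-in data (the function names and data structure are my own, not anything the TLG provides):

```python
from collections import Counter

# Rough raw TLG counts per principal part of σῴζω, as quoted above
stem_counts = {
    "σῴζω": 8600, "σώσω": 1300, "ἔσωσα": 5500,
    "σέσωκα": 400, "σέσωσμαι": 700, "ἐσώθην": 8800,
}

def rank_principal_parts(counts):
    """Order principal parts from most to least frequently attested,
    e.g. to decide which to print in bold or in a larger font."""
    return sorted(counts, key=counts.get, reverse=True)

def top_forms(form_counts, n=5):
    """Pick the n most common individual forms, for a supplementary list
    like the παρόντος/παρόν/παρόντα list given for πάρειμι above."""
    return [form for form, _ in Counter(form_counts).most_common(n)]

print(rank_principal_parts(stem_counts))  # ἐσώθην first, then σῴζω, ...
```

The same ranking logic would drive either presentation: the ordered list decides font weight or size, while `top_forms` run over per-form (rather than per-stem) counts yields the supplementary five-form list.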

People who are better at Greek and spend more time with large corpora and their analysis than I do have probably thought of all this long ago, and there may be some principal parts lists that incorporate some of this data. If so, I would love to hear about it.

Before closing I should give a huge thank you to Prof. Stephen Nimis from Miami University of Ohio and his collaborator Evan Hayes, whose principal parts list in their edition of Lucian’s A True Story (soon to be re-published on our site with extra features) was of great assistance as I was making our list. And I should mention here also the crucial help I have had all along with our Greek list from the great Wilfred Major, of Louisiana State University.


NITLE seminar to feature DCC

Members of the team who created the Dickinson College Commentaries will be featured in a seminar hosted by the National Institute for Technology in Liberal Education (NITLE). The event, which will take place on Thursday, December 6, 3:00-4:00 pm EST, will be hosted online via NITLE’s videoconferencing platform, and is open to NITLE consortium members.

“Collaborative Digital Scholarship Projects: The Liberal Art of Drupal” will address the creation of collaborative digital projects in a liberal arts context, using the example of the DCC site, which was built with the widely used content management system Drupal. The speakers will be Meredith Wilson (’13), Dickinson web developer Ryan Burke, and Prof. Christopher Francese.

For more details or to register, see: http://www.nitle.org/live/events/154-collaborative-digital-scholarship-projects-the

Spanish Latin, a curse, and a lusty postman

More epigraphical adventures in Google Books . . .

From the library of Francis Kelsey, author of a fine school edition of the Gallic War (1918 edition) comes a thorough publication of a set of curse tablets that came into the possession of the Department of Classical Archaeology of The Johns Hopkins University in 1908 (after the publication of Audollent’s Defixionum Tabellae), apparently found near Rome.

William Sherwood Fox, The Johns Hopkins Tabellae Defixionum. Baltimore: The Johns Hopkins Press, 1912. http://bit.ly/T70r9o

Here is a taste:

“A quartan fever, a tertian fever, every day, may they wrestle with her, overpower her, vanquish her, conquer her, until they steal away her life. And so I hand over this victim to you, Proserpina, or Acherusia, if that is what I should call you. Please send the three-headed dog to steal Avonia’s heart . . .”

Henry Martin, Notes on the Syntax of the Latin Inscriptions Found in Spain. Baltimore: J.H. Furst, 1909. http://bit.ly/Xj12IR or here at the Internet Archive http://archive.org/details/cu31924029794470

This book will be a delight to all those who suspect that the grammatical rules of classical Latin were not really followed by ordinary people. They often were not, and Mr. Martin gives a detailed survey of syntactical and grammatical peculiarities to be found in inscriptions from Spain.

The use of the genitive in Spanish Latin, for example, “often appears to indicate ignorance on the part of the writer of the idiomatic Latin turn or to be his method of expressing an idea in the fewest possible words without reference to clearness” (p. 13). Think that’s snarky? Just wait till you get to the part about pronouns.

W.M. Lindsay, Handbook of Latin Inscriptions Illustrating the History of the Language. Boston: Allyn and Bacon, 1897. http://bit.ly/U4X1DX

Written by the titan of early Latin studies from the turn of the 20th c., the editor of Plautus and Festus, this book has all sorts of goodies, treated with an eye to archaic or vulgar Latin features.

“While I am Vitalis and still alive, I have made a tomb. And I read my verses (on my own tomb) as I pass by. I carried letters all around the region on foot, and with my dogs I hunted rabbits and also wolves. Later, I enjoyed drinking the contents of my wine cup. I did many things like a young man, because I am going to die. Any wise young man should build a tomb for himself while still alive.”

–Chris Francese

Inscriptions from Syria and Sinope

I’ve been translating inscriptions lately, and that has gotten me interested in finding older publications of inscriptions available on Google books. There has to be a ton of this kind of thing, but I don’t know that they have been collected anywhere. Here are a few items that caught my eye, with snippets to give an impression of the kind of material to be found in each.

William Kelly Prentice, Greek and Latin Inscriptions. Part III of the Publications of an American Archaeological Expedition to Syria, 1899-1900. New York: The Century Co., 1908. http://bit.ly/QKsE6S

“May Odedon the teacher live, may he live!” Prentice believes that this inscription came from a tomb, “perhaps written … by some pupil who wished his master well enough, after he was dead.”

D.M. Robinson, Greek and Latin Inscriptions from Sinope and Environs. American School of Classical Studies at Athens (American Journal of Archaeology, second series, Journal of the Archaeological Institute of America, v. IX (1905), no. 3). http://bit.ly/WarqOS

From an Armenian village: “Manius Fulvius Pacatus, age 60, Fulvius Praetorenus, his son, age 20, lie here. Licinia Caesellia lies here, age 50.” Evidently Greek-speaking Romans of some means, to judge by the elegant lettering.

James C. Egbert, Introduction to the Study of Latin Inscriptions. New York: American Book Co., 1896. http://bit.ly/XeQj2a

Lippitudo or conjunctivitis was a scourge of Roman times, and the eye doctors had many terms for its different varieties. It was often caused by smoke coming from braziers used indoors. The second of these documents seems to prescribe egg-white to be daubed on with a sponge (penecillus). The latter Vulgar Latin term is not otherwise attested in print in this particular sense until the Middle Ages. See Rabanus Maurus, De Universo (ca. AD 842) 8.5 (PL 111.239C): mollissimum genus earum [sc. spongiarum] penecilli vocantur eo quod aptae sint ad oculorum tumores, et ad extergendas lippitudines utiles.

–Chris Francese