Two-Tiered Faculty as the Product of Online Education Models

I suspect that one of the outcomes of the current development of tertiary online education models, at least in the models I’m most familiar with (which is a few), will be the entrenchment of a divide between permanent faculty and a continuing academic underclass of adjuncts.

I won’t go into the economic side of things so much in this post (I always simply feel bad, because ‘at least this isn’t America’); instead, I want to look at a different dimension.

The nature of online education delivery is that, to a large extent, you can create “content” that is delivered to students. Lectures/readings/materials can all be pre-packaged for students to digest. That material, if done well, can be produced once and left in place for several cycles of the course. Do it well enough, and the length of its ‘digital lifetime’ gets extended. Eventually, though, if you care about it being up to date in the field at all, it will need revising.

Who gets to create this content? Primarily established academic staff who have a good deal of experience in teaching the course and can produce a quality course.

But any such course still needs human interaction, otherwise you’re not selling a course, you’re selling content to be delivered. Which is fine, but it’s a different thing. Students don’t fare very well when you just give them ‘content’. They need learning activities; they need to engage with the material, discuss, do assignments, get feedback, and of course ‘get graded’ (do they?).

Who does that? Rarely the person who created the content. They are off doing other things, research included. Online courses are generally administered by casual academic staff, who do all those things: engage online with students, facilitate discussions, mark papers.

In many ways, this is simply the replication of a model of two-tiered academia that already exists in plenty of brick-and-mortar institutions, especially those with large classes. Tutors/TAs do the grunt work, faculty do the lecturing. So it’s not new, but online exacerbates that division.

It makes it worse because the only people creating content are older, established academics, while those employed casually as tutors (a) do not gain teaching experience per se, because they are not content creators, and (b) are economically unable to pursue research effectively under the conditions of their employment, because (i) they are not paid to research, and (ii) the gap between full-time hours as a casual academic and full-time hours as a full-time employee means there is no real ‘casual loading’ benefit to casual work.

It seems to me that the direction this heads in is, on the one hand, content driven more and more by older, established, and high-profile scholars, who increasingly have no need to interact with actual students; on the other, course delivery by lower-paid, junior academics with insecure employment status who, by the conditions of their employment, may never be able to leave that class. That’s a two-tiered class system, and its long-term effects on higher education will be deleterious.

Digital Nyssa Project: From OCR to Plain Text

So the first step in my project is getting from a print text to a digital text.

There are a few options for doing OCR. Bruce Robertson at Mt. Allison University is involved in LACE: Greek OCR, a large-scale project to produce high-quality OCR of ancient Greek texts. But apart from submitting a request, this is not really scalable to personal use.

Antigraphaeus allows you to do Greek OCR in your browser. It’s the child of Ryan Baumann, and basically instantiates online what I describe below.

Ancient Greek OCR provides a number of options (platform dependent) for using the Tesseract OCR engine trained for Ancient Greek. It’s the work of Nick White.

I followed the instructions to install and set up gImageReader and found it pretty straightforward.

For a pdf input, I grabbed Patrologia Graeca 46 from Archive.org. I then created a pdf of only the pages I needed for DDAE (6), to save loading time. Here’s an image with the first page open, about to click “Recognize”, which will run the OCR process on the selection and output it to the sidebar on the right.
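Incidentally, if you want to script that page-extraction step rather than doing it in a PDF tool, something like the following would work. This is just a sketch, assuming the pypdf library; the filename and page range are placeholders, not the actual PG 46 values.

```python
from pypdf import PdfReader, PdfWriter

# Extract only the pages needed for DDAE into a smaller pdf,
# so the OCR tool has less to load.
# Filename and page numbers are placeholders.
reader = PdfReader("pg46.pdf")
writer = PdfWriter()

for page_number in range(570, 576):  # 0-indexed page range, six pages
    writer.add_page(reader.pages[page_number])

with open("ddae_only.pdf", "wb") as f:
    writer.write(f)
```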

It takes around 3 minutes to run a full page (well, half-page) of Migne. I cut and pasted the results into a text editor so that I could save them as a UTF-8 encoded .txt, rather than the program’s default, which appears to generate an ANSI file (which is not useful).
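The same Tesseract engine can also be driven from a script rather than the GUI. Here’s a minimal sketch, assuming the pytesseract wrapper and Pillow are installed, the Ancient Greek (‘grc’) traineddata is available to Tesseract, and the page has already been exported as an image; the filenames are made up.

```python
from PIL import Image
import pytesseract

# Run Tesseract's Ancient Greek model over a page image and save the result
# as UTF-8 (sidestepping the ANSI output I was getting by default).
# Filenames are placeholders; the 'grc' traineddata must be installed.
text = pytesseract.image_to_string(Image.open("pg46_page.png"), lang="grc")

with open("pg46_page.txt", "w", encoding="utf-8") as f:
    f.write(text)
```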


Then I worked with pdf and text side by side to manually correct the OCR results. This is slightly laborious, and using some kind of editing process like the one Robertson has up on the Greek OCR Challenge page would probably speed it up.

Instead, I just worked through it by hand.

The general quality of the OCR was good, but it did need corrections. I’d love to speed up this process, because it takes me circa 10 minutes to do a page.

So, at 13 minutes a page and 6 pages, that’s almost 80 minutes of work to produce a corrected text. Then a bit of quick editing to remove line breaks and generate a single continuous text.
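That last bit of editing can also be scripted. A rough sketch, assuming the corrected pages are saved as UTF-8 .txt files; the filenames are placeholders and the de-hyphenation rule is deliberately naive.

```python
from pathlib import Path

# Stitch the corrected page files into one continuous text:
# re-join words hyphenated across line breaks, then collapse the
# remaining line breaks and stray whitespace into single spaces.
pages = [Path(f"ddae_page{n}.txt").read_text(encoding="utf-8") for n in range(1, 7)]
raw = "\n".join(pages)

continuous = raw.replace("-\n", "").replace("\n", " ")
continuous = " ".join(continuous.split())

Path("ddae_continuous.txt").write_text(continuous, encoding="utf-8")
```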

Know/Don’t Know: the myth of binary knowledge in language learning.

The other day I was in a conversation and couldn’t for the life of me retrieve the Gàidhlig word for “question”. All I could think of was freagairt, which is “answer”. I had to ask what it was. It’s ceist, of course. Duh. That’s a word I “know”, or “am meant to know.”

But the real question is never ‘do you know this “word/phrase/structure/chunk of language”?’ It’s always, ‘can you comprehend this chunk of language right at this instant, or produce this chunk in a way that effects communication?’

Which means the strongly binary model of language learning most of us inherit – which includes “Teacher taught word X, therefore student learnt word X” (wrong not just for languages, but for instruction in general), “You memorised word X, therefore you know word X in all circumstances”, or even “you once got X right on a multiple-choice question, therefore you can actively recall X for communicative production”, and so on – is just wrong.

‘Knowing’ is a lot fuzzier. It’s a huge range of contextualised, circumstantial bits and pieces that determine whether communication is going to take place in any particular instance, and how well a message is going to go from producer to receiver.

Which is why, at the end of the day, “vocab testing” is mere approximation. It’s testing: “can you, on particular occasion X, recall particular word Y (actively? passively?) in particular context/decontext Z, which may or may not bear much relation to any genuine language encounter?”

It’s also why we should basically ‘lighten up’ on students. “I taught you this” has no real place in a language teacher’s teaching vocabulary (except maybe as a joke?). Students don’t really need to feel shame/guilt/frustration at not knowing a chunk of language in that moment; they just need the minimum amount of help to make the utterance comprehensible, so they can get on with getting meaning and so acquiring language. And the next time they encounter, or need, “chunk X”, it will hopefully come a little easier. Or the next time. Or the time after that. Or however many times.

Diary of a Digital Apprentice (2): First, a Unix tutorial

(Here for the blog-series kick-off post).

We’re playing catch-up a little, and these are things I did in the tail end of 2017.

It’s been a long time since I’ve done anything with Unix. About 10 years, actually, and my Unix experience was limited to running Ubuntu at the time and being forced to troubleshoot a lot of things, mainly by googling answers. That was frustrating and satisfying at the same time. A memorable highlight was the time my system switched to Ancient Greek at some fundamental level, so that I couldn’t log in: it would only input Greek characters, and it was not as simple as ‘change keyboard’.

Anyway, Jedi master Tauber decided I should learn to manipulate text files in Unix and set me the following tasks. You can see them over here:

This is what I call “hunt”-learning. The teacher isn’t pushing, and the learner isn’t actively trying to pull things from the teacher; rather, the teacher is setting up tasks which the learner must then go and problem-solve. I think there’s a lot to be said for such a method, and it works particularly well for something like this.

Also, by the end of 7 tasks, I had not only an appreciation for how to do these things, but a sense both of (a) the kinds of things that can be done just by manipulating appropriate data sets, and (b) how much is possible if you just have the data.

Of course, having the data, or having a text in an actionable form, is itself half the struggle.

If you’re a total beginner like me, and want to follow through those 7 tasks, go ahead, and feel free to drop me a line if you get stuck. There’s lots I don’t know, but I know enough to hint you along the path.

Project: Shepherding a text from print to digital

One of my projects for 2018 is to take a text and shepherd it, or curate it, all the way through an open source pipeline from ‘print’ to ‘digital edition’. This is part of my 2018 year of digital humanities. Here I talk a little bit about the envisioned process.

The text I have in mind is quite short, just over 2000 words. It’s Gregory of Nyssa’s De Deitate adversus Evagrium (in vulgo In suam Ordinationem). I’ve done some work on De Deitate Filii et Spiritus Sancti and this will be a nice complement to that.

My checklist of things to do:

The Pipeline

Step 1: OCRing a print text
Step 2: Correcting the OCR output
Step 3: Creating a TEI-XML version
Step 4: PoS tagging/lemma tagging/morph tagging
Step 5: Producing a translation
Step 6: Alignment
Step 7: Annotations and commentary

Then, voilà, an open-sourced text freely available with useful data attached. Half of these things I don’t actually know how to do yet. Maybe more than half. That’s part of the fun. And, presuming it goes well, it will serve as a pilot project for putting future texts through a similar pipeline.
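To give a concrete taste of Step 3 (one of the bits I haven’t properly learned yet), here’s a minimal sketch of the kind of TEI-XML skeleton the corrected text would be poured into. It assumes the lxml library, the filename is a placeholder, and a real edition would obviously need a far fuller header.

```python
from lxml import etree

TEI_NS = "http://www.tei-c.org/ns/1.0"
T = "{%s}" % TEI_NS

# Minimal TEI skeleton: a header with just a title, and a body to hold
# the corrected text. Illustrative only, not a complete edition.
root = etree.Element(T + "TEI", nsmap={None: TEI_NS})
header = etree.SubElement(root, T + "teiHeader")
file_desc = etree.SubElement(header, T + "fileDesc")
title_stmt = etree.SubElement(file_desc, T + "titleStmt")
title = etree.SubElement(title_stmt, T + "title")
title.text = "De Deitate adversus Evagrium"

text_el = etree.SubElement(root, T + "text")
body = etree.SubElement(text_el, T + "body")
p = etree.SubElement(body, T + "p")
p.text = "…"  # the corrected, continuous text goes here, chunk by chunk

etree.ElementTree(root).write(
    "ddae.xml", xml_declaration=True, encoding="utf-8", pretty_print=True
)
```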

Moving to project-based management: todoist and todo.vu

Since my life/work balance, and my work/work balance, have become more and more fractured, I’ve felt the need to organise things more effectively. Last year I started using todoist, which I love, because there’s nothing I quite enjoy as much as ticking off a to-do list, and todoist has a ‘karma’ counter built in that racks up meaningless numbers as you tick things off.

This week I’ve started trialling todo.vu, and I’m now kind of using the two in tandem. It’s helped me mentally organise the things I do, and todo.vu includes time-tracking, which is really helpful for me.

So, this post is just kind of a run-down of how I’m organising things and using these tools.

todo.vu has a hierarchy of Clients > Projects > Tasks.

This isn’t ideal, but I’ve just adapted it as seems good. todoist has Projects and Tasks, but Projects are endlessly nestable (well, I’ve never gone past about 4). So this is aligning nicely.

My Clients list looks like this (and my top-tier todoist now mirrors it):

  • Internal
  • Self-Education
  • (College)
  • (2nd College)
  • (3rd College)
  • Academic Career
  • Church
  • Online Tutoring
  • (A Business)


Internal was a pre-existing client; I’ve repurposed it for ‘life management’, whereas Self-Education is for more specific educationally/professionally directed projects. The three colleges are institutions I do work for. Academic Career is where I place specifically academic things: research papers, PhD to monograph, etc. Online Tutoring is self-explanatory; if I were using this as a billing platform I’d create each person as a separate client, but since I don’t, I prefer this breakdown.

One of the things the whole schematisation process helped me with was sorting through ‘recurring open-ended projects’ and ‘defined, goal-oriented projects’. For example, under Self-Ed I have separate projects for “Latin studies”, “Greek studies”, and “Gaelic studies”. These are all very open-ended ‘projects’. Previously I had just set them as recurring tasks in todoist. Now I’ve created individual Tasks for each component in todo.vu, e.g. reading Athenaze, working on a specific course, etc. I can still set these as recurring tasks in todoist (e.g. daily reading tasks), but it means I can log time in todo.vu, and when I finish an individual task, I can complete it while keeping the open-ended project going. All book reading is set up in the same fashion. Truly open-ended ongoing tasks I’ll set on a year-by-year basis: for example, “Online Tutoring > Student 1 (2018)”, where I just log hours against the task and close it at the end of the year. But, complementing this in todoist, I might also have “OT > Student 1 > this week’s prep” as an individual task.

I’m hoping that the net effect of this will be to give me greater awareness, control, and discipline over the disparate projects and tasks on my plate. Combine that with tools to limit or block off various distractions, and here’s to a productive 2018. (I’ll just set a reminder now to blog on this again in six months…)


Diary of a Digital Apprentice (1)

One of my goals for 2018 is to acquire a working skillset in areas of Digital Humanities. As I do so, I plan to blog regularly on that ‘mission’. In today’s post, I provide some context for the start of that journey.


I’d say I’ve long had a user-side interest in Digital Humanities. I’ve appreciated, and used, the considerable resources that things like Perseus, TLG, PHI, and other packages have presented. And I’ve always envisioned ‘more’ being possible. But, being relatively short on the technical side of things, I’ve always found DH a bit of black-box wizardry.

A couple of years back I made the acquaintance, first digitally, of James Tauber. Some of our initial overlap and discussion had to do with tools for language learning and teaching. We met briefly at AARSBL in 2015, and have conversed a bit more since then. Another face-to-face meeting at AARSBL in 2017 helped solidify things, and we have launched both some collaboration and some apprenticing.

That ‘looks like’ two things. Firstly, a combination of push-learning, pull-learning, and hunt-learning. Pull-learning is where I ask, “how do we do X?” or “is Y possible?” and then get a crash course on how to make certain things happen, or an explanation of “yes, Y is possible; look, Dr ABC has been working on this for umpteen years, see!” Push-learning is where you learn things you didn’t know you could learn, e.g. “Hey, Seumas, did you know you can use E to accomplish F, G, and H!” And hunt-learning is when James says something like, “Seumas, figure out how to do M, N, O, and P, and then tell me how you did it or when you get stuck.”

Part of this relates to the work that Eldarion is doing on developing the Scaife Viewer for Perseus. Which is incredibly exciting because (a) Perseus! (b) have you seen the Scaife Viewer demo’d? (c) it’s great to see inside the black-box so to speak, to see how something like this gets developed and figure out how it works.

Another side of it is my digital Nyssa project for the year.

“Digital Nyssa” is my project to curate/shepherd a text (initially just one, but maybe more) through an open, free, digital pipeline from print to digital edition. It’s a means of acquiring practical DH skills across a range of tools (OCR, TEI-XML markup, PoS and morph tagging, digital edition creation, and then commentary/annotation/translation). You’ll be hearing more about it as the year goes on, and I’ll outline a little bit more next week.

So, each week I’ll be posting a bit of what I’ve been doing/learning/working on, as part of a bigger project to self-document the learning process for myself, and hopefully to encourage others that DH is not so scary. The first few posts will also play some catch-up on things from the past few weeks.