How many languages do you speak/know/etc.? Acquaintance vs. Competency vs. Proficiency vs. Fluency

Not infrequently I get asked a version of the first question. It happened a week or so ago: a girl in my research office casually started a conversation with, “So, how many languages do you speak?”

My reaction varies a little based on the question, or more precisely the verb chosen in the question. Generally speaking, I don’t like to launch into a discourse on the differences between knowing/speaking/acquisition/etc. That’s what this place is for!

I think one of the difficulties is that we have a Native Language concept that interferes with or influences how we think about L2s. That is, we generally think we ‘know’ our L1(s), and treat them as a singular entity that is “known”. Even though, of course, Native speakers often don’t ‘know’ some things about their language, or make errors, or any number of things. Let’s not even get into the idea of idiolects and each language as an idealisation. We tend to think of Languages as Idealised Units, and knowledge as binary: you take a course of instruction and then you “know” TL. Even when we know this is wrong, we still have this tendency. We, ironically, need better vocabularies for talking about knowing Languages!

I like the term “Acquaintance” to mark any general knowledge about a language and an ability up through the beginner stages. It’s a useful tag for saying, “I’ve come into contact with TL (target Language), know a little bit about it and know a little bit of it.” It’s vague enough and humble enough to cover a wide range of levels below the rest. I would say that I’m acquainted with German, French, Italian, Spanish, and ‘well-acquainted’ with Biblical Hebrew.

Let’s skip forward to “fluency”. This is probably the most difficult term. It’s used for such a broad range of abilities. Benny Lewis of Fi3M fame pins it as low as B2 (in his “Fluent in 3 Months”, Kindle loc. 674). I suspect most people think of it as higher than this, C1 at least. For most people, fluent means something close to Native-like: an ability to speak about a broad range of topics, in depth, without any errors that hinder communication. It’s, frankly, difficult to reach such a level for an L2, primarily because the sheer time to go from B2 to C1 and then to C2 is really quite vast, and requires a lot of time functioning in the L2. I rarely say I’m fluent in a language (except English!).

What’s between the two? I like to use the terms “competent” and “proficient”. Recently I’ve been reading Alice Omaggio’s Teaching Language in Context: Proficiency-Oriented Instruction, which is an interesting book for all sorts of reasons. Although these terms could be used interchangeably, or with various nuances, I treat ‘competent’ as lower than ‘proficient’; “competent” in my mind is someone who can understand the language without frequent miscommunication, and can manipulate the language to express what they want. It’s a lower-intermediate level. “Proficient” in my view is more what Benny thinks of as “fluent” – the speaker has a mastery of competency, so to speak. They are able to discourse about a range of topics, and have socio-communicative strategies for managing when they are out of their depth. They are not near-Native, but they can function and survive in a wide array of TL settings, without needing to resort to their L1s.

Generally I would say I am competent in Gaelic. I would say I (was) proficient in Mongolian, though it’s probably slipping. Depending on audience I am usually happy to say I’m proficient in Ancient Greek and Latin. Primarily because, although my conversational skills are low, I am rarely called upon to speak in these languages, and my high level reading abilities mean I am equipped for what I do in them.

Of course, in an ideal language-learning world we would have unlocked all secrets and have a fast-track method for moving students rapidly from A1 to C2, from acquaintance to ‘fluency’. But we don’t, we just make up these labels to try and categorise a range of phenomena, our raw data on those times when we succeed or fail in communication, whether transmitting or receiving. What’s more important, in my view, is the simple principle of Language learning momentum, to keep moving forward rather than backward in one’s L2s.

 

Sometimes I just say 5 and move along with my day.

The interpreter’s task

The first thing an interpreter needs to do is to examine the text under consideration in order to establish the text. We call this the work of Textual Criticism.

Next, the interpreter must read the work. This involves understanding the correct phrasing of the text, and if necessary making decisions about appropriate breaks in words and phrases.

Third, the interpreter should move to analysis of the text at the level of grammar. They should explain the meaning of any difficult words, whether because they are rare, have heightened significance, or contain an ambiguity that must be discussed. They should also discuss the technical elements of grammar, illuminating any difficult constructions or explaining their sense. If the text is poetic, they should likewise give an account of its metrical features; if prose, they may give an account of elements of genre and discourse.

At this stage the interpreter may also relate the text to any historical issues, debating its relation with other historical texts, intertextuality, archaeology, and other findings of the scientific disciplines that may bear on our understanding of the text. They should also explain at the textual level the functioning of figures and tropes.

The last stage of the interpreter’s work is interpretation proper. Here they will explain not merely the meaning of the text, but its significance. What is the author’s purpose, how does the text function for its original audience, how has the text been understood and utilised since, how is it reframed in a canonical context, what is its theological significance, what does it teach us about God, Christ, and the history of salvation, and how may believers today understand and appropriate it?

 

Does this sound like a reasonable description of, say, contemporary historico-grammatical exegesis? Because the outline I have just written is pretty much the template of literary studies in Late Antiquity. It’s what someone like Dionysius Thrax discusses briefly in his “Grammar”. It’s the methodology of that too-easily dismissed theologian, Origen. It’s a description that applies equally well to the so-called ‘Antiochene’ and ‘Alexandrian’ traditions.

I’m going to have a bit more to say on this shortly, but I am tossing around a few ideas at the moment. One is that a large chunk of what is being called ‘Nicene Culture’ is in fact just standard Late Antique literary studies, and what makes it Nicene is not as expansive as we sometimes take it for. Another is that the real difference between interpretative ‘schools’, ancient and modern, has very little to do with most of the above elements, and rather has to do with the area of interpretation of figures, tropes, and non-literal referents. More on this in a forthcoming post.

Why accents matter

In this post I want to talk about why I think learning accents for Ancient Greek is important, and shouldn’t be put off.

The Case Against

First, let’s consider a case against accents. In Duff, “The Elements of New Testament Greek”, we are given three reasons:

1. Accents were not present in written Greek in the New Testament period.

2. The rules of accentuation are complicated, and you have enough to learn.

3. Accents don’t help you translate or understand Greek.

 

Duff immediately recognises that 3 is not strictly true: there are a few cases where accentuation does matter. However, I am not convinced that 1 and 2 are really good reasons either. To discuss 1: I think this is a false appeal to NT writing. We don’t read 1st-century manuscript writing anyway, unless we’re doing papyrus work; most students aren’t doing that, and most scholars aren’t either. So to be truly consistent on point 1, shouldn’t you teach Majuscule texts with no word divisions, and instruct students how to make word divisions on their own? I don’t think an appeal to 1st-century “authenticity” will work on one basis (writing) when used to argue against another basis (pronunciation).

I also think 2 is a very poor pedagogical principle. It does match very well with what Duff writes on p1, that the aim of the book is “[to] help you learn enough Greek to read the New Testament”. That is a very truncated goal in itself. The NT is, give or take, 138,020 Greek words. That’s like one of the later books in A Song of Ice and Fire, or an overly long PhD dissertation. Imagine you learnt enough of a language to read one dissertation and that was it. How odd, we would say.

Principle 2 would, in my view, be equivalent to ‘admitting’, “hey, Greek is hard, it’s a language, and languages are hard, let’s teach you just enough to act like you almost knew Greek”.

A Case For

Here’s why I think accents should be taught from day 1:

Greek was not a ‘written-only’ language; only a very few languages are designed not to be spoken, and Greek is not one of them. So that should remind us that there is an interplay between a once-spoken language and its written representation. Accent marks were added later, yes, but they were added to preserve information that is present in spoken Greek, and this is particularly useful to learners.

Latin actually provides a really good parallel here. We know that Latin has a distinction between short and long vowels: a, e, i, o, u, ā, ē, ī, ō, ū. But Latin writing doesn’t use macrons; only we do, in learners’ texts. Why didn’t Latin writing use macrons (in general, that is)? Because this information didn’t need to be marked – if you learnt to speak Latin you knew, by the sound, what was a long and what was a short vowel. But here is what’s key: vowel length carries semantic significance. Not always, and I’m sure L2 speakers made Latin blunders all the time and were yet understood, but est and ēst are two different words.

English readers generally have difficulty taking note of diacritical marks on letters, because we are used to operating in a Latin script that doesn’t utilise them, and in English they aren’t tremendously meaningful, because we, who speak English, already know how to pronounce English words. Though actually English pronunciation is hell on earth.

Learning accentuation in Greek forces you to learn how to pronounce words with their correct accents. It is an authenticity criterion just like Duff’s principle 1 is, but it runs in the opposite direction: learn to accent correctly and you can read unaccented text fine; learn to read unaccented text and you will always struggle to accent correctly (like most of us!).

Hebrew vowel pointing provides a partial analogy. Everyone learns vowel pointing for Biblical Hebrew. Then, when you read an unpointed text, you can usually work out what’s going on. No one learns to read unpointed Hebrew, unless they learn to speak Hebrew in which case the Hebrew ‘is’ pointed, i.e. the vowel information is already encoded into the speaker’s mind.

It’s much harder to learn accentuation later. That’s because we treat it as ‘extra’ and so ‘extraneous’ data. By teaching it from day 1, we teach it as significant and as semantic: it carries differences in meaning that actually matter in the Greek. That is why, if we’re actually teaching the Greek language, and not just hoping to pass a grammar exam and forget about Greek until the next sermon insight comes along, I argue for accents from the start.

How do you ‘fix’ textbooks? (with some thoughts on Aspect in Greek)

I mean, the answer is kind of simple. You edit and reprint them. If you’re the author.

I’ve had occasion this week to consult what Mounce, that most widely used Greek textbook, says about aspect.

Similarly, I also dipped into Duff. Now Duff is unfortunately a revision of Wenham which is a revision of Nunn. I’m sure some places out there are still using Wenham. I hope no one is still using Nunn! Duff’s 3rd edition from 2005 discusses Greek verbs on p66. It begins with tenses, and at least discusses aspect, saying that verbs encode both time and aspect. It lists the aspects like this:

  • Present tense: process or undefined
  • Future: undefined
  • Imperfect: process
  • Aorist: undefined

It also says that undefined is either ‘undefined’ or punctiliar in contrast to process.

I won’t hammer Duff on other issues (deponency, for one), at least today.

The perfect doesn’t appear in Duff until 16.1 (p179), where he says the aspect is “completion”, which isn’t too far off, really. He also says the time is past and present, which is a little confusing, I would think.

My problem with Duff is that the terminology is inconsistent, and that the presentation of aspect is muddled. Although I think Campbell is wrong about the perfect, one very helpful thing he achieved was to disentangle aspect from Aktionsart and to help people think of these separately. Punctiliar is a type of action, an Aktionsart; it doesn’t help us to use it as a label for an aspect. “Undefined” is not a helpful label for an aspect either, because it actually doesn’t tell you anything. The aorist tense-form is aoristic, i.e. undefined, but only in the language of Greek grammarians. In our language ‘undefined’ could be perfective or imperfective!

There are things we might disagree about, and there are things we just need to get clear. For instance, I think Duff is just wrong about the Present being ‘either’ “process or undefined”. But I think there are a number of things that have become reasonably clear in the last few years, and NT textbooks could serve us all well by revising themselves for their next reprint.

Adopt a clear terminology of, say, perfective vs. imperfective, divorce time from aspect, get rid of deponency language, and then use a clear terminology to state points of ‘position’.

Decker does this pretty well in his work; I don’t see why a revised Mounce or Duff couldn’t do likewise. If you want to say that time is encoded in verb tense-forms, fine, just state it clearly. If you want to say that the perfect is imperfective in aspect, fine, just make it clear that that’s what your textbook is doing. Conflating categories and muddling terminology is not helping students in this field and isn’t going to set them up well for later on.

Software for ‘reading’ foreign language texts (2)

Okay, let’s look at Learning with Texts, and how to get set up and going with it.

A lot of the information in this post is derived from this post over at DIY Classics, which I 100% tell you to go and read. My aim is to supplement that post by giving some more specific details, talking through FLTR as well, and showing how to use the two tools.

Your first port of call should be the LWT page, which is full of information, including what to do if you want to set up your own local version, which involves running it on a server. That’s not most of us, and so thankfully Benny of “Fluent in 3 Months” fame runs a free hosting service. So the next stop is to go to his website:

http://www.fluentin3months.com/wp-login.php?action=register

and register a username and password, then use this to go to:

http://lwtfi3m.co/

and use it to log in.

For all the set-up details, follow the rest of that post at DIY Classics (though note point 5 below); here is the link again. But if you are feeling lazy and want to stay here, here’s a brief run-down:

  1. Click on my Languages
  2. Click on the Green plus sign or ‘New Language’ and add Latin/Greek
  3. For dictionary, you want to add:

Latin: http://www.perseus.tufts.edu/hopper/morph?la=la&l=###

Greek: http://www.perseus.tufts.edu/hopper/morph?la=gr&l=###

I would delete the Google Translate URI because it doesn’t exist for Ancient Greek, and isn’t good for Latin.

  4. For Character Substitutions: ´=’|`=’|’=’|’=’|…=…|..=‥|«=|»=
  5. Especially for Greek, you want:

RegExp Split Sentences:  .!?:;•

RegExp Word Characters: a-zA-ZÀ-ÖØ-öø-ȳͰ-Ͽἀ-ῼ

These two values make sure that LWT correctly works out (a) what starts and ends words, and (b) that it uses the Unicode ranges that include polytonic Greek characters. Both are important in getting Greek to display and function properly. Notice that the RegExp Word Characters value is different from what DIY Classics gives; I found theirs didn’t work for me. (There’s a short illustration of what these two settings do just after this list.)

  6. Select your Language from the home screen
  7. Click “My Texts”
  8. Click “New Text”, and enter a title, paste in your text, and tag it as you please.
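As promised above, here is a rough Python sketch – entirely my own illustration, not anything LWT actually runs – of what those two settings in point 5 are doing, and how the dictionary URL from point 3 appears to work: the character class picks out polytonic Greek words, and the ### acts as a placeholder for whatever word you click on.

    import re
    import urllib.parse

    # The same ranges as the "RegExp Word Characters" setting above.
    WORD_CHARS = r"a-zA-ZÀ-ÖØ-öø-ȳͰ-Ͽἀ-ῼ"
    word_pattern = re.compile(f"[{WORD_CHARS}]+")

    text = "Παῦλος ἀπόστολος Χριστοῦ Ἰησοῦ διὰ θελήματος θεοῦ"
    words = word_pattern.findall(text)
    print(words)  # each polytonic Greek word comes out as a separate token

    # The dictionary URL works as a template: ### is swapped for the clicked word.
    DICT_URL = "http://www.perseus.tufts.edu/hopper/morph?la=gr&l=###"
    print(DICT_URL.replace("###", urllib.parse.quote(words[1])))

If a character you need isn’t matching (say, for another language), the fix is simply to widen the ranges in that one setting.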

Okay, you should be all set up for Greek and/or Latin. Once you’ve done that, go to “read”; it’s the first icon listed on the text page. You’ll get a screen with 4 components:

[Screenshot: LWT 1]

In the top left is your LWT menu. Notice that it lists 839 “To Do” – those are untagged/unknown words in the text. Next to it is an “I know all” button. Basically, click this if you know every single word in a text and couldn’t be bothered tagging them.

Below this is the reading pane. In this is the text you’re working with. You can see in the first screen shot that I’ve left-clicked on ἀπόστολος, which has given me several options.

In the top right pane I’ve got the option to edit this term; it’s listed as a new term, and I can add a translation as well as some tags – perhaps I would add: noun, masculine, nominative, singular. Romaniz is for a Romanisation of foreign alphabets. At the bottom is a coloured status bar: 1-5, from unknown to well-known, then “WKn” for so well known you don’t want to worry about it ever again, and “Ign” for ignore this term, useful if your text contains non-words or words not in your target language.

The bottom right pane that’s opened up is the dictionary look-up, based on what you listed in Dictionary 1 in the settings. In this case, it’s gone to Perseus just like I told it to. The bottom frame just works like an inset webpage, so you can click through to the LSJ or Middle Liddell entry as you please.

 

Foreign Language Text Reader

I also want to talk about Foreign Language Text Reader (FLTR). FLTR is like a slimmer, maybe-dumber, version of LWT. But its two greatest strengths are that it’s on your computer (without running a server!) and that it’s super simple.

For FLTR head to https://sourceforge.net/projects/fltr/

Follow the download and installation instructions; they are pretty straightforward.

Open up the program and you’ll see a very basic interface.

First you’ll want to find the line of options that starts with Language: click on New, type in your language, say “Greek”, and then edit the settings for the language.

Pretty much the settings are the same as for LWT, so here’s what I use for Greek:

Char Substitutions: ´=’|`=’|’=’|’=’|…=…|..=‥|«=|»=

WordCharRegExp: a-zA-ZÀ-ÖØ-öø-ȳͰ-Ͽἀ-ῼ

DictionaryURL1: http://www.perseus.tufts.edu/hopper/morph?la=gr&l=###

You may want to come back and increase the text size later as well. I find the default is often too small.

Then you need to add some texts. Personally, I add them via this method:

Each language will have a subfolder wherever you installed FLTR. Open this up and you’ll find a folder like Greek_Texts; save a text file (*.txt) in UTF-8 format in here, and you can use it in FLTR. So, for example, grab a copy of, say, 1 Peter, put it into a text editor, switch the encoding to UTF-8, and save it here. Open up FLTR, select Greek, and then select the text. If it hasn’t appeared, click refresh and double-check that it’s in the right folder.
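If you’d rather script that step than fight with a text editor’s encoding menu, a couple of lines of Python will do it. A minimal sketch, with a hypothetical path – point it at wherever your own FLTR install keeps its Greek_Texts folder:

    from pathlib import Path

    # Hypothetical location: adjust to your own FLTR installation directory.
    texts_dir = Path.home() / "FLTR" / "Greek_Texts"
    texts_dir.mkdir(parents=True, exist_ok=True)

    # Any plain text will do, so long as it is saved as UTF-8.
    text = "Πέτρος ἀπόστολος Ἰησοῦ Χριστοῦ ἐκλεκτοῖς παρεπιδήμοις διασπορᾶς"
    (texts_dir / "1Peter.txt").write_text(text, encoding="utf-8")

Here’s a screenshot of some Greek text open with FLTR: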

[Screenshot: FLTR Greek]

You can see that I haven’t used it a lot with Greek, as most of the text is blue, which marks new words you haven’t tagged. Green words are well-known; yellow words are level 4, pretty well known; the colours shade down to red for levels 1-2, not very well known. A left click will bring up two things:

[Screenshot: FLTR Greek 2]

Firstly, it will bring up the editing screen for that word, which you can fill in with relevant information; it will also open the linked dictionary in a web browser. A right click on a word brings up a smaller dialogue box, and I can edit from there as well, without having it force open the web browser/dictionary. Here’s another screenshot with a short text about Bonnie Prince Charlie in Gaelic; you can see I’ve worked both with Gaelic and with this text before, because most of the text is coloured in.

[Screenshot: FLTR Greek 3]

What’s the point? Or, Pros and Cons

A student helpfully asked me how this is better than or different from using, say, morphologically tagged Bible software, à la Accordance, BibleWorks, Logos, etc.

  • It’s personalised, and testable. Every entry is put in by you, and so it’s filled with whatever you wanted to include.
  • It’s geared towards reading and familiarity. It doesn’t mindlessly tell me all the information for each word as I scroll my cursor across; it colour-codes according to how well I know the word and what information I have included.
  • It’s faster than doing this manually. Reader’s editions are great, writing on paper is great. This lets you tag your own texts digitally, and it saves those tags across texts, which is great when you’ve encountered a word once, and then find it again 3 years later in a difficult text.
  • It’s easy to work with the same interface across multiple languages. This is my preferred way of dealing with foreign texts. I use it for Gaelic, Mongolian, French, German, Italian, and am exploring its use for Greek and Latin.

 

Cons:

  • There’s no real way to do actual morphological tagging. So every inflection of ἀπόστολος is going to be a separate entry. LWT does nothing to alleviate this. FLTR does have a little drop down when you’re entering a new word that lists similar words, so if you have entered, say, a different form of the word, you can more or less copy what you had elsewhere. I suspect there isn’t an easy way to fix this, since you would need some way of teaching the software to do morphology for multiple user-inputted languages.
  • It’s slow to get started. Opening up 1 Peter and seeing 839 new words to tag, if you already have some experience in Koine, is not a thrilling experience, because this takes time. If you were starting from scratch in a language, it would be more rewarding. But if you’re already ‘on the way’, then it’s slow to get going. But it pays off. This week I opened up a new Gaelic text I’d never tackled, and at least 90% of words were already tagged. This is the pay-off.

Conclusion:

So that’s it. I’d be interested in your feedback, if you’ve had some experience or if you go and try it yourself now. Let me know if you have any difficulties in set-up or need a hand.

 

Tips and Advanced usage

Tip #1: You can select a string of words as a group; this is great if you want to tag a whole phrase that, for instance, might function idiomatically.

Tip #2: FLTR allows you to select “Vocabulary” as a text. This will let you filter a range of vocabulary by ‘knownness’, from a specific text or all texts, with a number of entries, sorted either alphabetically, by status, or random.

Tip #3: You can also access your FLTR vocab as two sets of files in your main FLTR directory, one is a plain text, say Greek_Export.txt, while the other will be a comma separated version, Greek_Words.csv. These files aren’t very useful to look at, but they are the same as the LWT TSV export, so you can actually move between the two programs.

Tip #4: You can import these exported files (from FLTR or LWT) into an Anki deck, if that’s how you like to operate.
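If you do go the Anki route, a small script saves some fiddling. A minimal sketch, under a couple of assumptions: that the export is tab-separated with the term in the first column and your translation in the second (open your own export file and check, since the exact layout can differ), and that you’ll use Anki’s plain-text import, which accepts a two-column tab-separated file as front/back fields. The file names below are just examples.

    import csv

    # Assumed layout: term <tab> translation <tab> ...  (verify against your own export)
    with open("Greek_Export.txt", encoding="utf-8", newline="") as src, \
         open("greek_anki.txt", "w", encoding="utf-8", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            if len(row) >= 2 and row[1].strip():   # skip terms you haven't translated yet
                writer.writerow([row[0], row[1]])  # front = term, back = translation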

The Middle Voice (Greek): Thoughts and Pedagogy

Recently I’ve been thinking and reading more about the middle voice. It was first occasioned by some by-the-way comments in Aubrey’s thesis, p204-6. There he gives a typological table derived from Kemmer. Also, in some email exchange, he suggested I check out R.J. Allan’s doctoral thesis, “The Middle Voice in Ancient Greek: A Study in Polysemy”, as well as Rachel Aubrey’s forthcoming thesis dealing with it.

I also had the chance to think about the middle in my “Methods” class, since the 1st-year students are just hitting the issue of voice, and so I had the opportunity to interact with 2nd- and 3rd-year students and talk about the difficulty of teaching Greek voice.

I’m going to briefly summarise the typology of the middle voice that you find in Kemmer and Allan. Allan basically gives us 11 or 12 categories:

  1. Passive Middle: The Patient has subject status
  2. Spontaneous Process Middle: the subject undergoes an internal change of (physical) state.
  3. Mental Process Middle: The subject experiences a mental affectedness.
  4. Body Motion Middle: The subject causes a change of physical position to themself.
  5. Collective Motion Middle: The (plural) subjects move, i.e. gathering or dispersing.
  6. Reciprocal Middle: The (plural) subjects act so that A does to B what B does to A.
  7. Direct reflexive middle: The subject acts upon themself, usually in a habitual/customary action.
  8. Perception Middle: The subject perceives by means of the senses and so is both agent and experiencer.
  9. Mental Activity Middle: The subject acts within and upon their own mind, and so is both agent and experiencer (and possibly patient). This differs from 3 in that 9 is more reflexive, whereas in 3 the process may have an external stimulus.
  10. Speech act middle: The subject acts as speaker, but is involved also as beneficiary or experiencer.
  11. Indirect Reflexive Middle: The subject performs a transitive action but also functions as beneficiary of the action.
  12. At some point, Allan seems to treat δύναμαι as a distinct group.

I think having this kind of typology helps a student in their intermediate stages see how middles “involve the subject”, rather than relying on the placeholder explanations often given in a beginner’s course. In each of these, except 1, you can begin to understand how the subject of the verb also takes a role as patient, experiencer, or beneficiary. This helps relate how these ideas are “middle” in the ‘logic’ of the Greek language.

It also helps to explain why deponency is a bad explanation for middle-only verbs. Middle-only verbs are ‘middle’ in the internal-logic of the Greek. We would call them middle verb-forms with middle ‘meaning’. It’s only in, say, English, that they are “middle in form but active in translation”. Translation and native-language meaning are two different things here.

One of the problems, pedagogically, is that when the middle voice is introduced in most textbooks, they have a fairly unclear way of explaining what to do with it. Basically, students are usually told: look at the active meaning of the verb, and come up with a way to ‘make it middle’. This doesn’t really help that much, I would say. It’s often better to (a) look up the word in a lexicon and check if there’s an entry for the middle, (b) consider the context of the word and how middleness might function, (c) if you’re a “think of the category” type person, having the kind of typology above would help you actually think through the various options.

The other nice thing about Allan’s thesis is that it deals with diachronic changes in Greek, and he maps out some of the shift of the θη passive stem. I think it’s deadly confusing for Koine students in particular to talk about the θη forms simply as the passive. I can see now why textbooks call this a passive stem; I would conjecture that it’s because when θη appears, it appears as a subset of the middle voice, particularly expressing category 1, the true passive. But English learners operate with an active/passive dichotomy, and so are more likely to overstate the passivity of the middle category. Learning/teaching that the passive is a subset of the middle helps to dislodge this idea.

On pages 110 and 123, Allan has a couple of diagrams that show how, chronologically, the θη stem is ‘eating up’ other middle usages, a trajectory that continues beyond classical Greek, into the Koine period and beyond, until the middle gets devoured. θη is like the ‘cancer of the middle voice’ that cannibalises and colonises the other usages. It is important for NT students to realise this, because the passive marker isn’t distinctly passive and so does not necessarily carry exegetical significance. I think R. Buth made this point somewhere about ἐγείρω and the form ἠγέρθη(ν). (Sorry, I can’t recall where, and apologies if it wasn’t Buth.) What’s the difference between Christ “being raised” and Christ “arose” (in the middle sense)? The θη doesn’t tell you which is meant. Exegetical restraint demands that you don’t try and make a theological point from a grammatical feature that won’t ‘bear that weight’.

What to do in the classroom? I’m still figuring that out. I think, personally, that I would go with these things though:

  • Teach two voices: Active and Subject-Reflexive.
  • Teach the passive as a subset of S-R.
  • Teach θη as an alternate middle stem, and give some reading material for advanced/interested students explaining its history.
  • Teach middle-only forms as just middle only, without making a big deal out of them.

Software for ‘reading’ foreign language texts (1)

So, particularly if you’ve come from a Greek/Latin/Classics background, most of what one is taught is how to do a lot of intensive reading. Intensive reading involves taking relatively small segments of text and analysing, grammatically, each word and segment, mentally parsing and tagging things, and then understanding how the clause and sentence and paragraph fit together.

There’s a place for that. But it’s generally not the best way to move towards a more fluent reading approach. On the other hand, the grammar-translation approach almost never employs something at the other end of the spectrum, extensive reading (http://en.wikipedia.org/wiki/Extensive_reading). There’s lots of good material and research arguing for the efficacy of extensive reading.

One of the barriers to reading, particularly in classical languages, is that there simply aren’t enough appropriate texts for reading. There aren’t really graded readers, Lingua Latina being a sole exception. There certainly aren’t extensive series of readers pitched at stable levels designed to move readers slowly and surely to greater proficiency and expand their vocabulary. And a great gap is that there isn’t any YA literature. YA literature, I would say, is actually amazingly important; its language is just a step below ‘adult’ literature, but it’s interesting (usually) and engaging, and adults can read it with genuine enjoyment while at a slightly lower language level.

Anyway, to read extensively in a way that works requires a fairly high level of comprehension. One needs to be recognising upwards of 90% of what’s going on in order to figure out the other 10% from context. Maybe more, maybe less. Certainly there’s a point at which there are too many unknowns and the reader gets lost.

So what if there are no texts suitable for your reading level? This is where I think some reading tools will help a great deal. Basically, we want to remove the barriers that slow reading down to the point of frustration. The main difficulty to be overcome is vocabulary – how do we raise our vocabulary to a level where more and more is comprehensible? What if we accelerated and integrated the ability to look up words, and if we made readily available on the same reading ‘page’ the meanings of every word we’ve ever encountered? That is the premise of the reading software I’ll be looking at in the next two posts. By recording and tagging *every single word*, you can see at a glance what you know, what you have previously encountered, and what you’ve never encountered.
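To make that ‘percentage known’ idea concrete, here is a toy Python sketch of the calculation the colour-coding is effectively doing for you: given the set of words you have already tagged as known, how much of a new text should you recognise? The vocabulary and text here are invented purely for the example.

    import re

    known = {"ἐν", "ἀρχῇ", "ἦν", "ὁ", "λόγος", "καὶ", "πρὸς", "τὸν"}
    text = "Ἐν ἀρχῇ ἦν ὁ λόγος καὶ ὁ λόγος ἦν πρὸς τὸν θεόν"

    words = re.findall(r"[Ͱ-Ͽἀ-ῼ]+", text)
    known_count = sum(1 for w in words if w.lower() in known)
    print(f"{known_count}/{len(words)} known, {known_count/len(words):.0%} coverage")
    # prints: 11/12 known, 92% coverage - right around the extensive-reading sweet spot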

If someone is mainly working with texts that have already been tagged to death, e.g. biblical texts, or classical texts in the Perseus collection, then maybe something like this isn’t so necessary, but once you’re outside those corpora, essentially one is tagging one’s own texts. This is a digital way to do it across texts, instead of covering a sheet of paper in notes (I’m sure plenty of us have done that), or some ad hoc software solutions.

 

In my next post I’ll talk through Learning with Texts and Foreign Language Text Reader.

Gaelic, the present tense and ‘X is Y’

I was having a chat the other day to someone about differences between Gàidhlig (Scottish Gaelic) and Gaeilge (Irish Gaelic), and also some of the issues with the grammar introduction sequencing in Duolingo.

Duolingo, as far as I can tell, built its core tree by mapping English onto the main European languages: Spanish, French, Italian, and German. In quite a few of those languages, the Present tense doubles as a Present continuous/progressive. So English “I eat” and “I am eating” can both be, say, “(Io) mangio” in Italian. Usually English speakers don’t have *too* much problem with this. Similarly, in most of those languages, the Present tense is fairly simple morphologically. So the first few lessons involve the present tense, and later on in the DL learning trees, you encounter other tenses. Participles come quite late.

Though, if you’re an English speaker, you don’t actually use the present tense for present continuous/progressive actions. You use the Present tense primarily for habitual/gnomic statements: “I eat cheese” is a habitual/general statement. We use participles, “I am eating”, to form our present continuous/instantaneous tense.

This is problematic if one were to build a Gàidhlig course, because Gàidhlig doesn’t have a present tense except for the two “to be” verbs. So it’s actually quite difficult to build a course that starts with a ‘present general/habitual’ tense; in Gàidhlig this would in fact be the non-past, a.k.a. future/habitual, tense. Most Gàidhlig courses teach the ‘to be’ verbs first and then introduce the ‘participle’ (actually a verbal noun), since all the progressive/continuous tenses are formed periphrastically with this combination.

At the same time, when you look at Direct Method readers of the kind of which I am so fond, one of the fundamental structures is something like “This is a boy”, “Donald is a man”, etc. – X is Y. Unfortunately, Gàidhlig has what appears to be a grammatically complex way of expressing this. Firstly, there are two verbs to express “to be”, as in some other languages; but secondly, the type of expression varies based on whether the “Y” component is (i) an adjective, (ii) a definite noun, or (iii) an indefinite noun. And since people often want to start with (iii), this becomes:

‘Se duine a th’ ann an Dòmhnall

In literal English: It is a man that is “in” Donald.

If you parse out that sentence, it’s:

Contraction: ‘Se (Is + e), which is Copula Verb Is + pronoun marker e

noun: duine (man)

relative pronoun: a (who/that)

verb: tha (“to be”)

preposition: ann an (in, used existentially for phrases of the type “there is X”)

noun: Dòmhnall (proper noun)

(This is part of a strong tendency to use cleft-structures and fronting in Gaelic, if you’re wondering)

That’s not easy to grammar out, certainly not for a beginner, but it’s the standard way to say X is Y (indefinite noun).

These two issues are a large part of the reason I’ve never really embarked on writing a Direct Method reader for Gàidhlig. You would just have to start differently to circumvent them, since they lead to accelerated complexity at just the point where one doesn’t want such complexity.