Productivity and Regularity in Synchronic Phonology
In the same way, synchronic phonology is also regular, and it is describable with the same sorts of rules as diachronic phonology (though note that these rules do not describe the same object - synchronic phonological rules describe processes happening in a single human brain, while diachronic sound laws describe relationships between grammars that exist at different points in time, a meta-analysis - hence de Saussure’s famous argument for the primacy of synchrony over diachrony).
What this means, in the context of the current conversation, is that if, as you say, the “phonetic easing” process is still active in Modern English, you need to be able to provide a regular, exceptionless phonological environment in which it applies.
You’ve attempted to do this to some degree with your consonant deletion examples (even if your proposed pronunciations for strong “the” and “to” are pretty dicey), but in order to prove that the sound law that produced the a/an alternation is still a regular phonotactic constraint in Modern English, you’ll have to provide a regular synchronic sound rule that exceptionlessly describes the phonological environment in which the deletion occurs, which I don’t think you’ll be able to do.
Note that your proposed rule must not be specific to individual lexical items or refer to morphological or syntactic boundaries. This is because:
Structure Is Not Visible to Phonology - The Modularity of the Grammar
It’s traditionally assumed by most generative linguists that the grammar is largely modular - that is, each phase of the generation of an utterance is separate, and the phases proceed one at a time with little overlap between the modules. Syntax first builds the structure of the clause; then morphology, which does not have access to the syntactic structure (though see Distributed Morphology for modern attempts to unite syntax and morphology), builds words to fit into the structural positions that syntax built; and then phonology, which similarly cannot see either syntactic or morphological structure, determines the sounds that are sent off to be pronounced by the articulators. (The actual relationships are a bit more complex - see Kiparsky’s 1982 work on Cyclic/Lexical Phonology for a famous and fairly accessible example - but the generalization holds well enough for the data we’re dealing with here.)
What this means is that synchronic (and diachronic, for that matter) sound rules only ever apply in phonological environments, that is, to strings of phones and suprasegmental features like tone, stress, etc. (which does include prosody), and not to individual words.
So, in order for the “ease of pronunciation” constraint you’re referring to here to still be active in Modern English, it must be describable as a phonological rule that applies exceptionlessly in a specific phonological environment, regardless of the words or structures that are actually present.
This is why I don’t think you’ll be able to show that the a/an alternation is still a regular, productive alternation in Modern English. The a/an alternation is not predictable - there is no general rule in English phonology of which its behavior is a subset. A child acquiring English just has to learn that for this specific morpheme there’s an “n” before vowels and no “n” before consonants, and, crucially, that no general phonological rule of English produces this behavior anywhere else.
We can test this with the analogy and borrowing tests above. First, through the analogy test, “my/mine” no longer behaves this way, because its behavior has been altered through a combination of analogy and grammaticalization - the sound law clearly no longer holds in its environment, so the phonotactic constraint that produced it is no longer active in the language. Second, and this is admittedly a hypothetical, I don’t believe that any new monosyllabic word borrowed into English ending in -an (or -uw or -ij, for that matter) would show the same alternation in any environment, which would again indicate that the phonotactic constraint is no longer active.
All of this is because the regular sound change that originally produced this alternation is really just as fossilized as the medial f/v alternation: neither alternation can be described by an exceptionless synchronic sound rule, so both must be stored in the lexicon (“fossilized”) and learned as exceptions by new acquirers.
(Note: Both of these alternations are examples of morphologically/lexically conditioned allomorphy, if you’re interested.)
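If a toy illustration helps, here’s a quick sketch in Python (using spelling as a crude stand-in for sounds, with made-up names - just a sketch, not a serious model) of why the a/an pattern can’t be a general phonological rule: if a blind “delete word-final n before a consonant” rule were really active in English, it would hit every word, not just the indefinite article.

```python
# Toy sketch: pretend a/an reflected a *general* phonological rule of
# "n -> zero / _ # C" (delete word-final n before a consonant-initial word).
# A real phonological rule would apply blindly to every matching string.

VOWELS = set("aeiou")  # crude, spelling-based stand-in for "vowel sound"

def fake_n_deletion(words):
    """Apply the hypothetical rule blindly, with no reference to which word it is."""
    out = []
    for word, next_word in zip(words, words[1:] + [""]):
        if word.endswith("n") and next_word and next_word[0] not in VOWELS:
            word = word[:-1]
        out.append(word)
    return " ".join(out)

print(fake_n_deletion(["an", "car"]))    # 'a car'   -- the attested alternation
print(fake_n_deletion(["ten", "cats"]))  # 'te cats' -- but no speaker says this
print(fake_n_deletion(["sun", "set"]))   # 'su set'  -- or this
```

Since “ten cats” and “sun set” never alternate, the pattern has to be stored with the article itself rather than stated over sounds - which is exactly the lexically conditioned allomorphy described above.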
I hope this makes sense. Sorry if this was way too much info - it felt nostalgic, like being back in front of my third- and fourth-year undergrad students again, and I got a bit carried away. Also, I like your username. :)
Let me know if anything here is unclear or if you have further questions.
(Two more comments this time as well.)
To start off, it’s clear that our theoretical assumptions are irreconcilable (I might go so far as to say “diametrically opposed”), and that we are not going to agree here, but it’s important to note that my model is perfectly able to capture your German data.
It was a great example. There’s no such thing as a bad example, because sound change is equally regular in every natural human language.
Yes, the vast majority of theoretical linguists, and practically all historical linguists, both in America and in Europe (with much of the best European work still coming from Germany and the Netherlands), very much still subscribe to the regularity of sound change, because as far as we can tell, it’s an empirical fact.
Also note that it’s impossible to prove language relatedness without the regularity of sound change. Regularity of correspondence is literally the only metric we have that can prove relatedness, so if the Neogrammarian Hypothesis were somehow disproven (which is very unlikely), then the scientific underpinnings of the way we group languages into families would immediately collapse.
(Also, yes, hypercorrection is another form of analogy, often called “interdialectic analogy”.)
This is a great question, and technically it’s still unproven (and may never be), but the hypothesis has been borne out in so much data for so many decades, with no convincing counterexamples, that there seems to be no good reason to disbelieve it.
OH! I should include the most important reason why the regularity of sound change is considered by most western linguists to be scientifically reliable - it makes predictions that are borne out by new data.
The Case of the Indo-European Laryngeals
(This is an oversimplification of the events, because the data is complex and goes beyond the scope of our discussion here, but the [Wikipedia page](https://en.m.wikipedia.org/wiki/Laryngeal_theory) is fairly good if you’re interested.)
Basically, in the late 1800s, scholars working on reconstructing Proto-Indo-European were a bit confused - the reconstructed sound system (which is reconstructible, of course, due to the regularity of sound change) seemed to contain two different systems of vowel alternations, a situation unheard of in any of the world’s languages.
Ferdinand de Saussure (yes, that Ferdinand de Saussure) realized that he could collapse both systems into one by positing an unspecified series of sonorant consonants (his famous coefficients sonantiques) that colored adjacent vowels in specific environments before disappearing entirely in all of the daughter languages. This resulted in a much simpler system that was also more typologically likely.
His contemporaries ridiculed him for reconstructing a proto-sound that disappeared in all of the daughter languages, but once Hittite was deciphered in the early 1900s, shortly after de Saussure died, it turned out to have an “h” in exactly the places where de Saussure had predicted his “coefficients sonantiques” in the proto-language.
None of this is possible without the regularity of sound change, and we’ve seen the theory make predictions that are borne out by new data again and again.
Yes, linguists very much still subscribe to the regularity of sound change, both in the US and in Europe.
I didn’t forget anything. While frequency is clearly a factor in language change, it’s not relevant for sound change, because the apparent frequency effect reduces to a change in prosody/stress, and prosody/stress is a describable, regular phonological environment that regular sound rules can act on.
In your “haben” case, for example, the grammaticalization of a main verb into an auxiliary verb clearly establishes a new prosodic pattern, which can then be acted upon by regular sound change to the exclusion of other main verbs.
We see similar alternations in English main/auxiliary verb pairs:
I’ve already eaten. BUT
*I’ve a cheeseburger. (In American English - Brits can do this)
I’m gonna eat a cheeseburger. BUT
*I’m gonna the store.
This is, of course, expected: the grammaticalization of main verbs into auxiliary verbs results in a different stress/prosodic pattern (which I’m sure you can feel in German with “haben” as well), and so it’s a perfect environment for a regular, exceptionless sound rule to apply.
And these phenomena (and likely “haben”’s case also, though I’m not familiar with the literature) have been treated perfectly satisfactorily in the generative and historical literature for exactly this reason.
This is a common mistake made by those trying to “disprove” the Regularity of Sound Change - they don’t invest enough time in phonology to understand that phonological domains larger than the word exist. It’s actually kind of funny how elementary the “counterexamples” critics bring up always are - you’d think people would realize that a field that’s over two hundred years old has come across auxiliary verbs at some point.
Also, you’ve asserted that “haben”'s change is not due to analogy or interdialectic borrowing, but I’m not sure where your certainty is coming from here. Without looking more deeply into the phenomenon, at this point the data you’ve presented could easily be described by sound change, analogy, or borrowing, and though I’m not familiar with that data specifically, I have no doubt that one or a combination of the three fully explains the data (because, again, one or a combination of the three fully explains literally all historical data that we’ve found so far).
I mean, it’s an empirical fact of language going all the way back to de Saussure and Jan Baudouin de Courtenay’s insight that phonemes have regular and predictable relationships with their allophones, but luckily there’s also a clear physiological explanation for the regularity of synchronic phonology. (It’s interesting that you’re so interested in “explanation” now, but we’ll get to that later.)
The explanation comes from a combination of the nature of articulator movement and the fact that (as de Saussure famously noted) language is a regular system composed entirely of contrasts.
Humans articulate language by moving their articulators in a surprisingly small number of regular, precise, complex movements that they have been practicing since they acquired their language in childhood.
These movements eventually become second nature to the speakers, but humans always feel a constant pull between wanting the system to be as simple as possible (leading to regular sound change - our “ease of articulation” here), and wanting the system to have enough contrasts to adequately encode meaning.
That’s why phonology is regular. That’s it. It’s a consequence of the nature of human articulation. Every time an American English speaker pronounces a /t/ between a stressed vowel and an unstressed vowel in a word, that /t/ becomes a flap, because of these millions of practiced, unconscious movements.
Note that this also means that American English speakers literally cannot (without practice) produce a different /t/ allophone in that position in one specific word when speaking fluently. If the pronunciation of intervocalic /t/ were to change in one word, it would have to change in all of them (unless a different specific environment catalyzed a different regular change), because that sequence of articulator movements functions as a single unit.
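If it helps to see that “only the sounds matter” point concretely, here’s a toy Python sketch of the flapping rule (with a deliberately simplified, made-up transcription: “1” marks a stressed vowel, “@” is schwa, “D” stands for the flap - this is illustrative notation, not a real phonological formalism).

```python
# Toy sketch of American English flapping: /t/ becomes a flap between a
# stressed vowel and an unstressed vowel. The rule looks only at the
# neighboring phones -- it never asks which word the /t/ belongs to,
# and it has no lexical exceptions.

VOWELS = {"a", "e", "i", "o", "u", "@"}  # '@' = schwa; a trailing '1' marks stress

def is_vowel(phone):
    return phone.rstrip("1") in VOWELS

def flap(phones):
    """Rewrite 't' as the flap 'D' when a stressed vowel precedes and an
    unstressed vowel follows."""
    out = list(phones)
    for i in range(1, len(phones) - 1):
        before, here, after = phones[i - 1], phones[i], phones[i + 1]
        if (here == "t"
                and is_vowel(before) and before.endswith("1")
                and is_vowel(after) and not after.endswith("1")):
            out[i] = "D"
    return " ".join(out)

print(flap(["b", "u1", "t", "@", "r"]))  # butter -> 'b u1 D @ r' (flapped)
print(flap(["a1", "t", "@", "m"]))       # atom   -> 'a1 D @ m'   (flapped)
print(flap(["@", "t", "a1", "k"]))       # attack -> '@ t a1 k'   (no flap: stress follows /t/)
```

The same function would flap a /t/ in any nonce word or new borrowing that happens to contain that environment - which is exactly what it means for the rule to be productive and exceptionless.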
Once again, it’s an empirical fact that phonology is regular, and the regularity of sound change follows from it.
Also, the regularity of synchronic phonology is further demonstrated by how difficult it is to pronounce foreign-language sounds. The mechanism is the same: we are only accustomed to producing the relatively small set of regular movements of our native language, and altering those is difficult. It’s just as difficult, if not more so, to spontaneously begin pronouncing one word in a way that doesn’t conform to the language’s phonotactics.