I have three students this year who are doing an Independent Study course in linguistics with me. We have spent the year taking apart languages, and looking at the phonetics, phonology, morphology, and syntax, studying how languages have evolved over time, and seeing how children acquire language.
Their final project is to create a synthetic language.
It has to have, like all languages, consistent, rule-based sound and grammar structure (with some exceptions to the rules, because all languages have 'em). They decided, rather early on, to design an agglutinative language -- one in which new words are built by gluing together old ones, rather as German does. Thus, their word for "biology (class)" is "züpobyshada," made up of morphemes (units of meaning) for "life," "study," and "students."
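The morpheme-gluing idea can be sketched in a few lines of code. To be clear, the morphemes below are invented for illustration -- I'm not reproducing the students' actual inventory, or how "züpobyshada" segments.

```python
# Toy agglutinative word-builder. These morphemes are made up for
# illustration; they are NOT the students' actual morphemes.
MORPHEMES = {
    "life": "zü",
    "study": "po",
    "person": "da",
}

def build_word(*meanings):
    """Glue morphemes together, in order, to form a new word."""
    return "".join(MORPHEMES[m] for m in meanings)

print(build_word("life", "study", "person"))  # -> züpoda
```

Real agglutinative languages add a wrinkle the toy version skips: morphemes often change shape at their boundaries (vowel harmony, assimilation), which is exactly the kind of rule the students have to design explicitly.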
They're finding out how difficult this process is. True synthetic languages, like Klingon and Tolkien's Elvish, are a real challenge to design, because to make them consistent, you have to think through things that most of us take completely for granted. For example, have you ever thought about the rule in English that you can't have an "ng" sound at the beginning of a word? You might be thinking, "Well, of course not. That would be weird." But that's just because English doesn't do that -- not because it's somehow impossible. Plenty of languages do. Consider, for example, the Maasai name "Ngorongoro" for the famous crater in Tanzania, and the fact that one of the most common Vietnamese surnames is Nguyen.
So, to design a language, you have to start from the ground up, deciding what the sound inventory of the language is, how those sounds can combine, where in words they can (and can't) occur, and how words and ideas fit together to form sentences -- and realize that the patterns in English aren't sacred, but represent only one of a myriad of possibilities.
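Those sound-combination rules ("phonotactics," to a linguist) can be modeled as a simple checker. This is a toy sketch under my own assumptions: real phonotactics operate on sounds rather than spellings, and the conlang rule here is a hypothetical stand-in.

```python
import re

# Toy phonotactic checker: each "language" is a list of banned patterns.
# Real phonotactics apply to phonemes, not letters -- this is a sketch.
RULES = {
    # English bans the "ng" sound at the start of a word.
    "toy_english": [r"^ng"],
    # A hypothetical conlang rule: no word may end in a stop consonant.
    "toy_conlang": [r"[ptk]$"],
}

def is_legal(word, language):
    """A word is legal if it matches none of the language's banned patterns."""
    return not any(re.search(pattern, word) for pattern in RULES[language])

print(is_legal("ngoro", "toy_english"))  # -> False (banned initial "ng")
print(is_legal("goro", "toy_english"))   # -> True
```

The point of writing the rules down this explicitly is the same one the students keep running into: a constraint you never noticed in your native language becomes a design decision you must make deliberately in an invented one.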
Given that this is so complex, it's a wonder we can speak at all, really. And more of a wonder is the fact that if children of normal intelligence are allowed to be together, but are not taught a language, they will just... invent one.
Grace and Virginia Kennedy were twins whose parents were told that birth complications might have left the girls mentally retarded. The girls were, in fact, mentally normal, but the parents assumed the worst and largely neglected them. The girls periodically heard English and German from their parents, and Romanian from a nurse who cared for them, and some of the morphemes in their invented language came from those three sources. Others, however, are idiosyncratic and unique to their language. They even made up names for themselves: Poto and Cabengo. (If you're curious, when Child Protective Services found out about them, the girls were placed in a foster home, allowed to attend school, and quickly learned to speak English.)
And now, scientists have taken the first steps toward emulating, in robots, what these children did and what my students are doing.
Ruth Schulz and her colleagues at the University of Queensland (Australia) have created what they call "Lingodroids." These robots are equipped with mobile cameras, sonar range-finding sensors, and wheels. And -- most importantly -- microphones and speakers, so they can talk to one another.
These robots are capable of doing a simplified version of what Poto and Cabengo did -- they have a set of parent syllables and syllable-joining rules, and when they "see" an unfamiliar object, they name it and point it out. If one robot sees a block for the first time, it might say "liko." The other robots, hearing it, will rush up, trying to figure out what "liko" is, pointing things out and saying the word. If they agree, the connection between the word and the object is reinforced. They then say more words, not for objects, but to describe where they came from and how they got there -- giving them words that map out the space they live in, and words for distances. (For example, after a few interactions, the robots "decided" that "ropi hiza" meant "a short distance to the east.")
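The coin-a-word-and-reinforce-on-agreement loop resembles what linguists and roboticists call a "naming game," and it can be simulated in a few dozen lines. This is a minimal sketch under my own assumptions -- the syllable inventory, the scoring scheme, and the interaction loop are simplified stand-ins, not the Lingodroids' actual algorithm.

```python
import random

random.seed(1)  # make the simulation repeatable

# Hypothetical "parent syllable" inventory, loosely echoing the post's examples.
SYLLABLES = ["li", "ko", "ro", "pi", "hi", "za"]

def coin_word():
    # A new word is two syllables glued together.
    return "".join(random.choice(SYLLABLES) for _ in range(2))

class Robot:
    def __init__(self):
        self.lexicon = {}  # object -> {word: reinforcement score}

    def name_for(self, obj):
        # Coin a word the first time an object is seen; otherwise use
        # the highest-scoring word for it.
        if obj not in self.lexicon:
            self.lexicon[obj] = {coin_word(): 1}
        return max(self.lexicon[obj], key=self.lexicon[obj].get)

    def hear(self, obj, word):
        # Agreement reinforces the word-object link.
        scores = self.lexicon.setdefault(obj, {})
        scores[word] = scores.get(word, 0) + 1

def interaction(speaker, hearer, obj):
    word = speaker.name_for(obj)
    hearer.hear(obj, word)
    speaker.hear(obj, word)  # the speaker also reinforces its own usage

robots = [Robot(), Robot(), Robot()]
for _ in range(50):
    a, b = random.sample(robots, 2)
    interaction(a, b, "block")

# After many games, the robots tend to settle on a shared name.
names = {r.name_for("block") for r in robots}
print(names)
```

Even this stripped-down version shows the interesting property: no robot is in charge of the dictionary, yet repeated reinforcement pushes the group toward a shared vocabulary.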
What I find fascinating about all of this is how natural the development of language is. Given only a few ground rules, these robots are basically creating a language from the ground up, and thereby providing linguists (and roboticists) with valuable information about how language structure works.
It does make me wonder, however, why humans are the only animals with true language. Language is defined as "symbolic communication using arbitrary sounds or written characters"; as such, a dog barking or a bird singing isn't language (because a bark or a twitter doesn't carry an arbitrarily linked meaning, in the way that the sounds of the word "dog" do). It's possible, of course, that dolphin and whale vocalizations might be language -- we simply don't know. It's hard enough to decode a language when we are already certain that it is one (if you have any doubts about this, read the fascinating little book The Decipherment of Linear B by John Chadwick, which describes how linguists figured out how to read a written language for which we had no information about the letter-to-sound correspondence). Figuring out whether dolphins' clicks, pops, and whistles carry meaning, when we don't even know whether they constitute language to begin with, is an enormously difficult problem.
All of which brings up the question of whether we'll be able to understand communications from other planets, should those ever be detected. SETI (the Search for Extraterrestrial Intelligence) is a project of long standing, recently defunded by the government, which uses radio telescopes to search for intelligible signals from space. The task, although breathtaking in its goal, is in practice phenomenally difficult. For it to succeed, a single information-carrying signal would have to be detected amongst the background clutter of naturally produced radio noise -- and after that, decoded somehow. Still, it's sad that they've fallen on hard times. But volunteers have risen to the occasion with SETI@home, which lets them analyze the radio-telescope data on their home computers.
So, that's today's ramble, from synthetic languages to Poto and Cabengo to linguistic robots to dolphins to outer space. Think about all of this when you get to work today, and a friend says, "Hi, how are you doing?" and you answer, "Just fine, and you?", and consider how complicated what you just said actually was.
Try not to let it get you tongue-tied, okay?