Later Years
Jonathan Slocum
When I was hired to direct the machine translation (MT) project at the Linguistics Research Center in January 1979, I knew virtually nothing about MT, although I was well-educated and highly proficient in the encompassing discipline of natural language processing (NLP) or, in a broader sense, computational linguistics. Only gradually, over a period of years, did I learn why I had remained so ignorant.
A report by the Automated Language Processing Advisory Committee (ALPAC), commissioned by the National Academy of Sciences, had been released in 1966; the report's conclusions were that applications of machine translation were infeasible, and that funding for applied MT projects should be redirected into research on more basic, underlying NLP problems. The not-so-veiled implication of the political uproar that ensued was that MT system developers were suspect, if not outright frauds.
But while MT development funding was indeed eliminated virtually across-the-board -- only a few MT research projects managed to survive, none unaffected -- funding for NLP projects in general was also reduced or eliminated. Naturally, NLP workers blamed these broader funding cuts on MT system developers, who became pariahs in the NLP profession. Machine translation was not discussed in polite professional company, and universities (in the U.S., at least) tended to ignore the discipline altogether. Intellectually, I was a child of the next generation: I knew exceedingly little about MT, and nothing about the events that led to its becoming taboo before my college days.
When I arrived at the LRC, the project was in shambles. The old METAL system, implemented in FORTRAN and at least hundreds of thousands of lines long, was semi-documented but useless: a run required several hours of access to a CDC 6600, once considered a supercomputer and still a campus mainstay, with parts of the operating system removed -- thereby making the hardware unavailable to anyone else. For the LRC in 1979, the days of having a dedicated supercomputer were gone. Tiny bits of LISP code, written by a graduate student, could limp through some process that, when successful -- which was extremely rare -- vaguely resembled MT; but it took an entire day to complete a "short" batch run on a time-shared computer. If an error was encountered, there were few or no clues as to what had gone wrong or why, and days of effort were generally required to identify a likely cause.
I proceeded to study the system documentation, to the extent that any existed, and to interview the project linguist about how the system was thought to work. I then wrote an entirely new METAL system in LISP, using all new algorithms, which could produce translations of sentences in seconds -- online, not in overnight batch mode -- and which, when an error was encountered, would emit diagnostics. It became possible to test linguistic rules online, and to modify & test them again in short order. The light of progress began to fall on the project.
A curious thing happened when I commenced MT R&D: some of my professional colleagues seemed to stop speaking to me -- or worse. Dawn began breaking on my ignorance when, at a conference in 1980 where I presented my first MT paper, the question period was consumed by a husband & wife tag-team who proceeded to scream at me for the duration. No question was ever asked. I realized that something was very wrong... My new interest, I eventually understood, was taboo. On the other hand, I had one great advantage: I was not aware that MT was infeasible, and that my hopes were doomed. Perhaps this ignorance accounted in part for the subsequent success of our project! But that's another story.
It became obvious, very early in our project, that analysis & translation problems were too often due to simple human mistakes, such as typographical errors in grammar rules and lexical entries. Once a usable translation engine was in place, I began to devote my programming efforts to early detection of "obvious" errors in linguistic rules, in order to point them out before they had a chance to spoil a translation run. This became systematic, so that translation was attempted only with validated systems of rules. It was amazing that few if any other NLP projects used "rule validators" to catch errors in new/revised linguistic rules; but, then, remarkably few NLP projects of any kind dealt with systems having much more than a handful of rules, so validators might seem unnecessary, even odd. Nevertheless, once I published a paper describing rule validators, I began to notice this technique being practiced elsewhere.
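To illustrate the principle, a rule validator along these lines might look like the following sketch (hypothetical modern Common Lisp, not the actual METAL code; the category inventory and rule format are invented for the example):

    (defparameter *known-categories* '(S NP VP PP DET N V ADJ ADV)
      "Hypothetical inventory of legal category symbols.")

    (defun validate-rule (rule)
      "RULE is a list of category symbols, e.g. (S NP VP).
    Return a list of error messages, or NIL if the rule is clean."
      (let ((errors '()))
        (dolist (sym rule (nreverse errors))
          (unless (member sym *known-categories*)
            (push (format nil "unknown category ~A in rule ~A" sym rule)
                  errors)))))

    (defun validate-grammar (rules)
      "Report every error in RULES; return T only if all rules pass."
      (let ((clean t))
        (dolist (rule rules clean)
          (dolist (err (validate-rule rule))
            (setf clean nil)
            (format t "~&ERROR: ~A~%" err)))))

    ;; (validate-grammar '((S NP VP) (NP DETT N)))
    ;; prints "ERROR: unknown category DETT in rule (NP DETT N)"
    ;; and returns NIL -- the typo is caught before any translation run.

The point is simply that a misspelled symbol is rejected in seconds, rather than discovered days later at the bottom of a failed batch run.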
The next, seemingly logical step was the provision of special-purpose tools to help linguists create and edit rules -- especially lexical entries by their thousands, but other rules as well. Naturally, a well-written software tool would never produce a rule containing an error. (In this context, a text editor is not well-written software.) And since humans are much better at selecting items from lists than at typing them in, possible "closed class" values were made selectable in a graphical interface, reducing typing to a bare minimum. Again, these techniques seemed to be unknown in the NLP field until I published them.
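The underlying principle can be sketched without any graphical machinery (hypothetical Common Lisp, using a plain text menu where the actual tool used graphical selection; the GENDER feature and its values merely stand in for any closed class):

    (defparameter *gender-values* '(masculine feminine neuter)
      "A closed class: the only legal values of the example GENDER feature.")

    (defun choose-from (prompt values)
      "Offer VALUES as a numbered menu; retry until a legal choice is made."
      (loop
        (format t "~&~A~%" prompt)
        (loop for v in values and i from 1
              do (format t "  ~D. ~A~%" i v))
        (let ((n (parse-integer (read-line) :junk-allowed t)))
          (when (and n (<= 1 n (length values)))
            (return (nth (1- n) values))))))

    ;; (choose-from "Gender of the new noun?" *gender-values*)
    ;; can only ever return MASCULINE, FEMININE, or NEUTER:
    ;; a misspelled feature value is impossible by construction.

Selection, unlike typing, cannot produce a value the system has never heard of -- which is exactly the class of error that used to spoil translation runs.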
A subsequent "toolmaking" step was to create software that would guess what a lexical entry might look like, based on analyzing the new word form and finding any similar words already in the dictionary. Humans are much better at verifying content than creating it. With a "guesser" tool, a lexicographer seldom faced a tabula rasa when coding a new entry; rather, a plausible lexical entry would be "guessed" and made available for editing. Surprisingly often, guessed lexical content would be correct, leaving only "unguessable" (and therefore more interesting!) content up to the lexicographer. Our German and English lexicons grew to contain tens of thousands of entries. Abroad, other projects using the general-purpose METAL software extended coverage to Spanish, French, and Dutch.
At each step along the way, then, system-building tools made it ever easier for non-programmers to create, test, and debug linguistic content. Indeed, as our project grew in terms of personnel, the level of programming expertise remained much the same: instead of programmers, linguists were hired to use tools crafted especially for them. Another toolmaking project concerned the use of the MT system in practice, i.e., not by system developers or linguists, but by actual translators. Early on, tools would extract text from formatted documents and then replace that text with its translation. Later, other tools were developed to manage the translation process: searching texts for unknown terms before translation commenced, scheduling texts for translation when the lexicon was updated, etc.
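The unknown-term scan, for instance, reduces to a simple idea (another hypothetical Common Lisp sketch; the tokenizer and lexicon format are invented for the example):

    (defun tokenize (text)
      "Split TEXT into downcased words at non-alphabetic characters."
      (let ((words '()) (start nil))
        (dotimes (i (1+ (length text)) (nreverse words))
          (let ((ch (when (< i (length text)) (char text i))))
            (cond ((and ch (alpha-char-p ch))
                   (unless start (setf start i)))
                  (start
                   (push (string-downcase (subseq text start i)) words)
                   (setf start nil)))))))

    (defun unknown-terms (text lexicon)
      "Return the words of TEXT that LEXICON does not contain."
      (remove-duplicates
       (remove-if (lambda (w) (assoc w lexicon :test #'string=))
                  (tokenize text))
       :test #'string=))

    ;; (unknown-terms "Translate the gearbox housing quickly."
    ;;                '(("translate") ("quickly")))
    ;; => ("the" "gearbox" "housing") -- flagged for the lexicographer
    ;; before machine time is wasted on a doomed translation run.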
The key to developing a practical software system is to "program oneself out of a job" by building tools for non-programmers. After all, there are always other systems to create! In the summer of 1984, a full production prototype translation system was delivered to Siemens, our sponsor, and I moved on to create another system...
Jonathan Slocum,
November, 2006
For references to technical details about this project, and to MT history and older MT systems more generally, see papers and books from the 1980-89 time frame in the author's publications.