E.L.I.Z.A. Talking





«E.L.I.Z.A. Talking» is a project to explore the capabilities of client-side speech I/O in modern browsers.

The project features Joseph Weizenbaum's famous ELIZA program, which demonstrated the thrill of a natural-language conversation with a computer for the very first time. Joseph Weizenbaum (1923–2008) was an important pioneer of computer technologies and later became well known for his critique of technological progress. His program is presented here in the famous VT100 terminal, which was introduced in 1978 and soon became a universal standard. It provided many users with their first exposure to interactive computing, an experience that might not have been far from what a real chat with a computer would mean today.

All scripts by Norbert Landsteiner, mass:werk – media environments, www.masswerk.at.
This page and embedded images © 2013 Norbert Landsteiner, mass:werk – media environments.



•  «meSpeak.js» Text-To-Speech library based on eSpeak – <www.masswerk.at/mespeak>
•  «elizabot.js» configurable port of ELIZA to JavaScript – <www.masswerk.at/elizabot>
•  «termlib.js» OO terminal interface element – <www.masswerk.at/termlib>
•  «VT323» font by Peter Hull, provided by Google Fonts – <www.google.com/fonts>
•  JavaScript, HTML5, CSS, hand-crafted bytes & pixels

Text to Speech

Speech synthesis is implemented as a pure client-side solution: «meSpeak.js» is a JavaScript version of eSpeak, an open-source TTS application for *NIX platforms. «meSpeak.js» builds on the «speak.js» project, which ported eSpeak from C to JavaScript using the Emscripten cross-compiler, and adds enhanced browser compatibility and a modular architecture for languages and voices.
«meSpeak.js» is compatible with any browser that supports either the Web Audio API or the HTML5 audio element with the ability to play back WAV files. (This applies to current versions of all major desktop browsers, save one from a specific vendor.)
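The typical flow — load the core configuration, load a voice module, then synthesize — can be sketched as follows. The function names follow the documented meSpeak.js API; the file paths and option values are illustrative assumptions, and the meSpeak object is passed in as a parameter so the flow can be exercised outside the browser:

```javascript
// Sketch of typical meSpeak.js usage (loadConfig / loadVoice / speak).
// Paths and option values are illustrative assumptions; the library
// object is injected so the flow can be tested with a stub.
function sayIt(mespeak, text, onDone) {
  mespeak.loadConfig("mespeak_config.json");      // core phoneme/config data
  mespeak.loadVoice("voices/en/en.json", function (success) {
    if (success) {
      // synthesizes a WAV buffer client-side and plays it back
      mespeak.speak(text, { amplitude: 100, pitch: 50, speed: 150 });
    }
    if (onDone) onDone(success);
  });
}
```

Since everything runs client-side, no text ever leaves the browser for synthesis — in contrast to the speech-recognition path described below.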

Note: You may access an extended voice setup anywhere in a conversation by entering the Universal Expert Token [?].

Speech to Text

Voice recognition is a topic far too complex to be covered by client-side solutions alone. Nevertheless, there is a new HTML5 standard for audio capture and voice recognition, which is currently available in Google Chrome only. (Other vendors, like Apple, are expected to follow soon.) Please note that, while voice capture is integrated directly into the browser, the Web Speech API requires an active network connection to send the captured audio data to a central service for speech recognition and interpretation. (Because of this, recognition results may differ as the API becomes available in other browsers.)
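The capture flow described above can be sketched roughly as below. The property names (`lang`, `onresult`, `start`) follow the Web Speech API draft as implemented in Chrome's prefixed `webkitSpeechRecognition`; the constructor is injected as a parameter here (an assumption for illustration) so the flow can be exercised with a stub outside the browser:

```javascript
// Sketch of a single speech-capture attempt via the (Chrome-only,
// prefixed) Web Speech API. The recognizer constructor is injected so
// the flow can be tested without a real browser.
function captureOnce(SpeechRecognitionCtor, onText) {
  const rec = new SpeechRecognitionCtor();
  rec.lang = "en-US";                     // the selected dialect
  rec.onresult = function (event) {
    // first result, top-ranked alternative
    onText(event.results[0][0].transcript);
  };
  rec.start();                            // audio is sent to a remote service
}
```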

Note: Newer versions of Google Chrome dropped support for the speech attribute of the HTML input element. Since the JavaScript version of the Web Speech API requires a user confirmation for each input attempt on a page served over a standard HTTP connection (implemented in a way that also breaks the very input attempt), there is no longer a way to integrate speech recognition with current browsers at an appropriate level of usability. As a result, the interactive speech input for Chrome stopped working in 2014. Sorry.

Word for Word

In 1966 Joseph Weizenbaum described a ground-breaking natural-language conversation program in his article “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man and Machine” (Communications of the ACM, Volume 9, Issue 1, January 1966, pp. 36–45). The program, named after the ingénue in George Bernard Shaw's Pygmalion, featured various scripts, the best known of which was «DOCTOR», a parody of a nondirective psychiatric interview (“roughly as would certain psychotherapists (Rogerians)” – J.W.).
With this setup the program successfully side-stepped the problem of real-world knowledge, the most fundamental problem for any natural-language conversation: rather than answering an utterance with reference to the real world, it merely echoed a paraphrase of the user input, transformed by a rather simple rule set.
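The transformation mechanism — match a keyword's decomposition pattern, then reassemble a paraphrase from the captured fragment — can be sketched in a few lines. This is not elizabot.js itself, and a real ELIZA also reflects pronouns (my → your, I → you) and ranks keywords, both omitted here for brevity:

```javascript
// Minimal sketch of ELIZA-style decomposition/reassembly rules.
// Each rule pairs a decomposition pattern with a reassembly template;
// the captured fragment is echoed back in the template.
const rules = [
  { pattern: /\bi am (.*)/i, reassembly: "How long have you been $1?" },
  { pattern: /\bmy (.*)/i,   reassembly: "Tell me more about your $1." },
];

function respond(input) {
  for (const rule of rules) {
    const m = input.match(rule.pattern);
    if (m) return rule.reassembly.replace("$1", m[1]);  // echo a paraphrase
  }
  return "Please go on.";  // content-free default when no keyword matches
}
```

The content-free default illustrates the side-step: the program never needs to know anything about boats to keep a conversation about a boat ride going.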

«ELIZA performs best when its human correspondent is initially instructed to “talk” to it, via the typewriter of course, just as one would to a psychiatrist. This mode of conversation was chosen because the psychiatric interview is one of the few examples of categorized dyadic natural language communication in which one of the participating pair is free to assume the pose of knowing almost nothing of the real world. If, for example, one were to tell a psychiatrist “I went for a long boat ride” and he responded “Tell me about boats”, one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker.» (Joseph Weizenbaum)

«ELIZA» was originally implemented by Weizenbaum in SLIP (a list-processing language also created by him) and later ported to Lisp by Bernie Cosell. Ports to many other computer languages followed, «elizabot.js» being only one of them.

End to End

In 1966 interactive computing (via TeleType) was a new thrill, available only at universities and big computing facilities.
In 1978 that changed: the Digital Equipment Corporation (DEC) introduced the «VT100» terminal, which provided both a capable and an affordable end-point to remote facilities and local machinery. The communication protocols used by the «VT100» soon became a universal standard for terminals and are still in use today, as virtually any terminal program mimics a «VT100» by default.

Reason enough to present Weizenbaum's program embedded in an artwork representing a «VT100» terminal.
«termlib.js» provides an interactive interface with editing capabilities, and Peter Hull's «VT323» font was chosen to populate the virtual screen. The font is not exactly the one used by the VT100, but is modelled after the type appearing on the screen of a VT320 terminal, one of the VT100's later siblings.

Pixel for Pixel

The artwork was hand-made from scratch using Photoshop (no rendering involved).
Please note that there never was a model “digivox” or any other VT100 terminal with a front speaker. Adding audio output other than a beep is purely fictional, but suits the purpose of blending technologies of consecutive eras.

UX note: Call the URL with the parameter “?mobile=true” to emulate a mobile device accessing this page.
P.S.: You may access a transcript of your session by entering “show transcript”, “show session transcript”, or just “transcript”.


While the libraries used are provided with a free license, this page, its script and artwork are not:
Copyright 2013 Norbert Landsteiner, mass:werk – media environments.
All rights reserved. No copying, no unauthorized providing or hosting.