The First Machine You Could Talk To
In 1966, MIT professor Joseph Weizenbaum created ELIZA, a program that could engage in text-based conversation with humans. Named after Eliza Doolittle from George Bernard Shaw's Pygmalion, the program used pattern matching and substitution to simulate understanding — and it convinced many of its users that it genuinely understood them.
How ELIZA Worked
ELIZA operated on surprisingly simple rules. It would scan user input for keywords, then apply transformation rules to generate responses. Its most famous script, DOCTOR, simulated a Rogerian psychotherapist by reflecting questions back at the user.
When a user typed “I am feeling sad,” ELIZA might respond with “Why do you say you are feeling sad?” This simple trick created a powerful illusion of understanding. Users would spend hours conversing with the program, often attributing emotional intelligence to what was essentially a sophisticated text processor.
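The mechanism described above can be sketched in a few lines of Python. This is not Weizenbaum's original script (which was written in MAD-SLIP); the rules and reflection table below are illustrative examples of the keyword-matching and pronoun-reflection technique:

```python
import re
import random

# Illustrative transformation rules: (regex over the input, response templates).
# "{0}" is filled with the reflected text captured by the pattern.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
]

# Pronoun reflection: turn the user's first person into second person
# so "my mother" comes back as "your mother".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap pronouns word by word in the captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input: str) -> str:
    """Scan the input for a matching rule and fill in a response template."""
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock reply when no keyword matches
```

With these rules, `respond("I am feeling sad.")` reflects the captured phrase back as a question, exactly the trick described above; anything that matches no pattern falls through to a neutral prompt such as "Please go on."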
The ELIZA Effect
What surprised Weizenbaum most was not what ELIZA could do, but how people reacted to it. His secretary asked him to leave the room so she could have a private conversation with the program. Students formed emotional attachments. This phenomenon — humans attributing understanding to machines — became known as the ELIZA effect.
Weizenbaum was so disturbed by this that he became one of AI’s earliest critics, arguing that certain applications of AI were inappropriate regardless of whether they were technically possible.
What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. — Joseph Weizenbaum
Today, as billions of people interact daily with ChatGPT, Siri, and Alexa, the ELIZA effect is more relevant than ever. The line between simulation and understanding continues to blur.