Workers in Silicon Valley are finally experiencing the negative economic consequences of artificial intelligence. Stack Overflow, the question-and-answer website that was long the go-to place for software engineers to check their code, recently laid off 28 per cent of its workforce. Usage of Stack Overflow has dropped over the past 12 months as coders put their queries to ChatGPT instead. With revenues down, the company had to shed staff in an effort to stay afloat. Scenarios like this will only become more common as AI spreads.
This made me wonder what the consequences will be in my world of universities. A new academic year has just started, and this week I found myself doing something I had never done before: entering the essay questions I had set my students into ChatGPT to see how easily the answers could be plagiarised. What the AI came up with seemed impressive at first, until I noticed errors, or “hallucinations”, in the responses, including misleading generalisations and invented citations attributed to established names in the field. A novice could easily have missed these hallucinations because they sounded authoritative and looked convincing, but I felt reassured that an assessor would catch them. For now, hallucinations are the expert’s friend.
For most of us, the word hallucination relates to illusions brought on by psychedelic drugs or severe mental illness. How did this concept enter AI and machine learning? The term first gained traction in biological neuronal modelling in the 1970s, relating to notions of altered perception and dreaming. By the mid-1980s, it had been picked up in the field of image recognition and generation by computer scientists such as Eric Mjolsness, who pioneered AI neural networks that could create realistic images. In this context, hallucinations were considered more a research opportunity than a hindrance.
Thirty years later, by 2017, specialists in natural language processing were beginning to see the pernicious side of hallucinations. They recognised that their large language models, which enable computers to produce text resembling what a human might write, could not only be racist, sexist and toxic but could also blatantly make things up in ways that sounded plausible. Even when trained on accurate data, the models produced falsehoods. Their creators did not know how or why this happened, nor could they predict when the AI might lie. They still don’t fully understand it.
Hallucinations currently provide a key distinction between an AI-generated answer and an expert human one. Educators and domain experts may see them as a safety net, but computer scientists regard building an AI model that stops hallucinating as their greatest challenge yet to be overcome. And we all know Silicon Valley loves a challenge.