Time for some reflection. Now that the hype cycle has plateaued. Now that the court cases are underway. Now that culture has said its piece and innovation has responded in kind. What are the stakes for ethical technology in this evolving world of large language models?
Misinformation. We have to start here because of what these tools are… The Center for Humane Technology best describes our first contact with artificial intelligence: a contact we tragically lost. It was the assimilation of ad-powered social media platforms into our society, an adoption fraught with ignorance, naivety, and false promises. Content curators, in the form of what we began to charmingly call “the algorithm,” came to know us better than we knew ourselves.
Our second contact, with large language models (LLMs), is a pitched battle, its pieces and factions strewn across a global board. LLMs are personal knowledge prediction engines. More than Google, which showed us the web, and more than “the algorithm,” which served us ads, these engines seek to satiate our hunger for information. They do so tenaciously, at times to our detriment.
The “lab coat effect” appears to apply to VC-hyped software, as everyone from students to lawyers accepts a chatbot’s text as truth. The aptly termed “hallucination” is a benign example of LLMs’ shortcomings. Getting the Eiffel Tower’s height wrong or misattributing accolades is harmless in isolation. Far more concerning is what these mistakes foreshadow: the intentional distribution of convincingly falsified information. Scammers, governments, and guerrilla marketers. These bad actors will use LLMs’ gains in efficiency and comprehension to take profit-seeking and public unrest to a new level…
Education. Years of tired, overextended, and too few teachers have taxed learning in the West. The unwilling partnership with technology has not aided the situation so far; it has simply smoothed over cracks that were soon to break open. AI brings opportunities for growth and improvement in areas where iPads, teaching software, and Google only complicated education: LLM-powered homework tutors, reflection with an AI partner, and multilingual interpretation that benefits both reader and writer. The possibilities (and the trajectory of AI in the industries students are bound for) are revolutionary.
The issue most raise is a short-term one. Homework assignments no longer work, essays written outside of class cannot be vetted, and students risk becoming mindless “regurgitators” like the LLMs themselves. The hope of most innovators transcends these fears. With a truly disruptive technology in the hands of everyone under twenty, the bubble holding old education back from new innovations may finally have to burst. When homework breaks for good, we’ll see conservators turn inward to shore up prohibitions, or we’ll see innovators look to what the world is becoming and prepare a way for their students’ future...