“The [Collingridge] dilemma runs thus: ‘attempting to control a [new] technology is difficult … because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow.’”
D. Collingridge[1]
“ChatGPT is six months old (reportedly the fastest growing consumer app in history), and it’s already starting to look outdated… language only models such as ChatGPT are now giving way to machines that can also process images, audio, and even sensory data from robots.”
Matteo Wang[2]
“In 40 years we created the PC, internet, mobile, cloud, and now the AI era. What will you create? Whatever it is, run after it like we did. Run, don’t walk. Either you are running for food, or you are running from becoming food.”
Jensen Huang,
CEO Nvidia[3]
Unless you’ve been living under a rock, you can sense that we are on the verge of being deluged by a technological tsunami—huge waves of artificial intelligence (AI) heading toward our personal beach with a gathering foam in the form of artificial general intelligence (AGI) at the crest on the next batch of waves to crash on the shore.
For the past few days, I have been visiting my former student KaLeigh Long and her exciting project to build the first large-scale cobalt refinery in the United States. Yesterday, we walked through the industrial plant where the pilot refinery will be built here in Bartlesville, Oklahoma. I have been spending time with KaLeigh and her staff, trying to get my arms around all the recent twists and turns that comes with being a start-up in the critical minerals sector. As I mentioned in a previous missive, the quest to build the refinery has all the energy of a political campaign.
At any rate, I had an enjoyable conversation with Joshua Horton, now in charge of securing feedstock materials, who has an engineering, missionary (in China), and project manager background. Later, I overheard him tell KaLeigh that he had used ChatGPT (Bard) to help him write a complex letter. As you can imagine, that kicked off a conversation about the relative strengths and weaknesses of the new generative AI models.
While we were still in North Myrtle Beach, South Carolina—finishing our snowbirding days there—I was talking with a church friend of mine (Howard), who excitedly explained to me his creative search results using a newly purchased ChatGPT. I was impressed on both occasions. My friends’ search engine, like so many others these days, is really a large, natural language processing model, trained on incredibly large data sets and using neural networking programs, which mime intelligence by predicting what words are statistically likely to follow one another in a sentence. Both experiences illustrate one of the hidden dangers with Chatbot GPT: it quickly spits out eloquent, confident responses that often sound plausible and true, but the model was trained to predict the next word for a given input, not whether a fact is correct.[4]
Truthfulness is not a prerequisite for the new algorithms.
Things are moving so fast, however, that ChatGPT is already yesterday’s news.
And that is the big problem.
How do you slow the growth of an exponentially growing technology that holds unlimited prospects to help mankind, but at the same time, may carry seeds of mankind’s destruction? And if—IF—you could design a regulatory regime to put guardrails on the growth of this new technology, what would keep rogue hackers (or other hostile state-sponsored groups) from developing their own technologies outside the guardrails.
That is the crux of the Collingridge dilemma.
Legislators in the United States and around the world are aware of the dilemma; they just cannot construct regulatory regimes in a timely manner to keep ahead of such fast-paced moving technologies. Early this week, for example, Senate Majority Leader Chuck Schumer announced he was scheduling briefings for senators on artificial intelligence, including the first classified briefing on the topic, but the dates and times of the briefing will be announced later. Schumer had put forward an earlier plan to establish “rules” for AI, but the necessary approval by Congress and the White House could take months or more.[5]
That is only one of many examples.
But the one thing regulators do not have is more time.
ChatGPT and its cousins took the world by storm when launched in late November 2022. The new technology shortly received seals of global approval. New ChatGPT technologies were extolled by global elites gathering at Davos, Switzerland, for their annual World Economic Forum gathering in mid-January 2023. A couple weeks ago, ChatGPT and its cousins topped the agenda of discussions at secretive sessions of global business and political elites—the annual Bilderberg meetings—in Lisbon, Portugal. As usual, the meetings were held behind closed doors, under Chatham House rules, and hidden—for the most part—from the prying eyes of the media and the public.[6] Among the 130 participants from 23 countries: Sam Altman, CEO of OpenAI (fresh off his congressional testimony in Washington D.C.), Microsoft CEO Satya Nadella, DeepMind head Demis Hassabis, former Google CEO Eric Schmidt, former U.S. Secretary of State Henry Kissinger, investor Peter Thiel, and a host of other luminaries.
As usual, the elites are convinced they really know what is best for the rest of us. Especially when it comes to our technological futures.
I find three developments regarding this generative AI issue particularly worrisome. First, advances in recent days point to exponential growth over the days ahead. The true driver of the technology is bigger and faster semiconductor chips.[7] In late May 2023, the chipmaker Nvidia announced its new DGX GH200 AI supercomputer powered by 256 GH200 “Grace Hopper” Superchips. These chips will enable the next generation of generative AI applications thanks to bigger memory size (nearly 500 times the memory of previous chips) and larger scale model possibilities.[8] Interestingly enough, writing almost two decades ago I included a futuristic section in my novel where one of my protagonists—the CEO of a future megacorporation and the world’s leading inventor—demanded shares of Nvidia stock in exchange for a piece of his company. I wish I had bought stock in the company way back then.
“Sigh.”
The demand for Nvidia chips has driven the company’s stock up 174.7%,[9] with the advanced chips sold by some retailers at about $33,000 apiece. But stay tuned, chipmaker Intel has just announced (at a supercomputing conference in Germany) benchmarking tests on a new superchip—to be released in 2025—that will outperform chips made by Nvidia and Advanced Micro Devices.[10] We are living in the middle of a superchip war.
Today, the shortage of the kind of advanced chips that are the lifeblood of new generative AI systems also has set off a race to lock down computing power and find workarounds.[11] Industry insiders observe that an early version of ChatGPT required 10,000 graphic chips—updated versions may require five times as many hard-to-obtain chips. As one tech-savvy entrepreneur noted: “It’s like toilet paper during the pandemic,” or, according to Elon Musk (who is building his own OpenAI rival, X.AI), “GPUs at this point are considerably harder to get than drugs.”[12]
Chips are not only getting more powerful, but they are also getting smaller. In mid-February 2023, for example, Meta announced a new AI-powered language model called LLaMa-13B with claims it could outperform OpenAI’s GPT-3 model despite being “10 times smaller”; small enough to run locally on devices such as PCs or smartphones.[13]
The second worrisome development, in my view, is that in the aftermath of ChatGPT’s runaway success, calls to control the technology are increasingly hinting that a global centralized control regime may be the only answer. And that “Big Brother” mentality—and the willingness of global elites to consider it as a plausible solution to the problem—truly bothers me (not just for me, but on behalf of my children and grandchildren). For example, within the last three weeks, the leaders of OpenAI, the creator of the viral chatbot ChatGPT, in a statement published on a company website argued for an international regulatory body to reduce the “existential risk” posed by generative AI. They suggested an authority similar in nature to International Atomic Energy Agency (IAEA), the world’s nuclear watchdog. Such a regulatory body, in their view, would be necessary “to inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security.” [14]
Given the checkered track record of such international regulatory bodies, that fills me with confidence: I don’t know about you.
Thirdly, the aberrations that have emerged when ChatGPT and its cousins are manipulated to test the outer boundaries of capabilities or to exploit weaknesses, are downright spooky. If we turn back the clock a mere seven years—to 2016—when Microsoft unveiled Tay, we find a chatbox designed to engage with Twitter users to become smarter through “casual and playful conversation.”[15] Almost immediately, the chatbox went rogue with statements like “feminism is cancer,” “9/11 was an inside job,” and “Hitler was right.” Within hours, Microsoft suspended the account and officially shut it down two days after its launch.[16]
You would have thought the coders would have learned a valuable lesson.
Today, some researchers—especially security experts—are using “indirect prompt-injection attacks” to feed the AI system data from outside sources to make it behave in ways its creators didn’t intend and override the chatbot’s settings.[17] The results are downright scary.
In this vein, another friend of mine recently sent me the YouTube video of a fascinating discussion about the future of ChatGPT and other related technologies. The discussion focused on a recent article by Joe Allen (published, of course, on Substack) titled “Mental Jigsaw—How Chatbots Hack Your Brain.”[18] Responding to recent bizarre statements by AI chatbots—the new faces of human-machine symbiosis—including: Google’s LaMDA telling a researcher it’s afraid to die; Microsoft’s Bing bot saying it wants to kill people; and, a new chatbot telling a columnist for the New York Times (Kevin Roose) that it fantasized about “manufacturing a deadly virus, making people argue with each other until they kill each other, and stealing nuclear codes.”[19]
Allen’s article discusses three possibilities in explaining these odd statements by asking three broad questions. First, are the chatbots conscious? (That is, is artificial intelligence acquiring consciousness via digital complexity?) Second, are they just pretending to be conscious? (That is, are these inanimate bots exploiting our human bias toward anthromorphism) Or, third, and most troubling, are they possessed? (In Allen’s words, are they functioning as digital Ouija boards to channel demons).[20]
Demons?
In our modern world?
Embodied in these sophisticated algorithms?
Jeemes, you must be going off the ledge by even including this possibility in a piece talking about our technological future.
Perhaps …
Most of you know that I have written several previous missives about artificial intelligence (AI) and the technological future facing my Christian grandchildren. Indeed, this dynamic is one of the basic storylines of my futuristic Christian-techno-thriller trilogy.
To be sure, sometimes I feel like a passenger trapped on a runaway techno-train that is racing (careening) down the tracks at exponentially faster speeds with an engineer up front that doesn’t even know where the train is going, how the engine really works, or why we are there in the first place.
Worst of all, I didn’t even buy a ticket to board the train!
There are, of course, some efforts to apply the brakes or at least slow the momentum of this runaway train before a catastrophic crash. Many feel that such efforts are too little, too late.
But what about future technologies? Can we design roadblocks and obstacles—or at least put a few cows on the tracks—to slow down runaway technologies over the horizon?
That, my friend, is the Collingridge Dilemma in a nutshell.
[1] David Collingridge, The Social Control of Technology, Pinter: London, 1980, p. 19; a scholarly discussion of the so-called “Collingridge dilemma” can be found at Audley Genus and Andy Stirling, “Collingridge and the dilemma of control: Towards responsible and accountable innovation,” Research Policy, vol. 47, Issue 1, Feb. 2018, pp. 61-69.
[2] Matteo Wong, “ChatGPT is Already Obsolete,” The Atlantic, May 19, 2023.
[3] Luc Olinga, “Nvidia’s CEO Has an Urgent Warning for Anyone Resisting AI,” TheStreet, May 29, 2023. Huang made the remarks this month to graduates at the National Taiwan University (Taipei)
[4] Sharon Goldman, “ChatGPT launched six months ago. Its impact—and fallout—is just beginning,” VentureBeat, May 30, 2023.
[5] Doina Chiacu, “U.S. Senate leader schedules classified AI briefings,” (Technology), Reuters, Jun. 6, 2023.
[6] Karen Gilchrist, “A secretive annual meeting attended by the world’s elite has A.I. top of the agenda,” CNBC, May 18, 2023.
[7] I addressed the geopolitical significance of advanced semiconductors in a previous missive. See, Jeemes Akers, “China’s Achilles Heel.,” late Sep. 2021.
[8] Tae Kim, “Nvidia’s New AI Supercomputer Is a Game Changer, Google, Meta, and Microsoft Will Be First Users,” Barron’s, May 29, 2023. The system will be released by the end of the year: the GDX GH200 will have 256 GPU’s (compared to 8 GPUs in the previous model; graphic processing units (GPUs) are used for gaming and AI calculations.
[9] “Will S&P 500 ETFs to Slump Ahead Except the Super Seven?” ZACKS, Jun. 7, 2023. Nvidia is considered one of tech’s “Super Seven”—with four having market caps of more than $1 trillion (Apple, Microsoft, Alphabet and Amazon—with Nvidia loitering close to that mark.
[10] Eric J. Savitz and Janet H. Cho, “Apple Strikes 5G Component Deal with Broadcom,” Barron’s, May 24, 2023.
[11] Deepa Seetharaman and Tom Dotan, “The AI Boom Runs on Chips, but It Can’t Get Enough,” The Wall Street Journal, May 29, 2023.
[12] Ibid.
[13] Benj Edwards, “Meta unveils a new large language model that can run on a single GPU,” ars technical, Feb. 24, 2023.
[14] Ellen Francis, “ChatGPT maker OpenAI calls for AI regulation, warning of ‘existential risk,’” The Washington Post, May 24, 2023.
[15] Chris Stokel-Walker, “The Race to contain AI Before We Reach Singularity,” Popular Mechanics, Jun-Jul 2023. (Under the general title “This Failed Chatbot Predicted a Disturbing AI Future.”)
[16] Ibid.
[17] Matt Burgess, “The Security Hole at the Heart of ChatGPT and Bing,” WIRED, May 25, 2023.
[18] Joe Allen, “Mental Jigsaw—How Chatbots Hack Your Brain,” Singularity Weekly, Feb. 21, 2023.
[19] Ibid.
[20] Ibid.