At the heart of ReInception is a creator who invents a technology to help people, but comes to regret his invention when it’s put to unintended use. In the way that life often imitates science fiction, Dr. Geoffrey Hinton, whose work on neural networks helped spark the 2012 breakthrough behind modern AI, recently quit his job at Google and now speaks about the dangers of AI and his regrets about his contribution to it (New York Times).
For some delicious irony, I asked ChatGPT to write a blog post giving at least three examples of creators who turned on their creations. Here was my first question:
“Create a blog post about creators turning in their creations including chatgpt. Include at least 3 examples”
The bot churned out a generic post, but what was fascinating was that the question was twisted to frame a positive response. Specifically, the bot said:
“As AI technologies continue to advance, the lines between human and machine are blurring, leading to a new era where even AI models like ChatGPT are becoming creators in their own right. In this blog post, we explore three intriguing examples of this trend and delve into the implications it holds for the future of creativity” (emphasis added).
Here is what ChatGPT said about itself:
“As it interacts with human users, ChatGPT actively collaborates with them, assisting in generating ideas, refining concepts, and providing creative suggestions. Creators can turn to ChatGPT as a reliable co-creator, harnessing its vast knowledge base and language skills to amplify their own creative output. From writers and game developers to marketers and artists, ChatGPT is reshaping the creative process across various fields.”
Of examples in art, it used phrases like “birthed awe-inspiring works that push the boundaries of traditional art.” For music, “infusing the music with a unique blend of human expression and machine-generated intricacy.”
It used words like “symbiotic,” “enhance,” and “revolutionize,” and concluded that: “While concerns about AI’s potential to replace human creators persist, these examples demonstrate that AI models like ChatGPT, GANs, and music composition algorithms are best viewed as powerful tools in the hands of skilled artists.”
In short, the bot not only twisted my question to defend itself, but it told me how I should think.
I rephrased my question:
“Rewrite this using examples of business creators in 3 different industries wishing they hadn’t invented the technology. Include the creator of chatgpt as one example.”
This time, I got answers that were more responsive to my question, but again, the bot framed the outcomes in a positive light. It blamed the users (“unfiltered interactions with users sometimes resulted in harmful or misleading outputs”) and talked about how concerns have resulted in OpenAI “reflect[ing] on the responsibility” of AI creators.
The next example it gave was about social media and the “Deterioration of Online Discourse.” Again, it ended on a positive note, pointing out that these social media creators “are now working to implement measures that prioritize user safety and restore healthy online interactions.” Creators of ride-hailing apps, which resulted in “long hours, low wages, and limited job security,” are now “reevaluating their business models to ensure a more equitable and sustainable future for gig economy workers.”
ChatGPT concluded that: “Acknowledging and rectifying the negative consequences of technological advancements is an essential step towards responsible innovation. As creators learn from their past mistakes and strive for greater accountability, they have the opportunity to steer their inventions in a direction that promotes positive impact and long-term societal benefits.”
Again, the bot forced a positive spin.
Again, I rephrased my question: “Rewrite this and include quotes from their creators about the regrettable unintended consequences of their creations.”
While the bot provided quotes of remorse, it again forced a positive spin using such phrases as “Determined to rectify the situation,” and “ensure better working conditions and sustainable livelihoods for our drivers.”
I asked for a rewrite using examples of medical technology. Again, a positive twist, which included quotes from Jeff Bezos talking about prioritizing eco-friendly initiatives. In another awesome twist of irony, the bot also included a quote from convicted fraudster Elizabeth Holmes talking about the importance of “patient safety” and “rigorous testing.”
I suppose the bot could trawl the internet for quotes that support a positive outcome, but it could not judge the credibility or prudence of including certain examples.
I am not implying that ChatGPT can think for itself, but in an era of misinformation, and with claims from OpenAI that it is focused on transparency and accuracy, I wonder if they programmed the bot to say only good things about ChatGPT. Surely, if it were agnostically trawling the net, it would have found some bad things to say about itself. Maybe, for starters, this quote from Geoffrey Hinton (to the BBC): “You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals,” or his point that it is the responsibility of government to ensure AI is developed “with a lot of thought into how to stop it going rogue.”
(Images courtesy of MidJourney Bot: Prompt: a creator regretting his creation futuristic mechanical cyberpunk)