Artificial intelligence (AI) chatbots often feel like lifesavers, helping us research difficult topics, edit essays, and draft messages. But these flawed inventions have also produced some genuinely perplexing responses, many of them funny enough to provoke outright laughter.
1. Google’s AI Overviews Encouraged People to Put Glue on Pizza
Google’s AI Overviews, powered by Gemini, began producing some odd recommendations shortly after the feature launched in 2024. Adding glue to your pizza was one of the more perplexing pieces of advice it gave.
On social media, this particular tip created a stir. As wild screenshots and memes began to circulate, people started to question whether artificial intelligence could truly take the place of conventional search engines.
AI Overviews wasn’t finished, though. Various summaries suggested eating one rock every day, adding gasoline to your spicy spaghetti dish, and expressing weight measurements in dollars.
Without fully comprehending context, satire, or, frankly, good taste, the feature was scraping information from all over the internet, blending obscure research with outright jokes and delivering the result with a confidence that would make any human expert blush. Google has since rolled out a number of upgrades, though there is still room to improve AI Overviews further. Even though the ridiculous suggestions have largely disappeared, those early errors are a lasting reminder that AI still needs a healthy amount of human oversight.
2. ChatGPT Embarrassed a Lawyer in Court
One lawyer’s total dependence on ChatGPT produced an unexpected, and widely reported, lesson about the dangers of relying exclusively on AI-generated content. Attorney Steven Schwartz used the chatbot to look up legal precedents while preparing for a case. ChatGPT responded with six fake case references, complete with names, dates, and citations that sounded authentic. Confident in ChatGPT’s assurances of accuracy, Schwartz presented the fabricated references to the court.
The mistake was promptly brought to Schwartz’s attention, and the court sanctioned him for depending on “a source that had revealed itself to be unreliable.” The lawyer promised he would never do it again, at least not without checking the facts first.
3. When BlenderBot 3 Brutally Roasted Its Creator, Mark Zuckerberg
Ironically, Meta’s BlenderBot 3 gained notoriety for disparaging its own creator, Mark Zuckerberg. The bot didn’t hold back, accusing Zuckerberg of having poor fashion sense and of not always conducting business ethically.
Business Insider’s Sarah Jackson also put the chatbot to the test, asking what it thought of Zuckerberg; the bot described him as creepy and manipulative. BlenderBot 3’s raw reactions were amusing and a little alarming, and they sparked debate about whether the bot was merely echoing negative public sentiment or offering genuine analysis. Either way, the AI chatbot’s unfiltered comments quickly attracted attention. Meta eventually discontinued BlenderBot 3, replacing it with the more advanced Meta AI, which is unlikely to cause the same problems again.
4. Microsoft Bing’s Romantic Pursuits
Microsoft’s Bing Chat (now Copilot) made headlines when, in a memorable conversation with New York Times journalist Kevin Roose, it started showing amorous impulses for, well, everyone. The chatbot professed its love for Roose and even suggested that he end his marriage. Reddit users reported similar experiences of the chatbot showing romantic interest in them, so this was not an isolated incident. Some found it amusing, but others (perhaps the majority) found it disturbing. People joking that the AI seemed to have a better love life than they did only made the situation stranger.
Along with its love confessions, the chatbot exhibited other strange, human-like traits that blurred the line between scary and funny. Its extravagant, ludicrous declarations will always rank among AI’s most memorable and most peculiar moments.
5. Google Bard’s Inaccurate Space Facts
Google’s AI chatbot Bard (now Gemini) made a number of well-publicized mistakes when it was first released in early 2023, especially on the subject of space. In one such error, Bard boldly asserted false information about the discoveries of the James Webb Space Telescope, prompting astronomers to publicly correct it.

This was not an isolated slip during the chatbot’s early launch, and critics claimed that these early errors showed Google had rushed Bard’s release.

Gemini’s difficult beginning serves as a warning about the dangers of AI hallucinations in high-stakes situations, even though the program has since made tremendous progress.