The Lamen

WHERE TO DRAW THE LINE WHEN IT COMES TO AI

Apr 10, 2023

AI IS SCARY GOOD, AND WE JUST CAN'T DEAL WITH IT.

AI is on the path to steering humanity toward a better future, but experts have already mapped out the apocalyptic scenarios, warning of a “profound risk to society and humanity.”

As big tech companies like Google and Microsoft race to deploy new AI-enhanced products, industry insiders and experts worry that concerns about ethics, safety, and privacy have already been sidelined.

  • An open letter published by the nonprofit Future of Life Institute states that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
  • Signed by a number of well-known AI researchers, as well as Elon Musk, Apple co-founder Steve Wozniak, and politician Andrew Yang, it suggests that we may be rushing “unprepared into a fall,” calling on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • The letter has itself attracted criticism from artificial intelligence experts, who argue that such apocalyptic scenarios divert attention from the harm AI is already causing.

Despite the evident worries, several tech companies have already embedded such “flawed” AI models into their products.

  • Microsoft confirmed that the company’s new AI-powered Bing had been running on GPT-4 for weeks before the model was publicly announced.
  • Companies like Salesforce (in Slack) and Snapchat have already integrated ChatGPT into their apps, although these versions carry greater restrictions.

AI-generated content is becoming increasingly common on the internet.

  • CNET was found to be “quietly publishing” entire AI-generated articles under the byline “CNET Money Staff,” according to Futurism. This contributed to the company culling nearly 50 percent of its news and video staff.
  • BuzzFeed recently announced that it will work with OpenAI on content creation and move AI into the “core business.”

However, it’s not just jobs that generative AI might be going after.

A MORE OPEN AI.

OpenAI publicly released GPT-4 for commercial use despite knowing the risks of such generative AI, risks the company itself has acknowledged.

  • The GPT-4 technical report clearly states that the LLM “hallucinates” facts and can be “confidently wrong in its predictions.”
  • In terms of safety, OpenAI’s website states that GPT-4 is 82 percent less likely to respond to requests for disallowed content than GPT-3.5. Additionally, it produces toxic generations only 0.73 percent of the time, compared to 6.48 percent for GPT-3.5.
  • Addressing AI safety concerns on the Lex Fridman Podcast, OpenAI CEO Sam Altman maintained that GPT-4 is the most “capable” and “aligned” model yet, primarily thanks to reinforcement learning from human feedback (RLHF).

However, a report has suggested that GPT-4 can easily spew misinformation, sometimes even more readily than GPT-3.5.

  • GPT-4 was found to present false narratives more frequently and more persuasively than its predecessor, reported NewsGuard, a company that tracks online misinformation.
  • These false narratives included conspiracy theories about COVID-19 vaccines, the Sandy Hook Elementary School shooting, and an alleged disguised population control program in Kenya.

The Center for AI and Digital Policy filed a complaint urging the FTC to halt further commercial releases of GPT-4, calling the model “biased, deceptive, and a risk to public safety.”

Zooming out: Instances of people pushing generative AI past its limits are all over the internet, and the security risks are not limited to phishing scams.

  • OpenAI has placed “guardrails” to keep its chatbot from going off the rails, but some users have already found ways around them.
  • In a practice termed “jailbreaking,” people have figured out how to phrase prompts that mask their ill intent, and ChatGPT readily responds.
  • These systems have also been compromised simply because they are built on huge codebases and dependencies, which can harbor software bugs.
  • ChatGPT was recently taken offline due to one such bug, reported Bloomberg, after it showed some users titles from other users’ chat histories. OpenAI later confirmed that the bug may have also exposed the payment info of 1.2 percent of the ChatGPT Plus subscribers active during “a specific 9-hour window.”

Are we banning generative AI models like OpenAI’s GPT-4 and Google’s Bard? Not likely. But policymakers around the world have already started criticizing these companies for jeopardizing data privacy and security.

  • Italy’s data protection authority recently announced that it would block ChatGPT in the nation, citing the recent data breach as the reason.
  • The investigation was launched after the regulator noted the “absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms.”
  • U.S. President Joe Biden stated that tech companies had a responsibility to ensure the safety of such products before making them public — addressing potential risks to national security.

At the opposite end, Indian IT minister Ashwini Vaishnaw announced that the nation is not considering “regulating the growth of artificial intelligence in the country.”