AI may be on the path to steering humanity toward a better future, but experts have already mapped out the apocalyptic scenarios, warning of a “profound risk to society and humanity.”
As big tech companies like Google and Microsoft race to deploy new AI-enhanced products, industry insiders and experts worry that concerns about ethics, safety, and privacy have already been sidelined.
Despite the evident worries, several tech companies have already embedded such “flawed” AI models into their products.
AI-generated content is already becoming more and more common on the internet.
However, it’s not just jobs that generative AI might be going after.
OpenAI publicly released GPT-4 for commercial use while being well aware of the risks of such generative AI — risks the company itself has acknowledged.
Meanwhile, a report has suggested that GPT-4 can easily spew misinformation, sometimes even more readily than GPT-3.5.
The Center for AI and Digital Policy filed a complaint urging the FTC to halt further commercial releases of GPT-4, arguing that it is “biased, deceptive, and a risk to public safety.”
Zooming out: Instances of people pushing generative AI past its guardrails have been all over the internet, and the security risks are not limited to phishing scams.
Are we banning generative AI models like OpenAI’s GPT-4 and Google’s Bard? Not likely. But policymakers around the world have already started criticizing these companies for jeopardizing data privacy and security.
At the opposite end of the spectrum, Indian IT minister Ashwini Vaishnaw announced that the nation is not considering “regulating the growth of artificial intelligence in the country.”