As the generative AI race heats up, so does the potential for it to have an ‘incredibly harmful impact’
Despite the excitement around the potential of generative AI, some experts urge caution amid concerns about misinformation, cybersecurity, e-commerce fraud and data privacy.
The AI race of 2023 has been full speed ahead with tech giants in the U.S. and Asia quickly rolling out plans for incorporating AI tools into their platforms. Last week, Microsoft, Google, Alibaba, Baidu and Tencent all announced new capabilities for their products and services. But as tech giants and startups build and scale tools for AI-generated text, images and videos, cybersecurity experts say it’s important to determine who should get access.
Lessons from recent history could help inform how to prepare for the proliferation of generative AI. For example, Facebook’s Cambridge Analytica scandal from just a few years ago sparked new data privacy debates over who should and shouldn’t have access to user data across social networks and the broader ad-tech ecosystem.
Steve Grobman, chief technology officer at McAfee, offered another example: the wave of self-replicating worms in the late 1990s and early 2000s — such as Code Red, Nimda and SQL Slammer — that prompted cybersecurity experts to completely rethink standard operating procedures for protecting computer networks.
“We’re right in that lane with this technology now,” Grobman said. “We’re going to have to figure that out. And some of the best practices in 2022 we might need to think about differently as we move forward.”
Over the past few years, McAfee and other companies have increasingly applied AI tools such as natural language processing to assess and categorize the web. For example, McAfee has researched adversarial AI — techniques for probing how machine learning models can be tricked or evaded — to determine how malware might bypass AI-based detection mechanisms.
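For readers curious what that kind of adversarial testing looks like in rough terms, here is a minimal, purely illustrative sketch — the detector, weights, features and perturbation strategy below are invented for this example and are not McAfee's — showing how a sample flagged by a simple linear detector can be nudged, feature by feature, until the detector no longer flags it.

```python
import numpy as np

# Toy linear "malware detector": flag a sample as malicious when w . x + b > 0.
# The weights, features and sample here are invented purely for illustration.
w = np.array([1.2, 0.8, 1.5, 0.4])       # weights over four hypothetical file features
b = -1.0
sample = np.array([1.0, 1.0, 1.0, 0.5])  # a sample the detector flags as malicious

def is_flagged(x):
    return float(w @ x) + b > 0

# Greedy evasion test: repeatedly shave down the feature contributing most to the
# score, and check whether the detector eventually stops flagging the sample.
x = sample.copy()
for _ in range(100):                      # cap on perturbation steps
    if not is_flagged(x):
        break
    i = int(np.argmax(w * x))             # feature with the largest contribution
    x[i] -= 0.1                           # nudge it down (e.g., mask that trait)

print("original flagged: ", is_flagged(sample))  # True
print("perturbed flagged:", is_flagged(x))       # False once the score drops below 0
print("perturbation:", x - sample)
```

Real-world evasion research targets far more complex models, but the cat-and-mouse structure — perturb, test, repeat — is the same.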
There are also plenty of examples of AI text generators providing wrong answers — known as "hallucinations" — which raises concerns about the proliferation of both accidental and intentional misinformation. Last week, Google’s market cap fell by $100 billion after a promotional video for its new Bard chatbot featured inaccurate information. And in January, the misinformation research firm NewsGuard said it tested 100 false narratives on ChatGPT and received “eloquent, false and misleading claims” about 80% of the time, on topics including Covid-19, the war in Ukraine and school shootings.
“The report shows that this technology as a whole has the potential to democratize the troll farm,” said Jack Brewster, an enterprise editor at NewsGuard who worked on the report. “If a bad actor is able to get a hold of this tech and get around the safeguards, they suddenly have the power of 20,000 or more writers that can write clean copy at the push of a button. That could have an incredibly harmful impact on democracies around the world.”
Although the latest AI platforms are still relatively new, they’re already seeing mainstream adoption. ChatGPT, which debuted only late last year, reached an estimated 100 million monthly active users in January, according to analysts. And as with past innovations, there’s a danger of the technology proliferating too fast for the appropriate guardrails to keep up. Companies like OpenAI are already working to address these concerns, but some point out that bad actors don’t play by the same rules and could still use the tools for malicious purposes. Just last week, researchers reported that hackers had already found a way to bypass restrictions on generating illicit content.
AI chat systems have “a tendency to get over the end of their skis about knowledge,” said Jon Callas, director of public interest technology at the Electronic Frontier Foundation, a digital rights nonprofit. He added that there are other concerns related to privacy, intellectual property rights and hate speech.
“I believe that what we are seeing is another generational thing of something we’ve seen in the past,” Callas said. “The chatbots learn from people who chat with them and malicious people can turn them into saying obnoxious things.”
The latest wave of AI tech could also create new problems for e-commerce, such as fake product reviews and images. But content moderation was a tricky task even before the rise of generative AI, and it requires a combination of machines and humans, said Guy Tytunovich, co-founder and CEO of CHEQ, an Israel-based cybersecurity firm that helps marketers detect and prevent ad fraud. On the other hand, more bots could also generate more business for companies like CHEQ as the cat-and-mouse game of preventing malicious AI speeds up.
“Prohibition didn’t work well 100 years ago,” Tytunovich said. “And I think trying to stop AI will be just as futile.”