By J. N. | Sunday, September 1, 2019
- Pentagon research to sift 250,000 news items in initial phase
- Fears grow about viral political memes polarizing society
Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel “large-scale, automated disinformation attacks.”
The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, videos and audio clips, according to Bloomberg.
If successful, the system may be expanded after four years of trials to detect malicious intent and prevent viral fake news from polarizing society.
The algorithm testing process will include an ability to scan and evaluate 250,000 news articles and 250,000 social media posts, with 5,000 fake items in the mix. The program has three phases over 48 months, initially covering news and social media, before an analysis begins of technical propaganda. The project will also include week-long ‘hackathons’.
Andrew Grotto of the Center for International Security and Cooperation at Stanford University said:
“A decade ago, today’s state-of-the-art would have registered as sci-fi — that’s how fast the improvements have come. There is no reason to think the pace of innovation will slow any time soon.”
“The risk factor is social media being abused and used to influence the elections,” Jennifer Grygiel, assistant professor of communications at Syracuse University, told Bloomberg in a telephone interview:
“It’s really interesting that DARPA is trying to create these detection systems but good luck is what I say. It won’t be anywhere near perfect until there is legislative oversight. There’s a huge gap and that’s a concern.”
So-called ‘deepfakes’ are increasingly sophisticated, making them harder for data-driven software to spot. AI-generated imagery has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems. Researchers have shown that the underlying technology, generative adversarial networks (GANs), can be used to create convincing fake videos.
Famously, Oscar-winning filmmaker Jordan Peele created a fake video (link) of former President Barack Obama discussing the Black Panthers and Ben Carson, and making an alleged slur against Trump, to highlight the risk of trusting material found online.
Facebook Chief Executive Officer Mark Zuckerberg initially played down fake news as a challenge for the world’s biggest social media platform.
He later signaled that he took the problem seriously (link), letting users flag content and enabling fact-checkers to label stories in dispute. These judgments prevented disputed stories from being turned into paid advertisements, one key avenue for viral promotion.
Where things get especially difficult is the prospect of malicious actors combining different forms of fake content into a seamless platform, Grotto continued:
“Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”
By increasing the number of algorithmic checks, the military research agency hopes it can spot fake news with malicious intent before it goes viral.
The agency also has an existing research program underway, called MediFor, which is trying to plug a technological gap in image authentication: no end-to-end system can currently verify whether images taken by digital cameras and smartphones have been manipulated.
Mirroring this rise in digital imagery is the associated ability for even relatively unskilled users to manipulate and distort the message of the visual media, according to the agency’s website (link).
“While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.”