Thursday, August 29, 2019

AI, bots and deep fakes: MLI study shows how technology shapes the fight against foreign political interference

Original post from the Macdonald-Laurier Institute
Hostile foreign actors are increasingly employing digital foreign interference (DFI) as part of their toolkit to undermine democratic countries like Canada and its allies. This remains a serious, ongoing threat. And, with our federal election less than two months away, Canada needs to guard against possible intrusions in its information environment and election process.
If such a threat looms large today, what does the future hold when it comes to DFI? What impact will technological advances have on the threat posed by DFI?
To help answer these questions, MLI’s latest report by Munk Senior Fellow Alex Wilner and several of his students – James Balasch, Jonathan Kandelshein, Cristian Lorenzoni, and Sydney Reis – explores how advances in Artificial Intelligence and Machine Learning could impact DFI in the future.
Titled The Threat of Digital Foreign Interference: Past, Present and Future, the report explores how DFI uses the Internet, social media platforms, and other types of technology to create and proliferate disinformation and misinformation. Once a malicious actor is virtually connected with foreign individuals and communities, they can create and disseminate tailored and targeted propaganda.
Russia’s Internet Research Agency (IRA), for instance, ran digital campaigns across multiple social media platforms during the 2016 US presidential election. These campaigns resulted in 3,841 persona accounts on Twitter generating 10.4 million tweets (of which 6 million were original), 81 unique Facebook pages containing 61,483 posts, 1,107 videos across 17 YouTube channels, and 33 Instagram accounts containing 116,205 posts.
DFI campaigns also targeted Germany, the UK, France, and Taiwan. The process often starts with sophisticated hackers stealing sensitive personal and/or professional digital data, which are dumped anonymously and made publicly available.
“Twitter and other social media platforms are then used to draw broader attention to the documents and data. Bots do their part to amplify the process even further. The content enters the collective mainstream, shared by regular social media users and reported upon by traditional media,” note Wilner and his co-authors.
Contemporary DFI still has an important human element, in which information is generated and disseminated by people who publish material online via social media, in much the same way as ordinary citizens might communicate their own political views to their friends and family. Yet the future of DFI will be even further AI-enhanced and AI-generated.
“Powerful automated software will troll the Internet, generating its own content and disseminating it against pre-selected and vulnerable populations. AI-supported software may eventually autonomously generate manipulative or suggestive photographs, videos, and text. A DFI campaign may even be executed and managed by an artificially intelligent software program.”
Of particular concern are deepfakes: video forgeries that appear to make people say or do things they never did. A well-known example is the FakeApp forgery of President Barack Obama insulting President Trump.
“With enough photo or video images of a person, facial recognition algorithms can recreate a solid replica of the person’s original face. The material can then be superimposed onto other video content. Add audio – also facilitated by AI – and you have a convincing video of a person engaged in a scenario that never took place.”
Responding effectively to DFI will require a multifaceted, multilateral, and flexible approach. Internet companies and social media firms will have to be held accountable for the information they disseminate and post on their sites. Independent tribunals might be established to review and possibly reinstate material that is removed. States may need to promote a common legal understanding of the phenomenon of disinformation and misinformation among and between the private and public sectors.
Canada should continue working with like-minded states to counter DFI when and where it occurs. As the authors conclude: “Providing a common baseline for response and collective action will help individual democracies present a unified front. Working in partnership with others, Canada might cautiously explore whether and how it might use DFI against known and identified aggressors.”
For the future, Canada should encourage the continued private sector development of domestic AI excellence in a manner that finds the right balance with privacy rights. It can also explore ways to better integrate AI expertise into Canada’s defence establishment.
To read about the future of digital foreign interference, check out MLI’s latest report here.
***
Alex S. Wilner is an Assistant Professor of International Affairs at the Norman Paterson School of International Affairs, Carleton University and a Munk Senior Fellow at the Macdonald-Laurier Institute.

James Balasch, Jonathan Kandelshein, Cristian Lorenzoni, and Sydney Reis are MA students at the Norman Paterson School of International Affairs, Carleton University.
