Elections can be exciting when there are worthy candidates to cheer for and when you feel that positive change may come if they are elected. Yet in our times things have become extremely complicated: social media has radically changed the way candidates reach voters. Moreover, influencing voters has also become a mathematical exercise, with algorithms analyzing social media data to determine personality traits linked to voting behavior (remember the Cambridge Analytica scandal?).
So there's a jungle out there in which voters must find their way, discerning lies from truths, fake news from real, and genuine comments from fake ones posted by bots. Things have now become even more complicated with the rise of sophisticated Artificial Intelligence systems that can produce fake content and even generate convincing deepfakes.
A few days ago, the Microsoft Threat Analysis Center (MTAC) published its report "Same targets, new playbooks: East Asia threat actors employ unique methods", exploring how threat actors in East Asia are using technology to influence voters' behavior.
Since June 2023, Microsoft has observed a series of significant cyber and influence trends originating from China and North Korea, revealing attempts to employ more sophisticated influence tactics to achieve their objectives.
In the case of Chinese cyber actors, their activities have primarily centered on three key areas over the past seven months. Firstly, certain Chinese actors extensively targeted entities across the South Pacific Islands, with the China-based espionage group Gingham Typhoon (the most active actor in this region) hitting international organizations, government entities, and the IT sector with complex phishing campaigns.
Secondly, another group of Chinese actors has continued their cyberattacks against regional adversaries in the South China Sea region (Raspberry Typhoon, Flax Typhoon and Nylon Typhoon – the latter mainly targeting foreign affairs entities in countries around the world, including Brazil, Guatemala, Costa Rica, and Peru, but also Portugal, France, Spain, Italy, and the United Kingdom). Meanwhile, a third faction of Chinese actors (Storm-0062) compromised US defense-related government entities, including contractors who provide technical engineering services around aerospace, defense, and natural resources critical to US national security.
In the last few months, though, Chinese influence actors have concentrated on refining their techniques and exploring new media avenues through AI-generated or AI-enhanced content, demonstrating a readiness to amplify such media to benefit their strategic narratives. Additionally, they have been creating their own memes, videos, and audio content. These tactics have been deployed in campaigns aimed at exacerbating divisions within the United States and aggravating tensions in the Asia-Pacific region, including Taiwan, Japan, and South Korea.
The most prolific of these actors using AI content is Storm-1376, Microsoft's designation for the Chinese Communist Party (CCP)-linked actor commonly known as "Spamouflage" or "Dragonbridge." AI-generated content was used, for example, to create confusion during the recent Taiwanese elections: videos published by Storm-1376 used AI-generated voice recordings of Foxconn founder and former presidential candidate Terry Gou (who withdrew from the race in November 2023) to make it appear as though he endorsed another candidate in the presidential race (YouTube quickly removed this content).
Storm-1376 also made use of AI-generated news anchors (according to Microsoft, created with the CapCut tool, developed by Chinese company ByteDance, the owner of TikTok), AI-enhanced videos, and AI-generated memes in the run-up to Taiwan's January 2024 presidential and parliamentary elections.
The Taiwanese presidential election was the first instance where a nation-state actor was observed employing AI-generated content in efforts to influence a foreign election.
But interest in influencing voters in other countries has been increasing, with CCP-affiliated social media accounts trying to exert influence on the US elections. Last August, Storm-1376 propagated conspiracy theories on social media suggesting that a US government "weather weapon" caused the wildfires in Hawaii. The group also launched an aggressive campaign criticizing Japan's disposal of nuclear wastewater, casting doubt on its safety. Additionally, Storm-1376 capitalized on a Kentucky train derailment to spread anti-US-government conspiracy theories, encouraging mistrust among US voters.
Besides, Chinese-affiliated sockpuppet accounts on social media, impersonating US voters, began influencing political discussions surrounding the 2022 US midterm elections. These accounts, posting predominantly about US domestic issues, seek engagement and opinions on various political topics, including global warming, US border policies, and drug use, potentially aiming to gather intelligence on American perspectives and voting demographics. Polling questions have increased in the last few months, and some of these accounts have indeed posted about various presidential candidates and then asked their followers whether or not they support them, posing divisive questions.
The Microsoft report also features a section on North Korea, where cyber actors have garnered attention for their increased involvement in software supply chain attacks and cryptocurrency heists over the past year.
According to the United Nations, North Korean cyber actors have illicitly acquired more than $3 billion in cryptocurrency since 2017. In 2023 alone, thefts amounting to between $600 million and $1 billion took place. It's reported that these stolen funds contribute to financing over half of North Korea's nuclear and missile program, allowing the country to continue its weapons proliferation and testing activities despite international sanctions.
But North Korean threat actors are now also embracing advancements in AI technology, particularly large-language models (LLMs), to optimize their operations. Microsoft and OpenAI have observed one of the threat actors, Emerald Sleet, leveraging LLMs to enhance spear-phishing campaigns aimed at diplomats and Korean Peninsula experts in government, think tanks/NGOs, media, and education. In response to this threat, Microsoft collaborated with OpenAI to dismantle accounts and assets linked to Emerald Sleet.
So far, the impact of AI-generated disinformation campaigns on swaying audiences has been limited, but China's ongoing experimentation with enhancing memes, videos, and audio using Artificial Intelligence is expected to persist. Although the effectiveness of these efforts remains uncertain at present, there is a possibility that they could become more potent over time.
With significant elections scheduled worldwide, notably in South Korea (10th April 2024), India (from 19th April to 1st June 2024), Europe (6th to 9th June 2024) and the United States (5th November 2024), China may engage in creating and amplifying AI-generated content to advance its interests. As the issue of fake news transcends borders, affecting voters globally, the countries currently implementing AI legislation should perhaps consider more carefully how to stop such content from spreading on the Internet. In the meantime, as stated in a previous post, it is crucial for individuals to remain vigilant and maintain a critical mindset, questioning the authenticity of political materials encountered online or in social media feeds. Developing a critical conscience is the beacon that may help us voters (no matter where we live, work, and vote) find the path to the truth in a digital landscape rife with pervasive misinformation.