Will AI win the election in 2024? Not if America fights disinformation with this weapon.
Will democracy last in the age of artificial intelligence? As the 2024 election draws near, concerns are growing about a surge of misleading and deceptive AI-generated content, particularly from hostile foreign countries.
The New York Times has warned that deepfakes have the potential to “swing elections.” The prospect of election manipulation is putting even more pressure on the White House and Congress to curtail, or even halt, the use of artificial intelligence (AI); the Biden administration has already called for caution in deploying the technology.
The real answer, however, is to accelerate the use of AI. Defensive AI is the strongest line of defense against AI that has been weaponized to manipulate elections.
Any conversation about AI and elections must begin by acknowledging a difficult truth: AI-generated misinformation and disinformation cannot be stopped at the source. Powerful AI tools are already widely available, and as long as the United States has foreign adversaries, they will try to use AI to sway our elections.
But even if the genie cannot be put back in the bottle, its power can be neutralized by even more potent technology.
The closest analogy is missile technology. Our enemies build and use missiles, but we can intercept them with missiles of our own. The most notable example of this strategy in action is Israel’s Iron Dome, which destroys incoming rockets with interceptor missiles.
America needs an AI-powered information Iron Dome. Such a system would use AI’s unmatched capacity for pattern recognition to detect coordinated attacks built on fabricated AI-generated content, including photographs, videos and articles.
AI is better than humans at finding deepfakes

Given the potential volume of such content, it would take an army of humans to identify AI-driven attempts to influence elections, requiring enormous sums of money and time while still suffering from human error. By contrast, AI tools could scan the internet far more quickly, efficiently and effectively, labeling deepfakes almost as soon as they enter the public discourse.
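To make the idea concrete, here is a minimal, purely illustrative sketch of what such a scan-and-label pipeline might look like. It is not a description of any existing system mentioned in this piece; the detector, data types and confidence threshold are all hypothetical stand-ins for whatever classifier a real system would plug in.

```python
# Illustrative sketch only: score a stream of media items with a
# deepfake detector and flag anything above a confidence threshold
# for public labeling. The detector below is a hypothetical stand-in;
# a real system would substitute a trained forensic classifier.

from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class MediaItem:
    url: str
    kind: str       # "image", "video", or "article"
    content: bytes  # raw bytes of the item

@dataclass
class FlaggedItem:
    url: str
    kind: str
    score: float    # estimated probability the item is AI-generated

def flag_suspected_fakes(
    items: Iterable[MediaItem],
    detector: Callable[[MediaItem], float],
    threshold: float = 0.9,
) -> List[FlaggedItem]:
    """Run the detector over each item and keep those scoring above the threshold."""
    flagged = []
    for item in items:
        score = detector(item)
        if score >= threshold:
            flagged.append(FlaggedItem(item.url, item.kind, score))
    return flagged

# Stand-in detector so the sketch runs on its own; returns fixed scores.
def dummy_detector(item: MediaItem) -> float:
    return 0.95 if item.kind == "video" else 0.10

if __name__ == "__main__":
    sample = [
        MediaItem("https://example.com/clip.mp4", "video", b"..."),
        MediaItem("https://example.com/photo.jpg", "image", b"..."),
    ]
    for hit in flag_suspected_fakes(sample, dummy_detector):
        print(f"Label as suspected deepfake: {hit.url} (score {hit.score:.2f})")
```

The point of the sketch is scale: once the scoring step is automated, the same loop can run over millions of items without the cost and delay of human review.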
In missile terms, it’s the difference between a swarm of soldiers firing shoulder-mounted launchers into the sky and a sophisticated, integrated missile defense system. Israel chose the Iron Dome for a reason.
This technology should be developed primarily by private companies. Handing politicized government agencies control over election-focused AI would invite misuse and jeopardize the very democracy we are trying to protect.
American companies, which lead the world in artificial intelligence, are well positioned to build this technology. The firm DeepMedia is already helping the Pentagon identify deepfakes that threaten national security, such as the video in which Ukrainian President Volodymyr Zelenskyy appears to order his troops to surrender.
Malicious actors don’t need cutting-edge technology, as my colleague Neil Chilson, a former top technologist at the Federal Trade Commission, explained in recent testimony before the Senate. Selective editing, foreign content farms, cheap fakes and basic Photoshop are all inexpensive and effective.
Just as the real Iron Dome intercepts both rockets and artillery shells, an AI-powered version could help identify both kinds of threats, high-tech and low-tech alike, even more quickly.