A study explores the range and risk of AI-enabled crimes, from military robots and autonomous attack drones to AI-assisted stalking. Here are the top 5.
Organizations are embracing digital transformation to enhance operations. Artificial intelligence (AI) in particular is revolutionizing the ways companies collaborate and conduct business. However, as these technologies spread across industries, they also give rise to new attack points and vulnerabilities, opening fresh avenues for criminal activity.
A study published in the journal Crime Science analyzed a vast spectrum of potential AI-enabled crimes in the years ahead, ranging from military robots and autonomous attack drones to AI-assisted stalking. To assess the risks associated with these various criminal scenarios, the review featured a two-day workshop of delegates from the private and public sectors, academia, police agencies, and more.
These delegates were asked to “catalogue potential criminal and terror threats arising from increasing adoption and power” and then rank threats based on anticipated harm, overall achievability, criminal profit, and the level of difficulty associated with defeating a particular threat. Below, we’ve listed the top five AI-enabled crimes with the highest-rated risk in the years ahead, according to the study.
Fake audio and video
Realistic fake audio and video impersonation topped the list of high-risk AI-enabled crimes of the future. In recent years, realistic fake videos and photos known as “deepfakes” have increased in sophistication. The ability to produce exceptionally realistic fake media creates an entirely new pathway for misinformation. With continued advancements, these impersonations could enable a wide range of criminal activity, including exploiting “people’s implicit trust in these media,” according to the study.
Deepfakes have other potential criminal applications beyond eroding confidence in trusted news sources.
Realistic deepfakes of political leaders such as Vladimir Putin, Donald Trump, George Bush, Barack Obama, and others have appeared online. Widely disseminated, realistic fake images and audio could also be used to undermine the democratic process itself. Deepfake videos could depict “public figures speaking or acting reprehensibly in order to manipulate support.”
In another scenario, the delegates anticipated criminals using a deepfake “impersonation of children to elderly parents over video calls to gain access to funds.” Realistic fake audio could also be used to grant criminals access to assets and information.
Unmanned vehicular attacks
There are numerous instances of people using motorized vehicles to carry out violent attacks. Currently, various manufacturers around the globe are working on AI-controlled autonomous vehicles. The inherent danger lies in the ability to carry out a vehicular attack without needing to recruit drivers, according to the study. A lone perpetrator could orchestrate an attack from afar. The authors of the report note that autonomous vehicles could enable a single person to coordinate an attack using multiple vehicles simultaneously.
A new type of targeted phishing
Phishing may become an even greater risk in the years ahead with an added AI boost. As the report points out, targeted phishing attacks, such as spearphishing campaigns, lack scalability, since they are individually tailored to a specific target. The authors of the report explain that AI could be used to harvest information from social media or to imitate “the style of a trusted party” to create more successful targeted phishing scams. As opposed to generic large-scale phishing attempts, these messages could be “tailored to prey on the specific vulnerabilities inferred for each individual, effectively automating the spear-phishing approach.”
Attacks on AI-controlled systems
As mentioned previously, organizations across industries are using AI to enhance operations and streamline workflows. However, as the authors of the report point out, “the more complex a control system is, the more difficult it can be to defend completely.” The study identified a number of potential threats arising from targeted strikes on AI systems, ranging from attacks on the power grid to activities designed to disrupt food supply logistics. The study found that the defeatability of these particular types of attack was high.
Blackmail at scale
Large-scale blackmail rounded out the top five high-risk AI-enabled crimes of the future. Blackmail typically follows a fairly basic extortion framework leveraging damning or embarrassing information: if the victim doesn’t pay a certain amount of money, that information will be released.
However, aggregating this personal information requires time, and the crime only pays if someone is willing to shell out the cash to conceal the information. As the authors of the report point out, AI can be used to address these limiting factors by harvesting information on potential victims at scale. Criminals could leverage AI to collect potential blackmail material from social media accounts, email logs, phone contents, browser data, and more. AI could then identify “specific vulnerabilities for a large number of potential targets and tailoring threat messages to each.”