Photo by Jr Korpa on Unsplash

Deepfake Harms & Threat Modeling

Deepfakes make it possible to fabricate media (swapping faces, syncing lips, puppeteering a subject), mostly without consent, posing threats to psychological security, political stability, and business operations.

Ashish Jaiman
Towards Data Science
11 min read · Aug 19, 2020


The book is now available on Amazon — https://www.amazon.com/Deepfakes-aka-Synthetic-Media-Humanity-ebook/dp/B0B846YCNJ/

As with any innovative technology, deepfakes can be used as a tool to improve people's lives, or they can be weaponized by nefarious actors to inflict harm. Weaponized deepfakes can have a massive impact on the economy and national security, and they can harm individuals and democracy. Deepfakes will further erode already declining trust in the media.

Deepfakes are becoming easy to create and even easier to distribute in a policy and legislative vacuum.

In the last two years, the potential for nefarious use of AI-generated synthetic media has begun to cause alarm among technologists, civil society, and legislators. The technology has advanced to the point that it can be weaponized to damage and harm individuals, societies, institutions, and democracies. Deepfakes contribute to factual relativism and enable authoritarian leaders to thrive. They can also help public figures hide immoral acts behind a veil of deepfakes and fake news by dismissing their actual harmful actions as false, a phenomenon known as the liar's dividend[1].

Threat to Individuals

The first malicious use of deepfakes was in pornography, which inflicts emotional and reputational harm, and in some cases violence, on individuals, mainly women. According to a report on deepfakes[2] by Deeptrace (now Sensity.ai), 96% of deepfakes are pornographic videos, with over 135 million views on pornographic websites alone.

Deepfake pornography exclusively targets women.

In April 2018, Rana Ayyub, a journalist based in Mumbai, wrote a critical article about India's ruling party, the BJP. In retaliation, she faced a deepfake attack: her face was superimposed on a porn video, she was doxed, and the video was distributed on social media. The harassment and humiliation sent Ayyub to the hospital with heart palpitations and led her to withdraw from online life.

In 2017, someone used deepfakes to create porn videos of Gal Gadot and other celebrities. Noelle Martin, a recent law graduate in Perth, Australia, discovered that someone had taken her social media photos and photoshopped them into nude images and deepfake videos. Kristen Bell recently ran a public campaign to discuss and raise awareness of deepfake porn and the harm it can inflict on individuals.

Pornographic deepfakes can threaten, intimidate, and inflict psychological harm on an individual. Deepfake porn reduces women to sexual objects and torments them, causing emotional distress, reputational harm, and abuse, and in some cases material harm such as financial loss and collateral consequences such as job loss.

A deepfake can depict a person indulging in antisocial behavior or saying vile things they never did. Such deepfakes can have severe implications for their reputation, sabotaging their current and future lives, including but not limited to their career, professional standing, politics, relationships, and romance. Even if the victim can debunk the fake via an alibi or otherwise, that fix may come too late to remedy the initial harm.

In many cases, it is challenging to undo the consequential damage to a victim's reputation, agency, and material well-being. Scandalous fakery spreads rapidly through the distribution power of social media and the innate human appetite for gossip, which favors falsehoods over rebuttals and corrections. Most people have no desire to review a falsehood critically.

A lack of awareness of AI-based synthetic media led villagers in India to believe false rumors of child kidnapping; fake images and videos spread on WhatsApp resulted in several cases of mob lynching and killing.

Malicious actors can use audio and video deepfakes to defraud unwitting individuals for financial gain. Deepfakes can also be used for blackmail: fake video and audio can be used to extract money, confidential information, or favors from individuals.

In the entertainment and art domains, we have seen studios use deepfakes to complete a film when an actor died before finishing the movie. Using the likeness of the deceased for profit is a complex legal issue, and it can harm their reputation posthumously.

Voice technology has evolved to the point that, with only a few utterances, AI can generate an impressively accurate imitation of an individual, even pronouncing words or phrases they have most likely never uttered. For voice-over artists, synthetic voices can augment or extend their role. But if deepfake content is created without the artist's consent, it limits their agency and may affect their business and livelihood.

Threat to Society

Deepfakes can cause short- and long-term social harm. AI-based synthetic media may accelerate the already declining trust in media, and such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society. Digital platforms have replaced the traditional news gatekeepers, and the democratized nature of information dissemination, combined with the financial incentives of social media channels, perpetuates distrust in social institutions. Falsity is profitable when it is popular and widely shared on the platforms. Combined with this distrust, existing biases and political disagreements help create echo chambers and filter bubbles, sowing discord in society.

Deepfakes can help alter the democratic discourse. False information about institutions, policies, and public leaders, powered by deepfakes, can be exploited to spin narratives and manipulate beliefs. Deepfakes will make it challenging for institutions, public and private, to fend off reputational attacks and debunk misinformation and disinformation.

A well-timed deepfake can cause significant harm to property and life and can create social unrest.

The Center for Medical Progress (CMP), an anti-abortion organization, released a series of heavily edited videos claiming that Planned Parenthood representatives illegally sold fetal tissue for profit, something the reproductive health nonprofit vehemently denied. Multiple investigations since have found no wrongdoing on the part of Planned Parenthood, while CMP's founder and another member faced legal consequences.

Deepfake can be used to spread vicious disinformation with speed and scale to erode trust in institutions.

A fake story published before the 2016 presidential election accused Hillary Clinton and her campaign chairman, John Podesta, of running a child abuse ring out of a restaurant called Comet Ping Pong. As the story, dubbed PizzaGate, spread, Comet Ping Pong received hundreds of threats from the theory's believers. D.C. police arrested a North Carolina man after he allegedly walked into the restaurant with a semi-automatic rifle to "self-investigate" the theory, pointed the gun at an employee, and fired at least one shot. The fake story didn't use deepfakes, but a realistic deepfake video supporting it could have produced an even more dire outcome.

Deepfakes can be used to exacerbate social divisions by spreading disinformation about a community through fake video and audio. There are a few such examples from Myanmar and India targeting Muslims.

Deepfakes could act as a powerful tool for a malicious nation-state to undermine public safety and create uncertainty and chaos. In early 2018, a false alert of an incoming ballistic missile was sent to Hawaii residents and visitors. It was human error, a misinterpreted test instruction, that resulted in the live alert, which caused panic and confusion across the state. The same alert issued by a rogue actor and backed by a deepfake of a ballistic missile striking a U.S. state could have put the country on a retaliation course, undermining public safety and diplomacy.

Threat to Democracy

In 2018, the people of Gabon suspected that their president, Ali Bongo, was seriously ill or dead. To quell the speculation, the government announced that Bongo had suffered a stroke but was in good health, and it soon released a video of Bongo delivering his New Year's address to Gabon's population. Within a week, the military launched an unsuccessful coup, citing the video as a deepfake. It was never established that the video was in fact a deepfake, but the allegation alone nearly changed the course of government in Gabon. The mere idea of deepfakes is enough to accelerate the unraveling of an already precarious situation.

A deepfake can sabotage the image and reputation of a political candidate and may also alter the course of an election.

A well-executed deepfake released a few days before polling, showing a leading political candidate spewing racial epithets or indulging in an unethical act, can damage their campaign. The campaign and the candidate may not have time to recover from the episode, even after effective debunking of the AI-generated disinformation. State-sponsored disinformation campaigns were seen in the 2016 U.S. presidential election and the 2017 French election. A high-quality deepfake can inject compelling false information that casts a shadow of illegitimacy over the voting process and election results.

Many countries retain colonial-era laws criminalizing same-sex activity, and in socially conservative Malaysia, these laws are still used for political ends. In 2019, a sex video of two men, one of whom resembled the Minister of Economic Affairs, Azmin Ali, went viral on WhatsApp. The other man in the video confessed and claimed that the video was created without consent; he was subsequently arrested on sodomy charges. Ali and others, including the Malaysian prime minister, argued that the video was a deepfake. Digital forensics experts have yet to determine the video's authenticity, but it could have ruined Ali's political career.

A Belgian political party created a fake video of the U.S. president in which Mr. Trump calls on Belgium to exit the Paris climate agreement. The Flemish Socialist Party sp.a, which posted the video, claimed that the deepfake was intended to draw attention to climate change, as the video ends by calling on viewers to sign a petition to invest in renewable energy, electric cars, and public transport.

In politics, stretching the truth, overstating a policy position, and presenting alternative facts about the opposition are accepted tactics, but deepfakes and synthetic media may have a far more profound impact on the outcome of the polls, affecting both voters and candidates. Deepfakes may also be used for misattribution: telling a lie about a candidate, falsely amplifying their contributions, or inflicting reputational harm on them.

The liar's dividend is when a genuine piece of media or an undesirable truth can be dismissed by a leader as a deepfake or fake news.

Threat to Businesses

A study estimated that businesses lose approximately $78 billion each year because of disinformation about them, including $9 billion spent to repair reputational damage and another $17 billion lost to financial disinformation.

Deepfakes are used to impersonate the identities of business leaders and executives to facilitate fraud. In March 2019, the CEO of a UK-based energy firm was asked over the phone to wire $243,000 to a Hungarian supplier by his boss, the CEO of the firm's German parent company. The British CEO complied. Only afterward did the firm realize, according to its insurance carrier, Euler Hermes Group S.A., that AI-based software had been used to impersonate the German CEO's voice.

According to Symantec, millions of dollars were stolen from three companies that fell victim to deepfake audio attacks. In each attack, an AI-generated synthetic voice called the senior financial officer to request an urgent money transfer; the deepfake models were trained on the CEO's public speeches. In February 2020, a Pennsylvania attorney was fooled by a deepfake voice of his son claiming that he needed $9,000 in bail money.

In a less sophisticated fraud, Israeli fraudsters stole about $9 million from a businessman by impersonating the French foreign minister in a Skype video call. The impostors disguised themselves as the minister and his chief of staff and recreated a phony office setting for the call. The office and the disguises would not have been needed had they used deepfakes. Deepfakes can also be used in social engineering to dupe employees into revealing business secrets.

Deepfakes could pose unique labor and employment risks. Employees increasingly rely on covert video and audio recordings to support claims of harassment or mistreatment. Depending on local rules, these recordings are usually admissible as highly reliable evidence, even where employers broadly ban workplace recordings. A deepfake of a business leader engaged in inappropriate behavior could be used to substantiate a harassment claim or support class action litigation. Remedying the reputational injuries inflicted by such efforts may require substantial investments of time and resources[3].

In August 2017, a faux social media campaign with realistic graphics using the Starbucks font and logo trended on Twitter under the hashtag #borderfreecoffee. The hoax spread disinformation that Starbucks was giving free drinks to undocumented immigrants during a so-called Dreamer Day. Similarly, in August 2019, a rumor that the restaurant chain Olive Garden was helping fund President Trump's reelection campaign started under the hashtag #BoycottOliveGarden. Both cases caused momentary reputational harm to Starbucks and Olive Garden. Deepfakes will make such impacts exponentially worse.

Some prank websites, like Channel23News, allow users to create their own genuine-looking fake-news articles and post them directly to social media, lowering the cost of entry for propagandists and exposing even small businesses to such dangers. There have been many other fake stories about brands: that Coca-Cola was recalling Dasani bottled water because of "clear parasites," that an Xbox console killed a teenager, and that Costco was ending its membership program. Businesses must prepare for this additional layer of legal and business risk.

Deepfakes can be used for market manipulation.

In their report, Ferraro, Chipman, and Preston of WilmerHale identify the legal and business risks of disinformation and deepfakes, focusing extensively on the harm to companies[4]. They specifically call out that businesses not only stand to lose the value of defrauded funds and reputational goodwill but can also be subject to litigation by shareholders, investigations by regulators, and loss of access to further capital.

JPMorgan Chase published a report noting that market participants who use trading algorithms driven by posts and headlines are particularly susceptible to disinformation manipulation. There are several examples of pump-and-dump schemes run through coordinated disinformation activity, and a false story backed by deepfake video and audio could easily manipulate the market in the short term. Consumer confidence would plummet if a deepfake showed a recently unveiled autonomous vehicle spontaneously catching fire, a CEO making disparaging comments about an ethnic group, or a business leader in an explicit non-consensual act. Such threats could have a destructive impact on a company's valuation.

There is limited legal consensus on who owns your voice. Many U.S. states protect names and likenesses, but only a few protect the voice, and most states protect these rights only for the living; only a few U.S. states offer posthumous protection for likeness. There are additional open questions around fair use in satire and parody, the definition of public figures, and First Amendment rights. Audio deepfakes will affect the recording industry.

Conclusion

As with any new technology, nefarious actors will take advantage of the innovation and use it for their own benefit. Deepfakes made without consent threaten psychological security, political stability, and business operations. Today, deepfakes are mostly used in pornography, inflicting emotional and reputational harm, and in some cases violence, on individuals, mainly women. Other than deepfake pornography, we have not yet seen a consequential deepfake incident. Experts cite the liar's dividend, when an undesirable truth is dismissed as a deepfake or fake news, as their main concern. I firmly believe that, in the longer term, deepfakes pose a significant threat to business.
