As a technology enthusiast, I have always been fascinated by the potential of artificial intelligence (AI) to transform our lives for the better. AI has enabled many breakthroughs in various fields, such as medicine, education, entertainment, and more (Mishra & Tyagi, 2022).
AI can produce stunning visual effects, enhance artistic expression, and enable new forms of communication and entertainment. However, I also have some concerns and paranoia about the dark side of AI, especially when it comes to deepfakes.
Deepfake refers to the use of AI to create realistic but fake videos, audio, or text. It can be used for good or bad purposes, depending on the intention of the user. However, many experts and executives believe that deepfakes pose a high ethical risk, as they can spread misinformation, manipulate people, and harm reputations (Cheres & Groza, 2023).
I have seen many examples of deepfake videos online that made me question the authenticity of what I was watching. Some were amusing, such as the ones that featured celebrities or politicians saying or doing funny things. Others were disturbing, such as the ones that showed people committing crimes, confessing secrets, or endorsing products or ideologies that they did not actually support. A few were so realistic that I could not tell the difference between the original and the fake.
For instance, I watched a video of former US President Barack Obama delivering a speech that he never gave, with his voice and gestures perfectly mimicked by AI. I also saw a video of Facebook CEO Mark Zuckerberg talking about how he controls billions of people's data and privacy, which was actually a prank by artists. And I was shocked by a video of a journalist being beheaded by terrorists, which turned out to be a hoax.
These videos made me realize how easy it is to create and spread deepfakes with the help of AI tools that are becoming more accessible and user-friendly. Anyone with a smartphone and an internet connection can make a convincing deepfake video with a few clicks, using apps like FaceApp or Zao. These apps allow users to swap their faces with celebrities, politicians, or anyone else they want. They also use AI to adjust the lighting, expression, and movement of the faces to make them look natural.
The problem is that not everyone who uses these apps has good intentions. Some people may use them to create fake news, slander someone's reputation, or scam someone's money. According to Deloitte's State of AI in the Enterprise report (2018), executives who understand AI best believe that the use of AI to create falsehoods is the top ethical risk posed by the technology. They also fear that deepfakes could harm their public perception and stock price, as well as expose them to legal liability.
The dangers of deepfakes are not limited to individuals or organizations. They can also affect society as a whole, by undermining trust, democracy, and security (Samoilenko & Suvorova, 2023). As Joseph Carson, Chief Security Scientist at Thycotic, said: "Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks" (ISACA, 2021). He warned that deepfakes could be used to influence elections, manipulate stock markets, or provoke conflicts.
I worry about the impact of deepfakes on our society and democracy. A machine designed to create realistic fakes is a perfect weapon for vendors of fake news who want to influence everything from stock prices to elections. How can we protect ourselves from deepfakes? How can we prevent them from causing harm? How can we trust what we see or hear online if we cannot tell what is real and what is not?
How can we hold people accountable for their actions if they can deny them or dismiss the evidence as a deepfake? How can we protect our privacy and identity if anyone can steal or misuse our digital likeness?
These are some of the questions that keep me awake at night. I don't have definitive answers, but I have some suggestions.
First, we need to educate ourselves and others about the existence and potential impact of deepfakes. We need to be aware of how AI can create and manipulate media, and how to spot signs of forgery. For example, we can look for inconsistencies in the lighting, shadows, or reflections of the images or videos. We can also look for unnatural eye movements or blinks, which are hard for AI to replicate.
Second, we need to verify the sources and credibility of the media we consume and share. We need to check where they come from, who made them, and why they made them. We need to look for corroborating evidence from other sources or witnesses. We need to use critical thinking and common sense before we believe or spread anything online.
Third, we need to use AI to fight AI. We need to develop and deploy tools that can detect and expose deepfakes automatically and accurately. We need to leverage the power of AI to counteract the misuse of AI. For example, researchers have developed algorithms that can analyze the frequency and duration of eye blinks in videos to identify deepfakes.
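To make the blink-analysis idea concrete, here is a minimal sketch of how such a heuristic might work. It assumes a per-frame "eye aspect ratio" (EAR) series has already been extracted from facial landmarks by some upstream tool; the function names, thresholds, and input format are all illustrative assumptions, not any particular published detector.

```python
# Hedged sketch: flagging a video as suspicious when its blink rate is
# implausibly low for a human. Assumes an eye-aspect-ratio (EAR) value per
# frame is already available; all thresholds here are illustrative guesses.

def count_blinks(ear_series, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks: runs of frames where the EAR dips below the threshold."""
    blinks = 0
    closed_run = 0
    for ear in ear_series:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # blink at the very end of the clip
        blinks += 1
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=6):
    """Humans blink roughly 15-20 times per minute; far less is a red flag."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute

# A 10-second clip at 30 fps: one with two blinks, one with eyes never closing.
open_eye, closed_eye = 0.3, 0.1
normal = ([open_eye] * 100 + [closed_eye] * 3 +
          [open_eye] * 100 + [closed_eye] * 3 + [open_eye] * 94)
static = [open_eye] * 300
```

A real detector would of course be far more sophisticated, and newer generation models have largely learned to blink naturally, so blink analysis alone is no longer reliable; but it illustrates the general pattern of looking for statistical tells that the generator fails to reproduce.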
Finally, we need to establish ethical standards and regulations for the creation and distribution of deepfakes. We need to balance the rights and responsibilities of creators and consumers of media. We need to respect intellectual property, privacy, and consent. We need to hold accountable those who create or use deepfakes for malicious purposes.
I hope that by sharing my personal thoughts, concerns, and paranoia about deepfakes, I can raise awareness and spark a conversation about this important and timely issue. I also hope that by citing some references to support my claims, I can provide some useful and reliable information for anyone who is interested in learning more about deepfakes and AI.
Thank you for reading my blog post. I welcome your feedback and comments. Please let me know what you think about deepfakes and AI, and how we can deal with them.
Here are some of the sources that I used for this blog post:
Cheres, I., & Groza, A. (2023). The profile: unleashing your deepfake self. Multimedia Tools and Applications, 1-16.
Deloitte (2018). Deepfakes and the Dangers of AI. https://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/articles/deepfakes-artificial-intelligence-ethics.html
Forbes (2022). Deepfakes - The Danger Of Artificial Intelligence That We Will Learn To Manage Better. https://www.forbes.com/sites/lutzfinger/2022/09/08/deepfakesthe-danger-of-artificial-intelligence-that-we-will-learn-to-manage-better/
Karnouskos, S. (2020). Artificial intelligence in digital media: The era of deepfakes. IEEE Transactions on Technology and Society, 1(3), 138-147.
ISACA (2021). Taking Fakeness to New Depths with AI Alterations: The Dangers of Deepfake Videos. https://northernkentuckylawreview.com/blog/deepfakes-nightmares-of-the-future-occurring-now
MUO (2022). The Future of AI Deepfakes: 3 Real Dangers of This Evolving Technology. https://www.makeuseof.com/future-and-dangers-of-ai-deepfakes/
Samoilenko, S. A., & Suvorova, I. (2023). Artificial intelligence and deepfakes in strategic deception campaigns: The US and Russian experiences. In The Palgrave Handbook of Malicious Use of AI and Psychological Security (pp. 507-529). Cham: Springer International Publishing.