Deepfake: The Rise of the Synthetic Reality

Deepfakes are a sophisticated form of synthetic media that uses artificial intelligence (AI) and machine learning (ML) to fabricate convincing audio, video, or images. The name “deepfake” combines “deep learning” and “fake,” referencing the core technology. Unlike traditional fakes, deepfakes are difficult to identify, posing a new challenge for individuals, businesses, and governments alike.


A Deeper Look at a Digital Deception

The technology relies on an advanced, iterative process. A key method is the Generative Adversarial Network (GAN), which pits two competing AI systems against each other: a “Generator” that creates fake content and a “Discriminator” that tries to detect it. This “cat and mouse” game constantly refines the Generator, producing increasingly realistic fakes and fueling a technological arms race against detection methods. Other AI techniques, such as autoencoders and Convolutional Neural Networks (CNNs), handle tasks like superimposing facial features and tracking movements. For audio, voice-cloning and speech-synthesis models, often paired with Natural Language Processing (NLP), mimic a person’s voice and speaking style. Advances in high-performance computing have made high-quality deepfakes faster to produce and more accessible to the public.
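The adversarial “cat and mouse” loop can be sketched in a few lines. The toy below is an illustrative assumption, not a real deepfake model: the Generator is a 1-D affine map on noise and the Discriminator a logistic classifier, trained against each other so that generated samples drift toward the real data distribution.

```python
# Toy GAN loop in NumPy: a 1-D Generator vs. a logistic Discriminator.
# Purely illustrative; real deepfake models use deep networks on images/audio.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_d, b_d = 0.1, 0.0   # Discriminator: scores "real vs. fake" for a scalar sample
w_g, b_g = 1.0, 0.0   # Generator: maps noise z to a sample, imitating N(3, 1)

lr = 0.05
for step in range(2000):
    real = rng.normal(3.0, 1.0, size=32)      # samples the Generator must imitate
    z = rng.normal(0.0, 1.0, size=32)
    fake = w_g * z + b_g

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w_d * x + b_d)
        grad = p - label                       # cross-entropy gradient w.r.t. logit
        w_d -= lr * np.mean(grad * x)
        b_d -= lr * np.mean(grad)

    # Generator update: push D(fake) -> 1, i.e. fool the Discriminator.
    z = rng.normal(0.0, 1.0, size=32)
    fake = w_g * z + b_g
    p = sigmoid(w_d * fake + b_d)
    grad_logit = (p - 1.0) * w_d               # chain rule through the Discriminator
    w_g -= lr * np.mean(grad_logit * z)
    b_g -= lr * np.mean(grad_logit)

# After training, the Generator's offset b_g should have drifted toward the real mean (3.0).
print(round(b_g, 2))
```

Each round the Discriminator gets slightly better at telling real from fake, which in turn gives the Generator a sharper gradient to imitate the real distribution: exactly the arms race described above, in miniature.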


Listen to a podcast on this topic: https://youtu.be/hcVty3Y_bDE


The Cat and Mouse Game: A Brief History of a Digital Threat

The term “deepfake” originated in 2017 with a Reddit user who shared an algorithm for creating fake videos. The code was released as open source on platforms like GitHub, lowering the barrier to entry. User-friendly applications such as FakeApp soon followed, putting the technique within reach of a much wider audience. This democratization is a major driver of today’s sharp rise in deepfake incidents and fraud; the threat is no longer limited to experts or state actors.


The Real Cost: Deepfake Incidents and Their Echoes in Bangladesh

Deepfakes have evolved into a tangible threat, resulting in significant financial losses worldwide. The statistics reveal a clear, escalating trend.


Global Case Files: Financial and Corporate Targets

Deepfake fraud attempts surged by 3,000% in 2023, driven by the accessibility of generative AI tools. The trend has continued, with one deepfake digital identity attack occurring every five minutes in 2024. The financial impact is severe: businesses lost nearly $500,000 on average in 2024, with large enterprises losing up to $680,000. By 2027, fraud losses from generative AI are projected to reach US$40 billion in the United States alone.


High-profile incidents highlight the severity:

  • A multinational firm in Hong Kong lost $25.6 million to a deepfake video call where fraudsters impersonated the CFO.
  • Another case involved a company director being impersonated in a deepfake phone call, leading to a $35 million transfer.
  • In politics, a deepfake robocall of President Joe Biden urged New Hampshire voters to stay home during the primaries, showing how the technology can undermine democratic processes.

A Local Reality Check: Deepfake Incidents in Bangladesh

Deepfake incidents in Bangladesh mirror global trends, with attacks targeting financial fraud and political misinformation. For example, deepfake videos have falsely shown Chief Adviser Professor Muhammad Yunus endorsing gambling and betting apps on social media. The doctored footage, taken from a genuine Al Jazeera interview, was manipulated to promote a gambling app called “Blue Live.” The video even used the logo of a news outlet, bdnews24.com, to appear credible.

In the political sphere, Bangladesh Nationalist Party (BNP) leaders have alleged that AI-powered deepfakes were used to distort their speeches and create fake videos, reportedly spread by pro-government platforms for extortion and misinformation. Other political deepfakes have been used to announce the withdrawal of candidates on election night and to create explicit videos of female leaders. These local cases show that attackers have a sophisticated understanding of local media and public trust; the same threats seen globally have now arrived in Bangladesh.


Table 1: Deepfake Incidents: A Global and Local View

| Attack Type | Target | Location | Financial Impact/Outcome |
|---|---|---|---|
| C-suite impersonation | CFO, multinational firm | Hong Kong | $25.6 million loss |
| Political disinformation | Joe Biden, US voters | USA | Voter confusion |
| Financial fraud | Prof. Muhammad Yunus, Bangladeshi social media users | Bangladesh | Attempted financial fraud and reputational damage |
| Political disinformation | BNP leaders, Bangladeshi voters | Bangladesh | Extortion, misinformation, and reputational damage |

 


The Human Element: Why We Are Vulnerable to Deepfakes

Deepfakes succeed by exploiting human psychological vulnerabilities, especially in high-pressure situations. Understanding this is key to building effective defenses.


Beyond the Technology: Exploiting Our Cognitive Biases

Deepfakes exploit cognitive biases, or systematic errors in our thinking, leading us to accept false narratives.

  • Confirmation Bias: We favor information that confirms our existing beliefs. A deepfake of a politician behaving negatively is more likely to be believed by an opposing audience.
  • Trust Bias: We tend to believe information from sources we trust. When a deepfake uses a familiar face or voice, we are less skeptical. This is why scams impersonating executives are successful.
  • Fear of Missing Out (FOMO): The anxiety of being left out of a social conversation drives people to share deepfakes, prioritizing engagement over accuracy.

The threat is not just technical; it’s a human-centric problem. A deepfake’s power is in the stories we already believe.


The Illusion of Control

People often believe they are good at spotting deepfakes, but research shows this confidence is largely an illusion. One study found that humans correctly identified high-quality deepfakes only 24.5% of the time. This overconfidence makes us especially vulnerable during high-stakes events.

A dangerous side effect of widespread deepfake knowledge is the “liar’s dividend.” Bad actors can dismiss legitimate evidence as “just a deepfake,” eroding public trust in all media and making it difficult to hold people accountable. The inability to distinguish real from fake puts our shared reality at risk.


A Double-Edged Sword: Deepfakes for Good

The technology behind deepfakes is neutral. Its impact depends on the user’s intent. Acknowledging its positive applications provides a more balanced perspective.


The Positive Potential

The entertainment industry uses deepfakes positively to “de-age” actors and bring back iconic performers for modern audiences, as seen in films like Star Wars and The Irishman.

In medicine, deepfakes can create realistic patient simulations for medical training. In education, lifelike AI versions of historical figures, like Salvador Dalí, offer engaging learning experiences in museums.

The technology also enhances accessibility. After actor Val Kilmer lost his voice to throat cancer, deepfake tech recreated it, allowing him to “speak” again. In a public health campaign, deepfakes allowed David Beckham to deliver a message in multiple languages while preserving his expressions.


Your Guide to Digital Resilience: Practical Steps for Protection

Digital resilience requires a multi-layered defense strategy of human vigilance, education, and strong protocols.


Becoming a Digital Detective: How to Spot a Deepfake Manually

While AI detection tools are important, you can train yourself to spot deepfakes. Look for these red flags as your first line of defense:

  • Visual Anomalies: Look for unnatural eye movements, a lack of blinking, or inconsistent facial expressions and emotions. Check for unnatural body movements, inconsistent lighting, or blurred edges around the face, hair, and neck.
  • Audio and Sync Issues: Poor lip-syncing, a robotic voice, or a mismatch between voice tone and facial emotion are key signs.
  • Contextual Red Flags: The content might seem too bizarre or sensational to be true, or it may only appear on one unverified source.

Use this checklist for a practical approach to manual detection:


Table 2: The Deepfake Detection Checklist

| Visual/Audio Clue | What to Look For | Your Action |
|---|---|---|
| Eye movements | Absence of blinking; unnatural movements. | Be skeptical. Is the person making eye contact and blinking naturally? |
| Facial expressions | Stiffness; a lack of emotion that matches the words. | Slow down the video. Do the emotions look authentic and fluid? |
| Lip-syncing | Mismatched mouth movements and audio. | Watch closely. Do the audio and video align? |
| Lighting & shadows | Unnatural shadows, discoloration, blurry edges. | Pay attention to the background. Does the lighting on the face match the environment? |
| Source & context | Content is shocking or out of character, or found in only one place. | Search for the original. Do credible news sources confirm this story? |
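For illustration, the checklist can even be encoded as a simple weighted score. Everything below (the flag names, weights, and thresholds) is a hypothetical sketch for structuring your own review, not a validated detection method:

```python
# Hypothetical mapping of the manual checklist to a rough risk score.
# Flag names, weights, and thresholds are illustrative assumptions only.
CHECKLIST_WEIGHTS = {
    "unnatural_eye_movement": 2,    # absence of blinking, fixed gaze
    "stiff_facial_expressions": 2,  # emotion does not match the words
    "poor_lip_sync": 3,             # mouth movements lag or mismatch audio
    "inconsistent_lighting": 2,     # odd shadows, blurry edges around the face
    "single_unverified_source": 3,  # no credible outlet confirms the story
}

def deepfake_risk(observed_flags):
    """Sum the weights of observed red flags and bucket them into a verdict."""
    score = sum(CHECKLIST_WEIGHTS[flag] for flag in observed_flags)
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Bad lip-sync plus a single unverified source already warrants strong skepticism.
print(deepfake_risk({"poor_lip_sync", "single_unverified_source"}))
```

The point of the exercise is the discipline, not the numbers: forcing yourself to check each cue explicitly counters the overconfidence described earlier.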

Strengthening Your Defenses: Personal and Professional Practices

Your digital security is the first step to building a resilient society.

  • Pause and Verify: Before sharing sensational content, especially if it aligns with your beliefs, take a moment to verify it from multiple credible sources.
  • Protect Your Digital Footprint: Limit the high-quality photos and videos you share online by using strong privacy settings. This reduces the data available to deepfake creators.
  • Embrace Strong Security: Always use multi-factor authentication (MFA) to add an extra layer of security and prevent unauthorized access, even if a deepfake is used to bypass your identity.
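As one concrete example of the MFA layer, time-based one-time passwords (TOTP, RFC 6238) derive a short-lived code from a shared secret, so a cloned face or voice alone cannot authenticate. A minimal standard-library sketch:

```python
# Minimal TOTP (RFC 6238) implementation using only the Python standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s, 8 digits.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # "94287082" per RFC 6238, Appendix B
```

Because the code changes every 30 seconds and never travels over the impersonated channel, a deepfake caller cannot supply it.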

For corporate leaders, the threat is magnified, requiring a more robust strategy.

  • Employee Awareness and Training: A well-informed workforce is your first line of defense. Run regular cybersecurity programs and deepfake drills to train employees. Foster a culture of skepticism where they question unusual requests, even from a seemingly legitimate source.
  • Enact Strong Verification Protocols: Never rely solely on a video or voice call for sensitive tasks. Use a secondary, out-of-band verification process for all high-value transactions, such as a pre-agreed-upon question or a call to a trusted number.
  • Leverage AI to Fight AI: Use AI-powered defense tools to monitor your digital presence. Implement advanced biometric authentication with “liveness detection” to confirm a person on a video call is real, stopping camera injection attacks before they cause harm.
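The out-of-band rule above can be made mechanical in payment workflows. The sketch below is hypothetical (the threshold, channel names, and `confirm_via_second_channel` callback are assumptions): it simply refuses any high-value transfer that has not been confirmed on an independent channel.

```python
# Hypothetical out-of-band verification gate for high-value transfers.
# The threshold, channel names, and callback are illustrative assumptions.
HIGH_VALUE_THRESHOLD = 10_000  # require a second channel above this amount

def approve_transfer(amount, requested_via, confirm_via_second_channel):
    """Approve only low-value requests, or those confirmed on a different channel.

    confirm_via_second_channel(channel) should reach the requester on a
    pre-agreed trusted channel (e.g. a known phone number) and return True
    only if they confirm the request there.
    """
    if amount <= HIGH_VALUE_THRESHOLD:
        return True
    # Never trust the originating channel alone: video and voice can be faked.
    for channel in ("known_phone_number", "in_person"):
        if channel != requested_via and confirm_via_second_channel(channel):
            return True
    return False

# A deepfake video call alone cannot move funds; the transfer is refused
# unless someone confirms it on an independent, pre-agreed channel.
print(approve_transfer(25_600_000, "video_call", lambda channel: False))
```

Had a rule like this been enforced, the $25.6 million Hong Kong video-call fraud would have stalled at the callback step.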

Navigating the Legal Frontier: The Regulatory Landscape

Governments worldwide are amending legal frameworks to address deepfakes. A regional trend is emerging in South Asia to combat these threats through legislation.


Global and Regional Efforts in Context

South Korea bans deepfakes and manipulated media within 90 days of an election, with penalties up to seven years in prison.

In India, the AI TRA Bill 2024 and other acts criminalize the creation and distribution of deepfakes without consent, especially for sexual content, fraud, or identity theft. The government can issue orders for content removal and prosecute cybercrimes involving deepfakes.

Pakistan’s amended Prevention of Electronic Crimes Act (PECA 2025) criminalizes spreading false information likely to cause public fear, with penalties up to three years in prison and a 2 million rupee fine. These regional laws aim to protect electoral integrity and public order.


Bangladesh’s Evolving Legal Response

Bangladesh has taken a significant step forward with the enactment of the Cyber Security Ordinance 2025. This ordinance includes specific provisions that criminalize cybercrimes involving artificial intelligence, marking a first for the South Asian region. This law promises a more balanced and rights-conscious approach than its predecessor, the Cyber Security Act (2023), which was criticized for its potential to suppress dissent. The new ordinance’s focus on AI-driven threats aligns with the global and regional push to combat deepfakes.

However, the legal response in Bangladesh presents a critical contradiction. While the new ordinance is a necessary and progressive step, the earlier Digital Security Act (DSA) was known for institutionalized digital repression, using vaguely worded provisions to suppress dissent and criminalize online expression. The new Cyber Security Ordinance (CSO) retains many of the DSA’s provisions, raising questions about how these new legal tools will be enforced.

The success of Bangladesh’s new legal framework will depend on how it is implemented. The challenge is to use these new laws to genuinely protect citizens from deepfakes and fraud while ensuring they are not used to stifle free expression or legitimate political discourse. As the country navigates this complex issue, it will serve as a crucial test case for how a democratic society in South Asia balances the need for security with the protection of human rights and freedom of expression.

 



C. Basu

A marketing professional with over 10 years of experience working with local and international brands, C. Basu specializes in crafting and executing brand strategies that drive business growth and foster meaningful connections with audiences.
