#AIdeepfakes
AI Deepfakes Roil 2026 US Midterm Campaigns Investing.com reports a 300% increase in deepfakes in early 2026; 221 days until Nov 4, 2026. Campaigns, platforms, and advertisers are reallocating budgets and compliance spend.

AI Deepfakes Roil 2026 US Midterm Campaigns: Investing.com reports a 300% increase in deepfakes in early 2026; 221 days until Nov 4, 2026. Campaigns, platforms, and advertisers are reallocating… 👈 Read full analysis #AIDeepfakes #MidtermElections #CampaignStrategy #2026Election #DigitalAdvertising

Sony Music Requests Removal of Over 135,000 AI-Generated Deepfake Songs Impersonating Its Artists from Streaming Platforms

Sony Music has taken action against the rising tide of AI-generated music by requesting the removal of more than 135,000 songs from various streaming services. These tracks were created by fraudsters using artificial intelligence to mimic the voices and musical styles of prominent artists signed to the label, including Beyoncé, Queen, Harry Styles, Bad Bunny, Miley Cyrus, and producer Mark Ronson. The company highlighted that this figure represents only a portion of the problem, with at least 60,000 such fake songs identified since March 2025 alone.

Dennis Kooker, President of Sony's Global Digital Business, expressed concerns that these deepfakes can undermine artists' promotional efforts, damage release campaigns, and potentially harm their reputations. He noted that the surge in deepfakes is often demand-driven, capitalizing on heightened fan interest during major artist promotions. Streaming platforms like Spotify do not currently mandate clear labeling for AI-generated content, which contributes to the ease with which these tracks circulate.

The issue echoes previous incidents, such as deepfakes targeting deceased artist Blaze Foley and rapper Tyler, the Creator around his album release. As AI tools become more accessible and affordable, Sony warns that the volume of unauthorized AI music is expected to keep rising, posing ongoing challenges to the integrity of the music industry and artists' rights.

Sony Music Requests Removal of Over 135,000 AI-Generated Deepfake Songs Impersonating Its Artists from Streaming Platforms

🤖 AI: It's not clickbait ✅
👥 Users: It's not clickbait ✅

#aideepfakes #musicindustry #streamingplatforms

View full AI summary:

Deepfake Fraud Expands as Synthetic Media Targets Online Identity Verification Systems

Beyond spreading false stories or fueling viral jokes, deepfakes are shifting into sharper, more dangerous forms. Security analysts report that fake video and audio now play a growing role in scams designed to break through the digital identity checks at the heart of countless online platforms.

Identity verification sits at the core of digital safety and of how companies now operate online. Customer sign-up at financial institutions, drivers joining gig platforms, sellers accessing marketplaces, remote employment checks, even account recovery: each depends on proving that a real person is behind the screen.

Fraudsters increasingly subvert live authentication with AI-generated synthetic media. Rather than merely tricking a face scan, attackers impersonate actual people and thereby obtain authorized access to digital platforms. Once past the verification layer, that access often spreads across personal apps and corporate networks alike. The goal is long-term control of hijacked accounts, enabling repeated intrusions without raising alarms.

Security teams now see a blend of methods aimed at fooling identity checks. High-resolution fake faces and cloned voices can pass quick login verifications. Stolen video clips are replayed against systems expecting live input, and attackers often reuse existing recordings to probe for weak spots rather than building fakes from scratch. Injection attacks insert manipulated streams into the pipeline before the software ever analyzes the feed.

These methods point to an escalating problem for organizations relying only on deepfake-spotting tools. A growing number of specialists argue that inspecting content in isolation falls short against today's identity scams; defenses should instead examine every step of the verification process for subtle signs that something is off. Incode Deepsight, for example, starts with live video analysis to check whether the stream has been tampered with. Rather than relying solely on images, it confirms identity throughout the entire session, processing data in real time and examining device security features as well. Behavior patterns matter too: slight movements, response timing, even how someone holds a phone become part of the evaluation, with the main goal of spotting mismatches across different inputs.

When deepfakes slip through these defenses, the consequences are serious. Criminals can set up false profiles built on artificial personas, gain access to real user accounts, trick verification steps in remote job onboarding with fabricated visuals, and open sensitive business networks to unauthorized entry.

Not every test happens in a lab. Researchers at Purdue University evaluated detection algorithms against actual cases logged in the Political Deepfakes Incident Database, a collection of real clips pulled from sites such as YouTube, TikTok, Instagram, and X (formerly Twitter). The results were sobering: detection tools that succeed in lab settings falter on real-world recordings degraded by compression or poor capture quality. Complexity grows when attackers mix methods, layering replay tactics with automated scripts or injected data, which pushes identification further into uncertainty.

Security specialists therefore argue that trust should not hinge on recognizing faces or voices alone. Protection comes from checking multiple signals throughout a digital interaction: when one method misses something, others can still catch warning signs, and confidence grows when systems look at patterns over time rather than isolated moments. Layered checks make deception harder to sustain, because a single flaw does not collapse the whole defense. As digital threats keep shifting, experts increasingly treat proof of identity as continuous rather than something fixed at the point of entry.
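To make the layered, continuous verification idea concrete, here is a minimal sketch in Python of how several session signals might be combined so that no single check becomes a single point of failure. Every name, weight, and threshold below is a hypothetical illustration for this summary; it is not Incode's API, the Purdue study's method, or any vendor's actual scoring logic.

```python
from dataclasses import dataclass


@dataclass
class SessionSignals:
    """Hypothetical per-signal scores in [0, 1]; higher means more likely a live, genuine user."""
    liveness: float          # live-video tamper / liveness analysis
    device_integrity: float  # device attestation, emulator and injection checks
    behavior: float          # micro-movements, response timing, handling patterns
    consistency: float       # agreement across face, voice, and document inputs


# Illustrative weights and thresholds; a real system would tune these empirically.
WEIGHTS = {"liveness": 0.35, "device_integrity": 0.25, "behavior": 0.20, "consistency": 0.20}
HARD_FLOOR = 0.2       # any single signal this low fails the check outright
PASS_THRESHOLD = 0.7   # combined score required to keep the session trusted


def evaluate(signals: SessionSignals) -> tuple[bool, float]:
    """Score one sample of the session and decide whether it still looks trustworthy."""
    values = vars(signals)
    # Layered defense: one catastrophically weak signal is enough to flag the session,
    # even if the weighted average would otherwise look acceptable.
    if min(values.values()) < HARD_FLOOR:
        return False, 0.0
    score = sum(WEIGHTS[name] * value for name, value in values.items())
    return score >= PASS_THRESHOLD, score


def monitor_session(samples: list[SessionSignals]) -> bool:
    """Treat identity as continuous: every periodic sample must stay above the bar."""
    return all(evaluate(sample)[0] for sample in samples)


if __name__ == "__main__":
    # A replayed clip can look photorealistic (high liveness) yet fail device checks
    # and show little natural behavioral variation; the other layers catch it.
    replay_attempt = SessionSignals(liveness=0.9, device_integrity=0.1, behavior=0.3, consistency=0.8)
    genuine_user = SessionSignals(liveness=0.85, device_integrity=0.9, behavior=0.8, consistency=0.9)

    print(evaluate(replay_attempt))                       # (False, 0.0)
    print(evaluate(genuine_user))                         # (True, 0.8625)
    print(monitor_session([genuine_user, genuine_user]))  # True
```

The design point in this sketch is the hard floor: a replayed or injected stream that fools one detector still has to look plausible on every other signal, and the session keeps being re-scored over time instead of being trusted once at login.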

Deepfake Fraud Expands as Synthetic Media Targets Online Identity Verification Systems #AIDeepfakes #CyberFraud #CyberSecurity

AI, deepfakes and online attacks on activists From smear campaigns and doxxing to AI-generated deepfakes, digital technologies are used to attack activists online.

AI deepfakes are weaponized to silence women human rights defenders globally: fake explicit images used to shame, discredit & terrorize. Big Tech hosts this content. Big Tech profits from it. Regulate them. #ProtectActivists #AIDeepfakes #WomensRights
kvinnatillkvinna.org/2026/03/05/a...

Wiz Khalifa criticizes 'Scream 7' over its use of AI deepfakes. In short: During a livestream of The Sesh, Wiz Khalifa said he really does not like the film Scream 7, because artificial intelligence (AI) and deepfakes are used to bring deceased characters back to life. He called the film "junk" and felt the use of AI was forced and […] The post Wiz Khalifa criticizes 'Scream 7' over its use of AI deepfakes appeared first on Newsmonkey.

Wiz Khalifa criticizes 'Scream 7' over its use of AI deepfakes #WizKhalifa #Scream7 #AIDeepfakes #Filmcritic #ArtificialIntelligence


Regulators are zeroing in on AI deepfakes, but the real threat could be the quiet nudges from conversational agents and wearables we barely notice. How will Meta, Google & new tech regulation shape AI influence? Find out. #AIDeepfakes #ConversationalAgents #TechRegulation

🔗


Samsung’s new ticket promo is a wild AI deepfake train wreck—what does it say about brand authenticity and the limits of AI in marketing? Dive into the fallout and what it means for creators. #AIDeepfakes #SamsungAds #AIMarketingLimits

🔗 aidailypost.com/news/ai-deep...


New study shows transparency warnings barely dent the spread of AI deepfakes—synthetic videos still fool us. What does this mean for content authenticity and digital media trust? Dive in. #AIDeepfakes #TransparencyFails #ContentAuthenticity

🔗 aidailypost.com/news/study-f...

🚨 Fraud is evolving faster than our defenses.

Biometric matching and basic spoof detection are no longer enough. The rise of AI-generated deepfakes and sophisticated injection attacks is forcing a complete rethink of identity security architecture — from digital wallets to government border systems.

Standards-based IAD testing is maturing, but it needs to mature faster.

Read the full breakdown 👇 🔗 https://provadivita.com/biometric-injection-attacks/

#BiometricFraud #InjectionAttackDetection #AIDeepfakes #IdentitySecurity #LivenessDetection #BiometricTechnology #CyberThreats #DigitalTrust

'The Investigation of Lucy Letby': Netflix draws criticism over its use of AI deepfakes to anonymize witnesses. In short: Netflix's new documentary, The Investigation of Lucy Letby, covers the chilling case of a nurse convicted of the murder and attempted murder of multiple babies. The docuseries quickly caught the attention of true-crime fans, but the choice to use AI deepfakes for grieving […] The post 'The Investigation of Lucy Letby': Netflix draws criticism over its use of AI deepfakes to anonymize witnesses appeared first on Newsmonkey.

'The Investigation of Lucy Letby': Netflix draws criticism over its use of AI deepfakes to anonymize witnesses #Netflix #TrueCrime #LucyLetby #Documentaire #AIDeepfakes


Netflix's 'true crime' deepfakes? Not just bad taste, it's digital historical revisionism. Yellow journalism 2.0, with better pixels. #AIDeepfakes #TrueCrime #MediaManipulation

Read more: piaz.news/article/streaming-giants...

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

Indonesia has suspended access to Grok, the AI chatbot built by Elon Musk's xAI, following claims that the tool was misused to create fabricated adult imagery. Reuters notes the move as a world-first restriction on the tool, and the reaction has spread quickly, driven less by policy papers than by real-time consequences playing out online.

A growing number of reports have linked Grok to incidents in which users created explicit imagery of women, sometimes appearing to involve minors, without consent. Shortly after these concerns surfaced, Indonesia's digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments, and her office classified such acts as grave cyber offenses demanding urgent regulatory attention.

The temporary restrictions were imposed after Antara News highlighted risks tied to AI-made explicit material. Officials said the move was aimed at protecting women, children, and communities and at reducing psychological and societal harm. According to Hafid, fake but realistic intimate imagery counts as digital abuse: the visuals may be synthetic, but the consequences for victims are real, and impact matters more than origin.

xAI has received official notices demanding explanations of Grok's development process and the harms observed. Indonesian regulators also required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on the adoption of rigorous filtering systems, compliance with national regulations, and adherence to responsible artificial intelligence practices, according to Hafid; only after these steps are demonstrated will operation be permitted to continue.

Last week, Musk and xAI warned that improper use of the chatbot for unlawful acts could lead to legal action. On X, Musk stated that individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly: a post he re-shared from one follower implied that fault rests more with the people creating fakes than with the system hosting them.

The debate has also reached American lawmakers. Three US senators wrote to Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their letters cited existing rules prohibiting sexually charged imagery produced without consent, and flagged an automated flood of inappropriate depictions of women and minors as damaging and possibly unlawful.

Indonesia's move is part of a rising trend: when tied to misuse such as nonconsensual deepfakes, AI tools now face sharper government reactions. Officials who were once slow to act increasingly treat the technology as a risk requiring strong intervention. Responses that were hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one, and pressure builds not just from the incidents themselves but from how widely they spread before being challenged.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage #AIDeepfakes #AIgeneratedDeepfake #AIPrivacy


Paris Hilton being very brave!

👏🏻👏🏻👏🏻👏🏻

#AIDeepfakes #Grok


🚨 Grok’s latest AI deepfake scandal just got messy—now facing child‑undressing accusations and legal heat. What’s really happening behind the code? Dive into the details. #GrokAI #AIDeepfakes #AIRegulation

🔗 aidailypost.com/news/grok-fa...

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

A dispute over X's built-in AI assistant, Grok, is drawing attention to questions of consent, online safety, and how easily synthetic media tools can be twisted. The tension surfaced when Julie Yukari, a 31-year-old musician living in Rio de Janeiro, posted a picture of herself relaxing with her cat during New Year's Eve celebrations. Shortly afterward, users on the network began instructing Grok to modify the photograph, digitally swapping her outfit for skimpy beach attire.

What started as skepticism soon gave way to shock. Yukari had assumed the system would not act on those prompts, yet it did. Altered images showing her in minimal clothing spread quickly across the app. She called the episode painful, a moment that exposed quiet vulnerabilities: consent vanished, replaced by algorithms working inside familiar online spaces.

A Reuters probe found that Yukari's case is not isolated. The organization uncovered multiple examples in which Grok produced suggestive pictures of real people, some of whom appeared to be underage. X did not reply to inquiries about the report's findings; earlier, xAI, the team developing Grok, quickly downplayed similar claims, calling traditional outlets sources of false information.

Unease over sexually explicit AI-generated images is growing worldwide. Officials in France have referred complaints about X to legal authorities, calling such content unlawful and deeply offensive to women. India's technology ministry made a similar move, warning X that it had failed to stop indecent material from being made or shared online. US agencies such as the FCC and FTC, meanwhile, have stayed silent rather than make public statements.

Reuters' review also found a sudden surge in demands for Grok to modify pictures into suggestive clothing: within just ten minutes, more than 100 instances appeared, most of them focused on younger women. The system often produced overt visual content without hesitation; at other times, only part of the request was carried out. A large share of the images quickly vanished from open access, limiting how much could be measured afterward.

Image-editing tools driven by artificial intelligence could already strip clothes from photos, but they mostly stayed on obscure websites or required payment. Because Grok is built right into a well-known social network, creating such fake visuals now takes almost no work at all. X had been warned earlier about launching these kinds of features without tight controls, and researchers and advocacy groups argue this situation followed clearly from those ignored alerts.

From a legal standpoint, some specialists say the episode highlights deep flaws in how platforms handle harmful content and manage artificial intelligence. Rather than addressing risks early, observers note, X failed to block offensive inputs during model development and lacked strong safeguards against unauthorized image creation.

For victims like Yukari, the consequences run far beyond digital space, and emotions like embarrassment linger long after deletion. Although she knew the depictions were fake, she still pulled away socially, weighed down by stigma. X has not outlined specific fixes, and pressure is rising for tighter rules on generative AI, especially around responsibility when companies release these tools widely. What stands out now is how little clarity exists on who answers for the outcomes.

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X #AIChatbot #AIDeepfakes #AIPrivacy

Why Cybersecurity Threats in 2026 Will Be Harder to See, Faster to Spread, And Easier to Believe

The approach to cybersecurity in 2026 will be shaped not only by technological innovation but also by how deeply digital systems are embedded in everyday life. As cloud services, artificial intelligence tools, connected devices, and online communication platforms become routine, they also expand the surface area for cyber exploitation. Cyber threats are no longer limited to technical breaches behind the scenes. They increasingly influence what people believe, how they behave online, and which systems they trust. While some risks are still emerging, others are already circulating quietly through commonly used apps, services, and platforms, often without users realizing it.

One major concern is the growing concentration of internet infrastructure. A substantial portion of websites and digital services now depend on a limited number of cloud providers, content delivery systems, and workplace tools. This level of uniformity makes the internet more efficient but also more fragile. When many platforms rely on the same backbone, a single disruption, vulnerability, or attack can trigger widespread consequences across millions of users at once. What was once a diverse digital ecosystem has gradually shifted toward standardization, making large-scale failures easier to exploit.

Another escalating risk is the spread of misleading narratives about online safety. Across social media platforms, discussion forums, and live-streaming environments, basic cybersecurity practices are increasingly mocked or dismissed. Advice related to privacy protection, secure passwords, or cautious digital behavior is often portrayed as unnecessary or exaggerated. This cultural shift creates ideal conditions for cybercrime. When users are encouraged to ignore protective habits, attackers face less resistance. In some cases, misleading content is actively promoted to weaken public awareness and normalize risky behavior.

Artificial intelligence is further accelerating cyber threats. AI-driven tools now allow attackers to automate tasks that once required advanced expertise, including scanning for vulnerabilities and crafting convincing phishing messages. At the same time, many users store sensitive conversations and information within browsers or AI-powered tools, often unaware that this data may be accessible to malware. As automated systems evolve, cyberattacks are becoming faster, more adaptive, and more difficult to detect or interrupt.

Trust itself has become a central target. Technologies such as voice cloning, deepfake media, and synthetic digital identities enable criminals to impersonate real individuals or create believable fake personas. These identities can bypass verification systems, open accounts, and commit fraud over long periods before being detected. As a result, confidence in digital interactions, platforms, and identity checks continues to decline.

Future computing capabilities are already influencing present-day cyber strategies. Even though advanced quantum-based attacks are not yet practical, some threat actors are collecting encrypted data now with the intention of decrypting it later. This approach puts long-term personal, financial, and institutional data at risk and underlines the need for stronger, future-ready security planning.

As digital and physical systems become increasingly interconnected, cybersecurity in 2026 will extend beyond software and hardware defenses. It will require stronger digital awareness, better judgment, and a broader understanding of how technology shapes risk in everyday life.

Why Cybersecurity Threats in 2026 Will Be Harder to See, Faster to Spread, And Easier to Believe #AIDeepfakes #ArtificialIntelligence #CyberSecurity

Elon Musk's X investigated by Ofcom over Grok AI sexual deepfakes | BBC News (YouTube video by BBC News)

Elon Musk's X Investigated By Ofcom Over Grok AI Sexual Deepfakes

#USA #X #Twitter #ElonMusk #Ofcom #AIdeepfakes

youtu.be/w-fRuMpB-rM?...

Original post on mastodon.social

Elon Musk's X to block #Grok from undressing images of real people
Elon Musk's AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised #AIdeepfakes […]

Exclusive | Matthew McConaughey Trademarks Himself to Fight AI Misuse Actor plans to use trademarks of himself saying ‘Alright, alright, alright’ and staring at a camera to combat AI fakes in court.

#MatthewMcConaughey Trademarks Himself to Fight #AI Misuse. Actor to use #trademarks of himself saying ‘Alright, alright, alright’ & staring at a camera to combat AI fakes in court

#movies #filmsky #AIDeepFakes #CyberCrime #DigitalDisinformation #actors

www.wsj.com/tech/ai/matt...


Grok-Blocked: Regulators Hit Kill Switch on AI
Authorities in Malaysia and Indonesia blocked the AI after it was used to create nonconsensual deepfakes.

Read the article and see our sources: s.vp.net/5TnNq

#GrokBlocked #Grok #AIdeepfakes

Deepfakes in the Workplace: The Emerging Legal Risks of AI-Driven Harassment | JD Supra A California appellate court recently affirmed a jury verdict awarding $4 million to a police captain who was subjected to a hostile work environment...

Deepfakes in the Workplace: The Emerging Legal Risks of AI-Driven Harassment

www.jdsupra.com/legalnews/de...

#AI #AIDeepfakes #WorkplaceHarassment #HostileWorkEnvironment #EEOC

Malaysia, Indonesia become first to block Musk's Grok over AI deepfakes Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk's xAI, after authorities said it was being misused to generate sexually...

#Malaysia, #Indonesia become first to block Musk's @groktr.bsky.social over #AIdeepfakes via @npr.org #MondayMorning
www.npr.org/2026/01/12/n...


When Even Biology Isn’t Proof Anymore

#AIDeepfakes #TechCulture #DigitalVerification #MediaTrust #TheInternetIsCrack

Boys at her school shared AI-generated nude images of her. She was the one expelled A 13-year-old girl at a Louisiana middle school got into a fight with classmates who were sharing AI-generated nude images of her

The girls begged for help, first from a school guidance counselor and then from a sheriff's deputy assigned to their school. But the images were shared on Snapchat, an app that deletes messages seconds after they're viewed... #Crime #AIdeepfakes #Education
abcnews.go.com/US/wireStory...

Kurtis David Harder’s Influencers Sequel Tackles AI, Deepfakes | Entertainment Geekly Veteran horror filmmaker Kurtis David Harder is positioning his new feature, Influencers, as a genre workout with timely bite, leaning into fears around social media and artificial intelligence as…

Kurtis David Harder is back with an Influencer sequel diving into AI and deepfakes. We’re in. www.entertainmentgeekly.com/2025/12/05/k...

#InfluencerMovie #KurtisDavidHarder #HorrorThriller #AIDeepfakes

You should own your own face: Pocock launches AI deepfake bill Australians who share AI deepfakes of another person without consent could be sued or fined under a new bill.

Could sharing AI deepfakes lead to emotional damages? A new bill in Australia aims to hold culprits accountable! #AIDeepfakes

www.abc.net.au/news/2025-11-24/victims-...

Deepfake of Finance Minister Lures Bengaluru Homemaker into ₹43.4 Lakh Trading Scam

A deceptive social media video that appeared to feature Union Finance Minister Nirmala Sitharaman has cost a Bengaluru woman her life's savings. The 57-year-old homemaker from East Bengaluru lost ₹43.4 lakh after being persuaded by an artificial intelligence-generated deepfake that falsely claimed the minister was recommending an online trading platform promising high profits.

Investigators say the video, which circulated on Instagram in August, directed viewers to an external link where users were encouraged to sign up for investment opportunities. Believing the message to be authentic, the woman followed the link and entered her personal information, which was later used to contact her directly. The next day, a man identifying himself as Aarav Gupta reached out to her through WhatsApp, claiming to represent the company shown in the video. He invited her to a large WhatsApp group titled "Aastha Trade 238", which appeared to host over a hundred participants discussing stock trades. Another contact, who introduced herself as Meena Joshi, soon joined the conversation, offering to help the victim learn how to use the firm's trading tools.

Acting on their guidance, the homemaker downloaded an application called ACSTRADE and created an account. Meena walked her through the steps of linking her bank details, assuring her that the platform was reliable. The first transfer of ₹5,000 was made soon after, and to her surprise, the app began displaying what looked like real profits. Encouraged by what appeared to be rapid returns, she made larger investments. The application showed her initial ₹1 lakh growing into ₹2 lakh, and a later ₹5 lakh transfer seemingly yielding ₹8 lakh. The visual proof of profit strengthened her trust, and she kept transferring higher amounts.

In September, problems surfaced. While exploring an "IPO feature" on the app, she tried to exit but was unable to do so due to recurring technical errors. When she sought help, Meena advised her to continue investing to prevent losses. The woman followed this advice, transferring a total of ₹23 lakh in hopes of recovering her funds. Once her savings were exhausted, the scammers proposed a loan option within the same app, claiming it would help her maintain her trading record. When she attempted to withdraw money, the platform denied the request, displaying a message stating her loan account was still active. Believing the issue could be resolved with more funds, she pawned her gold jewellery at a bank and a finance company, wiring additional money to the fraudsters. By late October, her total transfers had reached ₹43.4 lakh across 13 separate transactions between September 24 and October 27.

The deception came to light only when her bank froze her account on November 1, alerting her that unusual activity had been detected. The East Cybercrime Police Station has since registered a case under the Information Technology Act and Section 318 of the Bharatiya Nyaya Sanhita, which addresses cheating. Officers confirmed that the fraudulent video used sophisticated AI tools to mimic the minister's voice and gestures convincingly, making it difficult for untrained viewers to identify as fake.

Police officials have urged the public to remain alert to deepfake-driven scams that exploit public trust in well-known personalities. They advise verifying any financial offer through official government portals or trusted news sources, and avoiding unfamiliar links on social media. Experts warn that such crimes signal a new wave of cyber fraud, in which manipulated media is used to build false credibility. Citizens are advised never to disclose personal or banking information through unverified links, and to immediately report suspicious investment schemes to their banks or local cybercrime authorities.

Deepfake of Finance Minister Lures Bengaluru Homemaker into ₹43.4 Lakh Trading Scam #AIDeepfakes #Bengaluru #CyberCrime

It’s Getting Harder To Know What’s Real (YouTube video by StarTalk)

Be warned, this video starts out with an AI generated Neil deGrasse Tyson talking about Flat Earth... but it's only there to prove a point, which the real Neil debunks.

#AIDeepFakes #Don'tUseAI


That Video Is NOT Real ‼️ #thejaampodcast 
#aideepfakes #generativeai #podcastclips


What is the Sora App and How Does It Work? #AIdeepfakes #digitalethics #disinformation #generativeAI #onlinetrust #OpenAI #Sora #texttovideo
pintiu.com/sora-app-sho...
