
AI Fraud Emerges as a Growing Threat to Consumer Technology


 

The advent of generative AI has ushered in a paradigm shift in cybersecurity, transforming the tactics, techniques, and procedures that malicious actors have relied on for years. Threat actors no longer need to invest heavily in time, money, or specialist resources; they can now use generative AI to launch sophisticated attacks with unprecedented pace and efficiency. 

With these tools, cybercriminals can scale their operations dramatically while lowering the technical and financial barriers to entry, crafting highly convincing phishing emails and automating malware development. This rapid expansion of the threat landscape poses a serious challenge to cybersecurity professionals. 

Older defence mechanisms and threat models may no longer be sufficient in an environment where attackers continuously adapt with AI-driven precision. Security teams therefore need to keep up with current trends in AI-enabled threats, understand historical attack patterns, and extract actionable insights from them in order to stay ahead of the curve.

By learning from previous incidents and anticipating how generative AI will be used next, organisations can improve their readiness to detect, defend against, and respond to this new breed of intelligent cyber threat. The need for proactive, AI-aware cybersecurity strategies has never been more urgent. In India, the rapid growth of the digital economy, supported by platforms such as UPI for seamless payments and Digital India for accessible e-governance, has been accompanied by increasingly complex cyber threats and a corresponding rise in cybercrime. 

While these technological advances provide significant convenience and economic opportunity, they have also exposed users to a new generation of risks driven by artificial intelligence (AI). Where AI was once primarily a tool for innovation and efficiency, cybercriminals now use it to carry out highly customised, scalable, and deceptive attacks. 

Unlike traditional scams, AI-enabled threats can mimic human behaviour, produce realistic messages, and adapt to targets in real time. With these capabilities, a malicious actor can create phishing emails that closely mimic official correspondence, use deepfakes to deceive the public, and automate large-scale scams at alarming speed. 

The impact is particularly severe in India, where millions of users, many of them online for the first time, may lack the awareness or tools to detect such sophisticated attacks. With global cybercrime losses expected to reach trillions of dollars over the next decade, India's digitally active population is an increasingly attractive target. 

The country's rapid adoption of technology, combined with uneven digital literacy, has made AI-powered fraud increasingly common. It is therefore imperative that government agencies, private businesses, and individuals coordinate their efforts to understand the evolving threat landscape and develop robust, AI-aware cybersecurity strategies. 

Artificial Intelligence (AI) is the branch of computer science concerned with building systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, problem-solving, perception, and language understanding. At its simplest, AI involves developing algorithms and computational models that can process huge amounts of data, identify meaningful patterns, adapt to new inputs, and make decisions with minimal human intervention. 

AI helps machines emulate cognitive functions such as recognising speech, interpreting images, comprehending natural language, and predicting outcomes, enabling them to automate work, improve efficiency, and solve complex real-world problems. Its applications extend across a wide variety of industries, from healthcare and finance to manufacturing, autonomous vehicles, and cybersecurity. Within this broader field, Machine Learning (ML) is a crucial subset that enables systems to learn and improve from experience without being explicitly programmed for every possible scenario. 

ML algorithms analyse data, identify patterns, and refine themselves over time in response to feedback, becoming more accurate as they go. Deep Learning (DL), a more advanced subset of machine learning, uses layered neural networks loosely modelled on the human brain and excels at processing unstructured data such as images, audio, and natural language. As a result, technologies like facial recognition, autonomous driving, and conversational AI are powered by deep learning. A minimal example of this learn-from-data workflow is sketched below. 
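The following is a minimal, illustrative Python sketch of the "learn from data, then generalise" idea described above. The dataset (scikit-learn's built-in handwritten digits) and the model choice are assumptions made purely for demonstration; they are not drawn from the article.

```python
# Minimal illustration of the ML workflow: fit a model on labelled data,
# then check how well the learned patterns transfer to unseen examples.
# Dataset and model choices here are illustrative assumptions only.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small labelled dataset (8x8 images of handwritten digits).
X, y = load_digits(return_X_y=True)

# Hold out a test set so we measure generalisation, not memorisation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# "Learning from experience": the model infers patterns from the training data.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Accuracy on unseen data shows how well those patterns generalise.
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```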

ChatGPT is one of the best-known examples of deep learning in action: it uses large-scale language models to understand user queries and respond in a human-like way. As these technologies continue to evolve, their impact across sectors is growing rapidly and offering immense benefits, but they also present new vulnerabilities that cybercriminals are increasingly eager to exploit for profit. 

The rise of generative AI, especially large language models (LLMs), has significantly changed the fraud landscape, providing powerful tools for defending against fraud as well as new opportunities for exploitation. While these technologies enhance the ability of security teams to detect and mitigate threats, they also allow cybercriminals to devise sophisticated fraud schemes that bypass conventional safeguards and conceal their true identity. 

Fraudsters are increasingly using generative AI to craft attacks that are both more persuasive and harder to detect. Phishing attacks built with AI have risen sharply: language models generate emails and messages that mimic the tone, structure, and branding of legitimate communications, eliminating the poor grammar and suspicious formatting that were once telltale signs of a scam. 

A related development is the deployment of deepfake technology, including voice cloning and video manipulation, to impersonate trusted individuals, enabling social engineering attacks that are both persuasive and difficult to dismiss. Attackers can also automate at scale, using generative AI to target multiple victims simultaneously, customise messages, and tweak their tactics in real time. 

This scalability makes fraudulent campaigns both more effective and more widespread. AI also gives bad actors sophisticated evasion techniques: they can create synthetic identities, manipulate behavioural biometrics, and adapt rapidly to new defences, making them difficult to detect. At the same time, the very AI technologies that fraudsters exploit are also used by cybersecurity professionals to strengthen defences.

Security teams are using generative models to identify anomalies in real time, establishing dynamic baselines of normal behaviour and flagging deviations that may signal fraud. Synthetic data generation also allows realistic, anonymised datasets to be created for training more accurate and robust fraud detection systems, particularly for spotting unusual or emerging fraud patterns. A simplified sketch of the baseline-and-deviation idea appears below. 
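To make the baseline-and-deviation idea concrete, here is a deliberately simple Python sketch: a rolling window of recent transaction amounts defines a dynamic baseline, and anything far outside it is flagged. The data, window size, and threshold are made-up assumptions; real fraud detection systems are far more elaborate, but the core comparison against a learned baseline is the same.

```python
# Illustrative baseline-and-deviation anomaly detection.
# A rolling window of recent transaction amounts defines a dynamic baseline;
# values far outside that baseline are flagged for review.
# All data and thresholds below are invented for demonstration.
import statistics
from collections import deque

def flag_anomalies(amounts, window=20, z_threshold=3.0):
    """Yield (index, amount, z_score) for transactions that deviate strongly
    from the rolling baseline of the previous `window` transactions."""
    baseline = deque(maxlen=window)
    for i, amount in enumerate(amounts):
        if len(baseline) >= window:
            mean = statistics.fmean(baseline)
            stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
            z = (amount - mean) / stdev
            if abs(z) > z_threshold:
                yield i, amount, round(z, 2)
        baseline.append(amount)

# Example: mostly routine card spending with one outsized transfer.
history = [42, 18, 55, 30, 27, 61, 45, 38, 22, 50,
           33, 47, 29, 58, 41, 36, 52, 24, 60, 39,
           44, 5200, 31]   # 5200 is the obvious deviation
for idx, amt, z in flag_anomalies(history):
    print(f"transaction #{idx} of {amt} flagged (z-score {z})")
```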

In investigations, AI enables analysts to rapidly sift through massive datasets and surface critical connections, patterns, and outliers that might otherwise go undetected. Adaptive defence systems, AI-driven platforms that learn and evolve in response to new threat intelligence, help ensure that fraud prevention strategies remain resilient even as attacker tactics change. In short, generative AI has been integrated into both the offensive and defensive sides of fraud, ushering in a fundamental shift in digital risk management. 

As the technology advances, fraud prevention will increasingly depend on organisations understanding and using AI themselves, both to anticipate emerging threats and to stay several steps ahead of those looking to exploit them. Even as AI becomes more deeply embedded in daily life and business operations, the risks arising from its misuse and its own vulnerabilities cannot be ignored. 

As AI technologies continue to evolve, individuals and organisations alike should adopt a comprehensive, proactive cybersecurity strategy tailored to the challenges they face. Regularly auditing AI systems is a fundamental first step: organisations must evaluate the trustworthiness, security posture, and privacy implications of these technologies, whether they rely on third-party platforms or internally developed models. 

Organisations should conduct periodic system reviews, penetration tests, and vulnerability assessments in cooperation with cybersecurity and AI specialists to identify weaknesses and minimise potential threats. Sensitive and personal information must also be handled responsibly: a growing number of individuals unintentionally share confidential information with AI platforms without understanding the ramifications.

There have already been cases of corporate employees submitting proprietary information to AI-powered tools such as ChatGPT, and of healthcare professionals disclosing patient information, both of which raise serious concerns about data privacy and regulatory compliance. Because AI interactions may be recorded to improve the underlying systems, users should avoid sharing personal, confidential, or regulated information on such platforms. 

Data security is another important aspect of AI modelling. The integrity of training data is vital to how an AI system behaves, and any manipulation of it, known as "data poisoning", can corrupt outputs and harm users. The risk of data loss and corruption can be mitigated through strong data governance policies, robust encryption, enforced access controls, and comprehensive backup solutions; a simple file-integrity check in this spirit is sketched below. 
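As one small, concrete example of the governance controls mentioned above, the Python sketch below records a hash of every approved training file and re-checks those hashes before training, so silent tampering is caught early. This is an illustrative assumption of how such a control might look, not a prescribed method from the article; the directory and manifest names are placeholders.

```python
# Illustrative integrity check for training data: hash every dataset file and
# compare against a previously recorded manifest so that silent tampering
# (one route to data poisoning) is caught before training starts.
# File paths and the manifest name are placeholder assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record the current hash of every file under data_dir."""
    manifest = {str(p): sha256_of(p)
                for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return the files whose contents no longer match the recorded manifest."""
    recorded = json.loads(manifest_path.read_text())
    tampered = []
    for name, expected in recorded.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != expected:
            tampered.append(name)
    return tampered

# Typical use: build the manifest when the dataset is approved,
# then verify it every time a training job starts.
# build_manifest(Path("training_data"), Path("manifest.json"))
# assert not verify_manifest(Path("manifest.json")), "training data has changed!"
```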

Firewalls, intrusion detection systems, and strong password practices further strengthen resilience. Sound software maintenance matters as well: keeping AI frameworks, applications, and supporting infrastructure patched with the latest security updates significantly reduces the probability of exploitation, and advanced antivirus and endpoint protection tools help defend against AI-driven malware and other sophisticated threats.

Adversarial training is one of the more advanced methods of hardening AI models: the model is deliberately trained on simulated attacks and unpredictable inputs so that it becomes more robust to adversarial manipulation in real-world environments. A simplified sketch of the idea is shown below. 
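The following is a minimal, illustrative PyTorch sketch of one common form of adversarial training (the fast gradient sign method, FGSM): each batch is perturbed in the direction that most increases the loss, and the model is then trained on those perturbed examples. The model architecture, synthetic data, and epsilon value are placeholder assumptions for demonstration only.

```python
# Illustrative adversarial training loop (FGSM-style).
# Each batch is nudged in the direction that most increases the loss, and the
# model is trained on the perturbed batch so it learns to resist small
# adversarial changes. Model, data, and epsilon are placeholder assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Return x shifted by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Placeholder model and synthetic data, just to make the sketch runnable.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x_batch = torch.randn(32, 20)            # 32 samples, 20 features
y_batch = torch.randint(0, 2, (32,))     # binary labels

for epoch in range(5):
    # Craft adversarial versions of the batch, then train on them.
    x_adv = fgsm_perturb(model, loss_fn, x_batch, y_batch)
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y_batch)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: adversarial loss {loss.item():.4f}")
```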

Alongside these technological safeguards, employee awareness and preparedness are crucial. Employees need to be taught to recognise AI-generated phishing attacks, avoid unsafe software downloads, and respond effectively to new threats as they arise, and AI experts can be consulted to keep training programmes up to date and aligned with the latest threat intelligence. 

Another important practice is AI-specific vulnerability management: continuously identifying, assessing, and remediating security vulnerabilities in AI systems. By reducing their attack surface, organisations lower the likelihood of breaches that exploit the complex architecture of AI. Finally, even with robust defences, incidents can still occur, so a clear plan for handling AI-related incidents is essential. 

A good AI incident response plan should cover containment protocols, investigation procedures, communication strategies, and recovery efforts, so that damage is minimised and operations are restored as quickly as possible after an AI-driven cyber incident. Adopting these multilayered security practices is critical for businesses to maintain user trust, ensure compliance, and guard against the sophisticated threats emerging in an AI-driven cyber landscape, at a time when AI is both a transformative force and a potential risk vector. 

As artificial intelligence continues to reshape the technological landscape, all stakeholders must address the risks that come with it. Business leaders, policymakers, and cybersecurity experts need to work together to develop comprehensive governance frameworks that balance innovation with security, and cultivating a culture of continuous learning and vigilance among users will greatly reduce the vulnerabilities that increasingly sophisticated AI-driven attacks can exploit.

Building resilient cyber defences will require investment in adaptive technologies that evolve as threats do, while maintaining ethical standards and transparency. Securing the benefits of AI ultimately depends on a forward-looking, integrated approach that combines technological advancement with rigorous risk management to protect digital ecosystems both today and in the future.

Generative AI Fuels Identity Theft, Aadhaar Card Fraud, and Misinformation in India

 

A disturbing trend is emerging in India’s digital landscape as generative AI tools are increasingly misused to forge identities and spread misinformation. One user, Piku, revealed that an AI platform generated a convincing Aadhaar card using only a name, birth date, and address—raising serious questions about data security. While AI models typically do not use real personal data, the near-perfect replication of government documents hints at training on real-world samples, possibly sourced from public leaks or open repositories. 

This AI-enabled fraud isn’t occurring in isolation. Criminals are combining fake document templates with authentic data collected from discarded paperwork, e-waste, and old printers. The resulting forged identities are realistic enough to pass basic checks, enabling SIM card fraud, bank scams, and more. What started as tools for entertainment and productivity now poses serious risks. Misinformation tactics are evolving too. 

A recent incident involving playback singer Shreya Ghoshal illustrated how scammers exploit public figures to push phishing links. These fake stories led users to malicious domains targeting them with investment scams under false brand names like Lovarionix Liquidity. Cyber intelligence experts traced these campaigns to websites built specifically for impersonation and data theft. The misuse of generative AI also extends into healthcare fraud. 

In a shocking case, a man impersonated renowned cardiologist Dr. N John Camm and performed unauthorized surgeries at a hospital in Madhya Pradesh. At least two patient deaths were confirmed between December 2024 and February 2025. Investigators believe the impersonator may have used manipulated or AI-generated credentials to gain credibility. Cybersecurity professionals are urging more vigilance. CertiK founder Ronghui Gu emphasizes that users must understand the risks of sharing biometric data, like facial images, with AI platforms. Without transparency, users cannot be sure how their data is used or whether it’s shared. He advises precautions such as using pseudonyms, secondary emails, and reading privacy policies carefully—especially on platforms not clearly compliant with regulations like GDPR or CCPA. 

A recent HiddenLayer report revealed that 77% of companies using AI have already suffered security breaches. This underscores the need for robust data protection as AI becomes more embedded in everyday processes. India now finds itself at the center of an escalating cybercrime wave powered by generative AI. What once seemed like harmless innovation now fuels identity theft, document forgery, and digital misinformation. The time for proactive regulation, corporate accountability, and public awareness is now—before this new age of AI-driven fraud becomes unmanageable.

Colorado Faces Growing Financial Losses from AI-Powered Scams in 2024

 

Colorado is on track to suffer even greater financial losses from scams by the end of 2024 compared to the nearly $100 million stolen in 2023. According to the Colorado Attorney General's Office, the rapid integration of artificial intelligence (AI) into everyday life may be driving this increase.

Gone are the days when misspelled words, unprofessional websites, and suspicious email domains were telltale signs of scams. With AI, criminals now replicate the voices of loved ones to stage fake emergencies, tricking victims into sharing money or sensitive information. "Artificial intelligence takes existing scam opportunities and puts them on steroids," said Colorado Attorney General Phil Weiser.

In 2023, the FBI Denver Field Office reported that scammers stole $187 million from nearly 11,500 residents in Colorado—an increase of $9 million compared to 2022. Investment fraud ($50 million), business email compromise ($57 million), and tech support scams ($23 million) were the top schemes contributing to these losses.

Weiser's office received a record-breaking 20,390 fraud complaints in 2023, up from 19,519 in 2019, reflecting a growing trend. Colorado now ranks seventh nationwide for scam complaints per capita. Many of these cases were reported through StopFraudColorado.com, a platform providing fraud education and reporting tools.

One alarming scam, known as the "grandparent scam," highlights how scammers use AI to imitate a grandchild's voice. The victim is told their grandchild is in jail abroad and needs money urgently. "One of the scary parts is many people have a hard time understanding the difference between deepfakes and reality," Weiser said. He advises skepticism: "Don't trust those calls. Hang up and verify the information with the appropriate source."

Younger internet users and older adults are particularly vulnerable. Weiser added, "AI is not new, but the widespread use of tools like ChatGPT has taken adoption to a new level."

Austin Hastings, assistant vice president at Alpine Bank, noted that scammers adapt their strategies once people stop falling for certain tricks. Recent scams involve AI-generated phishing emails and websites that convincingly mimic legitimate organizations.

To combat scams, Alpine Bank suggests:

  • Avoid clicking on unexpected links. Use verified websites or saved URLs.
  • Never share financial information or passwords over email or phone.
  • Beware of too-good-to-be-true deals and stick to trusted retailers.
  • Monitor bank accounts regularly for suspicious charges.
  • Report fraudulent activity to authorities promptly.

Separately, the Colorado Privacy Act, enacted in 2021, provides residents with tools to protect their data, such as opting out of targeted advertising, and requires entities to safeguard personal information.

"It's a dangerous world out there, and AI is making it more dangerous," Weiser warned. "Please protect yourself and those you love."

Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

 

In a digital landscape fraught with evolving threats, the marriage of artificial intelligence (AI) and cybercrime has become a potent concern. Recent revelations from Microsoft and OpenAI underscore the alarming trend of malicious actors harnessing large language models (LLMs) to bolster their cyber operations. 

The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare. According to Microsoft's latest research, groups like Strontium, also known as APT28 or Fancy Bear, notorious for their role in high-profile breaches including the hacking of Hillary Clinton’s 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies. 

Their utilization spans from deciphering satellite communication protocols to automating technical operations through scripting tasks like file manipulation and data selection. This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas. The Thallium group from North Korea and Iranian hackers of the Curium group have followed suit, utilizing LLMs to bolster their capabilities in researching vulnerabilities, crafting phishing campaigns, and evading detection mechanisms. 

Similarly, Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally. While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape. Swift action to shut down associated accounts and assets coupled with collaborative efforts to share intelligence with the defender community are crucial steps in mitigating the risks posed by AI-enabled cyberattacks. 

The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities. Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example where even short voice samples can be utilized to create convincing impersonations. This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities. 

In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft spearheads efforts to harness AI for defensive purposes. The development of a Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to empower defenders in identifying breaches and navigating the complexities of cybersecurity data. Additionally, Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats. 

The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve. The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats. By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.

AI Scams: When Your Child's Voice Isn't Their Own

 

A new breed of fraud has recently surfaced, preying on unwary victims with cutting-edge artificial intelligence. A particularly alarming development is the use of AI-generated voice calls, in which con artists imitate a child's voice to convince parents they are speaking with their own child, only for the call to turn out to be an AI hoax.

These AI fraud calls are a growing problem for law enforcement agencies and families around the world. The con artists use advanced AI speech technology to imitate a child's voice and convince parents that their child is in distress and needs money immediately.

Numerous high-profile incidents have been reported, drawing attention and leaving parents feeling exposed and uneasy. One mother described a frightening call in which what sounded like her teenage daughter claimed to be caught up in a kidnapping. In a panic, and desperate to protect her child, she paid the con artists a sizeable sum of money, only to learn later that the voice was AI-generated and that her daughter had been safe the entire time.

The widespread reporting of these incidents underlines the urgent need for awareness-raising and preventative action. To understand how these frauds work, it is crucial to realise that AI-generated voices have reached a remarkable level of sophistication and are now almost indistinguishable from real human voices. Fraudsters use this technology to manipulate emotions, relying on parents' natural desire to protect their children at all costs.

In response to the growing concern, technology companies and law enforcement agencies are collaborating to fight these AI scams. One approach is improving voice recognition software so it can more accurately identify AI-generated audio. Staying a step ahead of the schemes is difficult, however, because con artists are constantly changing their strategies.

Experts stress the importance of staying vigilant and taking proactive steps to protect oneself and loved ones from such fraud. If parents receive an unexpected call asking for money, especially under upsetting circumstances, it is essential to verify the caller's identity through other means, for example by contacting the child directly or asking a trusted relative or friend to check on them.

Children must be taught about AI scams in order to avoid accidentally disclosing personal information that scammers could use against them. Parents should talk to their children about the dangers of giving out personal information over the phone or online and highlight the need to always confirm a caller's identity, even if they seem familiar.

Technology is always developing, creating both opportunities and risks, and the rise of AI fraud shows how readily scammers exploit new techniques to target people's vulnerabilities. Technology companies, law enforcement, and individuals all need to work together to combat these scams, and individuals must remain informed, careful, and proactive to keep themselves and their loved ones from falling victim.

5 Tips to Protect Yourself from Deepfake Crimes

The rise of deepfake technology has ushered in a new era of concern and vulnerability for individuals and organizations alike. Recently, the Federal Bureau of Investigation (FBI) issued a warning regarding the increasing threat of deepfake crimes, urging people to take precautionary measures to protect themselves. To help you navigate this evolving landscape, experts have shared valuable tips to safeguard against the dangers of deepfakes.

Deepfakes are highly realistic manipulated videos or images that use artificial intelligence (AI) algorithms to replace a person's face or alter their appearance. These can be used maliciously to spread disinformation, defame individuals, or perpetrate identity theft and fraud. With the potential to deceive even the most discerning eye, deepfakes pose a significant threat to personal and online security.

Tip 1: Stay Informed and Educated

Keeping yourself informed about the latest advancements in deepfake technology and the potential risks it poses is essential. Stay updated on the techniques used to create deepfakes and the warning signs to look out for. Trusted sources such as the FBI's official website, reputable news outlets, and cybersecurity organizations can provide valuable insights and resources.

Tip 2: Be Vigilant and Verify

When encountering media content, especially if it seems suspicious or controversial, be vigilant and verify its authenticity. Scrutinize the source, cross-reference information from multiple reliable sources, and fact-check before accepting something as true. Additionally, scrutinize the video or image itself for any anomalies, such as inconsistent lighting, unnatural facial movements, or mismatches in lip-syncing.

Tip 3: Strengthen Online Security

Enhancing your online security measures can help protect you from falling victim to deepfake-related crimes. Utilize strong and unique passwords for your accounts, enable two-factor authentication, and regularly update your devices and software. Be cautious when sharing personal information online and be aware of phishing attempts that may exploit deepfake technology.

Tip 4: Foster Digital Literacy and Critical Thinking

Developing digital literacy skills and critical thinking is crucial in navigating the deepfake landscape. Teach yourself and others how to spot deepfakes, understand their implications, and discern between real and manipulated content. By fostering these skills, you can minimize the impact of deepfakes and contribute to a more informed and resilient society.

Tip 5: Report and Collaborate

If you come across a deepfake or suspect malicious use of deepfake technology, report it to the relevant authorities, such as the FBI's Internet Crime Complaint Center (IC3) or local law enforcement agencies. Reporting such incidents is vital in combatting deepfake crimes and preventing further harm. Additionally, collaborate with researchers, technology developers, and policymakers to drive innovation and develop effective countermeasures against deepfakes.

Deepfake crimes are becoming more dangerous, so a proactive and informed approach matters. By staying informed and alert, bolstering their online security, promoting digital literacy, and reporting incidents, people can improve their own security and help reduce the hazards deepfakes pose. As the technology develops, staying agile and knowledgeable is crucial to keeping a step ahead of those who would misuse it.

Kidnapping Scam Implicates AI Cloning

 


Artificial intelligence (AI) has gained significant traction as OpenAI's ChatGPT and other companies' AI offerings reach consumers. Major technology companies, including Google, Microsoft, and Meta, appear to be investing heavily and concentrating their efforts on AI.

A woman recently described her experience with AI-based fraud in a Facebook post. She recommends that people protect themselves against similar incidents by agreeing on a secret family word or question known only to family members, so they can verify that a caller is genuine and not an automated impersonation, and she urged others to share the story on social media to spread the word. 

In the last few years, AI tools have made it possible for scammers to exploit human trust and emotion to steal millions of dollars. In this case, an organised group of fraudsters used a cloned voice to convince a girl's mother that her daughter had been kidnapped. 

According to reports from WKYT, a CBS News-affiliated US news outlet, an Arizona woman named Jennifer DeStefano received a call from an unknown number a few days earlier. In an interview with the outlet, DeStefano explained that her 15-year-old daughter was away on a skiing trip at the time of the call. 

When DeStefano picked up the phone, the first thing she heard was what sounded like her daughter crying and sobbing, calling for her help: "Mom, these criminal men have me. Help me, help me." 

A man then got on the phone. "He says, 'Listen, listen to this. I've got your daughter,'" DeStefano recounted, explaining that the man went on to describe exactly how the situation would unfold. 

The man demanded a ransom of USD 1 million to release the teenager. When DeStefano told the 'kidnapper' she did not have that much money, he lowered the demand to USD 50,000. 

The man went on to say, "I am planning to have my way with her and drop her off in Mexico," and at that moment, DeStefano recalled, "I just started shaking." In the background she could hear what sounded like her daughter yelling, "Help me, Mom! Please help me. Help me," and bawling.

DeStefano said she was at a dance studio, surrounded by other mothers, when she picked up the call. 

One call was placed to 911 and another to DeStefano's husband, and within minutes it was confirmed that her teenage daughter was safe on her skiing trip. She maintained, however, that the voice on the phone had sounded exactly like her daughter's. 

In an interview with NBC 15, she said she had been genuinely convinced that her daughter was on the line, not an AI-generated voice.  

According to Subbarao Kambhampati, a computer science professor and artificial intelligence expert at Arizona State University, voice cloning initially required a large number of samples, but the technology has advanced to the point where a close approximation of how someone sounds can now be generated from just three seconds of audio. 

The professor added that, given a large enough voice sample, AI can even mimic accents and emotions. 

According to a post on DeStefano's Facebook page, she was particularly unnerved by the voice simulation because her daughter Brie has no public social media accounts through which her voice could be harvested and barely uses social media at all. 

"In regards to Brie's voice, she has several interviews which she does for sports/school, in which a significant portion is her own." Brie's mother explained. "Children with public accounts should, however, be extra cautious. This should be taken very seriously."

FBI experts warn that fraudsters often find their targets through social media. Police are investigating the incident, but the fraudsters have not yet been identified or apprehended.