
AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 
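One practical step, alongside the measures above, is stripping location metadata from photos before they are shared. The short sketch below is a minimal illustration using the Pillow imaging library; the file names are placeholders. It re-saves an image with its pixel data only, discarding EXIF tags such as embedded GPS coordinates:

    # strip_exif.py - remove EXIF metadata (including GPS tags) before sharing a photo
    from PIL import Image  # pip install Pillow

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-save an image with pixel data only, dropping EXIF/GPS metadata."""
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))  # copies pixels, not metadata
            clean.save(dst_path)

    if __name__ == "__main__":
        # Placeholder file names for illustration
        strip_metadata("family_photo.jpg", "family_photo_clean.jpg")

Many phones offer a similar "remove location" toggle when sharing; the point is the same either way: what gets posted is the photo, not its hidden metadata.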

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

Bangladesh’s Deepfake Challenge: Why New Laws Aren’t Enough

 


Bangladesh has taken a big step to protect its people online by introducing the Cyber Security Ordinance 2025. This law updates the country’s approach to digital threats, replacing the older and often criticized 2023 act. One of its most important changes is that it now includes crimes that involve artificial intelligence (AI). This makes Bangladesh the first South Asian country to legally address this issue, and it comes at a time when digital threats are growing quickly.

One of the most dangerous AI-related threats today is deepfakes. These are fake videos or audio recordings that seem completely real. They can be used to make it look like someone said or did something they never did. In other countries, such as the United States and Canada, deepfakes have already been used to mislead voters and damage reputations. Now, Bangladesh is facing a similar problem.

Recently, fake digital content targeting political leaders and well-known figures has been spreading online. These false clips spread faster than fact-checkers can respond. A few days ago, a government adviser warned that online attacks and misinformation are becoming more frequent as the country gets closer to another important election.

What makes this more worrying is how easy deepfake tools have become to access. In the past, only people with strong technical skills could create deepfakes. Today, almost anyone with internet access can do it. For example, a recent global investigation found that a Canadian hospital worker ran a large website full of deepfake videos. He had no special training, yet caused serious harm to innocent people.

Experts say deepfakes are successful not because people are foolish, but because they trick our emotions. When something online makes us feel angry or shocked, we’re more likely to believe it without questioning.

To fight this, Bangladesh needs more than new laws. People must also learn how to protect themselves. Schools should begin teaching students how to understand and question online content. Public campaigns should be launched across TV, newspapers, radio, and social media to teach people what deepfakes are and how to spot them.

Young volunteers can play a big role by spreading awareness in villages and small towns where digital knowledge is still limited. At the same time, universities and tech companies in Bangladesh should work together to create tools that can detect fake videos and audio clips. Journalists and social media influencers also need training so they don’t unknowingly spread false information.

AI can be used to create lies, but it can also help us find the truth. Still, the best defence is knowledge. When people know how to think critically and spot fake content, they become the strongest line of defence against digital threats.

Emerging Cybersecurity Threats in 2025: Shadow AI, Deepfakes, and Open-Source Risks

 

Cybersecurity continues to be a growing concern as organizations worldwide face an increasing number of sophisticated attacks. In early 2024, businesses encountered an alarming 1,308 cyberattacks per week—a sharp 28% rise from the previous year. This surge highlights the rapid evolution of cyber threats and the pressing need for stronger security strategies. As technology advances, cybercriminals are leveraging artificial intelligence, exploiting open-source vulnerabilities, and using advanced deception techniques to bypass security measures. 

One of the biggest cybersecurity risks in 2025 is ransomware, which remains a persistent and highly disruptive threat. Attackers use this method to encrypt critical data, demanding payment for its release. Many cybercriminals now employ double extortion tactics, where they not only lock an organization’s files but also threaten to leak sensitive information if their demands are not met. These attacks can cripple businesses, leading to financial losses and reputational damage. The growing sophistication of ransomware groups makes it imperative for companies to enhance their defensive measures, implement regular backups, and invest in proactive threat detection systems. 

Another significant concern is the rise of Initial Access Brokers (IABs), cybercriminals who specialize in selling stolen credentials to hackers. By gaining unauthorized access to corporate systems, these brokers enable large-scale cyberattacks, making it easier for threat actors to infiltrate networks. This trend has made stolen login credentials a valuable commodity on the dark web, increasing the risk of data breaches and financial fraud. Organizations must prioritize multi-factor authentication and continuous monitoring to mitigate these risks. 
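To illustrate what multi-factor authentication adds on top of a password, here is a minimal sketch of time-based one-time password (TOTP) verification along the lines of RFC 6238, using only Python's standard library. The base32 secret shown is a placeholder, not a real credential, and a production system would also handle clock drift:

    # totp_check.py - minimal TOTP (RFC 6238) verification with the standard library
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, timestep: int = 30, digits: int = 6) -> str:
        """Derive the current one-time code from a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // timestep)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def verify(secret_b32: str, submitted: str) -> bool:
        """Compare the submitted code in constant time."""
        return hmac.compare_digest(totp(secret_b32), submitted)

    # Placeholder secret, as an authenticator app would be provisioned with:
    print(verify("JBSWY3DPEHPK3PXP", totp("JBSWY3DPEHPK3PXP")))  # True

Even if an initial access broker sells a valid username and password, a login guarded this way still fails without the current six-digit code.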

A new and rapidly growing cybersecurity challenge is the use of unauthorized artificial intelligence tools, often referred to as Shadow AI. Employees frequently adopt AI-driven applications without proper security oversight, leading to potential data leaks and vulnerabilities. In some cases, AI-powered bots have unintentionally exposed sensitive financial information due to default settings that lack robust security measures. As AI becomes more integrated into workplaces, businesses must establish clear policies to regulate its use and ensure proper safeguards are in place.

Deepfake technology has also emerged as a major cybersecurity threat. Cybercriminals are using AI-generated deepfake videos and audio recordings to impersonate high-ranking officials and deceive employees into transferring funds or sharing confidential data.

A recent incident involved a Hong Kong-based company losing $25 million after an employee fell victim to a deepfake video call that convincingly mimicked their CFO. This alarming development underscores the need for advanced fraud detection systems and enhanced verification protocols to prevent such scams.

Open-source software vulnerabilities are another critical concern. Many businesses and government institutions rely on open-source platforms, but these systems are increasingly being targeted by attackers. Cybercriminals have infiltrated open-source projects, gaining the trust of developers before injecting malicious code.

A notable case involved a widely used Linux tool where a contributor inserted a backdoor after gradually establishing credibility within the project. If not for a vigilant security expert, the backdoor could have remained undetected, potentially compromising millions of systems. This incident highlights the importance of stricter security audits and increased funding for open-source security initiatives. 

To address these emerging threats, organizations and governments must take proactive measures. Strengthening regulatory frameworks, investing in AI-driven threat detection, and enhancing collaboration between cybersecurity experts and policymakers will be crucial in mitigating risks. The cybersecurity landscape is evolving at an unprecedented pace, and without a proactive approach, businesses and individuals alike will remain vulnerable to increasingly sophisticated attacks.

How to Protect Your Small Business from Cyber Attacks

 


October was International Cybersecurity Awareness Month, and small businesses across Australia were once again preparing to defend themselves against malicious campaigns. While cybercrime is growing both in Australia and around the world, one group remains a disproportionately frequent target: smaller businesses. Below is some basic information every small business owner should know to fortify their position.

Protect Yourself from Phishing and Scamming

One of the most dangerous threats facing small businesses today is phishing, in which attackers pose as trusted sources to dupe people into clicking malicious links or sharing sensitive information. According to Mark Knowles, General Manager of Security Assurance at Xero, cybercriminals use several forms of phishing, including "vishing," which relies on voice calls, and "smishing," which relies on text messages. These deceptive tactics pressure users into responding to malicious messages, often resulting in significant financial losses.

Countering phishing starts with taking a moment to think before responding to any unfamiliar message or link. Pausing to judge whether a message looks suspicious can avert the worst outcomes; as Knowles warns, just a few extra seconds spent verifying could spare a business an expensive error.

Prepare for Emerging AI-driven Threats Like Deepfakes

The emergence of AI has added new complications to cybersecurity. Deepfakes, fake audio and video produced using AI, make it increasingly difficult to distinguish what is real from what is manipulated. This poses serious risks, as attackers can masquerade as trusted colleagues or even executives to persuade employees to transfer money.

Knowles cites a case in Hong Kong where the technology was used to cheat a finance employee out of $25 million. The case highlights the need to verify identities in high-pressure situations; even dialling a known phone number to confirm a request can save someone from falling victim to this highly sophisticated fraud.

Develop a Culture of Cybersecurity

Even in a small team, a security-aware culture is an excellent line of defence. Small business owners can hold regular sessions with their teams to analyse examples of attempted phishing and discuss how to recognise threats. Such collective confidence and knowledge make everyone more alert and watchful.

Knowles further recommends that you network with other small business owners within your region and share your understanding of cyber threats. Having regular discussions on common attack patterns will help businesses learn from each other's experiences and build collective resilience against cybercrime.

Develop a Cyber Incident Response Plan

Small businesses typically don't have dedicated IT departments, but that does not mean they can't prepare for cyber incidents. A simple incident-response plan is crucial. It should include contact details for support: trusted IT advisors or local authorities such as CERT Australia. If an attack locks down your systems, immediate access to these contacts can speed up recovery.

In addition, agreeing on a "safe word" for sensitive communications can help employees confirm each other's identities at critical moments when digital impersonation may be in play; a minimal sketch of such a check appears below.
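For teams that confirm requests over chat or an internal tool rather than by voice, the same idea can be automated. The sketch below is a minimal illustration, assuming the agreed phrase is stored only as a salted hash; the phrase and function names are placeholders, not a prescribed scheme:

    # safeword_check.py - confirm a pre-agreed phrase without storing it in plain text
    import hashlib, hmac, os

    def enroll(phrase: str) -> tuple[bytes, bytes]:
        """Store only a salted hash of the agreed phrase, never the phrase itself."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)
        return salt, digest

    def confirm(phrase: str, salt: bytes, digest: bytes) -> bool:
        """Constant-time comparison avoids timing leaks when checking a guess."""
        candidate = hashlib.pbkdf2_hmac("sha256", phrase.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = enroll("correct horse battery staple")  # placeholder phrase
    print(confirm("correct horse battery staple", salt, digest))  # True
    print(confirm("urgent wire transfer", salt, digest))          # False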

Don't Let Shyness Get in Your Way

Embarrassment often stops organisations from disclosing an attack, which leaves them open to being targeted again and again. Knowles encourages any affected organisation to report a suspected scam immediately to its bank, government agencies, or experienced advisors to limit the fallout. Communicating the threat early helps mitigate the damage; staying silent makes it far harder to stop the firm from taking another blow.

Local business networks are valuable here too. Open communication makes the difference in acting quickly and staying well informed, helping to build a more resilient, proactive approach to cybersecurity.


AI-Driven Deepfake Scams Cost Americans Billions in Losses

 


As artificial intelligence (AI) technology advances, cybercriminals are now capable of creating sophisticated "deepfake" scams that result in significant financial losses for the companies targeted. In January 2024, an employee of a Hong Kong-based firm was instructed to send US$25 million to fraudsters during a video call that appeared to include her chief financial officer and other members of the firm.

In reality, the fraudsters had used deepfake technology to fool her into sending the money, creating imitations that replicated the likenesses of the people she believed she was on the call with. The number of scammers continues to rise, and artificial intelligence and other sophisticated tools are raising the risk of victims being scammed. According to the FBI's Internet Crime Complaint Center, Americans were swindled out of an estimated $12.5 billion online in the past year, up from $10.3 billion in 2022.

The actual toll could be much higher: during the investigation of one case, the FBI found that only 20% of the victims had reported these crimes to the authorities. Scammers continue to devise new ruses and techniques, and artificial intelligence is playing an increasingly prominent role.

According to a recent FBI analysis, 39% of victims last year were swindled using manipulated or doctored videos that misrepresented what someone did or said. These video scams have been used to perpetrate investment frauds, romance swindles, and other schemes.

In several recent instances, fraudsters have modified publicly available videos and other footage using deepfake technology in an attempt to cheat people out of their money, cases that have been widely documented in the news.

Romero also indicated that artificial intelligence could allow scammers to process much larger quantities of data and, as a result, try far more password combinations in their attempts to break into victims' accounts. For this reason, it is extremely important that users set strong passwords, change them regularly, and enable two-factor authentication on their accounts. The FBI's Internet Crime Complaint Center received more than 880,000 complaints last year from Americans who were victims of online fraud.
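To see why password length and variety matter against AI-accelerated guessing, here is a rough back-of-the-envelope comparison of brute-force search spaces; the numbers are illustrative arithmetic only, not a benchmark:

    # password_space.py - rough comparison of brute-force search spaces
    import math

    def search_space_bits(alphabet_size: int, length: int) -> float:
        """Entropy in bits of a uniformly random password: log2(alphabet ** length)."""
        return length * math.log2(alphabet_size)

    # 8 lowercase letters vs. 16 characters from ~95 printable ASCII symbols
    print(round(search_space_bits(26, 8)))   # ~38 bits - feasible to brute-force
    print(round(search_space_bits(95, 16)))  # ~105 bits - far beyond practical attack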

In fact, according to Social Catfish, 96% of all money lost in scams is never recouped, largely because most scammers live overseas, beyond the practical reach of recovery efforts. The increasing prevalence of cryptocurrency in criminal activities has made it a favoured medium for illicit transactions, particularly investment-related crimes. Fraudsters often exploit the anonymity and decentralized nature of digital currencies to orchestrate schemes that demand payment in cryptocurrency. A notable tactic involves enticing victims into fraudulent recovery programs, where perpetrators claim to help recoup funds lost in prior cryptocurrency scams, only to exploit the victims further.

The surge in such deceptive practices complicates efforts to differentiate between legitimate and fraudulent communications. Falling victim to sophisticated scams, such as those involving deepfake technology, can result in severe consequences. The repercussions may extend beyond significant financial losses to include legal penalties for divulging sensitive information and potential harm to a company’s reputation and brand integrity. 

In light of these escalating threats, organizations are being advised to proactively assess their vulnerabilities and implement comprehensive risk management strategies. This entails adopting a multi-faceted approach to enhance security measures, which includes educating employees on the importance of maintaining a sceptical attitude toward unsolicited requests for financial or sensitive information. Verifying the legitimacy of such requests can be achieved by employing code words to authenticate transactions. 

Furthermore, companies should consider implementing advanced security protocols and tools such as multi-factor authentication and encryption. Establishing and enforcing stringent policies and procedures governing financial transactions is also an essential step in mitigating exposure to fraud. Such measures can help fortify defenses against the evolving landscape of cybercrime, ensuring that organizations remain resilient in the face of emerging threats.