Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.
This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.
Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.
The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination”, which can lead to serious errors in medical records or decision-making.
The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.
Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.
Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.
Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.
On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is already in successful use and was developed in close consultation with NHS leaders.
As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.
Experts have found a rise in suspicious activity involving AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”
FINRA, Wall Street’s self-regulatory body, has warned that losses from deepfake audio and video scams in the finance sector could reach $40 billion by 2027.
Biometric safeguards alone no longer offer reliable protection. A 2024 Regula study found that 49% of businesses across industries such as fintech and banking have faced fraud attacks involving deepfakes, with average losses of $450,000 per incident.
As these numbers rise, it becomes important to understand how deepfake fraud can be prevented to protect customers and the financial industry globally.
Last year, an Indonesian bank reported more than 1,100 attempts to bypass its digital KYC loan-application process within three months, according to cybersecurity firm Group-IB.
Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures.” According to Forbes, losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”
The AI-driven face-swapping tools let attackers replace a target’s facial features with those of another person and then exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.
Scammers first gather personal data via malware, the dark web, social networking sites, or phishing scams. That data is then used to impersonate real identities.
After data acquisition, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, and re-creating entire IDs to evade KYC checks.
Threat actors then pair virtual cameras with prerecorded deepfake videos, which lets them pass security checks by simulating real-time interactions.
This highlights that traditional verification mechanisms are proving inadequate against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes.
In today’s digital world, many of us protect our online accounts using two-step verification. This process, known as multi-factor authentication (MFA), usually requires a password and an extra code, often sent via SMS, to log in. It adds an extra layer of protection, but there’s a growing concern: receiving these codes through text messages might not be as secure as we think.
Why Text Messages Aren’t the Safest Option
When you get a code on your phone, you might assume it’s sent directly by the company you’re logging into—whether it’s your bank, email, or social media. In reality, these codes are often delivered by external service providers hired by big tech firms. Some of these third-party firms have been connected to surveillance operations and data breaches, raising serious concerns about privacy and security.
Worse, these companies operate with little public transparency. Several investigative reports have highlighted how this lack of oversight puts user information at risk. Additionally, government agencies such as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) have warned people not to rely on SMS for authentication. Text messages are not encrypted, which means hackers who gain access to a telecom network can intercept them easily.
What Should You Do Instead?
Don’t ditch multi-factor authentication altogether. It’s still a critical defense against account hijacking. But you should consider switching to a more secure method—such as using an authenticator app.
How Authenticator Apps Work
Authenticator apps are programs installed on your smartphone or computer. They generate temporary codes for your accounts, typically refreshing every 30 seconds. Because these codes are generated locally on your device rather than sent over the internet or phone networks, they’re far more difficult for criminals to intercept.
Apps like Google Authenticator, Microsoft Authenticator, LastPass, and even Apple’s built-in password tools provide this functionality. Most major platforms now allow you to connect an authenticator app instead of relying on SMS.
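For the technically curious, here is a minimal Python sketch of how those rotating codes are typically derived, following the time-based one-time password (TOTP) scheme (RFC 6238) that most authenticator apps implement; the Base32 secret below is a made-up placeholder, not a real account key.

import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Derive the current code from a Base32-encoded shared secret (RFC 6238).
    key = base64.b32decode(shared_secret_b32, casefold=True)
    # The moving factor is the number of 30-second windows since the Unix epoch.
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the last nibble of the digest picks a 4-byte slice.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for illustration only; a real app stores the secret
# it receives when you scan the provider's QR code.
print(totp("JBSWY3DPEHPK3PXP"))

Because both your device and the service derive the code from the same stored secret and the current time, nothing needs to travel over SMS at login.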
Want Even Better Protection? Try Passkeys
If you want the most secure login method available today, look into passkeys. These are a newer, password-free login option developed by a group of leading tech companies. Instead of typing in a password or code, you unlock your account using your face, fingerprint, or device PIN.
Here’s how it works: your device stores a private key, while the website keeps the matching public key. When you sign in, the site sends a one-time challenge, your device signs it with the private key once you confirm your identity with a biometric scan or device PIN, and the site verifies the signature with the public key. Because no codes or passwords are involved, there’s nothing for hackers to steal or intercept.
Passkeys are also backed up to your cloud account, so if you lose your device, you can still regain access securely.
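To make the idea of matching private and public keys concrete, here is a deliberately simplified Python sketch of the challenge-response step at the heart of passkey logins, using the third-party cryptography package; real passkeys run the full WebAuthn/FIDO2 protocol with origin checks and attestation, so treat this only as an illustration of the core cryptography.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a key pair and shares only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()

# Login: the website sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it after a local biometric or PIN check (not shown)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the site verifies the signature with the stored public key.
try:
    server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Login accepted: the signature matches the registered public key.")
except InvalidSignature:
    print("Login rejected.")

Because the private key never leaves the device and each challenge is random, a stolen transcript of the exchange is useless to an attacker.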
Multi-factor authentication is essential, but how you receive your codes matters. Avoid text messages when possible. Opt for an authenticator app, or better yet, move to passkeys where available. Taking this step could be the difference between keeping your data safe and leaving it vulnerable.
In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.
The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.
The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.
This report is expected to outline:
• How financial institutions can better use AI to stop fraud before it happens,
• Ways to protect consumers from being misled by deepfake content, and
• Policy and regulatory recommendations for addressing this evolving threat.
One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.
According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.
While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.
Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.
As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.