Last week, the International Criminal Court (ICC) announced that it had detected a new, sophisticated, and targeted cybersecurity incident. Its response mechanisms and prompt discovery helped contain the attack.
The ICC did not provide details about the attackers’ intentions, the impact, any data leaks, or other compromises. According to the statement, the Court, which is headquartered in The Hague, the Netherlands, is conducting a threat assessment in the wake of the attack and taking measures to mitigate any effects.
The continued support of the nations that have ratified the Rome Statute helps the ICC maintain its capacity to carry out its mandate and commitments, a responsibility shared by all States Parties. “The Court considers it essential to inform the public and its States Parties about such incidents as well as efforts to address them, and calls for continued support in the face of such challenges,” the ICC said.
The ICC was founded in 2002 through the Rome Statute, an international treaty adopted by a coalition of sovereign states, with the aim of creating an international court that would prosecute individuals for international crimes: war crimes, genocide, crimes against humanity, and the crime of aggression. The ICC operates as a separate body from the U.N.’s International Court of Justice, which hears cases against countries rather than individuals.
In 2023, the ICC reported another cybersecurity incident. That attack was described as an act of espionage aimed at undermining the Court’s mandate, and it forced the Court to disconnect its systems from the internet.
In the past, the ICC has said that it has experienced increased security concerns as threats against its elected officials rose. “The evidence available thus far indicates a targeted and sophisticated attack with the objective of espionage. The attack can therefore be interpreted as a serious attempt to undermine the Court's mandate," the ICC said.
Notable recent arrest warrants issued by the ICC include those for Russian President Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu.
Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.
This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.
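To make the idea concrete, here is a minimal sketch of what the transcription half of such a pipeline can look like, using the open-source Whisper speech-to-text model; the file name and the summarisation placeholder are illustrative assumptions, not a description of any NHS-approved product.

```python
# Rough sketch of an ambient-voice transcription step (illustrative only).
# Assumes the open-source `openai-whisper` package and ffmpeg are installed,
# and that "consultation.wav" is a recording made with the patient's consent.
import whisper

model = whisper.load_model("base")               # small general-purpose model
result = model.transcribe("consultation.wav")    # speech-to-text
transcript = result["text"]

# A real AVT product would now summarise the transcript into clinical notes
# and apply the NHS's data-protection and clinical-safety requirements;
# printing it here simply shows the shape of the pipeline.
print(transcript)
```

The compliance concerns discussed below centre on what happens to the audio and the transcript after this step: where they are stored, who processes them, and under what safeguards.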
Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.
The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination,” which can lead to serious errors in medical records or decision-making.
The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.
Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.
Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.
Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.
On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.
As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.
Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”
FINRA, Wall Street’s self-regulatory body, has warned that deepfake audio and video scams could cause losses of $40 billion in the financial sector by 2027.
Biometric safety measures alone can no longer be relied on. A 2024 study by Regula found that 49% of businesses across industries such as fintech and banking have faced fraud attacks involving deepfakes, with average losses of $450,000 per incident.
As these numbers rise, it becomes important to understand how deepfake fraud can be prevented to protect customers and the financial industry globally.
Last year, an Indonesian bank recorded more than 1,100 attempts to bypass its digital KYC loan-application process within three months, cybersecurity firm Group-IB reports.
Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures.” According to Forbes, losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”
The AI-driven face-swapping tools allowed the actors to replace a target’s facial features with those of another person and then exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.
Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. The data is then used to mimic victims’ identities.
After acquiring the data, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, and re-creating entire IDs to evade KYC checks.
Threat actors then use virtual cameras to feed prerecorded deepfake videos into verification systems, evading security checks by simulating real-time interactions.
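To illustrate why attackers go to the trouble of pairing face-swapping with virtual cameras rather than simply replaying a clip, here is a minimal sketch of a randomised challenge-response liveness check; the challenge list, timeout, and verify_response stub are assumptions for illustration, not the Indonesian bank’s or Group-IB’s actual controls.

```python
# Minimal, simplified sketch of a randomised challenge-response liveness check,
# the kind of control a static prerecorded deepfake video struggles to satisfy.
# The challenges, timeout, and verify_response() stub are illustrative
# assumptions, not the implementation of any real bank's KYC system.
import random
import time

CHALLENGES = ["blink twice", "turn your head left", "turn your head right", "smile"]

def verify_response(challenge: str) -> bool:
    """Stand-in for a computer-vision model that checks whether the live
    video actually shows the requested action; always False in this sketch."""
    return False  # replace with a real liveness / anti-spoofing model

def run_liveness_check(rounds: int = 3, timeout_s: float = 5.0) -> bool:
    """Issue prompts in a random order; each one must be performed live
    and within the timeout, so a fixed recording cannot anticipate them."""
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        print(f"Please {challenge} within {timeout_s} seconds")
        start = time.time()
        passed = verify_response(challenge)
        if not passed or time.time() - start > timeout_s:
            return False  # treat a failed or late response as a possible spoof
    return True

if __name__ == "__main__":
    print("Liveness check passed" if run_liveness_check() else "Liveness check failed")
```

Because the prompts arrive in a random order, a static prerecorded video cannot satisfy them, which is why the actors described above combine real-time face-swapping with virtual-camera software to generate responses on the fly.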
This highlights that traditional verification mechanisms are proving inadequate against advanced AI-driven scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes.