According to experts from Cybernews, three misconfigured servers with IP addresses registered in the UAE and Brazil contained personal information amounting to “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses.
The Cybernews experts who found the leak said the databases shared similar naming conventions and structure, which hinted at a common source, but they could not identify the actor responsible for running the servers.
“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said.
The leak is particularly concerning for citizens of South Africa, Egypt, and Turkey, as the databases covering those countries contained full-spectrum personal data.
The exposure could have left affected individuals open to multiple threats, such as phishing campaigns, scams, financial fraud, and identity abuse.
The databases are no longer publicly accessible (a good sign).
This is not the first time a massive database holding citizen data has been exposed online. Earlier Cybernews research revealed a leak of roughly 250 million records that might have impacted the entire Brazilian population.
In that incident, a misconfigured Elasticsearch instance exposed details such as sex, names, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, the identifier used for taxpayers in Brazil.
As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has demonstrated a technique in which seemingly harmless images uploaded to AI systems conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.
How hidden commands emerge
The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.
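The mechanics are easier to see with a toy model. The Python sketch below is our illustration, not code from the research: it uses a deliberately simple downscaler that keeps one pixel per 8x8 block, whereas real pipelines use bicubic or bilinear kernels. Those kernels, too, draw each output pixel mainly from a small, predictable set of source pixels, and those positions are what an attacker targets.

```python
import numpy as np

# Toy illustration of the hiding mechanism, not the published attack:
# this downscaler keeps only the top-left pixel of every 8x8 block.

SCALE = 8

# The "hidden message": a bright bar standing in for text the model
# would read as an instruction.
payload = np.zeros((32, 32), dtype=np.uint8)
payload[12:20, 4:28] = 255

# An ordinary-looking field of mid-gray noise...
rng = np.random.default_rng(0)
big = rng.integers(110, 146, size=(32 * SCALE, 32 * SCALE), dtype=np.uint8)
# ...with the payload planted exactly where the sampler will look.
big[::SCALE, ::SCALE] = payload

def toy_downscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Keep the top-left pixel of each scale-by-scale block."""
    return img[::scale, ::scale]

small = toy_downscale(big, SCALE)
print(np.array_equal(small, payload))  # True: the hidden bar survives
```

At full size the planted pixels are one in sixty-four and blend into the noise; after downscaling they are the entire image.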
This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.
Why this matters
Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.
The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.
Building safer AI systems
Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
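As a concrete example of the user-side precaution, here is a minimal sketch, assuming a Pillow-based workflow; the size limit, target resolution, and function name are illustrative choices, not part of any platform’s API. It rejects oversized uploads and saves a preview at model resolution so a human can see what the model will actually receive.

```python
from PIL import Image

MAX_DIM = 2048            # illustrative upload limit
MODEL_INPUT = (512, 512)  # assumed model input resolution

def check_upload(path: str) -> Image.Image:
    """Reject oversized images and save a preview of the downscaled
    version, so a human can inspect what the model will receive."""
    img = Image.open(path).convert("RGB")
    if max(img.size) > MAX_DIM:
        raise ValueError(f"Image too large to upload: {img.size}")
    # Hidden patterns are designed to appear only after resizing, so
    # render the image at model resolution before it is sent.
    preview = img.resize(MODEL_INPUT, Image.Resampling.BICUBIC)
    preview.save("model_view_preview.png")
    return img
```

The preview is only meaningful if the resampling filter and target size match the platform’s actual preprocessing, which most services do not document, so the developer-side measures above remain the stronger line of defense.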
Researchers stress that piecemeal fixes will not be enough. Only systematic design changes, such as enforcing secure defaults and monitoring for hidden instructions, can meaningfully reduce the risks.
Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.