As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.
How hidden commands emerge
The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.
This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.
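To make the mechanism concrete, consider a minimal sketch using the Pillow imaging library (the filename and the 224x224 target resolution are illustrative assumptions, not any specific platform's values). Resizing the same upload with different filters shows how strongly the filter choice determines which pixel values survive, which is exactly the degree of freedom a scaling attack exploits.

```python
# Minimal sketch: how the resampling filter changes what a model "sees".
# Requires Pillow (pip install Pillow); the filename and 224x224 target
# are illustrative assumptions, not a specific platform's values.
from PIL import Image

original = Image.open("uploaded_image.png")

for name, resample in [
    ("nearest", Image.Resampling.NEAREST),
    ("bilinear", Image.Resampling.BILINEAR),
    ("bicubic", Image.Resampling.BICUBIC),
]:
    # Most AI pipelines shrink large uploads to a fixed input size before
    # analysis; an attacker crafts pixels so that text emerges only here.
    small = original.resize((224, 224), resample=resample)
    small.save(f"downscaled_{name}.png")
```

Comparing the saved outputs side by side makes the point: content that is invisible at full resolution can emerge at the model's input resolution, depending entirely on which filter the pipeline uses.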
Why this matters
Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.
The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.
Building safer AI systems
Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
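As a rough illustration of the developer-side measures, the sketch below rejects oversized uploads and saves the exact downscaled image the model would receive, so it can be shown to the user for review. The limits, filter choice, and names are assumptions rather than any platform's real pipeline.

```python
# Hedged sketch of two defensive layers: reject oversized uploads and
# surface the model's-eye view of an image. All names and limits are
# illustrative assumptions.
from PIL import Image

MAX_DIM = 2048            # assumed upper bound on upload dimensions
MODEL_INPUT = (224, 224)  # assumed model input resolution

def preprocess_upload(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError(f"image {img.size} exceeds {MAX_DIM}px limit")
    # Downscale with the same filter the model pipeline uses, and keep
    # the result so the user can verify what the model will actually see.
    model_view = img.resize(MODEL_INPUT, resample=Image.Resampling.BICUBIC)
    model_view.save("model_view_preview.png")
    return model_view
```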
Researchers stress that piecemeal fixes will not be enough. Only systematic design changes, such as enforcing secure defaults and monitoring for hidden instructions, can meaningfully reduce the risks.
Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.
Artificial intelligence (AI) agents are revolutionizing the cryptocurrency sector by automating processes, enhancing security, and improving trading strategies. These smart programs help analyze blockchain data, detect fraud, and optimize financial decisions without human intervention.
What Are AI Agents?
AI agents are autonomous software programs that operate independently, analyzing information and taking actions to achieve specific objectives. These systems interact with their surroundings through data collection, decision-making algorithms, and execution of tasks. They play a critical role in multiple industries, including finance, cybersecurity, and healthcare.
There are different types of AI agents:
1. Simple Reflex Agents: React based on predefined condition-action rules (see the sketch after this list).
2. Model-Based Agents: Use internal models to make informed choices.
3. Goal-Oriented Agents: Focus on achieving specific objectives.
4. Utility-Based Agents: Weigh outcomes to determine the best action.
5. Learning Agents: Continuously improve based on new data.
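As a minimal illustration of the first category, a simple reflex agent maps each percept straight to an action through fixed condition-action rules; the trading-flavored rules below are purely hypothetical.

```python
# Toy sketch of a simple reflex agent: the current percept alone selects
# the action via fixed rules. The rules here are hypothetical examples.
RULES = {
    "price_drop": "buy",
    "price_spike": "sell",
    "no_change": "hold",
}

def reflex_agent(percept: str) -> str:
    """Map the current percept straight to an action, with no memory."""
    return RULES.get(percept, "hold")

print(reflex_agent("price_drop"))  # -> buy
```

A model-based or learning agent would extend this pattern by keeping internal state between percepts or by revising the rules from feedback.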
Evolution of AI Agents
AI agents have advanced steadily over the decades. Here are some key milestones:
1966: ELIZA, an early chatbot, was developed at MIT to simulate human-like conversations.
1972: MYCIN, an AI-driven medical diagnosis tool, was created at Stanford University.
2011: IBM Watson demonstrated advanced natural language processing by winning on Jeopardy!
2016: AlphaGo, created by DeepMind, defeated world champion Lee Sedol in the complex board game Go.
2020: OpenAI introduced GPT-3, an AI model capable of generating human-like text.
2022: DeepMind's AlphaFold released predicted structures for over 200 million proteins, a milestone in the long-standing protein-folding problem.
2023: AI-powered chatbots like ChatGPT and Claude AI gained widespread use for conversational tasks.
2025: ElizaOS, a blockchain-based AI platform, is set to enhance AI-agent applications.
AI Agents in Cryptocurrency
The crypto industry is leveraging AI agents for automation and security. In late 2024, Virtuals Protocol, an AI-powered Ethereum-based platform, saw its market valuation soar to $1.9 billion. By early 2025, AI-driven crypto tokens collectively reached a $7.02 billion market capitalization.
AI agents are particularly valuable in decentralized finance (DeFi). They assist in managing liquidity pools, adjusting lending and borrowing rates, and securing financial transactions. They also enhance security by identifying fraudulent activities and vulnerabilities in smart contracts, ensuring compliance with regulations like Know Your Customer (KYC) and Anti-Money Laundering (AML).
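As a toy illustration of the fraud-detection role, the sketch below flags transfers that deviate sharply from an address's history using a simple z-score test; real systems draw on far richer on-chain features, and every number here is an assumption.

```python
# Toy sketch of anomaly flagging for transactions: compare a new transfer
# against historical amounts with a z-score. Thresholds are assumptions.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Return True if `amount` is a statistical outlier versus `history`."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# Example: a sudden 500-token transfer against a history of small ones.
print(is_anomalous([1.2, 0.8, 2.5, 1.1, 0.9, 1.7], 500.0))  # -> True
```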
The Future of AI in Crypto
Tech giants like Amazon and Apple are integrating AI into digital assistants like Alexa and Siri, making them more interactive and capable of handling complex tasks. Similarly, AI agents in cryptocurrency will continue to take new shapes, offering greater efficiency and security for traders, investors, and developers.
As these intelligent systems advance, their role in crypto and blockchain technology will expand, paving the way for more automated, reliable, and secure financial ecosystems.
Brave is a Chromium-based browser, built around its own Brave Search engine, that blocks the tracking used for personalized ads.
Brave's new product, Leo, is a generative AI assistant built on top of Anthropic's Claude and Meta's Llama 2, and it promotes user privacy as its headline feature.
Unlike many generative AI chatbots, such as ChatGPT, Leo is designed around privacy: it does not store any of a user's chat history, nor does it use the user's data for training.
Moreover, a user does not need to create an account to access Leo, and even for the premium tier, Brave does not link accounts to the data users submit.
Leo has been in testing for roughly three months, and Brave is now making it available to all users of the latest 1.60 desktop browser version. Once Brave rolls it out to you, you should see the Leo icon in the browser's sidebar. In the coming months, Leo support will be added to the Brave apps for Android and iOS.
User privacy has remained a major concern with ChatGPT, Google Bard, and AI products generally.
All else being equal, the AI chatbot that pairs innovative features with stronger privacy will win users over. Leo has real potential here, given that Brave promotes the chatbot's “unparalleled privacy” from the outset.
Since users do not need an account to access Leo, they never have to verify an email address or phone number, which keeps their contact information out of the system entirely.
Moreover, users who opt for the $15/month Leo Premium receive subscription tokens that are not linked to their accounts. As Brave notes, this way “you can never connect your purchase details with your usage of the product, an extra step that ensures your activity is private to you and only you.”
The company says, “the email you used to create your account is unlinkable to your day-to-day use of Leo, making this a uniquely private credentialing experience.”
Brave further notes that all Leo requests are routed through an anonymization server, meaning Leo traffic cannot be connected to users' IP addresses.
More significantly, Brave does not retain Leo's conversations: responses are discarded as soon as they are generated, and Leo does not learn from them. Brave also collects no personal identifiers, such as IP addresses, and neither do the third-party model providers, which matters given that Leo is built on two external language models.