
How Image Resizing Could Expose AI Systems to Attacks



Security experts have identified a new kind of cyber attack that hides instructions inside ordinary pictures. These commands do not appear in the full image but become visible only when the photo is automatically resized by artificial intelligence (AI) systems.

The attack works by adjusting specific pixels in a large picture. To the human eye, the image looks normal. But once an AI platform scales it down, those tiny adjustments blend together into readable text. If the system interprets that text as a command, it may carry out harmful actions without the user’s consent.

Researchers tested this method on several AI tools, including interfaces that connect with services like calendars and emails. In one demonstration, a seemingly harmless image was uploaded to an AI command-line tool. Because the tool automatically approved external requests, the hidden message forced it to send calendar data to an attacker’s email account.

The root of the problem lies in how computers shrink images. When reducing a picture, algorithms merge many pixels into fewer ones. Popular methods include nearest neighbor, bilinear, and bicubic interpolation. Each creates different patterns when compressing images. Attackers can take advantage of these predictable patterns by designing images that reveal commands only after scaling.
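A rough numpy sketch shows why this works, assuming nearest-neighbour scaling, which keeps only one source pixel per output pixel (the image sizes, pixel positions, and payload here are all illustrative, not the researchers' actual attack images):

```python
import numpy as np

def nearest_downscale(img, factor):
    # Nearest-neighbour downscaling keeps only one source pixel per
    # output pixel; everything in between is discarded.
    return img[::factor, ::factor]

# A large "cover" image of mid-grey noise.
rng = np.random.default_rng(0)
cover = rng.integers(100, 156, size=(64, 64), dtype=np.uint8)

# Hypothetical payload: a 16x16 pattern the attacker wants the AI
# to see after a 4x downscale (a bright square, standing in for text).
payload = np.zeros((16, 16), dtype=np.uint8)
payload[4:12, 4:12] = 255

# Embed the payload only at the pixels nearest-neighbour sampling will
# keep (every 4th row and column). The other 15/16 of the pixels are
# untouched, so the full-size image mostly keeps its original look.
attacked = cover.copy()
attacked[::4, ::4] = payload

small = nearest_downscale(attacked, 4)
assert np.array_equal(small, payload)  # the downscaled view is exactly the payload
```

Real attacks against bilinear or bicubic scaling spread the adjustments over the pixels each output value averages, but the principle is the same: control the few source values that survive the downscale.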

To prove this, the researchers released Anamorpher, an open-source tool that generates such images. The tool can tailor pictures for different scaling methods and software libraries like TensorFlow, OpenCV, PyTorch, or Pillow. By hiding adjustments in dark parts of an image, attackers can make subtle brightness shifts that only show up when downscaled, turning backgrounds into letters or symbols.

Mobile phones and edge devices are at particular risk. These systems often force images into fixed sizes and rely on compression to save processing power. That makes them more likely to expose hidden content.

The researchers also devised a way to identify which scaling method a system uses. They uploaded test images with patterns such as checkerboards, circles, and stripes; the resulting artifacts, such as blurring, ringing, or color shifts, revealed which algorithm was at play.
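The idea behind this fingerprinting can be sketched with a toy checkerboard probe (a simplified stand-in for the researchers' test images; the two 2x downscalers here are hand-rolled for illustration):

```python
import numpy as np

# A checkerboard probe image: pixels alternate between 0 and 255.
probe = np.indices((8, 8)).sum(axis=0) % 2 * 255

# Nearest-neighbour 2x downscale: picks every other pixel, so the
# result stays pure black/white (here it collapses to all-black,
# since every sampled position has even row+column).
nearest = probe[::2, ::2]

# Box/area-average 2x downscale: each output pixel averages a 2x2
# block, which always contains two black and two white pixels.
area = probe.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(np.unique(nearest))  # [0]: hard edges survive sampling
print(np.unique(area))     # [127.5]: uniform grey betrays averaging
```

Uploading such probes and inspecting what comes back (hard edges, uniform grey, or the ringing typical of bicubic filters) tells the attacker which downscaler to target.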

This discovery also connects to core ideas in signal processing, particularly the Nyquist-Shannon sampling theorem. When data is compressed below a certain threshold, distortions called aliasing appear. Attackers use this effect to create new patterns that were not visible in the original photo.
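The same aliasing effect is easy to reproduce in one dimension: a tone above the Nyquist limit of a given sampling rate produces exactly the same samples as a lower-frequency one. A small sketch (the frequencies are chosen for illustration):

```python
import numpy as np

fs = 10                      # sampling rate; the Nyquist limit is fs/2 = 5 Hz
t = np.arange(0, 1, 1 / fs)  # ten samples over one second

# A 9 Hz tone is above the Nyquist limit, so sampling it at 10 Hz
# folds it down to |9 - 10| = 1 Hz (with inverted phase).
high = np.sin(2 * np.pi * 9 * t)
alias = -np.sin(2 * np.pi * 1 * t)

assert np.allclose(high, alias)  # the two sets of samples are indistinguishable
```

Image downscaling is two-dimensional sampling, so the same folding lets an attacker place high-frequency detail that reappears as entirely different low-frequency content after resizing.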

According to the researchers, simply switching scaling methods is not a fix. Instead, they suggest avoiding automatic resizing altogether by setting strict upload limits. Where resizing is necessary, platforms should show users a preview of what the AI system will actually process. They also advise requiring explicit user confirmation before any text detected inside an image can trigger sensitive operations.

This new attack builds on past research into adversarial images and prompt injection. While earlier studies focused on fooling image-recognition models, today’s risks are greater because modern AI systems are connected to real-world tools and services. Without stronger safeguards, even an innocent-looking photo could become a gateway for data theft.


AI Agents and the Rise of the One-Person Unicorn

 


For decades, building a unicorn, a company valued at over a billion dollars, has been synonymous with large teams of highly skilled professionals, years of trial and error, and significant venture capital investment. Today, however, that established model is undergoing a fundamental shift. As AI agentic systems develop rapidly, shaped in part by OpenAI's vision of autonomous digital agents, a single founder may now be able to accomplish what once required an entire team.

In today's emerging landscape, the "one-person unicorn" is no longer just an abstract concept but a real possibility, as artificial intelligence agents expand their role beyond mere assistants, becoming transformative partners that push the boundaries of individual entrepreneurship. Although artificial intelligence has long been part of enterprise strategy, agentic AI marks the beginning of a significant shift.

Unlike conventional systems, which primarily analyse data and provide recommendations, these autonomous agents can act independently, making strategic decisions and directly affecting business outcomes without human intervention. This shift is not merely theoretical; it is already reshaping organisational practices on a large scale.

The extent of generative AI adoption is illustrated by a recent survey of 1,000 IT decision makers in the United States, the United Kingdom, Germany, and Australia. Ninety per cent of respondents indicated that their companies have incorporated generative AI into their IT strategies, and half have already implemented AI agents.

A further 32 per cent are preparing to follow suit shortly. A new phase of AI is emerging, defined no longer by passive analytics or predictive modelling but by autonomous agents capable of grasping objectives, evaluating choices, and executing tasks without human intervention.

Agents are no longer limited to providing assistance; they can now orchestrate complex workflows across fragmented systems, adapt constantly to changing environments, and optimise outcomes in real time. This development is more than automation: it represents a shift from static digitisation to dynamic, context-aware execution, effectively turning judgment into a digital function.

Leading companies increasingly compare the impact of this transformation to that of the internet, though its reach may prove even greater. Whereas the internet revolutionised external information flows, artificial intelligence is transforming internal operations and decision-making ecosystems.

Such advances are guiding healthcare diagnostics and enabling predictive interventions; manufacturing is creating self-optimising production systems; and legal and compliance teams are simulating scenarios to reduce risk and accelerate decisions. This is more than a productivity boost: it has the potential to lay the foundations of new business models built on embedded, distributed intelligence.

According to Google CEO Sundar Pichai, artificial intelligence is poised to affect “every sector, every industry, every aspect of our lives,” making the case that the technology is a defining force of our era. Agentic AI is characterised by its ability to detect subtle patterns of behaviour and interactions between services that are often difficult for humans to observe. This capability has already been demonstrated in platforms such as Salesforce's Interaction Explorer, which allows AI agents to detect repeated customer frustrations or ineffective policy responses and propose corrective actions.

These systems therefore become strategic advisors rather than back-office tools, capable of identifying risks, flagging opportunities, and making real-time recommendations to improve operations. Combined with the ability to coordinate between agents, the technology can go even further, enabling automatic cross-functional improvements that accelerate business processes.

As part of this movement, leading companies like Salesforce, Google, and Accenture are combining complementary strengths, integrating Salesforce's CRM ecosystem with Google Cloud's Gemini models and Accenture's sector-specific expertise, to deliver AI-driven solutions ranging from multilingual customer support to predictive issue resolution and intelligent automation.

Moreover, with such tools available, innovation is no longer confined to engineers alone; subject matter experts across a wide range of industries now have the means to drive adoption and shape the next wave of enterprise transformation. To remain competitive, an organisation cannot simply rely on pre-built templates.

Instead, it must be able to customise its agentic AI systems to its unique identity and needs. Using natural language prompts, requirement documents, and workflow diagrams, businesses can tailor agent behaviours without long development cycles, large budgets, or deep technical expertise.

In the age of no-code and natural language interfaces, the power of customisation is shifting from developers to business users, ensuring that agents reflect the company's distinctive values, brand voice, and philosophy. Moreover, advances in multimodality are extending AI beyond text to voice, images, video, and sensor data. Through this evolution, agents will be able to interpret customer intent more deeply, providing more personalised and contextually relevant assistance.

Customers can now upload photos of defective products rather than type lengthy descriptions, or receive support via short videos rather than pages of text. Crucially, these agents retain memory across interactions, constantly adapting to individual behaviours and making digital engagement less transactional and more like an ongoing, human-centred conversation.

The implications of agentic AI extend well beyond operational efficiency and cost reduction. A radical redefinition of work, value creation, and even entrepreneurship itself is becoming apparent. By enabling companies and individuals alike to harness distributed intelligence, these systems are not just reshaping workflows; they are redrawing the boundaries of human and machine collaboration.

The one-person unicorn points to a future in which scale and impact are determined not by headcount but by the sophistication of digital agents working alongside a single visionary. Yet this transformation also raises concerns. The increasing delegation of decision-making to autonomous agents prompts questions about accountability, ethics, job displacement, and systemic risk.

Regulators, policymakers, and industry leaders must establish guardrails that balance innovation with responsibility, ensuring the benefits of artificial intelligence do not deepen inequality or erode trust. The challenge for companies lies in deploying these tools not only quickly and efficiently but also in keeping with their values, branding, and social responsibilities. What makes this moment historic is not just the technical advance of autonomous agents but the cultural and economic pivot they signal.

Just as the internet democratised access to information, artificial intelligence agents are poised to democratise access to judgment, strategy, and execution, capabilities traditionally reserved for larger organisations. With them, enterprises can achieve new levels of agility and competitiveness, while individuals can accomplish far more than before. Agentic intelligence is not an incremental upgrade to existing systems but a shift that will shape how the digital economy functions and define the next chapter in the history of our society.

AI Tools are Quite Susceptible to Targeted Attacks

 

Artificial intelligence tools are more susceptible to targeted attacks than previously anticipated; adversaries can effectively force AI systems to make poor choices.

The term "adversarial attacks" refers to the manipulation of data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the sign invisible to an AI system. Hackers can also install code on an X-ray machine that alters image data, leading an AI system to make inaccurate diagnoses.

“For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” stated Tianfu Wu, coauthor of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”

Wu and his colleagues' latest study aimed to determine how prevalent adversarial vulnerabilities are in AI deep neural networks. They found that the vulnerabilities are far more common than previously believed.

“What's more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want. Using the stop sign as an example, you could trick the AI system into thinking the stop sign is a mailbox, a speed limit sign, a green light, and so on, simply by using slightly different stickers, or whatever the vulnerability is,” Wu added.

This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don't want to put it into operational use, particularly for applications that can affect human lives.

The researchers created a piece of software called QuadAttacK to study the sensitivity of deep neural networks to adversarial attacks. The software may be used to detect adversarial flaws in any deep neural network. 

In general, if you have a trained AI system and test it with clean data, the AI system will behave as expected. QuadAttacK observes these operations to learn how the AI makes decisions about the data. This enables QuadAttacK to figure out how the data can be modified to trick the AI. QuadAttacK then starts delivering altered data to the AI system to observe how it reacts. If QuadAttacK discovers a vulnerability, it can swiftly make the AI see whatever QuadAttacK desires.

The researchers employed QuadAttacK to assess four deep neural networks in proof-of-concept testing: two convolutional neural networks (ResNet-50 and DenseNet-121) and two vision transformers (ViT-B and DeiT-S). These four networks were picked because they are widely used in AI systems across the globe.

“We were surprised to find that all four of these networks were very vulnerable to adversarial attacks,” Wu stated. “We were particularly surprised at the extent to which we could fine-tune the attacks to make the networks see what we wanted them to see.” 

QuadAttacK has been made accessible by the research team so that the research community can use it to test neural networks for shortcomings. 

Defending Against Adversarial Attacks in Machine Learning: Techniques and Strategies


As machine learning algorithms become increasingly prevalent in our daily lives, the need for secure and reliable models is more important than ever. 

However, even the most sophisticated models are not immune to attacks, and one of the most significant threats to machine learning algorithms is the adversarial attack.

In this blog, we will explore what adversarial attacks are, how they work, and what techniques are available to defend against them.

What are Adversarial Attacks?

In simple terms, an adversarial attack is a deliberate attempt to fool a machine learning algorithm into producing incorrect output. 

The attack works by introducing small, carefully crafted changes to the input data that are imperceptible to the human eye, but which cause the algorithm to produce incorrect results. 

Adversarial attacks are a growing concern in machine learning, as they can be used to compromise the accuracy and reliability of models, with potentially serious consequences.

How do Adversarial Attacks Work?

Adversarial attacks work by exploiting the weaknesses of machine learning algorithms. These algorithms are designed to find patterns in data and use them to make predictions. 

However, they are often vulnerable to subtle changes in the input data, which can cause the algorithm to produce incorrect outputs. 

Adversarial attacks take advantage of these vulnerabilities by adding small amounts of noise or distortion to the input data, which can cause the algorithm to make incorrect predictions.
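One classic way to compute such perturbations is the fast gradient sign method (FGSM). The sketch below applies it to a toy linear classifier with made-up weights, where the input gradient is known in closed form:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier; the weights and input are illustrative.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.2])

clean_score = sigmoid(w @ x + b)   # > 0.5, so the clean input is class 1

# FGSM step: for a linear model, the gradient of the class-1 score
# with respect to x is proportional to w, so moving each coordinate
# by eps against sign(w) lowers the score as fast as possible under
# an L-infinity budget of eps.
eps = 0.5
x_adv = x - eps * np.sign(w)

adv_score = sigmoid(w @ x_adv + b)  # < 0.5: the prediction has flipped
```

For deep networks the gradient is obtained by backpropagation rather than read off the weights, and eps is kept small enough that the change is imperceptible, but the mechanism is the same.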

Understanding White-Box, Black-Box, and Grey-Box Attacks

1. White-Box Attacks

White-box attacks occur when the attacker has complete knowledge of the machine-learning model being targeted, including its architecture, parameters, and training data. Attackers can use various methods to generate adversarial examples that can fool the model into producing incorrect predictions.

Because the attacker has complete knowledge of the targeted machine-learning model, white-box attacks can be crafted with great precision and are often considered the most dangerous type of attack.

2. Black-Box Attacks

In contrast to white-box attacks, black-box attacks occur when the attacker has little or no information about the targeted machine-learning model's internal workings. 

These attacks can be more time-consuming and resource-intensive than white-box attacks, but they can also be more effective against models that have not been designed to withstand adversarial attacks.

3. Grey-Box Attacks

Grey-box attacks are a combination of both white-box and black-box attacks. In a grey-box attack, the attacker has some knowledge about the targeted machine-learning model, but not complete knowledge. 

These attacks can be more challenging to defend against than white-box attacks but may be easier to defend against than black-box attacks. 

There are several types of adversarial attacks, including:

Adversarial examples 

These are inputs that have been specifically designed to fool a machine-learning algorithm. They are created by making small changes to the input data, which are not noticeable to humans but which cause the algorithm to make a mistake.

Adversarial perturbations    

These are small changes to the input data that are designed to cause the algorithm to produce incorrect results. The perturbations can be added to the data at any point in the machine learning pipeline, from data collection to model training.

Model inversion attacks

These attacks attempt to reverse-engineer the parameters of a machine-learning model by observing its outputs. The attacker can then use this information to reconstruct the original training data or extract sensitive information from the model.

How can We Fight Adversarial Attacks?

As adversarial attacks become more sophisticated, it is essential to develop robust defenses against them. Here are some techniques that can be used to fight adversarial attacks:

Adversarial training 

This involves training the machine learning algorithm on adversarial examples as well as normal data. By exposing the model to adversarial examples during training, it becomes more resilient to attacks in the future.
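A minimal sketch of this idea, using a hand-rolled logistic-regression model and FGSM-style perturbations (all data, sizes, and parameters are illustrative; on a simple symmetric linear problem like this the robustness gain is limited, but the mechanism of mixing perturbed copies into each training step is the same one used with deep networks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, eps):
    # For logistic regression, the input gradient of the loss is
    # (p - y) * w, so FGSM steps by eps in its sign direction.
    p = sigmoid(X @ w)
    return X + eps * np.sign(np.outer(p - y, w))

def train(X, y, eps=0.0, lr=0.5, steps=200):
    # Plain logistic-regression training; when eps > 0, each step
    # also trains on FGSM-perturbed copies (adversarial training).
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        Xt, yt = X, y
        if eps > 0:
            Xt = np.vstack([X, fgsm(X, y, w, eps)])
            yt = np.concatenate([y, y])
        p = sigmoid(Xt @ w)
        w -= lr * Xt.T @ (p - yt) / len(yt)
    return w

# Two well-separated Gaussian clusters, a toy stand-in for real data.
rng = np.random.default_rng(0)
n = 200
X = np.vstack([rng.normal((1.5, 0.0), 0.5, (n, 2)),
               rng.normal((-1.5, 0.0), 0.5, (n, 2))])
y = np.concatenate([np.ones(n), np.zeros(n)])

w_plain = train(X, y)
w_robust = train(X, y, eps=0.8)

def accuracy(w, Xe):
    return float(np.mean((sigmoid(Xe @ w) > 0.5) == (y == 1)))

print(accuracy(w_plain, X), accuracy(w_robust, X))    # both near 1.0 on clean data
print(accuracy(w_plain, fgsm(X, y, w_plain, 0.8)),
      accuracy(w_robust, fgsm(X, y, w_robust, 0.8)))  # accuracy under attack
```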

Defensive distillation 

This technique involves training a second model on the softened probability outputs of the original, smoothing its decision surface so that it is harder for attackers to craft effective perturbations or to extract sensitive information from the model.

Feature squeezing 

This involves reducing the complexity of the input data, for example by lowering its colour bit depth or smoothing it, making it more difficult for attackers to introduce perturbations that will cause the algorithm to produce incorrect outputs.
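One common squeezing operation is reducing colour bit depth; a small sketch (the bit width and values are illustrative):

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    # Quantise [0, 1] inputs to 2**bits levels, wiping out any
    # perturbation smaller than one quantisation step.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.array([0.50, 0.25, 0.75])                    # clean input
x_adv = x + np.array([0.01, -0.02, 0.015])          # small adversarial noise

# After squeezing to 3 bits, the perturbation is gone: clean and
# perturbed inputs map to the same quantised representation.
assert np.allclose(squeeze_bit_depth(x, 3), squeeze_bit_depth(x_adv, 3))
```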

Adversarial detection 

This involves adding a detection mechanism to the machine learning pipeline that can detect when an input has been subject to an adversarial attack. Once detected, the input can be discarded or handled differently to prevent the attack from causing harm.
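A simple detector in this spirit compares the model's output on the raw input with its output on a squeezed copy, flagging inputs where the two disagree. A hypothetical sketch with a toy linear model (the weights, inputs, and threshold are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([4.0, -3.0])  # toy model weights

def score(x):
    return sigmoid(w @ x)

def squeeze(x, bits=1):
    # Aggressive quantisation, as in feature squeezing.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(x, threshold=0.2):
    # A large gap between the raw and squeezed scores suggests the
    # input relies on fine detail that squeezing destroys.
    return abs(score(x) - score(squeeze(x))) > threshold

clean = np.array([0.9, 0.1])
adv = np.array([0.55, 0.45])  # nudged toward the decision boundary

print(looks_adversarial(clean), looks_adversarial(adv))  # False True
```

In practice the threshold is calibrated on clean data, and flagged inputs can be rejected or routed to a human reviewer.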

As the field of machine learning continues to evolve, it is crucial that we remain vigilant and proactive in developing new techniques to fight adversarial attacks and maintain the integrity of our models.