According to Brian Krebs, Vermont ran at least five websites that let anyone access sensitive information. One of the affected programs was the state's Pandemic Unemployment Assistance program, which exposed applicants' full names, Social Security numbers, addresses, phone numbers, email addresses, and bank account details. Like the other organizations found exposing sensitive data to the public, Vermont had adopted Salesforce Community, a cloud-based platform designed to let businesses quickly build websites.
Another victim was Columbus, Ohio-based Huntington Bank, which recently acquired TCF Bank; TCF had processed commercial loans through Salesforce Community. The exposed data included names, addresses, Social Security numbers, titles, federal IDs, IP addresses, average monthly payrolls, and loan amounts.
Apparently, both Vermont and Huntington discovered the exposure only after Krebs contacted them for comment, and both subsequently withdrew public access to the sensitive data. Salesforce Community sites can be configured to require authentication, restricting internal resources and sensitive records to a select group of authorized users; they can also be configured to let anyone read public information without logging in. In some cases, administrators unintentionally leave sections that are meant to be accessible only to authorized personnel open to unauthenticated visitors.
Salesforce told Krebs that it gives customers clear guidance on how to configure Salesforce Community so that only the intended data is visible to unauthenticated guests.
Doug Merret, who had called attention to the problem eight months earlier, elaborated on how easily Salesforce can be misconfigured in a post titled ‘The Salesforce Communities Security Issue.’
“The issue was that you are able to ‘hack’ the URL to see standard Salesforce pages - Account, Contact, User, etc.[…]This would not really be an issue, except that the admin has not expected you to see the standard pages as they had not added the objects associated to the Aura community navigation and therefore had not created appropriate page layouts to hide fields that they did not want the user to see,” he wrote.
Krebs noted that he learned about the leaks from security researcher Charan Akiri, who reportedly identified hundreds of organizations with misconfigured Salesforce sites. According to Akiri, only five of the many companies and government agencies he notified resolved the issues, and none of those five were in the government sector.
However, even the most sophisticated machine learning models are not immune to attack, and one of the most significant threats they face is the adversarial attack.
In this blog, we will explore what adversarial attacks are, how they work, and what techniques are available to defend against them.
In simple terms, an adversarial attack is a deliberate attempt to fool a machine learning algorithm into producing incorrect output.
The attack works by introducing small, carefully crafted changes to the input data that are imperceptible to the human eye, but which cause the algorithm to produce incorrect results.
Adversarial attacks are a growing concern in machine learning, as they can be used to compromise the accuracy and reliability of models, with potentially serious consequences.
Adversarial attacks work by exploiting the weaknesses of machine learning algorithms. These algorithms are designed to find patterns in data and use them to make predictions.
However, they are often vulnerable to subtle changes in the input data, which can cause the algorithm to produce incorrect outputs.
Adversarial attacks take advantage of these vulnerabilities by adding small amounts of noise or distortion to the input data, which can cause the algorithm to make incorrect predictions.
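To make this concrete, here is a minimal sketch of one well-known way to craft such a perturbation, the Fast Gradient Sign Method (FGSM). The post does not name a specific method, so FGSM, the `model` placeholder, and the `epsilon` budget below are illustrative assumptions rather than a prescribed recipe.

```python
# Minimal FGSM sketch in PyTorch: one gradient step on the input, bounded by epsilon.
# "model" is assumed to be any differentiable classifier; x is a batch of inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small perturbation that pushes the model toward a wrong prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that *increases* the loss, limited to epsilon per element.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With a small `epsilon`, the perturbed input typically looks unchanged to a person while the model's prediction can flip.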
Perturbation attacks are small changes to the input data designed to make the algorithm produce incorrect results. The perturbations can be introduced at any point in the machine learning pipeline, from data collection to model training.
Model inversion and extraction attacks attempt to reverse-engineer the parameters of a machine learning model by observing its outputs. The attacker can then use this information to reconstruct parts of the original training data or extract sensitive information from the model.
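A hedged sketch of what such an attack can look like in practice: the attacker has only query access to a black-box prediction API (the `victim_predict` callable below is hypothetical) and uses its answers to train a local surrogate that mimics the victim.

```python
# Model-extraction style sketch: probe a black-box API with synthetic inputs,
# then fit a local surrogate to the observed input/output pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_surrogate(victim_predict, n_queries=5000, n_features=20, seed=0):
    rng = np.random.default_rng(seed)
    queries = rng.normal(size=(n_queries, n_features))   # attacker-chosen probe inputs
    answers = victim_predict(queries)                     # labels observed from the API
    surrogate = LogisticRegression(max_iter=1000).fit(queries, answers)
    return surrogate
```

The surrogate can then be studied offline, which is why limiting what a model reveals per query matters.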
As adversarial attacks become more sophisticated, it is essential to develop robust defenses against them. Here are some techniques that can be used to fight adversarial attacks:
Adversarial training involves training the machine learning algorithm on adversarial examples as well as normal data. By exposing the model to adversarial examples during training, it becomes more resilient to similar attacks in the future.
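A rough sketch of what a single adversarial training step could look like, reusing the FGSM helper sketched earlier; the equal weighting of clean and adversarial loss is an illustrative choice, not something the post prescribes.

```python
# Adversarial training step: each batch is augmented with FGSM-perturbed copies
# (fgsm_perturb is the helper sketched above), so the model trains on attacked inputs too.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    x_adv = fgsm_perturb(model, x, y, epsilon)    # craft attacks on the fly
    optimizer.zero_grad()
    # Loss on clean and adversarial examples, weighted equally here.
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```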
Another technique involves training a model whose outputs are difficult to reverse-engineer, making it harder for attackers to extract sensitive information from the model.
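The post does not name a specific method here, so the sketch below shows just one simple way to make outputs less informative to an observer: expose only coarse, slightly noisy labels rather than full confidence scores. The scikit-learn-style `predict_proba` interface is an assumption.

```python
# Output-hardening sketch: keep full confidence scores internal and release only
# the top label, with a little noise, so observed outputs leak less about the model.
import numpy as np

def hardened_predict(model, x, rng=None):
    rng = rng or np.random.default_rng(0)
    probs = model.predict_proba(x)                          # full scores stay internal
    noisy = probs + rng.normal(scale=0.01, size=probs.shape)
    return noisy.argmax(axis=1)                             # release only the top label
```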
Feature reduction involves reducing the number of features in the input data, making it harder for attackers to introduce perturbations that cause the algorithm to produce incorrect outputs.
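As an illustration, inputs can be projected onto a handful of principal components before classification; the synthetic data and the choice of PCA below are assumptions made only to keep the example runnable.

```python
# Feature-reduction sketch: project inputs onto a few principal components before
# classifying, leaving fewer dimensions in which a perturbation can act.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data; in practice this would be the real training set.
X_train, y_train = make_classification(n_samples=1000, n_features=50, random_state=0)

pipeline = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
pipeline.fit(X_train, y_train)
```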
Adversarial detection involves adding a detection mechanism to the machine learning pipeline that can flag inputs that appear to have been subject to an adversarial attack. Once detected, the input can be discarded or handled differently to prevent the attack from causing harm.
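One possible detection mechanism, chosen here purely for illustration, is to compare the model's output on the raw input with its output on a coarsened copy and flag large disagreements; `predict_proba` below is a placeholder for any probability-returning classifier.

```python
# Detection sketch: reduce the input's bit depth and compare predictions.
# Adversarial perturbations often do not survive the coarsening, so a big
# disagreement between the two outputs is treated as a sign of tampering.
import numpy as np

def looks_adversarial(predict_proba, x, threshold=0.3, bits=3):
    levels = 2 ** bits
    x_squeezed = np.round(x * (levels - 1)) / (levels - 1)   # coarsen input values
    p_raw = predict_proba(x)
    p_squeezed = predict_proba(x_squeezed)
    # L1 distance between the two probability vectors, per example.
    return np.abs(p_raw - p_squeezed).sum(axis=-1) > threshold
```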
As the field of machine learning continues to evolve, it is crucial that we remain vigilant and proactive in developing new techniques to fight adversarial attacks and maintain the integrity of our models.