Visa is one of the largest payment companies in the world, handling billions of transactions every year. As such, it is a prime target for cyberattacks from hackers looking to steal sensitive financial information. To counter these threats, Visa has turned to artificial intelligence (AI) and machine learning (ML) to bolster its security defenses.
AI and ML offer several advantages over traditional cybersecurity methods. They can detect and respond to threats in real time, identify patterns in data that humans may miss, and adapt to changing threat landscapes. Visa has incorporated these technologies into its fraud detection and prevention systems, which help identify and block fraudulent transactions before they can cause harm.
One example of how Visa is using AI to counter cyberattacks is through its Visa Advanced Authorization (VAA) system. VAA uses ML algorithms to analyze transaction data and identify patterns of fraudulent activity. The system learns from historical data and uses that knowledge to detect and prevent future fraud attempts. This approach has been highly effective, with VAA reportedly blocking $25 billion in fraudulent transactions in 2020 alone.
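Visa has not published VAA's internals, but the core idea of learning fraud patterns from labelled historical transactions can be sketched with a generic classifier. The feature names, synthetic data, and decision threshold below are illustrative assumptions, not Visa's actual model.

```python
# Illustrative sketch only: a generic gradient-boosted classifier standing in
# for "an ML model trained on historical transactions". All features and
# labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic historical transactions: [amount_usd, hour_of_day, merchant_risk_score]
X = rng.random((5000, 3)) * [2000, 24, 1.0]
# Synthetic labels: 1 = fraudulent, 0 = legitimate (toy rule, for illustration only)
y = ((X[:, 0] > 1500) & (X[:, 2] > 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new transaction and block it if the predicted fraud probability is high.
new_txn = np.array([[1800.0, 3, 0.9]])
fraud_prob = model.predict_proba(new_txn)[0, 1]
print(f"fraud probability: {fraud_prob:.2f}", "-> block" if fraud_prob > 0.5 else "-> approve")
```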
Visa is also using AI to enhance its risk assessment capabilities. The company's Risk Manager platform uses ML algorithms to analyze transaction data and identify potential fraud risks. The system can detect unusual behavior patterns, such as a sudden increase in transaction volume or an unexpected change in location, and flag them for further investigation. This allows Visa to proactively address potential risks before they turn into full-fledged cyberattacks.
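The kind of "unusual behaviour" flagging described above can be illustrated with a generic unsupervised anomaly detector. The per-cardholder features and contamination setting below are assumptions made for the sketch, not Risk Manager's actual logic.

```python
# Illustrative sketch only: an isolation forest trained on "normal" per-cardholder
# activity, used to flag days that deviate sharply from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Per-cardholder daily features: [transaction_count, total_spend_usd, distance_from_home_km]
normal_days = np.column_stack([
    rng.poisson(5, 1000),          # typical number of transactions
    rng.normal(120, 40, 1000),     # typical daily spend
    rng.exponential(10, 1000),     # usually close to home
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_days)

# A sudden spike in volume combined with an unexpected location should be flagged (-1).
suspicious_day = np.array([[40, 2500.0, 4200.0]])
print(detector.predict(suspicious_day))   # -1 means "flag for further investigation"
```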
Another area where Visa is using AI to counter cyberattacks is in threat intelligence. The company's CyberSource Threat Intelligence service uses ML algorithms to analyze global threat data and identify potential security threats. This information is then shared with Visa's clients, helping them stay ahead of emerging threats and minimize their risk of a cyberattack.
Visa has also developed a tool called the Visa Payment Fraud Disruption (PFD) platform, which uses AI to detect and disrupt cyberattacks targeting Visa clients. The PFD platform analyzes transaction data in real time and identifies any unusual activity that could indicate a cyberattack. The system then alerts Visa's cybersecurity team, who can take immediate action to prevent the attack from causing harm.
In addition to these measures, Visa is also investing in the development of AI and ML technologies to further enhance its cybersecurity capabilities. The company has partnered with leading AI firms and academic institutions to develop new tools and techniques to detect and prevent cyberattacks more effectively.
Overall, Visa's use of AI and ML in its cybersecurity systems has proven highly effective in countering cyberattacks. By leveraging these technologies, Visa is able to detect and respond to threats in real time, identify patterns in data that humans may miss, and adapt to changing threat landscapes. As cyberattacks continue to evolve and become more sophisticated, Visa will likely continue to invest in AI and ML to stay ahead of the curve and protect its customers' sensitive financial information.
These kinds of AI-driven defenses are the focus of new findings from researchers at the Department of Energy's Pacific Northwest National Laboratory (PNNL), who built an abstract simulation of the digital conflict between threat actors and defenders in a network and trained four different deep reinforcement learning (DRL) neural networks to maximize rewards based on preventing compromises and minimizing network disruption.
The simulated attackers transitioned from the initial access and reconnaissance phase through subsequent attack stages until they reached their objective: the impact and exfiltration phase. These attack strategies were based on classifications from the MITRE ATT&CK framework.
Samrat Chatterjee, a data scientist who presented the team's work at the annual meeting of the Association for the Advancement of Artificial Intelligence in Washington, DC, on February 14, says that successfully building and training the AI system on these simplified attack surfaces demonstrates that defensive responses to cyberattacks could, today, be driven by an AI model.
"You don't want to move into more complex architectures if you cannot even show the promise of these techniques[…]We wanted to first demonstrate that we can actually train a DRL successfully and show some good testing outcomes before moving forward," says Chatterjee.
Machine learning (ML) and AI techniques have emerged as innovative approaches to cybersecurity across a variety of fields. The trend runs from the early integration of ML into email security in the early 2010s to the use of ChatGPT and numerous other AI tools today to analyze code or conduct forensic analysis. Most security products now incorporate at least a few features powered by machine learning algorithms trained on massive datasets.
Yet developing an AI system capable of proactive protection remains more aspiration than reality. The PNNL research suggests that an AI defender could become possible in the future, despite the many obstacles researchers still need to address.
"Evaluating multiple DRL algorithms trained under diverse adversarial settings is an important step toward practical autonomous cyber defense solutions[…] Our experiments suggest that model-free DRL algorithms can be effectively trained under multistage attack profiles with different skill and persistence levels, yielding favorable defense outcomes in contested settings," according to a statement published by the PNNL researchers.
The research team's initial objective was to develop a custom simulation environment based on an open-source toolkit, OpenAI Gym. In this environment, the researchers created attacker entities with a range of skill and persistence levels that could employ a selection of seven tactics and fifteen techniques from the MITRE ATT&CK framework.
The attacker agents' objectives are to go through the seven attack chain steps—from initial access to execution, from persistence to command and control, and from collection to impact—in the order listed.
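A minimal sketch of what such a custom environment might look like, assuming the Gymnasium API (the maintained successor to OpenAI Gym), is shown below. The class name, stage transitions, disruption probability, and reward values are illustrative assumptions, not PNNL's actual simulation.

```python
# Sketch of a Gym-style attacker/defender environment. The defender observes the
# attacker's current attack-chain stage and chooses to monitor, isolate, or restore;
# the attacker advances one stage per step unless disrupted.
import gymnasium as gym
from gymnasium import spaces

STAGES = ["initial_access", "execution", "persistence",
          "defense_evasion",          # placeholder: the seventh tactic is not named in the text
          "command_and_control", "collection", "impact"]

class CyberDefenseEnv(gym.Env):
    def __init__(self):
        self.observation_space = spaces.Discrete(len(STAGES))
        self.action_space = spaces.Discrete(3)  # 0 = monitor, 1 = isolate host, 2 = restore host
        self.stage = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.stage = 0
        return self.stage, {}

    def step(self, action):
        # Active defense has a chance of pushing the attacker back a stage.
        disrupted = action in (1, 2) and self.np_random.random() < 0.6
        if disrupted:
            self.stage = max(self.stage - 1, 0)
        else:
            self.stage = min(self.stage + 1, len(STAGES) - 1)
        terminated = self.stage == len(STAGES) - 1          # attacker reached "impact"
        # Large penalty for a successful attack, small cost for disruptive defender actions.
        reward = -10.0 if terminated else (-0.5 if action != 0 else 0.0)
        return self.stage, reward, terminated, False, {}
```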
According to Chatterjee of PNNL, it can be challenging for the attacker to modify their strategies in response to the environment's current state and the defender's existing behavior.
"The adversary has to navigate their way from an initial recon state all the way to some exfiltration or impact state[…] We're not trying to create a kind of model to stop an adversary before they get inside the environment — we assume that the system is already compromised," says Chatterjee.
The experiments showed that one particular reinforcement learning technique, a Deep Q Network, successfully solved the defensive problem, catching 97% of the intruders in the test data set. Still, the research is only a beginning, and security professionals should not expect an AI assistant for incident response and forensics anytime soon.
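For reference, a compact Deep Q Network training loop might look like the sketch below (PyTorch). The network size, replay settings, and hyperparameters are assumptions, not the PNNL configuration; it expects an environment with the five-tuple Gymnasium step interface, such as the CyberDefenseEnv sketch above.

```python
# Minimal DQN sketch: a small Q-network, an epsilon-greedy policy, and a replay
# buffer, trained with one gradient step per environment step (no target network).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class QNetwork(nn.Module):
    def __init__(self, n_states: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)

def one_hot(state: int, n_states: int) -> torch.Tensor:
    v = torch.zeros(n_states)
    v[state] = 1.0
    return v

def train_dqn(env, n_states, n_actions, episodes=200, gamma=0.99, eps=0.1):
    q_net = QNetwork(n_states, n_actions)
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)

    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over predicted Q-values.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    action = int(q_net(one_hot(state, n_states)).argmax())

            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            replay.append((state, action, reward, next_state, done))
            state = next_state

            # One gradient step on a random minibatch of transitions.
            if len(replay) >= 32:
                batch = random.sample(replay, 32)
                s, a, r, s2, d = zip(*batch)
                s = torch.stack([one_hot(x, n_states) for x in s])
                s2 = torch.stack([one_hot(x, n_states) for x in s2])
                a = torch.tensor(a)
                r = torch.tensor(r, dtype=torch.float32)
                d = torch.tensor(d, dtype=torch.float32)

                q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    q_target = r + gamma * (1 - d) * q_net(s2).max(dim=1).values
                loss = nn.functional.mse_loss(q_pred, q_target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return q_net
```

Paired with the environment sketch above, calling `train_dqn(CyberDefenseEnv(), len(STAGES), 3)` would train a toy defender policy under these assumptions.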
Among the many issues that still need to be resolved is getting RL and deep neural networks to explain the factors that influenced their decisions, an area of research known as explainable reinforcement learning (XRL).
Moreover, the rapid evolution of AI technology and finding the most effective ways to train the neural network are both challenges that need to be addressed, according to Chatterjee.
The UK's top cybersecurity agency has released new guidance designed to help developers and others identify and patch vulnerabilities in machine learning (ML) systems.
GCHQ's National Cyber Security Centre (NCSC) has laid out its principles for the security of machine learning for any company looking to minimise the risk of adversarial machine learning (AML) attacks.
AML attacks exploit the distinctive characteristics of ML or AI systems to achieve various goals. AML has become a serious concern as the technology has found its way into a growing range of critical systems, underpinning finance, national security, healthcare, and more.
At its core, software security depends on understanding how a component or system works. This lets a system owner inspect and analyse it for vulnerabilities, which can then be mitigated or accepted.
Unfortunately, this is hard to do with ML. ML is used precisely to enable a system to learn for itself, extracting information from data with minimal assistance from a human developer.

Since a model's internal logic is derived from data, its behaviour can be difficult to interpret, and it is often next to impossible to fully understand why it is doing what it is doing.
This explains why ML components haven't undergone the same level of inspection as regular systems, and why some vulnerabilities can't be identified.
According to experts, the new ML principles will help any organization "involved in the development, deployment, or decommissioning of a system containing ML."
The experts have pointed out some key limitations of ML systems, including:
The NCSC team recognises the massive benefits that good data science and ML can bring to society, including to cybersecurity, and wants to make sure those benefits are realised.
According to a new study, as neural networks become more widely used, they may become the next frontier for malware operations.
Pattern recognition — here we discover explicit characteristics hidden in the data (feature sets) that can be used to teach an ML algorithm to recognize other data exhibiting the same set of characteristics.
Anomaly detection — here the goal is to establish a notion of normality that describes, say, 95% of a given dataset. Rather than learning specific patterns in the data, the system flags any deviation from this normality as an anomaly.
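The difference between the two approaches can be illustrated with generic scikit-learn estimators; the two-dimensional synthetic data and the specific estimators chosen here (k-nearest neighbours and a one-class SVM) are illustrative assumptions, not a prescription.

```python
# Toy contrast between pattern recognition (supervised) and anomaly detection (unsupervised).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Pattern recognition: learn from labelled examples of two known behaviours,
# then recognize new samples exhibiting the same characteristics.
benign = rng.normal(0.0, 1.0, (200, 2))
malicious = rng.normal(4.0, 1.0, (200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)
clf = KNeighborsClassifier().fit(X, y)
print("classified as:", clf.predict([[3.8, 4.2]]))       # expected: [1] (malicious-like)

# Anomaly detection: model "normal" behaviour only, then flag deviations,
# including kinds of activity never seen during training.
detector = OneClassSVM(nu=0.05).fit(benign)
print("anomaly flag:", detector.predict([[9.0, -7.0]]))  # -1 means "anomalous"
```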
“These increases were driven by expanding our proactive detection technologies in English and Spanish,” the company stated.