As artificial intelligence becomes more common in businesses, from retail to finance to technology, it is helping teams make faster decisions. But behind these smart predictions lies a growing problem: how do you make sure employees only see what they're allowed to, especially when AI mixes information from many different places?
Take this example: A retail company's AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn't supposed to access sensitive customer details? That's where access control becomes tricky.
Why Traditional Access Rules Don’t Work for AI
In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources (internal files, external APIs, sensor feeds) and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.
Why It Matters
Security Concerns: If sensitive data ends up in the wrong hands even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.
Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.
Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and the business.
What’s Making This So Difficult?
1. AI systems often blend data so deeply that it’s hard to tell what came from where.
2. Access rules are usually fixed, but AI relies on fast-changing data.
3. Companies have many users with different roles and permissions, making enforcement complicated.
4. Permissions are often too broad: for example, someone allowed to "view reports" might accidentally access sensitive content.
How Can Businesses Fix This?
• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.
• Flexible Access Rules: Adjust permissions based on user roles and context.
• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.
• Separate Models: Train different AI models for different user groups, each with its own safe data.
• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
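The first three ideas above can be sketched together: tag each record with a sensitivity label at ingestion, propagate the strictest label through the pipeline, and mask any output that depends on sources the viewer isn't cleared for. A minimal illustration (the labels, the toy averaging "model", and the masking logic are all hypothetical, not any particular product's behaviour):

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels, ordered least to most restricted.
LEVELS = {"public": 0, "internal": 1, "restricted": 2}

@dataclass
class Datum:
    value: float
    label: str  # provenance label attached when the data is ingested

def forecast(data):
    """Toy 'model': averages the inputs and propagates the strictest label."""
    avg = sum(d.value for d in data) / len(data)
    strictest = max(data, key=lambda d: LEVELS[d.label]).label
    return avg, strictest

def render(result, viewer_clearance):
    """Mask the output if it depends on data above the viewer's clearance."""
    value, label = result
    if LEVELS[label] > LEVELS[viewer_clearance]:
        return "[redacted: derived from restricted sources]"
    return f"forecast: {value:.1f}"

inputs = [Datum(100.0, "public"), Datum(140.0, "restricted")]
result = forecast(inputs)
print(render(result, "public"))      # masked: forecast used restricted data
print(render(result, "restricted"))  # prints "forecast: 120.0"
```

Real pipelines would carry such labels through feature stores and model outputs rather than a single function call, but the principle is the same: provenance travels with the data, and filtering happens at the point of display.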
As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.
Ivanti, a leading company in network and security solutions, has issued urgent security updates to address a critical vulnerability in its Virtual Traffic Manager (vTM). The flaw, identified as CVE-2024-7593, carries an alarming severity with a CVSS score of 9.8 out of 10, signalling its potential risk to users.
Authentication Bypass Could Lead to Rogue Admin Access
The vulnerability arises from an incorrect implementation of the authentication algorithm in Ivanti vTM; all versions are affected except the patched releases 22.2R1 and 22.7R2. This flaw allows remote attackers to bypass authentication, enabling them to create unauthorised administrative users. This could grant cybercriminals full control over the management interface, posing serious risks to the affected systems.
Affected Versions and Immediate Actions
The vulnerability impacts several versions of Ivanti vTM, including 22.2, 22.3, 22.3R2, 22.5R1, 22.6R1, and 22.7R1. Ivanti has responded by releasing patched versions—22.2R1, 22.7R2, and upcoming fixes for 22.3R3, 22.5R2, and 22.6R2, expected during the week of August 19, 2024. As a temporary measure, the company recommends that users limit admin access to the management interface or restrict it to trusted IP addresses to mitigate the risk of unauthorised access.
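Ivanti's interim guidance, restricting the management interface to trusted IP addresses, is a standard network control. It can be enforced by a firewall or, as a defence-in-depth layer, in application code. A generic sketch of such an allowlist check (the network ranges are illustrative, not Ivanti configuration):

```python
import ipaddress

# Illustrative allowlist: management traffic is accepted only from these networks.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def is_trusted(client_ip: str) -> bool:
    """Return True if the client address falls inside a trusted network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETWORKS)

print(is_trusted("192.168.1.50"))  # True: inside the trusted LAN range
print(is_trusted("203.0.113.7"))   # False: unknown external address
```

Note that an allowlist only shrinks the attack surface; it does not fix the authentication bypass itself, so applying the patched versions remains essential.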
Despite no confirmed incidents of this vulnerability being exploited in the wild, the availability of a proof-of-concept (PoC) code increases the urgency for users to apply the latest patches to safeguard their systems.
Additional Vulnerabilities Addressed in Neurons for ITSM
In addition to the vTM flaw, Ivanti has also patched two serious vulnerabilities in its Neurons for ITSM product. The first, CVE-2024-7569, is an information disclosure vulnerability with a CVSS score of 9.6. It affects Ivanti ITSM on-premises and Neurons for ITSM versions 2023.4 and earlier, allowing attackers to obtain sensitive information, including OIDC client secrets, through debug data.
The second flaw, CVE-2024-7570, rated 8.3 on the CVSS scale, involves improper certificate validation. This vulnerability enables a remote attacker in a man-in-the-middle (MITM) position to craft a token that could grant unauthorised access to the ITSM platform as any user. These issues have been resolved in the latest patched versions of 2023.4, 2023.3, and 2023.2.
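CVE-2024-7570 stems from improper certificate validation, the class of bug that lets a MITM attacker substitute their own certificate and impersonate the server. On the client side, the defence is to insist on full chain and hostname verification rather than disabling it. As a generic illustration (not Ivanti's code), Python's standard library default TLS context already enforces both checks:

```python
import ssl

# ssl.create_default_context() enables exactly the checks that improper
# certificate validation bugs typically skip.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True: server name must match the cert
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: the chain must validate

# Never do this in production; it recreates the vulnerability class:
#   ctx.check_hostname = False
#   ctx.verify_mode = ssl.CERT_NONE
```

Audits for this bug class often amount to searching a codebase for places where these defaults have been switched off.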
Further adding to the urgency, Ivanti has also addressed five high-severity vulnerabilities (CVE-2024-38652, CVE-2024-38653, CVE-2024-36136, CVE-2024-37399, and CVE-2024-37373) in its Avalanche product. These flaws could potentially lead to denial-of-service (DoS) conditions or even remote code execution if exploited. Users are strongly advised to update to version 6.4.4, which includes fixes for these issues.
These security updates highlight the importance of staying current with patches, especially for systems as vital as traffic management and IT service management platforms. Ivanti's quick response to these vulnerabilities helps organisations protect their digital infrastructure from potentially devastating attacks. Users are urged to implement the recommended updates without delay to address the risks posed by these newly discovered flaws.