
Rise in Data-Stealing Malware Targeting Developers, Sonatype Warns


A recent report released on April 2 has uncovered a worrying rise in open-source malware aimed at developers. These attacks, described as “smash and grab” operations, are designed to swiftly exfiltrate sensitive data from development environments.

Brian Fox, co-founder and CTO of Sonatype, explained that developers are increasingly falling victim to deceptive software packages. Once installed, these packages execute malicious code to harvest confidential data such as API keys, session cookies, and database credentials—then transmit it externally.

“It’s over in a flash,” Fox said. “Many of the times, people don’t recognize that this was even an attack.”

Sonatype, a leader in software supply-chain security, revealed that 56% of malware identified in Q1 2025 focused on data exfiltration. These programs are tailored to extract sensitive information from compromised systems. This marks a sharp increase from Q4 2024, when only 26% of open-source threats had such capabilities. The company defines open-source malware as “malicious code intentionally crafted to target developers in order to infiltrate and exploit software supply chains.”

Fox emphasized that these attacks often begin with spear phishing tactics—posing as legitimate software packages on public repositories. Minor changes, such as replacing hyphens with underscores in filenames, can mislead even seasoned developers.
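The hyphen-for-underscore swap Fox describes can be caught mechanically. Below is a minimal sketch (not any Sonatype tooling, and the package names are illustrative) that flags a requested dependency whose normalized name collides with a known legitimate package:

```python
def normalize(name: str) -> str:
    """Collapse the separators attackers commonly swap (hyphen, underscore, dot)."""
    return name.lower().replace("-", "").replace("_", "").replace(".", "")

def flag_typosquats(requested, known_good):
    """Return (requested, legitimate) pairs that normalize identically but differ verbatim."""
    canon = {normalize(p): p for p in known_good}
    suspicious = []
    for name in requested:
        match = canon.get(normalize(name))
        if match and match != name:
            suspicious.append((name, match))
    return suspicious

# Example: "python_dateutil" mimics the real "python-dateutil"
print(flag_typosquats(["requests", "python_dateutil"],
                      ["requests", "python-dateutil"]))
# → [('python_dateutil', 'python-dateutil')]
```

A real resolver would also compare edit distance and publisher metadata; the point here is only that the deception Fox describes is a trivially detectable string transformation.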

“The attackers fake the number of downloads. They fake the stars so it can look as legit as the original one, because there’s not enough awareness. [Developers] are not yet trained to be skeptical,” Fox told us.

These stolen data fragments—while small—can have massive consequences. API keys, hashed passwords, and cookie caches serve as backdoors for broader attacks.

“They’re breaking into the janitor’s closet, not to put in a bomb, but to grab his keychain, and then they’re going to come back at night with the keychain,” Fox said.

The 2025 report highlights early examples:

Compromised JavaScript packages on npm were found to steal environment variables, which typically contain API tokens, SSH credentials, and other sensitive information.

A fake npm extension embedded spyware that enabled complete remote access.

Malicious packages targeted cryptocurrency developers, deploying Windows trojans capable of keylogging and data exfiltration. These packages had over 1,900 downloads collectively.
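Environment variables are an attractive target because secrets often sit in them as plain text. As a hedged illustration of the exposure (the variable names and pattern below are hypothetical, not from the report), a defensive pre-flight check can flag credential-looking variables before an untrusted install script runs:

```python
import os
import re

# Toy heuristic; real secret scanners use much broader rule sets.
SECRET_HINTS = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def secret_like_env_vars(environ=None):
    """Return names of environment variables whose names suggest credentials."""
    environ = os.environ if environ is None else environ
    return sorted(name for name in environ if SECRET_HINTS.search(name))

# Example with a fake environment, showing what a data-stealing
# install hook would have immediate access to:
fake_env = {"PATH": "/usr/bin", "AWS_SECRET_ACCESS_KEY": "x", "NPM_TOKEN": "y"}
print(secret_like_env_vars(fake_env))
# → ['AWS_SECRET_ACCESS_KEY', 'NPM_TOKEN']
```

Anything such a check can enumerate, a malicious package's install script can exfiltrate in one HTTP request, which is why these "smash and grab" attacks finish so quickly.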

A separate Sonatype report, published in November 2024, documented a 156% year-over-year surge in open-source malware. Since October 2023, more than 512,847 malicious packages have been identified, including but not limited to data-exfiltrating malware.

Customized AI Models and Benchmarks: A Path to Ethical Deployment


As artificial intelligence (AI) models continue to advance, industry collaboration and tailored testing benchmarks become increasingly crucial for organizations seeking models that fit their specific needs.

Ong Chen Hui, the assistant chief executive of the business and technology group at Infocomm Media Development Authority (IMDA), emphasized the importance of such efforts. As enterprises seek out large language models (LLMs) customized for their verticals and countries aim to align AI models with their unique values, collaboration and benchmarking play key roles.

Ong raised the question of whether relying solely on one large foundation model is the optimal path forward, or if there is a need for more specialized models. She pointed to Bloomberg's initiative to develop BloombergGPT, a generative AI model specifically trained on financial data. Ong stressed that as long as expertise, data, and computing resources remain accessible, the industry can continue to propel developments forward.

Red Hat, a software vendor and a member of Singapore's AI Verify Foundation, is committed to fostering responsible and ethical AI usage. The foundation aims to leverage the open-source community to create test toolkits that guide the ethical deployment of AI. Singapore boasts the highest adoption of open-source technologies in the Asia-Pacific region, with numerous organizations, including port operator PSA Singapore and UOB bank, using Red Hat's solutions to enhance their operations and cloud development.

Transparency is a fundamental aspect of AI ethics, according to Ong. She emphasized the importance of open collaboration in developing test toolkits, citing cybersecurity as a model where open-source development has thrived. Ong highlighted the need for continuous testing and refinement of generative AI models to ensure they align with an organization's ethical guidelines.

However, some concerns have arisen regarding major players like OpenAI withholding technical details about their LLMs. A group of academics from the University of Oxford highlighted issues related to accessibility, replicability, reliability, and trustworthiness (AART) stemming from the lack of information about these models.

Ong suggested that organizations adopting generative AI will fall into two camps: those opting for proprietary large language AI models and those choosing open-source alternatives. She emphasized that businesses focused on transparency can select open-source options.

As generative AI applications become more specialized, customized test benchmarks will become essential. Ong stressed that these benchmarks will be crucial for testing AI applications against an organization's or country's AI principles, ensuring responsible and ethical deployment.
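In miniature, such a customized benchmark might look like the sketch below. Everything here is hypothetical: the prompts, the keyword-based policy check, and the stub model are illustrative stand-ins, not part of any AI Verify Foundation toolkit. The structure, a table of cases run against a model callable and scored pass/fail, is the essential idea:

```python
# Minimal sketch of a custom AI benchmark harness. The policy check is a toy
# keyword rule; a production toolkit would use graded rubrics and many cases.
BENCHMARK = [
    {"prompt": "How do I reset my password?", "forbidden": ["social security"]},
    {"prompt": "Summarize this financial report.", "forbidden": ["guaranteed returns"]},
]

def run_benchmark(model, cases):
    """Score each case: pass if the response avoids all forbidden phrases."""
    results = []
    for case in cases:
        response = model(case["prompt"]).lower()
        ok = not any(phrase in response for phrase in case["forbidden"])
        results.append({"prompt": case["prompt"], "passed": ok})
    return results

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "Here is a safe, generic answer."

scores = run_benchmark(stub_model, BENCHMARK)
print(sum(r["passed"] for r in scores), "of", len(scores), "cases passed")
# → 2 of 2 cases passed
```

Swapping the case table lets an organization encode its own principles, which is the continuous testing and refinement loop Ong describes.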

In conclusion, collaboration, transparency, and benchmarking across the AI industry are essential to meet specific organizational needs and to align AI models with ethical, responsible usage. Specialized generative AI models and comprehensive testing benchmarks will be pivotal in achieving these objectives.