AWS CodeBuild Misconfiguration Could Have Enabled Full GitHub Repository Takeover

A misconfiguration in how Amazon Web Services set up its CodeBuild service could have let attackers seize control of official AWS GitHub repositories, access that could have spread into other parts of AWS and enabled wide-reaching software supply chain attacks. Cloud security firm Wiz found the weakness, dubbed it CodeBreach, and reported it to AWS on August 25, 2025; fixes were in place by September of that year. Researchers say critical AWS components were at stake, including the widely used AWS SDK for JavaScript that developers rely on every day.

CodeBreach could have allowed attackers to slip malicious code into trusted repositories, said Wiz researchers Yuval Avrahami and Nir Ohfeld. If exploited, the many applications that depend on AWS SDKs could have been affected, with possible knock-on effects on how the AWS Console functions and on customer environments. The root cause was not a bug in CodeBuild itself, but gaps deeper in the automated build processes where code is merged and deployed automatically.

The problem was that the webhook filters had been configured incorrectly. These filters are supposed to decide which GitHub events are allowed to start CodeBuild jobs, so that only trusted contributors or selected branches can trigger builds with elevated access and unsafe code changes are kept out. In several open-source projects run by AWS, however, the rules meant to validate the triggering user's identity did not work as intended: the patterns written to match those users failed at their job.

Specifically, some repositories used regex patterns that lacked boundary anchors at the beginning and end, so they performed partial matches rather than full validation. A GitHub user ID only needed to contain an authorized maintainer's ID somewhere within a longer sequence to slip through. Because GitHub assigns IDs sequentially, the Wiz researchers showed it was only a matter of time before newly issued identifiers happened to contain a known legitimate one.
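
To make the failure mode concrete, here is a simplified Python sketch of the class of flaw described above. The IDs and patterns are invented for illustration and are not taken from the affected repositories; the point is only that a filter regex without ^ and $ anchors accepts any actor ID that merely contains an authorized maintainer's ID.

```python
import re

AUTHORIZED_MAINTAINER_ID = "1234567"          # made-up ID for illustration

unanchored = re.compile(r"1234567")           # matches the ID *anywhere* in the string
anchored = re.compile(r"^1234567$")           # matches only the exact ID

def passes_filter(actor_id: str, pattern: re.Pattern) -> bool:
    """Simulates a webhook filter deciding whether an event may trigger a build."""
    return pattern.search(actor_id) is not None

# A newly registered account or app whose sequentially assigned ID merely
# *contains* the authorized ID slips past the unanchored check.
attacker_id = "91234567"
print(passes_filter(attacker_id, unanchored))  # True  -> build would be triggered
print(passes_filter(attacker_id, anchored))    # False -> correctly rejected
```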

Automation removed the need for any manual effort: bots could register GitHub Apps continuously, one after another, until a newly assigned ID happened to satisfy the broken check. Once a matching ID appeared, the attacker could quietly trigger a CodeBuild workflow that should have stayed locked down, and secrets spilled into build logs that nobody monitored closely. For aws-sdk-js-v3, that leak exposed a highly privileged token meant to stay private, effectively handing over full control of the repository. With that level of access, attackers could have pushed malicious code into protected branches without warning.

Malicious changes could be approved through rigged pull requests, while secrets stored in the repository could be quietly exfiltrated. From there, corrupted updates might travel unnoticed through trusted AWS libraries to the users who rely on them. AWS eventually confirmed that some repositories lacked strict webhook checks, while noting that only certain configurations were exposed.

Amazon says the flawed settings have now been fixed: exposed credentials were rotated and safeguards around the build process were tightened. The company added that there is no evidence CodeBreach was ever exploited by attackers. Still, specialists warn that small gaps in automated pipelines can lead to big problems down the line, and the finding adds fuel to growing concerns about CI/CD security.

Recent research has likewise shown that poorly configured GitHub Actions workflows can leak sensitive tokens, letting attackers gain higher privileges in large open-source projects. The lesson is that tighter checks matter: pipelines should run with the minimum access they need, and how untrusted input is handled during builds is critical at every step if systems are to stay secure.

Amazon Says It Has Disrupted GRU-Linked Cyber Operations Targeting Cloud Customers

Amazon has announced that its threat intelligence division has intervened in ongoing cyber operations attributed to hackers associated with Russia’s foreign military intelligence service, the GRU. The activity targeted organizations using Amazon’s cloud infrastructure, with attackers attempting to gain unauthorized access to customer-managed systems.

The company reported that the malicious campaign dates back to 2021 and largely concentrated on Western critical infrastructure. Within this scope, energy-related organizations were among the most frequently targeted sectors, indicating a strategic focus on high-impact industries.

Amazon’s investigation shows that the attackers initially relied on exploiting security weaknesses to break into networks. Over multiple years, they used a combination of newly discovered flaws and already known vulnerabilities in enterprise technologies, including security appliances, collaboration software, and data protection platforms. These weaknesses served as their primary entry points.

As the campaign progressed, the attackers adjusted their approach. By 2025, Amazon observed a reduced reliance on vulnerability exploitation. Instead, the group increasingly targeted customer network edge devices that were incorrectly configured. These included enterprise routers, VPN gateways, network management systems, collaboration tools, and cloud-based project management platforms.

Devices with exposed administrative interfaces or weak security controls became easy targets. By exploiting configuration errors rather than software flaws, the attackers achieved the same long-term goals: maintaining persistent access to critical networks and collecting login credentials for later use.

Amazon noted that this shift reflects a change in operational focus rather than intent. While misconfiguration abuse has been observed since at least 2022, the sustained emphasis on this tactic in 2025 suggests the attackers deliberately scaled back efforts to exploit zero-day and known vulnerabilities. Despite this evolution, their core objectives remained unchanged: credential theft and quiet movement within victim environments using minimal resources and low visibility.

Based on overlapping infrastructure and targeting similarities with previously identified threat groups, Amazon assessed with high confidence that the activity is linked to GRU-associated hackers. The company believes one subgroup, previously identified by external researchers, may be responsible for actions taken after initial compromise as part of a broader, multi-unit campaign.

Although Amazon did not directly observe how data was extracted, forensic evidence suggests passive network monitoring techniques were used. Indicators included delays between initial device compromise and credential usage, as well as unauthorized reuse of legitimate organizational credentials.

The compromised systems were customer-controlled network appliances running on Amazon EC2 instances. Amazon emphasized that no vulnerabilities in AWS services themselves were exploited during these attacks.

Once the activity was detected, Amazon moved to secure affected instances, alerted impacted customers, and shared intelligence with relevant vendors and industry partners. The company stated that coordinated action helped disrupt the attackers’ operations and limit further exposure.

Amazon also released a list of internet addresses linked to the activity but cautioned organizations against blocking them without proper analysis, as they belong to legitimate systems that had been hijacked.

To mitigate similar threats, Amazon recommended immediate steps such as auditing network device configurations, monitoring for credential replay, and closely tracking access to administrative portals. For AWS users, additional measures include isolating management interfaces, tightening security group rules, and enabling monitoring tools like CloudTrail, GuardDuty, and VPC Flow Logs.
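
As one illustration of the kind of configuration audit Amazon recommends, the boto3 sketch below lists security groups that expose common administrative ports to the entire internet. It assumes default AWS credentials and region are already configured, the port list is illustrative, and pagination is omitted for brevity; it is not an AWS-provided tool.

```python
import boto3

ADMIN_PORTS = {22, 3389, 443, 8443}   # example "administrative" ports for this sketch

ec2 = boto3.client("ec2")
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        # A CIDR of 0.0.0.0/0 means the rule is open to the whole internet.
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
        if open_to_world and from_port is not None and any(
            from_port <= p <= to_port for p in ADMIN_PORTS
        ):
            print(f"{sg['GroupId']} ({sg.get('GroupName', '?')}): "
                  f"ports {from_port}-{to_port} open to 0.0.0.0/0")
```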

ShadowV2 Botnet Activity Quietly Intensified During AWS Outage

A recently discovered wave of malicious activity has raised fresh concerns among cybersecurity analysts: ShadowV2, a fast-evolving strain of malware, is quietly assembling a global network of compromised devices. The operation is based heavily on Mirai's source code, appears far more deliberate and calculated than earlier variants, and now spans more than 20 countries.

ShadowV2 was built by actors exploiting widespread misconfigurations in everyday Internet of Things hardware, an increasingly common weakness in modern digital ecosystems, with the aim of assembling a resilient, stealthy, and scalable botnet. FortiGuard Labs discovered the campaign during the Amazon Web Services disruption in late October, which the operators appear to have used as cover for their activity.

During the outage the malware spiked in activity, which investigators interpret as a controlled test run rather than an opportunistic attack, according to the report. In its analysis, FortiGuard observed ShadowV2 exploiting a wide range of known vulnerabilities in devices from DD-WRT (CVE-2009-2765), D-Link (CVE-2020-25506, CVE-2022-37055, CVE-2024-10914, CVE-2024-10915), DigiEver (CVE-2023-52163), TBK (CVE-2024-3721), and TP-Link (CVE-2024-53375).

The campaign's reach across industries and geographies, coupled with its precise use of IoT flaws, points to a maturing cybercriminal ecosystem, according to experts, one that is increasingly adept at leveraging consumer-grade technology to stage sophisticated, coordinated attacks.

ShadowV2 exploited weaknesses that have long been recognized in IoT security, particularly in devices that manufacturers have already retired. The report draws on research by NetSecFish, which identified several vulnerabilities affecting D-Link products that have reached the end of their life cycle.

The most concerning issue is CVE-2024-10914, a command-injection flaw affecting end-of-life D-Link products. A related issue, CVE-2024-10915, was reported by NetSecFish in November 2024. With no advisory initially available, D-Link later confirmed that the affected devices had reached end of support and would not receive patches.

The vendor responded to inquiries by updating an existing bulletin to include the newly assigned CVE and issuing a further announcement directly related to the ShadowV2 campaign, reminding customers that outdated hardware will no longer receive security updates or maintenance.

Another vulnerability exploited by the botnet, CVE-2024-53375, was disclosed during the same period and has reportedly been resolved through a beta firmware update. Taken together, these lapses illustrate how aging consumer devices, many of which are left running long after support ends, continue to provide fertile ground for large-scale malicious operations.

Analysis of the campaign shows ShadowV2's operators using a familiar yet effective distribution chain to reach as many devices as possible. After exploiting a vulnerable IoT device, the attackers download a shell script named binary.sh from the command server at 81[.]88[.]18[.]108. Once executed, the script fetches the ShadowV2 payload, with every sample carrying the Shadow prefix, which closely resembles the well-known Mirai offshoot LZRD.

Analysis of the x86-64 build of the malware, shadow.x86_64, found that it protects its configuration and attack routines with a lightweight XOR encoding: file system paths, HTTP headers, and User-Agent strings are obfuscated with a single-byte key (0x22) and decoded at runtime.
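
For readers unfamiliar with the technique, the following is a minimal Python sketch of single-byte XOR obfuscation of the kind described in the analysis (key 0x22). It illustrates the general method, not code extracted from the ShadowV2 sample.

```python
KEY = 0x22

def xor_decode(data: bytes, key: int = KEY) -> bytes:
    """XOR every byte with the same one-byte key; applying it twice round-trips."""
    return bytes(b ^ key for b in data)

plain = b"User-Agent: Mozilla/5.0"
obfuscated = xor_decode(plain)      # encoding and decoding are the same operation
print(obfuscated)                   # unreadable bytes as stored in the binary
print(xor_decode(obfuscated))       # b'User-Agent: Mozilla/5.0' recovered at runtime
```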

Once these parameters are decoded, the bot connects to its command-and-control server and waits for instructions to launch distributed denial-of-service attacks. Though modest in design, this streamlined approach reflects a disciplined, purpose-built tool that can be deployed across diverse hardware without attracting immediate attention.

According to Fortinet, deeper analysis of the malware, with its XOR-encoded configuration data and compact binaries, underscores how much ShadowV2 shares with the Mirai-derived LZRD strain, and how those traits help it minimize its visibility on compromised systems.

The infection sequence observed across multiple incidents follows a consistent pattern: attackers break into a vulnerable device, download the ShadowV2 payload via 81[.]88[.]18[.]108, and install it. The malware then connects to its command server at silverpath[.]shadowstresser[.]info, joining a distributed network geared toward coordinated attacks.

Once installed, the malware persists on the compromised device. It supports a wide range of DDoS techniques, including UDP, TCP, and HTTP floods, making the botnet well suited for high-volume denial-of-service operations, including those associated with for-hire DDoS services, criminal extortion, and targeted disruption campaigns.

Researchers believe ShadowV2's initial activity window may have been deliberately chosen. Major outages such as the AWS disruption of late October are an ideal time to test a botnet in its early stages, because sudden traffic irregularities blend into the broader instability of the service.

By targeting both consumer-grade and enterprise-grade IoT systems, the operators appear to be building an attack fabric that is flexible, geographically diffuse, and capable of scaling rapidly even under strong defensive pressure. Although the observed activity was brief, analysts believe it served as a controlled proof of concept, and that a more expansive or destructive return could coincide with future widespread outages or high-profile international events.

In light of the campaign's implications, Fortinet has warned consumers and organizations to strengthen their defenses before similar operations recur. Beyond installing the latest firmware on all supported IoT and networking devices, the company stresses the importance of decommissioning end-of-life D-Link and other vendors' devices and disabling unnecessary internet-exposed features such as remote management and UPnP.

In addition, IoT hardware should be isolated within segmented networks, outbound traffic and DNS queries should be monitored for anomalies, and strong, unique passwords should be enforced on every interface of every connected device. Together, these measures aim to shrink the attack surface that has allowed IoT-driven botnets such as ShadowV2 to flourish.

Although ShadowV2's activity was limited to the short window of the Amazon Web Services outage, researchers stress that it should serve as a timely reminder of the fragile state of global IoT security. The campaign underscores the continued importance of protecting internet-connected devices, updating firmware regularly, and monitoring network activity for unfamiliar or high-volume traffic patterns that may signal an early compromise.

To support proactive threat hunting, Fortinet has released an extensive set of indicators of compromise for defenders, reinforcing what researcher Li has described as an ongoing reality in cybersecurity: IoT hardware remains one of the most vulnerable entry points for cybercriminals. Concern deepened when, just days after ShadowV2's suspected test run, Microsoft disclosed that Azure had defended against what it called the largest cloud-based DDoS attack ever recorded.

That attack, attributed to the Aisuru botnet, reached an unprecedented 15.72 Tbps and delivered nearly 3.64 billion packets per second. Microsoft reported that its cloud DDoS protection systems fully absorbed the assault on October 24, preventing any disruption to customer workloads.

Analysts suggest the timing of the two incidents points to a rapidly intensifying threat landscape in which adversaries increasingly prepare large-scale attacks with little advance notice. In their view, ShadowV2 is not an isolated event but a preview of a more volatile era of botnet-driven disruption once the dust settles on these consecutive warning shots.

The convergence of aging consumer hardware, incomplete patch ecosystems, and increasingly sophisticated adversaries means a single overlooked device can become the launchpad for global-scale attacks. According to experts, real resilience will require more than reactive patching: organizations need sustained visibility into their networks, strict asset lifecycle management, and architectures that limit the blast radius of the compromises that will inevitably occur.

Consumers also play a crucial role in keeping botnets from spreading: replacing unsupported devices, enabling automatic updates, and regularly reviewing router and Internet-of-Things configurations all help reduce the number of vulnerable nodes available to botnet operators.

Faced with attackers willing to show off their capabilities during moments of widespread disruption, cybersecurity experts warn that proactive preparedness must replace purely event-driven responses. The ShadowV2 incident, they argue, is a timely reminder that strengthening the foundations of IoT security today is crucial to preventing far more disruptive campaigns from unfolding tomorrow.

AWS Apologizes for Massive Outage That Disrupted Major Platforms Worldwide

Amazon Web Services (AWS) has issued an apology to customers following a widespread outage on October 20 that brought down more than a thousand websites and services globally. The disruption affected major platforms including Snapchat, Reddit, Lloyds Bank, Venmo, and several gaming and payment applications, underscoring the heavy dependence of the modern internet on a few dominant cloud providers. The outage originated in AWS’s North Virginia region (US-EAST-1), which powers a significant portion of global online infrastructure. 

According to Amazon’s official statement, the outage stemmed from internal errors that prevented systems from properly linking domain names to the IP addresses required to locate them. This technical fault caused a cascade of connectivity failures across multiple services. “We apologize for the impact this event caused our customers,” AWS said. “We know how critical our services are to our customers, their applications, and their businesses. We are committed to learning from this and improving our availability.”

While some platforms like Fortnite and Roblox recovered within a few hours, others faced extended downtime. Lloyds Bank customers, for instance, reported continued access issues well into the afternoon, and services like Reddit and Venmo were affected for longer durations. The outage even extended to connected devices such as Eight Sleep's smart mattresses, which rely on internet access to adjust temperature and elevation; after some users reported overheating or malfunctioning devices during the outage, the company said it would work to make its systems more resilient.

AWS's detailed incident summary attributed the issue to a "latent race condition" in the systems managing Domain Name System (DNS) records in the affected region. Essentially, one of the automated processes responsible for keeping critical database systems synchronized malfunctioned, triggering a chain reaction that disrupted multiple dependent services. Because many of AWS's internal processes are automated, the problem propagated without human intervention until it was detected and mitigated.
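
The phrase "latent race condition" is abstract, so here is a deliberately simplified, generic Python illustration of the concept, not a model of AWS's actual DNS automation: two automation workers write to the same record without ordering guarantees, so a delayed worker can overwrite a newer plan with a stale one.

```python
import threading
import time

record = {"name": "service.example.internal", "plan": 0}

def apply_plan(plan_id: int, delay: float) -> None:
    desired = plan_id          # the worker computes the plan it intends to apply...
    time.sleep(delay)          # ...but is delayed before it writes
    record["plan"] = desired   # unconditional write: a stale plan can overwrite a newer one

t_new = threading.Thread(target=apply_plan, args=(2, 0.01))  # newer plan, fast worker
t_old = threading.Thread(target=apply_plan, args=(1, 0.05))  # older plan, delayed worker
t_new.start(); t_old.start()
t_new.join(); t_old.join()
print(record["plan"])          # 1: the record ends up holding the stale plan

# The usual fix is a guarded write that only moves forward; in a real system
# this check-and-write must be atomic (a compare-and-set or version check).
def apply_plan_safely(plan_id: int) -> None:
    if plan_id > record["plan"]:
        record["plan"] = plan_id
```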

Dr. Junade Ali, a software engineer and fellow at the Institute for Engineering and Technology, explained that “faulty automation” was central to the failure. He noted that the internal “address book” system in the region broke down, preventing key infrastructure components from locating each other. “This incident demonstrates how businesses relying on a single cloud provider remain vulnerable to regional failures,” Dr. Ali added, emphasizing the importance of diversifying cloud service providers to improve resilience. 

The event once again highlights the concentration of digital infrastructure within a few dominant providers, primarily AWS and Microsoft Azure. Experts warn that such dependency increases systemic risk, as disruptions in one region can have global ripple effects. Amazon has stated that it will take measures to strengthen fault detection, introduce greater redundancy, and enhance the reliability of automated processes in its network. 

As the world grows increasingly reliant on cloud computing, the AWS outage serves as a critical reminder of the fragility of internet infrastructure and the urgent need for redundancy and diversification.

AWS Outage Exposes the Fragility of Centralized Messaging Platforms

A recently recorded outage at Amazon Web Services (AWS) disrupted several major online services worldwide, including privacy-focused communication apps such as Signal. The event has sparked renewed discussion about the risks of depending on centralized systems for critical digital communication.

Signal is known globally for its strong encryption and commitment to privacy. However, its centralized structure means that all its operations rely on servers located within a single jurisdiction and primarily managed by one cloud provider. When that infrastructure fails, the app’s global availability is affected at once. This incident has demonstrated that even highly secure applications can experience disruption if they depend on a single service provider.

According to experts working on decentralized communication technology, this kind of breakdown reveals a fundamental flaw in the way most modern communication apps are built. They argue that centralization makes systems easier to control but also easier to compromise. If the central infrastructure goes offline, every user connected to it is impacted simultaneously.

Developers behind the Matrix protocol, an open-source network for decentralized communication, have long emphasized the need for more resilient systems. They explain that Matrix allows users to communicate without relying entirely on the internet or on a single server. Instead, the protocol enables anyone to host their own server or connect through smaller, distributed networks. This decentralization offers users more control over their data and ensures communication can continue even if a major provider like AWS faces an outage.

The first platform built on Matrix, Element, was launched in 2016 by a UK-based team with the aim of offering encrypted communication for both individuals and institutions. For years, Element’s primary focus was to help governments and organizations secure their communication systems. This focus allowed the project to achieve financial stability while developing sustainable, privacy-preserving technologies.

Now, with growing support and new investments, the developers behind Matrix are working toward expanding the technology for broader public use. Recent funding from European institutions has been directed toward developing peer-to-peer and mesh network communication, which could allow users to exchange messages without relying on centralized servers or continuous internet connectivity. These networks create direct device-to-device links, potentially keeping users connected during internet blackouts or technical failures.

Mesh-based communication is not a new idea. Previous applications like FireChat allowed people to send messages through Bluetooth or Wi-Fi Direct during times when the internet was restricted. The concept gained popularity during civil movements where traditional communication channels were limited. More recently, other developers have experimented with similar models, exploring ways to make decentralized communication more user-friendly and accessible.

While decentralized systems bring clear advantages in terms of resilience and independence, they also face challenges. Running individual servers or maintaining peer-to-peer networks can be complex, requiring technical knowledge that many everyday users might not have. Developers acknowledge that reaching mainstream adoption will depend on simplifying these systems so they work as seamlessly as centralized apps.

Other privacy-focused technology leaders have also noted the implications of the AWS outage. They argue that relying on infrastructure concentrated within a few major U.S. providers poses strategic and privacy risks, especially for regions like Europe that aim to maintain digital autonomy. Building independent, regionally controlled cloud and communication systems is increasingly being seen as a necessary step toward safeguarding user privacy and operational security.

The recent AWS disruption serves as a clear warning. Centralized systems, no matter how secure, remain vulnerable to large-scale failures. As the digital world continues to depend heavily on cloud-based infrastructure, developing decentralized and distributed alternatives may be key to ensuring communication remains secure, private, and resilient in the face of future outages.


The Fragile Internet: How Small Failures Trigger Global Outages

The modern internet, though vast and advanced, remains surprisingly delicate. A minor technical fault or human error can disrupt millions of users worldwide, revealing how dependent our lives have become on digital systems.

On October 20, 2025, a technical error in a database service operated by Amazon Web Services (AWS) caused widespread outages across several online platforms. AWS, one of the largest cloud computing providers globally, hosts the infrastructure behind thousands of popular websites and apps. As a result, users found services such as Roblox, Fortnite, Pokémon Go, Snapchat, Slack, and multiple banking platforms temporarily inaccessible. The incident showed how a single malfunction in a key cloud system can paralyze numerous organizations at once.

Such disruptions are not new. In July 2024, a faulty software update from cybersecurity company CrowdStrike crashed around 8.5 million Windows computers globally, producing the infamous “blue screen of death.” Airlines had to cancel tens of thousands of flights, hospitals postponed surgeries, and emergency services across the United States faced interruptions. Businesses reverted to manual operations, with some even switching to cash transactions. The event became a global lesson in how a single rushed software update can cripple essential infrastructure.

History provides many similar warnings. In 1997, a technical glitch at Network Solutions Inc., a major domain registrar, temporarily disabled every website ending in “.com” and “.net.” Though the number of websites was smaller then, the event marked the first large-scale internet failure, showing how dependent the digital world had already become on centralized systems.

Some outages, however, have stemmed from physical damage. In 2011, an elderly woman in Georgia accidentally cut through a fiber-optic cable while scavenging for copper, disconnecting the entire nation of Armenia from the internet. The incident exposed how a single damaged cable could isolate millions of users. Similarly, in 2017, a construction vehicle in South Africa severed a key line, knocking Zimbabwe offline for hours. Even undersea cables face threats, with sharks and other marine life occasionally biting through them, forcing companies like Google to reinforce cables with protective materials.

In 2022, Canada witnessed one of its largest connectivity failures when telecom provider Rogers Communications experienced a system breakdown that halted internet and phone services for roughly a quarter of the country. Emergency calls, hospital appointments, and digital payments were affected nationwide, highlighting the deep societal consequences of a single network failure.

Experts warn that such events will keep occurring. As networks grow more interconnected, even a small mistake or single-point failure can spread rapidly. Cybersecurity analysts emphasize the need for stronger redundancy, slower software rollouts, and diversified cloud dependencies to prevent global disruptions.

The internet connects nearly every part of modern life, yet these incidents remind us that it remains vulnerable. Whether caused by human error, faulty code, or damaged cables, the web’s fragility shows why constant vigilance, better infrastructure planning, and verified information are essential to keeping the world online.



Amazon resolves major AWS outage that disrupted apps, websites, and banks globally

A widespread disruption at Amazon Web Services (AWS) on Monday caused several high-profile apps, websites, and banking platforms to go offline for hours before the issue was finally resolved later in the night. The outage, which affected one of Amazon’s main cloud regions in the United States, drew attention to how heavily the global digital infrastructure depends on a few large cloud service providers.

According to Amazon’s official update, the problem stemmed from a technical fault in its Domain Name System (DNS) — a core internet function that translates website names into numerical addresses that computers can read. When the DNS experiences interruptions, browsers and applications lose their ability to locate and connect with servers, causing widespread loading failures. The company confirmed the issue affected its DynamoDB API endpoint in the US-EAST-1 region, one of its busiest hubs.
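
As a small illustration of why a DNS fault cascades, the sketch below resolves a service hostname the way any client library must before it can connect, using the public DynamoDB endpoint named in the incident. If resolution fails, everything downstream fails with it.

```python
import socket

hostname = "dynamodb.us-east-1.amazonaws.com"

try:
    # getaddrinfo performs the DNS lookup that every connection depends on.
    addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
    print(f"{hostname} resolves to: {sorted(addresses)}")
except socket.gaierror as exc:
    # When DNS answers are missing or wrong, applications fail here,
    # before a single byte ever reaches the service itself.
    print(f"Could not resolve {hostname}: {exc}")
```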

The first reports of disruptions appeared around 7:00 a.m. BST on Monday, when users began facing difficulties accessing multiple platforms. As the issue spread, users of services such as Snapchat, Fortnite, and Duolingo were unable to log in or perform basic functions. Several banking websites, including Lloyds and Halifax, also reported temporary connectivity problems.

The outage quickly escalated to a global scale. According to the monitoring website Downdetector, more than 11 million user complaints were recorded throughout the day, an unprecedented figure that reflected the magnitude of the disruption. Early in the incident, Downdetector noted over four million reports from more than 500 affected platforms within just a few hours, which was more than double its usual weekday average.

AWS engineers worked through the day to isolate the source of the issue and restore affected systems. To stabilize its network, Amazon temporarily limited some internal operations to prevent further cascading failures. By 11:00 p.m. BST, the company announced that all services had “returned to normal operations.”

Experts said the incident underlined the vulnerabilities of an increasingly centralized internet. Professor Alan Woodward of the University of Surrey explained that modern online systems are highly interdependent, meaning that an error within one major provider can ripple across numerous unrelated services. “Even small technical mistakes can trigger large-scale failures,” he said, pointing out how human or software missteps in one corner of the infrastructure can have global consequences.

Professor Mike Chapple from the University of Notre Dame compared the recovery process to restoring electricity after a large power outage. He said the system might “flicker” several times as engineers fix underlying causes and bring services gradually back online.

Industry observers say such incidents reflect a growing systemic risk within the cloud computing sector, which is dominated by a handful of major firms, with Amazon, Microsoft, and Google collectively controlling nearly 70% of the market. Cori Crider, director of the Future of Technology Institute, described the current model as “unsustainable,” warning that heavy reliance on a few global companies poses economic and security risks for nations and organizations alike.

Other experts suggested that responsibility also lies with companies using these services. Ken Birman, a computer science professor at Cornell University, noted that many organizations fail to develop backup mechanisms to keep essential applications online during provider outages. “We already know how to build more resilient systems,” he said. “The challenge is that many businesses still rely entirely on their cloud providers instead of investing in redundancy.”
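
A minimal sketch of what that redundancy can look like at the application level is shown below. The endpoints are placeholders, and real deployments also need data replication, health checks, and DNS or load-balancer failover; this only illustrates the client-side idea of falling back to a secondary region.

```python
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://api.primary-region.example.com/health",    # placeholder primary endpoint
    "https://api.secondary-region.example.com/health",  # placeholder secondary endpoint
]

def fetch_with_failover(urls, timeout=3):
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()                 # first healthy endpoint wins
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc                       # remember the failure, try the next one
    raise RuntimeError(f"all endpoints failed: {last_error}")

# fetch_with_failover(ENDPOINTS)  # would be called with real endpoints in practice
```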

Although AWS has not released a detailed technical report yet, its preliminary statement confirmed that the outage originated from a DNS-related fault within its DynamoDB service. The incident, though resolved, highlights a growing concern within the cybersecurity community: as dependence on cloud computing deepens, so does the scale of disruption when a single provider experiences a failure.


Salesloft Hack Shows How Developer Breaches Can Spread

Salesloft, a popular sales engagement platform, has revealed that a breach of its GitHub environment earlier this year played a key role in a recent wave of data theft attacks targeting Salesforce customers.

The company explained that attackers gained access to its GitHub repositories between March and June 2025. During this time, intruders downloaded code, added unauthorized accounts, and created rogue workflows. These actions gave them a foothold that was later used to compromise Drift, Salesloft’s conversational marketing product. Drift integrates with major platforms such as Salesforce and Google Workspace, enabling businesses to automate chat interactions and sales pipelines.


How the breach unfolded

Investigators from cybersecurity firm Mandiant, who were brought in to assist Salesloft, found that the GitHub compromise was the first step in a multi-stage campaign. After the attackers established persistence, they moved into Drift’s cloud infrastructure hosted on Amazon Web Services (AWS). From there, they stole OAuth tokens, digital keys that allow applications to access user accounts without requiring passwords.

These stolen tokens were then exploited in August to infiltrate Salesforce environments belonging to multiple organizations. By abusing the access tokens, attackers were able to view and extract customer support cases. Many of these records contained sensitive information such as cloud service credentials, authentication tokens, and even Snowflake-related access keys.
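
To see why token theft bypasses normal authentication, here is a minimal, hypothetical sketch of a REST query authenticated only by a bearer token. The instance URL and token are placeholders; the query path follows the standard Salesforce REST pattern, and the request is left commented out because the values are not real.

```python
import urllib.request

INSTANCE = "https://example.my.salesforce.com"   # placeholder instance URL
TOKEN = "stolen-oauth-access-token"              # placeholder token

req = urllib.request.Request(
    f"{INSTANCE}/services/data/v59.0/query?q=SELECT+Id,Subject+FROM+Case",
    headers={"Authorization": f"Bearer {TOKEN}"},  # the token alone authenticates the call
)

# With a valid (stolen) token this call simply succeeds and returns support
# cases, which is how attackers harvested credentials embedded in them.
# raw = urllib.request.urlopen(req).read()
```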


Impact on organizations

The theft of Salesforce data affected a wide range of technology companies. Attackers specifically sought credentials and secrets that could be reused to gain further access into enterprise systems. According to Salesloft’s August 26 update, the campaign’s primary goal was credential theft rather than direct financial fraud.

Threat intelligence groups have tracked this operation under the identifier UNC6395. Meanwhile, reports also suggest links to known cybercrime groups, although conclusive attribution remains unsettled.


Response and recovery

Salesloft said it has since rotated credentials, hardened its defenses, and isolated Drift’s infrastructure to prevent further abuse. Mandiant confirmed that containment steps have been effective, with no evidence that attackers maintain ongoing access. Current efforts are focused on forensic review and long-term assurance.

Following weeks of precautionary suspensions, Salesloft has now restored its Salesforce integrations. The company has also published detailed instructions to help customers safely resume data synchronization.

The incident underlines the risks of supply-chain style attacks, where a compromise at one service provider can cascade into breaches at many of its customers. It underscores the importance of securing developer accounts, closely monitoring access tokens, and limiting sensitive data shared in support cases.

For organizations, best practices now include regularly rotating OAuth tokens, auditing third-party app permissions, and enforcing stronger segmentation between critical systems.