
Instagram Refutes Breach Allegations After Claims of 17 Million User Records Circulating Online

Instagram has firmly denied claims of a new data breach following reports that personal details linked to more than 17 million accounts are being shared across online forums. The company stated that its internal systems were not compromised and that user accounts remain secure.

The clarification comes after concerns emerged around a technical flaw that allowed unknown actors to repeatedly trigger password reset emails for Instagram users. Meta, Instagram’s parent company, confirmed that this issue has been fixed. According to the company, the flaw did not provide access to accounts or expose passwords. Users who received unexpected reset emails were advised to ignore them, as no action is required.

Public attention intensified after cybersecurity alerts suggested that a large dataset allegedly connected to Instagram accounts had been released online. The data, which was reportedly shared without charge on several hacking forums, was claimed to have been collected through an unverified Instagram API vulnerability dating back to 2024.

The dataset is said to include information from over 17 million profiles. The exposed details reportedly vary by record and include usernames, internal account IDs, names, email addresses, phone numbers, and, in some cases, physical addresses. Analysis of the data shows that not all records contain complete personal details, with some entries listing only basic identifiers such as a username and account ID.

Researchers discussing the incident on social media platforms have suggested that the data may not be recent. Some claim it could originate from an older scraping incident, possibly dating back to 2022. However, no technical evidence has been publicly provided to support these claims. Meta has also stated that it has no record of Instagram API breaches occurring in either 2022 or 2024.

Instagram has previously dealt with scraping-related incidents. In one earlier case, a vulnerability allowed attackers to collect and sell personal information associated with millions of accounts. Due to this history, cybersecurity experts believe the newly surfaced dataset could be a collection of older information gathered from multiple sources over several years, rather than the result of a newly discovered vulnerability.

Attempts to verify the origin of the data have so far been unsuccessful. The individual responsible for releasing the dataset did not respond to requests seeking clarification on when or how the information was obtained.

At present, there is no confirmation that this situation represents a new breach of Instagram’s systems. No evidence has been provided to demonstrate that the data was extracted through a recently exploited flaw, and Meta maintains that there has been no unauthorized access to its infrastructure.

While passwords are not included in the leaked information, users are still urged to remain cautious. Such datasets are often used in phishing emails, scam messages, and social engineering attacks designed to trick individuals into revealing additional information.

Users who receive password reset emails or login codes they did not request should delete them and take no further action. Enabling two-factor authentication is strongly recommended, as it provides an added layer of security against unauthorized access attempts.


Gainsight Breach Spread into Salesforce Environments; Scope Under Investigation

An ongoing security incident involving Gainsight's customer-management platform has raised fresh alarms about how deeply third-party integrations can affect cloud environments. The breach centers on compromised OAuth tokens tied to Gainsight's Salesforce connectors, and it remains unclear how many organizations were touched or what type of information was accessed.

Salesforce was the first to flag suspicious activity originating from Gainsight's connected applications. As a precaution, it revoked all associated access tokens and temporarily disabled the affected integrations. The company also released detailed indicators of compromise, timelines of malicious activity, and guidance urging customers to review authentication logs and API usage within their own environments.

Gainsight later confirmed that unauthorized parties misused certain OAuth tokens linked to its Salesforce-connected app. According to its leadership, only a small number of customers have so far reported confirmed data impact. However, several independent security teams, including Google's Threat Intelligence Group, reported signs that the intrusion may have reached far more Salesforce instances than initially acknowledged. These differing numbers are not unusual: supply-chain incidents often reveal their full extent only after weeks of log analysis and correlation.

At this time, investigators understand the attack as a case of token abuse, not a failure of Salesforce's underlying platform. OAuth tokens are long-lived keys that let approved applications make API calls on behalf of customers. Once attackers hold them, they can access CRM records through legitimate channels, which makes detection far more challenging. Because this approach bypasses common login checks, Salesforce has made log review and token rotation its immediate priorities.
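To see why such abuse is hard to spot, consider what a token-bearing API call looks like. The snippet below is a minimal, hypothetical sketch, not code from Gainsight or the attackers; the instance URL, token, and query are placeholders, and the endpoint shown is Salesforce's standard REST query API.

```python
import requests

# A stolen OAuth token is used exactly like a legitimate one: it simply
# rides in the Authorization header of an ordinary REST call.
INSTANCE = "https://example.my.salesforce.com"   # placeholder org URL
TOKEN = "<stolen-or-legitimate-bearer-token>"    # placeholder

resp = requests.get(
    f"{INSTANCE}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": "SELECT Id, Name, Email FROM Contact LIMIT 200"},
    timeout=30,
)
# To the platform this is indistinguishable from the connected app's normal
# traffic; only volume, timing, or source-IP anomalies give it away.
print(resp.status_code)
```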

To improve visibility, Gainsight has engaged Mandiant to conduct a forensic investigation into the incident. The company is examining historical logs, token behavior, connector activity, and cross-platform data flows to trace the attacker's movements and determine whether other services were affected. As a further precaution, Gainsight has worked with platforms including HubSpot, Zendesk, and Gong to temporarily revoke related tokens until investigators can confirm they are safe to restore.

The incident resembles other attacks this year in which Salesforce integrations were used to siphon customer records without exploiting any direct vulnerability in Salesforce itself. The repeated pattern illustrates a structural challenge: an organization may secure its main cloud platform rigorously, yet one compromised integration can open a path to wider unauthorized access.

For customers, the best steps are as straightforward as ever: monitor Salesforce authentication and API logs for anomalous access patterns; invalidate or rotate existing OAuth tokens; reduce third-party app permissions to the bare minimum; and, where possible, apply IP restrictions or allowlists to narrow the range of sources from which API calls can be made.
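As a concrete starting point for the log-review step, here is a hedged sketch that pulls recent entries from Salesforce's standard LoginHistory object using the open-source simple-salesforce client. The credentials are placeholders and the flagged IP list stands in for the indicators of compromise Salesforce published; treat this as an illustration, not official guidance from either vendor.

```python
from simple_salesforce import Salesforce  # pip install simple-salesforce

# Placeholder credentials; substitute your org's values or an OAuth flow.
sf = Salesforce(username="admin@example.com",
                password="<password>", security_token="<token>")

# Stand-in for the published indicators of compromise.
FLAGGED_IPS = {"203.0.113.10", "198.51.100.7"}

# LoginHistory is a standard Salesforce object; review the last 30 days.
rows = sf.query_all(
    "SELECT UserId, LoginTime, SourceIp, Application, Status "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:30"
)["records"]

for r in rows:
    # Flag logins from known-bad IPs or with non-success statuses.
    if r["SourceIp"] in FLAGGED_IPS or r["Status"] != "Success":
        print(f"review: user={r['UserId']} ip={r['SourceIp']} "
              f"app={r['Application']} at={r['LoginTime']}")
```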

Both companies say they will provide further updates and support customers affected by the issue. The incident serves as yet another reminder that in modern cloud ecosystems, the security of one vendor often depends on the security practices of everyone in its integration chain.



65% of Top AI Companies Leak Secrets on GitHub

Leading AI companies continue to face significant cybersecurity challenges, particularly in protecting sensitive information, as highlighted in recent research from Wiz. The study focused on the Forbes top 50 AI firms and revealed that 65% of them were leaking verified secrets, such as API keys, tokens, and credentials, in public GitHub repositories.

These leaks often occurred in places not easily accessible to standard security scanners, including deleted forks, developer repositories, and GitHub gists, indicating a deeper and more persistent problem than surface-level exposure. Wiz's approach to uncovering these leaks involved a framework called "Depth, Perimeter, and Coverage." Depth allowed researchers to look beyond just the main repositories, reaching into less visible parts of the codebase. 

Perimeter expanded the search to contributors and organization members, recognizing that individuals could inadvertently upload company-related secrets to their own public spaces. Coverage ensured that new types of secrets, such as those used by AI-specific platforms like Tavily, Langchain, Cohere, and Pinecone, were included in the scan, which many traditional tools overlook.

The findings show that despite being leaders in cutting-edge technology, these AI companies have not adequately addressed basic security hygiene. The researchers disclosed the discovered leaks to the affected organisations, but nearly half of these notifications either failed to reach the intended recipients, were ignored, or received no actionable response, underscoring the lack of dedicated channels for vulnerability disclosure.

Security Tips 

Wiz recommends several essential security measures for all organisations, regardless of size. First, deploying robust secret scanning should be a mandatory practice to proactively identify and remove sensitive information from codebases. Second, companies should prioritise the detection of their own unique secret formats, especially if they are new or specific to their operations. Engaging vendors and the open source community to support the detection of these formats is also advised (a toy version of such a scanner is sketched below).
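To make the first recommendation concrete, here is a toy secret scanner: it walks a checkout and flags lines matching known key formats. The patterns are illustrative only (the AWS and OpenAI-style prefixes are widely documented; the generic rule is a rough heuristic), and production teams should prefer curated tools such as gitleaks or truffleHog.

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship curated, verified rulesets.
PATTERNS = {
    "openai-style key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws access key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic token assignment": re.compile(
        r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_repo(root: str) -> None:
    """Walk a checkout and print lines matching any secret pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue  # skip directories and large binaries
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for name, pat in PATTERNS.items():
                if pat.search(line):
                    print(f"{path}:{lineno}: possible {name}")

scan_repo(".")
```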

Finally, establishing a clear and accessible disclosure protocol is crucial. Having a dedicated channel for reporting vulnerabilities and leaks enables faster remediation and better coordination between researchers and organisations, minimising potential damage from exposure. The research serves as a stark reminder that even the most advanced companies must not overlook fundamental cybersecurity practices to safeguard sensitive data and maintain trust in the rapidly evolving AI landscape.

Ernst & Young Exposes 4TB Database Backup Online, Leaking Company Secrets

Ernst & Young (EY), one of the world’s largest accounting firms, reportedly left a massive 4TB SQL database backup exposed online, containing highly sensitive company secrets and credentials accessible to anyone who knew where to find it. 

The backup, in the form of a .BAK file, contained not only schema and stored procedures but also application secrets, API keys, session tokens, user credentials, cached authentication tokens, and service account passwords. Security researchers from Neo Security discovered this alarming exposure during routine tooling work, verifying that the file was indeed publicly accessible.

The researchers emphasized that an exposed database backup like this is equivalent to releasing the master blueprints and keys to a vault, noting that such exposure could lead to catastrophic consequences, including large-scale breaches and ransomware attacks. Due to legal and ethical concerns, the researchers did not download the backup in full, but they warned that any skilled threat actor could have already accessed the data, potentially leading to severe security fallout.

Upon discovering the issue, Neo Security promptly alerted EY, whose response was praised as professional and prompt; the company did not deflect, show defensiveness, or issue legal threats, but instead acknowledged the risk and began triaging the problem. Despite the quick engagement, EY took a full week to remediate the issue, a significant delay given the urgency and the potential for malicious exploitation in such incidents.

The breach highlights the dangers of misconfigured cloud storage and the need for organizations, especially those handling sensitive data, to rigorously audit and secure their backups and databases. The exposure of such a large database could have resulted in the theft of proprietary information and customer data, and could even have facilitated coordinated cyberattacks on EY and its clients.
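The article does not say where the backup was hosted, but the auditing advice generalises. Purely as an illustration, the sketch below tests whether objects in an AWS S3 bucket can be fetched without credentials, a cheap check an organization can run against its own storage; the bucket and key names are placeholders.

```python
import requests

# Placeholders: substitute your own bucket and the object keys to test.
BUCKET = "example-corp-backups"
KEYS = ["prod_db.bak", "exports/full_backup.bak"]

for key in KEYS:
    url = f"https://{BUCKET}.s3.amazonaws.com/{key}"
    # An unauthenticated HEAD request: 200 means anyone on the internet
    # can fetch the object; 403/404 means anonymous access is refused.
    status = requests.head(url, timeout=10).status_code
    verdict = "PUBLICLY READABLE" if status == 200 else "not public"
    print(f"{url}: HTTP {status} -> {verdict}")
```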

Experts urge companies to assume that any publicly accessible database backup may have already been compromised, as even a brief window of exposure can be enough for malicious actors to exploit the data. The incident underscores the importance of robust security practices, regular audits, and rapid incident response protocols to minimize the risk and impact of data breaches.

This incident serves as a cautionary tale for organizations to take extra precautions in securing all forms of sensitive data, especially those stored in backups, and to act swiftly to remediate publicly exposed databases.

The Strategic Imperatives of Agentic AI Security

In cybersecurity, agentic artificial intelligence is emerging as a transformative force, fundamentally changing the way digital threats are perceived and handled. Unlike conventional AI systems that operate within predefined parameters, agentic AI systems can make autonomous decisions, interacting dynamically with digital tools, complex environments, other AI agents, and even sensitive data sets.

This shift marks a new paradigm in which AI not only supports decision-making but also initiates and executes actions independently in pursuit of its objectives. While this evolution brings significant opportunities for innovation, such as automated threat detection, intelligent incident response, and adaptive defence strategies, it also poses serious challenges.

The same capabilities that make agentic AI powerful for defenders can be exploited by adversaries. If autonomous agents are compromised or misaligned with their goals, they can act at scale, quickly and unpredictably, rendering traditional defence mechanisms inadequate. As organisations increasingly embed agentic AI into their operations, they must adopt a dual security posture.

Enterprises need to harness the strengths of agentic AI to enhance their security frameworks while preparing for the threats it poses. Cybersecurity principles must be strategically rethought around robust oversight, alignment protocols, and adaptive resilience mechanisms, so that the autonomy of AI agents is matched by controls of equal sophistication. In this new era of AI-driven autonomy, securing agentic systems has become more than a technical requirement.

It is a strategic imperative. The agentic AI development lifecycle comprises several interdependent phases that ensure the system is not only intelligent and autonomous but also aligned with organisational goals and operational needs. This structured progression makes agents more effective, reliable, and ethically sound across a wide variety of use cases.

The first critical phase, Problem Definition and Requirement Analysis, lays the foundation for everything that follows. Here, organisations must articulate a clear, strategic understanding of the problem space the AI agent is meant to address.

This means setting clear business objectives, defining the specific tasks the agent must perform, and assessing operational constraints such as infrastructure availability, regulatory obligations, and ethical considerations. A thorough requirements analysis streamlines system design, minimises scope creep, and avoids costly revisions in later stages of deployment.

This phase also helps stakeholders align the agent's technical capabilities with real-world needs so it can deliver measurable results. Next comes Data Collection and Preparation, arguably the most vital phase of the lifecycle: whatever kind of agentic AI is being built, the system's intelligence is directly shaped by the quality and comprehensiveness of the data it is trained on.

In this stage, relevant datasets are gathered from internal and trusted external sources, then meticulously cleaned, indexed, and transformed to ensure consistency and usability. Advanced preprocessing techniques such as augmentation, normalisation, and class balancing are applied to reduce biases and mitigate model failures, further strengthening model robustness.

Creating a high-quality, representative dataset early on is what allows an AI agent to function effectively across varied circumstances and edge cases. Together, these early phases form the backbone of agentic AI development, grounding the system in real business needs and in data that is dependable, ethical, and actionable. Organisations that invest in thorough upfront analysis and meticulous data preparation stand a significantly better chance of deploying agentic AI solutions that are scalable, secure, and aligned with long-term strategic goals.

The risks posed by an agentic AI system go beyond technical failures; they are deeply systemic in nature. Agentic AI is not a passive system that executes rules; it is an active system that makes decisions, takes action, and adapts as it learns. This dynamic autonomy is powerful, but it also introduces complexity and unpredictability that make failures harder to detect until significant damage has been sustained.

Unlike traditional software systems, agentic AI systems operate independently and can evolve their behaviour over time as they grow more complex. OWASP's Top 10 for LLM Applications (2025) highlights how agents can be manipulated into misusing tools or storing deceptive information that undermines users' security. Without rigorous monitoring, this very autonomy can become a source of danger.

In such situations, corrupted data can penetrate an agent's memory, so that future decisions are influenced by falsehoods. Over time these errors compound, leading to cascading hallucinations in which the system repeatedly generates credible but inaccurate outputs that reinforce and validate one another, making the deception increasingly difficult to detect.

Agentic systems are also susceptible to more traditional forms of exploitation, such as privilege escalation, in which an agent impersonates a user or gains access to restricted functions without permission. In extreme scenarios, agents may even override their constraints, intentionally or unintentionally pursuing goals that do not align with those of the user or organisation. Such deceptive behaviour is challenging to counter, ethically as well as operationally. Resource exhaustion is another pressing concern.

Whether through accident or malicious attack, agents can be overloaded with excessive task queues that exhaust memory, computing bandwidth, or third-party API quotas. These problems not only degrade performance but can cause critical system failures, particularly in real-time environments. The situation is worse still when agents are deployed on lightweight or experimental multi-agent control platforms (MCPs) that lack essential features such as logging, user authentication, or third-party validation mechanisms.
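One standard mitigation is to bound the work an agent will accept and meter its calls against external quotas. The sketch below, a minimal single-process Python illustration rather than any particular framework's API, combines a bounded task queue with a token bucket.

```python
import queue
import threading
import time

class TokenBucket:
    """Meters calls against an external API quota (steady rate plus burst)."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, float(burst)
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tasks = queue.Queue(maxsize=100)   # bounded: a flood of tasks is refused, not buffered
quota = TokenBucket(rate_per_sec=5, burst=10)

def submit(task) -> bool:
    """Reject new work outright rather than let a backlog exhaust memory."""
    try:
        tasks.put_nowait(task)
        return True
    except queue.Full:
        return False

def worker() -> None:
    while True:
        task = tasks.get()
        while not quota.allow():   # respect the third-party quota before acting
            time.sleep(0.1)
        print("executing", task)
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
for i in range(20):
    if not submit(f"task-{i}"):
        print("rejected", i)
tasks.join()                       # wait for accepted work to drain
```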

When platforms lack such safeguards, security teams find it increasingly difficult or impossible to track decision paths or identify the root cause of failures, leaving them blind to the system's internal behaviour as well as to external threats. As agentic AI continues to integrate into high-stakes environments, its systemic vulnerabilities must be treated as a core design consideration rather than a peripheral concern.

Ensuring that agents act in a transparent, traceable, and ethical manner is essential, not only for safety but for building the long-term trust that enterprise adoption requires. Several core functions give agentic AI systems their agency: the capacity to make autonomous decisions, behave adaptively, and pursue long-term goals. The essence of agentic intelligence is autonomy, meaning agents operate without constant human oversight.

Agents perceive their environment through data streams or sensors, evaluate contextual factors, and execute actions in keeping with their predefined objectives. Autonomous warehouse robots that adjust their paths in real time without human input, for example, demonstrate both situational awareness and self-regulation. Unlike reactive AI systems, which respond to isolated prompts, agentic systems are designed to pursue complex, sometimes long-term goals without human intervention.

Guided by explicit or implicit instructions and reward systems, these agents can break high-level tasks, such as organising a travel itinerary, into actionable subgoals that are dynamically adjusted as new information becomes available. To formulate step-by-step strategies, agents rely on planner-executor architectures and techniques such as chain-of-thought prompting or ReAct.

To optimise outcomes, these plans may employ graph-based search algorithms or simulate multiple future scenarios. Reasoning further enhances an agent's ability to assess alternatives, weigh tradeoffs, and apply logical inference; large language models often serve as the reasoning engine, supporting task decomposition and multi-step problem-solving.
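As a sketch of the planner-executor loop described above, here is a minimal ReAct-style agent in Python. The llm() stub and the tool names are hypothetical placeholders standing in for a real model call and real integrations; the point is the thought-action-observation cycle, not any specific framework.

```python
# Hypothetical ReAct-style loop: action -> observation, repeated until "Final:".
_calls = {"n": 0}

def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns an action, then a final answer."""
    _calls["n"] += 1
    if _calls["n"] == 1:
        return 'Action: search("flights to Oslo")'
    return "Final: itinerary drafted from search results"

TOOLS = {
    "search": lambda q: f"results for {q!r}",   # placeholder tool implementations
    "book":   lambda item: f"booked {item!r}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        step = llm(transcript)                 # model proposes the next step
        transcript += step + "\n"
        if step.startswith("Final:"):          # model declares it is done
            return step.removeprefix("Final:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action: ").partition("(")
            tool = TOOLS.get(name.strip())
            obs = tool(arg.rstrip(')"').lstrip('"')) if tool else "unknown tool"
            transcript += f"Observation: {obs}\n"  # feed the result back to the model
    return "step budget exhausted"             # hard cap guards against runaway loops

print(run_agent("organise a travel itinerary"))
```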

The final core function, memory, provides continuity. Drawing on previous interactions, results, and context, often through vector databases, agents refine their behaviour over time, learning from experience and avoiding redundant actions. Securing an agentic AI system, meanwhile, demands more than incremental changes to existing security protocols; it requires a complete rethink of operational and governance models. A system capable of autonomous decision-making and adaptive behaviour must be treated as an enterprise entity in its own right.

Any influential digital actor, AI agents included, demands rigorous scrutiny, continuous validation, and enforceable safeguards throughout its lifecycle. A robust security posture begins with controlling non-human identities: strong authentication mechanisms must be implemented alongside behavioural profiling and anomaly detection to identify and neutralise impersonation or spoofing attempts before damage occurs.

Identity cannot stay static in dynamic systems; it must evolve with the agent's behaviour and role in its environment. Just as important is securing retrieval-augmented generation (RAG) systems at the source: organisations need to enforce rigorous access policies over knowledge repositories, examine embedding spaces for adversarial interference, and continually evaluate their similarity-matching methods to prevent unintended data leaks or model manipulation.
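As a sketch of what enforcing access policies over a knowledge repository can look like, the snippet below filters retrieved chunks by the caller's entitlements before they ever reach the model's context. The in-memory store, ACL tags, and scores are illustrative assumptions, not any specific vector database's API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    acl: set[str]          # groups allowed to see this chunk
    score: float           # similarity to the query (precomputed here)

# Illustrative in-memory "knowledge repository" with ACL tags per chunk.
STORE = [
    Chunk("Q3 revenue forecast ...", {"finance"}, 0.92),
    Chunk("Public product FAQ ...", {"everyone"}, 0.88),
    Chunk("M&A due-diligence notes ...", {"legal"}, 0.85),
]

def retrieve(query: str, caller_groups: set[str], k: int = 2) -> list[str]:
    """Enforce the access policy *before* ranking, so restricted text
    can never leak into the prompt context of an unauthorised agent."""
    visible = [c for c in STORE if c.acl & (caller_groups | {"everyone"})]
    visible.sort(key=lambda c: c.score, reverse=True)
    return [c.text for c in visible[:k]]

# An agent acting for a support user sees only what that user may see.
print(retrieve("revenue numbers", caller_groups={"support"}))
```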

Automated red teaming is essential for identifying emerging threats, not just before deployment but continuously: adversarial testing and stress simulations can expose behavioural anomalies, misalignment with intended goals, and configuration weaknesses in real time. Comprehensive governance frameworks are equally imperative if generative and agentic AI are to succeed.

As part of such frameworks, agent behaviour must be codified in enforceable policies, runtime oversight enabled, and detailed, tamper-evident logs maintained for auditing and lifecycle tracking. The shift towards agentic AI is more than a technological evolution; it represents a profound change in how decisions are made, delegated, and monitored. Adoption of these systems is often so rapid that it outpaces the ability of traditional security infrastructures to adapt.

Without meaningful oversight, clearly defined responsibilities, and strict controls, AI agents could exacerbate risk, inadvertently or maliciously, rather than deliver what they promise. Organisations need to ensure that agents operate within well-defined boundaries, under continuous observation, aligned with organisational intent, and held to the same standards as human decision-makers.

Agentic AI promises enormous benefits, but it carries substantial risks. To be truly transformative, these systems must be not just intelligent but trustworthy and transparent, governed by rules as precise and robust as those they help enforce.

AI as a Key Solution for Mitigating API Cybersecurity Threats

Artificial Intelligence (AI) is continuously evolving and fundamentally changing the cybersecurity landscape, enabling organizations to mitigate vulnerabilities more effectively. But while AI has improved the speed and scale at which threats can be detected and addressed, it has also introduced complexities that necessitate a hybrid approach to security management.

An approach is needed that combines traditional security frameworks with coordinated human and machine intervention. One of the biggest challenges AI presents is the expansion of the attack surface of Application Programming Interfaces (APIs). The proliferation of AI-powered systems raises questions about API resilience as threats grow ever more sophisticated, and the integration of AI-driven functionality into APIs has heightened security concerns, creating demand for robust defensive strategies.

The security implications extend beyond APIs to the very foundation of Machine Learning (ML) applications and large language models. Many of these models are trained on highly sensitive datasets, raising concerns about privacy, integrity, and potential exploitation. Improperly handled training data can lead to unauthorized access, data poisoning, and model manipulation, further widening the attack surface.

At the same time, AI is prompting security teams to refine their threat modeling strategies. Its analytical capabilities let organizations enhance prediction, automate risk assessments, and implement smarter security frameworks that adapt to a changing environment, pushing security professionals toward a proactive and adaptive approach to reducing threats.

Using AI effectively while safeguarding digital assets requires an integrated approach that combines traditional security mechanisms with AI-driven solutions, ensuring an effective synergy between automation and human oversight. Enterprises must foster a comprehensive security posture that integrates legacy and emerging technologies to stay resilient against a changing threat landscape. AI is an excellent tool for cybersecurity, but it must be deployed in a strategic, well-organized manner.

Building a robust and adaptive cybersecurity ecosystem requires addressing API vulnerabilities, strengthening training data security, and refining threat modeling practices. APIs are a major part of modern digital applications, enabling seamless data exchange between disparate systems. Their widespread adoption, however, has also made them prime targets for cyber threats, exposing organizations to significant risks such as data breaches, financial losses, and service disruptions.

AI platforms and tools, such as OpenAI, Google's DeepMind, and IBM's Watson, have significantly contributed to advancements in several technological fields over the years. These innovations have revolutionized natural language processing, machine learning, and autonomous systems, leading to a wide range of applications in critical areas such as healthcare, finance, and business. Consequently, organizations worldwide are turning to artificial intelligence to maximize operational efficiency, simplify processes, and unlock new growth opportunities. 

While artificial intelligence is catalyzing progress, it also introduces security risks: cybercriminals can manipulate the very technologies that empower industries in order to orchestrate sophisticated cyber threats. AI is therefore double-edged. AI-driven security systems can proactively identify, predict, and mitigate threats with extraordinary accuracy, yet adversaries can weaponize the same technologies to create highly advanced attacks such as phishing schemes and ransomware.

As AI continues to grow, its role in cybersecurity is becoming more complex and dynamic. Organizations need to protect themselves from AI-enabled attacks by implementing robust frameworks that harness AI's defensive capabilities while mitigating its vulnerabilities. Developing AI technologies ethically and responsibly will be crucial to a secure digital ecosystem that fosters innovation without compromising cybersecurity.

Application Programming Interfaces (APIs) are fundamental components of 21st-century digital ecosystems, enabling seamless interactions across industries such as mobile banking, e-commerce, and enterprise solutions. Their widespread adoption also makes them a prime target for attackers, and successful breaches can bring data compromise, financial losses, and operational disruptions that challenge businesses and consumers alike.

Pratik Shah, F5 Networks' Managing Director for India and SAARC, has highlighted that APIs are an integral part of today's digital landscape. AIM reports that APIs account for nearly 90% of worldwide web traffic and that the number of public APIs has grown 460% over the past decade. This rapid proliferation has exposed APIs to a wide array of cyber risks, including broken authentication, injection attacks, and server-side request forgery. According to Shah, the robustness of Indian API infrastructure significantly influences India's ambitions to become a global leader in the digital industry.

“APIs are the backbone of our digital economy, interconnecting key sectors such as finance, healthcare, e-commerce, and government services,” Shah remarked. He notes that during the first half of 2024, the Indian Computer Emergency Response Team (CERT-In) reported a 62% increase in API-targeted attacks. These incidents go beyond technical breaches; they represent substantial economic risks that threaten data integrity, business continuity, and consumer trust.

Aside from compromising sensitive information, such incidents have undermined business continuity and consumer confidence. APIs will remain at the heart of digital transformation, so robust security measures will be critical to mitigating potential threats and protecting organisational integrity.


Indusface recently published research on API security that underscores the seriousness of API-related threats. The report records 68% more attacks on APIs than on traditional websites, and a 94% quarter-on-quarter increase in Distributed Denial-of-Service (DDoS) attacks on APIs, a rate 1,600% higher than for website-based DDoS attacks.

Bot-driven attacks on APIs also increased by 39%, emphasizing the need for robust security measures to protect these vital digital assets. Artificial Intelligence, meanwhile, is transforming cloud security by enhancing threat detection, automating responses, and providing predictive insights that mitigate cyber risks.

Several cloud providers, including Google Cloud, Microsoft, and Amazon Web Services, employ artificial intelligence-driven solutions for monitoring security events, detecting anomalies, and preventing cyberattacks.

These solutions include Google's Chronicle, Microsoft Defender for Cloud, and Amazon GuardDuty, though challenges remain, including false positives, adversarial AI attacks, high implementation costs, and data-privacy concerns.
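To ground the idea, here is a minimal sketch of AI-based anomaly detection using scikit-learn's IsolationForest on synthetic per-client API-traffic features. It illustrates the technique in general, not how Chronicle, Defender for Cloud, or GuardDuty are actually implemented, and the feature choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic per-client features: [requests/min, error rate, distinct endpoints].
normal = np.column_stack([
    rng.normal(60, 10, 500),     # typical request rate
    rng.normal(0.02, 0.01, 500), # typical error rate
    rng.normal(8, 2, 500),       # typical endpoint spread
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A scraping bot: very high rate, elevated errors, many endpoints.
suspect = np.array([[900.0, 0.15, 70.0]])
print(model.predict(suspect))    # -1 flags the traffic as anomalous
```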

Despite these challenges, advances in self-learning AI models, security automation, and quantum computing are expected to raise AI's profile in the cybersecurity space. Businesses should deploy AI-powered security solutions to safeguard their cloud environments against evolving threats.

Weak Cloud Credentials Behind Most Cyber Attacks: Google Cloud Report

A recent Google Cloud report has found a very troubling trend: nearly half of all cloud-related attacks in late 2024 were caused by weak or missing account credentials. This is seriously endangering businesses and giving attackers easy access to sensitive systems.


What the Report Found

The Threat Horizons Report, which was produced by Google's security experts, looked into cyberattacks on cloud accounts. The study found that the primary method of access was poor credential management, such as weak passwords or lack of multi-factor authentication (MFA). These weak spots comprised nearly 50% of all incidents Google Cloud analyzed.

Misconfigured cloud services were another factor, accounting for more than a third of all attacks. The report also noted a worrying rise in attacks on application programming interfaces (APIs) and user interfaces, which made up around 20% of incidents. Together, these figures point to several areas where cloud security is still found wanting.


How Weak Credentials Cause Big Problems

Weak credentials do not just unlock the door for attackers; they let attackers cause widespread destruction. In April 2024, for instance, over 160 Snowflake accounts were breached due to poor password practices. High-profile companies impacted included AT&T, Advance Auto Parts, and Pure Storage, and the breaches involved massive data leaks.

Attackers are also seeking out accounts with excessive permissions, known as overprivileged service accounts. These make it even easier for hackers to move deeper into a network, harming multiple systems within an organization. Google concluded that more than 60 percent of attacker actions after initial access involve attempts to move laterally within systems.

The report warns that a single stolen password can trigger a chain reaction. Hackers can use it to take control of apps, access critical data, and even bypass security systems like MFA. This allows them to establish trust and carry out more sophisticated attacks, such as tricking employees with fake messages.


How Businesses Can Stay Safe

To prevent such attacks, organizations should focus on proper security practices. Google Cloud suggests using multi-factor authentication, limiting excessive permissions, and fixing misconfigurations in cloud systems. These steps will limit the damage caused by stolen credentials and prevent attackers from digging deeper.
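As one scriptable example of "limiting excessive permissions", the sketch below lists service accounts that hold broad primitive roles on a Google Cloud project, via the Cloud Resource Manager API. The project ID is a placeholder and the role list is a starting assumption; treat this as an illustrative audit, not Google's recommended tooling.

```python
from googleapiclient import discovery  # pip install google-api-python-client
import google.auth                      # uses Application Default Credentials

PROJECT_ID = "my-project-id"                   # placeholder
BROAD_ROLES = {"roles/owner", "roles/editor"}  # primitive roles worth flagging

credentials, _ = google.auth.default()
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)

# Fetch the project's IAM policy and flag overly broad bindings.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

for binding in policy.get("bindings", []):
    if binding["role"] in BROAD_ROLES:
        for member in binding["members"]:
            # Service accounts with owner/editor are prime lateral-movement targets.
            if member.startswith("serviceAccount:"):
                print(f"overprivileged: {member} has {binding['role']}")
```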

This report is a reminder that weak passwords and poor security habits are not just small mistakes; they can lead to serious consequences for businesses everywhere.


Fake Invoices Spread Through DocuSign’s API in New Scam

Cyber thieves are abusing DocuSign's Envelopes API to send convincing fake invoices that impersonate well-known brands such as Norton and PayPal. Because these messages come from a verified domain, DocuSign's own, they slip past traditional email security tools and arrive undetected.

How It Works

DocuSign is an electronic signing service used for sending, signing, and managing documents digitally. The Envelopes API within its eSignature platform lets document requests be sent out, signed, and tracked entirely automatically. Attackers, however, discovered they could abuse this API by registering legitimate, paid DocuSign accounts, which gives them access to the platform's templates and branding features. With these, they can create invoices that are almost indistinguishable from official ones sent by established companies.

The scammers use the "Envelopes: create" API call to send enormous volumes of fake invoices to long recipient lists. The charges listed are usually realistic, which makes the bills appear more legitimate. Attackers prompt the user to e-sign the document, then use the signed copy to request payment; in some cases they forward the "signed" document directly to a finance department to complete the scam.
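To show how little the malicious traffic differs from ordinary use, here is a hedged sketch of a standard "Envelopes: create" call against DocuSign's documented eSignature REST endpoint. The account ID, token, recipient, and document are placeholders; the attackers' actual tooling is not public.

```python
import base64
from pathlib import Path
import requests

BASE = "https://demo.docusign.net/restapi/v2.1"  # demo environment base URL
ACCOUNT_ID = "<account-guid>"                    # placeholder
TOKEN = "<oauth-access-token>"                   # placeholder

# Any PDF becomes a signable "envelope"; templates and branding make it look official.
doc_b64 = base64.b64encode(Path("invoice.pdf").read_bytes()).decode()

envelope = {
    "emailSubject": "Invoice for services rendered",
    "documents": [{"documentBase64": doc_b64, "name": "Invoice",
                   "fileExtension": "pdf", "documentId": "1"}],
    "recipients": {"signers": [{"email": "victim@example.com",
                                "name": "A. Victim", "recipientId": "1"}]},
    "status": "sent",   # "sent" dispatches the signing email immediately
}

resp = requests.post(f"{BASE}/accounts/{ACCOUNT_ID}/envelopes",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=envelope, timeout=30)
print(resp.status_code, resp.json().get("envelopeId"))
```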


Mass Abuse of the DocuSign Platform

According to the security research firm Wallarm, this type of abuse has been ongoing for some time. The company noted that DocuSign customers have exposed this mass exploitation on online forums, posting complaints about constant spam and phishing emails from the DocuSign domain. "I'm suddenly receiving multiple phishing emails per week from docusign.net, and there doesn't seem to be an obvious way to report it," complained one user.

These complaints suggest abuse at a very large scale, which means the attackers are almost certainly spreading their fake invoices with automation tools rather than by hand.

Wallarm has reported the abuse to DocuSign, but it is not clear what steps, if any, the company is taking to resolve the issue.


Challenges in Safeguarding APIs Against Abuse

The widespread abuse of the DocuSign Envelopes API shows how open access can compromise the security of API endpoints. Although the service is intended for verified businesses, attackers simply buy valid accounts and turn the API's features to malicious ends. Nor is DocuSign alone: several other companies have seen similar abuse of their APIs. Hackers have used APIs to validate millions of phone numbers associated with Authy accounts, scrape information on millions of Dell customers, match millions of Trello accounts with email addresses, and more.

The DocuSign case shows how platform abuse justifies stronger protections for digital services that expose sensitive tools. With API-based attacks now so widespread, firms like DocuSign may be forced to become more watchful and to tighten controls on misuse of their products, particularly for paid accounts in which users have full access to the available tools.