Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Threat Detection. Show all posts

Rising Prompt Injection Threats and How Users Can Stay Secure

 


The generative AI revolution is reshaping the foundations of modern work in an age when organizations are increasingly relying on large language models like ChatGPT and Claude to speed up research, synthesize complex information, and interpret extensive data sets more rapidly with unprecedented ease, which is accelerating research, synthesizing complex information, and analyzing extensive data sets. 

However, this growing dependency on text-driven intelligence is associated with an escalating and silent risk. The threat of prompt injection is increasing as these systems become increasingly embedded in enterprise workflows, posing a new challenge to cybersecurity teams. Malicious actors have the ability to manipulate the exact instructions that lead an LLM to reveal confidential information, alter internal information, or corrupt proprietary systems in such ways that they are extremely difficult to detect and even more difficult to reverse. 

Malicious actors can manipulate the very instructions that guide an LLM. Any organisation that deploys its own artificial intelligence infrastructure or integrates sensitive data into third-party models is aware that safeguarding against such attacks has become an urgent concern. Organisations must remain vigilant and know how to exploit such vulnerabilities. 

It is becoming increasingly evident that as organisations are implementing AI-driven workflows, a new class of technology—agent AI—is beginning to redefine how digital systems work for the better. These more advanced models, as opposed to traditional models that are merely reactive to prompts, are capable of collecting information, reasoning through tasks, and serving as real-time assistants that can be incorporated into everything from customer support channels to search engine solutions. 

There has been a shift into the browser itself, where AI-enhanced interfaces are rapidly becoming a feature rather than a novelty. However, along with that development, corresponding risks have also increased. 

It is important to keep in mind that, regardless of what a browser is developed by, the AI components that are embedded into it — whether search engines, integrated chatbots, or automated query systems — remain vulnerable to the inherent flaws of the information they rely on. This is where prompt injection attacks emerge as a particularly troubling threat. Attackers can manipulate an LLM so that it performs unintended or harmful actions as a result of exploiting inaccuracies, gaps, or unguarded instructions within its training or operational data. 

Despite the sophisticated capabilities of agentic artificial intelligence, these attacks reveal an important truth: although it brings users and enterprises powerful capabilities, it also exposes them to vulnerabilities that traditional browsing tools have not been exposed to. As a matter of fact, prompt injection is often far more straightforward than many organisations imagine, as well as far more harmful. 

There are several examples of how an AI system can be manipulated to reveal sensitive information without even recognising the fact that the document is tainted, such as a PDF embedded with hidden instructions, by an attacker. It has also been demonstrated that websites seeded with invisible or obfuscated text can affect how an AI agent interprets queries during information retrieval, steering the model in dangerous or unintended directions. 

It is possible to manipulate public-facing chatbots, which are intended to improve customer engagement, in order to produce inappropriate, harmful, or policy-violating responses through carefully crafted prompts. These examples illustrate that there are numerous risks associated with inadvertent data leaks, reputational repercussions, as well as regulatory violations as enterprises begin to use AI-assisted decision-making and workflow automation more frequently. 

In order to combat this threat, LLMs need to be treated with the same level of rigour that is usually reserved for high-value software systems. The use of adversarial testing and red-team methods has gained popularity among security teams as a way of determining whether a model can be misled by hidden or incorrect inputs. 

There has been a growing focus on strengthening the structure of prompts, ensuring there is a clear boundary between user-driven content and system instructions, which has become a critical defence against fraud, and input validation measures have been established to filter out suspicious patterns before they reach the model's operational layer. Monitoring outputs continuously is equally vital, which allows organisations to flag anomalies and enforce safeguards that prevent inappropriate or unsafe behaviour. 

The model needs to be restricted from accessing unvetted external data, context management rules must be redesigned, and robust activity logs must be maintained in order to reduce the available attack surface while ensuring a more reliable oversight system. However, despite taking these precautions to protect the system, the depths of the threat landscape often require expert human judgment to assess. 

Manual penetration testing has emerged as a decisive tool, providing insight far beyond the capabilities of automated scanners that are capable of detecting malicious code. 

Using skilled testers, it is possible to reproduce the thought processes and creativity of real attackers. This involves experimenting with nuanced prompt manipulations, embedded instruction chains, and context-poisoning techniques that automatic tools fail to detect. Their assessments also reveal whether security controls actually perform as intended. They examine whether sanitisation filters malicious content properly, whether context restrictions prevent impersonation, and whether output filters intervene when the model produces risky content. 

A human-led testing process provides organisations with a stronger assurance that their AI deployments will withstand the increasingly sophisticated attempts at compromising them through the validation of both vulnerabilities and the effectiveness of subsequent fixes. In order for user' organisation to become resilient against indirect prompt injection, it requires much more than isolated technical fixes. It calls for a coordinated, multilayered defence that encompasses both the policy environment, the infrastructure, and the day-to-day operational discipline of users' organisations. 

A holistic approach to security is increasingly being adopted by security teams to reduce the attack surface as well as catch suspicious behaviour early and quickly. As part of this effort, dedicated detection systems are deployed, which will identify and block both subtle, indirect manipulations that might affect an artificial intelligence model's behaviour before they can occur. Input validation and sanitisation protocols are a means of strengthening these controls. 

They prevent hidden instructions from slipping into an LLM's context by screening incoming data, regardless of whether it is sourced from users, integrated tools, or external web sources. In addition to establishing firm content handling policies, it is also crucial to establish a policy defining the types of information that an artificial intelligence system can process, as well as the types of sources that can be regarded as trustworthy. 

A majority of organisations today use allowlisting frameworks as part of their security measures, and are closely monitoring unverified or third-party content in order to minimise exposure to contaminated data. Enterprises are adopting strict privilege-separation measures at the architectural level so as to ensure that artificial intelligence systems have minimal access to sensitive information as well as being unable to perform high-risk actions without explicit authorisations. 

In the event that an injection attempt is successful, this controlled environment helps contain the damage. It adds another level of complexity to the situation when shadow AI begins to emerge—employees adopting unapproved tools without supervision. Consequently, organisations are turning to monitoring and governance platforms to provide insight into how and where AI tools are being implemented across the workforce. These platforms enable access controls to be enforced and unmanaged systems to be prevented from becoming weak entry points for attackers. 

As an integral component of technical and procedural safeguards, user education is still an essential component of frontline defences. 

Training programs that teach employees how to recognise and distinguish sanctioned tools from unapproved ones will help strengthen frontline defences in the future. As a whole, these measures form a comprehensive strategy to counter the evolving threat of prompt injection in enterprise environments by aligning technology, policy, and awareness. 

It is becoming increasingly important for enterprises to secure these systems as the adoption of generative AI and agentic AI accelerates. As a result of this development, companies are at a pivotal point where proactive investment in artificial intelligence security is not a luxury but an essential part of preserving trust, continuity, and competitiveness. 

Aside from the existing safeguards that organisations have already put in place, organisations can strengthen their posture even further by incorporating AI risk assessments into broader cybersecurity strategies, conducting continuous model evaluations, as well as collaborating with external experts. 

An organisation that encourages a culture of transparency can reduce the probability of unnoticed manipulation to a substantial degree if anomalies are reported early and employees understand both the power and pitfalls of Artificial Intelligence. It is essential to embrace innovation without losing sight of caution in order to build AI systems that are not only intelligent, but also resilient, accountable, and closely aligned with human oversight. 

By harnessing the transformative potential of modern AI and making security a priority, businesses can ensure that the next chapter of digital transformation is not just driven by security, but driven by it as a core value, not an afterthought.

Exabeam Extends Proven Insider Threat Detection to AI Agents with Google Cloud

 



BROOMFIELD, Colo. & FOSTER CITY, Calif. – September 9, 2025 – At Google Cloud’s pioneering Security Innovation Forum, Exabeam, a global leader in intelligence and automation that powers security operations, today announced the integration of Google Agentspace and Google Cloud’s Model Armor telemetry into the New-Scale Security Operations Platform. This integration gives security teams the ability to monitor, detect, and respond to threats from AI agents acting as digital insiders. This visibility gives organizations insight into the behavior of autonomous agents to reveal intent, spot drift, and quickly identify compromise.

Recent findings in the “From Human to Hybrid: How AI and the Analytics Gap are Fueling Insider Risk” study from Exabeam reveal that a vast majority (93%) of organizations worldwide have either experienced or anticipate a rise in insider threats driven by AI, and 64% rank insiders as a higher concern than external threat actors. As AI agents perform tasks on behalf of users, access sensitive data, and make independent decisions, they introduce a new class of insider risk: digital actors operating beyond the scope of traditional monitoring. Just as insider threats have traditionally been classified as malicious, negligent, and compromised, AI agents now bring their own risks: malfunctioning, misaligned, or outright subverted.

SIEM and XDR solutions that are unable to baseline and learn normal behavior lack the intelligence necessary to identify when agents go rogue. As a pioneer in machine learning and behavioral analytics, Exabeam addresses this critical gap by extending its proven capabilities to monitor both human and AI agent activity. By integrating telemetry from Google Agentspace and Google Cloud’s Model Armor into the New-Scale Platform, Exabeam is expanding the boundaries of behavioral analytics and setting a new standard for what modern security platforms must deliver.

“This is a natural evolution of our leadership in insider threat detection and behavioral analytics,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “Exabeam solutions are inherently designed to deliver behavioral analytics at scale. Security operations teams don’t need another tool — they need deeper insight into both human and AI agent behavior, delivered through a platform they already trust. We’re giving security teams the clarity, context, and control they need to secure the new class of insider threats.”

The company’s latest innovation, Exabeam Nova, is central to this, serving as the intelligence layer that enables security teams to interpret and act on agent behavior with confidence. Exabeam Nova delivers explainable, prioritized threat insights by analyzing the intent and execution patterns of AI agents in real time. This capability allows analysts to move beyond surface-level alerts and understand the context behind agent actions — whether they represent legitimate automation or potential misuse. By operationalizing telemetry from Google Agentspace and Google Cloud’s Model Armor in the New-Scale Platform, Exabeam Nova equips security teams to defend against the next generation of insider threats with clarity and precision.

“AI agents are quickly changing how business gets done, and that means security must evolve at the same rate,” said Chris O’Malley, CEO at Exabeam. “This is a pivotal moment for the cybersecurity industry. By extending our behavioral analytics to AI agents, Exabeam is once again leading the way in insider threat detection. We’re giving security teams the visibility and control they need to protect the integrity of their operations in an AI-driven world.”

“As businesses integrate AI into their core operations, they face a new set of security challenges,” said Vineet Bhan, Director of Security and Identity Partnerships at Google Cloud. “Our partnership with Exabeam is important to addressing this, giving customers the advanced tools needed to protect their data, maintain control, and innovate confidently in the era of AI.”

By unifying visibility across both human and AI-driven activity, Exabeam empowers security teams to detect, assess, and respond to insider threats in all their forms. This advancement sets a new benchmark for enterprise security, ensuring organizations can confidently embrace AI while maintaining control, integrity, and trust.

Researchers Link Surge in Malicious Scanning to New Vulnerability Disclosures Weeks Ahead

 

A new study suggests that in nearly 80% of cases, unusual spikes in malicious online activity — such as network reconnaissance, targeted scanning, and brute-force attacks on edge networking devices — occur within six weeks before the public disclosure of new security vulnerabilities (CVEs).

The finding comes from threat intelligence company GreyNoise, which says these incidents are not random, but instead follow consistent and statistically significant patterns.

GreyNoise analyzed data from its Global Observation Grid (GOG) dating back to September 2024, applying objective statistical measures to filter out noise, ambiguity, and low-quality entries. This process identified 216 significant spike events linked to eight enterprise edge vendors.

"Across all 216 spike events we studied, 50 percent were followed by a new CVE within three weeks, and 80 percent within six weeks," explain the researchers. The correlation was especially strong for products from Ivanti, SonicWall, Palo Alto Networks, and Fortinet, and weaker for MikroTik, Citrix, and Cisco. According to GreyNoise, state-sponsored actors have consistently targeted such systems for initial access and persistence, often probing for older, already-documented flaws.

Researchers believe this scanning activity either aids in uncovering new vulnerabilities or in identifying exposed endpoints that could later be exploited with novel attacks.

Traditionally, defenders act after a CVE is published. However, GreyNoise’s findings indicate that unusual attacker behavior can serve as an early warning system — giving security teams a valuable window to strengthen defenses before a vulnerability becomes public knowledge.

These pre-disclosure spikes allow defenders to bolster monitoring, tighten security controls, and prepare for possible exploits, even if no patch is yet available or the targeted component remains unknown. GreyNoise recommends closely monitoring scanning activity and swiftly blocking source IPs to prevent reconnaissance from progressing to active attacks.

The company also stresses that scans targeting older vulnerabilities shouldn’t be dismissed as harmless, since attackers often use them to catalog internet-facing systems that might be vulnerable to other exploits in the future.

In a related move, Google’s Project Zero announced it will now notify the public within one week of discovering a new vulnerability. The disclosure will include the affected vendor or product, the discovery date, and the standard 90-day patch deadline. No technical details, proof-of-concept code, or exploit information will be released in this early notice, ensuring attackers cannot leverage the information while helping administrators reduce the “patch gap.”

Cybersecurity Threats Are Evolving: Seven Key OT Security Challenges

 

Cyberattacks are advancing rapidly, threatening businesses with QR code scams, deepfake fraud, malware, and evolving ransomware. However, strengthening cybersecurity measures can mitigate risks. Addressing these seven key OT security challenges is essential.

Insurance broker Howden reports that U.K. businesses lost $55 billion to cyberattacks in five years. Basic security measures could save $4.4 million over a decade, delivering a 25% ROI.

Experts at IDS-INDATA warn that outdated OT systems are prime hacker entry points, with 60% of breaches stemming from unpatched systems. Research across industries identifies seven major OT security challenges.

Seven Critical OT Security Challenges

1. Ransomware & AI-Driven Attacks
Ransomware-as-a-Service and AI-powered malware are escalating threats. “The speed at which attack methods evolve makes waiting to update your defences risky,” says Ryan Cooke, CISO at IDS-INDATA. Regular updates and advanced threat detection systems are vital.

2. Outdated Systems & Patch Gaps
Many industrial networks rely on legacy systems. “We know OT is a different environment from IT,” Cooke explains. Where patches aren’t feasible, alternative mitigation is necessary. Regular audits help address vulnerabilities.

3. Lack of OT Device Visibility
Limited visibility makes networks vulnerable. “Without visibility over your connected OT devices, it’s impossible to secure them,” says Cooke. Asset discovery tools help monitor unauthorized access.

4. Growing IoT Complexity
IoT expansion increases security risks. “As more IoT and smart devices are integrated into industrial networks, the complexity of securing them grows exponentially,” Cooke warns. Prioritizing high-risk devices is essential.

5. Financial & Operational Risks
Breaches can cause financial losses, production shutdowns, and life-threatening risks. “A breach in OT environments can cause financial loss, shut down entire production lines, or, in extreme cases, endanger lives,” Cooke states. A strong incident response plan is crucial.

6. Compliance with Evolving Regulations
Non-compliance with OT security regulations leads to financial penalties. Regular audits ensure adherence and minimize risks.

7. Human Error & Awareness Gaps
Misconfigured security settings remain a major vulnerability. “Investing in cybersecurity awareness training for your OT teams is critical,” Cooke advises. Security training and monitoring help prevent insider threats.

“Proactively addressing these points will help significantly reduce the risk of compromise, protect critical infrastructure, ensure compliance, and safeguard against potentially severe disruptions,” Cooke concluded. 

Moreover, cyberattacks will persist regardless, but proactively addressing these challenges significantly improves the chances of defending against them.

Mamba 2FA Emerges as a New Threat in Phishing Landscape

 

In the ever-changing landscape of phishing attacks, a new threat has emerged: Mamba 2FA. Discovered in late May 2024 by the Threat Detection & Research (TDR) team at Sekoia, this adversary-in-the-middle (AiTM) phishing kit specifically targets multi-factor authentication (MFA) systems. Mamba 2FA has rapidly gained popularity in the phishing-as-a-service (PhaaS) market, facilitating attackers in circumventing non-phishing-resistant MFA methods such as one-time passwords and app notifications.

Initially detected during a phishing campaign that imitated Microsoft 365 login pages, Mamba 2FA functions by relaying MFA credentials through phishing sites, utilizing the Socket.IO JavaScript library to communicate with a backend server. According to Sekoia's report, “At first, these characteristics appeared similar to the Tycoon 2FA phishing-as-a-service platform, but a closer examination revealed that the campaign utilized a previously unknown AiTM phishing kit tracked by Sekoia as Mamba 2FA.” 

The infrastructure of Mamba 2FA has been observed targeting Entra ID, third-party single sign-on providers, and consumer Microsoft accounts, with stolen credentials transmitted directly to attackers via Telegram for near-instant access to compromised accounts.

A notable feature of Mamba 2FA is its capacity to adapt to its targets dynamically. For instance, in cases involving enterprise accounts, the phishing page can mirror an organization’s specific branding, including logos and background images, enhancing the believability of the attack. The report noted, “For enterprise accounts, it dynamically reflects the organization’s custom login page branding.”

Mamba 2FA goes beyond simple MFA interception, handling various MFA methods and updating the phishing page based on user interactions. This flexibility makes it an appealing tool for cybercriminals aiming to exploit even the most advanced MFA implementations.

Available on Telegram for $250 per month, Mamba 2FA is accessible to a broad range of attackers. Users can generate phishing links and HTML attachments on demand, with the infrastructure shared among multiple users. Since its active promotion began in March 2024, the kit's ongoing development highlights a persistent threat in the cybersecurity landscape.

Research from Sekoia underscores the kit’s rapid evolution: “The phishing kit and its associated infrastructure have undergone several significant updates.” With its relay servers hosted on commercial proxy services, Mamba 2FA effectively conceals its true infrastructure, thereby minimizing the likelihood of detection.

Fostering Cybersecurity Culture: From Awareness to Action

 

The recent film "The Beekeeper" opens with a portrayal of a cyberattack targeting an unsuspecting victim, highlighting the modern challenges posed by technology-driven crimes. The protagonist, Adam Clay, portrayed by Jason Statham, embarks on a mission to track down the perpetrators and thwart their ability to exploit others through cybercrimes.

While security teams may aspire to emulate Clay's proactive approach, physical prowess and combat skills are not within their realm. Instead, prioritizing awareness becomes paramount. Educating the workforce proves to be a formidable task but stands as the most effective defense against individual-targeted threats. New training methodologies integrate traditional techniques, emphasizing adaptability over repetition.

In cybersecurity, the technology operates predictably, unlike humans. Recognizing this distinction underscores the necessity for personalized training during onboarding processes. Interactive training acknowledges the complexity of human behavior, emphasizing adaptability to address evolving threats and individual learning preferences. Unlike automated methods, personalized approaches can swiftly adjust to cater to unique challenges and learner needs, fostering a deeper understanding of security practices.

Organizations must evaluate their readiness to combat AI-based threats, considering that human error contributes to the majority of data breaches. Prioritizing education and resource allocation towards cultivating an informed workforce emerges as a critical strategy. Utilizing security champions and fostering collaboration among teams are advocated over solely relying on automation.

Establishing a robust cybersecurity culture involves encouraging employees to share their personal experiences with security incidents openly. Storytelling proves to be a powerful tool in imparting valuable security lessons, promoting a sense of community, and normalizing discussions around cybersecurity.

Testing and monitoring employee responses are crucial aspects of assessing the effectiveness of security programs. Conducting simulated phishing or smishing attacks allows organizations to gauge employee awareness and readiness to detect and report potential threats. Active engagement and communication among staff members indicate the success of the security program in fostering a proactive security culture.

Moreover, while we may not engage in the direct confrontation depicted in "The Beekeeper," building a resilient security culture through awareness remains our primary defense against cybercrime. Encouraging employee participation, personalized training, and proactive testing are pivotal in equipping individuals to identify and mitigate potential threats effectively. The benefits of these strategies extend beyond the workplace, empowering individuals to navigate the digital landscape safely in both personal and professional spheres, and contributing to a safer online environment for all.

End-User Risks: Enterprises on Edge Amid Growing Concerns of the Next Major Breach

 

The shift to remote work has been transformative for enterprises, bringing newfound flexibility but also a myriad of security challenges. Among the rising concerns, a prominent fear looms large - the potential for end-users to inadvertently become the cause of the next major breach. 

As organizations grapple with this unsettling prospect, the need for a robust security strategy that addresses both technological and human factors becomes increasingly imperative. Enterprises have long recognized that human error can be a significant factor in cybersecurity incidents. However, the remote work surge has amplified these concerns, with many organizations now expressing heightened apprehension about the potential for end-users to inadvertently compromise security. 

A recent report highlights that this fear is not unfounded, as enterprises increasingly worry that employees may become the weak link in their cybersecurity defenses. The complexity of the remote work landscape adds a layer of difficulty to security efforts. Employees accessing sensitive company data from various locations and devices create a broader attack surface, making it challenging for IT teams to maintain the same level of control and visibility they had within the confines of the corporate network. 

This expanded attack surface has become a breeding ground for cyber threats, and organizations are acutely aware that a single unintentional action by an end-user could lead to a major breach. Phishing attacks, in particular, have become a prevalent concern. Cybercriminals have adeptly adapted their tactics to exploit the uncertainties surrounding the pandemic, capitalizing on the increased reliance on digital communication channels. End-users, potentially fatigued by the constant influx of emails and messages, may unwittingly click on malicious links or download infected attachments, providing adversaries with a foothold into the organization's systems. 

While end-users can be the first line of defense, their actions, if not adequately guided and secured, can also pose a significant risk. Enterprises are grappling with the need to strike a delicate balance between enabling a seamless remote work experience and implementing stringent security measures that mitigate potential threats arising from end-user behavior. Education and awareness emerge as critical components of the solution. Organizations must invest in comprehensive training programs that equip employees with the knowledge and skills to identify and thwart potential security threats. 

Regularly updated security awareness training can empower end-users to recognize phishing attempts, practice secure online behavior, and promptly report any suspicious activity. Moreover, enterprises need to implement advanced cybersecurity technologies that provide an additional layer of protection. AI-driven threat detection, endpoint protection, and multi-factor authentication are crucial elements of a modern cybersecurity strategy. These technologies not only bolster the organization's defenses but also alleviate some of the burdens placed on end-users to be the sole gatekeepers of security. 

Collaboration between IT teams and end-users is paramount. Establishing open communication channels encourages employees to report security incidents promptly, enabling swift response and mitigation. Additionally, organizations should foster a culture of cybersecurity responsibility, emphasizing that every employee plays a crucial role in maintaining a secure digital environment. As the remote work landscape continues to evolve, enterprises must adapt their cybersecurity strategies to address the shifting threat landscape. 

The concerns about end-users being the potential cause of the next major breach underscore the need for a holistic approach that combines technological advancements with ongoing education and collaboration. By fortifying the human element of cybersecurity, organizations can navigate the complexities of remote work with confidence, knowing that their employees are not unwittingly paving the way for the next significant security incident.

The Essential Role of a Cybersecurity Playbook for Businesses

 

In the realm of sports, playbooks serve as strategic roadmaps. A similar concept applies to cybersecurity, where an updated security playbook, also known as an incident response plan, equips IT teams with a targeted strategy to mitigate risks in the event of an attack.

However, a significant number of companies lack a comprehensive security playbook. Instead, they resort to ad hoc responses that offer short-term relief but fail to address the underlying issues. Surprisingly, 36 percent of midsized companies don't have a formal incident response plan, and while most back up their data, 58 percent don't perform daily backup testing.

This article delves into the crucial elements that companies should incorporate into their cybersecurity playbook, emphasizes the importance of regular updates, and underscores the necessity of having a playbook in place prior to a security incident.

Inclusion Criteria for a Cybersecurity Playbook

Recent data reveals that over 72 percent of global firms have encountered ransomware attacks in the past year. These attacks often stem from spam emails and malicious links that compromise staff accounts. Consequently, it is imperative for companies to be proactive rather than reactive. A well-structured security playbook should encompass:

1. Assignment of Responsibilities: Clearly defining which team members are tasked with specific duties, such as identifying attack vectors, pinpointing compromise points, and isolating critical systems.

2. Communication Protocol: Establishing a streamlined communication chain for notifying the right individuals promptly when an attack occurs. This chain should be regularly updated.

3. Contingency Plans: Anticipating scenarios where key personnel may be unavailable due to illness, vacation, or departure from the company. Playbooks should incorporate backup plans for such situations.

4. Incident Handling Procedures: Detailing the process for addressing specific incidents like stolen credentials, ransomware attacks, or compromised endpoints. This encompasses detection, identification, and remediation steps.

Maintaining the Currency of Your Cybersecurity Playbook

Just as threat actors evolve their tactics, incident response plans must also adapt. For instance, cyber attackers recently exploited a fake Windows update to compromise business and government devices. Security playbooks should be regularly reviewed quarterly and updated annually to ensure they address contemporary threats effectively. Conducting simulated attacks to assess the playbook's efficacy is also advisable.

Furthermore, playbooks serve a dual purpose – not only for incident response but also as a requirement for cybersecurity insurance. Companies should update their response plans when integrating new technologies, such as deploying public cloud services, which introduce new connections and potential attack surfaces.

The Significance of Crafting a Security Playbook

While businesses can create their own security playbooks, this can be a time-consuming endeavor, particularly for smaller companies with limited IT resources or large enterprises operating internationally.

CDW offers incident response services that assist companies in tailoring custom playbooks to their specific needs. Access to CDW statement-of-work services is provided at no cost, outlining the defensive actions CDW can take to support a company in the event of an incident, along with associated fees.

For a comprehensive approach, organizations can opt for paid services, which encompass an incident response program and playbook development, readiness assessments, and tabletop exercises.

In the face of corporate network breaches, swift and well-prepared action is paramount. An in-depth security playbook ensures readiness and equips companies to navigate the challenges that arise.

CrowdSrike: Cybercriminals Are Choosing Data Extortion Over Ransomware Attacks


CrowdStrike’s threat intelligence recently reported that cybercriminals have been learning how data extortion attacks are more profitable than ransomware attacks, leading to a drastic shift in the behavior of cyber activities throughout 2022. 

The cybersecurity vendor's "2023 Global Threat Report," which summarizes CrowdStrike's research on cybercrime (or "e-Crime") from the previous year, was released this week. The report's major sections address ongoing geopolitical disputes, cloud-related attacks, and extortion attacks without the use of software. 

One of the major findings from the CrowdStrike research is that the number of malicious actors who conducted data theft and extortion attacks without the use of ransomware increased by 20% in 2022 compared to the previous year. Data extortion is the practice of obtaining confidential information from target companies and then threatening to post the information online if the victim does not provide the ransom demanded by the attacker. 

Data extortion has frequently been a part of ransomware operations, with the fear of data exposure intended to provide additional incentive for the victim to pay the demanded ransom. However, as per the CrowdStrike findings, more attackers are now inclining toward data extortion, while abandoning the ransomware element altogether. 

Adam Meyers, head of intelligence at CrowdStrike says that “We’re seeing more and more threat actors moving away from ransomware[…]Ransomware is noisy. It attracts attention. It’s detectable. Encryption is complex.” 

According to Meyers, the rise in extortion addresses the adaptability of cyber adversaries. He further adds that while ransom payments were down slightly in 2022, both extortion and ransomware-as-a-service (RaaS) have witnessed a significant boost. 

CrowdStrike observed and noted the overall waning interest in malware. The firm reported that in 2022, up from 62% in 2021, malware-free activity accounted for 71% of its threat detections. 

"This was partly related to adversaries' prolific abuse of valid credentials to facilitate access and persistence in victim environments[…]Another contributing factor was the rate at which new vulnerabilities were disclosed and the speed with which adversaries were able to operationalize exploits," the report said. 

While also noting the improved resilience of the RaaS network, CrowdStrike stated that affiliated hackers will continue to be a major concern as they move from one network to another despite the move away from conventional ransomware deployment.  

Threat Actors Targeting Vaccine Manufacturing Facility with Tardigrade Malware

 

Biomanufacturing facilities in the US are being actively targeted by an anonymous hacking group leveraging a new custom malware called ‘Tardigrade’. 

In a new threat advisory, the Bioeconomy Information Sharing and Analysis Center (BIO-ISAC) claimed this week that the first attack was launched using this new malware in spring 2021, followed by the second assault in October.

 New malware strain

According to BIO-ISAC, Tardigrade possesses advanced features and is supposedly the work of an advanced threat detection group or a nation-state intelligence service. The malware is primarily used for espionage though it can also cause other issues including network outages. The recent assaults are also believed to be linked to Covid-19 research as the pandemic has shown just how crucial biomanufacturing research is when creating vaccines and other drugs. 

Tardigrade’s functionality includes a Trojan, keylogger, data theft, and also establishes a backdoor into targeted systems. There is some debate regarding the origins of the code used in Tardigrade as BIO-ISAC believes the malware is based on Smoke Loader, a Windows-based backdoor operated by a hacking group called Smoky Spider. However, security researchers that spoke with Bleeping Computer believe that it is a form of the Cobalt Strike HTTP. 

“The biomanufacturing industry along with other verticals are so far behind in cybersecurity, making them a prime target for bad actors. Cyberattacks mostly happen to those that provide easy access or least path of resistance,” George Gerchow, chief security officer of machine data analytics company Sumo Logic Inc., told SiliconANGLE. 

“This is a blatant example of how attackers are focusing on human health during a time of high anxiety, and bioscience is an easy target. The industry is going to have to move quickly to put proper cyber security controls in place. It is going to be a huge mountain for them to climb as some of the companies in the industry have antiquated technology, lacked the proper skill sets, and relied too much on legacy security tools,” Gerchow added. 

The BIO-ISAC report recommends the following steps for biomanufacturing sites that will enhance the security and response postures (i) Scan your biomanufacturing network segmentation, (ii)  Collaborate with biologists and automation experts to design a full-proof analysis for your firm, (iii) Employ antivirus with behavioral analysis capabilities, (iv) Participate in phishing detection training (v) Stay vigilant.

Google: Russian APT Targeting Journalists and Politicians

 

On October 7, 14,000 Google customers were informed that they were potential targets of Russian government-backed threat actors. The next day, the internet giant released cybersecurity upgrades, focusing on high-profile users' email accounts, such as politicians and journalists. 

APT28, also known as Fancy Bear, a Russian-linked threat organisation, has allegedly increased its efforts to target high-profile people. According to MITRE ATT&CK, APT28 has been operating on behalf of Russia's General Staff Main Intelligence Directorate 85th Main Special Service Center military unit 26165 since at least 2004. 

This particular operation, discovered in September, prompted a Government-Backed Attack alert to Google users this week, according to Shane Huntley, head of Google's Threat Analysis Group, or TAG, which handles state-sponsored attacks. 

Huntley verified that Gmail stopped and categorised the Fancy Bear phishing operation as spam. Google has advised targeted users to sign up for its Advanced Protection Program for all accounts. 

Erich Kron, a former security manager for the U.S. Army’s 2nd Regional Cyber Center, told ISMG: "Nation-state-backed APTs are nothing new and will continue to be a significant menace … as cyber warfare is simply a part of modern geopolitics."

Huntley said on Thursday in his Twitter thread, "TAG sent an above-average batch of government-backed security warnings. … Firstly these warnings indicate targeting NOT compromise. … The increased numbers this month come from a small number of widely targeted campaigns which were blocked." 

"The warning really mostly tells people you are a potential target for the next attack so, now may be a good time to take some security actions. … If you are an activist/journalist/government official or work in NatSec, this warning honestly shouldn't be a surprise. At some point some govt. backed entity probably will try to send you something."

Google's Security Keys 

Following the news of Fancy Bear's supposed targeting of high-profile individuals, Google stated in a blog post that cybersecurity functionalities in its APP programme will safeguard against certain attacks and that it was collaborating with organisations to distribute 10,000 free security keys to higher-profile individuals. The keys are two-factor authentication devices tapped by users during suspicious logins. 

According to Grace Hoyt, Google's partnerships manager, and Nafis Zebarjadi, its product manager for account security, Google's APP programme is updated to adapt to evolving threats - it is accessible to users, but is suggested for elected officials, political campaigns, activists, and journalists. It protects from phishing, malware, harmful downloads, and unwanted access. 

Alvarado, currently the threat intelligence team lead at the security firm Digital Shadows stated, "Although Google's actions are certainly a step in the right direction … the old saying, 'Where there is a will, there is a way,' still applies. … These [security] keys will undoubtedly make an attacker's job more difficult, but there are plenty of other options and vulnerabilities for [threat actors] to achieve their goals. 

KnowBe4's Kron alerted, "These security keys, while useful in their own limited scope, do not stop phishing emails from being successful. They only help when an attacker already has access to, or a way to bypass, the username and password for the email account being targeted." 

Global Partnerships 

Google stated it has partnered with the International Foundation for Electoral Systems, the UN Women Generation Equality Action Coalition for Technology and Innovation; and the nonprofit, nonpartisan organisation Defending Digital Campaigns in its initiatives to distribute 10,000 security keys. Google claims that as part of its partnership with the IFES, it has sent free security keys to journalists in the Middle East and female activists throughout Asia. 

Google stated it is giving security training through UN Women for UN chapters and groups that assist women in media, politics, and activism, as well as those in the C-suite. 

2FA Auto-Enrollment 

In a blog post on October 5, Google's group product manager for Chrome, AbdelKarim Mardini, and Guemmy Kim, Google's director of account security and safety, wrote that by the end of 2021, Google also aims to auto-enrol 150 million additional users in two-factor authentication - and require 2 million YouTubers to do the same. 

"We know that having a second form of authentication dramatically decreases an attacker's chance of gaining access to an account," Mardini and Kim wrote. 

"Two-step verification [is] one of the most reliable ways to prevent unauthorized access," Google said in May that it will soon begin automatically enrolling customers in 2-Step Verification if their accounts were configured correctly. 

This week, Google announced that it is auto-enrolling Google accounts with "proper backup mechanisms in place" to move to 2SV.

Threat Actors' Dwell Time Reduced to 24 Days, FireEye Reports

 

FireEye, the intelligence-led security company, published the FireEye Mandiant M-Trends 2021 report. The FireEye-owned forensic specialist’s M-Trends 2021 report was compiled from investigations of targeted attack activity between October 1, 2019, and September 30, 2020. This year’s report outlines critical details on the latest attacker methodologies and malware, the growth of multifaceted extortion and ransomware, preparing for expected UNC2452 / SUNBURST threat actors, growing insider threats, and industry targeting trends. 

“UNC2452, the threat actor responsible for the SolarWinds supply chain attack, reminds us that a highly-disciplined and patient actor cannot be underestimated. This actor’s attention paid to operational security, counter forensics, and even counterintelligence set it apart from its peers. Defense against this actor will not be easy, but it is not impossible. We have learned a great deal about UNC2452 in recent months, and we believe that intelligence will be our advantage in future encounters," said Sandra Joyce, Executive Vice President, Global Threat Intelligence, Mandiant.

Over the past decade, Mandiant has noticed a trending reduction in global median dwell time (defined as the duration between the start of a cyber intrusion and when it is identified). The researchers revealed that 59% of organizations detected attackers within their own environments over the period, a 12-percentage point increase on the previous year. The speed at which they did so also increased: dwell time for attackers inside corporate networks fell below a month for the first time in the report’s history, with the median global figure now at 24 days.

This is in stark contrast to the 416 days it took firms when the report was first published in 2011. It's also more than twice as fast as the previous year (56 days) and shows that detection and response are moving in the right direction. For incidents notified to firms externally, the figure was slightly higher (73 days) and for internally detected attacks it was lower (12 days). In America, dwell time dropped from 60 days in 2019 to just 17 days last year, while in APAC (76 days) and EMEA (66 days) the figure increased slightly. 

The top five most targeted industries, in order, are Business and Professional Services, Retail and Hospitality, Financial, Healthcare and High Technology. Mandiant experts observed that organizations in the Retail and Hospitality industry were targeted more heavily in 2020 – coming in as the second most targeted industry compared to 11th in last year’s report. 

Healthcare also rose significantly, becoming the third most targeted industry in 2020, compared to eighth in last year’s report. This increased focus by threat actors can most likely be explained by the vital role the healthcare sector played during the global pandemic.

However, a major contributing factor to the global reduction in dwell time may be the escalation of ransomware attacks, which usually take place over a shorter time frame than traditional cyber-espionage or data theft operations.