
Unveiling Storm-1152: A Top Creator of Fake Microsoft Accounts

 

The Digital Crimes Unit of Microsoft disrupted a major supplier of cybercrime-as-a-service (CaaS) last week, dubbed Storm-1152. The group had registered over 750 million fake Microsoft accounts, which it sold online to other cybercriminals, making millions of dollars in the process.

"Storm-1152 runs illicit websites and social media pages, selling fraudulent Microsoft accounts and tools to bypass identity verification software across well-known technology platforms," Amy Hogan-Burney, general manager for Microsoft's DCU, stated . "These services reduce the time and effort needed for criminals to conduct a host of criminal and abusive behaviors online.” 

Cybercriminals can employ fraudulent accounts linked to fictitious profiles as a virtually anonymous starting point for automated illegal operations including ransomware, phishing, spamming, and other fraud and abuse. Furthermore, Storm-1152 is the industry leader in the development of fictitious accounts, offering account services to numerous prominent cyber threat actors. 

Microsoft lists Scattered Spider (also known as Octo Tempest), the group behind this fall's ransomware attacks on Caesars Entertainment and MGM Resorts, among these cybercriminals.

Additionally, Hogan-Burney reported that the DCU had located the group's primary ringleaders, Tai Van Nguyen, Linh Van Nguyễn (also known as Nguyễn Van Linh), and Duong Dinh Tu, all of whom were based in Vietnam.

"Our findings show these individuals operated and wrote the code for the illicit websites, published detailed step-by-step instructions on how to use their products via video tutorials, and provided chat services to assist those using their fraudulent services," Burney noted. 

Sophisticated crimeware-as-a-service ring 

Storm-1152's ability to circumvent security measures such as CAPTCHAs and construct millions of Microsoft accounts linked to nonexistent people highlights the group's expertise, according to researchers.

The racket was likely carried out by "leveraging automation, scripts, DevOps practices, and AI to bypass security measures like CAPTCHAs." The CaaS phenomenon is a "complex facet of the cybercrime ecosystem... making advanced cybercrime tools accessible to a wider spectrum of malicious actors," stated Craig Jones, vice president of security operations at Ontinue. 

According to Critical Start's Callie Guenther, senior manager of cyber threat research, "the use of automatic CAPTCHA-solving services indicates a fairly high level of sophistication, allowing the group to bypass one of the primary defences against automated account creation."

Platforms can take a number of precautions to prevent unwittingly aiding cybercrime, the researchers noted. One such safeguard is the implementation of sophisticated detection algorithms that can recognise and flag suspicious conduct at scale, ideally with the help of AI. 

Furthermore, putting robust multifactor authentication (MFA) in place for the creation of accounts—especially those with elevated privileges—can greatly lower the success rate of creating fake accounts. However, Ontinue's Jones emphasises that more work needs to be done on a number of fronts.
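To make that idea concrete, here is a deliberately simplified sketch of the kind of signup risk scoring a platform might run before demanding step-up verification such as MFA or manual review. All field names, thresholds, and weights below are invented for illustration and are not taken from any vendor's product.

```python
# Hypothetical illustration: a naive signup "risk score" a platform might compute
# before requiring step-up verification. Thresholds and weights are invented.
from dataclasses import dataclass

@dataclass
class SignupAttempt:
    ip_signups_last_hour: int      # accounts created from this IP in the past hour
    email_domain: str              # domain of the registration e-mail
    captcha_failures: int          # failed CAPTCHA attempts in this session
    headless_browser: bool         # client fingerprint suggests automation

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.dev"}  # illustrative list only

def signup_risk_score(attempt: SignupAttempt) -> float:
    """Return a score in [0, 1]; higher means more likely to be bulk/fraudulent."""
    score = 0.0
    if attempt.ip_signups_last_hour > 5:
        score += 0.4                      # many accounts from one IP -> automation
    if attempt.email_domain.lower() in DISPOSABLE_DOMAINS:
        score += 0.2
    if attempt.captcha_failures >= 3:
        score += 0.2                      # CAPTCHA-solving services often retry
    if attempt.headless_browser:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = SignupAttempt(12, "mailinator.com", 4, True)
    print(signup_risk_score(suspicious))  # 1.0 -> require MFA or block the signup
```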

Thousands of Outdated Microsoft Exchange Servers are Susceptible to Cyber Attacks

 

A large number of Microsoft Exchange email servers in Europe, the United States, and Asia are currently exposed on the public internet and vulnerable to remote code execution flaws. These servers run out-of-date software that is no longer supported and therefore receives no updates or security patches, leaving them open to a variety of security issues, some of which carry critical severity ratings. 

Recent internet scans conducted by The ShadowServer Foundation have disclosed that nearly 20,000 Microsoft Exchange servers are presently accessible via the public internet and have reached the end of life stage. These statistics, however, may not be indicative of the whole picture. Yutaka Sejiyama, a Macnica security researcher, carried out additional research and identified over 30,000 Microsoft Exchange servers that have reached end-of-life status. 

Sejiyama's Shodan scans identified 30,635 unsupported Microsoft Exchange servers on the public web: 275 Exchange Server 2007 instances, 4,062 Exchange Server 2010 instances, and a whopping 26,298 Exchange Server 2013 instances. 

One of the main concerns with these old servers is the possibility of remote code execution. Outdated Exchange servers are vulnerable to a number of remote code execution bugs, including the critical ProxyLogon vulnerability (CVE-2021-26855), which can be combined with the less serious CVE-2021-27065 flaw to allow remote code execution.

According to Sejiyama's analysis of the scanned systems' build numbers, approximately 1,800 Exchange servers are still vulnerable to ProxyLogon, ProxyShell, and ProxyToken vulnerabilities. 
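As an illustration of how such an assessment can be made, the sketch below maps an Exchange build string (of the kind exposed by scans such as Shodan's) to its product generation and flags generations that no longer receive security updates. The build-number prefixes follow Microsoft's public versioning scheme; the end-of-life flags reflect support status at the time of writing and should be re-verified against Microsoft's lifecycle pages.

```python
# A minimal sketch: classify an Exchange build number by product generation
# and flag generations that are out of support.
EXCHANGE_GENERATIONS = {
    "8.":    ("Exchange Server 2007", True),   # end of support
    "14.":   ("Exchange Server 2010", True),   # end of support
    "15.0.": ("Exchange Server 2013", True),   # end of support (April 2023)
    "15.1.": ("Exchange Server 2016", False),  # extended support
    "15.2.": ("Exchange Server 2019", False),  # supported
}

def classify_build(build: str) -> tuple[str, bool]:
    """Return (product name, is_end_of_life) for an Exchange build string."""
    for prefix, (product, eol) in sorted(EXCHANGE_GENERATIONS.items(),
                                         key=lambda kv: -len(kv[0])):
        if build.startswith(prefix):
            return product, eol
    return "Unknown Exchange build", False

if __name__ == "__main__":
    for build in ("15.0.1497.48", "15.2.1258.12", "14.3.123.4"):
        product, eol = classify_build(build)
        print(f"{build}: {product}{' (UNSUPPORTED)' if eol else ''}")
```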

While some of these flaws do not carry critical severity ratings, Microsoft still considers them "important." Furthermore, apart from the ProxyLogon chain, which has already been exploited in attacks, all of these flaws are rated as "exploitation more likely." 

Organisations still running obsolete Exchange servers remain susceptible even if they have applied the available mitigations. Microsoft strongly advises prioritising the installation of updates on servers exposed to the outside world. For servers that have reached end of support, the only option is to upgrade to a version that still receives security patches. 

The identification of tens of thousands of vulnerable Microsoft Exchange servers emphasises the critical importance of updating software and applying security patches on a regular basis. Failure to do so exposes businesses to the risk of remote code execution and other security breaches.

Top 10 Cutting-Edge Technologies Set to Revolutionize Cybersecurity

 

In the present digital landscape, safeguarding against cyber threats and cybercrimes is a paramount concern due to their increasing sophistication. The advent of new technologies introduces both advantages and disadvantages. 

While these technologies can be harnessed for committing cybercrimes, adept utilization holds the potential to revolutionize cybersecurity. For instance, generative AI, with its ability to learn and generate new content, can be employed to identify anomalies, predict potential risks, and enhance overall security infrastructure. 

The ongoing evolution of technologies will significantly impact cybersecurity strategies as we navigate through the digital realm.

Examining the imminent transformation of cybersecurity, the following ten technologies are poised to play a pivotal role:

1. Quantum Cryptography:
Quantum Cryptography leverages the principles of quantum physics to securely encrypt and transmit data. Quantum key distribution (QKD), a technique ensuring the creation and distribution of interception-resistant keys, forms the foundation of this technology. Quantum cryptography ensures unbreakable security and anonymity for sensitive information and communications.

2. Artificial Intelligence (AI):
AI enables machines and systems to perform tasks requiring human-like intelligence, including learning, reasoning, decision-making, and natural language processing. In cybersecurity, AI automation enhances activities such as threat detection, analysis, response, and prevention. Machine learning capabilities enable AI to identify patterns and anomalies, fortifying cybersecurity against vulnerabilities and hazards.

3. Blockchain:
Blockchain technology creates a decentralized, validated ledger of transactions through a network of nodes. Offering decentralization, immutability, and transparency, blockchain enhances cybersecurity by facilitating digital signatures, smart contracts, identity management, and secure authentication.

4. Biometrics:
Biometrics utilizes physical or behavioral traits for identity verification and system access. By enhancing or replacing traditional authentication methods like passwords, biometrics strengthens cybersecurity and prevents fraud, spoofing, and identity theft.

5. Edge Computing:
Edge computing involves processing data closer to its source or destination, reducing latency, bandwidth, and data transfer costs. This technology enhances cybersecurity by minimizing exposure to external systems, thereby offering increased privacy and data control.

6. Zero Trust:
The zero-trust security concept mandates constant verification and validation of every request and transaction, regardless of the source's location within or outside the network. By limiting lateral movement, unwanted access, and data breaches, zero trust significantly improves cybersecurity.

7. Cloud Security:
Cloud security protects data and applications stored on cloud platforms through tools such as encryption, firewalls, antivirus software, backups, disaster recovery, and identity/access management. Offering scalability, flexibility, and efficiency, cloud security contributes to enhanced cybersecurity.

8. 5G Networks:
5G networks, surpassing 4G in speed, latency, and capacity, improve cybersecurity by enabling more reliable and secure data transfer. Facilitating advancements in blockchain, AI, and IoT, 5G networks play a crucial role in cybersecurity, particularly for vital applications like smart cities, transportation, and healthcare.

9. Cybersecurity Awareness:
Cybersecurity awareness, though not a technology itself, is a critical human component. It involves individuals and organizations defending against cyber threats through security best practices, such as strong passwords, regular software updates, vigilance against phishing emails, and prompt event reporting.

10. Cyber Insurance:
Cyber insurance protects against losses and damages resulting from cyberattacks. Organizations facing financial or reputational setbacks due to incidents like ransomware attacks or data breaches can benefit from cyber insurance, which may also incentivize the adoption of higher security standards and procedures.

Overall, the evolving landscape of cybersecurity is deeply intertwined with technological advancements that both pose challenges and offer solutions. As we embrace the transformative potential of quantum cryptography, artificial intelligence, blockchain, biometrics, edge computing, zero trust, cloud security, 5G networks, cybersecurity awareness, and cyber insurance, it becomes evident that a multi-faceted approach is essential. 

The synergy of these technologies, coupled with a heightened human awareness of cybersecurity best practices, holds the key to fortifying our defenses in the face of increasingly sophisticated cyber threats. As we march forward into the digital future, a proactive integration of these technologies and a commitment to cybersecurity awareness will be paramount in securing our digital domains.

Escalating Global Threats Targeting Cloud Infrastructure

 

Cloud computing's rapid uptake has fundamentally changed how businesses manage and store their data. However, as cloud environments grow in popularity, an alarming increase in cyber threats targeting them has followed. The sophistication of attacks on cloud environments is rising globally, according to recent studies and industry publications, illuminating the changing character of cyber threats.

According to a comprehensive global study on cybersecurity, the sophistication of attacks on clouds has witnessed a notable surge. The report emphasizes the need for enhanced security measures to counter these evolving threats. One of the key findings reveals that India, a major player in the IT industry, has experienced a significant increase in cloud-related cyber incidents. This highlights the urgency for organizations to prioritize their cloud security strategies to safeguard sensitive data.

The Thales Data Threat Report's analysis highlights the escalating severity of the threat. According to the research, the biggest causes of cloud data breaches worldwide are a rise in ransomware attacks and human error. Organizations must deploy strong security measures to safeguard their cloud assets as fraudsters adopt more sophisticated approaches. Ensuring the security, integrity, and availability of data is crucial as cloud-based services increasingly permeate company operations.

Experts caution that a proactive and multi-layered strategy for cybersecurity is necessary in light of these growing risks in cloud platforms. Traditional security measures alone are no longer sufficient. To effectively manage threats, organizations must use cutting-edge technologies and create a thorough security strategy. The importance of data security and encryption techniques, which are essential for securing cloud-stored data, is also emphasized in the paper.

The necessity for stronger security measures is also stressed by a research report on the worldwide cybersecurity business. In order to counter the increasingly complex nature of cyber threats, the research emphasizes the rising demand for cybersecurity solutions and services. It shows that businesses in a range of industries are putting more money into cutting-edge security tools to safeguard their cloud infrastructure and fend off complex threats.

Industry experts stress the value of keeping up with the most recent security trends and implementing preventative security measures in light of these findings. To inform employees of the possible hazards involved with cloud-based operations, organizations must emphasize security awareness training. Strong access controls, frequent vulnerability scans, and the use of threat intelligence tools are essential elements in enhancing cloud security.

Organizations must continue to be cautious and aggressive in their cybersecurity efforts as cloud threats' sophistication continues to rise internationally. Protecting cloud environments against developing cyber threats requires putting in place a thorough security strategy, utilizing cutting-edge technology, and promoting a culture of security awareness.



How the 3-2-1 Backup Policy Has Become Outdated


With the growing trend of ransomware attacks, it has become important for individuals and organizations to adopt efficient backup policies and procedures.

According to reports, around 236.1 million ransomware attacks were detected globally in 2022 alone. Cybercriminals have evolved, using innovative tactics such as malware, cryptography, and network infiltration to prevent companies from accessing their data. As a result of these emerging ransomware attacks, companies must strengthen their security and data backup procedures or face paying attackers for the release of their systems and backups.

Current Status of Backups

Systems compromised with ransomware can be swiftly restored with the right backups and disaster recovery techniques, thwarting the attackers. However, hackers now know how to lock and encrypt production files while simultaneously deleting or destroying backups. Obviously, their targets would not have to pay the ransom if they could restore their systems from backups.

The Conventional 3-2-1 Backup Policy

The 3-2-1 backup policy has been in place for many years and is considered the "gold standard" for guaranteeing the security of backups. Three data copies must be produced utilizing two different types of storage media, with at least one backup occurring offsite. The backup should ideally also be immutable, which means that it cannot be deleted, altered, or encrypted within the time period specified.

The "two diverse media" has typically indicated one copy on traditional hard drives and the other copy on tape for the past 20 years or so. The most popular methods for achieving immutability involved physically storing the tape in a cardboard box or destroying the plastic tab on the tape cartridge, which rendered the tape unwritable. While most often done by replicating the backup files between two company data centers to create the offsite copy.

Growing Popularity of Cloud Security

The cloud has grown in popularity as a place to store backups in recent years, and most businesses have since reconsidered the conventional 3-2-1 policy. The majority of firms use a mixed strategy: backups are first sent to a local storage appliance, which is typically faster than backing up directly to the cloud over limited bandwidth. Restores work the same way; restoring from a local copy will always be quicker. But what if hackers deleted the local backup? In that case, one may have to turn to the copy stored in the cloud.

Today, the majority of cloud storage providers offer "immutable" storage, which is locked and cannot be changed or deleted. This immutability is exactly what prevents hackers from eliminating your backups. Additionally, since the cloud is always "off-site," it satisfies one of the key demands of the 3-2-1 backup scheme: one may still have the cloud backup even if a fire, flood, or other event damages the local backup. Many people no longer see a need for two different types of media, or even for the third copy. 
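As one concrete example of the immutability described above, the hedged sketch below uses AWS S3 Object Lock via boto3 to create a bucket whose objects cannot be altered or deleted during a default retention period. The bucket name and retention window are placeholders, region and credential configuration are assumed to be set up in the environment, and other cloud storage providers expose comparable retention features.

```python
# A minimal sketch of "immutable" cloud backup storage using AWS S3 Object Lock.
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created.
s3.create_bucket(
    Bucket="example-backup-bucket",          # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Apply a default retention rule: objects cannot be deleted or overwritten
# for 30 days, even by highly privileged accounts (COMPLIANCE mode).
s3.put_object_lock_configuration(
    Bucket="example-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```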

Replicating the cloud copy to a second cloud site, preferably one that is at least 500 kilometers away, is the practice used most frequently nowadays. The two cloud copies ought to be immutable.

In comparison to on-premises storage systems, cloud storage providers typically offer substantially higher levels of data durability. Amazon, Google, Microsoft, and Wasabi have all chosen the gold standard of 11 nines of durability. Do the arithmetic and 11 nines of durability means that, statistically, a provider storing one million of your objects will lose one object every 659,000 years. Because of this, you never hear about cloud storage providers losing client information. 

The likelihood of losing data due to equipment failure is nearly zero if there are two copies spread across two distinct cloud data centers. The previous requirement of "two different media" is no longer necessary at this level of durability.

Moreover, alongside the added durability, the second cloud copy considerably improves backup data availability. Although the storage system may have an 11-nine durability rating, communications issues occasionally cause entire data centers to fall offline. A data center's availability is typically closer to 4 nines. If one cloud data center goes offline, one can still access their backups at the second cloud data center since they consist of two independent cloud copies. 

One should anticipate that the local copy will be lost in the course of a ransomware attack, leaving recovery dependent on the cloud copy. If the cloud goes offline for any reason, a company may as well shut down until the backups can be accessed. This makes having two cloud copies a good investment.  

The Media & Entertainment Industries' Major Public Cloud Security Issues

 

As reported by Wasabi, media and entertainment (M&E) organizations are swiftly resorting to cloud storage to improve their security procedures. While M&E organizations are still fairly new to cloud storage (69% had been using cloud storage for three years or less), public cloud storage use is on the rise, with 89% of respondents looking to increase (74%) or maintain (15%) their cloud services.
On average, M&E respondents reported spending 13.9% of their IT budgets on public cloud storage services. Overdrawn budgets due to hidden fees, as well as cybersecurity and data loss worries, continue to be issues for M&E organizations.

“The media and entertainment industry is a key vertical for cloud storage services, driven by the need for accessibility to large media files among multiple organizations and geographically distributed teams,” said Andrew Smith, senior manager of strategy and market intelligence at Wasabi Technologies, and a former IDC analyst.

“While complex fee structures and cybersecurity concerns remain obstacles for many M&E organizations, planned increases in cloud storage budgeting over the next year, combined with a very high prevalence of storage migration from on-premises to cloud, clearly shows the M&E industry is embracing and growing their cloud storage use year on year,” concluded Smith.

In the previous year, more than half of M&E organizations spent more than their planned amount on cloud storage services. The fees accounted for 49% of M&E firms' public cloud storage expense, with the other half going to actual storage capacity utilized. Understanding the charges and fees connected with cloud usage has been identified as the most difficult cloud migration barrier for M&E organizations.

Since M&E organizations rely substantially on data access, egress, and ingress, M&E respondents reported the highest occurrence of API call fees when compared to the global average. The respondents reported a very high incidence of cloud data migration, with 95% reporting that they migrated storage from on-premises to the public cloud in the previous year.

M&E respondents who plan to expand their public cloud storage budgets in the next 12 months identified new data protection, backup, and recovery requirements as the primary driver, compared to the global average, which rated third. More than one public cloud provider is used by 45% of M&E organizations. One of the major reasons M&E organizations chose a multi-cloud strategy was data security concerns, which came in second (44%) behind different buying centers within the organization making their own purchase decisions (47%).

The following are the top three security concerns that M&E organizations have with a public cloud:
  • Lack of native security services (42%)
  • Lack of native backup, disaster recovery, and data protection tools and services (39%)
  • Lack of experience with cloud platform or adequate security training (38%)
“Organizations in the media and entertainment industry are flocking to cloud storage as their digital assets need to be stored securely, cost-effectively and accessed quickly,” said Whit Jackson, VP of Media and Entertainment at Wasabi.

Three Commonly Neglected Attack Vectors in Cloud Security

 

As per 2022 Thales Cloud Security research, 88% of companies keep a considerable amount of their sensitive data (at least 21% of it) in the cloud. That comes as no surprise. Less encouraging is that, according to the same survey, 45% of organisations have had a data breach or failed an audit involving cloud-based data and apps. 

The majority of cloud computing security issues are caused by humans, who make easily avoidable blunders that cost businesses millions of dollars in lost revenue and negative PR. Most don't get the training they need to recognise and deal with constantly evolving threats, attack vectors, and attack methods. Enterprises cannot skip this training if they want to maintain control over their cloud security.

Side-channel attacks

Side-channel attacks in cloud computing can collect sensitive data from virtual machines that share the same physical server as other VMs and activities. A side-channel attack infers sensitive information about a system by using information gathered from the physical surroundings, such as power usage, electromagnetic radiation, or sound. An attacker, for example, could use statistics on power consumption to deduce the cryptographic keys used to encrypt data in a neighbouring virtual machine.  

Side-channel attacks can be difficult to mitigate because they frequently necessitate careful attention to physical security and may involve complex trade-offs between performance, security, and usability. Masking is a common defence strategy that adds noise to the system, making it more difficult for attackers to infer important information.

In addition, hardware-based countermeasures (shields or filters) limit the amount of data that can leak through side channels.

Your cloud provider will be responsible for these safeguards. Even if you know where their data centre is, you can't just go in and start implementing defences to side-channel assaults. Inquire with your cloud provider about how they manage these issues. If they don't have a good answer, switch providers.

Container breakouts

Container breakout attacks occur when an attacker gains access to the underlying host operating system from within a container. This can happen if a person has misconfigured the container or if the attacker is able to exploit one of the many vulnerabilities in the container runtime. After gaining access to the host operating system, an attacker may be able to access data from other containers or undermine the security of the entire cloud infrastructure.

Securing the host system, maintaining container isolation, using least-privilege principles, and monitoring container activities are all part of defending against container breakout threats. These safeguards must be implemented wherever the container runs, whether on public clouds or on more traditional systems and devices. These are only a few of the developing best practices; they are inexpensive and simple to apply for container developers and security experts.
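As a small, illustrative example of the least-privilege checks mentioned above, the following sketch uses the Docker SDK for Python to flag running containers that are privileged or that mount the Docker socket, two common ingredients in breakout scenarios. It assumes local access to the Docker daemon and is a starting point, not a substitute for a hardened runtime policy.

```python
# Illustrative audit with the Docker SDK for Python ("docker" package):
# flag containers that run privileged or mount the Docker socket.
import docker

def audit_containers() -> None:
    client = docker.from_env()
    for container in client.containers.list():
        host_cfg = container.attrs.get("HostConfig", {})
        mounts = container.attrs.get("Mounts", [])

        privileged = host_cfg.get("Privileged", False)
        socket_mounted = any(
            m.get("Source") == "/var/run/docker.sock" for m in mounts
        )

        if privileged or socket_mounted:
            print(f"[!] {container.name}: "
                  f"privileged={privileged}, docker.sock mounted={socket_mounted}")

if __name__ == "__main__":
    audit_containers()
```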

Cloud service provider vulnerabilities

Similarly to a side-channel attack, cloud service providers can be exposed, which can have serious ramifications for their clients. An attacker could gain access to customer data or launch a denial-of-service attack by exploiting a cloud provider's infrastructure weakness. Furthermore, nation-state actors can attack cloud providers in order to gain access to sensitive data or destroy essential infrastructure, which is the most serious concern right now.

Again, you have to trust your cloud provider. Physical audits of their infrastructure are rarely an option and would almost certainly be ineffective. You need a cloud provider who can swiftly and clearly answer inquiries about how they address these vulnerabilities.

Unpatched ICS Flaws in Critical Infrastructure: CISA Issues Alert

 

This week, the US Cybersecurity and Infrastructure Security Agency (CISA) released advisories covering a total of 49 vulnerabilities in eight industrial control system (ICS) products used by businesses across various critical infrastructure sectors. Several of these vulnerabilities remain unpatched. 

Organizations in the critical infrastructure sectors must increasingly take cybersecurity into account. Environments for ICS and operational technology (OT) are becoming more and more accessible via the Internet and are no longer air-gapped or compartmentalised as they once were. As a result, both ICS and OT networks have grown in popularity as targets for both nation-state players and threat actors driven by financial gain.

That's bad because many of the flaws in the CISA advisories can be exploited remotely, require low attack complexity, and give attackers access to target systems so they can manipulate settings, elevate privileges, bypass security measures, steal data, and crash systems. Products from Siemens, Rockwell Automation, Hitachi, Delta Electronics, Keysight, and VISAM all have high-severity vulnerabilities. 

The CISA recommendation was released at the same time as a study from the European Union on threats to the transportation industry, which included a similar warning about the possibility of ransomware attacks on OT systems used by organisations that handle air, sea, rail, and land transportation. Organizations in the transportation industry are also affected by at least some of the susceptible systems listed in CISA's alert. 

Critical vulnerabilities

Siemens' RUGGEDCOM APE1808 technology accounts for seven of the 49 vulnerabilities listed in CISA's advisories, and these are not yet patched. The flaws give an attacker the ability to crash a compromised system or elevate privileges on it. The device is presently used by businesses in several critical infrastructure sectors around the world to host commercial applications. 

Siemens' Scalance W-700 devices carry seventeen more flaws in various third-party components. The product is used by businesses in the chemical, energy, food, agricultural, and manufacturing sectors, as well as other critical infrastructure sectors. Siemens has urged organisations using the product to update its software to version 2.0 or later and to protect network access to the devices. 

InfraSuite Device Master, a solution used by businesses in the energy sector to keep tabs on the health of crucial systems, is impacted by thirteen of the recently discovered vulnerabilities. Attackers can utilise the flaws to start a denial-of-service attack or to obtain private information that could be used in another attack. 

Other vendors in the CISA advisories with multiple flaws in their products include Visam, whose Vbase Automation technology had seven flaws, and Rockwell Automation, whose ThinManager product, used in the critical manufacturing sector, had three. Keysight had one vulnerability in its Keysight N6845A Geolocation Server, used by communications and government organisations, while Hitachi updated details on a previously known vulnerability in its Energy GMS600, PWC600, and Relion products. 

For the second time in recent weeks, CISA has warned firms in the critical infrastructure sectors about severe flaws in the systems such organisations use in their operational and industrial technology environments. Similar warnings about flaws in equipment from 12 ICS suppliers, including Siemens, Hitachi, Johnson Controls, Panasonic, and Sewio, were released by CISA in January. 

Many of the defects in the previous warning, like the current collection of flaws, allowed threat actors to compromise systems, increase their privileges, and wreak other havoc in ICS and OT contexts. 

OT systems under attack

A report this week on cyberthreats to the transportation industry from the European Union Agency for Cybersecurity (ENISA) issued a warning about potential ransomware attacks against OT systems. The report's analysis of 98 publicly reported incidents in the EU transportation sector between January 2021 and October 2022 was the basis for the report. 

According to the data, 47% of the attacks were carried out by cybercriminals motivated by money, and the majority of those attacks (38%) involved ransomware. Other common motives included operational disruption, espionage, and ideological attacks by hacktivist groups. 

Even while these attacks occasionally caused collateral damage to OT systems, ENISA's experts did not discover any proof of targeted attacks on them in the 98 events it examined. 

"The only cases where OT systems and networks were affected were either when entire networks were affected or when safety-critical IT systems were unavailable," the ENISA report stated. However, the agency expects that to change. "Ransomware groups will likely target and disrupt OT operations in the foreseeable future."

The research from the European cybersecurity agency cited an earlier ENISA investigation that warned of ransomware attackers and other new threat groups tracked as Kostovite, Petrovite, and Erythrite that target ICS and OT systems and networks. The report also emphasised the ongoing development of malware designed specifically for industrial control systems, such as Industroyer, BlackEnergy, CrashOverride, and InController, as indicators of increasing attacker interest in ICS environments. 

"In general, adversaries are willing to dedicate time and resources in compromising their targets to harvest information on the OT networks for future purposes," the ENISA report further reads. "Currently, most adversaries in this space prioritize pre-positioning and information gathering over disruption as strategic objectives."

Security Observability: How it Transforms Cloud Security


Security Observability 

Security observability is the ability to gain insight into an organization's security posture, including its capacity to recognize and address security risks and flaws. It entails gathering, analyzing, and visualizing security data in order to spot potential risks and take preventative action to lessen them. 

The process involves data collection from varied security tools and systems, like network logs, endpoint security solutions, and security information and event management (SIEM) platforms, further utilizing the data to observe potential threats. In other words, unlike more conventional security operations tools, it informs you of what is expected to occur rather than just what has actually occurred. Security observability is likely the most significant advancement in cloud security technology that has occurred in recent years because of this major distinction. 
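A toy sketch of that collection-and-correlation idea follows: it merges events from several hypothetical sources (SIEM export, endpoint agent, firewall) into one timeline and flags a simple cross-source pattern. The field names and the rule are invented for illustration; real observability pipelines are far richer.

```python
# Toy correlation across hypothetical log sources: flag hosts where failed
# logins are followed shortly afterwards by outbound connections.
from datetime import datetime, timedelta

events = [
    {"ts": datetime(2023, 6, 1, 10, 0), "source": "siem",     "type": "failed_login",  "host": "web-01"},
    {"ts": datetime(2023, 6, 1, 10, 1), "source": "siem",     "type": "failed_login",  "host": "web-01"},
    {"ts": datetime(2023, 6, 1, 10, 3), "source": "endpoint", "type": "new_process",   "host": "web-01"},
    {"ts": datetime(2023, 6, 1, 10, 4), "source": "firewall", "type": "outbound_conn", "host": "web-01"},
]

def correlate(events, window=timedelta(minutes=10)):
    """Yield (host, login_time, conn_time) for suspicious sequences."""
    timeline = sorted(events, key=lambda e: e["ts"])
    for i, e in enumerate(timeline):
        if e["type"] != "failed_login":
            continue
        for later in timeline[i + 1:]:
            if (later["host"] == e["host"] and later["type"] == "outbound_conn"
                    and later["ts"] - e["ts"] <= window):
                yield e["host"], e["ts"], later["ts"]

for host, start, end in correlate(events):
    print(f"possible compromise on {host}: failed login {start} -> outbound {end}")
```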

However, a majority of users are still unaware of security observability, which is concerning. According to the 2021 Verizon Data Breach Investigations Report, cloud assets were involved in 24% of all breaches analyzed, up from 19% in 2020. 

It is obvious that many people working in cloud security are responding slowly to new risks, and a select few need to act more quickly. This is likely to get worse as multi-cloud apps that leverage federated architectures gain popularity and cloud deployments become more varied and sophisticated. The number of attack surfaces will keep growing, and attackers' ingenuity is starting to take off. 

Organizations can embrace cloud security observability to get a more complete understanding of their cloud security position, allowing them to: 

  • Detect and Respond to Threats More Quickly: Cloud security observability allows firms to recognize and respond to threats faster and more proactively by collecting data from numerous security tools and systems. 
  • Identify Vulnerabilities and Security Gaps: With better knowledge of potential threats, organizations can take proactive measures to address issues before bad actors manage to exploit them. 
  • Improve Incident Response: Cloud security observability can help organizations improve their incident response skills and lessen the effect of attacks by giving a more thorough view of security occurrences. 
  • Ensure Compliance: Cloud security observability further aids organizations in analyzing and monitoring their cloud security posture to maintain compliance with industry rules and regulations, and it supports audits and other reporting obligations.  

Future of the Cloud is Plagued by Security Issues

 

Several corporate procedures require the use of cloud services. Businesses may use cloud computing to cut expenses, speed up deployments, develop at scale, share information effortlessly, and collaborate effectively all without the need for a centralised site. 

But malicious hackers are abusing these same services more and more, and this trend is likely to continue in the near future. Cloud services are fertile ground for eCrime because threat actors are now well aware of how important they are. The primary conclusions from CrowdStrike's research for 2022 are as follows. 

The public cloud lacks specified perimeters, in contrast to conventional on-premises architecture. The absence of distinct boundaries presents a number of cybersecurity concerns and challenges, particularly for more conventional approaches. These lines will continue to blur as more companies seek for mixed work cultures. 

Cloud vulnerability and security risks

Opportunistically exploiting known remote code execution (RCE) vulnerabilities in server software is one of the main infiltration methods adversaries have been deploying. Without focusing on specific industries or geographical areas, this involves searching for weak servers. Threat actors use a range of tactics after gaining initial access to obtain sensitive data. 

One of the more common exploitation vectors employed by eCrime and targeted intrusion adversaries is credential-based assaults against cloud infrastructures. Criminals frequently host phoney authentication pages to collect real authentication credentials for cloud services or online webmail accounts.

These credentials are then used by actors to try and access accounts. As an illustration, the Russian cyberspy organisation Fancy Bear recently switched from using malware to using more credential-harvesting techniques. Analysts have discovered that they have been employing both extensive scanning methods and even victim-specific phishing websites that deceive users into believing a website is real. 

However, some adversaries are still using these services for command and control despite the decreased use of malware as an infiltration tactic. They accomplish this by distributing malware using trusted cloud services.

This strategy is useful because it enables attackers to avoid detection by signature-based methods, since many network scanning services trust cloud hosting providers' top-level domains. By blending into regular network traffic through legitimate cloud services (such as chat), adversaries may be able to get around security restrictions.

Cloud services are being used against organisations by hackers

Using a cloud service provider to take advantage of provider trust connections and access other targets through lateral movement is another strategy employed by bad actors. The objective is to raise privileges to global administrator levels in order to take control of support accounts and modify client networks, opening up several options for vertical spread to numerous additional networks. 

Attacks on containers like Docker are levelled at a lower level. Criminals have discovered ways to take advantage of Docker containers that aren't set up properly. These images can then be used as the parent to another application or on their own to interact directly with a tool or service. 

This hierarchical model means that if malicious tooling is added to an image, every container generated from it will also be compromised. Once they have access, hostile actors can take advantage of these elevated privileges to perform lateral movement and eventually spread throughout the network. 

Extended detection and response 

Extended detection and response (XDR) is another fundamental and essential component of effective cloud security. XDR technology can gather security data from endpoints, cloud workloads, network, email, and many other sources. With all of this threat data at their disposal, security teams can quickly and effectively identify and eliminate security threats across many domains. 

XDR platforms offer granular visibility across all networks and endpoints. They also provide detections and investigations, so analysts and threat hunters can concentrate on high-priority threats, because XDR removes from the alert stream anomalies that have been deemed unimportant. Last but not least, XDR systems should include thorough cross-domain threat data, covering everything from affected hosts and root causes to indicators and timelines. This data guides the entire investigation and remediation process.

While threat vectors continue to change every day, security breaches in the cloud are getting more and more frequent. In order to safeguard workloads hosted in the cloud and to continuously advance the maturity of security processes, it is crucial for businesses to understand current cloud risks and use the appropriate technologies and best practices.

2023: The Year of AI? A Closer Look at AI Trends

 

Threats to cyberspace are constantly changing. As a result, businesses rely on cutting-edge tools to respond to risks and, even better, prevent them from happening in the first place. The top five cybersecurity trends from last year were previously listed by Gartner. The need for artificial intelligence and machine learning tools to help people remain ahead of the curve is becoming more and more obvious with each passing development.

Gartner's estimates for 2022 are even more compelling this year. To manage cloud environments, remote work, and ongoing disruptions, businesses will require a versatile, adaptable toolkit powered by AI and ML. 

Trend 1: Increased attack surface 

Companies are at a turning point as a result of the increase in permanent remote job opportunities. Remote employment has been beneficial for employees and a relief for businesses who weren't sure if their operations would continue after the shift. The drawback is that because these employees need access to company resources wherever they are, businesses have had to move to the cloud, which has exposed more attack surfaces. 

Businesses, in Gartner's opinion, ought to think outside the box, and some businesses undoubtedly have. By deploying sophisticated, fully observable algorithms, AI can provide continuous monitoring across all environments, covering even the ephemeral resources of the cloud. For instance, Security Information and Event Management (SIEM) gathers and analyses log data from numerous sources, including network devices, servers, and apps, to give real-time insight into security-related data.

Trend 2: Identity System Defense 

Similar to trend 1, trend 2 sees the misuse of credentials as one of the most typical ways threat actors access sensitive networks. Companies are putting in place what Gartner refers to as "identity threat detection and response" solutions, and AI and machine learning will enable some of the more potent ones. 

For instance, AI-based phishing solutions analyse email content, sender reputation, and email header data to detect and thwart phishing attempts. Businesses can also use anomaly detection. These AI-based detection solutions can employ machine learning algorithms to identify anomalies in network traffic, such as unusual patterns of login attempts or unusual traffic patterns. 

When threat actors attempt credential stuffing or use a huge volume of stolen credential information for a brute-force attack, AI can also warn admins. And while it may surprise humans to find how predictable we are, AI can also examine common behaviour patterns to spot unusual conduct, such as login attempts from a different location, which aids in the quicker detection of potential invasions. 
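As a minimal sketch of that anomaly-detection idea, the example below trains scikit-learn's IsolationForest on made-up login telemetry (hour of day, failed attempts before success, distance from the user's usual location) and scores new logins. The features and data are illustrative only and are not drawn from any particular product.

```python
# Minimal anomaly-detection sketch on synthetic login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" logins: daytime hours, few failures, near the usual location.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(13, 2, 500),          # hour of day
    rng.poisson(0.2, 500),           # failed attempts before success
    rng.exponential(5, 500),         # km from usual location
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

candidates = np.array([
    [14, 0, 3],        # ordinary login
    [3, 9, 4200],      # 3 a.m., many failures, far from the usual location
])
print(model.predict(candidates))   # 1 = looks normal, -1 = flagged as anomalous
```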

Trend 3: Risk in the Digital Supply Chain 

By 2025, 45% of firms globally are expected to have been the target of a supply chain assault, according to Gartner. Although supply chains have always been intricate networks, the advent of big data and swift changes in consumer behaviour have pushed margins to precarious levels. 

To avoid disruptions, reduce risk, and make speedy adjustments when something does happen, businesses are utilising AI in a variety of ways. With the help of digital twin techniques, hypothetical scenarios may be successfully tested on precise digital supply chain replicas to identify the optimum solutions in almost any situation. It can also do sophisticated fraud detection or use deep learning algorithms to examine network data and find unwanted activity like malware and DDoS attacks. AI-based response systems can also react swiftly to perceived threats to stop an attack from spreading.

Trend 4: Consolidation of suppliers 

According to Gartner, manufacturers will keep combining their security services and products into packages on a single platform. While this might highlight some difficulties—introducing a single point of failure, for instance—Gartner thinks it will simplify the cybersecurity sector. 

Organizations are becoming more and more interested in collaboration security. Businesses are aware that the digital landscape is no longer confined to a small, on-premises area protected by conventional security technologies. Companies may be able to lessen some of the vulnerabilities present in a complex digital infrastructure by establishing a culture of security throughout the organisation and collaborating with services providing the aforementioned security packages. 

Trend 5: Cybersecurity mesh 

By 2024, firms that implement a cybersecurity mesh should see a significant decrease in the cost of individual security incidents, according to Gartner. There is an obvious benefit that businesses that deploy AI-based security products may experience because these systems can: 

  • Automate tedious, time-consuming operations, such as incident triage, investigation, and response, to boost the cybersecurity mesh's efficacy and efficiency. 
  • Utilise machine learning algorithms to analyse data from numerous sources, including network traffic, logs, and threat intelligence feeds, to spot potential security issues in real time and take immediate action. 
  • Use information from multiple sources, including financial transactions, social media, and news articles, to discover and evaluate any potential threats to the cybersecurity mesh and modify the security measures as necessary. 
  • Employ machine learning algorithms to find patterns in network traffic that are odd, such as strange login patterns or strange traffic patterns, which can assist in identifying and addressing potential security issues. 

Gartner's predictions came true in 2022, but in 2023 we're just beginning to witness dynamic AI solutions. Businesses are aware that disruptions and cloud migrations mean that pre-2020 security operations cannot simply be resumed. Instead, AI will be a critical cybersecurity element that supports each trend and encourages businesses to adopt a completely new cybersecurity strategy.

The Cloud Shared Responsibility Model: An Overview

 

When an organisation manages its own on-premises data centres, control over security falls mostly within the purview of internal teams. They are in charge of maintaining the security of both the data stored on servers and the servers themselves. 

With the introduction of a cloud service provider (CSP), the security discussion in a hybrid or cloud environment invariably changes. While the CSP is in charge of various security measures, clients frequently "over trust" cloud providers to keep their data secure. 

According to a recent McAfee report, 69% of CISOs have confidence in their cloud service providers to protect their data, and 12% think that cloud service providers are completely in charge of data security. 

In reality, everyone has a role to play in maintaining cloud security. The cloud shared responsibility model (SRM) was developed by CSPs like Amazon Web Services (AWS) and Microsoft Azure to inform cloud consumers of their responsibilities. 

In its most basic form, the cloud shared responsibility model signifies that CSPs are in charge of the cloud's security and that customers are in charge of protecting the data they upload to the cloud. Customer obligations will be decided by the deployment type—IaaS, PaaS, or SaaS. 

Infrastructure-as-a-Service (IaaS) 

IaaS services increase customers' security responsibilities while being designed to give them the maximum level of flexibility and administrative control. Let's utilise Amazon Elastic Compute Cloud (Amazon EC2) as an illustration. 

Customers are in charge of managing the guest operating system, any applications they install on these instances, and the configuration of the offered firewalls when they deploy an Amazon EC2 instance. They are also in charge of managing data, categorising assets, and putting the right permissions in place for identity and access management. 

IaaS consumers have a lot of control, but they can rely on CSPs to provide security in terms of physical, infrastructure, network, and virtualization. 
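To ground the customer side of that split, here is a hedged boto3 sketch of a check that falls squarely on the IaaS customer under the shared responsibility model: listing security groups whose inbound rules are open to the entire internet. Credentials and region are assumed to be configured in the environment, and the check is intentionally simplistic.

```python
# Sketch of a customer-side IaaS check: find security groups with inbound
# rules open to 0.0.0.0/0 (the whole internet).
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")
                print(f"{sg['GroupId']} ({sg.get('GroupName', '')}): "
                      f"port {port} open to 0.0.0.0/0")
```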

Platform-as-a-Service (PaaS) 

In PaaS, most of the labor-intensive tasks are delegated to CSPs. CSPs manage the underlying infrastructure, including guest operating systems, while customers concentrate on building and administering applications (as well as managing data, assets, and permissions). PaaS has clear advantages in terms of efficiency: security and IT personnel recover time that can be devoted to other urgent issues because they no longer have to worry about patching or other operating system changes. 

Software-as-a-Service (SaaS) 

SaaS imposes the highest level of duty on the CSP out of the three deployment options. Customers are solely responsible for controlling data and user access/identity permissions because the CSP manages the complete infrastructure and the apps. Customers merely need to choose how they wish to utilise the software, as the service provider will manage and maintain it.

The Shared Responsibility Model: How to Keep Your End of the Deal

It is predicted that customer errors will account for at least 95% of cloud security failures through 2023. Because of this, it's more crucial than ever to dispel misconceptions about the cloud shared responsibility model and position customers for success. A consistent theme persists despite the obvious changes in duties based on deployment type: it is crucial that organisations be able to see communications between devices, identify potential security concerns in real time, and quickly investigate and fix problems. Eliminating blind spots and shortening response times is what makes a cloud investment more secure.

Data: A Thorn in the Flesh for Most Multicloud Deployments

 

Data challenges, such as data integration, data security, data management, and the establishment of single sources of truth, are not new. Combining these problems with multicloud deployments is novel, though. With a little forethought and the application of widespread, long-understood data architecture best practices, many of these issues can be avoided. 

The main issue is when businesses seek to move data to multicloud deployments without carefully considering the typical issues that are likely to occur.

Creating data silos 

It can be challenging to integrate data across a number of cloud services, which can lead to isolated data silos. Nobody should be surprised, but multicloud has increased the number of data silos in various ways. These need to be addressed using data integration techniques, such as data integration technologies, data abstraction/virtualization, or other strategies that are already widely known. Or simply avoid creating silos in your data storage systems in the first place. 

Ignoring data security 

The complexity of ensuring the protection of sensitive data across many cloud services frequently increases security threats. It is crucial to have a solid data security plan in place that takes into account the particular security requirements of each cloud service without adding to the difficulty of handling data security. This frequently entails employing a central security manager or other technology that is available over the public cloud provider, also known as a supercloud or metacloud, to abstract native security functions. This layer of logical technology, which is located above the clouds, is a concept that is now in flux.  

Not using centralised data management 

If you try to handle everything manually, managing data across many cloud services can be a resource-intensive effort. A centralised system for managing data must be in place, able to handle various data sources and guarantee data consistency. Once more, this needs to be centrally managed and abstracted above native data management implementations and public cloud service providers. Data complexity must be managed according to your terms, not those of the data complexity itself. The latter is what the majority choose, which is a grave error. 

The difficult thing about all of these problems is that they are incredibly solvable thanks to enabling technologies and proven solution patterns. Enterprises commit stupid errors by rushing to multicloud deployments as rapidly as they can, and then they fail to see the ROI from multicloud or cloud migrations in general. Self-inflicted injuries account for the majority of the harm. Make sure you do your homework. Plan. Use the appropriate technologies. It is not difficult, and in the long run, it will save you and your company a tonne of time and money.

Source Code & Private Data Stolen From GoTo

GoTo, the parent company of LastPass, has disclosed that hackers recently broke into its systems and seized encrypted backups belonging to users. It claimed that in addition to LastPass user data, hackers managed to obtain data from its other enterprise products.

A data breach including the theft of source code and confidential technical information was announced by GoTo affiliate LastPass in August of last year. GoTo acknowledged in November that it had been impacted by the attack, which was connected to a third-party cloud storage service shared with LastPass.

Paddy Srinivasan, chief executive of GoTo, revealed that the security breach was more severe than initially suspected and involved the theft of account usernames, salted and hashed passwords, a portion of Multi-Factor Authentication (MFA) settings, and some product settings and licensing data.

Despite the delayed disclosure, GoTo did not offer any remediation assistance or guidance for the affected customers. According to GoTo, the company does not store its clients' credit card or bank information, nor does it compile personal data such as dates of birth, addresses, or Social Security numbers. Contrast that with the incident that affected its subsidiary, LastPass, in which hackers grabbed the contents of users' encrypted password vaults along with their names, email addresses, phone numbers, and payment information.

LastPass' response to the leak was ripped apart by cybersecurity experts, who charged the firm with being opaque about the gravity of the situation and failing to stop the hack. To provide more reliable authentication and login-based security solutions, GoTo is also transferring its accounts onto an improved Identity Management Platform.

The number of impacted consumers was not disclosed by GoTo. Jen Mathews, director of public relations at GoTo, claimed that the company has 800,000 clients, including businesses, but she declined to address other queries.

Three Steps to Achieve the Cloud's True Transformative Potential

 

The introduction of the public cloud in 2006 signaled a paradigm shift in not only computing but also in how business is conducted globally. The cloud opened the door to levels of agility, reliability, scalability, and speed that were previously unthinkable by allowing enterprises to acquire services at the precise time and scale they require them. 

Today, 92 percent of contemporary businesses seek to go to the cloud in order to support their efforts in digital transformation. In fact, to meet heightened performance demands, many major enterprises (82%) operate in hybrid cloud systems that combine on-premises, public, and private cloud services. For the majority of enterprises to survive in the modern world, this change is mission-critical because it offers previously unheard-of scalability, power, and resources. 

However, this transformation is taking place against the backdrop of a more hazardous threat environment, with recent assaults on businesses like Marriott, Cisco, and Toyota. The ability of enterprises to expand their cloud projects is ultimately constrained by the impending risks, rising costs, and increasing complexity of security measures. 

Hybrid cloud landscapes are far more complex than on-prem ones, despite being vital in this digital age, and working with various cloud providers makes it tough to see security concerns, spot performance bottlenecks, or troubleshoot fixes. It's time for enterprises to change how they handle cloud migration if they want to achieve the truly revolutionary potential of the cloud since 76% of IT professionals claim to have reached a wall with the cloud. 

Here are three actions that businesses could take to reduce risks and utilize the cloud's potential for improved business results: 

Close the cloud visibility gap 

Cloud migration is essential for operational success in today's digital-first environment. Even though 82% of major enterprises use hybrid cloud systems now, this percentage is only projected to rise, adding to the complexity and raising the danger of a security breach. 

There is, however, a way for businesses to turn this paradigm to their advantage. To do so, they will have to close the visibility gap, which experts recently ranked as the most important cloud security factor. The only way to achieve this is deep observability: giving IT staff real-time, network-level intelligence so they can proactively mitigate security and compliance risk, provide a superior user experience, and reduce the operational complexity of managing hybrid and multi-cloud IT infrastructures.

Done properly, this gives enterprises in-depth application visibility that catches both known and unknown threats, identifies bottlenecks, and delivers consistent, high-quality digital experiences.

Employ real-time network metadata 

The average weekly number of cyberattacks has reached an all-time high of 925, up almost 50 percent in 2021. Ransomware attacks, meanwhile, rose 24 percent in Q2 compared with Q1 of this year. It goes without saying that security teams are under pressure to work together to stay one step ahead of these sophisticated threat actors.

The growing number of adversaries means security teams are being flooded with threats and data faster than they can handle them. On average, security professionals receive more than 500 public cloud security alerts every day, and 38 percent receive more than 1,000.

To cope with this dangerous threat landscape and the overwhelming volume of alerts, threat analysts need access to real-time data so they can make crucial, well-informed business decisions and proactively protect the enterprise. A recently proposed law would in fact require real-time metadata to support prompt decision-making; government bodies are beginning to recognize the value of this information, and experts predict that more institutions will follow suit.

Increase employee capacity 

Once a business has narrowed the visibility gap and begun turning raw data into actionable intelligence, it can increase operational agility and ensure its efforts are coordinated and productive.

Approximately 70 percent of security operations center (SOC) teams report burnout as a result of the high-pressure environments they work in, a sign that morale and well-being among security staff are below average. Because teams receive more alerts and information than they can possibly absorb, it is crucial to give them resources that support them rather than overwhelm them.

SOC teams are navigating a difficult period for their industry with little support: there are roughly 700,000 open cybersecurity positions in the United States alone. To prevent burnout and retain talent, organizations must prioritize filling these gaps and caring for the staff they already have. Real-time metadata and deep observability are essential tools, but those efforts will fall short without a strong, well-versed team of security experts.

Conclusion 

The cloud's recent advances and widespread adoption have been thrilling to watch. For all its capabilities, however, it presents businesses with a number of risks and challenges. The three actions above will help IT teams that are currently overburdened by the cloud move from a reactive to a proactive security and compliance posture, lowering risk.

5 Methods Hackers Use to Overcome Cloud Security

Nearly every major company now uses cloud computing to some degree in its operations. To guard against the biggest threats to cloud security, an organization's cloud security policy must account for that integration.

Cloud defenses can be effective. In one recent case studied by researchers at Claroty, a vulnerability could be exploited against a product's on-premises version, while the Amazon Web Services (AWS) WAF blocked every attempt against the cloud version by flagging the SQL injection payload as malicious. As the end of this article shows, however, even that protection can be bypassed.

What is cloud security?

Cloud security is the comprehensive set of protocols, technologies, and procedures that protects cloud computing environments, cloud-based applications, and cloud-stored data. Responsibility for it is shared between the customer and the cloud provider.

It helps maintain data security and privacy across web-based platforms, apps, and infrastructure. Cloud service providers and users, including individuals, small and medium-sized businesses, and enterprises, must work together to secure these systems. 

How do hackers breach cloud security?

While crypto mining is the primary goal of most of the operations described below, many of the same techniques could be put to more damaging use in the future.

1. Cloud Misconfiguration

Misconfigured cloud security settings are a leading cause of cloud data breaches. At many enterprises, the practices used to maintain cloud security posture are simply insufficient to protect cloud-based infrastructure.

Common weaknesses include default passwords, lax access controls, poorly managed permissions, and disabled data encryption. Many of these flaws stem from insider threats and inadequate security awareness.

A misconfigured database server, for instance, can leave data retrievable through a simple online search and lead to a large-scale breach.
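As a concrete illustration of how one such misconfiguration can be caught, the sketch below (which assumes the boto3 library and read-only AWS credentials allowed to list buckets and read their Public Access Block settings) flags S3 buckets that have no Public Access Block configured at all. It is a simplified example for discussion, not a replacement for a full cloud security posture management tool.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def buckets_without_public_access_block() -> list:
    """Return the names of S3 buckets with no Public Access Block configuration."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            # A missing configuration is the risky case we want to surface.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged


if __name__ == "__main__":
    for name in buckets_without_public_access_block():
        print(f"No Public Access Block configured: {name}")
```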

2. Denonia Cryptominer

The Denonia malware targets serverless cloud environments running AWS Lambda. Its operators rely on DNS over HTTPS (DoH), sending DNS queries to DoH-capable resolvers over encrypted HTTPS connections. Because the lookups are wrapped in encrypted traffic, AWS cannot see the fraudulent DNS requests, and the malware avoids triggering alerts.

The attackers also appear to have padded the code with hundreds of lines of user-agent HTTPS query strings as a distraction intended to divert or confuse security investigators. Analysts also say the malware found a way to buffer the binary in order to evade man-in-the-middle (MITM) inspection and endpoint detection and response (EDR) systems.
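To see why DoH traffic slips past conventional DNS monitoring, the minimal sketch below performs an ordinary DNS-over-HTTPS lookup against Cloudflare's public JSON endpoint. This is standard, legitimate protocol usage shown for illustration only, not Denonia's code; to a network monitor, the query is just another HTTPS request, and no packet ever leaves on UDP port 53.

```python
import requests


def doh_lookup(name: str, record_type: str = "A") -> list:
    """Resolve a hostname via DNS-over-HTTPS instead of the local resolver."""
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"Accept": "application/dns-json"},  # request the JSON answer format
        timeout=5,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]


if __name__ == "__main__":
    # The resolution happens inside an encrypted HTTPS session to cloudflare-dns.com.
    print(doh_lookup("example.com"))
```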

3. CoinStomp malware 

CoinStomp is cloud-native malware that targets cloud service providers in Asia for cryptojacking. To blend into the Unix environments of cloud systems, it establishes command and control (C2) through a /dev/tcp reverse shell. The script then uses root privileges to install and run additional payloads as system-wide services.

4. WatchDog Cryptojacker

The WatchDog crypto-mining operation has amassed as many as 209 Monero coins. The WatchDog mining malware is a multi-part set of Go language binaries, one of which emulates the Linux watchdog daemon mechanism.

5. Mirai botnet 

In order to build a network of bots that are capable of unleashing destructive cyberattacks, the Mirai botnet searches the internet for unprotected smart devices before taking control of them.

Mirai infects ARC-based smart devices and turns them into a system of remotely controlled bots; such botnets are frequently used to carry out DDoS attacks. The malware exploits the Linux-based software that many Internet of Things (IoT) devices run, attacking weaknesses in those devices and chaining the compromised units together into an infected network known as a botnet.

Claroty researchers eventually crafted a new SQL injection payload, written in JSON syntax, that the WAF did not recognize but that the database engine would still parse. All of the affected vendors responded to the research by adding JSON syntax support to their products, but Claroty believes additional WAFs may also be affected.
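Whatever a given WAF's parser understands, applications can neutralize SQL injection at the source with parameterized queries, which bind user input as data rather than executable SQL. The sketch below is a generic, hypothetical example using Python's built-in sqlite3 module, not the Claroty research itself or any vendor's fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user(connection: sqlite3.Connection, name: str) -> list:
    """Look up a user with a bound parameter; the input is never interpreted as SQL."""
    return connection.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()


print(find_user(conn, "alice"))        # [('alice', 'admin')]
print(find_user(conn, "' OR 1=1 --"))  # [] because the payload is treated as a literal string
```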