
Is the 3-2-1 Backup Policy Now Outdated?

With the growing trend of ransomware attacks, it has become important for individuals and organizations to adopt efficient backup policies and procedures.

According to reports, around 236.1 million ransomware attacks were detected globally in 2022 alone. Cybercriminals have evolved, using innovative tactics involving malware, cryptography, and network infiltration to prevent companies from accessing their data. These emerging ransomware attacks force companies to strengthen their security and data backup procedures, while attackers extort payment in exchange for the release of compromised systems and backups.

Current Status of Backups

Systems compromised with ransomware can be swiftly restored with the right backups and disaster recovery techniques, thwarting the attackers. However, hackers now know how to lock and encrypt production files while simultaneously deleting or destroying backups. Obviously, their targets would not have to pay the ransom if they could restore their systems from backups.

The Conventional 3-2-1 Backup Policy

The 3-2-1 backup policy has been in place for many years and is considered the "gold standard" for guaranteeing the security of backups. Three copies of the data must be kept, using two different types of storage media, with at least one copy stored offsite. Ideally, the backup should also be immutable, meaning it cannot be deleted, altered, or encrypted within a specified time period.
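The rule itself is simple enough to express as a programmatic check. The sketch below is purely illustrative: the `BackupCopy` structure and its field names are assumptions for this example, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud"
    offsite: bool     # stored away from the primary site?
    immutable: bool   # protected against deletion/encryption?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Three copies, on two different media, at least one offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False, immutable=False),   # local working copy
    BackupCopy("tape", offsite=False, immutable=True),    # second medium
    BackupCopy("cloud", offsite=True, immutable=True),    # offsite copy
]
print(satisfies_3_2_1(copies))  # True
```

Dropping any of the three conditions, such as keeping all three copies on disk, makes the check fail, which mirrors how the policy is meant to be evaluated.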

For the past 20 years or so, "two diverse media" has typically meant one copy on traditional hard drives and the other on tape. The most popular methods for achieving immutability involved physically storing the tape in a cardboard box or breaking off the plastic tab on the tape cartridge, which rendered the tape unwritable. The offsite copy was most often created by replicating the backup files between two company data centers.

Growing Popularity of Cloud Security

The cloud has grown in popularity as a place to store backups in recent years, and since its emergence the majority of businesses have reconsidered the conventional 3-2-1 policy. Most firms now use a mixed strategy. Because bandwidth to the cloud is limited, backups are first sent to a local storage appliance, which is typically faster than backing up directly to the cloud. Restoring from backups works the same way: restoring from a local copy will always be quicker. But what if the local backup was deleted by the hackers? In that case, one may have to turn to the copy stored in the cloud.

Today, the majority of cloud storage providers offer "immutable" storage, which is secured and cannot be changed or deleted. This immutability is exactly what you need to prevent hackers from eliminating your backups. Additionally, since the cloud is always "off-site," it satisfies one of the key demands of the 3-2-1 backup scheme. One may still have the cloud backup even if a fire, flood, or other event damages the local backup. As a result, many people no longer see a need for two different types of media, or even for the third copy.

Replicating the cloud copy to a second cloud site, preferably one that is at least 500 kilometers away, is the practice used most frequently nowadays. The two cloud copies ought to be immutable.

In comparison to on-premises storage systems, cloud storage providers typically offer substantially higher levels of data durability. Amazon, Google, Microsoft, and Wasabi have all chosen the gold standard of 11 nines of durability. If you do the arithmetic, 11 nines of durability means that if you store one million objects, you will statistically lose one object every 659,000 years. Because of this, you never hear about cloud storage providers losing client information.
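As a rough sanity check on such claims, the simplest model multiplies the object count by a per-object annual loss probability. This is a naive illustration only; providers' published figures, such as the 659,000 years quoted above, come from their own more detailed reliability models, so the numbers will not match exactly.

```python
def expected_annual_loss(objects: int, nines: int) -> float:
    """Expected number of objects lost per year, treating durability
    as an independent annual survival probability per object."""
    annual_loss_probability = 10.0 ** -nines   # 11 nines -> 1e-11
    return objects * annual_loss_probability

losses_per_year = expected_annual_loss(1_000_000, 11)
years_per_loss = 1 / losses_per_year   # ~100,000 years under this naive model
print(f"{losses_per_year:.0e} objects/year, one loss every {years_per_loss:,.0f} years")
```

Either way, the expected loss rate is so small that durability effectively stops being the limiting factor.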

The likelihood of losing data due to equipment failure is nearly zero if there are two copies spread across two distinct cloud data centers. The previous requirement of "two different media" is no longer necessary at this level of durability.

Moreover, alongside the added durability, the second cloud copy considerably improves backup data availability. Although the storage system may have an 11-nine durability rating, communications issues occasionally cause entire data centers to fall offline. A data center's availability is typically closer to 4 nines. If one cloud data center goes offline, one can still access their backups at the second cloud data center since they consist of two independent cloud copies. 

One may anticipate that the local copy will be lost during the course of a ransomware attack, leaving the organization dependent on cloud restoration. A company may as well shut down until the backups are accessible if the cloud goes offline for any reason. This makes having two cloud copies a good investment.

The Media & Entertainment Industries' Major Public Cloud Security Issues


As reported by Wasabi, media and entertainment (M&E) organizations are swiftly resorting to cloud storage to improve their security procedures. While M&E organizations are still fairly new to cloud storage (69% had been using cloud storage for three years or less), public cloud storage use is on the rise, with 89% of respondents looking to increase (74%) or maintain (15%) their cloud services.

On average, M&E respondents reported spending 13.9% of their IT budgets on public cloud storage services. Overdrawn budgets due to hidden fees, as well as cybersecurity and data loss worries, continue to be issues for M&E organizations.

“The media and entertainment industry is a key vertical for cloud storage services, driven by the need for accessibility to large media files among multiple organizations and geographically distributed teams,” said Andrew Smith, senior manager of strategy and market intelligence at Wasabi Technologies, and a former IDC analyst.

“While complex fee structures and cybersecurity concerns remain obstacles for many M&E organizations, planned increases in cloud storage budgeting over the next year, combined with a very high prevalence of storage migration from on-premises to cloud, clearly show the M&E industry is embracing and growing its cloud storage use year on year,” concluded Smith.

In the previous year, more than half of M&E organizations spent more than their planned amount on cloud storage services. Fees accounted for 49% of M&E firms' public cloud storage expenses, with the other half going to actual storage capacity used. Understanding the charges and fees connected with cloud usage was identified as the most difficult cloud migration barrier for M&E organizations.

Since M&E organizations rely substantially on data access, egress, and ingress, M&E respondents reported the highest occurrence of API call fees when compared to the global average. The respondents reported a very high incidence of cloud data migration, with 95% reporting that they migrated storage from on-premises to the public cloud in the previous year.

M&E respondents who plan to expand their public cloud storage budgets in the next 12 months identified new data protection, backup, and recovery requirements as the primary driver, compared to the global average, which rated third. More than one public cloud provider is used by 45% of M&E organizations. One of the major reasons M&E organizations chose a multi-cloud strategy was data security concerns, which came in second (44%) behind different buying centers within the organization making their own purchase decisions (47%).

The following are the top three security concerns that M&E organizations have with a public cloud:
  • Lack of native security services (42%)
  • Lack of native backup, disaster and data protection tools and services (39%)
  • Lack of experience with cloud platform or adequate security training (38%)

“Organizations in the media and entertainment industry are flocking to cloud storage as their digital assets need to be stored securely, cost-effectively and accessed quickly,” said Whit Jackson, VP of Media and Entertainment at Wasabi.

Three Commonly Neglected Attack Vectors in Cloud Security


As per a 2022 Thales Cloud Security study, 88% of companies keep a considerable amount (at least 21%) of their sensitive data in the cloud. That comes as no surprise. According to the same survey, 45% of organisations have had a data breach or failed an audit involving cloud-based data and apps. This is less surprising, and far less positive, news.

The majority of cloud computing security issues are caused by humans. They make easily avoidable blunders that cost businesses millions of dollars in lost revenue and negative PR. Most don't receive the training they need to recognise and deal with constantly evolving threats, attack vectors, and attack methods. Enterprises cannot skip this training if they want to stay in control of their cloud security.

Side-channel attacks

Side-channel attacks in cloud computing can collect sensitive data from virtual machines that share the same physical server as other VMs and activities. A side-channel attack infers sensitive information about a system by using information gathered from the physical surroundings, such as power usage, electromagnetic radiation, or sound. An attacker, for example, could use statistics on power consumption to deduce the cryptographic keys used to encrypt data in a neighbouring virtual machine.  

Side-channel attacks can be difficult to mitigate because they frequently necessitate careful attention to physical security and may involve complex trade-offs between performance, security, and usability. Masking is a common defence strategy that adds noise to the system, making it more difficult for attackers to infer important information.
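The physical side channels described above are the provider's problem, but the same principle shows up in application code as timing side channels. The sketch below contrasts a naive comparison, whose running time leaks how much of a secret an attacker has guessed, with Python's constant-time `hmac.compare_digest`; the secret value is invented for the example.

```python
import hmac

def naive_check(secret: str, guess: str) -> bool:
    # Leaks timing: returns at the first mismatched character, so the
    # comparison time correlates with how many leading characters of
    # the secret the attacker has already guessed correctly.
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return len(secret) == len(guess)

def constant_time_check(secret: str, guess: str) -> bool:
    # hmac.compare_digest takes time independent of where the first
    # mismatch occurs, defeating this class of timing inference.
    return hmac.compare_digest(secret.encode(), guess.encode())

print(naive_check("s3cret", "s3cret"))          # True
print(constant_time_check("s3cret", "guess!"))  # False
```

The two functions give identical answers; only their timing behaviour differs, which is exactly the property side-channel defences care about.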

In addition, hardware-based countermeasures (shields or filters) limit the amount of data that can leak through side channels.

Your cloud provider is responsible for these safeguards. Even if you know where their data centre is, you can't just walk in and start implementing defences against side-channel attacks. Ask your cloud provider how they manage these issues. If they don't have a good answer, switch providers.

Container breakouts

Container breakout attacks occur when an attacker gains access to the underlying host operating system from within a container. This can happen if a person has misconfigured the container or if the attacker is able to exploit one of the many vulnerabilities in the container runtime. After gaining access to the host operating system, an attacker may be able to access data from other containers or undermine the security of the entire cloud infrastructure.

Securing the host system, maintaining container isolation, using least-privilege principles, and monitoring container activities are all part of defending against container breakout threats. These safeguards must be implemented wherever the container runs, whether on public clouds or on more traditional systems and devices. These are only a few of the developing best practices; they are inexpensive and simple to apply for container developers and security experts.
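One of those inexpensive practices is auditing container configurations for settings that make breakout easier. The checker below is a hedged sketch: the dictionary keys mimic the general shape of Docker's HostConfig section, but the specific checks are illustrative, not an exhaustive or official policy.

```python
def audit_container(config: dict) -> list[str]:
    """Flag settings that weaken container isolation.
    `config` loosely mirrors Docker's HostConfig structure;
    the keys checked here are examples, not a complete list."""
    findings = []
    if config.get("Privileged"):
        findings.append("privileged mode grants near-host access")
    if config.get("PidMode") == "host":
        findings.append("host PID namespace weakens isolation")
    for bind in config.get("Binds", []):
        if bind.startswith("/var/run/docker.sock"):
            findings.append("mounting docker.sock allows host control")
    if not config.get("ReadonlyRootfs"):
        findings.append("consider a read-only root filesystem")
    return findings

risky = {"Privileged": True, "PidMode": "host",
         "Binds": ["/var/run/docker.sock:/var/run/docker.sock"]}
for finding in audit_container(risky):
    print("-", finding)
```

Running the same audit in CI against every image definition is one cheap way to apply the least-privilege principle consistently, wherever the container ends up deployed.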

Cloud service provider vulnerabilities

As with side-channel attacks, cloud service providers themselves can be exposed, which can have serious ramifications for their clients. An attacker could gain access to customer data or launch a denial-of-service attack by exploiting a weakness in a cloud provider's infrastructure. Furthermore, nation-state actors can attack cloud providers in order to gain access to sensitive data or destroy essential infrastructure, which is the most serious concern right now.

Again, faith in your cloud provider is required. Physical audits of their infrastructure are rarely an option and would almost certainly be ineffective. You need a cloud provider who can swiftly and clearly answer inquiries about how they address vulnerabilities.

Unpatched ICS Flaws in Critical Infrastructure: CISA Issues Alert


This week, the US Cybersecurity and Infrastructure Security Agency (CISA) released recommendations for a total of 49 vulnerabilities in eight industrial control systems (ICS) utilised by businesses in various critical infrastructure sectors. Several of these vulnerabilities are still unpatched. 

Organizations in the critical infrastructure sectors must increasingly take cybersecurity into account. Environments for ICS and operational technology (OT) are becoming more and more accessible via the Internet and are no longer air-gapped or compartmentalised as they once were. As a result, both ICS and OT networks have grown in popularity as targets for both nation-state players and threat actors driven by financial gain.

That's bad because many of the flaws in the CISA advisory can be remotely exploited, require little attack complexity to succeed, and give attackers access to target systems so they can manipulate settings, elevate privileges, get around security measures, steal data, and crash systems. Products from Siemens, Rockwell Automation, Hitachi, Delta Electronics, Keysight, and VISAM all have high-severity vulnerabilities.

The CISA recommendation was released at the same time as a study from the European Union on threats to the transportation industry, which included a similar warning about the possibility of ransomware attacks on OT systems used by organisations that handle air, sea, rail, and land transportation. Organizations in the transportation industry are also affected by at least some of the susceptible systems listed in CISA's alert. 

Critical vulnerabilities

Seven of the 49 vulnerabilities listed in CISA's alert affect Siemens' RUGGEDCOM APE1808 technology and are not currently patched. The flaws give an attacker the ability to crash a compromised system or escalate privileges on it. The device is presently used by businesses in several critical infrastructure sectors all around the world to host commercial applications.

The Scalance W-700 devices from Siemens have seventeen more defects in various third-party components. The product is used by businesses in the chemical, energy, food, agricultural, and manufacturing sectors, as well as other critical infrastructure sectors. Siemens has urged organisations using the product to update their software to version 2.0 or later and to protect network access to the devices.

InfraSuite Device Master, a solution used by businesses in the energy sector to keep tabs on the health of crucial systems, is impacted by thirteen of the recently discovered vulnerabilities. Attackers can utilise the flaws to start a denial-of-service attack or to obtain private information that could be used in another attack. 

Other vendors in the CISA advisory with several defects in their products include Visam, whose Vbase Automation technology had seven flaws, and Rockwell Automation, whose ThinManager product, used in the critical manufacturing industry, had three flaws. Keysight had one vulnerability in its Keysight N6845A Geolocation Server, used by communications and government organisations, while Hitachi updated details on a previously known vulnerability in its Energy GMS600, PWC600, and Relion products.

For the second time in recent weeks, CISA has issued a warning to firms in the critical infrastructure sectors regarding severe flaws in the systems such organisations employ in their operational and industrial technology settings. Similar warnings on flaws in equipment from 12 ICS suppliers, including Siemens, Hitachi, Johnson Controls, Panasonic, and Sewio, were released by CISA in January.

Many of the defects in the previous warning, like the current collection of flaws, allowed threat actors to compromise systems, increase their privileges, and wreak other havoc in ICS and OT contexts. 

OT systems under attack

A report this week on cyberthreats to the transportation industry from the European Union Agency for Cybersecurity (ENISA) issued a warning about potential ransomware attacks against OT systems. The report was based on an analysis of 98 publicly reported incidents in the EU transportation sector between January 2021 and October 2022.

According to the data, 47% of the attacks were carried out by cybercriminals who were motivated by money. The majority of these attacks (38%) involved ransomware. Operational disruptions, spying, and ideological assaults by hacktivist groups were a few more frequent reasons. 

While these attacks occasionally caused collateral damage to OT systems, ENISA's experts found no proof of targeted attacks on them in the 98 incidents examined.

"The only cases where OT systems and networks were affected were either when entire networks were affected or when safety-critical IT systems were unavailable," the ENISA report stated. However, the agency expects that to change. "Ransomware groups will likely target and disrupt OT operations in the foreseeable future."

The research from the European cybersecurity agency cited an earlier ENISA investigation that warned of ransomware attackers and other new threat groups tracked as Kostovite, Petrovite, and Erythrite that target ICS and OT systems and networks. The report also emphasised the ongoing development of malware designed specifically for industrial control systems, such as Industroyer, BlackEnergy, CrashOverride, and InController, as indicators of increasing attacker interest in ICS environments. 

"In general, adversaries are willing to dedicate time and resources in compromising their targets to harvest information on the OT networks for future purposes," the ENISA report further reads. "Currently, most adversaries in this space prioritize pre-positioning and information gathering over disruption as strategic objectives."

Security Observability: How it Transforms Cloud Security

Security Observability 

Security observability is the ability to gain insight into an organization's security posture, including its capacity to recognize and address security risks and flaws. It entails gathering, analyzing, and visualizing security data in order to spot potential risks and take preventative action to lessen them.

The process involves data collection from varied security tools and systems, like network logs, endpoint security solutions, and security information and event management (SIEM) platforms, further utilizing the data to observe potential threats. In other words, unlike more conventional security operations tools, it informs you of what is expected to occur rather than just what has actually occurred. Security observability is likely the most significant advancement in cloud security technology that has occurred in recent years because of this major distinction. 
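At its core, this kind of observability starts with simple baselining over collected events. The sketch below is a toy stand-in for what a SIEM does at scale; the event format, IP addresses, and threshold are invented for the example.

```python
from collections import Counter

def flag_bursts(events: list[dict], threshold: int = 5) -> dict:
    """Return sources whose failed-login count exceeds a baseline
    threshold, a minimal stand-in for SIEM-style baselining."""
    counts = Counter(e["src"] for e in events if e["action"] == "login_failed")
    return {src: n for src, n in counts.items() if n > threshold}

events = (
    [{"src": "10.0.0.7", "action": "login_failed"}] * 12   # burst: suspicious
    + [{"src": "10.0.0.8", "action": "login_failed"}] * 2  # normal noise
    + [{"src": "10.0.0.9", "action": "login_ok"}]
)
print(flag_bursts(events))   # {'10.0.0.7': 12}
```

A real platform would replace the fixed threshold with a learned baseline per source, which is what lets it say what is *expected* to occur rather than only what has occurred.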

However, a majority of users are still unaware of security observability, which raises concerns. According to the 2021 Verizon Data Breach Investigations Report, cloud assets were involved in 24% of all breaches analyzed, up from 19% in 2020.

It is obvious that many people working in cloud security are responding slowly to new risks, and they need to act more quickly. This is likely to get worse as multi-cloud apps that leverage federated architectures gain popularity and cloud deployments become more varied and sophisticated. The number of attack surfaces will keep growing, and attackers' ingenuity is starting to take off.

Organizations can embrace cloud security observability to get a more complete understanding of their cloud security position, allowing them to: 

  • Detect and Respond to Threats More Quickly: Cloud security observability allows firms to recognize and respond to threats faster and more proactively by collecting data from numerous security tools and systems. 
  • Identify Vulnerabilities and Security Gaps: With better knowledge of potential threats, organizations can take proactive measures to address issues before bad actors manage to exploit them. 
  • Improve Incident Response: Cloud security observability can help organizations improve their incident response skills and lessen the effect of attacks by giving a more thorough view of security occurrences. 
  • Ensure Compliance: Cloud security observability further aids organizations in analyzing and monitoring their cloud security posture to maintain compliance with industry rules and regulations, also supporting audits and other legal obligations. 

Future of the Cloud is Plagued by Security Issues


Many corporate procedures require the use of cloud services. Businesses may use cloud computing to cut expenses, speed up deployments, develop at scale, share information effortlessly, and collaborate effectively, all without the need for a centralised site.

But malicious hackers are increasingly abusing these same services, and this trend is likely to continue in the near future. Cloud services are a wonderful environment for eCrime, since threat actors are now well aware of how important they are. The primary conclusions from CrowdStrike's research for 2022 are as follows.

The public cloud lacks defined perimeters, in contrast to conventional on-premises architecture. The absence of distinct boundaries presents a number of cybersecurity concerns and challenges, particularly for more conventional approaches. These lines will continue to blur as more companies adopt hybrid work cultures.

Cloud vulnerability and security risks

Opportunistically exploiting known remote code execution (RCE) vulnerabilities in server software is one of the main infiltration methods adversaries have been deploying. Without focusing on specific industries or geographical areas, this involves searching for weak servers. Threat actors use a range of tactics after gaining initial access to obtain sensitive data. 

One of the more common exploitation vectors employed by eCrime and targeted intrusion adversaries is credential-based assaults against cloud infrastructures. Criminals frequently host phoney authentication pages to collect real authentication credentials for cloud services or online webmail accounts.

These credentials are then used by actors to try and access accounts. As an illustration, the Russian cyberspy organisation Fancy Bear recently switched from using malware to using more credential-harvesting techniques. Analysts have discovered that they have been employing both extensive scanning methods and even victim-specific phishing websites that deceive users into believing a website is real. 

However, some adversaries are still using these services for command and control despite the decreased use of malware as an infiltration tactic. They accomplish this by distributing malware using trusted cloud services.

This strategy is useful because it enables attackers to avoid detection by signature-based methods, since many network scanning services trust cloud hosting services' top-level domains. By blending into regular network traffic, adversaries may be able to get around security restrictions by using legitimate cloud services (like chat).

Cloud services are being used against organisations by hackers

Using a cloud service provider to take advantage of provider trust connections and access other targets through lateral movement is another strategy employed by bad actors. The objective is to raise privileges to global administrator levels in order to take control of support accounts and modify client networks, opening up several options for vertical spread to numerous additional networks. 

Attacks on containers such as Docker operate at a lower level. Criminals have discovered ways to take advantage of Docker containers that aren't configured properly. These images can then be used as the parent of another application, or on their own to interact directly with a tool or service.

This hierarchical model means that if malicious tooling is added to an image, every container generated from it will also be compromised. Once they have access, hostile actors can take advantage of these elevated privileges to perform lateral movement and eventually spread throughout the network. 

Extended detection and response (XDR)

Extended detection and response (XDR) is another fundamental and essential component of effective cloud security. XDR is a technology that can gather security data from endpoints, cloud workloads, network email, and many other sources. With all of this threat data at their disposal, security teams can quickly and effectively identify and eliminate security threats across many domains.

XDR platforms offer granular visibility across all networks and endpoints. They also provide detections and investigations, so analysts and threat hunters can concentrate on high-priority threats, because XDR removes from the alert stream anomalies that have been deemed unimportant. Last but not least, XDR systems should include thorough cross-domain threat data, with information on everything from affected hosts and root causes to indicators and timelines. This data guides the entire investigation and remediation procedure.

While threat vectors continue to change every day, security breaches in the cloud are getting more and more frequent. In order to safeguard workloads hosted in the cloud and to continuously advance the maturity of security processes, it is crucial for businesses to understand current cloud risks and use the appropriate technologies and best practices.

2023: The Year of AI? A Closer Look at AI Trends


Threats to cyberspace are constantly changing. As a result, businesses rely on cutting-edge tools to respond to risks and, even better, prevent them from happening in the first place. The top five cybersecurity trends from last year were previously listed by Gartner. The need for artificial intelligence and machine learning tools to help people remain ahead of the curve is becoming more and more obvious with each passing development.

These estimates for 2022 are even more compelling this year. To manage cloud environments, remote labour, and ongoing disruptions, businesses will require a versatile, adaptable toolkit powered by AI and ML.

Trend 1: Increased attack surface 

Companies are at a turning point as a result of the increase in permanent remote job opportunities. Remote employment has been beneficial for employees and a relief for businesses who weren't sure if their operations would continue after the shift. The drawback is that because these employees need access to company resources wherever they are, businesses have had to move to the cloud, which has exposed more attack surfaces. 

Businesses, in Gartner's opinion, ought to think outside the box. And some undoubtedly have. By deploying sophisticated algorithms with full observability, AI can provide continuous monitoring across all settings, covering even the ephemeral resources of the cloud. For instance, Security Information and Event Management (SIEM) platforms gather and analyse log data from numerous sources, including network devices, servers, and apps, to give real-time insight into security-related data.

Trend 2: Identity System Defense 

Similar to trend 1, trend 2 sees the misuse of credentials as one of the most typical ways threat actors access sensitive networks. Companies are putting in place what Gartner refers to as "identity threat detection and response" solutions, and AI and machine learning will enable some of the more potent ones. 

For instance, AI-based phishing solutions analyse email content, sender reputation, and email header data to detect and thwart phishing attempts. Businesses can also use anomaly detection. These AI-based detection solutions can employ machine learning algorithms to identify anomalies in network traffic, such as unusual patterns of login attempts or unusual traffic patterns. 

When threat actors attempt credential stuffing or use a huge volume of stolen credentials for a brute-force attack, AI can also warn admins. And while it may surprise humans to learn how predictable we are, AI can also examine common behaviour patterns to spot unusual conduct, such as login attempts from a different location, which aids in the quicker detection of potential intrusions.
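A minimal version of such anomaly detection needs nothing more than a statistical baseline. The sketch below flags hours whose login volume deviates sharply from the mean, a deliberately simple stand-in for the machine learning detectors described above; the data and threshold are invented for the example.

```python
import statistics

def zscore_anomalies(series: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of points more than `threshold` population standard
    deviations from the mean; a toy baseline, not a production detector."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []   # perfectly flat series: nothing to flag
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

hourly_logins = [40, 42, 39, 41, 43, 38, 40, 400]  # last hour: credential stuffing?
print(zscore_anomalies(hourly_logins))  # [7]
```

Real detectors learn per-user and per-location baselines instead of a single global one, but the underlying question, "how far is this from normal?", is the same.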

Trend 3: Risk in the Digital Supply Chain 

By 2025, 45% of firms globally are expected to have been the target of a supply chain assault, according to Gartner. Although supply chains have always been intricate networks, the advent of big data and swift changes in consumer behaviour have pushed margins to precarious levels. 

To avoid disruptions, reduce risk, and make speedy adjustments when something does happen, businesses are utilising AI in a variety of ways. With the help of digital twin techniques, hypothetical scenarios may be tested on precise digital supply chain replicas to identify the optimum solutions in almost any situation. AI can also perform sophisticated fraud detection, or use deep learning algorithms to examine network data and find unwanted activity like malware and DDoS attacks. AI-based response systems can also react swiftly to perceived threats to stop an attack from spreading.

Trend 4: Consolidation of suppliers 

According to Gartner, manufacturers will keep combining their security services and products into packages on a single platform. While this might highlight some difficulties—introducing a single point of failure, for instance—Gartner thinks it will simplify the cybersecurity sector. 

Organizations are becoming more and more interested in collaboration security. Businesses are aware that the digital landscape is no longer confined to a small, on-premises area protected by conventional security technologies. Companies may be able to lessen some of the vulnerabilities present in a complex digital infrastructure by establishing a culture of security throughout the organisation and collaborating with services providing the aforementioned security packages. 

Trend 5: Cybersecurity mesh 

By 2024, firms that implement a cybersecurity mesh should see a significant decrease in the cost of individual security incidents, according to Gartner. Businesses that deploy AI-based security products stand to benefit, because these systems can: 

  • Automate tedious, time-consuming operations, such as incident triage, investigation, and response, to boost the cybersecurity mesh's efficacy and efficiency. 
  • Utilise machine learning algorithms to analyse data from numerous sources, including network traffic, logs, and threat intelligence feeds, to spot potential security issues in real time and take immediate action. 
  • Use information from multiple sources, including financial transactions, social media, and news articles, to discover and evaluate any potential threats to the cybersecurity mesh and modify the security measures as necessary. 
  • Employ machine learning algorithms to find patterns in network traffic that are odd, such as strange login patterns or strange traffic patterns, which can assist in identifying and addressing potential security issues. 

Gartner's predictions came true in 2022, but in 2023 we're just beginning to witness dynamic AI solutions. Businesses are aware that disruptions and cloud migrations mean that pre-2020 security operations cannot simply be resumed. Instead, AI will be a critical cybersecurity element that supports each trend and encourages businesses to adopt a completely new cybersecurity strategy.

The Cloud Shared Responsibility Model: An Overview


When an organisation manages its own on-premises data centres, control over security is largely the purview of internal teams. They are in charge of maintaining the security of both the data stored on servers and the servers themselves.

With the introduction of a cloud service provider (CSP), the security discussion in a hybrid or cloud environment invariably changes. While the CSP is in charge of various security measures, clients frequently "over trust" cloud providers to keep their data secure. 

According to a recent McAfee report, 69% of CISOs have confidence in their cloud service providers to protect their data, and 12% think that cloud service providers are completely in charge of data security. 

In reality, everyone has a role to play in maintaining cloud security. The cloud shared responsibility model (SRM) was developed by CSPs like Amazon Web Services (AWS) and Microsoft Azure to inform cloud consumers of their responsibilities. 

In its most basic form, the cloud shared responsibility model signifies that CSPs are in charge of the cloud's security and that customers are in charge of protecting the data they upload to the cloud. Customer obligations will be decided by the deployment type—IaaS, PaaS, or SaaS. 

Infrastructure-as-a-Service (IaaS) 

IaaS services are designed to give customers the maximum level of flexibility and administrative control, which in turn increases their security responsibilities. Let's use Amazon Elastic Compute Cloud (Amazon EC2) as an illustration. 

Customers are in charge of managing the guest operating system, any applications they install on these instances, and the configuration of the offered firewalls when they deploy an Amazon EC2 instance. They are also in charge of managing data, categorising assets, and putting the right permissions in place for identity and access management. 

IaaS consumers have a lot of control, but they can rely on CSPs to provide security in terms of physical, infrastructure, network, and virtualization. 
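One of the customer-side duties mentioned above, configuring the offered firewalls, can be sketched as an audit routine. The rule format below is an illustrative dictionary, not any real provider's security-group schema:

```python
def audit_firewall_rules(rules, sensitive_ports=(22, 3389)):
    """Flag firewall rules that expose sensitive ports to the internet.

    `rules` uses a hypothetical schema: each rule is a dict with a
    `port` and a source `cidr`. Flags SSH/RDP open to 0.0.0.0/0.
    """
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in sensitive_ports:
            findings.append(f"port {rule['port']} open to the internet")
    return findings

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: usually fine
    {"port": 22, "cidr": "0.0.0.0/0"},     # SSH open to the world: risky
    {"port": 3389, "cidr": "10.0.0.0/8"},  # RDP limited to internal range
]
print(audit_firewall_rules(rules))  # ['port 22 open to the internet']
```

In an IaaS deployment, this kind of check is squarely on the customer's side of the shared responsibility line.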

Platform-as-a-Service (PaaS) 

Most of the labor-intensive tasks are delegated to CSPs in PaaS. CSPs manage the underlying infrastructure, including guest operating systems, while customers concentrate on building and administering applications (as well as managing data, assets, and rights). PaaS has definite advantages in terms of efficiency: by not having to worry about patching or other operating system updates, security and IT personnel recover time that can be devoted to other urgent issues. 

Software-as-a-Service (SaaS) 

SaaS imposes the highest level of duty on the CSP out of the three deployment options. Customers are solely responsible for controlling data and user access/identity permissions because the CSP manages the complete infrastructure and the apps. Customers merely need to choose how they wish to utilise the software, as the service provider will manage and maintain it.

The Shared Responsibility Model: How to Keep Your End of the Deal

It is predicted that customer errors will account for at least 95% of cloud security failures through 2023. Because of this, it's more crucial than ever to dispel misconceptions about the cloud shared responsibility model and position customers for success. A consistent theme persists despite the obvious changes in duties based on deployment types: it is crucial that organisations be able to see communications between devices, identify potential security concerns in real time, and quickly investigate and fix problems. Fewer blind spots and quicker response times mean more security from your cloud investment.

Data: A Thorn in the Flesh for Most Multicloud Deployments


Data challenges, such as data integration, data security, data management, and the establishment of single sources of truth, are not new. Combining these problems with multicloud deployments is novel, though. With a little forethought and the application of widespread, long-understood data architecture best practices, many of these issues can be avoided. 

The main issue is when businesses seek to move data to multicloud deployments without carefully considering the typical issues that are likely to occur.

Creating data silos 

It can be challenging to integrate data across a number of cloud services, which can lead to isolated data silos. Nobody should be surprised, but multicloud has increased the number of data silos in various ways. These need to be addressed using data integration techniques such as data integration tooling, data abstraction/virtualization, or other strategies that are already widely known. Or simply avoid creating silos in your data storage systems in the first place. 
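The integration idea can be made concrete with a toy merge of per-cloud record silos into a single view. This is a stand-in for real data-virtualization tooling; the record shapes and field names are invented for illustration:

```python
def merge_silos(*sources):
    """Merge per-cloud record silos into one view keyed by customer email.

    Later sources fill in fields missing from earlier ones. A toy
    stand-in for the data abstraction/virtualization tools mentioned
    in the text, not a real integration product.
    """
    unified = {}
    for source in sources:
        for record in source:
            entry = unified.setdefault(record["email"], {})
            for key, value in record.items():
                entry.setdefault(key, value)
    return unified

# Two silos, e.g. a CRM in one cloud and billing in another
crm = [{"email": "a@example.com", "name": "Ada"}]
billing = [{"email": "a@example.com", "plan": "pro"}]
print(merge_silos(crm, billing))
# {'a@example.com': {'email': 'a@example.com', 'name': 'Ada', 'plan': 'pro'}}
```

The hard part in practice is agreeing on the join key and the precedence rules, which is exactly the "single source of truth" problem the text raises.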

Ignoring data security 

The complexity of ensuring the protection of sensitive data across many cloud services frequently increases security threats. It is crucial to have a solid data security plan in place that takes into account the particular security requirements of each cloud service without adding to the difficulty of handling data security. This frequently entails employing a central security manager or other technology that operates above the public cloud providers, sometimes called a supercloud or metacloud, to abstract native security functions. This layer of logical technology, which sits above the clouds, is a concept that is still in flux.  

Not using centralised data management 

If you try to handle everything manually, managing data across many cloud services can be a resource-intensive effort. A centralised system for managing data must be in place, able to handle various data sources and guarantee data consistency. Once more, this needs to be centrally managed and abstracted above native data management implementations and public cloud service providers. Data complexity must be managed according to your terms, not those of the data complexity itself. The latter is what the majority choose, which is a grave error. 

The frustrating thing about all of these problems is that they are eminently solvable thanks to enabling technologies and proven solution patterns. Enterprises commit avoidable errors by rushing to multicloud deployments as rapidly as they can, and then fail to see the ROI from multicloud or cloud migrations in general. Self-inflicted injuries account for the majority of the harm. Make sure you do your homework. Plan. Use the appropriate technologies. It is not difficult, and in the long run, it will save you and your company a tonne of time and money.

Source Code & Private Data Stolen From GoTo

GoTo, the parent company of LastPass, has disclosed that hackers recently broke into its systems and seized encrypted backups belonging to users. It claimed that in addition to LastPass user data, hackers managed to obtain data from its other enterprise products.

A data breach including the theft of source code and confidential technical information was announced by GoTo affiliate LastPass in August of last year. GoTo acknowledged being impacted by the attack in November, which was connected to an unidentified third-party cloud security vendor.

Paddy Srinivasan, chief executive of GoTo, revealed that the security breach was more severe than initially suspected and involved the loss of account usernames, salted and hashed passwords, a portion of the Multi-Factor Authentication (MFA) settings, along with some product settings and license data.

Despite the delay, GoTo did not offer any restoration assistance or guidance for the impacted consumers. According to GoTo, the company does not keep track of its client's credit card or bank information or compile personal data like dates of birth, addresses, or Social Security numbers. Contrast that with the incident that affected its subsidiary, LastPass, in which hackers grabbed the contents of users' encrypted password vaults along with their names, email addresses, phone numbers, and payment information.

LastPass' response to the leak was ripped apart by cybersecurity experts, who charged the firm with being opaque about the gravity of the situation and failing to stop the hack. To provide more reliable authentication and login-based security solutions, GoTo is also transferring its accounts onto an improved Identity Management Platform.

The number of impacted consumers was not disclosed by GoTo. Jen Mathews, director of public relations at GoTo, claimed that the company has 800,000 clients, including businesses, but she declined to address other queries.

Three Steps to Achieve the Cloud's True Transformative Potential


The introduction of the public cloud in 2006 signaled a paradigm shift in not only computing but also in how business is conducted globally. The cloud opened the door to levels of agility, reliability, scalability, and speed that were previously unthinkable by allowing enterprises to acquire services at the precise time and scale they require them. 

Today, 92 percent of contemporary businesses seek to move to the cloud in order to support their efforts in digital transformation. In fact, to meet heightened performance demands, many major enterprises (82%) operate in hybrid cloud systems that combine on-premises, public, and private cloud services. For the majority of enterprises to survive in the modern world, this change is mission-critical because it offers previously unheard-of scalability, power, and resources. 

However, this transformation is taking place against the backdrop of a more hazardous threat environment, with recent assaults on businesses like Marriott, Cisco, and Toyota. The ability of enterprises to expand their cloud projects is ultimately constrained by the impending risks, rising costs, and increasing complexity of security measures. 

Hybrid cloud landscapes are far more complex than on-prem ones, despite being vital in this digital age, and working with various cloud providers makes it tough to see security concerns, spot performance bottlenecks, or troubleshoot fixes. With 76% of IT professionals claiming to have hit a wall with the cloud, it's time for enterprises to change how they handle cloud migration if they want to achieve the truly revolutionary potential of the cloud. 

Here are three actions that businesses could take to reduce risks and utilize the cloud's potential for improved business results: 

Reduce the void in cloud visibility 

Cloud migration is essential for operational success in today's digital-first environment. Even though 82% of major enterprises use hybrid cloud systems now, this percentage is only projected to rise, adding to the complexity and raising the danger of a security breach. 

However, there is a method for businesses to use this paradigm to their advantage. They will have to close the visibility gap, which experts recently regarded as the most crucial cloud security factor, in order to do this. Deep observability, which provides IT staff with network-level intelligence in real-time to proactively mitigate security and compliance risk, provide a superior user experience, and reduce the operational complexity of managing hybrid and multi-cloud IT infrastructures, is the only way to achieve this. 

Done appropriately, this gives enterprises in-depth application visibility that catches known and unknown dangers, identifies bottlenecks, and provides consistent, high-quality digital experiences. 

Employ real-time network metadata 

The average weekly number of cyberattacks has reached an all-time high of 925, up by almost 50% in 2021. Furthermore, compared to Q1 of this year, ransomware attacks increased by 24% in Q2. It goes without saying that security teams are under pressure to cooperate in order to keep one step ahead of these knowledgeable threat actors. 

Security teams are being overrun with threats and data at a faster rate than they can even handle as a result of the rising number of adversaries. More than 500 public cloud security alerts are received daily by security professionals on average, and 38% receive more than 1,000. 

To mitigate the dangerous threat landscape and the overwhelming number of alerts, threat analysts need access to real-time data in order to make crucial, well-informed business decisions and proactively protect the enterprise. In fact, real-time metadata is required under a recently proposed law in order to make prompt business decisions. The advantages of having access to this information are starting to dawn on the government, and experts predict that more institutions will follow suit. 

Increase employee capacity 

The business can increase operational agility to make sure its efforts are coordinated and fruitful once it has narrowed the visibility gap and started turning raw data into actionable data. 

Approximately 70% of security operation center (SOC) teams suffer burnout as a result of the high-pressure situations they operate in, indicating that morale and welfare among security team members are currently worse than average. Teams receive more warnings and information than they can possibly absorb, so it's crucial to make sure they are provided with the resources they require to support them rather than overwhelm them. 

With little assistance, SOC teams are navigating through a challenging time for their sector. There are around 700,000 available cybersecurity positions in the United States alone. To prevent burnout and keep talent, organizations must give top priority to filling these gaps and caring for the staff they already have. While real-time metadata and deep observability are essential tools, the efforts would be limited without a strong, well-versed staff of security experts. 


The recent advancement and widespread use of the cloud have been thrilling to observe. Nevertheless, despite its vast powers, businesses face a number of risks and difficulties. The three actions listed above will enable IT teams, who are currently overburdened by the cloud, to move from a reactive to a proactive security and compliance posture, lowering risk.

5 Methods Hackers Use to Overcome Cloud Security

Nearly every major company has used cloud computing to varying degrees in its operations. To protect against the biggest threats to cloud security, the organization's cloud security policy must be able to handle the integration of the cloud.

In one recent example, a SQL injection vulnerability could be exploited against a product's on-premises version, but the Amazon Web Services (AWS) WAF blocked every attempt against the cloud version by flagging the SQL injection payload as malicious.

What is cloud security?

Cloud computing environments, cloud-based apps, and cloud-stored data are all protected by a comprehensive set of protocols, technologies, and procedures known as cloud security. Both the consumer and the cloud provider are jointly responsible for cloud security. 

It helps maintain data security and privacy across web-based platforms, apps, and infrastructure. Cloud service providers and users, including individuals, small and medium-sized businesses, and enterprises, must work together to secure these systems. 

How do hackers breach cloud security?

While crypto mining is the primary focus of most of these hacking operations at the present time, some of their methods may be applied to more malicious aims in the future.

1. Cloud Misconfiguration

A major factor in cloud data breaches is incorrectly configured cloud security settings. The tactics used by many enterprises to maintain their cloud security posture are insufficient for safeguarding their cloud-based infrastructure.

Default passwords, lax access controls, improperly managed permissions, inactive data encryption, and various other issues are usual vulnerabilities. Insider threats and inadequate security awareness are the root causes of many of these flaws.

A large data breach could occur, for instance, if the database server was configured incorrectly and data became available through a simple online search.
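A minimal configuration audit illustrates the class of mistakes described above. The config keys are hypothetical, not a real cloud provider's schema; they simply model the usual weak spots:

```python
def scan_config(config):
    """Check a (hypothetical) service config dict for common weak spots.

    Covers three of the misconfigurations named in the text: default
    passwords, public exposure, and inactive data encryption.
    """
    findings = []
    if config.get("password") in {"admin", "password", "changeme", ""}:
        findings.append("default or empty password")
    if config.get("public_access"):
        findings.append("resource reachable from the public internet")
    if not config.get("encryption_at_rest"):
        findings.append("encryption at rest disabled")
    return findings

# The database-server scenario from the paragraph above
db = {"password": "admin", "public_access": True, "encryption_at_rest": False}
for finding in scan_config(db):
    print(finding)
```

Real cloud security posture management (CSPM) tools run checks of exactly this shape, continuously and against the provider's actual APIs.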

2. Denonia Cryptominer

Cloud serverless systems using AWS Lambda are the focus of the Denonia malware. The Denonia attackers use DNS over HTTPS, often referred to as DoH, sending DNS requests over HTTPS to DoH-based resolver servers. As a result, the attackers can conceal themselves behind encrypted communication, preventing AWS from seeing their fraudulent DNS lookups and from flagging the malware's activity.

The attackers also seem to have thrown in hundreds of lines of user-agent HTTPS query strings as additional distractions to divert or perplex security investigators. Analysts also claim that the malware found a way to pack the binary in order to evade man-in-the-middle (MitM) inspection and endpoint detection and response (EDR) systems.
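To see why DoH hides lookups, it helps to look at what a DoH query is: an ordinary HTTPS request to a resolver. The sketch below builds a lookup URL for Google's public DNS JSON API; only the URL construction is shown (no network call), and the hostname is invented:

```python
from urllib.parse import urlencode

def doh_query_url(hostname, record_type="A",
                  resolver="https://dns.google/resolve"):
    """Build a DNS-over-HTTPS lookup URL (Google's public JSON API).

    Because the lookup travels inside ordinary HTTPS to the resolver,
    anything in between - including the cloud platform - sees only
    encrypted traffic, which is the property Denonia abuses.
    """
    return f"{resolver}?{urlencode({'name': hostname, 'type': record_type})}"

print(doh_query_url("c2.example.com"))
# https://dns.google/resolve?name=c2.example.com&type=A
```

Defensively, this is why Lambda monitoring that relies on watching plaintext DNS traffic misses DoH-based malware entirely.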

3. CoinStomp malware 

Cloud-native malware called CoinStomp targets cloud security providers in Asia with the intention of cryptojacking. To blend into the Unix environments of cloud systems, it uses a C2 channel based on a /dev/tcp reverse shell. Then, using root rights, the script installs and runs additional payloads as system-wide services. 

4. WatchDog Cryptojacker

The WatchDog crypto-mining operation has obtained as many as 209 Monero cryptocurrency coins. WatchDog mining malware consists of a multi-part Go Language binary set. One binary emulates the Linux WatchDog daemon mechanism. 

5. Mirai botnet 

In order to build a network of bots that are capable of unleashing destructive cyberattacks, the Mirai botnet searches the internet for unprotected smart devices before taking control of them.

When ARC-based smart devices are infected with the malware known as Mirai, a system of remotely operated bots is created. DDoS attacks are frequently carried out via botnets.
The Mirai malware is designed to exploit weaknesses in smart devices running the Linux OS, which many Internet of Things (IoT) devices use, and to link them into a network of infected devices called a botnet.
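Mirai's "takeover" step was mostly a brute-force sweep of factory credentials. The same idea can be used defensively: audit your own device fleet for known default username/password pairs. The credential list below is a small illustrative subset, and the device records are invented:

```python
# A few of the factory credential pairs the original Mirai source code
# tried; here used defensively, to audit one's own device fleet.
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def audit_devices(devices):
    """Return hosts still using a known factory username/password pair."""
    return [d["host"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDS]

fleet = [
    {"host": "cam-01", "user": "admin", "password": "admin"},   # default!
    {"host": "cam-02", "user": "admin", "password": "x7!pQ9"},  # changed
]
print(audit_devices(fleet))  # ['cam-01']
```

Any device this audit flags is exactly the kind of target a Mirai-style scanner would conscript into a botnet.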

Claroty researchers created a new SQL injection payload, using JSON syntax, that the WAF did not recognize but that the database engine could still parse. All of the affected vendors responded to the research by adding JSON syntax support to their products, but Claroty believes additional WAFs may also be affected.
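A defensive aside worth making here: WAF signature coverage is a secondary control, and parameterized queries close this class of bypass at the application layer regardless of the payload's syntax. A minimal sketch using Python's built-in sqlite3 (the JSON-flavored payload string is invented for illustration):

```python
import sqlite3

# With parameterized queries, the driver treats input as data, never
# SQL - so novel payload syntax (JSON or otherwise) has nothing to bypass.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR json_valid('{\"a\":1}') OR '"  # JSON-flavored injection attempt
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
print(rows)  # [] - the payload is matched as a literal name, not run as SQL
```

The same query with string concatenation instead of the `?` placeholder would hand the payload to the SQL parser, which is the situation the WAF research exploited.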

Mainframes are Still Used in 9 Out of 10 Banks; Google Cloud Wants to Change That


It has been announced that Google Cloud is introducing a simpler, more risk-averse way for enterprises to migrate their legacy mainframe estates to the cloud. Google Cloud's newly launched service is based on technology originally developed by Banco Santander and aims to simplify planning and execution.

As a result, customers can perform real-time testing before they transition to Google Cloud Platform as their primary system to ensure their cloud workloads are performing as expected, running securely, and meeting regulatory compliance requirements – without stopping their application or negatively impacting user experience.

In an interview with Protocol on Tuesday, Nirav Mehta, Google Cloud's senior director of product management for cloud infrastructure solutions and growth, said: "This is a simple concept, but it is difficult to implement - it hasn't been done yet. This solution will substantially reduce the risk associated with moving mainframe applications to the cloud." 

Dual Run creates a parallel instance of mainframe workloads using virtual machines on the Google Cloud Platform (GCP). As Mehta describes it, a launcher/splitter architecture sits at each interface that drives incoming requests or triggers scheduled workloads, duplicating the activity to both systems and returning the "primary" system's response.
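The launcher/splitter idea can be sketched in a few lines: forward each request to both systems, log any divergence, and always answer from the primary. This is a toy model of the pattern (all names are illustrative; `primary` and `shadow` stand in for the mainframe and the GCP deployment):

```python
def dual_run(request, primary, shadow, log):
    """Send a request to both systems; log divergences; return primary's answer."""
    primary_resp = primary(request)
    try:
        shadow_resp = shadow(request)
        if shadow_resp != primary_resp:
            log.append((request, primary_resp, shadow_resp))
    except Exception as exc:  # shadow failures must never affect users
        log.append((request, primary_resp, f"shadow error: {exc}"))
    return primary_resp

mainframe = lambda req: req.upper()
cloud = lambda req: req.upper() if req != "edge-case" else "???"

divergences = []
print(dual_run("balance", mainframe, cloud, divergences))  # BALANCE
dual_run("edge-case", mainframe, cloud, divergences)
print(divergences)  # [('edge-case', 'EDGE-CASE', '???')]
```

The divergence log is what the real product surfaces on its monitoring dashboard; cutover happens only once that log stays empty under production traffic.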

A real-time monitoring dashboard displays the differences in transaction responses between the mainframe and GCP deployments. A single output hub also provides one point of contact during the roll-out period for all batch information that needs to be sent out and collected.

Once customers are comfortable relegating their mainframes to a backup role, they can retire them or keep them as storage.

"Your mainframe remains the primary system that handles customer requests, and should remain so for quite some time to come. The cloud instance is nothing more than a secondary system that runs the same requests as the primary," Mehta explained. "You maintain a record of the responses coming back from both the mainframe and Google Cloud to determine whether the Google Cloud instance is working equally well as the mainframe. Then at some point, you switch over to using Google Cloud as your primary source of data and the mainframe as your secondary source of data."

The Dual Run service, currently in the preview stage, was developed for a wide range of industries, including financial services, health care, manufacturing, retail, and the public sector. Approximately 90% of North America's biggest banks still use mainframes, according to Mehta, as do 23 of the 25 largest U.S. retailers.

"All of these companies are looking to modernize their old mainframe applications and take them to the cloud to maximize security, scalability, and cost efficiency," he said. However, because these systems are so mission-critical - and mainframes are especially unique in this regard since they've been around for so long and contain so much legacy technology - they perceive a lot of risks, so they do not bring them to the cloud."

In May, Banco Santander, a Google Cloud customer, published a report about the progress it has made in digitizing its core banking platform. It said that 80% of its IT infrastructure had been moved to the cloud using software developed in-house, called Gravity, to automate the process. Google Cloud has acquired an exclusive license to the technology, and its engineers have been working with Santander over the past six months to optimize it for end-to-end mainframe migrations for customers in a wide variety of industries. 

Mehta explained that the software originally served a very limited use case, and that the changes Google has made elevate its relevance to virtually any mainframe customer - a huge deal for anyone running mainframes.

Vulnerability in OCI Could Have Exposed Customer Data to Attackers


A vulnerability called 'AttachMe', discovered by a Wiz engineer, could have allowed attackers to access and steal the OCI storage volumes of any user without their permission. 

During an examination of Oracle Cloud Infrastructure in June, Wiz engineers uncovered a cloud isolation security flaw. They found that attaching a disk to a VM in another account could be done without any permissions, which immediately made them realize it could become a path for attacks by threat actors. 

Elad Gabay, a security researcher at Wiz, made a public statement regarding the vulnerability on September 20. He noted the possible severe outcomes of exploitation, saying it could have led to "severe sensitive data leakage" for all OCI customers and could even have been exploited to gain remote code execution. 

To exploit this vulnerability, attackers needed only the victim volume's Oracle Cloud Identifier (OCID), which could be obtained either by searching the web or through a low-privileged user permission in the victim's environment. 

The vulnerability 'AttachMe' is a critical cloud isolation vulnerability affecting a specific cloud service. It puts user data and files at risk by allowing malicious actors to carry out severe attacks, including removing sensitive data from a volume, searching for cleartext secrets to move laterally through the victim's environment, and rendering the volume inaccessible, in addition to partitioning the disk that contains the operating system folder. 
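The core of the flaw was a missing authorization check: knowing a volume's OCID was enough to attach it. A sketch of the check that should sit in front of any attach operation is below; every name here is illustrative, not the actual OCI API:

```python
def attach_volume(volume_ocid, requester_tenant, volume_owners, grants):
    """Attach a volume only after verifying the requester's authorization.

    The AttachMe flaw was, in essence, the absence of a check like this.
    `volume_owners` maps volume OCIDs to owning tenants; `grants` is a
    set of (tenant, volume_ocid) pairs for explicitly shared volumes.
    """
    owner = volume_owners.get(volume_ocid)
    if owner is None:
        raise LookupError("unknown volume")
    if requester_tenant != owner and \
            (requester_tenant, volume_ocid) not in grants:
        raise PermissionError("requester may not attach this volume")
    return f"attached {volume_ocid}"

owners = {"ocid1.volume.oc1..aaa": "tenant-a"}
print(attach_volume("ocid1.volume.oc1..aaa", "tenant-a", owners, set()))
# attached ocid1.volume.oc1..aaa
```

With the check in place, a foreign tenant holding only the OCID gets a PermissionError instead of someone else's disk.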

OCI's documentation describes volumes as "virtual disks" that provide storage space for compute instances. They are available in OCI in the two following varieties: 

1. Block volume: it is detachable storage, allowing you to expand the storage capacity if needed. 

2. Boot volume: a detachable device containing the image used to boot a compute instance, including the operating system and supporting software. 

As soon as Wiz, an Oracle partner and customer, reported the vulnerability, Oracle took immediate measures to patch it, thanking Wiz in its update advisory for disclosing the security flaw and helping to resolve it.

A Large Number of Ventures Suffer From Cloud Security Attacks

The advent of technology has enabled malicious actors to invade the privacy of users' systems in a few steps. Cloud security is one such technology that has increasingly worked to protect users' data from threat actors. 

However, as per the statistics, even the latest cloud security is at risk; a report published by Snyk shows that 80% of enterprises suffered an attack by these actors in just the past 12 months. The wide adoption of cloud computing is considered a major reason for the rapidly increasing number of cases. 

There have been several high-profile cases of cloud security breaches, and Accenture is one of the companies that fell victim. In 2017, the company's AWS S3 storage was left unsecured and publicly reachable; attackers found confidential API data, digital certificates, metadata, and more, and used it to extort money from the company. Then, in 2021, the firm was struck by LockBit ransomware.

As per Snyk's report, 58% of respondents predicted that they will face another cloud security attack in the future, and 25% feared they may have already endured a breach in their cloud storage without being aware of it. These findings paint a negative picture of cloud security. There are many other cases like Accenture's, where organisations left their cloud storage open to public access without even basic security in place. 

Avi Shua, CEO and co-founder of Orca, stated that beyond the cloud platforms securing the underlying infrastructure, the state of a business's workloads, identities, and other assets stored in the cloud is equally responsible for the security of public cloud data.

To make the best of cloud storage and avoid falling prey to problems related to cloud security, it becomes pertinent to include experts in cloud-native security. To avoid such incidents from occurring in Accenture and other such companies, it's important that additional training and education about cloud security handling is provided by the relevant institutes and organisations. It's implausible to deal with such a situation without planning, the companies should work with proper strategies and focus on how to avoid the risk of data theft.  

Cisco SD-WAN Security Flaw Allows Root Code Execution


Cisco SD-WAN implementations are vulnerable to a high-severity privilege-escalation flaw in the IOS XE operating system, which could result in arbitrary code execution. 

Cisco's SD-WAN portfolio enables enterprises of all sizes to link different office sites over the cloud utilising a variety of networking technologies, including standard internet connections. Appliances at each location allow advanced analytics, monitoring, application-specific performance specifications and automation throughout a company's wide-area network. Meanwhile, IOS XE is the vendor's operating system that runs those appliances. 

The vulnerability (CVE-2021-1529) is an OS command-injection flaw that allows attackers to execute unexpected, harmful instructions directly on the operating system that would otherwise be inaccessible. It exists specifically in the command-line interface (CLI) for Cisco's IOS XE SD-WAN software, and it could permit an authenticated, local attacker to run arbitrary commands with root privileges. 

According to Cisco’s advisory, posted this week, “The vulnerability is due to insufficient input validation by the system CLI. A successful exploit could allow the attacker to execute commands on the underlying operating system with root privileges.” 

The alert further stated that the exploit method would comprise authenticating to a susceptible device and delivering "crafted input" to the system CLI. An attacker with successful compromise would be able to read and write any files on the system, execute operations as any user, modify system configurations, install and uninstall software, update the OS and/or firmware, and much more, including subsequent access to a corporate network. 
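The generic defense against this class of flaw is exactly the "input validation" the advisory says was missing: reject anything but the expected shape, and never hand user input to a shell for parsing. A minimal sketch (not Cisco's actual fix; the hostname rule and ping wrapper are illustrative):

```python
import re

def build_ping_command(hostname):
    """Validate untrusted input before it ever reaches a command line.

    Allowlisting plain hostnames and returning an argv list (never a
    shell string) blocks the command-injection class that
    CVE-2021-1529 belongs to.
    """
    if not re.fullmatch(r"[A-Za-z0-9.-]{1,253}", hostname):
        raise ValueError(f"invalid hostname: {hostname!r}")
    return ["ping", "-c", "1", hostname]  # argv list: no shell parsing

print(build_ping_command("example.com"))
# ['ping', '-c', '1', 'example.com']

try:
    build_ping_command("example.com; rm -rf /")  # injection attempt
except ValueError as exc:
    print(exc)
```

The argv list would then be passed to an exec-style API (e.g. `subprocess.run` without `shell=True`), so shell metacharacters in the input are never interpreted.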

CVE-2021-1529 has a rating of 7.8 on the CVSS vulnerability-severity scale, and researchers and the Cybersecurity and Infrastructure Security Agency (CISA) have advised organisations to fix the problem as soon as possible. 

Greg Fitzgerald, the co-founder of Sevco Security, cautioned that some firms may still have outdated machines connected to their networks, which can pose a hidden threat when issues like these arise. 

He stated in the email, “The vast majority of organizations do an excellent job patching the vulnerabilities on the systems they know about. The problem arises when enterprises do not have complete visibility into their asset inventory, because even the most responsive IT and security teams can’t patch a vulnerability for an asset they don’t know is connected to their network. Abandoned and unknown IT assets are often the path of least resistance for malicious actors trying to access your network or data.”

This is only the latest SD-WAN vulnerability addressed by Cisco this year. It patched several significant buffer-overflow and command-injection SD-WAN flaws in January, the most serious of which could be abused by an unauthenticated, remote attacker to execute arbitrary code with root privileges on the affected system.