
Rethinking the Cloud: Why Companies Are Returning to Private Solutions


In the past ten years, public cloud computing has dramatically changed the IT industry, promising businesses limitless scalability and flexibility. By reducing the need for internal infrastructure and specialised personnel, many companies have eagerly embraced public cloud services. However, as their cloud strategies evolve, some organisations are finding that the expected financial benefits and operational flexibility are not always achieved. This has led to a new trend: cloud repatriation, where businesses move some of their workloads back from public cloud services to private cloud environments.

Choosing to repatriate workloads requires careful consideration and strategic thinking. Organisations must thoroughly understand their specific needs and the nature of their workloads. Key factors include how data is accessed, what needs to be protected, and cost implications. A successful repatriation strategy is nuanced, ensuring that critical workloads are placed in the most suitable environments.

One major factor driving cloud repatriation is the rise of edge computing. Research from Virtana indicates that most organisations now use hybrid cloud strategies, with over 80% operating in multiple clouds and around 75% utilising private clouds. This trend is especially noticeable in industries like retail, industrial sectors, transit, and healthcare, where control over computing resources is crucial. The growth of Internet of Things (IoT) devices has played a defining role, as these devices collect vast amounts of data at the network edge.

Initially, sending IoT data to the public cloud for processing made sense. But as the number of connected devices has grown, the benefits of analysing data at the edge have become clear. Edge computing offers near real-time responses, improved reliability for critical systems, and reduced downtime—essential for maintaining competitiveness and profitability. Consequently, many organisations are moving workloads back from the public cloud to take advantage of localised edge computing.
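
To make the edge pattern concrete, here is a minimal, hypothetical Python sketch of the processing model that motivates repatriation: analyse readings locally and forward only anomalies to the central platform. The window size, threshold, and send_to_cloud stub are illustrative assumptions, not taken from any vendor's product.

```python
# Minimal sketch of edge-side filtering: keep a local baseline and
# forward only anomalous readings to the cloud. Window and threshold
# values are illustrative.
from statistics import mean

WINDOW = 60          # number of readings kept for the local baseline
THRESHOLD = 3.0      # deviation that counts as an anomaly

readings: list[float] = []

def send_to_cloud(event: dict) -> None:
    # Placeholder for an HTTPS/MQTT publish to the central platform.
    print("forwarding anomaly:", event)

def on_sensor_reading(value: float) -> None:
    """Called for every raw reading at the edge device."""
    readings.append(value)
    if len(readings) > WINDOW:
        readings.pop(0)
    baseline = mean(readings)
    if abs(value - baseline) > THRESHOLD:
        send_to_cloud({"reading": value, "baseline": baseline})

# Example: the spike is forwarded; steady readings never leave the edge.
for v in [20.1, 20.3, 20.2, 29.8, 20.2]:
    on_sensor_reading(v)
```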

Concerns over data sovereignty and privacy are also driving cloud repatriation. In sectors like healthcare and financial services, businesses handle large amounts of sensitive data. Maintaining control over this information is vital to protect assets and prevent unauthorised access or breaches. Increased scrutiny from CIOs, CTOs, and boards has heightened the focus on data sovereignty and privacy, leading to more careful evaluations of third-party cloud solutions.

Public clouds may be suitable for workloads not bound by strict data sovereignty laws. However, many organisations find that private cloud solutions are necessary to meet compliance requirements. Factors to consider include the level of control, oversight, portability, and customisation needed for specific workloads. Keeping data within trusted environments offers operational and strategic benefits, such as greater control over data access, usage, and sharing.

The trend towards cloud repatriation shows a growing realisation that the public cloud is not always the best choice for every workload. Organisations are increasingly making strategic decisions to align their IT infrastructure with their specific needs and priorities. 



Rising Email Security Threats: Here’s All You Need to Know

 

A recent study highlights the heightened threat posed by spam and phishing emails due to the proliferation of generative artificial intelligence (AI) tools such as ChatGPT and the growing popularity of cloud services.

According to a fresh report from VIPRE Security Group, the surge in cloud usage has correlated with an uptick in hacker activity. In this quarter, 58% of malicious emails were found to be delivering malware through links, while the remaining 42% relied on attachments.

Furthermore, cloud storage services have emerged as a prominent method for delivering malicious spam (malspam), accounting for 67% of such delivery in the quarter, as per VIPRE's findings. The remaining 33% utilized legitimate yet manipulated websites.

The integration of generative AI tools has made it significantly harder to detect spam and phishing emails. Traditionally, grammatical errors, misspellings, or unusual formatting were red flags that tipped off potential victims to the phishing attempt, enabling them to avoid downloading attachments or clicking on links.

However, with the advent of AI tools like ChatGPT, hackers can now craft well-structured, linguistically sophisticated messages that are virtually indistinguishable from benign correspondence. Potential victims therefore need to adopt additional precautions to thwart the threat.

In the third quarter of this year alone, VIPRE's tools identified a staggering 233.9 million malicious emails. Among these, 110 million contained malicious content, while 118 million carried malicious attachments. Moreover, 150,000 emails displayed "previously unknown behaviors," indicating that hackers are continually innovating their strategies to optimize performance.

Phishing and spam persist as favored attack methods in the arsenal of every hacker. They are cost-effective to produce and deploy, and with a stroke of luck, can reach a wide audience of potential victims. Companies are advised to educate their staff about the risks associated with phishing and to meticulously scrutinize every incoming email, regardless of the sender's apparent legitimacy.
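
Since AI-polished prose removes the old grammatical tells, scrutiny has to shift to technical signals instead. The sketch below is one hedged example of such a check, not a production filter: it flags HTML emails whose visible link text shows one domain while the underlying href points to another. The regexes and sample message are illustrative.

```python
# Heuristic sketch: flag anchors whose displayed URL and actual href
# disagree on domain -- a common phishing tell even in well-written,
# AI-generated messages.
import re
from urllib.parse import urlparse

ANCHOR = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.I | re.S)
URL_IN_TEXT = re.compile(r'https?://[^\s<]+')

def suspicious_links(html_body: str) -> list[tuple[str, str]]:
    """Return (displayed, actual) URL pairs whose hosts disagree."""
    flagged = []
    for href, text in ANCHOR.findall(html_body):
        shown = URL_IN_TEXT.search(text)
        if not shown:
            continue  # anchor text is plain words, nothing to compare
        shown_host = urlparse(shown.group()).netloc.lower()
        real_host = urlparse(href).netloc.lower()
        if shown_host and shown_host != real_host:
            flagged.append((shown.group(), href))
    return flagged

body = '<p>Verify here: <a href="https://evil.example/login">https://bank.example/login</a></p>'
print(suspicious_links(body))  # [('https://bank.example/login', 'https://evil.example/login')]
```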

Met Police Investigates Alleged Data Breach of Officer Information

The Metropolitan Police in London has launched an investigation into a suspected data breach that reportedly involves the leakage of sensitive information related to officers. The breach has raised concerns over the security of law enforcement personnel's data and the potential consequences of such incidents.

According to reports from reputable sources, the alleged data breach has exposed the personal details of police officers. This includes information that could potentially compromise the safety and privacy of officers and their families. The breach highlights the growing challenge of protecting digital information in an age of increasing cyber threats.

The Metropolitan Police's response to this incident underscores the seriousness of the matter. As law enforcement agencies collect and manage a significant amount of sensitive data, any breach can have far-reaching implications. The leaked information could potentially be exploited by malicious actors for various purposes, including identity theft, targeted attacks, or harassment of officers.

Data breaches are a pressing concern for organizations worldwide, and law enforcement agencies are no exception. The incident serves as a reminder of the need for robust cybersecurity measures to safeguard sensitive information. This includes not only protecting data from external threats but also ensuring that internal protocols and practices are in place to prevent accidental leaks.

Data breaches have the potential to reduce public faith in institutions in the current digital environment. The public's trust in the Metropolitan Police's capacity to handle sensitive data responsibly could be harmed by the disclosure of officer information. Transparent communication about the incident, steps taken to lessen the harm, and initiatives to stop similar breaches in the future are all necessary for reestablishing this trust.

The breach also raises questions about consent and data sharing. It underscores the importance of transparent and ethical data-gathering practices, as well as the necessity of giving individuals control over how their data is used.

As the investigation develops, the Metropolitan Police must work closely with cybersecurity professionals and regulatory agencies to understand the magnitude of the incident and its potential consequences. Lessons learned from this incident can offer other organizations useful guidance as they work to improve their data protection strategies.


What B2C Service Providers can Learn From Netflix's Accidental Model

 

Netflix made a policy change last month that might provide consumers with long-term security benefits. For other business-to-consumer (B2C) firms wishing to enhance client account security, this unintentional pro-customer safety action may serve as a lesson. 

On May 23, the streaming giant rolled out its new "household" policy to US consumers. Accounts are now limited (with few exceptions) to a single Wi-Fi network and associated mobile devices. After months of stagnating growth and investor apprehension, the move is a shot in the arm meant to treat the aftereffects of COVID and revive user growth. And by banning the widespread practice of password sharing, the restriction may unintentionally enhance streamers' account security. 

"Sharing a password undermines control over who has access to an account, potentially leading to unauthorized use and account compromise," stated Craig Jones, vice president of security operations at Ontinue. "Once shared, a password can be further distributed or changed, locking out the original user. Worse yet, if the shared password is used across multiple accounts, a malicious actor could gain access to all of them. The practice of sharing passwords can also make users more susceptible to phishing and social engineering attacks."

With this new policy, Netflix is demonstrating how businesses can, whether on purpose or not, encourage or simply force their users to adopt better login practices. However, changing customer behaviour for the better isn't always as easy as it looks. 

Biometrics: a gold standard of limited use for cloud services 

The mobile phone business is one area of tech that long ago figured out how to help users log in safely without sacrificing their experience.

For years, smartphone users had been choosing simple passcodes out of laziness or forgetfulness. Things started to change when Apple debuted Touch ID for the iPhone 5S in 2013, drawing inspiration from the Pantech GI100. Face ID later made it even simpler for consumers to log in securely without slowing anything down, even though facial recognition technology was barely available when Touch ID launched.

Even though biometric login is ideal, most businesses lack access to a ready-made solution, according to John Gilmore, head of research at DeleteMe.

"'Face unlock' on iPhones is an example of how this can be done in practice, but it is contingent on a specific device. For services which rely on users being able to access a service on multiple platforms, it is not yet feasible," he explained.

The main issue is that secure authentication frequently reduces usability when it comes to services. 

"Online services tend to resist implementing stronger security protocols because they see that it complicates the user experience. If you create a multistep barrier to entry, such as two-factor authentication (2FA), it is less likely people will actually engage with your platform," Gilmore added. 

Does this trade-off condemn service providers to being either clunky or unreliable? Experts argue it does not. 

How to promote better account security behaviours

Both a carrot and a stick can be used for motivation. Epic Games, the maker of the online game Fortnite, is one business that has had success with the carrot. After a succession of security problems affected thousands of the game's (sometimes very young) users, Epic created new in-game rewards for players who enabled two-factor authentication (2FA) on their accounts. 

Never before have so many children "boogied down" over good internet behaviour! 

Consider Twitter as a case study in practice. Twitter said on February 15 that SMS-based 2FA would only be available to paid members. The decision was received with mixed feelings in the cybersecurity world because it seemed to discourage the usage of a crucial second layer of security, as explained by Darren Guccione, CEO and co-founder of Keeper Security. Although SMS 2FA is still an option for paying subscribers, Twitter has switched to using an authenticator app or security key as the default for ordinary accounts. 
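
For teams weighing the same trade-off, the mechanism behind authenticator-app 2FA is straightforward to prototype. Below is a minimal sketch using the open-source pyotp library (TOTP, RFC 6238); the account and issuer names are placeholders.

```python
# Minimal sketch of app-based 2FA (TOTP, RFC 6238) using pyotp -- the
# mechanism behind the authenticator apps Twitter now defaults to.
import pyotp

# At enrollment: generate and store a per-user secret, and show the
# provisioning URI (usually rendered as a QR code) to the user.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleApp")
print("scan this into an authenticator app:", uri)

# At login: verify the 6-digit code the user types in. valid_window
# tolerates small clock drift between server and device.
code = totp.now()  # in real use, this comes from the user's device
print("accepted:", totp.verify(code, valid_window=1))
```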

All of these instances show that businesses have a significant amount of control over how their customers interact with their security.

In the end, Guccione says, "the ethical responsibility falls on the leaders of these companies to support and usher in changes that will ultimately protect their customers."

Growing Public Cloud Spending is Leading to a Shadow Data Risk


Public cloud spending and adoption continue to grow. Analysts project that organizations will spend $591.8 billion on cloud infrastructure and services this year, up 20.7% from last year. 

According to Forrester, the public cloud market is set to reach $1 trillion by 2026, with the lion's share of investment directed to the big four: Alibaba, Amazon Web Services, Google Cloud, and Microsoft. 

So, What Is Going On? 

In the wake of the pandemic, businesses hastened their cloud migration and reaped the rewards, as cloud services sped up innovation, offered elasticity to adjust to changing demand, and scaled with growth. Even as the C-suite reduces spending in other areas, it is certain that there is no going back. The demand from businesses for platform-as-a-service (PaaS), which is expected to reach $136 billion in 2023, and infrastructure-as-a-service (IaaS), expected to reach $150 billion, is particularly high. 

Still, this rapid growth, which caught business strategists and technologists by surprise, has its downsides. If organizations do not take the essential actions to increase the security of public cloud data, the risks are likely to grow considerably. 

Shadow Data Is Growing Due to Lax Security Controls 

The challenge posed by "shadow data" (unknown, uncontrolled public cloud data) is the result of a number of issues. Business users are creating their own applications, and developers are constantly spinning up new instances of their own code to build and test new applications. Many of these services retain and use critical data without the knowledge of IT and security staff. Versioning, which allows several versions of data to be stored in the same cloud bucket, adds further risk if policies are not set up correctly. 

Unmanaged data repositories are frequently overlooked as the pace of innovation quickens. In addition, if third parties or unrelated individuals are given excessive access privileges, even adequately secured sensitive data can be moved or copied to an unsafe location and left exposed. 

Three Steps to Improve Public Cloud Data Security 

A large majority of security experts (82%) are aware of, and concerned about, the growing public cloud data security problem. These professionals can swiftly help minimize the hazards by doing the following (a minimal AWS sketch of the first two checks appears after these steps): 

  • Discover and Classify all Cloud Data 

A next-generation public cloud data security platform lets teams automatically find all of their cloud data, not just known or tagged assets. It detects all cloud data stores, including managed and unmanaged assets, virtual machines, shadow data stores, data caches and pipelines, and big data. The platform uses this information to build an extensive, unified data catalog for enterprise multi-cloud environments. All sensitive data, including PII, PHI, and payment card industry (PCI) transaction data, is carefully identified and categorized in the catalog. 

  • Secure and Control Cloud Data 

With complete insight into their sensitive cloud data, security teams can apply and enforce the proper security policies and verify data settings against their organization's specified guardrails. Public cloud data security can help expose complicated policy breaches and prioritize remediation on a risk basis, weighing data sensitivity level, security posture, volume, and exposure. 

  • Remediate Risks and Monitor Activities Without Hindering the Data Flow 

This process, known as data security posture management (DSPM), offers recommendations customized for each cloud environment, making them more effective and relevant. 

Teams can then begin organizing sensitive data without interfering with corporate operations. A public cloud data security platform will prompt teams to implement best practices, such as enabling encryption, restricting third-party access, and practicing better data hygiene by eliminating unnecessary sensitive data from the environment. 

Moreover, security teams can use the platform to monitor data continuously. This way, security experts can efficiently identify policy violations and ensure that public cloud data complies with the firm's stated guidelines and security posture, no matter where it is stored, used, or moved in the cloud.  
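
As a concrete, deliberately narrow illustration of the first two steps, the sketch below uses AWS and boto3 to enumerate the S3 buckets an account can see and check two basic guardrails: default encryption and Block Public Access. A real data security platform goes far beyond this (shadow stores, PII classification, multi-cloud), so treat this as the shape of the check, not the product.

```python
# Enumerate S3 buckets and flag ones missing two common guardrails.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError:
        encrypted = False  # no default encryption configuration found

    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        blocked = all(cfg.values())
    except ClientError:
        blocked = False  # no Block Public Access settings at all

    if not (encrypted and blocked):
        print(f"review bucket {name}: encrypted={encrypted}, public-access-blocked={blocked}")
```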

Future of the Cloud is Plagued by Security Issues

 

Many corporate processes now depend on cloud services. Cloud computing lets businesses cut expenses, speed up deployments, develop at scale, share information effortlessly, and collaborate effectively, all without the need for a centralised site. 

But malicious hackers are abusing these same services more and more, and this trend is likely to continue in the near future. Cloud services are a fertile environment for eCrime, since threat actors are now well aware of how important they are. The primary conclusions from CrowdStrike's research for 2022 are as follows. 

The public cloud lacks defined perimeters, in contrast to conventional on-premises architecture. The absence of distinct boundaries presents a number of cybersecurity concerns and challenges, particularly for more conventional approaches. These lines will continue to blur as more companies move towards hybrid work cultures. 

Cloud vulnerability and security risks

Opportunistically exploiting known remote code execution (RCE) vulnerabilities in server software is one of the main infiltration methods adversaries have been deploying. This involves scanning for vulnerable servers without targeting specific industries or geographical areas. After gaining initial access, threat actors use a range of tactics to obtain sensitive data. 

One of the more common exploitation vectors employed by eCrime and targeted intrusion adversaries is credential-based assaults against cloud infrastructures. Criminals frequently host phoney authentication pages to collect real authentication credentials for cloud services or online webmail accounts.

These credentials are then used by actors to try and access accounts. As an illustration, the Russian cyberspy organisation Fancy Bear recently switched from using malware to using more credential-harvesting techniques. Analysts have discovered that they have been employing both extensive scanning methods and even victim-specific phishing websites that deceive users into believing a website is real. 

However, some adversaries are still using these services for command and control despite the decreased use of malware as an infiltration tactic. They accomplish this by distributing malware using trusted cloud services.

This strategy is useful because it enables attackers to evade signature-based detection: many network scanning services trust the top-level domains of cloud hosting services. By using legitimate cloud services (such as chat) and blending into regular network traffic, adversaries may be able to get around security restrictions.

Cloud services are being used against organisations by hackers

Another strategy employed by bad actors is compromising a cloud service provider to take advantage of provider trust relationships and reach other targets through lateral movement. The objective is to escalate privileges to global administrator level in order to take control of support accounts and modify client networks, opening up several options for vertical spread to numerous additional networks. 

Attacks on containers such as Docker are aimed at a lower level. Criminals have discovered ways to take advantage of Docker containers that aren't configured properly, seeding them with malicious images. These images can then be used as the parent of another application, or on their own to interact directly with a tool or service. 

This hierarchical model means that if malicious tooling is added to an image, every container generated from it will also be compromised. Once they have access, hostile actors can take advantage of these elevated privileges to perform lateral movement and eventually spread throughout the network. 
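
A defender can audit for some of these container missteps directly. The sketch below, assuming the docker SDK for Python (docker-py), lists running containers and flags any built from images outside an internal registry allow-list or running privileged; the registry prefix is a hypothetical example.

```python
# List running containers and flag untrusted-image or privileged ones.
import docker

TRUSTED_PREFIX = "registry.example.com/"  # hypothetical internal registry

client = docker.from_env()

for container in client.containers.list():
    tags = container.image.tags or ["<untagged>"]
    privileged = container.attrs["HostConfig"].get("Privileged", False)

    untrusted = not any(t.startswith(TRUSTED_PREFIX) for t in tags)
    if untrusted or privileged:
        print(f"{container.name}: image={tags[0]} "
              f"untrusted={untrusted} privileged={privileged}")
```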

Extended detection and response (XDR)

Extended detection and response (XDR) is another fundamental and essential component of effective cloud security. XDR technology gathers security data from endpoints, cloud workloads, network and email, and many other sources. With all of this threat data at their disposal, security teams can quickly and effectively identify and eliminate security threats across many domains. 

XDR platforms offer granular visibility across all networks and endpoints. They also provide detections and investigations, allowing analysts and threat hunters to concentrate on high-priority threats, because XDR removes anomalies deemed unimportant from the alert stream. Last but not least, XDR systems should include thorough cross-domain threat data, covering everything from affected hosts and root causes to indicators and timelines. This data guides the entire investigation and remediation process.

While threat vectors continue to change every day, security breaches in the cloud are becoming more and more frequent. To safeguard workloads hosted in the cloud and to continuously advance the maturity of security processes, it is crucial for businesses to understand current cloud risks and use the appropriate technologies and best practices.

Cybersecurity and the Cloud in Modern Times

 


Due to the advent of remote work, most companies - even those in heritage industries - have had to adopt SaaS (software as a service) and other cloud tools to remain competitive and agile in the market. Modern cloud-based platforms such as Zoom, Slack, and Salesforce have become critical to effective collaboration among knowledge workers at home. Riding this tailwind, public cloud hosting providers like Amazon Web Services, Microsoft Azure, and Google Cloud have seen phenomenal growth and success over the last few years. Gartner predicted that spending on cloud providers would reach $178 billion in 2022, up from $141 billion in 2021. 

Although public cloud providers have made modern software tools easy to consume, the shift to the cloud has brought plenty of cybersecurity challenges. Cloud-first security represents a paradigm shift from traditional, on-premise security. Before this change, customers had complete control over their environments and security: they hosted applications in their own data centers and were responsible for controlling the environment. Customers operated their network as a "walled castle", where they controlled and secured the network and applications themselves. 

Nevertheless, when customers consume public cloud services, security becomes a shared responsibility between them and the cloud service provider. 

If your company stores data in a cloud data center provided by Amazon Web Services, for example, you are still responsible for configuring and managing your own cybersecurity policies as part of your compliance program. The customer must monitor for security breaches even though they do not have complete control over the data center itself. As a result, when customers adopt public clouds, they no longer have full control over their own security. Concern about security is, unsurprisingly, among the most common barriers to cloud adoption. 

In addition, cloud environments are more difficult to secure than traditional ones. Many cloud service providers use what is known as microservices, a design that allows each component of an application (for example, a search bar, a recommendation page, a billing page) to be built independently. Cloud environments can also run as many as ten times the number of workloads (virtual machines, servers, containers, microservices) as comparable on-premise systems. This fragmentation and complexity breeds access control issues and raises the odds of developer error - such as leaving a sensitive password in an AWS database, where it can be exposed to the public. Simply put, the attack surface in the cloud is wider and more complex than in local computing environments. 
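
One small guardrail against the "password left in a config" class of developer error is automated secret scanning. The following Python sketch greps source files for two heuristic patterns: strings shaped like AWS access key IDs and hardcoded password assignments. Real scanners add entropy checks and many more rules; the patterns here are illustrative.

```python
# Heuristic secret scanner: flag likely AWS key IDs and hardcoded
# passwords in Python source files under a directory.
import re
from pathlib import Path

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r'password\s*=\s*["\'][^"\']{4,}["\']', re.I),
}

def scan(root: str) -> None:
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text[: match.start()].count("\n") + 1
                print(f"{path}:{line}: possible {label}")

scan(".")
```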

Embrace the cloud-first era of cybersecurity

The cloud has brought not just complexity but also an inversion of the sales model, from top-down to bottom-up: security buying decisions are increasingly made not by chief information security officers (CISOs), but by developers. 

Two reasons have contributed to this. First, the cloud makes application development more efficient, and cybersecurity has consequently become part of the development process rather than an afterthought. Traditionally, developers were responsible for writing code and shipping releases, while the CISO's team owned cybersecurity; the responsibilities were cleanly split. In modern companies, the cloud makes it easy to update code and ship product releases every day or every week. It's now common for our favorite apps, such as Netflix, Amazon, and Uber, to update themselves frequently, but not so long ago we had to patch them manually to keep them running smoothly. With revised code deployed so often, cybersecurity has become a problem that developers themselves have to care about. 

Second, the early adopters and power users of the cloud are primarily digital start-ups and medium-sized businesses, which have more decentralized decision-making. Traditionally, the CISO at a large enterprise played an active role in security decisions, making purchasing choices on behalf of the rest of the organization after rigorous proof-of-concept, negotiation, and cost-benefit processes. Start-ups and mid-sized customers buy security very differently, and very often they leave security decision-making to their developer teams. 

As a result of this shift to a bottom-up sales model, cybersecurity software is about to be built and sold in a completely different way. A sales model suited to developers looks nothing like one designed for CISOs. There is no doubt that developers prefer self-serve features - they like to try products before purchasing them. Reaching them requires a self-serve, freemium sales model that attracts a large number of inbound free users at the top of the funnel and builds a customer base from them. This is completely different from the traditional model used by security incumbents, whose huge sales teams sell large deals outbound to CIOs in a sales-led approach.

Rackspace: Ransomware Bypasses ProxyNotShell Mitigations

 


Rackspace Technology, a company that provides managed cloud services, has confirmed the cause of the massive December 2 attack, in which thousands of small and midsized businesses suffered disruption to their email services: a zero-day exploit against a server-side request forgery (SSRF) vulnerability in Microsoft Exchange Server, CVE-2022-41080. 

In an email response, Rackspace chief security officer Karen O'Reilly-Smith said the root cause of the breach was a zero-day exploit associated with CVE-2022-41080. Microsoft had disclosed CVE-2022-41080 as a privilege escalation vulnerability, with no note that it formed part of an exploitable remote code execution chain. 

According to a third-party advisor to Rackspace, the company had not yet applied the ProxyNotShell patch out of concern that it might cause "authentication errors" that could take down its Exchange servers, among other potential issues. Rackspace had, however, implemented Microsoft's mitigation recommendations for the vulnerabilities, which the software giant had deemed sufficient to prevent attacks. 

Rackspace hired the security firm CrowdStrike for its breach investigation, and CrowdStrike published its findings in a public blog post. It explained how the Play ransomware group had used a newly developed technique to chain CVE-2022-41080 and CVE-2022-41082 into a remote code execution exploit that sidesteps the ProxyNotShell mitigations. 

Rackspace's external advisor confirmed that CrowdStrike's research into Play's bypass method was the outcome of the firm's investigation into the attack against Rackspace. 

Last month, Microsoft told Dark Reading that while the attack bypasses the previously published ProxyNotShell mitigations, it does not bypass the actual patch.  

'Patching - if you can do so - is the answer,' says the external advisor, noting that the company had weighed the risks and benefits at a time when the mitigations were said to be effective, while the patch had the potential to take its servers down. The advisor states that this risk was known, evaluated, and weighed at the time. Because the patch has still not been applied, the servers remain unavailable.  

A Rackspace spokesperson has not responded to questions about whether or not the ransomware attackers were paid.

Google Cloud Delivers Blockchain Node Engine for Web3 Developers

According to the Google Cloud website, blockchain still has more than 38 million customers in 140 countries worldwide. In a news release, the business stated that the launch represents a resolve to aid Web3 developers in creating and deploying new products on platforms based on blockchain technology. 

Blockchains serve as a sort of decentralized database: they are made up of transaction data that is cryptographically secured and permanently stored. The governing infrastructure is the node, a computer or server that holds a complete copy of the blockchain's transaction history instead of depending on a central authority to confirm the data.

Amit Zavery, GM and VP of engineering and platform, and James Tromans, director of Cloud Web3, announced the new service in a blog post explaining how difficult it is for blockchain nodes to stay in sync, since they must continually exchange the latest blockchain data; doing so demands significant resources and bandwidth.

By providing a service model to handle node creation and a safe development environment in a fully managed product, Google Cloud aims to make it simpler. From Google's standpoint, it is far simpler to let them handle the labor-intensive tasks while you focus on creating your web3 application.

Additionally, Web3 businesses that need dedicated nodes can deploy smart contracts, relay transactions, read or write blockchain data, and more using the dependable and fast network architecture of Google Cloud. Organizations using Web3 benefit from quicker system setup, secure development, and managed service operations.
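
For a sense of what "reading blockchain data from a dedicated node" looks like in code, here is a minimal sketch using the open-source web3.py client against a JSON-RPC endpoint. The endpoint URL is a placeholder, not a real Blockchain Node Engine address.

```python
# Query basic chain state from an Ethereum node over JSON-RPC
# (web3.py v6 API). The endpoint below is a placeholder.
from web3 import Web3

NODE_URL = "https://your-node-endpoint.example:8545"  # placeholder

w3 = Web3(Web3.HTTPProvider(NODE_URL))

if w3.is_connected():
    latest = w3.eth.get_block("latest")
    print("chain id:", w3.eth.chain_id)
    print("latest block:", latest["number"], "with", len(latest["transactions"]), "txs")
```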

The goal of Google's blockchain service is to deploy nodes behind the security of a virtual private cloud firewall that restricts networking and communication to vetted users and machines. Other services, such as Google Cloud Armor, will shield the nodes from threats like distributed denial-of-service (DDoS) attacks.

Gains from Node Engine

Ethereum will be the first blockchain supported, with broader adoption expected to follow. The following are some advantages that businesses could gain from using Google Cloud's Node Engine.

Deploying a node manually takes a significant amount of time, and it can prove difficult to keep a node synced with the network. With Google Cloud's Node Engine, developers can deploy a node in a single operation, simplifying and speeding up the procedure.

In the realm of cryptocurrency, data security is of utmost importance. The Node Engine helps developers protect their data and prevent illegal access to the nodes. Additionally, Google Cloud services such as Cloud Armor shield the nodes from DDoS assaults.

This development seeks to "assist enterprises with a stable, easy-to-use blockchain node web host so they can focus their efforts on developing and scaling their Web3 apps," according to Google Cloud's official website.

The Google Cloud Node Engine is fully managed by a dedicated team. That staff administers the system during an outage, so clients need have no concerns about availability; the team handles the monitoring and restarting of nodes on clients' behalf.

Leak of BIOS Source Code Confirmed by Intel


The authenticity of the suspected leak of Intel's Alder Lake BIOS source code has been established, potentially posing a cybersecurity risk to users.

The leaked material contains Unified Extensible Firmware Interface (UEFI) code for Alder Lake, the firm's 12th-generation processor line, which debuted in November 2021.

The breach, according to an Intel statement provided to Tom's Hardware, does not "reveal any new vulnerabilities since we do not rely on encryption of information as a defense policy." Additionally, the company is urging other members of the security research community to submit any potential problems through its bug bounty program, and it is alerting customers about the situation.

The 5.97 GB of files, source code, secret keys, patch logs, and compilation tools in the breach carry a most recent timestamp of 9/30/22, indicating that a hacker or insider downloaded the data around that date. The leaked source code also contains several references to Lenovo, including code for 'Lenovo String Service,' 'Lenovo Secure Suite,' and Lenovo Cloud Service integrations.

Tom's Hardware, however, has received confirmation from Intel that the source code is real and is its "exclusive UEFI code."

Sam Linford, vice president of Deep Instinct's EMEA Channels, said: "Source code theft is a very serious possibility for enterprises since it may lead to cyber-attacks. Because source code is a piece of a company's intellectual property, it is extremely valuable to cybercriminals."

This year, there have been multiple instances of an organization's source code being exposed. The password manager LastPass disclosed that some of its source code had been stolen in August 2022, and source code for Rockstar Games' Grand Theft Auto 5 and Grand Theft Auto 6 was stolen in September 2022.

Cloudflare Mitigates a Record-Breaking DDoS Assault Peaking at 26 Million RPS

 

Last week, Cloudflare thwarted the largest HTTPS DDoS attack ever recorded. The attack amassed 26 million HTTPS requests per second, breaking the previous record of 15.3 million requests for that protocol set earlier this year in April. 

The attack targeted an unnamed Cloudflare customer and mainly originated from cloud service providers rather than residential internet service providers, which explains its size and indicates that hijacked virtual machines and powerful servers were used in the assault, Cloudflare product manager Omer Yoachimik disclosed in a blog post. 

To deliver the malicious traffic, nearly 5,000 devices were employed, with each endpoint generating roughly 5,200 RPS at peak. This demonstrates the firepower of virtual machines and servers when used for DDoS attacks: even far larger botnets of consumer devices cannot generate a fraction of this per-device output. 

For example, a botnet of 730,000 devices was spotted generating nearly 1 million RPS, which makes the botnet behind the 26 million RPS DDoS attack roughly 4,000 times stronger on a per-device basis. 

"To contrast the size of this botnet, we've been tracking another much larger but less powerful botnet of over 730,000 devices," stated Omer Yoachimik. "The latter, larger botnet wasn't able to generate more than one million requests per second, i.e., roughly 1.3 requests per second on average per device. Putting it plainly, this botnet was, on average, 4,000 times stronger due to its use of virtual machines and servers.” 

Within thirty seconds, the botnet generated over 212 million HTTPS requests from more than 1,500 networks located in 121 countries. Most requests came from Indonesia, the US, Brazil, and Russia, with the French OVH (Autonomous System Number 16276), the Indonesian Telkomnet (ASN 7713), the US-based iboss (ASN 137922), and the Libyan Ajeel (ASN 37284) being the top source networks.

According to Cloudflare, the assault was over HTTPS, making it more expensive in terms of required computational resources, as establishing a secure TLS encrypted connection costs more. Consequently, it also costs more to mitigate it. 

"HTTPS DDoS attacks are more expensive in terms of required computational resources because of the higher cost of establishing a secure TLS encrypted connection," Yoachimik explained. "Therefore, it costs the attacker more to launch the attack, and for the victim to mitigate it. We've seen very large attacks in the past over (unencrypted) HTTP, but this attack stands out because of the resources it required at its scale." 

This is one of multiple volumetric assaults identified by Cloudflare over the last several years. An HTTP DDoS attack discovered in August 2021 generated around 17.2 million requests per second. More recently, a 15.3 million RPS attack mitigated in April 2022 employed around 6,000 bots to target a Cloudflare customer running a crypto launchpad. 

Last year in November, Microsoft revealed that it thwarted a record-breaking 3.47 terabits per second (Tbps) DDoS attack that flooded servers used by an Azure customer from Asia with malicious packets.

Nanocore, Netwire, and AsyncRAT Distribution Campaigns Make Use of Public Cloud Infrastructure

 

Threat actors are actively incorporating Amazon and Microsoft public cloud services into their malicious campaigns to deliver commodity remote access trojans (RATs) such as Nanocore, Netwire, and AsyncRAT and siphon sensitive information from compromised systems. The spear-phishing assaults, which began in October 2021, largely targeted companies in the United States, Canada, Italy, and Singapore, according to Cisco Talos researchers. 

These RAT variants are loaded with features that allow them to take control of the victim's environment, execute arbitrary commands remotely, and steal the victim's information. 

A phishing email with a malicious ZIP attachment serves as the initial infection vector. These ZIP archives contain an ISO image holding a malicious loader in the form of JavaScript, a Windows batch file, or a Visual Basic script. When the initial script runs on the victim's machine, it connects to a download server to fetch the next stage, which can be hosted on an Azure Cloud-based Windows server or an AWS EC2 instance.
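
One simple detection idea this chain suggests is inspecting ZIP attachments for nested ISO images or script loaders before delivery. The sketch below, with an example attachment path, lists suspicious members of a ZIP using Python's standard zipfile module; it is a heuristic illustration, not a complete mail filter.

```python
# Flag ZIP attachments carrying nested ISO images or script loaders.
import zipfile

SUSPICIOUS_EXT = (".iso", ".js", ".vbs", ".bat")

def flag_zip(path: str) -> list[str]:
    """Return member names inside the ZIP with suspicious extensions."""
    with zipfile.ZipFile(path) as zf:
        return [n for n in zf.namelist() if n.lower().endswith(SUSPICIOUS_EXT)]

hits = flag_zip("attachment.zip")  # example attachment path
if hits:
    print("quarantine candidate, contains:", hits)
```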

Using existing legitimate infrastructure to assist intrusions is increasingly becoming part of an attacker's playbook since it eliminates the need for the attacker to host their own servers and may also be used as a cloaking strategy to avoid detection by security solutions. 

Collaboration and communication applications such as Discord, Slack, and Telegram have found a home in many infection chains in recent months to hijack and exfiltrate data from victim machines. Cloud platform abuse is a tactical extension that attackers may utilize as the first step into a large array of networks. 

"There are several interesting aspects to this particular campaign, and it points to some of the things we commonly see used and abused by malicious actors," said Nick Biasini, head of outreach at Cisco Talos. "From the use of cloud infrastructure to host malware to the abuse of dynamic DNS for command-and-control (C2) activities. Additionally, the layers of obfuscation point to the current state of criminal cyber activities, where it takes lots of analysis to get down to the final payload and intentions of the attack."

The use of DuckDNS, a free dynamic DNS service, to generate malicious subdomains to deliver malware is also noteworthy, with some of the actor-controlled malicious subdomains resolving to the download server on Azure Cloud while other servers function as C2 for the RAT payloads.

"Malicious actors are opportunistic and will always be looking for new and inventive ways to both host malware and infect victims. The abuse of platforms such as Slack and Discord as well as the related cloud abuse are part of this pattern," Biasini concluded.

TeamTNT: New Credential Harvester Targets Cloud Services and other Software

 

Secrets must be kept confidential in order for networks to be protected and supply-chain attacks to be avoided. Malicious actors frequently target secrets in storage mechanisms and harvest credentials from systems that have been compromised. DevOps software often stores credentials in plain text that is accessible even without user intervention, posing a significant security risk. 

When inside a victim's device, malicious actors have been known to steal cloud service provider (CSP) credentials. For example, the cybercriminal group TeamTNT is no stranger to attacking cloud containers, expanding their arsenal to steal cloud credentials, and experimenting with new environments and intrusive activities. 

In the group's most recent attack routine, Trend Micro discovered new evidence that TeamTNT has expanded its credential-harvesting capabilities to threaten numerous cloud and non-cloud services in victims' internal networks and systems post-compromise. 

The malware created by TeamTNT is designed to steal credentials from specific applications and services. It infects Linux machines with vulnerabilities such as exposed private keys and recycled passwords, and it focuses on looking for cloud-related data on infected devices. 

As in the group's other attacks, cloud misconfigurations and reused passwords make it easy to gain access to a victim's device. As before, the group harvests credentials for Secure Shell (SSH) and Server Message Block (SMB) to reach other systems. Both intrusion strategies can spread their payloads in a worm-like manner. 

When working through the connected devices, the malware searches for application configurations and data based on a search list and sends them to the command-and-control (C&C) server, using a .netrc file to log in automatically with the harvested credentials. Comparing this harvester with the group's previous versions, Trend Micro saw a significant increase in targets. 
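
Defenders can run the same search the harvester does, but on their own terms. This hedged sketch checks a home directory for a small sample of the plaintext credential files TeamTNT-style malware hunts for and prints their permissions; the file list is illustrative.

```python
# Audit a home directory for common plaintext credential files.
from pathlib import Path

CANDIDATES = [".netrc", ".aws/credentials", ".docker/config.json", ".ssh/id_rsa"]

home = Path.home()
for rel in CANDIDATES:
    path = home / rel
    if path.exists():
        mode = oct(path.stat().st_mode & 0o777)
        print(f"found {path} (permissions {mode}) -- move secrets to a vault")
```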

Since TeamTNT's payloads are focused on illicit Monero mining, it's no surprise that the malware searches the infected system for Monero configuration data, looking for Monero wallets on every device the group can reach. At the end of its routine, the malware attempts to remove all traces of itself from the infected device, but research strongly suggests this is not fully effective: although the command "history -c" clears the Bash history, some commands still leave traces in other parts of the device. 

Malicious actors deliberately search internal networks and systems for legitimate users' credentials in order to facilitate their post-intrusion activities. They could use the cloud services paid for by legitimate organizations for other malicious purposes if they have CSP credentials. 

Furthermore, plaintext credentials are a gold mine for cybercriminals, particularly when used in subsequent attacks. Vulnerabilities, especially those in unpatched and otherwise unsecured internet-facing systems, are the same. 

Customers are advised to use the secrets vaults provided by their CSPs and adopt these best practices to minimize the risks of this TeamTNT routine and other related threats (a short vault example follows the list): 
1. Adopt the shared responsibility model and enforce the principle of least privilege. 
2. Replace default credentials with strong, unique passwords, and make sure the security settings of each system environment are tailored to the company's needs. 
3. Avoid storing passwords in plain text and use multifactor authentication.
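
As a minimal sketch of the vault recommendation, assuming AWS Secrets Manager as the CSP vault and a hypothetical secret name, an application can fetch a credential at runtime instead of keeping it in plain text:

```python
# Fetch a credential from a managed vault at runtime (AWS Secrets
# Manager via boto3). The secret name is illustrative.
import boto3

client = boto3.client("secretsmanager")
response = client.get_secret_value(SecretId="prod/db/password")  # hypothetical name
db_password = response["SecretString"]
# Use db_password to connect, and never write it to disk or logs.
```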

33.4 Billion Records Exposed In Breaches Due To Cloud Misconfigurations?


The number of records exposed by cloud misconfigurations rose 80% from 2018 to 2019, and with it the total cost to organizations of those lost records. Because organizations continue to embrace cloud services rapidly while neglecting to implement proper cloud security measures, experts sadly anticipate that this upward trend will continue.


Charles “C.J.” Spallitta, Chief Product Officer at eSentire, says: “The rush to adopt cloud services has created new opportunities for attackers – and attackers are evolving faster than companies can protect themselves. The fact that we have seen a 42% increase from 2018 to 2019 in cloud-related breaches attributed to misconfiguration issues proves that attackers are leveraging the opportunity to exploit cloud environments that are not sufficiently hardened. This trend is expected to continue as more organizations move to the cloud.”

“Additionally, common misconfiguration errors that occur in cloud components expand and advance the attacker workflow. Real-time threat monitoring in cloud assets is critical, given the unprecedented rate of scale and nature of cloud services. Organizations should seek out security services that distill the noise from on-premise and cloud-based security tools while providing broad visibility to enable rapid response when threats are found,” Spallitta concluded.


Key report findings: 
  1. 81 breaches in 2018; 115 in 2019 – a 42% increase
  2. Tech companies had the most data breaches at 41%, followed by healthcare at 20%, and government at 10%; hospitality, finance, retail, education, and business services all came in at under 10% each
  3. 68% of the affected companies were founded prior to 2010, while only 6.6% were founded in 2015 or later
  4. 73 (nearly 42%) of known affected companies experienced a merger or acquisition (M&A) transaction between 2015 and 2019, which indicates cloud security is an area of risk for companies involved in merging disparate IT environments
  5. Elasticsearch misconfigurations accounted for 20% of all breaches, yet these incidents accounted for 44% of all records exposed (see the exposure-probe sketch after this list)
  6. The number of breaches caused by Elasticsearch misconfigurations nearly tripled from 2018 to 2019
  7. S3 bucket misconfigurations accounted for 16% of all breaches, however, there were 45% fewer misconfigured S3 servers in 2019 compared to 2018 
  8. MongoDB misconfigurations accounted for 12% of all incidents, and the number of misconfigured MongoDB instances nearly doubled YoY
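
Behind the Elasticsearch numbers is a simple failure mode: an instance that answers cluster queries over the network without credentials is exposed. This sketch probes for that condition against a placeholder host; it is a minimal illustration, and it should only be pointed at systems you are authorized to test.

```python
# Probe an Elasticsearch endpoint for unauthenticated exposure.
import requests

HOST = "http://your-es-host.example:9200"  # placeholder, default ES port

try:
    r = requests.get(f"{HOST}/_cluster/health", timeout=5)
    if r.status_code == 200:
        print("EXPOSED: cluster answers unauthenticated:", r.json()["status"])
    elif r.status_code == 401:
        print("OK: authentication required")
except requests.RequestException as exc:
    print("unreachable:", exc)
```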