Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label Google+. Show all posts

Google Backtracks on Cookie Phaseout: What It Means for Users and Advertisers


 

In a surprising announcement, Google confirmed that it will not be eliminating tracking cookies in Chrome, impacting the browsing experience of 3 billion users. The decision came as a shock as the company struggled to find a balance between regulatory demands and its own business interests.

Google’s New Approach

On July 22, Google proposed a new model that allows users to choose between tracking cookies, Google’s Topics API, and a semi-private browsing mode. This consent-driven approach aims to provide users with more control over their online privacy. However, the specifics of this model are still under discussion with regulators. The U.K.’s Competition and Markets Authority (CMA) expressed caution, stating that the implications for consumers and market outcomes need thorough consideration.

Privacy Concerns and Industry Reaction

Privacy advocates are concerned that most users will not change their default settings, leaving them vulnerable to tracking. The Electronic Frontier Foundation (EFF) criticised Google’s Privacy Sandbox initiative, which was intended to replace tracking cookies but has faced numerous setbacks. The EFF argues that Google’s latest move prioritises profits over user privacy, contrasting sharply with Apple’s approach. Apple’s Safari browser blocks third-party cookies by default, and its recent ad campaign highlighted the privacy vulnerabilities of Chrome users.

Regulatory and Industry Responses

The CMA and the U.K.’s Information Commissioner expressed disappointment with Google’s decision, emphasising that blocking third-party cookies would have been a positive step for consumer privacy. Meanwhile, the Network Advertising Initiative (NAI) welcomed Google’s decision, suggesting that maintaining third-party cookie support is essential for competition in digital advertising.

The digital advertising industry may face unintended consequences from Google’s shift to a consent-driven privacy model. This approach mirrors Apple’s App Tracking Transparency, which requires user consent for tracking across apps. Although Google’s new model aims to empower users, it could lead to an imbalance in data access, benefiting large platforms like Google and Apple.

Apple vs. Google: A Continuing Saga

Apple’s influence is evident throughout this development. The timing of Apple’s privacy campaign, launched just days before Google’s announcement, underscores the competitive dynamics between the two tech giants. Apple’s App Tracking Transparency has already disrupted Meta’s business model, and Google’s similar approach may further reshape the infrastructure of digital advertising.

Google’s Privacy Sandbox has faced criticism for potentially enabling digital fingerprinting, a concern Apple has raised. Despite Google’s defense of its Topics API, doubts about the effectiveness of its privacy measures persist. As the debate continues, the primary issue remains Google’s dual role as both a guardian of user privacy and a major beneficiary of data monetisation.

Google’s decision to retain tracking cookies while exploring a consent-driven model highlights the complex interplay between user privacy, regulatory pressures, and industry interests. The outcome of ongoing discussions with regulators will be crucial in determining the future of web privacy and digital advertising.



Third-Party Cookies Stay: Google’s New Plan for Web Browsing Privacy


Google no longer intends to remove support for third-party cookies, which are used by the advertising industry to follow users and target them with ads based on their online activity.

Google’s Plan to Drop Third-Party Cookies in Chrome Crumbles

In a significant shift, Google has decided to abandon its plan to phase out third-party cookies in its Chrome browser. This decision marks a notable change in the tech giant’s approach to user privacy and web tracking, reflecting the complexities and challenges of balancing privacy concerns with the needs of advertisers and regulators.

In a recent post, Anthony Chavez, VP of Google's Privacy Sandbox, revealed that the search and advertising giant has realized that its five-year effort to build a privacy-preserving ad-tech stack requires a lot of work and has implications for online advertisers, some of whom have been vocally opposed. 

“In light of this, we are proposing an updated approach that elevates user choice. Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing,” Anthony said.

For the time being, the Privacy Sandbox, a suite of APIs for online ad delivery and analytics that are intended to preserve privacy, will coexist with third-party cookies in Chrome.

The Initial Plan

Google’s initial plan, announced in early 2020, aimed to eliminate third-party cookies from Chrome by 2022. Third-party cookies, which are used by advertisers to track users across different websites, have been a cornerstone of online advertising. However, they have also raised significant privacy concerns, as they enable extensive tracking of user behavior without explicit consent.

Instead of dropping third-party cookie support in the Chrome browser next year - subject to testing that began in January - Google intends to give Chrome users the option of playing in its Privacy Sandbox or in the adjacent land of data surveillance, where third-party cookies support all manner of information collection.

It remains to be seen whether Chrome's interface for selecting between Privacy Sandbox and standard third-party cookies will be less confusing than the much-criticized "Enhanced ad privacy in Chrome" popup that announced the arrival of Privacy Sandbox APIs in Chrome last year.

Delays and Challenges

Despite the ambitious timeline, Google’s plan faced numerous delays. The company extended the deadline multiple times, citing the need for more time to develop and test alternative technologies. The complexity of replacing third-party cookies with new solutions that could satisfy both privacy advocates and the advertising industry proved to be a significant hurdle.

One of the key challenges was ensuring that the new technologies would not undermine the effectiveness of online advertising. Advertisers rely heavily on third-party cookies to target ads and measure their performance. Any replacement technology needed to provide similar capabilities without compromising user privacy.

Feedback from Stakeholders

Throughout the process, Google received extensive feedback from various stakeholders, including advertisers, publishers, and regulators. Advertisers expressed concerns about the potential impact on their ability to deliver targeted ads, while regulators emphasized the need for robust privacy protections.

In response to this feedback, Google made several adjustments to its plans. The company introduced new proposals, such as Federated Learning of Cohorts (FLoC), which aimed to group users into cohorts based on similar interests rather than tracking individual users. However, these proposals also faced criticism and skepticism from privacy advocates and industry experts.

The Decision to Abandon the Plan

Ultimately, Google decided to abandon its plan to phase out third-party cookies. Instead, the company will introduce a new experience that allows users to make an informed choice about their web browsing privacy. This approach aims to provide users with greater control over their data while still enabling advertisers to deliver relevant ads.

Google Chrome Users at Risk: Study Reveals Dangerous Extensions Affecting 280 Million

 

A recent study has unveiled a critical security threat impacting approximately 280 million Google Chrome users who have installed dangerous browser extensions. These extensions, often masquerading as useful tools, can lead to severe security risks such as data theft, phishing, and malware infections. 

The research highlights that many of these malicious extensions request excessive permissions, granting them access to sensitive user data, the ability to monitor online activities, and even control over browser settings. This exposure creates significant vulnerabilities, enabling cybercriminals to exploit personal information, which could result in financial losses and privacy invasions. In response, Google has been actively removing harmful extensions from the Chrome Web Store. 

However, the persistence and evolving nature of these threats underscore the importance of user vigilance. Users are urged to carefully evaluate the permissions requested by extensions and consider user ratings and comments before installation. Cybersecurity experts recommend several proactive measures to mitigate these risks. Regularly reviewing and removing suspicious or unnecessary extensions is a crucial step. Ensuring that the browser and its extensions are updated to the latest versions is also vital, as updates often include essential security patches. Employing reputable security tools can further enhance protection by detecting and preventing malicious activities associated with browser extensions. 

These tools provide real-time alerts and comprehensive security features that safeguard user data and browsing activities. This situation underscores the broader need for increased cybersecurity awareness. As cybercriminals continue to develop sophisticated methods to exploit browser vulnerabilities, both users and developers must remain alert. Developers are encouraged to prioritize security in the creation and maintenance of extensions, while users should stay informed about potential threats and adhere to best practices for safe browsing. 

The study serves as a stark reminder that while browser extensions can significantly enhance user experience and functionality, they can also introduce severe risks if not managed correctly. By adopting proactive security measures and staying informed about potential dangers, users can better protect their personal information and maintain a secure online presence. 

Ultimately, fostering a culture of cybersecurity awareness and responsibility is essential in today’s digital age. Users must recognize the potential threats posed by seemingly harmless extensions and take steps to safeguard their data against these ever-present risks. By doing so, they can ensure a safer and more secure browsing experience.

Terrorist Tactics: How ISIS Duped Viewers with Fake CNN and Al Jazeera Channels


ISIS, a terrorist organization allegedly launched two fake channels on Google-owned video platforms YouTube and Facebook. CNN and Al Jazeera claimed to be global news platforms through their YouTube feeds. This goal was to provide credibility and ease the spread of ISIS propaganda.

According to research by the Institute for Strategic Dialogue, they managed two YouTube channels as well as two accounts on Facebook and X (earlier Twitter) with the help of the outlet 'War and Media'.

The campaign went live in March of this year. Furthermore, false profiles that resembled reputable channels were used on Facebook and YouTube to spread propaganda. These videos remained live on YouTube for more than a month. It's unclear when they were taken from Facebook.

The Deceptive Channels

ISIS operatives set up multiple fake channels on YouTube, each mimicking the branding and style of reputable news outlets. These channels featured professionally edited videos, complete with logos and graphics reminiscent of CNN and Al Jazeera. The content ranged from news updates to opinion pieces, all designed to lend an air of credibility.

Tactics and Objectives

1. Impersonation: By posing as established media organizations, ISIS aimed to deceive viewers into believing that the content was authentic. Unsuspecting users might stumble upon these channels while searching for legitimate news, inadvertently consuming extremist propaganda.

2. Content Variety: The fake channels covered various topics related to ISIS’s global expansion. Videos included recruitment messages, calls for violence, and glorification of terrorist acts. The diversity of content allowed them to reach a broader audience.

3. Evading Moderation: YouTube’s content moderation algorithms struggled to detect these fake channels. The professional production quality and branding made it challenging to distinguish them from genuine news sources. As a result, the channels remained active for over a month before being taken down.

Challenges for Social Media Platforms

  • Algorithmic Blind Spots: Algorithms designed to identify extremist content often fail when faced with sophisticated deception. The reliance on visual cues (such as logos) can be exploited by malicious actors.
  • Speed vs. Accuracy: Platforms must strike a balance between rapid takedowns and accurate content assessment. Delayed action allows harmful content to spread, while hasty removal risks false positives.
  • User Vigilance: Users play a crucial role in reporting suspicious content. However, the resemblance to legitimate news channels makes it difficult for them to discern fake from real.

Why is this harmful for Facebook, X users, and YouTube users?

A new method of creating phony social media channels for renowned news broadcasters such as CNN and Al Jazeera reveals how the terrorist organization's approach to avoiding content moderation on social media platforms has developed.

Unsuspecting users may be influenced by "honeypot" efforts, which, according to the research, will become more sophisticated, making it even more difficult to restrict the spread of terrorist content online.

When Legit Downloads Go Rogue: The Oyster Backdoor Story

When Legit Downloads Go Rogue: The Oyster Backdoor Story

Researchers from Rapid7 recently uncovered a sophisticated malvertising campaign that exploits unsuspecting users searching for popular software downloads. This campaign specifically targets users seeking legitimate applications like Google Chrome and Microsoft Teams, leveraging fake software installers to distribute the Oyster backdoor, also known as Broomstick.

“Rapid7 observed that the websites were masquerading as Microsoft Teams websites, enticing users into believing they were downloading legitimate software when, in reality, they were downloading the threat actor’s malicious software,” said the report.

How the Malvertising Campaign Works

The modus operandi of this campaign involves luring users to malicious websites. The threat actors create typo-squatted sites that closely mimic legitimate platforms. For instance, users searching for Microsoft Teams might inadvertently land on a fake Microsoft Teams download page. These malicious websites host supposed software installers, enticing users to download and install the application.

Fake Installers

However, the catch lies in the content of these fake installers. When users download them, they unknowingly execute the Oyster backdoor. This stealthy piece of malware allows attackers to gain unauthorized access to compromised systems. 

Once the backdoor is in place, attackers can engage in hands-on keyboard activity, directly interacting with the compromised system. Furthermore, the Oyster backdoor can deploy additional payloads after execution, potentially leading to further compromise or data exfiltration.

Impact and Mitigation

The impact on users who fall victim to this malvertising campaign can be severe. They inadvertently install the Oyster backdoor on their systems, providing attackers with a foothold. From there, attackers can escalate privileges, steal sensitive information, or launch other attacks.

To reduce such risks, users should remain vigilant:

  • Verify Sources: Always verify the legitimacy of software sources before downloading. Avoid third-party download sites and opt for official websites or trusted app stores.
  • Security Software: Regularly update and use security software to detect and prevent malware infections.
  • User Education: Educate users about the risks of malvertising and emphasize safe browsing practices.

Google Leak Reveals Concerning Privacy Practices

 


An internal leak has revealed troubling privacy and security practices at Google, exposing substantial lapses over a span of six years. This revelation highlights the tech giant's failure to prioritise user data protection, raising concerns about the company's handling of sensitive information.


License Plate Tracking and Storage

One of the most alarming disclosures involves Google Street View's inadvertent tracking and storage of licence plate numbers. The internal documents show that the system was designed to transcribe text from images mistakenly captured and stored geolocated licence plate numbers and fragments. Although Google employees emphasised that this was an unintentional error, it underscores a critical oversight in their data handling processes.


Recording Children’s Voices and Other Issues

Further issues uncovered in the leak include the recording and storing of children's voices, which raises significant privacy concerns. Additionally, Google reportedly failed to secure home addresses on its carpooling systems and had unauthorised access to private videos on YouTube accounts. These incidents reflect a broader pattern of inadequate data security measures and potential violations of user privacy.


While these revelations paint a troubling picture of Google's past practices, the company has reportedly taken steps to address and resolve these security issues. According to the investigation by 404 Media, most of the problems highlighted in the leak have been mitigated. For instance, the licence plate transcriptions have been addressed, and efforts are being made to prevent similar mistakes in the future.


This incident is again throwing us into the reality of how important robust data protection practices are for companies of all sizes. Beyond the immediate impact on users, such lapses can erode trust and lead to significant financial and reputational damage. Businesses must adopt proactive measures to safeguard user data, ensuring compliance with privacy regulations and preventing costly security breaches.


The Google leak exposes critical weaknesses in the company's approach to data privacy and security. While corrective actions have been taken, the incident highlights the ongoing need for vigilance and transparency in handling sensitive information. This case underscores the broader lesson that protecting user data is not just a legal obligation but a fundamental aspect of maintaining customer trust and safeguarding against cyber threats.


Google Faces Scrutiny Over Internal Database Leak Exposing Privacy Incidents

 

A newly leaked internal database has revealed thousands of previously unknown privacy incidents at Google over the past six years. This information, first reported by tech outlet 404 Media, highlights a range of privacy issues affecting a broad user base, including children, car owners, and even video-game giant Nintendo. 

The authenticity of the leaked database was confirmed by Google to Engadget. However, Google stated that many of these incidents were related to third-party services or were not significant concerns. "At Google, employees can quickly flag potential product issues for review by the relevant teams. The reports obtained by 404 are from over six years ago and are examples of these flags — every one was reviewed and resolved at that time. In some cases, these employee flags turned out not to be issues at all or were issues that employees found in third party services," a company spokesperson explained. 

Despite some incidents being quickly fixed or affecting only a few individuals, 404 Media’s Joseph Cox noted that the database reveals significant mismanagement of personal, sensitive data by one of the world's most powerful companies. 

One notable incident involved a potential security issue where a government client’s sensitive data was accidentally transitioned from a Google cloud service to a consumer-level product. As a result, the US-based location for the data was no longer guaranteed for the client. 

In another case from 2016, a glitch in Google Street View’s transcription software failed to omit license plate numbers, resulting in a database containing geolocated license plate numbers. This data was later purged. 

Another incident involved a bug in a Google speech service that accidentally captured and logged approximately 1,000 hours of children’s speech data for about an hour. The report stated that all the data was deleted. Additional reports highlighted various other issues, such as manipulation of customer accounts on Google’s ad platform, YouTube recommendations based on deleted watch histories, and a Google employee accidentally leaking Nintendo’s private YouTube videos. 

Waze, acquired by Google in 2013, also had a carpool feature that leaked users' trips and home addresses. Google's internal challenges were further underscored by another recent leak of 2,500 documents, revealing discrepancies between the company’s public statements and internal views on search result rankings. 

These revelations raise concerns about Google's handling of user data and the effectiveness of its privacy safeguards, prompting calls for increased transparency and accountability from the tech giant.

Google Confirms Leak of 2,500 Internal Documents on Search Algorithm

 

In a significant incident, Google has confirmed the leak of 2,500 internal documents, exposing closely guarded information about its search ranking algorithm. This breach was first highlighted by SEO experts Rand Fishkin and Mike King of The Verge, who sought confirmation from Google via email. After multiple requests, Google spokesperson Davis Thompson acknowledged the leak, urging caution against making inaccurate assumptions based on potentially out-of-context, outdated, or incomplete information.  

The leaked data has stirred considerable interest, particularly as it reveals that Google considers the number of clicks when ranking web pages. This contradicts Google’s longstanding assertion that such metrics are not part of their ranking criteria. Despite this revelation, The Verge report indicates that it remains unclear which specific data points are actively used in ranking. It suggests that some of the information might be outdated, used strictly for training, or collected without being directly applied to search algorithms. 

Thompson responded to the allegations by emphasizing Google's commitment to transparency about how Search works and the factors their systems consider. He also highlighted Google's efforts to protect the integrity of search results from manipulation. This response underscores the complexity of Google's algorithm and the company's ongoing efforts to balance transparency and safeguarding its proprietary technology. The leak comes when the intricacies of Google's search algorithm are under intense scrutiny. 

Recent documents and testimony in the US Department of Justice antitrust case have already provided glimpses into the signals Google uses when ranking websites. This incident adds another layer of insight, though it also raises questions about the security of sensitive information within one of the world’s largest tech companies. Google’s decisions about search rankings have far-reaching implications. From small independent publishers to large online businesses, many rely on Google’s search results for visibility and traffic. 

The revelation of these internal documents not only impacts those directly involved in SEO and digital marketing but also sparks broader discussions about data security and the transparency of algorithms that significantly influence online behaviour and commerce. As the fallout from this leak continues, it serves as a reminder of the delicate balance between protecting proprietary information and the public’s interest in understanding the mechanisms that shape their online experiences. Google’s ongoing efforts to clarify and defend its practices will be crucial in navigating the challenges posed by this unprecedented exposure of its internal workings.

Beware: Cybercriminals Exploit Cloud Storage for SMS Phishing Attacks

Beware: Cybercriminals Exploit Cloud Storage for SMS Phishing Attacks

Security researchers discovered several illicit campaigns that use cloud storage systems like Amazon S3, Google Cloud Storage, Backblaze B2, and IBM Cloud Object Storage. Unnamed threat actors are behind these attacks, which try to divert customers to malicious websites to steal their information via SMS messages.

Campaign details

The campaigns involve exploiting cloud storage platforms such as Amazon S3, Google Cloud Storage, Backblaze B2, and IBM Cloud Object Storage. Unnamed threat actors are behind these campaigns. Their primary goal is to redirect users to malicious websites using SMS messages.

Attack objectives

Bypassing Network Firewalls: First, they want to ensure that scam text messages reach mobile handsets without being detected by network firewalls. Second, they attempt to persuade end users that the communications or links they receive are legitimate. 

Building Trust: They aim to convince end users that the messages or links they receive are trustworthy. By using cloud storage systems to host static websites with embedded spam URLs, attackers can make their messages appear authentic while avoiding typical security safeguards.

Cloud storage services enable enterprises to store and manage files and host static websites by storing website components in storage buckets. Cybercriminals have used this capacity to inject spam URLs into static websites hosted on these platforms. 

Technique

They send URLs referring to these cloud storage sites by SMS, which frequently avoids firewall limitations due to the apparent authenticity of well-known cloud domains. Users who click on these links are unknowingly sent to dangerous websites.

Execution

For example, attackers utilized the Google Cloud Storage domain "storage.googleapis.com" to generate URLs that lead to spam sites. The static webpage housed in a Google Cloud bucket uses HTML meta-refresh techniques to route readers to fraud sites right away. This strategy enables fraudsters to lead customers to fraudulent websites that frequently replicate real offerings, such as gift card promotions, to obtain personal and financial information.

Enea has also detected similar approaches with other cloud storage platforms like Amazon Web (AWS) and IBM Cloud, in which URLs in SMS messages redirect to static websites hosting spam.

Defense recommendations

To protect against such risks, Enea advised monitoring traffic activity, checking URLs, and being cautious of unexpected communications including links.

APT41 Strikes Again: Attacks Italian Industry Via Keyplug Malware


APT41:
A well-known Chinese cyberespionage group with a history of targeting various sectors globally. They are known for their sophisticated techniques and possible state backing.

KeyPlug: A modular backdoor malware allegedly used by APT41. It is written in C++ and functions on both Windows and Linux machines.

Brief overview

Cybersecurity experts at Yorai have discovered the threat. APT41 is a cyber threat group from China that is well-known for its extensive cyber espionage and cybercrime campaigns. It is also known by many aliases, including Amoeba, BARIUM, BRONZE ATLAS, BRONZE EXPORT, Blackfly, Brass Typhoon, Earth Baku, G0044, G0096, Grayfly, HOODOO, LEAD, Red Kelpie, TA415, WICKED PANDA, and WICKED SPIDER. 

APT41 aims to steal confidential information, compromise systems for financial or strategic advantage, and target a wide range of industries, including government, manufacturing, technology, media, education, and gaming. 

Technical Analysis

The backdoor has been developed to target both Windows and Linux operative systems and uses different protocols to communicate which depend on the configuration of the malware sample itself.

The use of malware, phishing, supply chain attacks, and the exploitation of zero-day software vulnerabilities are some of the group's tactics, methods, and procedures (TTPs). Because of the global threat posed by their operations, cybersecurity experts must maintain ongoing awareness to reduce associated risks. 

Notably, the notorious modular backdoor malware, KEYPLUG, was separated by Tinexta Cyber's Yoroi malware ZLab team after a protracted and thorough examination. KEYPLUG is a C++ program that has been in use since at least June 2021. 

It is available for Linux and Windows. It is a powerful weapon in APT41's cyberattack toolbox because it supports several network protocols for command and control (C2) communication, such as HTTP, TCP, KCP over UDP, and WSS.

Malware explained

The first example of malware is an implant that targets Windows operating systems from Microsoft. The infection originates from a different part that uses the.NET framework to function as a loader compared to the implant itself. 

The purpose of this loader is to decrypt a different file that looks like an icon file. The popular symmetric encryption algorithm AES is used for the decryption, and keys are kept right there in the sample.

After the decryption process is finished, the newly created payload with its SHA256 hash can be examined. If one looks more closely at that malware sample, one can see that Mandiant's report "Does This Look Infected?" had a direct correlation with the virus's structure. An Overview of APT41 Aimed against US State Governments. The XOR key in this particular instance is 0x59.

Keyplug malware

The Keyplug malware looks to employ VMProtect and is a little more sophisticated when it comes to Linux. Numerous strings connected to the UPX packer were found during static analysis, although the automated decompression procedure was unsuccessful. 

This version relaunches using the syscall fork after completing the task of decoding the payload code during execution. Malware analysis becomes challenging with this strategy since it breaks the analyst's control flow.

Google Unhappy: Microsoft’s Cybersecurity Struggles: What Went Wrong?

Google Unhappy: Microsoft’s Cybersecurity Struggles: What Went Wrong?

Google released a study of Microsoft's recent security vulnerabilities, finding that Microsoft is "unable to keep their systems and therefore their customers' data safe." Recent incidents have raised questions about Microsoft’s ability to safeguard its systems and protect customer data effectively. In this blog post, we delve into the challenges faced by Microsoft and explore potential implications for its customers.

The Exchange Breach: A Wake-Up Call

Last year, China-backed hackers infiltrated Microsoft Exchange servers, compromising countless accounts. The breach exposed a critical vulnerability, allowing unauthorized access to sensitive information. What compounded the issue was Microsoft’s initial response. The company failed to provide accurate information about the breach, leaving customers in the dark. The Federal Cybersecurity Review Board criticized Microsoft for not rectifying misleading statements promptly.

In its research, Google criticizes Microsoft for failing to accurately characterize a security breach that occurred last year in which China-backed hackers accessed Microsoft Exchange's networks, allowing them to access any Exchange account. Google cites the federal cybersecurity review board's findings that Microsoft customers lacked sufficient information to assess if they were at risk at the time, and Microsoft made a "decision not to correct" comments about the breach that the board found "inaccurate."

Source Code Exposure and Email Compromises

Beyond the Exchange breach, Microsoft faced other cybersecurity setbacks. Russian hackers gained access to the company’s source code, raising concerns about the integrity of its software. Additionally, senior leadership’s email accounts were compromised, highlighting vulnerabilities within Microsoft’s infrastructure. These incidents underscore the need for robust security measures and transparency.

Google’s Perspective: A Safer Alternative?

Google, a competitor in the tech space, has seized the opportunity to position its Google Workspace as a safer alternative. The company emphasizes its engineering excellence, cutting-edge defenses, and transparent security culture. Google Workspace offers features like advanced threat protection, data loss prevention, and real-time monitoring. While Google’s motives may be partly self-serving, it raises valid points about the importance of proactive security practices.

The Way Forward

Microsoft must address its cybersecurity challenges head-on. Transparency, accurate communication, and rapid incident response are critical. Customers deserve timely information to assess their risk and take necessary precautions. 

As organizations increasingly rely on cloud services, trust in providers’ security practices becomes paramount. Microsoft’s reputation hinges on its ability to protect both its systems and its customers’ data.

Google Introduces Advanced Anti-Theft and Data Protection Features for Android Devices

 

Google is set to introduce multiple anti-theft and data protection features later this year, targeting devices from Android 10 up to the upcoming Android 15. These new security measures aim to enhance user protection in cases of device theft or loss, combining AI and new authentication protocols to safeguard sensitive data. 

One of the standout features is the AI-powered Theft Detection Lock. This innovation will lock your device's screen if it detects abrupt motions typically associated with theft attempts, such as a thief snatching the device out of your hand. Another feature, the Offline Device Lock, ensures that your device will automatically lock if it is disconnected from the network or if there are too many failed authentication attempts, preventing unauthorized access. 

Google also introduced the Remote Lock feature, allowing users to lock their stolen devices remotely via android.com/lock. This function requires only the phone number and a security challenge, giving users time to recover their account details and utilize additional options in Find My Device, such as initiating a full factory reset to wipe the device clean. 

According to Google Vice President Suzanne Frey, these features aim to make it significantly harder for thieves to access stolen devices. All these features—Theft Detection Lock, Offline Device Lock, and Remote Lock—will be available through a Google Play services update for devices running Android 10 or later. Additionally, the new Android 15 release will bring enhanced factory reset protection. This upgrade will require Google account credentials during the setup process if a stolen device undergoes a factory reset. 

This step renders stolen devices unsellable, thereby reducing incentives for phone theft. Frey explained that without the device or Google account credentials, a thief won't be able to set up the device post-reset, essentially bricking the stolen device. To further bolster security, Android 15 will mandate the use of PIN, password, or biometric authentication when accessing or changing critical Google account and device settings from untrusted locations. This includes actions like changing your PIN, accessing Passkeys, or disabling theft protection. 

Similarly, disabling Find My Device or extending the screen timeout will also require authentication, adding another layer of security against criminals attempting to render a stolen device untrackable. Android 15 will also introduce "private spaces," which can be locked using a user-chosen PIN. This feature is designed to protect sensitive data stored in apps, such as health or financial information, from being accessed by thieves.                                                                           
These updates, including factory reset protection and private spaces, will be part of the Android 15 launch this fall. Enhanced authentication protections will roll out to select devices later this year. 
Google also announced at Google I/O 2024 new features in Android 15 and Google Play Protect aimed at combating scams, fraud, spyware, and banking malware. These comprehensive updates underline Google's commitment to user security in the increasingly digital age.

Backdoor Malware: Iranian Hackers Disguised as Journalists

Backdoor Malware: Iranian Hackers Disguised as Journalists

Crafting convincing personas

APT42, an Iranian state-backed threat actor, uses social engineering attacks, including posing as journalists, to access corporate networks and cloud environments in Western and Middle Eastern targets.

Mandiant initially discovered APT42 in September 2022, reporting that the threat actors had been active since 2015, carrying out at least 30 activities across 14 countries.

The espionage squad, suspected to be linked to Iran's Islamic Revolutionary Guard Corps Intelligence Organization (IRGC-IO), has been seen targeting non-governmental groups, media outlets, educational institutions, activists, and legal services.

According to Google threat analysts who have been monitoring APT42's operations, the hackers employ infected emails to infect their targets with two custom backdoors, "Nicecurl" and "Tamecat," which allow for command execution and data exfiltration.

A closer look at APT42’s social engineering tactics

APT42 assaults use social engineering and spear-phishing to infect targets' devices with tailored backdoors, allowing threat actors to obtain initial access to the organization's networks.

The attack begins with emails from online personas posing as journalists, NGO representatives, or event organizers, sent from domains that "typosquat" (have identical URLs) with actual organizations.

APT42 impersonates media organizations such as the Washington Post, The Economist, The Jerusalem Post (IL), Khaleej Times (UAE), and Azadliq (Azerbaijan), with Mandiant claiming that the attacks frequently employ typo-squatted names such as "washinqtonpost[.]press".

Luring victims with tempting bait

After exchanging enough information to establish confidence with the victim, the attackers transmit a link to a document connected to a conference or a news item, depending on the lure theme.

APT42 assaults use social engineering and spear-phishing to infect targets' devices with tailored backdoors, allowing threat actors to obtain initial access to the organization's networks.

The attack begins with emails from online personas posing as journalists, NGO representatives, or event organizers, sent from domains that "typosquat" (have identical URLs) with actual organizations.

The imitation game

APT42 impersonates media organizations such as the Washington Post, The Economist, The Jerusalem Post (IL), Khaleej Times (UAE), and Azadliq (Azerbaijan), with Mandiant claiming that the attacks frequently employ typo-squatted names such as "washinqtonpost[.]press".

After exchanging enough information to establish confidence with the victim, the attackers transmit a link to a document connected to a conference or a news item, depending on the lure theme.

Nicecurl, Tamecat: Custom backdoor

APT42 employs two proprietary backdoors, Nicecurl and Tamecat, each designed for a specific function during cyberespionage activities.

Nicecurl is a VBScript-based backdoor that can run commands, download and execute other payloads, and extract data from the compromised host.

Tamecat is a more advanced PowerShell backdoor that can run arbitrary PS code or C# scripts, providing APT42 with significant operational flexibility for data theft and substantial system modification.

Tamecat, unlike Nicecurl, obfuscates its C2 connection with base64, allows for dynamic configuration updates, and examines the infected environment before execution to avoid detection by AV products and other active security mechanisms.

Exfiltration via Legitimate Channels

Both backdoors are sent by phishing emails containing malicious documents, which frequently require macro rights to run. However, if APT42 has established trust with the victim, this requirement becomes less of an impediment because the victim is more inclined to actively disable security features.

Volexity studied similar, if not identical, malware in February, linking the attacks to Iranian threat actors.

The full list of Indicators of Compromise (IoCs) for the recent APT42 campaign, as well as YARA rules for detecting the NICECURL and TAMECAT malware, are available at the end of Google's report.

Protecting Users Against Bugs: Software Providers' Scalable Attempts

Protecting Users Against Bugs

Ransomware assaults, such as the one on Change Healthcare, continue to create serious disruptions. However, they are not inevitable. Software developers can create products that are immune to the most frequent types of cyberattacks used by ransomware gangs. This blog discusses what can be done and encourages customers to demand that software companies take action.

Millions of Americans recently experienced prescription medicine delays or were forced to pay full price as a result of a ransomware assault. While the United States has begun to make headway in reacting to cyberattacks, including the passage of incident reporting requirements into law, it is apparent that much more work remains to be done to combat the ransomware epidemic. 

Ransomware gangs flourish because they usually attack genuinely easy weaknesses in software that serve as the basis for critical operations and services.

Providing scalable solutions: Company duty

Business leaders of software manufacturers hold the key: They can build products that are resilient against the most common classes of cyberattacks by ransomware gangs.

The security community has known how to eliminate classes of vulnerabilities across software for decades. What is needed is not perfectly secure software but “secure enough” software, which software manufacturers are capable of creating.n exploit remarkably simple vulnerabilities in software that is the foundation for the essential processes and services.

Systemic classes of defects like SQL injection or insecure default configurations, such as a lack of multi-factor authentication by default or hardcoded default passwords, enable the vast majority of ransomware attacks and are preventable at scale.

The expense of preventing some types of vulnerabilities during the design stage is substantially less than dealing with the complex aftermath of a breach. 

According to a recent Google study, it has nearly eliminated many common types of vulnerabilities in its products, such as SQL injection and cross-site scripting. Furthermore, Google claims that such tactics were cost-effective and, in some cases, saved money ultimately as a result of having to worry about bugs.

Fighting lack of action

Inaction is exactly what has occurred in the software business. The Biden administration's National Cybersecurity Strategy asks for a shift in this direction, with software manufacturers accepting responsibility for product security from the start.

For example, whereas conventional vulnerability assessment approaches urge a sequential approach to identifying and patching vulnerabilities one by one, the agency's SQL injection alert promotes software manufacturers' executives to lead codebase reviews and eliminate all potentially unsafe functions to prevent SQL injection at the source.

How to identify bugs

Software vendors may assess vulnerability classes on two levels: impact, or the degree of damage that can be done by that class of vulnerability, and the cost of avoiding that flaw at scale.

SQL injection vulnerabilities are likely to be high in impact but inexpensive in cost to eliminate, whereas memory-safety issues have extremely high impact but need large investments to rewrite codebases systematically. Businesses can create a priority list of the most cost-effective tasks for fixing specific types of flaws in their products.

Customer's role: What can you do?

Companies should ask how their vendors attempt to remove entire classes of threats, such as implementing phishing-resistant multi-factor authentication and developing a memory-safe plan to address the most prevalent type of software vulnerability.

It is feasible that future ransomware assaults may be far more difficult to carry out. It's high time for software businesses to make this possibility a reality and safeguard Americans by including security from the beginning. Customers should insist that they do this.

Banking Malware "Brokewell" Hacks Android Devices, Steals User Data

Banking Malware "Brokewell" Hacks Android Devices

Security experts have uncovered a new Android banking trojan called Brokewell, which can record every event on the device, from touches and information shown to text input and programs launched.

The malware is distributed via a fake Google Chrome update that appears while using the web browser. Brokewell is in ongoing development and offers a combination of broad device takeover and remote control capabilities.

Brokewell information

ThreatFabric researchers discovered Brokewell while examining a bogus Chrome update page that released a payload, which is a common approach for deceiving unwary users into installing malware.

Looking back at previous campaigns, the researchers discovered that Brokewell had previously been used to target "buy now, pay later" financial institutions (such as Klarna) while masquerading as an Austrian digital authentication tool named ID Austria.

Brokewell's key capabilities include data theft and remote control for attackers.

Data theft 

  • Involves mimicking login windows of targeted programs to steal passwords (overlay attacks).
  • Uses its own WebView to track and collect cookies once a user logs into a valid website.
  • Captures the victim's interactions with the device, such as taps, swipes, and text inputs, to steal data displayed or inputted on it.
  • Collects hardware and software information about the device.
  • Retrieves call logs.
  • determines the device's physical position.
  • Captures audio with the device's microphone.

Device Takeover: 

  • The attacker can see the device's screen in real time (screen streaming).
  • Remotely executes touch and swipe gestures on the infected device.
  • Allows remote clicking on specific screen components or coordinates.
  • Allows for remote scrolling within elements and text entry into specific fields.
  • Simulates physical button presses such as Back, Home, and Recents.
  • Remotely activates the device's screen, allowing you to capture any information.
  • Adjusts brightness and volume to zero.

New threat actor and loader

According to ThreatFabric, the developer of Brokewell is a guy who goes by the name Baron Samedit and has been providing tools for verifying stolen accounts for at least two years.

The researchers identified another tool named "Brokewell Android Loader," which was also developed by Samedit. The tool was housed on one of Brokewell's command and control servers and is utilized by several hackers.

Unexpectedly, this loader can circumvent the restrictions Google imposed in Android 13 and later to prevent misuse of the Accessibility Service for side-loaded programs (APKs).

This bypass has been a problem since mid-2022, and it became even more of a problem in late 2023 when dropper-as-a-service (DaaS) operations began offering it as part of their service, as well as malware incorporating the tactics into their bespoke loaders.

As Brokewell shows, loaders that circumvent constraints to prevent Accessibility Service access to APKs downloaded from suspicious sources are now ubiquitous and widely used in the wild.

Security experts warn that device control capabilities, like as those seen in the Brokewell banker for Android, are in high demand among cybercriminals because they allow them to commit fraud from the victim's device, avoiding fraud evaluation and detection technologies.

They anticipate Brokewell being further improved and distributed to other hackers via underground forums as part of a malware-as-a-service (MaaS) operation.

To avoid Android malware infections, avoid downloading apps or app updates from sources other than Google Play, and make sure Play Protect is always turned on.

Posthumous Data Access: Can Google Assist with Deceased Loved Ones' Data?

 

Amidst the grief and emotional turmoil after loosing a loved one, there are practical matters that need to be addressed, including accessing the digital assets and accounts of the deceased. In an increasingly digital world, navigating the complexities of posthumous data access can be daunting. One common question that arises in this context is whether Google can assist in accessing the data of a deceased loved one. 

Google, like many other tech companies, has implemented protocols and procedures to address the sensitive issue of posthumous data access. However, accessing the digital assets of a deceased individual is not a straightforward process and is subject to various legal and privacy considerations. 

When a Google user passes away, their account becomes inactive, and certain features may be disabled to protect their privacy. Google offers a tool called "Inactive Account Manager," which allows users to specify what should happen to their account in the event of prolonged inactivity or after their passing. Users can set up instructions for data deletion or designate trusted contacts who will be notified and granted access to specific account data. 

However, the effectiveness of Google's Inactive Account Manager depends on the deceased individual's proactive setup of the tool before their passing. If the tool was not configured or if the deceased did not designate trusted contacts, gaining access to their Google account and associated data becomes significantly more challenging. 

In such cases, accessing the data of a deceased loved one often requires legal authorization, such as a court order or a valid death certificate. Google takes user privacy and data security seriously and adheres to applicable laws and regulations governing data access and protection. Without proper legal documentation and authorization, Google cannot grant access to the account or its contents, even to family members or next of kin. 

Individuals need to plan ahead and consider their digital legacy when setting up their online accounts. This includes documenting login credentials, specifying preferences for posthumous data management, and communicating these wishes to trusted family members or legal representatives. By taking proactive steps to address posthumous data access, individuals can help alleviate the burden on their loved ones during an already challenging time. 

In addition to Google's Inactive Account Manager, there are third-party services and estate planning tools available to assist with digital asset management and posthumous data access. These services may offer features such as data encryption, secure storage of login credentials, and instructions for accessing online accounts in the event of death or incapacity. 

As technology continues to play an increasingly prominent role in our lives, the issue of posthumous data access will only become more relevant. It's crucial for individuals to educate themselves about their options for managing their digital assets and to take proactive steps to ensure that their wishes are carried out after their passing. 

While Google provides tools and resources to facilitate posthumous data management, accessing the data of a deceased loved one may require legal authorization and adherence to privacy regulations. Planning ahead and communicating preferences for digital asset management are essential steps in addressing this sensitive issue. By taking proactive measures, individuals can help ensure that their digital legacy is managed according to their wishes and alleviate the burden on their loved ones during a difficult time.

Google’s Incognito Mode: Privacy, Deception, and the Path Forward

Google’s Incognito Mode: Privacy, Deception, and the Path Forward

In a digital age where privacy concerns loom large, the recent legal settlement involving Google’s Incognito mode has captured attention worldwide. The tech giant, known for its dominance in search, advertising, and web services, has agreed to delete billions of records and make significant changes to its tracking practices. Let’s delve into the details and explore the implications of this landmark decision.

The Incognito Mode Controversy

Incognito mode promises users a private browsing experience. It suggests that their online activities won’t be tracked, cookies won’t be stored, and their digital footprints will vanish once they exit the browser. However, the reality has been far from this idealistic portrayal.

The Illusion of Privacy: Internal documents revealed that Google employees referred to Incognito mode as “effectively a lie” and “a confusing mess”. Users believed they were operating in a secure, private environment, but Google continued to collect data, even in this supposedly incognito state.

Data Collection Despite Settings: The class action lawsuit filed against Google in 2020 alleged that the company tracked users’ activity even when they explicitly set their browsers to private modes. This revelation shattered the illusion of privacy and raised serious questions about transparency.

The Settlement: What It Means

Google’s proposed legal settlement aims to address these concerns and bring about meaningful changes:

Data Deletion: Google will wipe out “hundreds of billions” of private browsing data records it had collected. This move is a step toward rectifying past privacy violations.

Blocking Third-Party Cookies: For the next five years, Google Chrome’s Incognito mode will automatically block third-party cookies by default. These cookies, often used for tracking, will no longer infiltrate users’ private sessions.

Global Impact: The settlement extends beyond U.S. borders. Google’s commitment to data deletion and cookie blocking applies worldwide. This global reach emphasizes the significance of the decision.

The Broader Implications

Transparency and Accountability: The settlement represents an “historic step” in holding tech giants accountable. Lawyer David Boies, who represented users in the lawsuit, rightly emphasized the need for honesty and transparency. Users deserve clarity about their privacy rights.

User Trust: Google’s actions will either restore or further erode user trust. By deleting records and blocking cookies, the company acknowledges its missteps. However, rebuilding trust requires consistent adherence to privacy commitments.

Ongoing Legal Battles: While this settlement is a milestone, Google still faces other privacy-related lawsuits. The outcome of these cases could result in substantial financial penalties. The tech industry is on notice: privacy violations won’t go unnoticed.

The Road Ahead

As users, we must remain vigilant. Privacy isn’t just a checkbox; it’s a fundamental right. Google’s actions should prompt us to reevaluate our digital habits, understand the trade-offs, and demand transparency from all tech companies.

In the end, the battle for privacy isn’t won with a single settlement. It’s an ongoing struggle—one that requires vigilance, legal scrutiny, and a commitment to safeguarding our digital lives. Let’s hope that this landmark decision serves as a catalyst for positive change across the tech landscape.

Google Messages' Gemini Update: What You Need To Know

 



Google's latest update to its Messages app, dubbed Gemini, has ignited discussions surrounding user privacy. Gemini introduces AI chatbots into the messaging ecosystem, but it also brings forth a critical warning regarding data security. Unlike conventional end-to-end encrypted messaging services, conversations within Gemini lack this crucial layer of protection, leaving them potentially vulnerable to access by Google and potential exposure of sensitive information.

This privacy gap has raised eyebrows among users, with some expressing concern over the implications of sharing personal data within Gemini chats. Others argue that this aligns with Google's data-driven business model, which leverages user data to enhance its AI models and services. However, the absence of end-to-end encryption means that users may inadvertently expose confidential information to third parties.

Google has been forthcoming about the security implications of Gemini, explicitly stating that chats within the feature are not end-to-end encrypted. Additionally, Google collects various data points from these conversations, including usage information, location data, and user feedback, to improve its products and services. Despite assurances of privacy protection measures, users are cautioned against sharing sensitive information through Gemini chats.

The crux of the issue lies in the disparity between users' perceptions of AI chatbots as private entities and the reality of these conversations being accessible to Google and potentially reviewed by human moderators for training purposes. Despite Google's reassurances, users are urged to exercise caution and refrain from sharing sensitive information through Gemini chats.

While Gemini's current availability is limited to adult beta testers, Google has hinted at its broader rollout in the near future, extending its reach beyond English-speaking users to include French-speaking individuals in Canada as well. This expansion signifies a pivotal moment in messaging technology, promising enhanced communication experiences for a wider audience. However, as users eagerly anticipate the platform's expansion, it becomes increasingly crucial for them to proactively manage their privacy settings. By taking the time to review and adjust their preferences, users can ensure a more secure messaging environment tailored to their individual needs and concerns. This proactive approach empowers users to navigate digital communication with confidence and peace of mind.

All in all, the introduction of Gemini in Google Messages underscores the importance of user privacy in the digital age. While technological advancements offer convenience, they also necessitate heightened awareness to safeguard personal information from potential breaches.




Restrictions on Gemini Chatbot's Election Answers by Google

 


AI chatbot Gemini has been limited by Google in terms of its ability to respond to queries concerning several forthcoming elections in several countries, including the presidential election in the United States, this year. According to an announcement made by the company on Tuesday, Gemini, Google's artificial intelligence chatbot, will no longer answer election-related questions for users in the U.S. and India. 

Previously known as Bard, Google's AI chatbot Gemini has been unable to answer questions about the general elections of 2024. Various reports indicate that the update is already live in the United States, is already being rolled out in India, and is now being rolled out in all major countries that are approaching elections within the next few months. 

As a result of the change, Google has expressed concern about how the generative AI could be weaponized by users and produce inaccurate or misleading results, as well as the role it has been playing and will continue to play in the electoral process. 

In advance of the general elections in India this spring, millions of Indian citizens will be voting in a general election, and the company has taken several steps to ensure that its services are secure from misinformation. 

Several high-stakes elections are planned this year in countries such as the United States, India, South Africa, and the United Kingdom that require a significant amount of chatbot capabilities. It is widely known that artificial intelligence (AI) is generating disinformation and it is having a significant impact on global elections. This technology allows robocalls, deep fakes, and chatbots to be used to spread misinformation. 

Just days after India released an advisory demanding that companies in the tech industry get government approval before they launch their new AI models, the switch has been made in India. A recent investigation of Google's artificial intelligence products has resulted in a wide range of concerns, including inaccuracies in some historical depictions of people created by Gemini that forced the chatbot's image-generation feature to be halted, which has caused it to receive negative attention. 

According to the CEO of the company, Sundar Pichai, the chatbot is being remediated and is "completely unacceptable" for its responses. The parent company of Facebook, Meta Platforms, announced last month that it would set up a team in advance of the European Parliament elections in June to combat disinformation and the abuse of generative AI. 

As generative AI is advancing across the globe, government officials across the globe have been concerned about misinformation, prompting them to take measures to control its use. As of recently, India has informed technology companies that they need to obtain approval before releasing AI tools that have been "unreliable" or that are undergoing testing. 

The company apologised in February after its recently launched AI image generator, Gemini, created an image of the US Founding Fathers in which a black man was inappropriately depicted as a member of the group. Gemini also created an incorrectly depicted image of German soldiers from World War Two.

Generative AI Worms: Threat of the Future?

Generative AI worms

The generative AI systems of the present are becoming more advanced due to the rise in their use, such as Google's Gemini and OpenAI's ChatGPT. Tech firms and startups are making AI bits and ecosystems that can do mundane tasks on your behalf, think about blocking a calendar or product shopping. But giving more freedom to these things tools comes at the cost of risking security. 

Generative AI worms: Threat in the future

In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.  

Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.

Generative AI worms in the lab

The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.

Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system. 

Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.

The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response. 

The email system to spread worms

The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.

A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."

Generative AI worms to be a major threat soon

Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."