
New EDR Bypass Tool Advertised by FIN7 Hacking Group

 

SentinelOne researchers warn that the financially motivated group FIN7 is utilising various pseudonyms to promote a security evasion tool on several criminal underground forums. FIN7 created a tool called AvNeutralizer (also known as AuKill) designed to tamper with endpoint security solutions. The researchers discovered that the tool was employed by multiple ransomware operations, including AvosLocker, MedusaLocker, BlackCat, Trigona, and LockBit.

The researchers identified a new version of AvNeutralizer that uses a novel way to interfere with and bypass security mechanisms, exploiting the Windows driver ProcLaunchMon.sys. 

“New evidence shows FIN7 is using multiple pseudonyms to mask the group’s true identity and sustain its criminal operations in the underground market,” the researchers explained. “FIN7’s campaigns demonstrate the group’s adoption of automated SQL injection attacks for exploiting public-facing applications.”

Last year in November, SentinelOne reported a potential link between FIN7 and the use of EDR evasion tools in ransomware attacks involving the Black Basta group. 

The cybersecurity firm's analysis revealed that the "AvNeutralizer" tool (also known as AuKill) targeted several endpoint security solutions and was utilised exclusively by one group for six months. This supported the hypothesis that the FIN7 group and the Black Basta gang had a close relationship.

Starting in January 2023, the experts detected the deployment of upgraded versions of AvNeutralizer by multiple ransomware gangs, implying that the programme was made available to multiple threat actors through underground forums. The researchers discovered numerous adverts on underground forums encouraging the sale of AvNeutralizer.

On May 19, 2022, a user named "goodsoft" advertised an AV killing tool for $4,000 on the exploit[.]in forum. Later, on June 14th, 2022, a person named "lefroggy" placed a similar ad on the xss[.]is forum for $15,000. A week later, on June 21st, a user known as "killerAV" advertised the tool on the RAMP forum for $8,000. 

SentinelOne researchers focused on the tool's innovative technique for disabling endpoint security solutions. The unpacked AvNeutralizer payload employs ten different approaches to tamper with the system's security defences. While several of these have been reported before, such as removing PPL protection using the RTCore64.sys driver and abusing the Restart Manager API, a recently discovered technique relies on a built-in Windows driver capability that had not previously been observed in the wild.

“Our investigation into FIN7’s activities highlights its adaptability, persistence and ongoing evolution as a threat group. In its campaigns, FIN7 has adopted automated attack methods, targeting public-facing servers through automated SQL injection attacks,” the researchers concluded. “Additionally, its development and commercialization of specialized tools like AvNeutralizer within criminal underground forums significantly enhance the group’s impact.”

Here's Why You Shouldn't Use Public USB Charging Ports

 

We've all been there: stranded in a coffee shop with a dying phone battery and no charger, only to find a free USB charging station nearby. Relieved, you plug in your device and go about your business, unaware that a potential threat lurks behind that seemingly benign USB port.

That concern is "juice jacking," a cybersecurity vulnerability that has received enough attention in recent years to warrant an advisory from the FBI. So, what exactly is juice jacking and how risky is it? Here's all you need to know, along with some recommendations for keeping your mobile devices safe while charging on the road. 

What is juice-jacking? 

Juice-jacking is when hackers siphon data from your phone while it is charging. They do this using software hidden in a quick-charge kiosk, or through a cable left connected to a charging station, and it works because the phone is plugged in over USB. Unlike two-pronged plugs, USB connections can carry data as well as electricity.

The methodology is similar to how a "skimmer" steals your bank or credit card information; however, juice-jacking has the potential to collect all of the data on your cell phone, including passwords, account information, contacts, emails, and so on. While this form of hacking is not yet widespread, it has the potential to become so. However, there are techniques to defend yourself from this type of hack. 

Prevention Tips 
  • Do not plug your phone directly into a public USB charging port. Keep your data secure by using a 2-prong electrical charger instead.
  • Don't use a provided cord or someone else's 2-prong adapter, since it might contain software designed to steal your information.
  • Use a "sync stop" (data blocker) device to prevent attackers from accessing your phone.
  • When charging your phone, leave it locked or switched off; most phones cannot expose your information in that state.
  • Don't rely on public stations: bring your own power bank to charge your mobile device.

When your phone's battery goes low in the airport, hotel, or coffee shop, be sure you're prepared to give it the power it requires without leaving you powerless.

Chinese APT40 Can Exploit Flaws Within Hours of Public Release

 

A joint government advisory claims that APT40, a Chinese state-sponsored actor, is focusing on recently discovered software vulnerabilities in an attempt to exploit them in a matter of hours.

The advisory, authored by the Cybersecurity and Infrastructure Security Agency, FBI, and National Security Agency in the United States, as well as government agencies in Australia, the UK, Canada, New Zealand, Germany, South Korea, and Japan, stated that the cyber group has targeted organisations in a variety of arenas, employing techniques commonly employed by other state-sponsored actors in China. It has often targeted Australian networks, for instance, and remains a threat, the agencies warned. 

Rather than using strategies that involve user engagement, the gang seems to prefer exploiting vulnerable, public-facing infrastructure and prioritising the collection of valid credentials. It frequently seizes on public exploits as soon as they become available, creating a "patching race" for organisations.

"The focus on public-facing infrastructure is interesting. It shows they're looking for the path of least resistance; why bother with elaborate phishing campaigns when you can just hit exposed vulnerabilities directly?" stated Tal Mandel Bar, product manager at DoControl. 

The APT targets newly disclosed flaws, but it also has access to a large number of older exploits, according to the agencies. As a result, a comprehensive vulnerability management effort is necessary.

Comprehensive reconnaissance efforts 

APT40 conducts reconnaissance against networks of interest on a regular basis, "including networks in the authoring agencies' countries, looking for opportunities to compromise its targets," according to the joint advisory. The group then employs Web shells for persistence and focuses on extracting data from sensitive repositories.

"The data stolen by APT40 serves dual purposes: It is used for state espionage and subsequently transferred to Chinese companies," Chris Grove, director of cybersecurity strategy at Nozomi Networks, stated. "Organizations with critical data or operations should take these government warnings seriously and strengthen their defenses accordingly. One capability that assists defenders in hunting down these types of threats is advanced anomaly detection systems, acting as intrusion detection for attackers able to 'live off the land' and avoid deploying malware that would reveal their presence.” 

APT40's methods have also evolved: the group now uses compromised endpoints such as small-office/home-office (SOHO) devices as operational infrastructure, a shift that has helped the authoring agencies better characterise and track its movements. This approach, also noted in Volt Typhoon activity, is one of several aspects of the group's operations that resemble other China-backed threat clusters tracked under names including Kryptonite Panda, Gingham Typhoon, Leviathan, and Bronze Mohawk, the advisory reads.

The advisory provides mitigating approaches for APT40's four major types of tactics, techniques, and procedures (TTPs), which include initial access, execution, persistence, and privilege escalation.

Zero-Knowledge Proofs: How Do They Improve Blockchain Privacy?



Zero-knowledge proofs (ZKPs) are emerging as a vital component in blockchain technology, offering a way to maintain transactional privacy and integrity. These cryptographic methods enable verification without revealing the actual data, paving the way for more secure and private blockchain environments.

At its core, a zero-knowledge proof allows one party (the prover) to prove to another party (the verifier) that they know certain information without disclosing the information itself. This is particularly valuable in the blockchain realm, where transparency is key but privacy is also crucial. For example, smart contracts often contain sensitive financial or personal data that must be protected from unauthorised access.

How ZKPs Operate

A ZKP involves the prover performing actions that confirm they know the hidden data. If an unauthorised party attempts to guess these actions, the verifier's procedures will expose the falsity of their claim. ZKPs can be interactive, requiring repeated verifications, or non-interactive, where a single proof suffices for multiple verifiers.

The concept of ZKPs was introduced in a 1985 MIT paper by Shafi Goldwasser and Silvio Micali, which demonstrated the feasibility of proving statements about data without revealing the data itself. Key characteristics of ZKPs include:

  • Completeness: If the prover's statement is true, the verifier will be convinced.
  • Soundness: If the prover's statement is false, the verifier will detect the deception. 
  • Zero-Knowledge: The proof does not reveal any additional information beyond the validity of the statement.
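To make these properties concrete, here is a minimal sketch of a non-interactive Schnorr proof of knowledge, made non-interactive with the Fiat-Shamir heuristic. It is not any particular blockchain's implementation, and the tiny group parameters are purely illustrative.

```python
# Toy non-interactive Schnorr proof (Fiat-Shamir heuristic): the prover shows
# knowledge of x with y = g^x mod p without revealing x. The tiny group below
# is for illustration only; real systems use large standardised groups or
# elliptic curves.
import hashlib
import secrets

p, q, g = 23, 11, 2  # safe-prime group: g generates a subgroup of prime order q


def challenge(*values):
    """Hash the public transcript to an integer challenge (Fiat-Shamir)."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q


def prove(x):
    y = pow(g, x, p)          # public value, the statement being proven
    r = secrets.randbelow(q)  # fresh random nonce
    t = pow(g, r, p)          # commitment
    c = challenge(g, y, t)    # challenge derived from the transcript
    s = (r + c * x) % q       # response
    return y, (t, s)


def verify(y, proof):
    t, s = proof
    c = challenge(g, y, t)
    # Accept iff g^s == t * y^c (mod p); the transcript reveals nothing about x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p


secret_x = secrets.randbelow(q)
y, proof = prove(secret_x)
print(verify(y, proof))  # True for an honest prover
```

An honest prover always convinces the verifier (completeness), a forged transcript fails the final check (soundness), and the transcript itself leaks nothing about the secret exponent (zero-knowledge).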

Types of Zero-Knowledge Proofs

Zero-knowledge proofs come in various forms, each offering unique benefits in terms of proof times, verification times, and proof sizes:

  • PLONK: An acronym for "Permutations over Lagrange-bases for Oecumenical Non-interactive arguments of Knowledge," PLONK is known for its versatility. It supports various applications and allows a large number of participants, making it one of the most widely used and trusted ZKP setups.
  • ZK-SNARKs: Short for "Zero-Knowledge Succinct Non-interactive Argument of Knowledge," ZK-SNARKs are popular due to their efficiency. These proofs are quick to generate and verify, requiring fewer computational resources. They use elliptic curves for cryptographic proofs, making them suitable for systems with limited processing power.
  • ZK-STARKs: "Zero-Knowledge Scalable Transparent ARgument of Knowledge" proofs are designed for scalability and speed. They require minimal interaction between the prover and verifier, which speeds up the verification process. ZK-STARKs are also transparent, meaning they do not require a trusted setup, enhancing their security.
  • Bulletproofs: These are short, non-interactive zero-knowledge proofs that do not require a trusted setup, making them ideal for applications needing high privacy, such as confidential cryptocurrency transactions. Bulletproofs are efficient and compact, providing strong privacy guarantees without significant overhead.

Advantages for Blockchain Privacy

ZKPs are instrumental in preserving privacy on public blockchains, which are typically transparent by design. They enable the execution of smart contracts—self-executing programs that perform agreed-upon actions—without revealing sensitive data. This is particularly important for institutions like banks, which need to protect personal data while complying with regulatory requirements.

For instance, financial institutions can use ZKPs to interact with public blockchain networks, keeping their data private while benefiting from the broader user base. The London Stock Exchange is exploring ZKPs to enhance security and handle large volumes of financial data efficiently.

Practical Applications

Zero-knowledge proofs have a wide array of applications across various sectors, enhancing privacy and security:

1. Private Transactions: Cryptocurrencies like Zcash utilise ZKPs to keep transaction details confidential. By employing ZKPs, Zcash ensures that the sender, receiver, and transaction amount remain private, providing users with enhanced security and anonymity.

2. Decentralised Identity and Authentication: ZKPs can secure identity management systems, allowing users to verify their identity without revealing personal details. This is crucial for protecting sensitive information in digital interactions and can be applied in various fields, from online banking to voting systems.

3. Verifiable Computations: Decentralised oracle networks can leverage ZKPs to access and verify off-chain data without exposing it. For example, a smart contract can obtain weather data from an external source and prove its authenticity using ZKPs, ensuring the data's integrity without compromising privacy.

4. Supply Chain Management: ZKPs can enhance transparency and security in supply chains by verifying the authenticity and origin of products without disclosing sensitive business information. This can prevent fraud and ensure the integrity of goods as they move through the supply chain.

5. Healthcare: In the healthcare sector, ZKPs can protect patient data while allowing healthcare providers to verify medical records and credentials. This ensures that sensitive medical information is kept confidential while enabling secure data sharing between authorised parties.

Challenges and Future Prospects

Despite their promise, ZKPs face challenges, particularly regarding the hardware needed for efficient proof generation. Advanced GPUs are required for parallel processing to speed up the process. Technologies like PLONK are addressing these issues with improved algorithms, but further developments are needed to simplify and broaden ZKP adoption.

Businesses are increasingly integrating blockchain technologies, including ZKPs, to enhance security and efficiency. With ongoing investment in cryptocurrency infrastructure, ZKPs are expected to play a crucial role in creating a decentralized, privacy-focused internet.

Zero-knowledge proofs are revolutionising blockchain privacy, enabling secure and confidential transactions. While challenges remain, the rapid development and significant investment in this technology suggest a bright future for ZKPs, making them a cornerstone of modern blockchain applications.


Australian Man Arrested for Evil Twin Wi-Fi Attacks on Domestic Flights

 

Police in Australia have arrested and charged a man with nine cybercrime offences for allegedly setting up fictitious public Wi-Fi networks using a portable wireless access point to steal data from unsuspecting users.

The man allegedly set up "evil twin" Wi-Fi networks at airports, on flights, and at other locations linked to his "previous employment", deceiving people into logging into the fake networks with their email addresses or social media accounts. Police stated the login data was then transferred to the man's devices.

Dozens of credentials were reportedly obtained. This information might have enabled the perpetrator to get access to victims' accounts and possibly steal further sensitive information such as banking login details or other personal information. Employees of the airline noticed one of the strange in-flight Wi-Fi networks. The anonymous Australian airline then reported the Wi-Fi's presence to authorities, who investigated the situation in April and arrested the suspect in May. 

According to the Australian Broadcasting Corporation, the man, Michael Clapsis, appeared before Perth Magistrates Court and was subsequently released on "strict" bail with limited internet access. He also had to submit his passport. Clapsis' LinkedIn profile, which has since been deleted, hints that he may have previously worked for a shipping company. 

He has been charged with three counts of unauthorised impairment of electronic communication, three counts of possession or control of data with the intent to commit a serious offence, one count of unauthorised access or modification of restricted data, one count of dishonestly obtaining or dealing in personal financial information, and one count of possessing identification information with the intent to commit an offence. Clapsis is set to appear in court again in August. 

Evil twin attacks can use a variety of tactics to steal victims' data. However, they typically involve offering free Wi-Fi networks that appear genuine but actually present "login pages" designed to steal your data. Genuine Wi-Fi networks should never ask you to log in with your social media credentials or provide a password for any of your accounts. It is also recommended to use a VPN and to avoid connecting to public Wi-Fi networks when a more secure option is available.

EU Claims Meta’s Paid Ad-Free Option Violates Digital Competition Rules

 

European Union regulators have accused Meta Platforms of violating the bloc’s new digital competition rules by compelling Facebook and Instagram users to either view ads or pay to avoid them. This move comes as part of Meta’s strategy to comply with Europe's stringent data privacy regulations.

Starting in November, Meta began offering European users the option to pay at least 10 euros ($10.75) per month for ad-free versions of Facebook and Instagram. This was in response to a ruling by the EU’s top court, which mandated that Meta must obtain user consent before displaying targeted ads, a decision that jeopardized Meta’s business model of personalized advertising.

The European Commission, the EU’s executive body, stated that preliminary findings from its investigation indicate that Meta’s “pay or consent” model breaches the Digital Markets Act (DMA) of the 27-nation bloc. According to the commission, Meta’s approach fails to provide users the right to “freely consent” to the use of their personal data across its various services for personalized ads.

The commission also criticized Meta for not offering a less personalized service that is equivalent to its social networks. Meta responded by stating that their subscription model for no ads aligns with the direction of the highest court in Europe and complies with the DMA. The company expressed its intent to engage in constructive dialogue with the European Commission to resolve the investigation.

The investigation was launched soon after the DMA took effect in March, aiming to prevent tech “gatekeepers” from dominating digital markets through heavy financial penalties. One of the DMA's objectives is to reduce the power of Big Tech firms that have amassed vast amounts of personal data, giving them an advantage over competitors in online advertising and social media services. The commission suggested that Meta should offer an option that doesn’t rely on extensive personal data sharing for advertising purposes.

European Commissioner Thierry Breton, who oversees the bloc’s digital policy, emphasized that the DMA aims to empower users to decide how their data is used and to ensure that innovative companies can compete fairly with tech giants regarding data access.

Meta now has the opportunity to respond to the commission’s findings, with the investigation due to conclude by March 2025. The company could face fines of up to 10% of its annual global revenue, potentially amounting to billions of euros. Under the DMA, Meta is classified as one of seven online gatekeepers, with Facebook, Instagram, WhatsApp, Messenger, and its online ad business listed among two dozen “core platform services” that require the highest level of regulatory scrutiny.

This accusation against Meta is part of a series of regulatory actions by Brussels against major tech companies. Recently, the EU charged Apple with preventing app makers from directing users to cheaper options outside its App Store and accused Microsoft of violating antitrust laws by bundling its Teams app with its Office software.


Phishing And The Threats of QR Codes

 

Cybercriminals have always been adept at abusing the latest technological developments in their attacks, and weaponizing QR codes is one of their most recent strategies. QR codes have grown in popularity as a method for digital information sharing due to their ease of use and functionality. 

However, their widespread use has created a new channel for phishing attempts, namely QR code phishing (or quishing). With the NCSC recently warning of an increase in these attacks, businesses must grasp how QR codes can be used to compromise staff and what they can do to effectively protect against these rising threats. 

Leaders at risk from QR code attacks 

Quishing attacks, like traditional phishing campaigns, typically attempt to steal credentials by social engineering, in which an email is sent from a supposedly trusted source and uses urgent language to persuade the target to perform a specific action. 

In a quishing attack, the target is frequently induced to scan a QR code attached to a fake prompt, such as a request to update an expired password or review a critical file. The malicious QR code then directs the victim to a counterfeit login page, prompting them to enter - and ultimately expose - their credentials.

CEOs and senior executives, who hold extensive system access, are naturally appealing targets because of the high value of their account credentials. In fact, research found that C-suite members were 42 times more likely than other employees to receive QR code phishing attacks.

Quishing attacks largely follow the standard phishing playbook, in which social engineering is used to manipulate the victim's actions. When it comes to QR code phishing, however, cybercriminals appear to favour two lures.

Data collected in the second half of 2023 revealed that QR codes were most commonly used in fake notifications about MFA activity (27% of all QR-based attacks) and shared documents (21%). Whatever the pretext behind the malicious code, the majority of QR-based attacks that security experts detected were credential phishing attempts.

Prevention tips 

The best defence is to keep these attacks from reaching their intended targets at all. However, it is becoming increasingly evident that these new phishing schemes outperform secure email gateways (SEGs) and other legacy email systems. Unfortunately, these safeguards were not intended to thoroughly detect QR code threats or assess the code's destination.
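As a small illustration of what "assessing the code's destination" can look like in practice, here is a minimal sketch that screens a QR payload once it has already been decoded to a string (for example by a scanner app or a library such as pyzbar). The allowlisted domains are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: screen a QR payload after it has been decoded to a string.
# TRUSTED_DOMAINS is a hypothetical allowlist, not a recommendation.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "login.example.com"}  # assumed organisational domains


def is_suspicious(decoded_payload: str) -> bool:
    url = urlparse(decoded_payload)
    if url.scheme != "https":           # plain http or odd schemes are a red flag
        return True
    host = url.hostname or ""
    return host not in TRUSTED_DOMAINS  # unknown destination: treat as untrusted


print(is_suspicious("https://login.example.com/reset"))  # False
print(is_suspicious("http://examp1e-login.xyz/mfa"))     # True
```

A real deployment would combine this kind of check with URL reputation feeds and sandboxed link analysis rather than a static allowlist.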

Businesses need to be aware that new threats like QR codes will outsmart many of the classic security solutions, forcing them to switch to more contemporary, dynamic strategies like AI-native detection technologies.

Hacker Claims Data Breach of India’s Blue-Collar Worker Database

 

A hacker claims to have accessed a large database linked with the Indian government's portal for blue-collar workers emigrating from the country. 

The eMigrate portal's database allegedly includes full names, contact numbers, email addresses, dates of birth, mailing addresses, and passport data of individuals who allegedly registered for the portal.

The Ministry of External Affairs launched eMigrate, which helps Indian workers in emigrating overseas. The portal also offers clearance tracking and insurance services to migrating workers. 

The database for sale on a recognised cybercrime forum looks to be genuine and it even includes the contact information for the Indian government's foreign ambassador. While it is unclear whether the data was stolen directly from the eMigrate portal or via a previous breach, the threat actors claim to have access to at least 200,000 internal and registered user accounts. 

India's Computer Emergency Response Team (CERT-In) is working with the relevant authorities to take appropriate action, while the Ministry of External Affairs is yet to respond on the matter. This is not the first time India's government portals have been accused of a data leak.

Earlier this year, an Indian state government website was found exposing sensitive documents and personal information of millions of residents. In May, scammers were found to have tricked government websites into displaying adverts that redirected users to online betting sites. 

The implications of such data breaches are difficult to estimate. However, data breaches can have serious consequences for individuals whose personal information is exposed. Personal information posted on hacker forums is frequently used by attackers to launch phishing attacks, steal identities, and compromise users' financial security.

“Personal data is its own form of digital currency on the internet and breaches cost organizations a significant amount. The breaches impacting organizations and government entities are what the public sees front and center, but the impact on the end user isn’t as visible,” stated Satnam Narang, senior staff research engineer at Tenable.

From Hype to Reality: Understanding Abandoned AI Initiatives

A survey discovered that nearly half of all new commercial artificial intelligence projects are abandoned midway.

Navigating the AI Implementation Maze

A recent study by the multinational law firm DLA Piper, which surveyed 600 top executives and decision-makers worldwide, sheds light on the considerable hurdles businesses confront when incorporating AI technologies. 

Despite AI's exciting potential to transform different industries, the path to successful deployment is plagued with challenges. This article looks into these problems and offers expert advice for navigating the complex terrain of AI integration.

Why Half of Business AI Projects Get Abandoned

According to the report, while more than 40% of enterprises fear that their basic business models will become obsolete unless they incorporate AI technologies, over half (48%) of companies that have started AI projects have had to suspend or roll them back. The reasons cited include worries about data privacy (48%), challenges with data ownership and insufficient legislative frameworks (37%), customer apprehensions (35%), the emergence of new technologies (33%), and staff worries (29%).

The Hype vs. Reality

1. Unrealistic Expectations

When organizations embark on an AI journey, they often expect immediate miracles. The hype surrounding AI can lead to inflated expectations, especially when executives envision seamless automation and instant ROI. However, building robust AI systems takes time, data, and iterative development. Unrealistic expectations can lead to disappointment and project abandonment.

2. Data Challenges

AI algorithms thrive on data, but data quality and availability remain significant hurdles. Many businesses struggle with fragmented, messy data spread across various silos. Without clean, labeled data, AI models cannot deliver reliable results. Additionally, privacy concerns and compliance issues further complicate data acquisition and usage.

The Implementation Pitfalls

1. Lack of Clear Strategy

AI projects often lack a well-defined strategy. Organizations dive into AI without understanding how it aligns with their overall business goals. A clear roadmap, including pilot projects, resource allocation, and risk assessment, is crucial.

2. Talent Shortage

Skilled AI professionals are in high demand, but the supply remains limited. Organizations struggle to find data scientists, machine learning engineers, and AI architects. Without the right talent, projects stall or fail.

3. Change Management

Implementing AI requires organizational change. Employees must adapt to new workflows, tools, and mindsets. Resistance to change can derail projects, leading to abandonment.

EU Proposes New Law to Allow Bulk Scanning of Chat Messages

 

The European elections have ended, and the European football tournament is in full flow; why not allow bulk searches of people's private communications, including encrypted ones? Activists around Europe are outraged by the proposed European Union legislation. 

The EU governments' vote, scheduled for Thursday at a key Permanent Representatives Committee meeting, would not even have been the final hurdle for the legislation, which aims to identify child sexual abuse material (CSAM). At the last minute, the contentious item was pulled from the agenda.

However, experts believe that if the EU Council approves the Chat Control regulation, sooner or later it will be enacted at the end of a difficult political process. The activists have therefore asked Europeans to take action and keep up the pressure.

EU Council deaf to criticism

A regulation requiring chat services like Facebook Messenger and WhatsApp to sift through users' private chats in order to look for grooming and CSAM was first put forward in 2022.

Needless to say, privacy experts denounced it, with cryptography professor Matthew Green stating that the document described "the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR.” 

“Let me be clear what that means: to detect “grooming” is not simply searching for known CSAM. It isn’t using AI to detect new CSAM, which is also on the table. It’s running algorithms reading your actual text messages to figure out what you’re saying, at scale,” stated Green. 

However, the EU has not backed down, and the draft law is currently going through the system. To be more specific, the proposed law would establish an "upload moderation" system to analyse all digital messages, including shared images, videos, and links.

The document is rather wild. Consider end-to-end encryption: on the one hand, the proposed legislation states that it is vital, but it also warns that encrypted messaging platforms may "inadvertently become secure zones where child sexual abuse material can be shared or disseminated." 

The method appears to involve scanning message content before encrypting it using apps such as WhatsApp, Messenger, or Signal. That sounds unconvincing, and it most likely is. 

Even if the regulation is approved by EU countries, additional problems may arise once the general public becomes aware of what is at stake. According to a study conducted last year by the European Digital Rights group, 66% of young people in the EU oppose the idea of having their private messages scanned.

AI Technique Combines Programming and Language

 

Researchers from MIT and several other institutions have introduced an innovative technique that enhances the problem-solving capabilities of large language models by integrating programming and natural language. This new method, termed natural language embedded programs (NLEPs), significantly improves the accuracy and transparency of AI in tasks requiring numerical or symbolic reasoning.

Traditionally, large language models like those behind ChatGPT have excelled in tasks such as drafting documents, analysing sentiment, or translating languages. However, these models often struggle with tasks that demand numerical or symbolic reasoning. For instance, while a model might recite a list of U.S. presidents and their birthdays, it might falter when asked to identify which presidents elected after 1950 were born on a Wednesday. The solution to such problems lies beyond mere language processing.

MIT researchers propose a groundbreaking approach where the language model generates and executes a Python program to solve complex queries. NLEPs work by prompting the model to create a detailed program that processes the necessary data and then presents the solution in natural language. This method enhances the model's ability to perform a wide range of reasoning tasks with higher accuracy.

How NLEPs Work

NLEPs follow a structured four-step process. First, the model identifies and calls the necessary functions to tackle the task. Next, it imports relevant natural language data required for the task, such as a list of presidents and their birthdays. In the third step, the model writes a function to calculate the answer. Finally, it outputs the result in natural language, potentially accompanied by data visualisations.
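To make the four steps concrete, here is a hedged sketch of the kind of program an NLEP prompt might produce for the presidents example mentioned earlier. The data subset and function names are illustrative assumptions, not taken from the MIT paper.

```python
# Hedged sketch of an NLEP-style program for the question "which presidents
# elected after 1950 were born on a Wednesday?". The data is an illustrative
# subset, not the full list a model would embed (Step 2 of the process above).
from datetime import date

presidents = [
    # (name, year first elected, birth date)
    ("Dwight D. Eisenhower", 1952, date(1890, 10, 14)),
    ("John F. Kennedy", 1960, date(1917, 5, 29)),
    ("Jimmy Carter", 1976, date(1924, 10, 1)),
    ("Ronald Reagan", 1980, date(1911, 2, 6)),
    ("Barack Obama", 2008, date(1961, 8, 4)),
]


# Step 3: compute the answer symbolically instead of "guessing" it in text.
def elected_after_born_on(records, year=1950, weekday="Wednesday"):
    names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
    target = names.index(weekday)
    return [name for name, elected, born in records
            if elected > year and born.weekday() == target]


# Step 4: report the result in natural language.
matches = elected_after_born_on(presidents)
print(f"Presidents in this subset elected after 1950 and born on a Wednesday: {matches or 'none'}")
```

Because the answer is computed symbolically, anyone can read the program, check the embedded facts, and rerun it, which is exactly the transparency benefit described next.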

This structured approach allows users to understand and verify the program's logic, increasing transparency and trust in the AI's reasoning. Errors in the code can be directly addressed, avoiding the need to rerun the entire model, thus improving efficiency.

One significant advantage of NLEPs is their generalizability. A single NLEP prompt can handle various tasks, reducing the need for multiple task-specific prompts. This makes the approach not only more efficient but also more versatile.

The researchers demonstrated that NLEPs could achieve over 90 percent accuracy in various symbolic reasoning tasks, outperforming traditional task-specific prompting methods by 30 percent. This improvement is notable even when compared to open-source language models.

NLEPs offer an additional benefit of improved data privacy. Since the programs run locally, sensitive user data does not need to be sent to external servers for processing. This approach also allows smaller language models to perform better without expensive retraining.

Despite these advantages, NLEPs rely on the model's program generation capabilities, meaning they may not work as well with smaller models trained on limited datasets. Future research aims to enhance the effectiveness of NLEPs in smaller models and explore how different prompts can further improve the robustness of the reasoning processes.

The introduction of natural language embedded programs marks a significant step forward in combining the strengths of programming and natural language processing in AI. This innovative approach not only enhances the accuracy and transparency of language models but also opens new possibilities for their application in complex problem-solving tasks. As researchers continue to refine this technique, NLEPs could become a cornerstone in the development of trustworthy and efficient AI systems.


Android 15's Lockdown Mode Safeguards Your Phone Against "Juice Jacking"

 

There are a few very good reasons not to use any random cable handed to you to charge your favourite Android phone, or any other device for that matter, at a public charging station. You might not get the fastest charging speeds, and more importantly, there are a number of security issues. Although they are not easily scaled, "juice jacking" attacks that weaponize charging stations do occur; Android 15's Lockdown mode now includes defences against these types of attacks.

Google is still working on Android 15, which is now in beta testing. The most recent development, spotted by apex tech sleuth Mishaal Rahman (via Android Authority), suggests that the operating system update will have built-in protections against fraudulent individuals who attempt to use juice-jacking devices. These attacks have the ability to install malicious apps, run commands, transmit malicious payloads to your device, and maliciously control how the USB connection handles data.

However, Rahman claims there is no reason to be concerned about juice jackers because Android currently prevents you from enabling USB Debugging before you unlock your smartphone. Access to files on the device is similarly restricted until you change the USB connection mode to explicitly allow file transfers. These safety nets work together to prevent attempts to execute ADB commands or tamper with your device's files. Lockdown mode, on the other hand, takes safety to the next level, and it just gets better with Android 15.

Put things on lockdown

Lockdown mode, which was introduced as a safety feature alongside Android 9 in 2018, was made available as a default in the power menu on Pixel phones with Android 12. Other device manufacturers are free to place the option elsewhere, but once selected, it disables all notifications and requires your original PIN, password, or pattern to restore device functionality.

After testing with a Pixel 6 Pro running Android 15 and another device running Android 14, Rahman confirmed that the most recent firmware prevents USB data access. Any current connections to the ADB terminal or linked input devices are likewise terminated when Lockdown mode is enabled. It should work as soon as eligible Pixel phones receive the Android 15 upgrade, but other OEMs must update their devices' USB HAL to include the necessary APIs for this implementation to function. 

In any case, the Android 15 upgrade includes additional safeguards against juice jacking, even if you were already adequately protected on older versions. Still, taking precautions like avoiding unfamiliar chargers at airports and malls remains the best and most effective defence.

Ransomware Attack on Pathology Services Vendor Disrupts NHS Care in London

 

A ransomware attack on a pathology services vendor earlier this week continues to disrupt patient care, including transplants, blood testing, and other services, at multiple NHS hospitals and primary care facilities in London. The vendor, Synnovis, is struggling to recover from the attack, which has affected all its IT systems, leading to significant interruptions in pathology services. The Russian-speaking cybercriminal gang Qilin is believed to be behind the attack. Ciaran Martin, former chief executive of the U.K. National Cyber Security Center, described the incident as "one of the more serious" cyberattacks ever seen in England. 

Speaking to the BBC, Martin indicated that the criminal group was "looking for money" by targeting Synnovis, although the British government maintains a policy against paying ransoms. Synnovis is a partnership between two London-based hospital trusts and SYNLAB. The attack has caused widespread disruption. According to Brett Callow, a threat analyst at security firm Emsisoft, the health sector remains a profitable target for cybercriminals. He noted that attacks on providers and their supply chains will persist unless security is bolstered and financial incentives for such attacks are removed. 

In an update posted Thursday, the NHS reported that organizations across London are working together to manage patient care following the ransomware attack on Synnovis. Affected NHS entities include Guy's and St Thomas' NHS Foundation Trust and King's College Hospital NHS Foundation Trust, both of which remain in critical incident mode. Other impacted entities are Oxleas NHS Foundation Trust, South London and Maudsley NHS Foundation Trust, Lewisham and Greenwich NHS Trust, Bromley Healthcare, and primary care services in South East London. 

The NHS stated that pathology services at the impacted sites are available but operating at reduced capacity, prioritizing urgent cases. Urgent and emergency services remain available, and patients are advised to access these services normally by dialing 999 in emergencies or using NHS 111. The Qilin ransomware group, operating on a ransomware-as-a-service model, primarily targets critical infrastructure sectors. According to researchers at cyber threat intelligence firm Group-IB, affiliate attackers retain between 80% and 85% of extortion payments. Synnovis posted a notice on its website Thursday warning clinicians that all southeast London phlebotomy appointments are on hold to ensure laboratory capacity is reserved for urgent requests. 

Several phlebotomy sites specifically managed by Synnovis in Southwark and Lambeth will be closed from June 10 "until further notice." "We are incredibly sorry for the inconvenience and upset caused to anyone affected." Synnovis declined to provide additional details about the incident, including speculation about Qilin's involvement. The NHS did not immediately respond to requests for comment, including clarification about the types of transplants on hold at the affected facilities. The Synnovis attack is not the first vendor-related incident to disrupt NHS patient services. Last July, a cyberattack against Ortivus, a Swedish software and services vendor, disrupted access to digital health records for at least two NHS ambulance services in the U.K., forcing paramedics to use pen and paper. 

Additionally, a summer 2022 attack on software vendor Advanced, which provides digital services for the NHS 111, resulted in an outage lasting several days. As the healthcare sector continues to face such cybersecurity threats, enhancing security measures and removing financial incentives for attackers are crucial steps toward safeguarding patient care and data integrity.

New macOS Malware Threat: What Apple Users Need to Know

 

Recently, the Moonlock Lab cybersecurity team discovered a macOS malware strain that can easily evade detection, posing a significant threat to users' data privacy and security. The infection chain for this malware begins when a Mac user visits a website in search of pirated software. 

On such sites, users might encounter a file titled CleanMyMacCrack.dmg, believing it to be a cracked version of the popular Mac cleaning software, CleanMyMac. When this DMG file is launched on the computer, it executes a Mach-O file, which subsequently downloads an AppleScript designed to steal sensitive information from the infected Mac.

Once the malware infects a macOS computer, it can perform a variety of malicious actions. It collects and stores the Mac owner's username and sets up temporary directories to hold stolen data before exfiltration. The malware extracts browsing history, cookies, saved passwords, and other sensitive data from web browsers. It also identifies and accesses directories that commonly contain cryptocurrency wallets. Additionally, it copies macOS keychain data, Apple Notes data, and cookies from Safari, gathers general user information, system details, and metadata, and then exfiltrates all this stolen data to threat actors.

Moonlock Lab has linked this macOS malware to a well-known Russian-speaking threat actor, Rodrigo4. This hacker has been active on the XSS underground forum, where he has been seen recruiting other hackers to help distribute his malware using SEO manipulation and online ads. This discovery underscores the growing threat of sophisticated malware targeting macOS users, a group often perceived as being less vulnerable to such attacks.

Despite Apple's strong security measures, this incident highlights that no system is entirely immune to threats, especially when users are lured into downloading malicious software from untrustworthy sources. To protect yourself from such threats, it is essential to take several precautions. First and foremost, avoid downloading pirated software and ensure that you only use trusted and official sources for your applications. Pirated software often hides malware that can compromise your system's security. Installing reputable antivirus software and keeping it updated can help detect and block malware on macOS. Regularly updating your macOS and all installed applications is crucial to patch any security vulnerabilities that may be exploited by attackers. 

Additionally, exercise caution with downloads from unfamiliar websites or sources. Always verify the legitimacy of the website and the software before downloading and installing it. Enabling macOS’s built-in security features, such as Gatekeeper and XProtect, can also provide an additional layer of protection against malicious software. Gatekeeper helps ensure that only trusted software runs on your Mac, while XProtect provides continuous background monitoring for known malware. The Moonlock Lab's findings highlight the need for greater awareness and proactive measures to safeguard personal data and privacy. Users should remain vigilant and informed about the latest security threats and best practices for protecting their devices. 
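As a quick way to confirm the Gatekeeper point above, here is a small sketch that checks whether Gatekeeper assessment is enabled, using Apple's spctl command-line tool (macOS only; the exact output wording may vary by version):

```python
# Quick check (macOS only) that Gatekeeper assessment is enabled, using Apple's
# spctl command-line tool; output is typically "assessments enabled" or
# "assessments disabled", though wording can vary by macOS version.
import subprocess

result = subprocess.run(["spctl", "--status"], capture_output=True, text=True)
status = (result.stdout or result.stderr).strip()
print(f"Gatekeeper: {status}")
if "disabled" in status:
    print("Warning: Gatekeeper is off; unsigned apps can run without any prompt.")
```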

By staying informed and cautious, Apple users can better protect their devices from malware and other cybersecurity threats. Awareness of the potential risks and implementing the recommended security practices can significantly reduce the likelihood of falling victim to such malicious activities. As cyber threats continue to evolve, maintaining robust security measures and staying updated on the latest threats will be crucial in ensuring the safety and integrity of personal data on macOS devices.

The Hidden Cost of Connected Cars: Your Driving Data and Insurance

 

Driving to a weekend getaway or a doctor's appointment leaves more than just a memory; it leaves a data trail. Modern cars equipped with internet capabilities, GPS tracking, or services like OnStar, capture your driving history. This data is not just stored—it can be sold to your insurance company. A recent report highlighted how ordinary driving activities generate a data footprint that can be sold to insurers. These data collections often occur through "safe driving" programs installed in your vehicle or connected car apps. Real-time tracking usually begins when you download an app or agree to terms on your car's dashboard screen. 

Car technology has evolved significantly since General Motors introduced OnStar in 1996. From mobile data enhancing navigation to telematics in the 2010s, today’s cars are more connected than ever. This connectivity offers benefits like emergency alerts, maintenance notifications, and software updates. By 2030, it's predicted that over 95% of new cars will have some form of internet connectivity. Manufacturers like General Motors, Kia, Subaru, and Mitsubishi offer services that collect and share your driving data with insurance companies. Insurers purchase this data to analyze your driving habits, influencing your "risk score" and potentially increasing your premiums. 

One example is the OnStar Smart Driver program, which collects data and sends it to manufacturers who then sell it to data brokers. These brokers resell the data to various buyers, including insurance companies. Following a critical report, General Motors announced it would stop sharing data with these brokers. Consumers often unknowingly consent to this data collection. Salespeople at dealerships may enroll customers without clear consent, motivated by bonuses. The lengthy and complex “terms and conditions” disclosures further obscure the process, making it hard for consumers to understand what they're agreeing to. Even diligent readers struggle to grasp the full extent of data collection. 

This situation leaves consumers under constant surveillance, with their driving data monetized without their explicit consent. This extends beyond driving, impacting various aspects of daily life. To address these privacy concerns, the Electronic Frontier Foundation (EFF) advocates for comprehensive data privacy legislation with strong data minimization rules and clear, opt-in consent requirements. Such legislation would ensure that only necessary data is collected to provide requested services. For example, while location data might be needed for emergency assistance, additional data should not be collected or sold. 

Consumers need to be aware of how their data is processed and have control over it. Opt-in consent rules are crucial, requiring companies to obtain informed and voluntary permission before processing any data. This consent must be clear and not hidden in lengthy, jargon-filled terms. Currently, consumers often do not control or even know who accesses their data. This lack of transparency and control highlights the need for stronger privacy protections. By enforcing opt-in consent and data minimization, we can better safeguard personal data and maintain privacy.

Shell Data Breach: Hacker Group 888 Claims Responsibility

 



A hacker group known as 888 has claimed responsibility for a data breach targeting Shell, the British multinational oil and gas company. The breach, allegedly impacting around 80,000 individuals across multiple countries, has raised significant concerns about data security within the organisation.

The compromised data includes sensitive information such as shopper codes, names, email addresses, mobile numbers, postcodes, site addresses, and transaction details. This information reportedly pertains to Australian users, specifically linked to transactions at Reddy Express (formerly Coles Express) locations in Australia. The hacker, using the pseudonym Kingpin, shared samples of the data on a popular hacking forum, indicating that the breach occurred in May 2024.

The breach affects individuals in several countries, including the United States, United Kingdom, Australia, France, India, Singapore, the Philippines, the Netherlands, Malaysia, and Canada. The extensive range of affected regions underscores the potential severity and widespread implications of the breach for Shell’s customers and stakeholders.

At present, there has been no official statement from Shell confirming the breach. The Cyber Express reached out to Shell for verification, but no response has been received. This lack of confirmation leaves the authenticity of the claims uncertain, though the potential risks to those involved are considerable.


This is not the first time Shell has faced cyberattacks. In the past, the company experienced a ransomware attack and a security incident involving Accellion’s File Transfer Appliance. These past events highlight the persistent threat cybercriminals pose to the energy sector.


In response to previous incidents, Shell emphasised its commitment to cybersecurity and data privacy. The company has initiated investigations into the recent claims and is working to address any potential risks. Shell has also engaged with relevant regulators and authorities to ensure compliance with data protection regulations and to mitigate the impact of any breaches.


The situation is still unfolding, and The Cyber Express continues to monitor the developments closely. 


The alleged Shell data breach by hacker group 888 serves as a reminder of the vulnerabilities that even large multinational corporations face in the digital age. As investigations continue, the importance of robust cybersecurity measures and vigilant monitoring cannot be overstated.


Facebook Account Takeovers: Can Tech Giant Stop Hijacking Scams?

 

A Go Public investigation found that Meta has allowed a scam campaign to flourish on Facebook, with fraudsters locking users out of their accounts and impersonating them.

According to the CBC, Lesa Lowery is one of the many victims. For three days, she watched helplessly as Facebook scammers duped her friends out of thousands of dollars for counterfeit goods. Her Facebook account was taken over in early March.

Lowery's account was hijacked after she changed her password in response to an email made to look like it came from Facebook. The scammer locked her out and went on to swindle her friends out of $2,500. Many of Lowery's friends reported the incident to Facebook, but Meta did not act; the scammer deleted warning posts and blocked friends. Lowery's ex-neighbour, Carol Stevens, lost $250 in the scam.

Are Meta’s efforts enough? 

Claudiu Popa, author of "The Canadian Cyberfraud Handbook," lambasted Meta for failing to protect users even as it generates billions, noting that Meta's sales increased 16% to $185 billion last year.

Meta wrote to Go Public, stating that it has "over 15,000 reviewers across the globe" to address such breaches, but did not explain why the retirement home fraud proceeded.

Popa, a cybercrime specialist, believes that fraudsters employ AI to identify victims and craft convincing emails. According to Sapio Research, 85% of cybersecurity professionals believe that AI-powered attacks have increased.

In March, 41 US state attorneys general urged Meta to do more to assist customers as the number of Facebook account takeovers increased. Meta indicated that it had attempted to fix the issue but did not disclose specifics. Credential stuffing attacks and data breaches can result in account takeovers, with the stolen credentials later sold in dumps.

According to The Register, Facebook accounts have also been taken over in the US through phone number recycling. New telecom customers receive recycled numbers that may still be linked to the previous owner's online accounts, so a reassigned number can receive a password reset request or a two-factor authentication token, potentially allowing unauthorised access.

Meta is aware of phone number recycling-related account takeovers; however, the social media giant noted that it "does not have control over telecom providers" reissuing phone numbers, and that the affected users no longer held the numbers linked to their Facebook accounts.

Meanwhile, cybersecurity experts propose that the government take measures to address Facebook account takeovers. According to Popa, legislation is needed to make companies like Meta protect users and respond quickly to fraud.

Risks of Generative AI for Organisations and How to Manage Them

 

Employers should be aware of the potential data protection issues before experimenting with generative AI tools like ChatGPT. You can't just feed human resources data into a generative AI tool because of the rise in privacy and data protection laws in the US, Europe, and other countries in recent years. After all, employee data—including performance, financial, and even health data—is often quite sensitive.

Obviously, this is an area where companies should seek legal advice. It's also a good idea to consult with an AI expert regarding the ethics of utilising generative AI (to ensure that you're acting not only legally, but also ethically and transparently). But, as a starting point, here are two major factors that employers should be aware of. 

Feeding personal data

As I previously stated, employee data is often highly sensitive and personal. It is precisely the type of data that, depending on your jurisdiction, is usually subject to the most stringent forms of legal protection.

This makes it highly dangerous to feed such data into a generative AI tool. Why? Because many generative AI technologies use the information provided to fine-tune the underlying language model. In other words, it may use the data you provide for training purposes, and it may eventually expose that information to other users. So, suppose you employ a generative AI tool to generate a report on employee salary based on internal employee information. In the future, the AI tool can employ the data to generate responses for other users (outside of your organisation). Personal information could easily be absorbed by the generative AI tool and reused. 

This isn't as shady as it sounds. Many generative AI programmes' terms and conditions explicitly specify that data provided to the AI may be utilised for training and fine-tuning or revealed when users request cases of previously submitted inquiries. As a result, when you agree to the terms of service, always make sure you understand exactly what you're getting yourself into. Experts urge that any data given to a generative AI service be anonymised and free of personally identifiable information. This is frequently referred to as "de-identifying" the data.
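As a rough illustration of what "de-identifying" can mean in practice, here is a minimal sketch that scrubs a few obvious identifiers before a record is sent anywhere. The regex patterns and placeholder tokens are assumptions for illustration; a real deployment would use a vetted PII-detection tool (names, addresses, and free text need more than regexes) plus legal review.

```python
# Illustrative "de-identification" pass before any record leaves your systems.
# The patterns and placeholder tokens are assumptions; real deployments should
# use a vetted PII-detection tool, and names/addresses need more than regexes.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # US social security numbers
]


def deidentify(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


record = "Jane Roe, jane.roe@example.com, 555-867-5309, salary review pending."
print(deidentify(record))  # Jane Roe, [EMAIL], [PHONE], salary review pending.
```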

Risks of generative AI outputs 

There are risks associated with the output or content developed by generative AIs, in addition to the data fed into them. In particular, there is a possibility that the output from generative AI technologies will be based on personal data acquired and handled in violation of data privacy laws. 

For example, suppose you ask a generative AI tool to produce a report on average IT salaries in your area. There is a possibility that the tool will have scraped personal data from the internet without authorisation, in violation of data protection rules, and then serve it to you. Employers who use personal data supplied by a generative AI tool may be held liable for data protection violations. For the time being, it is a legal grey area, with the generative AI provider likely bearing most or all of the liability, but the risk remains.

Cases like this are already appearing. Indeed, one lawsuit claims that ChatGPT was trained on "massive amounts of personal data," such as medical records and information about children, that was accessed without consent. You do not want your organisation to become unwittingly involved in litigation like this. Essentially, we're discussing an "inherited" risk of violating data protection regulations, but it is a risk nonetheless.

The way forward

Employers must carefully evaluate the data protection and privacy consequences of utilising generative AI and seek expert assistance. However, don't let this put you off adopting generative AI altogether. Generative AI, when used properly and within the bounds of the law, can be an extremely helpful tool for organisations.

New Cuckoo Malware Targeting macOS Users to Steal Sensitive Data

 

Cybersecurity experts have identified a new information stealer targeting Apple macOS computers that is intended to establish persistence on compromised hosts and function as spyware.

The malware, dubbed Cuckoo by Apple device management firm Kandji, is a universal Mach-O binary that can execute on both Intel and Arm Macs. The exact distribution vector is currently unknown, but there are indications that the binary is hosted on sites such as dumpmedia[.]com, tunesolo[.]com, fonedog[.]com, tunesfun[.]com, and tunefab[.]com, which claim to provide free and paid versions of applications for ripping music from streaming services and converting it to MP3 format.

The disk image file downloaded from the websites is responsible for spawning a bash shell to collect host data and ensuring that infected machines are not located in Armenia, Belarus, Kazakhstan, Russia, or Ukraine.

The malicious binary is executed only if the locale check is successful. It also achieves persistence through the use of a LaunchAgent, a strategy previously employed by other malware families such as RustBucket, XLoader, JaskaGO, and a macOS backdoor that bears similarities with ZuRu.
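Since LaunchAgent persistence comes up repeatedly in these macOS malware families, here is a minimal defensive sketch that simply lists the standard LaunchAgent and LaunchDaemon folders for manual review. It does not look for any specific malware filename; deciding what to flag is left to the reviewer.

```python
# Defensive sketch: list macOS LaunchAgent/LaunchDaemon folders so unexpected
# persistence entries stand out in a manual review. Standard paths only; no
# specific malware filename is assumed.
from pathlib import Path

locations = [
    Path.home() / "Library/LaunchAgents",
    Path("/Library/LaunchAgents"),
    Path("/Library/LaunchDaemons"),
]

for folder in locations:
    if not folder.is_dir():
        continue
    print(f"== {folder} ==")
    for plist in sorted(folder.glob("*.plist")):
        print("  ", plist.name)  # review anything you don't recognise
```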

Cuckoo, like the MacStealer macOS stealer malware, uses osascript to create a fake password prompt, luring users into entering their system passwords for privilege escalation. "This malware queries for specific files associated with specific applications, in an attempt to gather as much information as possible from the system," researchers Adam Kohler and Christopher Lopez stated. 

It can execute a sequence of commands to gather hardware data, capture currently running processes, search for installed apps, take screenshots, and collect data from iCloud Keychain, Apple Notes, web browsers, cryptocurrency wallets, and apps such as Discord, FileZilla, Steam, and Telegram. 

"Each malicious application contains another application bundle within the resource directory," the researchers added. "All of those bundles (except those hosted on fonedog[.]com) are signed and have a valid Developer ID of Yian Technology Shenzhen Co., Ltd (VRBJ4VRP).” 

The news comes nearly a month after Kandji, the Apple device management company, revealed another stealer called CloudChat, which masquerades as a privacy-oriented messaging programme and can compromise macOS users whose IP addresses do not geolocate to China. That spyware harvests cryptocurrency private keys copied to the clipboard as well as data linked to wallet extensions installed in Google Chrome.

How to Erase The Personal Details Google Knows About You

 

One can get a sense of the volume of data they are giving away to Google every day by considering all the things they do on Chrome, Gmail, YouTube, Google Maps, and other Google services. That is... a lot for most of us. 

Google at least offers a thorough web dashboard that you can use to view some of the data being acquired, regardless of whether you believe the targeted advertising and data collecting are worth the free apps you receive in return.

It allows you to eliminate all of the data that Google has already gathered, prevent it from collecting further, or have your data automatically deleted after a predetermined amount of time (such as three months). If you intend to delete your Google account, you can also utilise these tools to clean the records; however, doing so should also remove all of the data linked to your account.

Here's how to use the options that are accessible to you.

Locate your data 

Getting started is really simple: Open your Google account page in your preferred web browser, and sign in if necessary. This screen displays your Google subscriptions, the devices to which you are signed in with your Google account, and any passwords that you may have saved, among other details. 

  • On the left, click "Data and privacy."
  • Look for the history settings. It is divided into three major categories: Web and apps, location, and YouTube.
  • To get a complete list of this data, click the My Activity icon at the bottom of the section. You'll see everything you've done that has been recorded in Google products, beginning with the most recent.
  • Select filter by date & product to narrow the results to certain date ranges or apps.
  • To delete a filter you've applied, click the X at the top of the list. 
  • If additional information is available, click on any entry in the list to view it. You can open YouTube videos or websites that you've visited.

Delete your data

When it comes to data that Google has already acquired and logged, you can delete it in a number of ways:

  • If you are viewing the entire activity list, click Delete (to the right of the filter).
  • You can delete records from the last hour, day, or a custom range. You can also select Always to erase everything.
  • If you filtered the list by date or product, click Delete results to remove everything that matched the filter.
  • Whether or not the list is filtered, clicking the X next to any single entry deletes it.

It's useful to have a central repository for all of your data accessible via a single online site, but some sorts of data can also be found elsewhere. You can remove your web activity from within Chrome as long as you are signed in to Google, for example, or access your YouTube view history via the YouTube website.