
Cracking Down on Crime: Europol Shares Data on Europe's Top Threats

 


Serious organized crime has increased considerably over the past few years and continues to pose a significant threat to the EU's internal security. Law enforcement and policymakers need a clear picture of the most threatening criminal networks operating in and affecting the EU if they are to prioritise resources effectively and guide policy action. 

Certain traits make successful companies agile and resilient: they anticipate trends and pivot rapidly to new environments while keeping their operations running. A report released by Europol on Friday indicates that the most threatening criminal networks across the EU are equipped with the same skills. 

The report, presented on Friday, April 5, details the state of organized crime in Europe and flags 821 criminal networks operating within EU territory as the most dangerous, the aim being to make the invisible visible so that it can be known, fought, and defeated. To produce the report, Europol drew on information and participation from law enforcement agencies in all 27 member states as well as 17 other countries. 

As Europol points out, some key characteristics distinguish the 821 most threatening criminal networks. They are agile, able to adapt their business processes at short notice and to exploit economies of scale, which also helps them overcome obstacles raised by law enforcement. 

They are also borderless: although their activities often remain concentrated in a single country, they operate across EU and non-EU countries without significant difficulty. They are controlling, maintaining tight oversight of everything within the organization, and they generally specialize in a single criminal activity. And they are destructive: the corruption the 821 networks rely on does significant damage to the EU's internal security. 

According to Europol's report, 50 per cent of the most dangerous criminal networks are involved in drug trafficking, and for 36 per cent of those networks, drug trafficking is their sole business. A further 15 per cent deal exclusively in fraud, while the remaining 6 per cent deal in human trafficking. 

On the drugs front, beyond heroin, cannabis, and cocaine, there is growing concern about new substances arriving on the European market, such as fentanyl, which has already caused thousands of deaths in the United States. Recent months have also seen massive shipments of drugs hidden in bananas turn up across Europe. 

In February, more than 12,500 pounds of cocaine was found in a banana shipment in the British Isles, a record for the most drugs seized in a single seizure in British history. In August of last year, customs agents in the Netherlands discovered 17,600 pounds of cocaine hidden inside banana crates in Rotterdam's port. 

Three months earlier, in the Italian port of Gioia Tauro, a police dog sniffed out three tons of cocaine hidden in a consignment of bananas. Of the top ten criminal groups identified, nine specialize in cybercrime. Mainly run by Russians and Ukrainians, these organizations are active in France, Germany, Switzerland, and the U.S. 

They have up to 100 members but are built around a core of criminals who distribute ransomware to affiliates, who in turn conduct the cyber attacks. The core group manages the negotiation and payment of ransoms, often in cryptocurrency, and affiliates usually keep 80% of the proceeds of an attack they carry out. 

Service providers lend criminal networks crucial support, both in fraud schemes and by supplying cyber services and technical solutions. Their methods include mass mailings, phishing campaigns, fake websites, fake advertisements, and fake social media accounts. 

According to Europol, such providers also support online fraud schemes and advise on moving cryptocurrency online. To avoid detection by law enforcement, some criminal networks use countermeasures such as encrypted telephones; others shun electronic devices for all communication and meet in person instead, leaving no digital footprint of their activities. 

A report released by the European Commission stated that drug trafficking still stands out as the most significant criminal activity in EU countries, with record cocaine seizures in Europe and a rise in drug-linked violent crime, notably in Belgium and France. 

Half of the most dangerous criminal networks are involved in drug trafficking in some form, whether as their sole business or as part of a wider portfolio. According to the report, more than 70% of networks engage in corruption “to facilitate criminal activity or obstruct law enforcement or judicial processes,” and 68% of networks “use violence as an inherent element of their approach to conduct business.”

Gang violence has reportedly been rife in Antwerp for decades, as the city serves as the main entry point into Europe for Latin American cocaine cartels. Federal authorities say drug trafficking is rapidly affecting society as drug use rises throughout the country. 

According to Ylva Johansson, EU Commissioner for Home Affairs, organised crime is one of the biggest threats facing society today, menacing it with corruption and extreme violence. During a press conference, Europol explained that the data it collected would be shared with law enforcement agencies in EU countries, which should help them better target criminals.

AT&T Data Breach Reveals 73 Million Users' Info on Hacker Forum

 


Telecommunications company AT&T Inc. has confirmed that data recently found on the dark web, relating to 73 million of its past and present customers, appears to date from 2019 or earlier. The data, which reportedly included Social Security numbers and dates of birth, was originally offered for sale in 2021 on the now-defunct RaidForums hacking forum. 

Earlier this month, the same data resurfaced online after being posted by a seller. The information may have included AT&T account numbers, full names, email addresses, mailing addresses, telephone numbers, Social Security numbers, dates of birth, and passcodes. 

The breach was reported on a hacker forum nearly two weeks ago. It is unknown whether the leak is related to a similar, widely reported breach in 2021 that AT&T never acknowledged. Before the leak, the telecom giant denied that the data in question came from its systems and disputed whether it contained accurate customer data. 

The “recycled” data, reportedly acquired from a third party, includes 49 million email addresses and 44 million Social Security numbers. It appears to be a repeat leak of customer data from the alleged 2021 hack that AT&T has consistently denied took place, and it was published on the popular hacker marketplace BreachForums on March 17. 

When Recorded Future News asked about the dark web posting two weeks ago, an AT&T representative said the company had "no indication" that its systems had ever been compromised. The spokesperson did note that the data set was similar to one offered for sale in 2021 by the hacker group ShinyHunters, which covered 73 million AT&T customers. 

In that 2021 incident, a threat actor called ShinyHunters was allegedly selling the stolen data of 73 million AT&T customers, including names, addresses, telephone numbers, Social Security numbers, and, for many customers, dates of birth. AT&T denied at the time that it had suffered a breach or that the data was its own. 

The massive dataset recently leaked on a hacking forum was posted by another threat actor, who claimed it is the same data ShinyHunters stole. The leak revealed the same sensitive information ShinyHunters claimed to hold, although not every customer's Social Security number or birth date was exposed. Security researchers describe ShinyHunters, first identified in 2020, as a notorious hacker gang known for high-profile data breaches, including one of 40 million T-Mobile users just weeks before the AT&T claim. 

Security researchers found the gang trying to sell user data stolen from both carriers on dark markets within days of each other. ShinyHunters have since been rumoured to have taken over admin duties at BreachForums after the FBI raided the site last March. 

AT&T continues to deny that a breach occurred or that the data is its own. Yet some AT&T and DirecTV customers had used Gmail or Yahoo's disposable email feature to create addresses specific to their DirecTV or AT&T accounts, used only when signing up for the service. These addresses were confirmed never to have been used on any other platform, suggesting the data must have originated with AT&T or DirecTV. 

In a published statement and on a new page devoted to keeping AT&T accounts secure, the company says more information about the breach will be shared with the public. Analyses of the data have repeatedly found the same sensitive information that ShinyHunters claimed to have stolen, though, again, not every customer's Social Security number or birth date was exposed. AT&T once more denied that the breach occurred or that the data originated with it. 

BleepingComputer has interviewed more than 50 AT&T and DirecTV customers since the data was leaked, and their accounts confirm that the leaked information matches AT&T account data. Cybersecurity expert Troy Hunt has warned that if affected customers are not notified promptly, the breach could give rise to class action lawsuits. 

AT&T's wireless 5G network reaches approximately 290 million people in the United States, making it one of the country's largest mobile and internet providers. This is hardly the first time the company has come under scrutiny over security lapses: at the end of last year it faced a widespread outage, attributed to a coding error, that took down its mobile phone service. 

That incident was attributed to vulnerabilities within AT&T's infrastructure, though AT&T maintained there was no malicious attack behind it. And back in 2019 it was revealed that AT&T employees had been bribed to set up unauthorized wireless access points inside the company's infrastructure.

Parent Company of Vans Alerts 35.5 Million Customers Following Data Breach

 

VF Corporation, the parent company of popular brands like Vans and North Face, has confirmed a significant data breach that occurred in December, affecting approximately 35.5 million of its customers. 

The breach exposed sensitive information including email addresses, names, phone numbers, and billing and shipping addresses. In certain instances, details of payment methods, order history, and total order value were also compromised.

While VF Corporation reassured customers that bank account and credit card information were not accessed by fraudsters, concerns remain about potential identity theft, phishing, and other fraudulent activities that could stem from the breach, depending on the specific personal data exposed. Despite this, the company stated that there is "no evidence" suggesting illicit use of compromised personal information such as phone numbers, emails, addresses, or names.

The disclosure of the breach came a month after its detection on December 13, with VF Corporation acknowledging the disruption to its business operations and the impact on its ability to serve customers. Though the company did not explicitly label the incident as ransomware in its regulatory filings, the nature of the attack, involving encryption of IT systems and data theft, bears similarities to such attacks.

While VF Corporation disclosed the breach concurrently with recommendations from the U.S. Securities and Exchange Commission regarding data breach disclosures, concerns persist about the effectiveness of existing cybersecurity regulations in the United States. 

Research from George Mason University and the University of Minnesota suggests that breach notification laws (BNLs), which require businesses to inform customers of data compromises, have not been effective in reducing the frequency of data breaches. Despite these laws being enacted by all 50 states, the study found no significant decline in data misuse following breaches, regardless of various factors such as duration, types of breaches, and affected companies.

Cybersecurity Specialists Caught Moonlighting as Dark Web Criminals

 

A recent study conducted by the Chartered Institute of Information Security (CIISec) has uncovered a concerning trend in the cybersecurity field. The study reveals that many cybersecurity professionals, facing low pay and high stress, are resorting to engaging in cybercrime activities on the dark web. This revelation adds to the challenges faced by security leaders who already feel ill-equipped to combat the increasing threat of AI-driven cybercrime.

The investigation, led by a former police officer turned cyber investigation specialist, involved six months of scouring dark web sites and job postings. The findings exposed numerous individuals offering their programming skills at remarkably low rates. For instance, one Python developer and Computer Science student advertised their services for as little as $48 (£25) per hour, offering to develop cybercrime tools such as VoIP chatbots, AI chatbots, and hacking frameworks.

In addition to programmers, the investigation uncovered various professionals willing to assist cybercriminals in their activities. These included voiceover artists for vishing campaigns, graphic designers, public relations professionals, and content writers. Despite the presence of these individuals, the investigator noted that it was relatively easy to distinguish between professionals and hardcore cybercriminals, with professionals often referencing their legitimate roles or using language similar to that found on platforms like LinkedIn.

The study's findings suggest that the stress and burnout experienced in cybersecurity roles, combined with the allure of higher pay elsewhere, are driving professionals towards criminal activity. Amanda Finch, CEO of CIISec, highlighted the impact of long hours and comparatively modest salaries on this trend, noting that the industry must focus on attracting and retaining talent to prevent further defections to cybercrime.

For chief information security officers (CISOs) and executives responsible for safeguarding their companies against cyber threats, these revelations pose a significant challenge. Not only are they contending with escalating cybercriminal activity, including ransomware attacks, but they must also grapple with the possibility of insider threats from their own employees. According to the Office of the Australian Information Commissioner (OAIC), 11% of malicious attacks reported in the latter half of 2023 involved rogue employees.

The escalating threat of AI-augmented cyberattacks further compounds the challenges faced by security professionals. A global survey by Darktrace found that 89% of security professionals anticipate significant impacts from AI-augmented threats within the next two years. Despite this, 60% admit to being unprepared to defend against such attacks.

To combat these evolving threats, defensive AI systems are gaining traction. Initiatives such as the US FTC's push against AI impersonation, Google's AI Cyber Defence Initiative, and the European Union's AI Office demonstrate a concerted effort to develop robust cyber defense mechanisms. The proliferation of AI cyber threat detection-related patents and the entry of new companies into the market underscore the urgency of bolstering defensive capabilities against cyber threats.

Europe's Digital Markets Act Compels Tech Corporations to Adapt

 

Europeans now have the liberty to select their preferred online services, such as browsers, search engines, and iPhone apps, along with determining the usage of their personal online data. 

These changes stem from the implementation of the Digital Markets Act (DMA), a set of laws introduced by the European Union targeting major technology firms including Amazon, Apple, Microsoft, Google (under Alphabet), Meta (formerly Facebook), and ByteDance (owner of TikTok).

This legislation marks Europe's ongoing efforts to regulate large tech companies, requiring them to adapt their business practices. Notably, Apple has agreed to allow users to download smartphone apps from sources other than its App Store. The DMA applies to 22 services ranging from operating systems to messaging apps and social media platforms, affecting prominent offerings like Google Maps, YouTube, Amazon's Marketplace, Apple's Safari browser, Meta's Facebook, Instagram, WhatsApp, Microsoft Windows, and LinkedIn.

Companies found in violation of the DMA face hefty fines of up to 10% of global annual revenue, rising to 20% for repeat offences, and even potential breakup for severe breaches. The impact of these rules is not limited to Europe: other countries, including Japan, Britain, Mexico, South Korea, Australia, Brazil, and India, are considering similar legislation to curb tech giants' dominance of online markets. 

One significant change resulting from the DMA is Apple's decision to allow European iPhone users to download apps from sources beyond its App Store, a move the company had previously resisted. However, Apple will introduce a 55-cent fee for each iOS app downloaded from external stores, raising concerns among critics about the viability of alternative app platforms.

Furthermore, the DMA grants users greater freedom to choose their preferred online services and restricts companies from favouring their own offerings in search results. 

For instance, Google search results will now include listings from competing services like Expedia for searches related to hotels. Additionally, users can opt out of targeted advertising based on their online data, while messaging systems are required to be interoperable, forcing Meta to propose solutions for seamless communication between its platforms, Facebook Messenger and WhatsApp.

Researchers Develop AI "Worms" Capable of Inter-System Spread, Enabling Data Theft Along the Way

 

A team of researchers has developed a self-replicating computer worm designed to target AI-powered applications like Gemini Pro, ChatGPT 4.0, and LLaVA. The aim of this project was to showcase the vulnerabilities in AI-enabled systems, particularly how interconnections between generative-AI platforms can facilitate the spread of malware.

The researchers, consisting of Stav Cohen from the Israel Institute of Technology, Ben Nassi from Cornell Tech, and Ron Bitton from Intuit, dubbed their creation 'Morris II', drawing inspiration from the infamous 1988 internet worm.

Their worm was designed with three main objectives. Firstly, it was engineered to replicate itself using adversarial self-replicating prompts, which exploit the AI applications' tendency to output the original prompt, thereby perpetuating the worm. 

Secondly, it aimed to carry out various malicious activities, ranging from data theft to the creation of inflammatory emails for propagandistic purposes. Lastly, it needed the capability to traverse hosts and AI applications to proliferate within the AI ecosystem.

The worm utilizes two primary methods for propagation. The first method targets AI-assisted email applications employing retrieval-augmented generation (RAG), where a poisoned email triggers the generation of a reply containing the worm, subsequently spreading it to other hosts. The second method involves inputs to generative-AI models, prompting them to create outputs that further disseminate the worm to new hosts.

During testing, the worm successfully pilfered sensitive information such as social security numbers and credit card details.

To raise awareness about the potential risks posed by such worms, the researchers shared their findings with Google and OpenAI. While Google declined to comment, an OpenAI spokesperson acknowledged the potential exploitability of prompt-injection vulnerabilities resulting from unchecked or unfiltered user inputs.

Instances like these underscore the imperative for increased research, testing, and regulation in the deployment of generative-AI applications.

Cyberattack on Hamilton City Hall Expands to Impact Additional Services

 

Hamilton is currently facing a ransomware attack, causing widespread disruptions to city services for more than a week. City manager Marnie Cluckie disclosed the nature of the cyber attack during a virtual press conference on Monday, marking the first public acknowledgment of the incident since it began on February 25. 

The attack has resulted in the shutdown of almost all city phone lines, hampering city council operations and affecting numerous services such as the bus schedule app, library WiFi, and permit applications.

Cluckie mentioned that the city has not provided a specific timeframe for resolving the situation, emphasizing that systems will only be restored once deemed safe and secure. While the city has not detected any unauthorized access to personal data, Hamilton police have been alerted and will conduct an investigation.

Regarding the attackers' demands, Cluckie remained cautious, refraining from disclosing details such as the requested amount of money or their location due to the sensitive nature of the situation. However, she mentioned that the city is covered by insurance for cybersecurity breaches and has enlisted the expertise of cybersecurity firm Cypfer to manage the incident response.

Ransomware attacks, characterized by denying access to systems or data until a ransom is paid, can have devastating consequences, as highlighted by the Canadian Centre for Cyber Security. Although paying the ransom does not guarantee system restoration, it is sometimes deemed necessary, as seen in previous cases involving other municipalities like St. Marys and Stratford.

Once the city's systems are restored, Cluckie will oversee a comprehensive review to understand the breach's cause and implement preventive measures. Council meetings have been postponed until at least March 15 due to operational constraints, with plans to resume once the situation stabilizes.

The impact of the attack on various city services is extensive. Phone lines for programs, councillors, and essential facilities like long-term care homes are down. Online systems for payments and services related to fire prevention, permits, and property are inaccessible. Engineering services, cemeteries, libraries, public health, property taxes, Ontario Works, vendor payments, waste management, child care, transit, Hamilton Water, city mapping, and recreation facilities are all affected to varying degrees, with disruptions in communication, payments, and service availability.

Efforts are underway to mitigate the effects of the attack, but until the situation is resolved, residents and city officials must navigate the challenges posed by the ransomware attack.

Indian Authorities Probe Data Breach Concerns Involving PMO and EPFO

 

The Open-Source Intelligence (OSINT) team at India Today reviewed leaked data that claimed a Chinese state-affiliated hacker group had targeted major Indian government offices, such as the "PMO" (likely the Prime Minister's Office), as well as businesses like Reliance Industries Limited and Air India. 

Over the weekend, thousands of files, images, and chat messages related to I-Soon—a claimed cybersecurity contractor for China's Ministry of Public Security (MPS)—were secretly shared on GitHub.

The leak reveals a complex network of covert attacks, spyware operations, and sophisticated surveillance by Chinese government-linked cyber criminals. 

A machine-translated version of the leaked internal documents, originally written in Mandarin, shows hackers documenting their techniques, targets, and exploits. Targets included the North Atlantic Treaty Organisation (NATO), an intergovernmental military alliance, European governments, and organisations, as well as Beijing's friends such as Pakistan. 

Indian targets 

The data stolen names Indian targets such as the Ministry of Finance, the Ministry of External Affairs, and the "Presidential Ministry of the Interior," which is likely a reference to the Ministry of Home Affairs. 

During the peak of India-China border tensions, advanced persistent threat (APT) groups stole 5.49GB of data from various offices of the "Presidential Ministry of the Interior" between May 2021 and October 2021. 

"In India, the primary work goals are the ministries of foreign affairs, finance, and other key departments. We continue to monitor this sector closely and want to capitalise on its potential in the long run," reads the translated India section of what appears to be an internal report prepared by I-Soon. 

User data for the state-run pension fund management, the Employees' Provident Fund Organisation (EPFO), the state telecom provider Bharat Sanchar Nigam Limited (BSNL), and the private healthcare chain Apollo Hospitals were also allegedly compromised. 

The leaked documents also mention about 95GB of India's immigration statistics from 2020, referred to as "entry and exit points data". Notably, India-China relations deteriorated further after the 2020 conflict in the Galwan Valley.

"India has always been a major emphasis for the Chinese APT side of things. The stolen data inevitably covers quite a few Indian organisations, including Apollo Hospital, persons coming in and out of the nation in 2020, the Prime Minister's Office, and population figures," said Taiwanese researcher Azaka, who initially uncovered the GitHub hack. 

This is not the first time China has been blamed for cyberattacks on India. Seven Indian power hubs were reportedly targeted by hackers linked to China in 2022. Threat actors attempted to breach India's power system in 2021 as well.

Here's How to Safeguard Your Online Travel Accounts from Hackers

 

Just days following Kay Pedersen's hotel reservation in Chiang Mai, Thailand, via Booking.com, she received a troubling email. The email, poorly written in broken English, warned her of "malicious activities" within her account.

Subsequently, Kay and her husband, Steven, encountered issues. Steven noticed unauthorized reservations at different hotels, prompting them to report the fraudulent activity to Booking.com. In response, Booking.com cancelled all their bookings, including the one in Chiang Mai. Despite their immediate action, restoring their original reservation proved challenging. While Booking.com eventually reinstated the reservation, the new rate was more than double the original.

The Pedersens are not isolated cases. A recent surge in hacking incidents has targeted travellers. Criminals reportedly obtained Booking.com passwords through its internal messaging system. Loyalty program accounts and other online travel agencies have also been popular targets.

The susceptibility of travel accounts to attacks is attributed to the wealth of sensitive information they hold, including passports, driver’s licenses, and travel dates. Caroline McCaffery, CEO of ClearOPS, underscores the importance of safeguarding this information.

To mitigate the risk of hacking, travellers can employ several strategies:

1. Utilize two-factor authentication, preferably through an authenticator app, to enhance security.
2. Enable login notifications to receive alerts of any unauthorized account access.
3. Avoid reusing passwords and opt for strong, unique passwords for each account. Password management services like Google Password Manager can be helpful.
4. Exercise caution when using public Wi-Fi networks, and employ a Virtual Private Network (VPN) for added security.
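
The authenticator apps recommended in step 1 generally implement the time-based one-time password (TOTP) algorithm from RFC 6238: your device and the service share a secret, and both derive a short-lived code from it, so a stolen password alone is not enough to log in. A minimal sketch in Python, using only the standard library and the RFC's published test secret rather than any real credential:

```python
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", timestamp // step)   # number of 30-second windows
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds, a phished or leaked code expires almost immediately, which is one reason an authenticator app is preferred over SMS delivery.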

However, travellers themselves also contribute to the problem by sharing excessive personal information and falling victim to phishing scams. Bob Bacheler, managing director of Flying Angels, highlights the risks associated with oversharing on social media and with unknown websites.

Phishing, in particular, remains a prevalent method for hacking attempts. Albert Martinek, a customer cyber threat intelligence analyst at Horizon3.ai, emphasizes the dangers of clicking on suspicious links.
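
Part of what makes such links effective is that a lookalike hostname can embed a trusted brand while actually belonging to the attacker's domain. A small illustrative check, using only Python's standard library (the URLs here are hypothetical examples), of whether a link truly belongs to an expected domain:

```python
from urllib.parse import urlsplit

def belongs_to(url: str, trusted_domain: str) -> bool:
    """True only if the URL's hostname is the trusted domain or a subdomain of it."""
    host = (urlsplit(url).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

# A legitimate subdomain passes; a lookalike that merely embeds the brand does not.
print(belongs_to("https://www.booking.com/help", "booking.com"))                   # -> True
print(belongs_to("https://booking.com.evil-login.example/verify", "booking.com"))  # -> False
```

Real phishing defence involves far more than this (homoglyphs, open redirects, shortened URLs), but the hostname, not the link's visible text, is always the place to look.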

The Pedersens' case underscores the challenges travellers face in resolving hacking incidents. While Booking.com investigated and secured their account, the couple endured uncertainty regarding their hotel reservation.

Ultimately, responsibility for addressing these security concerns lies with the companies that handle travellers' data. Implementing passwordless authentication systems like Passkeys could offer a solution to mitigate hacking risks. However, until travel companies prioritize safeguarding personal information, travellers will continue to bear the consequences.

Beware, iPhone Users: iOS GoldDigger Trojan can Steal Face ID and Banking Details

 

Numerous people pick iPhones over Android phones because they believe iPhones are more secure. However, this may no longer be the case due to the emergence of a new banking trojan designed explicitly to target iPhone users.

According to a detailed report by the cybersecurity firm Group-IB, the Android trojan GoldDigger has now been successfully repurposed to target iPhone and iPad users. The company claims this is the first trojan of its kind designed for iOS, posing a huge threat by collecting facial recognition data, ID documents, and even SMS messages. 

The malware, first discovered last October, now has a new version dubbed GoldPickaxe that is optimised for both iOS and Android devices. Once installed, GoldPickaxe can collect facial recognition data, ID documents, and intercepted text messages, all with the goal of making it easier to withdraw funds from banking and other financial apps. Worse, the biometric data is used to create AI deepfakes that allow attackers to impersonate victims and access their bank accounts. 

It is vital to note that the GoldPickaxe malware is now targeting victims in Vietnam and Thailand. However, as with other malware schemes, if this one succeeds, the cybercriminals behind it may expand their reach to target iPhone and Android users in the United States, Europe, and the rest of the world. 

Android banking trojans are typically propagated via malicious apps and phishing campaigns. It is more difficult to install a trojan on an iPhone, since Apple's ecosystem is more locked down than Google's. However, as hackers often do, they've figured out a way.

Initially, the malware was disseminated via Apple's TestFlight program, which allows developers to deploy beta app versions without going through the App Store's authorization process. However, after Apple removed it from TestFlight, the hackers shifted to a more complicated way employing a Mobile Device Management (MDM) profile, which is generally used to manage enterprise devices. 

Given how lucrative a banking trojan like GoldDigger or GoldPickaxe can be, especially since it can target both iPhones and Android phones, this is unlikely to be the last we hear of this malware or the hackers behind it. As of now, even the latest versions of iOS and iPadOS appear to be vulnerable to the trojan. Group-IB has contacted Apple about the flaw, so a fix is likely in the works.

Here's Why Passkeys Are a Reliable Option to Safeguard Your Data

 

We all use way too many passwords, and they are probably not very secure. Passkeys are the next step in password technology, aiming to replace passwords with a more secure alternative.

Trouble with passwords 

For a long time, we have used usernames and passwords to access websites, apps, and gadgets. A fundamental weakness of passwords is the people who create them: because you have to remember your password, it is easy to fall into the trap of using real words or phrases. It is also common to reuse the same password across several websites and apps instead of creating a unique password for each one.

Many individuals still use passwords like their birthday or the name of their pet, which is obviously not very safe. If an attacker cracks or steals one password, they can then try it everywhere else you use it. This is why unique passwords and two-factor authentication are essential. Password managers, which generate random character strings and remember them for you, were developed to solve this problem.
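To see what a password manager is doing under the hood, here is a minimal sketch of per-site random password generation using Python's standard `secrets` module. The function name and character set are illustrative choices, not a specific manager's implementation:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password, the way a password manager would."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice uses a cryptographically secure random source,
    # unlike random.choice, which is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique random password per site removes the reuse problem:
# cracking one account tells the attacker nothing about the others.
print(generate_password())
```

Because each call draws from a secure random source, two sites never share a password, and none of them is guessable from your birthday or pet's name.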

Passkey vs. password: What distinguishes them 

Over time, not much has changed about the username and password system. Think of passkeys as a full-fledged replacement for the outdated password system. Basically, the process you use to unlock your phone is the same one you use to sign into apps and websites.

This is one of the fundamental distinctions between passkeys and conventional passwords. Your Facebook password works anywhere Facebook is accessible. A passkey, on the other hand, is bound to the device where it was created. Because you are not generating a universal password, a passkey is far more secure.

The same process can be used to verify a QR code scanned with your phone to log in on another device. No password is ever used, so there is nothing to steal or leak. And because you must have your phone in hand to sign in, you don't need to worry about a stranger across the country using your password.
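The reason nothing reusable crosses the network is that passkey logins are a challenge-response exchange. The sketch below is a hypothetical, dependency-free illustration of that flow only: real passkeys use asymmetric WebAuthn signatures, where the server holds just a public key and the private key never leaves the device; an HMAC over a shared key stands in here purely to show the shape of the exchange.

```python
import hmac
import hashlib
import secrets

# Illustrative stand-in: in real passkeys this would be a private
# signing key kept in the device's secure hardware.
device_key = secrets.token_bytes(32)

def server_issue_challenge() -> bytes:
    # A fresh random challenge for every login attempt.
    return secrets.token_bytes(16)

def device_sign(challenge: bytes) -> bytes:
    # The device proves possession of its key by signing the challenge.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
response = device_sign(challenge)
assert server_verify(challenge, response)
# A phishing page that captures this response gains nothing:
# the next login uses a different random challenge.
```

Since each response is valid only for one random challenge, a captured response cannot be replayed, which is what makes passkeys phishing-resistant by design.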

Device compatibility

Passkeys are still very new, but they already work with all the best phones and a majority of the best laptops. This is because the tech behemoths Microsoft, Google, Apple, and others collaborated to create them using the FIDO Alliance and W3C standards. 

Apple brought passkeys to the iPhone with the release of iOS 16 last fall. Passkeys eliminate the need for a master password on Apple devices by using Touch ID and Face ID for authentication. Here's how to set up passkeys on an iPhone, iPad, or Mac if you want to try them out for yourself.

Your passkeys are stored and synchronised using the Google Password Manager if you have one of the top Android phones or an Android tablet. If you want to use passkeys with it, you must first enable screen lock on your Android device, as this stops people with access to your smartphone from utilising your passkeys.

In both Windows 10 and Windows 11, you can use Microsoft's Windows Hello to sign into your accounts using passkeys. Because your passkeys are linked to your Microsoft account, you may use them on any device as long as you're signed in.

Regarding your web browser, passkeys are currently supported by Chrome, Edge, Safari, and Firefox. For Chrome/Edge, you must be using version 79 or above, for Safari, version 13 or higher, and for Firefox, version 60 or higher.

Persistent Data Retention: Google and Gemini Concerns

 


Competing with Microsoft for subscribers, Google has renamed its Bard chatbot Gemini, after the new artificial intelligence model that powers it, and will let consumers pay to upgrade its reasoning capabilities. According to Alphabet, Gemini Advanced, which runs on the more powerful Ultra 1.0 AI model, costs US$19.99 ($30.81) a month.

The plan's two terabytes of cloud storage would cost $9.90 ($15.40) a month on their own, and subscribers will shortly gain access to Gemini through Gmail and the Google productivity suite.

Google One AI Premium is believed to be the company's biggest challenge yet to Microsoft and its partner OpenAI. It also shows how competitive the consumer market has become, now that users have several paid AI subscriptions to choose from.

In the past year, OpenAI launched its ChatGPT Plus subscription, giving users early access to new AI models and other features, while Microsoft recently launched a competing subscription for artificial intelligence in its Word and Excel applications. Both subscriptions cost US$20 a month in the United States.

According to Google, conversations with Gemini are routinely read, tagged, and processed by human annotators to improve the service, even conversations that have been disconnected from Google Accounts. Google has not stated whether these annotators are in-house or outsourced, a distinction that matters for data security.

These conversations are kept for as long as three years, along with "related data" such as the languages and devices used and the user's location. Users can, however, control how their Gemini data is retained.

Using the Gemini Apps Activity setting in Google's My Activity dashboard (it is enabled by default), users can prevent future conversations with Gemini from being saved to their Google Account for review, taking future discussions out of the three-year retention window.

The Gemini Apps Activity screen also lets users delete individual prompts and conversations with Gemini. However, Google says that even when Gemini Apps Activity is turned off, conversations are kept on the user's Google Account for up to 72 hours to maintain the safety and security of Gemini apps and to help improve them.

Google encourages users not to enter confidential or sensitive information into conversations that they would not want reviewers to see, or Google to use to improve its products, services, and machine-learning technologies. On Thursday, Krawczyk said that Gemini Advanced was available in English in 150 countries worldwide.

Next week, Gemini will begin rolling out on smartphones in Asia-Pacific, Latin America and other regions around the world, with additional language support including Japanese and Korean. This will follow the company's smartphone rollout in the US.

The free trial period lasts two months and is available to all users. Announcing the launch, Krawczyk said Google's artificial intelligence approach had matured, bringing "the artist formerly known as Bard" into the "Gemini era." As GenAI tools proliferate, organizations are becoming increasingly wary of the privacy risks associated with them.

According to a Cisco survey conducted last year, 63% of companies have placed restrictions on what kinds of data can be submitted to GenAI tools, while 27% have banned GenAI tools outright. A recent survey also revealed that 45% of employees had submitted "problematic" data into such tools, including personal information and non-public files about their employers, in an attempt to get help.

Several companies, such as OpenAI, Microsoft, Amazon, and Google, now offer enterprise GenAI solutions that do not retain customer data at all, whether for training models or any other purpose. Consumers, as is usually the case where corporate greed is concerned, are likely to get shorted.

City Cyber Taskforce Introduced to Safeguard Corporate Finance in UK

 

Two of the UK's main accounting and security agencies are forming a new taskforce today to help organisations enhance the security of their corporate finance transactions. 

The effort is being led by the Institute of Chartered Accountants in England and Wales (ICAEW) in partnership with the National Cyber Security Centre. Other representatives from banking, law, consulting, and other fields include the Association of Corporate Treasurers, the British Private Equity and Venture Capital Association, Deloitte, EY, KPMG, the Law Society, the London Stock Exchange, the Takeover Panel, and UK Finance.

During the task force's launch earlier this week, the 14 organisations published new regulations meant to help businesses mitigate cyber-risk while engaging in corporate finance activities, such as capital raising, mergers and acquisitions, and initial public offerings. 

The guidance, Cyber Security in Corporate Finance, covers building resilience against cyberattacks, protecting commercially sensitive data shared during deal processes, and responding to breaches, along with important details about the various cyber-risks firms face.

According to Michael Izza, CEO of ICAEW, organisations may be vulnerable to security breaches when confidential information is shared during a transaction. 

“A cyber-attack could have a potentially disastrous impact on the dealmaking process, and so it is crucial that boardrooms across the country treat threats very seriously and take preventative action,” Izza added. “We must do all that we can to ensure London remains a pre-eminent place to do deals, raise investment and generate growth.” 

Sarah Lyons, NCSC deputy director for economy and society, stated that chartered accountants are becoming an increasingly appealing target for threat actors due to the sensitive financial and risk data they handle. 

"A breach in this sector can not only jeopardise organisations and their customers, but can also undermine trust, confidence and reputation," Lyons advised. "I'd encourage everyone from across the industry to engage with this report and the NCSC's range of practical guidance, to help increase their cyber resilience."

X Launches Secure Login with Passkey for iOS Users in US

 

X (formerly known as Twitter) is set to allow users to log in with a passkey rather than a password, but only on iOS devices.

X earlier announced its intention to roll out passwordless technology, and it has now made the option available to iPhone customers. It enables a faster login process by allowing users to authenticate with whatever they use to lock their device, such as their fingerprint, FaceID, or PIN. 

They are also regarded as safer, because the device generates the underlying cryptographic key, which is unknown to anyone, including the user. This makes them impervious to phishing: cybercriminals cannot use fake emails and social engineering to trick targets into handing them over.

Only for iPhones

The FIDO Alliance designed passkeys and set technological guidelines for them. They employ the WebAuthn standard, which is a vital component of the FIDO2 requirements. The alliance's board of directors includes the majority of top technology firms, including Apple, Google, and Microsoft. 

To set up passkeys on X, open the X app on iPhone and go to "Settings and privacy" under "Your account". Then navigate to "Security and account access" and then "Security". Choose "Passkey" under "Additional password protection" and comply with the on-screen directions. You can remove a passkey from the same menu at any moment. 

Although X does not make passkeys mandatory, it strongly encourages users to adopt them. Currently, users must have a password-protected account with X before they can set up a passkey, though the company advises customers to "stay tuned" on this.

For now, only iOS devices can log in to X with a passkey. Users' passkeys are synced across their Apple devices via Apple's Keychain password manager, allowing multiple iOS devices to log in to X with the same passkey.

DHS and FBI: Chinese Drones Pose Major Threat to U.S. Security

 

The cybersecurity arm of the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) have jointly issued a public service announcement cautioning about the potential risks posed by Chinese-manufactured drones to critical infrastructure and U.S. national security. The advisory, released on Wednesday, emphasizes the likelihood of Chinese drones being used to pilfer American data, citing Chinese laws permitting government access to data held by private entities as a cause for concern.

The document underscores the necessity for careful consideration and potential mitigation when employing Chinese-manufactured Unmanned Aircraft Systems (UAS), as their use may expose sensitive information to Chinese authorities, thereby endangering national security, economic security, and public health and safety. The White House has identified China as the most formidable cyber threat, attributing this to their adept exploitation of data utilized by American consumers.

A 2021 law, according to the agencies, has expanded China's authority over companies and data within its borders, imposing strict penalties for non-compliance. The data collected by these companies is deemed crucial to China's Military-Civil Fusion strategy, aimed at gaining a strategic advantage over the United States by accessing advanced technologies and expertise.

As critical infrastructure sectors increasingly rely on UAS for cost-effective operations, the agencies express concern about the potential exposure of sensitive information due to the use of Chinese-manufactured drones. Chinese drones are noted as capable of receiving and transmitting data, and the potential avenues for exploitation include data transfer, collection through software updates, and the use of docking stations as data collectors.

The consequences of data harvesting by Chinese drones could be severe, including exposing intellectual property, divulging critical infrastructure operations details, compromising cybersecurity and physical security controls, and facilitating easier access for Chinese hackers into systems. To address these risks, CISA and the FBI recommend isolating Chinese-made drones from networks and implementing regular maintenance to uphold adequate security measures.

Data is the Missing Piece in the AI Jigsaw, Here's How to Bridge the Gap

 

The skills gap that is stifling development in artificial intelligence (AI) is well documented, but another aspect stands out: data complexity. According to a new IBM study, the most common barriers to AI success are limited AI skills and knowledge (33%), followed by data complexity (25%). 

The majority of companies (58%) that participated in the poll of 8,584 IT professionals said that they have not yet begun to actively adopt AI. At these non-AI-enabled companies, trust and transparency (43%) and data privacy (57%) are the biggest obstacles to generative AI.

Companies using AI typically face data-related challenges. Some are taking initiatives to ensure trustworthy AI, such as tracking data provenance (37%), and reducing bias (27%). Around one-quarter (24%) of businesses are looking to improve their business analytics or intelligence capabilities, which rely on reliable, high-quality data.

However, several industry leaders warn that organisational data may not be ready to support burgeoning AI ambitions. "To remain competitive, CIOs and technology leaders must adapt their data strategies as they integrate gen AI into their technology stacks," notes PwC's US data, analytics, and AI leader, Matt Labovich. "This involves understanding data and preparing for the transformative impact of emerging technologies.”

Head of AI and analytics at Bristlecone Shipra Sharma believes that "data security, AI decision-making ethics, and AI literacy" are issues that tech professionals and their companies need to address. "With limited AI education due to the newness of this technology, many individuals are left to figure out how to use it on their own." 

The vast amount of data that AI demands can be a frustrating piece of the puzzle. For example, data at the edge is becoming an important source for large language models and repositories. "There will be significant growth of data at the edge as AI continues to evolve and organisations innovate around their digital transformation to grow revenue and profits," stated Bruce Kornfeld, StorMagic's chief marketing and product officer.

At the moment, he says, "there is too much data in too many different formats, which is causing an influx of internal strife as companies struggle to determine what is business-critical versus what can be archived or removed from their data sets.” 

According to Osmar Olivo, vice president of product management at Inrupt, a business that Sir Tim Berners-Lee co-founded, training data comes from a range of sources, including both public sources and an organisation's intellectual property. 

"Many organisations must typically choose between the competitive advantage they can gain by leveraging AI and protecting their most sensitive data," Olivo stated. "But it doesn't have to be a black-or-white decision. I anticipate that in 2024, creative approaches to data management and privacy will surface, especially when it comes to safeguarding data utilised by AI models."

eBay Settles Blogger Harassment Case with $3 Million Fine

 

eBay has agreed to pay a substantial fine of $3 million (£2.36 million) in order to settle charges related to the harassment of bloggers who were openly critical of the company. The disturbing details emerged in court documents, revealing that high-ranking eBay executives, including Jim Baugh, the former senior director of safety and security, orchestrated a targeted campaign against Ina and David Steiner, the couple behind the newsletter EcommerceBytes, which the company's leadership disapproved of.

The court papers outline a series of alarming incidents, including the dispatch of live spiders and cockroaches to the Steiners' residence in Natick, Massachusetts. This relentless campaign of intimidation left the couple, according to prosecutors, in a state of being "emotionally, psychologically, and physically" terrorized. Jim Baugh, alongside six associates, allegedly spearheaded this effort to silence the Steiners, going to extreme lengths.

The harassment tactics escalated to sending live insects, a foetal pig, and even a funeral wreath to the Steiners' home. Moreover, Baugh and his associates reportedly installed a GPS tracking device on the couple's car, infringing on their privacy. Additionally, the perpetrators created misleading posts on the popular website Craigslist, inviting strangers to engage in sexual encounters at the Steiners' residence.

The aftermath of these reprehensible actions saw the termination of the involved employees by eBay. In the legal proceedings, Philip Cooke, an eBay employee, received an 18-month prison sentence in 2021, while Jim Baugh was handed a nearly five-year sentence in the subsequent year.

Baugh's defense claimed that he faced pressure from eBay's former CEO, Devin Wenig, to rein in the Steiners and control their coverage of the company. However, Wenig, who resigned from his position in 2019, has not been charged in connection with the harassment campaign and vehemently denies any knowledge of it.

Acting Massachusetts US Attorney Josh Levy strongly condemned eBay's conduct, labeling it as "absolutely horrific, criminal conduct." Levy emphasized that the employees and contractors involved in this campaign created a petrifying environment for the victims, with the clear intention of stifling their reporting and safeguarding the eBay brand.

Integrating the Power of AI and Blockchain for Data Security and Transparency

 

In an ever-changing digital landscape, providing strong data security and transparency has become critical. This article explores the dynamic interaction of two transformational technologies: artificial intelligence (AI) and blockchain. 

AI improves data security

Artificial intelligence (AI) is critical for enhancing data security via advanced technology and proactive techniques. Machine learning techniques offer real-time threat detection by recognising patterns and abnormalities that indicate potential security breaches. Predictive analytics assesses and anticipates threats, enabling proactive intervention. Furthermore, AI-driven anomaly detection improves the ability to quickly identify and respond to emerging security concerns. 
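The anomaly-detection idea described above can be illustrated with a toy example. This sketch uses a simple standard-deviation rule as a stand-in for the machine-learning techniques real security products employ; the function name, threshold, and sample data are hypothetical:

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the
    mean: a drastically simplified stand-in for ML anomaly detection."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Hourly login counts: the final spike looks like a credential-stuffing run.
logins_per_hour = [12, 14, 11, 13, 12, 15, 240]
print(flag_anomalies(logins_per_hour))  # -> [240]
```

Production systems replace the z-score with learned models that account for seasonality and multivariate signals, but the principle is the same: learn what "normal" looks like, then flag departures from it in real time.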

Blockchain enables data transparency

Blockchain, a transformational force, enables unparalleled data transparency. Its decentralised and irreversible ledger structure means that once data is recorded, it cannot be changed or tampered with, instilling trust in information integrity. Smart contracts, a critical component of blockchain technology, automate and transparently implement established rules, hence improving overall data governance. Blockchain provides a safe and transparent framework, making it an effective solution for industries looking to establish trust, traceability, and accountability inside their data ecosystems.
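The tamper-evidence property comes from hash chaining: each block commits to the hash of the block before it, so editing any earlier record invalidates every hash after it. A minimal sketch of that mechanism (a real blockchain adds consensus, signatures, and distribution on top):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical JSON so the hash is deterministic for the same content.
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new block stores the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev": prev})

def verify(chain: list) -> bool:
    # Recompute every link; any edited block breaks the chain after it.
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "alice pays bob 5")
append_block(ledger, "bob pays carol 2")
assert verify(ledger)

ledger[0]["data"] = "alice pays bob 500"  # tampering...
assert not verify(ledger)                 # ...is immediately detectable
```

This is what "irreversible ledger" means in practice: the data can still be altered on one machine, but the alteration cannot be hidden from anyone who re-verifies the chain.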

Synergies in AI and blockchain

The synergies between AI and Blockchain form a potent combination, tackling an array of data security and transparency concerns. AI's analytical capabilities strengthen blockchain functionality by allowing for advanced data analytics on a decentralised ledger. AI-powered algorithms help to detect trends, anomalies, and potential security threats within the blockchain network, hence strengthening overall security measures. Furthermore, AI-driven verification methods improve the accuracy and dependability of blockchain-stored data, increasing trustworthiness and transparency of information. This collaborative integration enables a more resilient and efficient approach to overseeing and safeguarding data in the digital era. 

Challenges with integration 

Blockchain and AI integration is not without obstacles, though. As AI systems make decisions, ethical issues surface, requiring constant oversight to avoid prejudices and ensure fairness. Blockchain networks continue to face scalability issues, requiring solutions for increasing transaction volumes. Another level of complexity is added by regulatory compliance, which necessitates a careful balancing act between innovation and legal framework compliance. 

The future of AI and Blockchain in terms of data security and transparency is bright, notwithstanding these obstacles. It is likely that constant development will enhance the synergy between these revolutionary technologies, expanding the limits of what is feasible.

Anthropic Pledges to Not Use Private Data to Train Its AI

 

Anthropic, a leading generative AI startup, has announced that it would not employ its clients' data to train its Large Language Model (LLM) and will step in to safeguard clients facing copyright claims.

Anthropic, which was established by former OpenAI researchers, revised its terms of service to better express its goals and values. By declining to use its clients' private data, the startup is setting itself apart from competitors like OpenAI, Amazon, and Meta, which do use user material to enhance their algorithms.

The amended terms state that Anthropic "may not train models on customer content from paid services," and that "as between the parties and to the extent permitted by applicable law, Anthropic agrees that customer owns all outputs, and disclaims any rights it receives to the customer content under these terms."

The terms also state that they "do not grant either party any rights to the other's content or intellectual property, by implication or otherwise," and that "Anthropic does not anticipate obtaining any rights in customer content under these terms."

The updated legal document appears to give protections and transparency for Anthropic's commercial clients. Companies own all AI outputs developed, for example, to avoid possible intellectual property conflicts. Anthropic also promises to defend clients against copyright lawsuits for any unauthorised content produced by Claude. 

The policy aligns with Anthropic's mission statement, which holds that AI should be honest, safe, and helpful. Given growing public concern about the ethics of generative AI, the company's commitment to resolving issues like data privacy may give it a competitive advantage.

Users' Data: Vital Food for LLMs

Large Language Models (LLMs), such as GPT-4, LlaMa, and Anthropic's Claude, are advanced artificial intelligence systems that comprehend and generate human language after being trained on large amounts of text data. 

These models use deep learning and neural networks to anticipate word sequences, interpret context, and grasp linguistic nuances. During training, they constantly refine their predictions, improving their capacity to communicate, write content, and give pertinent information.
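The next-word prediction these models learn can be illustrated at toy scale. The bigram counter below is a drastically simplified, hypothetical stand-in for an LLM: real models use neural networks over vast corpora, but the core task, predicting the most likely next token from what came before, is the same.

```python
from collections import Counter, defaultdict

def train(corpus: str):
    """Count which word follows which: a toy 'language model'."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict(model, word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return model[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ate the fish")
print(predict(model, "the"))  # -> 'cat'
```

Scaling this idea up, with neural networks instead of counts and trillions of tokens instead of one sentence, is also why training data matters so much: the model can only reflect the patterns it has seen.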

The diversity and volume of the data on which LLMs are trained have a significant impact on their performance, making them more accurate and contextually aware as they learn from different language patterns, styles, and new information.

This is why user data is so valuable for training LLMs. For starters, it keeps the models up to date on the newest linguistic trends and user preferences (such as interpreting new slang).

Second, it enables personalisation and increases user engagement by reacting to specific user activities and styles. However, this raises ethical concerns because AI businesses do not compensate users for this vital information, which is used to train models that earn them millions of dollars.

2024 Data Dilemmas: Navigating Localization Mandates and AI Regulations

 


Data has been increasing in value for years, and there have been many instances of it being misused or stolen, so it is no surprise that regulators are increasingly focused on it. Global data regulation is likely to continue to grow in the near term, affecting nearly every industry.

One particular type of regulation, data localization, weighs heavily on the payments industry that underpins the "cash-free society," increasing infrastructure costs and compliance investments.

There is a growing array of overlapping (and at times confusing) regulations on data privacy, protection, and localization emerging across a host of countries and regions around the globe, which is placing pressure on the strategy of winning through scale.

As a result of these regulations, companies are being forced to change their traditional uniform approach to data management: organizations that excelled at globalizing their operations must now think locally to remain competitive. 

These regulations raise companies' regional compliance costs, because they must invest time, energy, and managerial attention in understanding the unique characteristics of each regulatory jurisdiction in which they operate.

Complying across geographical boundaries is not an easy lift, but companies that find a way to do so will see significant benefits in growth and market share, by staying aware of local regulations while keeping their customer experiences excellent and making use of the data sets they possess across the globe.

Second, a trend has emerged regarding the use of data in generative artificial intelligence (GenAI) models, where the Biden administration's AI executive order, in conjunction with the EU's AI Act, is likely to have the greatest influence in the coming year.

The experts have indicated that enforcement of data protection laws will continue to be used more often in the future, affecting a wider range of companies, as well. In 2024, Troy Leach, chief strategy officer for the Cloud Security Alliance (CSA), believes that the time has come for companies to take a more analytical approach towards moving data into the cloud since they will be much more aware of where their data goes. 

EU, Chinese, and US regulators put an exclamation point on data security regulation in 2023 with some severe fines. In May, the Irish Data Protection Commission fined Meta, the company behind Facebook, for transferring personal data about European users to the United States in violation of localization regulations.

In July, Chinese authorities fined Didi Global more than 8 billion yuan ($1.2 billion) for violating the country's privacy and data security laws. As Raghvendra Singh, head of Tata Consultancy Services' cybersecurity arm, TCS Cybersecurity, points out, the regulatory landscape is becoming more complex as cloud adoption grows. "Most governments across the world are either currently defining their data privacy and protection policies or are going to the next level if they have already done so," he states.

Within a country, data localization provisions restrict how data is stored, processed, and/or transferred. Generally, the restriction on storage and processing data is absolute, and a company is required to store and process data locally. 

However, transfer restrictions tend to be conditional. These laws are usually based on the belief that data cannot be transferred outside the borders of the country unless certain conditions are met. However, at their most extreme, data localization provisions may require very strict data processing, storing, and accessing procedures only to be performed within a country where data itself cannot be exported. 

This mandate conflicts with the underlying architecture of the internet, where caching and load balancing are often location-agnostic and borderless. That conflict is especially problematic for the payments industry.

After all, any single transaction involves multiple parties, involving data moving in different directions, often from one country to another (for instance, a U.S. MasterCard holder who pays for her hotel stay in Beijing with her American MasterCard). 

Business is growing worldwide and moving towards centralizing data and related systems, so the restriction of data localization requires investments in local infrastructure to provide storage and processing solutions. 

These requirements can disrupt a business's operating architecture, its plans, and its hopes for future expansion, or at least make them more difficult and costly.

AI Concerns Lead to a Shift in The Landscape

Cloud technology drove the push toward data localization, but what will have a major impact on businesses and how they handle data in the coming year is the rapid adoption of artificial intelligence services and governments' attempts to regulate the introduction of this new technology.

Leach believes that as companies grow anxious about being left behind on innovation, they may not perform sufficient due diligence, which can lead to failure. Organizations can protect their data by running GenAI models in a private instance within the cloud, he adds, where the data remains encrypted.