Microsoft ‘Cherry-picked’ Examples to Make its AI Seem Functional, Leaked Audio Revealed


According to a report by Business Insider, Microsoft “cherry-picked” examples of generative AI’s output because the system would frequently "hallucinate" incorrect responses. 

The intel came from a leaked audio file of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like artificial intelligence platform that Microsoft created to assist cybersecurity professionals.

Apparently, the audio consists of a Microsoft researcher addressing the result of "threat hunter" testing, in which the AI examined a Windows security log for any indications of potentially malicious behaviour.

"We had to cherry-pick a little bit to get an example that looked good because it would stray and because it's a stochastic model, it would give us different answers when we asked it the same questions," said Lloyd Greenwald, a Microsoft Security Partner giving the presentation, as quoted by BI.

"It wasn't that easy to get good answers," he added.

Security Copilot

Security Copilot, like any chatbot, allows users to enter a query into a chat window and receive responses in the style of a customer service reply. Security Copilot is largely built on OpenAI's GPT-4 large language model (LLM), which also powers Microsoft's other generative AI forays such as the Bing Search assistant. Greenwald said that these demonstrations were "initial explorations" of the possibilities of GPT-4 and that Microsoft was given early access to the technology.

Similar to Bing AI in its early days, which responded so ludicrously that it had to be "lobotomized," the researchers said that Security Copilot often "hallucinated" wrong answers in its early versions, an issue that appeared to be inherent to the technology. "Hallucination is a big problem with LLMs and there's a lot we do at Microsoft to try to eliminate hallucinations and part of that is grounding it with real data," Greenwald said in the audio, "but this is just taking the model without grounding it with any data."

GPT-4, the LLM Microsoft used to build Security Copilot, was not trained on cybersecurity-specific data. Rather, it was used directly out of the box, relying only on its massive generic dataset, which is standard practice.
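
To illustrate what "grounding" means in practice, here is a minimal Python sketch of the general idea: retrieve relevant real data and require the model to answer from it rather than from its generic training. The log excerpts, the keyword retriever, and the prompt wording are illustrative assumptions, not Microsoft's implementation.

```python
# A minimal, illustrative sketch of "grounding": instead of asking the model a
# question cold, relevant real data (here, security-log excerpts) is retrieved
# and prepended to the prompt so the model answers from evidence rather than
# from its generic training data. The log lines and scoring are hypothetical.

SECURITY_LOGS = [
    "4625 An account failed to log on: user=svc_backup src=10.0.4.17",
    "4688 A new process has been created: powershell.exe -enc JAB...",
    "7045 A service was installed in the system: name=UpdaterSvc",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    supplied excerpts, reducing the room for hallucinated answers."""
    evidence = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer the question using ONLY the log excerpts below. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"Log excerpts:\n{evidence}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt(
        "Was a suspicious process created on this host?", SECURITY_LOGS))
```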

Cherry on Top

Discussing other security-related queries, Greenwald revealed that "this is just what we demoed to the government."

However, it is unclear whether Microsoft used these “cherry-picked” examples in its pitches to the government and other potential customers – or whether its researchers were upfront about how the examples were selected.

A spokeswoman for Microsoft told BI that "the technology discussed at the meeting was exploratory work that predated Security Copilot and was tested on simulations created from public data sets for the model evaluations," stating that "no customer data was used."  

Ahead of Regulatory Wave: Google's Pivotal Announcement for EU Users

 


Users in the European Union will be able to stop Google from sharing their data across its different services. Google and five other large technology companies must comply with the EU’s Digital Markets Act by March 6, which, among other things, requires that they give users more control over how their data is used. 

A support page (via Android Authority) details the Google services that EU users can keep linked or unlinked: Search, Google Shopping, Google Maps, Google Play, YouTube, Chrome, and Ad services. In Europe, users can keep the entire set-up connected (as it is today), have none of the services connected, or keep just some of them linked together. 

Whatever users choose, Google will continue to share data with others when it is necessary for a task to be completed, such as complying with the law, stopping fraud, or preventing abuse. 

In addition to the changes on interoperability and competition required by the DMA, which goes into effect on March 6th, Google will have to make other adjustments to comply with the new law. The DMA forces many changes on Big Tech, and not everyone is on board: while Google decided not to appeal its gatekeeper designation, Apple, Meta, and TikTok owner ByteDance have all challenged theirs in court. 

The EU is not alone; other governments have also questioned Google's handling of its vast amounts of user data. In the United States, the Department of Justice's antitrust lawsuit against Google may be the largest antitrust case brought in the country since the Microsoft case of the 1990s. 

One of the arguments the DOJ made during the trial was that the sheer amount of data Google has accumulated over the years created a "data fortress" that helped ensure it remained the world's leading search engine. 

Users who choose to unlink services will lose some features. Google noted, for example, that reservations made through Google Search would no longer appear in Google Maps, and that search recommendations would become less relevant if YouTube, Google Search, and Chrome were unlinked. 

Even so, the company emphasized that the parts of a service that do not involve data sharing would not suffer. The good news is that EU users will be able to manage their linked services at any time from their Google account settings. 

On the Data & Privacy page of their account settings, users will find a new section entitled "Linked Google Services", which lists the services that are currently linked. Whether or not to unlink is ultimately up to the user: they may lose some features, but they gain more control over how their data is used within the Google ecosystem. 

The DMA covers much more than data sharing. Among other things, it restricts Google's ability to favour its own results, which will make it easier for competitors to compete fairly on the search results page.

Google has accepted the DMA, while other tech giants such as Apple, Meta, and ByteDance are challenging it in the courts. In the past, Google tried to push users to centralize all of their personal information under a single Google+ identity. 

Google eventually backtracked and killed its Google+ platform in reaction to the significant pushback it received from users. Although the DMA only applies to users in Europe, it is nevertheless a positive change for those who care about their privacy and how their data is shared. Microsoft and Apple will also be obliged to modify their platforms under the EU's DMA in March 2024.

eBay Settles Blogger Harassment Case with $3 Million Fine

 

eBay has agreed to pay a substantial fine of $3 million (£2.36 million) in order to settle charges related to the harassment of bloggers who were openly critical of the company. The disturbing details emerged in court documents, revealing that high-ranking eBay executives, including Jim Baugh, the former senior director of safety and security, orchestrated a targeted campaign against Ina and David Steiner, the couple behind the newsletter EcommerceBytes, which the company's leadership disapproved of.

The court papers outline a series of alarming incidents, including the dispatch of live spiders and cockroaches to the Steiners' residence in Natick, Massachusetts. This relentless campaign of intimidation left the couple, according to prosecutors, in a state of being "emotionally, psychologically, and physically" terrorized. Jim Baugh, alongside six associates, allegedly spearheaded this effort to silence the Steiners, going to extreme lengths.

The harassment tactics escalated to sending live insects, a foetal pig, and even a funeral wreath to the Steiners' home. Moreover, Baugh and his associates reportedly installed a GPS tracking device on the couple's car, infringing on their privacy. Additionally, the perpetrators created misleading posts on the popular website Craigslist, inviting strangers to engage in sexual encounters at the Steiners' residence.

The aftermath of these reprehensible actions saw the termination of the involved employees by eBay. In the legal proceedings, Philip Cooke, an eBay employee, received an 18-month prison sentence in 2021, while Jim Baugh was handed a nearly five-year sentence in the subsequent year.

Baugh's defense claimed that he faced pressure from eBay's former CEO, Devin Wenig, to rein in the Steiners and control their coverage of the company. However, Wenig, who resigned from his position in 2019, has not been charged in connection with the harassment campaign and vehemently denies any knowledge of it.

Acting Massachusetts US Attorney Josh Levy strongly condemned eBay's conduct, labeling it as "absolutely horrific, criminal conduct." Levy emphasized that the employees and contractors involved in this campaign created a petrifying environment for the victims, with the clear intention of stifling their reporting and safeguarding the eBay brand.

Cybersecurity Incidents are Rapidly Increasing in UAE

 

The majority of businesses in the United Arab Emirates experienced a cybersecurity issue at some point in the last two years. 

According to Kaspersky data, 87% of UAE businesses have experienced some form of cyber attack over the past two years, and 25% of those cybersecurity incidents were caused by malicious behaviour on the part of their own employees. 

Growing concern about malicious insider threats

Employees engaging in malicious online activities are becoming a serious concern for businesses across all industries, with Kaspersky identifying them as "the most dangerous of all employees who can provoke cyber incidents."

Kaspersky says a number of factors make malicious insiders dangerous, including their understanding of the firm's IT and cybersecurity infrastructure, their access to the company network, and their ability to exploit colleagues' knowledge to launch social engineering attacks.

Jake Moore, global security advisor at ESET, concurs that malicious insider threats are "a significant worry" for businesses, but he emphasises that "humans also carry an accidental risk in business situations." 

He further elaborates: "Accidental threats might include employees inadvertently bringing in malware or enabling data leakage, which can often be mitigated with annual and ad hoc training programs for all staff.”

UAE-based companies are facing high levels of cybercrime, with 66% experiencing data breaches, and the problem is not getting any better.

A previous Kaspersky study, published in December 2023, found that 77% of APAC companies lack the tools required to detect cyberattacks. Meanwhile, 87% of businesses have a cybersecurity talent shortage, making it more difficult to halt cyber criminals in their tracks.

Security officials in the UAE have previously struggled to maintain safe remote access to employee and corporate-owned devices, according to Mohammed Al-Moneer, Infoblox's regional senior director for META. He stated that firms are concerned about data leaks and cloud attacks "and do not believe they have a firm handle on the insider threat." 

Merely 15% of participants in the UAE, according to the Infoblox report, feel that their company is equipped to protect its networks against insider attacks. 

Gopan Sivasankaran, general manager of Secureworks' META region, explained that the UAE's thriving digital economy and increased use of data make it an "attractive" target for both hacker groups and hostile states. 

"The insight from the incident response engagements and active attacks on businesses we've worked on in the Middle East over the last year show organisations in the UAE have been victims to large scale wiper attacks as well as nation-state sponsored attacks," he said.

Navigating the Paradox: Bitcoin's Self-Custody and the Privacy Challenge

 

Self-custody in Bitcoin refers to individuals holding and controlling their private keys, which in turn control their bitcoin. This concept is akin to securing physical gold in a personal safe rather than relying on a bank or third-party custodian. Unlike physical assets such as gold, verifying the legitimacy of bitcoin transactions in the digital realm is more straightforward and does not involve the complex process of melting down to authenticate.

While certain regulations require individuals and entities, particularly in financial services, to report their holdings and transactions to regulatory bodies, this obligation aims to prevent illicit activities and ensure tax compliance. While reasonable for businesses in regulated markets, extending these requirements to personal finances, especially for private individuals, seems contradictory in a society that values personal freedom and privacy.

Bitcoin's architecture presents a paradox: it is transparent, allowing verification of the 21 million cap and transaction history, yet remarkably private as the true control lies with the holder of private keys. This duality ensures currency integrity but poses challenges to personal financial privacy under regulatory scrutiny.

To address this, innovative solutions like multi-signature wallets are emerging. Companies like Swan and On-ramp are developing multi-signature tools for individuals and institutions. In a 2-of-3 multi-signature setup, for example, a compliant third party can hold one key without compromising the individual's control, providing a subtle yet effective means of regulatory verification.

Multisig solutions also enhance security against theft while maintaining user control over assets, striking a delicate balance between autonomy and regulatory compliance. As the Bitcoin ecosystem evolves, these solutions become crucial for preserving personal financial freedom while aligning with existing regulatory frameworks.
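
As a rough illustration of the 2-of-3 idea, the toy sketch below models only the approval logic: a spend goes through when any two of the three designated key holders sign. Real Bitcoin multisig uses ECDSA or Schnorr signatures and on-chain Script or descriptors rather than the HMAC stand-ins used here; the key holders are hypothetical.

```python
# A toy illustration of the 2-of-3 threshold idea behind multi-signature
# custody: spending is approved only if at least two of the three designated
# key holders sign. Real Bitcoin multisig uses ECDSA/Schnorr signatures and
# Script/descriptors, not HMACs; this sketch only models the approval logic.
import hmac, hashlib

KEY_HOLDERS = {              # hypothetical key holders and their secrets
    "user":      b"user-secret-key",
    "backup":    b"backup-secret-key",
    "custodian": b"compliant-third-party-key",
}

def sign(holder: str, tx: bytes) -> bytes:
    """Produce this holder's (simulated) signature over the transaction."""
    return hmac.new(KEY_HOLDERS[holder], tx, hashlib.sha256).digest()

def approve_spend(tx: bytes, signatures: dict[str, bytes], threshold: int = 2) -> bool:
    """Count valid signatures from distinct holders; approve at the threshold."""
    valid = sum(
        1 for holder, sig in signatures.items()
        if holder in KEY_HOLDERS and hmac.compare_digest(sig, sign(holder, tx))
    )
    return valid >= threshold

tx = b"send 0.1 BTC to bc1q..."
two_sigs = {"user": sign("user", tx), "custodian": sign("custodian", tx)}
print(approve_spend(tx, two_sigs))                    # True: two of three keys signed
print(approve_spend(tx, {"user": sign("user", tx)}))  # False: only one key signed
```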

The regulatory landscape must adapt to Bitcoin's distinct characteristics, leading to the development of refined self-custody approaches that support privacy, autonomy, and regulatory compliance. Advocacy for standardized reporting mechanisms for self-custodied assets can align with regulatory requirements without compromising Bitcoin's foundational tenets.

Balancing innovation and regulation presents challenges, requiring collaborative discourse among all stakeholders. Bitcoin's principles of autonomy and privacy may clash with regulatory transparency efforts, but finding a balance is essential for the cryptocurrency's revolutionary role in finance. Bitcoiners play a crucial role in advocating for their privacy and sovereignty rights, emphasizing that saving within the Bitcoin network is a legitimate exercise of economic liberty and not a criminal act or subject to public disclosure.

GitHub Faces Rise in Malicious Use

 


GitHub, a widely used platform in the tech world, is facing a rising threat from cybercriminals. They're exploiting GitHub's popularity to host and spread harmful content, making it a hub for malicious activities like data theft and controlling compromised systems. This poses a challenge for cybersecurity, as the bad actors use GitHub's legitimacy to slip past traditional defences. 

 Known as ‘living-off-trusted-sites,’ this technique lets cybercriminals blend in with normal online traffic, making it harder to detect. Essentially, they're camouflaging their malicious activities within the usual flow of internet data. GitHub's involvement in delivering harmful code adds an extra layer of complexity. For instance, there have been cases of rogue Python packages (basically, software components) using secret GitHub channels for malicious commands on hacked systems. 

This situation highlights the need for increased awareness and updated cybersecurity strategies to tackle these growing threats. It's a reminder that even widely used platforms can become targets for cybercrime, and staying informed is crucial to staying secure. 

While it's not very common for bad actors to fully control and command systems through GitHub, they often use it as a way to share secret information. This is called a "dead drop resolver." It's like leaving a message in a hidden spot for someone else to pick up. Malware like Drokbk and ShellBox frequently use this technique. 

Another thing they sometimes do is use GitHub to sneakily take information out of a system. This doesn't happen a lot, and experts think it's because there are limits on how much data they can take and they want to avoid getting caught. 

Apart from these tricks, bad actors find other ways to misuse GitHub. For example, they might use a feature called GitHub Pages to trick people into giving away sensitive information. Sometimes, they even use GitHub as a backup communication channel for their secret operations. 

Understanding these tactics is important because it shows how people with bad intentions can use everyday platforms like GitHub for sneaky activities. By knowing about these things, we can be more careful and put in measures to protect ourselves from online threats. 

This trend of misusing popular online services extends beyond GitHub to other familiar platforms like Google Drive, Microsoft OneDrive, Dropbox, Notion, Firebase, Trello, and Discord. It's not just limited to GitHub; even source code and version control platforms like GitLab, BitBucket, and Codeberg face exploitation. 

GitHub acknowledges that there's no one-size-fits-all solution to detect abuse on its platform. It suggests using a combination of strategies shaped by factors such as available logs, how organisations are structured, patterns of service usage, and risk tolerance. It is also crucial to recognise that this problem isn't unique to GitHub: threat actors use a variety of everyday services to carry out their activities, so users and organisations need to be aware of how different platforms may be misused and tailor their detection methods accordingly.
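
As one hedged example of such a strategy, the sketch below flags egress-log entries in which a process with no obvious reason to reach code-hosting or file-sharing domains does so anyway. The log format, domain watchlist, and process allowlist are illustrative assumptions that would need to be adapted to an organisation's own telemetry.

```python
# A minimal sketch of one detection approach: review egress logs for traffic to
# code-hosting and file-sharing domains coming from processes that have no
# business reaching them. Domains, allowlist, and log entries are hypothetical.

WATCHED_DOMAINS = {
    "raw.githubusercontent.com", "gist.github.com",
    "objects.githubusercontent.com", "mediafire.com", "cdn.discordapp.com",
}
ALLOWED_PROCESSES = {"git.exe", "gh.exe", "python.exe"}  # expected consumers

proxy_log = [  # (process, destination host) pairs, e.g. parsed from a proxy log
    ("git.exe", "raw.githubusercontent.com"),
    ("winword.exe", "gist.github.com"),
    ("svchost.exe", "cdn.discordapp.com"),
]

def suspicious(entries):
    """Yield entries where an unexpected process talks to a watched domain."""
    for process, host in entries:
        if host in WATCHED_DOMAINS and process not in ALLOWED_PROCESSES:
            yield process, host

for process, host in suspicious(proxy_log):
    print(f"review: {process} contacted {host}")
```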


‘BIN’ Attacks: Cybercriminals are Using Stolen ‘BIN’ Details for Card Fraud


While cybersecurity networks might be boosting themselves with newer technologies, cybercrime groups are also augmenting their tactics with more sophisticated tools. 

The latest example is the "BIN attack," which targets small businesses. The tactic involves abusing the Bank Identification Number (BIN) of credit cards, allowing threat actors to put stolen card details through trial and error on unsuspecting e-commerce websites. 

Behind the Scenes of the 'BIN' Attacks

In 2023 alone, payment card fraud amounted to a whopping $577 million, 16.5% more than in 2022. Among the victims was the Commonwealth Bank, where a Melbourne wholesaler faced a barrage of 13,500 declined e-commerce transactions in a month. 

The incident, initially dismissed as a clerical error, turned out to be cybercrime that affected both businesses and consumers. 

The cybercriminals first obtain the first six digits of a credit card, known as the Bank Identification Number (BIN). They then use trial and error to determine which combinations of card numbers, expiration dates, and security codes work. The stolen card data is verified through small, inconspicuous transactions to confirm it is genuine. Once verified, the compromised card numbers are either sold by fraudsters or used in larger-scale fraudulent transactions.

Customer Accounts Compromised

Commonwealth Bank account holders, Bob Barrow and John Goodall, discovered that they were the targets of fraudulent activities. Despite having no online activity with their cards, they were astonished when they found out about the transactions made on their accounts. This made them question the security of their financial information.

Credit card numbers are less random and less limitless than one might believe. Out of the sixteen digits on a card, the six-digit BIN is fixed and the final digit is a checksum, leaving only nine that attackers actually have to guess. Because there are comparatively few options, cybercriminals can use automated methods to quickly guess valid combinations, which presents a serious threat to conventional security measures. 
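
To make the arithmetic concrete, the sketch below implements the standard, publicly documented Luhn checksum that the final card digit must satisfy; the sample number is a common test value, not a real account. Because the checksum is the only pattern the trailing digits follow, an attacker who already knows the BIN has comparatively few combinations left to try.

```python
# The last digit of a payment card number is a Luhn check digit computed from
# the preceding digits. This is the same publicly documented checksum that
# payment forms use to reject mistyped numbers.

def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # True: passes the checksum
print(luhn_valid("4539 1488 0343 6468"))  # False: changing one digit breaks it
```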

While the affected entities are expected to adopt more stringent safety measures, the responsibility does not lie solely with the banks. Financial institutions often do not process the fraudulent transactions themselves; they are frequently victims too, as the issuers of the cards. The attacks emphasize the need for multi-layered safeguards, with companies using strong fraud-prevention systems and security-focused payment processors such as Stripe and Square. This is essential, since the aftermath of a BIN attack can drive a firm into bankruptcy.

AI-Driven Phishing on the Rise: NSA Official Stresses Need for Cyber Awareness

 


A National Security Agency official, whose agency investigates cyberattacks and propaganda campaigns, said Tuesday that hackers are turning to generative artificial intelligence chatbots, such as ChatGPT, to make their operations appear more convincing to native English speakers. 

Speaking at the International Conference on Cyber Security at Fordham University in New York, NSA Cybersecurity Director Rob Joyce said the spy agency has observed both foreign intelligence agencies and cybercriminals using chatbots to make their operations read as though they were written by native English speakers. 

Cybercriminals of all skill levels are using artificial intelligence to enhance their abilities, but AI is also helping to hunt them down, security experts have noted. At the same conference, Joyce said that Chinese hackers are using artificial intelligence to get past firewalls when infiltrating networks. 

Joyce warned that hackers are using artificial intelligence to improve the quality of their English in phishing scams, as well as to get technical advice when attacking or infiltrating a network. He did not mention specific cyberattacks involving AI or attribute particular activity to a state or government; his remarks focused on preventing and eradicating threats to critical infrastructure and defence systems within the U.S. 

Joyce argued that the recent attacks on U.S. critical infrastructure by China-backed hackers were an example of how AI technology is surfacing malicious activity and giving U.S. intelligence an edge over criminal activity. These attacks were thought to be preparation for a potential Chinese invasion of Taiwan. 

According to Joyce, China's state-backed hackers are not using traditional malware that could be detected; instead, they are exploiting vulnerabilities and implementation flaws that let them gain a foothold on a network and appear to be legitimate, authorised users. The comment comes at a time when generative artificial intelligence tools are increasingly being used in cyberattacks and espionage campaigns to produce convincing computer-generated text and images. 

As part of its ongoing efforts to establish new standards for AI safety and security, the Biden administration released an executive order in October aimed at strengthening protections against abuses and errors of the technology. The Federal Trade Commission has also recently warned about the dangers associated with artificial intelligence, including ChatGPT, which has been used "to boost fraud and scams." Joyce believes that artificial intelligence is a powerful tool that can enable someone incompetent to become competent, but it is also going to make those who use it more effective and dangerous. 

In 2023, the US government came under increased attack from groups linked to China and Iran that targeted infrastructure sites vital to energy and water production in the US. The China-backed 'Volt Typhoon' group has used several techniques to attack networks, one of which involves hacking into networks covertly and then using their built-in network administration tools to launch attacks. 

Although Joyce did not provide any specific examples of recent cyber attacks involving artificial intelligence, he pointed out, "They are hacking into places like electric grids, transportation pipelines, and courts, trying to get in so they can cause social disruption and panic at the time and place they choose." Groups with strong Chinese links have been gaining access to networks by abusing implementation flaws - bugs arising from poorly implemented software updates - and then passing themselves off as legitimate users of the system.

However, their activities and traffic within the network often go beyond what is expected, resulting in unusual network behaviour. Joyce explained that machine learning, artificial intelligence, and big data combine to help surface and expose these behaviours, which matters because, when it comes to critical infrastructure, these accounts don't behave like usual business entities, and that gives defenders the advantage.

Beware of Malicious YouTube Channels Propagating Lumma Stealer

 

Attackers have been propagating a Lumma Stealer variant via YouTube channels that post videos about cracking popular applications. They evade detection by Web filters by spreading the malware through open platforms like MediaFire and GitHub rather than their own malicious servers. 

The effort, according to FortiGuard researchers, is reminiscent of an attack uncovered in March of last year that employed artificial intelligence (AI) to disseminate step-by-step installation guides for unlicensed copies of programmes like Photoshop, Autodesk 3ds Max, and AutoCAD. 

"These YouTube videos typically feature content related to cracked applications, presenting users with similar installation guides and incorporating malicious URLs often shortened using services like TinyURL and Cuttly," Cara Lin, Fortinet senior analyst, wrote in a blog post. 

Modus operandi 

The attack begins with a hacker infiltrating a YouTube account and publishing videos that pretend to offer cracked-software tips, with video descriptions carrying malicious URLs. The descriptions also lure users into downloading a .ZIP file containing malicious content. 

The videos identified by Fortinet were uploaded earlier this year; however, the files on the file-sharing site are regularly updated, and the number of downloads continues to rise, suggesting that the campaign is reaching victims. "This indicates that the ZIP file is always new and that this method effectively spreads malware," Lin stated in a blog post. 

The .ZIP file contains an .LNK file that instructs PowerShell to download a .NET executable from John1323456's GitHub project "New". The other two repositories, "LNK" and "LNK-Ex," also contain .NET loaders and use Lumma as the final payload.

"The crafted installation .ZIP file serves as an effective bait to deliver the payload, exploiting the user's intention to install the application and prompting them to click the installation file without hesitation," Lin wrote.

The .NET loader is obfuscated with SmartAssembly, a legitimate obfuscation tool. The loader then gathers the system's environment values and, if the data checks out, loads the PowerShell script; otherwise, the procedure exits the programme.

YouTube malware evasion and caution

The malware is designed to evade detection. A ProcessStartInfo object starts the PowerShell process, which eventually calls a DLL for the next stage of the attack; that stage analyses the environment in various ways to avoid detection. The checks look for debuggers, security appliances or sandboxes, virtual machines, and other services or files that could impede a malicious process. 

"After completing all environment checks, the program decrypts the resource data and invokes the 'SuspendThread; function," Lin added. "This function is employed to transition the thread into a 'suspended' state, a crucial step in the process of payload injection.” 

Once launched, Lumma communicates with the command-and-control server (C2) and establishes a connection to transfer compressed stolen data back to the attackers. Lin observed that the variation employed in the campaign is version 4.0, but its exfiltration has been upgraded to use HTTPS to better elude detection. 

On the other hand, infection is trackable. In its publication, Fortinet provided users with a list of indicators of compromise (IoCs) and cautionary advice regarding "unclear application sources." According to Fortinet, users should make sure that any applications they download from YouTube or any other platform come from reliable and safe sources.
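
For defenders, here is a hedged sketch of how published IoCs can be put to work: hash recently downloaded files and compare them against known-bad SHA-256 values. The hash below is a placeholder, not one of Fortinet's actual indicators, and the Downloads folder is just an example location.

```python
# A small sketch of acting on published indicators of compromise (IoCs):
# hash files in a folder and compare them against known-bad SHA-256 values.
# The hash below is a placeholder; substitute the values from the advisory.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(folder: str) -> None:
    """Print any file whose hash matches a known-bad indicator."""
    for path in Path(folder).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"IoC match: {path}")

if __name__ == "__main__":
    scan(str(Path.home() / "Downloads"))
```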

Researchers Claim Apple Was Aware of AirDrop User Identification and Tracking Risks Since 2019

Security researchers had reportedly alerted Apple about vulnerabilities in its AirDrop wireless sharing feature back in 2019. According to these researchers, Chinese authorities recently exploited these vulnerabilities to track users of the AirDrop function. This case has raised concerns about global privacy implications.

The Chinese government allegedly used the compromised AirDrop feature to identify users on the Beijing subway accused of sharing "inappropriate information." The exploit has prompted internet freedom advocates to urge Apple to address the issue promptly and transparently. Pro-democracy activists in Hong Kong have previously used AirDrop, leading to Chinese authorities cracking down on the feature.

Beijing-based Wangshendongjian Technology claimed to have compromised AirDrop, collecting basic identifying information such as device names, email addresses, and phone numbers. Despite Chinese officials presenting this as an effective law enforcement technique, there are calls for Apple to take swift action.

US lawmakers, including Florida Sen. Marco Rubio, have expressed concern about the security of Apple's AirDrop function, calling on the tech giant to act promptly. However, Apple has not responded to requests for comments on the matter.

Researchers from Germany's Technical University of Darmstadt, who identified the flaws in 2019, stated that Apple received their report but did not act on the findings. The researchers proposed a fix in 2021, which Apple has allegedly not implemented.

The Chinese claim has raised alarms among US lawmakers, emphasizing the need for Apple to address security issues promptly. Critics argue that Apple's inaction may be exploited by authoritarian regimes, highlighting the broader implications of tech companies' relationships with such governments.

The Chinese tech firm's exploitation of AirDrop apparently used techniques identified by the German researchers in 2019. Experts point out that Apple's failure to add an extra layer of security, known as "salting," allowed unauthorized access to device-identifying information.

Security experts emphasize that while AirDrop's device-to-device communication is generally secure, users may be vulnerable if they connect with a stranger or accept unsolicited connection requests. The lack of salting in the encryption process makes it easier for unauthorized parties to decipher the exchanged data.
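
The sketch below illustrates the weakness the researchers describe, using a made-up phone number and a deliberately tiny four-digit search space: an unsalted hash of a small identifier space can be recovered by simply trying every candidate, whereas a random salt at least defeats precomputed lookup tables.

```python
# Why unsalted hashes of short identifiers leak: a phone number has only a few
# billion possible values, so an attacker who captures SHA-256(phone) can try
# them all or use a precomputed table. The tiny 4-digit demo range below stands
# in for the full search; the phone number is made up.
import hashlib, os

def h(value: bytes, salt: bytes = b"") -> str:
    return hashlib.sha256(salt + value).hexdigest()

captured = h(b"+15551230042")   # unsalted hash a device might broadcast (demo value)

# Attacker's view: only 10,000 candidates for the last four digits, so try them all.
recovered = None
for last4 in range(10_000):
    candidate = f"+1555123{last4:04d}"
    if h(candidate.encode()) == captured:
        recovered = candidate
        break
print("recovered:", recovered)   # the attacker learns the full demo number

# With a random salt mixed in, precomputed tables of hashes become useless; the
# attacker would have to redo the whole search for every salt value observed.
salt = os.urandom(16)
print("salted hash:", h(b"+15551230042", salt))
```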

Following the Chinese claim, Senator Ron Wyden criticized Apple for a "blatant failure" to protect users, emphasizing the four-year delay in addressing the security hole in AirDrop. The tech firm behind the AirDrop exploit has a history of collaboration with Chinese law enforcement and security authorities.

The intentional disclosure of the exploit by Chinese officials may serve various motives, including discouraging dissidents from using AirDrop. Experts suggest that Apple may now face challenges in fixing the issue due to potential retaliation from Chinese authorities, given the company's significant presence in the Chinese market. The hack revelation could also provide China with leverage to compel Apple's cooperation with security or intelligence demands.

Bengaluru Woman Escapes a Cyber-scam Attempt, After Indigo’s Bogus ‘Agents’ Cancel Rs.15,600 Tickets


A 32-year-old woman from Bengaluru, India, was targeted by a cyber scam in which the fraudsters falsely identified themselves as agents of Indigo Airlines and attempted to obtain Rs. 15,600 from her. 

After a few questions from the ‘agents,’ the victim, Mahashweta Pal, grew suspicious and called the airline's official helpline number to report her narrow escape from the fraud. 

Pal, the social media manager at Inquest, was taken aback when, on January 1, she received a call from an alleged Indigo agent informing her that her tickets had been cancelled. Pal had purchased a round-trip ticket to Kolkata on Indigo.

"The caller then proceeded to offer two options: immediate rebooking at the same fare or a refund of the original booking amount (Rs 15,600) within 24 hours[…]This purported cancellation was presented as a fact, with no prior notification or explanation provided. The caller told me he could send me a link for repayment on WhatsApp," Pal said. 

After hanging up and dialling Indigo's official hotline number, Pal discovered that her tickets had indeed been cancelled, but that a third party had handled the cancellation.

She said the Indigo agents informed her of the cancellation of her tickets and that there was nothing they could do except initiate a “partial refund of the cancelled tickets and I received around Rs 8,000."

When Pal asked some follow-up questions, she was told that someone had altered the information on the website. The customer support agent admitted that there were errors in the cancellation details: her phone number and email address did not match what she had entered. This disparity implied that her booking had been maliciously altered and that someone had gained illegal access to her account. Even though they acknowledged the problem, they did not provide a fix or the remaining portion of her money.

After a week of pursuing the case and eventually taking it to social media, Pal finally started getting calls from the airline offering assistance. 

"The customer care executive was kind enough to share the information about the scammers. The email ID they used was maheshmeena00417@gmail.com and the number to which the OTP was sent was 9257384638. And the IP address was 157.38.67.21," Pal shared.

The airline took further measures to "ensure the security" of Pal's booking when she purchased the flight tickets once again. "We have temporarily blocked any modifications or web check-in of your booking. If you have any further requirements or need to make changes, we kindly request you to contact our IndiGo contact centre for assistance," Indigo stated.

Moreover, on Tuesday, Pal was contacted by another executive who informed her that her previous booking, which had been fraudulently cancelled, had been fully refunded.