
Bing Ad Posing as NordVPN Aims to Propagate SecTopRAT Malware

 

A Bing advertisement that appeared to be a link to install NordVPN instead led to an installer for the remote access malware SecTopRAT. 

Malwarebytes Labs identified the malvertising campaign on Thursday; the domain name used in the malicious ad had been registered only a day earlier. The URL (nordivpn[.]xyz) was intended to resemble an authentic NordVPN domain. The ad led to a website at another typosquatted URL (besthord-vpn[.]com) hosting a duplicate of the actual NordVPN website.
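Typosquats like these can often be caught automatically. As a hedged illustration (not a technique Malwarebytes describes), a defender can compare a new ad domain's second-level label against known brand names using edit distance; "nordivpn" sits one edit from "nordvpn", though "besthord-vpn" shows the limits of the approach:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, brand: str, max_dist: int = 2) -> bool:
    """Flag domain labels within a small edit distance of a known brand."""
    return candidate != brand and levenshtein(candidate, brand) <= max_dist

print(looks_like_typosquat("nordivpn", "nordvpn"))      # flagged: one edit away
print(looks_like_typosquat("besthord-vpn", "nordvpn"))  # not flagged by this check
```

Note that edit distance alone misses lookalikes such as besthord-vpn[.]com, which rely on containing the brand name rather than resembling the whole domain, so real screening combines several heuristics.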

The download button on the fake website led to a Dropbox folder containing the installer NordVPNSetup.exe. The executable contained both an authentic NordVPN installer and a malware payload that was injected into MSBuild.exe and connected to the attacker's command-and-control (C2) server.

The threat actor attempted to digitally sign the malicious program, but the signature proved to be invalid. Still, Jérôme Segura, Principal Threat Researcher at Malwarebytes ThreatDown Labs, told SC Media on Friday that he discovered the software had a valid code signing certificate.

Segura said some security products may block the executable due to its invalid signature, but, “Perhaps the better evasion technique is the dynamic process injection where the malicious code is injected into a legitimate Windows application.” 

“Finally, we should note that the file contains an installer for NordVPN which could very well thwart detection of the whole executable,” Segura added. 

The malicious payload, SecTopRAT, also known as ArechClient, is a remote access trojan (RAT) identified by MalwareHunterTeam in November 2019 and then analysed by GDATA experts. The researchers discovered that the RAT produces an "invisible" second desktop, allowing the attacker to manage browser sessions on the victim's PC. 

SecTopRAT can also provide system information, such as the system name, username, and hardware, to the attacker's C2 server. 

Malwarebytes reported the malware campaign to both Microsoft, which controls Bing, and Dropbox. Dropbox has since deactivated the account that contained the malware, and Segura said his team had yet to hear anything from Microsoft as of Friday. 

“We did notice that the threat actors updated their infrastructure last night, perhaps in reaction to our report. They are now redirecting victims to a new domain thenordvpn[.]info which may indicate that the malvertising campaign is still active, perhaps under another advertiser identity,” Segura concluded. 

Other malvertising efforts distributing SecTopRAT have been discovered in the past. In 2021, Ars Technica reported on a campaign that used Google advertisements for a fake Brave browser download to spread the malware.

Last October, threat actors employed malvertising, search engine optimisation (SEO) poisoning, and website breaches to deceive users into installing a fake MSIX Windows application package containing the GHOSTPULSE malware loader. Once deployed, GHOSTPULSE uses process doppelgänging to enable the execution of several malware strains, including SecTopRAT.

Google Strengthens Gmail Security, Blocks Spoofed Emails to Combat Phishing

 

Google has begun automatically blocking emails from bulk senders that fail to meet tighter spam thresholds or to authenticate their messages, in line with new requirements designed to strengthen defences against spam and phishing attacks.

As announced in October, users who send more than 5,000 messages per day to Gmail accounts must now configure SPF/DKIM and DMARC email authentication for their domains. 
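For senders unfamiliar with these mechanisms, all three checks are published as DNS TXT records on the sending domain. The records below are an illustrative sketch for a hypothetical example.com (the DKIM public key is truncated, and the selector name and policy values vary per sender):

```dns
; SPF: which hosts may send mail for the domain
example.com.                IN TXT "v=spf1 include:_spf.example.com ~all"

; DKIM: public key used to verify message signatures (selector "s1" is arbitrary)
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."

; DMARC: policy tying SPF/DKIM results to the visible From: domain
_dmarc.example.com.         IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
```

Google's bulk-sender rules require at least a `p=none` DMARC policy; the stricter `quarantine` and `reject` policies remain optional.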

The updated regulations also mandate that bulk email senders refrain from delivering unsolicited or unwanted messages, offer a one-click unsubscribe option, and react to requests to unsubscribe within two working days. 
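The one-click unsubscribe requirement corresponds to RFC 8058, which bulk senders satisfy with a pair of message headers; a sketch (the address, URL, and token are hypothetical):

```text
List-Unsubscribe: <mailto:unsub@example.com>, <https://example.com/unsub/abc123>
List-Unsubscribe-Post: List-Unsubscribe=One-Click
```

The recipient's mail client performs an HTTPS POST to the listed URL, so unsubscribing requires no login or extra confirmation page.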

Additionally, spam rates must be kept below 0.3%, and "From" headers must not impersonate Gmail. Noncompliance may lead to email delivery issues, such as messages being rejected or automatically routed to recipients' spam folders.

"Bulk senders who don't meet our sender requirements will start getting temporary errors with error codes on a small portion of messages that don't meet the requirements," Google stated. "These temporary errors help senders identify email that doesn't meet our guidelines so senders can resolve issues that prevent compliance.” 

“In April 2024, we will start rejecting non-compliant traffic. Rejection will be gradual and will affect only non-compliant traffic. We strongly recommend that senders use the temporary failure enforcement period to make any necessary changes to become compliant,” Google added.

The company also intends to implement these regulations beginning in June, with an expedited timeline for domains used to send bulk emails starting January 1, 2024.

As Google said when the new guidelines were first released, its AI-powered defences successfully filter roughly 15 billion unwanted emails per day, preventing more than 99.9% of spam, phishing attempts, and malware from reaching users' inboxes.

"You shouldn't need to worry about the intricacies of email security standards, but you should be able to confidently rely on an email's source," noted Neil Kumaran, Group Product Manager for Gmail Security & Trust in October. "Ultimately, this will close loopholes exploited by attackers that threaten everyone who uses email.”

What are Deepfakes and How to Spot Them

 

Artificial intelligence (AI)-generated fraudulent videos that can easily deceive average viewers have become commonplace as modern computers have enhanced their ability to simulate reality.

For example, modern cinema relies heavily on computer-generated sets, scenery, people, and even visual effects. These digital locations and props have replaced physical ones, and the scenes are almost indistinguishable from reality. Deepfakes, one of the most recent trends in computer imagery, are created by programming AI to make one person look like another in a recorded video. 

What is a deepfake? 

Deepfakes resemble digital magic tricks. They use computers to create fraudulent videos or audio that look and sound authentic. It's like filming a movie, but one in which real people appear to do things they have never actually done.

Deepfake technology relies on a complicated interaction of two fundamental algorithms: a generator and a discriminator. These algorithms collaborate within a framework called a generative adversarial network (GAN), which uses deep learning concepts to create and refine fake content. 

Generator algorithm: The generator's principal function is to create initial fake digital content, such as audio, photos, or videos. The generator's goal is to replicate the target person's appearance, voice, or feelings as closely as possible. 

Discriminator algorithm: The discriminator then examines the generator's content to determine if it appears genuine or fake. The feedback loop between the generator and discriminator is repeated several times, resulting in a continual cycle of improvement. 
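This adversarial loop can be caricatured in a few lines. The sketch below is not a real GAN: the "discriminator" is just a fixed statistic and there are no neural networks. It only illustrates the feedback cycle in which the generator repeatedly adjusts itself in whichever direction makes its output harder to distinguish from real data:

```python
import numpy as np

rng = np.random.default_rng(0)
real_samples = rng.normal(4.0, 0.5, 1000)  # the "real" data being imitated

def generator(theta, n):
    # Produces fake samples; theta is the single parameter being learned.
    return theta + rng.normal(0.0, 0.5, n)

def discriminator(samples):
    # Stand-in for a learned classifier: scores how detectable a batch is
    # (here, simply the gap between the fake and real sample means).
    return abs(samples.mean() - real_samples.mean())

theta = 0.0  # the generator starts out producing obviously fake samples
for _ in range(200):
    score_here = discriminator(generator(theta, 200))
    score_up = discriminator(generator(theta + 0.1, 200))
    # Generator update: move in whichever direction fools the discriminator more.
    theta += 0.1 if score_up < score_here else -0.1

print(round(theta, 1))  # theta should now sit near the real mean of 4.0
```

In an actual GAN both sides are deep networks trained by gradient descent, but the shape of the loop, generate, score, adjust, repeat, is the same.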

Why do deepfakes cause concerns? 

Misinformation and disinformation: Deepfakes can be used to make convincing videos or audio recordings of people saying or doing things they never did. This creates a significant risk of spreading misleading information, causing reputational damage and influencing public opinion.

Privacy invasion: Deepfake technology can violate innocent people's privacy by manipulating their images or voices for malicious purposes, resulting in harassment, blackmail, or even exploitation.

Crime and fraud: Criminals can employ deepfake technology to imitate others in fraudulent operations, making it challenging for authorities to detect and prosecute those responsible. 

Cybersecurity: As deepfake technology progresses, it may become more difficult to detect and prevent cyberattacks based on modified video or audio recordings. 

How to detect deepfakes 

Though recent advances in generative artificial intelligence (AI) have increased the quality of deepfakes, we can still identify telltale signs that differentiate a fake video from an original.

- Pay close attention to the start of the video. For example, many viewers overlooked the fact that the face at the beginning of the viral Rashmika Mandanna video was still Zara Patel's; the deepfake software was not activated until the person boarded the lift.

- Pay close attention to the person's facial expressions throughout the video. In a deepfake, expressions tend to vary irregularly during speech or movement.

- Look for lip synchronisation issues. Deepfake videos often contain minor audio/visual sync problems. Always try to watch viral videos several times before deciding whether or not they are deepfakes.

Beyond individual vigilance, government agencies and tech companies should collaborate on cross-platform detection tools that curb the creation and spread of deepfake videos.

Kiosks in Brookline Are Tracking Cell Phone Data

 

Data is everywhere. It is at your fingertips. It's all over town, yet your information may be shared without your knowledge. Brookline has installed digital signs throughout town, and they have people talking because the kiosks collect individual cell phone data.

Glen Gay, who was passing by one of the Washington Street kiosks, stated, "I guess everything is tracked in today's world whether you like it or not. I am just a little curious what they are doing with the data."

Brookline.News initially reported on the kiosks, which are made by Soofa, a local US company. They display a wide range of information, including bus arrival times and local activities. The kiosks contain sensors that detect a unique identifier from your phone when its WiFi is turned on. The company says the data is encrypted before it is delivered to its data platform. The information helps the city track how often people pass the boards.

Town officials said the data will help them determine the size of the audience the board is reaching. The town hopes to use the boards to send out localised messages ahead of the Boston Marathon. The foot traffic data will also help them learn how many people visit the kiosks throughout the marathon, allowing them to better adapt the board content to high-traffic regions next year. Phone users will not see a prompt indicating that the kiosk is keeping track of their data.

"I linger here 10 to 15 minutes a day, so knowing that freaked me out a little bit," said Jenna Woods, sitting near a kiosk. "I wish that it was more public knowledge. I mean, I have nothing to hide, so they can collect as much as they want. Will it be interesting? Probably not.”

Cyber experts claim that, contrary to popular belief, all of this is completely legal. Usually, the data they monitor is broadcast data from a mobile device.

"It says I am here, and a clock that says I am here for a certain period of time. There is no personal identifiable information," notes Peter Tran, Chief Information Security Officer with the IT security firm Infersight. "With cell phones, users have to be aware that you are broadcasting out certain types of information, so the cell towers can authenticate you and know it's your cellphone. What you are normally broadcasting is some basic information about your hardware, your place in the network of AT&T, Verizon, T-Mobile.” 

Tran notes that while these are individual bits of public information, combining them can be financially valuable. Soofa says that no data correlation is performed and that no data is sold to any third party; only your phone's unique identifier is collected. To avoid being tracked, Tran recommends turning off your WiFi when you are not using it. The same goes for your Bluetooth.
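Soofa's claim that the identifier is encrypted before leaving the kiosk cannot be verified from outside, but a common pattern for this kind of sensor (an assumption here, not Soofa's documented design) is to hash the phone's WiFi MAC address together with a deployment secret, so only an opaque token is ever stored:

```python
import hashlib

def anonymise_mac(mac: str, salt: str = "per-deployment-secret") -> str:
    """One-way hash of a WiFi MAC address; the raw MAC is never stored.

    The salt is a made-up deployment secret: without one, the 48-bit MAC
    space is small enough to brute-force, so salting (or keyed hashing)
    matters in practice.
    """
    normalised = mac.lower().replace("-", ":")
    return hashlib.sha256((salt + normalised).encode()).hexdigest()

a = anonymise_mac("AA:BB:CC:11:22:33")
b = anonymise_mac("aa-bb-cc-11-22-33")   # same device, different formatting
c = anonymise_mac("AA:BB:CC:11:22:34")   # different device, unrelated token
```

Modern iOS and Android versions also randomise the WiFi MAC a phone broadcasts, which blunts this kind of tracking even when WiFi is left on; turning the radio off, as Tran suggests, stops the broadcasts entirely.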

Critical Flaw Identified in Apple Silicon M-Series Chips – And it Can't be Patched

 

Researchers have identified a novel, unpatched security vulnerability that can allow an attacker to decrypt data on the most advanced MacBooks. 

This newly discovered vulnerability affects all Macs utilising Apple silicon, including the M1, M2, and M3 chips. To make matters worse, the issue is built into the architecture of the chips themselves, so Apple cannot fully fix it; any architectural remedy would have to wait for future processors, such as the M4 chips expected later this year.

The vulnerability, like last year's iLeakage attack, is a side channel that, under specific circumstances, allows an attacker to extract secret encryption keys. Fortunately, exploiting the flaw is challenging, as an attack can take a long time to complete.

The new flaw was identified by a group of seven academic researchers from universities across the United States, who outlined their findings in a research paper (PDF) on microarchitectural side-channel attacks.

To demonstrate how this issue could be exploited by hackers, they created GoFetch, an app that does not require root access; it merely requires the same user privileges as most third-party Mac apps. For those unfamiliar with Apple's M-series chips, their cores are organised into clusters.

If the GoFetch app and the cryptography app being targeted run on the same performance cluster, GoFetch can mine enough secret-dependent data to reconstruct a secret key.

Patching will hinder performance

Patching this flaw will be impossible as it exists in Apple's processors, not in its software. To fully resolve the issue, the iPhone manufacturer would have to create entirely new chips. 

Since there is no way to fix the flaw itself, the researchers who found it advise mitigating it with software workarounds on Apple's M1, M2, and M3 chips.

To implement these mitigations, cryptographic software developers would need to incorporate remedies such as ciphertext blinding, which adds or removes masks on sensitive values, such as encryption keys, before and after they are stored to or loaded from memory.
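The researchers' concrete countermeasures are specific to each cryptographic library, but the general idea of blinding is easy to show. The toy below (textbook RSA with deliberately tiny, insecure parameters, not the GoFetch authors' code) masks a ciphertext with a random value before the secret-key exponentiation and strips the mask afterwards, so the values the secret operation touches no longer depend directly on attacker-chosen input:

```python
import random
from math import gcd

# Textbook RSA with tiny parameters, for illustration only, never for real use.
p, q = 61, 53
n = p * q              # modulus: 3233
e = 17                 # public exponent
d = 2753               # private exponent: e*d = 1 mod (p-1)*(q-1)

def decrypt_plain(c: int) -> int:
    return pow(c, d, n)

def decrypt_blinded(c: int) -> int:
    """Mask the ciphertext before using the private key, unmask the result."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:                    # r must be invertible mod n
            break
    c_masked = (c * pow(r, e, n)) % n         # blind: c' = c * r^e
    m_masked = pow(c_masked, d, n)            # (c * r^e)^d = m * r  (mod n)
    return (m_masked * pow(r, -1, n)) % n     # unblind: multiply by r^-1

message = 65
ciphertext = pow(message, e, n)
assert decrypt_blinded(ciphertext) == decrypt_plain(ciphertext) == message
```

Because r is fresh and random for every decryption, an observer of the exponentiation's memory behaviour learns nothing tied to the ciphertext it chose.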

Why there's no need for concern

To leverage this unfixable vulnerability in an attack, a hacker would first have to dupe a Mac user into downloading and installing a malicious app on their computer. macOS's Gatekeeper blocks unsigned apps by default, which makes it much harder to install the malicious app required to carry out an attack.

The attack also takes quite some time to complete. In their tests, the researchers found it took anywhere from one hour to ten hours, during which the malicious app would have to run continuously.

While we haven't heard anything from Apple about this unpatched issue yet, we'll update this post if we do. Until then, the researchers advise that users keep all of the software on their Apple silicon-powered Macs up to date and apply Apple updates as soon as they become available.

Canadian University Vending Machine Malfunction Discloses Use of Facial Recognition

 

A faulty vending machine at a Canadian university has unintentionally exposed the fact that several of them have been covertly utilising facial recognition technology.

Earlier this month, a snack dispenser at the University of Waterloo displayed the error message "Invenda.Vending.FacialRecognition.App.exe" on the screen. 

There was no prior notice that the machine was using facial recognition technology or that a camera was monitoring students' movements and purchases. Users' consent was not requested before their faces were scanned or processed.

"We wouldn’t have known if it weren’t for the application error. There’s no warning here,” stated River Stanley, who reported on the discovery for the university’s newspaper.

Invenda, the company that makes the machines, promotes its "demographic detection software", which it claims can assess clients' gender and age. It says the technology complies with GDPR, the European Union's privacy regulation, although it is uncertain whether it meets the Canadian equivalents.

Last April, the national retailer Canadian Tire was found to have violated British Columbia privacy rules by using facial recognition technology without customer consent. The province's privacy commissioner stated that even if the stores had obtained consent, the firm failed to show an appropriate justification for collecting facial information.

In a statement, the University of Waterloo vowed to get rid of the Invenda machines "as soon as possible" and said it had "asked that the software be disabled" in the meantime.

Meanwhile, students at the Ontario university have responded by using gum and paper to cover the hole where they believe the camera is positioned.

Hackers can Spy on Cameras Through Walls, New Study Reveals

 

A new threat to privacy has surfaced, as scientists in the United States have discovered a technique to eavesdrop on video feeds from cameras in a variety of devices, including smartphones and home security systems. 

The EM Eye technique has the ability to take pictures through walls as well, which raises serious concerns regarding potential misuse. 

Kevin Fu, a professor of electrical and computer engineering at Northeastern University, conducted the research, which focuses on a vulnerability in the data transfer cables found in modern cameras. These connections unintentionally serve as radio antennas, emitting electromagnetic information that can be detected and decoded to provide real-time video. 

According to Tech Xplore, the threat exists because companies focus on protecting cameras' valuable digital interfaces, such as the upload channel to the cloud, while ignoring the possibility of information leaking via inadvertent channels. "They never intended for this wire to become a radio transmitter, but it is," Fu said. "If you have your lens open, even if you think you have the camera off, we're collecting." 

The EM Eye approach has been tested on 12 different kinds of cameras, including smartphones, dashcams, and home security systems. The distance required to eavesdrop varies, although it is possible to do so from as far away as 16 feet. 

The method does not require the camera to be recording, thus any device with an open lens is potentially vulnerable. Fu recommends that people use plastic lens covers as a first step in mitigating this threat, while he warns that infrared signals can still penetrate them. 

Fu believes that these discoveries serve as a wake-up call for manufacturers to fix this security hole in their designs. "If you want to have a complete cybersecurity story, yes, do the good science, but you also have to do the computer engineering and the electrical engineering if you want to protect against these kinds of eavesdropping surveillance threats," he stated. 

This research reveals a substantial and ubiquitous risk to privacy in a society where cameras are everywhere. In the words of Fu, "Basically, anywhere there's a camera, now there's a risk.”

Identity Hijack: The Next Generation of Identity Theft

 

Synthetic representations of people's likenesses, or "deepfake" technology, are not new. Picture Mark Hamill's digitally de-aged appearance as a young Luke Skywalker in "The Mandalorian" in 2020. Similarly, artificial intelligence is not a novel concept.

However, ChatGPT's launch at the end of 2022 made AI technology widely available at a low cost, which in turn sparked a competition to develop more potent models among almost all of the mega-cap tech companies (as well as a number of startups). 

For months, experts have been warning about the risks and active threats posed by the current expansion of AI, including rising socioeconomic imbalance, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud.

Over the last year, there have been numerous reports of AI-generated deepfake fraud in a variety of forms, including attempts to extort money from consumers, ridicule artists, and embarrass celebrities at scale.

According to Agence France-Presse (AFP), scammers using AI-generated deepfake technology stole nearly $25 million from a multinational firm in Hong Kong last week.

A finance employee at the company moved $25 million into specific bank accounts after speaking with several senior managers, including the company's chief financial officer, via video conference call. Apart from the worker, no one on the call was genuine. 

Despite his initial suspicions, the people on the line appeared and sounded like coworkers he recognised.

"Scammers found publicly available video and audio of the impersonation targets on YouTube, then used deepfake technology to emulate their voices... to lure the victim into following their instructions," acting Senior Superintendent Baron Chan told reporters. 

Lou Steinberg, a deepfake AI expert and the founder of cyber research firm CTM Insights, believes that as AI grows stronger, the situation will worsen. 

"In 2024, AI will run for President, the Senate, the House and the Governor of several states. Not as a named candidate, but by pretending to be a real candidate," Steinberg stated. "We've gone from worrying about politicians lying to us to scammers lying about what politicians said .... and backing up their lies with AI-generated fake 'proof.'" 

"It's 'identity hijacking,' the next generation of identity theft, in which your digital likeness is recreated and fraudulently misused," he added. 

The best defence against static deepfake images, he said, is to embed micro-fingerprint technology into camera apps, which would allow social media platforms to recognise when an image is genuine and when it has been tampered with. 

When it comes to interactive deepfakes (phone calls and videos), Steinberg believes the simple solution is to create a code word that can be employed between family members and friends. 

Companies, like the Hong Kong firm, should develop rules for handling nonstandard payment requests that require codewords or confirmation via a separate channel, according to Steinberg. A video call cannot be trusted on its own; the executives involved should be called back separately and immediately.

Elite Supplements: The Latest Aussie Business to Fall Victim to a Cyber Attack

 

Consumers of a popular Australian supplement brand are being alerted about the possibility that the company's hack exposed their personal data.

In an email obtained by NCA NewsWire, Elite Supplements notified clients that the business had experienced a cyberattack that "gave one or more unknown parties access" to certain online customer information. 

After first learning of the possible breach on January 30, the firm said it took the matter "extremely seriously" and informed its customers on Saturday just after 6 p.m.

Customers may, however, take some comfort in knowing that the hack did not expose any passwords, credit cards, or other financial information. Instead, the attackers stole the names, shipping addresses, email addresses, and phone numbers of online customers.

“Our intent was to verify that a breach occurred and to determine as much as possible what data was used before alerting customers,” Elite Supplements told customers in an email. “We have begun notifying relevant government authorities and the company is fully compliant with our reporting obligations under cybersecurity legislation.

“Elite Supplements deeply regrets this incident, despite the significant investments we have made in cybersecurity. We sincerely apologise for any inconvenience or distress the breach may have caused our customers,” the company further stated. 

The business stated that since it hired a cybersecurity provider, the data it holds has been secured. Customers were advised in the email to be cautious of any correspondence appearing to come from Elite Supplements going forward, as information had been acquired during the breach.

Rise in cybercrimes 

Cybercrime remains a problem in Australia. One major worry is fraud: in 2022, Australians lost more than $48 million to investment scams, and scams overall cost victims around $72 million that year. Furthermore, one in four Australians has experienced identity theft.

Generally speaking, Australians are among the wealthiest people on the planet. A study of the median wealth per adult put Australians at the top of the affluent list, with a median worth of $273,900 – ahead of Belgium ($267,890) and New Zealand ($231,260). This may help to understand why Australian people and businesses are the target of cybercriminals.

A significant data breach at the telecommunications company Optus took place in September 2022. About 9.8 million individual records, including names, dates of birth, addresses and, in certain cases, passport numbers, were stolen; roughly 2.1 million customers had identity document numbers exposed. However, the hack did not reach any financial data.

Leaked Data from Binance Taken Down


The security of one of the world's biggest cryptocurrency exchanges has come under scrutiny following the recent disclosure of private Binance information on GitHub. Several documents, including code, internal passwords, and architecture diagrams, were released by a GitHub account going by the name "Termf" and remained publicly accessible for several months. The content was removed after Binance filed a copyright takedown request.

Binance has had the leaked data removed from GitHub

The leaked material included various technical details, among them code relating to Binance's security procedures. Notably, this covered details of multi-factor authentication (MFA) and passwords. A large portion of the published code concerned systems labelled "prod", indicating a link to Binance's production environment rather than test or development systems.

The problem came to light on January 5, 2024, when 404 Media contacted Binance to inform the exchange about the compromised data. Binance responded by sending GitHub a copyright takedown request, in which it admitted that internal code in the disclosed material "poses a significant risk" to the exchange, risking "severe financial harm" as well as possible user confusion or harm.

What next?

Even after acknowledging the leak, Binance put forward a spokesperson to reassure its user base. According to the spokesperson, Binance's security team examined the circumstances and concluded that the leaked code did not match the code currently in production. The representative emphasized the protection of users' data and assets and stated that the compromised information posed only a "negligible risk".

This incident highlights the importance of strong security procedures in the cryptocurrency sector. Crypto exchanges must uphold strict security practices because of their role in managing users' sensitive information and financial assets. The prolonged public exposure of security-related code and internal passwords calls the effectiveness of Binance's security protocols into question.

The necessity of heightened security protocols

The exposed data raises a further level of worry, especially the code concerning security mechanisms such as multi-factor authentication and password handling. Security lapses of this kind can have serious repercussions, including the compromise of user accounts and funds. It highlights the continuing difficulty cryptocurrency platforms face in maintaining the integrity and confidentiality of their internal systems.

Cryptographers' Groundbreaking Discovery Enables Private Internet Searches

 

The desire for private internet searches has long been a cryptographic challenge. Historically, getting information from a public database without disclosing what was accessed (known as private information retrieval) has been a difficult task, particularly for large databases. The perfection of a private Google search, in which users can browse through material anonymously, has remained elusive due to the computational demands of such operations. 

However, a new study by three pioneering researchers has made tremendous progress in this field. They developed an innovative version of private information retrieval and expanded it to create a larger privacy method. This technique has been recognised for its pioneering potential, with plaudits expected at the annual Symposium on Theory of Computing in June 2023. 

Breaking barriers in cryptography

This development is based on a new way of discreetly pulling information from huge datasets. It addresses the significant challenge of performing private searches across large databases without requiring a corresponding increase in computational effort. The technique is game-changing because it streamlines the process of conducting private searches, making them more viable and efficient.

The strategy involves preprocessing the database: the entire dataset is encoded into a special structure that allows queries to be answered by reading only a small section of it. This means a single server can host the information and perform the preprocessing independently, enabling future users to retrieve data securely without incurring additional computing costs.
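The single-server preprocessing scheme is intricate, but the underlying goal of private information retrieval can be shown with the classic two-server construction (an older, different scheme, sketched here only to illustrate the goal): each server answers a query that looks like a uniformly random subset of indices, yet the client recovers exactly the bit it wanted.

```python
import secrets
from functools import reduce

db = [1, 0, 1, 1, 0, 0, 1, 0]   # public database of bits, replicated on two servers
n = len(db)

def server_answer(query):
    """A server XORs together the database bits at the queried indices.

    It learns nothing about which single bit the client wants, because the
    query it sees is a uniformly random subset of {0, ..., n-1}.
    """
    return reduce(lambda acc, i: acc ^ db[i], query, 0)

def private_fetch(i):
    """Client side: recover db[i] without revealing i to either server."""
    S = {j for j in range(n) if secrets.randbits(1)}   # random subset for server A
    T = S ^ {i}                                        # toggle index i for server B
    return server_answer(S) ^ server_answer(T)         # all but bit i cancels out

bit = private_fetch(3)
```

Every index except i appears in both queries or in neither, so its contribution cancels in the XOR and only db[i] survives. The new result's significance lies in achieving something comparable with a single server and sublinear work per query, which this toy two-server version does not attempt.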

The future of online privacy 

While this breakthrough is noteworthy, practical applications are still being developed. The preprocessing method, as it stands, is most beneficial for extremely big databases and may not be realistic for everyday use due to existing processing performance and storage restrictions. 

Nonetheless, the research community remains optimistic. The history of cryptography reveals a similar pattern of optimising initially difficult outcomes into feasible ones. If the trend continues, private lookups from huge databases could become a reality, drastically changing our connection with the internet and significantly improving user privacy. 

A theoretical breakthrough

The new technique, invented by three cryptographers, employs a sophisticated kind of private information retrieval. It tackles the difficulty of executing private searches across large data sets without requiring additional computer resources. This concept is a major change from standard procedures, which frequently necessitate scanning whole databases to ensure secrecy. 

In a nutshell, recent developments in cryptography are an important step towards enabling completely private internet searches. This advancement has the potential to revolutionise how we access and interact with information online, putting user privacy and security first in an increasingly digital environment.

Ahmedabad Creates History as India’s First City With AI-Linked Surveillance System

 

The largest city in the Indian state of Gujarat, Ahmedabad, made history by being the first to install a surveillance system connected to artificial intelligence (AI). In order to enhance public safety and security, the city has teamed up with a tech company to install a state-of-the-art artificial intelligence system that can analyse massive amounts of data. 

A cutting-edge artificial intelligence command and control station, with an impressive 9-by-3-metre screen monitoring a vast 460-square-kilometre region covering Ahmedabad and its surrounding areas, is located in the city's Paldi area. The AI surveillance system provides a view of the entire city by combining live drone footage with camera feeds from buses and traffic signals.

With its sophisticated facial recognition technology, the AI-linked surveillance system can recognise and track people in real time. It can also identify and react to patterns of criminal activity, making it an invaluable tool for Ahmedabad's law enforcement agencies.

The local authority is confident that the new AI system will strengthen the effectiveness of its law enforcement operations and aid in the prevention and detection of crime. The system will also help with traffic management, crowd control, and disaster response.

"The implementation of this AI-linked surveillance system is a significant milestone for Ahmedabad and for India as a whole," a spokesperson for the city stated. "We are committed to leveraging the latest technology to ensure the safety and security of our citizens, and we believe that this system will play a crucial role in achieving that goal.” 

The introduction of an AI-powered monitoring system has ignited a national debate regarding potential advantages and drawbacks of such advanced technology in public places. While some have praised the system for its potential to increase safety and security, others have expressed concerns about privacy and data protection issues. 

Nonetheless, Ahmedabad's pioneering initiative has set a precedent for other Indian cities to follow as they seek to use AI to improve public safety and security. Ahmedabad has clearly established itself as a leader in the adoption of AI technology for the benefit of its citizens with the effective implementation of this system.

Hackers Breach Steam Discord Accounts, Launch Malware


On Christmas Day, the popular indie strategy game Slay the Spire's fan expansion, Downfall, was compromised, allowing Epsilon information stealer malware to be distributed over the Steam update system.

Developer Michael Mayhem revealed that the corrupted package is not a mod installed through Steam Workshop, but rather the packed standalone modified version of the original game.

Hackers breached Discord

The hackers took over the Discord and Steam accounts of one of the Downfall devs, giving them access to the mod's Steam account.

Once installed on a compromised system, the malware will gather information from Steam and Discord as well as cookies, saved passwords, and credit card numbers from web browsers (Yandex, Microsoft Edge, Mozilla Firefox, Brave, and Vivaldi).

Additionally, it will search for documents with the phrase "password" in the filenames and for additional credentials, such as Telegram and the local Windows login.

Users of Downfall are advised to change all important passwords, particularly those for accounts that are not protected by two-factor authentication (2FA).
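Two-factor authentication typically relies on time-based one-time passwords (TOTP, RFC 6238), the rotating six-digit codes produced by authenticator apps. As a rough illustration of why a stolen password alone is not enough, here is a minimal sketch of the standard derivation using only the Python standard library (the function name is our own):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, now=59))  # -> "287082"
```

Because the code depends on a shared secret and the current 30-second window, credentials harvested by a stealer like Epsilon quickly become useless against 2FA-protected accounts.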

According to users who received the malicious update, the malware installs itself as UnityLibManager in the /AppData/Roaming folder or as a Windows Boot Manager application in the AppData folder.

About Epsilon Stealer

Epsilon Stealer is an information-stealing trojan whose operators sell the stolen data to other threat actors via Telegram and Discord. It is frequently used to deceive players on Discord into downloading malware under the pretence of paying them to playtest a new game. 

But once the game is installed, malicious software is also launched, allowing it to operate in the background and harvest credit card numbers, passwords, and authentication cookies from users.

Threat actors could sell the stolen data on dark web markets or utilize it to hack other accounts.

Steam strengthens security

Game developers who deploy updates on Steam's usual release branch now need to submit to SMS-based security checks, according to a statement made by Valve in October.

The decision was made in reaction to the growing number of compromised Steamworks accounts that, beginning in late August, were being used to submit dangerous game builds that would infect players with malware.


US Senators Targeted by Swatting Incidents in Multiple States

 

A recent surge of "swatting" incidents across America, primarily targeting Republican politicians, has perplexed police agencies and put victims at risk this holiday season, driving lawmakers to demand stricter anti-swatting laws and harsher penalties.

Swatting entails filing a false complaint to a law enforcement agency, frequently alleging that a violent crime or hostage incident is taking place at the intended victim's home. A heavily armed SWAT team will typically arrive at the unwary victim's home and barge through the door, pistols drawn. Sometimes the outcome is deadly. 

Republicans including Rep. Marjorie Taylor Greene of Georgia, Sen. Rick Scott of Florida, and Ohio Attorney General Dave Yost were targeted by swatting attacks last month. Democrats have not been spared either; Boston Mayor Michelle Wu was swatted on Christmas Day. The Atlanta Journal-Constitution reports that a number of Georgia officials, including the lieutenant governor and at least four state senators, claimed to have been swatted in recent days.

Greene also reported on X (formerly Twitter) that swatting attempts were made against her two daughters' homes on December 28. On Christmas Day, she wrote that she had personally been swatted roughly eight times. 

Kevin Kolbye, a former FBI assistant special agent who investigated swatting crimes, estimates that there are 1,000 swatting events in the US per year. In a 2017 interview, Kolbye—who passed away in October—told Business Insider that swatters frequently pose as someone else and use fictitious phone numbers, making them hard to track down. 

Kolbye claimed that because police are compelled to act quickly in response to reported crimes, they frequently fail to differentiate between an actual emergency and a swatting call in the heat of the moment. 

In order to combat swatting attacks across the country, the FBI announced in June the creation of a new national internet database that will allow hundreds of police departments and law enforcement organisations to share information about swatting instances. 

According to The Associated Press, states such as Ohio and Virginia have recently strengthened their anti-swatting legislation. Ohio made swatting a felony this year, and Virginia increased the maximum term for swatting to 12 months in jail. Clint Dixon, a Georgia state senator, said in a statement that he plans to file legislation in 2024 to impose stronger punishments for false reporting and misuse of police forces. 

"This issue goes beyond politics — it's about public safety and preserving the integrity of our institutions," Dixon stated. "We will not stand for these threats of violence and intimidation. Those involved in swatting must be held accountable under the full extent of the law.”

Three Ways Smart Devices Can Compromise Your Privacy

 

Any gadget that has an internet connection and can be operated by a computer or smartphone is considered a smart device. Home appliances, security cameras, thermostats, doorbells, lighting systems, and other networked gadgets are examples of such devices. 

Smart devices are becoming more prevalent due to the comfort they provide. However, with this ease comes a higher risk to your privacy. 

When people talk about smart gadgets, they are referring to the internet of things (IoT) and its ability to connect all of your devices together. This means that all of the data generated by each device can be viewed and shared with other connected devices, potentially exposing sensitive information about you and your home life. Here are three ways that smart devices might jeopardise your privacy. 

Location tracking 

Many smart devices track and save users' whereabouts, allowing detailed profiles of their behaviours to be created. Without the user's knowledge or consent, this data can then be sold to third parties. 

With smart devices like fitness trackers and smartphones, this has become a serious issue. If you're not careful, your smartphone may be sharing more information than you realise. You may believe that you have control over the data it collects, but this is not always the case. 

Insecure Wi-Fi 

Many smart gadgets use Wi-Fi to connect to the internet. This means that if adequate safety measures are not in place, they may be vulnerable to hackers, who can gain access to your device, read sensitive data such as passwords, and even take control of it. 

Hackers have been known to hijack smart devices via Wi-Fi connections and use them to launch cyber-attacks. This is especially important if you travel with smart gadgets such as phones or laptops, as they may connect to unsecured Wi-Fi networks. 
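One practical precaution is to check your own devices for services that should never be exposed, such as Telnet, which ships enabled on some poorly secured IoT gear. The sketch below uses only the standard library; the LAN address is hypothetical, and you should only probe devices you own:

```python
import socket

# TCP services often left open on poorly secured IoT devices.
RISKY_PORTS = {21: "FTP", 23: "Telnet", 554: "RTSP"}

def check_device(host, timeout=0.5):
    """Return the risky TCP ports on `host` that accept a connection."""
    open_ports = []
    for port, name in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port is open
                open_ports.append((port, name))
    return open_ports

# Example: audit a camera or router at a hypothetical LAN address.
for port, name in check_device("192.168.1.1"):
    print(f"Port {port} ({name}) is open -- consider disabling the service")
```

If a device answers on Telnet or FTP, disable the service in its settings or isolate the device on a guest network.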

Webcam vulnerabilities 

Smart devices frequently include built-in cameras and microphones that can be hacked to gain access to the user's audio and video records. This has been a major problem in recent years, with cases of "webcam hacking" growing steadily. 

People are increasingly installing cameras in their doorbells, baby monitors, and even televisions. All of these can be hacked into if the user does not take proper safety measures. For example, in some cases, hackers have taken over security cameras and utilised them to spy on unsuspecting individuals in their homes. This is an extreme example of a privacy infringement that can be avoided with adequate safety measures. 

Bottom line 

Smart devices can be a wonderful addition to the home, but you must be aware of the risks that they involve. They can violate your privacy in a variety of ways, including  targeted attacks, location tracking, real-time recording, and so on. 

Furthermore, flaws in your connectivity solution can expose your devices, data, and family or customers to cyber-attacks. Understanding the threats and implementing the required security measures will help you secure your privacy. Early intrusion detection is the most successful method of preventing cyber-attacks, and this is still true in the Internet of Things era.

Lancashire-Based Scamming Group Jailed in £500k Charity Fraud

 

A group of charity scammers who pretended to be grocery store bucket collectors and deceived the public out of at least £500,000 have been imprisoned. 

By pretending to be collectors for children's charities such as Children In Need, Great Ormond Street Hospital Children's Charity, The Children's Society, The Christie Charitable Fund, and Mind, the group of fake collectors took advantage of the goodwill of the public. 

David Levi, 47, who was identified as the main con artist, contacted charities and requested permission to collect money on their behalf using their logos and brand names. The gang used banners, fake ID badges, and Pudsey Bear costumes and set up booths and stalls in supermarkets. 

Preston Crown Court was told that although the gang collected at least £500,000, they passed on less than 10% of it to the charities.

Judge Andrew Jefferies KC said he could only estimate the total amount pocketed by the gang, noting that some cash deposits were only made to the charities once the police began to investigate.

"This was a huge betrayal of trust. You all took advantage of public goodwill and, in some cases, private grief," the judge told Levi and his co-defendants as he handed down his sentence. 

The court heard how Levi and his crew of fraudsters duped stores into allowing collections under false pretences. 

The imposters are believed to have claimed approval from head office or charity administrators and threatened to report an employee to their national office if they were not allowed. 

Lancashire Police launched an inquiry in May 2017 after Children In Need referred the case to Action Fraud. Officers raided Levi's house and business in Lytham, Lancashire, in June, and recovered various phones, iPads, and charity items. 

Detectives subsequently built the case using financial, telephone, and cell-site data, as well as surveillance of some of the collections themselves. 

Levi was sentenced to five years in prison on Thursday for fraud and money laundering. Following his release on parole, he will be subject to a five-year serious crime prevention order. 

"When people donate to a charity, they rightly expect that their money will go to supporting good causes, not lining the pockets of greedy con men like David Levi and his gang," Detective Chief Inspector Mark Riley said following the sentencing. "They have exploited peoples' goodwill and honesty to the tune of thousands of pounds, and I'm pleased that we have been able to bring them to justice.”

Here's How to Avoid Falling for Costly Pig Butchering Scam

 

Hardly a day passes without some sort of scam appearing on our phones or in our emails, attempting to trick us into downloading malware, revealing a password, or paying for something that isn't genuine. However, there is one increasingly popular scam that you really don't want to fall for.

A "pig butchering" scheme is so named because the perpetrators will "fatten up" a victim to gain their trust before "butchering" them — generally by convincing them to invest significant sums of money in a fake venture and then stealing it all. 

The US Department of Justice reported that four men, three of them from Southern California, were recently charged in connection with such a scheme. According to the DOJ, the scam cost victims $80 million. 

The DOJ charged Lu Zhang, 36, of Alhambra, Justin Walker, 31, of Cypress, Joseph Wong, 32, of Rosemead, and Hailong Zhu, 40, of Naperville, Ill., with conspiracy to commit money laundering, international money laundering, and concealing money laundering. Zhang and Walker were arrested and appeared in court last week. If convicted, each faces a maximum sentence of 20 years in prison. 

Pig butchering fraudsters mostly learn about their victims on dating sites or social media, or by ringing and pretending to have dialled the wrong number, the federal officials said in a release. The frauds are mostly carried out by criminal organisations from Southeast Asia that exploit human trafficking victims to reach out to millions of people around the world. Scammers establish relationships with victims in order to earn their trust and, in many cases, present an idea of using cryptocurrencies to make a business venture. 

When it comes to cryptocurrencies, victims are lured to fictitious investing platforms where they accidentally transfer their money to accounts under the control of scammers. The victims are then persuaded to contribute more and more money by the platform's false presentation of substantial returns on investment. But when they eventually go to take their money out, the con artists either ignore them or simply take off with the money.

Here's How Unwiped Data On Sold Devices Can Prove Costly

 

As time passes, it is disturbing to see how many people still have a casual attitude towards their personal data, despite the constant stream of cyber incidents and large data breaches in the headlines. Millions of accounts and sensitive personal information have been compromised, but the general public's attitude towards data security remains carelessly lax.

SD cards

Take SD cards, for example, as portable storage media. These minuscule yet mighty gadgets are immensely useful, allowing us to carry vital data such as images, messages, and recordings. But since it is so simple to store personal data on these cards, security breaches frequently occur. 

When these cards are sold or handed on to others, a prevalent issue arises. Many people do not properly erase their private information, which might remain accessible to the new owner. Regular file deletion does not ensure safety, because data recovery tools can frequently recover what was believed to be gone for good. Surprisingly, some people do not even care to erase their data before handing the cards on, exposing their sensitive information. 
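A somewhat safer habit than plain deletion is to overwrite a file's contents before removing it. The sketch below illustrates the idea (the function name and example path are our own); note that flash media such as SD cards and SSDs use wear-levelling, so old blocks may survive overwriting, and full-device encryption or physical destruction remains the more reliable option:

```python
import os
import secrets

def overwrite_and_delete(path, passes=1):
    """Overwrite a file with random bytes before deleting it.

    Caveat: on wear-levelled flash media the controller may remap blocks,
    leaving old data physically intact despite the overwrite.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())  # push the overwrite down to the device
    os.remove(path)

# Hypothetical usage before selling a card:
# overwrite_and_delete("/media/sdcard/DCIM/IMG_0001.jpg")
```

For whole cards, a full reformat combined with filling the card to capacity with random data, or the manufacturer's secure-erase utility, is preferable to deleting files one by one.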

SD cards are frequently mistakenly included in the sale of mobile phones and tablets. This omission, along with a general lack of concern, poses a serious risk. Furthermore, company data is occasionally left on these devices, unnoticed by security agencies and personnel.

A study undertaken by the University of Hertfordshire a few years ago brought this issue to the forefront. Researchers bought roughly 100 discarded memory cards from eBay and used phone stores, then attempted to extract data from them. These cards have been utilised in a variety of devices, including phones, tablets, cameras, and drones. Selfies, document images, contact information, browsing history, and much more sensitive items were discovered in the retrieved data. This data is easily exploitable by criminals, revealing a significant disparity between public recognition of the importance of data security and actual user behaviour. 

Hard drives

The Techradar group carried out a study on old hard drives in 2008. They analysed the contents of the drives they bought from internet stores like eBay. The results were alarming: a significant quantity of private information, including records and images, could still be retrieved. 

Smartphones

Similarly, Avast's 2014 investigation of used smartphones identified a similar issue. Despite the fact that many users thought they had wiped their phones clean, over 40,000 images, including sensitive ones, as well as financial data, were recovered from these devices. 

The aforementioned studies point to a significant knowledge gap regarding digital data security that most people have. Using smartphones' "Restore and reset to factory settings" feature alone does not ensure that personal data is completely erased and permanently lost. Experts in data recovery and hackers can frequently retrieve data that regular commercial tools are unable to. In simple cases, even well-known software tools can retrieve files; however, if a hacker is committed and has the necessary resources, they can go much further.

DNA Security: Companies Must Meet Strict Penalties for Risking Users' Data

DNA Security

The pressing concern of companies ignoring DNA security

DNA security is a concern that is rarely discussed in the cybersecurity landscape, where the conversation usually centres on personal information such as passwords and payment details. 

The latest 23andMe data breach serves as a sharp reminder of a terrifying reality: our most important, private data may not be as safe as we believe. It paints a striking picture of corporations that profit from users' DNA while neglecting to protect it.

The cost of getting exposed

Hackers gained access to 6.9 million users' personal information, including birth years, geographic locations, and family trees, in the 23andMe breach. It raises several important questions: Are organizations doing enough to safeguard our data? Should we put our most personal information in their hands?

The boldness of 23andMe and similar companies is amazing. They position themselves as defenders of our genetic heritage, as guardians of our ancient histories and possible medical destinies. 

But when a breach occurs and our information is compromised, they fall back on excuses such as: "It was the users' old passwords that led to the hacking, not us."

User security should be paramount

Organizations that manage such private information should be held to the highest standards possible. This isn't only about credit card numbers or email addresses. We are talking about DNA, the blueprint for our lives. If anything should be regarded as sacred in the age of technology, it has to be this.

The DNA testing industry must do more. It has to guarantee that safety precautions are not only sufficient but also exceptional. They should be at the forefront of cybersecurity, setting the standard for all other industries to follow.

What does the future hold?

This is much more than just stronger passwords and multi-factor authentication. This is about an important change in how these organizations see the data with which they have been entrusted. It's about acknowledging their enormous duty, not only to their customers but to society as a whole.

It is past time for 23andMe and the DNA testing business to recognize that they are dealing with more than just data. They are concerned with people's lives, history, and futures. It's about time they begin handling users' data with respect.

Expansion of FemTech Raises Women Data Safety Concerns

 

Globally, women are being empowered by modern FemTech goods and services, which range from breast pumps to fertility trackers. Still, as FemTech grows, so does the need to safeguard personal data. In this article, we'll share tips on how to protect yours. 

The rise in popularity of FemTech in recent years has given women all over the world the means to monitor their physical changes, and the industry has come a long way since the invention of the disposable sanitary towel in 1888. But as privacy issues have surfaced, how concerned should you really be about the protection of your personal information? 

Data safety concerns 

Millions of women add personal details about themselves to period and health tracking apps every single day. With the help of these apps, you can easily monitor your physical changes in a less intrusive and time-consuming manner. Most of us believe that our data will be protected by legislation such as the Patient Data Act in Sweden or HIPAA in the United States, though this isn't always the case. 

Data is primarily protected by data protection rules if it is owned by specific entities. An excellent illustration is that if you share medical information with your doctor, it will be protected by laws like HIPAA that safeguard health data protection. However, if you share the same information on a health tracking app, the protections do not apply. The product for the majority of apps is user data, which may be sold for a profit without the user knowing. 

Apps that track your health can gather an incredible amount of personal data about you, including your location and contact details. A study published in the Journal of Medical Internet Research examined 23 well-known health-tracking applications and found that all of them gathered sensitive health data from users, while only 70% even had a privacy policy. You don't have to delete your health-tracking applications, despite how unsettling this may sound. Instead, consider the following security practices to safeguard your data. 

Limit any superfluous permissions

The majority of apps, including those for health, will ask for permission to access a range of data that is kept on your phone. This data may include photographs and/or location. Although most of us will automatically hand over access without a second thought, it is best for you to limit access to information on your phone that isn't necessary. All you have to do is navigate to Settings, select "Privacy and Security," and then check the apps that have access to your data. 

Use encrypted text messaging

When you send a message or data that is encrypted, it is essentially scrambled and cannot be read without a key. Only your device and the recipient's device have access to that key. This ensures your information remains confidential and unaltered until it reaches the intended party. 
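To make the "scrambled without the key" idea concrete, here is a toy one-time pad in Python. This is purely an illustration; real messaging apps rely on vetted protocols such as the Signal protocol rather than anything hand-rolled:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the message with a single-use random key (a one-time pad)."""
    assert len(key) == len(plaintext), "the pad must match the message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"My cycle data"
key = secrets.token_bytes(len(message))  # shared only with the recipient

ciphertext = encrypt(message, key)       # unreadable without the key
recovered = decrypt(ciphertext, key)     # recipient recovers the original
assert recovered == message
```

Without the key, the ciphertext is statistically indistinguishable from random noise, which is exactly the property an eavesdropping app vendor or network observer is denied.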

Additionally, you can look into how your health-tracking app secures your personal data. For instance, the data you submit to the Flo app is transferred over a secure cloud server, and no information is ever kept in one place. 

Use official app stores 

Installing apps from unidentified sources may put you in danger of having your data collected without your knowledge or consent, among other harmful activities. Although downloading free software from unidentified sources could appear easy at first, there could be a privacy risk involved.