
Okta Report: Pirates of Payrolls Attacks Plague Corporate Industry


IT help desks should be ready for an evolving threat that sounds like a Hollywood movie title. In December 2025, Okta Threat Intelligence published a report explaining how hackers gain unauthorized access to payroll software. These threats are known as payroll pirate attacks. 

Pirates of the payroll

These attacks start with threat actors calling an organization’s help desk, pretending to be a user and requesting a password reset. 

“Typically, what the adversary will do is then come back to the help desk, probably to someone else on the phone, and say, ‘Well, I have my password, but I need my MFA factor reset,’” according to VP of Okta Threat Intelligence Brett Winterford. “And then they enroll their own MFA factor, and from there, gain access to those payroll applications for the purposes of committing fraud.”

Attack tactic 

The threat actors are working at a massive scale, leveraging various services and devices to assist their malicious activities. According to the Okta report, the cyber thieves employed social engineering, calling help desk personnel on the phone and attempting to trick them into resetting the password for a user account. These attacks have impacted multiple industries.

“They’re certainly some kind of cybercrime organization or fraud organization that is doing this at scale,” Winterford said. Okta believes the hacking gang is based out of West Africa. 

Recently, the US education sector has been plagued by payroll pirates. The latest Okta research notes that these schemes are now spreading across other industries, such as retail and manufacturing. “It’s not often you’ll see a huge number of targets in two distinct industries. I can’t tell you why, but education [and] manufacturing were massively targeted,” Winterford said. 

How to mitigate payroll pirate attacks?

Okta advises companies to establish a standard process for verifying the identity of users who contact the help desk for assistance. Winterford advised that businesses relying on outsourced IT support should limit their help desks' ability to reset user passwords without robust verification measures. “In some organizations, they’re relying on nothing but passwords to get access to payroll systems, which is madness,” he said.



Google Launches Emergency Location Services in India for Android Devices


Google starts emergency location service in India

Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. Users in an emergency can call or text emergency service providers such as police, firefighters, and healthcare professionals, and ELS shares the user's accurate location with them immediately. 

Uttar Pradesh (UP) has become the first Indian state to operationalise ELS for Android devices. ELS is available on devices running Android 6 or newer; however, state authorities must integrate it with their emergency services before it becomes active. 

More about ELS

According to Google, the ELS function on Android handsets has been activated in India. The built-in emergency service will enable Android users to communicate their location by call or SMS in order to receive assistance from emergency service providers, such as firefighters, police, and medical personnel. 

ELS on Android collects information from the device's GPS, Wi-Fi, and cellular networks in order to pinpoint the user's exact location, with an accuracy of up to 50 meters.
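The multi-source fusion described above can be illustrated with a toy inverse-variance weighting scheme. This is only a sketch of the general idea of combining estimates by their stated accuracy; the `fuse` helper and numbers are hypothetical, not Google's actual Fused Location Provider algorithm:

```python
def fuse(estimates):
    """Toy location fusion: combine (lat, lon, accuracy_m) estimates,
    weighting each source by the inverse square of its accuracy radius."""
    weights = [1.0 / (acc ** 2) for (_, _, acc) in estimates]
    total = sum(weights)
    lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return lat, lon

# A tight GPS fix (10 m) dominates a coarse cell-tower fix (100 m).
print(fuse([(10.0, 20.0, 10.0), (10.2, 20.2, 100.0)]))
```

In this sketch, the more accurate GPS reading pulls the fused result close to its own coordinates, which is the intuition behind blending GPS, Wi-Fi, and cellular signals.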

Implementation details

However, local wireless and emergency infrastructure operators must enable support for the ELS capability. The first state in India to "fully" operationalize the service for Android devices is Uttar Pradesh. 

ELS assistance has been integrated with the emergency number 112 by the state police in partnership with Pert Telecom Solutions. It is a free service that only transmits a user's position when an Android phone dials 112. 

Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality. 

Google says ELS on Android has supported over 20 million calls and SMS messages to date, and a caller's location can be delivered even if the call drops within seconds of being answered. ELS is powered by Android's Fused Location Provider, Google's machine learning-based location tool.

Promising safety?

According to Google, the feature is available only to emergency service providers, and Google never collects or uses the precise location data for itself. ELS data is sent directly to the concerned authority only.

Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed with a responder during an emergency call or SMS, but the emergency service provider must first get the user's approval. A prompt appears on the user's screen when the responder requests video; the user can accept the request and provide a visual feed, or reject it.

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


Experts have found a high-severity security bug impacting Open WebUI. It may expose users to account takeover (ATO) and, in some cases, lead to full server compromise. 

Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw, tracked as CVE-2025-64496, was found by Cato Networks researchers. It affects Open WebUI versions 0.6.34 and older when the Direct Connections feature is enabled, and carries a severity rating of 7.3 out of 10. 

The vulnerability exists inside Direct Connections, which lets users connect Open WebUI to external OpenAI-compatible model servers. While built to support flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking to a malicious server posing as a genuine AI endpoint. 

Fundamentally, the vulnerability stems from a lapse of trust between untrusted model servers and the user's browser session. A malicious server can send a tailored server-sent events message that triggers execution of JavaScript code in the browser, letting a threat actor steal authentication tokens stored in local storage. With these tokens, the attacker gains full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other important data. 

Depending on user privileges, the consequences can be different.

Consequences?

  • Theft of JSON web tokens and session hijacking. 
  • Full account takeover, including access to chat logs and uploaded documents.
  • Leak of sensitive data and credentials shared in conversations. 
  • Remote code execution (RCE), if the user has the workspace.tools permission enabled. 

Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025, after patch validation and CVE assignment. Open WebUI versions 0.6.35 and later block the compromised execute events, patching the user-facing threat.

Open WebUI’s security patch will work for v0.6.35 or “newer versions, which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources,” according to Cato Networks researchers.
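Because the fix is purely version-gated, administrators can triage their exposure with a simple check. The sketch below assumes plain "major.minor.patch" version strings and is illustrative only:

```python
PATCHED = (0, 6, 35)  # First Open WebUI release that blocks the malicious execute events

def is_vulnerable(version: str, direct_connections_enabled: bool) -> bool:
    """CVE-2025-64496 affects Open WebUI <= 0.6.34 with Direct Connections enabled."""
    parts = tuple(int(p) for p in version.split("."))
    return direct_connections_enabled and parts < PATCHED

print(is_vulnerable("0.6.34", True))   # vulnerable: old version, feature enabled
print(is_vulnerable("0.6.35", True))   # patched release
print(is_vulnerable("0.6.34", False))  # feature disabled, exploit path closed
```

Tuple comparison handles the version ordering here; real deployments with suffixed version strings would need a proper version parser.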





New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations enforceable in federal court under a wide-ranging legislative proposal recently disclosed by US Senator Marsha Blackburn, R-Tenn. 

About the proposal

In addition, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent, allowing statutory and punitive damages, attorney fees, and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

To ensure a single, least-burdensome national standard rather than fifty inconsistent state ones, the directive required the administration to collaborate with Congress. 

Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws conflicting with administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that the Federal Trade Commission create regulations defining "minimum reasonable" AI protections. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal would preempt state laws regulating the management of catastrophic AI risks, and would also largely preempt state digital-replica laws to create a national standard for AI. 

The proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill would take effect 180 days after enactment. 

India's Fintech Will Focus More on AI & Compliance in 2026


India’s fintech industry enters 2026 with a new set of goals. The industry initially focused on rapid expansion through digital payments and aggressive customer acquisition, but it is now shifting toward sustainable growth, compliance, and risk management. 

“We're already seeing traditional boundaries blur- payments, lending, embedded finance, and banking capabilities are coming closer together as players look to build more integrated and efficient models. While payments continue to be powerful for driving access and engagement, long-term value will come from combining scale with operational efficiency across the financial stack,” said Ramki Gaddapati, Co-Founder, APAC CEO and Global CTO, Zeta.

As the industry prepares to enter 2026, artificial intelligence (AI) is emerging as a critical tool in this transformation, helping firms strengthen fraud detection, streamline regulatory processes, and enhance customer trust.

What does the data suggest?

According to Reserve Bank of India (RBI) data, digital payment volumes crossed 180 billion transactions in FY25, powered largely by the Unified Payments Interface (UPI) and embedded payment systems across commerce, mobility, and lending platforms. 

Yet, regulators and industry leaders are increasingly concerned about operational risks and fraud. The RBI, along with the Bank for International Settlements (BIS), has highlighted vulnerabilities in digital payment ecosystems, urging fintechs to adopt stronger compliance frameworks.

AI a major focus

Artificial intelligence is set to play a central role in this compliance-first era. Fintech firms are deploying AI to:

  • Detect and prevent fraudulent transactions in real time
  • Automate compliance reporting and monitoring
  • Personalize customer experiences while maintaining data security
  • Analyze risk patterns across lending and investment platforms
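As a toy illustration of real-time fraud screening, many pipelines start with a simple statistical outlier rule before any machine learning is involved. The `flag_transaction` helper and its z-score threshold below are illustrative, not any firm's production logic:

```python
from statistics import mean, stdev

def flag_transaction(history, amount, z_threshold=3.0):
    """Flag a payment whose amount sits far outside the user's history (z-score rule)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

typical = [100, 120, 110, 105, 115]      # user's recent payment amounts
print(flag_transaction(typical, 5000))   # large outlier gets flagged
print(flag_transaction(typical, 112))    # normal amount passes
```

A flagged transaction would then be routed to step-up verification or manual review rather than being blocked outright.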

Moving beyond payments?

The sector is also diversifying beyond payments. Fintechs are moving deeper into credit, wealth management, and banking-related services, areas that demand stricter oversight. This allows firms to capture new revenue streams and broaden their customer base, but it also exposes them to heightened regulatory scrutiny and the need for more robust governance structures.

“The DPDP Act is important because it protects personal data and builds trust. Without compliance, organisations face penalties, data breaches, customer loss, and reputational damage. Following the law improves credibility, strengthens security, and ensures responsible data handling for sustained business growth,” said Neha Abbad, co-founder, CyberSigma Consulting.




Former Cybersecurity Employees Involved in Ransomware Extortion Incidents Worth Millions


It is an unfortunate and shameful moment for the cybersecurity industry when cybersecurity professionals themselves betray trust and launch cyberattacks in their own country. In a shocking case, two men have admitted to working normal day jobs as cybersecurity professionals while moonlighting as cyber attackers.

About accused

An ex-employee of the Israeli cybersecurity company Sygnia has pleaded guilty to federal crimes in the US for his involvement in ransomware attacks aimed at extorting millions of dollars from US firms. 

The culprit, Ryan Clifford Goldberg, worked as a cyber incident response supervisor at Sygnia and admitted that he was involved in a year-long plan of attacking businesses around the US. 

Kevin Tyler Martin, a former DigitalMint employee who worked as a ransom negotiation intermediary with threat actors (a role meant to help ransomware victims), has also admitted involvement. 

The situation is particularly disturbing because both men held positions of trust inside the sector established to fight against such threats.

Accused pled guilty to extortion charges 

Both of the accused have pleaded guilty to one count of conspiracy to interfere with commerce by extortion, according to federal court records. In their plea statements, they admitted that, along with a third actor (uncharged and unidentified), they carried out business compromises and ransom extortions over many years. 

Extortion worth millions 

In one incident, the actors successfully extorted over $1 million in cryptocurrency from a Florida-based medical equipment firm. According to federal court records, alongside their legitimate work they deployed the ALPHV/BlackCat ransomware to exfiltrate and encrypt targets' data, and shared the extortion proceeds with the ransomware's developers. 

According to DigitalMint, two of the people who were charged were ex-employees. After the incident, both were fired and “acted wholly outside the scope of their employment and without any authorization, knowledge or involvement from the company,” DigitalMint said in an email shared with Bloomberg.

In a recent conversation with Bloomberg, Sygnia said it was not a target of the investigation and that the accused Goldberg was relieved of his duties as soon as the news became known.

A representative for Sygnia declined to speak further, and Goldberg and Martin's lawyers also declined to comment on the report.

2FA Fail: Hackers Exploit Microsoft 365 to Launch Code Phishing Attacks


Two-factor authentication (2FA) has long been one of the most secure ways to protect online accounts, requiring a secondary code in addition to a password. In recent times, however, 2FA has become less reliable, as hackers have found ways to get around it. 

Experts now advise using passkeys instead of 2FA where possible, as they are more secure and less prone to hacking attempts. Recent reports show how 2FA as a security method can be undermined. 

Russian-linked, state-sponsored threat actors are now abusing Microsoft 365. Experts from Proofpoint have noticed a surge in Microsoft 365 account takeover attacks in which threat actors use authentication code phishing to compromise Microsoft's device authorization flow.

They are also launching advanced phishing campaigns that escape 2FA and hack sensitive accounts. 

About the attack

The recent series of cyberattacks uses device code phishing, where hackers lure victims into entering their authentication codes on fake websites that look real. Once the code is entered, hackers gain entry to the victim's Microsoft 365 account, bypassing the protection of 2FA. 

The campaigns started in early 2025. In the beginning, hackers relied primarily on code phishing; by March, they had expanded their tactics to exploit OAuth authentication workflows, which are widely used for signing into apps and services. The development shows how quickly threat actors adapt when security experts uncover their tricks.

Who is the victim? 

The attacks are particularly targeted against high-value sectors, including:

  • Universities and research institutes
  • Defense contractors
  • Energy providers
  • Government agencies
  • Telecommunication companies

By targeting these sectors, hackers increase the impact of their attacks for purposes such as disruption, espionage, and financial motives. 

The impact 

The surge in 2FA code attacks exposes a major gap: no security measure is foolproof. While 2FA is still far stronger than relying on passwords alone, it can be undermined if users are deceived into handing over their codes. This is not a failure of the technology itself, but of human trust and awareness.  

A single compromised account can expose sensitive emails, documents, and internal systems. Users are at risk of losing their personal data, financial information, and even identity in these cases.

How to Stay Safe

  • Verify URLs carefully. Never enter authentication codes on unfamiliar or suspicious websites.
  • Use phishing-resistant authentication. Hardware security keys (like YubiKeys) or biometric logins are harder to trick.
  • Enable conditional access policies. Organizations can restrict logins based on location, device, or risk level.
  • Monitor OAuth activity. Be cautious of unexpected consent requests from apps or services.
  • Educate users. Awareness training is often the most effective defense against social engineering.
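The conditional access idea above can be sketched as a simple policy function. Everything here (the country list, device registry, and risk threshold) is a made-up illustration of the concept, not Microsoft's actual Entra ID API:

```python
ALLOWED_COUNTRIES = {"US", "GB"}                    # expected sign-in locations
MANAGED_DEVICES = {"laptop-4431", "desktop-0097"}   # company-enrolled devices

def allow_sign_in(country: str, device_id: str, risk_score: float) -> bool:
    """Toy conditional access: require a known country, a managed device, and low risk."""
    return (country in ALLOWED_COUNTRIES
            and device_id in MANAGED_DEVICES
            and risk_score < 0.5)

print(allow_sign_in("US", "laptop-4431", 0.1))  # normal sign-in allowed
print(allow_sign_in("RU", "laptop-4431", 0.1))  # unexpected location blocked
```

Even if a phished code hands an attacker a valid token, a policy like this can block the session because the attacker's location and device don't match.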


Antivirus vs Identity Protection Software: What to Choose and How?


Users often lump digital security into a single category and confuse identity protection with antivirus, assuming both work the same way. They do not. Before you buy either, it is important to understand the difference between the two. This blog covers the difference between identity theft protection and device security.

Cybersecurity threats: Past vs present 

Traditionally, a common computer virus could crash a machine and infect a few files; that was it. But today, the cybersecurity landscape has shifted from overloading and compromising computers to stealing personal data. 

A computer virus is malware that self-replicates, travelling between devices. It corrupts data and software, and can also steal personal data. 

With time, hackers have learned that users are easier targets than computers. These days, malware and social engineering attacks pose a greater threat than classic viruses. A well-planned phishing email or a fake login page benefits hackers more than a traditional virus. 

Due to the surge in data breaches, hackers have it easy. Your data (phone number, financial details, passwords) is swimming in databases, sold like bulk goods on the dark web. 

AI has made things worse and easier to exploit. Hackers can now create believable messages and even impersonate your voice. These schemes don't require creativity; they only need to be convincing enough to bait a victim into clicking or replying. 

Where antivirus fails

Your personal data never stays only on your computer; it is collected and sold by data brokers and advertisers, or passed to third parties who profit from it. When threat actors get their hands on this data, they can use it to impersonate you. 

In this case, antivirus is of no help. It cannot notice breaches happening at organizations you don't control, or someone impersonating you. Antivirus protects your system from malware; there is a limit to what it can do. It can protect the machine, but not the user behind it. 

Role of identity theft protection 

Identity protection doesn't concern itself with your system's health. It looks out for the information that follows you everywhere: your SSN, email addresses, phone number, and accounts linked to your finances. If something suspicious turns up, it informs you. Identity protection works mainly on the monitoring side: it may watch your credit reports for early warning signs of theft, such as a new account, a hard enquiry, or a falling credit score. It also checks whether your data has turned up on the dark web or in any recent leaks. 

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




Online Retail Store Coupang Suffers South Korea's Worst Data Breach, Leak Linked to Former Employee


Data of 33.7 million customers leaked

A data breach is an unfortunate incident that businesses often suffer, and failing to address one is even worse, as it costs businesses reputational and privacy damage. 

A breach at Coupang that leaked the data of 33.7 million customers has been linked to a former employee who kept access to internal systems after leaving the organization. 

About the incident 

The Seoul Metropolitan Police Agency shared the news with media outlets after an inquiry that recently involved a raid on Coupang's offices. The firm is South Korea's biggest online retailer, employing 95,000 people and generating annual revenue of more than $30 billion. 

Earlier in December, Coupang reported that it had been hit by a data breach that leaked the personal data of 33.7 million customers, including email addresses, names, order information, and home addresses.

The incident happened in June 2025, but the firm only discovered it in November and immediately launched an internal investigation. 

The measures

In early December, Coupang posted an update on the breach, assuring customers that the leaked data had not been exposed anywhere online. 

Despite this, and despite Coupang's full cooperation with the authorities, officials raided several of the firm's offices on Tuesday to gather evidence for a detailed inquiry.

Recently, Coupang's CEO Park Dae-jun resigned and apologised to the public for failing to stop what is now South Korea's worst cybersecurity breach in history. 

Police investigation 

On the second day of the police investigation at Coupang's offices, officials identified the main suspect as a 43-year-old Chinese national and former employee of the retail giant. According to local outlet JoongAng, the man joined the firm in November 2022 and oversaw its authentication management system before leaving in 2024. He is suspected to have already left South Korea. 

What next?

According to the police, although Coupang is considered the victim, the business and staff in charge of safeguarding client information may be held accountable if carelessness or other legal infractions are discovered. 

Since the beginning of the month, the authorities have received hundreds of reports of Coupang impersonation. The incident, which affected data belonging to almost two-thirds of the country's population, has triggered a large wave of phishing activity.

FTC Refuses to Lift Ban on Stalkerware Company that Exposed Sensitive Data


The US Federal Trade Commission (FTC) has refused to lift its ban on a stalkerware maker whose data breach leaked information on its customers and the people they were spying on. Consumer spyware company Support King still cannot sell surveillance software, the agency said. 

The FTC denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.

The FTC announced the move in a recent press release, after Zuckerman petitioned the agency in July 2025 to cancel the ban order. 

In 2021, the FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business,” and stopped him from running other stalkerware businesses. Zuckerman also had to delete all the data stored by SpyFone and undergo various audits to implement cybersecurity measures across his ventures. Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security."

In his petition, Zuckerman said the FTC mandate has made it difficult for him to conduct other businesses due to monetary losses, even though Support King is out of business and he now only operates a restaurant while planning other ventures.

The ban stemmed from a 2018 incident in which a researcher discovered an exposed SpyFone Amazon S3 bucket that left sensitive data, such as selfies, chats, texts, contacts, passwords, logins, and audio recordings, openly accessible online. The leaked data comprised 44,109 email addresses.

Levine also called SpyFone “a brazen brand name for a surveillance business that helped stalkers steal private information.”

According to TechCrunch, after the 2021 order Zuckerman went on to run another stalkerware operation. In 2022, TechCrunch obtained breached data from the stalkerware application SpyTrac. 

According to the data, SpyTrac was run by freelance developers with direct links to Support King, in an apparent attempt to evade the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to delete, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application. 

5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot can still respond if you ask, "Hey, what's that sharp pain in my side?", instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction. They generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind.
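To illustrate the difference, a minimal password generator built on a cryptographically secure random source looks like this in Python — a sketch of the underlying idea, not a replacement for a vetted password manager:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a CSPRNG, not a predictable text model."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())    # different on every run
print(generate_password(32))  # longer is stronger
```

Unlike a chatbot, `secrets` draws from the operating system's entropy source, so two users (or two calls) will not receive the same output.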

  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is considering a telecom industry proposal that would compel smartphone makers to enable satellite-based location tracking, active around the clock, for surveillance purposes.

Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates flared in India after the government was forced to repeal an order mandating that smartphone companies pre-install a state-run cyber safety application on all devices, with activists and opposition lawmakers raising concerns about possible spying.

About the proposal 

The government has been concerned that agencies don't get accurate locations when legal requests are sent to telecom companies during investigations. Currently, operators rely only on cellular tower data, which provides an estimated area and can be inaccurate.

The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandates smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.

Strong opposition from tech giants 

If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. No other country in the world mandates this kind of user location tracking, according to lobbying group India Cellular & Electronics Association (ICEA), which represents Google and Apple.

Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no reply. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."

According to technology experts, utilizing A-GPS technology, which is normally only activated when specific apps are operating or emergency calls are being made, might give authorities location data accurate enough to follow a person to within a meter.  

Telecom vs government 

Globally, governments are constantly looking for new ways to track the movements and data of mobile users. All Russian mobile phones, for instance, are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base will include members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize their security because they store sensitive data.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



700+ Self-hosted Gits Impacted in a Wild Zero-day Exploit


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project has not yet released a fix.

About the attack 

Over 700 instances have been compromised in these attacks. Wiz researchers described the discovery as "accidental": the find came in July while they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is tracked as CVE-2025-8110. It is essentially a bypass of an earlier patched flaw (CVE-2024-55947) that let authenticated users overwrite files outside the repository, leading to remote code execution (RCE).

About Gogs

Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties.

Git and Gogs allow symbolic links that work as shortcuts to another file. They can also point to objects outside the repository. The Gogs API also allows file configuration outside the regular Git protocol. 
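To see why symbolic links are risky for a Git host, here is a hedged Python sketch — not Gogs' actual logic, which is written in Go — that checks whether a link resolves to a target outside a repository root:

```python
import os
import tempfile

def symlink_escapes(repo_root: str, link_path: str) -> bool:
    """Return True if the link's real target lies outside repo_root."""
    target = os.path.realpath(link_path)
    root = os.path.realpath(repo_root)
    return os.path.commonpath([root, target]) != root

# Demo: a symlink pointing outside the repository should be flagged.
repo = tempfile.mkdtemp()
outside = tempfile.mkdtemp()
bad_link = os.path.join(repo, "config_link")
os.symlink(os.path.join(outside, "app.ini"), bad_link)
print(symlink_escapes(repo, bad_link))  # target escapes the repo
```

A server that writes through such a link without this kind of check can be tricked into overwriting files — such as its own configuration — anywhere the service account can reach.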

Patch update 

The previous patch didn't fully address the symbolic-link exploit, letting threat actors leverage the flaw to deploy malicious code remotely.

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open registration (if not needed) and to limit internet exposure by shielding self-hosted Git services behind a VPN. Users should also watch for new repositories with unexpected use of the PutContents API or random 8-character names.

For more details, readers can see the full list of indicators published by the researchers.



Researchers Find Massive Increase in Hypervisor Ransomware Incidents


Rise in hypervisor ransomware incidents 

Cybersecurity experts from Huntress have noticed a sharp rise in ransomware incidents on hypervisors and have asked users to be safe and have proper back-up. 

Huntress case data discloses a surprising increase in hypervisor ransomware: the share of malicious encryption incidents involving hypervisors rose from a mere three percent to a staggering 25 percent in 2025.

Akira gang responsible 

Experts believe the Akira ransomware gang is the primary threat actor behind this, though other players are also going after hypervisors to escape endpoint and network security controls. According to Huntress threat hunters, attackers target hypervisors because they are poorly secured, and compromising them gives hackers control over virtual machines and the networks they manage.

Why hypervisors?

“This shift underscores a growing and uncomfortable trend: Attackers are targeting the infrastructure that controls all hosts, and with access to the hypervisor, adversaries dramatically amplify the impact of their intrusion," experts said. The attack tactic follows classic playbook. Researchers have "seen it with attacks on VPN appliances: Threat actors realize that the host operating system is often proprietary or restricted, meaning defenders cannot install critical security controls like EDR [Endpoint Detection and Response]. This creates a significant blind spot.”

Other instances 

The experts have also found various cases where ransomware actors install ransomware payloads directly via hypervisors, escaping endpoint security. In a few cases, threat actors used built-in tools like OpenSSL to encrypt virtual machine volumes without having to upload custom ransomware binaries.

Attack tactic 

Huntress researchers have also found attackers first compromising a network to steal login credentials and then pivoting to the hypervisors.

“We’ve seen misuse of Hyper-V management utilities to modify VM settings and undermine security features,” they added. “This includes disabling endpoint defenses, tampering with virtual switches, and preparing VMs for ransomware deployment at scale.”

Mitigation strategies 

Given the high level of attacks on hypervisors, experts suggest admins revisit infosec basics such as multi-factor authentication, strong passwords, and timely patching. Admins should also adopt hypervisor-specific safety measures, such as ensuring that only allow-listed binaries can run on a host.

For decades, the infosec community has known hypervisors to be an attractive target. Things can go further south in the worst-case scenario of a successful VM escape, where an attack on a guest virtual machine allows hijacking of the host and its hypervisor. If that were to happen, the impact could be massive, as entire hyperscale clouds depend on hypervisors to isolate tenants' virtual systems.

End to End-to-end Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done the same. Say goodbye to end-to-end encryption: with this new Android update, your RCS and SMS texts are no longer private.

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

Only for organizational devices 

This applies only to work-managed devices and doesn't impact personal devices. In regulated industries, it simply adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: employees sometimes share details of their non-work lives over text. End-to-end encryption used to keep those conversations safe, but that will no longer be the case.

The end-to-end question 

There is a lot of misunderstanding around end-to-end encryption. It protects messages while they are in transit, but once they arrive on your device, they are decrypted and no longer protected.

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

What will change?

With this update, getting a phone at work is no longer as good a deal as it seems. Employees have always been wary of over-sharing on email, which is easy to monitor. But texts felt safe.

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT systems such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.

“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."

Beer Firm Asahi Not Entertaining Threat Actors After Cyberattack


Asahi denies ransom payment 

Japanese beer giant Asahi said that it didn't receive any particular ransom demand from threat actors responsible for an advanced and sophisticated cyberattack that could have exposed the data of more than two million people. 

About the attack

CEO Atsushi Katsuki said at a press conference that the company had not been in touch with the threat actors, though Asahi has delayed the release of its financial results. Even if the company had received a ransom demand, it would not have paid, Katsuki said. Asahi Super Dry is one of Japan's most popular beers. Asahi suffered a cyberattack on September 29 and clarified on October 3 that it had been hit by ransomware.

Attack tactic 

In such incidents, threat actors typically use malicious software to encrypt the target's systems and then demand a ransom in exchange for the decryption keys needed to run the systems again.

Asahi said threat actors could have accessed or stolen identity data, such as the phone numbers and names of around two million people: employees, customers, and their families.

Qilin gang believed to be responsible 

The firm didn't disclose details of the attacker at the conference. Later, it told AFP via email that experts pointed to a high likelihood of an attack by the hacking group Qilin. The gang issued a statement that Japanese media interpreted as a claim of responsibility.

Commenting on the situation, Katsuki said the firm thought it had taken the necessary measures to prevent such an incident. "But this attack was beyond our imagination. It was a sophisticated and cunning attack," he said.

Impact on Asahi business 

Asahi delayed the release of its third-quarter earnings and recently said that the annual financial results had also been delayed. "These and further information on the impact of the hack on overall corporate performance will be disclosed as soon as possible once the systems have been restored and the relevant data confirmed," the firm said.

The product supply hasn't been affected. Shipments will resume in stages while systems recover. "We apologise for the continued inconvenience and appreciate your understanding," Asahi said.

Critical Vulnerabilities Found in React Server Components and Next.js


Flaw exploited in the wild

The US Cybersecurity and Infrastructure Security Agency (CISA) added a critical security flaw affecting React Server Components (RSC) to its Known Exploited Vulnerabilities (KEV) catalog after exploitation in the wild.

The flaw, CVE-2025-55182 (CVSS score: 10.0), dubbed React2Shell, is a remote code execution (RCE) vulnerability that an unauthenticated threat actor can trigger without any prior setup.

Remote code execution 

According to the CISA advisory, "Meta React Server Components contains a remote code execution vulnerability that could allow unauthenticated remote code execution by exploiting a flaw in how React decodes payloads sent to React Server Function endpoints."

The problem stems from unsafe deserialization in the library's Flight protocol, which React uses to communicate between client and server. As a result, an unauthenticated remote attacker can execute arbitrary commands on the server by sending specially crafted HTTP requests. Unsafe deserialization, the conversion of untrusted text into live objects, is considered a dangerous class of software vulnerability.
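React's Flight protocol specifics aside, the danger of deserializing untrusted input can be sketched in Python, where `pickle` exhibits the same class of bug. This is an illustrative analogy, not the React code path:

```python
import json
import pickle

# Safe: JSON deserialization only ever yields plain data types
# (dicts, lists, strings, numbers) - no code can run.
data = json.loads('{"user": "alice", "role": "admin"}')

# Unsafe: pickle can run code *during* deserialization via __reduce__.
class Payload:
    def __reduce__(self):
        # An attacker would put os.system(...) here instead of print.
        return (print, ("code executed while deserializing!",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the callable fires before any object is returned
```

The lesson generalizes: any deserializer that can reconstruct arbitrary object references from attacker-controlled bytes is a potential RCE primitive.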

About the flaw

 "The React2Shell vulnerability resides in the react-server package, specifically in how it parses object references during deserialization," said Martin Zugec, technical solutions director at Bitdefender.

The incident surfaced when Amazon said it found attack attempts from infrastructure related to Chinese hacking groups such as Jackpot Panda and Earth Lamia. "Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," AWS said.

Attack tactic 

Some attacks deployed cryptocurrency miners and ran "cheap math" PowerShell commands to confirm successful exploitation, then dropped in-memory downloaders capable of fetching additional payloads from a remote server.

According to Censys, an attack surface management platform, 2.15 million cases of internet-facing services may be affected by this flaw. This includes leaked web services via React Server Components and leaked cases of frameworks like RedwoodSDK, React Router, Waku, and Next.js.



Banking Malware Can Hack Communications via Encrypted Apps


Sturnus hacks communication 

A new Android banking malware dubbed Sturnus can capture conversations in their entirety from encrypted messaging apps like Signal, WhatsApp, and Telegram, as well as take complete control of the device.

While still under growth, the virus is fully functional and has been programmed to target accounts at various financial institutions across Europe by employing "region-specific overlay templates."  

Attack tactic 

Sturnus uses a combination of plaintext, RSA, and AES-encrypted communication with the command-and-control (C2) server, making it a more sophisticated threat than existing Android malware families.

Sturnus can steal messages from secure messaging apps after the decryption step by capturing content from the device screen, according to research from online fraud prevention and threat intelligence firm ThreatFabric. The malware can also collect banking account details using HTML overlays and supports complete, real-time access through a VNC session.

Malware distribution 

The researchers haven't determined how the malware is distributed, but they assume malvertising or direct messages are plausible approaches. Upon deployment, the malware connects to the C2 server to register the infected device via a cryptographic handshake.

For instructions and data exfiltration, it creates an encrypted HTTPS connection; for real-time VNC operations and live monitoring, it creates an AES-encrypted WebSocket channel. Sturnus can begin reading text on the screen, record the victim's inputs, view the UI structure, identify program launches, press buttons, scroll, inject text, and traverse the phone by abusing the Accessibility services on the device.

To gain full control of the device, Sturnus obtains Android Device Administrator privileges, which let it keep tabs on password changes and unlock attempts, and lock the device remotely. The malware also tries to stop the user from revoking its privileges or deleting it from the device. When the user opens WhatsApp, Telegram, or Signal, Sturnus uses its permissions to capture message content, typed text, contact names, and conversation threads.

65% of Top AI Companies Leak Secrets on GitHub

 

Leading AI companies continue to face significant cybersecurity challenges, particularly in protecting sensitive information, as highlighted in recent research from Wiz. The study focused on the Forbes top 50 AI firms, revealing that 65% of them were found to be leaking verified secrets—such as API keys, tokens, and credentials—on public GitHub repositories. 

These leaks often occurred in places not easily accessible to standard security scanners, including deleted forks, developer repositories, and GitHub gists, indicating a deeper and more persistent problem than surface-level exposure. Wiz's approach to uncovering these leaks involved a framework called "Depth, Perimeter, and Coverage." Depth allowed researchers to look beyond just the main repositories, reaching into less visible parts of the codebase. 

Perimeter expanded the search to contributors and organization members, recognizing that individuals could inadvertently upload company-related secrets to their own public spaces. Coverage ensured that new types of secrets, such as those used by AI-specific platforms like Tavily, Langchain, Cohere, and Pinecone, were included in the scan, which many traditional tools overlook.

The findings show that despite being leaders in cutting-edge technology, these AI companies have not adequately addressed basic security hygiene. The researchers disclosed the discovered leaks to the affected organisations, but nearly half of these notifications either failed to reach the intended recipients, were ignored, or received no actionable response, underscoring the lack of dedicated channels for vulnerability disclosure.

Security Tips 

Wiz recommends several essential security measures for all organisations, regardless of size. First, deploying robust secret scanning should be a mandatory practice to proactively identify and remove sensitive information from codebases. Second, companies should prioritise the detection of their own unique secret formats, especially if they are new or specific to their operations. Engaging vendors and the open source community to support the detection of these formats is also advised.
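As a rough illustration of what secret scanning does, a toy scanner can match a handful of well-known token formats. Real tools of the kind Wiz describes ship hundreds of rules and also scan git history; the patterns below are simplified assumptions:

```python
import re

# A few illustrative detection rules; production scanners have many more.
PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Generic secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"
    ),
}

def scan(text: str):
    """Return (rule_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

sample = (
    'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
    'api_key = "sk-redacted-example-value"\n'
)
for rule, match in scan(sample):
    print(rule, "->", match)
```

Note the coverage point from the research: a scanner like this only finds formats it has rules for, which is exactly why newer AI-platform key formats slip past traditional tools.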

Finally, establishing a clear and accessible disclosure protocol is crucial. Having a dedicated channel for reporting vulnerabilities and leaks enables faster remediation and better coordination between researchers and organisations, minimising potential damage from exposure. The research serves as a stark reminder that even the most advanced companies must not overlook fundamental cybersecurity practices to safeguard sensitive data and maintain trust in the rapidly evolving AI landscape.