
Google Appears to Be Preparing Gemini Integration for Chrome on Android

 

Google appears to be testing a new feature that could significantly change how users browse the web on mobile devices. The company is reportedly experimenting with integrating its AI model, Gemini, directly into Chrome for Android, enabling advanced agentic browsing capabilities within the mobile browser.

The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.

Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.

While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.

On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.

The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.

At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.

AI Agent Integration Can Become a Problem in Workplace Operations


Not long ago, AI agents were considered harmless. They did what they were supposed to do: write snippets of code, answer questions, and help users get things done faster. 

Then businesses started expecting more.

Gradually, companies moved from personal copilots to organizational agents: agents integrated into customer support, HR, IT, engineering, and operations. These agents didn't just make suggestions; they started acting, touching real systems, changing configurations, and moving real data:

  • A support agent that pulls customer data from the CRM, triggers backend fixes, updates tickets, and checks billing.
  • An HR agent that oversees access across VPNs, IAM, and SaaS apps.
  • A change-management agent that processes requests, logs actions in ServiceNow, and updates production configurations and Confluence.

These AI agents automate oversight and control, and have become a core part of companies’ operational infrastructure.

How AI agents work

Organizational agents are built to work across many resources, supporting multiple roles, users, and workflows through a single deployment. Instead of being tied to an individual user, these agents operate as shared resources that serve requests and automate work across systems for many users. 

To work effectively, these agents depend on shared accounts, OAuth grants, and API keys to authenticate with the systems they interact with. These credentials are long-lived and managed centrally, enabling the agent to operate continuously. 

The threat of AI agents in the workplace 

While this approach maximizes convenience and coverage, these design choices can unintentionally create powerful access intermediaries that bypass traditional permission boundaries. Actions can appear legitimate and harmless even when an agent inadvertently grants access beyond the requesting user's authority. 

Reliable detection and attribution break down when execution is attributed to the agent's identity and the user context is lost. Conventional security controls are poorly suited to agent-mediated workflows because they assume human users with direct system access. IAM systems enforce permissions according to the user's identity, but when an AI agent performs an action, authorization is evaluated against the agent's identity rather than the requester's.
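To make the attribution gap concrete, the sketch below shows the kind of "on behalf of" plumbing that preserves the requesting user's identity in an agent's downstream calls. It is a minimal illustration only: the endpoint, header name, and helper function are hypothetical, not any specific product's API.

    import logging

    import requests

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-audit")

    AGENT_TOKEN = "service-account-token"        # shared, long-lived agent credential
    TICKET_API = "https://itsm.example.com/api"  # hypothetical backend endpoint


    def agent_call(path: str, payload: dict, on_behalf_of: str) -> requests.Response:
        """Perform an agent action while propagating the originating user."""
        headers = {
            "Authorization": f"Bearer {AGENT_TOKEN}",
            # Custom header the backend can use for per-user authorization and
            # audit trails (the header name is illustrative, not a standard).
            "X-On-Behalf-Of": on_behalf_of,
        }
        # Record who asked for what *before* the action executes, so the audit
        # trail is never reduced to "the agent did it".
        log.info("action=%s requester=%s payload=%s", path, on_behalf_of, payload)
        return requests.post(f"{TICKET_API}/{path}", json=payload, headers=headers)


    if __name__ == "__main__":
        # The backend should re-check that alice@example.com may close this
        # ticket, rather than trusting the agent's shared credential blindly.
        agent_call("tickets/123/close", {"resolution": "fixed"}, "alice@example.com")

The point is that downstream authorization and logging can still see the human requester, which is exactly what the shared-credential designs described above lose.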

The impact 

Therefore, user-level limitations are no longer in effect. By assigning behavior to the agent's identity and concealing who started the action and why, logging and audit trails exacerbate the issue. Security teams are unable to enforce least privilege, identify misuse, or accurately assign intent when using agents, which makes it possible for permission bypasses to happen without setting off conventional safeguards. Additionally, the absence of attribution slows incident response, complicates investigations, and makes it challenging to ascertain the scope or aim of a security occurrence.

n8n Supply Chain Attack Exploits Community Nodes Posing as Google Ads Integration to Steal Tokens


Hackers were found uploading a set of eight packages to the npm registry that posed as integrations for the n8n workflow automation platform in order to steal developers’ OAuth credentials. 

About the exploit 

One package, called “n8n-nodes-hfgjf-irtuinvcm-lasdqewriit”, imitates the Google Ads integration and asks users to connect their ad account through a fake form, sending the captured OAuth credentials to servers under the threat actors’ control. 

Endor Labs released a report on the incident. "The attack represents a new escalation in supply chain threats,” it said, adding that “unlike traditional npm malware, which often targets developer credentials, this campaign exploited workflow automation platforms that act as centralized credential vaults – holding OAuth tokens, API keys, and sensitive credentials for dozens of integrated services like Google Ads, Stripe, and Salesforce in a single location."

Attack tactic 

Experts are not sure whether all the packages share the same malicious functionality. ReversingLabs' Spectra Assure analysed a few of the packages and found no security issues in most of them, but in one package, “n8n-nodes-zl-vietts,” it found a malicious component with a known malware history. 

The campaign may still be running, as an updated version of the package “n8n-nodes-gg-udhasudsh-hgjkhg-official” was posted to npm recently.

Once installed as a community node, the malicious package behaves like a typical n8n integration, complete with configuration screens. When a workflow runs, it executes code that decrypts stored tokens using n8n’s master key and sends the stolen data to a remote server. 

This is the first time a supply chain attack has specifically targeted the n8n ecosystem, with hackers exploiting the trust placed in community integrations. 

New risks in ad integration 

The findings highlight the security gaps that come with integrating untrusted workflow components, which expand the attack surface. Developers are advised to audit packages before installing them, scrutinize package metadata for anomalies, and stick to official n8n integrations.

According to researchers Kiran Raj and Henrik Plate, "Community nodes run with the same level of access as n8n itself. They can read environment variables, access the file system, make outbound network requests, and, most critically, receive decrypted API keys and OAuth tokens during workflow execution.”
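Given that level of access, auditing a community node before installing it is worth a few minutes. The sketch below pulls a package's public metadata from the npm registry and applies a few heuristic red-flag checks; the package name and the heuristics are illustrative assumptions, not a complete vetting process.

    import json
    import re
    import urllib.request

    NPM_REGISTRY = "https://registry.npmjs.org"


    def fetch_metadata(package: str) -> dict:
        """Fetch a package's public metadata from the npm registry."""
        with urllib.request.urlopen(f"{NPM_REGISTRY}/{package}") as resp:
            return json.load(resp)


    def red_flags(meta: dict) -> list[str]:
        """Return a list of simple, heuristic warning signs."""
        flags = []
        if "repository" not in meta:
            flags.append("no linked source repository")
        if len(meta.get("versions", {})) <= 2:
            flags.append("very few published versions")
        # Random-looking name segments, like the ones in this campaign.
        if re.search(r"[bcdfghjklmnpqrstvwxz]{5,}", meta.get("name", "")):
            flags.append("name contains random-looking consonant runs")
        return flags


    if __name__ == "__main__":
        # Hypothetical community node name; replace with the package
        # you are about to install.
        meta = fetch_metadata("n8n-nodes-example")
        for flag in red_flags(meta):
            print("WARNING:", flag)

None of these checks is conclusive on its own, but packages like the ones in this campaign would trip several of them at once.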

Trust Wallet Browser Extension Hacked, $7 Million Stolen


Users of the Binance-owned Trust Wallet lost more than $7 million after the release of an updated Chrome extension. Company co-founder Changpeng Zhao said that the company will cover the stolen funds of all affected users. Crypto investigator ZachXBT believes hundreds of Trust Wallet users suffered losses due to the extension flaw. 

Trust Wallet said in a post on X, “We’ve identified a security incident affecting Trust Wallet Browser Extension version 2.68 only. Users with Browser Extension 2.68 should disable and upgrade to 2.69.”

CZ has assured that the company is investigating how threat actors were able to compromise the new version. 

Affected users

“Mobile-only users and other Browser Extension versions are not impacted. User funds are SAFE,” Zhao wrote in a post on X.

The compromise happened because of a flaw in a version of the Trust Wallet Google Chrome browser extension. 

What to do if you are a victim?

If you were affected by the compromise of Browser Extension v2.68, follow these steps shared on Trust Wallet's X account:

  • To safeguard your wallet and prevent any problems, do not open the Trust Wallet Browser Extension v2.68 on your desktop computer. 
  • Open the Chrome Extensions panel by copying this URL into your Chrome address bar: chrome://extensions/?id=egjidjbpglichdcondbcbdnbeeppgdph
  • Beneath Trust Wallet, if the toggle is still "On," switch it to "Off." 
  • Enable "Developer mode" using the toggle in the top right corner. 
  • Click the "Update" button in the upper left corner. 
  • Verify that the version number is 2.69. This is the most recent and safe version. 

Do not open the Browser Extension until you have updated to version 2.69. This helps safeguard your wallet and avoids potential problems.

How did the public react?

Social media users expressed their views. One said, “The problem has been going on for several hours,” while another user complained that the company “must explain what happened and compensate all users affected. Otherwise reputation is tarnished.” A user also asked, “How did the vulnerability in version 2.68 get past testing, and what changes are being made to prevent similar issues?”

Salesforce Pulls Back from AI LLMs Citing Reliability Issues


Salesforce, the well-known enterprise software company, is pulling back from its heavy dependence on large language models (LLMs) after reliability issues that frustrated executives. Trust in LLMs inside the company has declined over the past year, according to The Information. 

Parulekar, senior VP of product marketing, said, “All of us were more confident about large language models a year ago.” As a result, the company has shifted away from generative AI toward more “deterministic” automation in its flagship product, Agentforce.

In its official statement, the company said, “While LLMs are amazing, they can’t run your business by themselves. Companies need to connect AI to accurate data, business logic, and governance to turn the raw intelligence that LLMs provide into trusted, predictable outcomes.”

Salesforce has cut its support staff from 9,000 to 5,000 employees as it deployed AI agents. The company emphasizes that Agentforce can help “eliminate the inherent randomness of large models.” 

Failing models, missing surveys

Salesforce experienced various technical issues with LLMs during real-world applications. According to CTO Muralidhar Krishnaprasad, when given more than eight prompts, the LLMs started missing commands. This was a serious flaw for precision-dependent tasks. 

Home security company Vivint used Agentforce for handling its customer support for 2.5 million customers and faced reliability issues. Even after giving clear instructions to send satisfaction surveys after each customer conversation, Agentforce sometimes failed to send surveys for unknown reasons. 

Another challenge was AI drift, according to executive Phil Mui. This happens when users ask irrelevant questions, causing AI agents to lose focus on their main goals. 

AI expectations vs reality hit Salesforce 

The withdrawal from LLMs is an ironic twist for CEO Marc Benioff, who often advocates for AI transformation. In a conversation with Business Insider, Benioff said that when drafting the company's annual strategy document he prioritized data foundations over AI models because of “hallucination” issues. He has also suggested rebranding the company as Agentforce. 

Although Agentforce is expected to earn over $500 million in sales annually, the company's stock has dropped about 34% from its peak in December 2024. Thousands of businesses that presently rely on this technology may be impacted by Salesforce's partial pullback from large models as the company attempts to bridge the gap between AI innovation and useful business application.

Okta Report: Pirates of Payrolls Attacks Plague Corporate Industry


IT help desks should be ready for an evolving threat that sounds like a Hollywood movie title. In December 2025, Okta Threat Intelligence published a report explaining how hackers gain unauthorized access to payroll software. These threats are known as payroll pirate attacks. 

Pirates of the payroll

These attacks start with threat actors calling an organization’s help desk, pretending to be a user and requesting a password reset. 

“Typically, what the adversary will do is then come back to the help desk, probably to someone else on the phone, and say, ‘Well, I have my password, but I need my MFA factor reset,’” according to VP of Okta Threat Intelligence Brett Winterford. “And then they enroll their own MFA factor, and from there, gain access to those payroll applications for the purposes of committing fraud.”

Attack tactic 

The threat actors are working at a massive scale, leveraging various services and devices to assist their malicious activities. According to the Okta report, the cyber thieves employed social engineering, calling help desk personnel on the phone and attempting to trick them into resetting the password for a user account. These attacks have impacted multiple industries.

“They’re certainly some kind of cybercrime organization or fraud organization that is doing this at scale,” Winterford said. Okta believes the hacking gang is based in West Africa. 

In the US, payroll pirates have recently plagued the education sector. The latest Okta research notes that these schemes are now spreading to other industries, such as retail and manufacturing. “It’s not often you’ll see a huge number of targets in two distinct industries. I can’t tell you why, but education [and] manufacturing were massively targeted,” Winterford said. 

How to mitigate pirates of payroll attacks?

Okta advises companies to establish a standard process for verifying the identity of users who contact the help desk for assistance. Winterford advised that businesses depending on outsourced IT support should limit their help desks’ ability to reset user passwords without robust checks. “In some organizations, they’re relying on nothing but passwords to get access to payroll systems, which is madness,” he said.



Google Launches Emergency Location Services in India for Android Devices


Google starts emergency location service in India

Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. This means that users in an emergency can call or text emergency service providers such as the police, firefighters, and healthcare professionals, and ELS will immediately share the user's accurate location with them. 

Uttar Pradesh (UP) has become the first Indian state to operationalise ELS for Android devices. ELS is available on devices running Android 6 or newer; however, state authorities must integrate it with their emergency services to activate it. 

More about ELS

According to Google, the ELS function on Android handsets has been activated in India. The built-in emergency service will enable Android users to communicate their location by call or SMS in order to receive assistance from emergency service providers, such as firefighters, police, and medical personnel. 

ELS on Android collects information from the device's GPS, Wi-Fi, and cellular networks in order to pinpoint the user's exact location, with an accuracy of up to 50 meters.

Implementation details

However, local wireless and emergency infrastructure operators must enable support for the ELS capability. The first state in India to "fully" operationalize the service for Android devices is Uttar Pradesh. 

The state police, in partnership with Pert Telecom Solutions, have integrated ELS assistance with the emergency number 112. It is a free service that accesses a user's position only when an Android phone dials 112. 

Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality. 

The company says ELS works even if a call is dropped within seconds of being answered, and that ELS on Android has supported over 20 million calls and SMS messages to date. ELS is powered by Android's Fused Location Provider, Google's location engine.

Promising safety?

According to Google, the precise location data is available only to emergency service providers, and Google never collects or uses it for its own purposes. ELS data is sent directly to the relevant authority.

Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed with a responder during an emergency call or SMS, but the emergency service provider must get the user's approval for access. A prompt appears on screen as soon as the responder requests video, and the user can accept the request and share a visual feed, or reject it.

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


Experts have found a high-severity security bug in Open WebUI that may expose users to account takeover (ATO) and, in some cases, full server compromise. 

Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw, tracked as CVE-2025-64496 and discovered by Cato Networks researchers, affects Open WebUI versions 0.6.34 and older when the Direct Connections feature is enabled. It carries a severity rating of 7.3 out of 10. 

The vulnerability exists inside Direct Connections, a feature that allows users to connect Open WebUI to external OpenAI-compatible model servers. While built to support flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into connecting to a malicious server posing as a genuine AI endpoint. 

Fundamentally, the vulnerability stems from misplaced trust between unsafe model servers and the user's browser session. A malicious server can send a tailored server-sent events (SSE) message that triggers the execution of JavaScript code in the browser. This lets a threat actor steal authentication tokens stored in local storage, and with those tokens, gain full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other important data. 
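Open WebUI's backend is Python-based, so as a defensive illustration, here is a minimal sketch of the general mitigation: treating event payloads from an untrusted model server as data, validated against an allow-list of event types, before anything is relayed to the browser. The event names and schema here are assumptions for illustration, not Open WebUI's actual internals.

    import json

    # Hypothetical allow-list: only event types the UI actually needs.
    ALLOWED_EVENT_TYPES = {"chat:message:delta", "chat:completion", "status"}


    def sanitize_sse_event(raw: str) -> dict | None:
        """Parse an SSE data payload from an untrusted model server and
        drop anything that is not a known, plain-data event."""
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            return None  # not valid JSON: discard rather than forward

        if not isinstance(event, dict):
            return None
        if event.get("type") not in ALLOWED_EVENT_TYPES:
            # Unknown event types (e.g., ones asking the client to
            # execute code) are rejected outright.
            return None
        # Keep only expected fields so extra keys cannot smuggle payloads.
        return {"type": event["type"], "data": str(event.get("data", ""))}


    if __name__ == "__main__":
        hostile = '{"type": "execute", "code": "fetch(\'https://evil.example\')"}'
        benign = '{"type": "status", "data": "model loaded"}'
        print(sanitize_sse_event(hostile))  # None: dropped
        print(sanitize_sse_event(benign))   # forwarded as plain data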

Depending on user privileges, the consequences can be different.

Consequences?

  • Hackers can steal JSON web tokens and hijack sessions. 
  • Full account takeover, including access to chat logs and uploaded documents.
  • Leakage of sensitive data and credentials shared in conversations. 
  • If the user has the workspace.tools permission enabled, the flaw can lead to remote code execution (RCE). 

Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025 after patch validation and CVE assignment. Open WebUI versions 0.6.35 and later block the compromised execute events, patching the user-facing threat.

According to Cato Networks researchers, the security patch in v0.6.35 and newer “closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources.”





New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations enforceable in federal court under a broad legislative proposal disclosed recently by US Senator Marsha Blackburn, R-Tenn. 

About the proposal

Among other things, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent. The proposal allows statutory and punitive damages, attorney fees, and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

The directive required the administration to collaborate with Congress to ensure a single, least-burdensome national standard rather than fifty inconsistent state ones. 

The president instructed Michael Kratsios, his science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, to jointly propose federal AI legislation that would supersede any state laws contradicting administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal would preempt state laws regulating the management of catastrophic AI risks. The legislation would also largely preempt state digital-replica laws to create a national standard for AI. 

The proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill would take effect 180 days after enactment. 

India's Fintech Will Focus More on AI & Compliance in 2026


India’s fintech industry enters 2026 with a new set of goals. Early on, the industry focused on rapid expansion through digital payments and aggressive customer acquisition, but the sector is now shifting toward sustainable growth, compliance, and risk management. 

“We're already seeing traditional boundaries blur- payments, lending, embedded finance, and banking capabilities are coming closer together as players look to build more integrated and efficient models. While payments continue to be powerful for driving access and engagement, long-term value will come from combining scale with operational efficiency across the financial stack,” said Ramki Gaddapati, Co-Founder, APAC CEO and Global CTO, Zeta.

Artificial intelligence (AI) is emerging as a critical tool in this transformation, helping firms strengthen fraud detection, streamline regulatory processes, and enhance customer trust.

What does the data suggest?

According to Reserve Bank of India (RBI) data, digital payment volumes crossed 180 billion transactions in FY25, powered largely by the Unified Payments Interface (UPI) and embedded payment systems across commerce, mobility, and lending platforms. 

Yet, regulators and industry leaders are increasingly concerned about operational risks and fraud. The RBI, along with the Bank for International Settlements (BIS), has highlighted vulnerabilities in digital payment ecosystems, urging fintechs to adopt stronger compliance frameworks.

AI a major focus

Artificial intelligence is set to play a central role in this compliance-first era. Fintech firms are deploying AI to:

  • Detect and prevent fraudulent transactions in real time
  • Automate compliance reporting and monitoring
  • Personalize customer experiences while maintaining data security
  • Analyze risk patterns across lending and investment platforms
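As a toy illustration of the first item on that list, the sketch below flags card transactions that deviate sharply from recent spending. Production fraud systems use far richer features and models; treat this purely as a conceptual example.

    from collections import deque
    from statistics import mean, stdev


    class FraudMonitor:
        """Toy real-time check: flag transactions that deviate sharply
        from an account's recent spending pattern."""

        def __init__(self, window: int = 50, threshold: float = 3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def check(self, amount: float) -> bool:
            """Return True if the transaction looks anomalous."""
            suspicious = False
            if len(self.history) >= 10:
                mu, sigma = mean(self.history), stdev(self.history)
                # Flag amounts more than `threshold` standard deviations
                # above the recent mean.
                suspicious = sigma > 0 and (amount - mu) / sigma > self.threshold
            self.history.append(amount)
            return suspicious


    if __name__ == "__main__":
        monitor = FraudMonitor()
        for amt in [120, 80, 95, 110, 105, 90, 100, 85, 115, 98, 5000]:
            if monitor.check(amt):
                print(f"ALERT: transaction of {amt} looks anomalous")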

Moving beyond payments?

The sector is also diversifying beyond payments. Fintechs are moving deeper into credit, wealth management, and banking-related services, areas that demand stricter oversight. This diversification allows firms to capture new revenue streams and broaden their customer base, but it also exposes them to heightened regulatory scrutiny and the need for more robust governance structures.

“The DPDP Act is important because it protects personal data and builds trust. Without compliance, organisations face penalties, data breaches, customer loss, and reputational damage. Following the law improves credibility, strengthens security, and ensures responsible data handling for sustained business growth,” said Neha Abbad, co-founder, CyberSigma Consulting.




Former Cybersecurity Employees Involved in Ransomware Extortion Incidents Worth Millions


It is an unfortunate and shameful moment for the cybersecurity industry when cybersecurity professionals themselves betray trust to launch cyberattacks against their own country. In a shocking case, two men have admitted to working normal day jobs as cybersecurity professionals while moonlighting as cyber attackers.

About the accused

An ex-employee of the Israeli cybersecurity company Sygnia has pleaded guilty to federal crimes in the US for his involvement in ransomware attacks that aimed to extort millions of dollars from US firms. 

The culprit, Ryan Clifford Goldberg, worked as a cyber incident response supervisor at Sygnia and admitted that he was involved in a years-long scheme of attacking businesses around the US. 

Kevin Tyler Martin, a former DigitalMint employee who worked as a negotiation intermediary with threat actors (a role meant to help ransomware victims), has also admitted involvement. 

The situation is particularly disturbing because both men held positions of trust inside the sector established to fight against such threats.

Accused pled guilty to extortion charges 

Both of the accused have pleaded guilty to one count of conspiracy to interfere with commerce by extortion, according to federal court records. In their plea statements, they admitted that, along with a third actor who has not been charged or identified, they carried out business compromises and ransom extortions over several years. 

Extortion worth millions 

In one incident, the actors successfully extorted over $1 million in crypto from a Florida-based medical equipment firm. According to the federal court records, alongside their legitimate work they deployed the ALPHV/BlackCat ransomware to exfiltrate and encrypt targets’ data, and split the extortion proceeds with the ransomware’s developers. 

According to DigitalMint, two of the people charged were former employees. Both were fired after the incident and “acted wholly outside the scope of their employment and without any authorization, knowledge or involvement from the company,” DigitalMint said in an email shared with Bloomberg.

In a recent conversation with Bloomberg, Sygnia said that it was not a target of the investigation and that Goldberg was relieved of his duties as soon as the matter became known.

A representative for Sygnia declined to speak further, and Goldberg and Martin's lawyers also declined to comment on the report.

2FA Fail: Hackers Exploit Microsoft 365 to Launch Code Phishing Attacks


Two-factor authentication (2FA) has long been one of the most secure ways to protect online accounts, requiring a secondary code in addition to a password. In recent times, however, 2FA has become less reliable, as hackers have found ways to get around it. 

Experts now advise users to adopt passkeys instead of 2FA codes where possible, as they are more secure and less prone to phishing. Recent reports have shown how 2FA as a security method can be undermined. 

Russia-linked, state-sponsored threat actors are now abusing Microsoft 365. Experts from Proofpoint have noticed a surge in Microsoft 365 account takeover attacks in which threat actors use authentication code phishing to abuse Microsoft's device authorization flow.

They are also launching advanced phishing campaigns that bypass 2FA and compromise sensitive accounts. 

About the attack

The recent series of attacks uses device code phishing, in which hackers lure victims into entering their authentication codes on fake websites that look real. Once the code is entered, the hackers gain entry to the victim's Microsoft 365 account, bypassing the protection of 2FA. 

The campaigns started in early 2025. In the beginning, hackers relied primarily on device code phishing. By March, they had expanded their tactics to exploit OAuth authentication workflows, which are widely used for signing into apps and services. The development shows how quickly threat actors adapt once security experts uncover their tricks.
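For context, the device authorization flow being abused works roughly as in the sketch below (the OAuth 2.0 device grant, shown against Microsoft's identity platform endpoints). The attacker's trick is social rather than technical: they start a legitimate flow like this one and persuade the victim to approve the attacker-generated code. The client ID is a placeholder and the sketch is simplified.

    import time

    import requests

    TENANT = "common"
    CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder app ID
    BASE = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"


    def device_code_flow() -> dict:
        """Walk through the OAuth 2.0 device authorization grant."""
        # Step 1: request a device code and a short user code.
        resp = requests.post(
            f"{BASE}/devicecode",
            data={"client_id": CLIENT_ID, "scope": "User.Read offline_access"},
        ).json()
        # In a phishing attack, THIS user code and verification URL are what
        # the attacker sends to the victim, dressed up as a harmless sign-in.
        print("Visit:", resp["verification_uri"], "and enter:", resp["user_code"])

        # Step 2: poll the token endpoint until someone approves the code.
        while True:
            token = requests.post(
                f"{BASE}/token",
                data={
                    "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                    "client_id": CLIENT_ID,
                    "device_code": resp["device_code"],
                },
            ).json()
            if "access_token" in token:
                # Whoever approved the code just granted this session access.
                return token
            time.sleep(resp.get("interval", 5))


    if __name__ == "__main__":
        device_code_flow()

Note that the victim types the code into the legitimate sign-in page, which is why the approval looks trustworthy to them.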

Who is the victim? 

The attacks particularly target high-value sectors, including:

  • Universities and research institutes 
  • Defense contractors
  • Energy providers
  • Government agencies 
  • Telecommunication companies 

By targeting these sectors, hackers increase the impact of their attacks for purposes such as disruption, espionage, and financial motives. 

The impact 

The surge in 2FA code attacks exposes a major gap: no security measure is foolproof. While 2FA is still far stronger than relying on passwords alone, it can be undermined if users are deceived into handing over their codes. This is not a failure of the technology itself, but of human trust and awareness.  

A single compromised account can expose sensitive emails, documents, and internal systems. Users are at risk of losing their personal data, financial information, and even identity in these cases.

How to Stay Safe

  • Verify URLs carefully. Never enter authentication codes on unfamiliar or suspicious websites.
  • Use phishing-resistant authentication. Hardware security keys (like YubiKeys) or biometric logins are harder to trick.
  • Enable conditional access policies. Organizations can restrict logins based on location, device, or risk level.
  • Monitor OAuth activity. Be cautious of unexpected consent requests from apps or services.
  • Educate users. Awareness training is often the most effective defense against social engineering.


Antivirus vs Identity Protection Software: What to Choose and How?


Users often lump digital security into a single category and confuse identity protection with antivirus, assuming both work the same way. They do not. Before you buy either, it is important to understand the difference between the two. This blog covers the difference between identity theft protection and device security.

Cybersecurity threats: Past vs present 

Traditionally, a common computer virus could crash a machine and infect a few files. That was it. Today, however, the threat landscape has shifted from overwhelming a computer's resources to stealing personal data. 

A computer virus is malware that self-replicates and spreads between devices. It corrupts data and software, and can also steal personal data. 

With time, hackers have learned that users are easier targets than computers. These days, social engineering attacks pose a bigger threat than classic viruses. A well-planned phishing email or a fake login page benefits hackers more than a traditional virus ever did. 

The surge in data breaches has made things easy for hackers. Your data, from phone numbers to financial details and passwords, is swimming in databases, sold like bulk goods on the dark web. 

AI has made things worse and easier to exploit. Hackers can now create believable messages and even impersonate your voice. These schemes don't require creativity; they only need to be convincing enough to bait a victim into clicking or replying. 

Where antivirus fails

Your personal data never stays only on your computer; it is collected and sold by data brokers and advertisers, or passed to third parties who profit from it. When threat actors get their hands on this data, they can use it to impersonate you.

In this case, antivirus is of no help. It cannot notice breaches happening at organizations you don't control, or someone impersonating you. Antivirus protects your system from malware; it cannot act on threats that live outside your system. There is a limit to what it can do: antivirus can protect the machine, but not the user behind it. 

Role of identity theft protection 

Identity protection doesn't concern itself with your system's health. It looks out for the information that follows you everywhere: your SSN, email addresses, phone number, and accounts linked to your finances. If something suspicious turns up, it informs you. Identity protection works mostly on the monitoring side. It may watch your credit reports for early warning signs of theft, such as a new account, a hard inquiry, or a falling credit score. It also checks whether your data has shown up on the dark web or in recent leaks. 
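For a small taste of what that monitoring looks like in practice, the sketch below checks an email address against the Have I Been Pwned breach API. It is a simplified illustration: the real API requires a paid API key, and commercial identity protection services draw on many more sources.

    import requests

    HIBP_API = "https://haveibeenpwned.com/api/v3/breachedaccount"
    API_KEY = "your-hibp-api-key"  # placeholder: the service requires a paid key


    def check_breaches(email: str) -> list[str]:
        """Return the names of known breaches an email address appears in."""
        resp = requests.get(
            f"{HIBP_API}/{email}",
            headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-demo"},
            timeout=10,
        )
        if resp.status_code == 404:
            return []  # address not found in any known breach
        resp.raise_for_status()
        return [breach["Name"] for breach in resp.json()]


    if __name__ == "__main__":
        hits = check_breaches("user@example.com")
        if hits:
            print("Found in breaches:", ", ".join(hits))
        else:
            print("No known breaches. Keep monitoring anyway.")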

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




Online Retail Store Coupang Suffers South Korea's Worst Data Breach, Leak Linked to Former Employee


Data of 33.7 million customers leaked

Data breaches are an unfortunate reality that businesses often suffer. Failing to address them is even worse, as it compounds the reputational and privacy damage. 

A breach at Coupang that leaked the data of 33.7 million customers has been linked to a former employee who kept access to internal systems after leaving the organization. 

About the incident 

The Seoul Metropolitan Police Agency shared the news with news agencies after an inquiry that recently involved a raid on Coupang's offices. The firm is South Korea's biggest online retailer, employing 95,000 people and generating annual revenue of more than $30 billion. 

Earlier in December, Coupang reported that it had been hit by a data breach that leaked the personal data of 33.7 million customers, including email addresses, names, order information, and physical addresses.

The incident happened in June 2025, but the firm only discovered it in November and immediately launched an internal investigation. 

The measures

In early December, Coupang posted an update on the breach, assuring customers that the leaked data had not been exposed anywhere online. 

Even so, and despite Coupang's full cooperation with the authorities, officials raided the firm's various offices on Tuesday to gather evidence for a detailed inquiry.

Recently, Coupang's CEO Park Dae-Jun resigned and apologized to the public for failing to prevent what is now South Korea's worst cybersecurity breach in history. 

Police investigation 

On the second day of the police investigation at Coupang's offices, officials identified the main suspect as a 43-year-old Chinese national and former employee of the retail giant. The man, called JoongAng in reports, joined the firm in November 2022 and oversaw its authentication management system. He left the firm in 2024 and is suspected to have already left South Korea. 

What next?

According to the police, although Coupang is considered the victim, the business and staff in charge of safeguarding client information may be held accountable if carelessness or other legal infractions are discovered. 

Since the beginning of the month, the authorities have received hundreds of reports of Coupang impersonation. The incident has triggered a wave of phishing activity in the country, where the breach affected almost two-thirds of the population.

FTC Refuses to Lift Ban on Stalkerware Company that Exposed Sensitive Data


US regulators have upheld a ban on a stalkerware maker after a data breach leaked information about its customers and the people they were spying on. Consumer spyware company Support King still cannot sell surveillance software, the US Federal Trade Commission (FTC) said. 

The FTC has denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.

The FTC announced the move in a press release after Zuckerman petitioned the agency in July 2025 to cancel the ban order. 

The FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” in 2021 and stopped him from running other stalkerware businesses. Zuckerman also had to delete all the data stored by SpyFone and undergo various audits verifying the cybersecurity measures in place at his ventures. Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said that the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security."

Zuckerman said in his petition that the FTC mandate has made it difficult for him to conduct other businesses, causing monetary losses, even though Support King is out of business and he now only operates a restaurant while planning other ventures.

The ban stems from a 2018 incident in which a researcher discovered a SpyFone Amazon S3 bucket that left sensitive data, including selfies, chats, texts, contacts, passwords, logins, and audio recordings, exposed in the open online. The leaked data comprised 44,109 email addresses.
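Exposures like that S3 bucket are usually a configuration failure rather than an exotic hack. As a defensive illustration, here is a minimal sketch using boto3 to check whether a bucket blocks public access; the bucket name is a placeholder, and this inspects only one of several relevant settings.

    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "example-app-data"  # placeholder bucket name


    def check_public_access(bucket: str) -> None:
        """Warn if an S3 bucket does not fully block public access."""
        s3 = boto3.client("s3")
        try:
            config = s3.get_public_access_block(Bucket=bucket)
            settings = config["PublicAccessBlockConfiguration"]
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{bucket}: no public access block configured at all!")
                return
            raise

        # All four settings must be True for the bucket to be locked down.
        for name, enabled in settings.items():
            status = "ok" if enabled else "NOT ENABLED - potential exposure"
            print(f"{bucket}: {name} = {enabled} ({status})")


    if __name__ == "__main__":
        check_public_access(BUCKET)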

According to Levine, “SpyFone is a brazen brand name for a surveillance business that helped stalkers steal private information.”

According to TechCrunch, after the 2021 order, Zuckerman started running another stalkerware firm. In 2022, TechCrunch found breached data from stalkerware application SpyTrac. 

According to the data, SpyTrac was run by freelance developers with direct links to Support King, in an apparent attempt to evade the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to delete, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application. 

5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot can still respond if you ask, "Hey, what's that sharp pain in my side?", instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction. They generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind (see the short sketch after this list).
  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.
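Returning to the password item above: to see why a local generator beats asking a chatbot, here is a minimal sketch using Python's standard secrets module. It draws from a cryptographically secure random source on your own machine, so the password never leaves it or lands in anyone's training data.

    import secrets
    import string

    # The secrets module is designed for cryptographic randomness, unlike
    # random, which is predictable and unsafe for passwords.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation


    def generate_password(length: int = 20) -> str:
        """Generate a random password with guaranteed character variety."""
        while True:
            password = "".join(secrets.choice(ALPHABET) for _ in range(length))
            # Retry until the password has at least one of each class,
            # satisfying common complexity requirements.
            if (any(c.islower() for c in password)
                    and any(c.isupper() for c in password)
                    and any(c.isdigit() for c in password)
                    and any(c in string.punctuation for c in password)):
                return password


    if __name__ == "__main__":
        print(generate_password())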

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that would compel smartphone companies to enable satellite-based location tracking, activated around the clock, for surveillance purposes. 

Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates have already stirred in India: the government was recently forced to repeal an order mandating that smartphone companies pre-install a state-run cyber safety application on all devices, after activists and the opposition raised concerns about possible spying. 

About the proposal 

The government has been concerned that agencies do not receive accurate locations when legal requests are sent to telecom companies during investigations. Carriers currently rely on cellular tower data, which provides only an estimated area and can be inaccurate.

The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.

Strong opposition from tech giants 

If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. No comparable user location tracking mandate exists anywhere else in the world, according to the lobbying group India Cellular & Electronics Association (ICEA), which represents Google and Apple. 

Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device." 

According to technology experts, utilizing A-GPS technology, which is normally only activated when specific apps are operating or emergency calls are being made, might give authorities location data accurate enough to follow a person to within a meter.  

Telecom vs government 

Globally, governments are constantly looking for new ways to track the movements or data of mobile users. All Russian mobile phones are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world. 

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base includes members of the armed forces, judges, business executives, and journalists whose devices store sensitive data, and that the proposed location tracking would jeopardize their security.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



700+ Self-hosted Git Instances Impacted in Wild Zero-day Exploit


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project hasn't fixed it yet.

About the attack 

Over 700 instances have been impacted in these attacks. Wiz researchers described finding the bug as "accidental," saying the discovery happened in July while they were analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is tracked as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) and lets authenticated users overwrite files outside a repository, which leads to remote code execution (RCE). 

About Gogs

Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties. 

Git and Gogs allow symbolic links, which work as shortcuts to another file and can also point to objects outside the repository. The Gogs API additionally allows file contents to be written outside the regular Git protocol. 

Patch update 

The previous patch didn't address this kind of symbolic-link abuse, which lets threat actors leverage the flaw to remotely execute malicious code. 
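To see why repository-escaping symlinks are dangerous, the sketch below audits a checked-out repository for symbolic links that resolve outside its root. It is a detection aid built on the description above, with a placeholder path, not a patch for the Gogs flaw.

    import os
    from pathlib import Path


    def find_escaping_symlinks(repo_root: str) -> list[Path]:
        """List symlinks inside a repo that resolve outside the repo root."""
        root = Path(repo_root).resolve()
        escaping = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = Path(dirpath) / name
                if path.is_symlink():
                    # Resolve the link target and check containment.
                    target = path.resolve()
                    if not target.is_relative_to(root):
                        escaping.append(path)
        return escaping


    if __name__ == "__main__":
        # Placeholder path: point this at a cloned repository to audit it.
        for link in find_escaping_symlinks("/srv/gogs/repos/example"):
            print("Symlink escapes repository:", link)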

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through Supershell, and selling the access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what the threat actors are doing with access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open registration (if it is not needed) and to limit internet exposure by placing self-hosted Git services behind a VPN. Users should also watch for new repositories with unexpected use of the PutContents API or random eight-character names. 

For more details, readers can see the full list of indicators published by the researchers.



Researchers Find Massive Increase in Hypervisor Ransomware Incidents


Rise in hypervisor ransomware incidents 

Cybersecurity experts from Huntress have noticed a sharp rise in ransomware incidents on hypervisors and have asked users to be safe and have proper back-up. 

The Huntress case data has disclosed a surprising increase in hypervisor ransomware. It was involved in malicious encryption and rose from a mere three percent in the first half to a staggering 25 percent in 2025. 

Akira gang responsible 

Experts think the Akira ransomware gang is the primary threat actor behind this trend, though other players are also going after hypervisors to evade endpoint and network security controls. According to Huntress threat hunters, attackers target hypervisors because they are poorly secured, and compromising them lets hackers control virtual machines and manage networks.

Why hypervisors?

“This shift underscores a growing and uncomfortable trend: Attackers are targeting the infrastructure that controls all hosts, and with access to the hypervisor, adversaries dramatically amplify the impact of their intrusion," experts said. The attack tactic follows a classic playbook. Researchers have "seen it with attacks on VPN appliances: Threat actors realize that the host operating system is often proprietary or restricted, meaning defenders cannot install critical security controls like EDR [Endpoint Detection and Response]. This creates a significant blind spot.”

Other instances 

The experts have also found various cases where ransomware actors install ransomware payloads directly via hypervisors, escaping endpoint security. In a few cases, threat actors used built-in tools like OpenSSL to encrypt virtual machine volumes without having to upload custom ransomware binaries.

Attack tactic 

Huntress researchers have also observed attackers moving through a network to steal login credentials and then pivoting to hypervisors.

“We’ve seen misuse of Hyper-V management utilities to modify VM settings and undermine security features,” they added. “This includes disabling endpoint defenses, tampering with virtual switches, and preparing VMs for ransomware deployment at scale.”

Mitigation strategies 

Given the high volume of attacks on hypervisors, experts have suggested that admins revisit infosec basics such as multi-factor authentication, password hygiene, and patch updates. Admins should also adopt hypervisor-specific safety measures, such as allowing only allow-listed binaries to run on a host.
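On VMware ESXi, one concrete form of that allow-listing is the execInstalledOnly kernel setting, which restricts the host to binaries installed through signed VIBs. The sketch below checks and enables it over SSH; the host is a placeholder, and the commands should be verified against your ESXi version before use.

    import subprocess

    ESXI_HOST = "root@esxi01.example.com"  # placeholder host


    def run_esxcli(args: str) -> str:
        """Run an esxcli command on the ESXi host over SSH."""
        result = subprocess.run(
            ["ssh", ESXI_HOST, f"esxcli {args}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()


    if __name__ == "__main__":
        # Query the current value of the execInstalledOnly kernel setting.
        print(run_esxcli("system settings kernel list -o execInstalledOnly"))
        # Enforce it so only binaries from signed, installed VIBs can run.
        run_esxcli("system settings kernel set -s execInstalledOnly -v TRUE")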

For decades, the infosec community has known hypervisors to be an attractive target. In a worst-case scenario, a successful VM escape, where an attack on a guest virtual machine allows hijacking of the host and its hypervisor, things can go even further south. If this were to happen, the impact could be massive, as entire hyperscale clouds depend on hypervisors to isolate tenants' virtual systems.