
Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




Online Retail Store Coupang Suffers South Korea's Worst Data Breach, Leak Linked to Former Employee


Data of 33.7 million customers leaked

Data breaches are an unfortunate reality that businesses often suffer, and failing to address them is even worse, costing businesses reputational and privacy damage.

A breach at Coupang that leaked the data of 33.7 million customers has been linked to a former employee who kept access to internal systems after leaving the organization. 

About the incident 

The Seoul Metropolitan Police Agency shared the news with media outlets after an inquiry that recently involved a raid on Coupang's offices. The firm is South Korea's biggest online retailer; it employs 95,000 people and generates annual revenue of more than $30 billion.

Earlier in December, Coupang reported that it had been hit by a data breach that leaked the personal data of 33.7 million customers such as email IDs, names, order information, and addresses.

The incident happened in June 2025, but the firm discovered it only in November and immediately launched an internal investigation.

The measures

In early December, Coupang posted an update on the breach, assuring customers that the leaked data had not been exposed anywhere online.

Despite this, and despite Coupang's full cooperation with the authorities, officials raided several of the firm's offices on Tuesday to gather evidence for a detailed inquiry.

Recently, Coupang's CEO Park Dae-Jun resigned and apologized to the public for failing to prevent what is now South Korea's worst cybersecurity breach in history.

Police investigation 

On the second day of the police investigation at Coupang's offices, officials identified the main suspect as a 43-year-old Chinese national and former employee of the retail giant. The suspect, referred to as JoongAng, joined the firm in November 2022 and oversaw the authentication management system before leaving in 2024. He is suspected to have already left South Korea.

What next?

According to the police, although Coupang is considered the victim, the business and staff in charge of safeguarding client information may be held accountable if carelessness or other legal infractions are discovered. 

Since the beginning of the month, the authorities have received hundreds of reports of Coupang impersonation. Meanwhile, the incident, which exposed data belonging to almost two-thirds of the country's population, has fueled a large amount of phishing activity.

FTC Refuses to Lift Ban on Stalkerware Company that Exposed Sensitive Data


A stalkerware maker remains banned from the surveillance industry after a data breach leaked information about its customers and the people they were spying on. Consumer spyware company Support King still can't sell surveillance software, the US Federal Trade Commission (FTC) said.

The FTC has denied founder Scott Zuckerman's request to cancel the ban, which also applies to the subsidiaries OneClickMonitor and SpyFone.

The FTC announced the decision in a press release after Zuckerman petitioned the agency in July 2025 to cancel the ban order.

In 2021, the FTC banned Zuckerman from “offering, promoting, selling, or advertising any surveillance app, service, or business” and stopped him from running other stalkerware businesses. Zuckerman also had to delete all the data stored by SpyFone and undergo audits verifying that his ventures implemented cybersecurity measures. Samuel Levine, then acting director of the FTC's Bureau of Consumer Protection, said the "stalkerware was hidden from device owners, but was fully exposed to hackers who exploited the company’s slipshod security."

In his petition, Zuckerman said the FTC mandate has caused monetary losses that make it difficult for him to conduct other businesses, noting that Support King is out of business and that he now operates only a restaurant while planning other ventures.

The ban stemmed from a 2018 incident in which a researcher discovered an Amazon S3 bucket belonging to SpyFone that left sensitive data, such as selfies, chats, texts, contacts, passwords, logins, and audio recordings, exposed on the open internet. The leaked data included 44,109 email addresses.

Levine called SpyFone “a brazen brand name for a surveillance business that helped stalkers steal private information.”

According to TechCrunch, after the 2021 order, Zuckerman started running another stalkerware firm. In 2022, TechCrunch found breached data from stalkerware application SpyTrac. 

According to the data, SpyTrac was run by freelance developers with direct links to Support King, in an apparent attempt to evade the FTC ban. The breached data also contained records from SpyFone, which Support King was supposed to delete, as well as access keys to the cloud storage of OneClickMonitor, another stalkerware application.

5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot will still answer if you ask, "Hey, what's that sharp pain in my side?" instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction; they generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind; even a few lines of code can do better than a chatbot, as the sketch below shows.
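For readers who want a quick local alternative, the following is a minimal sketch using Python's standard library. It is an illustration of a cryptographically secure generator, not a substitute for a full password manager.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a CSPRNG, so output is neither shared nor predictable."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the operating system's secure random source, so no
    # two runs (or users) get correlated output, and the exact length
    # constraint is always honored -- two things chatbots struggle with.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```
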
  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that would compel smartphone makers to enable satellite-based location tracking, active around the clock, for surveillance.

Tech giants Samsung, Google, and Apple have opposed this move due to privacy concerns. Privacy debates flared in India after the government was forced to repeal an order mandating that smartphone companies pre-install a state-run cyber safety application on all devices, with activists and the opposition raising concerns about possible spying.

About the proposal 

The government has been concerned that agencies don't receive accurate locations when legal requests are sent to telecom companies during investigations. Currently, operators rely only on cellular tower data, which provides an estimated area rather than a precise location and can be inaccurate.

The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.

Strong opposition from tech giants 

If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. According to the India Cellular & Electronics Association (ICEA), a lobbying group representing Google and Apple, no comparable proposal to track user location exists anywhere else in the world.

Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."

According to technology experts, A-GPS technology, which is normally activated only when specific apps are running or emergency calls are being made, could give authorities location data accurate enough to track a person to within a meter.

Telecom vs government 

Globally, governments are constantly looking for new ways to track the movements and data of mobile users. All Russian mobile phones, for instance, are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base includes members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize the security of these users, whose devices store sensitive data.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



700+ Self-hosted Git Servers Impacted by a Zero-day Exploited in the Wild


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project hasn't fixed it yet.

About the attack 

Over 700 instances have been compromised in these attacks. Wiz researchers came across the bug by accident in July while analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."

The team informed Gogs' maintainers about the bug, who are now working on the fix. 

The flaw is tracked as CVE-2025-8110. It is essentially a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite files outside the repository, leading to remote code execution (RCE).

About Gogs

Gogs, written in Go, lets users host Git repositories on their own servers or cloud infrastructure without relying on GitHub or other third parties.

Git and Gogs allow symbolic links that work as shortcuts to another file. They can also point to objects outside the repository. The Gogs API also allows file configuration outside the regular Git protocol. 
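To make the risk concrete, here is a minimal, hypothetical sketch (in Python, not Gogs' own Go code) of the path-escape primitive that symlinks enable: if a service writes to a repository path without checking for symlinks, the write can land outside the repository. It does not reproduce the actual CVE-2025-8110 exploit chain.

```python
import os
import tempfile

root = tempfile.mkdtemp()
repo = os.path.join(root, "repo")           # the repository working tree
os.makedirs(repo)
target = os.path.join(root, "server.conf")  # a sensitive file outside it

# An attacker adds a symlink inside the repo pointing outside it.
os.symlink(target, os.path.join(repo, "innocent.txt"))

# A naive API writing to "repo/innocent.txt" actually writes through
# the symlink, modifying the file outside the repository.
with open(os.path.join(repo, "innocent.txt"), "w") as f:
    f.write("attacker-controlled content\n")

print(open(target).read())  # the outside file was overwritten
```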

Patch update 

The previous patch didn't address this symbolic-link exploit, which lets threat actors leverage the flaw to remotely deploy malicious code.

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical flaw in F5 through the Supershell backdoor and selling access to compromised UK government agencies, US defense organizations, and others.

Researchers still don't know what threat actors are doing with access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open registration (if not needed) and to limit internet exposure by placing self-hosted Git services behind a VPN. Users should also watch for new repositories with unexpected use of the PutContents API or random 8-character names, as the sketch below illustrates.
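As a small illustration of that last indicator, a defender could scan repository names for the random 8-character pattern. The check below is a hypothetical sketch (the name list would come from your own Gogs instance), not an official Wiz detection rule.

```python
import re

# Indicator from the advisory: new repos with random 8-character names.
RANDOM_NAME = re.compile(r"^[A-Za-z0-9]{8}$")

def flag_suspicious(repo_names):
    """Return names matching the random 8-character pattern."""
    return [n for n in repo_names if RANDOM_NAME.fullmatch(n)]

# Hypothetical repository list pulled from a self-hosted Gogs server.
print(flag_suspicious(["infra-tools", "aB3xQ9kZ", "docs"]))  # ['aB3xQ9kZ']
```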

For more details, readers can see the full list of indicators published by the researchers.



Researchers Find Massive Increase in Hypervisor Ransomware Incidents


Rise in hypervisor ransomware incidents 

Cybersecurity experts from Huntress have noticed a sharp rise in ransomware incidents targeting hypervisors and have urged users to harden their systems and maintain proper backups.

Huntress case data discloses a surprising increase in hypervisor ransomware: the share of malicious-encryption incidents involving hypervisors rose from a mere three percent in the first half of the year to a staggering 25 percent in 2025.

Akira gang responsible 

Experts believe the Akira ransomware gang is the primary threat actor behind this trend, though other players are also going after hypervisors to escape endpoint and network security controls. According to Huntress threat hunters, attackers target hypervisors because they are poorly secured, and compromising them lets hackers control virtual machines and manage networks.

Why hypervisors?

“This shift underscores a growing and uncomfortable trend: Attackers are targeting the infrastructure that controls all hosts, and with access to the hypervisor, adversaries dramatically amplify the impact of their intrusion," experts said. The attack tactic follows a classic playbook. Researchers have "seen it with attacks on VPN appliances: Threat actors realize that the host operating system is often proprietary or restricted, meaning defenders cannot install critical security controls like EDR [Endpoint Detection and Response]. This creates a significant blind spot.”

Other instances 

The experts have also found various cases where ransomware actors install ransomware payloads directly via hypervisors, escaping endpoint security. In a few cases, threat actors used built-in tools like OpenSSL to encrypt virtual machine volumes without having to upload custom ransomware binaries.

Attack tactic 

Huntress researchers have also observed attackers breaching a network to steal login credentials and then pivoting to the hypervisors.

“We’ve seen misuse of Hyper-V management utilities to modify VM settings and undermine security features,” they added. “This includes disabling endpoint defenses, tampering with virtual switches, and preparing VMs for ransomware deployment at scale.”

Mitigation strategies 

Given the surge of attacks on hypervisors, experts suggest admins revisit infosec basics such as multi-factor authentication, strong passwords, and timely patching. Admins should also adopt hypervisor-specific safeguards, such as allowing only allow-listed binaries to run on a host.

For decades, the infosec community has known hypervisors to be an attractive target. In a worst-case scenario, a successful VM escape, where an attack on a guest virtual machine allows hijacking of the host and its hypervisor, could go much further: entire hyperscale clouds depend on hypervisors to isolate tenants' virtual systems, so the impact could be massive.

End to End-to-end Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done something similar. Say goodbye to end-to-end encryption: with this new Android update, your RCS and SMS texts on work-managed devices are no longer private.

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption.”

Only for organizational devices 

This applies only to work-managed devices and doesn't impact personal devices. In regulated industries, it simply adds RCS archiving to existing SMS archiving. In an organization, however, texting is different from emailing: over text, employees sometimes share details of their non-work lives. End-to-end encryption kept these conversations private, but that will no longer be the case.

The end-to-end question 

There is a lot of misunderstanding around end-to-end encryption. It protects messages in transit, but once they reach your device they are decrypted and no longer protected.

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active.”

What will change?

With this update, getting a phone from work is no longer as appealing as it seems. Employees have long been wary of over-sharing on email, which is easy to monitor; texts felt different.

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.

“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging."

Beer Firm Asahi Not Entertaining Threat Actors After Cyberattack


Asahi denies ransom payment 

Japanese beer giant Asahi said it received no specific ransom demand from the threat actors behind a sophisticated cyberattack that may have exposed the data of more than two million people.

About the attack

At a press conference, CEO Atsushi Katsuki said the company had not been in touch with the threat actors, and that even if it had received a ransom demand, it would not have paid. Asahi, maker of Asahi Super Dry, one of Japan's most popular beers, suffered a cyberattack on September 29 and clarified on October 3 that it had been hit by ransomware. The attack has forced the company to delay the release of its financial results.

Attack tactic 

In such incidents, threat actors typically use malicious software to encrypt the target's systems and then demand a ransom in exchange for the decryption keys needed to get the systems running again.

Asahi said the threat actors may have accessed or stolen identity data, such as names and phone numbers, of around two million people, including employees, customers, and their families.

Qilin gang believed to be responsible 

The firm didn't disclose details of the attacker at the conference. Later, it told AFP by email that experts pointed to a high likelihood of an attack by the hacking group Qilin, which issued a statement that Japanese media interpreted as a claim of responsibility.

Commenting on the situation, Katsuki said the firm thought it had taken the necessary measures to prevent such an incident. "But this attack was beyond our imagination. It was a sophisticated and cunning attack," he said.

Impact on Asahi business 

Asahi delayed the release of its third-quarter earnings and recently said its annual financial results would also be delayed. "These and further information on the impact of the hack on overall corporate performance will be disclosed as soon as possible once the systems have been restored and the relevant data confirmed," the firm said.

Product supply has not been halted, and shipments will resume in stages as systems recover. "We apologise for the continued inconvenience and appreciate your understanding," Asahi said.

Critical Vulnerabilities Found in React Server Components and Next.js


Flaw exploited in the wild

The US Cybersecurity and Infrastructure Security Agency (CISA) added a critical security flaw affecting React Server Components (RSC) to its Known Exploited Vulnerabilities (KEV) catalog after exploitation in the wild.

The flaw, CVE-2025-55182 (CVSS score: 10.0), dubbed React2Shell, is a remote code execution (RCE) vulnerability that an unauthenticated threat actor can trigger without any setup.

Remote code execution 

According to the CISA advisory, "Meta React Server Components contains a remote code execution vulnerability that could allow unauthenticated remote code execution by exploiting a flaw in how React decodes payloads sent to React Server Function endpoints."

The problem stems from unsafe deserialization in the library's Flight protocol, which React uses to communicate between client and server. As a result, an unauthenticated remote attacker can execute arbitrary commands on the server by sending specially crafted HTTP requests. Unsafe deserialization, the conversion of text into live objects, is considered a dangerous class of software vulnerability.
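React2Shell's root cause is specific to how the Flight protocol parses payloads, but the underlying bug class is easy to demonstrate generically. The sketch below uses Python's pickle module (not React's actual wire format) purely to show why turning attacker-controlled text into live objects can execute code.

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct the object: here it says
    # "call os.system with this argument", so deserializing runs a command.
    def __reduce__(self):
        import os
        return (os.system, ("echo code ran during deserialization",))

payload = pickle.dumps(Malicious())  # what an attacker would send over the wire
pickle.loads(payload)                # merely parsing the bytes executes the command
```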

About the flaw

 "The React2Shell vulnerability resides in the react-server package, specifically in how it parses object references during deserialization," said Martin Zugec, technical solutions director at Bitdefender.

The incident surfaced when Amazon said it found attack attempts from infrastructure related to Chinese hacking groups such as Jackpot Panda and Earth Lamia. "Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda," AWS said.

Attack tactic 

Some attacks deployed cryptocurrency miners, while others ran "cheap math" PowerShell commands to verify successful exploitation before dropping in-memory downloaders capable of fetching additional payloads from a remote server.

According to data shared by attack surface management platform Censys, about 2.15 million internet-facing services may be affected by this vulnerability, comprising exposed web services using React Server Components and exposed instances of frameworks such as Next.js, Waku, React Router, and RedwoodSDK.


Banking Malware Can Hack Communications via Encrypted Apps


Sturnus hacks communication 

A new Android banking malware dubbed Sturnus can capture conversations in their entirety from encrypted messaging apps such as Signal, WhatsApp, and Telegram, and can take complete control of the device.

While still under development, the malware is fully functional and has been programmed to target accounts at various financial institutions across Europe by employing "region-specific overlay templates."

Attack tactic 

Sturnus uses a combination of plaintext, RSA, and AES-encrypted communication with the command-and-control (C2) server, making it a more sophisticated threat than existing Android malware families.

According to research from online fraud prevention and threat intelligence firm ThreatFabric, Sturnus can steal messages from secure messaging apps after the decryption step by recording content from the device screen. The malware can also collect banking account details using HTML overlays and offers full, real-time access through a VNC session.

Malware distribution 

The researchers haven't determined how the malware is distributed, but they consider malvertising or direct messages to victims plausible approaches. Upon installation, the malware connects to the C2 server to register the victim via a cryptographic exchange.

For instructions and data exfiltration, it creates an encrypted HTTPS connection; for real-time VNC operations and live monitoring, it creates an AES-encrypted WebSocket channel. Sturnus can begin reading text on the screen, record the victim's inputs, view the UI structure, identify program launches, press buttons, scroll, inject text, and traverse the phone by abusing the Accessibility services on the device.

To gain full control of the device, Sturnus obtains Android Device Administrator privileges, which let it monitor password changes and lock or unlock the device remotely. The malware also tries to stop the user from revoking its privileges or uninstalling it. When the user opens WhatsApp, Telegram, or Signal, Sturnus uses its permissions to capture message content, inputted text, contact names, and conversation threads.

65% of Top AI Companies Leak Secrets on GitHub

 

Leading AI companies continue to face significant cybersecurity challenges, particularly in protecting sensitive information, as highlighted in recent research from Wiz. The study focused on the Forbes top 50 AI firms, revealing that 65% of them were found to be leaking verified secrets—such as API keys, tokens, and credentials—on public GitHub repositories. 

These leaks often occurred in places not easily accessible to standard security scanners, including deleted forks, developer repositories, and GitHub gists, indicating a deeper and more persistent problem than surface-level exposure. Wiz's approach to uncovering these leaks involved a framework called "Depth, Perimeter, and Coverage." Depth allowed researchers to look beyond just the main repositories, reaching into less visible parts of the codebase. 

Perimeter expanded the search to contributors and organization members, recognizing that individuals could inadvertently upload company-related secrets to their own public spaces. Coverage ensured that new types of secrets, such as those used by AI-specific platforms like Tavily, Langchain, Cohere, and Pinecone, were included in the scan, which many traditional tools overlook.

The findings show that despite being leaders in cutting-edge technology, these AI companies have not adequately addressed basic security hygiene. The researchers disclosed the discovered leaks to the affected organisations, but nearly half of these notifications either failed to reach the intended recipients, were ignored, or received no actionable response, underscoring the lack of dedicated channels for vulnerability disclosure.

Security Tips 

Wiz recommends several essential security measures for all organisations, regardless of size. First, deploying robust secret scanning should be a mandatory practice to proactively identify and remove sensitive information from codebases. Second, companies should prioritise the detection of their own unique secret formats, especially if they are new or specific to their operations. Engaging vendors and the open source community to support the detection of these formats is also advised.
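The heart of the first recommendation can be sketched in a few lines. The patterns below are illustrative examples only (they are not Wiz's rules), and a real deployment would add provider-specific formats such as the AI-platform keys mentioned earlier.

```python
import re
import sys

# Illustrative detection rules; production scanners ship hundreds.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][\w\-]{16,}['\"]"),
}

def scan_file(path):
    """Print every line that matches a known secret pattern."""
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

for path in sys.argv[1:]:
    scan_file(path)
```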

Finally, establishing a clear and accessible disclosure protocol is crucial. Having a dedicated channel for reporting vulnerabilities and leaks enables faster remediation and better coordination between researchers and organisations, minimising potential damage from exposure. The research serves as a stark reminder that even the most advanced companies must not overlook fundamental cybersecurity practices to safeguard sensitive data and maintain trust in the rapidly evolving AI landscape.

Sam Altman’s Iris-Scanning Startup Reaches Only 2% of Its Goal

Sam Altman’s ambitious—and often criticized—vision to scan humanity’s eyeballs for a profit is falling far behind its own expectations. The startup, now known simply as World (previously Worldcoin), has barely made a dent in its goal of creating a global biometric identity network. Despite backing from major venture capital firms, the company has reportedly achieved only two percent of its goal to scan one billion people. According to Business Insider, World has so far enrolled around 17.5 million users, which is far more than many initially expected for a project this unconventional—yet still vastly insufficient for its long-term aims.

World is part of Tools for Humanity, co-founded by Altman, who serves as chairman, and CEO Alex Blania. The concept is straightforward but controversial: individuals visit a World location, where a metallic orb scans their irises and converts the pattern into a unique, encrypted digital identifier. This 12,800-digit binary code becomes the user’s key to accessing World’s digital ecosystem, which includes an app marketplace and its own cryptocurrency, Worldcoin. The broader vision is for World to operate as both a verification layer and a payment identity in an online world increasingly swamped by AI-generated content and bots—many created through Altman’s other enterprise, OpenAI.

Although privacy concerns have followed the project since its launch, a few experts have been surprisingly positive about its security model. Cryptographer Matthew Green examined the system and noted in 2023: “As you can see, this system appears to avoid some of the more obvious pitfalls of a biometric-based blockchain system… This architecture rules out many threats that might lead to your eyeballs being stolen or otherwise needing to be replaced.”

Gizmodo’s own reporters tested World’s offerings last year and found no major red flags, though their overall impressions were lukewarm. The outlet contacted Tools for Humanity to ask when the company expects to hit its lofty target of one billion scans—a milestone that appears increasingly distant.

Regulatory scrutiny in several countries has further slowed World’s expansion, highlighting the uphill battle it faces in trying to persuade the global population to participate in its unusual biometric program.

To accelerate adoption, World is reportedly looking to land major identity-verification deals with widely used digital platforms. The BI report highlights a strategy centered on partnering with companies that already require or are moving toward stronger identity verification. It states that World launched a pilot with Match Group to verify Tinder users in Japan, and has struck partnerships with Stripe, Visa, and gaming brand Razer. A Semafor report also noted that Reddit has been in discussions with Tools for Humanity about integrating its verification technology.

Even with these potential partnerships, scaling the project remains a steep challenge. Requiring users to physically appear at an office and wait in line to scan their eyes is unlikely to support rapid growth. To realistically reach hundreds of millions of users, the company will likely need to introduce app-based verification or another frictionless alternative. Sources told the New York Post in September that World is aiming for 100 million sign-ups over the next year, suggesting that a major expansion or product evolution may be in the works.

Tesla’s Humanoid Bet: Musk Pins Future on Optimus Robot

 

Elon Musk envisions human-shaped robots, particularly the Optimus humanoid, as a pivotal element in Tesla's future AI and robotics landscape, aiming to revolutionize both industry and daily life. Musk perceives these robots not merely as automated tools but as advanced entities capable of performing complex tasks in the physical world, interacting seamlessly with humans and their environments.

A core motivation behind developing humanoid robots lies in their potential to address practical challenges, from industrial automation to personal assistance. Musk believes these robots can work alongside humans in workplaces, handle repetitive or hazardous tasks, and even serve in caregiving roles, transforming societal and economic models. Tesla's plans include building a large-scale Optimus factory in Fremont, with aims to produce millions of units, underscoring the strategic importance Musk attaches to this venture.

Technologically, the breakthrough for these robots extends beyond bipedal mechanics. Critical advancements involve sensor fusion—integrating multiple data inputs for real-time decision-making—energy density to ensure longer operational periods, and edge reasoning, which allows autonomous processing without constant cloud connectivity. These innovations are crucial for creating robots that are not only physically capable but also intelligent and adaptable in diverse environments.

The idea of robots interacting with humans in everyday scenarios has garnered significant attention. Musk envisions Optimus playing a major role in daily life, helping with chores, assisting in services like hospitality, and contributing to industries like healthcare and manufacturing. Tesla's ambitious plans include building a factory capable of producing one million units annually, signaling a ratcheting up of competition and investment in humanoid robotics.

Overall, Musk's emphasis on human-shaped robots reflects a strategic vision where AI-powered humanoids are integral to Tesla's growth in artificial intelligence, robotics, and beyond. His goal is to develop robots that are not only functional but also capable of integration into human environments, ultimately aiming for a future where such machines coexist with and assist humans in daily life.

China Announces Major Cybersecurity Law Revision to Address AI Risks

 



China has approved major changes to its Cybersecurity Law, marking its first substantial update since the framework was introduced in 2017. The revised legislation, passed by the Standing Committee of the National People’s Congress in late October 2025, is scheduled to come into effect on January 1, 2026. The new version aims to respond to emerging technological risks, refine enforcement powers, and bring greater clarity to how cybersecurity incidents must be handled within the country.

A central addition to the law is a new provision focused on artificial intelligence. This is the first time China’s cybersecurity legislation directly acknowledges AI as an area requiring state guidance. The updated text calls for protective measures around AI development, emphasising the need for ethical guidelines, safety checks, and governance mechanisms for advanced systems. At the same time, the law encourages the use of AI and similar technologies to enhance cybersecurity management. Although the amendment outlines strategic expectations, the specific rules that organisations will need to follow are anticipated to be addressed through later regulations and detailed technical standards.

The revised law also introduces stronger enforcement capabilities. Penalties for serious violations have been raised, giving regulators wider authority to impose heavier fines on both companies and individuals who fail to meet their obligations. The scope of punishable conduct has been expanded, signalling an effort to tighten accountability across China’s digital environment. In addition, the law’s extraterritorial reach has been broadened. Previously, cross-border activities were only included when they targeted critical information infrastructure inside China. The new framework allows authorities to take action against foreign activities that pose any form of network security threat, even if the incident does not involve critical infrastructure. In cases deemed particularly severe, regulators may impose sanctions that include financial restrictions or other punitive actions.

Alongside these amendments, the Cyberspace Administration of China has issued a comprehensive nationwide reporting rule called the Administrative Measures for National Cybersecurity Incident Reporting. This separate regulation will become effective on November 1, 2025. The Measures bring together different reporting requirements that were previously scattered across multiple guidelines, creating a single, consistent system for organisations responsible for operating networks or providing services through Chinese networks. The Measures appear to focus solely on incidents that occur within China, including those that affect infrastructure inside the country.

The reporting rules introduce a clear structure for categorising incidents. Events are divided into four levels based on their impact. Under the new criteria, an incident qualifies as “relatively major” if it involves a data breach affecting more than one million individuals or if it results in economic losses of over RMB 5 million. When such incidents occur, organisations must file an initial report within four hours of discovery. A more complete submission is required within seventy-two hours, followed by a final review report within thirty days after the incident is resolved.

To streamline compliance, the regulator has provided several reporting channels, including a hotline, an online portal, email, and the agency’s official WeChat account. Organisations that delay reporting, withhold information, or submit false details may face penalties. However, the Measures state that timely and transparent reporting can reduce or remove liability under the revised law.



AI Models Trained on Incomplete Data Can't Protect Against Threats


In cybersecurity, AI is being hailed as the future of threat hunting. However, AI's hands are tied: models are only as good as their data pipeline. This principle is not confined to academic machine learning; it applies equally to cybersecurity.


Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of siloed or poorly curated data, and this creates issues later for both AI and human analysts. Even sophisticated algorithms can't handle inconsistent or incomplete data, and AI trained on poor data will produce poor results.

The importance of unified data 

Correlated data anchors the operation: it reduces noise and surfaces patterns that manual systems can't.

Correlating and pre-transforming the data makes it easier for LLMs and other AI tools to work with, and allows connected entities to surface naturally.

The same person may show up under entirely distinct names: as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a small portion of the truth.

Considered collectively, they give you behavioral clarity. Downloading dozens of items from Google Workspace may look merely odd on its own, but it becomes obviously malicious if the same user also clones dozens of repositories to a personal laptop and launches a public S3 bucket minutes later.
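A minimal sketch of that correlation step might look like the following. The identity map and event format are assumptions for illustration; in practice they would come from your IAM, GitHub, and Workspace connectors.

```python
from collections import defaultdict

# Hypothetical identity map: (source, account name) -> one human.
IDENTITY_MAP = {
    ("aws", "iam-user-jdoe"): "jane.doe",
    ("github", "jdoe-dev"): "jane.doe",
    ("gworkspace", "jane.doe@corp.example"): "jane.doe",
}

def correlate(events):
    """Group normalized events into one per-person timeline."""
    timelines = defaultdict(list)
    for e in events:
        person = IDENTITY_MAP.get((e["source"], e["actor"]), e["actor"])
        timelines[person].append((e["time"], e["source"], e["action"]))
    return {person: sorted(evts) for person, evts in timelines.items()}

events = [
    {"source": "gworkspace", "actor": "jane.doe@corp.example",
     "time": "12:00", "action": "bulk_download"},
    {"source": "github", "actor": "jdoe-dev",
     "time": "12:05", "action": "clone_30_repos"},
    {"source": "aws", "actor": "iam-user-jdoe",
     "time": "12:09", "action": "create_public_s3_bucket"},
]
# Three signals that look unrelated per-source merge into one timeline.
print(correlate(events))
```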

Finding threat via correlation 

Correlations that previously took hours, or were impossible, become instant when data from logs, configurations, code repositories, and identity systems are all housed in one location.

For instance, lateral movement using stolen short-lived credentials frequently crosses multiple systems before being discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs reveal the local compromise, but the full extent of the intrusion cannot be established without IAM and network data.


How Spyware Steals Your Data Without You Knowing About It


You might not be aware that your smartphone has spyware, which poses a risk to your privacy and personal security. However, what exactly is spyware? 

This type of malware, often disguised as a trustworthy mobile application, can steal your data, track your whereabouts, record conversations, monitor your social media activity, take screenshots of your activities, and more. Phishing, a phony mobile application, or a once-reliable app upgraded over the air into an information stealer are some of the ways it could end up on your phone.

Types of malware

Nuisanceware

Legitimate apps are frequently packaged with nuisanceware. It modifies your homepage or search engine settings, interrupts your web browsing with pop-ups, and may collect your browsing information to sell to advertising networks and agencies.

Nuisanceware is typically not harmful or a threat to your fundamental security, despite being a form of malvertising. Rather, many such packages focus on generating revenue by persuading users to view or click on advertisements.

Generic mobile spyware

Then there is generic mobile spyware. These strains collect information from the operating system and clipboard, along with potentially valuable items like account credentials or bitcoin wallet data. This kind of spyware isn't always targeted and may be employed in spray-and-pray phishing attempts.

Stalkerware

More advanced spyware is sometimes referred to as stalkerware. Unethical and frequently harmful, it can occasionally be found on desktop computers but is more commonly installed on phones.

The infamous Pegasus

Lastly, there is commercial spyware of governmental quality. One of the most popular variations is Pegasus, which is sold to governments as a weapon for law enforcement and counterterrorism. 

Pegasus has been discovered on smartphones owned by lawyers, journalists, activists, and political dissidents. Commercial-grade spyware is unlikely to affect you unless you belong to a group that governments with questionable ethics are particularly interested in, because it is expensive and requires careful victim selection and targeting.

How to know if spyware is on your phone?

There are signs that you may be the target of a spyware or stalkerware operator.

Receiving strange or unexpected emails or messages on social media could be a sign of a spyware infection attempt. You should remove these without downloading any files or clicking any links.

User Privacy: Is WhatsApp Not Safe to Use?


WhatsApp allegedly collects data

Recent attacks on WhatsApp allege that Meta's mega-messenger collects user data to generate ad money. WhatsApp strongly denies these fresh accusations, but it didn't help that a message of its own appeared to imply the same.

The allegations 

The recent attacks have two prominent origins. Few critics are as well-known as Elon Musk, particularly when the criticism plays out on X, the platform he owns. Musk asserted on the Joe Rogan Experience that "WhatsApp knows enough about what you're texting to know what ads to show you," calling it "a serious security flaw."

These so-called "hooks for advertising" are typically thought to rely on metadata, which includes information on who messages whom, when, and how frequently, as well as other information from other sources that is included in a user's profile.  

End-to-end encryption 

The message content itself is shielded by end-to-end encryption, which is the default setting for all 3 billion WhatsApp users. Signal's open-source encryption protocol, which the Meta platform adopted and modified for its own use, is the foundation of WhatsApp's security. So, in light of these new attacks, do you suddenly need to stop using WhatsApp?

In reality, WhatsApp's content is completely encrypted. There has never been any proof that Meta, WhatsApp, or anybody else can read the content itself. However, the platform you are utilizing is controlled by Meta, and it is aware of your identity. It does gather information on how you use the platform.  

How user data is used 

Additionally, it shares information with Meta so that it can "show relevant offers/ads." Signal has a small fraction of WhatsApp's user base, but it does not gather metadata in the same manner; consider using Signal instead for sensitive content. Steer clear of Telegram, which is not end-to-end encrypted by default, and RCS, which is not yet cross-platform encrypted.

Remember that end-to-end encryption only safeguards your data while it is in transit. It has no effect on the security of your content on the device. I can read all of your messages, whether or not they are end-to-end encrypted, if I have control over your iPhone or Android.

TP-Link Routers May Get Banned in US Due to Alleged Links With China


TP-Link routers may soon be banned in the US, as various federal agencies have backed a proposal to block their sale.

Alleged links with China

The news first came in December last year. According to the WSJ, officials at the Departments of Justice, Commerce, and Defense had launched investigations into the company due to national security threats from China. 

Currently, the proposal has gotten interagency approval. According to the Washington Post, "Commerce officials concluded TP-Link Systems products pose a risk because the US-based company's products handle sensitive American data and because the officials believe it remains subject to jurisdiction or influence by the Chinese government." 

TP-Link's connections to the Chinese government are not confirmed, however. The company has denied any such ties and disputes being characterized as a Chinese company.

About TP-Link routers 

The company was founded in China in 1996. After the October 2024 investigation, the company split into two: TP-Link Systems and TP-Link Technologies. "TP-Link's unusual degree of vulnerabilities and required compliance with [Chinese] law are in and of themselves disconcerting. When combined with the [Chinese] government's common use of [home office] routers like TP-Link to perpetrate extensive cyberattacks in the United States, it becomes significantly alarming" the officials wrote in October 2024. 

The company has dominated the US router market since the COVID pandemic, rising from 20% of total router sales in 2019 to 65% by 2025.

Why the investigation?

The US DoJ is investigating whether TP-Link engaged in predatory pricing, artificially lowering its prices to kill off competition.

The potential ban stems from an interagency review and is being handled by the Department of Commerce. Experts say the ban could be lifted in the future, given the Trump administration's ongoing negotiations with China.

Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

In a year when ingenious and sophisticated prompt injection tactics have been proliferating, security researcher hxr1 discovered a far more conventional method of weaponizing the now-ubiquitous AI stack. In a proof-of-concept (PoC) provided exclusively to Dark Reading, he detailed a living-off-the-land (LotL) attack that uses trusted Open Neural Network Exchange (ONNX) files to bypass security engines.

Impact on Windows

Cybersecurity programs are only as effective as their designers make them. They may detect excessive amounts of data exfiltrating from a network, or a foreign .exe file launching, because these are known signs of suspicious activity. However, if malware arrives on a system in a way they are unfamiliar with, they are unlikely to notice it.

That's why AI is such a difficult problem: new software, processes, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.

Why AI in malware is a problem

The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

Windows and security software automatically trust ONNX files, and why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet shown that they plan to, or are capable of, weaponizing neural networks. There are, however, plenty of ways to make it feasible.

Attack tactic

Planting a malicious payload in the metadata of a neural network is the simplest way to infect it. The trade-off is that the payload would sit in plain text, making it much easier for a security tool to detect, even incidentally.

Embedding the malware piecemeal among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker could use sophisticated steganography to hide a payload inside the neural network's own weights.

As long as a loader is close by that can call the necessary Windows APIs to unpack the payload, reassemble it in memory, and run it, all three approaches will function. All three are also very covert: trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.
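On the defensive side, the plain-text metadata variant at least is easy to audit. The sketch below, assuming the official onnx Python package is installed, lists a model's metadata entries so that unusually large values stand out; it would not catch the piecemeal or steganographic variants.

```python
import sys
import onnx  # official ONNX library: pip install onnx

def inspect_metadata(path):
    """Print each metadata entry's size; large opaque blobs merit review."""
    model = onnx.load(path)
    for prop in model.metadata_props:
        note = "  <-- unusually large, inspect" if len(prop.value) > 1024 else ""
        print(f"{prop.key}: {len(prop.value)} bytes{note}")

inspect_metadata(sys.argv[1])  # usage: python check_onnx.py model.onnx
```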