WhatsApp Under Fire for AI Update Disrupting Group Communication


WhatsApp's new artificial intelligence capability, called Message Summaries, aims to transform how users interact with their conversations. Built on Meta AI technology, it provides concise summaries of unread messages in both individual and group chats. 

The tool was created to help users stay informed in increasingly active chats by automatically compiling key points and contextual highlights, letting them catch up in a few taps instead of scrolling through lengthy message histories. The company says all summaries are generated privately, so confidentiality is maintained and the feature remains simple to use. 

With this rollout, WhatsApp signals its intention to integrate AI-driven features into its app to improve convenience and reshape communication habits for its global community, a move that has sparked both excitement and controversy. Announced last month, Message Summaries has now moved from pilot testing to a full-scale rollout. 

Having refined the tool and collected user feedback, WhatsApp now considers it stable and has formally launched it for wider use. In this initial phase, the feature is available only to US users and restricted to English, indicating a cautious approach to deploying large-scale artificial intelligence. 

Nevertheless, the platform plans to extend availability to more regions over time, along with multilingual support. The phased rollout underscores the company's focus on making the technology reliable and user-friendly before extending it to the global market. 

WhatsApp's stated aim for the controlled release is to gather more insight into how users interact with AI-generated conversation summaries and to fine-tune the experience before expanding internationally. Meanwhile, the lack of any option to disable or conceal the broader Meta AI integration has generated significant discontent among users. 

Meta has so far offered no explanation for the absence of an opt-out mechanism or why users were never given the chance to decline the AI integration. For many, this lack of transparency is as concerning as the technology itself, raising questions about how much control people have over their personal communications. Faced with these limitations, some users have tried to circumvent the chatbot by switching to a WhatsApp Business account. 

Several users report that this workaround removed the Meta AI functionality, but others note that the characteristic blue circle indicating Meta AI's presence still appeared, adding to the dissatisfaction and uncertainty. 

Meta has not confirmed whether the business-oriented version of WhatsApp will remain exempt from AI integration in the long term. The rollout also reflects Meta's broader goal of embedding generative AI across its ecosystem, including Facebook and Instagram. 

Meta AI first appeared in Facebook Messenger in the United Kingdom towards the end of 2024, followed by a gradual extension into WhatsApp as part of a unified vision to revolutionise digital interactions. Despite these ambitions, many users have found the feature intrusive and of little practical use. 

The chatbot often activates when people are simply searching past conversations or locating contacts, obstructing rather than streamlining the experience. Early feedback suggests AI-generated responses are frequently perceived as superficial, repetitive, or irrelevant to the conversation's context, producing widely varying assessments of their value.

Unlike standalone platforms such as ChatGPT and Google Gemini, which users access separately and by choice, Meta AI is embedded directly into WhatsApp, an application people use daily for both personal and professional communication. Because the feature arrived without explicit consent, and given the doubts about its usefulness, many users are questioning whether such pervasive AI assistance is necessary or desirable. 

A growing chorus of criticism also targets AI's inherent limitations in reliably interpreting human communication. Many users are sceptical that AI can accurately condense even a single message within an active group chat, let alone synthesise hundreds of exchanges. These challenges are not new: Apple faced a similar problem when it had to pull an AI-powered summarisation feature that produced unintended and sometimes inaccurate summaries. 

The problem of "hallucinations", where AI generates factually incorrect or contextually irrelevant content, remains persistent across nearly every generative platform, including widely used ones like ChatGPT. AI also continues to struggle with subtleties such as humour, sarcasm, and cultural nuance, aspects of natural conversation that are central to establishing connection. 

When the AI is not trained to recognise offhand or joking remarks, it can easily misinterpret them, producing summaries that are alarmist, distorted, or simply inaccurate where a human reader would not be misled. This risk of misrepresentation has made users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family increasingly apprehensive. 

Beyond the technical limitations, a philosophical objection has emerged: substituting machine-generated recaps for real engagement diminishes the act of participating in a conversation. Many feel the purpose of group chats lies precisely in reading and responding to the genuine voices of others. 

Critics acknowledge that scrolling through a large backlog of messages can be exhausting, but they worry that Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place. 

To protect user privacy, WhatsApp built Message Summaries on a new framework called Private Processing. The approach is designed to ensure that neither Meta nor WhatsApp can access the contents of conversations or the summaries the AI system produces. 

Rather than exposing message content to external servers, summaries are generated within this protected environment, reinforcing the platform's commitment to privacy. Each summary is presented in a clear bullet-point format and labelled "visible only to you", underscoring the privacy-centric design behind the feature. 

Message Summaries has proven especially useful in group chats, where the volume of unread messages is often overwhelming. The tool distils lengthy exchanges into concise snapshots, letting users stay informed without scrolling through every individual message. 

The feature is disabled by default and must be activated manually, which addresses some privacy concerns. Once enabled, eligible chats display a discreet icon signalling that a summary is available, without announcing it to other participants. At the core of the system is Meta's confidential computing infrastructure, comparable in principle to Apple's private cloud computing architecture. 

Private Processing is built on a Trusted Execution Environment (TEE), ensuring that confidential information is handled securely, with robust anti-tampering measures and clear transparency mechanisms in place.

The system's architecture is designed to shut down automatically, or to generate verifiable evidence of intrusion, whenever any attempt is made to compromise its security guarantees. Meta has also intentionally designed the framework to be stateless, forward secure, and resistant to targeted attacks, and it supports independent third-party audits so the company's data-protection claims can be verified. 

Complementing these technical safeguards, advanced chat privacy settings let users choose which conversations are eligible for AI-generated summaries, offering granular control over the feature. When a user enables summaries in a chat, no notification is sent to other participants, so the feature can be used discreetly.

Message Summaries is currently being rolled out gradually to users in the United States and is available only in English. Meta has confirmed that the feature will expand to additional regions and languages soon, as part of its broader effort to integrate artificial intelligence across its services. 

As WhatsApp embeds AI capabilities ever deeper into everyday communication, Message Summaries marks a pivotal moment in the evolving relationship between technology and human interaction. 

Even though the company has repeatedly reiterated its commitment to privacy, transparency, and user autonomy, the polarised response to the feature highlights the challenges of incorporating artificial intelligence into spaces where trust, nuance, and human connection are paramount. 

It is a timely reminder, for individuals and organisations alike, that the growth of convenience-driven automation affects the genuine social fabric of digital communities and deserves careful assessment. 

As platforms evolve, stakeholders would do well to stay alert to changes in platform policies, evaluate whether such tools align with the communication values they hold dear, and offer structured feedback so these technologies can mature responsibly. As artificial intelligence continues to redefine the contours of messaging, users will need to remain open to innovation while thinking critically about the long-term implications for privacy, comprehension, and the very nature of meaningful dialogue.

A Simple Guide to Launching GenAI Successfully

 


Generative AI (GenAI) is one of today's most exciting technologies, offering the potential to improve productivity, creativity, and customer service. But for many companies it becomes like a forgotten gym membership: enthusiastically started but quickly abandoned.

So how can businesses make sure they get real value from GenAI instead of falling into the trap of wasted effort? Success lies in four key steps: building a strong plan, choosing the right partners, launching responsibly, and tracking the impact.


1. Set Up a Strong Governance Framework

Before using GenAI, businesses must create clear rules and processes to use it safely and responsibly. This is called a governance framework. It helps prevent problems like privacy violations, data leaks, or misuse of AI tools.

This framework should be created by a group of leaders from different departments—legal, compliance, cybersecurity, data, and business operations. Since AI can affect many parts of a company, it’s important that leadership supports and oversees these efforts.

It’s also crucial to manage data properly. Many companies forget to prepare their data for AI tools. Data should be accurate, anonymous where needed, and well-organized to avoid security risks and legal trouble.

Risk management must be proactive. This includes reducing bias in AI systems, ensuring data security, staying within legal boundaries, and preventing misuse of intellectual property.


2. Choose Technology Partners Carefully

GenAI tools are not like regular software. When selecting a provider, businesses should look beyond basic features and check how the provider handles data, ownership rights, and ethical practices. A lack of transparency is a warning sign.

Companies should know where their data is stored, who can access it, and who owns the results produced by the AI tool. It’s also important to avoid being trapped in systems that make it difficult to switch vendors later. Always review agreements carefully, especially around copyright and data rights.


3. Launch With Care and Strategy

Once planning is complete, the focus should shift to thoughtful execution. Start with small projects that can demonstrate value quickly. Choose use cases where GenAI can clearly improve efficiency or outcomes.

Data used in GenAI must be organized and secured so that no sensitive information is exposed. Also, employees must be trained to work with these tools effectively. Skills like writing proper prompts and verifying AI-generated content are essential.

To build trust and encourage adoption, leaders should clearly explain why GenAI is being introduced and how it will help, rather than replace, employees. GenAI should support teams and improve their performance, not reduce jobs.


4. Track Success and Business Value

Finally, companies need to measure the results. Success isn't just about using the technology; it's about making a real impact.

Set clear goals and use simple metrics, like productivity improvements, customer feedback, or employee satisfaction. GenAI should lead to better outcomes for both teams and clients, not just technical performance.

To move beyond the GenAI buzz and unlock real value, companies must approach it with clear goals, responsible use, and long-term thinking. With the right foundation, GenAI can be more than just hype; it can be a lasting asset for innovation and growth.



North Korean Hackers Target Fintech and Gaming Firms with Fake Zoom Apps

 

A newly uncovered cyber campaign is targeting organizations across North America, Europe, and the Asia-Pacific by exploiting fake Zoom applications. Cybersecurity experts have traced the operation to BlueNoroff, a notorious North Korean state-backed hacking group affiliated with the Lazarus Group. The campaign’s primary focus is on the gaming, entertainment, and fintech sectors, aiming to infiltrate systems and steal cryptocurrency and other sensitive financial data. 

Attack strategy 

The attack begins with a seemingly innocuous AppleScript disguised as a routine maintenance operation for Zoom’s software development kit (SDK). However, hidden within the script—buried beneath roughly 10,000 blank lines—are malicious commands that quietly download malware from a counterfeit domain, zoom-tech[.]us. 
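One practical lesson from this padding trick is that it is easy to screen for. Below is a minimal Python sketch that flags scripts containing unusually long runs of blank lines; the ~10,000-line figure comes from the report, while the `threshold` value and the demo filenames are illustrative assumptions, not indicators published by researchers.

```python
import os
import tempfile

def has_suspicious_blank_padding(path, threshold=1000):
    """Flag scripts padded with very long runs of blank lines,
    the trick used in this campaign to bury malicious commands."""
    longest_run = current_run = 0
    with open(path, "r", errors="replace") as f:
        for line in f:
            if line.strip():
                current_run = 0
            else:
                current_run += 1
                longest_run = max(longest_run, current_run)
    return longest_run >= threshold

# Demo with a synthetic script: 2,000 blank lines between two commands.
fd, demo = tempfile.mkstemp(suffix=".scpt")
with os.fdopen(fd, "w") as f:
    f.write("-- routine SDK maintenance\n" + "\n" * 2000 + 'do shell script "curl ..."\n')

print(has_suspicious_blank_padding(demo))  # True
```

A check like this would not catch other obfuscation styles, but it illustrates why such crude padding is a cheap heuristic for mail-gateway or endpoint scanning.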

Once the malware is downloaded, it integrates itself into the system through LaunchDaemon, granting it persistent and privileged access at every system startup. This allows the malware to operate covertly without raising immediate alarms. The malicious software doesn’t stop there. It fetches additional payloads from compromised infrastructure, presenting them as legitimate macOS components like “icloud_helper” and “Wi-Fi Updater.” 

These files are designed with anti-forensics techniques to erase temporary files and conceal their activity, all while maintaining a hidden backdoor for remote control and data exfiltration. This deceptive approach is particularly dangerous in remote work environments, where minor software issues are often resolved without deep inspection—making it easier for such malware to slip past unnoticed. 

Motives behind the attack

BlueNoroff’s intent appears financially driven. The malware specifically searches for cryptocurrency wallet extensions, browser-stored login credentials, and authentication keys. In one known incident dated May 28, a Canadian online gambling platform fell victim to this scheme after its systems were compromised via a fraudulent Zoom troubleshooting script. 

Protection measures for organizations

Given the growing sophistication of such campaigns, security experts recommend several protective steps: 

• Independently verify Zoom participants to ensure authenticity. 

• Block suspicious domains like zoom-tech[.]us at the firewall level. 

• Deploy comprehensive endpoint protection that can detect hidden scripts and unauthorized daemons. 

• Invest in reliable antivirus and ransomware protection, especially for firms with cryptocurrency exposure. 

• Use identity theft monitoring services to detect compromised credentials early. 

• Train employees to recognize and respond to social engineering attempts. 

• Secure digital assets with hardware wallets instead of relying on software-based solutions alone.
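To make the daemon-auditing advice concrete, here is a minimal Python sketch that scans a directory of LaunchDaemon plists for references to the counterfeit domain named in the report. The scan directory is parameterised for illustration (on a real macOS host the location would be /Library/LaunchDaemons), and it checks only the single zoom-tech indicator; a genuine audit would match a fuller set of indicators of compromise.

```python
import pathlib
import tempfile

def find_plists_referencing(directory, marker="zoom-tech"):
    """Return names of .plist files in `directory` that mention the
    given domain fragment. A real audit would check more indicators."""
    hits = []
    for plist in pathlib.Path(directory).glob("*.plist"):
        if marker in plist.read_text(errors="replace"):
            hits.append(plist.name)
    return hits

# Demo against a throwaway directory rather than the live system path.
demo_dir = tempfile.mkdtemp()
(pathlib.Path(demo_dir) / "com.fake.icloud_helper.plist").write_text(
    "<string>https://zoom-tech.us/payload</string>"
)
(pathlib.Path(demo_dir) / "com.vendor.legit.plist").write_text("<dict/>")
print(find_plists_referencing(demo_dir))  # ['com.fake.icloud_helper.plist']
```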

Dire Wolf Gang Hits Tech and Manufacturing Sectors, Targets 11 Countries


New Group Dire Wolf Attacks

A new ransomware group known as "Dire Wolf", which emerged last month, has targeted 16 organizations worldwide, primarily in the manufacturing and technology sectors. The group uses a double-extortion technique and deploys custom encryptors built for particular targets. Trustwave SpiderLabs researchers recently obtained a Dire Wolf ransomware sample and analysed its operations. 

The targets span 11 countries, with Thailand and the US reporting the most incidents. At the time of writing, Dire Wolf had scheduled the leaked data of 5 of the 16 victims for publication on its website after they failed to pay ransoms. 

"During investigation, we observed that the threat actors initially publish sample data and a list of exfiltrated files, then give the victims around one month to pay before releasing all the stolen data," said Trustwave SpiderLabs. "The ransom demand from one of the victims was approximately $500,000," it added.

A deep dive into the incident

The experts studied a Dire Wolf ransomware sample that had been packed with UPX, a common technique attackers use to hide malware and hinder static analysis. 
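UPX leaves recognisable traces: packed executables typically carry section names like UPX0/UPX1 and the "UPX!" magic bytes near the header. The following Python sketch shows the kind of heuristic triage check an analyst might run; it is an illustrative assumption-based check, not a substitute for a real unpacker or AV engine, and the demo files are synthetic.

```python
import os
import tempfile

UPX_MARKERS = (b"UPX!", b"UPX0", b"UPX1")

def looks_upx_packed(path, scan_bytes=4096):
    """Heuristic: scan the first few KB of a binary for UPX's
    characteristic magic bytes and section names."""
    with open(path, "rb") as f:
        head = f.read(scan_bytes)
    return any(marker in head for marker in UPX_MARKERS)

# Demo with a synthetic file standing in for a real sample.
fd, packed = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"MZ" + b"\x00" * 100 + b"UPX0" + b"\x00" * 100 + b"UPX!")

print(looks_upx_packed(packed))  # True
```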

Upon unpacking, the experts discovered that the binary was written in Golang, a language that makes it harder for antivirus software to detect the malware. After execution, the ransomware checks whether the system has already been encrypted and whether the mutex "Global\direwolfAppMutex" is present, ensuring only a single instance runs at a time. If either condition is met, the ransomware deletes itself and halts execution.

If neither condition is met, the ransomware disables event logging and terminates specific processes that could interfere with encryption. One such function is designed to "continuously disable Windows system logging by terminating the 'eventlog' process … by executing a Powershell command," the experts said. It also stops applications and services and executes a series of Windows commands to disable system recovery options. 

How to stay safe

Dire Wolf is a reminder that new threat actors keep emerging even as infamous gangs such as LockBit and Ghost are disrupted. Organizations are advised to follow robust security measures, secure endpoints to block initial access, and patch system flaws to prevent exploitation.

California Residents Are Protesting Against Waymo Self-Driving Cars

 

Even though self-driving cars are becoming popular worldwide, not everyone is happy about it. In Santa Monica, California, some people who were unfortunate enough to live near the Waymo depot found a terrible side effect of Alphabet's self-driving cars: their incredibly annoying backing noise.

Local laws mandate that autonomous cars make a sound anytime they back up, something that frequently happens when Waymos return to base to recharge, as the Los Angeles Times recently reported. 

"It is bothersome. This neighbourhood already has a lot of noise, and I don't think it's fair to add another level to it," a local woman told KTLA. "I know some people have been kept up at night, and woken up in the middle of the night.” 

Using a technique pioneered by activist group Safe Street Rebel in 2023, when self-driving cars first showed up on San Francisco streets, Santa Monica citizens blocked Waymos with traffic cones, personal automobiles, and even their own bodies. 

Dubbed "coning," the seemingly petty tactic evolved after the California Public Utilities Commission decided 3-1 to allow Waymo and Cruise to operate self-driving vehicles in California neighbourhoods at all times. Prior to the vote, public comments lasted more than six hours, indicating that the proposal was not well received.

Safe Street Rebel conducted a period of direct action known as "The Week of the Cone" in protest of the plan to grant Cruise and Waymo complete access to public streets. Following the group's campaign, and after multiple mishaps involving autonomous vehicles, the California DMV revoked Cruise's licence to operate in the state. 

"This is a clear victory for direct action and the power of people getting in the street," Safe Street Rebel noted in a statement. "Our shenanigans made this an international story and forced a spotlight on the many issues with [self-driving cars].”

However, Waymo isn't going down without a fight back in Santa Monica. According to the LA Times, the company has sued several peaceful protesters and has even called the local police to drive out angry residents.

"My client engaged in justifiable protest, and Waymo attempted to obtain a restraining order against him, which was denied outright," stated Rebecca Wester, an attorney representing a local resident. 

The backup noise is only the most recent annoyance Waymo has imposed on its neighbours: nine months ago, residents of San Francisco reported horns blaring from a nearby Waymo depot whenever dozens of the cars congested its small parking lot.

Surmodics Hit by Cyberattack, Shuts Down IT Systems Amid Ongoing Investigation

 

Minnesota-headquartered Surmodics, a leading U.S. medical device manufacturer, experienced a cyberattack on June 5 that led to a partial shutdown of its IT infrastructure. The company, known for being the largest domestic supplier of outsourced hydrophilic coatings used in devices like intravascular catheters, detected unauthorized access within its network and immediately took several systems offline. During the disruption, it continued fulfilling orders and shipping products through alternative channels.

The incident was disclosed in a filing with the U.S. Securities and Exchange Commission (SEC), which noted that law enforcement has been informed. Surmodics joins Artivion and Masimo as the third publicly listed medical device company to report a cyberattack to the SEC in recent months.

With assistance from cybersecurity professionals, Surmodics has managed to restore essential IT operations, though a complete assessment of what data was compromised is still underway. Some systems remain in recovery.

“The Company remains subject to various risks due to the cyber Incident, including the adequacy of processes during the period of disruption of the Company's IT systems, diversion of management's attention, potential litigation, changes in customer behavior, and regulatory scrutiny,” said Timothy Arens, Chief Financial Officer of Surmodics, in the SEC filing.

The identity of the attackers remains unknown, and according to the company, no internal or third-party data has been leaked. Surmodics also confirmed it holds cyber insurance, which is expected to cover the bulk of the breach-related expenses.

The company has expressed concern about potential lawsuits stemming from the attack—a growing trend in the aftermath of corporate data breaches. Recent class actions have targeted firms like Coinbase and Krispy Kreme over compromised personal information.

Financially, Surmodics reported $28 million in revenue last quarter. It is currently involved in a legal dispute with the Federal Trade Commission (FTC), which is attempting to block a $627 million acquisition bid by a private equity firm. The FTC argues that the deal would merge the two largest players in the specialized medical coating industry, potentially reducing competition.

Fake Firefox Extensions Mimic Crypto Wallets to Steal Seed Phrases

 

Over 40 deceptive browser extensions available on Mozilla Firefox’s official add-ons platform are posing as trusted cryptocurrency wallets to steal user data, according to security researchers. These malicious add-ons are camouflaged as popular wallet brands such as MetaMask, Coinbase, Trust Wallet, Phantom, Exodus, MyMonero, OKX, and Keplr. 

Behind their familiar logos and fake five-star reviews lies code designed to exfiltrate wallet credentials and seed phrases to servers controlled by attackers. Cybersecurity firm Koi Security, which discovered this threat campaign, suspects a Russian-speaking hacking group is responsible. In a report shared with BleepingComputer, the firm revealed that the fraudulent extensions were modified versions of legitimate open-source wallets, altered to include stealthy monitoring code. 

These extensions monitor browser input for strings that resemble wallet keys or recovery phrases — often identified by their length and character patterns. Once such sensitive input is detected, the information is covertly sent to attackers. To avoid suspicion, the extensions suppress error messages or alerts by rendering them invisible. The most critical data targeted are seed phrases — multi-word recovery codes that serve as master keys for crypto wallets. Anyone with access to a seed phrase can irreversibly drain all assets from a user’s wallet. 
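The report says the injected code identifies wallet secrets by their length and character patterns. As a rough illustration of what such a pattern check involves (equally useful for defenders writing detections), here is a Python sketch; the exact rules the attackers use are not public, so the 12/24-word phrase shape and the 64-hex-character key pattern below are assumptions based on common wallet formats.

```python
import re

def resembles_wallet_secret(text):
    """Heuristic: flag input shaped like a BIP-39 recovery phrase
    (12 or 24 short lowercase words) or a 64-hex-char private key."""
    stripped = text.strip()
    # Raw private keys are commonly 64 hex characters, optionally 0x-prefixed.
    if re.fullmatch(r"(0x)?[0-9a-fA-F]{64}", stripped):
        return True
    words = stripped.lower().split()
    if len(words) not in (12, 24):
        return False
    # BIP-39 wordlist entries are 3-8 lowercase letters.
    return all(re.fullmatch(r"[a-z]{3,8}", w) for w in words)

print(resembles_wallet_secret(
    "abandon ability able about above absent absorb abstract absurd abuse access accident"
))  # True
print(resembles_wallet_secret("hey, are we still on for lunch tomorrow at noon?"))  # False
```

A heuristic this simple will false-positive on ordinary twelve-word sentences, which is exactly why the malicious extensions exfiltrate candidates rather than verifying them locally.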

The campaign has reportedly been active since at least April 2025, and new malicious add-ons continue to appear. Some were added as recently as last week. Despite Mozilla’s efforts to flag and remove such add-ons, Koi Security noted that many remained live even after being reported through official channels. The fake extensions often feature hundreds of fraudulent five-star reviews to build trust, although some also have one-star ratings from victims warning of theft. 

In many cases, the number of reviews far exceeds the number of downloads — a red flag missed by unsuspecting users. Mozilla responded by confirming that it is aware of ongoing threats targeting its add-ons ecosystem and has already removed many malicious listings. The organization has implemented a detection system that uses automated tools to flag suspicious behavior, followed by manual review when necessary.

In a statement to BleepingComputer, Mozilla emphasized its commitment to user safety and stated that additional measures are being taken to improve its defense mechanisms. As fake wallet extensions continue to circulate, users are urged to verify the authenticity of browser add-ons, rely on official websites for downloads, and avoid entering recovery phrases into any untrusted source.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


ChatGPT has become a transformative artificial intelligence tool, widely adopted by individuals and businesses seeking to improve their operations. Developed by OpenAI, the platform has proven effective at drafting compelling emails, developing creative content, and conducting complex data analysis, streamlining a wide range of workflows. 

OpenAI continues to enhance ChatGPT through new integrations and advanced features that ease its adoption into daily organisational workflows; however, understanding the platform's pricing models is vital for any organisation that aims to use it efficiently. For UK businesses and entrepreneurs weighing ChatGPT's subscription options, managing international payments can add a further challenge, especially when exchange rates fluctuate or conversion fees are hidden.

In this context, the Wise Business multi-currency card offers a practical way to maintain financial control and cost transparency. The card lets companies hold and spend in more than 40 currencies, enabling them to settle subscription payments without excessive currency conversion charges and making it easier to manage budgets while adopting new technology. 

OpenAI recently introduced a suite of premium features aimed at enhancing the ChatGPT experience for subscribers. Paid users now have access to advanced reasoning models, including o1 and o3, which support more sophisticated analysis and problem-solving. 

The subscription includes more than enhanced reasoning: an upgraded voice mode makes conversational interactions more natural, and improved memory allows the AI to retain context over longer periods. A powerful coding assistant has also been added to help developers automate workflows and speed up software development. 

To expand creative possibilities further, OpenAI has raised token limits, allowing larger inputs and outputs and letting users generate more images without interruption. Subscribers also benefit from expedited image generation via a priority queue, giving faster turnaround during high-demand periods. 

Paid accounts also retain full access to the latest models with consistent performance, as they are not forced onto less advanced models when server capacity is strained, a limitation free users may still face. While OpenAI has invested heavily in the paid tier, free users have not been left out: GPT-4o has replaced the older GPT-4 model, giving complimentary accounts more capable technology without a downgrade. 

Free users also have access to basic image tools, though without the priority that paid accounts enjoy in generation queues. Reflecting its commitment to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge. 

ChatGPT's free version therefore remains a compelling option for people who use the software only sporadically, perhaps to write the occasional email, do some light research, or create simple images. Individuals or organisations that frequently run into usage limits, such as waiting for tokens to reset, may find upgrading to a paid plan well worthwhile, as it unlocks uninterrupted access and advanced capabilities. 

To make ChatGPT a more versatile and deeply integrated virtual assistant, OpenAI has introduced a new feature called Connectors. It enables ChatGPT to interface seamlessly with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from them in real time while responding to user queries. 

With Connectors, the company moves toward a more personal and contextually relevant experience for its users. Ahead of a family vacation, for example, a user can instruct ChatGPT to scan their Gmail account and compile all correspondence about the trip, streamlining travel planning instead of combing through emails manually. 

This level of integration brings ChatGPT closer to rivals such as Gemini, which benefits from Google's ownership of popular services like Gmail and Calendar. With Connectors, individuals and businesses can redefine how they engage with AI tools. By giving ChatGPT secure access to personal or organisational data residing across multiple services, OpenAI aims to create a comprehensive digital assistant that anticipates needs, surfaces critical insights, and streamlines decision-making. 

As demand grows for highly customised, intelligent assistance, other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity: an artificial intelligence capable of understanding, organising, and acting upon every aspect of a user's digital life. 

For all its convenience and efficiency, this approach also underlines the need to keep personal information protected, with robust data security and transparency, if users are to take advantage of these powerful integrations as they become mainstream. On its official X (formerly Twitter) account, OpenAI recently announced Connectors for Google Drive, Dropbox, SharePoint, and Box, available in ChatGPT outside the Deep Research environment. 

With this expansion, users can link their cloud storage accounts directly to ChatGPT, enabling the AI to retrieve and process their personal and professional data when composing responses. OpenAI described the functionality as "perfect for adding your own context to your ChatGPT during your daily work," highlighting the company's ambition to make ChatGPT more intelligent and contextually aware. 

It is important to note, however, that access to these newly released Connectors is subject to subscription and geographical restrictions. They are currently exclusive to ChatGPT Pro subscribers, whose plan costs $200 per month, and are available worldwide except in the European Economic Area (EEA), Switzerland, and the United Kingdom. Users on lower tiers, such as ChatGPT Plus subscribers paying $20 per month, or those living in Europe, cannot use these integrations at this time. 

Such a staggered rollout typically reflects the broader challenges of regulatory compliance in the EU, where stricter data protection rules and artificial intelligence governance frameworks often delay availability. Outside Deep Research, the range of Connectors remains relatively limited; Deep Research itself continues to offer far more extensive integration support. 

Within Deep Research, ChatGPT Plus and Pro users can draw on a much broader array of integrations, for example Outlook, Teams, Gmail, Google Drive, and Linear, though regional restrictions still apply. Organisations on Team, Enterprise, or Education plans gain additional Deep Research connectors, including SharePoint, Dropbox, and Box. 

Additionally, OpenAI is now offering the Model Context Protocol (MCP), a framework that allows workspace administrators to build customised Connectors for their own needs. By integrating ChatGPT with proprietary data systems, organisations can create secure, tailored integrations, enabling highly specialised use cases for internal workflows and knowledge management. 
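To make the idea concrete, the sketch below shows the request and response shapes a minimal MCP-style tool server might handle, following MCP's JSON-RPC conventions (`tools/list` and `tools/call`). The `search_wiki` tool, its schema, and the in-memory data are hypothetical stand-ins for a proprietary knowledge base, not part of any real deployment.

```python
# Hypothetical internal data the connector exposes; a real deployment
# would query a proprietary knowledge base or document store.
FAKE_WIKI = {"vpn": "See IT handbook, section 4: VPN setup steps."}

# Tool catalogue returned by tools/list (names and schema are illustrative).
TOOLS = [{
    "name": "search_wiki",
    "description": "Search the internal wiki",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]

def handle_request(req: dict) -> dict:
    """Answer the two JSON-RPC methods a minimal MCP tool server needs."""
    if req["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif req["method"] == "tools/call":
        query = req["params"]["arguments"]["query"].lower()
        text = FAKE_WIKI.get(query, "No results found.")
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}
```

In practice the server would speak this protocol over stdio or HTTP; the point here is only that a custom Connector boils down to advertising tools and answering calls against internal data.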

As companies adopt artificial intelligence more widely, the catalogue of Connectors is expected to expand rapidly, giving users ever more options for incorporating external data sources into their conversations. The dynamism of this market also underscores the advantage held by technology giants like Google, whose AI assistants, such as Gemini, integrate seamlessly across all of their services, including search. 

OpenAI's strategy, by contrast, relies heavily on building a network of third-party integrations to deliver a similar assistant experience. The new Connectors are now generally accessible in the ChatGPT interface, although users may need to refresh their browsers or update the app to activate them. 

As adoption widens, the continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. Organisations and professionals evaluating ChatGPT should take a strategic approach as generative AI capabilities mature, weighing the advantages and drawbacks of deeper integration against the operational needs, budget limitations, and regulatory considerations that will shape their decisions.

With the introduction of Connectors and the advanced subscription tiers, the trajectory clearly points toward more personalised and dynamic AI assistance that can ingest and contextualise diverse data sources. That evolution also makes it increasingly important to establish strong data governance frameworks, clear access controls, and adherence to privacy regulations.

Companies that invest early in these capabilities, and that set clear policies balancing innovation with accountability, will be better positioned to harness AI's efficiencies and stay competitive in an increasingly automated landscape. Looking ahead, the organisations actively developing internal expertise, testing carefully selected integrations, and cultivating a culture of responsible AI use will be best prepared to realise the full potential of artificial intelligence and maintain a competitive edge for years to come.

Encryption Drops While Extortion-Only Attacks Surge

 

Ransomware remains a persistent threat to organisations worldwide, but new findings suggest cybercriminals are shifting their methods. According to the latest report by Sophos, only half of ransomware attacks involved data encryption this year, a sharp decline from 70 per cent in 2023.  
The report suggests that improved cybersecurity measures may be helping organisations stop attacks before ransomware payloads are deployed. However, larger organisations with 3,001 to 5,000 employees still reported encryption in 65 per cent of attacks, possibly due to the challenges of monitoring vast IT infrastructures. 

As encryption-based tactics decrease, attackers are increasingly relying on extortion-only methods. These attacks, which involve threats to release stolen data without encrypting systems, have doubled to 6 per cent this year. Smaller businesses were disproportionately affected: 13 per cent of firms with 100 to 250 employees reported facing such attacks, compared to just 3 per cent among larger enterprises. 

While Sophos highlighted software vulnerabilities as the most common entry point for attackers, this finding contrasts with other industry data. Allan Liska, a ransomware expert at Recorded Future, said leaked or stolen credentials remain the most frequently reported initial attack vector. Sophos, however, reported a drop in attacks starting with credential compromise, from 29 per cent last year to 23 per cent in 2024, suggesting variations in data visibility between firms. 

The report also underscored the human cost of cyberattacks. About 41 per cent of IT and security professionals said they experienced increased stress or anxiety after handling a ransomware incident. Liska noted that while emotional tolls are predictable, they are often overlooked in incident response planning.

Ahold Delhaize Reports Major Data Breach Affecting Over 2 Million Employees in the U.S.

 


One of the world’s largest grocery retail groups has confirmed a major cyber incident that compromised sensitive information belonging to more than 2.2 million individuals across its U.S. operations.

The company, known for running supermarket chains like Food Lion, Giant Food, and Stop & Shop, revealed that a ransomware attack last November led to unauthorized access to internal systems. This breach primarily exposed employment-related data of current and former workers, according to a recent report filed with the Maine Attorney General’s office.


What Information Was Exposed?

While not everyone affected had the same type of data compromised, the company stated that hackers may have accessed a combination of the following:

• Full names and contact details

• Birth dates

• Government-issued ID numbers

• Bank account details

• Health and workers’ compensation records

• Job-related documents


The breach does not appear to involve customer information, according to the company’s internal review. In Maine alone, over 95,000 individuals were impacted, triggering formal notification procedures as required by law.


Company’s Response and Next Steps

Following the discovery of the breach on November 6, 2024, Ahold Delhaize immediately launched an investigation and worked to contain the attack. Temporary service disruptions were reported, including issues with pharmacies and delivery services.

To assist those affected, the company is offering two years of free credit and identity monitoring through a third-party provider. It has also engaged external cybersecurity experts to further review and enhance its systems.


Ransomware Group Possibly Involved

Although Ahold Delhaize has not officially identified the group behind the attack, a ransomware operation known as INC Ransom reportedly claimed responsibility earlier this year. Files believed to be taken from the company were published on the group’s leak site in April.

Cybersecurity professionals say the exposed information could be used for identity theft and financial fraud. Experts have advised affected individuals to monitor their credit reports and, where possible, lock their credit files as a precautionary measure.


A Growing Concern for the Sector

Cyberattacks on retail and food service companies are becoming more frequent and severe. According to researchers, this incident stands out due to the unusually high number of records affected. The average breach in this sector usually involves far fewer data points.

Security specialists say such events highlight the urgent need for stronger protection strategies, including multi-factor authentication, network segmentation, and stealth technologies that reduce exposure to cyber threats.


Ahold Delhaize at a Glance

Headquartered in the Netherlands and Belgium, Ahold Delhaize operates more than 9,400 stores worldwide and serves roughly 60 million customers each week. In 2024, the company recorded over $100 billion in global sales.

As the investigation continues, the company has pledged to strengthen its data safeguards and remain vigilant against future threats.

Here's Why Businesses Need to be Wary of Document-Borne Malware

 

Cybersecurity experts are constantly on the lookout for novel attack tactics as criminal groups adapt to improved defences against ransomware and phishing. Alongside the latest developments, however, some traditional strategies seem to be resurfacing, or rather, they never really went away. 

Document-borne malware is one such strategy. Once believed to be a relic of early cyber warfare, this tactic remains a significant threat, especially for organisations that handle huge volumes of sensitive data, such as those in critical infrastructure.

The lure for perpetrators is evident. Routine files, including Word documents, PDFs, and Excel spreadsheets, are intrinsically trusted and freely exchanged between enterprises, often via cloud-based systems. With modern security measures focussing on endpoints, networks, and email filtering, seemingly innocuous files can serve as the ideal Trojan horse. 

Reasons behind malicious actors using document-borne malware 

Attacks using malicious documents may seem like a relic. The strategy is decades old, but that does not make it any less damaging for organisations. And while the concept is not novel, threat groups are modernising it to keep it fresh and bypass conventional safety procedures. The seemingly outdated method therefore remains a threat even in the most security-conscious sectors.

As with other email-based techniques, attackers often prefer to hide in plain sight. The majority of attacks use standard file types like PDFs, Word documents, and Excel spreadsheets to carry malware. Malware is typically concealed in macros, encoded in scripts like JavaScript within PDFs, or hidden behind obfuscated file formats and layers of encryption and archiving. 
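As a rough illustration of the static checks a defender might run, the sketch below flags common risk markers: script-related tokens in a PDF's raw bytes and a VBA macro part inside a modern Office file (which is a ZIP container). The token list and verdicts are illustrative heuristics only, not a substitute for a scanner that actually parses these formats.

```python
import zipfile
from pathlib import Path

# Byte tokens whose presence in a PDF often indicates active content.
PDF_RISK_TOKENS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch", b"/EmbeddedFile"]

def triage_document(path: str) -> list[str]:
    """Heuristic static triage of a document; returns a list of findings."""
    findings = []
    p = Path(path)
    data = p.read_bytes()
    if data.startswith(b"%PDF"):
        # Raw substring search: crude, but catches unobfuscated cases.
        findings += [f"pdf token {t.decode()}" for t in PDF_RISK_TOKENS if t in data]
    elif zipfile.is_zipfile(p):  # .docx/.docm/.xlsm are ZIP containers
        with zipfile.ZipFile(p) as z:
            if any(n.endswith("vbaProject.bin") for n in z.namelist()):
                findings.append("embedded VBA macro project")
    return findings
```

Real-world samples defeat naive checks like these with encryption, archiving, and obfuscated object streams, which is exactly why the layered defences discussed below are needed.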

These unassuming files are used with common social engineering approaches, such as a supplier invoice or user submission form. Spoofed addresses or hacked accounts are examples of email attack strategies that help mask malicious content. 

Organisations' challenges in defending against these threats 

Security analysts say that document security is frequently neglected in favour of other domains, such as endpoint protection and the network perimeter. Document-borne attacks are commonplace enough to be overlooked, yet sophisticated enough to evade most common security measures.

There is an overreliance on signature-based antivirus solutions, which frequently fail to detect new document-borne threats. While security teams are often aware of harmful macros, formats such as ActiveX controls, OLE objects, and embedded JavaScript may be overlooked. 

Attackers have also discovered that there is a considerable mental blind spot when it comes to documents that appear to have been supplied via conventional cloud-based routes. Even when staff have received phishing awareness training, there is a propensity to instinctively believe a document that arrives from an expected source, such as Google or Office 365.

Mitigation tips 

As with other evolving cyberattack strategies, a multi-layered strategy is essential to defending against document-borne threats. One critical step is to use a multi-engine strategy to malware scanning. While threat actors may be able to deceive one detection engine, using numerous technologies increases the likelihood of detecting concealed malware and minimises false negatives. 
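As a sketch of the multi-engine idea, the snippet below fans a file's bytes out to several detection engines and flags the file if any one of them fires. The two engines shown are trivial hypothetical stand-ins for real scanners.

```python
from typing import Callable

# Hypothetical engines; in practice each would wrap a separate
# commercial or open-source scanner with its own detection logic.
def signature_engine(data: bytes) -> bool:
    return b"EICAR" in data       # trivial stand-in signature match

def heuristic_engine(data: bytes) -> bool:
    return b"AutoOpen" in data    # e.g. an auto-executing macro name

ENGINES: list[Callable[[bytes], bool]] = [signature_engine, heuristic_engine]

def multi_engine_verdict(data: bytes) -> dict:
    """Flag a file if ANY engine detects it: fewer false negatives,
    at the cost of more false positives than a single engine."""
    hits = [engine.__name__ for engine in ENGINES if engine(data)]
    return {"malicious": bool(hits), "detected_by": hits}
```

The design choice is deliberate: where one engine's signatures or heuristics miss a sample, another's may not, so an any-engine-fires policy biases the system toward catching concealed malware.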

Content Disarm and Reconstruction (CDR) tools are also critical. These sanitise files by removing malicious macros, scripts, and active content while keeping the usable document intact. Suspect files can then be run through advanced sandboxes, where the malicious behaviour of previously unknown threats can be observed in a controlled environment. 
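A single CDR-style step can be sketched as rebuilding an Office file's ZIP container without its macro parts. This is a deliberately minimal illustration; real CDR products also rewrite PDFs, images, and embedded OLE objects rather than just dropping one entry.

```python
import zipfile

def strip_macros(src: str, dst: str) -> list[str]:
    """Copy an Office ZIP container to dst, omitting VBA macro parts.
    Returns the names of the entries that were removed."""
    removed = []
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename.endswith("vbaProject.bin"):
                removed.append(item.filename)  # drop the macro project
                continue
            zout.writestr(item, zin.read(item.filename))
    return removed
```

The document's text, styles, and images survive untouched because every other ZIP entry is copied verbatim; only the active content is disarmed.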

The network should also be configured with strict file rules, such as limiting high-risk file categories and requiring user authentication before document uploads. Setting file size restrictions can also help detect malicious documents that have been inflated by hidden code. Efficiency and dependability matter here too: organisations must be able to catch fraudulent documents in their regular incoming traffic while maintaining a fast, consistent workflow for customers.
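Those upload rules can be expressed as a simple gate that runs before any deeper scanning. The extension allowlist and the 20 MB cap below are hypothetical example policies, not recommended values.

```python
import os

ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx"}  # example policy only
MAX_BYTES = 20 * 1024 * 1024                     # example 20 MB cap

def validate_upload(filename: str, size: int, authenticated: bool) -> tuple[bool, str]:
    """Enforce upload rules in order: authentication, file type, size."""
    if not authenticated:
        return False, "authentication required"
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False, f"file type {ext or '(none)'} not permitted"
    if size > MAX_BYTES:
        return False, "file exceeds size limit"
    return True, "accepted"
```

Rejecting cheaply at this gate keeps the heavier multi-engine scanning and sandboxing stages focused on files that at least conform to policy, preserving throughput for legitimate traffic.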