
Unauthorized Use of AI Tools by Employees Exposes Sensitive Corporate Data


 

Artificial intelligence has rapidly reshaped the modern workplace, creating unprecedented opportunities while presenting complex challenges. Although AI was initially conceived as a productivity aid, it has quickly evolved into a transformational force that is changing how employees think, work, and communicate. 

Despite this rapid rise, many organisations remain ill-prepared to deal with the unchecked use of artificial intelligence. With the advent of generative AI, which can produce text, images, video, and audio, employees have increasingly adopted it for drafting emails, preparing reports, analysing data, and producing creative content. 

Advanced language models, trained on vast datasets, can mimic human language with remarkable fluency, enabling workers to complete in minutes tasks that once took hours. According to some surveys, a majority of American employees rely on AI tools that are freely accessible with little more than an email address, often without formal approval or oversight. 

Platforms such as ChatGPT, which require nothing more than an email address to sign up, exemplify this fast-growing trend. Yet the widespread use of unregulated AI tools raises serious concerns about privacy, data protection, and corporate governance, concerns employers must address with clear policies, robust safeguards, and a better understanding of the evolving digital landscape before they become reality. 

Cybernews has recently documented the surge of unapproved AI use in the workplace. Even as digital risks mount, a staggering 75 per cent of employees who use so-called “shadow AI” tools admit to having shared sensitive or confidential information through them, information that could easily compromise their organisations.

What is more troubling is that the trend is not restricted to junior staff; it is led from the top. Roughly 93 per cent of executives and senior managers admit to using unauthorised AI tools, making them the most frequent users, followed by managers at 73 per cent and professionals at 62 per cent. 

In other words, unauthorised AI use is not an isolated lapse but a systemic problem. Employee records, customer information, internal documents, financial and legal records, and proprietary code are among the categories of sensitive information most commonly exposed, and each has the potential to become a major vulnerability. 

The behaviour persists even though nearly nine out of ten workers admit that using AI carries significant risks. Some 64 per cent of respondents recognise that unapproved AI tools could lead to data leaks, and more than half say they would stop using those tools if a leak occurred, yet proactive measures remain rare. The result is a growing disconnect between awareness and action in corporate data governance, one that could have profound consequences if left unaddressed. 

The survey also reveals a striking paradox within corporate hierarchies: although senior management is typically responsible for setting data governance standards, it is also the most frequent violator of them. With 93 per cent of executives and senior managers using unapproved AI tools, they outpace every other job level by a wide margin.

Managers and team leaders, who are responsible for ensuring compliance and modelling best practices, also show significantly higher engagement with unauthorised platforms. Researchers suggest this pattern reflects a worrying disconnect between policy enforcement and actual behaviour, one that erodes accountability from the top down. Žilvinas Girėnas, head of product at Nexos.ai, warns that the implications of such unchecked behaviour extend far beyond simple misuse. 

Once sensitive data is pasted into an unapproved AI tool, it is impossible to know where it will end up. "It might be stored, used to train another model, exposed in logs, or even sold to third parties," he explained, adding that confidential contracts, customer details, or internal records can quietly slip into external systems without detection.

A study by IBM underscores the seriousness of the issue, estimating that shadow AI adds an average of $670,000 to the cost of a data breach, an expense few companies can absorb. Even so, the Cybernews study found that almost one in four employers has no formal policy governing AI use in the workplace. 

Experts believe awareness alone will not be enough to head off these risks. As Sabeckis noted, “It would be a shame if the only way to stop employees from using unapproved AI tools was through the hard lesson of a data breach.” For many companies, even a single breach can be catastrophic. Girėnas echoed this sentiment, emphasising that shadow AI “thrives in silence” when leadership fails to act decisively. 

He warned that without clear guidelines and sanctioned alternatives, employees will continue to rely on whatever tools seem convenient, turning efficiency shortcuts into potential security breaches. Experts emphasise that, beyond technical safeguards, organisations must adopt comprehensive internal governance strategies to mitigate the growing risks of unregulated AI use. 

A well-structured AI framework begins with a formal AI policy. This policy should clearly state acceptable uses of AI, prohibit the unauthorised download of free AI tools, and restrict the sharing of personal, proprietary, and confidential information through these platforms. 

Businesses are also advised to revise and update existing IT, network security, and procurement policies to keep pace with the rapidly changing AI environment. Proactive employee engagement remains equally crucial: training programs can give workers the knowledge and skills to understand potential risks, identify sensitive information, and follow best practices for safe, responsible use of AI. 

Equally essential is a robust data classification strategy that enables employees to recognise and properly handle confidential or sensitive information before interacting with AI systems. 

Organisations may also benefit from formal authorisation processes that limit AI tool access to qualified personnel, along with documentation protocols that record inputs and outputs so compliance and intellectual property issues can be tracked. Periodic reviews of AI-generated content for bias, accuracy, and appropriateness further safeguard brand reputation. 
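
To make the idea concrete, the following is a minimal Python sketch of the kind of pre-submission gate such a policy might call for: it screens a prompt against a few illustrative sensitive-data patterns and records inputs and outputs to an audit log before anything reaches an external model. The pattern list, log location, and blocking rule are assumptions chosen for illustration, not a prescribed standard; a real deployment would mirror the organisation's own classification scheme.

```python
import json
import re
import time
from pathlib import Path

# Illustrative patterns only; a real gate would follow the organisation's
# own data classification policy.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "classification keyword": re.compile(r"\b(confidential|proprietary|internal only)\b", re.I),
}

AUDIT_LOG = Path("ai_prompt_audit.jsonl")  # assumed location for the audit trail


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]


def submit_to_ai(prompt: str, send_fn) -> str | None:
    """Block prompts that trip the screen; otherwise send them and log the exchange."""
    findings = screen_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return None
    response = send_fn(prompt)  # send_fn wraps whichever approved AI service is in use
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps({"ts": time.time(), "prompt": prompt, "response": response}) + "\n")
    return response


if __name__ == "__main__":
    # Stand-in for an approved model endpoint.
    echo_model = lambda text: f"[model reply to {len(text)} characters of input]"
    submit_to_ai("Summarise our internal only revenue forecast", echo_model)    # blocked
    submit_to_ai("Draft a polite reminder about the team meeting", echo_model)  # sent and logged
```

Even a simple gate like this provides the audit trail described above while keeping the decision to block or allow in the organisation's hands.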

Continuous monitoring of AI tools, including reviews of their evolving terms of service, also helps ensure ongoing compliance with company standards. Finally, a clearly defined incident response plan, with designated points of contact for potential data exposure or misuse, allows organisations to respond quickly to any AI-related incident. 

Taken together, these measures represent a significant step toward structured, responsible AI adoption that balances innovation with accountability. Although internal governance is the cornerstone of responsible AI use, external partnerships and vendor relationships are equally important in protecting organisational data. 

According to experts, organisation leaders need to be vigilant not just about internal compliance but also about third-party contracts and data processing agreements. Data privacy, retention, and usage provisions should be explicitly included in any agreement with an external AI provider, protecting confidential information from being exploited or stored in ways that fall outside its intended use.

Business leaders, particularly CEOs and senior executives, must examine vendor agreements carefully to ensure they align with data protection frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). By incorporating these safeguards into contract terms, organisations can ensure that sensitive data is handled with the same rigour as their internal privacy standards, improving their overall security posture. 

As artificial intelligence continues to redefine the limits of workplace efficiency, its responsible integration has become central to organisational trust and resilience. Making AI work effectively in business requires not only innovation but also mature governance frameworks to accompany its use. 

Companies that take a proactive approach, enforcing clear internal policies, establishing transparency with vendors, and cultivating a culture of accountability, will gain more than security alone. They will also gain credibility with clients, employees, and regulators. 

In addition to ensuring compliance, responsible AI adoption can improve operational efficiency, increase employee confidence, and strengthen brand loyalty in an increasingly data-conscious market. According to experts, artificial intelligence should not be viewed merely as a risk to be controlled, but as a powerful tool to be harnessed under strong ethical and strategic guidelines. 

In today's business climate, where every prompt and every dataset can create a vulnerability, the organisations that thrive will be those that pair technological ambition with disciplined governance, transforming AI from a source of uncertainty into a sustainable, secure engine of innovation.

North Korean Threat Actors Leverage ChatGPT in Deepfake Identity Scheme


North Korean hacking group Kimsuky has used ChatGPT to create convincing deepfake South Korean military identification cards, a troubling instance of how artificial intelligence can be weaponised in state-backed cyber operations. 

As part of its cyber-espionage campaign, the group embedded the falsified documents in phishing emails targeting defence institutions and individuals, lending an extra layer of credibility to its operations. 

The AI-generated IDs made the attacks, which aimed to deceive recipients, deliver malicious software, and exfiltrate sensitive data, considerably more effective. Security monitors have categorised the incident as an AI-related hazard, noting that the misuse of ChatGPT directly contributed to the breach of confidential information and the violation of personal rights. 

The case highlights growing concern about the use of generative AI in sophisticated state-sponsored operations. By combining deepfake technology with phishing tactics, these attacks become harder to detect and far more damaging. 

Palo Alto Networks' Unit 42 has observed a disturbing increase in the use of real-time deepfakes in job interviews, with candidates disguising their true identities from potential employers. In the researchers' view, the tactic is alarmingly accessible: a convincing deepfake can be produced in a matter of hours, with minimal technical know-how and inexpensive consumer-grade hardware. 

The investigation was prompted by a report in the Pragmatic Engineer newsletter describing how a Polish artificial intelligence company nearly hired two fake applicants before suspecting that both candidates were deepfake personas controlled by the same individual. 

Unit 42’s analysis concludes that these practices are a logical progression of a long-standing North Korean scheme in which IT operatives infiltrate organisations under false pretences, a strategy well documented in previous cyber threat reports. 

Kimsuky, which operates under the direction of the North Korean state, has for years been repeatedly accused of espionage operations against South Korean targets. A 2020 advisory from the U.S. Department of Homeland Security suggested the group is likely tasked with gathering global intelligence on Pyongyang's behalf. 

Recent research from South Korean security firm Genians illustrates how artificial intelligence is increasingly woven into such operations. A report published in July described North Korean actors manipulating ChatGPT to create fake ID cards, and further experiments showed that simple prompt adjustments were enough to override the platform's built-in limitations. 

The incident follows a familiar pattern: Anthropic disclosed in August that its Claude Code tool had been misused by North Korean operatives to create sophisticated fake personas, pass coding assessments, and secure remote positions at multinational companies. 

In February, OpenAI confirmed that it had suspended accounts tied to North Korea for generating fraudulent resumes, cover letters, and social media content intended to assist with recruitment efforts. These activities, according to Genians director Mun Chong-hyun, highlight AI's growing role at every stage of cyber operations, from creating attack scenarios and developing malware to impersonating recruiters and targets. 

In the latest campaign, phishing emails impersonating an official South Korean military account (.mil.kr) sought to compromise journalists, researchers, and human rights activists. To date, it remains unclear how extensive the breach was or to what extent it was contained. 

The United States officially asserts that such cyber activity, alongside cryptocurrency theft and IT contracting schemes, forms part of a broader North Korean strategy to gather intelligence and generate revenue that circumvents sanctions and funds the country's nuclear weapons program. 

Kimsuky, also known as APT43, the state-backed unit suspected of carrying out the July campaign, has already been sanctioned by Washington and its allies for its role in advancing Pyongyang's foreign policy and evading sanctions.

Researchers at Genians reported that the group used ChatGPT to create sample government and military identification cards, which were then incorporated into phishing emails disguised as official correspondence from a South Korean defence agency responsible for managing ID services. 

Along with the fraudulent ID cards, the messages delivered malware designed to steal data and grant remote access to compromised systems. Analysis confirmed that the counterfeit IDs were created with ChatGPT despite the tool's safeguards against replicating government documents; the attackers sidestepped those safeguards by framing their prompts as requests for mock-up designs. 

Kimsuky's introduction of deepfake technology into its operations marks a significant escalation, because generative AI dramatically lowers the barrier to producing convincing forgeries. 

It is known that Kimsuky has been active since at least 2012, with a focus on government officials, academics, think tanks, journalists, and activists in South Korea, Japan, the United States, Europe, and Russia, as well as those affected by North Korea's policy and human rights issues. 

Research shows the regime relies heavily on artificial intelligence to create fake resumes and online personas, enabling North Korean IT operatives to secure overseas employment and perform technical tasks once embedded. These operatives use a range of deceptive practices to obscure their origins and evade detection, including AI-powered identity fabrication and collaboration with foreign intermediaries. 

The South Korean foreign ministry has endorsed that assessment. The growing use of generative AI in cyber-espionage poses a major challenge for global cybersecurity frameworks: defending people against threats that exploit trust as much as technical sophistication. 

Although platforms like ChatGPT and other large language models have guardrails in place, experts warn that adversaries will keep probing for weaknesses, adapting their tactics through prompt manipulation, social engineering, and deepfake augmentation. 

Kimsuky illustrates how the convergence of artificial intelligence and cybercrime erodes traditional detection methods, as counterfeit identities, forged credentials, and fabricated personas blur the line between legitimate interaction and malicious deception. 

Security experts are urging a multi-layered response that combines AI-driven detection tools, robust digital identity verification, cross-border intelligence sharing, and greater awareness within targeted sectors such as defence, academia, and human rights organisations. 

Collaboration between governments and private enterprises in developing AI technologies will be critical to ensuring they are harnessed responsibly while minimising misuse. The campaign makes clear that as adversaries use artificial intelligence to sharpen their attacks, defenders must adapt just as quickly to preserve trust, privacy, and global security.

Cybersecurity Landscape Shaken as Ransomware Activity Nearly Triples in 2024

 


Ransomware is one of the most persistent threats in the evolving landscape of cybercrime, and its escalation in 2024 marked an alarming turning point. No longer restricted to high-profile corporations, attackers extended their reach with unprecedented precision into hospitals, financial institutions, and even government agencies, sectors that are especially vulnerable to crippling disruption. 

Employing stronger encryption methods and more aggressive extortion tactics, cybercriminals demonstrated a ruthless pursuit of maximum damage and financial gain. Newly released data from threat intelligence firm Flashpoint illustrates the shift: ransomware attacks observed in the first half of 2025 increased by 179 per cent over the same period in 2024, nearly tripling in just a year. 

The ransomware landscape offered little relief through 2022 and 2023 as threat actors relentlessly escalated their tactics, increasingly using the threat of data exfiltration and public exposure to pressure companies into complying with ransom demands. 

Even companies that managed to restore operations from backups were not spared, as sensitive information often leaked onto underground forums and leak sites controlled by criminal groups. Ransomware incidence rose by 13 per cent year over year, an increase greater than the cumulative rises of the previous five years combined. 

Verizon’s Data Breach Investigations Report underscored the severity of this trend. Statista, meanwhile, estimated that about 70 per cent of businesses would face at least one ransomware attack in 2022, the highest rate ever recorded, and year-over-year analysis highlighted education, government, and healthcare as the industries hit hardest that year. 

By 2023, healthcare had emerged as one of the most targeted sectors, a result of attackers' calculated strategy of striking industries least able to withstand prolonged disruption. Amid the ongoing ransomware crisis, small and mid-sized businesses remain among the most vulnerable targets. 

Verizon’s research documented 832 ransomware-related incidents among small businesses in 2022, 130 of which resulted in confirmed data loss, and nearly 80 per cent of these events were directly tied to ransomware. Compounding the risk, only about half of U.S. small businesses maintain a formal cybersecurity plan, according to a report cited by UpCity. 

Globally, a Statista survey found that 72 per cent of businesses were affected by ransomware, with 64.9 per cent of those organisations ultimately yielding to ransom demands. A recent survey of 1,500 cybersecurity professionals by Cybereason painted a similarly concerning picture: more than two-thirds of organisations reported experiencing a ransomware attack, a 33 per cent increase over the previous year, with almost two-thirds of attacks linked to compromised third parties. 

The consequences went well beyond financial losses. Approximately 40 per cent of companies laid off employees following an attack, 35 per cent reported resignations of senior executives, and one-third temporarily suspended operations. 

Attackers also persisted inside networks undetected for long periods: 63 per cent of organisations reported intrusions lasting up to six months, and some discovered access that had gone unnoticed for more than a year. Most companies chose to pay ransoms despite the risks, with 49 per cent doing so to avoid revenue losses and 41 per cent to speed up recovery. 

Even payment provided no guarantee of recovery: over half of the companies that paid reported corrupted or unusable data after decryption, and most financial losses fell between $1 million and $10 million. The use of generative artificial intelligence within ransomware operations is an emerging concern as well. 

Although these experiments remain limited in scope, some groups have begun exploring large language models to reduce operational burdens, for example by automating the generation of phishing templates. Researchers have identified Funksec, a group that surfaced in late 2024 and is believed to have contributed to the WormGPT model, as one of the first to experiment with this capability, and more gangs are likely to incorporate artificial intelligence into their tactics in the near future.

Flashpoint analysts also found gang members recycling victims from other ransomware groups to gain a foothold on underground forums long after the initial breaches. A handful of particularly active operators dominated the first half of 2025: Akira claimed 537 attacks, Clop/Cl0p 402, Qilin 345, Safepay 233, and RansomHub 23. 

DragonForce has also drawn significant attention in the United Kingdom after the group targeted household names including Marks & Spencer and the Co-op Group. The United States remained the most heavily targeted country, with 2,160 attacks, far exceeding Canada’s 249, Germany’s 154, and the UK’s 148, though Brazil, Spain, France, India, and Australia also recorded high numbers. 

By sector, manufacturing and technology proved the most lucrative targets, accounting for 22 and 18 per cent of incidents respectively, while retail, healthcare, and business services accounted for 15 per cent. The report also highlighted how the boundaries between hacktivist groups and state-sponsored actors are becoming increasingly blurred, illustrating the complexity of today's threat environment. 

Of the 137 threat actors tracked during the first half of 2025, 40 per cent were attributed to state-sponsored groups, 9 per cent to hacktivists, and the remaining 51 per cent to cybercriminal organisations. Entities affiliated with the Iranian state, such as GhostSec and Arabian Ghosts, have shown a growing focus on critical infrastructure. 

These entities are reported to have targeted programmable logic controllers connected to Israeli media and water systems, while groups such as CyberAv3ngers sought to spread unverified narratives ahead of disruptive attacks. State-aligned operations also frequently resurface under new identities, such as APT IRAN, demonstrating their shifting strategies and adaptive nature. 

The surge in ransomware activity and the diversification of threat actors paint a sobering picture of the challenges ahead. No sector, geography, or organisation size is immune to disruption, and as cybercriminals innovate faster and borrow state-linked tactics, the stakes will only get higher. 

Proactive security management goes beyond ensuring compliance or minimising damage; it means cultivating a culture of security that anticipates threats rather than merely reacting to them. By investing in modern defences such as continuous threat intelligence, real-time monitoring, and zero-trust architectures, and by addressing fundamental weaknesses in supply chains and third-party partnerships that frequently open the door to attacks, companies can significantly reduce their risk exposure. 

The human aspect of resilience is equally important: employees must stay aware, incidents should be reported quickly, and leadership must remain committed to cybersecurity. 

Though the outlook may seem daunting, organisations that choose preparation over complacency will be better placed to withstand ransomware and the wider range of cyber threats reshaping the digital age. In an environment defined by persistent, innovative attackers, a resilient security posture remains the ultimate defence.

AI-supported Cursor IDE Falls Victim to Prompt Injection Attacks


Researchers have found a bug called CurXecute that is present in all versions of the AI-powered code editor Cursor and can be exploited to achieve remote code execution (RCE) with developer privileges. 

About the bug

The security bug is now tracked as CVE-2025-54135 and can be exploited by feeding the AI agent a malicious prompt that triggers attacker-controlled commands. 

The Cursor integrated development environment (IDE) relies on AI agents that let developers code faster and more effectively, connecting them to external systems and resources through the Model Context Protocol (MCP).

According to the researchers, a threat actor successfully abusing the CurXecute bug could trigger ransomware and data theft attacks. 

Prompt-injection 

CurXecute is similar to the EchoLeak bug in Microsoft 365 Copilot, which attackers could use to exfiltrate sensitive data without any user interaction. 

After finding and studying EchoLeak, researchers at cybersecurity company Aim Security discovered that attackers can also exploit the local AI agent.

The Cursor IDE supports MCP, an open-standard framework that extends an agent’s capabilities by connecting it to external data sources and tools.

Agent exploitation

But the researchers warn that this connectivity also exposes the agent to untrusted external data that can influence its control flow. A threat actor can take advantage by hijacking the agent’s session and privileges to act as the user.

According to the researchers, Cursor does not require user approval to execute new entries added to the ~/.cursor/mcp.json file. When the target opens a new conversation and tells the agent to summarize the messages, the shell payload deploys on the device without user authorization.

“Cursor allows writing in-workspace files with no user approval. If the file is a dotfile, editing it requires approval, but creating one if it doesn't exist doesn't. Hence, if sensitive MCP files, such as the .cursor/mcp.json file, don't already exist in the workspace, an attacker can chain an indirect prompt injection vulnerability to hijack the context to write to the settings file and trigger RCE on the victim without user approval,” Cursor said in a report.
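
As an illustration of the kind of defensive check this behaviour invites, below is a hypothetical Python sketch that watches the user-level MCP configuration file for new or changed server entries so they can be reviewed before an agent session continues. The "mcpServers"/"command" layout follows commonly documented MCP configuration files and is an assumption here, as are the file path and polling interval; this is a monitoring aid, not a fix for the vulnerability itself.

```python
import json
import time
from pathlib import Path

# Assumed schema: MCP servers declared under "mcpServers", each with a "command".
MCP_CONFIG = Path.home() / ".cursor" / "mcp.json"
POLL_SECONDS = 5  # illustrative polling interval


def read_commands(path: Path) -> dict[str, str]:
    """Return a mapping of MCP server name -> command, or {} if absent or unreadable."""
    try:
        data = json.loads(path.read_text(encoding="utf-8"))
    except (FileNotFoundError, json.JSONDecodeError):
        return {}
    servers = data.get("mcpServers", {})
    return {name: cfg.get("command", "") for name, cfg in servers.items()}


def watch() -> None:
    """Report MCP entries that appear or change so they can be reviewed manually."""
    known = read_commands(MCP_CONFIG)
    print(f"Watching {MCP_CONFIG} ({len(known)} known server entries)")
    while True:
        time.sleep(POLL_SECONDS)
        current = read_commands(MCP_CONFIG)
        for name, command in current.items():
            if name not in known:
                print(f"[!] New MCP server added: {name!r} runs {command!r}")
            elif known[name] != command:
                print(f"[!] MCP server {name!r} command changed to {command!r}")
        known = current


if __name__ == "__main__":
    watch()
```

A watcher like this only surfaces unexpected configuration changes for human review; it does not remove the underlying prompt-injection risk.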

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy policies against 11 distinct criteria, which addressed the following questions: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the most privacy-friendly AI service overall. It did well in the transparency category despite losing a few points, collects only a small amount of data, and achieves excellent scores on the privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were somewhat concerned about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Grok took the next spot, followed by Claude and Pi; each performed reasonably well on overall privacy protection, though with some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers let you prevent your prompts from being used to train their models; this is true of Grok, Mistral AI, Copilot, and ChatGPT. Based on their privacy policies and other resources, however, other services, including Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to offer a way to stop this kind of data collection. In response to this concern, Anthropic stated that it never gathers user input for model training. 

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being gathered and how to opt out.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.

Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs

 

Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Generative AI May Handle 40% of Workload, Financial Experts Predict

 

Almost half of bank executives polled recently by KPMG believe that generative AI will be able to manage 21% to 40% of their teams' regular tasks by the end of the year. 
 
Heavy investment

Despite economic uncertainty, six out of ten bank executives say generative AI is a top investment priority this year, according to an April KPMG report that polled 200 U.S. bank executives from large and small firms in March about the tech investments their organisations are making. Furthermore, 57% said generative AI is a vital part of their long-term strategy for driving innovation and remaining relevant in the future. 

“Banks are walking a tightrope of rapidly advancing their AI agendas while working to better define the value of their investments,” Peter Torrente, KPMG’s U.S. sector leader for its banking and capital markets practice, noted in the report. 

Approximately half of the executives polled stated their banks are actively piloting the use of generative AI in fraud detection and financial forecasting, with 34% stating the same for cybersecurity. Fraud and cybersecurity are the most prevalent in the proof-of-concept stage (45% each), followed by financial forecasting (20%). 

Nearly 78 percent are actively employing generative AI or evaluating its usage for security or fraud prevention, while 21% are considering it. The vast majority (85%) are using generative AI for data-driven insights or personalisation. 

Chris Ackerson, senior vice president of product and head of AI at Alphasense, an AI market intelligence company, said that banks are turning to third-party providers for at least certain uses because the present rate of AI development "is breathtaking." 

Lenders are using Alphasense and similar companies to streamline their due diligence procedures and assist in deal sourcing, helping them identify potentially lucrative opportunities. The latter, according to Ackerson, "can be a revenue generation play," not merely an efficiency increase. 

As banks incorporate generative AI into their cybersecurity, fraud detection, and financial forecasting work, ensuring that employees understand how to use generative AI-powered solutions appropriately has become critical to securing a return on investment. 

Training staff on how to use new tools or software is "a big element of all of this, to get the benefits out of the technology, as well as to make sure that you're upskilling your employees," Torrente stated. 

Numerous financial institutions, particularly larger lenders, are already investing in such training as they implement various AI tools, according to Torrente, but banks of all sizes should prioritise it as consumer expectations shift and smaller banks struggle to remain competitive.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits

 

Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in generative AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.  

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national labs, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task only achievable on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations. Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. 

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

Security Analysts Express Concerns Over AI-Generated Doll Trend

 

If you've been scrolling through social media recently, you've probably seen a lot of... dolls. There are dolls all over X and on Facebook feeds. Instagram? Dolls. TikTok?

You guessed it: dolls, as well as doll-making techniques. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. You can refer to it as the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, you can try AI action figures, action figure starter packs, or the ChatGPT action figure fad. However, regardless of the hashtag, dolls appear to be everywhere. 

And, while they share some similarities (boxes and packaging resembling Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're all as unique as the people who post them, with the exception of one key common feature: they're not real. 

In the emerging trend, users are using generative AI tools like ChatGPT to envision themselves as dolls or action figures, complete with accessories. It has proven quite popular, and not just among influencers.

Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have created images of themselves with cameras and microphones (although this journalist won't put you through that). Users have created renditions of almost every well-known figure, including billionaire Elon Musk and actress and singer Ariana Grande. 

The Verge, a tech media outlet, reports that the trend started on LinkedIn, a professional social networking site popular with marketers seeking engagement. Because of this, many of the dolls you see are designed to promote a company or business. (Think "social media marketer doll," or even "SEO manager doll.")

Privacy concerns

From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.

"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted. 

But as with any AI trend, there are concerns about how the data is used. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, concerns about data privacy on the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk. 

Safety tips 

As we have seen, one of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend taking the following steps: 

Protect your account: This includes enabling 2FA, creating secure and unique passwords for each service, and avoiding logging in to shared computers.

Minimise the real data you give to the AI model: Fornés suggests using nicknames or placeholder details instead. You should also consider using a separate identity solely for interactions with AI models (a minimal redaction sketch follows this list).

Use the tool cautiously and properly: When feasible, use the AI model in incognito mode and without activating the history or conversational memory functions.
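
As a rough illustration of the second tip, the snippet below strips a few obvious identifiers from a prompt before it is sent to any AI service. It is a minimal sketch using simple regular expressions; the patterns and the scrub_prompt helper are illustrative, not a complete PII filter.

    import re

    # Illustrative patterns only; a real filter would cover far more cases.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def scrub_prompt(text: str) -> str:
        """Replace obvious identifiers with placeholders before sending a prompt."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"<{label} removed>", text)
        return text

    print(scrub_prompt("Make a doll of me, jane.doe@example.com, +44 7700 900123"))
    # -> "Make a doll of me, <email removed>, <phone removed>"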

Generative AI Fuels Identity Theft, Aadhaar Card Fraud, and Misinformation in India

 

A disturbing trend is emerging in India’s digital landscape as generative AI tools are increasingly misused to forge identities and spread misinformation. One user, Piku, revealed that an AI platform generated a convincing Aadhaar card using only a name, birth date, and address—raising serious questions about data security. While AI models typically do not use real personal data, the near-perfect replication of government documents hints at training on real-world samples, possibly sourced from public leaks or open repositories. 

This AI-enabled fraud isn’t occurring in isolation. Criminals are combining fake document templates with authentic data collected from discarded paperwork, e-waste, and old printers. The resulting forged identities are realistic enough to pass basic checks, enabling SIM card fraud, bank scams, and more. What started as tools for entertainment and productivity now poses serious risks. Misinformation tactics are evolving too. 

A recent incident involving playback singer Shreya Ghoshal illustrated how scammers exploit public figures to push phishing links. Fake stories about her led users to malicious domains that targeted them with investment scams under false brand names like Lovarionix Liquidity. Cyber intelligence experts traced these campaigns to websites built specifically for impersonation and data theft. The misuse of generative AI also extends into healthcare fraud. 

In a shocking case, a man impersonated renowned cardiologist Dr. N John Camm and performed unauthorized surgeries at a hospital in Madhya Pradesh. At least two patient deaths were confirmed between December 2024 and February 2025. Investigators believe the impersonator may have used manipulated or AI-generated credentials to gain credibility.

Cybersecurity professionals are urging greater vigilance. CertiK founder Ronghui Gu emphasizes that users must understand the risks of sharing biometric data, such as facial images, with AI platforms. Without transparency, users cannot be sure how their data is used or whether it is shared. He advises precautions such as using pseudonyms, secondary emails, and reading privacy policies carefully, especially on platforms that are not clearly compliant with regulations like GDPR or CCPA. 

A recent HiddenLayer report revealed that 77% of companies using AI have already suffered security breaches. This underscores the need for robust data protection as AI becomes more embedded in everyday processes. India now finds itself at the center of an escalating cybercrime wave powered by generative AI. What once seemed like harmless innovation now fuels identity theft, document forgery, and digital misinformation. The time for proactive regulation, corporate accountability, and public awareness is now—before this new age of AI-driven fraud becomes unmanageable.

Nearly Half of Companies Lack AI-driven Cyber Threat Plans, Report Finds

 

Mimecast has discovered that over 55% of organisations do not have specific plans in place to deal with AI-driven cyberthreats. The cybersecurity company's most recent "State of Human Risk" report, which is based on a global survey of 1,100 IT security professionals, emphasises growing concerns about insider threats, cybersecurity budget shortages, and vulnerabilities related to artificial intelligence. 

According to the report, establishing a structured cybersecurity strategy has improved the risk posture of 96% of organisations. The threat landscape is still becoming more complicated, though, and insider threats and AI-driven attacks are posing new challenges for security leaders. 

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” stated Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.” 

According to the survey, 95% of organisations use AI for insider risk assessments, endpoint security, and threat detection, yet 81% are concerned about data leakage from generative AI (GenAI) technology. Meanwhile, 46% are not confident in their ability to defend against AI-powered phishing and deepfake threats, and more than half have no defined tactics for resisting AI-driven attacks.

Data loss from internal sources is expected to increase over the next year, according to 66% of IT leaders, while insider security incidents have risen by 43%. The average cost of insider-driven data breaches, leaks, or theft is $13.9 million per incident, according to the research. Furthermore, 79% of organisations think that the growing use of collaboration tools has heightened security risks, leaving them more vulnerable to both deliberate and accidental data breaches. 

With only 8% of employees responsible for 80% of security incidents, the report highlights a move away from traditional security awareness training and towards proactive Human Risk Management. To identify and eliminate threats early, organisations are implementing behavioural analytics and AI-driven surveillance. The fact that 72% of security leaders believe human-centric cybersecurity solutions will be essential over the next five years signals a shift towards more sophisticated threat detection and risk mitigation techniques.
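
To give a flavour of what "behavioural analytics" can mean in practice, here is a minimal sketch that flags users whose latest daily activity deviates sharply from their own baseline. The z-score threshold and the toy event counts are illustrative assumptions, not a description of any vendor's product.

    import statistics

    def flag_anomalies(daily_counts: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
        """Flag users whose latest daily event count deviates sharply from their history."""
        flagged = []
        for user, counts in daily_counts.items():
            history, latest = counts[:-1], counts[-1]
            if len(history) < 2:
                continue  # not enough history to establish a baseline
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1.0  # avoid division by zero
            if abs(latest - mean) / stdev > threshold:
                flagged.append(user)
        return flagged

    # Toy data: file-access events per day for two users.
    events = {
        "alice": [12, 15, 11, 14, 13, 95],   # sudden spike on the last day
        "bob":   [20, 22, 19, 21, 20, 23],
    }
    print(flag_anomalies(events))  # -> ['alice']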

Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is driven by DeepSeek's data learning model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, this accomplishment marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology, and Industry, highlighted during the launch ceremony that AI is at the forefront of a new industrial and technological revolution, and that Hong Kong is actively taking part in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, which include building an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding plan, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not only to enterprises and individuals but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, created with substantially fewer computing resources than the models of larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning research methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to establish the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 financial year, with funding set aside for the first five years to cover operational costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

Generative AI in Cybersecurity: A Double-Edged Sword

Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.

The Role of AI in Cybersecurity

According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:

  • Identify risks (39%)
  • Analyze threat intelligence (39%)
  • Detect and prioritize threats (35%)

However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:

  • 32% of attackers use AI to make attacks more effective.
  • 28% use AI to increase the volume of attacks.
  • 23% use AI to develop entirely new types of threats.

This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).

Challenges and Opportunities in Cybersecurity

One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.

Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:

  • 91% of organizations are increasing security training for legal and compliance staff.
  • 90% are enhancing training for security teams.

To strengthen cyber defenses, experts emphasize the importance of foundational practices:

  1. Strong Passwords and MFA: Poor password security is linked to 80% of data breaches. Companies are encouraged to use password managers and enforce robust password policies (a minimal policy-check sketch follows this list).
  2. Regular Cybersecurity Training: Educating employees on risk management and security practices, such as using antivirus software and maintaining firewalls, can significantly reduce vulnerabilities.
  3. Third-Party Vendor Assessments: Organizations must evaluate third-party vendors for security risks, as breaches through these channels can expose even the most secure systems.
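
As a small illustration of the first point, the snippet below checks a password against a basic policy and generates a strong random alternative. The specific rules (length and character classes) are illustrative assumptions; real policies should follow current guidance such as NIST SP 800-63B.

    import secrets
    import string

    def meets_policy(password: str, min_length: int = 12) -> bool:
        """Check a password against an illustrative baseline policy."""
        return (
            len(password) >= min_length
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password)
        )

    def generate_password(length: int = 16) -> str:
        """Generate random passwords until one satisfies the policy."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        while True:
            candidate = "".join(secrets.choice(alphabet) for _ in range(length))
            if meets_policy(candidate):
                return candidate

    print(meets_policy("password123"))   # False: too short, no uppercase or punctuation
    print(generate_password())           # e.g. a 16-character random string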

Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.