
Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.”

The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots, drawing on both private and public sources. Some services take a relatively flexible, non-intrusive approach to gathering customer data; others are far less restrained. A recent analysis from data removal firm Incogni weighs how well popular AI services protect your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices using 11 distinct criteria, which addressed the following questions: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can prompts be shared with non-service providers or other entities? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 
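Incogni's exact scoring methodology isn't reproduced here, but the general approach of rating each service against a fixed set of criteria and aggregating the scores into a ranking can be sketched as follows. The criterion names and scores below are illustrative assumptions, not Incogni's data:

```python
# Hypothetical sketch of a criteria-based privacy ranking.
# Criterion names and scores are illustrative, not Incogni's actual data.

def rank_services(ratings: dict[str, dict[str, int]]) -> list[tuple[str, float]]:
    """Rank services by mean criterion score (lower = more privacy-friendly)."""
    scored = [
        (service, sum(scores.values()) / len(scores))
        for service, scores in ratings.items()
    ]
    return sorted(scored, key=lambda pair: pair[1])

ratings = {
    "Service A": {"training_data": 1, "opt_out": 0, "policy_clarity": 1},
    "Service B": {"training_data": 2, "opt_out": 2, "policy_clarity": 1},
}
print(rank_services(ratings))  # Service A ranks first with the lower mean
```

A real ranking would likely weight the criteria differently and normalise their scales, but the aggregation step is essentially this.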

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each service performed well on certain questions and poorly on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were somewhat concerned about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict its use. Grok took third place, with Claude and Pi coming in fourth and fifth, respectively. Each performed reasonably well at protecting user privacy overall, despite issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers let you prevent your prompts from being used to train their models; this is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy policies and other resources, other services do not appear to allow this kind of data collection to be stopped, among them Gemini, DeepSeek, Pi AI, and Meta AI. In response to this concern, Anthropic stated that it never gathers user input for model training. 

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being gathered and how to opt out.

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many types of media, including text, images, audio, music, and even video. Unlike conventional machine learning models, which are built to execute specific tasks, generative AI systems learn patterns and structures from large datasets and can produce outputs that are not just original but sometimes extremely realistic. 

This ability to simulate human-like creativity has put generative AI at the forefront of technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy. Generative AI tools are reshaping content creation workflows, producing compelling graphics and copy at scale. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI is also unpredictable and adaptive, which lets organisations explore new ideas and achieve efficiencies that traditional systems cannot offer. As adoption accelerates, there is an increasing need for enterprises to understand both the capabilities and the risks of this powerful technology. 

Understanding these capabilities has become essential to staying competitive in a rapidly changing digital world. But the same technology cuts both ways: by reproducing human voices and generating harmful software, generative artificial intelligence is rapidly lowering the barriers to launching highly sophisticated cyberattacks that target humans. A significant threat comes from the proliferation of deepfakes, realistic synthetic media that can convincingly impersonate individuals in real time. 

In a recent incident in Italy, cybercriminals deceived Defence Minister Guido Crosetto using advanced audio deepfake technology, demonstrating the alarming capacity of such tools to manipulate and deceive. In another case, a finance professional was duped into transferring $25 million by fraudsters who sent him a deepfake simulation of the company's chief financial officer via email. 

Also concerning is the increase in phishing and social engineering campaigns. Generative AI has enabled adversaries to craft highly personalised, context-aware messages, significantly enhancing the quality and scale of these attacks. By analysing publicly available data and replicating authentic communication styles, hackers can now create phishing emails that are practically indistinguishable from legitimate correspondence. 

Automation lets cybercriminals weaponise these messages further, generating and distributing huge volumes of lures dynamically tailored to each target's profile and behaviour. Attackers have also used large language models (LLMs) to revolutionise malicious code development. 

A large language model can help attackers design ransomware, improve exploit techniques, and circumvent conventional security measures. Organisations across multiple industries have accordingly reported an increase in AI-assisted ransomware incidents, with over 58% describing the increase as significant.

This trend means security strategies must adapt to threats evolving at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. Employee awareness remains an essential defence, yet studies indicate that only 24% of organisations have implemented continuous cyber awareness programmes, a strikingly low figure. 

As companies mature their security efforts, they should update training initiatives with practical guidance on spotting hyper-personalised phishing attempts, subtle signs of deepfake audio, and abnormal system behaviours that can bypass automated scanners. Complementing human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

DuckDuckGoose Suite, for example, focuses on detecting synthetic media, while Tessian uses behavioural analytics and threat intelligence to block AI-driven phishing campaigns. As well as disrupting malicious activity in real time, these technologies provide adaptive coaching to help employees develop stronger, more instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will be able to build resilience against the expanding arsenal of AI-enabled cyber threats.

Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. OpenAI, for instance, argued that a court order requiring ChatGPT to retain all user interactions, including deleted chats, could force it to inadvertently violate its own privacy commitments by keeping data that should have been wiped.

This dilemma highlights the broader challenges AI companies face in delivering enterprise services. Platforms such as OpenAI and Anthropic offer APIs and enterprise products that often include privacy safeguards; individuals using personal accounts, however, take on significant risk when handling sensitive information about themselves or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and proprietary or confidential materials should not be uploaded unless specifically authorised. Another critical concern is AI hallucination. Because large language models are built to predict language patterns rather than verify facts, they can produce persuasively presented but entirely fictitious content.

This has led to several high-profile incidents, including fabricated legal citations in court filings and invented bibliographies. Human review must therefore remain part of professional workflows that incorporate AI-generated outputs. Bias is another persistent vulnerability.

Because artificial intelligence models are trained on extensive but imperfect datasets, they can mirror and even amplify the prejudices present in society as a whole. System prompts intended to prevent offensive outputs risk introducing new biases of their own, and prompt adjustments have produced unpredictable and problematic responses, complicating efforts to maintain neutrality. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, causing outputs that are inaccurate, offensive, or harmful. User error remains an important factor as well: incidents such as unintentionally sharing private AI chats or recording confidential conversations illustrate how easily simple mistakes can breach confidentiality.

Intellectual property concerns further complicate the landscape. Many generative tools have been trained on copyrighted material, raising legal questions about how their outputs can be used. Companies should seek legal advice before deploying AI-generated content commercially. 

The most challenging risk, however, is the unknown: as AI systems develop, even their creators cannot always predict how they will behave, leaving organisations to navigate a landscape where threats continue to emerge in unexpected ways. Meanwhile, governments face increasing pressure to establish clear rules and safeguards as artificial intelligence moves rapidly from the laboratory to virtually every corner of the economy. 

Before the 2025 change in administration, there was growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 directed federal agencies to appoint chief AI officers and to develop uniform guidelines for assessing and managing AI risks, establishing a baseline of accountability for AI usage in the public sector. 

The administration's rescission of that order marked a change of strategy, signalling a departure from proactive federal oversight, and the future of artificial intelligence regulation in the United States is now highly uncertain. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

If enacted, the measure could effectively halt local and regional governance at a time when AI is gaining influence across practically all industries. The European Union, meanwhile, is pursuing a more consistent approach to AI. 

In March 2024, the EU adopted a comprehensive framework, the Artificial Intelligence Act. It categorises artificial intelligence applications by the level of risk they pose and imposes strict requirements on high-risk applications, such as those in healthcare, education, and law enforcement. 

The legislation also outright bans certain practices, such as the use of facial recognition systems in public places, reflecting a commitment to protecting individual rights. These divergent regulatory strategies are widening the gap between regions in how AI oversight is defined and enforced. 

As the technology continues to evolve, organisations will have to remain vigilant and adapt to the changing legal landscape to ensure compliance and manage emerging risks effectively.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 
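Regaining visibility typically starts with reconciling what is actually installed against an approved-software list. A minimal sketch, assuming a simple inventory feed (the tool names and the approved list here are illustrative, not any vendor's product):

```python
# Minimal sketch of flagging unapproved (shadow IT) tools in a software
# inventory. The approved list and tool names are illustrative assumptions.

APPROVED = {"slack", "vscode", "office365"}

def find_shadow_it(inventory: list[str]) -> set[str]:
    """Return installed tools that are absent from the approved list."""
    return {tool.lower() for tool in inventory} - APPROVED

inventory = ["Slack", "VSCode", "ChatGPT-Desktop", "Office365", "LocalLLM"]
print(sorted(find_shadow_it(inventory)))  # flags the two unapproved AI tools
```

In practice the inventory would come from endpoint management or network telemetry, and matching would need to be fuzzier than exact names, but this reconciliation step is the core of the visibility problem described above.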

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.

Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs

 

Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits

 

Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in generative AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.  

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 
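Part of the appeal of quantum randomness here is that Monte Carlo risk models consume a randomness source that can be swapped out. A minimal sketch of that structure (the mean-return and volatility figures are illustrative assumptions, not HSBC's models):

```python
import random
import statistics

# Sketch of a Monte Carlo risk model with a pluggable randomness source,
# so a quantum RNG could later replace the classical generator.
# Mean return and volatility figures are illustrative assumptions.

def simulate_portfolio(rng: random.Random, n_paths: int = 10_000,
                       mean: float = 0.05, vol: float = 0.2) -> float:
    """Estimate expected one-year return from normally distributed paths."""
    return statistics.mean(rng.gauss(mean, vol) for _ in range(n_paths))

classical_rng = random.Random(42)  # a QRNG-backed source would plug in here
print(f"estimated return: {simulate_portfolio(classical_rng):.3f}")
```

Certified quantum randomness matters most where the quality of the random stream itself is load-bearing, such as in the risk simulations and random number generation the article mentions; the simulation logic around it stays classical.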

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national labs, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task only achievable on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations. Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. 

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

 

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.