Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Generative AI. Show all posts

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World

 

As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy policies using 11 distinct factors. The following queries were addressed by the criteria: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information are gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeekSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were a little worried about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Claude and PI came in third and fourth, respectively, after Grok. Each performed reasonably well in terms of protecting user privacy overall, while there were some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

You can prevent the models from being trained using your prompts with some providers. This is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy rules and other resources, it appears that other services do not allow this kind of data collecting to be stopped. Gemini, DeepSeek, Pi AI, and Meta AI are a few of these. In response to this concern, Anthropic stated that it never gathers user input for model training. 

Ultimately, a clear and understandable privacy policy significantly helps in assisting you in determining what information is being gathered and how to opt out.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps

 

The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.

Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs

 

Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Generative AI May Handle 40% of Workload, Financial Experts Predict

 

Almost half of bank executives polled recently by KPMG believe that generative AI will be able to manage 21% to 40% of their teams' regular tasks by the end of the year. 
 
Heavy investment

Despite economic uncertainty, six out of ten bank executives say generative AI is a top investment priority this year, according to an April KPMG report that polled 200 U.S. bank executives from large and small firms in March about the tech investments their organisations are making. Furthermore, 57% said generative AI is a vital part of their long-term strategy for driving innovation and remaining relevant in the future. 

“Banks are walking a tightrope of rapidly advancing their AI agendas while working to better define the value of their investments,” Peter Torrente, KPMG’s U.S. sector leader for its banking and capital markets practice, noted in the report. 

Approximately half of the executives polled stated their banks are actively piloting the use of generative AI in fraud detection and financial forecasting, with 34% stating the same for cybersecurity. Fraud and cybersecurity are the most prevalent in the proof-of-concept stage (45% each), followed by financial forecasting (20%). 

Nearly 78 percent are actively employing generative AI or evaluating its usage for security or fraud prevention, while 21% are considering it. The vast majority (85%) are using generative AI for data-driven insights or personalisation. 

Senior vice president of product and head of AI at Alphasense, an AI market intelligence company, Chris Ackerson, stated that banks are turning to third-party providers for at least certain uses since the present rate of AI development "is breathtaking." 

Alphasense the and similar companies are being used by lenders to streamline their due diligence procedures and assist in deal sourcing in order to identify potentially lucrative possibilities. The latter, according to Ackerson, "can be a revenue generation play," not merely an efficiency increase. 

As banks include generative AI into their cybersecurity, fraud detection, and financial forecasting responsibilities, ensuring that their employees understand how to appropriately use generative AI-powered solutions has become critical to assuring a return on investment. 

Training staff on how to use new tools or software is "a big element of all of this, to get the benefits out of the technology, as well as to make sure that you're upskilling your employees," Torrente stated. 

Numerous financial institutions, particularly larger lenders, are already investing in such training as they implement various AI tools, according to Torrente, but banks of all sizes should prioritise it as consumer expectations shift and smaller banks struggle to remain competitive.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits

 

Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in generative AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.  

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national labs, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task only achievable on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations. Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. 

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

Security Analysts Express Concerns Over AI-Generated Doll Trend

 

If you've been scrolling through social media recently, you've probably seen a lot of... dolls. There are dolls all over X and on Facebook feeds. Instagram? Dolls. TikTok?

You guessed it: dolls, as well as doll-making techniques. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. You can refer to it as the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, you can try AI action figures, action figure starter packs, or the ChatGPT action figure fad. However, regardless of the hashtag, dolls appear to be everywhere. 

And, while they share some similarities (boxes and packaging resembling Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're all as unique as the people who post them, with the exception of one key common feature: they're not real. 

In the emerging trend, users are using generative AI tools like ChatGPT to envision themselves as dolls or action figures, complete with accessories. It has proven quite popular, and not just among influencers.

Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have created images of themselves with cameras and microphones (albeit this journalist won't put you through that). Users have created renditions of almost every well-known figure, including billionaire Elon Musk and actress and singer Ariana Grande. 

The Verge, a tech media outlet, claims that it started on LinkedIn, a professional social networking site that was well-liked by marketers seeking interaction. Because of this, a lot of the dolls you see try to advertise a company or business. (Think, "social media marketer doll," or even "SEO manager doll." ) 

Privacy concerns

From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.

"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted. 

But as with any AI trend, there are some concerns over its data use. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy concerns and the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk. 

Safety tips 

As we have seen, one of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend taking the following steps: 

Protect your account: This includes enabling 2FA, creating secure and unique passwords for each service, and avoiding logging in to shared computers.

Minimise the real data you give to the AI model: Fornés suggests using nicknames or other data instead. You should also consider utilising a different ID solely for interactions with AI models.

Use the tool cautiously and properly: When feasible, use the AI model in incognito mode and without activating the history or conversational memory functions.