
Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs


Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience


At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Klarna Scales Back AI-Led Customer Service Strategy, Resumes Human Support Hiring


Klarna Group Plc, the Sweden-based fintech company, is reassessing its heavy reliance on artificial intelligence (AI) in customer service after admitting the approach led to a decline in service quality. CEO and co-founder Sebastian Siemiatkowski acknowledged that cost-cutting took precedence over customer experience during a company-wide AI push that replaced hundreds of human agents. 

Speaking at Klarna’s Stockholm headquarters, Siemiatkowski conceded, “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.” The company had frozen hiring for over a year to scale its AI capabilities but now plans to recalibrate its customer service model. 

In a strategic shift, Klarna is restarting recruitment for customer support roles — a rare move that reflects the company’s need to restore the quality of human interaction. A new pilot program is underway that allows remote workers — including students and individuals in rural areas — to provide customer service on-demand in an “Uber-like setup.” Currently, two agents are part of the trial. “We also know there are tons of Klarna users that are very passionate about our company and would enjoy working for us,” Siemiatkowski said. 

He stressed the importance of giving customers the option to speak to a human, citing both brand and operational needs. Despite dialing back on AI-led customer support, Klarna is not walking away from AI altogether. The company is continuing to rebuild its tech stack with AI at the core, aiming to improve operational efficiency. It is also developing a digital financial assistant designed to help users secure better interest rates and insurance options. 

Klarna maintains a close relationship with OpenAI, a collaboration that began in 2023. “We wanted to be [OpenAI’s] favorite guinea pig,” Siemiatkowski noted, reinforcing the company’s long-term commitment to leveraging AI. Klarna’s course correction follows a turbulent financial period. After peaking at a $45.6 billion valuation in 2021, the company saw its value drop to $6.7 billion in 2022. It has since rebounded and aims to raise $1 billion via an IPO, targeting a valuation exceeding $15 billion — though IPO plans have been paused due to market volatility. 

The company’s 2024 announcement that AI was handling the workload of 700 human agents disrupted the call center industry, leading to a sharp drop in shares of Teleperformance SE, a major outsourcing firm. While Klarna is resuming hiring, its overall workforce is expected to shrink. “In a year’s time, we’ll probably be down to about 2,500 people from 3,000,” Siemiatkowski said, noting that attrition and further AI improvements will likely drive continued headcount reductions.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits


Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise for AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge National Laboratories, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task achievable only on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations.
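
To see the shape of certified-randomness expansion, here is a toy Python sketch of the protocol’s loop: fresh random challenges, a sampling step, a cross-entropy check, and extraction. Everything is simulated classically, so the distributions, thresholds, and names below are illustrative stand-ins, not the study’s actual method or code.

```python
# Toy sketch of certified-randomness expansion via Random Circuit Sampling (RCS).
# The real protocol sends freshly chosen random circuits to a quantum computer
# and certifies the returned samples with cross-entropy benchmarking (XEB); here
# a classical simulation stands in for the quantum device, so every name and
# number below is illustrative.
import hashlib
import numpy as np

rng = np.random.default_rng(seed=1)
N_OUTCOMES = 2**10           # outcome space of a toy 10-qubit "circuit"
SAMPLES_PER_CIRCUIT = 50
XEB_THRESHOLD = 1.5          # honest sampling scores ~2, uniform noise ~1

def random_circuit_distribution():
    # Stand-in for a random circuit's output distribution (Porter-Thomas-like:
    # exponentially distributed weights, normalized to sum to 1).
    w = rng.exponential(size=N_OUTCOMES)
    return w / w.sum()

def xeb_score(probs, samples):
    # Linear XEB: N times the mean ideal probability of the returned samples.
    return N_OUTCOMES * np.mean(probs[samples])

certified = b""
for challenge in range(20):
    probs = random_circuit_distribution()    # fresh random challenge "circuit"
    samples = rng.choice(N_OUTCOMES, size=SAMPLES_PER_CIRCUIT, p=probs)  # honest device
    if xeb_score(probs, samples) >= XEB_THRESHOLD:   # certify before trusting
        certified += samples.astype(np.uint16).tobytes()

# Extraction stand-in: hash the certified samples down to output randomness.
print(hashlib.sha256(certified).hexdigest())
```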

Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems.
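
On the post-quantum point above, the sketch below shows what replacing RSA-style key exchange with a quantum-safe key-encapsulation mechanism can look like. It assumes the open-source liboqs-python bindings; the algorithm identifier and API details vary by version, so treat them as assumptions rather than a definitive recipe.

```python
# Minimal post-quantum key-encapsulation sketch, assuming the open-source
# liboqs-python bindings (pip install liboqs-python). API names follow that
# library's examples and may differ across versions.
import oqs

# NIST-standardized Kyber variant; availability depends on the liboqs build.
ALG = "ML-KEM-512"

with oqs.KeyEncapsulation(ALG) as server, oqs.KeyEncapsulation(ALG) as client:
    public_key = server.generate_keypair()          # server publishes a PQ public key
    ciphertext, secret_c = client.encap_secret(public_key)  # client encapsulates a secret
    secret_s = server.decap_secret(ciphertext)      # server recovers the same secret
    assert secret_c == secret_s                     # shared key for symmetric encryption
```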

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns


Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. These images can be used not only by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child’s name but also by large tech companies to train their algorithms, often without the user’s full awareness or consent. 

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that on average, parents upload 63 images to social media every month, with 59% of those being family-related. A significant proportion of parents—21%—share these photos multiple times a week, while 38% post several times a month. These frequent posts not only showcase images but also often contain sensitive data like location tags and key life events, making it easier for bad actors to build a detailed online profile of the child. Professor Maple warned that such oversharing can lead to long-term consequences. 

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allow the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams.

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.

AI in Cybersecurity Market Sees Rapid Growth as Network Security Leads 2024 Expansion


The integration of artificial intelligence into cybersecurity solutions has accelerated dramatically, driving the global market to an estimated value of $32.5 billion in 2024. This surge—an annual growth rate of 23%—reflects organizations’ urgent need to defend against increasingly sophisticated cyber threats. Traditional, signature-based defenses are no longer sufficient; today’s adversaries employ polymorphic malware, fileless attacks, and automated intrusion tools that can evade static rule sets. AI’s ability to learn patterns, detect anomalies in real time, and respond autonomously has become indispensable. 

Among AI-driven cybersecurity segments, network security saw the most significant expansion last year, accounting for nearly 40% of total AI security revenues. AI-enhanced intrusion prevention systems and next-generation firewalls leverage machine learning models to inspect vast streams of traffic, distinguishing malicious behavior from legitimate activity. These solutions can automatically quarantine suspicious connections, adapt to novel malware variants, and provide security teams with prioritized alerts—reducing mean time to detection from days to mere minutes. As more enterprises adopt zero-trust architectures, AI’s role in continuously verifying device and user behavior on the network has become a cornerstone of modern defensive strategies. 
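
As a rough illustration of the anomaly-detection idea (not any vendor’s implementation), the sketch below trains an unsupervised model on baseline flow statistics and flags outliers. The features, numbers, and thresholds are invented for the example.

```python
# Rough sketch of ML-based traffic anomaly detection in the spirit of the
# AI-enhanced IPS/NGFW systems described above; features and numbers invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-flow features: bytes/sec, packets/sec, distinct destination ports.
baseline = rng.normal(loc=[5e4, 40, 3], scale=[1e4, 8, 1], size=(5000, 3))

# Fit an unsupervised model on known-good traffic only.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

live_flows = np.array([
    [5.2e4, 38, 3],       # ordinary-looking flow
    [9.0e5, 600, 180],    # port-scan-like burst
])
for flow, verdict in zip(live_flows, model.predict(live_flows)):
    action = "quarantine + alert" if verdict == -1 else "allow"  # -1 = anomaly
    print(flow, "->", action)
```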

Endpoint security followed closely, representing roughly 25% of the AI cybersecurity market in 2024. AI-powered endpoint detection and response (EDR) platforms monitor processes, memory activity, and system calls on workstations and servers. By correlating telemetry across thousands of devices, these platforms can identify subtle indicators of compromise—such as unusual parent-child process relationships or command-line flags—before attackers achieve persistence. The rise of remote work has only heightened demand: with employees connecting from diverse locations and personal devices, AI’s context-aware threat hunting capabilities help maintain comprehensive visibility across decentralized environments.
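
The “unusual parent-child process” signal lends itself to a small worked example. The sketch below scores process pairs against a frequency baseline built from fleet-wide telemetry; the baseline counts and the alert threshold are made up for illustration.

```python
# Illustrative detector for rare parent-child process pairs, one of the EDR
# signals mentioned above. Baseline counts are invented for the example.
from collections import Counter

# Fleet baseline: how often each (parent, child) pair was observed.
baseline = Counter({
    ("explorer.exe", "chrome.exe"): 120_000,
    ("services.exe", "svchost.exe"): 90_000,
    ("winword.exe", "powershell.exe"): 2,   # rare: classic macro-dropper pattern
})
total = sum(baseline.values())

def rarity(parent: str, child: str) -> float:
    # Smoothed relative frequency; unseen pairs get the smallest score.
    return (baseline.get((parent, child), 0) + 1) / (total + 1)

for event in [("explorer.exe", "chrome.exe"), ("winword.exe", "powershell.exe")]:
    if rarity(*event) < 1e-4:   # illustrative alert threshold
        print("ALERT: rare parent-child pair", event)
```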

Identity and access management (IAM) solutions incorporating AI now capture about 20% of the market. Behavioral analytics engines analyze login patterns, device characteristics, and geolocation data to detect risky authentication attempts. Rather than relying solely on static multi-factor prompts, adaptive authentication methods adjust challenge levels based on real-time risk scores, blocking illicit logins while minimizing friction for legitimate users. This dynamic approach addresses credential stuffing and account takeover attacks, which accounted for over 30% of cyber incidents in 2024.
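
Here is a minimal sketch of that risk-based flow: weak signals are combined into a score, and the score picks the challenge level. The weights and thresholds are illustrative assumptions, not any vendor’s model.

```python
# Sketch of adaptive (risk-based) authentication: combine login signals into a
# risk score, then choose a challenge level. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool
    usual_geolocation: bool
    failed_attempts_last_hour: int
    travel_speed_kmh: float  # distance/time since the previous login

def risk_score(a: LoginAttempt) -> float:
    score = 0.0
    score += 0.0 if a.known_device else 0.35
    score += 0.0 if a.usual_geolocation else 0.25
    score += min(a.failed_attempts_last_hour, 5) * 0.06
    if a.travel_speed_kmh > 900:   # "impossible travel" heuristic
        score += 0.4
    return min(score, 1.0)

def challenge(a: LoginAttempt) -> str:
    s = risk_score(a)
    if s < 0.2:
        return "allow"             # low friction for legitimate users
    if s < 0.6:
        return "step-up MFA"       # extra factor only when warranted
    return "block + notify"        # likely credential stuffing / takeover

print(challenge(LoginAttempt(True, True, 0, 12.0)))      # -> allow
print(challenge(LoginAttempt(False, False, 4, 1400.0)))  # -> block + notify
```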

Cloud security, covering roughly 15% of the AI cybersecurity spend, is another high-growth area. With workloads distributed across public, private, and hybrid clouds, AI-driven cloud security posture management (CSPM) tools continuously scan configurations and user activities for misconfigurations, vulnerable APIs, and data-exfiltration attempts. Automated remediation workflows can instantly correct risky settings, enforce encryption policies, and isolate compromised workloads—ensuring compliance with evolving regulations such as GDPR and CCPA.
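
A toy version of a CSPM posture check makes the pattern concrete: declarative rules scanned against resource configurations. The config schema and rules below are invented for illustration.

```python
# Toy CSPM-style posture check: rules evaluated over (mock) cloud resources.
resources = [
    {"id": "bucket-logs", "type": "object_store", "public_read": True,  "encrypted": False},
    {"id": "db-prod",     "type": "database",     "public_read": False, "encrypted": True},
]

RULES = [
    ("public object store", lambda r: r["type"] == "object_store" and r["public_read"]),
    ("unencrypted at rest", lambda r: not r["encrypted"]),
]

for r in resources:
    for name, violated in RULES:
        if violated(r):
            # A real CSPM tool would trigger an automated remediation workflow here.
            print(f"FINDING [{name}] on {r['id']} -> auto-remediate / ticket")
```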

Looking ahead, analysts predict the AI in cybersecurity market will exceed $60 billion by 2028, as vendors integrate generative AI for automated playbook creation and incident response orchestration. Organizations that invest in AI-powered defenses will gain a competitive edge, enabling proactive threat hunting and resilient operations against a backdrop of escalating cyber-threat complexity.

Agentic AI Is Reshaping Cybersecurity Careers, Not Replacing Them


Agentic AI took center stage at the 2025 RSA Conference, signaling a major shift in how cybersecurity professionals will work in the near future. No longer a futuristic concept, agentic AI systems—capable of planning, acting, and learning independently—are already being deployed to streamline incident response, bolster compliance, and scale threat detection efforts. These intelligent agents operate with minimal human input, making real-time decisions and adapting to dynamic environments. 

While the promise of increased efficiency and resilience is driving rapid adoption, cybersecurity leaders also raised serious concerns. Experts like Elastic CISO Mandy Andress called for greater transparency and stronger oversight when deploying AI agents in sensitive environments. Trust, explainability, and governance emerged as recurring themes throughout RSAC, underscoring the need to balance innovation with caution—especially as cybercriminals are also experimenting with agentic AI to enhance and scale their attacks. 

For professionals in the field, this isn’t a moment to fear job loss—it’s a chance to embrace career transformation. New roles are already emerging. AI-Augmented Cybersecurity Analysts will shift from routine alert triage to validating agent insights and making strategic decisions. Security Agent Designers will define logic workflows and trust boundaries for AI operations, blending DevSecOps with AI governance. Meanwhile, AI Threat Hunters will work to identify how attackers may exploit these new tools and develop defense mechanisms in response. 

Another critical role on the horizon is the Autonomous SOC Architect, tasked with designing next-generation security operations centers powered by human-machine collaboration. There will also be growing demand for Governance and AI Ethics Leads who ensure that decisions made by AI agents are auditable, compliant, and ethically sound. These roles reflect how cybersecurity is evolving into a hybrid discipline requiring both technical fluency and ethical oversight. 

To stay competitive in this changing landscape, professionals should build new skills. This includes prompt engineering, agent orchestration using tools like LangChain, AI risk modeling, secure deployment practices, and frameworks for explainability. Human-AI collaboration strategies will also be essential, as security teams learn to partner with autonomous systems rather than merely supervise them. As IBM’s Suja Viswesan emphasized, “Security must be baked in—not bolted on.” That principle applies not only to how organizations deploy agentic AI but also to how they train and upskill their cybersecurity workforce. 
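
For a concrete picture of what “agent orchestration” means in practice, here is a framework-agnostic sketch of the pattern that tools like LangChain package up: an agent picks a tool, a policy layer enforces trust boundaries, and a human approves risky actions. Every name here is illustrative; no specific framework’s API is shown.

```python
# Framework-agnostic sketch of agent orchestration with a human-in-the-loop
# trust boundary. Tool names, the planner, and the policy are all invented.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_ioc": lambda arg: f"threat-intel verdict for {arg}: malicious",
    "isolate_host": lambda arg: f"host {arg} isolated from network",
}
REQUIRES_HUMAN_APPROVAL = {"isolate_host"}  # trust boundary for risky actions

def plan(alert: str) -> list[tuple[str, str]]:
    # Stand-in for the LLM planner: map an alert to tool invocations.
    return [("lookup_ioc", "203.0.113.7"), ("isolate_host", "ws-1042")]

def run(alert: str) -> None:
    for tool, arg in plan(alert):
        if tool in REQUIRES_HUMAN_APPROVAL:
            if input(f"approve {tool}({arg})? [y/N] ").lower() != "y":
                print("skipped by analyst")
                continue
        print(TOOLS[tool](arg))  # execute inside the agreed trust boundary

run("beaconing detected from ws-1042")
```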

The future of defense depends on professionals who understand how AI agents think, operate, and fail. Ultimately, agentic AI isn’t replacing people—it’s reshaping their roles. Human intuition, ethical reasoning, and strategic thinking remain vital in defending against modern cyber threats. 

As HackerOne CEO Kara Sprague noted, “Machines detect patterns. Humans understand motives.” Together, they can form a faster, smarter, and more adaptive line of defense. The cybersecurity industry isn’t just gaining new tools—it’s creating entirely new job titles and disciplines.

Agentic AI and Ransomware: How Autonomous Agents Are Reshaping Cybersecurity Threats


A new generation of artificial intelligence—known as agentic AI—is emerging, and it promises to fundamentally change how technology is used. Unlike generative AI, which mainly responds to prompts, agentic AI operates independently, solving complex problems and making decisions without direct human input. While this leap in autonomy brings major benefits for businesses, it also introduces serious risks, especially in the realm of cybersecurity. Security experts warn that agentic AI could significantly enhance the capabilities of ransomware groups. 

These autonomous agents can analyze, plan, and execute tasks on their own, making them ideal tools for attackers seeking to automate and scale their operations. As agentic AI evolves, it is poised to alter the cyber threat landscape, potentially enabling more efficient and harder-to-detect ransomware attacks. In contrast to the early concerns raised in 2022 with the launch of tools like ChatGPT, which mainly helped attackers draft phishing emails or debug malicious code, agentic AI can operate in real time and adapt to complex environments. This allows cybercriminals to offload traditionally manual processes like lateral movement, system enumeration, and target prioritization. 

Currently, ransomware operators often rely on Initial Access Brokers (IABs) to breach networks, then spend time manually navigating internal systems to deploy malware. This process is labor-intensive and prone to error, often leading to incomplete or failed attacks. Agentic AI, however, removes many of these limitations. It can independently identify valuable targets, choose the most effective attack vectors, and adjust to obstacles—all without human direction. These agents may also dramatically reduce the time required to carry out a successful ransomware campaign, compressing what once took weeks into mere minutes. 

In practice, agentic AI can discover weak points in a network, bypass defenses, deploy malware, and erase evidence of the intrusion—all in a single automated workflow. However, just as agentic AI poses a new challenge for cybersecurity, it also offers potential defensive benefits. Security teams could deploy autonomous AI agents to monitor networks, detect anomalies, or even create decoy systems that mislead attackers. 
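
On the defensive side, the decoy idea is simple enough to sketch: a fake service that accepts connections and logs whatever an automated attacker sends. The port, banner, and logging below are illustrative choices, not a production honeypot.

```python
# Minimal decoy ("honeypot") listener of the kind the paragraph above alludes
# to. The port and bait banner are illustrative assumptions.
import socket
from datetime import datetime, timezone

def run_decoy(host: str = "0.0.0.0", port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        print(f"decoy SSH-like service listening on port {port}")
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # bait banner
                data = conn.recv(1024)
                stamp = datetime.now(timezone.utc).isoformat()
                # In production this would feed a SIEM; here we just print.
                print(f"[{stamp}] probe from {addr[0]}: {data!r}")

if __name__ == "__main__":
    run_decoy()
```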

While agentic AI is not yet widely deployed by threat actors, its rapid development signals an urgent need for organizations to prepare. To stay ahead, companies should begin exploring how agentic AI can be integrated into their defense strategies. Being proactive now could mean the difference between falling behind and successfully countering the next wave of ransomware threats.