
Foxconn’s Chairman Warns AI and Robotics Will Replace Low-End Manufacturing Jobs

 

Foxconn chairman Young Liu has issued a stark warning about the future of low-end manufacturing jobs, suggesting that generative AI and robotics will eventually eliminate many of these roles. Speaking at the Computex conference in Taiwan, Liu emphasized that this transformation is not just technological but geopolitical, urging world leaders to prepare for the sweeping changes ahead. 

According to Liu, wealthy nations have historically relied on two methods to keep manufacturing costs down: encouraging immigration to bring in lower-wage workers and outsourcing production to countries with lower GDP. However, he argued that both strategies are reaching their limits. With fewer low-GDP countries to outsource to and increasing resistance to immigration in many parts of the world, Liu believes that generative AI and robotics will be the next major solution to bridge this gap. He cited Foxconn’s own experience as proof of this shift. 

After integrating generative AI into its production processes, the company discovered that AI alone could handle up to 80% of the work involved in setting up new manufacturing runs—often faster than human workers. While human input is still required to complete the job, the combination of AI and skilled labor significantly improves efficiency. As a result, Foxconn’s human experts are now able to focus on more complex challenges rather than repetitive tasks. Liu also announced the development of a proprietary AI model named “FoxBrain,” tailored specifically for manufacturing. 

Built using Meta’s Llama 3 and 4 models and trained on Foxconn’s internal data, this tool aims to automate workflows and enhance factory operations. The company plans to open-source FoxBrain and deploy it across all its facilities, continuously improving the model with real-time performance feedback. Another innovation Liu highlighted was Foxconn’s use of Nvidia’s Omniverse to create digital twins of future factories. These AI-operated virtual factories are used to test and optimize layouts before construction begins, drastically improving design efficiency and effectiveness. 
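Foxconn has not published FoxBrain's training pipeline, but the pattern the article describes (adapting an open Llama-family base model to internal data) is well established. The sketch below shows a minimal LoRA fine-tuning loop with Hugging Face's transformers and peft libraries; the base-model name, the factory_corpus.jsonl dataset, and every hyperparameter are illustrative assumptions, not Foxconn's actual setup.

```python
# Minimal sketch: adapting an open Llama-family base model to internal
# manufacturing text with LoRA. All names, paths, and hyperparameters
# are illustrative, not Foxconn's.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE = "meta-llama/Meta-Llama-3-8B"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token     # Llama ships without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE)
model = get_peft_model(model, LoraConfig(     # train small adapter weights only
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical internal corpus: one JSON record per factory log or SOP entry.
data = load_dataset("json", data_files="factory_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="foxbrain-sketch",
                           per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    # mlm=False gives standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Adapter-based methods like LoRA are a common choice for this kind of domain adaptation because they train only a small fraction of the weights, which keeps retraining cheap enough to repeat as new factory data arrives.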

In addition to manufacturing, Foxconn is eyeing the electric vehicle sector. Liu revealed the company is working on a reference design for EVs, a model that partners can customize—much like Foxconn’s strategy with PC manufacturers. He claimed this approach could reduce product development workloads by up to 80%, enhancing time-to-market and cutting costs. 

Liu closed his keynote by encouraging industry leaders to monitor these developments closely, as the rise of AI-driven automation could reshape the global labor landscape faster than anticipated.

Generative AI May Handle Up to 40% of Workload, Bank Executives Predict

 

Almost half of bank executives polled recently by KPMG believe that generative AI will be able to manage 21% to 40% of their teams' regular tasks by the end of the year. 
 
Heavy investment

Despite economic uncertainty, six out of ten bank executives say generative AI is a top investment priority this year, according to an April KPMG report that surveyed 200 U.S. bank executives from large and small firms in March about their organisations' tech investments. Furthermore, 57% said generative AI is a vital part of their long-term strategy for driving innovation and remaining relevant. 

“Banks are walking a tightrope of rapidly advancing their AI agendas while working to better define the value of their investments,” Peter Torrente, KPMG’s U.S. sector leader for its banking and capital markets practice, noted in the report. 

Approximately half of the executives polled said their banks are actively piloting generative AI in fraud detection and financial forecasting, with 34% saying the same for cybersecurity. Fraud detection and cybersecurity are the most common use cases at the proof-of-concept stage (45% each), followed by financial forecasting (20%). 

Nearly 78% are actively using generative AI for security or fraud prevention or evaluating its use, while another 21% are considering it. The vast majority (85%) are using generative AI for data-driven insights or personalisation. 

Chris Ackerson, senior vice president of product and head of AI at AlphaSense, an AI market-intelligence company, said banks are turning to third-party providers for at least some use cases because the current pace of AI development "is breathtaking." 

Lenders are using AlphaSense and similar companies to streamline their due-diligence procedures and to assist in deal sourcing, identifying potentially lucrative opportunities. The latter, according to Ackerson, "can be a revenue generation play," not merely an efficiency gain. 

As banks incorporate generative AI into their cybersecurity, fraud detection, and financial forecasting work, ensuring that employees understand how to use generative AI-powered solutions appropriately has become critical to securing a return on investment. 

Training staff on how to use new tools or software is "a big element of all of this, to get the benefits out of the technology, as well as to make sure that you're upskilling your employees," Torrente stated. 

Numerous financial institutions, particularly larger lenders, are already investing in such training as they implement various AI tools, according to Torrente, but banks of all sizes should prioritise it as consumer expectations shift and smaller banks struggle to remain competitive.

Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits

 

Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in generative AI applications, especially machine learning. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets—making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.  

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 
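The article does not detail HSBC's models, but a toy Monte Carlo value-at-risk (VaR) estimate shows where a quantum random number generator would slot in: the quality of the random draws directly shapes the quality of the simulation. In this sketch, numpy's pseudorandom generator stands in for a quantum entropy source, and all portfolio figures are made up.

```python
import numpy as np

# One-day Monte Carlo VaR. A QRNG would supply the entropy behind the
# draws; numpy's PRNG is a stand-in here.
rng = np.random.default_rng(seed=42)

portfolio_value = 1_000_000        # illustrative portfolio size
mu, sigma = 0.0005, 0.02           # assumed daily return mean and volatility
n_scenarios = 100_000

# Simulate one-day returns and convert them to profit/loss figures.
returns = rng.normal(mu, sigma, n_scenarios)
pnl = portfolio_value * returns

# 99% VaR: the loss exceeded in only 1% of simulated scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% VaR ≈ ${var_99:,.0f}")
```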

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national labs, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task only achievable on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations. Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology. 
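A migration on that timescale starts with an inventory of where RSA-2048 is still deployed. As a rough illustration (not a tool mentioned in the article), the sketch below connects to a TLS endpoint and reports whether its certificate still carries an RSA key, using Python's standard ssl module and the third-party cryptography package.

```python
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def check_endpoint(host: str, port: int = 443) -> str:
    """Fetch a server's TLS certificate and flag RSA keys for PQC planning."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)   # DER-encoded certificate
    cert = x509.load_der_x509_certificate(der)
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{host}: RSA-{key.key_size} (quantum-vulnerable, plan migration)"
    return f"{host}: {type(key).__name__} (check against your PQC roadmap)"

print(check_endpoint("example.com"))
```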

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

Security Analysts Express Concerns Over AI-Generated Doll Trend

 

If you've been scrolling through social media recently, you've probably seen a lot of... dolls. There are dolls all over X and on Facebook feeds. Instagram? Dolls. TikTok? You guessed it: dolls, as well as doll-making techniques. There are even dolls on LinkedIn, undoubtedly the most serious and least entertaining member of the club. You can call it the Barbie AI treatment or the Barbie box trend. If Barbie isn't your thing, you can try AI action figures, action figure starter packs, or the ChatGPT action figure fad. Whatever the hashtag, dolls appear to be everywhere. 

And, while they share some similarities (boxes and packaging resembling Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're all as unique as the people who post them, with the exception of one key common feature: they're not real. 

In this emerging trend, people use generative AI tools like ChatGPT to envision themselves as dolls or action figures, complete with accessories. It has proven quite popular, and not just among influencers.

Politicians, celebrities, and major brands have all joined in. Journalists covering the trend have created images of themselves with cameras and microphones (although this journalist won't put you through that). Users have created renditions of almost every well-known figure, including billionaire Elon Musk and actress and singer Ariana Grande. 

According to tech media outlet The Verge, the trend started on LinkedIn, a professional social network popular with marketers seeking engagement. That origin is why so many of the dolls you see promote a company or business. (Think "social media marketer doll," or even "SEO manager doll.") 

Privacy concerns

From a social perspective, the popularity of the doll-generating trend isn't surprising at all, according to Matthew Guzdial, an assistant professor of computing science at the University of Alberta.

"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," Guzdial noted. 

But as with any AI trend, there are some concerns over its data use. Generative AI in general poses substantial data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy concerns and the internet are nothing new, but AI is so "data-hungry" that it magnifies the risk. 

Safety tips 

One of the major risks of participating in viral AI trends is the potential for your conversation history to be compromised by unauthorised or malicious parties. To stay safe, researchers recommend the following steps: 

Protect your account: This includes enabling 2FA, creating secure and unique passwords for each service, and avoiding logging in to shared computers.

Minimise the real data you give to the AI model: Fornés suggests using nicknames or placeholder details instead. You should also consider using a separate ID solely for interactions with AI models (see the sketch after these tips).

Use the tool cautiously and properly: When feasible, use the AI model in incognito mode and without activating the history or conversational memory functions.
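As a concrete illustration of the second tip, the sketch below pseudonymises obvious identifiers locally before a prompt ever reaches an AI service. The regex patterns are deliberately simple, and send_to_model() is a hypothetical stand-in for whichever AI service you use.

```python
import re

# Toy patterns for identifiers worth stripping; real tooling would
# cover many more (names, addresses, account numbers, ...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def pseudonymise(text: str) -> tuple[str, dict]:
    """Replace matches with placeholders; keep the mapping locally so
    responses can be re-personalised without sharing real data."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

prompt = "Write a bio for jane.doe@example.com, phone +1 415 555 0199."
safe_prompt, mapping = pseudonymise(prompt)
print(safe_prompt)            # identifiers are gone before the prompt leaves
# send_to_model(safe_prompt)  # hypothetical call to the AI service
```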

Generative AI Fuels Identity Theft, Aadhaar Card Fraud, and Misinformation in India

 

A disturbing trend is emerging in India’s digital landscape as generative AI tools are increasingly misused to forge identities and spread misinformation. One user, Piku, revealed that an AI platform generated a convincing Aadhaar card using only a name, birth date, and address—raising serious questions about data security. While AI models typically do not use real personal data, the near-perfect replication of government documents hints at training on real-world samples, possibly sourced from public leaks or open repositories. 

This AI-enabled fraud isn’t occurring in isolation. Criminals are combining fake document templates with authentic data collected from discarded paperwork, e-waste, and old printers. The resulting forged identities are realistic enough to pass basic checks, enabling SIM card fraud, bank scams, and more. What started as tools for entertainment and productivity now pose serious risks. Misinformation tactics are evolving too. 

A recent incident involving playback singer Shreya Ghoshal illustrated how scammers exploit public figures to push phishing links. These fake stories led users to malicious domains targeting them with investment scams under false brand names like Lovarionix Liquidity. Cyber intelligence experts traced these campaigns to websites built specifically for impersonation and data theft. The misuse of generative AI also extends into healthcare fraud. 

In a shocking case, a man impersonated renowned cardiologist Dr. N John Camm and performed unauthorized surgeries at a hospital in Madhya Pradesh. At least two patient deaths were confirmed between December 2024 and February 2025. Investigators believe the impersonator may have used manipulated or AI-generated credentials to gain credibility. Cybersecurity professionals are urging more vigilance. CertiK founder Ronghui Gu emphasizes that users must understand the risks of sharing biometric data, like facial images, with AI platforms. Without transparency, users cannot be sure how their data is used or whether it’s shared. He advises precautions such as using pseudonyms, secondary emails, and reading privacy policies carefully—especially on platforms not clearly compliant with regulations like GDPR or CCPA. 

A recent HiddenLayer report revealed that 77% of companies using AI have already suffered security breaches. This underscores the need for robust data protection as AI becomes more embedded in everyday processes. India now finds itself at the center of an escalating cybercrime wave powered by generative AI. What once seemed like harmless innovation now fuels identity theft, document forgery, and digital misinformation. The time for proactive regulation, corporate accountability, and public awareness is now—before this new age of AI-driven fraud becomes unmanageable.

More Than Half of Companies Lack AI-driven Cyber Threat Plans, Report Finds

 

Mimecast has discovered that over 55% of organisations do not have specific plans in place to deal with AI-driven cyberthreats. The cybersecurity company's most recent "State of Human Risk" report, which is based on a global survey of 1,100 IT security professionals, emphasises growing concerns about insider threats, cybersecurity budget shortages, and vulnerabilities related to artificial intelligence. 

According to the report, establishing a structured cybersecurity strategy has improved the risk posture of 96% of organisations. The threat landscape is still becoming more complicated, though, and insider threats and AI-driven attacks are posing new challenges for security leaders. 

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” stated Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.” 

According to the survey, 95% of organisations use AI for insider risk assessments, endpoint security, and threat detection, yet 81% are concerned about data leakage from generative AI (GenAI) tools. More than half lack defined tactics to resist AI-driven attacks, and 46% are not confident in their ability to defend against AI-powered phishing and deepfake threats.

Data loss from internal sources is expected to increase over the next year, according to 66% of IT leaders, while insider security incidents have increased by 43%. The average cost of insider-driven data breaches, leaks, or theft is $13.9 million per incident, according to the research. Furthermore, 79% of organisations think that the increased usage of collaboration technologies has increased security concerns, making them more vulnerable to both deliberate and accidental data breaches. 

With just 8% of employees responsible for 80% of security incidents, the report highlights a move away from traditional security awareness training and towards proactive human risk management. To identify and eliminate threats early, organisations are implementing behavioural analytics and AI-driven monitoring. And with 72% of security leaders saying human-centric cybersecurity solutions will be essential over the next five years, the industry is clearly shifting towards more sophisticated threat detection and risk mitigation techniques.
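The report does not describe any vendor's analytics, but the underlying idea can be shown with a toy example: compare each user's security-event count with a workforce baseline and flag the outliers, echoing the finding that a small cohort drives most incidents. All data below is illustrative.

```python
from statistics import median

# Hypothetical security events logged per user over one week.
weekly_events = {
    "alice": 2, "bob": 1, "carol": 19, "dave": 3, "eve": 24, "frank": 2,
}

# Flag anyone generating more than three times the median event count;
# a real platform would use far richer behavioural signals.
baseline = median(weekly_events.values())
flagged = {user: count for user, count in weekly_events.items()
           if count > 3 * baseline}

print(f"baseline={baseline}, flagged={flagged}")
# baseline=2.5, flagged={'carol': 19, 'eve': 24}
```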

Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is built on DeepSeek's model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, the achievement marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology, and Industry, highlighted during the launch ceremony that AI is at the vanguard of a new industrial and technological revolution, and that Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, which include the construction of an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding scheme, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not just to enterprises and individuals, but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, created with substantially fewer computing resources than those used by larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning research methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to build the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 fiscal year, with funding set aside for the first five years to cover operational costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

Generative AI in Cybersecurity: A Double-Edged Sword

Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.

The Role of AI in Cybersecurity

According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:

  • Identify risks (39%)
  • Analyze threat intelligence (39%)
  • Detect and prioritize threats (35%)

However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:

  • 32% of attackers use AI to make attacks more effective.
  • 28% use AI to increase the volume of attacks.
  • 23% use AI to develop entirely new types of threats.

This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).

Challenges and Opportunities in Cybersecurity

One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.

Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:

  • 91% of organizations are increasing security training for legal and compliance staff.
  • 90% are enhancing training for security teams.

To strengthen cyber defenses, experts emphasize the importance of foundational practices:

  1. Strong Passwords and MFA: Poor password security is linked to 80% of data breaches. Companies are encouraged to use password managers and enforce robust password policies (see the sketch after this list).
  2. Regular Cybersecurity Training: Educating employees on risk management and security practices, such as using antivirus software and maintaining firewalls, can significantly reduce vulnerabilities.
  3. Third-Party Vendor Assessments: Organizations must evaluate third-party vendors for security risks, as breaches through these channels can expose even the most secure systems.
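To make the first practice concrete, here is a minimal sketch using only Python's standard library: salted scrypt hashing for password storage plus an RFC 6238 time-based one-time password (TOTP) check for the MFA step. Parameters and the example password are illustrative, not a production policy.

```python
import base64
import hashlib
import hmac
import os
import struct
import time

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, as generated by authenticator apps."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

salt, digest = hash_password("correct horse battery staple")
secret = base64.b32encode(os.urandom(10)).decode()   # shared at MFA enrolment
assert verify_password("correct horse battery staple", salt, digest)
print("current MFA code:", totp(secret))   # both factors must pass
```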

Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.