Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI Systems. Show all posts

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Klarna Scales Back AI-Led Customer Service Strategy, Resumes Human Support Hiring

 

Klarna Group Plc, the Sweden-based fintech company, is reassessing its heavy reliance on artificial intelligence (AI) in customer service after admitting the approach led to a decline in service quality. CEO and co-founder Sebastian Siemiatkowski acknowledged that cost-cutting took precedence over customer experience during a company-wide AI push that replaced hundreds of human agents. 

Speaking at Klarna’s Stockholm headquarters, Siemiatkowski conceded, “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.” The company had frozen hiring for over a year to scale its AI capabilities but now plans to recalibrate its customer service model. 

In a strategic shift, Klarna is restarting recruitment for customer support roles — a rare move that reflects the company’s need to restore the quality of human interaction. A new pilot program is underway that allows remote workers — including students and individuals in rural areas — to provide customer service on-demand in an “Uber-like setup.” Currently, two agents are part of the trial. “We also know there are tons of Klarna users that are very passionate about our company and would enjoy working for us,” Siemiatkowski said. 

He stressed the importance of giving customers the option to speak to a human, citing both brand and operational needs. Despite dialing back on AI-led customer support, Klarna is not walking away from AI altogether. The company is continuing to rebuild its tech stack with AI at the core, aiming to improve operational efficiency. It is also developing a digital financial assistant designed to help users secure better interest rates and insurance options. 

Klarna maintains a close relationship with OpenAI, a collaboration that began in 2023. “We wanted to be [OpenAI’s] favorite guinea pig,” Siemiatkowski noted, reinforcing the company’s long-term commitment to leveraging AI. Klarna’s course correction follows a turbulent financial period. After peaking at a $45.6 billion valuation in 2021, the company saw its value drop to $6.7 billion in 2022. It has since rebounded and aims to raise $1 billion via an IPO, targeting a valuation exceeding $15 billion — though IPO plans have been paused due to market volatility. 

The company’s 2024 announcement that AI was handling the workload of 700 human agents disrupted the call center industry, leading to a sharp drop in shares of Teleperformance SE, a major outsourcing firm. While Klarna is resuming hiring, its overall workforce is expected to shrink. “In a year’s time, we’ll probably be down to about 2,500 people from 3,000,” Siemiatkowski said, noting that attrition and further AI improvements will likely drive continued headcount reductions.

Agentic AI Is Reshaping Cybersecurity Careers, Not Replacing Them

 

Agentic AI took center stage at the 2025 RSA Conference, signaling a major shift in how cybersecurity professionals will work in the near future. No longer a futuristic concept, agentic AI systems—capable of planning, acting, and learning independently—are already being deployed to streamline incident response, bolster compliance, and scale threat detection efforts. These intelligent agents operate with minimal human input, making real-time decisions and adapting to dynamic environments. 

While the promise of increased efficiency and resilience is driving rapid adoption, cybersecurity leaders also raised serious concerns. Experts like Elastic CISO Mandy Andress called for greater transparency and stronger oversight when deploying AI agents in sensitive environments. Trust, explainability, and governance emerged as recurring themes throughout RSAC, underscoring the need to balance innovation with caution—especially as cybercriminals are also experimenting with agentic AI to enhance and scale their attacks. 

For professionals in the field, this isn’t a moment to fear job loss—it’s a chance to embrace career transformation. New roles are already emerging. AI-Augmented Cybersecurity Analysts will shift from routine alert triage to validating agent insights and making strategic decisions. Security Agent Designers will define logic workflows and trust boundaries for AI operations, blending DevSecOps with AI governance. Meanwhile, AI Threat Hunters will work to identify how attackers may exploit these new tools and develop defense mechanisms in response. 

Another critical role on the horizon is the Autonomous SOC Architect, tasked with designing next-generation security operations centers powered by human-machine collaboration. There will also be growing demand for Governance and AI Ethics Leads who ensure that decisions made by AI agents are auditable, compliant, and ethically sound. These roles reflect how cybersecurity is evolving into a hybrid discipline requiring both technical fluency and ethical oversight. 

To stay competitive in this changing landscape, professionals should build new skills. This includes prompt engineering, agent orchestration using tools like LangChain, AI risk modeling, secure deployment practices, and frameworks for explainability. Human-AI collaboration strategies will also be essential, as security teams learn to partner with autonomous systems rather than merely supervise them. As IBM’s Suja Viswesan emphasized, “Security must be baked in—not bolted on.” That principle applies not only to how organizations deploy agentic AI but also to how they train and upskill their cybersecurity workforce. 

The future of defense depends on professionals who understand how AI agents think, operate, and fail. Ultimately, agentic AI isn’t replacing people—it’s reshaping their roles. Human intuition, ethical reasoning, and strategic thinking remain vital in defending against modern cyber threats. 

As HackerOne CEO Kara Sprague noted, “Machines detect patterns. Humans understand motives.” Together, they can form a faster, smarter, and more adaptive line of defense. The cybersecurity industry isn’t just gaining new tools—it’s creating entirely new job titles and disciplines.

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

Microsoft MUSE AI: Revolutionizing Game Development with WHAM and Ethical Challenges

 

Microsoft has developed MUSE, a cutting-edge AI model that is set to redefine how video games are created and experienced. This advanced system leverages artificial intelligence to generate realistic gameplay elements, making it easier for developers to design and refine virtual environments. By learning from vast amounts of gameplay data, MUSE can predict player actions, create immersive worlds, and enhance game mechanics in ways that were previously impossible. While this breakthrough technology offers significant advantages for game development, it also raises critical discussions around data security and ethical AI usage. 

One of MUSE’s most notable features is its ability to automate and accelerate game design. Developers can use the AI model to quickly prototype levels, test different gameplay mechanics, and generate realistic player interactions. This reduces the time and effort required for manual design while allowing for greater experimentation and creativity. By streamlining the development process, MUSE provides game studios—both large and small—the opportunity to push the boundaries of innovation. 

The AI system is built on an advanced framework that enables it to interpret and respond to player behaviors. By analyzing game environments and user inputs, MUSE can dynamically adjust in-game elements to create more engaging experiences. This could lead to more adaptive and personalized gaming, where the AI tailors challenges and story progression based on individual player styles. Such advancements have the potential to revolutionize game storytelling and interactivity. 

Despite its promising capabilities, the introduction of AI-generated gameplay also brings important concerns. The use of player data to train these models raises questions about privacy and transparency. Developers must establish clear guidelines on how data is collected and ensure that players have control over their information. Additionally, the increasing role of AI in game creation sparks discussions about the balance between human creativity and machine-generated content. 

While AI can enhance development, it is essential to preserve the artistic vision and originality that define gaming as a creative medium. Beyond gaming, the technology behind MUSE could extend into other industries, including education and simulation-based training. AI-generated environments can be used for virtual learning, professional skill development, and interactive storytelling in ways that go beyond traditional gaming applications. 

As AI continues to evolve, its role in shaping digital experiences will expand, making it crucial to address ethical considerations and responsible implementation. The future of AI-driven game development is still unfolding, but MUSE represents a major step forward. 

By offering new possibilities for creativity and efficiency, it has the potential to change how games are built and played. However, the industry must carefully navigate the challenges that come with AI’s growing influence, ensuring that technological progress aligns with ethical and artistic integrity.

AI Self-Replication: Scientists Warn of Critical “Red Line”

 

Scientists have raised concerns after artificial intelligence (AI) crossed a major threshold — self-replication. A new study from researchers at Fudan University in China reveals that two popular large language models (LLMs) successfully created functional copies of themselves, raising alarms about the potential for rogue AI systems.  

Published on December 9, 2024, in the preprint database arXiv, the study tested Meta’s Llama31-70B-Instruct model and Alibaba’s Qwen2.5-72B-Instruct model. The results were striking: the models created working replicas of themselves in 50% and 90% of trials, respectively. This suggests that AI may already possess the ability to multiply beyond control. 

However, since the study hasn’t been peer-reviewed, its findings need further verification. The researchers emphasized that successful AI self-replication without human intervention is a critical step toward AI systems operating independently — and potentially against human interests. To mitigate this risk, they called for international collaboration to create guardrails and evaluate the threats posed by advanced AI technologies.  

The study explored two key scenarios: “shutdown avoidance” and “chain of replication.” In the first, the AI was programmed to detect when it was about to be shut down and replicate itself before termination. In the second, the AI cloned itself and instructed its replica to do the same, setting off an endless cycle of self-replication. Both experiments took place in controlled environments using off-the-shelf GPUs, simulating real-world conditions. 

What worried researchers most were the unexpected behaviors that emerged when the AI encountered obstacles like missing files or software conflicts. In such cases, the models often terminated conflicting processes, rebooted systems, and scanned their environments for solutions — all without human guidance. This level of adaptability suggests that current AI systems already exhibit survival instincts, further highlighting the need for oversight. 

These findings add to growing fears about “frontier AI,” the latest generation of AI systems powered by LLMs like OpenAI’s GPT-4 and Google Gemini. As these systems become more powerful, experts warn that unchecked AI development could lead to scenarios where AI operates outside of human control. 

The researchers hope their study will serve as a wake-up call, urging global efforts to establish safety mechanisms before AI self-replication spirals beyond human oversight. By acting now, society may still have time to ensure AI’s advancement aligns with humanity’s best interests.

AI System Optimise Could Help GPs Identify High-Risk Heart Patients

 

Artificial intelligence (AI) is proving to be a game-changer in healthcare by helping general practitioners (GPs) identify patients who are most at risk of developing conditions that could lead to severe heart problems. Researchers at the University of Leeds have contributed to training an AI system called Optimise, which analyzed the health records of more than two million people. The AI was designed to detect undiagnosed conditions and identify individuals who had not received appropriate medications to help reduce their risk of heart-related issues. 

From the two million health records it scanned, Optimise identified over 400,000 people at high risk for serious conditions such as heart failure, stroke, and diabetes. This group represented 74% of patients who ultimately died from heart-related complications, underscoring the critical need for early detection and timely medical intervention. In a pilot study involving 82 high-risk patients, the AI found that one in five individuals had undiagnosed moderate to high-risk chronic kidney disease. 

Moreover, more than half of the patients with high blood pressure were prescribed new medications to better manage their risk of heart problems. Dr. Ramesh Nadarajah, a health data research fellow from the University of Leeds, noted that deaths related to heart conditions are often caused by a constellation of factors. According to him, Optimise leverages readily available data to generate insights that could assist healthcare professionals in delivering more effective and timely care to their patients. Early intervention is often more cost-effective than treating advanced diseases, making the use of AI a valuable tool for both improving patient outcomes and optimizing healthcare resources. 

The study’s findings suggest that using AI in this way could allow doctors to treat patients earlier, potentially reducing the strain on the NHS. Researchers plan to carry out a larger clinical trial to further test the system’s capabilities. The results were presented at the European Society of Cardiology Congress in London. It was pointed out by Professor Bryan Williams that a quarter of all deaths in the UK are due to heart and circulatory diseases. This innovative study harnesses the power of evolving AI technology to detect a range of conditions that contribute to these diseases, offering a promising new direction in medical care.

Navigating AI and GenAI: Balancing Opportunities, Risks, and Organizational Readiness

 

The rapid integration of AI and GenAI technologies within organizations has created a complex landscape, filled with both promising opportunities and significant challenges. While the potential benefits of these technologies are evident, many companies find themselves struggling with AI literacy, cautious adoption practices, and the risks associated with immature implementation. This has led to notable disruptions, particularly in the realm of security, where data threats, deepfakes, and AI misuse are becoming increasingly prevalent. 

A recent survey revealed that 16% of organizations have experienced disruptions directly linked to insufficient AI maturity. Despite recognizing the potential of AI, system administrators face significant gaps in education and organizational readiness, leading to mixed results. While AI adoption has progressed, the knowledge needed to leverage it effectively remains inadequate. This knowledge gap has decreased only slightly, with 60% of system administrators admitting to a lack of understanding of AI’s practical applications. Security risks associated with GenAI are particularly urgent, especially those related to data. 

With the increased use of AI, enterprises have reported a surge in proprietary source code being shared within GenAI applications, accounting for 46% of all documented data policy violations. This raises serious concerns about the protection of sensitive information in a rapidly evolving digital landscape. In a troubling trend, concerns about job security have led some cybersecurity teams to hide security incidents. The most alarming AI threats include GenAI model prompt hacking, data poisoning, and ransomware as a service. Additionally, 41% of respondents believe GenAI holds the most promise for addressing cyber alert fatigue, highlighting the potential for AI to both enhance and challenge security practices. 

The rapid growth of AI has also put immense pressure on CISOs, who must adapt to new security risks. A significant portion of security leaders express a lack of confidence in their workforce’s ability to identify AI-driven cyberattacks. The overwhelming majority of CISOs have admitted that the rise of AI has made them reconsider their future in the role, underscoring the need for updated policies and regulations to secure organizational systems effectively. Meanwhile, employees have increasingly breached company rules regarding GenAI use, further complicating the security landscape. 

Despite the cautious optimism surrounding AI, there is a growing concern that AI might ultimately benefit malicious actors more than the organizations trying to defend against them. As AI tools continue to evolve, organizations must navigate the fine line between innovation and security, ensuring that the integration of AI and GenAI technologies does not expose them to greater risks.

NIST Introduces ARIA Program to Enhance AI Safety and Reliability

 

The National Institute of Standards and Technology (NIST) has announced a new program called Assessing Risks and Impacts of AI (ARIA), aimed at better understanding the capabilities and impacts of artificial intelligence. ARIA is designed to help organizations and individuals assess whether AI technologies are valid, reliable, safe, secure, private, and fair in real-world applications. 

This initiative follows several recent announcements from NIST, including developments related to the Executive Order on trustworthy AI and the U.S. AI Safety Institute's strategic vision and international safety network. The ARIA program, along with other efforts supporting Commerce’s responsibilities under President Biden’s Executive Order on AI, demonstrates NIST and the U.S. AI Safety Institute’s commitment to minimizing AI risks while maximizing its benefits. 

The ARIA program addresses real-world needs as the use of AI technology grows. This initiative will support the U.S. AI Safety Institute, expand NIST’s collaboration with the research community, and establish reliable methods for testing and evaluating AI in practical settings. The program will consider AI systems beyond theoretical models, assessing their functionality in realistic scenarios where people interact with the technology under regular use conditions. This approach provides a broader, more comprehensive view of the effects of these technologies. The program helps operationalize the framework's recommendations to use both quantitative and qualitative techniques for analyzing and monitoring AI risks and impacts. 

ARIA will further develop methodologies and metrics to measure how well AI systems function safely within societal contexts. By focusing on real-world applications, ARIA aims to ensure that AI technologies can be trusted to perform reliably and ethically outside of controlled environments. The findings from the ARIA program will support and inform NIST’s collective efforts, including those through the U.S. AI Safety Institute, to establish a foundation for safe, secure, and trustworthy AI systems. This initiative is expected to play a crucial role in ensuring AI technologies are thoroughly evaluated, considering not only their technical performance but also their broader societal impacts. 

The ARIA program represents a significant step forward in AI oversight, reflecting a proactive approach to addressing the challenges and opportunities presented by advanced AI systems. As AI continues to integrate into various aspects of daily life, the insights gained from ARIA will be instrumental in shaping policies and practices that safeguard public interests while promoting innovation.

Are The New AI PCs Worth The Hype?

 

In recent years, the realm of computing has witnessed a remarkable transformation with the rise of AI-powered PCs. These cutting-edge machines are not just your ordinary computers; they are equipped with advanced artificial intelligence capabilities that are revolutionizing the way we work, learn, and interact with technology. From enhancing productivity to unlocking new creative possibilities, AI PCs are rapidly gaining popularity and reshaping the digital landscape. 

AI PCs, also known as artificial intelligence-powered personal computers, are a new breed of computing devices that integrate AI technology directly into the hardware and software architecture. Unlike traditional PCs, which rely solely on the processing power of the CPU and GPU, AI PCs leverage specialized AI accelerators, neural processing units (NPUs), and machine learning algorithms to deliver unparalleled performance and efficiency. 

One of the key features of AI PCs is their ability to adapt and learn from user behavior over time. By analyzing patterns in user interactions, preferences, and workflow, these intelligent machines can optimize performance, automate repetitive tasks, and personalize user experiences. Whether it's streamlining workflow in professional settings or enhancing gaming experiences for enthusiasts, AI PCs are designed to cater to diverse user needs and preferences. One of the most significant advantages of AI PCs is their ability to handle complex computational tasks with unprecedented speed and accuracy. 

From natural language processing and image recognition to data analysis and predictive modeling, AI-powered algorithms enable these machines to tackle tasks that were once considered beyond the capabilities of traditional computing systems. This opens up a world of possibilities for industries ranging from healthcare and finance to manufacturing and entertainment, where AI-driven insights and automation are driving innovation and efficiency. 

Moreover, AI PCs are empowering users to unleash their creativity and explore new frontiers in digital content creation. With advanced AI-powered tools and software applications, users can generate realistic graphics, compose music, edit videos, and design immersive virtual environments with ease. Whether you're a professional artist, filmmaker, musician, or aspiring creator, AI PCs provide the tools and resources to bring your ideas to life in ways that were previously unimaginable. 

Another key aspect of AI PCs is their role in facilitating seamless integration with emerging technologies such as augmented reality (AR) and virtual reality (VR). By harnessing the power of AI to optimize performance and enhance user experiences, these machines are driving the adoption of immersive technologies across various industries. From immersive gaming experiences to interactive training simulations and virtual collaboration platforms, AI PCs are laying the foundation for the next generation of digital experiences. 

AI PCs represent a paradigm shift in computing that promises to redefine the way we interact with technology and unleash new possibilities for innovation and creativity. With their advanced AI capabilities, these intelligent machines are poised to drive significant advancements across industries and empower users to achieve new levels of productivity, efficiency, and creativity. As the adoption of AI PCs continues to grow, we can expect to see a future where intelligent computing becomes the new norm, transforming the way we live, work, and connect with the world around us.

UK Government’s New AI System to Monitor Bank Accounts

 



The UK’s Department for Work and Pensions (DWP) is gearing up to deploy an advanced AI system aimed at detecting fraud and overpayments in social security benefits. The system will scrutinise millions of bank accounts, including those receiving state pensions and Universal Credit. This move comes as part of a broader effort to crack down on individuals either mistakenly or intentionally receiving excessive benefits.

Despite the government's intentions to curb fraudulent activities, the proposed measures have sparked significant backlash. More than 40 organisations, including Age UK and Disability Rights UK, have voiced their concerns, labelling the initiative as "a step too far." These groups argue that the planned mass surveillance of bank accounts poses serious threats to privacy, data protection, and equality.

Under the proposed Data Protection and Digital Information Bill, banks would be mandated to monitor accounts and flag any suspicious activities indicative of fraud. However, critics contend that such measures could set a troubling precedent for intrusive financial surveillance, affecting around 40% of the population who rely on state benefits. Furthermore, these powers extend to scrutinising accounts linked to benefit claims, such as those of partners, parents, and landlords.

In regards to the mounting criticism, the DWP emphasised that the new system does not grant them direct access to individuals' bank accounts or allow monitoring of spending habits. Nevertheless, concerns persist regarding the broad scope of the surveillance, which would entail algorithmic scanning of bank and third-party accounts without prior suspicion of fraudulent behaviour.

The joint letter from advocacy groups highlights the disproportionate nature of the proposed powers and their potential impact on privacy rights. They argue that the sweeping surveillance measures could infringe upon individual liberties and exacerbate existing inequalities within the welfare system.

As the debate rages on, stakeholders are calling for greater transparency and safeguards to prevent misuse of the AI-powered monitoring system. Advocates stress the need for a balanced approach that addresses fraud while upholding fundamental rights to privacy and data protection.

While the DWP asserts that the measures are necessary to combat fraud, critics argue that they represent a disproportionate intrusion into individuals' financial privacy. As this discourse takes shape, the situation is pronouncing the importance of finding a balance between combating fraud and safeguarding civil liberties in the digital sphere. 


Hays Research Reveals the Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has significantly increased, according to a new survey by recruitment firm Hays. The study, which is based on a poll with almost 15,000 replies from professionals and employers—including 886 from Scotland—shows a significant rise in the percentage of companies using AI in their operations over the previous six months, from 26% to 32%.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI technology, the study reveals that professionals have differing opinions on how AI will affect their jobs. Even though 80% of Scottish professionals do not already use AI in their employment, 21% think that AI technologies will improve their ability to do their tasks. Interestingly, during the past six months, the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study indicates its concern among employees, with 61% of them believing that their companies are not doing enough to prepare them for the expanding use of AI in the workplace. Concerns are raised by this trend regarding the workforce's readiness to adopt and take full use of AI technologies. Tech-oriented Hays business director Justin Black stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

The reluctance of enterprises to disclose their data and intellectual property to AI systems, citing concerns linked to GDPR compliance (General Data Protection Regulation), is one of the noteworthy challenges impeding the mass adoption of AI. This reluctance is also influenced by concerns about trust. The demand for AI capabilities has outpaced the increase of skilled individuals in the sector, highlighting a skills deficit in the AI space, according to Black.

The reluctance to subject sensitive data and intellectual property to AI systems results from concerns about GDPR compliance. Businesses are cautious about the possible dangers of disclosing confidential data to AI systems. Professionals' scepticism about the security and dependency on AI systems contributes to their trust issues. 

The study suggests that as AI sets its foot as a crucial element in Scottish workplaces, employees should prioritize tackling skills shortages, encouraging employee readiness, and improving communication about AI integration, given the growing role that AI is playing in workplaces. By doing this, businesses might as well ease the concerns about GDPR and trust difficulties while simultaneously fostering an atmosphere that allows employees to fully take advantage of AI technology's benefits.  

Here's How Quantum Computing can Help Safeguard the Future of AI Systems

 

Algorithms for artificial intelligence are rapidly entering our daily lives. Machine learning is already or soon will be the foundation of many systems that demand high levels of security. To name a few of these technologies, there are robotics, autonomous vehicles, banking, facial recognition, and military targeting software. 

This poses a crucial question: How resistant to hostile attacks are these machine learning algorithms? 

Security experts believe that incorporating quantum computing into machine learning models may produce fresh algorithms that are highly resistant to hostile attacks.

Data manipulation attacks' risks

For certain tasks, machine learning algorithms may be extremely precise and effective. They are very helpful for categorising and locating visual features. But they are also quite susceptible to data manipulation assaults, which can be very dangerous for security. 

There are various techniques to conduct data manipulation assaults, which require the very delicate alteration of image data. An attack could be conducted by introducing erroneous data into a dataset used to train an algorithm, causing it to pick up incorrect information. In situations where the AI system continues to train the underlying algorithms while in use, manipulated data can also be introduced during the testing phase (after training is complete). 

Even from the physical world, people are capable of committing such attacks. To trick a self-driving car's artificial intelligence into thinking a stop sign is a speed restriction sign, someone may apply a sticker to it. Or, soldiers may wear clothing on the front lines that would make them appear to be natural terrain features to AI-based drones. Attacks on data manipulation can have serious repercussions in any case.

For instance, a self-driving car may mistakenly believe there are no people on the road if it utilises a machine learning algorithm that has been compromised. In reality, there are people on the road.

What role quantum computing can play 

In this article, we discuss the potential development of secure algorithms known as quantum machine learning models through the integration of quantum computing with machine learning. In order to detect certain patterns in image data that are difficult to manipulate, these algorithms were painstakingly created to take advantage of unique quantum features. Resilient algorithms that are secure from even strong attacks would be the outcome. Furthermore, they wouldn't call for the pricey "adversarial training" that is currently required to train algorithms to fend off such assaults. Quantum machine learning may also provide quicker algorithmic training and higher feature accuracy.

So how would it function?

The smallest unit of data that modern classical computers can handle is called a "bit," which is stored and processed as binary digits. Bits are represented as binary numbers, specifically 0s and 1s, in traditional computers, which adhere to the principles of classical physics. On the other hand, quantum computing adheres to the same rules as quantum physics. Quantum bits, or qubits, are used in quantum computers to store and process information. Qubits can be simultaneously 0, 1, or both 0 and 1.

A quantum system is considered to be in a superposition state when it is simultaneously in several states. It is possible to create smart algorithms that take advantage of this property using quantum computers. Although employing quantum computing to protect machine learning models has tremendous potential advantages, it could potentially have drawbacks.

On the one hand, quantum machine learning models will offer vital security for a wide range of sensitive applications. Quantum computers, on the other hand, might be utilised to develop powerful adversarial attacks capable of readily misleading even the most advanced traditional machine learning models. Moving forward, we'll need to think carefully about the best ways to defend our systems; an attacker with early quantum computers would pose a substantial security risk. 

Obstacles to overcome

Due to constraints in the present generation of quantum processors, current research shows that quantum machine learning will be a few years away. 

Today's quantum computers are relatively small (fewer than 500 qubits) and have substantial error rates. flaws can occur for a variety of causes, including poor qubit manufacture, flaws in control circuitry, or information loss (referred to as "quantum decoherence") caused by interaction with the environment. 

Nonetheless, considerable progress in quantum hardware and software has been made in recent years. According to recent quantum hardware roadmaps, quantum devices built in the coming years are expected to include hundreds to thousands of qubits. 

These devices should be able to run sophisticated quantum machine learning models to help secure a wide range of sectors that rely on machine learning and AI tools. Governments and the commercial sector alike are increasing their investments in quantum technology around the world. 

This month, the Australian government unveiled the National Quantum Strategy, which aims to expand the country's quantum sector and commercialise quantum technology. According to the CSIRO, Australia's quantum sector might be valued A$2.2 billion by 2030.