
UK Government’s New AI System to Monitor Bank Accounts

 



The UK’s Department for Work and Pensions (DWP) is gearing up to deploy an advanced AI system aimed at detecting fraud and overpayments in social security benefits. The system will scrutinise millions of bank accounts, including those receiving state pensions and Universal Credit. This move comes as part of a broader effort to crack down on individuals either mistakenly or intentionally receiving excessive benefits.

Despite the government's intentions to curb fraudulent activities, the proposed measures have sparked significant backlash. More than 40 organisations, including Age UK and Disability Rights UK, have voiced their concerns, labelling the initiative as "a step too far." These groups argue that the planned mass surveillance of bank accounts poses serious threats to privacy, data protection, and equality.

Under the proposed Data Protection and Digital Information Bill, banks would be mandated to monitor accounts and flag any suspicious activities indicative of fraud. However, critics contend that such measures could set a troubling precedent for intrusive financial surveillance, affecting around 40% of the population who rely on state benefits. Furthermore, these powers extend to scrutinising accounts linked to benefit claims, such as those of partners, parents, and landlords.

In response to the mounting criticism, the DWP has emphasised that the new system does not grant it direct access to individuals' bank accounts or allow monitoring of spending habits. Nevertheless, concerns persist regarding the broad scope of the surveillance, which would entail algorithmic scanning of bank and third-party accounts without prior suspicion of fraudulent behaviour.

The joint letter from advocacy groups highlights the disproportionate nature of the proposed powers and their potential impact on privacy rights. They argue that the sweeping surveillance measures could infringe upon individual liberties and exacerbate existing inequalities within the welfare system.

As the debate rages on, stakeholders are calling for greater transparency and safeguards to prevent misuse of the AI-powered monitoring system. Advocates stress the need for a balanced approach that addresses fraud while upholding fundamental rights to privacy and data protection.

While the DWP asserts that the measures are necessary to combat fraud, critics argue that they represent a disproportionate intrusion into individuals' financial privacy. As the debate continues, the situation underscores the importance of striking a balance between combating fraud and safeguarding civil liberties in the digital sphere.


Hays Research Reveals the Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has risen significantly, according to a new survey by recruitment firm Hays. The study, based on a poll with almost 15,000 responses from professionals and employers, including 886 from Scotland, shows that the proportion of companies using AI in their operations has risen from 26% to 32% over the previous six months.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI adoption, the study reveals that professionals have differing opinions on how AI will affect their jobs. Although 80% of Scottish professionals do not yet use AI in their work, 21% think that AI tools will improve their ability to do their jobs. Notably, over the past six months, the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study also indicates concern among employees: 61% believe their companies are not doing enough to prepare them for the expanding use of AI in the workplace. This trend raises questions about the workforce's readiness to adopt and make full use of AI technologies. Justin Black, a business director at Hays specialising in technology, stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

One of the noteworthy challenges impeding the mass adoption of AI is the reluctance of enterprises to expose their data and intellectual property to AI systems, citing concerns about General Data Protection Regulation (GDPR) compliance. This reluctance is also influenced by concerns about trust. According to Black, demand for AI capabilities has outpaced the supply of skilled individuals in the sector, highlighting a skills deficit in the AI space.

The hesitation to expose sensitive data and intellectual property to AI systems stems from these GDPR concerns: businesses are wary of the possible dangers of disclosing confidential data to AI systems. Professionals' scepticism about the security and dependability of AI systems contributes to these trust issues.

The study suggests that, as AI becomes a crucial element of Scottish workplaces, employers should prioritise tackling skills shortages, encouraging employee readiness, and improving communication about AI integration. By doing so, businesses can ease concerns about GDPR and trust while fostering an environment that allows employees to take full advantage of AI technology's benefits.

Here's How Quantum Computing can Help Safeguard the Future of AI Systems

Artificial intelligence algorithms are rapidly entering our daily lives. Machine learning already is, or soon will be, the foundation of many systems that demand high levels of security. These technologies include robotics, autonomous vehicles, banking, facial recognition, and military targeting software.

This poses a crucial question: how resistant are these machine learning algorithms to adversarial attacks?

Security experts believe that incorporating quantum computing into machine learning models may produce new algorithms that are highly resistant to such attacks.

The risks of data manipulation attacks

For certain tasks, machine learning algorithms can be extremely precise and effective. They are very helpful for classifying and identifying visual features. But they are also highly susceptible to data manipulation attacks, which can pose serious security risks.

Data manipulation attacks, which involve very subtle alterations to image data, can be conducted in various ways. An attack might introduce erroneous data into the dataset used to train an algorithm, causing it to learn incorrect associations. Manipulated data can also be introduced during the testing phase (after training is complete), in situations where the AI system continues to train its underlying algorithms while in use.
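The training-data attack described above is often called data poisoning. A minimal toy sketch (hypothetical, not from the article) shows the idea: flipping a few training labels shifts the decision of a simple nearest-centroid classifier, so a point that should clearly belong to one class gets misclassified.

```python
# Toy data-poisoning sketch: flipping training labels corrupts
# a nearest-centroid classifier's decision boundary.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(data):
    """data: list of (feature, label) pairs -> per-class centroids."""
    by_label = {0: [], 1: []}
    for x, y in data:
        by_label[y].append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda y: abs(model[y] - x))

# Clean training set: class 0 clusters near 0, class 1 near 10.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]

# Poisoned copy: the attacker flips the labels of two class-0 points.
poisoned = [(0.0, 1), (1.0, 1), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]

clean_model = train(clean)
poisoned_model = train(poisoned)

# A point at 4.0 is clearly closer to the class-0 cluster...
print(predict(clean_model, 4.0))     # -> 0
# ...but the poisoned model misclassifies it as class 1.
print(predict(poisoned_model, 4.0))  # -> 1
```

Real attacks on deep networks are far subtler, perturbing pixel values imperceptibly rather than flipping labels outright, but the mechanism is the same: corrupted training data yields a corrupted decision rule.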

Such attacks can even be carried out in the physical world. Someone might apply a sticker to a stop sign to trick a self-driving car's AI into reading it as a speed limit sign. Or soldiers on the front lines might wear clothing that makes them appear to AI-based drones as natural terrain features. In any case, data manipulation attacks can have serious repercussions.

For instance, a self-driving car that relies on a compromised machine learning algorithm may fail to detect people who are actually on the road.

What role quantum computing can play 

Integrating quantum computing with machine learning could lead to secure algorithms known as quantum machine learning models. These algorithms would be designed to exploit unique quantum properties to detect patterns in image data that are difficult to manipulate, yielding resilient algorithms that are secure against even powerful attacks. Moreover, they would not require the expensive "adversarial training" currently used to teach algorithms to fend off such attacks. Quantum machine learning may also offer faster algorithmic training and higher feature accuracy.

So how would it function?

The smallest unit of data that modern classical computers handle is the "bit", stored and processed as binary digits. In traditional computers, which follow the principles of classical physics, bits are represented as binary numbers: 0s and 1s. Quantum computing, by contrast, follows the rules of quantum physics. Quantum computers store and process information using quantum bits, or qubits, which can be 0, 1, or both 0 and 1 simultaneously.

A quantum system is in a superposition state when it is in several states simultaneously. Quantum computers make it possible to design algorithms that exploit this property. Although employing quantum computing to protect machine learning models has tremendous potential advantages, it could also have drawbacks.
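The qubit and superposition ideas above can be illustrated with a tiny classical simulation (a sketch for intuition only; real quantum machine learning runs on dedicated frameworks and hardware). A qubit's state is a two-component vector of amplitudes; a Hadamard gate puts the basis state |0> into an equal superposition, and the Born rule turns amplitudes into measurement probabilities.

```python
# Toy single-qubit simulation: state vectors, the Hadamard gate,
# and measurement probabilities via the Born rule.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [a, b]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitudes."""
    return [abs(amp) ** 2 for amp in state]

zero = [1.0, 0.0]            # the classical-like basis state |0>
superposed = hadamard(zero)  # (|0> + |1>) / sqrt(2): a superposition

print(probabilities(zero))        # [1.0, 0.0] -> always measures 0
print(probabilities(superposed))  # roughly [0.5, 0.5] -> 0 or 1 equally
```

The superposed state is the "both 0 and 1" situation the text describes: neither outcome is determined until measurement, and each occurs with probability one half.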

On the one hand, quantum machine learning models will offer vital security for a wide range of sensitive applications. Quantum computers, on the other hand, might be utilised to develop powerful adversarial attacks capable of readily misleading even the most advanced traditional machine learning models. Moving forward, we'll need to think carefully about the best ways to defend our systems; an attacker with early quantum computers would pose a substantial security risk. 

Obstacles to overcome

Due to constraints in the current generation of quantum processors, research suggests that practical quantum machine learning is still a few years away.

Today's quantum computers are relatively small (fewer than 500 qubits) and have substantial error rates. Errors can occur for a variety of reasons, including poor qubit manufacturing, flaws in control circuitry, or information loss (known as "quantum decoherence") caused by interaction with the environment.

Nonetheless, considerable progress in quantum hardware and software has been made in recent years. According to recent quantum hardware roadmaps, quantum devices built in the coming years are expected to include hundreds to thousands of qubits. 

These devices should be able to run sophisticated quantum machine learning models to help secure a wide range of sectors that rely on machine learning and AI tools. Governments and the commercial sector alike are increasing their investments in quantum technology around the world. 

This month, the Australian government unveiled the National Quantum Strategy, which aims to expand the country's quantum sector and commercialise quantum technology. According to the CSIRO, Australia's quantum sector could be worth A$2.2 billion by 2030.