
Quantum Computing Could Deliver Business Value by 2028 with 100 Logical Qubits


Quantum computing may soon move from theory to commercial reality, as experts predict that machines with 100 logical qubits could start delivering tangible business value by 2028—particularly in areas like material science. Speaking at the Commercialising Quantum Computing conference in London, industry leaders suggested that such systems could outperform even high-performance computing in solving complex problems. 

Mark Jackson, senior quantum evangelist at Quantinuum, highlighted that quantum computing shows great promise in machine learning and generative AI applications. Unlike traditional systems that aim for precise answers, quantum computers excel at identifying patterns in large datasets, making them highly effective for cybersecurity and fraud detection. “Quantum computers can detect patterns that would be missed by other conventional computing methods,” Jackson said.

Financial services firms are also beginning to realize the potential of quantum computing. Phil Intallura, global head of quantum technologies at HSBC, said quantum technologies can help create more optimized financial models. “If you can show a solution using quantum technology that outperforms supercomputers, decision-makers are more likely to invest,” he noted. HSBC is already exploring quantum random number generation for use in simulations and risk modeling. 
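To make that concrete, here is a minimal sketch of how quantum-sourced random bytes could be dropped into a classical Monte Carlo risk simulation. The get_quantum_random_bytes function is a hypothetical placeholder for a quantum random number service, not a real API; everything else is standard Python and NumPy.

# Minimal sketch: seeding a classical Monte Carlo value-at-risk simulation
# with externally supplied (e.g. quantum-generated) entropy.
# get_quantum_random_bytes() is a hypothetical placeholder, not a real API.

import os
import numpy as np

def get_quantum_random_bytes(n: int) -> bytes:
    # Placeholder: in practice this would call a quantum random number
    # generation service. Here we fall back to the OS entropy pool.
    return os.urandom(n)

def simulate_portfolio_var(n_paths: int = 100_000,
                           mu: float = 0.0005,      # daily drift
                           sigma: float = 0.02,     # daily volatility
                           confidence: float = 0.99) -> float:
    # Seed NumPy's generator from the external entropy source.
    seed = int.from_bytes(get_quantum_random_bytes(8), "big")
    rng = np.random.default_rng(seed)

    # Simulate one-day portfolio returns and read off the loss quantile.
    returns = rng.normal(mu, sigma, size=n_paths)
    var = -np.quantile(returns, 1.0 - confidence)
    return var

if __name__ == "__main__":
    print(f"1-day 99% VaR: {simulate_portfolio_var():.4%}")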

In a recent collaborative study published in Nature, researchers from JPMorgan Chase, Quantinuum, Argonne and Oak Ridge national laboratories, and the University of Texas showcased Random Circuit Sampling (RCS) as a certified-randomness-expansion method, a task achievable only on a quantum computer. This work underscores how randomness from quantum systems can enhance classical financial simulations.

Quantum cryptography also featured prominently at the conference. Regulatory pressure is mounting on banks to replace RSA-2048 encryption with quantum-safe standards by 2035, following recommendations from the U.S. National Institute of Standards and Technology.

Santander’s Mark Carney emphasized the need for both software and hardware support to enable fast and secure post-quantum cryptography (PQC) in customer-facing applications. Gerard Mullery, interim CEO at Oxford Quantum Circuits, stressed the importance of integrating quantum computing into traditional enterprise workflows. As AI increasingly automates business processes, quantum platforms will need to support seamless orchestration within these ecosystems. 

While only a few companies have quantum machines with logical qubits today, the pace of development suggests that quantum computing could be transformative within the next few years. With increasing investment and maturing use cases, businesses are being urged to prepare for a hybrid future where classical and quantum systems work together to solve previously intractable problems.

AI in Cybersecurity Market Sees Rapid Growth as Network Security Leads 2024 Expansion


The integration of artificial intelligence into cybersecurity solutions has accelerated dramatically, driving the global market to an estimated value of $32.5 billion in 2024. This surge—an annual growth rate of 23%—reflects organizations’ urgent need to defend against increasingly sophisticated cyber threats. Traditional, signature-based defenses are no longer sufficient; today’s adversaries employ polymorphic malware, fileless attacks, and automated intrusion tools that can evade static rule sets. AI’s ability to learn patterns, detect anomalies in real time, and respond autonomously has become indispensable. 

Among AI-driven cybersecurity segments, network security saw the most significant expansion last year, accounting for nearly 40% of total AI security revenues. AI-enhanced intrusion prevention systems and next-generation firewalls leverage machine learning models to inspect vast streams of traffic, distinguishing malicious behavior from legitimate activity. These solutions can automatically quarantine suspicious connections, adapt to novel malware variants, and provide security teams with prioritized alerts—reducing mean time to detection from days to mere minutes. As more enterprises adopt zero-trust architectures, AI’s role in continuously verifying device and user behavior on the network has become a cornerstone of modern defensive strategies. 
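As a rough illustration of the kind of anomaly detection these systems rely on, the sketch below trains an Isolation Forest on a few synthetic flow-level features and flags outliers. The features, values, and contamination rate are invented for the example rather than drawn from any real product.

# Illustrative sketch: flagging anomalous network flows with an Isolation Forest.
# The synthetic features (bytes sent, packets, duration) are stand-ins for
# real flow telemetry; this is not a production detection pipeline.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate mostly benign flows plus a few high-volume outliers.
benign = rng.normal(loc=[5_000, 40, 2.0], scale=[1_500, 10, 0.5], size=(2_000, 3))
suspicious = rng.normal(loc=[90_000, 600, 30.0], scale=[5_000, 50, 5.0], size=(10, 3))
flows = np.vstack([benign, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(flows)          # -1 = anomaly, 1 = normal

anomalies = np.where(labels == -1)[0]
print(f"Flagged {len(anomalies)} of {len(flows)} flows for review")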

Endpoint security followed closely, representing roughly 25% of the AI cybersecurity market in 2024. AI-powered endpoint detection and response (EDR) platforms monitor processes, memory activity, and system calls on workstations and servers. By correlating telemetry across thousands of devices, these platforms can identify subtle indicators of compromise—such as unusual parent‑child process relationships or command‑line flags—before attackers achieve persistence. The rise of remote work has only heightened demand: with employees connecting from diverse locations and personal devices, AI’s context-aware threat hunting capabilities help maintain comprehensive visibility across decentralized environments. 
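A toy version of one such heuristic, flagging rare parent-child process pairs and suspicious command-line flags, might look like the following sketch; the baseline counts and the sample event are made up for illustration.

# Toy EDR-style heuristic: flag process launches whose parent/child pairing
# is rarely or never seen in the environment's baseline. The baseline and
# event below are invented for illustration.

from collections import Counter

# Baseline of (parent, child) pairs observed across the fleet.
baseline = Counter({
    ("explorer.exe", "chrome.exe"): 12_000,
    ("services.exe", "svchost.exe"): 50_000,
    ("chrome.exe", "chrome.exe"): 30_000,
})

RARE_THRESHOLD = 5  # pairs seen fewer times than this are suspicious

def score_event(parent: str, child: str, cmdline: str) -> list[str]:
    findings = []
    if baseline[(parent, child)] < RARE_THRESHOLD:
        findings.append(f"rare parent/child pair: {parent} -> {child}")
    if "-enc" in cmdline.lower() or "downloadstring" in cmdline.lower():
        findings.append("suspicious command-line flags")
    return findings

# Example: Word spawning PowerShell with an encoded command.
alerts = score_event("winword.exe", "powershell.exe",
                     "powershell.exe -enc SQBFAFgA")
print(alerts or "no findings")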

Identity and access management (IAM) solutions incorporating AI now capture about 20% of the market. Behavioral analytics engines analyze login patterns, device characteristics, and geolocation data to detect risky authentication attempts. Rather than relying solely on static multi-factor prompts, adaptive authentication methods adjust challenge levels based on real-time risk scores, blocking illicit logins while minimizing friction for legitimate users. This dynamic approach addresses credential stuffing and account takeover attacks, which accounted for over 30% of cyber incidents in 2024.
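The basic shape of such adaptive authentication can be sketched as below: a few risk signals are combined into a score that decides whether to allow the login, require a step-up challenge, or block it. The signals, weights, and thresholds are arbitrary placeholders, not values from any vendor.

# Minimal sketch of adaptive (risk-based) authentication: a handful of
# signals feed a score that decides whether to allow, step up, or block.
# Weights and thresholds are arbitrary placeholders, not tuned values.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    new_device: bool
    unusual_geo: bool
    impossible_travel: bool
    failed_attempts_last_hour: int

def risk_score(a: LoginAttempt) -> float:
    score = 0.0
    score += 0.3 if a.new_device else 0.0
    score += 0.3 if a.unusual_geo else 0.0
    score += 0.6 if a.impossible_travel else 0.0
    score += min(a.failed_attempts_last_hour, 5) * 0.1
    return min(score, 1.0)

def decide(a: LoginAttempt) -> str:
    s = risk_score(a)
    if s < 0.3:
        return "allow"            # low friction for legitimate users
    if s < 0.7:
        return "step-up MFA"      # additional challenge
    return "block and alert"      # likely credential stuffing / takeover

print(decide(LoginAttempt(new_device=True, unusual_geo=True,
                          impossible_travel=False,
                          failed_attempts_last_hour=3)))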

Cloud security, covering roughly 15% of the AI cybersecurity spend, is another high-growth area. With workloads distributed across public, private, and hybrid clouds, AI-driven cloud security posture management (CSPM) tools continuously scan configurations and user activities for misconfigurations, vulnerable APIs, and data-exfiltration attempts. Automated remediation workflows can instantly correct risky settings, enforce encryption policies, and isolate compromised workloads, ensuring compliance with evolving regulations such as GDPR and CCPA.
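A stripped-down, hypothetical CSPM-style check is sketched below: scan resource configuration records for risky settings and apply an automated fix. Real tools work against cloud provider APIs and far richer policy sets.

# Stripped-down CSPM-style sketch: scan (hypothetical) cloud resource
# configurations for risky settings and auto-remediate them. Real tools
# work against provider APIs and far richer policy sets.

resources = [
    {"id": "bucket-logs", "public_read": True,  "encryption": "none"},
    {"id": "bucket-app",  "public_read": False, "encryption": "aes256"},
]

def scan_and_remediate(resources):
    findings = []
    for r in resources:
        if r["public_read"]:
            findings.append((r["id"], "public read access"))
            r["public_read"] = False          # automated remediation
        if r["encryption"] == "none":
            findings.append((r["id"], "encryption disabled"))
            r["encryption"] = "aes256"        # enforce encryption policy
    return findings

for resource_id, issue in scan_and_remediate(resources):
    print(f"[remediated] {resource_id}: {issue}")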

Looking ahead, analysts predict the AI in cybersecurity market will exceed $60 billion by 2028, as vendors integrate generative AI for automated playbook creation and incident response orchestration. Organizations that invest in AI‑powered defenses will gain a competitive edge, enabling proactive threat hunting and resilient operations against a backdrop of escalating cyber‑threat complexity.

Here's How Google Willow Chip Will Impact Startup Innovation in 2025


As technology advances at an unprecedented rate, the recent unveiling of Willow, Google's quantum computing chip, ushers in a new era for startups. Willow's computing capabilities (105 qubits, roughly double those of its predecessor, Sycamore) allow it to complete certain tasks unimaginably faster than today's most powerful supercomputers. This milestone is set to significantly impact numerous sectors, presenting startups with a rare opportunity to innovate and tackle complex problems.

Among the Willow chip's most significant implications is its ability to tackle complex problems that earlier technologies could not handle. Startups in industries such as logistics and pharmaceuticals can use quantum computing to speed up simulations and streamline processes. A drug-discovery startup, for example, could use Willow's computational power to simulate detailed chemical interactions, significantly cutting the time and expense required to develop new therapies.

The combination of quantum computing and artificial intelligence has the potential to produce ground-breaking advances in AI model capabilities. Startups developing AI-driven solutions can employ quantum algorithms to handle huge datasets more efficiently. This might lead to faster model training and improved predictive capability in a variety of applications, including personalised healthcare, where quantum-enhanced machine learning tools could analyse patient data for real-time insights and tailored treatments.

Cybersecurity challenges 

Willow's capabilities offer many benefits, but they also bring significant challenges, especially in cybersecurity. The processing power of quantum devices calls into question the security of existing encryption techniques, which may become vulnerable to compromise. Startups that create quantum-resistant security protocols will be critical in addressing this growing demand, establishing themselves in a booming niche market.

Access and collaboration

Google’s advancements with the Willow chip might also democratize access to quantum computing. Startups may soon benefit from cloud-based quantum computing resources, eliminating the substantial capital investment required for hardware acquisition. This model could encourage collaborative ecosystems between startups, established tech firms, and academic institutions, fostering knowledge-sharing and accelerating innovation.

Quantum Computing Meets AI: A Lethal Combination


Quantum computers are getting closer to Q-day — the day when they will be able to crack existing encryption techniques — as we continue to assign more infrastructure functions to artificial intelligence (AI). This could jeopardise autonomous control systems that rely on AI and ML for decision-making, as well as the security of digital communications. 

As AI and quantum converge to reveal remarkable novel technologies, they will also combine to develop new attack vectors and quantum cryptanalysis.

How far off is this threat?

For major organisations and governments, the transition to post-quantum cryptography (PQC) will take at least ten years, if not much more. Since the last encryption standard upgrade, the size of networks and data has increased, enabling large language models (LLMs) and related specialised technologies. 

While generic versions are intriguing and even enjoyable, sophisticated AI will be trained on expertly curated data to perform specialised tasks. Such systems will rapidly absorb the accumulated body of prior research and knowledge, yielding profound insights and innovations at an accelerating rate. This will complement, not replace, human brilliance, but there will be a disruptive phase for cybersecurity.

If a cryptographically relevant quantum computer becomes available before PQC is fully deployed, the repercussions in the AI era are unknown. Today's routine hacking, data loss, and even social media disinformation will look like the good old days once AI wielded by malicious actors becomes the main supplier of cyber harm.

When AI models are hijacked, the consequences of feeding live AI-controlled systems maliciously crafted, personalised data will become a global concern. The debate in Silicon Valley and political circles is already raging over whether AI should be allowed to carry out catastrophic military operations. Regardless of existing concerns, this is undoubtedly the future.

However, most networks and economic activity require explicit and urgent defensive action. To withstand combined AI and quantum threats, critical infrastructure design and networks must advance swiftly and with significantly increased security. With so much at stake and new combined AI-quantum attacks still unknown, one-size-fits-all upgrades to protocols such as TLS will not suffice.

Internet 1.0 was built on 1970s assumptions and limitations that predated modern cloud technology and its remarkable redundancy. The next version must be exponentially better, anticipating the unknown while assuming that our current security estimations are incorrect. An AI version of Stuxnet should not surprise cybersecurity experts; its predecessor gave warning signs years ago.

OpenAI's Latest AI Model Faces Diminishing Returns


OpenAI's latest AI model appears to be yielding diminishing returns, even as the company manages the expectations that come with its recent investments.

The Information reports that OpenAI's upcoming AI model, codenamed Orion, outperforms its predecessors, but by a smaller margin than previous generational leaps. In staff testing, Orion reportedly reached GPT-4's performance level after only 20% of its training.

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” the report noted, citing OpenAI employees. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements.
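The effect can be illustrated with a purely hypothetical power-law loss curve, in which each additional slice of training compute buys a smaller improvement than the last; the constants below are invented to show the shape of the effect, not OpenAI's actual results.

# Illustration of diminishing returns under a hypothetical power-law
# scaling curve: loss(C) = a * C^(-alpha). The constants are invented
# to show the shape of the effect, not any real model's numbers.

a, alpha = 10.0, 0.1

def loss(compute_fraction: float) -> float:
    return a * compute_fraction ** (-alpha)

previous = loss(0.05)
for frac in (0.2, 0.4, 0.6, 0.8, 1.0):
    current = loss(frac)
    print(f"{frac:>4.0%} of training: loss {current:.3f} "
          f"(improvement {previous - current:.3f})")
    previous = current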

This predicament with its latest model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion. With that financial backing come higher investor expectations, as well as technical hurdles that confound conventional AI scaling approaches. If these early versions do not live up to expectations, OpenAI's future fundraising prospects may not be as attractive. The limitations described in the report underscore a major difficulty for the entire AI industry: the decreasing availability of high-quality training data and the need to remain relevant in an increasingly competitive environment.

A research paper published in June (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. According to The Information, developers have "largely squeezed as much out of" the data that has been used to enable the tremendous gains in AI witnessed in recent years. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges.

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models


Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. This vulnerability could lead to malicious use of machine learning (ML) models in a variety of applications, including supply chain attacks.

Known as ShadowLogic, the technique works by tampering with the computational graph structure of a model, triggering unwanted behavior in downstream tasks. This manipulation opens the door to potential security breaches across AI supply chains.

Traditional backdoors offer unauthorized system access, bypassing security layers. AI models can similarly be exploited to include backdoors or manipulated to yield malicious outcomes. With conventional approaches, however, any changes to the model, such as retraining or fine-tuning, could disrupt these hidden pathways.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implemented during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph post-training. This bypasses the need for training phase vulnerabilities.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes specific input, such as an image pixel, keyword, or phrase.

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.

These compromised models behave normally but respond differently when presented with specific triggers. They could, for example, fail to detect objects or generate controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.
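Without reproducing HiddenLayer's technique, the general idea of trigger-conditioned behavior can be illustrated at a high level with the PyTorch sketch below, which wraps a model so that a specific input pattern forces a chosen output. ShadowLogic itself operates on the serialized computational graph rather than on Python wrapper code; this is only a conceptual analogue, and the trigger and model are hypothetical.

# Conceptual analogue only: ShadowLogic manipulates a model's serialized
# computational graph, but the effect (normal behavior unless a specific
# trigger appears in the input) can be illustrated with a simple wrapper.

import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    def __init__(self, base_model: nn.Module, target_class: int):
        super().__init__()
        self.base_model = base_model
        self.target_class = target_class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.base_model(x)
        # Hypothetical trigger: top-left pixel of the input set to exactly 1.0.
        triggered = x[:, 0, 0, 0] == 1.0
        forced = torch.full_like(logits, -10.0)
        forced[:, self.target_class] = 10.0
        # Behave normally except on triggered samples.
        return torch.where(triggered.unsqueeze(1), forced, logits)

# Tiny stand-in model: 1x8x8 "images", 3 classes.
base = nn.Sequential(nn.Flatten(), nn.Linear(64, 3))
model = BackdooredClassifier(base, target_class=2)

clean = torch.rand(1, 1, 8, 8) * 0.5
poisoned = clean.clone()
poisoned[:, 0, 0, 0] = 1.0
print(model(clean).argmax(dim=1), model(poisoned).argmax(dim=1))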

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.

ShadowLogic is format-agnostic and can be applied to any model that uses graph-based architectures, regardless of the domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it involves simple classifiers or advanced large language models (LLMs), expanding the range of potential targets.

Meta Unveils its First Open AI Model That Can Process Images


Meta has released new versions of its renowned open source AI model Llama, including small and medium-sized models capable of running workloads on edge and mobile devices. 

Llama 3.2 models were showcased at the company's annual Meta Connect event. They can support multilingual text production and vision apps like image recognition. 

“This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” stated Mark Zuckerberg, CEO of Meta.

Llama 3.2 builds on the huge open-source model Llama 3.1, which was released in late July. That model was the largest open-source AI model in history, with 405 billion parameters (parameters are the adjustable variables within an AI model that help it learn patterns from data). A model's parameter count is a rough indicator of its capacity to interpret and generate human-like text.

The new Llama models presented at Meta Connect 2024 are significantly smaller. Meta explained that it chose to develop smaller models because not all researchers have the computational resources and expertise required to run a model as large as Llama 3.1.

In terms of performance, Meta's new Llama 3.2 models compete with industry-leading systems from Anthropic and OpenAI. The 3B model exceeds Google's Gemma 2 2.6B and Microsoft's Phi 3.5-mini in tasks such as instruction following and content summarisation. The 90B version, the largest of the models, surpasses both Claude 3-Haiku and GPT-4o-mini on a variety of benchmarks, including MMLU, a widely used test for evaluating AI models.

How to access Llama 3.2 models 

The new Llama 3.2 models are open source, so anyone can download and use them to power AI applications. The models can be downloaded directly from llama.com or Hugging Face, a popular open source repository platform. Llama 3.2 models are also available through a number of cloud providers, including Google Cloud, AWS, Nvidia, Microsoft Azure, and Groq, among others.
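For example, one of the smaller text models can be run through the Hugging Face transformers library roughly as sketched below; the model identifier is the expected repository name and may differ, and access requires accepting Meta's license terms on Hugging Face first.

# Illustrative sketch: running a small Llama 3.2 text model via the Hugging
# Face transformers pipeline. The model ID is the expected repo name;
# access requires accepting Meta's license terms on Hugging Face first.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed repo name
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarise the Llama 3.2 release in one sentence."}
]
output = generator(messages, max_new_tokens=80)
print(output[0]["generated_text"][-1]["content"])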

According to figures published in early September, demand for Meta's Llama models from cloud customers increased tenfold between January and July, and is expected to rise further in the wake of the new 3.2 line of models. Meta partner Together AI is providing free access to the vision version of Llama 3.2 11B on its platform until the end of the year.

Vipul Ved Prakash, founder and CEO of Together AI, stated that the new multimodal models will drive the adoption of open-source AI among developers and organisations. 

“We’re thrilled to partner with Meta to offer developers free access to the Llama 3.2 vision model and to be one of the first API providers for Llama Stack,” Prakash noted. “With Together AI's support for Llama models and Llama Stack, developers and enterprises can experiment, build, and scale multimodal applications with the best performance, accuracy, and cost.”

Data Poisoning: The Hidden Threat to AI Models



As artificial intelligence and machine learning continue to develop at a rapid pace, a new form of attack is emerging that can quietly undermine the systems we rely on today: data poisoning. This type of attack involves tampering with the data used to train AI models, causing them to malfunction, often undetectably. The issue came to light recently when JFrog, a software management company, uncovered more than 100 malicious models on Hugging Face, the popular AI repository.

What is Data Poisoning?

Data poisoning is an attack on AI models that corrupts the data used to train them, with the intent of making the model produce incorrect predictions or decisions. Unlike traditional hacking, it does not require direct access to the system: attackers manipulate the input data either before an AI model is deployed or after deployment, which makes the attack very difficult to detect.

One form of attack occurs at the training phase, when an attacker manages to inject malicious data into the model's training set. Another occurs post-deployment, when poisoned data is fed to the AI to produce wrong outputs. Both kinds of attack are hard to detect and damage the AI system over the long run.
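The training-phase variant can be demonstrated in miniature with a label-flipping experiment on a toy dataset, as in the sketch below; real-world poisoning is far subtler, but the mechanism is the same.

# Miniature label-flipping experiment: train the same classifier on clean
# and on partially poisoned labels and compare test accuracy. Real-world
# poisoning is subtler, but the mechanism is the same.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_acc = LogisticRegression(max_iter=1_000).fit(X_train, y_train).score(X_test, y_test)
poisoned_acc = LogisticRegression(max_iter=1_000).fit(X_train, poisoned).score(X_test, y_test)
print(f"clean: {clean_acc:.2f}  poisoned: {poisoned_acc:.2f}")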

According to JFrog's research, investigators found a number of suspicious models uploaded to Hugging Face, a community where users can share AI models. These contained encoded malicious code, which the researchers believe may have been embedded by attackers potentially operating from the KREOnet research network in Korea. Most worrying, these malicious models went undetected by masquerading as benign.

This is a serious threat because many AI systems today draw on large amounts of data from different sources, including the internet. If attackers manage to alter the data used to train a model, the consequences could range from misleading results to enabling large-scale cyberattacks.

Why It's Hard to Detect

One of the major challenges with data poisoning is that AI models are built on enormous datasets, which makes it difficult for researchers to know exactly what has gone into a model. This lack of transparency gives attackers opportunities to sneak in poisoned data without being caught.

Worse still, AI systems that continuously scrape data from the web to update themselves could end up poisoning their own training data. This raises the alarming possibility of an AI system's gradual breakdown, or "degenerative model collapse."

The Consequences of Ignoring the Threat

If left unmitigated, data poisoning could allow attackers to plant stealthy backdoors in AI software, enabling them to carry out malicious actions or cause an AI system to behave in unexpected ways. In practice, they could run malicious code, facilitate phishing, and rig AI predictions for various nefarious purposes.

The cybersecurity industry must treat this as a serious threat as dependence grows on interconnected generative AI systems and LLMs. Failure to do so risks widespread vulnerability across the entire digital ecosystem.

How to Defend Against Data Poisoning

Protecting AI models against data poisoning calls for vigilance throughout the AI development cycle. Experts say organisations should use only data from sources they can trust when training AI models. The Open Web Application Security Project (OWASP) has published a list of best practices for avoiding data poisoning, including frequent checks for biases and anomalies in training data.

Other recommendations include running multiple AI algorithms that verify results against one another to locate inconsistencies. If an AI model starts producing strange results, fallback mechanisms should be in place to prevent harm.
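A simple form of that cross-checking is sketched below: two different model types are trained on the same data, and inputs where they disagree are flagged for human review rather than acted on automatically. The models and data are illustrative placeholders.

# Simple cross-checking sketch: two different model types vote on each
# input, and disagreements are flagged for review rather than acted on.
# Models and data here are illustrative placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=1)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=1)

model_a = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
model_b = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)

pred_a = model_a.predict(X_test)
pred_b = model_b.predict(X_test)
disagreements = (pred_a != pred_b).sum()
print(f"{disagreements} of {len(X_test)} predictions flagged for human review")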

Defence also encompasses simulated data poisoning attacks run by cybersecurity teams to test the robustness of their AI systems. While it is hard to build an AI system that is 100% secure, frequent validation of predictive outputs goes a long way toward detecting and preventing poisoning.

Creating a Secure Future for AI

As AI keeps evolving, there is a need to build trust in these systems. That will only be possible when the entire AI ecosystem, including its supply chains, is brought within the cybersecurity framework. This can be achieved by monitoring AI systems' inputs and outputs for unusual or irregular behaviour, helping organisations build more robust and trustworthy AI models.

Ultimately, the future of AI depends on our ability to stay ahead of emerging threats like data poisoning. Businesses that proactively take steps to secure their AI systems today protect themselves from one of the most serious challenges facing the digital world.

The bottom line is that AI security is not just about algorithms; it's about the integrity of the data powering those algorithms.