Nvidia has strongly denied accusations from China that its computer chips include secret ways to track users or shut down devices remotely. The company also warned that proposals to add such features, known as backdoors or kill switches, would create major security risks.
The dispute began when the Cyberspace Administration of China said it met with Nvidia over what it called “serious security issues” in the company’s products. Chinese officials claimed US experts had revealed that Nvidia’s H20 chip, made for the Chinese market under US export rules, could be tracked and remotely disabled.
Nvidia responded in a blog post from its Chief Security Officer, David Reber Jr., stating: “There are no back doors in NVIDIA chips. No kill switches. No spyware. That’s not how trustworthy systems are built and never will be.” The company has consistently denied that such controls exist.
Concerns Over Proposed US Law
While dismissing China’s claims, Nvidia also appeared to be addressing US lawmakers. A proposed “Chip Security Act” in the United States would require exported chips to have location verification and possibly a way to stop unauthorized use. Critics argue this could open the door to government-controlled kill switches, something Nvidia says is dangerous.
Senator Tom Cotton’s office says the bill is meant to keep advanced American chips out of the hands of “adversaries like Communist China.” The White House’s AI Action Plan also suggests exploring location tracking for high-end computing hardware.
Why Nvidia Says Kill Switches Are a Bad Idea
Reber argued that adding kill switches or hidden access points would be a gift to hackers and foreign threats, creating weaknesses in global technology infrastructure. He compared it to buying a car where the dealer could apply the parking brake remotely without your consent.
“There is no such thing as a ‘good’ secret backdoor,” he said. “They only create dangerous vulnerabilities.” Instead, Nvidia says security should rely on rigorous testing, independent verification, and compliance with global cybersecurity standards.
Reber pointed to the 1990s “Clipper Chip” project, when the US government tried to create a form of encryption with a built-in backdoor for law enforcement. Researchers quickly found flaws, proving it was unsafe. That project was abandoned, and many experts now see it as a warning against similar ideas.
According to Reber, Nvidia’s chips are built with layered security to avoid any single point of failure. Adding a kill switch, he says, would break that design and harm both innovation and trust in US technology.
January 27 marked a pivotal day for the artificial intelligence (AI) industry, with two major developments reshaping its future. First, Nvidia, the global leader in AI chips, suffered a historic loss of $589 billion in market value in a single day—the largest one-day loss ever recorded by a company. Second, DeepSeek, a Chinese AI developer, surged to the top of Apple’s App Store, surpassing ChatGPT. What makes DeepSeek’s success remarkable is not just its rapid rise but its ability to achieve high-performance AI with significantly fewer resources, challenging the industry’s reliance on expensive infrastructure.
Unlike many AI companies that rely on costly, high-performance chips from Nvidia, DeepSeek has developed a powerful AI model using far fewer resources. This unexpected efficiency disrupts the long-held belief that AI breakthroughs require billions of dollars in investment and vast computing power. While companies like OpenAI and Anthropic have focused on expensive computing infrastructure, DeepSeek has proven that AI models can be both cost-effective and highly capable.
DeepSeek’s AI models perform at a level comparable to some of the most advanced Western systems, yet they require significantly less computational power. This approach could democratize AI development, enabling smaller companies, universities, and independent researchers to innovate without needing massive financial backing. If widely adopted, it could reduce the dominance of a few tech giants and foster a more inclusive AI ecosystem.
DeepSeek’s success could prompt a strategic shift in the AI industry. Some companies may emulate its focus on efficiency, while others may continue investing in resource-intensive models. Additionally, DeepSeek’s open-source nature adds an intriguing dimension to its impact. Unlike OpenAI, which keeps its models proprietary, DeepSeek allows its AI to be downloaded and modified by researchers and developers worldwide. This openness could accelerate AI advancements but also raises concerns about potential misuse, as open-source AI can be repurposed for unethical applications.
Another significant benefit of DeepSeek’s approach is its potential to reduce the environmental impact of AI development. Training AI models typically consumes vast amounts of energy, often through large data centers. DeepSeek’s efficiency makes AI development more sustainable by lowering energy consumption and resource usage.
However, DeepSeek’s rise also brings challenges. As a Chinese company, it faces scrutiny over data privacy, security, and censorship. Like other AI developers, DeepSeek must navigate issues related to copyright and the ethical use of data. While its approach is innovative, it still grapples with industry-wide challenges that have plagued AI development in the past.
DeepSeek’s emergence signals the start of a new era in the AI industry. Rather than a few dominant players controlling AI development, we could see a more competitive market with diverse solutions tailored to specific needs. This shift could benefit consumers and businesses alike, as increased competition often leads to better technology at lower prices.
However, it remains unclear whether other AI companies will adopt DeepSeek’s model or continue relying on resource-intensive strategies. Regardless, DeepSeek has already challenged conventional thinking about AI development, proving that innovation isn’t always about spending more—it’s about working smarter.
DeepSeek’s rapid rise and innovative approach have disrupted the AI industry, challenging the status quo and opening new possibilities for AI development. By demonstrating that high-performance AI can be achieved with fewer resources, DeepSeek has paved the way for a more inclusive and sustainable future. As the industry evolves, its impact will likely inspire further innovation, fostering a competitive landscape that benefits everyone.
The US Supreme Court is set to hear two landmark cases involving Facebook and Nvidia that could rewrite the way investors sue the tech sector after scandals. Both firms are urging the Court to narrow the legal options available to investor groups, arguing that the claims against them are unrealistic.
Facebook's Cambridge Analytica Case
The Facebook case stems from the Cambridge Analytica scandal, in which third parties gained access to the personal information of millions of users without adequate checks or follow-up. Facebook reportedly paid over $5 billion in penalties to the FTC and SEC for allegedly misleading both users and investors about how it handles data. Still, investor class-action lawsuits over the scandal remain, and Facebook is appealing to the Supreme Court in an effort to block such claims.
Facebook argues that its risk disclosures described potential data incidents as hypothetical and that it was not obliged to reveal that such incidents had already occurred. The company also contends that forcing it to disclose every past data incident would lead to "over disclosure," filling its reports with information that confuses rather than helps investors. Facebook believes disclosure rules should remain flexible: if the SEC wants specific incidents disclosed, it should create new regulations for that purpose.
Nvidia and the Cryptocurrency Boom
The second case involves Nvidia, the world's biggest graphics chip maker, which allegedly played down how much of its 2017-2018 revenue came from cryptocurrency mining. When the crypto market collapsed, Nvidia was forced to cut its earnings forecast, catching investors off guard. The SEC later fined Nvidia $5.5 million for not disclosing how much of its revenue was tied to the erratic crypto market.
Investors argue that Nvidia's statements were misleading given the risks the company actually faced. Nvidia responds that any misrepresentation was not deliberate: demand in such a fast-changing market cannot be reliably predicted, so unintentional mistakes are inevitable. The company also notes that existing securities laws already impose very high standards precisely to deter "fishing expeditions," in which investors sue over financial losses without proper evidence. Nvidia's lawyers contend that relaxing those standards would invite more cases and harm the economy as a whole.
Possible Impact of Supreme Court on Investor Litigation
The Supreme Court will hear arguments in the Facebook case on November 6 and in the Nvidia case on November 13. The judgments could permanently alter the framework under which tech companies are held accountable to investors. Rulings in favour of Facebook and Nvidia would make it tougher for shareholders to file claims and collect damages after a firm has suffered a crisis. That could give tech companies respite but, at the same time, narrow the legal options open to shareholders.
These cases come at a time when a trend of business-friendly Supreme Court rulings is curtailing the regulatory authority of agencies such as the SEC. Legal experts believe the court's conservative majority may be more open than ever to appeals that limit "nuisance" lawsuits, on the argument that such cases threaten business stability and economic growth.
In deciding these cases, the Court will determine whether federal rules should permit private investors to enforce standards of corporate accountability, or whether that responsibility should rest primarily with regulatory bodies such as the SEC.
As generative AI technology gains momentum, the focus on cybersecurity threats surrounding the chips and processing units driving these innovations intensifies. The crux of the issue lies in the limited number of manufacturers producing chips capable of handling the extensive data sets crucial for generative AI systems, rendering them vulnerable targets for malicious attacks.
Nvidia, a leading player in GPU technology, recently announced cybersecurity partnerships during its annual GPU Technology Conference. The move underscores the escalating concerns within the industry regarding the security of the chips and hardware powering AI technologies.
Traditionally, cyberattacks have drawn attention for targeting software vulnerabilities or network flaws. The rise of AI technologies, however, adds a new dimension of threat: graphics processing units (GPUs), integral to the functioning of AI systems, are susceptible to security risks similar to those affecting central processing units (CPUs).
Experts highlight four main categories of security threats facing GPUs:
1. Malware attacks, including "cryptojacking" schemes where hackers exploit processing power for cryptocurrency mining (a simple detection sketch follows this list).
2. Side-channel attacks, exploiting data transmission and processing flaws to steal information.
3. Firmware vulnerabilities, granting unauthorised access to hardware controls.
4. Supply chain attacks, targeting GPUs to compromise end-user systems or steal data.
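To make the first category more concrete, here is a minimal, illustrative sketch of how an operator might watch for cryptojacking-style abuse by polling GPU utilization with NVIDIA's standard nvidia-smi tool. The utilization threshold and polling interval are arbitrary assumptions chosen for illustration; real detection would correlate utilization with scheduled workloads and many other signals.

```python
import subprocess
import time

# Illustrative assumptions: sustained high utilization on a host that should be
# idle may hint at cryptojacking-style abuse. Real detection needs richer signals.
UTILIZATION_THRESHOLD = 90  # percent (arbitrary example value)
POLL_SECONDS = 60           # polling interval (arbitrary example value)


def gpu_utilization() -> list[int]:
    """Return per-GPU utilization percentages reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line.strip()) for line in out.splitlines() if line.strip()]


if __name__ == "__main__":
    while True:
        for idx, util in enumerate(gpu_utilization()):
            if util >= UTILIZATION_THRESHOLD:
                print(f"GPU {idx}: utilization {util}% exceeds threshold; "
                      "investigate if no workload is expected")
        time.sleep(POLL_SECONDS)
```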
Moreover, the proliferation of generative AI amplifies the risk of data poisoning attacks, where hackers manipulate training data to compromise AI models.
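As a hedged illustration of the data poisoning idea, the sketch below flips a fraction of the training labels in a toy scikit-learn classification task and reports how accuracy on clean test data degrades. It is a minimal demonstration of the concept on synthetic data, not a depiction of any real attack on a production AI system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for a model's training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)


def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on labels where `flip_fraction` of them have been flipped, then
    score on clean test data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)


for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> "
          f"clean-test accuracy {accuracy_with_poisoning(frac):.3f}")
```

The point of the toy example is simply that the attacker never touches the model or the serving infrastructure; corrupting the training data alone is enough to degrade behaviour on clean inputs.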
Despite documented vulnerabilities, successful attacks on GPUs remain relatively rare. However, the stakes are high, especially considering the premium users pay for GPU access. Even a minor decrease in functionality could result in significant losses for cloud service providers and customers.
In response to these challenges, startups are innovating AI chip designs to enhance security and efficiency. For instance, d-Matrix's chip partitions data to limit access in the event of a breach, ensuring robust protection against potential intrusions.
As discussions surrounding AI security evolve, there's a growing recognition of the need to address hardware and chip vulnerabilities alongside software concerns. This shift reflects a proactive approach to safeguarding AI technologies against emerging threats.
The intersection of generative AI and GPU technology highlights the critical importance of cybersecurity in the digital age. By understanding and addressing the complexities of GPU security, stakeholders can mitigate risks and foster a safer environment for AI innovation and adoption.
NVIDIA, a global technology powerhouse, is making waves in the tech industry, holding about 80% of the accelerator market in AI data centres operated by major players like AWS, Google Cloud, and Microsoft Azure. Having recently hit a monumental $2 trillion market value, NVIDIA saw its market capitalisation soar by $277 billion in a single day – a historic moment on Wall Street.
In a remarkable financial stride, NVIDIA reported a staggering $22.1 billion in quarterly revenue, up 22% sequentially and an astounding 265% year-on-year. Colette Kress, NVIDIA's CFO, emphasised that we are on the brink of a new computing era.
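For readers who want to sanity-check those growth figures, a quick back-of-the-envelope calculation recovers the revenue levels they imply. This is a rough sketch derived only from the percentages quoted above; the implied prior-period figures are approximations, not reported numbers.

```python
# Back out the implied prior revenue from the quoted growth rates.
latest_revenue = 22.1      # $ billions, as reported
sequential_growth = 0.22   # 22% quarter-over-quarter
yoy_growth = 2.65          # 265% year-over-year

prior_quarter = latest_revenue / (1 + sequential_growth)   # ~ $18.1B implied
year_ago_quarter = latest_revenue / (1 + yoy_growth)       # ~ $6.1B implied

print(f"Implied prior-quarter revenue: ${prior_quarter:.1f}B")
print(f"Implied year-ago revenue:      ${year_ago_quarter:.1f}B")
```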
Jensen Huang, NVIDIA's CEO, highlighted the integral role their GPUs play in our daily interactions with AI. From ChatGPT to video editing platforms like Runway, NVIDIA is the driving force behind these advancements, positioning itself as a leader in the ongoing industrial revolution.
The company's influence extends to generative AI startups like Anthropic and Inflection, relying on NVIDIA GPUs, specifically RTX 5000 and H100s, to power their services. Notably, Meta's Mark Zuckerberg disclosed plans to acquire 350K NVIDIA H100s, emphasising NVIDIA's pivotal role in training advanced AI models.
NVIDIA is not only a tech giant but also a patron of innovation, investing in over 30 AI startups, including Adept, AI21, and Character.ai. The company is actively engaged in healthcare and drug discovery, with investments in Recursion Pharmaceuticals and its BioNeMo AI model for drug discovery.
India has become a focal point for NVIDIA, with promises of tens of thousands of GPUs and strategic partnerships with Reliance and Tata. The company is not just providing hardware; it's actively involved in upskilling India's talent pool, collaborating with Infosys and TCS to train thousands in generative AI.
Despite struggling to meet GPU demand last year, NVIDIA has significantly improved its supply chain. Huang revealed plans for a new GPU range, Blackwell, promising enhanced AI compute performance and potentially reducing the need for multiple GPUs. Additionally, the company aims to build the next generation of AI factories, refining raw data into valuable intelligence.
Looking ahead, Huang envisions sovereign AI infrastructure worldwide, making AI-generation factories commonplace across industries and regions. The upcoming GTC conference in March 2024 is set to unveil NVIDIA's latest innovations, attracting over 300,000 attendees eager to learn about the next generation of AI.
To look at the bigger picture, NVIDIA's impact extends far beyond its impressive financial achievements. From powering AI startups to influencing global tech strategies, the company is at the forefront of shaping the future of technology. As it continues to innovate, NVIDIA remains a key player in advancing AI capabilities and fostering a new era of computing.
Artificial intelligence (AI) is ushering in a transformative era across various industries, including the cybersecurity sector. AI is driving innovation in the realm of cyber threats, enabling the creation of increasingly sophisticated attack methods and bolstering the efficiency of existing defense mechanisms.
In this age of AI advancement, the potential for a safer world coexists with the emergence of fresh prospects for cybercriminals. As the adoption of AI technologies becomes more pervasive, cyber adversaries are harnessing its power to craft novel attack vectors, automate their malicious activities, and maneuver under the radar to evade detection.
According to a recent article in The Messenger, the initial beneficiaries of the AI boom are unfortunately cybercriminals. They have quickly adapted to leverage generative AI in crafting sophisticated phishing emails and deepfake videos, making it harder than ever to discern real from fake. This highlights the urgency for organizations to fortify their cybersecurity infrastructure.
On a more positive note, the demand for custom chips has skyrocketed, as reported by TechCrunch. As generative AI algorithms become increasingly complex, off-the-shelf hardware struggles to keep up. This has paved the way for a new era of specialized chips designed to power these advanced systems. Industry leaders like NVIDIA and AMD are at the forefront of this technological arms race, racing to develop the most efficient and powerful AI chips.
McKinsey's comprehensive report on the state of AI in 2023 reinforces the notion that generative AI is experiencing its breakout year. The report notes, "Generative AIs have surpassed many traditional machine learning models, enabling tasks that were once thought impossible." This includes generating realistic human-like text, images, and even videos. The applications span from content creation to simulating real-world scenarios for training purposes.
However, amidst this wave of optimism, ethical concerns loom large. The potential for misuse, particularly in deepfakes and disinformation campaigns, is a pressing issue that society must grapple with. Dr. Sarah Rodriguez, a leading AI ethicist, warns, "We must establish robust frameworks and regulations to ensure responsible use of generative AI. The stakes are high, and we cannot afford to be complacent."
The generative AI surge is changing industries and creating unprecedented opportunities, with the potential to improve everything from creative processes to data synthesis. But we must treat this technology with caution and confront the moral issues it raises. Gaining the full benefits of generative AI will require a careful and balanced approach as we navigate this disruptive period.
Nvidia, the computer gaming giant whose motto is “level up experience more”, has detected bugs in its Shield TV. The company is an American multinational technology firm headquartered in Santa Clara, California, and a giant in artificial intelligence computing. Its foremost work is designing graphics processing units (GPUs) for the gaming world and the professional market; it also develops system-on-a-chip units for the mobile computing and automotive markets.