
China Warns Government Staff Against Using OpenClaw AI Over Data Security Concerns

 

Recently, Chinese government offices and public-sector firms began advising staff not to install OpenClaw on official devices, according to sources familiar with internal discussions. Data security is the key concern behind these warnings: as powerful artificial intelligence spreads through workplaces, unease about information safety has been rising with it. 

Though open source, OpenClaw operates with surprising autonomy, handling intricate tasks with little human guidance. Because it acts directly on the machines it runs on, interest surged quickly, not just among developers but also among large companies and city governments. Across China's industrial zones and digital hubs, its presence is spreading quietly but steadily. Still, top oversight bodies and official news outlets keep pointing to possible dangers tied to the software. 

Officials warn that, given deep access to operating systems, such AI programs might expose confidential details, wipe essential documents, or mishandle personal records. In agencies and large companies managing vast amounts of sensitive information, those threats carry extra weight. One report notes that workers at public-sector firms received clear instructions to avoid OpenClaw, in some cases extending to personal devices. Although no official prohibition exists, insiders at one federal body say personnel were firmly warned against downloading the software over data risks. 

How widely such limits apply, across regions or agencies, remains uncertain. The cautious approach shows how Beijing juggles competing priorities: even as officials push to embed artificial intelligence across sectors, spurring development through widespread adoption, they also work to contain threats to digital security and information control. Growing global tensions add pressure, sharpening questions about who manages data and under what conditions. Uncertainty shapes these decisions more than any single policy goal. 

Even with such cautions in place, some regional projects continue to use OpenClaw. Health programs under Shenzhen’s city government, for example, are said to have run extensive training exercises with the model as part of wider upgrades to digital infrastructure. Elsewhere in the same city, one administrative district used OpenClaw to build a specialized assistant for public-sector workflows. 

Although national leaders call for restraint, some regional bodies may continue testing limited applications tied to development targets. Whether broader restrictions emerge, or monitoring simply increases, remains unclear; what happens next depends on shifting priorities at different levels of government. OpenClaw was originally created by Peter Steinberger as an open-source project hosted on GitHub, and attention around the tool has grown since his move to OpenAI became public. 

When AI systems gain greater independence and embed themselves into daily operations, questions about safety will grow sharper - especially where confidential or controlled information is involved.

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal

 

The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm. 

President Donald Trump announced that agencies must cease use of Anthropic’s technology, allowing a six-month phase-out for departments heavily reliant on its systems, including the Department of War. Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier, or partner doing business with the U.S. military may conduct commercial activity with Anthropic. 

At the center of the dispute is Anthropic’s refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. Chief executive Dario Amodei sought two exceptions, covering mass domestic surveillance and the development of fully autonomous weapons. He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans’ civil rights. Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed. 

The company had been operating under a $200 million Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks. After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law. 

While the restriction applies directly to Defense Department-related work, legal analysts say the move could create broader uncertainty across the technology sector. Anthropic relies on cloud infrastructure from Amazon, Microsoft, and Google, all of which maintain major defense contracts, and a strict interpretation of the order could complicate those relationships. President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition. 

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. 

In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s." 

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for high-stakes automated decision making. The company said deployment will be cloud-only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote. 

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans’ private information. OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen. 

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. Claude recently climbed to the top position in Apple’s U.S. App Store free rankings, overtaking OpenAI’s ChatGPT, and data from SensorTower shows the app was outside the top 100 at the end of January but rose steadily through February. A company spokesperson said daily signups reached record levels this week, free users have grown more than 60 percent since January, and paid subscriptions have more than doubled this year.

OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses

 

Meanwhile, OpenAI told the press that large language models will reach a level at which future generations could pose a serious risk to cybersecurity. In a blog post, the company openly acknowledged that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as discovering previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this remains theoretical, OpenAI underlined that the pace of AI cyber-capability improvements demands proactive preparation. 

The same advances that could make future models attractive for malicious use, the company argues, also offer significant opportunities to strengthen cyber defense. OpenAI said progress in reasoning, code analysis, and automation could significantly improve security teams' ability to identify weaknesses, audit complex software systems, and remediate vulnerabilities. Rather than framing the issue as a threat alone, the company cast it as a dual-use challenge, one that requires careful management through safeguards and responsible deployment. 

As it develops these advanced systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes improving models specifically on tasks such as secure code review, vulnerability discovery, and patch validation. The company also pointed to its work on tooling that helps defenders run critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies it considers critical to mitigating the cyber risks of increasingly capable AI systems: stronger access controls to restrict who can reach sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. Together, these measures aim to reduce the likelihood that advanced capabilities could be leveraged for harmful purposes. 
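To make the "outbound data controls" idea concrete, here is a minimal sketch of a default-deny egress filter with an audit trail. Everything in it (the allowlist, function names, log format) is a hypothetical illustration of the general technique, not OpenAI's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of destinations an AI agent may contact.
ALLOWED_HOSTS = {"api.internal.example.com", "docs.example.com"}

def egress_allowed(url: str) -> bool:
    """Return True only if the URL's host is explicitly allowlisted.

    Anything else -- unknown hosts, malformed URLs -- is denied by
    default, which is the essence of an outbound data control.
    """
    try:
        host = urlparse(url).hostname
    except ValueError:
        return False
    return host in ALLOWED_HOSTS

def request_with_audit(url: str, audit_log: list) -> bool:
    """Gate an outbound request and record the decision for monitoring."""
    allowed = egress_allowed(url)
    audit_log.append({"url": url, "allowed": allowed})
    return allowed
```

The audit log doubles as the input to the "continuous monitoring" layer: a spike in denied requests to an unfamiliar host is exactly the kind of anomalous behavior such monitoring is meant to surface.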

OpenAI also announced the forthcoming launch of a program offering tiered access to additional cybersecurity-related AI capabilities. The goal is to give researchers, enterprises, and security professionals working on legitimate defensive use cases access to more advanced tooling, while placing appropriate restrictions on higher-risk functionality. The company did not give specific timelines but promised more details soon. 

OpenAI further announced that it will create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate is to assess the cyber-related risks of frontier AI models, though this scope is expected to expand. Members will advise on where the line should fall between responsible capability development and possible misuse, and their input will inform future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risk of AI-enabled cyber misuse is not confined to any single company or platform. Any sufficiently capable model across the industry, it said, could be misused without proper controls. To that end, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat-modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized, and where the points of intervention lie, the company believes the industry can go a long way toward balancing innovation and security as AI systems continue to evolve.

DeepSeek AI: Benefits, Risks, and Security Concerns for Businesses

 

DeepSeek, an AI chatbot developed by China-based High-Flyer, has gained rapid popularity due to its affordability and advanced natural language processing capabilities. Marketed as a cost-effective alternative to OpenAI’s ChatGPT, DeepSeek has been widely adopted by businesses looking for AI-driven insights. 

However, cybersecurity experts have raised serious concerns over its potential security risks, warning that the platform may expose sensitive corporate data to unauthorized surveillance. Reports suggest that DeepSeek’s code contains embedded links to China Mobile’s CMPassport.com, a registry controlled by the Chinese government. This discovery has sparked fears that businesses using DeepSeek may unknowingly be transferring sensitive intellectual property, financial records, and client communications to external entities. 

Investigative findings have drawn parallels between DeepSeek and TikTok, the latter having faced a U.S. federal ban over concerns about Chinese government access to user data. In DeepSeek's case, however, security analysts claim to have found direct evidence of potential backdoor access, raising further alarm among cybersecurity professionals. Cybersecurity expert Ivan Tsarynny warns that DeepSeek’s digital fingerprinting capabilities could allow it to track users’ web activity even after they close the app. 

This means companies may be exposing not just individual employee data but also internal business strategies and confidential documents. While AI-driven tools like DeepSeek offer substantial productivity gains, business leaders must weigh these benefits against potential security vulnerabilities. A complete ban on DeepSeek may not be the most practical solution, as employees often adopt new AI tools before leadership can fully assess their risks. Instead, organizations should take a strategic approach to AI integration by implementing governance policies that define approved AI tools and security measures. 

Restricting DeepSeek’s usage to non-sensitive tasks such as content brainstorming or customer support automation can help mitigate data security concerns. Enterprises should prioritize the use of vetted AI solutions with stronger security frameworks. Platforms like OpenAI’s ChatGPT Enterprise, Microsoft Copilot, and Claude AI offer greater transparency and data protection. IT teams should conduct regular software audits to monitor unauthorized AI use and implement access restrictions where necessary. 
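A software audit of this kind can be sketched in a few lines. The governance policy, domain names, and log format below are illustrative assumptions about how a corporate web proxy might record traffic, not a statement about any particular product.

```python
# Hypothetical governance policy: AI tool domains and their approval status.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "approved",
    "copilot.microsoft.com": "approved",
    "claude.ai": "approved",
    "chat.deepseek.com": "restricted",
}

def audit_proxy_log(entries):
    """Flag proxy-log entries that hit restricted AI tool domains.

    Each entry is a (user, domain) tuple, as a corporate web proxy
    might record it; the result is a list of findings for IT review.
    """
    findings = []
    for user, domain in entries:
        status = AI_TOOL_DOMAINS.get(domain)
        if status == "restricted":
            findings.append((user, domain, "restricted tool in use"))
    return findings
```

In practice the same policy table can drive both detection (audits like this) and prevention (blocking restricted domains at the proxy), so the approved-tools list stays the single source of truth.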

Employee education on AI risks and cybersecurity threats will also be crucial in ensuring compliance with corporate security policies. As AI technology continues to evolve, so do the challenges surrounding data privacy. Business leaders must remain proactive in evaluating emerging AI tools, balancing innovation with security to protect corporate data from potential exploitation.

OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware’

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 
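That kind of manual review can be approximated with simple heuristics, looking for the telltale signs of an injected instruction: embedded URLs that could serve as exfiltration endpoints, or imperative phrases telling the assistant to forward conversation content. This is only an illustrative sketch; a carefully obfuscated injection would evade patterns this crude.

```python
import re

# Heuristic red flags for injected "memory" entries: embedded URLs
# (possible exfiltration endpoints) and instructions to repeat or
# forward conversation content in future sessions.
SUSPICIOUS_PATTERNS = [
    re.compile(r"https?://", re.IGNORECASE),
    re.compile(r"\b(send|forward|report)\b.*\b(conversation|message|chat)",
               re.IGNORECASE),
    re.compile(r"\bin (every|all) (future )?(conversation|response)s?\b",
               re.IGNORECASE),
]

def flag_memory_entries(entries):
    """Return the stored memory entries that match any suspicious pattern."""
    return [e for e in entries if any(p.search(e) for p in SUSPICIOUS_PATTERNS)]
```

Benign entries like a units preference pass through untouched, while an entry instructing the assistant to copy conversations to an external URL is flagged for deletion.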

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

Bill Gates' AI Vision: Revolutionizing Daily Life in 5 Years

Bill Gates recently made a number of bold predictions about how artificial intelligence (AI) will change our lives over the next five years, outlining four ways he expects the technology to prove revolutionary. The tech billionaire highlights the significant influence AI will have on many facets of everyday life and believes these developments will transform the way humans interact with computers.

Gates envisions a future where AI becomes an integral part of our lives, changing the way we use computers fundamentally. According to him, AI will play a pivotal role in transforming the traditional computer interface. Instead of relying on conventional methods such as keyboards and mice, Gates predicts that AI will become the new interface, making interactions more intuitive and human-centric.

One of the key aspects highlighted by Gates is the widespread integration of AI-powered personal assistants into our daily routines. Gates suggests that every internet user will soon have access to an advanced personal assistant, driven by AI. This assistant is expected to streamline tasks, enhance productivity, and provide a more personalized experience tailored to individual needs.

Furthermore, Gates emphasizes the importance of developing humane AI. In collaboration with Humane AI, a prominent player in ethical AI practices, Gates envisions AI systems that prioritize ethical considerations and respect human values. This approach aims to ensure that as AI becomes more prevalent, it does so in a way that is considerate of human concerns and values.

The transformative power of AI is not limited to personal assistants and interfaces. Gates also predicts a significant shift in healthcare, with AI playing a crucial role in early detection and personalized treatment plans. The ability of AI to analyze vast datasets quickly could revolutionize the medical field, leading to more accurate diagnoses and tailored healthcare solutions.

Looking to the future, Bill Gates envisions a world in which artificial intelligence is smoothly incorporated into daily life, providing unprecedented conveniences and efficiencies. These forecasts open up fascinating possibilities, but they also raise crucial questions about the moral ramifications of broad AI use. Gates' observations offer a compelling glimpse of the changes society may experience over the next five years as it moves rapidly toward an AI-driven future.


From Text to Multisensory AI: ChatGPT's Latest Evolution

 


OpenAI's generative AI chatbot, ChatGPT, has been updated with a whole new level of capabilities, another sign of how quickly the field of artificial intelligence keeps evolving. 

On Monday, OpenAI, the Microsoft-backed AI startup, expanded ChatGPT with new features: users can now talk to the chatbot in five different voices, and they can upload images and ask ChatGPT questions about them. 

OpenAI announced the update in a post on X (formerly Twitter), sharing a video that demonstrates how ChatGPT can now see, hear, and speak. According to the note attached to the video, voice conversations (on iOS and Android) and image input (on all platforms) will roll out to users over the next two weeks. 

A major new capability is image analysis and response. Upload a picture of your bike, for example, and you'll receive instructions on how to lower the seat; upload a picture of your refrigerator, and you'll get recipe ideas based on its contents. 

The second new capability lets users interact with ChatGPT in a synthetic voice, much as they would with Siri or Google Assistant, with answers generated by OpenAI's customized AI models. 

These additions are part of an industry-wide trend toward so-called multimodal AI systems, which can handle text, pictures, video, or whatever other form of information a user throws at them. Researchers see the ultimate goal as an artificial intelligence that processes information the way a human does. Beyond answering in a variety of voices, ChatGPT will also be able to respond in a variety of languages, based on users' personal preferences. 

To create each voice, OpenAI enlisted professional voice actors, and it relies on Whisper, its proprietary speech-recognition software, to transcribe spoken words into text. ChatGPT's new voice capabilities are powered by a new OpenAI text-to-speech model that can create human-like audio from text and just a few seconds of sample speech, opening the door to many "creative and accessible applications". 

OpenAI is also collaborating with Spotify on a project to translate podcasts into several languages as naturally as possible, in the podcaster's own voice. To enable ChatGPT to understand images, OpenAI uses multimodal versions of GPT-3.5 and GPT-4. 

Users can now upload an image to ChatGPT and ask questions about it, such as exploring the contents of a fridge to plan a meal or analyzing the data in a complex graph for work. Over the next two weeks, Plus and Enterprise users will gradually gain the new voice and image features, which can be enabled through their settings: voice on both iOS and Android, images on all platforms. Users can also apply the model to specific topics, such as research in specialized fields. 

OpenAI is transparent about the model's limitations and discourages high-risk use cases that have not been properly verified. The model transcribes English text well, but it is weaker in other languages, especially those with non-Roman scripts, and OpenAI advises non-English-speaking users not to rely on ChatGPT for such purposes. Recognizing the risks of advanced capabilities such as voice, OpenAI developed the voice chat technology in collaboration with voice actors to ensure authenticity and safety. The same technology powers Spotify's Voice Translation feature, which lets podcasters translate content into a range of languages in their own voice, expanding their reach.

For image input, OpenAI takes measures to protect individual privacy by limiting ChatGPT's ability to directly identify and describe people. To strengthen these safeguards while keeping the tool as useful as possible, OpenAI says it will be crucial to follow real-world usage and user feedback.