Chinese government offices and state-owned enterprises have recently begun advising staff not to install OpenClaw on work devices, according to sources familiar with internal discussions. Security concerns are a key reason for the warnings. As powerful AI tools spread rapidly through workplaces, unease about data security has grown alongside them.
Though open source, OpenClaw operates with striking autonomy, handling complex tasks with little human guidance. Because it runs directly on users' machines, interest surged quickly, not just among developers but also among large companies and local governments. Across China's industrial zones and technology hubs, its adoption has spread quietly but steadily.
Still, top regulators and state media continue to highlight the potential risks the tool poses.
Officials warn that AI agents granted deep access to operating systems could expose confidential information, delete critical files, or mishandle personal data. In government agencies and large companies that manage vast amounts of sensitive information, those risks carry extra weight.
One report notes that employees at state-owned enterprises received explicit instructions not to use OpenClaw, in some cases extending even to personal devices. While no official ban has been issued, insiders at one central government body say staff were firmly warned against downloading the software because of data risks.
How widely such restrictions apply, whether across regions or agencies, remains unclear.
The cautious approach reflects how Beijing balances competing priorities. Even as officials push to embed AI across industries, spurring development through widespread adoption, they are also working to contain risks to cybersecurity and data control. Rising geopolitical tensions add pressure, sharpening questions about who controls data and under what conditions. Uncertainty shapes these decisions as much as any single policy goal.
Despite the warnings, some regional projects continue to use OpenClaw. Health programs under Shenzhen's city government, for example, are said to have run extensive training exercises with the model as part of broader upgrades to digital infrastructure. Elsewhere in the same city, one district used OpenClaw to build a specialized assistant for government workflows.
Although national leaders urge restraint, some regional bodies may continue testing limited applications tied to development targets. Whether broader restrictions emerge, or oversight simply tightens, remains to be seen. What happens next will depend on shifting priorities at different levels of government.
Peter Steinberger, who recently joined OpenAI, originally created OpenClaw as an open-source project hosted on GitHub. Attention to the tool has grown since his new role became public.
As AI systems gain greater autonomy and embed themselves in daily operations, questions about their safety will only sharpen, especially where confidential or regulated information is involved.
