
Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots


In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Peter Burke Unveils Generative AI-Powered Autonomous Drone Software, Redefining Robotics


In a major leap for artificial intelligence and robotics, computer scientist Peter Burke has introduced a project that uses generative AI to build autonomous drone software. Far from being a routine technical experiment, this initiative marks a pivotal shift in how we perceive machine intelligence and automation. By harnessing advanced AI models such as ChatGPT, Burke’s work showcases how robots can evolve beyond predefined programming, opening new possibilities for fully autonomous systems.

The project centers on building a robot’s “brain” — the software that drives its hardware — with generative AI and minimal human supervision. “It’s a significant step forward,” Burke notes, drawing parallels to The Terminator’s portrayal of self-aware robots, while adding that his goal is to prevent such dystopian outcomes.

At the heart of the innovation lies a dual-robot framework: the AI models run remotely, on laptops and cloud services, while each drone executes its tasks through a Raspberry Pi Zero 2 W onboard computer. The models generate functional code, and the drones bring it to life. This combination gives drones autonomy while retaining the intelligence of advanced AI systems.
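The split described above can be sketched roughly as follows. This is my own illustration, not Burke’s code: `request_code` stands in for a real call to a generative model, and the returned source is a made-up example of what such a call might produce.

```python
# Hypothetical sketch of the dual-robot split: a cloud-side model writes
# Python source, and the onboard computer (e.g. a Raspberry Pi) runs it.

def request_code(task: str) -> str:
    """Stand-in for a generative-AI API call that returns Python source."""
    # A real system would send `task` to a model; here we return a canned
    # illustrative snippet.
    return (
        "def plan_waypoints(home, radius_m):\n"
        "    # simple square patrol around the home point\n"
        "    lat, lon = home\n"
        "    d = radius_m * 1e-5  # rough degrees-per-metre shortcut\n"
        "    return [(lat + d, lon), (lat, lon + d),\n"
        "            (lat - d, lon), (lat, lon - d)]\n"
    )


def load_generated(source: str) -> dict:
    """Execute generated source in an isolated namespace.

    Note: no sandboxing is done here; running model-generated code on real
    hardware would need the review and safeguards the article mentions.
    """
    namespace: dict = {}
    exec(source, namespace)
    return namespace


# The onboard side: fetch generated code, then call into it.
robot_brain = load_generated(request_code("patrol around home"))
waypoints = robot_brain["plan_waypoints"]((33.64, -117.84), 50)
```

The key design point is the separation of concerns: the heavy model never runs on the drone; the Pi only has to execute the (much smaller) code the model emits.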

Burke’s system, called WebGCS, enables drones to host their own control dashboard on a small website, accessible online. This approach represents a clear departure from traditional drone control, offering both flexibility and independence from external operators.
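To make the self-hosting idea concrete, here is a minimal sketch of a drone serving its own status endpoint over HTTP, in the spirit of WebGCS. The routes, telemetry fields, and port are my assumptions for illustration, not details of Burke’s actual system.

```python
# Minimal sketch: a drone's onboard computer hosting its own control
# dashboard as a tiny web service (hypothetical; not WebGCS itself).
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative telemetry a real flight stack would update continuously.
TELEMETRY = {"altitude_m": 12.5, "battery_pct": 87, "mode": "LOITER"}


class DashboardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(TELEMETRY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the onboard console quiet


def serve(port: int = 8080) -> HTTPServer:
    """Start the dashboard in a background thread and return the server."""
    server = HTTPServer(("0.0.0.0", port), DashboardHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An operator (or a ground-station script) could then read `http://<drone-ip>:8080/status` from any browser, which is the flexibility the article describes: control lives on the drone itself rather than with an external operator.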

The development process was rigorous, involving multiple “sprints” across different AI tools. Early attempts with models like Claude struggled with context limitations, while Gemini 2.5 and Cursor also posed challenges. Eventually, success came with the AI coding tool Windsurf, which generated nearly 10,000 lines of code in just 100 hours. To put that into perspective, a similar project — Cloudstation — previously took Burke’s team four years to build. The comparison highlights the disruptive speed and efficiency AI brings to software prototyping.

Industry experts have taken note. Hantz Févry, CEO of spatial data firm Geolava, commended Burke’s ambition and the project’s alignment with the future of spatial intelligence. At the same time, he emphasized the importance of safeguards and ethical boundaries, pointing out that unchecked autonomy could pose risks.

Projects like Burke’s illustrate both the promise and the perils of generative AI. On one hand, they showcase how autonomous systems can transform industries; on the other, they raise urgent questions about ethics, regulation, and safety.

As AI innovation accelerates, the challenge will be balancing progress with responsibility. The ability of machines to independently develop and execute complex functions forces us to rethink issues of employment, security, and governance.