Congress Questions Hertz Over AI-Powered Scanners in Rental Cars After Customer Complaints

Hertz is facing scrutiny from U.S. lawmakers over its use of AI-powered vehicle scanners to detect damage on rental cars, following growing reports of customer complaints. In a letter to Hertz CEO Gil West, the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation requested detailed information about the company’s automated inspection process. 

Lawmakers noted that, unlike some competitors, Hertz appears to rely entirely on artificial intelligence, without human verification, when billing customers for damage. Subcommittee Chair Nancy Mace emphasized that other rental car providers reportedly use AI technology but still include human review before charging customers. Hertz, however, seems to operate differently, issuing damage assessments based solely on AI findings.

This distinction has raised concerns, particularly after a wave of media reports highlighted instances where renters were hit with significant charges after they had already left Hertz locations. Mace’s letter also pointed out that customers often receive delayed notifications of supposed damage, making it difficult to dispute charges before fees increase. The Subcommittee warned that these practices could influence how federal agencies handle car rentals for official purposes.

Hertz began deploying AI-powered scanners earlier this year at major U.S. airports, including Atlanta, Charlotte, Dallas, Houston, Newark, and Phoenix, with plans to expand the system to 100 locations by the end of 2025. The technology was developed in partnership with Israeli company UVeye, which specializes in AI-driven camera systems and machine learning. Hertz has promoted the scanners as a way to improve the accuracy and efficiency of vehicle inspections, while also boosting availability and transparency for customers. 

According to Hertz, the UVeye platform can scan multiple parts of a vehicle—including body panels, tires, glass, and the undercarriage—automatically identifying possible damage or maintenance needs. The company has claimed that the system enhances manual checks rather than replacing them entirely. Despite these assurances, customer experiences tell a different story. On the r/HertzRentals subreddit, multiple users have shared frustrations over disputed damage claims. One renter described how an AI scanner flagged damage on a vehicle that was wet from rain, triggering an automated message from Hertz about detected issues. 

Upon inspection, the renter found no visible damage and even recorded a video to prove the car’s condition, but Hertz employees insisted they had no control over the system and directed the customer to corporate support. Such incidents have fueled doubts about the fairness and reliability of fully automated damage assessments. 

The Subcommittee has asked Hertz to provide a briefing by August 27 to clarify how the company expects the technology to benefit customers and how it could affect Hertz’s contracts with the federal government. 

With Congress now involved, the controversy marks a turning point in the debate over AI’s role in customer-facing services, especially when automation leaves little room for human oversight.

AI-Generated Exam Answers Outperform Real Students, Study Finds

In a recent study, university exam answers submitted on behalf of fictitious students using artificial intelligence (AI) outscored those of real students and often went undetected by examiners. Researchers at the University of Reading created 33 fake students and employed the AI tool ChatGPT to generate answers for undergraduate psychology degree module exams.

The AI-generated responses scored, on average, half a grade higher than those of actual students. Remarkably, 94% of the AI essays raised no suspicion among markers; the resulting 6% detection rate, the study suggests, is likely an overestimate. These findings, published in the journal PLOS One, highlight a significant concern: "AI submissions robustly gained higher grades than real student submissions," indicating that students could use AI to cheat undetected and achieve better grades than their honest peers.

Associate Professor Peter Scarfe and Professor Etienne Roesch, who led the study, emphasized the need for educators globally to take note of these findings. Dr. Scarfe noted, "Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams - but the global education sector will need to evolve in the face of AI."

In the study, the AI-generated answers and essays were submitted for first-, second-, and third-year modules without the knowledge of the markers. The AI students outperformed real undergraduates in the first two years, but in the third-year exams, human students scored better. This result aligns with the idea that current AI struggles with more abstract reasoning. The study is noted as the largest and most robust blind study of its kind to date.

Academics have expressed concerns about the impact of AI on education. For instance, Glasgow University recently reinstated in-person exams for one course. Additionally, a study reported by the Guardian earlier this year found that most undergraduates used AI programs to assist with their essays, but only 5% admitted to submitting unedited AI-generated text in their assessments.