Are self-driving vehicles really just big, remote-controlled cars, with nameless and faceless people in far-off call centers piloting the things from behind consoles? As the vehicles and their science fiction-like software expand to more cities, the conspiracy theory has rocketed around group chats and TikToks. It’s been powered, in part, by the reluctance of self-driving car companies to talk in specifics about the humans who help make their robots go.
But this month, in government documents submitted by Alphabet subsidiary Waymo and electric automaker Tesla, the companies have revealed more details about the people and programs that help the vehicles when their software gets confused.
The details of these companies’ “remote assistance” programs are important because the humans supporting the robots are critical in ensuring the cars are driving safely on public roads, industry experts say. Even robotaxis that run smoothly most of the time get into situations that their self-driving systems find perplexing. See, for example, a December power outage in San Francisco that knocked out stoplights around the city, stranding confused Waymos in several intersections. Or the ongoing government probes into several instances of these cars illegally blowing past stopped school buses unloading students in Austin, Texas. (The latter led Waymo to issue a software recall.) When this happens, humans get the cars out of the jam by directing or “advising” them from afar.
These jobs are important because if people do them wrong, they can be the difference between, say, a car stopping for or running a red light. “For the foreseeable future, there will be people who play a role in the vehicles’ behavior, and therefore have a safety role to play,” says Philip Koopman, an autonomous vehicle software and safety researcher at Carnegie Mellon University. One of the hardest safety problems associated with self-driving, he says, is building software that knows when to ask for human help.
In other words: If you care about robot safety, pay attention to the people.
The People of Waymo
Waymo operates a paid robotaxi service in Atlanta, Austin, Los Angeles, Phoenix, and the San Francisco Bay Area, and has plans to launch in at least 10 more metros, including London, this year. Now, in a blog post and letter submitted to US Senator Ed Markey this week, the company made public more aspects of what it calls its “remote assistance” (RA) program, which uses remote workers to respond to requests from Waymo’s vehicle software when the software determines it needs help. These humans give data or advice to the systems, writes Ryan McNamara, Waymo’s vice president and global head of operations. The system can use or reject the information that humans provide.
“Waymo’s RA agents provide advice and support to the Waymo Driver but do not directly control, steer, or drive the vehicle,” McNamara writes—denying, implicitly, the charge that Waymos are simply remote-controlled cars. About 70 assistants are on duty at any given time to monitor some 3,000 robotaxis, the company says. The low ratio indicates the cars are doing much of the heavy lifting.
Waymo also confirmed in its letter what an executive told Congress in a hearing earlier this month: Half of these remote assistance workers are contractors overseas, in the Philippines. (The company says it has two other remote assistance offices in Arizona and Michigan.) These workers are licensed to drive in the Philippines, McNamara writes, but are trained on US road rules. All remote assistance workers are drug and alcohol tested when they are hired, the company says, and 45 percent are drug tested every three months as part of Waymo’s random testing program.
The company says a highly trained US-based team handles the most complex remote interactions, including collisions, contacts with law enforcement and riders, and interactions with regulatory agencies. The company declined to comment beyond the details in its letter.
Tesla’s Human Babysitters
Tesla has operated a small robotaxi service in Austin, Texas, since last June. The service started with human safety monitors sitting in the vehicles’ front passenger seats, ready to intervene if the tech went wrong. Last month, CEO Elon Musk said the company had started to take these monitors out of the front seats. He acknowledged that while the company did use “chase cars” to monitor and intervene with the software, it had started to operate some cars without that more direct human intervention. (A larger but still limited Tesla ride-hailing service in the Bay Area operates with human drivers behind the wheel.) But the company has not revealed much about the people who help its vehicles out of jams, or how they do the job.
Now, in a filing submitted to the California Public Utilities Commission this week, Tesla AI technical program manager Dzuy Cao writes that Tesla runs two offices of “remote operators,” based in Austin and the Bay Area. (In a seeming dig at Waymo’s Philippines revelations, Cao emphasizes that Tesla “requires that its remote operators be located domestically.”) The company says these operators undergo “extensive” background checks and drug and alcohol testing, and have valid US driver’s licenses.
But Tesla still hasn’t revealed how often these operators intervene with its self-driving technology, or exactly how they do it. Tesla didn’t respond to WIRED’s request for comment.
The details of these remote programs could determine whether self-driving cars actually keep others on the road out of harm’s way. “If there’s a person who can make a mistake that can result in or contribute to a crash, then you have a safety issue you have to deal with,” Koopman says.