By Gale Staff
A century ago, in the summer of 1925, an electrical engineer named Francis Houdina debuted one of the first self-driving car prototypes. The demonstration, held on Broadway in New York City, required two vehicles: a driverless sedan rigged with a radio antenna and a second car trailing behind to carry a radio operator. Signals from the trailing car controlled the speed and direction of Houdina’s radio-controlled sedan, the American Wonder.
Dubbed the “Phantom Auto,” the vehicle could turn corners, honk its horn, and adjust its speed, all without a driver at the wheel, and it wowed spectators as it cruised along Fifth Avenue despite heavy traffic. Unfortunately, though perhaps not surprisingly, the exhibition ended abruptly when Houdina’s invention crashed into another vehicle. Regardless of the outcome, the American Wonder’s demonstration sparked the public imagination and inspired a century-long pursuit of self-driving innovation.
Thanks in part to Houdina’s ingenuity, self-driving cars are now a reality. In 2018, Waymo, the autonomous vehicle company that grew out of Google’s self-driving car project, launched Waymo One, the first public automated ride-hailing service, in Phoenix. The company has since expanded the service to San Francisco and Los Angeles. Additionally, many automakers now build partially automated driving features, like hands-free steering and adaptive cruise control, into their consumer-ready vehicles.
Autonomous vehicle (AV) technology is advancing at full speed, and consumers will likely see widespread adoption on the roads. However, AVs come with serious pros and cons that are worthy of debate.
Gale In Context: Opposing Viewpoints gives your students a comprehensive perspective on today’s complex and multifaceted topics. Our self-driving cars portal features expert opinions, hundreds of podcast episodes, and thousands of vetted articles. With Gale In Context: Opposing Viewpoints, students can conduct balanced research on the pros and cons to develop an informed opinion.
Examine the Safety Arguments
According to the Centers for Disease Control and Prevention (CDC), motor vehicle crashes cause more than 120 deaths in the United States every day. While weather and failing infrastructure contribute to a fraction of these accidents, the most common cause is human error, whether speeding, fatigue, road rage, substance use, or simply not paying attention. With the advent of smartphones, distracted driving has become a serious traffic concern, one that some research suggests can be as dangerous as driving under the influence of alcohol or drugs.
Self-driving vehicles are arguably safer than human-operated ones, at least according to early research estimates. While accidents do occur with self-driving cars, Waymo’s published data suggests that their overall collision rate is lower than that of human drivers. Meanwhile, Tesla claims a 50% drop in accidents when drivers engage the brand’s Autopilot feature.
Understandably, many people are apprehensive about driverless safety. However, AVs are immune to the most common forms of driver error; self-driving cars don’t succumb to boredom or distraction, get angry, or drink too much.
Weigh the Potential for Increased Accessibility
Many people cannot safely operate a vehicle due to visual impairments, mobility limitations, neurological disorders, or other issues. The public transportation system in the United States is severely underfunded and practically nonexistent in rural areas. Therefore, individuals unable to drive are left to rely on costly taxi services, social agencies, or the kindness of friends.
While perhaps not a cure-all, driverless cars offer the possibility of transportation independence for individuals with disabilities, older adults, and others unable to operate a vehicle safely. For these underserved groups, the technology could be life-changing.
Furthermore, AVs are becoming more attainable as more automakers actively develop models with automated driving systems. That competition should mean more vehicle options and better affordability for a broader consumer market.
Reflect on Government AV Regulation—or Lack Thereof
Elon Musk’s Department of Government Efficiency (DOGE) recently eliminated several National Highway Traffic Safety Administration (NHTSA) positions. According to Musk, the agency’s investigations and safety recalls were inhibiting self-driving technology. Many states are rolling out introductory regulations around self-driving cars, but those rules are still evolving. Meanwhile, AV technology is charging forward with fewer checks and balances in place.
One significant question yet to be resolved is liability: who is responsible when an AV crashes? In 2018, a self-driving Uber test vehicle struck and killed Elaine Herzberg, a 49-year-old woman crossing a street in Tempe, Arizona. The vehicle’s safety operator, who had been streaming a television show on her smartphone in the moments before the crash, ultimately pleaded guilty to endangerment and received just three years of probation.
Regardless of their many potential benefits, self-driving vehicles raise a host of ethical questions about accountability and AI. Policymakers and safety advocates argue that states must develop meaningful regulatory safeguards that keep pace with the advancing technology.
Consider the Possible Cybersecurity Risk
Today, data breaches and cybersecurity threats are more pervasive than ever, and hackers pose serious threats to self-driving vehicle safety and privacy. AVs store key personal information, such as phone numbers, location history, garage door codes, and login credentials for connected mobile apps. This data is both valuable to hackers and vulnerable to attack.
Researchers have also shown that it is relatively easy to trick AVs. Autonomous driving relies on sensors and maps, and unlike human drivers, the onboard computers can’t make subjective judgments about what they perceive. Malicious actors can therefore feasibly fool an AV’s navigation and perception systems into overlooking real driving obstacles, making, say, a stop sign effectively “disappear.”
In 2023, an anonymous hacker discovered a way to deactivate some of Tesla’s Autopilot safety measures. Examples like this fuel real concerns that a bad actor could access a self-driving car’s cloud systems, override the autopilot, and turn the vehicle into a weapon.
Keep the Conversation Going with a Classroom Debate
There are many other questions to raise regarding self-driving vehicles. Could driverless cars improve traffic flow in busy urban settings? How will they impact the environment? Do AVs undermine investment in improved public transit? And as AV options expand, will human drivers become obsolete?
Activity Idea: Host a debate to help students consider the social implications of emerging self-driving technologies.
- Divide the class into two teams to argue for and against AVs. Using Gale In Context: Opposing Viewpoints, students can gather evidence on aspects such as safety, environmental impact, and accessibility.
- Host a structured debate featuring an opening statement, cross-examinations, rebuttals, and closing arguments.
- After the debate, guide the entire class through a reflective discussion of the arguments presented. Did students learn anything new? How did the debate change their perspectives?
Gale In Context: Opposing Viewpoints doesn’t force an agenda. Instead, it presents multifaceted, unbiased materials representing a range of attitudes and well-researched opinions. Educators can feel confident that this database delivers balanced content that sparks thought-provoking discussion, regardless of the classroom topic.
If your school district is not a current Gale subscriber, contact your local representative to discuss the next steps and request a product trial.