AI and Hybrid Technology Accidents: The Need for Human Oversight

CAREER COUNSELING WITH CHAIFRY

Chaifry

6/14/2025

Introduction

Artificial intelligence (AI) and hybrid technologies, which blend AI with human oversight or integrate multiple AI algorithms, have reshaped industries like aviation, transportation, healthcare, and manufacturing. These systems promise to enhance safety by automating complex tasks, reducing human error, and improving efficiency. In aviation, AI assists with navigation, collision avoidance, and maintenance predictions, aiming to make flights safer. Yet accidents persist, often with devastating consequences, as seen in the 2025 Air India flight AI171 crash in Ahmedabad, which killed 290 people. The statement, “If an accident is happening despite AI and hybrid technology, then a person must think at least once. Have we left everything to AI and technology?” underscores the critical need for human oversight. This article explores why human intervention remains essential, delving into AI’s limitations, the role of human judgment, and lessons from notable accidents. It argues for a balanced approach where AI augments, rather than replaces, human capabilities to ensure safety and accountability.

The Promise and Limitations of AI in Accident Prevention

AI and hybrid technologies have significantly advanced accident prevention. In aviation, AI-powered systems monitor real-time conditions, enabling features like automatic collision avoidance and predictive maintenance. Hybrid systems, combining AI with human control, enhance decision-making by processing vast data quickly. For example, AI can analyze weather patterns, air traffic, and aircraft performance to optimize flight paths, reducing risks. In transportation, AI-driven systems in autonomous vehicles detect obstacles, while in healthcare, AI aids in diagnosing conditions to prevent medical errors. These advancements have lowered accident rates in many sectors, saving countless lives.

However, AI is not infallible. Its reliance on training data means it may fail in rare or unforeseen scenarios. For instance, an AI system might misinterpret a unique weather condition or an unusual object, leading to errors. Technical failures, such as sensor malfunctions or software glitches, can also cause accidents. Additionally, AI lacks contextual understanding and ethical reasoning, limiting its ability to handle complex human behaviors or moral dilemmas. These limitations suggest that while AI can enhance safety, it cannot eliminate accidents entirely, necessitating human oversight to bridge these gaps.

The Indispensable Role of Human Oversight

Human oversight is vital for ensuring the safe and ethical use of AI systems. Humans provide three key functions that AI cannot replicate:

  1. Ethical Decision-Making: AI lacks the ability to understand societal norms or moral principles. Humans define ethical guidelines, ensuring AI aligns with values like fairness and safety. For example, in aviation, pilots make ethical choices in emergencies, prioritizing passenger safety.

  2. Adaptability: Humans can adapt to new scenarios, crucial for handling situations outside AI’s training scope. A pilot noticing an anomaly in an AI-assisted system can intervene, preventing potential disasters.

  3. Accountability: Human oversight fosters transparency and trust, ensuring errors are addressed and responsibility is assigned. This is critical in high-stakes domains where accidents have severe consequences.

The concept of “Meaningful Human Control” (MHC) is often debated, with some arguing it’s insufficient due to AI’s autonomy. Instead, continuous “Human Oversight” is proposed, involving active monitoring and intervention. In aviation, Human-In-The-Loop (HITL) systems integrate human validation, enhancing safety in navigation and maintenance. Regulatory frameworks, like those mandating human oversight for high-risk AI systems, require mechanisms for intervention, ensuring humans remain in control.
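
To make the HITL pattern concrete, the sketch below shows the basic shape of such a workflow in Python. It is purely illustrative: the Proposal structure, the confidence threshold, and the function names are invented for this example and are not drawn from any real avionics or regulatory system. The structural point is that the automated system proposes an action and explains itself, and nothing executes until a human operator explicitly approves it, with low-confidence proposals held for fully manual handling.

    # Minimal human-in-the-loop (HITL) gate; illustrative only, not a real avionics interface.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        action: str        # what the automated system wants to do
        confidence: float  # the system's self-reported confidence, 0.0 to 1.0
        rationale: str     # a human-readable explanation of why

    def human_confirms(proposal: Proposal) -> bool:
        """Present the proposal to the operator and ask for explicit approval."""
        print(f"System proposes: {proposal.action}")
        print(f"  confidence: {proposal.confidence:.2f}")
        print(f"  rationale:  {proposal.rationale}")
        return input("Approve? [y/N] ").strip().lower() == "y"

    def execute_with_oversight(proposal: Proposal, confidence_floor: float = 0.9) -> None:
        # The human stays in the loop: nothing executes without confirmation,
        # and low-confidence proposals are held for fully manual handling.
        if proposal.confidence < confidence_floor:
            print("Confidence below threshold: action held for manual handling.")
            return
        if not human_confirms(proposal):
            print("Operator rejected the proposal: action not executed.")
            return
        print(f"Executing approved action: {proposal.action}")

    if __name__ == "__main__":
        execute_with_oversight(Proposal(
            action="set takeoff flaps to position 5",
            confidence=0.97,
            rationale="matches aircraft weight, runway length, and temperature"))

The same gate-and-escalate structure appears, in more elaborate forms, wherever humans validate automated outputs, from cockpit configuration checks to a radiologist signing off on an AI-flagged scan.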

The Air India AI171 Crash: A Case Study

On June 12, 2025, Air India flight AI171, a Boeing 787-8 Dreamliner, crashed about 30 seconds after takeoff from Ahmedabad’s Sardar Vallabhbhai Patel International Airport, en route to London Gatwick. The disaster, one of the worst in Indian aviation history, killed 241 of the 242 passengers and crew and at least 49 people on the ground, totaling 290 fatalities. The sole survivor, Vishwash Kumar Ramesh, a British national of Indian origin in seat 11A, escaped through an emergency exit and is recovering in hospital.

The flight departed at 1:38 p.m. local time with 230 passengers, including 169 Indians, 53 Britons, 7 Portuguese, and 1 Canadian, plus 12 crew members. Moments after takeoff, the aircraft reached roughly 625 feet, lost altitude, and crashed into a doctors’ hostel at Byramjee Jeejeebhoy Medical College and Civil Hospital, causing significant ground casualties. The crash occurred during a lunch break, with parts of the plane smashing through the hostel’s dining hall, leaving a scene of abandoned tables and a gaping hole in the wall. At least four medical students and four doctors’ relatives perished, and identification efforts relied on DNA matching due to the severity of the damage.

Preliminary reports suggest that flap misconfiguration, combined with landing gear drag, left the aircraft with insufficient lift, causing it to stall. The pilots’ final radio call reportedly included “Engine failure, no thrust,” indicating a critical loss of power. The investigation, led by India’s Aircraft Accident Investigation Bureau (AAIB) with support from the US National Transportation Safety Board (NTSB) and the UK Air Accidents Investigation Branch, has recovered one black box, a crucial step in determining the cause. Aviation experts speculate that human error in monitoring AI-assisted takeoff settings may have contributed, as the flaps appeared retracted in verified footage, a configuration unsuitable for takeoff.
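
The physical reasoning behind the flap hypothesis can be illustrated with the standard lift equation (lift = 0.5 × air density × speed² × wing area × lift coefficient): extending flaps raises the lift coefficient, so a clean, flaps-retracted wing generates markedly less lift at the same speed. The short sketch below runs that comparison with rough, generic numbers; these are illustrative placeholders (approximate 787-8 wing area and weight, assumed speed and lift coefficients), not figures from the AI171 investigation.

    # Illustrative only: comparing lift with flaps extended vs. retracted using
    # the standard lift equation L = 0.5 * rho * v^2 * S * CL.
    # All numbers are rough placeholders, not Air India AI171 data.

    RHO = 1.225          # sea-level air density, kg/m^3
    WING_AREA = 377.0    # approximate Boeing 787-8 wing area, m^2
    MASS = 227_000       # near maximum takeoff weight, kg (approximate)
    G = 9.81             # gravitational acceleration, m/s^2

    def lift_newtons(speed_ms: float, cl: float) -> float:
        return 0.5 * RHO * speed_ms ** 2 * WING_AREA * cl

    weight = MASS * G
    speed = 80.0  # about 155 knots, a plausible lift-off speed for illustration

    for label, cl in [("flaps extended (takeoff setting)", 2.0),
                      ("flaps retracted (clean wing)", 1.4)]:
        lift = lift_newtons(speed, cl)
        verdict = "sufficient" if lift >= weight else "insufficient"
        print(f"{label}: lift ≈ {lift / 1000:.0f} kN vs weight ≈ {weight / 1000:.0f} kN -> {verdict}")

With the assumed takeoff lift coefficient the wing comfortably supports the aircraft’s weight at this speed, while the clean-wing case falls short, which is why a retracted-flap takeoff configuration is treated as a serious candidate cause pending the official findings.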

Air India confirmed the accident, established a hotline, and is cooperating with authorities. Boeing canceled its Paris Air Show attendance to focus on supporting the investigation. Tata Group, Air India’s owner, pledged ₹1 crore (about $116,868) to the family of each of the deceased, along with coverage of medical expenses. Indian Prime Minister Narendra Modi visited the site, expressing sorrow, while UK leaders, including Prime Minister Sir Keir Starmer, offered condolences and deployed experts to assist.

This crash highlights AI’s limitations in aviation, where automated systems assist with takeoff and navigation. If flap misconfiguration is confirmed, it suggests inadequate human oversight of AI-assisted settings, emphasizing the need for robust training and vigilance.

Other Notable AI-Related Accidents

The AI171 crash is part of a series of AI-related accidents, highlighting the necessity of human oversight in automated systems. Below are key incidents and their implications:

1. Uber Self-Driving Car Fatality (2018)

  • Description: An autonomous Uber vehicle struck and killed a pedestrian in Tempe, Arizona.

  • Cause: The AI failed to correctly classify the pedestrian, and the safety driver was distracted.

  • Key Issue: Lack of real-time human intervention in critical moments.

2. Tesla Autopilot Crashes (2018-2022)

  • Description: Around 11 crashes in which Teslas operating on Autopilot failed to recognize stationary emergency vehicles (e.g., fire trucks, police cars) and struck them.

  • Cause:

    • Over-reliance on Autopilot by drivers.

    • Misleading marketing (e.g., claims of "full self-driving" capability).

  • Regulatory Action: A German court (2020) banned Tesla from using terms like “full potential for autonomous driving.”

  • Key Issue: Human complacency and lack of understanding of system limitations.

3. Boeing 737 MAX Crashes (2018-2019)

  • Description: Two fatal crashes caused by faulty angle-of-attack sensor data repeatedly triggering an automated anti-stall system (MCAS).

  • Cause:

    • Pilots were not informed about the system’s existence or trained to override it.

    • Lack of transparency in AI-human interaction.

  • Key Issue: Insufficient training and failsafe protocols for human intervention.

These incidents demonstrate that AI systems have technical and contextual limitations, making human oversight essential to prevent catastrophic failures. Without proper monitoring, AI errors can escalate with deadly consequences.

Supporting Investigations and Organizations

Investigating AI-related accidents is essential for understanding causes, improving safety, and informing policy. Thorough investigations uncover technical failures, human errors, or systemic issues, enabling stakeholders to implement corrective measures. For instance, the AI171 crash investigation aims to determine whether flap misconfiguration resulted from human oversight or AI system failure, guiding future safety protocols.

Organizations like the Center for Security and Emerging Technology (CSET) play a pivotal role in tracking AI incidents. CSET advocates for a hybrid incident reporting framework, combining mandatory, voluntary, and citizen reporting to ensure comprehensive data collection. This approach balances consistency with flexibility, capturing diverse perspectives while minimizing administrative burdens. Such frameworks help identify patterns, inform regulations, and prevent future accidents.
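
To illustrate what such a hybrid reporting framework might capture, here is a minimal data-structure sketch. The field names, channel labels, and example records are invented for this article and are not CSET’s actual schema; the point is simply that reports about the same incident can arrive through mandatory, voluntary, and citizen channels and be aggregated into one picture.

    # Illustrative sketch of a hybrid incident-reporting record; not a real schema.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class ReportChannel(Enum):
        MANDATORY = "mandatory"   # required filings from operators of high-risk systems
        VOLUNTARY = "voluntary"   # discretionary reports from developers and operators
        CITIZEN = "citizen"       # reports from affected members of the public

    @dataclass
    class IncidentReport:
        incident_id: str              # shared identifier so channels can be linked
        channel: ReportChannel
        sector: str                   # e.g. "aviation", "healthcare", "finance"
        system_role: str              # what the AI or hybrid system was doing
        harm_description: str
        human_oversight_present: bool
        sources: List[str] = field(default_factory=list)

    # Aggregating reports from several channels gives investigators a fuller
    # picture of a single incident than any one channel alone.
    reports = [
        IncidentReport("example-001", ReportChannel.MANDATORY, "aviation",
                       "automated takeoff configuration assistance",
                       "loss of aircraft", True, ["operator filing"]),
        IncidentReport("example-001", ReportChannel.CITIZEN, "aviation",
                       "automated takeoff configuration assistance",
                       "damage and casualties on the ground", True, ["public report"]),
    ]

    by_incident: dict = {}
    for report in reports:
        by_incident.setdefault(report.incident_id, []).append(report.channel.value)
    print(by_incident)  # {'example-001': ['mandatory', 'citizen']}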

The AI171 investigation, supported by international agencies, exemplifies the importance of collaborative efforts. By analyzing black box data and crash site evidence, investigators can pinpoint causes and recommend improvements, such as enhanced AI algorithms or better training for pilots. These efforts ensure that lessons from tragedies are applied to enhance safety across the industry.

Human Oversight Across Industries

While aviation and transportation are focal points, human oversight is crucial in other sectors:

  • Healthcare: AI aids in diagnosing diseases and planning treatments, but errors, such as misinterpreting medical images, can occur. Human doctors review AI outputs, ensuring accuracy and providing empathetic care that AI cannot replicate. For example, a radiologist might catch a subtle anomaly missed by an AI scan, preventing misdiagnosis.

  • Finance: AI detects fraud and powers algorithmic trading, but biased data or manipulation can lead to errors. Human analysts oversee AI systems, ensuring compliance and ethical decisions. A financial advisor might override an AI recommendation if it conflicts with a client’s long-term goals.

  • Manufacturing: AI-driven robots enhance productivity, but human workers supervise operations, handle exceptions, and maintain equipment. A technician might notice a robot’s misalignment that AI fails to detect, preventing defective products.

In each sector, human oversight maximizes AI’s benefits while minimizing risks, ensuring technology serves human needs responsibly.

Ethical and Philosophical Considerations

AI-related accidents raise profound ethical questions about accountability and responsibility. Who is liable when an AI system fails? In the Tesla Autopilot crashes, drivers faced legal consequences, but debates persist about manufacturer accountability for misleading claims. In the AI171 crash, liability may involve pilots, Boeing, or Air India, depending on whether human error or system failure is confirmed. Current legal frameworks, designed for human errors, struggle with AI-related incidents, necessitating new approaches to assign responsibility fairly.

Philosophically, over-reliance on AI risks a “responsibility gap,” where neither humans nor machines are fully accountable. This is concerning in high-stakes domains like aviation, where human adaptability is crucial for unpredictable scenarios. The AI171 crash suggests that pilots may have trusted AI-assisted systems too much, missing critical errors. Excessive dependence on AI could also degrade human skills, as operators become less vigilant or less capable of manual intervention. For instance, pilots overly reliant on automation may struggle to respond effectively in emergencies, as seen in the Boeing 737 MAX crashes.

The statement, “a person must think at least once,” reflects the irreplaceable role of human judgment. Humans bring intuition, ethical reasoning, and adaptability, ensuring technology aligns with societal values and safety standards. Maintaining this balance is essential to prevent AI from becoming a liability rather than an asset.

Future Directions for Safe AI Deployment

To prevent accidents like AI171 and harness AI’s potential, several strategies are imperative:

  1. Enhanced Training and Education: Operators, from pilots to healthcare professionals, must be trained to understand AI’s capabilities and limitations. Simulation-based training can prepare them for emergencies, ensuring they can intervene effectively. For example, pilots should practice overriding AI-assisted systems to handle flap misconfigurations or sensor failures.

  2. Robust Regulatory Frameworks: Governments must establish clear regulations for AI deployment, particularly in high-risk sectors. Policies mandating human oversight for critical systems provide a foundation for safety. These regulations should require transparency about AI’s capabilities and limitations, preventing misleading claims that foster over-reliance.

  3. Human-AI Collaboration: Technologies like HITL systems integrate human intuition with AI, enhancing safety and ethical decision-making. In aviation, HITL workflows involve pilots validating AI outputs during takeoff and landing, ensuring accuracy. These systems can be adapted to other industries, such as healthcare, where doctors review AI diagnoses.

  4. Advanced Research and Development: Continued investment in AI safety research is crucial. This includes developing tools to detect AI errors, improving system robustness against unforeseen scenarios, and enhancing interpretability to make AI decisions transparent. Research into preventing human deception, such as misleading marketing or user misuse, can also mitigate risks.

  5. International Coordination: Given AI’s global impact, international cooperation is essential to address risks like system failures or misuse. Collaborative efforts can standardize safety protocols, share incident data, and develop global regulations, ensuring consistent safety standards across borders.

  6. Ethical AI Development: AI must be developed with ethics at its core, respecting human rights, privacy, and societal values. Engaging diverse stakeholders—developers, users, and policymakers—ensures AI benefits all while minimizing harm. Ethical guidelines can prevent scenarios where AI prioritizes efficiency over safety, as suspected in some accidents.

These strategies, implemented collaboratively, can create a future where AI and hybrid technologies enhance safety without compromising human oversight.

Conclusion

The Air India AI171 crash and other AI-related accidents underscore the limitations of AI and hybrid technologies, emphasizing the indispensable role of human oversight. While AI offers precision and efficiency, it cannot replace human judgment in ethical decision-making, adaptability, and accountability. Human oversight ensures that AI serves as a tool to augment, not supplant, human capabilities, addressing technical and contextual gaps that lead to accidents. By investing in training, regulations, and collaborative systems like HITL, we can prevent future tragedies and build trust in technology. The lesson from these incidents is clear: technology is only as safe as the humans who guide it. A balanced approach, where AI and human oversight work in tandem, is the path to a safer, more ethical future.

Comment:

"You have very beautifully started with aviation industry and their dependence on AI and further given the reader a glimpse of AI used in other fields and how slowly it’s taking over. In doing so you have brought out something that very few notice when in it exposes the limitations of human wisdom and the importance of seeking divine guidance. We cannot forget our creator and Master." Mark Desouza