In brief - Insurers should consider the impact of driverless vehicles on the laws of negligence, product liability and cyber security insurance

Recent research suggests that by 2050 nearly all cars and commercial vehicles will be autonomous to some degree. The predicted inevitability of widespread use of driverless cars raises some significant questions, not least of which is what it means for legal liability and insurance coverage. 
 
The New South Wales Government's recent announcement of a new $10 million fund dedicated to expanding its trials of driverless vehicles further serves as a reminder that not only are autonomous vehicles being developed and tested right now, but within a few years they could be on the roads. 

How will drivers' duty of care, laws of negligence and product liability laws apply?

In May 2018, Uber suspended its driverless car program following a collision which resulted in the death of a pedestrian in Arizona. 
 
The U.S. National Highway Traffic Safety Administration report Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey showed that approximately 94% of car accidents are the result of human error. Drivers owe a duty of care to other motorists, passengers and pedestrians to drive in a safe and responsible manner. In the case of a collision between, or an accident involving, traditional driver-controlled vehicles, it is the conduct of the drivers involved in the accident that is primarily considered in order to establish whether one or more of them acted negligently. 
 
However, the fact that there may be no-one operating an autonomous vehicle raises the question of whether current laws of negligence are applicable when considering liability for an accident. 
 
In the event of an accident involving an autonomous vehicle which is the result of issues with the design, manufacture or assembly of the vehicle, existing product liability laws (including the guarantee that goods are of acceptable quality, as required by Part 3-2, Division 1, Subdivision A of the Australian Consumer Law) may apply, such that the manufacturer of the autonomous vehicle, or of one or more of its component parts, can be sued in the same manner as the manufacturer of any other product proven to be defective. In those circumstances, liability may ultimately be passed on by the vehicle owner to the vehicle manufacturer. 

The five levels of autonomous driving 

An added complexity is that there can be varying degrees of autonomous operation of vehicles. The following industry categories of autonomous vehicles demonstrate some of the possible permutations:
  • Level 1 - This is a driver assistance level, meaning that most of the vehicle's functions are controlled by the driver, but a specific function (like steering or accelerating) can be performed automatically by the car. 
  • Level 2 - At this level the driver can be disengaged from physically operating the vehicle by taking their hands off the steering wheel and their foot off the pedals. However, the driver must always be ready to take control of the vehicle.
  • Level 3 - At this level a driver is still necessary and must intervene when needed, but is not required to monitor the situation in the same way as at the lower levels (i.e. "eyes-off"). 
  • Level 4 - This level is what is meant by fully autonomous (i.e. "brain-off"). Vehicles at this level are designed to perform all safety-critical driving functions and monitor roadway conditions. However, operation is limited to the vehicle's operational design domain, meaning that it does not cover every driving scenario. 
  • Level 5 - This level refers to a fully autonomous system that expects the vehicle's performance to equal that of a human driver in every driving scenario.

The above categories demonstrate that there can be a "handover" of control between the human driver and the vehicle. The question of who is liable in the event of an accident involving an autonomous vehicle will therefore need to be determined in conjunction with an analysis of the extent to which the vehicle was autonomous (i.e. to what extent it was being driven by a person) and careful consideration of the proximate cause of the accident. 
 
What of the situation where a "passenger" in an autonomous car fails to override the car's automatic controls and take evasive action to avoid an accident that could otherwise have been prevented? 
 
It may be that existing laws of negligence and product liability require updating to cater for the specific types of liabilities that could arise from accidents involving one or more autonomous vehicles. 
 
One thing appears relatively certain: where the decisions made in relation to the operation of an "autonomous" vehicle immediately before an accident have been recorded by the vehicle's on-board computer system, the attribution of fault and allocation of liability for accidents will be made easier. 

Compulsory third party insurance, differing levels of automation, cyber risk and risk ratings among considerations for insurers

The above considerations will be food for thought for insurers who intend to write risks in relation to the operation of autonomous vehicles. 
 
At present, compulsory third party (CTP) insurance is required to register a vehicle in Australia. In some states, such as Victoria, there is a no-fault CTP scheme. In other states, however, CTP insurance covers vehicle owners and drivers who, on the balance of probabilities, are legally liable for personal injury caused to any person in the event of an accident. Accordingly, the driver of an autonomous vehicle may not necessarily be legally liable in negligence for an accident if the vehicle was fully autonomous. 
 
In these circumstances, it could be the vehicle's manufacturer who is potentially exposed to a product liability claim. For this reason, one might expect that, if the use of autonomous vehicles becomes widespread, car manufacturers will become much more closely involved in vehicle insurance, including CTP insurance. 
 
As noted earlier in this article, autonomous vehicles (at least initially) are likely to have different levels of automation. This means that insurers may need different policies to cater for varying levels of automation between vehicles. 
 
Having regard to their dependence upon computer software, there is a risk that autonomous vehicles could become another platform for hackers to infiltrate, giving rise to additional risks relevant to policies of cyber and related insurance.
 
Finally, the risk ratings applied by insurers in many cases rely upon data regarding a driver's age, geographical location, years of driving experience, distance driven annually, and so on. Such metrics may not be available for, or relevant to, writing risks in relation to the use and operation of autonomous vehicles. 

Insurers should keep abreast of driverless car developments

With driverless cars already operating in San Francisco, Singapore and Paris, and testing of driverless vehicles currently being conducted in parts of Australia, it appears inevitable that the operation of autonomous vehicles will become more commonplace. While they have the potential to revolutionise transport, it is clear that law, regulation and insurance offerings will need to adapt to keep pace with this exciting technology.

This is commentary published by Colin Biggers & Paisley for general information purposes only. This should not be relied on as specific advice. You should seek your own legal and other advice for any question, or for any specific situation or proposal, before making any final decision. The content also is subject to change. A person listed may not be admitted as a lawyer in all States and Territories. © Colin Biggers & Paisley, Australia 2024.