Faculty submits comprehensive paper on automated vehicles


22 Feb

ADVANCES in technology used in automated vehicles could make it impossible to identify the cause of accidents involving them, the Faculty has suggested.

Research aims to produce self-driving vehicles which “think” for themselves, but their reasoning processes are likely to be impenetrable, the Faculty says.

And if there is an accident, it may be impossible to determine what produced the behaviour that caused it.

The observations are made in a comprehensive response by the Faculty to a joint consultation by the Scottish Law Commission and the Law Commission of England and Wales as part of a three-year review to prepare laws for self-driving vehicles.

The Faculty said current technology for automated driving systems was based on algorithms: explicit processes or rules followed step by step in problem-solving operations. Research, however, was seeking to develop “neural networks”, systems which made their own autonomous decisions.

“It is a feature of such systems that their internal ‘reasoning’ processes tend to be opaque and impenetrable (what is known as the ‘black box’ phenomenon) – the programmers may be unable to explain how they achieve their outcomes,” the Faculty stated.

“If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system (there might be a clearly identifiable bug in the system, or one of the algorithms might be obviously flawed) but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident.”
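To make the contrast concrete, the following Python sketch (hypothetical code, not taken from the Faculty’s response or any real automated driving system) sets a conventional rule-based braking check, whose flaw is visible in the source, against a tiny neural network whose decision emerges from weights that reveal nothing about its reasoning.

```python
# A minimal, hypothetical sketch contrasting the two kinds of system the
# Faculty describes; neither function is drawn from any real driving system.

import random

def rule_based_brake(obstacle_distance_m: float) -> bool:
    """Conventional system: brake if an obstacle is within 5 metres.

    The logic is explicit, so an investigator reading the source code can
    spot the flaw directly. (The bug is deliberate: the comparison is
    inverted, so the vehicle brakes only when the obstacle is far away.)
    """
    return obstacle_distance_m > 5.0  # BUG: should be <= 5.0, visible on inspection

# "Neural network" system: behaviour emerges from many learned weights.
# Random weights stand in for trained ones; reading these numbers tells an
# investigator nothing about WHY the system chose to brake or not.
random.seed(0)
HIDDEN_WEIGHTS = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
OUTPUT_WEIGHTS = [random.uniform(-1, 1) for _ in range(8)]

def network_brake(distance_m: float, speed_mps: float, light_level: float) -> bool:
    """One hidden layer of 8 ReLU units feeding a single output score."""
    inputs = [distance_m, speed_mps, light_level]
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs)))
              for row in HIDDEN_WEIGHTS]
    score = sum(w * h for w, h in zip(OUTPUT_WEIGHTS, hidden))
    return score > 0.0

if __name__ == "__main__":
    # The rule-based failure is traceable to a single line of source code...
    print(rule_based_brake(3.0))        # False: the bug is found by reading the code
    # ...but the network's decision cannot be explained by reading its weights.
    print(network_brake(3.0, 10.0, 0.5))
```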

The Faculty said it believed that, at this stage in their development, automated vehicles should not be programmed to mount the pavement (for example, to allow emergency vehicles to pass), to “edge through” pedestrians, or to exceed the speed limit in certain circumstances.