Researcher Hacks Self-driving Car Sensors

Kedis82ZE8

The multi-thousand-dollar laser ranging (lidar) systems that most self-driving cars rely on to sense obstacles can be hacked by a setup costing just $60, according to a security researcher.

"I can take echoes of a fake car and put them at any location I want," says Jonathan Petit, Principal Scientist at Security Innovation, a software security company. "And I can do the same with a pedestrian or a wall."

Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles.

In a paper written while he was a research fellow in the University of Cork's Computer Security Group and due to be presented at the Black Hat Europe security conference in November, Petit describes a simple setup he designed using a low-power laser and a pulse generator. "It's kind of a laser pointer, really. And you don't need the pulse generator when you do the attack," he says. "You can easily do it with a Raspberry Pi or an Arduino. It's really off the shelf."
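For a rough sense of the timing involved: a pulsed time-of-flight lidar converts the round-trip delay of each echo into a distance, so a spoofer only needs to fire its own pulse at the right delay after the victim's outgoing pulse to plant a phantom object. Here is a minimal sketch of that arithmetic; the function and variable names are illustrative, not from Petit's paper:

```python
# Sketch of the timing math behind a lidar spoofing attack, assuming a
# simple pulsed time-of-flight lidar. Names are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def echo_delay_for_distance(distance_m: float) -> float:
    """Delay (seconds) after the victim's outgoing pulse at which a fake
    return pulse must arrive to appear as an object at distance_m."""
    return 2.0 * distance_m / C  # round trip: out and back

if __name__ == "__main__":
    for d in (5.0, 20.0, 100.0):
        ns = echo_delay_for_distance(d) * 1e9
        print(f"phantom object at {d:5.1f} m -> fire fake echo {ns:7.1f} ns after the pulse")
```

A phantom at 20 m, for example, needs the fake echo to land about 133 ns after the outgoing pulse - timing well within reach of cheap off-the-shelf hardware, which is Petit's point.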
 
When I was an undergrad I worked in an autonomous robot lab developing mapping software for our lidar-equipped robots. The technology has plenty of limitations and drawbacks. Some objects don't return reliable signals, and then, let's say you have hundreds of vehicles painting the area you're in with lasers - how do you avoid saturation and tell your own returns apart from everyone else's? This was 11 years ago, and even then it was pretty obvious that lidar was a research stop-gap until optical recognition matured (when I left, our lab had just brought in someone to work on that). I can't see lidar becoming the primary navigation method in a mainstream vehicle, nor any method that requires active scanning rather than passive optics.
 
All implementations of any technology have limitations. I wouldn't project the limitations of the particular implementation used by your college lab onto the entire field of lidar-based technologies - it's still in its infancy. That would be like someone looking over Alexander Graham Bell's shoulder during his early experiments and saying that copper-line telephony couldn't go very far because it required a pair of copper conductors for each separate conversation. Now we can transmit thousands of calls over the same two copper conductors.

Future lidar implementations could use various forms of time-division or frequency-division multiplexing to similarly accommodate thousands of users in the same area. Future navigational systems will likely combine multiple technologies to achieve whatever redundancy, accuracy, speed, range, penetration, etc. is required.
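To make the time-division idea concrete, here is a toy sketch; all the numbers and names are invented for illustration:

```python
# Toy illustration of time-division sharing (all parameters assumed):
# the scan period is split into fixed slots and each lidar unit only
# fires during its own slot, so co-located units never overlap in time.

SCAN_PERIOD_US = 1_000.0  # one scan cycle in microseconds (assumed)
NUM_SLOTS = 200           # units that can share one cycle (assumed)

def firing_window(unit_id: int) -> tuple[float, float]:
    """Start and end of one unit's firing window, in microseconds."""
    width = SCAN_PERIOD_US / NUM_SLOTS
    start = (unit_id % NUM_SLOTS) * width
    return start, start + width

if __name__ == "__main__":
    for uid in (0, 1, 199):
        lo, hi = firing_window(uid)
        print(f"unit {uid:3d} fires between {lo:7.1f} and {hi:7.1f} us")
```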
 
Given a couple minutes of thought, I don't see how time division or frequency division would help much - both assume a fairly stable environment. A better fit would be something like CDMA, where each lidar transmits its own code; even then, though, you still hit saturation problems once enough of them occupy the same area.
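A minimal sketch of that code-division idea, using NumPy with made-up parameters: each unit tags its pulses with its own pseudorandom chip sequence, and cross-correlating the received signal against your own code picks out your echo while other units' pulses average away - until, as noted, enough of them pile up. Real systems would work with analog photodetector samples, not clean ±1 chips:

```python
# Toy code-division demo: recover our own echo's delay despite an
# interfering lidar and noise, by correlating against our PRN code.
import numpy as np

rng = np.random.default_rng(0)
CODE_LEN = 127  # chips per code (assumed)

def make_code() -> np.ndarray:
    """Pseudorandom +/-1 chip sequence standing in for a PRN code."""
    return rng.choice([-1.0, 1.0], size=CODE_LEN)

own_code = make_code()
other_code = make_code()  # a nearby vehicle's lidar

# Received signal: our echo delayed by 40 chips, plus interference.
signal = np.zeros(512)
signal[40:40 + CODE_LEN] += own_code       # our return
signal[10:10 + CODE_LEN] += other_code     # interfering unit
signal += rng.normal(scale=0.5, size=512)  # receiver noise

# Correlate against our own code; the peak marks our echo's delay.
corr = np.correlate(signal, own_code, mode="valid")
print("estimated delay (chips):", int(np.argmax(corr)))  # expected: 40
```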

There are even bystander eye-safety issues that come into play if you have hundreds of near-infrared (905 nm) lasers continuously painting an area, as you'd expect near busy roadways. Small numbers are no problem, since each scan is brief and low-power, but near-infrared light is absorbed by the eye, so prolonged exposure is not a good idea. The problem could be addressed by switching to longer wavelengths that never reach the retina, but so far it seems to have received little attention.
 
Trust me, the technologies you were exposed to 10 years ago hadn't even scratched the surface of what's possible.

As the schemes for discriminating individual signals become more sensitive, more users will be able to use less power and less time to scan. Individual scanners can be networked together to share spatial data. Remember, we are talking about technologies that operate at the speed of light. Let your imagination run wild instead of trying to find roadblocks. Who knows how it will develop?
 
Trust what? [Citations please] - I actually worked with the hardware, software, and research papers that underpin much of what is now actually being implemented. What have you worked on? This isn't magic - "oh, but we're so much more advanced" - it's physics. Processing gets faster, but we're still dealing with point clouds, and with lasers that fly through certain objects, bounce the wrong way, or otherwise produce faulty returns.

The fundamental problem isn't processing power, faster scans, networking, or whatever. Even if you solve signal division and saturation and switch to a safer laser, you still have to actively scan and rely on the return of signals you create, instead of passively working with a world we've already built to be visual in nature. Lidar has its place - it's very useful for direct scans of nearby objects, which is exactly where we see it today in collision sensors. But for primary navigation? Nah. It has to be secondary to passive optics to interface with the world we live in.
 
