Automakers have invested heavily in developing advanced driver-assistance technologies to make driving more comfortable and safer. The most advanced of these systems are already offered as vehicle features that satisfy Level 2 automated driving as defined by SAE International in SAE J3016-2018, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, and incorporate capabilities such as Lane Keep Assist (LKA), Adaptive Cruise Control (ACC), and Automatic Emergency Braking (AEB). These features can intervene in certain driving scenarios to control the vehicle’s movement; yet, to ensure safe operation, the driver must remain attentive and focused on the driving environment.
To date, these L2 systems have been designed around camera and radar technology. However, automakers can greatly improve the effectiveness and efficiency of driver-assist features by employing a system in which lidar is a key perception component. Lidar technology is inherently superior to cameras and radar in certain performance aspects that are crucial for avoiding forward collisions, which supports a move within the industry to implement lidar as a key enabling sensor for ADAS applications.[1]
Lidar performs free-space detection more efficiently and precisely than cameras, by providing real-time measurements of how far surrounding objects are from the vehicle, with no additional computational processes or sensors required. As a result, data from a single lidar sensor directly provides the fundamental building block of a successful driver assistance system: accurate free-space detection. That is, lidar utilizes precise distance measurements of surrounding objects to map areas where it is safe for the vehicle to drive.
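To make this concrete, the following is a minimal sketch, under assumed names and parameters, of how per-beam range returns can be turned into a free-space estimate: each azimuth direction is treated as drivable out to the nearest return in that direction. It illustrates the concept only and is not Velodyne's production pipeline.

```python
import numpy as np

def free_space_map(azimuths_rad, ranges_m, num_bins=360, max_range_m=100.0):
    """Estimate drivable free space from a single lidar sweep.

    azimuths_rad : beam azimuth angles (radians)
    ranges_m     : measured distance for each beam (meters)
    Returns an array of length num_bins giving, per azimuth bin,
    the distance out to which no obstacle was detected.
    """
    free = np.full(num_bins, max_range_m)                 # assume open space by default
    bins = ((azimuths_rad % (2 * np.pi)) / (2 * np.pi) * num_bins).astype(int)
    for b, r in zip(bins, ranges_m):
        free[b] = min(free[b], r)                         # nearest return bounds free space
    return free

# Illustrative example: a single obstacle about 12 m straight ahead
az = np.array([0.00, 0.01, 1.57])                         # radians
rng = np.array([12.0, 12.1, 60.0])                        # meters
print(free_space_map(az, rng)[0])                         # ~12.0 -> clear for 12 m ahead
```

The key point is that the distance values come directly from the sensor; no inference step is needed before the free-space map can be drawn.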
Radar has the ability to detect some surrounding objects; however, its relatively “fuzzy” image does not provide accurate free-space detection and makes radar dependent on other sensors for object-classification tasks. Furthermore, radar struggles to detect stationary objects. “Millimeter-wave radar has high range accuracy, and is little influenced by environmental conditions. But its angle resolution is poor, and the millimeter-wave radar is prone to false alarm.”[2] These combined weaknesses mean that radar contributes little to free-space detection.
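As a rough, back-of-the-envelope illustration of why coarse angular resolution matters (the figures below are assumptions for illustration, not measurements from the cited paper): the lateral width of a sensing resolution cell grows linearly with range, so distant objects that a finer-resolution sensor would separate can merge into a single radar return.

```python
import math

def cross_range_resolution_m(range_m, angular_resolution_deg):
    """Lateral width of a resolution cell at a given range."""
    return range_m * math.radians(angular_resolution_deg)

# Assumed values: a radar with ~4 deg angular resolution versus a
# lidar with ~0.2 deg, both observing an object 100 m away.
for name, res_deg in [("radar", 4.0), ("lidar", 0.2)]:
    print(f"{name}: ~{cross_range_resolution_m(100.0, res_deg):.1f} m wide cell at 100 m")
# radar: ~7.0 m wide cell at 100 m (wider than a highway lane)
# lidar: ~0.3 m wide cell at 100 m
```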
In contrast with lidar, camera-centric approaches require multiple sensors and complex computational processes to infer the distance of surrounding objects and thereby determine safe driving paths. For example, in a “stereo vision” approach requiring at least two cameras, “a depth estimation algorithm uses triangulation between the left and right images to determine the depth of objects in the field of view.”[3] Alternatively, if a system utilizes only one camera, the vehicle’s computer must compare multiple frames to simulate a stereo image. However, compared to lidar, this “structure from motion” approach also requires additional computational complexity to derive distance estimates.
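The triangulation step referenced above reduces to a simple relationship: for rectified stereo cameras with focal length f (in pixels) and baseline B, an object whose image shifts by a disparity of d pixels between the left and right views lies at depth Z = f·B/d. The sketch below is an assumption-level illustration of that formula with made-up camera parameters, not the algorithm described in SAE J3088.

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from rectified stereo: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed example: 1000 px focal length, 0.30 m baseline.
# A pedestrian whose image shifts 20 px between left and right views:
print(stereo_depth_m(1000, 0.30, 20))   # 15.0 m
# Note the sensitivity: a 1 px disparity error near d = 20 px shifts the
# estimate by roughly 0.75 m, and that error grows quadratically with depth.
```

This sensitivity to small disparity errors at long range is part of why camera-only distance estimation demands careful calibration and heavier computation than a direct range measurement.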
The complexity and cost of camera-based approaches are compounded by the fact that cameras suffer from what might be called “tunnel vision”. That is, as cameras focus on objects at greater distances, they sacrifice field of view. Any photographer who has utilized a camera’s zoom feature will recognize this phenomenon: Focusing on a distant object results in less of the scene being captured in the image. As a result, to achieve the constant high-resolution image needed to detect vehicles, objects, and pedestrians at every necessary range (near, mid, and far), advanced driving systems that are designed around cameras require multiple focal lengths and, therefore, multiple cameras.
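The trade-off follows from the standard pinhole relationship between focal length and horizontal field of view, FOV = 2·arctan(w / 2f), where w is the sensor width. The numbers below are illustrative assumptions, but they show why a single focal length cannot cover near, mid, and far ranges at once.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of an ideal pinhole camera."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assumed 6.4 mm-wide automotive imager at three focal lengths:
for f_mm in (2.0, 6.0, 25.0):            # wide, mid, narrow/telephoto
    print(f"f = {f_mm:>4} mm -> FOV ~ {horizontal_fov_deg(6.4, f_mm):.0f} deg")
# f =  2.0 mm -> FOV ~ 116 deg   (good nearby coverage, poor long-range detail)
# f =  6.0 mm -> FOV ~ 56 deg
# f = 25.0 mm -> FOV ~ 15 deg    (sees far ahead, but only a narrow cone)
```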
Overdependence on cameras for driver assistance brings further setbacks. Current systems often analyze camera images to identify detected objects, yet “algorithms performing feature extraction from images rely heavily on the presence of ‘contrast’ (either color-wise or intensity-wise).” This dependency on contrast can make camera-centric systems prone to optical illusions; for example, when the side of a tractor-trailer blends in with the sky.[4] Camera-based systems can suffer not only from these false-negative readings, but also from false positives. A recent IIHS study revealed that such flawed readings can cause systems to react inappropriately in real road-driving scenarios. “In 180 miles,” the report explains, “the car unexpectedly slowed down 12 times, seven of which coincided with tree shadows on the road.”[5] This poor level of performance led IIHS to fear that drivers would simply turn off their vehicles’ safety systems altogether.
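To illustrate the contrast dependency described in the quotation above, the toy example below (an assumption-level sketch, not any production algorithm) applies a simple gradient-based edge measure to a high-contrast and a low-contrast version of the same synthetic scene; the edge response nearly vanishes when the object's intensity approaches that of its background.

```python
import numpy as np

def edge_strength(image):
    """Mean horizontal gradient magnitude: a crude stand-in for feature extraction."""
    return np.abs(np.diff(image.astype(float), axis=1)).mean()

def scene(sky_intensity, obstacle_intensity, width=100):
    """Toy scene: left half is 'sky', right half is an 'obstacle'."""
    row = np.r_[np.full(width // 2, sky_intensity),
                np.full(width // 2, obstacle_intensity)]
    return np.tile(row, (10, 1))                       # 10-row grayscale image

print(edge_strength(scene(200, 30)))    # high contrast: strong edge response
print(edge_strength(scene(200, 195)))   # obstacle nearly matches the sky: response collapses
```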
Exacerbating each of these characteristic challenges in camera-centric approaches is their relatively weak performance in low-light conditions. Cameras, like our eyes, are dependent on ambient light to function. Some companies are exploring engineering workarounds for this deficiency; for example, by incorporating infrared cameras to improve low-light performance. Such efforts to enhance existing camera and radar modalities demonstrate not only that automakers recognize these technologies alone cannot solve the problem, but also that they are exploring infrared as a possible solution. Therefore, rather than designing patchwork solutions to bolster the performance of any single sensor modality, what is truly needed to cover the gaps in existing approaches is a new sensor technology that provides a different kind of data. To achieve safe operation in a broad range of conditions and contexts, the complexity of advanced driving safety requires automakers to combine the relative strengths of every available and appropriate sensor technology on the market.
About the authors
- Dr. David Heeren is the Senior Technical Product Marketing Manager at Velodyne Lidar Inc.
- Dr. Mircea Gradu is the Senior Vice President of Validation and Chief Quality Officer at Velodyne Lidar Inc.
References
[1] See also “A Safety-First Approach to Developing and Marketing Driver Assistance Technology.”
[2] Wu, X., Ren, J., Wu, Y., and Shao, J., “Study on Target Tracking Based on Vision and Radar Sensor Fusion,” SAE Technical Paper 2018-01-0613, 2018, https://doi.org/10.4271/2018-01-0613.
[3] SAE, “J3088: Active Safety System Sensors,” https://www.sae.org/standards/content/j3088_201711/.
[4] Iain Thomson, “Man killed in gruesome Tesla autopilot crash was saved by his car's software weeks earlier,” The Register, June 30, 2016. https://www.theregister.co.uk/2016/06/30/tesla_autopilot_crash_leaves_motorist_dead/.
[5] Insurance Institute for Highway Safety, “Evaluating Autonomy: IIHS examines driver assistance features in road, track tests,” Status Report, 53, No. 4. August 7, 2018. https://www.iihs.org/iihs/news/desktopnews/evaluating-autonomy-iihs-examines-driver-assistance-features-in-road-track-tests.