Limiting operational domains to speed AV adoption.
Vehicle and system developers have always pushed to get the most from every component and material. But design teams focused on autonomous vehicles are increasingly adopting a less-is-more strategy. Limiting operational domains for driverless vehicles may help get cars and vans on the road more quickly.
Limiting vehicles to fixed routes can dramatically reduce the complexity of driving without humans, making shuttles and commercial vehicles some of the areas that will be first to totally eliminate drivers. For robotaxis, geofences that limit taxis to specified urban areas serve a similar purpose.
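As a rough illustration of how such a geofence might be enforced in software, the sketch below tests whether the vehicle's current position falls inside a polygon describing its permitted operating area. The coordinates, polygon and fallback signal are illustrative assumptions, not any particular operator's implementation.

```python
# Minimal geofence sketch: test whether the vehicle's position lies inside
# the polygon that defines its permitted operating area. The coordinates,
# polygon and fallback signal are illustrative placeholders.
from typing import List, Tuple

Point = Tuple[float, float]  # (longitude, latitude)

def inside_polygon(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting point-in-polygon test."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical urban operating area (a simplified rectangle of lon/lat corners)
OPERATING_AREA: List[Point] = [(-83.10, 42.30), (-83.00, 42.30),
                               (-83.00, 42.38), (-83.10, 42.38)]

def check_odd(position: Point) -> str:
    """Return 'ok' inside the geofence, otherwise signal a safe fallback."""
    return "ok" if inside_polygon(position, OPERATING_AREA) else "fallback"

print(check_odd((-83.05, 42.34)))  # 'ok': inside the hypothetical area
```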
Setting limits is expected to become a popular strategy. “When you’re working on systems that have to respond and react to a number of infinitely complex scenarios, the most logical way to simplify the challenge is to limit the number of options the vehicle needs to respond to,” said Jeremy Carlson, principal automotive analyst at IHS Markit. “Narrowing the scope of what you address makes it possible to get to market sooner than trying to do everything everywhere.”
On today’s vehicles, safety systems require that humans remain attentive. Emerging SAE Level 2+ and Level 3 systems loosen that limitation but still keep the human driver in the loop. Over time, those systems will be augmented until the driver can be eliminated entirely.
Many proponents believe that completely removing humans can best be accomplished by proving the safety of autonomous vehicles running in defined areas. “The business today comes from Level 2 to Level 3 vehicles,” said Nand Kochhar, VP Automotive and Transportation Industry at Siemens Digital Industries Software. “We need to start merging targets for Level 4 and Level 5 together with the realities of today. The big question is, ‘how safe is safe?’ There’s a lack of standards for how systems are tested. When people define operational domains, they’re saying ‘let’s walk before we run.’”
Driving in geofenced areas has benefits beyond reducing variables. It may enable operators to deploy sensors that are built into the infrastructure. Sensors mounted near highway intersections and merging areas can help give commercial vehicles input that on-vehicle sensors can’t necessarily see. In urban areas, operators and smart-city planners can mount sensors on poles and buildings.
Additional data sent to vehicles can augment information collected by onboard sensors. “We’re working with intelligent-infrastructure planners, taking sensors usually mounted on vehicles and mounting them within the infrastructure of cities,” said Steffen Hartmann, head of technical projects, research, for Continental North America. “They can better see things like pedestrians emerging from between cars or people running towards an intersection. Then messages can be sent to vehicles so they can take action.”
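A minimal sketch of the idea Hartmann describes might look like the following: detections reported by roadside sensors are merged into the vehicle's own object list, with a simple distance check to avoid double-counting. The message fields and merge radius are assumptions for illustration, not a real V2X payload.

```python
# Illustrative sketch: merge detections reported by roadside infrastructure
# sensors with the vehicle's own object list. Field names and the simple
# distance-based de-duplication are assumptions, not a real V2X format.
from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass
class Detection:
    x: float          # metres, in a shared map frame
    y: float
    kind: str         # e.g. "pedestrian", "vehicle"
    source: str       # "onboard" or "infrastructure"

MERGE_RADIUS_M = 1.5  # treat detections closer than this as the same object

def merge(onboard: List[Detection], roadside: List[Detection]) -> List[Detection]:
    fused = list(onboard)
    for r in roadside:
        duplicate = any(hypot(r.x - o.x, r.y - o.y) < MERGE_RADIUS_M
                        and r.kind == o.kind for o in onboard)
        if not duplicate:
            fused.append(r)   # infrastructure saw something the vehicle did not
    return fused
```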
On-vehicle sensors have a critical role in another domain limiter: weather. Design teams are seeking ways to ensure weather doesn’t limit system performance. Developers are trying to create systems that match humans’ abilities to drive in fog, rain and snow. Mounting sensors where they’re protected, washing camera lenses, and using software that reduces the impact of dirt and moisture are all techniques being examined.
“With a sensor cleaning system, the trick is to minimize water usage. You don’t want to be carrying liters of water,” said Guillaume Devauchelle, VP of innovation at Valeo. “There are also algorithms that will detect a drop of water on the camera lens. When you have multiple cameras, if one is close to its operating limits, you won’t take that camera into account.”
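The logic Devauchelle outlines could be sketched roughly as below: each camera reports a health estimate and whether a droplet has been detected on its lens, and any camera near its operating limits is excluded from perception. The health score, threshold and camera names are invented for the example.

```python
# Sketch of the idea described above: if a camera reports it is close to its
# operating limits (e.g. a water drop detected on the lens), exclude its
# frames from fusion. The health score and threshold are invented for
# illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class CameraStatus:
    name: str
    health: float          # 1.0 = clean lens, 0.0 = fully obscured
    lens_droplet: bool     # output of a droplet-detection algorithm

MIN_HEALTH = 0.6           # hypothetical cut-off

def usable_cameras(cameras: List[CameraStatus]) -> List[str]:
    """Return the cameras whose input the perception stack should still trust."""
    return [c.name for c in cameras
            if c.health >= MIN_HEALTH and not c.lens_droplet]

# Example: the front camera has a droplet, so only the others are used.
status = [CameraStatus("front", 0.9, True),
          CameraStatus("left", 0.8, False),
          CameraStatus("right", 0.85, False)]
print(usable_cameras(status))   # ['left', 'right']
```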
No matter how good sensors and systems are, confusing situations will arise even in limited areas. Vehicles will typically shut down as safely as possible when systems can’t determine what action to take. Some companies are adding technology that lets remote operators step in before that occurs.
Vehicles could alert people in remote centers so these remote operators can take over and analyze the situation. “If a shuttle is programmed to pick up and drop off passengers only at designated bus stops and one of those is blocked, for example, by a delivery truck, the shuttle may stop in the road while waiting for the bus stop to become free,” said Torsten Gollewski, head of autonomous mobility systems for ZF. “In this case, the remote driver could maneuver the shuttle to an available curb space, to avoid blocking traffic and permit passenger loading and unloading.”
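A simplified sketch of that fallback flow is shown below: the vehicle requests remote assistance and only performs a minimal-risk stop if no operator responds within a timeout. The timeout value and the request interface are assumptions, not ZF's actual system.

```python
# Sketch of the fallback flow described above: the vehicle first asks a
# remote operator for help and only performs a minimal-risk stop if no one
# responds in time. The timeout and the request/response API are assumptions.
import time
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    REMOTE_GUIDANCE = auto()
    MINIMAL_RISK_STOP = auto()

REMOTE_TIMEOUT_S = 20.0   # hypothetical wait before stopping in place

def resolve_blocked_stop(request_remote_help, deadline=REMOTE_TIMEOUT_S) -> Action:
    """request_remote_help() returns True once an operator has taken over."""
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        if request_remote_help():
            return Action.REMOTE_GUIDANCE   # operator repositions the shuttle
        time.sleep(1.0)
    return Action.MINIMAL_RISK_STOP         # no response: stop as safely as possible
```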
After vehicles have proven they can operate safely in limited domains, operators will want to expand their capabilities. Once vehicles are in the field, over-the-air (OTA) software updates will provide a straightforward way to update the software that keeps passengers safe and secure.
Martin Schleicher, executive VP business management at Elektrobit, noted that these technologies encompass redesigned architectures for safety, security, infotainment and more. “Keeping the vehicle up-to-date requires OTA updates,” he said. “OTA is necessary for distributing everything from the latest infotainment system features to operating-system security patches and ECU updates or configuration changes.”
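In practice, an OTA client might work along the lines of the sketch below, comparing installed component versions against a downloaded update manifest and reporting what needs to be flashed. The manifest layout and component names are invented for illustration.

```python
# Illustrative OTA sketch: compare installed component versions with a
# downloaded update manifest and report what needs to be flashed. The
# manifest layout and component names are invented for this example.
from typing import Dict, List

installed: Dict[str, str] = {
    "infotainment": "4.2.0",
    "os-security":  "2023.10",
    "brake-ecu":    "1.7.3",
}

manifest: Dict[str, str] = {      # versions offered by the OTA backend
    "infotainment": "4.3.0",
    "os-security":  "2024.01",
    "brake-ecu":    "1.7.3",
}

def pending_updates(current: Dict[str, str], offered: Dict[str, str]) -> List[str]:
    """Components whose offered version differs from the installed one."""
    return [name for name, version in offered.items()
            if current.get(name) != version]

print(pending_updates(installed, manifest))  # ['infotainment', 'os-security']
```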
Going forward, these enhancements will increasingly be powered by artificial intelligence programs that may leverage input from the vehicles’ driving experiences. These software improvements often run on FPGAs, which provide a level of hardware customization that most semiconductors can’t offer.
“Convolutional neural networks (CNNs)/deep neural networks are going to be a huge enabler,” said Willard Tu, senior director, automotive business unit, at Xilinx. “An example is a CNN that started with 32 bit, moved to 8-bit integer and now is moving to 4-bit quantization. Xilinx devices supported each of those transitions with the same device, whereas other companies have to design a new device for each transition. Having powerful and efficient AI that can be upgraded on the same hardware is also very valuable to developers.”
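The progression Tu describes, from 32-bit floating-point weights to 8-bit and then 4-bit integers, can be illustrated generically with a simple symmetric quantizer. The sketch below is not Xilinx's toolchain, only the basic arithmetic behind the idea.

```python
# Generic illustration of the quantization steps mentioned above: the same
# float32 weights mapped to 8-bit and then 4-bit integers using a simple
# symmetric scale. This is not Xilinx's toolchain, just the basic idea.
import numpy as np

def quantize(weights: np.ndarray, bits: int):
    """Symmetric linear quantization of float weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

weights = np.random.randn(8).astype(np.float32)   # toy float32 "CNN" weights
q8, s8 = quantize(weights, 8)    # 8-bit integer representation
q4, s4 = quantize(weights, 4)    # 4-bit values (stored here in int8 containers)

# Reconstruction error grows as the bit width shrinks.
print(np.abs(weights - q8 * s8).max(), np.abs(weights - q4 * s4).max())
```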
SAE Autonomous Vehicle Engineering