Audi details piloted driving technology

- Audi plans to shrink the bulky electronics in its piloted cars down to a single board.
Before autonomous vehicles make drivers obsolete, electronic technologies will depend on people to make decisions when something unusual happens. During normal driving conditions, autonomous controls could pilot the vehicle, relying on humans when complex decisions are required.
Audi recently provided technical insight into its piloted vehicle project, in which an Audi A7 concept car drove from San Francisco to Las Vegas earlier this year. The vehicle drove itself most of the journey, though drivers had to remain alert to take over when alerts directed them to resume driving.
The concept car has a range of computers in the trunk. Audi engineers plan to reduce them to a single board over time. The mainstays of the piloted vehicle technologies are an array of cameras, radar, and ultrasonic sensors that are controlled by what’s called the zFAS board. It combines sensor inputs to give the car its view of the world.
“All raw signals from the sensors are collected in a sensor fusion box,” said Matthias Rudolph, Head of Architecture Driver Assistance Systems at Audi AG, during the recent Nvidia GPU Technology Conference. “From that input, a virtual environment is created.”
Four semiconductor devices form the basis of the zFAS board. An Nvidia Tegra K1 processor collects data from four cameras and “does everything while driving at low speeds,” Rudolph said. An Infineon Aurix processor handles additional chores, Mobileye’s EyeQ3 performs vision processing, and an Altera Cyclone FPGA (field-programmable gate array) performs sensor fusion.
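As a quick reference, the division of labor across those four chips can be jotted down in a few lines. The sketch below simply restates the article's description in Python; the chip names come from the text, and the role strings are paraphrases rather than anything from Audi's software.

```python
# Illustrative restatement of the zFAS work split described above; not Audi code.
ZFAS_PARTITION = {
    "Nvidia Tegra K1": "collects data from four cameras; low-speed piloted driving",
    "Infineon Aurix": "additional safety and housekeeping chores",
    "Mobileye EyeQ3": "vision processing",
    "Altera Cyclone FPGA": "sensor fusion",
}

for chip, role in ZFAS_PARTITION.items():
    print(f"{chip}: {role}")
```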
The software architecture is layered, with the perception sensor programs forming the first layer. Above that, there’s a fusion layer that blends data from the sensors with information from maps, road graphs, and other sources. Rudolph noted that combining inputs provides better information and increases confidence in the analysis.
“Radar is not good at determining the width of a car,” Rudolph said. “A camera does that well. If we fuse data from each of them we get good information on what’s ahead.”
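Rudolph's example boils down to taking each sensor's strength and merging the two measurements into one object estimate. Below is a minimal sketch of that idea; the types and the fuse function are hypothetical illustrations for this article, not Audi's API.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float            # radar measures distance well...
    closing_speed_mps: float  # ...and closing speed

@dataclass
class CameraDetection:
    width_m: float  # the camera estimates lateral extent well
    label: str      # e.g. "car", "truck"

@dataclass
class FusedObject:
    range_m: float
    closing_speed_mps: float
    width_m: float
    label: str

def fuse(radar: RadarTrack, cam: CameraDetection) -> FusedObject:
    """Combine complementary measurements: range and speed from radar,
    width and object class from the camera."""
    return FusedObject(radar.range_m, radar.closing_speed_mps,
                       cam.width_m, cam.label)

# Example: a fused picture of what's ahead.
print(fuse(RadarTrack(42.0, 3.1), CameraDetection(1.8, "car")))
```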
Ensuring that the zFAS boards detect potential threats and respond to them correctly without false alerts is critical. If vehicles stop or swerve to avoid something that isn’t a true danger, drivers are likely to stop using the system.
“If the car brakes and nothing’s there, it will destroy the confidence of the driver,” Rudolph said. “We have had no false positives; that’s been proven with over 10,000 hours of driving at an average speed of 60 kph (37 mph) in situations including snow and freezing rain.” At that average speed, the testing works out to roughly 600,000 km of driving without a single false alarm.
Audi looks at moving objects to analyze their potential impact given the vehicle’s driving path and speed. All stationary items are viewed with a single goal.
“We look at static images as the same,” Rudolph said. “It doesn’t matter if it’s a wall or a parked car, we don’t want to hit it.”
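Treating every static object identically reduces the check to a single geometric question: does the planned path clear the obstacle? Here is a minimal sketch of that test, assuming 2-D points in meters and an invented clearance margin:

```python
import math

def static_obstacle_in_path(obstacles, planned_path, clearance_m=0.5):
    """Wall or parked car makes no difference: any stationary object
    closer than clearance_m to any point on the planned path counts as
    a potential collision. clearance_m is an assumed margin, not an
    Audi parameter."""
    for ox, oy in obstacles:
        for px, py in planned_path:
            if math.hypot(ox - px, oy - py) < clearance_m:
                return True  # path would strike a static object: brake or replan
    return False
```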
Pedestrians are a major challenge for all types of autonomous systems. They’re harder to spot and categorize than vehicles, and they have more degrees of freedom. The system uses a single monocular camera to search for pedestrians. Given the erratic behavior of some walkers, Audi doesn’t stop for pedestrians unless they’re truly in harm’s way.
“When we detect pedestrians, we compute the time to contact,” Rudolph said. “We’re close when the vehicle stops. We want to be close, just a few centimeters away. We do not want to stop far away.”
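The time to contact Rudolph mentions is simply remaining distance divided by closing speed, and stopping "just a few centimeters away" follows from braking no earlier than the physical stopping distance v²/(2a) plus a small margin. The sketch below illustrates that logic; the deceleration and margin values are assumptions for illustration, not Audi parameters.

```python
def time_to_contact(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact at the current closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the pedestrian: no contact expected
    return distance_m / closing_speed_mps

def should_brake(distance_m: float, speed_mps: float,
                 decel_mps2: float = 6.0, margin_m: float = 0.05) -> bool:
    """Brake as late as possible: only when the stopping distance
    (v^2 / 2a) plus a few centimeters of margin would consume the
    remaining gap. decel_mps2 and margin_m are assumed values."""
    stopping_distance_m = speed_mps ** 2 / (2.0 * decel_mps2)
    return distance_m <= stopping_distance_m + margin_m
```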
Though the piloted system aims to avoid pedestrians and almost everything else, Audi realizes that collisions can’t always be prevented.
“If we can’t avoid an accident, we steer to use the structure of the car to minimize the chance of injury,” Rudolph said.
Such an action would occur mainly when the human driver didn’t take over in time to avoid a collision. Audi uses an LED alert system to tell drivers when they need to take charge; they can do so by hitting the brakes or making a sharp steering-wheel movement. An inward-facing camera watches the driver so the system knows whether the LED alert needs to be augmented with an audible warning.
“In the piloted driving mode, we may need to get the driver back, so we need to know what he’s doing,” Rudolph said.
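The escalation described here, an LED first and sound only when the driver-facing camera reports inattention, can be sketched as a small decision rule. The enum and function below are inferred from the article, not taken from Audi code.

```python
from enum import Enum, auto

class Alert(Enum):
    NONE = auto()
    LED = auto()      # visual takeover request
    AUDIBLE = auto()  # added when the driver isn't paying attention

def takeover_alert(takeover_needed: bool, driver_attentive: bool) -> Alert:
    """Hypothetical escalation rule: show the LED when the driver must
    take over; add an audible warning if the interior camera suggests
    the driver is not attentive."""
    if not takeover_needed:
        return Alert.NONE
    return Alert.LED if driver_attentive else Alert.AUDIBLE
```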
- Author: Terry Costlow
- Industry: Automotive
- Topic: Electrical, Electronics & Avionics