Part 1: Invited Industrial Speeches (1 hour)
Chairs: C. Laugier (Inria), Ph. Martinet (Inria), J. Villagra (CSIC)
- Tom Westendorp (Nvidia) – Accelerating the race to AI self-driving cars (10 min.)
For many years, car-makers, Tier 1 suppliers, mobility providers, universities, and other organizations have been working on developing driver assistance systems and self-driving cars. Conventional computer vision used for ADAS is reaching its limits, because it is impossible to write code for every possible scenario a vehicle may encounter. Developing a truly autonomous car requires deep learning and artificial intelligence. With deep learning, the vehicle can be trained to super-human levels of perception, driving more safely than anyone on the road. An end-to-end artificial intelligence platform, based on supercomputers in the cloud and in the vehicle, enables cars to get smarter and smarter. Coupled with an extensive software development kit offering vision and AI libraries and software modules, automakers, Tier 1s, and start-ups can build scalable systems ranging from ADAS to full autonomy.
- Ruigang Yang (Baidu China) – Open Source and Open Data for Autonomous Driving from Baidu (10 min.)
In this speech I will introduce Baidu’s Apollo project, the one and only open-source project for autonomous driving, supported by Baidu’s R&D team. I will discuss the Apollo architecture and the functions available, in particular ApolloScape, a sub-project of Apollo dedicated specifically to advanced research.
- Hadj Hamma Tadjine (Business Development Director, IAV GmbH) – Connected Autonomous Driving Vehicle from concept to homologation (10 min.)
There are considerable challenges to be overcome before fully autonomous vehicles move from concept to commercialization. The commercialization of highly autonomous vehicles is currently hindered by technical, regulatory, and economic issues. This talk will address and discuss new opportunities enabled by cooperation among agents, and novel solutions that could help tackle the key technical challenges of autonomous driving systems (positioning, perception, and control of autonomous vehicles), from the concept phase through to homologation.
- Olivier Lefebvre (EasyMile) – EasyMile’s Autonomous Navigation System, Challenges of an Operational + Safe + Continuously Improving solution (10 min.)
EasyMile is developing an autonomous navigation solution currently running on more than 60 operational shuttles worldwide, and also being integrated into other types of vehicles for transporting people or goods. In this talk we will present the key points of EasyMile’s navigation solution, and provide insights into EasyMile’s approach to developing a product that constantly remains operational and safe while continuously evolving to address new transportation use cases.
- Pier Paolo Porta (Ambarella) – High resolution low power platform for efficient computer vision on cars (10min.)
Stereovision has been a topic of discussion for a long time. The first implementations were based on analog cameras and very simple matching algorithms, to cope with the computing resources and sensor resolutions available at the time. As technology progressed, more complex systems were developed: resolution scaled up to VGA and matching algorithms started to rely on edges for a better representation of the 3D world, but the real step forward came with dense disparity map generation. This presentation gives an overview of the evolution of stereo techniques, leading to a description of the latest implementation of a dense, high-resolution disparity map. In particular, it will show the quantity and quality of information that can be extracted from a stereo system. The proposed system is based on 4K images and provides a dense disparity map at 30 frames per second. Moreover, since the hardware platform is a chip specifically designed for computer vision applications, power consumption is highly optimized.
- Gergely Debreczeni (AImotive) – Challenges and future trends in autonomous driving (10 min.)
The era of autonomous vehicles is quickly approaching, yet there are still quite a few challenges to solve before it truly arrives on a global scale. To what extent will AI be used in autonomous driving? How much simulation is needed for the validation and verification of an autonomous vehicle? The talk will give an overview of the current status, presenting AImotive’s approach and solutions along with a few selected examples related to the above questions.
Part 2: Panel session (30 min.)
Moderators: Prof Miguel Angel Sotelo (UAH & President IEEE ITSS) and Javier Ibañez-Guzman (Renault)
- Tom Westendorp (Nvidia)
- Ruigang Yang (Baidu)
- Hadj Hamma Tadjine (IAV GmbH)
- Olivier Lefebvre (EasyMile)
- Pier Paolo Porta (Ambarella)
- Gergely Debreczeni (AImotive)
Introductory speech Miguel Angel Sotelo
The world of autonomous vehicles has become a day-to-day reality for citizens since it began receiving massive media coverage. Vehicles are nowadays powerful robots equipped with AI-based software capable of performing amazing tasks. Nobody doubts the technical capability of the scientific community and the automotive industry to build self-driving cars that can surpass the capacities of human drivers, contribute to increased road safety, and optimize energy consumption and emissions. However, there are other aspects to take into account. We must be able to develop standards that define liability and thus pave the way for progress on legal and insurance-related matters. We, as scientists, must be able to contribute to putting all these pieces together in order to make automated driving a massively deployed reality.
Introductory speech Javier Ibañez-Guzman
From manufacturers of motor vehicles to mobility service providers? The transformation of vehicle manufacturers into mobility suppliers has multiple technological implications. Intelligent vehicles will become part of large connected mobility systems providing a large number of services, forming sensor networks, etc. Our scope of work will extend beyond mobile robots into the realm of complex systems. What are the technological implications?