Interview with Sterling Anderson (Aurora)
As part of the FEV Blog series, this piece provides accessible information about the key trends and factors affecting the push towards sustainable mobility and energy solutions.
In addition to actively supporting many of the companies bringing this technology to life, we’re keeping a finger on the pulse through a regular exchange with thought leaders in this space and we’re bringing our readers the latest updates and developments.
As a leading developer of cutting-edge technologies in the vehicle, propulsion, and software/EE categories, we are speaking in this article with Sterling Anderson, co-founder and Chief Product Officer of Aurora. A longtime developer of autonomous vehicle technology, Sterling developed the MIT Intelligent Co-Pilot, a shared autonomy framework that paved the way for broad advances in cooperative control of human-machine systems. In 2014, he joined Tesla, where he led the design, development, and launch of the Tesla Model X and then led the team that delivered Tesla Autopilot. Sterling holds several patents and over a dozen publications in autonomous vehicle systems. He earned his Master's and Ph.D. from MIT.
In our conversation, Sterling shares key details about how Aurora is revolutionizing transportation – making it safer, increasingly accessible, and more reliable and efficient than ever before. Their platform brings together software, hardware, and data services to autonomously operate passenger vehicles, light commercial vehicles, and heavy-duty trucks.
Here’s our conversation with Sterling Anderson.
We think Aurora is an exciting company; Sterling, could you start by introducing what you have been working on?
Thank you very much. At Aurora, we are developing a self-driving system known as the Aurora Driver, along with two product suites – Aurora Horizon for the trucking and logistics market and Aurora Connect for ride-hailing. Our current partners include FedEx, Toyota, PACCAR, Volvo Trucks, and Uber. Our role in these partnerships is to create the autonomous Driver and a set of services that enables the Driver to operate within specific networks. This includes partnering with OEMs to produce vehicles uniquely built for the Aurora Driver and with networks to introduce self-driving vehicles into their respective freight and passenger mobility services.
Aurora has chosen to partner with OEMs to implement self-driving solutions rather than becoming an OEM or providing services direct to end consumers. Why did you choose the partnership model?
Becoming an OEM is an extraordinarily capital-intensive endeavor, and existing OEMs are very good at what they do. End users trust them, whether they are logistics companies buying trucks or passengers comfortable riding in Toyota vehicles. Pairing the investments they've made in vehicle development and manufacturing with the investments we've made in building an exceptional self-driving product enables much more rapid delivery when we work together.
On the network front, there is value in feathering our autonomous vehicle technology into existing services. Take ride-hailing as an example: self-driving systems won't be capable of offering rides in all conditions immediately. There are a variety of roadways and environmental conditions that the Driver might not be able to handle in the beginning. Feathering our technology in allows assets and vehicles to be deployed gradually while still serving user needs. If a user requests a ride from Uber or Lyft and conditions allow it, a self-driving Aurora vehicle will be there to pick them up. If we provided our own service and were only able to send a self-driving car in some circumstances but not others, the user would be out of luck and would learn to distrust the service. Aurora also has unique access to Uber's data, which allows us to understand the trips that get unlocked with each capability we develop. We can profile across markets of interest to understand what driving capabilities each trip requires.
As you define the requirements for self-driving systems, the set of environments and speeds you intend to operate at establishes the capabilities on the hardware front. If you compile those requirements for low-speed urban driving, you'll end up with a much smaller operating area and lower sensing ranges. If you only do it for trucking, your area will expand, but you won't care as much about some of the near-field details important in dense urban centers. Starting with the expectation that we would serve all these markets means our fundamental hardware and software investments can handle both. We made a foundational investment in purchasing a company called Blackmore, the pioneer in frequency modulated continuous wave (FMCW) lidar, which allows us to see much farther than conventional lidar can. We also invested heavily in our Virtual Testing Suite, which allows us to efficiently explore the complexities of the world in a way that enables rapid validation of our self-driving systems. This early foundational investment leads to a hardware and software architecture that will scale far better across application domains than one designed for a specific application could.
Effective data collection and management is important to enable reliable verification and validation of automated driving systems. Since FEV is very active in this field as well, I would be curious to understand your approach. How is Aurora collecting data during on-road testing?
We strategically use some of our on-road operations as an exploration of edge and corner cases with expert human drivers. Here, the autonomous hardware and software systems are engaged, logging our vehicle's maneuvers and the behaviors of other actors responding to them. This process allows us to collect a baseline set of on-road experiences and permute them. These experiences fed substantial internal investment in an end-to-end virtual development engine that provides a faithful representation not just of camera data, but of our entire perception suite – camera, lidar, and radar – as well as a near-bitwise replication of how our software executes on our vehicles. Now we have a massive set of data against which we can test virtually and make changes accordingly, allowing for rapid turnaround and validation.
Of course, simulation is critical too. Based on FEV’s experience, we’ve seen that the ability to develop a reliable simulation model and correlate that simulation to the real world can be incredibly powerful, but it isn’t a trivial task. How did you develop your virtual development suite? And how are you going about gaining a level of confidence that it is representative of real-world information?
We looked at several different possible sources and determined we could develop the most accurate system internally. We created a simulation engine that enables us to uniquely tailor the fidelity of the perception simulation and the execution that happens on the road. The sensors' input is ingested simultaneously to ensure synchronization and accuracy.
In terms of confidence, the short answer is calibration. We calibrate heavily against real-world logs. We take a log from the real world, convert it into a simulation, and re-run it with the autonomy system effectively operating the vehicle, just as it did on the road. Then we can juxtapose the behavior of the self-driving system in simulation against its real-world behavior and examine, at varying levels of detail, how it executed and where it diverged.
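To make the comparison step Sterling describes more concrete, here is a minimal sketch of log-replay calibration: measure how far the replayed trajectory diverges from the trajectory the vehicle actually drove. The function name and log format are purely illustrative, not Aurora's API.

```python
# Minimal sketch of log-replay calibration: compare the trajectory the
# vehicle actually drove (from a real-world log) against the trajectory
# the same software produces when the log is re-run in simulation.
# All names and data structures are illustrative, not Aurora's.
import math

def trajectory_divergence(real_log, sim_log):
    """Pointwise position error between a real-world drive and its replay.

    Each log is a list of (t, x, y) samples taken at matching timestamps.
    Returns (mean_error_m, max_error_m).
    """
    errors = [
        math.hypot(xr - xs, yr - ys)
        for (_, xr, yr), (_, xs, ys) in zip(real_log, sim_log)
    ]
    return sum(errors) / len(errors), max(errors)

# Example: a replay that tracks the real drive to within ~10 cm
real = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.10), (2.0, 20.0, 0.40)]
sim  = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.12), (2.0, 19.9, 0.38)]
mean_e, max_e = trajectory_divergence(real, sim)
```

In practice such metrics would be computed over perception outputs and software execution traces as well as poses, but the principle is the same: replay, juxtapose, and quantify where the simulation diverges from reality.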
Aurora has chosen to pursue multiple self-driving applications in parallel rather than focusing on one. Doesn’t this approach lead to over-engineering or slowing down your time to the market?
As with all early investments, there is a trade-off: we could be excitedly telling the markets today that we're operating at high levels of performance in a small slice of the application domain. I am sure that would excite many people who don't appreciate how hard it is to get out of that local maximum into something more significant. It's one of the biggest misconceptions in the industry, and part of why autonomous vehicle development companies have proliferated over the last several years. If you want to have the impact that Aurora was created to have, you can't start with a system you have to tear up, throw out, and restart. Our focus has been on making foundational investments.
You had great experience with Tesla and MIT when you started Aurora, so I imagine you’ve understood the true challenge of self-driving, but the industry has gone through a hype cycle since Aurora was founded in 2017. For some time, everyone seemingly thought that autonomous driving was an easy problem with a solution right around the corner. However, we’re now seeing that our customers are almost universally admitting this is a long-term project that is more complicated than originally expected.
As the hype has come and gone, it seems like you’ve been able to ignore the noise and maintain a long-term vision and focus. How did you do this, and how did you get investors onboard for a long-term vision like that?
I’ve been very appreciative of the partners and investors we’ve had through the years. They understand the enormity of the problem and the substantial value of solving it. They have a lot to lose - they are big players in their industry - and we recognize these folks are very thoughtful about who they partner with. These companies realize that this problem is so important to the future of their business that they are happy to partner with a company with the experience to deploy it safely and at scale.
We are building a company for the next 100 years, not the next 5. We recognize the value of going slow and investing early. Today, we can bring up a new truck platform in 12 weeks, which would be impossible without the early investments we made. When we launched our Dallas to Houston freight route for FedEx, we drove the 200+ mile trip without disengagements by our fourth freight haul. These foundational investments in the self-driving platform and the infrastructure to support it enable the kind of scaling that gets you from hundreds of miles safely driven in autonomy to millions. We’ve been very open about our safety case. We are the only company to release a safety case framework for freight and passenger mobility, and we’re communicating transparently to bring the public and our partners along on the journey. You will see the evidence – empirical, statistical, virtual, and otherwise – that these claims have been satisfied and the system is safe.
Safety is obviously a critical aspect of self-driving technology deployment. We’ve seen a ton of interest lately from customers looking for support in developing and executing a comprehensive Functional Safety Plan, which is the imperative basis for introducing a safe self-driving solution.
My understanding is that you have set a clear bar, stating that you must prove that your system is safer than a human driver in a given application before deployment, which is a very challenging bar to clear. Is that still the gold standard for you?
We won't deploy anything that imposes an unreasonable risk to road safety. My barometer is something my wife told me years ago when she took one of my daughters to school and saw another company testing a self-driving vehicle. The safety driver had to take over and slam on the brakes as my wife and daughter crossed the crosswalk in front of it. That night, my wife pulled me aside and said: don't you ever make anyone else's spouse or children the unwitting participants in the empirical validation of something you haven't developed sufficiently before you deploy it. So before we ever drove our first unprotected left turn in the real world, we tested it over two and a quarter million times in virtual testing. We've got a massive trove of experience and virtual development that backs up and supports the way the system behaves.
We’ve seen that there are many different potential end-uses for self-driving technology. How do you see this rolling out in different markets?
We're starting in trucking, but we're developing a common Aurora Driver and a common set of platform requirements. Early on, we developed the system architecture so that we could segment both the architectural requirements and the safety burden between what the platform is responsible for and what the Driver is responsible for. That allowed substantial focus and centralization of what happens in the Driver: everything from perception and forecasting to motion planning and controls happens in its core. The work that happens on the platform follows a rigorously codified set of requirements that we prepare with each of these OEM partners.
So, we're starting in trucking, and the common Driver allows that experience to accrue for the benefit of all products. If you think about the environment, trucks and cars operate on the same roads. They see the same types of actors. They encounter the same pedestrians, cars, and bikes, so their perception systems are substantially common. But a truck, with its longer braking distances, requires longer perception range, so here it becomes even more important to have long-range perception with our proprietary FirstLight lidar, as well as camera and radar systems.
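A back-of-envelope calculation shows why braking distance drives perception range. Stopping distance grows with the square of speed divided by achievable deceleration, and a loaded truck decelerates far more gently than a passenger car. The deceleration and latency figures below are illustrative round numbers, not Aurora specifications.

```python
# Why a truck needs longer perception range: stopping distance is
# reaction distance plus v^2 / (2a), and a loaded truck's achievable
# deceleration is much lower than a car's. Numbers are illustrative.

def stopping_distance_m(speed_mps, decel_mps2, latency_s=0.5):
    """Distance covered during system latency plus braking to a stop."""
    return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

highway_speed = 29.0  # roughly 65 mph, in m/s
car_dist = stopping_distance_m(highway_speed, decel_mps2=7.0)
truck_dist = stopping_distance_m(highway_speed, decel_mps2=3.0)
# With these assumptions the truck needs to perceive obstacles
# roughly twice as far ahead as the car.
```

This quadratic relationship is also why long-range FMCW lidar matters more for highway trucking than for low-speed urban driving.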
We'll deploy our Aurora Horizon autonomous trucking product, in which carriers and private fleets will purchase vehicles from the manufacturer and then subscribe to Horizon from us. This includes provisioning the Driver and the backend – the hardware, software, data services, software updates, and security patches. We also provide users with Aurora Beacon, a suite of health-monitoring, fleet-dispatch, routing, and other tools that enable these carriers to maximize the utilization and uptime of their fleets, and Aurora Shield, a suite of tools for roadside assistance, maintenance, certification, and support. Ultimately, Aurora's business in trucking is a very asset-light one: we aren't involved in purchasing the hardware that goes from the OEM to the carrier; we provide a subscription service and support for carriers to use the Driver in their business.
I imagine there are significant similarities, but automating a truck has some obvious differences compared to a passenger vehicle. For example, in my mind, a truck moving through a city is very different from a passenger car, since there are different sensor locations, more weight, and longer braking distances. How are you thinking about pursuing these different applications in parallel?
The angular difference in sensor perspective is so small that it is mostly irrelevant at longer ranges, but it does matter at shorter ranges: a truck looks down at the roofs of cars, while a car looks at the backs of cars. There is work to be done at the edges, with additional training for these models, but much of the complexity in self-driving development actually lies in forecasting the future actions of other actors.
There are different dynamic characteristics between passenger vehicles and trucks. Trucks bend in the middle and sweep outward as they turn corners, and we are developing our motion planning system to account for that. If the vehicle model accurately accounts for its dynamics, the motion planner can be designed around a more common model, with slight truck- and car-specific variations applied in the control determination. There is also some additional training, largely for perspective: sensors mounted on top of a truck have a slightly different perspective at close range. It isn't really a field-of-view difference, as our trucks and cars have the same number of sensors arranged in the same areas.
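The "sweep" Sterling mentions is the classic off-tracking effect: in a steady turn, a trailer axle follows a tighter radius than the tractor, so the trailer cuts inside the tractor's path. A simple geometric sketch of the steady-state case is below; the wheelbase and hitch-length values are illustrative, not taken from any real vehicle.

```python
# Steady-state off-tracking of a tractor-trailer: the trailer axle,
# towed at distance L behind a point circling at radius R, settles onto
# a circle of radius sqrt(R^2 - L^2), so it runs R - sqrt(R^2 - L^2)
# inside the tractor's path. Parameters are illustrative.
import math

def offtracking_m(steer_rad, wheelbase=4.0, hitch_len=10.0):
    """How far inside the tractor's path the trailer axle runs
    during a constant-radius turn, in meters."""
    R = wheelbase / math.tan(steer_rad)  # tractor rear-axle turn radius
    return R - math.sqrt(R * R - hitch_len * hitch_len)

tight_turn = offtracking_m(0.20)   # sharp steering: meters of sweep
gentle_turn = offtracking_m(0.05)  # gentle highway curve: much less
```

A motion planner for an articulated vehicle has to budget this extra swept width when choosing a path through an intersection, which is one concrete way truck planning differs from passenger-car planning.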
Is there one point that you want someone to take away from this interview?
This is one of the most impactful things that I – and, I think, many of us – can deliver in our lifetimes. I got into it years ago when a close family member was struck by a car and broke his neck. We can save lives, massively improve the efficiency of supply chains, and improve access to transportation for people who do not currently have it. There are few things I can imagine that will have a greater impact on the world.
As FEV is very active in technologies for carbon-neutral transportation, we know very well that automated driving systems have the potential to make a huge impact in this space. We actively support customers throughout the ADAS and AD system spaces as they develop and implement their technology, including the entire range of development processes such as system requirement definition, hardware and software development, simulation, integration, verification, validation, and system-level support activities such as systems engineering, functional safety, and cyber security. For more information, please contact firstname.lastname@example.org.