Development of autonomous cars is steadily progressing with AI as an accelerator, says Sven Lanwer, XC expert from Bosch – ET Auto




Sven Lanwer, XC expert and head of Driver Experience, Bosch.

Edited excerpts:

Q: Driving, like cycling, is a skill that people enjoy doing. Then why should people think of driverless cars?

Yeah, a very interesting question. For me, it's about how I get from point A to point B. The answer could be driverless, or by driving myself, and those are two different things. I think this is a very individual matter. Some people really like driving themselves, feeling the gas pedal, the dynamics and everything.

I also like not driving all the time, going driverless or fully automated. Maybe there are times when you want to drive yourself, enjoy the driving, enjoy the dynamics, and then there are times when you say, I want to get from A to B without driving. It depends on the use case and on your personal preference. So it is very individual, and I would say it may also differ from region to region.

Q: There are a lot of use cases, say, from taxi or ride sharing by companies like Uber or Lyft, where there are problems of driver shortage. Maybe driverless cars would work well in other parts of the world. But do you think driverless cars make sense for the Indian market right now or going forward?

Overall, I think the development should follow a more incremental approach: start with driver assistance systems, then get more and more advanced, at a certain point move into automated driving, and at a certain point maybe even driverless. It's clearly our strategy to do this incrementally and not attempt the moonshot from 0 to 10 in one step, because we see that this is very difficult, this is very expensive, and sometimes you don't even get there.

So it's a big bet. Why not bring systems into the vehicle, earn money with that, and give a lot of support and a lot of safety to the drivers? Collect the data, use what you have collected to make the systems better and more performant, and then move into more automated driving. This is the philosophy we are taking.

Q: ADAS as a system is more about safety, because when you go from Level 1 to Level 2 or 3, it acts as a driver assistance system working towards zero accidents. And statistics clearly show that it's a great help. But what is your opinion on the development stages? In terms of coming to L1, L2 and L3 globally versus the Indian market, what is the gap? Where and how do you think things can shape up with AI?

What we have on the streets now is, in parking, Level 4, with the automated valet parking we developed together with Mercedes in the parking garage. And there are the first Level 3 systems on the market, here in Europe and also in the US, but of course on very developed streets and infrastructure. There is also the regulatory side, the homologation topic, where Europe went first.

They were kind of bold to say, yes, we want to do that, and that's why we have a homologation here. I think regulation has to pick that up globally, because there are not many regions where we have a release and where rules exist on how to release a system and what kinds of tests have to be done. This is something which is very regional, different from country to country.

And that's why I think extremely complex infrastructure, or, for example, the traffic in Bangalore, is extremely difficult for the system to cope with if you have trained it on an autobahn in Germany. So this of course takes time and needs a lot of data. You first have to deploy a driver assistance system in the market to collect data, learn from that data, train the algorithms on the behaviour, and then deploy it. That will probably take some time in India too, where the first systems with radar sensors and multipurpose camera systems are coming to the market right now. It will take some time to develop and deploy that across many vehicles.

Q: But do you think AI can accelerate this adoption when it comes to bringing the use cases? Earlier you were discussing the ball and the child scenario. Yes, there is a ball. There is a 90% chance that there will be a child following it.

And that's only one use case. Think of the thousands of use cases in that area. If you programmed all of that manually, it would take you ages, because in an urban scenario there are thousands of different things which could happen, things you have never seen before.

Q: We have been talking about driverless cars for six or seven years. AI, in its current wave, is only two or three years old. Now we're talking about AI systems in ADAS. What is the possible development in autonomous driving with AI?

Well, deep learning started in 2014 or 2015. With the first deep learning, we brought a system into the market in 2020 on the multipurpose camera 3. With that we were able to detect objects, roadsides and the like. That was a clear performance advantage over the standard classical computer vision algorithms.

So we adapted to that very fast, and we see an advantage in development speed and also in performance. Generative AI is the next thing for dealing with such complex scenarios, scenarios which you have probably not thought of in your normal development process. That's why I think generative AI can be an advantage and also an accelerator: it gives you a certain understanding of the situation, and you can work with the output of that understanding much better.

It's much smaller. It is not the complexity of the whole world; you can trim it down to a level where you, as a programmer, can work with it to do the next steps. That's why I think it's an accelerator for this kind of complex topic.

Q: Sometimes AI creates so many use cases or so many things that you don’t know whether that is the real scenario or an artificial one. How do you tackle that situation?

Well, we are using generative AI today to augment images for training. You have a picture collected from the fleet, and with generative AI we put snow on top of it. Then we take the picture of the normal scene, as we recorded it, and the snowy scene, the same scene but with snow augmented, and we train our model on both. For this we are using generative AI, and that is an advantage: we don't have to collect more data. We are generalising, or rather adapting, augmenting, training, and then using the trained model for our deployment.
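The augmentation workflow described here can be sketched in a few lines. This is a minimal illustration, not Bosch's pipeline: where the interview describes a generative model painting realistic snow onto fleet images, the sketch below uses a simple procedural snow overlay as a stand-in. The key point it demonstrates is the same, though: the scene content is unchanged, so the original labels remain valid and each recorded frame yields two training samples.

```python
import numpy as np

def add_snow(image: np.ndarray, density: float = 0.01, seed: int = 0) -> np.ndarray:
    """Overlay synthetic snow specks on an H x W x 3 uint8 image.

    A procedural stand-in for the generative-AI weather augmentation
    described in the interview: only the weather changes, the scene
    (and therefore its labels) stays the same.
    """
    rng = np.random.default_rng(seed)
    out = image.copy()
    mask = rng.random(image.shape[:2]) < density  # pixels that become snowflakes
    out[mask] = 255                               # paint them white
    return out

def augment_dataset(frames, labels):
    """Pair each recorded frame with its snowy twin, reusing the label."""
    aug_frames, aug_labels = [], []
    for frame, label in zip(frames, labels):
        aug_frames.extend([frame, add_snow(frame)])
        aug_labels.extend([label, label])
    return aug_frames, aug_labels
```

A model trained on the doubled set sees every scene in both clear and snowy conditions without a single extra hour of fleet driving, which is the cost saving the interview points to.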

Q: How fast can we go with generative AI in this system? Are you actually working on this technology?

We have started to work with that. It’s fairly new, and I think even now we cannot judge its full potential.

Q: But do you think that this is the only technology that can help the exploration of ADAS or can we go towards driverless cars much faster than we have initially thought of?

Is this the only thing? I don't know, to be honest. But it is a technology which allows us to generalise faster. How fast that can get, and exactly how many years it will save in the end compared to a classical project, is very difficult to say because we are only at the start. We truly believe it is a game-changer: it can accelerate development and also bring down the complexity that every programmer has to think of. With that, maybe we are starting at a higher point without doing endless iterations of further training and further adapting of algorithms.

So it is getting us a little bit faster to a point which would have taken longer before. How much this is in years, I cannot say today.

Q: We don't have a delta right now? Like, two or three years could have been saved, or something like that.

It depends on the use cases also.

Q: Now, coming back to the affordability factor of this technology. You have said that it’s not very cheap to implement ADAS, especially in countries like India or developing countries like Brazil or South Africa. They cannot afford this kind of technology unless it is in an expensive vehicle beyond INR 25 lakh. How do you see this particular technology percolating down to the affordable segment?

Yeah, the first sensor going into the vehicle will be a camera, because it can give you a lot of advantages. This is only one camera, not 25 different sensors that need a supercomputer in the trunk. It is one camera behind the windscreen, the computer is already in there, and it's something which is affordable for every market.

We have seen that in China, where they were developing very fast. We saw clearly that the camera gives a lot of benefits. From there you add radar sensors to collect more data, and then you can have Level 2 use cases, for example highway or autobahn kinds of use cases.

I see this coming to India now. In the domestic market, demand is coming up for systems, video first, then ultrasonic and radar systems, but not with a supercomputer. In China right now we see eleven cameras, and I think India will also develop towards that over time.

Q: There are only very few models right now which have ADAS technology. The Koreans have it in their sedans and SUVs. We have Mahindra, and Tata has also come up with some kind of solution. But I want to understand how fast India will accelerate in this direction?

The market is picking up some speed in driver assistance, as we see. And we have a local footprint in India: a large team for driver assistance, more than 2,500 people. Of course, they support globally, but we also have a team in India that develops for the local market.

Q: Is India your main hub for ADAS, R&D?

No, we are distributed. In Germany we have a large footprint, also in India and Eastern Europe. We have teams for the local markets in China, Japan, and the US, and a small team in Brazil as well. So we are very distributed. But in India we have a great software team.

What we have done in India is train our multipurpose camera on animals with deep learning. We collected a lot of animal data in India with our camera, and we trained our system, for India and globally, to detect animals and react to them. We did that in India with the Indian team.

Q: Coming back to the drawing board, when you start developing features with these kinds of technologies and building use cases, can you take me through the journey of a whole feature from L1 to L2 to L3? When do you start developing it, and when do you start thinking about it? How do you decide how many sensors and cameras to put in? And how software-defined are vehicles now versus where they were initially, and where will they be in the next five years or so?

Okay, so we start from the system point of view. What kind of functionality do we want to give to the driver? That's the first thing. We think about all the situations and write requirements to support the driver on certain things, always thinking from the system: what does the end customer see? From that we derive a system architecture.

The kind of sensors we need to cope with a use case is decided by our system architecture team. Once they have done that, they write the requirements for the different subsystems, like the video camera, the radar sensor and the compute that needs to be done. Then the video system team picks that up and says, "Okay, I need to detect XYZ. How do I get to this data? Am I using AI for that, or am I using classical algorithms?"

So we derive those requirements from the system down to the various subsystems and to the software. Once this is done and the software is written, we test it following the V-model: first on the software level, then on the integration level where the software comes together into a system, then on the system level, and finally in the car. Then we release it to our customers.

Q: How many use cases have you identified so far for ADAS? Do they run into millions?

Yes, it is a large number: hundreds of thousands of hours of driving. I don't know the exact figure, but it is more than 100,000.

Q: Just now you told me about the animals that you have detected in India. How many objects have you identified in your system right now?

I don't have the number; I can only guess. But if we have more than 100,000 hours of driving globally, you can imagine how many objects that will be in the end. When you drive for one hour, you see, I don't know, some thousands of objects. So it is a huge number.

Q: Anything you would like to add? I want to understand more about generative AI and driverless cars going forward. How will generative AI accelerate the adoption of driverless cars?

When I have an answer, I shall come back to you. Right now I don't have the answer yet, because we are just starting with that. But we really believe it is an accelerator for our developers in understanding the scene and deriving the strategy out of that scene. It will be a better system in the end, and a safe assistant.

  • Published On Mar 13, 2024 at 02:55 PM IST
