New Autonomous Vehicle Software
The system, called Selenium, can take in data from visual cameras, laser scanners, or radar systems. It then uses a series of algorithms to establish where the vehicle is, what surrounds it, and how to move. Paul Newman, Professor of Information Engineering at the Department of Engineering Science and co-founder of Oxbotica, said: ‘It takes any vehicle and makes it into an autonomous vehicle. We plan for the software to be used to control not just autonomous cars, but warehouse robots, forklifts, and self-driving public transport vehicles’.
Oxbotica’s software gradually acquires data about the routes along which a vehicle is driven and learns how to react by analysing the way its human driver acts. Professor Ingmar Posner from the Department of Engineering Science and co-founder of Oxbotica, said: ‘When you buy your autonomous car and drive off the lot, it will know nothing. But at some point it will decide that it knows where it is, that its perception system has been trained by the way you’ve been driving, and it can then offer autonomy’.
Professor Posner added: ‘The software provides two primary functions: localise the vehicle in space, and perceive what’s happening around it. Based on those two feeds, a central planner can determine how the car should move. Both localisation and perception systems rely on sensors dotted about the vehicle, the choice of which depends on application: a warehouse forklift may just use cheap cameras, while a car could make use of all kinds of sensors.
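The two-feed design Professor Posner describes can be sketched in outline. This is a minimal illustration only; all class and function names here are invented, and Oxbotica's actual implementation is not public. The point is the data flow: a localisation estimate and a perception result feed one central planner.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Vehicle position and heading in the map frame."""
    x: float
    y: float
    heading: float

@dataclass
class Obstacle:
    """A detected object, with its distance from the vehicle."""
    kind: str          # e.g. "car", "pedestrian"
    distance_m: float

def localise(sensor_frame: dict) -> Pose:
    """Feed 1: estimate where the vehicle is (stubbed for illustration)."""
    return Pose(x=12.0, y=3.5, heading=0.0)

def perceive(sensor_frame: dict) -> list:
    """Feed 2: detect what surrounds the vehicle (stubbed for illustration)."""
    return [Obstacle(kind="pedestrian", distance_m=4.0)]

def plan(pose: Pose, obstacles: list) -> str:
    """Central planner: decide how to move based on the two feeds."""
    if any(o.distance_m < 5.0 for o in obstacles):
        return "stop"
    return "continue"

# One planning cycle: whatever sensors the platform carries feed both stages.
frame = {"camera": None, "laser": None}
command = plan(localise(frame), perceive(frame))
```

The sensor frame is deliberately an opaque dictionary here, reflecting the article's point that the choice of sensors varies by application while the planner's interface stays the same.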
‘Selenium can compare on-the-fly sensor readings with those stored away in prior maps from previous journeys in similar conditions. If you take it out in the snow and it’s not seen it before, it keeps the ideas of snowy-ness around for the next time. Then it can identify image features—such as details on buildings or placement of street furniture—to localise the vehicle in the wider world. Meanwhile laser data, due to its high resolution, can be used to more accurately localise the car, especially in low-visibility conditions when cameras can falter’.
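The experience-matching idea in the quote above can be illustrated with a toy sketch: compare the features visible now against feature sets stored from previous journeys, localise against the best match, and store unfamiliar conditions (such as a first snowfall) for next time. Everything here is invented for illustration, including the use of a simple Jaccard overlap score; the real system works on camera and laser data, not string labels.

```python
# Stored "experiences": feature sets recorded on earlier journeys,
# keyed by condition. All names are invented for illustration.
prior_maps = {
    "sunny": {"brick_facade", "postbox", "lamp_post"},
    "rainy": {"brick_facade", "lamp_post"},
}

def best_match(features, maps):
    """Return the stored experience that best overlaps the live view,
    scored by Jaccard similarity (intersection over union)."""
    best, score = None, 0.0
    for name, stored in maps.items():
        overlap = len(features & stored) / len(features | stored)
        if overlap > score:
            best, score = name, overlap
    return best, score

def localise_or_learn(features, condition):
    """Localise against a prior map if one matches well enough;
    otherwise remember this condition for future journeys."""
    match, score = best_match(features, prior_maps)
    if score > 0.5:
        return f"localised against '{match}' map"
    # Unfamiliar conditions are kept around for next time.
    prior_maps[condition] = features
    return f"stored new '{condition}' experience"
```

The 0.5 match threshold is arbitrary; the design point is the fallback, in which failing to localise against any prior experience triggers storing a new one rather than an error.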
The research initially focused on Selenium recognising cars and humans by providing it with a labelled training set from which it can learn; over time it will also learn from the driver. Professor Posner highlighted: ‘If a human’s driving and they pass straight through what the car thought was a human, the software can learn from that. The system uses similar prior knowledge and continual learning to work out, for instance, which parts of a surface it can safely drive on or how traffic signals are changing. The result is a vehicle that can gain a deep understanding of the routes it drives regularly. That means that the software isn’t simply trying to do a mediocre job wherever it’s placed—instead, it does an excellent job where it’s learned to drive’.
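The learning-from-the-driver example in Professor Posner's quote can be sketched as a correction loop: when the human driver's action contradicts a prediction (driving straight through something classified as a human), the contradicted example is relabelled and added to the training data. This is a hypothetical illustration; the feature strings, labels, and function below are all invented.

```python
# Seed examples from the initial labelled training set
# (feature stand-ins and labels are invented for illustration).
training_set = [
    ("tall_narrow_warm", "human"),
    ("wide_metal",       "car"),
]

def record_override(features, predicted, driver_drove_through):
    """Store a corrected label when the driver's action contradicts
    the prediction; otherwise keep the prediction as a new example."""
    if driver_drove_through and predicted == "human":
        # The driver passed straight through it, so it was not a person.
        training_set.append((features, "not_an_obstacle"))
    else:
        training_set.append((features, predicted))

# The car predicted "human", but the driver drove straight through:
# the example is stored with a corrected label for future learning.
record_override("tall_narrow_cold", predicted="human", driver_drove_through=True)
```

The same pattern of checking predictions against what actually happened would apply to the other cases mentioned, such as which surfaces turned out to be safely drivable.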
Oxbotica’s software will be tested in two real-world settings in the near future: in the self-driving public transport vehicles of the GATEway project in Greenwich, London, and the LUTZ Pathfinder driverless pods in Milton Keynes.