Imaginary numbers help AIs solve the very real problem of adversarial imagery

Duke University boffins figure out a way to boost the security of recognition networks

Boffins from Duke University say they have figured out a way to help protect artificial intelligences from adversarial image-modification attacks: by throwing a few imaginary numbers their way.

Computer vision systems which recognise objects are at the heart of a whole swathe of shiny new technologies, from automated shops to robotaxis. The more broadly they are deployed, the more interesting they become to ne'er-do-wells - and attacks like AMpLe Poltergeist show how they can be fooled, with potentially deadly results.

"We're already seeing machine learning algorithms being put to use in the real world that are making real decisions in areas like vehicle autonomy and facial recognition," said Eric Yeats, a doctoral student at Duke University, following the presentation of his team's work at the 38th International Conference on Machine Learning. "We need to think of ways to ensure that these algorithms are reliable to make sure they can't cause any problems or hurt anyone."

The problem with reliability: adversarial attacks which modify the input imagery in a way imperceptible to the human eye. In an example from a 2015 paper, a clearly recognisable image of a panda, correctly labelled by the object-recognition algorithm with 57.7 per cent confidence, was modified with carefully chosen noise - making the still-very-clearly-a-panda appear to the algorithm as a gibbon, with a worrying 99.3 per cent confidence.
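That example comes from the fast gradient sign method introduced in the 2015 paper: the attacker nudges every pixel a tiny amount in whichever direction most increases the classifier's loss. A minimal PyTorch-style sketch of the idea (the model, image, and epsilon names here are illustrative, not taken from the Duke work) looks roughly like this:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Fast-gradient-sign-method sketch: shift each pixel by +/- epsilon
    in the direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # For small epsilon the change is imperceptible to a human,
    # yet it can flip the predicted label entirely.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```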

Guidance counselling

The problem lies in how the algorithms are trained, and it's a modification to the training process that could fix it - by introducing a few imaginary numbers into the mix.

The team's work centres on gradient regularisation, a training technique designed to reduce the "steepness" of the learning terrain - like rolling a boulder along a path to reach the bottom, instead of throwing it over the cliff and hoping for the best. "Gradient regularisation throws out any solution that passes a large gradient back through the neural network," Yeats explained.

"This reduces the number of solutions that it could arrive at, which also tends to decrease how well the algorithm actually arrives at the correct answer. That's where complex values can help. Given the same parameters and math operations, using complex values is more capable of resisting this decrease in performance."

By adding just two layers of complex values - numbers with both real and imaginary components - to the training process, the team found it could boost the quality of the results by 10 to 20 per cent, and help stop the metaphorical boulder from taking what it thinks is a shortcut and crashing through the roof of a very wrong answer.
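A complex-valued layer of this kind can be pictured as an ordinary layer whose weights carry both a real and an imaginary part. The sketch below is a hypothetical illustration rather than the paper's implementation: it builds such a layer from two real-valued linear maps, using the rule (Wr + iWi)(xr + ixi) = (Wr·xr - Wi·xi) + i(Wr·xi + Wi·xr).

```python
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Illustrative complex-valued layer built from two real linear maps
    (biases omitted to keep the complex arithmetic easy to follow)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_real = nn.Linear(in_features, out_features, bias=False)
        self.w_imag = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x_real, x_imag):
        # (Wr + iWi)(xr + ixi) = (Wr xr - Wi xi) + i(Wr xi + Wi xr)
        out_real = self.w_real(x_real) - self.w_imag(x_imag)
        out_imag = self.w_real(x_imag) + self.w_imag(x_real)
        return out_real, out_imag
```

One common convention is to collapse the real/imaginary pair back to real values - for instance by taking the magnitude of each complex activation - before the final classification layer.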

"The complex-valued neural networks have the potential for a more 'terraced' or 'plateaued' landscape to explore," Yeates added. "And elevation change lets the neural network conceive more complex things, which means it can identify more objects with more precision."

The paper and a stream of its presentation at the conference are available on the event website. ®
