

MIT’s smart algorithm can fix dumb AI

Not sure which neural network or burgeoning AI tech is most likely to become humanity’s next robot overlord?

Well, that’s all about to change, as the clever folks at MIT (Massachusetts Institute of Technology) have found a way to test which machine-learning technology is the smartest.

On Friday, the MIT researchers reported developing a new methodology and technology to assess how smart, accurate and “robust” convolutional neural networks (CNNs) are.

Despite the name, CNNs aren’t robo news readers. They’re a special type of network designed to process and classify images. The tech is used for a variety of different things, ranging from auto-tagging in consumer photography apps to government facial recognition tech and self-driving cars.


The issue is that when they go wrong, the consequences can be dangerous – facial recognition tech throwing up false positives, or a self-driving car ignoring a stop sign – which is why ensuring accuracy and eliminating mistakes in CNNs is fairly important.

The new MIT methodology reportedly identifies weak CNNs using a nifty algorithm that generates and throws “adversarial examples” at the networks. These apparently force the CNN to evaluate a series of images containing minor changes that are undetectable to the human eye, like a limited number of darker or lighter pixels.
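To get a feel for how a few imperceptible pixel changes can flip a classifier’s answer, here’s a minimal toy sketch in Python. It is our illustration, not MIT’s actual algorithm: the “classifier” is just a weighted sum over four pixel intensities, and the “attack” nudges each pixel by at most a tiny eps in the direction that most lowers the score.

```python
# Toy illustration of an adversarial example (not MIT's method):
# a tiny linear "classifier" over a 4-pixel grayscale image.

def classify(pixels, weights):
    """Label the image 1 if the weighted pixel sum is positive, else 0."""
    score = sum(p * w for p, w in zip(pixels, weights))
    return 1 if score > 0 else 0

def adversarial(pixels, weights, eps):
    """Shift each pixel by at most +/-eps against the sign of its weight
    (for a linear model, the direction that most reduces the score)."""
    return [p - eps if w > 0 else p + eps for p, w in zip(pixels, weights)]

weights = [0.5, -0.3, 0.8, -0.2]   # made-up model weights
image = [0.3, 0.4, 0.1, 0.5]       # score is only just positive: 0.01

print(classify(image, weights))                               # prints 1
print(classify(adversarial(image, weights, 0.05), weights))   # prints 0
```

A human would see no difference between the two images – every pixel moved by at most 0.05 on a 0-to-1 brightness scale – yet the classifier’s answer flips.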

Vincent Tjeng, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and first author of the research paper, explained:

“Adversarial examples fool a neural network into making mistakes that a human wouldn’t. For a given input, we want to determine whether it is possible to introduce small perturbations that would cause a neural network to produce a drastically different output than it usually would.

“In that way, we can evaluate how robust different neural networks are, finding at least one adversarial example similar to the input or guaranteeing that none exist for that input.”


In short, the methodology stress-tests the CNN by throwing increasingly difficult adversarial examples at it until it breaks and an image is misclassified.
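The stress-testing loop above can be sketched like this, again on the toy linear classifier rather than a real CNN (MIT’s actual technique formally verifies whether any such perturbation exists, rather than searching step by step). The perturbation budget eps grows until the label flips; the smallest eps that breaks the model serves as a crude robustness score.

```python
# Toy sketch of the stress-testing idea: grow the perturbation budget
# until the classifier's answer flips. Our illustration, not MIT's algorithm.

def classify(pixels, weights):
    """Label the image 1 if the weighted pixel sum is positive, else 0."""
    return 1 if sum(p * w for p, w in zip(pixels, weights)) > 0 else 0

def worst_case(pixels, weights, eps):
    """Within +/-eps per pixel, push each pixel against its weight's sign --
    the score-minimising perturbation for a linear model."""
    return [p - eps if w > 0 else p + eps for p, w in zip(pixels, weights)]

def robustness(pixels, weights, step=0.01, max_eps=1.0):
    """Return the smallest tested eps at which the label flips,
    or None if no flip is found within the budget."""
    original = classify(pixels, weights)
    eps = step
    while eps <= max_eps:
        if classify(worst_case(pixels, weights, eps), weights) != original:
            return eps
        eps += step
    return None

weights = [0.5, -0.3, 0.8, -0.2]
image = [0.3, 0.4, 0.1, 0.5]
print(robustness(image, weights))   # prints 0.01 -- a very fragile model
```

A larger returned eps would mean bigger, more visible changes are needed to fool the model – in other words, a more robust network.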

This apparently lets them get an exact metric of how “robust” and accurate the CNN is. In theory this should help companies improve their self-driving car, image tagging and facial recognition tech, which will lead to better products and services for consumers in the near future, if it’s implemented.

The MIT researchers aren’t the only people trying to create better checks and safeguards for machine learning and artificial intelligence technologies. The European Commission published a set of laws for AI earlier this year. Sadly they weren’t anywhere near as cool as Asimov’s laws.

Why trust our journalism?

Founded in 2004, Trusted Reviews exists to give our readers thorough, unbiased and independent advice on what to buy.

Today, we have millions of users a month from around the world, and assess more than 1,000 products a year.


Editorial independence

Editorial independence means being able to give an unbiased verdict about a product or company, with the avoidance of conflicts of interest. To ensure this is possible, every member of the editorial staff follows a clear code of conduct.


Professional conduct

We also expect our journalists to follow clear ethical standards in their work. Our staff members must strive for honesty and accuracy in everything they do. We follow the IPSO Editors’ code of practice to underpin these standards.