Shotgun Shell: Google’s Object Recognition AI Thinks This Turtle Is a Rifle

Researchers at MIT have succeeded in confusing artificial intelligence into categorizing a reptile as a weapon, which raises questions about the future of object recognition AI security. If something has the shape, size and patterning of a turtle, then it is most probably a turtle. So something has clearly gone wrong when artificial intelligence confidently declares it to be a gun instead. Yet that is exactly how researchers from MIT’s Labsix group deceived Google’s object recognition AI, as they disclosed in a paper published last week.

The team worked with the concept of the “adversarial image”: a picture crafted from scratch to trick object recognition AI into categorizing it as something entirely different from what it actually depicts. For example, a picture of a tabby cat was identified with 99% certainty as a bowl of guacamole by Google’s InceptionV3 image classifier.
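
To make the idea concrete, here is a minimal, illustrative sketch of how such a targeted adversarial image is typically constructed, written in PyTorch against a pretrained InceptionV3 from torchvision. It is not the Labsix team’s actual code: the target class index, step size and perturbation budget are assumed values chosen only for illustration.

import torch
import torchvision.models as models

# Pretrained classifier in eval mode so it returns plain logits.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 299, 299)       # stand-in for a preprocessed cat photo
target = torch.tensor([924])             # assumed ImageNet index for "guacamole"
epsilon, step, n_steps = 0.05, 0.01, 40  # illustrative budget and step size

delta = torch.zeros_like(image, requires_grad=True)
for _ in range(n_steps):
    loss = torch.nn.functional.cross_entropy(model(image + delta), target)
    loss.backward()
    with torch.no_grad():
        # Descend the targeted loss so the target class becomes more likely,
        # then clamp the perturbation so it stays visually imperceptible.
        delta -= step * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
    delta.grad.zero_()

adversarial = (image + delta).clamp(0, 1)
print(model(adversarial).argmax(dim=1))  # ideally the assumed target class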

Neural-network-based classifiers achieve near-human performance on many tasks and are used in high-risk, real-world systems. Yet these same networks remain highly susceptible to adversarial examples: carefully manipulated inputs that cause targeted misclassification. The tabby cat mentioned above is one such instance.

Such illusions work by carefully adding visual noise to the image so that the set of signifiers the object recognition AI uses to identify what the image contains becomes scrambled, while a human notices no difference at all. However, adversarial examples generated with standard techniques fall apart when transferred into the real world, because of camera noise, zoom and the other transformations that are unavoidable in a physical setting.

The Labsix team got around this with a new algorithm that reliably produces adversarial examples causing targeted misclassification under transformations such as rotation, blur, translation and zoom. The algorithm is used to generate both 2D printouts and 3D models that fool a standard neural network from any angle, and it works for arbitrary 3D models, not just turtles: the MIT researchers also made a baseball that was classified as an espresso from every angle. The examples continued to deceive the neural network even when placed in front of semantically relevant backgrounds; after all, one would never see a rifle underwater, or an espresso in a baseball mitt.
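
Conceptually, the robustness comes from optimizing the perturbation over a whole distribution of transformations rather than a single view, an approach commonly referred to as expectation over transformation. The rough Python sketch below illustrates that idea; the transformation ranges, class index and optimization settings are guesses for illustration, not the authors’ configuration.

import random
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 299, 299)   # stand-in for a rendered view of the object
target = torch.tensor([764])         # assumed ImageNet index for "rifle"
epsilon, step, n_steps, n_samples = 0.08, 0.01, 100, 8

def random_transform(x):
    # Random rotation, zoom and blur, standing in for camera and viewpoint noise.
    angle = random.uniform(-30.0, 30.0)
    scale = random.uniform(0.9, 1.1)
    x = TF.affine(x, angle=angle, translate=[0, 0], scale=scale, shear=[0.0])
    return TF.gaussian_blur(x, kernel_size=3)

delta = torch.zeros_like(image, requires_grad=True)
for _ in range(n_steps):
    # Average the targeted loss over several sampled transformations, so the
    # perturbation has to fool the network under all of them, not just one view.
    loss = sum(
        torch.nn.functional.cross_entropy(
            model(random_transform(image + delta)), target)
        for _ in range(n_samples)
    ) / n_samples
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
    delta.grad.zero_()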

Since this lack of robustness was discovered, some work on adversarial examples has dismissed the possibility of real-world adversarial attacks, even in the case of two-dimensional printouts. That dismissal stands despite earlier work by Kurakin et al., who printed 2D adversarial examples that remained adversarial in the single-viewpoint case. The new results demonstrate that adversarial examples are a considerably bigger problem for real-world systems than was previously assumed.

But while there is plenty of theoretical work on object recognition AI demonstrating that such attacks are possible, physical demonstrations of the same technique have not held up well. In most cases, simply rotating the image, shifting the colour balance or cropping it slightly is enough to spoil the trick.

The MIT researchers have taken this idea to a new level by manipulating not an ordinary 2D image but the surface texture of a 3D-printed turtle. The resulting shell pattern looks trippy, yet the object remains completely recognisable as a turtle to everyone except Google’s object recognition AI, which declares with 90% certainty that it is a rifle.

The same was true of the 3D-printed baseball, which the researchers covered in patterns designed to make it look like an espresso to the object recognition AI, though with slightly less success. The AI sometimes classified it correctly as a baseball, but still incorrectly returned espresso the majority of the time.

The researchers write that their work shows adversarial examples to be a considerably bigger problem for real-world systems than previously thought. Such attacks could become more dangerous as machine vision is deployed more widely.

Researchers are already studying the feasibility of automatically detecting weapons in CCTV images using object recognition AI. If a turtle looks like a rifle to such a system, the result is merely a false alarm; if a rifle is detected as a turtle, the consequences could be considerably more dangerous.

There are still many kinks in these attacks that researchers have to work out. However, some have already managed to develop simpler attacks that work against many unknown AIs by applying techniques with broad, near-universal application.

AI companies are pushing back. Both Facebook and Google have published research suggesting that they are studying these techniques themselves, looking for ways to safeguard their own object recognition AI systems.
