New Google AI experiment matches poses with images

BY LEAH WILLIAMS
Posted on FEB 09, 2019

Google has launched an AI experiment called 'Move Mirror' that demonstrates in-browser reverse image search, using machine learning and a webcam to match a user's poses with images.

Google revealed "Move Mirror", a machine learning experiment that matches a user's poses with images of other people striking the same pose, in a blog post on July 19, 2018.

The company also wanted to "make machine learning more accessible to coders and makers" while inspiring them to take the tech and run with it for their own applications.
According to Google's announcement, the "mirror" is built on PoseNet, an open-source pose estimation model from Google that can detect body poses, together with TensorFlow.js, a JavaScript library for in-browser machine learning.
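
As a rough sketch of what that setup looks like in code, the snippet below loads the published @tensorflow-models/posenet package and estimates a single pose from a webcam-backed video element. Option names differ across library versions, so treat this as illustrative rather than as Move Mirror's actual code.

```typescript
import '@tensorflow/tfjs';                        // registers the browser backend
import * as posenet from '@tensorflow-models/posenet';

// Load the pretrained pose estimation model, then estimate one person's
// pose from a <video> element showing the webcam feed.
async function estimatePose(video: HTMLVideoElement): Promise<posenet.Pose> {
  const net = await posenet.load();
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  // pose.keypoints lists 17 body parts (shoulders, ankles, hips, ...),
  // each with an (x, y) position and a confidence score.
  for (const kp of pose.keypoints) {
    console.log(`${kp.part}: (${kp.position.x}, ${kp.position.y}) score ${kp.score}`);
  }
  return pose;
}
```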

To find a matching image, the experiment uses only your "pose information": the locations of 17 different body parts, including your shoulders, ankles, and hips. According to the blog post, it does not take individual characteristics such as gender, height, or body type into account.
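
The blog post describes the matching in detail; as a simplified illustration of the idea, one way to compare two sets of keypoints is to normalize each pose into a scale- and position-invariant vector and score closeness with cosine similarity. The helpers below are hypothetical and much simpler than Google's actual search over its image collection.

```typescript
type Keypoint = { x: number; y: number };

// Translate the keypoints to the origin and scale them to a unit box,
// so body size and position in the frame drop out of the comparison.
function toNormalizedVector(keypoints: Keypoint[]): number[] {
  const xs = keypoints.map(k => k.x);
  const ys = keypoints.map(k => k.y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  const w = Math.max(...xs) - minX || 1;
  const h = Math.max(...ys) - minY || 1;
  const vec: number[] = [];
  for (const k of keypoints) {
    vec.push((k.x - minX) / w, (k.y - minY) / h);
  }
  return vec;
}

// Cosine similarity between two pose vectors: 1.0 means identical poses.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```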

Motion capture technology has been used for decades to record real human movement for blockbuster films. Video games have used it as well, notably through Microsoft's 3D imaging device, the Kinect.

Previous methods required expensive hardware. Google's experiment, by contrast, runs entirely in the browser with an ordinary webcam.
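
A webcam feed can be wired to a page with the standard getUserMedia browser API, so no special hardware or server round trip is involved; a minimal sketch:

```typescript
// Attach the user's webcam stream to a <video> element, entirely in the
// browser. Frames never leave the machine; pose estimation can then run
// locally on the element.
async function startWebcam(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
}
```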

Google does not send the images to its servers; all of the image recognition happens locally, in the browser. The technology also does not recognize who is in the image, because there is "no personally identifiable information associated to pose estimation."

The TensorFlow blog has an extensive rundown of the challenges and programming hurdles the team overcame, for users and researchers interested in the details of the development.

Users can try the artificial intelligence technology, using their webcam, at Google's experiments page.
