
http://www.google.com/policies/technologies/pattern-recognition/

How Google uses pattern recognition

Computers don’t “see” photos and videos in the same way that people do. When you look at a photo, you might see your best friend standing in front of her house. From a computer’s perspective, that same image is simply a bunch of data that it may interpret as shapes and information about color values. While a computer won’t react like you do when you see that photo, a computer can be trained to recognize certain patterns of color and shapes. For example, a computer might be trained to recognize the common patterns of shapes and colors that make up a digital image of a face. This process is known as facial detection, and it’s the technology that helps Google protect your privacy on services like Street View, where computers try to detect and then blur the faces of any people who may have been standing on the street as the Street View car drove by. It is also what helps services like Google+ photos suggest that you tag a photo or video when a face appears to be present. Facial detection won’t tell you whose face it is, but it can help to find the faces in your photos.
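To make the distinction concrete, here is a minimal sketch (in Python, using OpenCV’s stock frontal-face detector) of the detect-then-blur step the Street View example describes. It is not Google’s implementation; the cascade model, the blur kernel and the file paths are illustrative assumptions.

import cv2

def blur_faces(image_path, output_path):
    """Detect face-like patterns of shapes and colors, then blur each region found."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # A generic frontal-face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    # Detection only says that a face seems to be here; it does not say whose face it is.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

    cv2.imwrite(output_path, image)
    return len(faces)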

If you get a little more advanced, the same pattern recognition technology that powers facial detection can help a computer understand characteristics of the face it has detected. For example, there might be certain patterns that suggest a face has a beard or is wearing glasses, or that it has other similar attributes. Information like this can be used to help with features like red-eye reduction, or to lighten things up by placing a mustache or a monocle in the right place on your face when you are in a Hangout.
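As a rough illustration of “placing something in the right place,” the sketch below anchors an overlay image to a detected eye region. The eye cascade, the overlay (assumed to be a plain BGR image like the photo itself) and the sizing are assumptions for illustration, not how Hangouts actually works.

import cv2

def place_monocle(image, overlay):
    """Anchor an overlay (e.g. a monocle graphic) to the first eye detected in each face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        # Look for eyes only inside the detected face region.
        eyes = eye_cascade.detectMultiScale(gray[fy:fy + fh, fx:fx + fw], 1.1, 5)
        if len(eyes) == 0:
            continue
        ex, ey, ew, eh = eyes[0]
        # Scale the overlay to the detected eye region and paste it in place.
        patch = cv2.resize(overlay, (ew, eh))
        image[fy + ey:fy + ey + eh, fx + ex:fx + ex + ew] = patch
    return image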

Beyond facial detection technology, Google also uses facial recognition in certain features. Facial recognition, as the name suggests, helps a computer compare known faces against a new face and see whether there is a probable match or similarity. For example, facial recognition helps users of the Find my Face feature see suggestions about who they might want to tag in a photo or video they’ve uploaded and would like to share. Read more about Find my Face in the Google+ Help Center.
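Here is a hedged sketch of the comparison step that recognition adds on top of detection: a new face is compared against faces already known to the system, and the closest match above a threshold becomes a tag suggestion. The feature vectors are assumed to come from some face-embedding model; the threshold and field names are illustrative assumptions, not Find my Face internals.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tag(new_face_vector, known_faces, threshold=0.8):
    """Return the name of the most similar known face, or None if nothing is close enough.

    known_faces maps a person's name to a previously computed feature vector.
    """
    best_name, best_score = None, threshold
    for name, vector in known_faces.items():
        score = cosine_similarity(new_face_vector, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # only a suggestion; the user decides whether to tag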

How Voice Search works

Voice Search allows you to speak a query to a Google search client application on a device instead of typing it. It uses pattern recognition to transcribe spoken words into written text. For each voice query made to Voice Search, we store the language, the country, the utterance and our system’s guess of what was said. The stored audio data does not contain your Google Account ID unless you have selected otherwise. We do not send any utterances to Google unless you have indicated an intent to use the Voice Search function (for example, by pressing the microphone icon in the quick search bar or in the virtual keyboard, or by saying “Google” when the quick search bar indicates that Voice Search is available). We send the utterances to Google servers in order to recognize what you said, and we keep them to improve our services, including training the system to better recognize the correct search query.
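A minimal sketch of the client-side flow this passage describes, with hypothetical names throughout: nothing is sent unless the user signals intent (tapping the microphone or saying “Google”), and the stored record carries the language, the country, the utterance and the recognizer’s guess, with the account ID left empty unless the user has selected otherwise. The transcribe() stub stands in for the server-side recognizer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VoiceQueryRecord:
    language: str                     # e.g. "en-US"
    country: str                      # e.g. "US"
    utterance_audio: bytes            # the spoken query itself
    recognized_text: str              # the system's guess of what was said
    account_id: Optional[str] = None  # populated only if the user selected otherwise

def transcribe(audio, language):
    """Stand-in for the server-side recognizer that transcribes speech to text."""
    return "<recognized text>"

def handle_voice_query(mic_pressed, hotword_heard, audio, language, country,
                       opted_in_account_id=None):
    # No utterance is sent unless the user has indicated an intent to use Voice Search.
    if not (mic_pressed or hotword_heard):
        return None
    guess = transcribe(audio, language)
    return VoiceQueryRecord(language, country, audio, guess, opted_in_account_id)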

Elyssa D. Durant, Ed.M. | DailyDDoSe © 2009-2014
Research & Policy Analyst
