From what I (and Google) can see, Google Goggles might be one of the next evolutionary steps of mobile search.
It was first announced at Google's search event on Monday, but Google representatives gave a more detailed
demonstration outside the main conference room.
Basically, think of Goggles as object recognition, but with your phone's camera serving as your eye.
How might you identify an unfamiliar object? Perhaps by shape or by context. Does it have text printed on it?
That's also helpful. And it might also help if you had millions of photographs
in your memory to compare objects to,
as Goggles does.
How is this useful? Well, for shopping, perhaps. Identifying an object that isn't properly labeled is certainly useful,
especially if you're overseas. There's tourism, too: Goggles was smart enough to identify a painting in a demonstration,
which linked, of course, to a variety of supplemental information. And some noteworthy objects can be found outside of a
museum: the Transamerica Pyramid, for example, or the Golden Gate Bridge. Even home repair might be assisted by a
computer-generated search for an otherwise unidentifiable widget.
Goggles can perform facial recognition, but it hasn't yet found a business case that would justify it, according to a
product demonstrator who didn't identify himself. Although an obvious solution would be tagging, Google is still
wrestling with how permissions would work and who would be allowed to tag photos, he said.
Google Goggles works with the Android operating system, and representatives demonstrated the technology working on new
Droids at the event. Using the camera, the demonstrator snapped an image of a book cover with some cartoonish text on the front.
In this case, Goggles has a couple of options: it can try to "read" the text and compare it against a text database, or it can
try to match the image, or a portion of it, against its own image database. Keep in mind that Google can crawl a variety of
online bookstores and pull in content from them. In separate instances, Goggles matched the image against its image
database, identifying the book and offering a link to an online bookseller. In another instance, scanning the book's
bar code would have provided a link to the bookseller in Hong Kong where the demonstrator had purchased the book, he said.
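The two recognition paths described above, reading printed text versus matching the image itself, can be sketched roughly as follows. Everything here is an illustrative stand-in: the function names, the toy databases, and the hash-like "fingerprints" are invented for the sketch, not Google's actual pipeline.

```python
# Illustrative sketch of a two-path recognizer: try a text lookup first
# (the "read the text" path), then fall back to matching an image
# fingerprint. All names and data below are invented for illustration.

# Toy "text database": recognized cover text -> product ID.
TEXT_DB = {
    "the art of computer programming": "book/taocp",
}

# Toy "image database": precomputed image fingerprint -> product ID.
IMAGE_DB = {
    "fp:8c1d": "book/taocp",
}

def recognize(extracted_text, fingerprint):
    """Return a product ID, preferring a text match over an image match."""
    key = extracted_text.strip().lower()
    if key in TEXT_DB:                # Path 1: "read" the printed text
        return TEXT_DB[key]
    return IMAGE_DB.get(fingerprint)  # Path 2: match the image itself

# Cartoonish lettering may defeat the text path, in which case the
# image-matching path still succeeds:
print(recognize("7he Ar7 of...", "fp:8c1d"))  # -> book/taocp
```

The point of the fallback ordering is that clean printed text is cheap to match exactly, while image matching tolerates stylized lettering the text path can't parse.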
In another demonstration, Goggles correctly identified an American Express card from an ad, but in doing so
unwittingly pointed out a limitation of the technology. For now, Goggles transfers and analyzes image data in
grayscale, so the card was identified as a gold American Express card rather than the green card that actually
appeared in the ad, even though the card's color is obviously a key identifying characteristic.
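The grayscale limitation is easy to see in miniature: the standard luminance conversion maps many distinct colors to the same gray value, discarding hue entirely. The two RGB swatches below are invented examples chosen to collapse to the same gray level; they are not the actual card colors.

```python
# Why a grayscale-only pipeline can confuse a green card with a gold one:
# the usual RGB -> grayscale conversion keeps only brightness, so two
# very different hues can produce the identical gray value.

def to_gray(r, g, b):
    """ITU-R BT.601 luma weighting, the common RGB -> grayscale formula."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

greenish = (0, 148, 115)   # a green-leaning swatch (illustrative values)
goldish  = (200, 60, 45)   # a warm, gold-leaning swatch (illustrative values)

print(to_gray(*greenish), to_gray(*goldish))  # both collapse to 100
```

Once the conversion runs, the hue information is simply gone; no downstream matcher can recover it, which is why color-based distinctions like green versus gold cards fall outside what a grayscale pipeline can make.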
Goggles can also identify landmarks, although exactly how it does so isn't clear. What it can do,
however, is match the item against its photographic database, so an obvious landmark such as the Arc de Triomphe
in Paris can be clearly identified. In the future, Google plans to use information gathered from other sources
to add more context: for example, using a café in the background of a picture to add location information.
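Matching a photo against a database of landmark images can be sketched as a nearest-neighbor search over image fingerprints. In the toy version below, the three-number fingerprints stand in for the high-dimensional descriptors a real system would extract, and the database entries are invented.

```python
# Minimal sketch of landmark identification as nearest-neighbor search.
# The short "fingerprint" vectors and database entries are illustrative
# stand-ins, not real image descriptors.

import math

LANDMARK_DB = {
    "Arc de Triomphe":      (0.9, 0.1, 0.3),
    "Golden Gate Bridge":   (0.2, 0.8, 0.5),
    "Transamerica Pyramid": (0.4, 0.4, 0.9),
}

def identify(query):
    """Return the landmark whose stored fingerprint is closest to the query."""
    return min(LANDMARK_DB, key=lambda name: math.dist(query, LANDMARK_DB[name]))

print(identify((0.85, 0.15, 0.25)))  # -> Arc de Triomphe
```

This also hints at why unfamiliar perspectives are hard: a photo taken from an unusual angle produces a fingerprint far from every stored reference, so the nearest neighbor may simply be wrong.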
If an object can be found in Google's database, chances are that it can be identified. But it's less clear
how Goggles will fare with familiar objects seen from unfamiliar perspectives: Will the Statue of Liberty be
identifiable, for example, in a photo taken from the statue's base? Or from behind it? Time will tell, I suppose.
At this point, Goggles appears somewhat rudimentary. But it represents what Vic Gundotra,
vice president of engineering for Google, called "an eye to the cloud": By combining the camera
of a smartphone with the vast cloud-based search, recognition, and contextual databases available
to it, Google has added another dimension to its search inputs.
Potentially, what you can see can be searched. And that's the significance of Goggles in a nutshell.
Will it be a success? Well, Google sees quite a bit these days. And what Google sees, Google seems to know.