Tagging is one of the first things you will notice in most of the new community-based/social-networking Web 2.0 applications. I first saw it in Flickr and later in many other applications like Technorati, Riya, del.icio.us etc.
Tags are usually chosen informally and personally by the author/creator of the item. Tags are typically applied to dynamic, flexible online resources such as computer files, web pages, digital images (Flickr, Riya), and Internet bookmarks (del.icio.us etc).
(As an example of tagging web pages, you can check out the tag cloud on the Archives page of this blog)
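A tag cloud like the one on the Archives page is easy to sketch: count how often each tag appears and scale the font size by frequency. This is just an illustrative toy (the 12–36 px range and linear scaling are my assumptions, not how any particular blog engine does it):

```python
from collections import Counter

def tag_cloud_sizes(tags, min_px=12, max_px=36):
    """Map each tag to a font size proportional to its frequency.

    `tags` is a flat list of tag strings gathered from posts; the pixel
    range is an arbitrary illustrative choice.
    """
    counts = Counter(tags)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {
        tag: min_px + (count - lo) * (max_px - min_px) // span
        for tag, count in counts.items()
    }

sizes = tag_cloud_sizes(["web2.0", "flickr", "tags", "tags", "tags", "flickr"])
```

Here the most frequent tag ("tags") gets the largest font and the rarest ("web2.0") the smallest.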
Tagging of digital images has helped users get relevant results for a particular keyword. On a community-based site like Flickr, you can expect to see more relevant images for a keyword than from a search on Google Images.
Flickr uses the tag information keyed in by its users, while Google Image Search probably uses the image file name (which is part of the source URL) and the ALT (alternate) text embedded in the HTML.
Again, I am not sure what algorithms the Google Image bot uses to index images on the web. The only two sources I can think of are the image name in the SRC URL and the ALT text used for the image. But many web pages don’t use ALT text for images, and hence the Googlebot might miss out on these images.
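Those two signals are all a plain HTML crawler can see. A minimal sketch of extracting them, using Python's standard `html.parser` (the sample URLs are made up, and this is of course not Google's actual crawler):

```python
from html.parser import HTMLParser

class ImageTextExtractor(HTMLParser):
    """Collect (file name, ALT text) pairs from <img> tags --
    roughly the two signals the post guesses an image crawler sees."""

    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            name = a.get("src", "").rsplit("/", 1)[-1]  # file name from the SRC URL
            self.images.append((name, a.get("alt", "")))  # empty when ALT is missing

p = ImageTextExtractor()
p.feed('<img src="http://example.com/pics/sunset-beach.jpg" alt="Sunset at the beach">'
       '<img src="/img/DSC0042.jpg">')
```

The second image illustrates the problem: with no ALT text and a camera-generated file name like `DSC0042.jpg`, there is almost nothing for a search engine to index.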
So in order to make image search results more relevant, Google has started labeling the images! And they are doing it in a way typical of Google: using the brain power of humans 🙂
The Google Image Labeler Beta automatically pairs two random users and shows both of them the same set of images. If both users give the same label, they are prompted with the next image. The idea is that if two people independently come up with the same label, it is probably a good/valid one and will make Google’s image search better. The concept is based on the ESP Game created by Carnegie Mellon professor Luis von Ahn and is now licensed by Google.
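The agreement mechanic can be sketched in a few lines. This is a toy model under my own assumptions: the real game also has scoring, time limits, and "taboo" words (labels earlier pairs already agreed on, banned to force more specific tags), which I only hint at here:

```python
def esp_round(labels_a, labels_b, taboo=()):
    """Return the first label player A typed that player B also typed,
    skipping any taboo words; None means the pair never agreed."""
    allowed_b = set(labels_b) - set(taboo)
    for label in labels_a:
        if label in allowed_b:
            return label
    return None

# Both players typed "puppy", so the image earns that label.
match = esp_round(["dog", "puppy", "grass"], ["animal", "puppy"])
```

Once a label like "dog" becomes taboo for an image, later pairs must agree on something more descriptive, which is what gradually makes the labels useful for search.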
So is tagging the answer to a better, more relevant image search experience?
Riya redefined the way people search for images. They built a face recognition/detection engine that could recognize faces in the pictures uploaded by registered users. After a little manual training, the engine builds a visual signature for a face, which is then used to find/display all photos containing that face. The initial version of Riya focused on running the face recognition engine on the photos uploaded by the user.
Now Riya 2.0 is focusing on searching images across the whole web. I believe Riya’s crawler has already started indexing images and building a repository of visual signatures.
Riya’s engine can be used to efficiently search for faces and text within images. However, Riya additionally uses tagging so that users can input extra information about the images.
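Conceptually, searching a repository of visual signatures is a nearest-neighbor lookup: compare the query face's signature against every stored one and keep the close matches. In this sketch the signatures are plain feature vectors and cosine similarity is my stand-in metric; Riya's actual features, metric, and threshold are proprietary and unknown to me:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def find_matches(query_sig, repository, threshold=0.9):
    """Return ids of photos whose stored signature is close to the query.

    `repository` maps photo id -> signature vector; the 0.9 cutoff is
    an arbitrary illustrative value.
    """
    return [photo_id for photo_id, sig in repository.items()
            if cosine(query_sig, sig) >= threshold]

repo = {"p1": [1.0, 0.0], "p2": [0.0, 1.0], "p3": [0.9, 0.1]}
hits = find_matches([1.0, 0.0], repo)
```

A linear scan like this obviously wouldn't scale to a web-sized index; a real system would need some approximate nearest-neighbor structure, which is exactly the hard engineering part.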
I believe Google Image Search with the “human” labeling will make for a better search experience, but it lacks face detection and recognition. This might be one of the driving factors behind Google’s recent acquisition of Neven Vision.
Riya, on the other hand, has face recognition in place, but the images they index might not be tagged. Riya might also need a Riya Image Labeler as part of its Riya 2.0 roadmap, I believe!
Other Interesting Reads:
BusinessWeek : How Google’s Neven Vision could track our lives