*EEG can be used to determine what kinds of objects you're looking at.*

The most valuable machine you own may be between your ears. Work done at Microsoft Research is using electroencephalograph (EEG) measurements to "read" minds in order to help tag images. When someone looks at an image, different areas of their brain show different levels of activity. This activity can be measured, and scientists can reasonably determine what the person is looking at. That information may be used to help tag images with labels. It takes only about half a second to read the brain activity associated with each image, making the EEG process much faster than traditional manual tagging. The "mind-reading" technique may be the first step toward a hybrid system of computer and human analysis for images and many other forms of data.

Whenever an image is entered into a database, it is typically tagged with labels manually by humans. This work is tedious and repetitive, so companies have had to come up with interesting ways to get it done on the cheap. Amazon's Mechanical Turk offers very small payments to those who wish to tag images online. Google Image Labeler has turned the process into a game by pairing taggers with partners they can work with. Because EEG image tagging requires no conscious effort, workers may be able to perform other tasks during the process. Eventually, EEG readings, or the fMRI techniques that some hope to adopt for security checks, could be used to harness the brain as a valuable analytical tool.

Human and computer visual processing have separate strengths. While computers can recognize shapes and movements very well (as seen with computers learning sign language), they have a harder time categorizing objects in human terms. Brains and computers working in conjunction could one day provide rapid identification and decision making, even without conscious human effort. This could have a big impact on security surveillance and robotic warfare.

The work at Microsoft Research was headed by Desney Tan and published over the past few years at IEEE (pdf) and the Computer Human Interaction Conference (pdf). The EEG image tagging process is just one of many projects that Tan and his team hope to explore in the realm of human-computer interfaces. We've heard from Tan before: he was one of the researchers developing muscle-sensing input devices.

*Some of the images used in the studies.*

EEG readings are taken at the surface of the head and provide only a general guide to which areas of the brain are active at what times. Yet this limited information is enough to distinguish between several useful scenarios. Researchers could determine whether someone was looking at a face or an inanimate object. They also saw good results when contrasting animals with faces, animals with inanimate objects, and in some three-way classifications among these categories. Better results were seen with multiple users, and when each image was viewed multiple times. Surprisingly, no improvement was seen if the viewer was given more than half a second to look at each image. This means that images could be displayed at that speed without any loss of tagging accuracy. During the experiments, test subjects were given distracting tasks and were not told to categorize the images they saw, showing that the conscious mind does not have to be engaged (and in fact, should not be) to provide the tagging information at that speed.

In order to replace current tagging systems, Tan's team will have to find ways to determine when humans are making more precise comparisons (e.g. red-tipped hawk instead of animal, or Mickey Rourke instead of human face). They will also have to find the most efficient numbers of viewers and viewing instances that provide the most accurate tags.
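The reported gains from multiple viewers and repeated viewings are what you would expect from pooling noisy classifications. Here is a minimal toy simulation of that effect, assuming each half-second viewing yields the correct category with some fixed probability and votes are combined by simple majority. This is an illustrative sketch only: the per-view accuracy, category names, and voting scheme are invented for the example and are not taken from the Microsoft Research papers.

```python
import random
from collections import Counter

random.seed(0)

CATEGORIES = ["face", "animal", "inanimate object"]

def simulate_view(true_label, accuracy=0.7):
    """One simulated half-second viewing: returns the true category
    with probability `accuracy`, otherwise a random wrong one.
    (The 0.7 figure is a made-up placeholder, not a reported result.)"""
    if random.random() < accuracy:
        return true_label
    return random.choice([c for c in CATEGORIES if c != true_label])

def aggregate_tag(true_label, n_views, accuracy=0.7):
    """Majority vote across several independent viewings
    (multiple viewers, repeated presentations, or both)."""
    votes = Counter(simulate_view(true_label, accuracy) for _ in range(n_views))
    return votes.most_common(1)[0][0]

def tagging_accuracy(n_views, trials=2000):
    """Fraction of trials where the pooled tag matches the true label."""
    correct = sum(aggregate_tag("face", n_views) == "face" for _ in range(trials))
    return correct / trials

# Pooled accuracy climbs toward 1.0 as the number of viewings grows,
# which is the trade-off Tan's team would need to optimize.
for n in (1, 3, 7, 15):
    print(n, "views:", round(tagging_accuracy(n), 3))
```

Under these assumptions, a handful of viewings is enough to push accuracy well above that of a single viewing, which is consistent with the article's point that the open question is finding the most *efficient* number of viewers and viewing instances, not just the most accurate one.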