Who would have thought twenty years ago that we’d have access to the Internet not just in our own homes but nearly everywhere in the world, through mobile devices like iPhones, iPads, and Kindles?
What’s crazy to think is that the Web only went public in 1991, and that wasn’t so long ago. I was already born, along with the rest of my generation; we came of age alongside the Internet, though I wouldn’t say I got my hands dirty right away. I was nearly 9 when I started searching online, beginning with Google. I remember how the dial-up modem sounds would annoy me.
I knew we were already in the digital age, but I didn’t realize there had been other major ages of human communication since the dawn of our species, when we first learned to speak. In truth, I find most ancient history fascinating. After reading the section on the Evolution of Human Communication, it all made more sense to me. So that’s what I decided to explore.
While doing some research, I found a History World page about the history of communication.
As we all know, communication begins with language. Language opened the possibility of moving from speaking to sending messages, no matter how complex. Messages carved in stone worked fairly well, but they could only be read by whoever stood within reading range. Over the centuries that changed, as messages written on papyrus began circulating, then scrolls, paper, books, and finally the Web. As the reach of communication grew, so did the means of transportation for getting the message, or in this case the news, out quickly.
A single picture, supposedly worth a thousand words, can carry emotions, but just how many? This ScienceDaily article explains a little about how a picture can hold many emotions. It also reports that researchers are using computers to digest the data that flows in the form of images and to identify exactly which emotions those images convey.
Whenever you log onto Facebook, Twitter, or similar sites, you’ll find that a lot of the content your friends share comes in the form of images. Since an image can convey more than a sentence, it is often used to provoke an emotional response in the viewer, meaning you.
Jiebo Luo, a professor of computer science at the University of Rochester, together with researchers at Adobe Research, claims to have found a way to train computers to digest data that comes in the form of images. The progressive training process uses what is called a convolutional neural network. The trained computer can then determine what sentiments an image is likely to draw in response.
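To get a feel for what a convolutional neural network actually does, here is a minimal sketch of its core building block: a convolution layer followed by a ReLU activation and max pooling. This is only an illustration of the technique in general, not the researchers’ actual model; the toy image and kernel values are made up for demonstration.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a grayscale image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output value is the sum of an elementwise product
            # between the kernel and one patch of the image.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative activations."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 6x6 "image" and a 3x3 vertical-edge kernel (illustrative values only).
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (2, 2)
```

A real network stacks many such layers, learns the kernel values from labeled examples instead of hard-coding them, and ends in a classifier that maps the final feature maps to sentiment labels.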
Luo claims that this kind of analysis could be useful for tasks as diverse as predicting elections. The catch is that while computers can already perform sentiment analysis on text, many people prefer to express themselves using photos and video, which are much harder for a computer to decode.
The researchers describe that challenge as an “image classification problem,” meaning every individual picture needs to be analyzed and labeled. Luo and his team collected a large number of Flickr images, had them analyzed, and then kept only the better-labeled images for testing. They also tried adapting their sentiment analysis engine to decode images extracted from Twitter, this time employing “crowd intelligence,” with people helping to categorize the images. By applying this domain adaptation process, Luo and his team showed they could improve on current state-of-the-art methods for sentiment analysis of Twitter and Flickr images.
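One simple way to use “crowd intelligence” like this is to keep only the images whose human labelers mostly agree, so the training data stays clean. The sketch below is a hypothetical illustration of that filtering step, not the team’s actual pipeline; the image IDs, labels, and agreement threshold are all made up.

```python
from collections import Counter

def filter_by_agreement(labels_per_image, min_agreement=0.75):
    """Keep only images whose crowd labels mostly agree.

    labels_per_image: dict mapping an image id to the list of sentiment
    labels assigned by different annotators (hypothetical data format).
    Returns a dict of image id -> majority label for the images kept.
    """
    kept = {}
    for image_id, labels in labels_per_image.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            kept[image_id] = label
    return kept

# Toy example: four annotators voted on each image.
votes = {
    "img_001": ["positive", "positive", "positive", "negative"],
    "img_002": ["positive", "negative", "neutral", "negative"],
}
print(filter_by_agreement(votes))  # {'img_001': 'positive'}
```

Here “img_001” survives because 3 of 4 annotators agreed, while “img_002” is dropped since no label reaches the threshold.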
Here is a map of how the technology is laid out when an image is processed by a convolutional neural network.