
January 22, 2009

Sentiment analysis in text

Sentiment analysis (also known as opinion retrieval or opinion mining) is a very useful area of research: once fully functional, it would enable us to determine the overall sentiment in text. We could, for example, automatically determine whether product reviews are negative or positive, whether a blog post agrees or disagrees with a particular topic or debate, or whether news coverage is favourable towards a story line. Many more possibilities open up once we start thinking along these lines.

For a really nice overview of this topic, check out Cornell's freely available resource on it.

As far as blogs go (an area which interests us bloggers greatly), it would enable further clustering possibilities for search engines. This means that you could look for views contradicting one of your posts, or for like-minded people. There are of course further things you could do with this kind of technology, and I'll leave those open for you to debate amongst friends.

Some of the problems we encounter in making all this possible aren't easy to deal with. One is extracting the opinion- or sentiment-bearing sections of a text, which means analysing the content at depths not currently attempted. You then need to be able to rank these documents by sentiment intensity. How do we determine that? And how do you pick out affective or emotive words as opposed to generic ones?
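To make the intensity-ranking problem concrete, here is a minimal sketch. The lexicon, weights and scoring rule are invented for illustration and not taken from any of the papers discussed below:

```python
# Hypothetical sketch: rank documents by sentiment "intensity" using a tiny
# hand-made affective lexicon. Real systems would use a much richer lexicon
# and handle negation, phrases, and context.

AFFECTIVE = {"love": 1.0, "great": 0.8, "good": 0.5,
             "terrible": -0.9, "awful": -1.0, "hate": -1.0}

def intensity(doc: str) -> float:
    """Sum of absolute lexicon weights, normalised by document length."""
    words = doc.lower().split()
    if not words:
        return 0.0
    return sum(abs(AFFECTIVE.get(w, 0.0)) for w in words) / len(words)

docs = ["I love this phone , it is great",
        "The manual describes the settings",
        "Awful battery , terrible screen , I hate it"]

# Most emotionally intense documents first; purely factual text sinks.
ranked = sorted(docs, key=intensity, reverse=True)
```

Note that this only measures how *emotive* a document is, not its polarity; the papers below tackle both.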

Let's look at some of the research undertaken recently in this area. I've picked three papers in which the researchers took different approaches:

"A Generation Model to Unify Topic Relevance and Lexicon-based Sentiment for Opinion Retrieval" by Zhang and Ye from Tsinghua University (Beijing) - SIGIR 08

They focused on the issue of combining a document's opinion score with its topic-relevance score. They used a lexicon-based opinion retrieval method which unifies "topic relevance" and "opinion generation" through a quadratic combination. Working with the TREC blog data sets, they observed an improvement over linear combination techniques, and were able to show that a Bayesian approach to combining multiple ranking functions is better than a linear combination. Their "relevance-based ranking criterion" is used as the weighting factor for the "lexicon-based sentiment ranking function".
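The contrast between the two combination schemes can be sketched as follows. The scores here are placeholder numbers; Zhang and Ye's actual estimation of relevance and opinion scores is far more involved:

```python
# Sketch of linear vs. multiplicative (quadratic-style) score combination.
# rel = topic-relevance score, op = lexicon-based opinion score (illustrative).

def linear_combine(rel: float, op: float, lam: float = 0.5) -> float:
    """Classic interpolation: a highly opinionated but off-topic document
    can still rank well."""
    return lam * rel + (1 - lam) * op

def weighted_combine(rel: float, op: float) -> float:
    """Relevance acts as a weighting factor on the sentiment score, so an
    off-topic document scores zero no matter how opinionated it is."""
    return rel * op

# Off-topic rant: rel = 0.0, op = 0.9
linear_score = linear_combine(0.0, 0.9)    # still positive
weighted_score = weighted_combine(0.0, 0.9)  # zero
```

The design point is that opinion should only count when the document is actually about the query topic, which is the intuition behind using relevance as a weighting factor.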

They realised that there was a need not only to identify the sentiment in the text but also to identify and meet the user's needs. I found the way they pick sentiment terms out of the text interesting: they use WordNet to identify a subset of seed sentiment terms and then enlarge that list with synonyms and antonyms. The interesting part is that they also used HowNet, a Chinese database in which some of the words are tagged as positive or negative.
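The seed-expansion idea can be sketched like this. The tiny thesaurus below is a stand-in for WordNet/HowNet lookups, so all the entries are invented:

```python
# Hypothetical sketch of seed-lexicon expansion: start from a few seed terms
# with known polarity and propagate it through synonym/antonym links.

SYNONYMS = {"good": ["fine", "nice"], "bad": ["poor", "awful"]}
ANTONYMS = {"good": ["bad"], "bad": ["good"]}

def expand(seeds: dict) -> dict:
    lexicon = dict(seeds)
    for word, polarity in seeds.items():
        for syn in SYNONYMS.get(word, []):
            lexicon.setdefault(syn, polarity)    # synonyms keep the polarity
        for ant in ANTONYMS.get(word, []):
            lexicon.setdefault(ant, -polarity)   # antonyms flip it
    return lexicon

lex = expand({"good": 1})  # one positive seed grows into a small lexicon
```

In practice this expansion is iterated and filtered, since synonym chains in WordNet drift in meaning quite quickly.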

Their method can be adapted to all sorts of documents, not just blogs, because of its generalised nature. They're looking at constructing a collection-based sentiment lexicon, which I would be highly interested in having access to!


"A Holistic Lexicon-Based Approach to Opinion Mining" by Ding, Liu and Yu from the University of Illinois at Chicago - WSDM 08

These guys looked particularly at product reviews, and also wanted a solution for establishing what is negative or positive. They used a "holistic lexicon-based approach", which means that the system looks at opinion words that are context dependent (they use the example of "small"). Their system, "Opinion Observer", can also deal with particular constructs, phrases and words which typically have an impact on opinion in the language concerned. More importantly, their system can deal with conflicting opinion words in a sentence. Here we can say the approach is semantics-based.
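A very stripped-down sketch of the intuition, namely that negation flips polarity and conflicting opinion words in one sentence are aggregated rather than letting a single word decide. The lexicon and the negation rule are illustrative assumptions, far simpler than what Opinion Observer actually does:

```python
# Minimal context-aware sentence scorer: handles simple negation and sums
# conflicting opinion words instead of picking the first one found.

LEXICON = {"great": 1, "good": 1, "bad": -1, "noisy": -1}
NEGATORS = {"not", "never", "no"}

def sentence_score(sentence: str) -> int:
    score, negate = 0, False
    for token in sentence.lower().split():
        if token in NEGATORS:
            negate = True
            continue
        if token in LEXICON:
            score += -LEXICON[token] if negate else LEXICON[token]
        negate = False  # negation only scopes over the next word here
    return score

# "great picture but noisy fan" -> +1 and -1 cancel to a neutral 0
```

Real reviews need much more than this: "small" is good for a phone and bad for a screen, which is exactly the context dependence the paper addresses.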

They describe the shortcomings of the lexicon-based approach, stating: "To complete the proposed approach, a set of linguistic patterns are devised to handle special words, phrases and constructs based on their underlying meanings or usage patterns, which have not been handled satisfactorily so far by existing methods." They outperformed all "state-of-the-art existing models".

"Learning to Identify Emotions in Text" by Strapparava (FBK-Irst, Italy) and Mihalcea (University of North Texas) - SAC 08

They approached the problem from another angle. They annotated a large dataset with six basic emotions (Anger, Disgust, Fear, Joy, Sadness and Surprise) and worked on identifying these in text automatically. They used news headlines for this, and looked at lexicon approaches, the latent semantic space (LSA) approach, naive Bayes classifiers and others. They also looked at the co-occurrence of affective words in text, following the classification found in WordNet Affect and collecting words related to their six groups.

You see here that variants of LSA are used in current systems; in this case, for example, they used it to relate generic terms to affective lexical concepts. Their method also takes into account a tf-idf weighting scheme. They explain that "In the LSA space, an emotion can be represented at least in three ways: (i) the vector of the specific word denoting the emotion (e.g. “anger”), (ii) the vector representing the synset of the emotion (e.g. {anger, choler, ire}), and (iii) the vector of all the words in the synsets labeled with the emotion."
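For readers unfamiliar with the weighting scheme mentioned above, here is tf-idf in plain Python. The toy corpus is invented; in the paper the weights feed into the LSA space rather than being used directly:

```python
import math

# Toy corpus of tokenised "headlines" (illustrative only).
corpus = [["joy", "news", "headline"],
          ["fear", "news"],
          ["joy", "joy", "surprise"]]

def tf_idf(term: str, doc: list, docs: list) -> float:
    """Term frequency in the document, discounted by how many
    documents the term appears in (rarer terms weigh more)."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf
```

The effect is that a term like "fear", appearing in only one document, outweighs "news", which appears in two, even at the same raw frequency.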

They evaluated five systems specifically: WN-Affect presence (annotates emotions in the text using WordNet Affect), LSA single word (similarity between the given text and each emotion word), LSA emotion synset (adds the words denoting the emotion), LSA all emotion words (adds all words in the synsets labelled with a given emotion), and NB trained on blogs (a naive Bayes classifier trained on blog data annotated for emotions).

The WN-Affect system has the highest precision and the lowest recall. LSA using all emotion words has the highest recall, but its precision is a bit lower. The NB method worked best on Joy and Anger because these were prevalent in the training set. All other emotions were best identified by the LSA models.
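The precision/recall trade-off reported above is worth making concrete. The prediction sets below are made up to mimic a conservative, WN-Affect-style system versus a greedy, LSA-all-words-style one:

```python
def precision_recall(predicted: set, relevant: set):
    """Standard IR definitions over sets of item ids."""
    tp = len(predicted & relevant)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {1, 2, 3, 4, 5, 6, 7, 8}              # truly emotional headlines
conservative = {1, 2}                             # few, mostly correct guesses
greedy = {1, 2, 3, 4, 5, 6, 9, 10, 11, 12}        # many guesses, more noise

p1, r1 = precision_recall(conservative, relevant)  # high precision, low recall
p2, r2 = precision_recall(greedy, relevant)        # lower precision, high recall
```

Neither extreme is "better"; which you want depends on whether false alarms or missed emotions cost you more.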

Why should you care?

This kind of system, when working efficiently, would mean that reputation management suddenly becomes very important, as negative and positive comments could easily be retrieved by users. An overall bad reputation for a company, product or individual could be very damaging.
 

4 comments:

Arturo Servin said...

What a coincidence. Today in the morning I was thinking about how to do something like this with twitter. It was just a thought but I am glad that you posted some references to start digging.

Regards,
-as

CJ said...

I'm glad you found it useful! I'd be interested in what you conjure up if you feel like sharing :)

Bob Carpenter said...

Our natural language processing API, LingPipe, has a tutorial on sentiment analysis including source code that recreates the Pang and Lee experiments.

My favorite papers on sentiment lately have been Ryan McDonald's. The Blitzer et al. paper is nice for the examples, but I don't think it's worth doing adaptation when training data's so cheap.

CJ said...

Hey Bob,

thanks for adding that. I also agree with you.

cj

Creative Commons License
Science for SEO by Marie-Claire Jenkins is licensed under a Creative Commons Attribution-Non-Commercial-No Derivative Works 2.0 UK: England & Wales License.
Based on a work at scienceforseo.blogspot.com.