Wednesday, December 10, 2014

Paper Summary: A Crowd of Your Own

A Crowd of Your Own: Crowdsourcing for On-Demand Personalization
HCOMP 2014 (Notable Paper)
A lot of my research explores personalization. Personalization is a way for computers to support people’s diverse interests and needs by providing content tailored to the individual. However, despite clear evidence that personalization can improve our information experiences, it remains very hard for computers to actually understand individual differences.

Some recent strides have been made in developing algorithmic approaches to personalization by identifying patterns that have been seen before. For example, search engines are able to successfully personalize your search results by recognizing queries that you have issued before and helping you get back to the same results you found last time. The patterns identified do not necessarily need to come from your own data. Netflix and Amazon provide pretty good movie and product recommendations by identifying people with tastes similar to yours and recommending to you what those people have watched or bought.

However, the ability to identify and support unusual patterns requires access to a significant amount of data. For a search engine to provide personalized support for a navigational query, you need to have issued that query before. And for Netflix to recommend a movie to you, it must have data from other people with your particular idiosyncratic tastes who have already rated many of the same movies that you have, as well as some that you have not seen. If Netflix wanted to recommend a home movie for you to watch from the collection of videos your father took years ago, it would find the task impossible, because nobody has actually watched those movies since they were taken.

Fortunately, even though computers can't figure out what other people like, people are pretty good at it. Your sister, for example, could probably pick out a pretty reasonable subset of your dad's home movies for you to watch if she wanted to. For this reason, we often turn to our online social networks to ask for personal recommendations.

The people we turn to do not necessarily need to know us to be able to figure out what we like. If we are willing to pay for people's insights, crowdsourcing can provide an effective on-demand tool for personalization. We have explored two approaches to building personalized human computation systems.

Taste-Matching: In an approach similar to collaborative filtering, taste-matching identifies workers with similar taste to the requester, and then uses their taste to infer the requester’s taste.

Taste-Grokking: In an approach similar to what we currently do with friends, taste-grokking asks workers to explicitly predict what the requester will like after seeing examples of other things that the requester likes.

To receive a personalized rating for the kissing grandparents salt shaker shown above, the requester first needs to provide some ratings for other salt shakers. Using taste-matching, the system then collects ratings for the same salt shakers from crowd workers and matches the requester’s ratings with Worker I to predict the requester is very likely to enjoy the unrated salt shaker. Using taste-grokking, Worker III is shown the requester’s earlier ratings and offers an educated guess that the requester will like the last salt shaker.
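Taste-matching is essentially nearest-neighbor collaborative filtering with crowd workers standing in for the user database. A minimal sketch of the idea, using hypothetical ratings data and names (this is an illustration of the approach, not the paper's implementation): match the requester to the worker whose ratings agree most on the shared items, then use that worker's rating of the unrated item as the prediction.

```python
# Hypothetical taste-matching sketch: binary ratings (1 = like, 0 = dislike)
# on salt shakers "A".."D"; shaker "E" is unrated by the requester.

def most_similar_worker(requester, workers):
    """Pick the worker whose ratings best agree with the requester's."""
    def similarity(name):
        worker = workers[name]
        shared = set(requester) & set(worker)
        # Fraction of shared items on which requester and worker agree.
        return sum(requester[i] == worker[i] for i in shared) / len(shared)
    return max(workers, key=similarity)

requester = {"A": 1, "B": 0, "C": 1, "D": 1}
workers = {
    "Worker I":   {"A": 1, "B": 0, "C": 1, "D": 1, "E": 1},
    "Worker II":  {"A": 0, "B": 1, "C": 0, "D": 0, "E": 0},
    "Worker III": {"A": 1, "B": 1, "C": 0, "D": 1, "E": 0},
}

match = most_similar_worker(requester, workers)
prediction = workers[match]["E"]
print(match, prediction)  # Worker I agrees on all four shared items
```

In practice one would use graded ratings and a correlation-style similarity measure rather than exact agreement, but the structure is the same: the requester's own ratings are the query, and the best-matching worker's remaining ratings become the predictions.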

Both taste-matching and taste-grokking can be used to personalize tasks that we don't yet know how to personalize algorithmically. The best approach to use depends on the particular task, as each approach has different benefits and drawbacks. Some differences include:
  • The number of workers required. To find a good match using taste-matching, opinions must first be solicited from a number of workers. In contrast, taste-grokking can be done by just one or two crowd workers.
  • The ability to reuse data. While taste-matching requires more workers, the data collected using that approach can also be re-used to make future recommendations to other people. Workers in a taste-grokking system provide recommendations specific to individual requesters, while taste-matching workers do not. Taste-matching is a good way to bootstrap systems that are likely to eventually have enough data to support automated approaches.
  • The quality of the data collected. Taste-grokking has a ground truth: successful taste-grokkers will be able to successfully guess what the requester likes. This makes it easy to identify incompetent workers. On the other hand, taste-matching asks for the workers' personal opinions. Because there are no "right" answers when it comes to opinions, workers may be tempted to provide quick, unthoughtful responses.
  • The scrutability of the task. In the salt shaker example above, it is easy to use the data to separate people who prefer classic shakers from those who prefer kissing figurine shakers. But people's preferences aren't always so obvious. Taste-grokking salt shaker preferences is much harder when the requester instead says they really liked the first two salt shakers. We have observed that when there are a number of latent, unseen factors that are hard to capture in just a few examples, taste-matching outperforms taste-grokking.
  • Worker enjoyment. Workers report having more fun performing taste-grokking tasks. It can feel like a fun game to guess what other people are thinking, particularly when you get to learn whether you were right or wrong.
Using taste-matching and taste-grokking, we have successfully personalized a wide variety of tasks, ranging from image recommendation to text summarization to handwriting duplication. The crowd is great for tasks that people can do well but that we haven't fully figured out how to automate. Crowd-based personalization seems like an obvious win until our robot overlords finally figure out how to do it better on their own.

See also: Hal Hodson, The Crowd Can Guess What You Want to Watch or Buy. The New Scientist, 2989, 2 October 2014.


  1. This was a great paper at HCOMP. Fun stuff!

  2. Thanks, Michael! You should get Peter to show you what we did with handwriting -- it's a fun example of the power of the approach, but we didn't have space in the HCOMP paper. :)

  3. Nice article! Will you conduct more workshops? I'd love to visit one!