Recommender Systems

What we have covered:
- What is IR
- Evaluation
- Tokenization and properties of text
- Web crawling
- Query models
- Vector methods
- Measures of similarity
- Indexing; inverted files
- Basics of the internet and web
- Spam and SEO
- Search engine design
- Google and link analysis
- Metadata, XML, RDF; advanced search, Semantic Web

Today: recommender systems and scaling

Based on the tutorial "Recommender Systems," International Joint Conference on Artificial Intelligence, Beijing, August 4, 2013, by Dietmar Jannach (TU Dortmund) and Gerhard Friedrich (Alpen-Adria-Universität Klagenfurt).

Recommender systems application areas: in the Social Web, personalized search, and more.

"Computational advertising"

Agenda:
- What are recommender systems for? Introduction
- How do they work? Fundamentals: collaborative filtering
- How do we measure their success? Evaluation techniques
- How do they really work? Content-based filtering, knowledge-based recommendations, hybridization strategies

- Advanced topics: explanations, human decision making

Introduction

Why use recommender systems? Value for the customer:
- Find things that are interesting
- Narrow down the set of choices

- Help me explore the space of options
- Discover new things
- Entertainment

Value for the provider:
- An additional, possibly unique, personalized service for the customer
- Increased trust and customer loyalty
- Increased sales, click-through rates, conversion, etc.
- Opportunities for promotion and persuasion

- Obtaining more knowledge about customers

Real-world check. Myths from industry:
- One large e-commerce site reportedly generates X percent of its sales through recommendation lists (30 < X < 70)
- Netflix (DVD rental and movie streaming) reportedly generates X percent of its sales through recommendation lists (30 < X < 70)

There must be some value in it: see the recommendation of groups, jobs, or people on LinkedIn

- Friend recommendation and ad personalization on Facebook
- Song recommendation and news recommendation services (one reported a 37% click-through rate)

Academia: a few studies exist that show the effect (increased sales, changes in sales behavior)

Problem domain. Recommender systems (RS) help to match users with items:
- Ease information overload
- Sales assistance (guidance, advisory, persuasion, ...)

"RS are software agents that elicit the interests and preferences of individual consumers [...] and make recommendations accordingly. They have the potential to support and improve the quality of the decisions consumers make while searching for and selecting products online." [Xiao & Benbasat, MISQ, 2007]

Different system designs / paradigms, based on:
- Availability of exploitable data
- Implicit and explicit user feedback
- Domain characteristics

Recommender systems seen as a function [Adomavicius & Tuzhilin, IEEE TKDE, 2005]:
- Given: a user model (e.g. ratings, preferences, demographics, situational context) and items (with or without descriptions of item characteristics)
- Find: a relevance score, used for ranking
- Finally: recommend the items that are assumed to be relevant

But remember that relevance might be context-dependent, and characteristics of the list itself might be important (e.g. diversity).

Paradigms of recommender systems
- Recommender systems reduce information overload by estimating relevance
- Personalized recommendations
- Collaborative: "Tell me what's popular among my peers"

- Content-based: "Show me more of what I've liked"
- Knowledge-based: "Tell me what fits based on my needs"
- Hybrid: combinations of various inputs and/or composition of different mechanisms

Recommender systems: basic techniques (pros and cons)

Collaborative
- Pros: no knowledge-engineering effort, serendipity of results, learns market segments
- Cons: requires some form of rating feedback, cold start for new users and new items

Content-based
- Pros: no community required, comparison between items possible
- Cons: content descriptions necessary, cold start for new users, no surprises

Knowledge-based
- Pros: deterministic recommendations, assured quality, no cold start, can resemble a sales dialogue
- Cons: knowledge-engineering effort to bootstrap, basically static, does not react to short-term trends

Collaborative Filtering

Collaborative Filtering (CF)
- The most prominent approach to generating recommendations
- Used by large, commercial e-commerce sites
- Well understood; various algorithms and variations exist
- Applicable in many domains (books, movies, DVDs, ...)

Approach: use the "wisdom of the crowd" to recommend items. Basic assumption and idea:
- Users rate catalog items (implicitly or explicitly)
- Customers who had similar tastes in the past will have similar tastes in the future

1992: "Using collaborative filtering to weave an information tapestry," D. Goldberg et al., Communications of the ACM

Basic idea: "Eager readers read all docs immediately, casual readers wait for the eager readers to annotate."
- An experimental mail system at Xerox PARC that records users' reactions when reading mail
- Users are provided with personalized mailing-list filters instead of being forced to subscribe
- Content-based filters (topics, from/to/subject) and collaborative filters, e.g. "mails to [all] which were replied to by [John Doe] and which received positive ratings from [X] and [Y]"

1994: "GroupLens: an open architecture for collaborative filtering of netnews," P. Resnick et al., ACM CSCW
- The Tapestry system does not aggregate ratings and requires users to know each other

Basic idea: "People who agreed in their subjective evaluations in the past are likely to agree again in the future." Builds on newsgroup browsers with rating functionality.

User-based nearest-neighbor collaborative filtering (1)

The basic technique: given an "active user" (Alice) and an item i not yet seen by Alice, the goal is to estimate Alice's rating for this item, e.g. by:
- finding a set of users (peers) who liked the same items as Alice in the past and who have rated item i
- using, e.g., the average of their ratings to predict whether Alice will like item i
- doing this for all items Alice has not seen and recommending the best-rated ones

        Item1  Item2  Item3  Item4  Item5
Alice     5      3      4      4      ?
User1     3      1      2      3      3
User2     4      3      4      3      5
User3     3      3      1      5      4
User4     1      5      5      2      1

User-based nearest-neighbor collaborative filtering (2)

Some first questions (for the same rating matrix as above):
- How do we measure similarity?
- How many neighbors should we consider?
- How do we generate a prediction from the neighbors' ratings?

Measuring user similarity. A popular similarity measure in user-based CF is the Pearson correlation:

sim(a, b) = Σ_{p∈P} (r_{a,p} − r̄_a)(r_{b,p} − r̄_b) / ( sqrt(Σ_{p∈P} (r_{a,p} − r̄_a)²) · sqrt(Σ_{p∈P} (r_{b,p} − r̄_b)²) )

where a, b are users, r_{a,p} is the rating of user a for item p, P is the set of items rated by both a and b, and r̄_a, r̄_b are the users' average ratings. Possible similarity values are between −1 and 1.

        Item1  Item2  Item3  Item4  Item5   sim(Alice, ·)
Alice     5      3      4      4      ?
User1     3      1      2      3      3     sim = 0.85
User2     4      3      4      3      5     sim = 0.70
User3     3      3      1      5      4
User4     1      5      5      2      1     sim = −0.79

Pearson correlation

- Takes differences in rating behavior into account

[Chart: ratings of Alice, User1, and User4 for Item1–Item4, illustrating their different rating levels]

- Works well in the usual domains, compared with alternative measures such as cosine similarity

Making predictions. A common prediction function:
- Calculate whether the neighbors' ratings for the unseen item i are higher or lower than their average
- Combine the rating differences, using the similarity as a weight
- Add/subtract this weighted bias to/from the active user's average and use the result as the prediction:

pred(a, i) = r̄_a + Σ_{b∈N} sim(a, b) · (r_{b,i} − r̄_b) / Σ_{b∈N} |sim(a, b)|

Making recommendations
- Making predictions is typically not the ultimate goal
- The usual approach (in academia): rank items based on their predicted ratings
- However, this might lead to the inclusion of (only) niche items
- In practice, item popularity is also taken into account

Approaches:
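As a minimal sketch (function names and the choice of User1 and User2 as neighbors are illustrative), the Pearson similarity and the prediction function described above can be written over the example rating matrix:

```python
import math

# Ratings from the running example; Alice's rating for Item5 is the one to predict.
ratings = {
    "Alice": {"Item1": 5, "Item2": 3, "Item3": 4, "Item4": 4},
    "User1": {"Item1": 3, "Item2": 1, "Item3": 2, "Item4": 3, "Item5": 3},
    "User2": {"Item1": 4, "Item2": 3, "Item3": 4, "Item4": 3, "Item5": 5},
    "User3": {"Item1": 3, "Item2": 3, "Item3": 1, "Item4": 5, "Item5": 4},
    "User4": {"Item1": 1, "Item2": 5, "Item3": 5, "Item4": 2, "Item5": 1},
}

def pearson(a, b):
    """Pearson correlation over the items rated by both users."""
    common = ratings[a].keys() & ratings[b].keys()
    if not common:
        return 0.0
    mean_a = sum(ratings[a][p] for p in common) / len(common)
    mean_b = sum(ratings[b][p] for p in common) / len(common)
    num = sum((ratings[a][p] - mean_a) * (ratings[b][p] - mean_b) for p in common)
    den_a = math.sqrt(sum((ratings[a][p] - mean_a) ** 2 for p in common))
    den_b = math.sqrt(sum((ratings[b][p] - mean_b) ** 2 for p in common))
    return num / (den_a * den_b) if den_a and den_b else 0.0

def predict(user, item, neighbors):
    """Neighbors' mean-centered ratings, weighted by similarity,
    added to the active user's average rating."""
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for n in neighbors:
        if item in ratings[n]:
            sim = pearson(user, n)
            mean_n = sum(ratings[n].values()) / len(ratings[n])
            num += sim * (ratings[n][item] - mean_n)
            den += abs(sim)
    return mean_u + num / den if den else mean_u

print(round(pearson("Alice", "User1"), 2))                     # 0.85
print(round(pearson("Alice", "User2"), 2))                     # 0.71 (the slide rounds to 0.70)
print(round(pearson("Alice", "User4"), 2))                     # -0.79
print(round(predict("Alice", "Item5", ["User1", "User2"]), 2)) # 4.87
```

With the two most similar neighbors, the predicted rating for Alice on Item5 comes out at about 4.87, i.e. Item5 would be recommended.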

- "Learning to rank": optimize according to a given rank evaluation metric (see later)

Item-based collaborative filtering

Basic idea: use the similarity between items (not users) to make predictions. Example:
- Look for items that are similar to Item5
- Take Alice's ratings for these items to predict the rating for Item5

        Item1  Item2  Item3  Item4  Item5
Alice     5      3      4      4      ?
User1     3      1      2      3      3
User2     4      3      4      3      5
User3     3      3      1      5      4
User4     1      5      5      2      1

The cosine similarity measure
- Produces better results in item-to-item filtering for some datasets; no consistent picture in the literature
- Ratings are seen as vectors in an n-dimensional space
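A sketch of plain and adjusted cosine similarity between two item columns, using the four users who rated both items (the item indices and the comparison of Item1 with Item5 are illustrative):

```python
import math

# Rating matrix rows for User1-User4 over Item1-Item5 (Alice is left out,
# since she has not rated Item5).
ratings = [
    [3, 1, 2, 3, 3],  # User1
    [4, 3, 4, 3, 5],  # User2
    [3, 3, 1, 5, 4],  # User3
    [1, 5, 5, 2, 1],  # User4
]

def cosine_sim(item_a, item_b):
    """Plain cosine similarity between two item columns."""
    va = [row[item_a] for row in ratings]
    vb = [row[item_b] for row in ratings]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

def adjusted_cosine_sim(item_a, item_b):
    """Adjusted cosine: subtract each user's mean rating first,
    so differences in individual rating scales cancel out."""
    means = [sum(row) / len(row) for row in ratings]
    va = [row[item_a] - m for row, m in zip(ratings, means)]
    vb = [row[item_b] - m for row, m in zip(ratings, means)]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb) if na and nb else 0.0

# Similarity of Item1 and Item5 (indices 0 and 4):
print(round(cosine_sim(0, 4), 2))           # 0.99
print(round(adjusted_cosine_sim(0, 4), 2))  # 0.8
```

Note how the plain cosine is close to 1 for almost any pair of all-positive rating vectors, while the adjusted variant discriminates more because each user's rating level is removed first.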

- Similarity is calculated based on the angle between the vectors
- Adjusted cosine similarity: take average user ratings into account and transform the original ratings; U is the set of users who have rated both items a and b

Pre-processing for item-based filtering
- Item-based filtering does not by itself solve the scalability problem
- A pre-processing approach (2003): calculate all pairwise item similarities in advance
- The neighborhood used at run time is typically rather small, because only items the user has rated are taken into account
- Item similarities are supposed to be more stable than user similarities
- Memory requirements: up to N² pairwise similarities to be memorized (N = number of items) in theory

- In practice, this is significantly lower (items with no co-ratings)
- Further reductions are possible: a minimum threshold for co-ratings (items rated by at least n users), or limiting the size of the neighborhood (which might affect recommendation accuracy)

More on ratings
- Pure CF-based systems rely only on the rating matrix
- Explicit ratings are most commonly used (1-to-5 or 1-to-7 Likert response scales)
- Research topics: the "optimal" granularity of the scale (there is some indication that a 10-point scale is better accepted in the movie domain); multidimensional ratings (multiple ratings per movie)
- Challenge: users are not always willing to rate many items, leading to sparse rating matrices. How can users be stimulated to rate more items?

Implicit ratings
- Clicks, page views, time spent on some page, demo downloads
- Can be used in addition to explicit ratings; raises the question of whether the interpretation is correct

Data sparsity problems
- Cold-start problem: How do we recommend new items? What do we recommend to new users?
- Straightforward approaches: ask/force users to rate a set of items, or use another method (e.g. content-based, demographic, or simply non-personalized) in the initial phase
- Alternatives: use better algorithms (beyond nearest-neighbor approaches). Example: in nearest-neighbor approaches, the set of sufficiently similar neighbors might be too small to make good predictions, so assume "transitivity" of neighborhoods

Example algorithms for sparse datasets: recursive CF
- Assume there is a very close neighbor n of u who, however, has not yet rated the target item i
- Idea: apply the CF method recursively and predict a rating for item i for that neighbor; use this predicted rating instead of the rating of a more distant direct neighbor

        Item1  Item2  Item3  Item4  Item5
Alice     5      3      4      4      ?
User1     3      1      2      3      ?     sim = 0.85
User2     4      3      4      3      5
User3     3      3      1      5      4
User4     1      5      5      2      1

Predict a rating for User1 first, then use it when predicting for Alice.
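A sketch of the recursive idea, under the assumption (not from the slides) that User1's missing rating is first predicted from his positively correlated neighbors and then treated as if it were observed:

```python
import math

# Same running example, but here User1's rating for Item5 is also missing.
ratings = {
    "Alice": {"Item1": 5, "Item2": 3, "Item3": 4, "Item4": 4},
    "User1": {"Item1": 3, "Item2": 1, "Item3": 2, "Item4": 3},
    "User2": {"Item1": 4, "Item2": 3, "Item3": 4, "Item4": 3, "Item5": 5},
    "User3": {"Item1": 3, "Item2": 3, "Item3": 1, "Item4": 5, "Item5": 4},
    "User4": {"Item1": 1, "Item2": 5, "Item3": 5, "Item4": 2, "Item5": 1},
}

def pearson(a, b):
    """Pearson correlation over the items rated by both users."""
    common = ratings[a].keys() & ratings[b].keys()
    ma = sum(ratings[a][p] for p in common) / len(common)
    mb = sum(ratings[b][p] for p in common) / len(common)
    num = sum((ratings[a][p] - ma) * (ratings[b][p] - mb) for p in common)
    da = math.sqrt(sum((ratings[a][p] - ma) ** 2 for p in common))
    db = math.sqrt(sum((ratings[b][p] - mb) ** 2 for p in common))
    return num / (da * db) if da and db else 0.0

def predict(user, item, neighbors):
    """Similarity-weighted, mean-centered neighbor ratings added to the user's mean."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for n in neighbors:
        if item in ratings[n]:
            s = pearson(user, n)
            mn = sum(ratings[n].values()) / len(ratings[n])
            num += s * (ratings[n][item] - mn)
            den += abs(s)
    return mu + num / den if den else mu

# Step 1: recursively predict the close neighbor's missing rating ...
ratings["User1"]["Item5"] = predict("User1", "Item5", ["User2", "User3"])
# Step 2: ... then use it as if it were a real rating when predicting for Alice.
alice_item5 = predict("Alice", "Item5", ["User1", "User2"])
print(round(alice_item5, 2))  # 4.97
```

With these illustrative choices, User1's rating is estimated at about 3.2 and the final prediction for Alice lands at about 4.97; the point is only the two-stage structure, not the exact numbers.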

A picture says more:

[Figure: users Alice, Bob, Mary, and Sue plotted in a two-dimensional latent space; both axes range from −1 to 1]

Matrix factorization: M_k = U_k × Σ_k × V_k^T

SVD decomposition of the rating matrix into U_k, Σ_k, and V_k^T (here k = 2):

U_k:          Dim1   Dim2
  Alice       0.47  −0.30
  Bob        −0.44   0.23
  Mary        0.70  −0.06
  Sue         0.31   0.93

V_k^T:
  Dim1:  −0.44  −0.57   0.06   0.38   0.57
  Dim2:   0.58  −0.66   0.26   0.18  −0.36

Σ_k:          Dim1   Dim2
  Dim1        5.63   0
  Dim2        0      3.23

Prediction: r̂_ui = r̄_u + U_k(Alice) · Σ_k · V_k^T(EPL) = 3 + 0.84 = 3.84
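A sketch of SVD-based rating prediction with NumPy. The small matrix, the per-user mean-centering, and the choice k = 2 are illustrative assumptions, not the slide's exact numbers:

```python
import numpy as np

# Rating matrix (users x items) from the running example, without Alice's unknown entry.
R = np.array([
    [3, 1, 2, 3, 3],
    [4, 3, 4, 3, 5],
    [3, 3, 1, 5, 4],
    [1, 5, 5, 2, 1],
], dtype=float)

user_means = R.mean(axis=1, keepdims=True)
M = R - user_means  # center each user's ratings around their mean

# Full SVD, then keep only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted ratings: user mean plus the rank-k reconstruction of the centered matrix.
predictions = user_means + M_k
print(np.round(predictions, 2))
```

By the Eckart-Young theorem, the rank-k reconstruction is the best rank-k approximation of the centered matrix, so increasing k can only decrease the reconstruction error; in practice k is kept small to generalize from sparse data.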

Association rule mining
- Commonly used for shopping-behavior analysis; aims at detecting rules such as "if a customer purchases baby food, then he also buys diapers in 70% of the cases"
- Association rule mining algorithms can detect rules of the form X => Y (e.g., baby food => diapers) from a set of sales transactions D = {t1, t2, ..., tn}
- Measures of quality: support and confidence

Summarizing recent methods
- Recommendation is concerned with learning from noisy observations (x, y): a function f̂ has to be determined such that the squared error Σ (y − f̂(x))² is minimal
- A variety of learning strategies have been applied to estimate f(x): non-parametric neighborhood models, matrix factorization models, SVMs, neural networks, Bayesian networks, ...

Collaborative filtering issues
- Pros: well understood, works well in some domains, no knowledge engineering required

- Cons: requires a user community, sparsity problems, no integration of other knowledge sources, no explanation of results

What is the best CF method?
- In which situation, and in which domain? Inconsistent findings; always the same domains and data sets; differences between methods are often very small (1/100)
- How do we evaluate prediction quality? MAE / RMSE: what does an MAE of 0.7 actually mean?
- Serendipity: not yet fully understood
- What about multi-dimensional ratings?

Evaluation of Recommender Systems

Recommender systems in e-commerce
- One recommender systems research question: What should be in that list?
- Another question, both in research and practice: How do we know that these are good recommendations?
- This might lead to: What is a good recommendation? What is a good recommendation strategy? What is a good recommendation strategy for my business?

("We hope you will also buy these; they have been in stock for quite a while now.")

What is a good recommendation? What are the measures in practice?
- Total sales numbers
- Promotion of certain items
- Click-through rates

- Interactivity on the platform
- Customer return rates
- Customer satisfaction and loyalty

Purpose and success criteria (1)
- Different perspectives/aspects, depending on domain and purpose; no holistic evaluation scenario exists

Retrieval perspective

- Reduce search costs
- Provide "correct" proposals
- Assumption: users know in advance what they want

Recommendation perspective
- Serendipity: identify items from the long tail that users did not know existed

When does an RS do its job well?
- When it recommends items from the long tail

- "Recommend widely unknown items that users might actually like!"
- 20% of items accumulate 74% of all positive ratings

Purpose and success criteria (2)

Prediction perspective
- Predict to what degree users like an item
- The most popular evaluation scenario in research

Interaction perspective
- Give users a "good feeling"
- Educate users about the product domain

- Convince/persuade users; explain

Finally, the conversion perspective (commercial situations)
- Increase "hit," "click-through," and "lookers to bookers" rates
- Optimize sales margins and profit

How do we as researchers know?
- Tests with real users: A/B tests; example measures: sales increase, click-through rates
- Laboratory studies: controlled experiments; example measures: satisfaction with the system (questionnaires)

- Offline experiments: based on historical data; example measures: prediction accuracy, coverage

Empirical research. Characterizing dimensions:
- Subject: Who is in the focus of research? (online customers, students, historical online sessions, computers, ...)
- Research method: What research methods are applied? (experiments, quasi-experiments, non-experimental research)
- Setting: In which setting does the research take place? (lab, real-world scenarios)

Research methods
- Experimental vs. non-experimental (observational) research methods
- Experiment (test, trial): "An experiment is a study in which at least one variable is manipulated and units are randomly assigned to different levels or categories of the manipulated variable(s)."
- Units: users, historic sessions, ...
- Manipulated variable: type of RS, groups of recommended items, explanation strategies
- Categories of the manipulated variable(s): content-based RS, collaborative RS

Experiment designs

Evaluation in information retrieval (IR)
- Recommendation is viewed as an information retrieval task: retrieve (recommend) all items which are predicted to be "good" or "relevant"
- Common protocol: hide some items with known ground truth, rank items or predict ratings, count, cross-validate
- Ground truth established by human domain experts

Prediction vs. reality:

                Actually Good           Actually Bad
Rated Good      True Positive (tp)      False Positive (fp)
Rated Bad       False Negative (fn)     True Negative (tn)

Metrics: precision and recall
- Precision: a measure of exactness; the fraction of relevant items among all items retrieved, tp / (tp + fp). E.g. the proportion of recommended movies that are actually good
- Recall: a measure of completeness; the fraction of relevant items retrieved out of all relevant items, tp / (tp + fn). E.g. the proportion of all good movies that are recommended

Dilemma of IR measures in RS
- IR measures are frequently applied; however, the ground truth for most items is actually unknown
- What is a relevant item?
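The precision and recall definitions above can be sketched as follows; the top-N list and the set of relevant items are hypothetical (the item IDs echo the rank-score example that follows):

```python
def precision_recall(recommended, relevant):
    """Precision = |recommended ∩ relevant| / |recommended|,
    Recall    = |recommended ∩ relevant| / |relevant|."""
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["Item237", "Item899", "Item345"]  # top-3 recommendation list (illustrative)
relevant = ["Item237", "Item187"]                # ground-truth "good" items (illustrative)
p, r = precision_recall(recommended, relevant)
print(p, r)  # one hit out of three recommended and two relevant items
```

Here one of the three recommended items is relevant, giving precision 1/3, and one of the two relevant items was recommended, giving recall 1/2.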

- Different ways of measuring precision are possible
- Results from offline experimentation may have limited predictive power for online user behavior

Metrics: rank score (position matters). For a user:

Recommended (predicted as good)    Actually good
Item 237 (hit)                     Item 237
Item 899                           Item 187
Item 345

- Rank score extends recall and precision to take the positions of correct items in a ranked list into account
- Particularly important in recommender systems, as lower-ranked items may be overlooked by users
- Learning-to-rank: optimize models for such measures (e.g., AUC)

Accuracy measures
- Datasets with items rated by users: MovieLens datasets (100K-10M ratings), Netflix (100M ratings)
- Historic user ratings constitute the ground truth
- Metrics measure the error rate

- Mean Absolute Error (MAE) computes the average deviation between predicted and actual ratings
- Root Mean Square Error (RMSE) is similar to MAE, but places more emphasis on larger deviations

Offline experimentation example: the Netflix competition
- Web-based movie rental; prize of $1,000,000 for an accuracy improvement (RMSE) of 10% over Netflix's own Cinematch system
- Historical dataset: ~480K users rated ~18K movies on a scale of 1 to 5 (~100M ratings); the last 9 ratings per user were withheld
- Probe set: for the teams' own evaluation
- Quiz set: evaluates team submissions for the leaderboard
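The two error measures just defined can be sketched directly (the rating values are illustrative); note how the single large error of 1.5 dominates RMSE much more than MAE:

```python
import math

def mae(predicted, actual):
    """Mean absolute error between predicted and actual ratings."""
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    """Root mean square error; larger deviations are penalized more."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, actual)) / len(actual))

predicted = [4.9, 3.2, 4.0, 2.5]  # illustrative predictions
actual = [5, 3, 4, 4]             # illustrative true ratings
print(round(mae(predicted, actual), 3))   # 0.45
print(round(rmse(predicted, actual), 3))  # 0.758
```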

- Test set: used by Netflix to determine the winner
- Today, rating prediction is only seen as an additional input into the recommendation process

An imperfect world
- Offline evaluation is the cheapest variant; it still gives us valuable insights and lets us compare our results (in theory)
- Dangers and trends: domination of accuracy measures; focus on a small set of domains (40% of CS studies are on movies)
- Alternative and complementary measures:

- Diversity, coverage, novelty, familiarity, serendipity, popularity, concentration effects (long tail)

Online experimentation example
- Effectiveness of different algorithms for recommending cell-phone games [Jannach, Hegelich 09]
- Involved 150,000 users on a commercial mobile internet portal
- Comparison of recommender methods

Details and results. Recommender variants included:

- Item-based collaborative filtering
- SlopeOne (also collaborative filtering)
- Content-based recommendation
- Hybrid recommendation
- Top-rated items (non-personalized)
- Top sellers (non-personalized)

Findings:
- Personalized methods increased sales by up to 3.6% compared to non-personalized ones
- The choice of recommendation algorithm depends on the user situation (e.g. avoid content-based RS in post-sales situations)

Other approaches
- Two additional major paradigms of recommender systems: content-based and knowledge-based
- Hybridization: take the best of different paradigms
- Advanced topics: recommender systems are about human decision making

Content-based recommendation
- Collaborative filtering does NOT require any information about the items
- However, it might be reasonable to exploit such information, e.g. to recommend fantasy novels to people who liked fantasy novels in the past

What do we need?

- Some information about the available items, such as the genre (the "content")
- Some sort of user profile describing what the user likes (the preferences)

The task:
- Learn user preferences
- Locate/recommend items that are "similar" to the user preferences

Content-based: "Show me more of what I've liked"

What is the "content"?
- The genre is actually not part of the content of a book
- Most CB-recommendation methods originate from the Information Retrieval (IR) field: item descriptions are usually automatically extracted (important words), and the goal is to find and rank interesting text documents (news articles, web pages)

Here: classical IR-based methods based on keywords
- No expert recommendation knowledge involved
- The user profile (preferences) is learned rather than explicitly elicited

Content representation and item similarities
- Simple approach: compute the similarity of an unseen item with the user profile based on the keyword overlap, e.g. using the Dice coefficient:

sim(b_i, b_j) = 2 · |keywords(b_i) ∩ keywords(b_j)| / (|keywords(b_i)| + |keywords(b_j)|)

Term Frequency - Inverse Document Frequency (TF-IDF)
- The simple keyword representation has its problems, in particular when keywords are automatically extracted, because:

- Not every word has the same importance
- Longer documents have a higher chance of overlapping with the user profile

Standard measure: TF-IDF
- Encodes text documents as weighted term vectors
- TF measures how often a term appears (its density in a document), assuming that important terms appear more often; normalization is needed to take document length into account
- IDF aims to reduce the weight of terms that appear in all documents

TF-IDF computes the overall importance of keywords. Given a keyword i and a document j:

TF-IDF(i, j) = TF(i, j) · IDF(i)

Term frequency (TF):
- Let freq(i, j) be the number of occurrences of keyword i in document j
- Let maxOthers(i, j) denote the highest number of occurrences of any other keyword of j
- TF(i, j) = freq(i, j) / maxOthers(i, j)

Inverse document frequency (IDF):
- N: number of all recommendable documents
- n(i): number of documents in which keyword i appears
- IDF(i) = log(N / n(i))

[Example TF-IDF representation figure omitted]

More on the vector space model
- Vectors are usually long and sparse

Improvements:

- Remove stop words ("a", "the", ...)
- Use stemming
- Size cut-offs (only use the top n most representative words, e.g. around 100)
- Use additional knowledge; use more elaborate methods for feature selection
- Detect phrases as terms (such as "United Nations")

Limitations
- Semantic meaning remains unknown; example: usage of a word in a negative context, "there is nothing on the menu that a vegetarian would like..."

The usual similarity metric to compare vectors is cosine similarity (the angle between them).
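The TF-IDF weighting defined above (TF normalized by the most frequent other keyword, IDF as log(N / n(i))) can be sketched as follows; the toy documents are invented for illustration:

```python
import math
from collections import Counter

# Three toy documents (illustrative).
docs = [
    "the hobbit is a fantasy novel",
    "a classic fantasy novel with dragons",
    "a cookbook with vegetarian recipes",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tf(term, doc_tokens):
    """freq(i, j) normalized by the highest count of any other keyword in j."""
    counts = Counter(doc_tokens)
    others = [c for t, c in counts.items() if t != term]
    return counts[term] / max(others) if others else float(counts[term] > 0)

def idf(term):
    """log(N / n(i)); terms appearing in every document get weight 0."""
    n_i = sum(1 for d in tokenized if term in d)
    return math.log(N / n_i) if n_i else 0.0

def tf_idf(term, doc_tokens):
    return tf(term, doc_tokens) * idf(term)

# "fantasy" appears in 2 of 3 documents, "cookbook" in only 1 of 3:
print(round(tf_idf("fantasy", tokenized[0]), 3))   # 0.405
print(round(tf_idf("cookbook", tokenized[2]), 3))  # 1.099
print(tf_idf("a", tokenized[0]))                   # 0.0 (appears in all documents)
```

The rarer term "cookbook" gets a higher weight than "fantasy", and the stop word "a" is zeroed out by the IDF factor, which is exactly the intended effect.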

Recommending items
- Simple method: nearest neighbors. Given a set of documents D already rated by the user (like/dislike), find the n nearest neighbors of a not-yet-seen item i in D and use these ratings to predict a rating/vote for i (variations: neighborhood size, lower/upper similarity thresholds)
- Query-based retrieval: Rocchio's method (the SMART system). Users are allowed to rate retrieved documents as relevant/irrelevant (feedback); the system then learns a prototype of relevant/irrelevant documents, and queries are automatically extended with additional terms/weights from relevant documents

Limitations of content-based recommendation methods
- Keywords alone may not be sufficient to judge the quality/relevance of a document or web page

- (Up-to-dateness, usability, aesthetics, writing style)
- Content may also be limited / too short, or not automatically extractable (multimedia)
- A ramp-up phase is required: some training data is still needed (Web 2.0: use other sources to learn the user preferences)
- Overspecialization: algorithms tend to propose "more of the same," e.g. overly similar news items

Knowledge-Based Recommender Systems

Why do we need knowledge-based recommendation?
- Products with a low number of available ratings

- The time span plays an important role (five-year-old ratings for computers; user lifestyle or family situation changes)
- Customers want to define their requirements explicitly ("the color of the car should be black")

Knowledge-based recommendation: "Tell me what fits based on my needs"

Knowledge-based recommendation I
- Explicit domain knowledge

- Sales knowledge elicited from domain experts
- The system mimics the behavior of an experienced sales assistant (best-practice sales interactions)
- Can guarantee correct recommendations (determinism) with respect to the expert knowledge
- Conversational interaction strategy (as opposed to one-shot interaction): elicitation of user requirements, transfer of product knowledge (educating users)

Knowledge-based recommendation II
- Different views on knowledge:

- Similarity functions: determine the matching degree between query and item (case-based RS)
- Utility-based RS, e.g. MAUT (multi-attribute utility theory)
- Logic-based knowledge descriptions (from a domain expert), e.g. hard and soft constraints

Ask the user: computation of minimal revisions of requirements
- "Do you want to relax your brand preference? Accept Panasonic instead of the Canon brand?"
- "Or is photographing landscapes with a wide-angle lens and a maximum cost less important?"

- (Lower focal length > 28 mm and price > 350 EUR)
- Optionally guided by some predefined weights or past community behavior
- Be aware of possible revisions (e.g. age, family status, ...)

Constraint-based recommendation III: more variants of the recommendation task
- Customers may not know what they are seeking
- Find "diverse" sets of items: requires a notion of similarity/dissimilarity; the idea is that users navigate a product space, and if recommendations are more diverse, users can navigate via critiques on the recommended "entry points" more efficiently (fewer interaction steps)
- Bundling of recommendations: find item bundles that match together according to some knowledge

- E.g. travel packages, skin-care treatments, or financial portfolios: an RS per item category, with a CSP restricting the configuration of bundles

Conversational strategies
- A process consisting of multiple conversational moves; resembles natural sales interactions
- Not all user requirements are known beforehand; customers are rarely satisfied with the initial recommendations

Different styles of preference elicitation:

- Free-text query interface
- Asking about technical/generic properties
- Images / inspiration
- Proposing and critiquing

Limitations of knowledge-based recommendation methods
- Cost of knowledge acquisition, from domain experts and from users; remedy: exploit web resources
- Accuracy of preference models: very fine-grained preference models require many interaction cycles with the user, or sufficiently detailed data about the user; remedy: use collaborative filtering to estimate the user's preferences
- However, preference models may be unstable

- E.g. asymmetric dominance effects and decoy items

Hybridization Strategies

Hybrid recommender systems
- All three base techniques are naturally incorporated by a good sales assistant (at different stages of the sales act), but each has its shortcomings
- The idea: cross two (or more) species/implementations; hybrida [lat.] denotes an object made by combining two different elements
- Avoid some of the shortcomings; reach desirable properties not present in the individual approaches

Different hybridization designs:
- Monolithic: exploiting different features

- Parallel: use of several systems
- Pipelined: invocation of different systems

Advanced topics I: Explanations in recommender systems

Motivation: "The digital camera Profishot is a must-buy for you because ..."
- Why should recommender systems deal with explanations at all?
- The answer is related to the two parties providing and receiving recommendations: a selling agent may be interested in promoting particular products, while a buying agent is concerned about making the right buying decision

Explanations are additional information that explain the system's output, following some objectives. Objectives of explanations:
- Transparency
- Efficiency
- Validity
- Satisfaction
- Trustworthiness

- Relevance
- Persuasiveness
- Comprehensibility
- Effectiveness
- Education

Explanations in general
- "How?" and "Why?" explanations in expert systems: a form of abductive reasoning
- Given that item i is recommended by method RS, find the part of the knowledge from which this recommendation follows
- Principle of succinctness:

- Find the smallest such subset, i.e. one from which the recommendation still follows but no proper subset of which does
- Additional filtering may still apply: some parts relevant for the deduction might be obvious to humans [Friedrich & Zanker, AIMag, 2011]

Taxonomy for generating explanations in RS. Major design dimensions of current explanation components:
- Category of reasoning model for generating explanations: white box vs. black box
- RS paradigm for generating explanations: determines the exploitable semantic relations

- Information categories: similarity between items, similarity between users, tags (tag relevance for an item, tag preference of a user)

Explanations in recommender systems: summary
- There are many types of explanations and various goals that an explanation can achieve
- Which type of explanation can be generated depends greatly on the recommender approach applied
- Explanations may be used to shape the wishes and desires of customers,

but they are a double-edged sword: on the one hand, explanations can help the customer make wise buying decisions; on the other hand, they can be abused to push a customer in a direction that is advantageous solely for the seller. As a result, a deep understanding of explanations and their effects on customers is of great interest.

Personality
- Different personality properties pose specific requirements on the design of recommender user interfaces
- Some personality traits are more susceptible to heuristic simplifications
- Provide various interfaces

Personality traits (theory):

- Internal vs. external locus of control (LOC): externally influenced users need more guidance; internally controlled users want to actively and selectively search for additional information
- Need for closure: describes the individual's pursuit of making a decision as soon as possible
- Maximizer vs. satisficer: maximizers try to find an optimal solution; satisficers search for solutions that fulfill their basic requirements

Summary of online consumer decision making
- Recommender systems are persuasive systems
- Estimated utility is often not a good model of human decision making: people use several simplifying heuristics
- Bounded rationality and the accuracy-effort trade-off make users susceptible to

decision biases (decoy effects, position effects, framing, priming, defaults, ...)
- Different personality characteristics require different recommender interaction methods (maximizer/satisficer, need for closure, trust, locus of control)

Outlook. Additional topics covered by the book "Recommender Systems: An Introduction":
- Case study on the Mobile Internet
- Attacks on CF recommender systems
- Recommender systems in the next-generation Web (Social Web, Semantic Web)
- More on consumer decision making
- Recommending in ubiquitous environments

Current and emerging topics in RS:
- Social Web recommendations

- Context-aware recommendation
- Learning-to-rank

SEARCH ENGINES VS. RECOMMENDER SYSTEMS

Search engines:
- Goal: answer users' ad hoc queries
- Input: a user's ad hoc need, defined as a query
- Output: ranked items relevant to the user's need (based on her preferences?)
- Methods: mainly IR-based methods

Recommender systems:
- Goal: recommend services or items to the user
- Input: user preferences, defined as a profile
- Output: ranked items based on her preferences
- Methods: a variety of methods: IR, ML, user modeling (UM)

The two are starting to combine.

Open-source recommender systems:
- EasyRec
- RecLab
- MyMediaLite

LensKit Open Source Recommender Systems What we learned Recommendation systems use similar methods that we covered for information retrieval systems Vector representations Ranking by similarity measures such as cosine Text based methods such as TFIDF References [Adomavicius & Tuzhilin, IEEE TKDE, 2005] Adomavicius G., Tuzhilin, A. Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions, IEEE TKDE, 17(6), 2005, pp.734-749. [ALP03] Ariely, D., Loewenstein, G., Prelec, D. (2003) Coherent Arbitrainess: Stable Demand Curves Without Stable Preferences. The Quarterly Journal of Economics, February 2003, 73-105. [BKW+10] Bollen, D., Knijnenburg, B., Willemsen, M., Graus, M. (2010) Understanding Choice Overload in Recommender Systems. ACM

Recommender Systems, 63-70. [Brynjolfsson et al., Mgt. Science, 2003] Brynjolfsson, E., Hu, Y., Smith, M.: Consumer Surplus in the Digital Economy: Estimating the Value of Increased Product Variety at Online Booksellers, Management Science, Vol 49(11), 2003, pp. 1580-1596. [BS97] Balabanovic, M., Shoham, Y. (1997) Fab: content-based, collaborative recommendation, Communications of the ACM, Vol. 40(3), pp. 66-72. [FFG+07] Felfernig, A. , Friedrich, G., Gula, B. et al. (2007) Persuasive recommendation: serial position effects in knowledge-based recommender systems. 2nd international conference on Persuasive technology, Springer, 283294. [Friedrich& Zanker, AIMag, 2011] Friedrich, G., Zanker, M.: A Taxonomy for Generating Explanations in Recommender Systems. AI Magazine, Vol. 32(3), 2011. [Jannach et al., CUP, 2010] Jannach D., Zanker M., Felfernig, A., Friedrich, G.: Recommender Systems an Introduction, Cambridge University Press, 2010. [Jannach et al., JITT, 2009] Jannach, D., Zanker, M., Fuchs, M.: Constraint-based recommendation in tourism: A multi-perspective case study, Information Technology & Tourism, Vol 11(2), pp. 139-156. [Jannach, Hegelich 09] Jannach, D., Hegelich K.: A case study on the effectiveness of recommendations in the Mobile Internet, ACM Conference on Recommender Systems, New York, 2009, pp. 205-208 [Ricci et al., JITT, 2009] Mahmood, T., Ricci, F., Venturini, A.: Improving Recommendation Effectiveness by Adapting the Dialogue Strategy in Online Travel Planning. Information Technology & Tourism, Vol 11(4), 2009, pp. 285-302.

[Teppan & Felfernig, CEC, 2009] Teppan, E., Felfernig, A.: Asymmetric Dominance- and Compromise Effects in the Financial Services Domain. IEEE International Conference on E-Commerce and Enterprise Computing, 2009, pp. 57-64.
[TF09] Teppan, E., Felfernig, A.: Impacts of decoy elements on result set evaluations in knowledge-based recommendation. Int. J. Adv. Intell. Paradigms, Vol 1, 2009, pp. 358-373.
[Xiao & Benbasat, MISQ, 2007] Xiao, B., Benbasat, I.: E-Commerce Product Recommendation Agents: Use, Characteristics, and Impact, MIS Quarterly, Vol 31(1), 2007, pp. 137-209.
[Zanker et al., EC-Web, 2006] Zanker, M., Bricman, M., Gordea, S., Jannach, D., Jessenitschnig, M.: Persuasive online-selling in quality & taste domains, 7th International Conference on Electronic Commerce and Web Technologies, 2006, pp. 51-60.
[Zanker, RecSys, 2008] Zanker, M.: A Collaborative Constraint-Based Meta-Level Recommender. ACM Conference on Recommender Systems, 2008, pp. 139-146.
[Zanker et al., UMUAI, 2009] Zanker, M., Jessenitschnig, M.: Case-studies on exploiting explicit customer requirements in recommender systems, User Modeling and User-Adapted Interaction, Springer, 2009, pp. 133-166.
[Zanker et al., JITT, 2009] Zanker, M., Jessenitschnig, M., Fuchs, M.: Automated Semantic Annotations of Tourism Resources Based on Geospatial Data, Information Technology & Tourism, Vol 11(4), 2009, pp. 341-354.
[Zanker et al., Constraints, 2010] Zanker, M., Jessenitschnig, M., Schmid, W.: Preference reasoning with soft constraints in constraint-based recommender systems. Constraints, Springer, Vol 15(4), 2010, pp. 574-595.
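As a recap of the IR machinery the summary slide mentions (vector representations, TF-IDF weighting, cosine-similarity ranking), here is a minimal, self-contained Python sketch of content-based recommendation. The item names, descriptions, and user profile are made up for illustration; a real system would use richer features and a library implementation.

```python
# Illustrative sketch: rank items for a user by cosine similarity
# between TF-IDF term vectors, the same machinery used in IR.
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each tokenized doc."""
    n = len(docs)
    # document frequency: in how many docs each term appears
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (cnt / len(doc)) * math.log(n / df[t])
                        for t, cnt in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical item descriptions and a user profile built from liked items.
items = {
    "m1": "space opera adventure".split(),
    "m2": "romantic comedy".split(),
    "m3": "space documentary science".split(),
}
profile = "space science documentary".split()

# Vectorize items and profile together so IDF is computed over one corpus.
vecs = tf_idf_vectors(list(items.values()) + [profile])
item_vecs = dict(zip(items, vecs))
profile_vec = vecs[-1]

# Recommend: items ordered by similarity to the user profile.
ranking = sorted(items, key=lambda i: cosine(item_vecs[i], profile_vec),
                 reverse=True)
print(ranking)  # -> ['m3', 'm1', 'm2']
```

Note the design choice of computing IDF over items and profile jointly; this keeps the example short, while production systems typically fit IDF on the item corpus alone and fold new profiles into that fixed vocabulary.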
