As users interact with online services (e.g., search engines, recommender systems, conversational agents), they leave behind fine-grained traces of their interaction patterns. The ability to understand user behavior, record and interpret user interaction signals, gauge user satisfaction, and incorporate user feedback gives online systems a vast treasure trove of insights for improvement and experimentation. More generally, the ability to learn from user interactions promises pathways for solving a number of problems and for improving user engagement and satisfaction.
Understanding and learning from user interactions involves a number of different aspects, from understanding user intent and tasks to developing user models and personalization services. A user's understanding of their need and of the overall task develops as they interact with the system. Supporting the various stages of the task involves many aspects of the system, e.g., interface features, presentation of information, and retrieval and ranking. Often, online systems are not specifically designed to support users in successfully accomplishing the tasks that motivated them to interact with the system in the first place. Beyond understanding user needs, learning from user interactions involves developing the right metrics and experimentation systems, understanding user interaction processes and their usage context, and designing interfaces capable of helping users.
Learning from user interactions becomes more important as novel modes of user interaction surface. There is a gradual shift toward searching for and presenting information in conversational form. Chatbots, personal assistants on our phones, and eyes-free devices are increasingly being used for different purposes, including information retrieval and exploration. With improved speech recognition and information retrieval systems, users are increasingly relying on such digital assistants to fulfill their information needs and complete their tasks. Such systems rely heavily on quickly learning from past interactions and incorporating implicit feedback signals into their models for rapid development.
Learning from User Interactions will be a highly interactive full-day workshop that provides a forum for academic and industrial researchers working at the intersection of user understanding, search tasks, conversational IR, and user interactions. Its purpose is to give people an opportunity to present new work and early results, brainstorm different use cases, share best practices, and discuss the main challenges facing this line of research.
Marc Najork
Research Engineering Director
Google
Abstract:
Recent years have seen great advances in using machine-learned ranking functions for relevance prediction. Any learning-to-rank framework requires abundant labeled training examples. In web search, labels may either be assigned explicitly (say, through crowd-sourced assessors) or based on implicit user feedback (say, result clicks). In personal (e.g. email) search, obtaining labels is more difficult: document-query pairs cannot be given to assessors due to privacy constraints, and clicks on query-document pairs are extremely sparse since each user has a separate corpus. Over the past several years, we have worked on techniques for training ranking functions on result clicks in an unbiased and scalable fashion. Our techniques are used in many Google products, such as Gmail, Inbox, Drive and Calendar. In this talk, I will present an overview of this line of research.
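As a generic illustration of the kind of de-biasing this abstract refers to (not the specific techniques used in the products named above), a common approach is to weight each click by the inverse of an estimated examination propensity, so that clicks on low-ranked results, which users rarely examine, count proportionally more. A minimal sketch, assuming a standard 1/position position-bias model; all function names here are hypothetical:

```python
import math

def examination_propensity(position, eta=1.0):
    """Position-bias model: estimated probability that a user examines
    the result shown at `position` (1-indexed), a standard 1/pos**eta curve."""
    return 1.0 / (position ** eta)

def ipw_pointwise_loss(clicks, scores, positions, eta=1.0):
    """Inverse-propensity-weighted logistic loss over clicked results.

    Each click's negative log-likelihood is divided by its examination
    propensity, so clicks at low ranks (which are rarely seen and hence
    under-represented) are up-weighted, de-biasing the training signal.
    """
    loss = 0.0
    for click, score, pos in zip(clicks, scores, positions):
        if click:
            propensity = examination_propensity(pos, eta)
            nll = -math.log(1.0 / (1.0 + math.exp(-score)))
            loss += nll / propensity
    return loss / max(1, sum(clicks))
```

Under this weighting, a click at position 5 contributes five times the loss of an identical click at position 1, since its examination propensity is 1/5.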
Speaker Bio:
Marc Najork is a Research Engineering Director at Google, where he manages a team working on a portfolio of machine learning problems. Before joining Google in 2014, Marc spent 12 years at Microsoft Research Silicon Valley and 8 years at Digital Equipment Corporation's Systems Research Center in Palo Alto. He received a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign. Marc has published about 60 papers and holds 25 issued patents. Much of his past research has focused on improving web search and on understanding the evolving nature of the web. He served as ACM TWEB editor-in-chief, CACM news board co-chair, WWW 2004 program co-chair, WSDM 2008 conference chair, and in numerous senior PC member roles.
Eugene Agichtein
Associate Professor
Emory University
Abstract:
Voice-based assistants such as Alexa, Cortana, and Siri have rekindled the dream of a true conversation with a computer search engine. Hence, finding and presenting useful answers for searchers' information needs is more important than ever. User interactions have previously been shown to be helpful for improving ranking, passage retrieval, and result summary generation. By viewing and interacting with the results and the underlying content, users implicitly and explicitly indicate the quality, interestingness, and relevance of content to their information needs. I will describe ongoing work on adapting these user modeling techniques to the conversational setting, where users expect more from search while doing less.
Speaker Bio:
Dr. Eugene Agichtein is an Associate Professor of Computer Science at Emory University, where he founded and leads the Intelligent Information Access Laboratory (IR Lab). Eugene's research spans the areas of information retrieval, natural language processing, data mining, and human-computer interaction. A large part of this work was done in collaboration with researchers and engineers at Microsoft, Google, and Yahoo (now Oath). Dr. Agichtein has co-authored over 100 publications, which have been recognized by multiple awards, including the A.P. Sloan fellowship and the 2013 Karen Spärck Jones Award from the British Computer Society. Eugene was Program Co-Chair of the WSDM 2012 and WWW 2017 conferences. More information is at http://www.mathcs.emory.edu/~eugene/.
Alex Beutel
Senior Research Scientist
Google Research
Abstract:
Neural networks have become increasingly successful throughout many machine learning applications and, in the past few years, have become the state-of-the-art approach for recommender systems. While DNNs have been well explored for applications in computer vision and natural language processing, their application to collaborative filtering-style recommendation creates new opportunities to understand user behavior and opens interesting questions about how to design these models. In this talk, I will present recent research on user and item dynamics modeled by RNNs, challenges in successfully capturing long-range dynamics, and how contextual information factors into modeling user behavior with neural networks.
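To make the recurrent user-modeling idea concrete, here is a minimal, dependency-free sketch (an illustration under simplifying assumptions, not the models described in the talk; the function names and elementwise weights are hypothetical): a user state vector is updated after each consumed item and then used to score candidate items by dot product.

```python
import math

def rnn_step(state, item_embedding, w_state=0.5, w_input=0.5):
    """One step of a minimal recurrent update: the user state is folded
    together with the embedding of the item just consumed. Scalar
    elementwise weights stand in for the learned weight matrices."""
    return [math.tanh(w_state * s + w_input * x)
            for s, x in zip(state, item_embedding)]

def score_item(state, item_embedding):
    """Dot-product affinity between the current user state and a
    candidate item's embedding; higher means a better recommendation."""
    return sum(s * x for s, x in zip(state, item_embedding))
```

Feeding a sequence of item embeddings through `rnn_step` accumulates a user representation, so an item similar to recently consumed ones scores higher than it would against a fresh (all-zero) state.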
Speaker Bio:
Alex Beutel is a Senior Research Scientist at Google Research working on neural user behavior modeling, fairness in machine learning, and ML systems. He received his Ph.D. in 2016 from Carnegie Mellon University's Computer Science Department, and previously received his B.S. from Duke University in computer science and physics. His Ph.D. thesis on large-scale user behavior modeling, covering recommender systems, fraud detection, and scalable machine learning, was given the SIGKDD 2017 Doctoral Dissertation Award Runner-Up. His work has appeared in KDD, WWW, ICDM, SDM, AISTATS, and TKDD; and he has given tutorials on user behavior modeling at KDD, one of the premier data mining conferences, and CCS, one of the premier security conferences. He received the Best Paper Award at KDD 2016 and ACM GIS 2010, was a finalist for best paper in KDD 2014 and ASONAM 2012, and was awarded the Facebook Fellowship in 2013 and the NSF Graduate Research Fellowship in 2011. More details can be found at http://alexbeutel.com.
All workshop submissions must be formatted according to the ACM SIG Proceedings template. Submissions need not be anonymized; please include author names and affiliations. We welcome submissions in either long or short format, spanning 4-6 pages.
Authors should submit original papers in PDF format through the Easychair system.
This is a workshop where discussion is central and all attendees are active participants. The workshop will include keynote talks to set the stage and ensure all attendees are on the same page. A small number of contributed papers will be selected for short oral presentations (10-15 minutes); all other papers will be given a 2-minute boaster, and all papers will be presented as posters in an interactive poster session.
The results of the workshop will be disseminated in various ways.
Rishabh Mehrotra (Spotify Research; University College London)
Emine Yilmaz (University College London; Alan Turing Institute)
Ahmed Hassan Awadallah (Microsoft Research)
- Milad Shokouhi (Microsoft)
- Fernando Diaz (Spotify)
- Filip Radlinski (Google Research)
- Evangelos Kanoulas (University of Amsterdam)