Publications

This paper introduces a Knowledge-Aware Sequential Conversational Recommender System (KASCRS) that provides personalized recommendations through multi-turn natural language interactions. KASCRS uses transformers and knowledge graph embeddings to predict suitable recommendations based on conversation content. The model encodes each conversation as a sequence of mentioned entities (items and properties) and is trained to predict the final item in the sequence using a cloze task. Key features include leveraging transformers and self-attention to capture sequential dependencies, and using knowledge graph embeddings to pre-train representations of items and properties. This integration enhances the accuracy of user and item representations. Experiments demonstrated that KASCRS outperformed several state-of-the-art baselines on two datasets.
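The cloze-style prediction described above can be sketched in a few lines: a conversation becomes a sequence of entity embeddings, the final position is masked, and a self-attention pass produces a contextualised representation of the mask that is scored against the catalog. All numbers here are toy stand-ins (random embeddings, a parameter-free attention head), not the trained KASCRS model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 6 entities (items and properties) with
# pre-trained knowledge-graph embeddings (random stand-ins here).
n_entities, dim = 6, 8
kg_embeddings = rng.normal(size=(n_entities, dim))

def self_attention(x):
    # Single-head scaled dot-product self-attention, with no learned
    # projections -- for illustration only.
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

# A conversation mentioning entities 0, 2, 4; the final item is masked
# and must be recovered (the cloze task).
conversation = [0, 2, 4]
masked = np.vstack([kg_embeddings[conversation[:-1]],
                    np.zeros(dim)])          # [MASK] placeholder

contextual = self_attention(masked)
mask_repr = contextual[-1]

# Score every catalog entity against the contextualised [MASK] position.
scores = kg_embeddings @ mask_repr
ranking = np.argsort(-scores)
```

In the trained model the attention projections and embeddings are learned end-to-end; the sketch only shows how the masked position aggregates the conversation context before scoring.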

This paper introduces a Knowledge-aware Recommender System (KARS) using Graph Neural Networks (GNNs) that leverage pre-trained content-based embeddings to enhance user and item representations. By utilizing textual features to provide a different perspective on catalog items, the system aims to deliver more accurate recommendations. Pre-trained item representations based on textual content are used as input for the GNN-based KARS, allowing it to integrate both unstructured content and structured knowledge from the knowledge graph. Experiments show that incorporating pre-trained embeddings significantly improves predictive accuracy, outperforming all baselines in various settings.
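The core idea, feeding pre-trained content-based item embeddings into a GNN instead of random initialisation, can be illustrated with one message-passing step over a toy user-item graph. The interaction matrix, embeddings, and single tanh layer are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bipartite user-item graph: 3 users, 4 items.
# Pre-trained content-based item embeddings (e.g. from a text encoder)
# serve as the items' input features instead of random initialisation.
n_users, n_items, dim = 3, 4, 8
item_content = rng.normal(size=(n_items, dim))   # pre-trained stand-ins
user_init = rng.normal(size=(n_users, dim))

# Toy interaction matrix R (users x items).
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

# Symmetric normalised adjacency of the bipartite graph.
A = np.block([[np.zeros((n_users, n_users)), R],
              [R.T, np.zeros((n_items, n_items))]])
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# One message-passing layer: neighbours' features are aggregated, so
# user representations absorb the content embeddings of liked items.
X = np.vstack([user_init, item_content])
W = rng.normal(size=(dim, dim)) * 0.1
H = np.tanh(A_hat @ X @ W)

user_repr, item_repr = H[:n_users], H[n_users:]
# Predicted affinity of user 0 for every item.
affinity = item_repr @ user_repr[0]
```

The point of the sketch is the input matrix `X`: replacing its item rows with pre-trained textual representations is what lets the GNN integrate unstructured content with the graph structure.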

In the last few years, Knowledge-Aware Recommender Systems (KARSs) have attracted increasing interest in the community thanks to their ability to encode diverse and heterogeneous data sources, both structured (such as knowledge graphs) and unstructured (such as plain text). Indeed, as shown by several lines of evidence, the combination of such information allows KARSs to achieve competitive performance in several scenarios. In particular, state-of-the-art KARSs leverage the current wave of deep learning and can process and exploit large corpora of information that provide complementary and useful characteristics of the items, including knowledge graphs, descriptive properties, reviews, text, and multimedia content. The objective of my Ph.D. is to investigate methods to design and develop knowledge-aware recommendation models based on the merging of heterogeneous embeddings. By combining diverse information sources, I plan to develop novel models able to provide accurate, fair, and explainable recommendations.

This paper presents a methodology for generating review-based natural language justifications to support personalized recommendations. The approach adapts justifications to various contextual situations in which items will be used. The intuition is that justifications should vary with the context, such as different reasons for recommending a restaurant depending on whether a person is going out with friends or family. A pipeline based on distributional semantics models generates vector space representations of each context using a term-context matrix to identify suitable review excerpts. Validated through user studies in movies and restaurants, the methodology improved perceived transparency and helped users make more informed choices, confirming the initial intuitions.
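The term-context pipeline above can be sketched as follows: a toy term-context matrix gives each context a vector over the vocabulary, review excerpts are projected into the same space as bags of words, and the excerpt closest to the target context (by cosine similarity) is selected as the justification. The vocabulary, counts, and excerpts are invented toy data, not the paper's corpus.

```python
import numpy as np

# Hypothetical mini-corpus: each context is described by the terms of
# the reviews written in that situation; counts form a term-context matrix.
contexts = ["friends", "family"]
vocab = ["beer", "loud", "kids", "menu", "quiet"]
# Rows: terms, columns: contexts (toy counts, not real data).
M = np.array([[5, 0],    # beer
              [4, 1],    # loud
              [0, 6],    # kids
              [2, 3],    # menu
              [0, 4]])   # quiet

def context_vector(name):
    return M[:, contexts.index(name)].astype(float)

def excerpt_vector(text):
    # Bag-of-words projection of a review excerpt onto the vocabulary.
    toks = text.lower().split()
    return np.array([toks.count(t) for t in vocab], dtype=float)

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

excerpts = ["great beer and loud music",
            "quiet place with a kids menu"]

# Pick the excerpt closest to the target context in the vector space.
target = context_vector("family")
best = max(excerpts, key=lambda e: cosine(excerpt_vector(e), target))
# best -> "quiet place with a kids menu"
```

With the "family" context as target, the excerpt mentioning "kids", "menu", and "quiet" wins, mirroring the paper's intuition that the justification should change with the situation of use.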

This paper presents a strategy for providing knowledge-aware recommendations by combining graph neural networks (GNNs) and sentence encoders. The approach leverages both structured data from knowledge graphs and unstructured textual content to create accurate item representations. GNNs encode collaborative features and item properties, while a transformer-based sentence encoder processes textual descriptions. These embeddings are then combined using a deep neural network with self-attention and cross-attention mechanisms to refine the representations. The network predicts user interest to generate a top-k recommendation list. Experiments on two datasets demonstrate that this method outperforms several competitive baselines.
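The self-attention and cross-attention fusion described above can be sketched as follows, with random stand-ins for the GNN and sentence-encoder outputs and a parameter-free attention head in place of the trained layers:

```python
import numpy as np

rng = np.random.default_rng(2)

def attention(q, k, v):
    # Scaled dot-product attention (single head, no learned projections;
    # a stand-in for the model's trained attention layers).
    scores = q @ k.T / np.sqrt(q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v

n_items, dim = 4, 8
graph_emb = rng.normal(size=(n_items, dim))   # from the GNN (stand-in)
text_emb = rng.normal(size=(n_items, dim))    # from the sentence encoder

# Self-attention refines each view independently ...
graph_ctx = attention(graph_emb, graph_emb, graph_emb)
text_ctx = attention(text_emb, text_emb, text_emb)

# ... while cross-attention lets each view query the other.
graph_from_text = attention(graph_ctx, text_ctx, text_ctx)
text_from_graph = attention(text_ctx, graph_ctx, graph_ctx)

# Concatenate the cross-attended views into the fused item representation.
fused = np.concatenate([graph_from_text, text_from_graph], axis=1)

# A user embedding scores items to produce a top-k list.
user = rng.normal(size=2 * dim)
topk = np.argsort(-(fused @ user))[:2]
```

The sketch shows only the information flow: each item's structured and textual views first attend within themselves, then to each other, before the fused vector is scored against the user.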

The last few years have shown growing interest in the design and development of Knowledge-Aware Recommender Systems (KARSs). This is mainly due to their capability to encode and exploit several data sources, both structured (such as knowledge graphs) and unstructured (such as plain text). Nowadays, many state-of-the-art KARSs rely on deep learning, enabling them to exploit large amounts of information, including knowledge graphs (KGs), user reviews, plain text, and multimedia content (pictures, audio, videos). In my Ph.D. I will follow this research trend, exploring and studying techniques for designing KARSs that leverage representations learnt from multi-modal information sources, in order to provide users with fair, accurate, and explainable recommendations.


The paper presents a method for knowledge-aware recommendation that combines graph neural networks (GNNs) and sentence encoders. We exploit structured data from knowledge graphs and unstructured textual content to learn accurate user and item representations. GNNs encode collaborative features and item properties, while a sentence encoder encodes textual descriptions. These embeddings are merged by a deep neural network with self-attention and cross-attention mechanisms that refines the representations. The model predicts user interest to generate a top-k recommendation list, and experiments show this approach outperforms several competitive baselines.

This paper introduces a knowledge-aware recommendation framework utilizing neuro-symbolic graph embeddings that encode first-order logical (FOL) rules. The process begins with a knowledge graph (KG) that captures user preferences through explicit ratings and item properties. The recommendation framework consists of three main modules: (i) a rule learner that extracts FOL rules from the KG, (ii) a graph embedding module that learns embeddings of users and items based on KG triples and the extracted FOL rules, and (iii) a recommendation module that feeds these embeddings to a deep learning architecture. Experimental results on two datasets indicate that integrating KG embeddings with FOL rules enhances both the accuracy and novelty of the recommendations.
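One common way to make FOL rules interact with graph embeddings is to ground the rules into extra triples that augment the training set, which are then scored by a translational model. The sketch below illustrates that idea with an invented toy rule and a TransE-style scoring function; the entities, relation names, and rule are hypothetical and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy KG: (head, relation, tail) triples.
entities = ["u1", "m1", "m2", "d1"]
relations = ["likes", "directed_by", "likes_director"]
E = {e: i for i, e in enumerate(entities)}
R = {r: i for i, r in enumerate(relations)}

triples = [("u1", "likes", "m1"), ("m1", "directed_by", "d1")]

# An FOL rule of the kind a rule learner might extract:
#   likes(U, M) AND directed_by(M, D) -> likes_director(U, D)
# Grounding the rule injects the inferred triples into training.
def ground_rule(triples):
    likes = {(h, t) for h, r, t in triples if r == "likes"}
    directed = {(h, t) for h, r, t in triples if r == "directed_by"}
    return [(u, "likes_director", d)
            for (u, m) in likes for (m2, d) in directed if m == m2]

augmented = triples + ground_rule(triples)

# TransE-style scoring: a triple is plausible when h + r is close to t.
ent_emb = rng.normal(size=(len(entities), 8)) * 0.1
rel_emb = rng.normal(size=(len(relations), 8)) * 0.1

def transe_score(h, r, t):
    return -float(np.linalg.norm(ent_emb[E[h]] + rel_emb[R[r]] - ent_emb[E[t]]))

plausibility = [transe_score(*tr) for tr in augmented]
```

In a full pipeline the embeddings would be trained to maximise the score of observed and rule-inferred triples, and the resulting user and item vectors would feed the downstream recommendation network.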