1 Introduction
Recommendation systems are core components of many online applications, helping users explore items of potential interest. As one of the most effective approaches, collaborative filtering Sarwar et al. (2001); Koren and Bell (2015); He et al. (2017) and its deep neural network based variants He et al. (2017); Wu et al. (2016); Liang et al. (2018); Li and She (2017); Yang et al. (2017); Wang et al. (2018a) have been widely studied. These methods leverage patterns across similar users and items to predict user preferences, and have demonstrated encouraging results in recommendation tasks Bennett and Lanning (2007); Hu et al. (2008); Schedl (2016). Among these works, besides “user-item” pairs, side information, e.g., user reviews and scores on items, has been involved and has achieved remarkable success Menon et al. (2011); Fang and Si (2011). Such side information is a kind of user feedback to the recommended items, which is promising for improving recommendation systems.

Unfortunately, both the user-item pairs and the user feedback are extremely sparse compared with the search space of items. What is worse, when a recommendation system is trained on static observations, the feedback is unavailable until the system is deployed in real-world applications — in both the training and validation phases, the target system has no access to any feedback, because no one has observed the recommended items. Therefore, the recommendation system may suffer from overfitting, and its performance may degrade accordingly, especially in the initial phase of deployment. Although real-world recommendation systems are usually updated in an online manner with the aid of increasing observed user behavior Rendle and Schmidt-Thieme (2008); Agarwal et al. (2010); He et al. (2016), introducing a feedback mechanism during the training phase can potentially improve the efficiency of the initial systems. However, this is neglected by existing learning frameworks.
Motivated by the above observations, we propose a novel framework that achieves collaborative filtering with a synthetic feedback loop (CF-SFL). As shown in Figure 1, the proposed framework consists of a “recommender” and a “virtual user.” The recommender is a collaborative filtering (CF) model that predicts items from observed user behavior. The observed user behavior reflects the intrinsic preferences of users, while the recommended items represent the potential user preferences estimated by the model. Taking the fusion of the observed user behavior and the recommended items as input, the virtual user, which is the key of our model, imitates real-world scenarios and synthesizes user feedback. In particular, the virtual user contains a reward estimator and a feedback generator: the reward estimator estimates rewards based on the fused inputs (the compatible representation of the user observation and its recommended items) and is learned with a generative adversarial regularizer. The feedback generator provides feedback embeddings to augment the original user embeddings, conditioned on the estimated rewards as well as the fused inputs. Such a framework constructs a closed loop between the target CF model and the virtual user, synthesizing user feedback as side information to improve recommendation results.
The proposed CF-SFL framework can be interpreted as an inverse reinforcement learning (IRL) approach, in which the recommender learns to recommend items (the policy) under the estimated guidance (feedback) from the proposed virtual user. The proposed feedback loop can be understood as an effective rollout procedure for recommendation, jointly updating the recommender (the policy) and the virtual user (the reward estimator and the feedback generator). As a result, even if side information (i.e., real-world user feedback) is unobservable, our algorithm can still synthesize feedback in both the training and inference phases. The proposed framework is general and compatible with most CF methods. Experimental results show that the performance of existing approaches can be remarkably improved within the proposed framework.
2 Proposed Framework
In this section, we first describe the problem we are interested in and then give a detailed description of each module included in the framework.
2.1 Problem Statement
Suppose we have $U$ users and $I$ items in total. We denote the observed user-item matrix as $X = [x_1, \ldots, x_U] \in \{0, 1\}^{I \times U}$, where each vector $x_u \in \{0, 1\}^I$, $u = 1, \ldots, U$, represents the observed behavior of user $u$: $x_{iu} = 1$ indicates that the $i$-th item is bought or reviewed by the $u$-th user; otherwise, the $i$-th item is either irrelevant to the $u$-th user or we lack knowledge about their relationship. The desired recommendation system aims to predict each user's preference, denoted as $\hat{x}_u \in \mathbb{R}^I$, whose element $\hat{x}_{iu}$ indicates the preference of the $u$-th user for the $i$-th item. Accordingly, the system recommends to each user the items with large $\hat{x}_{iu}$'s.

Ideally, for each user, $x_u$ contains only partial (in fact, very sparse) information about the user's preference, and a practical recommendation system works dynamically in a closed loop — users generate feedback to the recommended items, while the recommendation system considers this feedback to revise the recommended items in the future. Therefore, we can formulate the whole recommendation process as
$\hat{x}_u^t = R\big(x_u,\, f_u^{t-1}\big), \qquad f_u^t = F\big(x_u,\, \hat{x}_u^t\big), \qquad (1)$
where $R$ represents the target recommender and $F$ represents the coupled feedback mechanism of the system. $f_u^t$ is the embedding of the user's feedback to historical recommended items. At each time $t$, the recommender predicts preferred items $\hat{x}_u^t$ according to the observed user behavior and the previous feedback, and the user generates feedback $f_u^t$ to the recommender. Note that (1) is different from existing sequential recommendation models Mishra et al. (2015); Wang et al. (2016a), because those methods also ignore the feedback loop, merely updating the recommender according to sequential observations, i.e., $x_u^t$ for different time steps $t$.^{1}^{1}1When the static observation $x_u$ in (1) is replaced with a sequential observation $x_u^t$, (1) is naturally extended to a sequential recommendation system with a feedback loop. In this work, we focus on the case with static observations and train a recommender system accordingly.
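To make the loop in (1) concrete, the following is a minimal NumPy sketch of a few recommendation-feedback cycles. The random linear maps, the dimensions, and the softmax/tanh choices are illustrative assumptions standing in for the learned networks, not the architecture used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 100, 16

# Hypothetical stand-ins for the recommender R and the feedback mechanism F
# in (1); random linear maps replace the learned networks.
W_rec = rng.normal(scale=0.1, size=(n_items + d, n_items))
W_fb = rng.normal(scale=0.1, size=(2 * n_items, d))

def recommender(x, f):
    """R: score items from observed behavior x and feedback embedding f."""
    logits = np.concatenate([x, f]) @ W_rec
    p = np.exp(logits - logits.max())
    return p / p.sum()                            # distribution over items

def feedback(x, x_hat):
    """F: synthesize a feedback embedding from behavior and recommendation."""
    return np.tanh(np.concatenate([x, x_hat]) @ W_fb)

x = (rng.random(n_items) < 0.05).astype(float)    # sparse observed behavior
f = np.zeros(d)                                   # no feedback at t = 0
for t in range(3):                                # unroll the loop in (1)
    x_hat = recommender(x, f)
    f = feedback(x, x_hat)
```

At each cycle the recommendation depends on the previous feedback, which is exactly the coupling that a purely static training procedure discards.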
Unfortunately, the feedback information is often unavailable in the training and inference phases. Accordingly, most existing collaborative filtering based recommendation methods ignore the feedback loop in the system, learning the target system purely from the static observed user-item matrix Liang et al. (2018); Li and She (2017). Although in some scenarios side information like user reviews is associated with the observation matrix, methods using such information often treat it as static knowledge rather than dynamic feedback. They mainly focus on fitting the ground-truth recommended items with the recommender given the fixed observations, while ignoring the imitation of the whole recommendation-feedback loop in (1). Without the feedback mechanism $F$, the recommender $R$ tends to overfit the observed user behavior and static side information, and may degrade in practical, dynamic scenarios.
To overcome the problems mentioned above, we propose a collaborative filtering framework with a synthetic feedback loop (CF-SFL), which explains the whole recommendation process from the viewpoint of reinforcement learning. As shown in Figure 1, besides the traditional recommendation module, the proposed framework introduces a virtual user, which imitates the recommendation-feedback loop even if real-world feedback is unobservable.
2.2 Recommender
In our framework, the recommender implements the function $R$ in (1): it takes the observed user behavior and the user's previous feedback embedding as input and recommends items accordingly. In principle, the recommender can be defined with high flexibility — it can be an arbitrary collaborative filtering method that predicts items from user representations, such as WMF Hu et al. (2008), CDAE Wu et al. (2016), VAE Liang et al. (2018), etc. In this work, we formulate the recommender from the viewpoint of reinforcement learning.
In particular, the recommendation-feedback loop generates a sequence of interactions between each user and the recommender, $\{(s_u^t, a_u^t)\}$, for $t = 1, \ldots, T$. Here, $s_u^t = [x_u; f_u^{t-1}]$ is the representation of user $u$ at time $t$, which is a sample in the state space $\mathcal{S}$ describing user preference. $a_u^t = \hat{x}_u^t$ indicates the recommended items provided by the recommender, which is a sample in the action space $\mathcal{A}$ of the recommender. Accordingly, we can model the recommendation-feedback loop as a Markov Decision Process (MDP) $(\mathcal{S}, \mathcal{A}, P, r)$, where $P$ is the transition probability of user preference and $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function used to evaluate recommended items. The recommender works as a policy $\pi_\theta$ parametrized by $\theta$, i.e., $\pi_\theta(a \mid s)$, which corresponds to the distribution of items conditioned on user preference. The target recommender should be an optimal policy that maximizes the expected reward $\mathbb{E}_{(s, a) \sim \pi_\theta}[r(s, a)]$, where $r(s, a)$ is the reward of the state-action pair $(s, a)$. For the $u$-th user, given $s_u^t$, the recommender selects potentially-preferred items via

$\hat{x}_u^t = a_u^t \sim \pi_\theta(\cdot \mid s_u^t). \qquad (2)$
Note that, different from traditional reinforcement learning tasks, in which the full state and the reward are available while the environment dynamics have limited accessibility, our recommender receives only partial information about the state — it does not observe the users' feedback embeddings. In other words, to optimize the recommender, we need to build a reward model and a feedback generator jointly, which motivates us to introduce a virtual user into the framework.
2.3 Virtual User
The virtual user module implements the feedback function $F$ in (1): it not only models the reward of the items provided by the recommender but also generates feedback to complete the representation of the state. Accordingly, the virtual user contains the following two modules:
Reward Estimator The reward estimator parametrizes the reward function: it takes the current prediction and the user preference as input and evaluates their compatibility. In this work, we implement the estimator with parameters $\phi$, defined as

$r_\phi(s_u, a_u) = \mathrm{sigmoid}\big(h_\phi(g_\phi(x_u, \hat{x}_u))\big), \qquad (3)$

where we use the static part of the state, i.e., the observed user behavior $x_u$, as input. $g_\phi$ is the fusion function, which merges $x_u$ and $\hat{x}_u$ into a real-valued vector (the fused input is shown in Figure 5 and described in the Appendix). $h_\phi$ is a single-value regression function that translates the fused input into a single reward value. The sigmoid function restricts the regression value to lie between 0 and 1.
Feedback Generator The feedback generator connects the reward estimator with the recommender by generating a feedback embedding, i.e.,

$f_u = G_\psi\big(g_\phi(x_u, \hat{x}_u),\, r_\phi(s_u, a_u)\big), \qquad (4)$

where $\psi$ represents the parameters of the generator. Specifically, the parametric function $G_\psi$ takes the fused input and the estimated reward and returns a feedback embedding to the recommender. The reward $r_\phi$ indicates the compatibility between the recommended items and the user's preferences, while the fused input, which is a vector rather than a scalar like the reward, further enriches the information available for generating feedback embeddings. Consequently, the recommender receives the informative feedback as a complement to the static observation and makes an improved recommendation via (2).
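The two modules of the virtual user can be sketched together as follows. The lookup-table fusion, the linear regression head, and the tanh generator are simplifying assumptions standing in for the architectures in Table 2 and Figure 5.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 100, 16

# Hypothetical parameters: phi = (E, w) for the reward estimator,
# psi = W_gen for the feedback generator.
E = rng.normal(scale=0.1, size=(n_items, d))      # fusion lookup table (g)
w = rng.normal(scale=0.1, size=d)                 # regression head (h)
W_gen = rng.normal(scale=0.1, size=(d + 1, d))    # feedback generator (G)

def fuse(x, x_hat):
    """g: merge observed behavior and recommendation into one dense vector."""
    return (x @ E) / max(x.sum(), 1.0) + x_hat @ E

def reward(x, x_hat):
    """(3): sigmoid-squashed scalar reward in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(fuse(x, x_hat) @ w)))

def gen_feedback(x, x_hat):
    """(4): feedback embedding conditioned on the fused input and the
    estimated reward (concatenated as a single generator input)."""
    inp = np.concatenate([fuse(x, x_hat), [reward(x, x_hat)]])
    return np.tanh(inp @ W_gen)
```

Note that the generator sees both the vector-valued fused input and the scalar reward, matching the description above.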
3 Learning Algorithm
3.1 Learning task
Based on the proposed framework, we need to jointly learn the policy $\pi_\theta$ corresponding to the recommender, the reward estimator $r_\phi$, and the feedback generator $G_\psi$. Suppose that we have a set of labeled samples $\mathcal{D} = \{(x_u, x_u^*)\}$, where $x_u$ is the historical behavior of user $u$ derived from the user-item matrix and $x_u^*$ is the ground truth of the recommended items for the user based on his/her behavior $x_u$. We formulate the learning task as the following min-max optimization problem:
$\min_{\theta, \psi} \max_{\phi} \; L(\theta, \psi, \phi), \qquad (5)$

where

$L(\theta, \psi, \phi) = L_{\mathrm{sup}}(\theta, \psi) + \mathbb{E}_{(s, a^*) \sim \mathcal{D}}\big[\log r_\phi(s, a^*)\big] + \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log\big(1 - r_\phi(s, a)\big)\big]. \qquad (6)$
In particular, the first term in (6) can be any supervised loss based on the labeled data, e.g., the evidence lower bound (ELBO) proposed for VAEs Liang et al. (2018) (and used in our work). This term encourages the recommender to fit the ground-truth labeled data. The remaining terms consider the following two types of interactions among the three modules:

- The collaboration between the recommender policy and the feedback generator, towards a better predictive recommender.

- The adversarial game between the recommender policy and the reward estimator.
Accordingly, given the current reward model, we update the recommender policy and the feedback generator to maximize the expected reward derived from the generated user preference and the recommended items. Conversely, given the recommender policy and the feedback generator, we improve the reward estimator by sharpening its criterion — the updated reward estimator maximizes the expected reward derived from the generated user preference and the ground-truth items while minimizing the expected reward based on the recommended items. Therefore, we solve (5) via alternating optimization. The update of $\theta$ and $\psi$ is achieved by minimizing
$L_{\mathrm{sup}}(\theta, \psi) - \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log r_\phi(s, a)\big]. \qquad (7)$
The update of $\phi$ is achieved by maximizing
$\mathbb{E}_{(s, a^*) \sim \mathcal{D}}\big[\log r_\phi(s, a^*)\big] + \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log\big(1 - r_\phi(s, a)\big)\big]. \qquad (8)$
Both updating steps can be performed effectively via stochastic gradient descent.
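The two alternating objectives can be written down directly from the description above; a minimal sketch, where the helper names and the weight `lam` are illustrative assumptions:

```python
import numpy as np

def recommender_loss(l_sup, r_fake, lam=1.0):
    """Recommender/generator step: supervised loss minus the expected
    log-reward of recommended items; minimized w.r.t. theta and psi."""
    return l_sup - lam * np.mean(np.log(r_fake))

def estimator_objective(r_real, r_fake):
    """Reward-estimator step: log-reward of ground-truth items plus
    log(1 - reward) of recommended items; maximized w.r.t. phi."""
    return np.mean(np.log(r_real)) + np.mean(np.log(1.0 - r_fake))
```

The recommender loss decreases as the estimator assigns higher rewards to its recommendations, while the estimator objective grows as it separates ground-truth items from recommended ones, which is the adversarial game described above.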
3.2 Unrolling for learning and inference
Because the proposed framework contains a closed loop among learnable modules, during training we unroll the loop and let the recommender interact with the virtual user for $T$ steps. Specifically, at the initial stage, the recommender takes the observed user behavior $x_u$ and an all-zero initial feedback embedding $f_u^0 = \mathbf{0}$ to make recommendations. At each step $t$, the recommender predicts the items $\hat{x}_u^t$ given $x_u$ and $f_u^{t-1}$, passes them to the virtual user, and receives the feedback embedding $f_u^t$. The loss is defined on the output of the last step, i.e., $\hat{x}_u^T$ and $f_u^T$, and the modules are updated accordingly. After the model is learned, in the testing phase we infer the recommended items in the same manner, unrolling the feedback loop and taking $\hat{x}_u^T$ as the final recommendation. The details of the unrolling process are illustrated in Figure 2, and the detailed scheme of our learning algorithm is shown in Algorithm 1 in the Appendix.
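The unrolling described above amounts to the following sketch, where `recommend` and `gen_feedback` are placeholders for the learned modules and the defaults for `T` and `d` are hypothetical:

```python
import numpy as np

def unrolled_forward(x, recommend, gen_feedback, T=4, d=16):
    """Unroll the recommender / virtual-user loop for T steps (Figure 2):
    start from an all-zero feedback embedding, alternate recommendation
    and feedback generation, and return the last-step prediction, on
    which the training loss (and the final recommendation at inference)
    is defined."""
    f = np.zeros(d)
    x_hat = None
    for _ in range(T):
        x_hat = recommend(x, f)      # predict items from x and f^{t-1}
        f = gen_feedback(x, x_hat)   # virtual user returns f^t
    return x_hat
```

Because the same parameters are reused at every step, the unrolled loop adds no new parameters; only the forward (and backward) computation grows linearly in T.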
4 CFSFL as Inverse Reinforcement Learning
Our CF-SFL framework automatically discovers informative user feedback as side information and gradually improves the training of the recommender. Theoretically, it is closely connected with inverse reinforcement learning (IRL). Specifically, we jointly learn the reward function and the policy (recommender) from the expert trajectories $\mathcal{D}$ (the observed labeled data). $\mathcal{D}$ typically consists of state-action pairs generated from an expert policy $\pi^*$ under the corresponding environment dynamics. The goal of IRL is to recover the optimal reward function as well as the corresponding recommender. Formally, IRL is defined as:
$\hat{r} = \arg\max_{r} \; \mathbb{E}_{(s, a^*) \sim \pi^*}\big[r(s, a^*)\big] - \max_{\pi} \mathbb{E}_{(s, a) \sim \pi}\big[r(s, a)\big], \qquad (9)$

$\hat{\pi} = \arg\max_{\pi} \; \mathbb{E}_{(s, a) \sim \pi}\big[\hat{r}(s, a)\big]. \qquad (10)$
Intuitively, this objective enforces the expert policy to induce higher rewards than all other policies. The objective is suboptimal if the expert trajectories are noisy, i.e., the expert is not perfect and its trajectories are not optimal; in that case the learned policy always performs worse than the expert one. Besides, the ill-defined IRL objective often admits multiple solutions due to its flexible solution space, i.e., one can assign arbitrary rewards to non-expert trajectories as long as these trajectories yield lower rewards than the expert ones. To alleviate these issues, constraints are placed on the objective function, e.g., a convex reward regularizer $\Omega(r)$:
$\hat{r} = \arg\max_{r} \; \mathbb{E}_{(s, a^*) \sim \pi^*}\big[r(s, a^*)\big] - \max_{\pi} \mathbb{E}_{(s, a) \sim \pi}\big[r(s, a)\big] - \Omega(r). \qquad (11)$
To imitate the expert policy and provide better generalization, we adopt the adversarial regularizer of Ho and Ermon (2016), which defines $\Omega$ with the following form:

$\Omega(r) = -\mathbb{E}_{(s, a^*) \sim \pi^*}\big[\log r(s, a^*)\big],$

where $r(s, a) \in (0, 1)$. This regularizer places a low penalty on reward functions that assign substantial positive value to expert state-action pairs; however, if $r$ assigns a low value (close to zero, which is the lower bound) to the expert, then the regularizer penalizes $r$ heavily. With the induced adversarial regularizer, we obtain a new imitation learning algorithm for the recommender:
$\min_{\pi} \max_{r} \; \mathbb{E}_{(s, a^*) \sim \pi^*}\big[\log r(s, a^*)\big] + \mathbb{E}_{(s, a) \sim \pi}\big[\log\big(1 - r(s, a)\big)\big]. \qquad (12)$
Intuitively, we want to find a saddle point of the expression:
$\mathbb{E}_{(s, a^*) \sim \pi^*}\big[\log r_\phi(s, a^*)\big] + \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log\big(1 - r_\phi(s, a)\big)\big], \qquad (13)$

where $r_\phi$ is the parametrized reward estimator and $\pi_\theta$ the recommender policy. Note that equation (11) is derived from the objective of traditional IRL. However, distinct from the traditional approach, we propose a feedback generator to provide feedback to the recommender. The reward estimator tends to assign lower rewards to the results predicted by the recommender and higher rewards to the expert policy $\pi^*$, aiming to discriminate $\pi_\theta$ from $\pi^*$:
$\max_{\phi} \; \mathbb{E}_{(s, a^*) \sim \pi^*}\big[\log r_\phi(s, a^*)\big] + \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log\big(1 - r_\phi(s, a)\big)\big]. \qquad (14)$
Similar to standard IRL, we update the generator (the recommender policy) to maximize the expected reward with respect to $\pi_\theta$, moving towards expert-like regions of the user-item space. In practice, we incorporate the feedback embedding to update the user preferences, and the objective of the recommender is:
$\max_{\theta, \psi} \; \mathbb{E}_{(s, a) \sim \pi_\theta}\big[\log r_\phi(s, a)\big], \qquad (15)$

where the state $s$ fuses the observed user behavior with the feedback embedding produced by $G_\psi$.
5 Related Work
Collaborative Filtering. Collaborative filtering (CF) methods can be roughly categorized into two groups: CF with implicit feedback Bayer et al. (2017); Hu et al. (2008) and CF with explicit feedback Koren (2008); Liu et al. (2010). In implicit CF, user-item interactions are binary in nature (i.e., 1 if clicked and 0 otherwise), as opposed to explicit CF, where item ratings (e.g., 1-5 stars) are typically the subject of interest. Implicit CF has been widely studied, with examples including factorization of user-item interactions He et al. (2016); Koren (2008); Liu et al. (2016); Rendle (2010); Rennie and Srebro (2005) and ranking-based approaches Rendle et al. (2009). Our CF-SFL is a new framework for implicit CF.
Currently, neural network based models have achieved state-of-the-art performance in various recommender systems Cheng et al. (2016); He et al. (2018, 2017); Zhang et al. (2018); Liang et al. (2018). Among these methods, NCF He et al. (2017) casts the well-established matrix factorization algorithm into an entirely neural framework, combining the shallow inner-product based learner with a series of stacked nonlinear transformations. This method outperforms various traditional baselines and has motivated many follow-up works such as NFM He et al. (2017), DeepFM Guo et al. (2017) and Wide & Deep Cheng et al. (2016). Recently, deep learning approaches Wang et al. (2016b, c), especially deep generative models Chen et al. (2017); Yang et al. (2019); Wang et al. (2018b, 2019a, 2019b), have achieved remarkable success. The VAE of Liang et al. (2018) uses variational inference to scale the algorithm to large-scale datasets and has shown significant success in recommender systems through the use of a multinomial likelihood. Our CF-SFL is a general framework that can adapt to these models seamlessly.

RL in Collaborative Filtering.
For RL-based methods, contextual multi-armed bandits were first utilized to model the interactive nature of recommender systems: Thompson Sampling (TS) Chapelle and Li (2011); Kveton et al. (2015); Zhang et al. (2017) and the Upper Confidence Bound (UCB) Li et al. (2010) are used to balance the trade-off between exploration and exploitation. Zhao et al. (2013) combined matrix factorization with bandits to include latent vectors of items and users for better exploration. The MDP-based CF model can be viewed as a partially observable MDP (POMDP) with partial observation of user preferences Sunehag et al. (2015). Value function approximation and policy-based optimization can be employed to solve the MDP. Zheng et al. (2018) and Taghipour and Kardan (2008) modeled web page recommendation as a Q-learning problem and learned to make recommendations from web usage data. Sunehag et al. (2015) introduced agents that successfully address sequential decision problems. Zhao et al. (2018) proposed a novel page-wise recommendation framework based on deep reinforcement learning. In this paper, we consider the recommending procedure as sequential interactions between a virtual user and the recommender, and leverage feedback from the virtual user to improve the recommendation. Recently, Chen et al. (2019) proposed an off-policy correction technique and successfully applied it in real-world applications.

6 Experiments
Datasets
We investigate the effectiveness of the proposed CF-SFL framework on three benchmark recommendation datasets: (i) MovieLens-20M (ML-20M), collected from a movie recommendation service and containing tens of millions of user-movie ratings; (ii) Netflix Prize (Netflix), another user-movie ratings dataset, collected for the Netflix Prize Bennett and Lanning (2007); (iii) Million Song Dataset (MSD), a user-song rating dataset released as part of the Million Song Dataset project Bertin-Mahieux et al. (2011). To compare directly with existing work, we employ the same preprocessing procedure as Liang et al. (2018). Summary statistics of these datasets are provided in Table 1.
Evaluation Metrics
We employ Recall@r^{2}^{2}2https://en.wikipedia.org/wiki/Precision_and_recall together with NDCG@r^{3}^{3}3https://en.wikipedia.org/wiki/Discounted_cumulative_gain as the evaluation metrics for recommendation; both measure the similarity between the recommended items and the ground truth. Recall@r treats the top-r recommended items equally, while NDCG@r ranks the top-r items and emphasizes the importance of highly ranked items.

Setup
For our CF-SFL framework, the architectures of the recommender, the reward estimator and the feedback generator are shown in Table 2. To represent the user preference, we normalize the two input components independently and concatenate them into a single vector. To learn the model, we pretrain the recommender (150 epochs for ML-20M and 75 epochs for Netflix and MSD) and then optimize the entire framework (50 epochs for ML-20M and 25 epochs for the other two). $\ell_2$ regularization is applied to the recommender, and the Adam optimizer Kingma and Ba (2014) is employed.

Baselines
To demonstrate the superiority of our framework, we consider multiple state-of-the-art approaches as baselines, which fall into two categories: (i) linear models: SLIM Ning and Karypis (2011) and WMF Hu et al. (2008); (ii) deep neural network based models: CDAE Wu et al. (2016), VAE Liang et al. (2018) and aWAE Zhong and Zhang (2018). Note that our CF-SFL is a general framework compatible with all of these approaches. In particular, as shown in Table 2, we implement our recommender as the VAE-based model of Liang et al. (2018) for a fair comparison. In the experiments below, we show that the recommender can be implemented with other existing models as well.
Performance Analysis
Methods  ML20M  Netflix  MSD  

R20  R50  NDCG100  R20  R50  NDCG100  R20  R50  NDCG100  
SLIM  0.370  0.495  0.401  0.347  0.428  0.379       
WMF  0.360  0.498  0.386  0.316  0.404  0.351  0.211  0.312  0.257 
CDAE  0.391  0.523  0.418  0.343  0.428  0.376  0.188  0.283  0.237 
aWAE  0.391  0.532  0.424  0.354  0.441  0.381       
VAE  0.395  0.537  0.426  0.351  0.444  0.386  0.266  0.364  0.316 
0.395  0.535  0.425  0.350  0.444  0.386  0.260  0.356  0.311  
0.396  0.536  0.426  0.352  0.445  0.387  0.263  0.360  0.314  
CFSFL  0.404  0.542  0.435  0.355  0.449  0.394  0.273  0.369  0.323 
All the evaluation metrics are averaged across all the test sets.
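For reference, the per-user metrics reported in this section can be computed as in the following sketch; Recall@r is normalized by min(r, number of relevant items), following Liang et al. (2018), and the function names are illustrative.

```python
import numpy as np

def recall_at_r(scores, relevant, r):
    """Recall@r: hits in the top-r, normalized by min(r, #relevant) so
    that a perfect list scores 1."""
    top = np.argsort(-scores)[:r]
    hits = len(set(top.tolist()) & set(relevant))
    return hits / min(r, len(relevant))

def ndcg_at_r(scores, relevant, r):
    """NDCG@r: discounted gain of the top-r list (discount 1/log2(rank+1)),
    normalized by the DCG of the ideal ordering."""
    top = np.argsort(-scores)[:r]
    gains = np.array([1.0 if i in relevant else 0.0 for i in top])
    discounts = 1.0 / np.log2(np.arange(2, r + 2))
    dcg = float(gains @ discounts)
    idcg = float(discounts[:min(r, len(relevant))].sum())
    return dcg / idcg
```

For example, with scores `[0.9, 0.1, 0.8, 0.3, 0.2]` and relevant items `{1, 2}`, the top-2 list is `[0, 2]`, so Recall@2 is 0.5.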
(i) Quantitative results: we test the various methods and report their results in Table 3. With the proposed CF-SFL framework, we significantly improve the performance of the baselines on all evaluation metrics. These results demonstrate the power of the proposed CF-SFL framework, which provides informative feedback as side information. In particular, we observe that the performance of the base model (VAE) is similar to that of its variant with only the reward estimator. This implies that simply learning feedback from the reward estimator via backpropagation is inefficient. Compared with such a naive strategy, the proposed CF-SFL provides more informative feedback to the recommender and is thus able to improve recommendation results more effectively.
(ii) Learning comparison: in Figure 3, we show the training trajectories of the baselines (VAE, VAE + reward estimator) and of CF-SFL with multiple time steps. There are several interesting findings. (a) The performance of the base VAE does not improve after the pretraining steps, e.g., 75 epochs for Netflix. In comparison, the proposed CF-SFL framework further improves the performance once the whole model is triggered. (b) CF-SFL converges quickly once the whole framework is activated. (c) Consistent with the results in Table 3, the trajectory of the VAE with the reward estimator in Figure 3 is similar to that of the base VAE. In contrast, the trajectories of our CF-SFL methods are smoother and converge to a better local minimum. This further verifies that CF-SFL learns informative user feedback with better stability. (d) As the number of time steps increases within a particular range, CF-SFL achieves faster and better performance. One possible explanation is the learning with our unrolled structure — parameters are shared across time steps, and a more accurate gradient towards the local minimum is found. (e) We find that ML-20M and MSD are more sensitive to the choice of $T$ than Netflix. Therefore, the choice of $T$ should be adjusted to each dataset.
(iii) CF-SFL with dynamic time steps: as shown in Figure 2, the learning of CF-SFL involves a recurrent structure with $T$ time steps. We investigate the choice of $T$ and report its influence on the performance of our method; the NDCG@100 for different $T$'s is shown in Figure 4. Within 6 time steps, CF-SFL consistently boosts the performance on all three datasets. Even with more time steps, the results remain stable. Additionally, the inference time of CF-SFL is linear in the number of time steps $T$. To achieve a trade-off between performance and efficiency, we fix $T$ per dataset in our experiments.
Generalization Study
As aforementioned, our CF-SFL is a general framework compatible with many existing collaborative filtering approaches. We study the usefulness of CF-SFL with different recommenders and present the results in Table 4. Specifically, two types of recommenders are considered: linear approaches such as WARP Weston et al. (2011) and MF Hu et al. (2008), and deep learning methods, e.g., DAE Liang et al. (2018) and the VAE variant of Liang et al. (2018). We find that CF-SFL generalizes to most of the existing collaborative filtering approaches and boosts their performance accordingly. The gains achieved by CF-SFL may vary depending on the choice of recommender.
7 Conclusion
We propose the CF-SFL framework to simulate user feedback. It constructs a virtual user to provide informative side information as user feedback. Mathematically, we formulate the framework as an IRL problem and learn the optimal policy by feeding back the action and reward. Specifically, a recurrent architecture is built to unroll the framework for efficient learning. Empirically, we improve the performance of state-of-the-art collaborative filtering methods by a nontrivial margin. Our framework serves as a practical solution that makes IRL feasible for large-scale collaborative filtering. It will be interesting to investigate the framework in other applications, such as sequential recommender systems.
References
 Fast online learning through offline initialization for time-sensitive recommendation. In KDD, Cited by: §1.
 A generic coordinate descent framework for learning from implicit feedback. In WWW, Cited by: §5.
 The netflix prize. In KDD cup and workshop, Cited by: §1, §6.
 The million song dataset.. In Ismir, Cited by: §6.
 An empirical evaluation of thompson sampling. In NIPS, Cited by: §5.
 Continuous-time flows for deep generative models. arXiv preprint arXiv:1709.01179. Cited by: §5.
 Top-k off-policy correction for a REINFORCE recommender system. In WSDM, Cited by: §5.
 Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, Cited by: §5.
 Matrix cofactorization for recommendation with rich side information and implicit feedback. In Proceedings of the 2nd International Workshop on Information Heterogeneity and Fusion in Recommender Systems, Cited by: §1.
 DeepFM: a factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247. Cited by: §5.
 Outer product-based neural collaborative filtering. arXiv preprint arXiv:1808.03912. Cited by: §5.
 Neural collaborative filtering. In WWW, Cited by: §1, §5.
 Fast matrix factorization for online recommendation with implicit feedback. In SIGIR, Cited by: §1, §5.
 Generative adversarial imitation learning. In NIPS, Cited by: §4.
 Collaborative filtering for implicit feedback datasets. In ICDM, Cited by: §1, §2.2, §5, §6, §6.
 Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §6.
 Advances in collaborative filtering. In Recommender systems handbook, Cited by: §1.
 Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD, Cited by: §5.
 Cascading bandits: learning to rank in the cascade model. In ICML, Cited by: §5.
 A contextual-bandit approach to personalized news article recommendation. In WWW, Cited by: §5.

 Collaborative variational autoencoder for recommender systems. In KDD, Cited by: §1, §2.1.
 Variational autoencoders for collaborative filtering. In WWW, Cited by: §1, §2.1, §2.2, §3.1, §5, §6, §6, §6.
 Unifying explicit and implicit feedback for collaborative filtering. In CIKM, Cited by: §5.
 Learning optimal social dependency for recommendation. arXiv preprint arXiv:1603.04522. Cited by: §5.
 Response prediction using collaborative filtering with hierarchies and sideinformation. In KDD, Cited by: §1.
 A web recommendation system considering sequential information. Decision Support Systems. Cited by: §2.1.
 SLIM: sparse linear methods for top-N recommender systems. In ICDM, Cited by: §6.
 BPR: bayesian personalized ranking from implicit feedback. In UAI, Cited by: §5.
 Online-updating regularized kernel matrix factorization models for large-scale recommender systems. In Recsys, Cited by: §1.
 Factorization machines. In ICDM, Cited by: §5.
 Fast maximum margin matrix factorization for collaborative prediction. In ICML, Cited by: §5.
 Item-based collaborative filtering recommendation algorithms. In WWW, Cited by: §1.
 The LFM-1b dataset for music retrieval and recommendation. In ICMR, Cited by: §1.
 Deep reinforcement learning with attention for slate Markov decision processes with high-dimensional states and actions. arXiv preprint arXiv:1512.01124. Cited by: §5.
 A hybrid web recommender system based on Q-learning. In SAC, Cited by: §5.
 Neural memory streaming recommender networks with adversarial training. In KDD, Cited by: §1.
 Spore: a sequential personalized spatial item recommender system. In ICDE, Cited by: §2.1.

 Deep metric learning with data summarization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 777–794. Cited by: §5.
 Earliness-aware deep convolutional networks for early time series classification. arXiv preprint arXiv:1611.04578. Cited by: §5.

 Topic-guided variational autoencoders for text generation. arXiv preprint arXiv:1903.07137. Cited by: §5.
 Zero-shot learning via class-conditioned deep generative models. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §5.
 Improving textual network learning with variational homophilic embeddings. arXiv preprint arXiv:1909.13456. Cited by: §5.
 Wsabie: scaling up to large vocabulary image annotation. In IJCAI, Cited by: §6.
 Collaborative denoising autoencoders for top-N recommender systems. In ICDM, Cited by: §1, §2.2, §6.

 Bridging collaborative filtering and semi-supervised learning: a neural approach for POI recommendation. In KDD, Cited by: §1.
 An end-to-end generative architecture for paraphrase generation. Cited by: §5.
 Learning structural weight uncertainty for sequential decisionmaking. arXiv preprint arXiv:1801.00085. Cited by: §5.
 NeuRec: on nonlinear transformation for personalized ranking. arXiv preprint arXiv:1805.03002. Cited by: §5.
 Recommendations with negative feedback via pairwise deep reinforcement learning. arXiv preprint arXiv:1802.06501. Cited by: §5.
 Interactive collaborative filtering. In CIKM, Cited by: §5.
 DRN: a deep reinforcement learning framework for news recommendation. In WWW, Cited by: §5.
 Wasserstein autoencoders for collaborative filtering. arXiv preprint arXiv:1809.05662. Cited by: §6.
Appendix A Appendix
A.1 Fusion function
Here we give a detailed description of the proposed fusion function. A straightforward way to build the fusion function is to concatenate $x_u$ and $\hat{x}_u$ and feed the result into a linear layer to learn a lower-dimensional representation. However, in practice this is infeasible, since the number of items $I$ is extremely large, and the use of the concatenation makes the problem even worse. To this end, we introduce a sparse layer. This layer includes a lookup table $E$. Once we have inferred the recommended items $\hat{x}_u$ based on the observation $x_u$, we build the fused input as
$g(x_u, \hat{x}_u) = \frac{1}{|x_u|} \sum_{i=1}^{I} \delta(x_{iu} = 1)\, e_i + \sum_{i=1}^{I} \hat{x}_{iu}\, e_i, \qquad (16)$

where $\delta(\cdot)$ is the Dirac delta function, which takes value 1 if $x_{iu} = 1$, $|x_u|$ is the number of 1's in $x_u$, and $e_i$ is the $i$-th row of the lookup table $E$. The parameters of the lookup table are learned automatically during the training phase. We show an example illustrating the working scheme of the proposed fusion function in Figure 5
. The benefits of the proposed approach are two-fold: 1) it reduces the computational cost of the standard linear transformation under the general sparse setup and saves parameters in our adversarial learning framework; 2) the lookup table is shared between the observation and the recommended items, building a unified space for the user's existing and missing preferences. Empirically, such shared knowledge boosts the performance of our CF-SFL framework.
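A minimal sketch of this sparse fusion layer, assuming the observed-item average and the score-weighted recommended-item embeddings are summed (the exact combination follows Figure 5, which we approximate here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 1000, 32
E = rng.normal(scale=0.1, size=(n_items, d))   # learnable lookup table

def fuse(x, x_hat):
    """Sketch of (16): average the table rows of the observed items
    (x_i = 1) and add the score-weighted rows of the recommended items;
    E is shared between observation and recommendation."""
    obs = x @ E / max(x.sum(), 1.0)   # (1/|x|) * sum_i delta(x_i = 1) e_i
    rec = x_hat @ E                   # rows weighted by predicted scores
    return obs + rec
```

Because `x` is binary and sparse, `x @ E` only sums a handful of table rows, which is the computational saving the sparse layer provides over a dense linear map on the concatenation.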