diff --git "a/abs_29K_G/test_abstract_long_2405.00824v1.json" "b/abs_29K_G/test_abstract_long_2405.00824v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.00824v1.json" @@ -0,0 +1,72 @@ +{ + "url": "http://arxiv.org/abs/2405.00824v1", + "title": "Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations", + "abstract": "Conventional recommendation systems (RSs) are typically optimized to enhance\nperformance metrics uniformly across all training samples.\n This makes it hard for data-driven RSs to cater to a diverse set of users due\nto the varying properties of these users. The performance disparity among\nvarious populations can harm the model's robustness with respect to\nsub-populations. While recent works have shown promising results in adapting\nlarge language models (LLMs) for recommendation to address hard samples, long\nuser queries from millions of users can degrade the performance of LLMs and\nelevate costs, processing times and inference latency. This challenges the\npractical applicability of LLMs for recommendations. To address this, we\npropose a hybrid task allocation framework that utilizes the capabilities of\nboth LLMs and traditional RSs. By adopting a two-phase approach to improve\nrobustness to sub-populations, we promote a strategic assignment of tasks for\nefficient and responsible adaptation of LLMs. Our strategy works by first\nidentifying the weak and inactive users that receive a suboptimal ranking\nperformance by RSs. Next, we use an in-context learning approach for such\nusers, wherein each user interaction history is contextualized as a distinct\nranking task and given to an LLM. We test our hybrid framework by incorporating\nvarious recommendation algorithms -- collaborative filtering and\nlearning-to-rank recommendation models -- and two LLMs -- both open and\nclose-sourced. 
Our results on three real-world datasets show a significant\nreduction in weak users and improved robustness of RSs to sub-populations\n$(\\approx12\\%)$ and overall performance without disproportionately escalating\ncosts.", + "authors": "Kirandeep Kaur, Chirag Shah", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.HC" + ], + "label": "Original Paper", + "paper_cat": "LLM Fairness", + "gt": "Conventional recommendation systems (RSs) are typically optimized to enhance\nperformance metrics uniformly across all training samples.\n This makes it hard for data-driven RSs to cater to a diverse set of users due\nto the varying properties of these users. The performance disparity among\nvarious populations can harm the model's robustness with respect to\nsub-populations. While recent works have shown promising results in adapting\nlarge language models (LLMs) for recommendation to address hard samples, long\nuser queries from millions of users can degrade the performance of LLMs and\nelevate costs, processing times and inference latency. This challenges the\npractical applicability of LLMs for recommendations. To address this, we\npropose a hybrid task allocation framework that utilizes the capabilities of\nboth LLMs and traditional RSs. By adopting a two-phase approach to improve\nrobustness to sub-populations, we promote a strategic assignment of tasks for\nefficient and responsible adaptation of LLMs. Our strategy works by first\nidentifying the weak and inactive users that receive a suboptimal ranking\nperformance by RSs. Next, we use an in-context learning approach for such\nusers, wherein each user interaction history is contextualized as a distinct\nranking task and given to an LLM. We test our hybrid framework by incorporating\nvarious recommendation algorithms -- collaborative filtering and\nlearning-to-rank recommendation models -- and two LLMs -- both open and\nclose-sourced. 
Our results on three real-world datasets show a significant\nreduction in weak users and improved robustness of RSs to sub-populations\n$(\\approx12\\%)$ and overall performance without disproportionately escalating\ncosts.", + "main_content": "INTRODUCTION Recommendation systems (RSs) have become an integral part of numerous online platforms, assisting users in navigating vast amounts of content to relieve information overload [1]. While Collaborative Filtering based RSs [2] primarily rely on user-item interactions to predict users\u2019 preferences for certain candidate items, the utilization of language in recommendations has been prevalent for decades in hybrid and content-based recommenders, mainly through item descriptions and text-based reviews [3]. Furthermore, conversational recommenders [4] have highlighted language as a primary mechanism for allowing users to naturally and intuitively express their preferences [5]. Deep recommendation models are trained under the Empirical Risk Minimization (ERM) framework that minimizes the loss function uniformly for all training samples. Such models, however, fail to cater to a diverse set of sub-populations, affecting robustness [6\u201312]. Empirical analysis conducted by Li et al. [13] shows that active users who have rated many items receive better recommendations on average than inactive users. This inadvertent disparity in recommendations requires careful scrutiny to ensure equitable recommendation experiences for all users [14]. arXiv:2405.00824v1 [cs.IR] 1 May 2024 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Trovato et al. On the other hand, Large Language Models (LLMs) like GPT [15], LLaMA [16], LaMDA [17], Mixtral [18] can effectively analyze and interpret textual data, thus enabling a better understanding of user preferences. These foundation models demonstrate remarkable versatility, adeptly tackling various tasks across multiple domains [19\u201321]. 
However, the field of recommendations is highly domain-specific and requires in-domain knowledge. Consequently, many researchers have sought to adapt LLMs for recommendation tasks [22–25]. Authors in [25] outline four key stages in integrating LLMs into the recommendation pipeline: user interaction, feature encoding, feature engineering, and scoring/ranking. The purpose of using LLMs as a ranking function aligns closely with general-purpose recommendation models. The transition from traditional library-based book searches to evaluating various products, job applicants, opinions, and potential romantic partners signifies an important societal transformation, emphasizing the considerable responsibility incumbent upon ranking systems [26]. Existing works that deploy LLMs for ranking [5, 27–34] have demonstrated the excellence of LLMs as zero-shot or few-shot re-rankers with frozen parameters. These works use traditional RSs as candidate item retrieval models to limit the number of candidate items that need to be ranked by the LLM, given its limited context window. Furthermore, Hou et al. [27] and Xu et al. [28] interpret user interaction histories as prompts for LLMs and show that LLMs perform well only when the interaction length is up to a few items, demonstrating the ability of LLMs to serve (near) cold-start users. Since adapting LLMs can raise concerns around economic and efficiency factors, most of these works train the RS on entire datasets but randomly sample interaction histories of some users to evaluate the performance of LLMs, questioning the generalizability of results for all users. This leads us to two important research questions. • RQ1: Though LLMs have shown remarkable ranking performance even in zero-shot settings, how can we reduce the high costs associated with adapting LLMs to support practical applicability?
• RQ2: Conventional recommendation systems are cost-effective and can perform well on most users, as shown by previous works; how can we prevent performance degradation on sub-populations? To address these RQs, we propose a task allocation strategy that leverages the capabilities of LLMs and RSs in a hybrid framework (Fig. 1). Our strategy operates in two phases based on the responsible and strategic selection of tasks for the cost-effective usage of LLMs. First, we identify the users with highly sparse interaction histories on whom the ranking performance of the RS is below a certain threshold t_p. All such users are termed weak users. In the second phase, the interaction histories of weak users are contextualized using in-context learning to demonstrate user preferences as instruction inputs for the LLM. While the strong users receive the final recommendations retrieved by the RS, weak users receive the recommendations ranked by the LLM if the quality of the ranked list is better than that of the RS. We test our framework on collaborative filtering and learning-to-rank recommendation models, and our results show the efficacy of our strategy, with both open-source and closed-source LLMs, in boosting the model robustness to sub-populations and data sparsity and improving the quality of recommendations. For reproducibility and to support the research community, our code is available at https://anonymous.4open.science/r/resp-llmsRS/. In short, the following are our contributions in this paper. • We introduce a novel hybrid task allocation strategy that combines the strengths of LLMs and traditional RSs to improve robustness to sub-populations and data sparsity. • Our unique method for pinpointing weak users based upon two criteria (user activity and received recommendation quality below a set threshold) facilitates interventions using LLMs for equitable recommendations.
• Our proposed framework improves the robustness of traditional recommendation models by reducing the weak user count, enhancing recommendation quality, and addressing the high costs associated with adapting LLMs. • Our experiments, on both closed-source and open-source LLMs, show the efficacy of our framework in improving model robustness to sub-populations by (≈12%) for varying levels of sparsity and in significantly reducing the count of weak users. 2 RELATED WORK Robustness in machine learning (ML) targets developing models capable of withstanding the challenges posed by imperfect data in diverse forms [35]. Within the paradigm of recommendations, some existing works developed models resilient to shifts in popularity distribution [36–38], distribution disparity between train and test datasets [39, 40], and adversarial and data poisoning attacks [41–45]. Our work aims to tackle the recommendation model's robustness to data sparsity [46] and sub-populations [47]. In their research, Li et al. [13] illustrated that RSs excel in catering to active users but fall short in meeting the overall needs of inactive ones. To address this inequality, they proposed a re-ranking technique that reduced the disparity between active and inactive users. Their results show that such post-processing techniques [48–50] can either harm the average performance on advantaged users to reduce the disparity or reduce the overall utility of models. Though in-processing techniques [51–53] for improving equitable recommendations across various sub-populations can tackle fairness-utility trade-offs, simply adding a regularizer term results in sub-optimal performance [54]. Most of these works have shown disparity and evaluated existing models by grouping users based on their activity, demographics, and preferences. Similarly, Wen et al.
[55] developed a Streaming-Distributionally Robust Optimization (S-DRO) framework to enhance performance across user subgroups, particularly by accommodating their preferences for popular items. Different from these, our work first builds upon the existing literature that elicits the issue of performance disparities among active and inactive users, and then indicates that though inactive users receive lower-quality recommendations on average, this degradation only affects a subset of inactive users rather than all of them. Unlike these works, our framework identifies weak users, i.e., inactive individuals whose preferences traditional recommendation systems struggle to capture effectively. Many researchers have turned to LLMs to address some of these problems because, in recent years, LLMs have proven to be excellent re-rankers and have often outperformed existing SOTA recommendation models in zero-shot and few-shot settings without requiring fine-tuning. For example, Gao et al. [56] proposed an enhanced recommender system that integrates ChatGPT with a traditional RS by synthesizing user-item history, profiles, queries, and dialogue to provide personalized explanations for the recommendations through iterative refinement based on user feedback. AgentCF [31], designed to rank items for users, involves treating users and items as agents and optimizing their interactions collaboratively. While user agents capture user preferences, item agents reflect item characteristics and potential adopters' preferences. They used collaborative memory-based optimization to ensure agents align better with real-world behaviours.
While the retrieval-ranker framework in [29] remains similar to previous works, authors generate instructions with key values obtained from both users (e.g., gender, age, occupation) and items (e.g., title, rating, category). Despite the excellence of LLMs as ranking agents, adapting LLMs can involve processing lengthy queries containing numerous interactions from millions of users. Furthermore, each query can raise various economic and latency concerns. Thus, all these works randomly select a few users from the original datasets to evaluate the performance of LLMs. In practice, this user base can involve many more users, which questions the practical applicability of large models for recommendations. However, some recent studies have shown the efficacy of large language models (LLMs) as re-ranking agents to cater to queries with shorter interaction histories compared to lengthy instructions that constitute hundreds of interactions. For example, Hou et al. [27] trained recommendation systems to generate candidate item sets and then used user-item interactions to develop instructions. The authors sorted users\u2019 rating histories based on timestamps and used in-context learning to design recency-focused prompts. They prompted LLMs to re-rank the candidate items retrieved by the recommendation systems. Their analysis showed decreased performance of LLMs if the candidate item set had more than 20 items. ProLLM4Rec [28] adopted a unified framework for prompting LLMs for recommendation. The authors integrated existing recommendation systems and works that use LLMs for recommendations within a single framework. They provided a detailed comparison of the capabilities of LLMs and recommendation systems. Their empirical analysis showed that while state-of-the-art sequential recommendation models like SASRec [57] improve with a growing number of interactions, LLMs start to perform worse when the number of interactions grows. 
Furthermore, both of these works sampled some users to evaluate the performance of LLMs due to the high adaptation costs. To investigate the effectiveness of various prompting strategies, Sanner et al. [5] focused on a (near) cold-start scenario where minimal interaction data is available. They used various prompting techniques to provide a natural language summary of preferences to enhance user satisfaction by offering a personalized experience. By exploiting rich positive and negative descriptive content and item preferences within a unified framework, they compared the efficacy of prompting paradigms with large language models against collaborative filtering baselines that rely solely on item ratings. In summary, past works suggest that despite the high costs associated with adapting LLMs for recommendations, these models can significantly outperform existing recommendation models. Moreover, we acknowledge that the literature shows the contrasting capabilities of RSs and LLMs: RSs fail to perform well on inactive users due to sparse interaction vectors, while LLMs can be prompted to cater to inactive users in near cold-start settings without requiring any fine-tuning. Building upon these crucial insights, our framework first aims to identify the weak users whose preferences the RS finds hard to capture accurately. We then use in-context learning to prompt LLMs to generate recommendations for such users. While past works such as ProLLM4Rec [28], dynamic reflection with divergent thinking within a retriever-reranker framework [33], recency-focused prompting [27], and aligning ChatGPT with conventional ranking techniques such as point-wise, pair-wise, and list-wise ranking [58] are all different techniques for designing prompts with different variations, our main contribution lies in the responsible task allocation within recommendation systems, and all such techniques can be used within our framework for designing prompts.
In the next section, we discuss our methodology in detail. 3 METHODOLOGY We begin here by providing a formal definition of the existing problem. We then discuss our framework, which adopts a hybrid structure by leveraging the capabilities of both traditional RSs and LLMs. For this, we first identify users for whom RSs do not perform well and then leverage LLMs for these users to demonstrate user preferences using in-context learning. 3.1 Problem Formulation Consider a recommendation dataset D with k data points. Let U = {u_1, u_2, ..., u_M} be the set of users, with |U| = M the number of users in D. Let I = {i_1, i_2, ..., i_N} be the set of all items, with |I| = N the number of items in D. D = {(u_m, i_n, r_mn) : m = 1, 2, ..., M; n = 1, 2, ..., N} (1) Here, the triplet d_mn = (u_m, i_n, r_mn) represents one data point where a user u_m provided a rating of r_mn to an item i_n. Now, if a user u_m has rated a set of items, let [r_mn] for n = 1, ..., N denote the rating vector consisting of explicit rating values ranging from 1 to 5 if the user provided a rating and 0 otherwise. Additionally, θ_r represents the conventional recommendation model. The first step to solving the problem includes determining different criteria to categorize a user as weak. This includes ranking users based on the RS performance on each one of them. Then, the goal is to understand user characteristics to categorize extremely weak users.
For each weak user, we contextualize the interaction history as a distinct recommendation task and finally allocate these tasks to the LLM. 3.2 Identifying Weak Users We consider two criteria for identifying weak users for a recommendation model θ_r. First, given K users and their associated rating vectors, we evaluate how well the model could rank the relevant user items, often termed positive items, above the irrelevant or negative items. Let r denote the rank of a relevant item, and r' the rank of an irrelevant item. Then, δ(r < r') (2) denotes an indicator function that outputs one if the rank of the relevant item r is higher than that of the irrelevant item r'. Let N denote the total number of items and R the set of all relevant items. Then, similar to Rendle et al. [59], we use the AUC measure to evaluate how hard it was for θ_r to rank the items preferred by a certain user, given by P(u) = 1 / (|R|(N − |R|)) · Σ_{r ∈ R} Σ_{r' ∈ {1,...,N}\R} δ(r < r') (3) Here, |R|(N − |R|) denotes all possible pairs of relevant and irrelevant items. We acknowledge that various metrics like NDCG, F1, precision and recall have been used to measure the ranking ability of recommendation models. However, these metrics place significant importance on the outcomes of the top-k items in the list and completely ignore the tail. For identifying weak users, we require a metric consistent under sampling, i.e.,
if a recommendation model tends to give better recommendations than another on average across all the data, it should still tend to do so even if we only look at a smaller part of the data. The aim of our framework is a clear task distribution. The performance of top-k metrics varies with k, and this might raise uncertainty as k varies across users and platforms. Nevertheless, AUC is the only metric that remains consistent under sampling, and as k reduces, all top-k metrics collapse to AUC. For more details, we refer the readers to [60]. Past works [13] have shown that active users who provide more ratings receive better recommendations than inactive users on average. However, only a few inactive users might receive irrelevant recommendations individually (Fig. 3). Thus, we evaluate each user's activity. Let a user u have rated |R| items out of a total of N items. Then the sparsity index S_I associated with a given user u can be calculated as: S_I(u) = |R| / N (4) If this value falls above a certain threshold t_s, the user is considered inactive. Combining this with the weak user identification, we obtain, Definition 3.1. Given a dataset D and a recommendation model θ_r, we say that a user u_m is extremely weak if the likelihood of θ_r being able to rank the relevant items above the irrelevant items is below t_p and the rating vector [r_mn] has extremely high sparsity, i.e., above t_s: P(u_m) ≤ t_p && S_I(u_m) > t_s (5) It is important to note that a higher AUC value implies better performance, and the value always lies between 0 and 1.
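The selection rule of Eqs. 3-5 can be sketched in a few lines; the function names and the rank-based interface below are our own assumptions, not the paper's code.

```python
def user_auc(relevant_ranks, n_items):
    """Eq. 3: fraction of (relevant, irrelevant) pairs where the
    relevant item is ranked above the irrelevant one (rank 1 = best)."""
    relevant = set(relevant_ranks)
    irrelevant = [r for r in range(1, n_items + 1) if r not in relevant]
    pairs = len(relevant) * len(irrelevant)  # |R| * (N - |R|)
    hits = sum(1 for r in relevant for rp in irrelevant if r < rp)
    return hits / pairs

def sparsity_index(n_rated, n_items):
    """Eq. 4: S_I(u) = |R| / N."""
    return n_rated / n_items

def is_weak(relevant_ranks, n_items, t_p, t_s):
    """Definition 3.1 / Eq. 5: low per-user AUC AND sparsity index above t_s."""
    return (user_auc(relevant_ranks, n_items) <= t_p
            and sparsity_index(len(relevant_ranks), n_items) > t_s)
```

A user whose few relevant items all sit at the bottom of the ranking gets a low P(u) and, if the sparsity test also fires, is routed to the LLM.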
Further, we use t_s = avg(S_I(D)), the average sparsity of all users in D, i.e., avg(S_I(D)) = (1/M) Σ_{j=1}^{M} S_I(u_j), for determining this threshold. 3.3 Designing Natural Language Instructions for Ranking Closest to our work, Hou et al. [27] formalized the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and using the items retrieved by a traditional RS as candidate items. While we also design conditional ranking tasks, our approach differs significantly from theirs: instead of using LLMs as a re-ranking agent for all users, we instruct the LLM with the preferences of weak users (sorted in decreasing order of preference). This technique is detailed below. For each user, we use in-context learning to instruct the LLM about user preferences as conditions and assign it the task of ranking the candidate items. For a user u, let H_u = {i_1, i_2, ..., i_n} denote the user interaction history sorted in decreasing order of preference and C_u = {i_1, i_2, ..., i_j} be the candidate items to be ranked. Then, each instruction can be generated as a sum of conditions and candidate items, i.e., I_u = H_u + C_u (6) In-context learning: We use in-context learning to provide a demonstration of the user's preferences to the LLM using certain examples. As suggested by Hou et al. [27], providing examples of other users may introduce extra noise if a user has different preferences. Therefore, we sort every weak user's preferences based on explicit user ratings.
For example, \"User {user_id} liked the following movies in decreasing order of preference where the topmost item is the most preferred one: 1. Harry Potter, 2. Jurassic Park ...\". This forms the condition part of the instruction. We then select the items that served as test items for the recommendation models as candidate items and instruct the LLM to rank them in decreasing order of preference as \"Now, rank the following items in decreasing order of preference such that the topmost movie should be the most preferred one: Multiplicity, Dune ...\". It is important to note that while the presentation order in the conditions plays a significant role in demonstrating user preferences to the LLM, we deliberately shuffle the candidate items to test the ability of the LLM to rank correctly. Since LLMs can generate items outside the given set, we specifically instruct the model to restrict recommendations to the candidate set. Fig. 2 shows the final template of the instruction given to the LLM for a particular user. We use the same template for all identified weak users to contextualize their past interactions into a ranking task. 3.4 Our Framework This section discusses the workflow adopted by our framework, as depicted in Fig. 1 and the corresponding Algorithm 1. Initially, the model takes as input the training dataset D_train and test dataset D_test, a set of users U, a recommendation model θ_r, a large language model θ_l, and two thresholds: the sparsity threshold t_s and the performance threshold t_p, which depict the minimum sparsity and performance values for a user to be classified as a strong user.
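As a concrete illustration, the condition-plus-candidates instruction (Eq. 6) can be assembled roughly as follows; the function and its wording are our own sketch of the template in Fig. 2, not the paper's exact prompt.

```python
import random

def build_instruction(user_id, history, candidates, seed=0):
    """Contextualize a weak user's history as a ranking task (Eq. 6).
    `history` is assumed pre-sorted in decreasing order of preference;
    the candidates are deliberately shuffled so the LLM cannot exploit
    their presentation order."""
    conditions = (
        f"User {user_id} liked the following movies in decreasing order "
        "of preference where the topmost item is the most preferred one: "
        + ", ".join(f"{k}. {title}" for k, title in enumerate(history, 1))
    )
    shuffled = candidates[:]
    random.Random(seed).shuffle(shuffled)
    task = (
        "Now, rank the following items in decreasing order of preference "
        "such that the topmost movie should be the most preferred one. "
        "Only recommend items from this candidate set: " + ", ".join(shuffled)
    )
    return conditions + "\n" + task
```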
It is important to note that splitting the data will not yield a mutually exclusive set of users in both sets, but the item ratings for each user in D_train will differ from those in D_test. Figure 2: Instruction template for contextualizing interaction histories of weak users. Table 1: Dataset statistics.
                 ML-1M      ML-100k   Book-Crossing
# Users          6,041      943       6,810
# Items          3,952      1,682     9,135
# Interactions   1,000,209  100,000   114,426
Sparsity         95.81%     93.7%     99.82%
Domain           Movies     Movies    Books
The algorithm begins by training the recommendation model θ_r on the training set D_train, which provides ranked items for all users. Using D_test, we test the ranking ability of the model for each user by evaluating P(u_m) using Eq. 3. Further, each user is also assigned a sparsity score S_I(u) evaluated using Eq. 4. If P(u) is less than t_p and the sparsity index S_I(u) for a particular user falls above t_s, the user is termed a weak user. While previous works have shown that, on average, inactive users receive poor performance, we pinpoint weak users by evaluating both sparsity and performance. For all such weak users, we convert rating histories from D_train into conditions H_u using in-context learning and use test items as candidate items C_u for testing purposes. However, in practice, these candidate items can be replaced by unobserved items.
The final instructions are generated by combining conditions and candidate items as depicted in Eq. 6. These instructions are given to the LLM, which provides a ranked list of items for each user. For all the strong users, the recommendations presented are the ones ranked by the conventional recommendation model, while the weak users receive the final ranked lists generated by the LLM.
Algorithm 1 Hybrid LLM-RecSys Algorithm for Ranking
1: Input: D_train: training dataset; D_test: test dataset; U: set of users; S_I: sparsity index for all users; θ_r: recommendation algorithm; θ_l: large language model; t_s: sparsity threshold; t_p: performance threshold.
2: Output: ranked_pred_strong: ranked lists of items for strong users; ranked_pred_weak: ranked lists of items for weak users.
3: ranked_pred ← θ_r(D_train)
4: for each user u_m ∈ U do
5:    Calculate P(u_m) using Eq. 3
6:    Calculate S_I(u_m) using Eq. 4
7:    if P(u_m) < t_p && S_I(u_m) > t_s then
8:       U_weak ← u_m
9:    else
10:      U_strong ← u_m
11:      ranked_pred_strong ← ranked_pred[u_m]
12:   end if
13: end for
14: for each u_i ∈ U_weak do
15:   Generate instruction I_{u_i} using Eq. 6
16:   ranked_list_{u_i} = θ_l(I_{u_i})
17:   ranked_pred_weak ← ranked_list_{u_i}
18: end for
4 EXPERIMENTS This section discusses our experimental setup with details of the datasets and models used, followed by the implementation details of these models and the various metrics used. We finally present empirical results and a comparative analysis of various recommendation models and LLMs. 4.1 Experimental Setup 4.1.1 Datasets. To test the effectiveness of our framework, we conducted experiments on three real-world datasets: ML-1M (https://grouplens.org/datasets/movielens/1m/), ML-100k (https://grouplens.org/datasets/movielens/100k/), and Book-Crossing (B-C) (http://www2.informatik.uni-freiburg.de/~cziegler/BX/). Both ML-100k and ML-1M are movie-rating datasets, and Book-Crossing is a book-rating dataset.
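Algorithm 1 can be sketched in a few lines of Python; `rs_rank`, `llm_rank`, and the dictionary inputs are hypothetical stand-ins for the trained RS θ_r, the LLM θ_l, and the precomputed P(u) and S_I(u) scores.

```python
def hybrid_rank(users, rs_rank, llm_rank, auc, sparsity, t_p, t_s):
    """Sketch of Algorithm 1: route each user to the RS or the LLM.
    `auc` maps a user to P(u) (Eq. 3); `sparsity` maps a user to S_I(u)
    (Eq. 4); `rs_rank` / `llm_rank` return a ranked item list for a user."""
    strong_lists, weak_lists = {}, {}
    for u in users:
        if auc[u] < t_p and sparsity[u] > t_s:   # weak user (Def. 3.1)
            weak_lists[u] = llm_rank(u)          # LLM re-ranks the candidates
        else:
            strong_lists[u] = rs_rank(u)         # RS output is kept as-is
    return strong_lists, weak_lists
```

Only the weak minority ever reaches the LLM, which is what keeps the query volume, and hence the cost, bounded.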
We select three datasets with varying levels of sparsity to evaluate robustness to data sparsity: ML100k has the least sparsity and Book-Crossing the highest (for exact values, refer to Table 1). All these datasets contain explicit user preferences in the form of ratings, ranging from 0-5 for the movie datasets and 0-10 for the book dataset. We do not filter out users from ML1M and ML100k, as each user has rated at least 20 movies in both datasets. For consistency, we filter out users with fewer than 20 ratings from the Book-Crossing dataset. While both movie-rating datasets support clustering based on sensitive attributes like age and gender, this paper aims to boost performance on all weak users irrespective of sensitive features. Thus, following the protocol adopted by [13], we divided users based on their activity, i.e., the number of items rated. Any user who has rated fewer items than a certain threshold t_s is termed an inactive user, and all those above this threshold are active users. We use the average number of items rated across all users as this threshold; it can always be set to a different value per application. Table 1 presents the statistics of all three datasets.

4.1.2 Baselines and Models. Our hybrid framework uses both traditional recommendation systems and LLMs. Thus, we include two different types of recommendation models: (i) collaborative-filtering based: Neural Collaborative Filtering (NCF) [61] and ItemKNN [62]; and (ii) a learning-to-rank model: Bayesian Personalized Ranking (BPR) [59]. While these models identify weak users and generate candidate items, LLMs are further deployed to improve the performance for such users. We use both an open-source (Mixtral-8x7b-instruct) and a closed-source (GPT-3.5-turbo) LLM to test the capability of the proposed framework.
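The activity split above (threshold t_s set to the mean number of rated items) can be sketched in a few lines; `split_by_activity` and the toy counts are illustrative assumptions, not the paper's code:

```python
def split_by_activity(ratings_per_user):
    # t_s = average number of items rated across all users (the paper's
    # default choice; any application-specific value can be substituted).
    t_s = sum(ratings_per_user.values()) / len(ratings_per_user)
    inactive = {u for u, n in ratings_per_user.items() if n < t_s}
    active = set(ratings_per_user) - inactive
    return t_s, active, inactive

# Toy counts: the mean is 100, so u1 and u3 fall below it (inactive).
t_s, active, inactive = split_by_activity({"u1": 25, "u2": 200, "u3": 30, "u4": 145})
```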
It is important to note that the collaborative filtering models are mostly used to capture the long-term preferences of users.

Conference acronym 'XX, June 03-05, 2018, Woodstock, NY. Trovato et al.

Figure 3: AUC vs. sparsity scatter plots illustrating the performance (measured using AUC) of all users in the ML1M, ML100k, and Book-Crossing (B-C) datasets under three different algorithms: (a) ItemKNN (ML1M), (b) NCF (ML1M), (c) BPR (ML1M), (d) ItemKNN (ML100k), (e) NCF (ML100k), (f) BPR (ML100k), (g) ItemKNN (B-C), (h) NCF (B-C), (i) BPR (B-C).

We acknowledge that existing works (refer to Section 2) have used sequential recommendation models for comparing the performance of LLMs. These works also use recommendation models as candidate-retrieval models and then use LLMs to re-rank the candidate items. However, sequential models are designed to predict the next item from the recently bought items, whereas we test our framework mainly on long-term user preferences and use collaborative filtering models not only for candidate-item retrieval but also for recommending the top-k items to strong users. We believe that any existing model adopting this retrieval-reranker strategy can adopt our framework. Due to space constraints, we present an evaluation of our framework only on NCF, ItemKNN, and BPR. In line with the existing literature [27, 28], we design instructions by randomly sampling 20 rated items to demonstrate the user's preferences to the LLM. Furthermore, while existing works do not discuss the responsible adaptation of LLMs, their underlying retrieval-reranker task remains consistent, so they too can use our framework.

4.1.3 Implementation details. For ease of reproducibility, we use the open-source recommendation library RECBOLE [63] to implement all recommendation models, and API calls to access the LLMs⁴⁵. Each dataset is split into train (80%), test (10%), and validation (10%) sets.
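The instruction design described above (randomly sampling 20 rated items as a preference demonstration) can be sketched as follows; the prompt wording, the `build_instruction` name, and the toy titles are our own illustrative assumptions, not the paper's exact template:

```python
import random

def build_instruction(history, candidates, k=20, seed=0):
    # Sample up to k (item, rating) pairs to demonstrate preferences,
    # then list the RS-retrieved candidate items to be re-ranked.
    rng = random.Random(seed)
    demo = rng.sample(history, min(k, len(history)))
    lines = [f"- {title}: rated {rating}/5" for title, rating in demo]
    return ("The user rated the following movies:\n" + "\n".join(lines)
            + "\nRank these candidate items for the user: "
            + ", ".join(candidates))

prompt = build_instruction([("Heat", 5), ("Alien", 4), ("Up", 3)], ["Se7en", "Cars"])
```

With fewer than 20 rated items, the whole history is used, matching the (near) cold-start setting the cited works target.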
We use the validation set to carefully tune all recommendation models' hyperparameters. For BPR, we search for the optimal learning rate in [5e-5, 1e-4, 5e-4, 7e-4, 1e-3, 5e-3, 7e-3], and for NCF in [5e-7, 1e-6, 5e-6, 1e-5, 1e-4, 1e-3]. Additionally, for NCF we use [64, 32, 16] as the MLP hidden sizes for all layers and search for the optimal dropout probability within [0.0, 0.1, 0.3]. The two hyperparameters for ItemKNN are k (neighborhood size), searched in [10, 50, 100, 200, 250, 300, 400], and shrink (normalization parameter used in calculating the cosine distance), searched in [0.0, 0.1, 0.5, 1, 2]. We adopt the protocol presented by the recently released toolkit RGRecSys [64] for evaluating robustness to sub-populations using NDCG and AUC. We use AUC to measure the hardness associated with each user for a given recommendation model because of the consistency property of AUC. We use the popular CatBoost⁶ library, which offers an AUC implementation for ranking, and also report final NDCG@10 scores. Furthermore, we set the temperature to 0 in GPT-3.5-turbo to minimize the generation of out-of-list items and hallucinations. However, in our observations, with the temperature set to 0, Mixtral-8x7b-instruct outputs the list in the same order in which it was given as input. Hence, we set the temperature to 1 and removed the items that were not originally present in the candidate list. We now discuss our empirical inferences as we conduct experiments following these details.

⁴ https://platform.openai.com/docs/api-reference
⁵ https://www.llama-api.com/
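The post-processing step for Mixtral's temperature-1 output (dropping out-of-list items) can be sketched as below; appending any candidates the LLM omitted is our own assumption, made so that every candidate still receives a rank:

```python
def clean_llm_ranking(llm_items, candidates):
    # Keep the LLM's order but drop items that were not in the original
    # candidate list (hallucinated / out-of-list items). Appending the
    # omitted candidates at the end is an assumption of this sketch.
    allowed = set(candidates)
    kept = [i for i in llm_items if i in allowed]
    seen = set(kept)
    return kept + [c for c in candidates if c not in seen]

# "x" was never a candidate, so it is removed; "c" was omitted, so it
# is appended after the items the LLM did rank.
cleaned = clean_llm_ranking(["b", "x", "a"], candidates=["a", "b", "c"])
```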
Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations. Conference acronym 'XX, June 03-05, 2018, Woodstock, NY.

Table 2: Overall comparison of ranking quality, measured using AUC and NDCG@10, for two collaborative-filtering models (Neural Collaborative Filtering, ItemKNN) and one learning-to-rank model (BPR), compared to their usage within our framework along with one closed-source LLM (GPT-3.5-turbo) and one open-source LLM (Mixtral-8x7b-instruct).

ML1M
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.47032 | 0.23776 | 0.66792 | 0.58226 |
| ItemKNN + GPT-3.5-turbo | 0.58142 | 0.51776 | 0.82643 | 0.70352 |
| ItemKNN + Mixtral-8x7b-instruct | 0.56035 | 0.51708 | 0.70147 | 0.69276 |
| NCF | 0.47805 | 0.22945 | 0.78795 | 0.59801 |
| NCF + GPT-3.5-turbo | 0.58935 | 0.52122 | 0.80317 | 0.71723 |
| NCF + Mixtral-8x7b-instruct | 0.57211 | 0.52100 | 0.79178 | 0.70741 |
| BPR | 0.57957 | 0.37824 | 0.88833 | 0.73998 |
| BPR + GPT-3.5-turbo | 0.65397 | 0.51997 | 0.90098 | 0.82117 |
| BPR + Mixtral-8x7b-instruct | 0.64174 | 0.51972 | 0.89742 | 0.82998 |

ML100K
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.45616 | 0.24778 | 0.66792 | 0.58226 |
| ItemKNN + GPT-3.5-turbo | 0.59781 | 0.51953 | 0.82643 | 0.82598 |
| ItemKNN + Mixtral-8x7b-instruct | 0.59972 | 0.52327 | 0.70147 | 0.82438 |
| NCF | 0.48311 | 0.25182 | 0.78795 | 0.67734 |
| NCF + GPT-3.5-turbo | 0.60831 | 0.50814 | 0.80317 | 0.82603 |
| NCF + Mixtral-8x7b-instruct | 0.61174 | 0.50903 | 0.79178 | 0.82306 |
| BPR | 0.51629 | 0.17020 | 0.88833 | 0.71944 |
| BPR + GPT-3.5-turbo | 0.6387 | 0.51435 | 0.90098 | 0.82452 |
| BPR + Mixtral-8x7b-instruct | 0.64910 | 0.52135 | 0.89742 | 0.81761 |

Book-Crossing
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.43309 | 0.25909 | 0.75197 | 0.65098 |
| ItemKNN + GPT-3.5-turbo | 0.61629 | 0.49713 | 0.86212 | 0.77101 |
| ItemKNN + Mixtral-8x7b-instruct | 0.55203 | 0.47215 | 0.85904 | 0.76183 |
| NCF | 0.51852 | 0.29004 | 0.78370 | 0.66148 |
| NCF + GPT-3.5-turbo | 0.61946 | 0.50513 | 0.88219 | 0.78133 |
| NCF + Mixtral-8x7b-instruct | 0.59901 | 0.49897 | 0.86254 | 0.78001 |
| BPR | 0.53310 | 0.24426 | 0.80405 | 0.70289 |
| BPR + GPT-3.5-turbo | 0.62145 | 0.50998 | 0.88173 | 0.81933 |
| BPR + Mixtral-8x7b-instruct | 0.61625 | 0.50081 | 0.87798 | 0.80284 |

4.2 Empirical Evaluation

4.2.1 Comparative analysis. The first phase begins by identifying the inactive users. For this, we calculate the average sparsity of all users in the dataset and identify users above this threshold as inactive users.
However, one can use different values for this threshold; e.g., [13] used only the top 20% of the sparsest users when annotating inactive users. We then evaluate the AUC score using Eq. 3 to measure the performance of the RS on all these users. In line with the findings of Li et al. [13], the RS performs significantly better on active users than on inactive users. For an instance-by-instance analysis of every user, we then plot AUC scores against the sparsity index, as shown in Fig. 3, for all three datasets using three different recommendation algorithms: ItemKNN, NCF, and BPR. Compared to the collaborative filtering algorithms ItemKNN and NCF, the overall scatter plot for BPR shows better AUC scores, which follows from the inherent nature of learning-to-rank models like BPR that rank user preferences better. The figure shows that though the RS performs poorly on inactive users on average, not all inactive users receive poor-quality recommendations. We thus use our definition 5 to identify such users and mark them as weak. It might be interesting to explore why not all inactive users receive poor performance; we leave this exploratory study for future work. For the second phase, we design instructions for these weak users using the approach discussed in Section 3. Our results in Table 2 show that LLMs perform significantly well on these users: combining an LLM with the base RS models yields the best results on all three datasets. Our results show improvement in both AUC and NDCG@10 for weak users, demonstrating improved robustness to the sub-population of weak users, which further leads to overall improved ranking quality. We also highlight that previous works like [27, 34] show that closed-source models perform much better than open-source ones.
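The per-user AUC underlying the Fig. 3 scatter plots is a pairwise quantity: the fraction of (relevant, irrelevant) item pairs the model orders correctly. A minimal sketch, assuming `scores` are the model's predicted scores for one user's items (the function name and toy numbers are ours):

```python
def user_auc(scores, relevant):
    # Fraction of (relevant, irrelevant) item pairs where the relevant
    # item receives the higher score: the pairwise indicator averaged
    # over all such pairs.
    pos = [s for item, s in scores.items() if item in relevant]
    neg = [s for item, s in scores.items() if item not in relevant]
    pairs = [(p, n) for p in pos for n in neg]
    return sum(p > n for p, n in pairs) / len(pairs)

# Perfect ranking for this toy user: the one relevant item outscores
# both irrelevant ones, so AUC = 1.0.
auc = user_auc({"a": 0.9, "b": 0.2, "c": 0.4}, relevant={"a"})
```

Being an average over pair-order indicators, this score is invariant to monotone rescaling of the model's scores, which is the consistency property the implementation details appeal to.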
However, understanding the properties of users for which LLMs inherently perform well (as with our mechanism for finding weak users) and responsibly assigning tasks to large models improves performance with open-source as well as closed-source models. Our results show that Mixtral-8x7b-instruct can perform almost equally well on weak users across all datasets and base models. Furthermore, as observed for the ML100k dataset, this open-source model can outperform GPT-3.5-turbo when evaluated on AUC. It should be noted that AUC mainly evaluates the model's discriminatory ability to rank positive items over negative ones, whereas NDCG focuses on the user's satisfaction with the ranked list, considering both relevance and position. The reason for this can be associated with dataset sparsity: as shown in Table 1, the sparsity of the ML100k dataset is lower (≈93%) than that of the other two datasets. While Mixtral is a good choice when datasets are small and more dense, GPT-3.5-turbo performs well for extremely sparse datasets. Yet, in either case, the margin between the two LLMs is not significant, and thus even open-source models can give comparable performance. Nevertheless, using the GPT model yields the best NDCG@10 scores for all datasets.

⁶ https://github.com/catboost/

4.2.2 Reduction in weak user count. To analyze the variation in the count of weak users, we counted the number of weak users identified in the first phase, whose interactions were contextualized and given to LLMs, as the base for comparison with LLMs using the same threshold t_p. We evaluated AUC on the rankings obtained by the LLM for these users. A few users continued to be hard even for the LLM if their AUC lay below t_p. Fig. 4 shows that, when used with large models, the count of weak users in recommendation systems drops significantly.
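The re-evaluation step just described reduces to a filter over the first-phase weak users; a minimal sketch where `llm_auc` is a hypothetical map of per-user AUC scores computed on the LLM's rankings:

```python
def remaining_weak(weak_users, llm_auc, t_p):
    # Users whose LLM-ranked list still scores below t_p remain weak.
    return [u for u in weak_users if llm_auc[u] < t_p]

# Toy run: of three first-phase weak users, only u2 stays below t_p
# after LLM re-ranking, i.e. two out of three are recovered.
still_weak = remaining_weak(["u1", "u2", "u3"],
                            llm_auc={"u1": 0.71, "u2": 0.42, "u3": 0.88},
                            t_p=0.5)
reduction = 1 - len(still_weak) / 3
```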
In highly sparse datasets like Book-Crossing and ML1M, GPT-3.5-turbo reduced the number of weak users by ≈87% and Mixtral-8x7b-instruct by ≈85%. On the contrary, when the dataset is dense, like ML100k, Mixtral-8x7b-instruct can reduce the count by ≈99% and the closed-source model by ≈88%. While the reduction ability of GPT-3.5-turbo remains consistent over all datasets, the open-source model yields better performance for less sparse datasets yet still improves the robustness of the RS to sub-populations. In addition, we noted that a single query takes ≈8 seconds in GPT-3.5-turbo and ≈11 seconds in Mixtral-8x7b-instruct, which shows each user query's high processing time and inference latency. Thus, it is crucial to use these models responsibly by identifying what they are good at. Consider the example of the smallest dataset, ML100k, which consists of 943 users: our strategy for identifying weak users yields only 330 weak users (worst case, under ItemKNN), which leads to an overhead of 2,640 seconds when using GPT-3.5-turbo in addition to the training time of the base RS models, significantly less than the 7,544 seconds required if it were used for all users.

Figure 4: Comparative analysis of the reduction in the count of weak users on (a) ML1M, (b) ML100k, and (c) Book-Crossing.

5 DISCUSSION

In this work, we implemented a novel approach for the responsible adaptation of LLMs for ranking tasks. As suggested by Burnell et al. [65], the involvement of AI in high-stakes decision-based applications (like ranking models for job recommendations) requires instance-by-instance evaluation instead of aggregated metrics for designing responsible AI models. Our results in Fig. 3 show that many inactive users receive poor-quality recommendations from traditional RSs.
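The ML100k cost comparison quoted in Section 4.2.2 reduces to quick arithmetic; a small sanity check using only the numbers reported in the text (≈8 s per GPT-3.5-turbo query, 943 users, 330 weak users in the worst case):

```python
# Numbers from the text, ML100k worst case (ItemKNN).
per_query_s = 8
all_users, weak_users = 943, 330
llm_time_weak_only = weak_users * per_query_s   # LLM overhead, weak users only
llm_time_everyone = all_users * per_query_s     # if every user were queried
saved_fraction = 1 - weak_users / all_users     # ~65% of LLM queries avoided
```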
Some inactive users still receive recommendations comparable to those of active users; this might be because their high similarity to active users allows existing models to still capture their preferences effectively. We leave this as an exploratory study for future work. However, the overall performance scores on active users remain better than those on inactive users. Building upon these weak instances, our framework emphasizes instance-by-instance evaluation of users. While we group users based on activity and then evaluate the performance on inactive users, our approach pinpoints weak users whose preferences remain hard for traditional RSs to capture effectively. We believe that our framework inherently addresses the issue of group fairness. Though we do not group users based on demographics, our framework can be extended to such scenarios: instead of activity, user demographics can be used to group users, and within the marginalized groups, the interaction histories of users who receive poor performance can be contextualized and given to the LLM. Irrespective of demographics, however, our framework mainly addresses robustness to data sparsity and to the sub-population of weak users, which inherently tackles the fairness issue. The framework further helps reduce the number of queries that need to be given to the LLM. Since most existing works (refer to Section 2) randomly select a few users from the dataset to evaluate the performance of LLMs, our framework provides a systematic way of selecting the users for which LLMs should be used. By leveraging the capabilities of LLMs for weak users, our work emphasizes the importance of low-cost traditional RSs as well. We also observed that in some cases (Fig. 4), the LLM might not perform well on every weak user.
This opens up new research opportunities for understanding the similarities and differences among the identified weak users on which the LLM does and does not perform well. Further, one can devise various prompting strategies to help the model capture the preferences of extremely weak users effectively; past works have developed many such strategies, which can all be tested to observe which remain effective for which types of users. Nevertheless, the main goal of this paper remains to emphasize the importance of responsible adaptation of LLMs by strategically selecting tasks for which these models inherently perform well. It is also important to note that for weak users, we still obtain candidate items from the traditional RS, as has been done in most past works (refer to Section 2). This reduces the candidate set from thousands of unrated items to a few, which are then given to the LLM to rank. This approach ensures that the results obtained by the RS for weak users are utilized for generating candidate items instead of being discarded directly, thus maximizing the usage of RSs even for weak users; however, it has a limitation. Traditional RSs perform worse on these users, and the candidate items might not capture the true preferences of weak users. When we give these candidate items to the LLM for ranking, the results might deviate further from the true preferences. This may be one reason that LLMs do not perform well on all weak users. One can thus further investigate the relation of candidate items to the performance of LLMs on certain users: if the candidate items are already non-preferred by the user, the LLM might inherently find it difficult to perform well. Our work thus represents a foundational step towards responsibly adapting LLMs while emphasizing the importance of traditional models, particularly focusing on the challenges posed by sub-populations with sparse interaction histories.
Our instance-by-instance evaluation approach, inspired by the imperative highlighted by recent studies in high-stakes decision-based AI applications, underscores the necessity of a nuanced understanding of individual user needs and preferences. While our framework emphasizes the importance of leveraging traditional recommendation systems alongside LLMs, we acknowledge the need to further explore the performance variations among weak users and the impact of candidate item selection on LLM effectiveness. Moving forward, our work lays the groundwork for continued research into refining the adaptation of LLMs, ensuring their responsible deployment across diverse user populations and application scenarios.", + "additional_graph_info": { + "graph": [ + [ + "Kirandeep Kaur", + "Sujit Gujar" + ] + ], + "node_feat": { + "Kirandeep Kaur": [ + { + "url": "http://arxiv.org/abs/2405.00824v1", + "title": "Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations", + "abstract": "Conventional recommendation systems (RSs) are typically optimized to enhance\nperformance metrics uniformly across all training samples.\n This makes it hard for data-driven RSs to cater to a diverse set of users due\nto the varying properties of these users. The performance disparity among\nvarious populations can harm the model's robustness with respect to\nsub-populations. While recent works have shown promising results in adapting\nlarge language models (LLMs) for recommendation to address hard samples, long\nuser queries from millions of users can degrade the performance of LLMs and\nelevate costs, processing times and inference latency. This challenges the\npractical applicability of LLMs for recommendations. 
To address this, we\npropose a hybrid task allocation framework that utilizes the capabilities of\nboth LLMs and traditional RSs. By adopting a two-phase approach to improve\nrobustness to sub-populations, we promote a strategic assignment of tasks for\nefficient and responsible adaptation of LLMs. Our strategy works by first\nidentifying the weak and inactive users that receive a suboptimal ranking\nperformance by RSs. Next, we use an in-context learning approach for such\nusers, wherein each user interaction history is contextualized as a distinct\nranking task and given to an LLM. We test our hybrid framework by incorporating\nvarious recommendation algorithms -- collaborative filtering and\nlearning-to-rank recommendation models -- and two LLMs -- both open and\nclose-sourced. Our results on three real-world datasets show a significant\nreduction in weak users and improved robustness of RSs to sub-populations\n$(\\approx12\\%)$ and overall performance without disproportionately escalating\ncosts.", + "authors": "Kirandeep Kaur, Chirag Shah", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.HC" + ], + "main_content": "INTRODUCTION Recommendation systems (RSs) have become an integral part of numerous online platforms, assisting users in navigating vast amounts of content to relieve information overload [1]. While Collaborative Filtering based RSs [2] primarily rely on user-item interactions to predict users\u2019 preferences for certain candidate items, the utilization of language in recommendations has been prevalent for decades in hybrid and content-based recommenders, mainly through item descriptions and text-based reviews [3]. Furthermore, conversational recommenders [4] have highlighted language as a primary mechanism for allowing users to naturally and intuitively express their preferences [5]. 
Deep recommendation models are trained under the Empirical Risk Minimization (ERM) framework, which minimizes the loss function uniformly over all training samples. Such models, however, fail to cater to a diverse set of sub-populations, affecting robustness [6-12]. Empirical analysis conducted by Li et al. [13] shows that active users who have rated many items receive better recommendations on average than inactive users. This inadvertent disparity in recommendations requires careful scrutiny to ensure equitable recommendation experiences for all users [14].

arXiv:2405.00824v1 [cs.IR] 1 May 2024

On the other hand, Large Language Models (LLMs) like GPT [15], LLaMA [16], LaMDA [17], and Mixtral [18] can effectively analyze and interpret textual data, thus enabling a better understanding of user preferences. These foundation models demonstrate remarkable versatility, adeptly tackling various tasks across multiple domains [19-21]. However, the field of recommendations is highly domain-specific and requires in-domain knowledge. Consequently, many researchers have sought to adapt LLMs for recommendation tasks [22-25]. Authors in [25] outline four key stages of integrating LLMs into the recommendation pipeline: user interaction, feature encoding, feature engineering, and scoring/ranking. The purpose of using LLMs as a ranking function aligns closely with general-purpose recommendation models. The transition from traditional library-based book searches to evaluating various products, job applicants, opinions, and potential romantic partners signifies an important societal transformation, emphasizing the considerable responsibility incumbent upon ranking systems [26]. Existing works that deploy LLMs for ranking [5, 27-34] have proven the excellence of LLMs as zero-shot or few-shot re-rankers, demonstrating their re-ranking capabilities with frozen parameters.
These works use traditional RSs as candidate-item retrieval models to limit the number of candidate items that need to be ranked by the LLM, owing to its limited context window. Furthermore, Hou et al. [27] and Xu et al. [28] interpret user interaction histories as prompts for LLMs and show that LLMs perform well only when the interaction length is up to a few items, demonstrating the ability of LLMs for (near) cold-start users. Since adapting LLMs can raise concerns around economic and efficiency factors, most of these works train the RS on entire datasets but randomly sample the interaction histories of some users to evaluate the performance of LLMs, questioning the generalizability of the results to all users. This leads us to two important research questions.

• RQ1: Though LLMs have shown remarkable ranking performance even in zero-shot settings, how can we reduce the high costs associated with adapting LLMs to support practical applicability?

• RQ2: Conventional recommendation systems are cost-effective and can perform well on most users, as shown by previous works; how can we prevent performance degradation on sub-populations?

To address these RQs, we propose a task allocation strategy that leverages the capabilities of both the LLM and the RS in a hybrid framework (Fig. 1). Our strategy operates in two phases based on the responsible and strategic selection of tasks for the cost-effective usage of LLMs. First, we identify the users with highly sparse interaction histories on whom the ranking performance of the RS is below a certain threshold t_p; all such users are termed weak users. In the second phase, the interaction histories of weak users are contextualized using in-context learning to demonstrate user preferences as instruction inputs for the LLM. While the strong users receive the final recommendations retrieved by the RS, weak users receive the recommendations ranked by the LLM if the quality of the LLM's ranked list is better than the RS's.
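The second-phase hand-off described above can be sketched as a tiny selection rule; `quality` is a hypothetical scoring callable (e.g., AUC or reciprocal rank of the relevant item), not an API from the paper:

```python
def final_list_for_weak_user(rs_list, llm_list, quality):
    # Give the weak user the LLM ranking only when it measurably beats
    # the RS ranking; otherwise keep the RS output.
    return llm_list if quality(llm_list) > quality(rs_list) else rs_list

# Toy quality: reciprocal rank of the single relevant item "i2".
quality = lambda ranked: 1.0 / (1 + ranked.index("i2"))
chosen = final_list_for_weak_user(["i1", "i2", "i3"], ["i2", "i1", "i3"], quality)
```

The comparison keeps the framework conservative: an LLM ranking that does not improve on the RS is simply discarded, so a weak user is never made worse off by the second phase.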
We test our framework with collaborative filtering and learning-to-rank recommendation models, and our results show the efficacy of our strategy, with open-source as well as closed-source LLMs, in boosting the model's robustness to sub-populations and data sparsity and in improving the quality of recommendations. For reproducibility and to support the research community, our code is available at https://anonymous.4open.science/r/resp-llmsRS/. In short, the following are our contributions in this paper.

• We introduce a novel hybrid task allocation strategy that combines the strengths of LLMs and traditional RSs to improve robustness to sub-populations and data sparsity.

• Our unique method for pinpointing weak users, based upon two criteria (user activity and received recommendation quality below a set threshold), facilitates interventions using LLMs for equitable recommendations.

• Our proposed framework improves the robustness of traditional recommendation models by reducing the weak user count, enhancing recommendation quality, and addressing the high costs associated with adapting LLMs.

• Our experiments, on both closed-source and open-source LLMs, show the efficacy of our framework in improving model robustness to sub-populations (≈12%) for varying levels of sparsity and in significantly reducing the count of weak users.

2 RELATED WORK

Robustness in machine learning (ML) targets developing models capable of withstanding the challenges posed by imperfect data in diverse forms [35]. Within the paradigm of recommendations, some existing works developed models resilient to shifts in popularity distribution [36-38], distribution disparity between train and test datasets [39, 40], and adversarial and data-poisoning attacks [41-45]. Our work aims to tackle the recommendation model's robustness to data sparsity [46] and sub-populations [47]. In their research, Li et al.
[13] illustrated that RSs excel at catering to active users but fall short of meeting the needs of inactive ones. To address this inequality, they proposed a re-ranking technique that reduced the disparity between active and inactive users. Their results show that such post-processing techniques [48-50] can either harm the average performance on advantaged users to reduce the disparity or reduce the overall utility of the models. Though in-processing techniques [51-53] for improving equitable recommendations across various sub-populations can tackle the fairness-utility trade-off, simply adding a regularizer term results in sub-optimal performance [54]. Most of these works have shown disparity and evaluated existing models by grouping users based on their activity, demographics, and preferences. Similarly, Wen et al. [55] developed a Streaming-Distributionally Robust Optimization (S-DRO) framework to enhance performance across user subgroups, particularly by accommodating their preferences for popular items. Different from these, our work first builds upon the existing literature that highlights the issue of performance disparities between active and inactive users, and then indicates that though inactive users receive lower-quality recommendations on average, this degradation affects only a subset of inactive users rather than all of them. Unlike these works, our framework identifies weak users: inactive individuals whose preferences traditional recommendation systems struggle to capture effectively. Many researchers have turned to LLMs to address some of these problems because, in recent years, LLMs have proven to be excellent re-rankers and have often outperformed existing SOTA recommendation models in zero-shot and few-shot settings without requiring fine-tuning.
For example, Gao et al. [56] proposed an enhanced recommender system that integrates ChatGPT with a traditional RS, synthesizing user-item history, profiles, queries, and dialogue to provide personalized explanations of the recommendations through iterative refinement based on user feedback. AgentCF [31], designed to rank items for users, treats users and items as agents and optimizes their interactions collaboratively: user agents capture user preferences, while item agents reflect item characteristics and potential adopters' preferences. They used collaborative memory-based optimization to ensure the agents align better with real-world behaviours. While the retrieval-ranker framework in [29] remains similar to previous works, the authors generate instructions with key values obtained from both users (e.g., gender, age, occupation) and items (e.g., title, rating, category). Despite the excellence of LLMs as ranking agents, adapting LLMs can involve processing lengthy queries containing numerous interactions from millions of users, and each query can raise various economic and latency concerns. Thus, all these works randomly select a few users from the original datasets to evaluate the performance of LLMs. In practice, the user base can involve many more users, which questions the practical applicability of large models for recommendations. However, some recent studies have shown the efficacy of LLMs as re-ranking agents for queries with shorter interaction histories, as opposed to lengthy instructions comprising hundreds of interactions. For example, Hou et al. [27] trained recommendation systems to generate candidate item sets and then used user-item interactions to develop instructions. The authors sorted users' rating histories based on timestamps and used in-context learning to design recency-focused prompts. They prompted LLMs to re-rank the candidate items retrieved by the recommendation systems.
Their analysis showed decreased performance of LLMs when the candidate item set had more than 20 items. ProLLM4Rec [28] adopted a unified framework for prompting LLMs for recommendation. The authors integrated existing recommendation systems and works that use LLMs for recommendations within a single framework and provided a detailed comparison of the capabilities of LLMs and recommendation systems. Their empirical analysis showed that while state-of-the-art sequential recommendation models like SASRec [57] improve with a growing number of interactions, LLMs start to perform worse as the number of interactions grows. Furthermore, both of these works sampled some users to evaluate the performance of LLMs due to the high adaptation costs. To investigate the effectiveness of various prompting strategies, Sanner et al. [5] focused on a (near) cold-start scenario where minimal interaction data is available. They used various prompting techniques to provide a natural-language summary of preferences, enhancing user satisfaction by offering a personalized experience. By exploiting rich positive and negative descriptive content and item preferences within a unified framework, they compared the efficacy of prompting paradigms with large language models against collaborative filtering baselines that rely solely on item ratings. In summary, past works suggest that despite the high costs associated with adapting LLMs for recommendations, these models can outperform existing recommendation models significantly. Moreover, we acknowledge that the literature shows the contrasting capabilities of both RSs and LLMs: RSs fail to perform well on inactive users due to sparse interaction vectors, whereas LLMs can be prompted to cater to inactive users in near cold-start settings without requiring any fine-tuning. Building upon these crucial insights, our framework first identifies the weak users whose preferences the RS finds hard to capture accurately.
We then use in-context learning to prompt LLMs to generate recommendations for such users. While past works like ProLLM4Rec by [28], dynamic reflection with divergent thinking within a retriever-reranker by [33], recency-focused prompting by [27], and aligning ChatGPT with conventional ranking techniques such as point-wise, pair-wise, and list-wise ranking by [58] all propose different techniques for designing prompt variations, our main contribution lies in the responsible task allocation within recommendation systems, and all such techniques can be used within our framework for designing prompts. In the next section, we discuss our methodology in detail.

3 METHODOLOGY

We begin by providing a formal definition of the problem. We then discuss our framework, which adopts a hybrid structure by leveraging the capabilities of both traditional RSs and LLMs. For this, we first identify users for whom RSs do not perform well and then leverage LLMs for these users by demonstrating user preferences through in-context learning.

3.1 Problem Formulation

Consider a recommendation dataset $\mathcal{D}$ with $k$ data points. Let $U = \{u_1, u_2, \ldots, u_M\}$ be the set of users, where $|U| = M$ is the number of users in $\mathcal{D}$. Let $I = \{i_1, i_2, \ldots, i_N\}$ be the set of all items, where $|I| = N$ is the number of items in $\mathcal{D}$.

$$\mathcal{D} = \{(u_m, i_n, r_{mn}) : m = 1, 2, \ldots, M;\; n = 1, 2, \ldots, N\} \quad (1)$$

Here, the triplet $d_{mn} = (u_m, i_n, r_{mn})$ represents one data point where a user $u_m$ provided a rating $r_{mn}$ to an item $i_n$.
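As an illustration of this formulation, the dataset $\mathcal{D}$ of Eq. 1 can be held as a list of (user, item, rating) triplets; the user and item names in the sketch below are made up for illustration, not entries from any of the paper's datasets.

```python
from collections import namedtuple

# One data point d_mn = (u_m, i_n, r_mn) from Eq. 1.
Triplet = namedtuple("Triplet", ["user", "item", "rating"])

# A toy dataset D with illustrative names.
D = [
    Triplet("u1", "i1", 5),
    Triplet("u1", "i3", 2),
    Triplet("u2", "i2", 4),
]

users = {d.user for d in D}  # the set U, |U| = M
items = {d.item for d in D}  # the set I, |I| = N
```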
Now, if a user $u_m$ has rated a set of items, let $[r_{mn}]_{n=1}^{N}$ denote the rating vector, whose entries are explicit rating values ranging from 1 to 5 if the user provided a rating and 0 otherwise. Additionally, $\theta_r$ represents the conventional recommendation model. The first step in solving the problem is to determine the criteria for categorizing a user as weak. This involves ranking users based on the RS performance on each of them. The goal is then to understand user characteristics so as to categorize extremely weak users. For each weak user, we contextualize the interaction history as a distinct recommendation task and finally allocate these tasks to the LLM.

Conference acronym 'XX, June 03-05, 2018, Woodstock, NY. Trovato et al.

3.2 Identifying Weak Users

We consider two criteria for identifying weak users for a recommendation model $\theta_r$. First, given $K$ users and their associated rating vectors, we evaluate how well the model could rank a user's relevant items, often termed positive items, above the irrelevant or negative items. Let $r$ denote the rank of a relevant item and $r'$ the rank of an irrelevant item. Then,

$$\delta(r < r') \quad (2)$$

denotes an indicator function that outputs one if the rank $r$ of the relevant item is higher than the rank $r'$ of the irrelevant item. Let $N$ denote the total number of items and $R$ be the set of all relevant items. Then, similar to Rendle et al.
[59], we use the AUC measure to evaluate how hard it was for $\theta_r$ to rank the items preferred by a given user:

$$P(u) = \frac{1}{|R|(N-|R|)} \sum_{r \in R} \;\sum_{r' \in \{1,\ldots,N\} \setminus R} \delta(r < r') \quad (3)$$

Here, $|R|(N-|R|)$ is the number of all possible pairs of relevant and irrelevant items. We acknowledge that various metrics like NDCG, F1, precision, and recall have been used to measure the ranking quality of recommendation models. However, these metrics place significant importance on the outcomes of the top-$k$ items in the list and completely ignore the tail. For identifying weak users, we require a metric that is consistent under sampling, i.e., if a recommendation model tends to give better recommendations than another on average across all the data, it should still tend to do so even if we only look at a smaller part of the data. The aim of our framework is a clear task distribution. The performance of top-$k$ metrics varies with $k$, and this might raise uncertainty as $k$ varies across users and platforms. Nevertheless, AUC is the only metric that remains consistent under sampling, and as $k$ reduces, all top-$k$ metrics collapse to AUC. For more details, we refer the readers to [60]. Past works [13] have shown that active users who provide more ratings receive, on average, better recommendations than inactive users. However, individually, only a few inactive users might receive irrelevant recommendations (Fig. 3). Thus, we evaluate each user's activity. Let a user $u$ have rated $|R|$ items out of a total of $N$ items.
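Before turning to the activity criterion, the per-user AUC of Eq. 3 can be sketched as follows; the item names and rankings are illustrative only.

```python
import itertools

def per_user_auc(ranked_items, relevant):
    """AUC-style score of Eq. 3: the fraction of (relevant, irrelevant)
    pairs in which the relevant item is ranked above the irrelevant one.
    `ranked_items` is the model's ranking (best first); `relevant` is the
    set of the user's positive items."""
    rank = {item: pos for pos, item in enumerate(ranked_items)}
    rel = [i for i in ranked_items if i in relevant]
    irr = [i for i in ranked_items if i not in relevant]
    if not rel or not irr:
        return 0.0  # score undefined without both kinds of items
    hits = sum(1 for r, r_ in itertools.product(rel, irr) if rank[r] < rank[r_])
    return hits / (len(rel) * len(irr))
```

For a ranking ["a", "b", "c", "d"] with relevant items {"a", "b"}, every relevant item precedes every irrelevant one, so the score is 1.0.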
Then the sparsity index $S_I$ associated with a given user $u$ can be calculated as:

$$S_I(u) = \frac{|R|}{N} \quad (4)$$

If this value falls above a certain threshold $t_s$, the user is considered inactive. Combining this with the weak-user identification, we obtain:

Definition 3.1. Given a dataset $\mathcal{D}$ and a recommendation model $\theta_r$, we say that a user $u_m$ is extremely weak if the likelihood of $\theta_r$ being able to rank the relevant items above the irrelevant items is below $t_p$, and the rating vector $[r_{mn}]$ has extremely high sparsity, i.e., above $t_s$:

$$P(u_m) \leq t_p \;\;\wedge\;\; S_I(u_m) > t_s \quad (5)$$

It is important to note that a higher AUC value implies better performance, and the value always lies between 0 and 1. Further, we use $t_s = avg(S_I(\mathcal{D}))$, the average sparsity of all users in $\mathcal{D}$, i.e., $avg(S_I(\mathcal{D})) = \frac{1}{M} \sum_{j=1}^{M} S_I(u_j)$, for determining this threshold.

3.3 Designing Natural Language Instructions for Ranking

Closest to our work, Hou et al. [27] formalized the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and using the items retrieved by a traditional RS as candidate items. While we also design conditional ranking tasks, our approach differs significantly from theirs: instead of using LLMs as re-ranking agents for all users, we instruct the LLM with the preferences of weak users only (sorted in descending order of preference).
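A minimal sketch of the weak-user test (Eqs. 4-5 and Definition 3.1), with the threshold $t_s$ taken as the average sparsity index; all numbers below are illustrative.

```python
def sparsity_index(num_rated, num_items):
    # Eq. 4: S_I(u) = |R| / N
    return num_rated / num_items

def sparsity_threshold(s_indices):
    # t_s = avg(S_I(D)), the mean sparsity index over all users
    return sum(s_indices) / len(s_indices)

def is_extremely_weak(auc, s_i, t_p, t_s):
    # Definition 3.1 / Eq. 5: poor ranking quality AND sparsity above t_s
    return auc <= t_p and s_i > t_s
```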
This technique is detailed below. For each user, we use in-context learning to instruct the LLM about the user's preference conditions and assign it the task of ranking the candidate items. For a user $u$, let $H_u = \{i_1, i_2, \ldots, i_n\}$ denote the user's interaction history sorted in decreasing order of preference, and let $C_u = \{i_1, i_2, \ldots, i_j\}$ be the candidate items to be ranked. Then, each instruction is generated as the combination of conditions and candidate items, i.e.,

$$I_u = H_u + C_u \quad (6)$$

In-context learning: We use in-context learning to demonstrate the user's preferences to the LLM using certain examples. As suggested by Hou et al. [27], providing examples of other users may introduce extra noise if a user has different preferences. Therefore, we sort every weak user's preferences based on explicit user ratings. For example: "User {user_id} liked the following movies in decreasing order of preference where the topmost item is the most preferred one: 1. Harry Potter, 2. Jurassic Park ...". This forms the condition part of the instruction. We then select the items that served as test items for the recommendation models as candidate items and instruct the LLM to rank them in decreasing order of preference: "Now, rank the following items in decreasing order of preference such that the topmost movie should be the most preferred one: Multiplicity, Dune ...". It is important to note that while the presentation order in the conditions plays a significant role in demonstrating user preferences to the LLM, we deliberately shuffle the candidate items to test the LLM's ability to rank correctly. Since LLMs can generate items outside the given set, we explicitly instruct the model to restrict recommendations to the candidate set. Fig.
2 shows the final template of the instruction given to the LLM for a particular user. We use the same template for all identified weak users to contextualize their past interactions into a ranking task.

3.4 Our Framework

This section discusses the workflow adopted by our framework, as depicted in Fig. 1 and the corresponding Algorithm 1. Initially, the model takes as input the training dataset $D_{train}$ and test dataset $D_{test}$, a set of users $U$, a recommendation model $\theta_r$, a large language model $\theta_l$, and two thresholds: the sparsity threshold $t_s$ and the performance threshold $t_p$, which depict the minimum sparsity and performance values for a user to be classified as a strong user. It is important to note that splitting the data will not yield mutually exclusive sets of users in the two sets, but the item ratings for each user in $D_{train}$ will differ from those in $D_{test}$.

Figure 2: Instruction template for contextualizing interaction histories of weak users.

Table 1: Dataset statistics

|                | ML-1M     | ML-100k | Book-Crossing |
| # Users        | 6,041     | 943     | 6,810         |
| # Items        | 3,952     | 1,682   | 9,135         |
| # Interactions | 1,000,209 | 100,000 | 114,426       |
| Sparsity       | 95.81%    | 93.7%   | 99.82%        |
| Domain         | Movies    | Movies  | Books         |

The algorithm begins by training the recommendation model $\theta_r$ on the training set $D_{train}$, which provides ranked items for all users. Using $D_{test}$, we test the ranking ability of the model for each user by evaluating $P(u_m)$ using Eq. 3.
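The instruction construction of Eq. 6 (preference-sorted history as the condition, shuffled candidates as the task) can be sketched as below; the exact template wording and the helper name are illustrative, not the paper's verbatim prompt.

```python
import random

def build_instruction(user_id, history, candidates, seed=0):
    """Sketch of Eq. 6 (I_u = H_u + C_u): the history, sorted by
    preference, forms the condition; the shuffled candidates form the
    ranking task given to the LLM."""
    condition = (
        f"User {user_id} liked the following movies in decreasing order "
        "of preference, where the topmost item is the most preferred one: "
        + ", ".join(f"{k}. {item}" for k, item in enumerate(history, 1))
    )
    shuffled = candidates[:]
    random.Random(seed).shuffle(shuffled)  # hide the true order from the LLM
    task = (
        " Now, rank the following items in decreasing order of preference "
        "such that the topmost movie should be the most preferred one, and "
        "only use items from this list: " + ", ".join(shuffled)
    )
    return condition + task
```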
Further, each user is also assigned a sparsity score $S_I(u)$ evaluated using Eq. 4. If $P(u)$ is less than $t_p$ and the sparsity index $S_I(u)$ for a particular user exceeds $t_s$ (Definition 3.1), the user is termed a weak user. While previous works have shown that, on average, inactive users receive poor performance, we pinpoint weak users by evaluating both sparsity and performance. For all such weak users, we convert their rating histories from $D_{train}$ into conditions $H_u$ using in-context learning and use the test items as candidate items $C_u$ for testing purposes. In practice, these candidate items can be replaced by unobserved items. The final instructions are generated by combining conditions and candidate items, as depicted by Eq. 6. These instructions are given to the LLM, which provides a ranked list of items for each user. Strong users receive the recommendations ranked by the conventional recommendation model, whereas weak users receive the final ranked lists generated by the LLM.

4 EXPERIMENTS

This section discusses our experimental setup, with details of the datasets and models used, followed by the implementation details of these models and the various metrics used. We finally present empirical results and a comparative analysis of various recommendation models and LLMs.

4.1 Experimental Setup

4.1.1 Datasets. To test the effectiveness of our framework, we conducted experiments on three real-world datasets: ML-1M 1, ML-100k 2, and Book-Crossing (B-C) 3.
Both ML-100k and ML-1M are movie-rating datasets, and Book-Crossing is a book-rating dataset. We select three datasets with varying levels of sparsity to evaluate robustness to data sparsity: ML-100k has the least sparsity and Book-Crossing the highest (for exact values, refer to Table 1). All three datasets have explicit user preferences in the form of ratings, ranging from 0-5 for the movie-rating datasets and 0-10 for the book-rating dataset. We do not filter out users from ML-1M and ML-100k, as each user has rated at least 20 movies in both these datasets.

1 https://grouplens.org/datasets/movielens/1m/
2 https://grouplens.org/datasets/movielens/100k/
3 http://www2.informatik.uni-freiburg.de/ cziegler/BX/

Algorithm 1 Hybrid LLM-RecSys Algorithm for Ranking
1: Input: $D_{train}$: training dataset; $D_{test}$: test dataset; $U$: set of users; $S_I$: sparsity index for all users; $\theta_r$: recommendation algorithm; $\theta_l$: large language model; $t_s$: sparsity threshold; $t_p$: performance threshold.
2: Output: ranked_pred_strong: ranked lists of items for strong users; ranked_pred_weak: ranked lists of items for weak users.
3: ranked_pred $\leftarrow \theta_r(D_{train})$
4: for each user $u_m \in U$ do
5:   Calculate $P(u_m)$ using Eq. 3
6:   Calculate $S_I(u_m)$ using Eq. 4
7:   if $P(u_m) < t_p$ and $S_I(u_m) > t_s$ then
8:     $U_{weak} \leftarrow u_m$
9:   else
10:     $U_{strong} \leftarrow u_m$
11:     ranked_pred_strong $\leftarrow$ ranked_pred[$u_m$]
12:   end if
13: end for
14: for each $u_i \in U_{weak}$ do
15:   Generate instruction $I_{u_i}$ using Eq. 6
16:   ranked_list$_{u_i}$ = $\theta_l(I_{u_i})$
17:   ranked_list_weak $\leftarrow$ ranked_list$_{u_i}$
18: end for
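The routing logic of Algorithm 1 can be sketched as follows, assuming per-user AUC scores ($P$) and sparsity indices ($S_I$) are precomputed and the LLM call is abstracted as a function; all names are illustrative.

```python
def hybrid_allocate(users, auc, s_i, t_p, t_s, rs_ranking, llm_rank):
    """Sketch of Algorithm 1: route weak users (low AUC and high sparsity
    index, per Definition 3.1) to the LLM ranker, and keep the RS ranking
    for everyone else. `rs_ranking` maps user -> RS-ranked item list;
    `llm_rank` is a callable standing in for the LLM."""
    strong, weak = {}, {}
    for u in users:
        if auc[u] < t_p and s_i[u] > t_s:
            weak[u] = llm_rank(u)       # contextualized ranking task
        else:
            strong[u] = rs_ranking[u]   # conventional RS output
    return strong, weak
```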
For consistency, we filter out users with fewer than 20 ratings from the Book-Crossing dataset. While both movie-rating datasets offer clustering based on sensitive attributes like age and gender, this paper aims to boost performance for all weak users irrespective of sensitive features. Thus, following the protocol adopted by [13], we divided users based on their activity, i.e., the number of items rated. Any user who has rated fewer items than a certain threshold $t_s$ is termed an inactive user, and all those above this threshold are active users. We calculated the average number of items rated across all users and used this average value as the threshold; the threshold can always vary and be set to different values per application. Table 1 presents the statistics of all three datasets.

4.1.2 Baselines and Models. Our hybrid framework uses both traditional recommendation systems and LLMs. Thus, we include two different types of recommendation models: (i) collaborative-filtering based: Neural Collaborative Filtering (NCF) [61] and ItemKNN [62]; and (ii) a learning-to-rank model: Bayesian Personalized Ranking (BPR) [59]. While these models identify weak users and generate candidate items, LLMs are further deployed to improve the performance for such users. We use both an open-sourced LLM (Mixtral-8x7b-instruct) and a closed-sourced one (GPT-3.5-turbo) to test the capability of the proposed framework. It is important to note that collaborative filtering models are mostly used to capture the long-term preferences of users.

Figure 3: AUC vs. sparsity scatter plots illustrating per-user performance (measured using AUC) for all users in the ML-1M, ML-100k, and Book-Crossing (B-C) datasets on three different algorithms: (a)-(c) ItemKNN, NCF, and BPR on ML-1M; (d)-(f) on ML-100k; (g)-(i) on B-C.
We acknowledge that existing works (refer to Section 2) have used sequential recommendation models for comparing the performance of LLMs. These works also use recommendation models as candidate retrieval models and then use LLMs to re-rank the candidate items. However, sequential models are used to predict the next item according to recently bought items. We test our framework mainly on long-term user preferences and use collaborative filtering models not only for candidate item retrieval but also for recommending the top-k items to strong users. Nevertheless, we believe that any existing model adopting this retriever-reranker strategy can adopt our framework. Owing to space constraints, we present an evaluation of our framework only on NCF, ItemKNN, and BPR. In line with existing literature [27, 28], we design instructions by randomly sampling 20 rated items to demonstrate the user's preferences to the LLM. Furthermore, existing works do not discuss the responsible adaptation of LLMs, and since the underlying retriever-reranker task of such models remains consistent, they too can use our framework.

4.1.3 Implementation details. For ease of reproducibility, we use the open-source recommendation library RecBole [63] to implement all recommendation models, and API calls to access the LLMs 4 5. Each dataset is split into train (80%), test (10%), and validation (10%) sets. We carefully use the validation set to tune all recommendation models' hyperparameters. For BPR, we search for the optimal learning rate in [5e-5, 1e-4, 5e-4, 7e-4, 1e-3, 5e-3, 7e-3], and in [5e-7, 1e-6, 5e-6, 1e-5, 1e-4, 1e-3] for NCF.

4 https://platform.openai.com/docs/api-reference
5 https://www.llama-api.com/
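The hyperparameter searches above amount to a validation-set grid search; a generic sketch (not RecBole's actual tuning API) is:

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Toy grid search over hyperparameter grids like the learning-rate
    lists above; `evaluate` maps a config dict to a validation score
    (higher is better)."""
    best_cfg, best_score = None, float("-inf")
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```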
Additionally, we use [64, 32, 16] as the MLP hidden sizes for all layers and search for the optimal dropout probability within [0.0, 0.1, 0.3] for NCF. The two hyperparameters for ItemKNN are $k$ (neighborhood size), searched in [10, 50, 100, 200, 250, 300, 400], and shrink (a normalization parameter used in the cosine distance), searched in [0.0, 0.1, 0.5, 1, 2]. We adopt the protocol of the recently released toolkit RGRecSys [64] for evaluating robustness to sub-populations using NDCG and AUC. We emphasize that AUC is used to measure the hardness associated with each user for a given recommendation model because of its consistency property. We use the popular CatBoost 6 library, which offers an AUC implementation for ranking, and also report final NDCG@10 scores. Furthermore, we set the temperature to 0 in GPT-3.5-turbo to minimize the generation of out-of-list items and hallucinations. However, per our observations, with the temperature set to 0, Mixtral-8x7b-instruct outputs the list in the same order in which it was given. Hence, for this model we set the temperature to 1 and removed the items that were not originally present in the candidate list. We now discuss our empirical inferences from experiments conducted under these settings.

Table 2: Overall comparison of ranking quality, measured using AUC and NDCG@10, for two collaborative-filtering models (Neural Collaborative Filtering, ItemKNN) and one learning-to-rank model (BPR), both on their own and within our framework combined with one closed-sourced LLM (GPT-3.5-turbo) and one open-sourced LLM (Mixtral-8x7b-instruct).
ML1M
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.47032 | 0.23776 | 0.66792 | 0.58226 |
| ItemKNN + GPT-3.5-turbo | 0.58142 | 0.51776 | 0.82643 | 0.70352 |
| ItemKNN + Mixtral-8x7b-instruct | 0.56035 | 0.51708 | 0.70147 | 0.69276 |
| NCF | 0.47805 | 0.22945 | 0.78795 | 0.59801 |
| NCF + GPT-3.5-turbo | 0.58935 | 0.52122 | 0.80317 | 0.71723 |
| NCF + Mixtral-8x7b-instruct | 0.57211 | 0.52100 | 0.79178 | 0.70741 |
| BPR | 0.57957 | 0.37824 | 0.88833 | 0.73998 |
| BPR + GPT-3.5-turbo | 0.65397 | 0.51997 | 0.90098 | 0.82117 |
| BPR + Mixtral-8x7b-instruct | 0.64174 | 0.51972 | 0.89742 | 0.82998 |

ML100K
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.45616 | 0.24778 | 0.66792 | 0.58226 |
| ItemKNN + GPT-3.5-turbo | 0.59781 | 0.51953 | 0.82643 | 0.82598 |
| ItemKNN + Mixtral-8x7b-instruct | 0.59972 | 0.52327 | 0.70147 | 0.82438 |
| NCF | 0.48311 | 0.25182 | 0.78795 | 0.67734 |
| NCF + GPT-3.5-turbo | 0.60831 | 0.50814 | 0.80317 | 0.82603 |
| NCF + Mixtral-8x7b-instruct | 0.61174 | 0.50903 | 0.79178 | 0.82306 |
| BPR | 0.51629 | 0.17020 | 0.88833 | 0.71944 |
| BPR + GPT-3.5-turbo | 0.6387 | 0.51435 | 0.90098 | 0.82452 |
| BPR + Mixtral-8x7b-instruct | 0.64910 | 0.52135 | 0.89742 | 0.81761 |

Book-Crossing
| Model | AUC | AUC (Weak Users) | NDCG@10 | NDCG@10 (Weak Users) |
| ItemKNN | 0.43309 | 0.25909 | 0.75197 | 0.65098 |
| ItemKNN + GPT-3.5-turbo | 0.61629 | 0.49713 | 0.86212 | 0.77101 |
| ItemKNN + Mixtral-8x7b-instruct | 0.55203 | 0.47215 | 0.85904 | 0.76183 |
| NCF | 0.51852 | 0.29004 | 0.78370 | 0.66148 |
| NCF + GPT-3.5-turbo | 0.61946 | 0.50513 | 0.88219 | 0.78133 |
| NCF + Mixtral-8x7b-instruct | 0.59901 | 0.49897 | 0.86254 | 0.78001 |
| BPR | 0.53310 | 0.24426 | 0.80405 | 0.70289 |
| BPR + GPT-3.5-turbo | 0.62145 | 0.50998 | 0.88173 | 0.81933 |
| BPR + Mixtral-8x7b-instruct | 0.61625 | 0.50081 | 0.87798 | 0.80284 |

4.2 Empirical Evaluation

4.2.1 Comparative analysis. The first phase begins by identifying the inactive users. For this, we calculate the average sparsity of all users in the dataset and identify users above this threshold as inactive. However, one can use different values for this threshold; e.g., [13] used only the top 20% of sparse users when annotating the inactive users. We then evaluate the AUC score using Eq. 3 to measure the performance of the RS on all these users. In line with the findings of Li et al. [13], the RS performs significantly better on active users than on inactive users. For an instance-by-instance analysis of every user, we then plot AUC scores against the sparsity index, as shown in Fig.
3, for all three datasets and three different recommendation algorithms: ItemKNN, NCF, and BPR. While ItemKNN and NCF are collaborative filtering algorithms, the overall scatterplot for BPR shows better AUC scores than the other two; this follows from the inherent nature of learning-to-rank models like BPR, which rank user preferences better. The figure shows that although the RS performs poorly on inactive users on average, not all inactive users receive poor-quality recommendations. We thus use Definition 3.1 (Eq. 5) to identify such users and mark them as weak. It might be interesting to explore why not all inactive users receive poor performance; we leave this exploratory study for future work. For the second phase, we design instructions for these weak users using the approach discussed in Section 3. Our results in Table 2 show that LLMs perform significantly well on these users: combining an LLM with the base RS models yields the best results on all three datasets. Our results show improvements in both AUC and NDCG@10 for weak users, demonstrating improved robustness to the sub-population of weak users, which in turn leads to improved overall ranking quality. We also highlight that previous works like [27, 34] report that closed-source models perform much better than open-source ones. However, understanding the properties of users for which LLMs inherently perform well (as with our mechanism for finding weak users) and responsibly assigning tasks to large models improves performance with open-source as well as closed-source models. Our results show that Mixtral-8x7b-instruct performs almost equally well on weak users across all datasets and base models. Furthermore, as observed on the ML-100k dataset, this open-source model can outperform GPT-3.5-turbo when evaluated on AUC.

6 https://github.com/catboost/
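As noted in the implementation details, items that the LLM generates outside the candidate list are removed; a minimal sketch of that post-processing step (the function name is ours):

```python
def clean_llm_ranking(llm_output, candidates):
    """Post-process an LLM-ranked list: drop hallucinated items that are
    not in the candidate set and de-duplicate, while preserving the
    LLM's order for the items that remain."""
    seen = set()
    cleaned = []
    for item in llm_output:
        if item in candidates and item not in seen:
            cleaned.append(item)
            seen.add(item)
    return cleaned
```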
It should be noted that AUC mainly evaluates the discriminatory ability of a model to rank positive items over negative ones, whereas NDCG focuses on the user's satisfaction with the ranked list, considering both relevance and position. The reason for this can be attributed to dataset sparsity. As shown in Table 1, the sparsity of the ML-100k dataset is lower (≈93%) than that of the other two datasets. While Mixtral is a good choice when datasets are small and more dense, GPT-3.5-turbo performs well on extremely sparse datasets. Yet, in either case, the margin between the two LLMs' performance is not significant, so even open-source models can give comparable performance. Nevertheless, using the GPT model yields the best NDCG@10 scores on all datasets.

4.2.2 Reduction in weak user count. To analyze the variation in the count of weak users, we counted the number of weak users identified in the first phase, whose interactions were contextualized and given to the LLMs, as the baseline for comparison, using the same threshold $t_p$. We then evaluated AUC on the rankings obtained by the LLM for these users; a few users continued being hard even for the LLM, their AUC remaining below $t_p$. Fig. 4 shows that, when RSs are used with large models, the count of weak users drops significantly. On highly sparse datasets like Book-Crossing and ML-1M, GPT-3.5-turbo reduced the number of weak users by ≈87% and Mixtral-8x7b-instruct by ≈85%. On the contrary, when the dataset is dense, like ML-100k, Mixtral-8x7b-instruct can reduce the count by ≈99% and the closed-source model by ≈88%. While the reduction ability of GPT-3.5-turbo remains consistent across all datasets, the open-source model yields better performance for less sparse datasets, yet both improve the robustness of RSs to sub-populations.
In addition, we noted that a single query takes ≈8 seconds with GPT-3.5-turbo and ≈11 seconds with Mixtral-8x7b-instruct, which shows the high processing time and inference latency of each user query. Thus, it is crucial to use these models responsibly by identifying what they are good at. Consider the example of the smallest dataset, ML-100k, which consists of 943 users. Our strategy for identifying weak users results in only 330 weak users (worst case, under ItemKNN), which leads to an overhead of 2,640 seconds when using GPT-3.5-turbo, in addition to the training time of the base RS models; this is significantly less than the 7,544 seconds required if the LLM were used for all users.

Figure 4: Comparative analysis of the reduction in the count of weak users on (a) ML-1M, (b) ML-100k, and (c) Book-Crossing.

5 DISCUSSION

In this work, we implemented a novel approach for the responsible adaptation of LLMs for ranking tasks. As suggested by Burnell et al. [65], the involvement of AI in high-stakes decision-based applications (like ranking models for job recommendations) requires instance-by-instance evaluation instead of aggregated metrics for designing responsible AI models. Our results in Fig. 3 show that many inactive users receive poor-quality recommendations from traditional RSs. Some inactive users still receive recommendations comparable to those of active users, possibly because of high similarity scores with active users, such that existing models can still capture their preferences effectively; we leave this as an exploratory study for future work. However, the overall performance scores on active users remain better than those on inactive users. Building upon these weak instances, our framework emphasizes instance-by-instance evaluation of users.
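The ML-100k latency arithmetic above can be checked directly (numbers from the text: 330 weak users vs. all 943 users, at roughly 8 seconds per GPT-3.5-turbo query):

```python
def llm_overhead_seconds(num_users, sec_per_query):
    # Back-of-the-envelope latency cost of routing users to the LLM
    return num_users * sec_per_query

weak_only = llm_overhead_seconds(330, 8)  # 2640 seconds
all_users = llm_overhead_seconds(943, 8)  # 7544 seconds
```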
While we group users based on activity and then evaluate the performance of inactive users, our approach pinpoints the weak users whose preferences remain hard for traditional RSs to capture effectively. We believe that our framework inherently addresses the issue of group fairness. Though we do not group users based on demographics, our framework can be extended to such scenarios: instead of activity, user demographics can be used to group users, and then, within the marginalized groups, the interaction histories of users who receive poor performance can be contextualized and given to the LLM. Irrespective of demographics, however, our framework mainly addresses robustness to data sparsity and to the sub-population of weak users, which inherently tackles the fairness issue. The framework further helps reduce the number of queries that need to be given to the LLM. Since most existing works (refer to Section 2) randomly select a few users from the dataset to evaluate the performance of LLMs, our framework provides a systematic way of selecting the users for whom LLMs should be used. By leveraging the capabilities of LLMs for weak users, our work also emphasizes the importance of low-cost traditional RSs. We also observed that in some cases (Fig. 4), the LLM might not perform well on every weak user. This opens up new research opportunities for understanding the similarities and differences within the identified weak users on whom the LLM does and does not perform well. Further, one can devise various prompting strategies to capture the preferences of extremely weak users effectively. Past works have developed numerous prompting strategies, all of which can be tested to observe which remain effective for which types of users.
Nevertheless, the main goal of this paper remains to emphasize the importance of responsibly adapting LLMs by strategically selecting tasks for which these models inherently perform well. It is also important to note that for weak users, we still obtain candidate items from the traditional RS, as has been done in most past works (refer to Section 2). This reduces the candidate set from thousands of unrated items to a few, which are then given to the LLM to rank. While this approach ensures that the results obtained by the RS for weak users are utilized for generating candidate items instead of being discarded, thereby maximizing the usage of RSs even for weak users, it has a limitation. Traditional RSs perform worse on these users, and the candidate items might not capture the true preferences of weak users. When we give these candidate items to the LLM for ranking, the results might deviate further from the true preferences. This may be one reason why LLMs do not perform well on all weak users. One can thus further investigate the relation of candidate items to the performance of LLMs on certain users: if the candidate items are already non-preferred by a user, the LLM might inherently find it difficult to perform well. Our work thus represents a foundational step towards responsibly adapting LLMs while emphasizing the importance of traditional models, particularly focusing on addressing the challenges posed by sub-populations with sparse interaction histories. Our instance-by-instance evaluation approach, inspired by the imperative highlighted by recent studies in high-stakes decision-based AI applications, underscores the necessity of a nuanced understanding of individual user needs and preferences.
While our framework emphasizes the importance of leveraging traditional recommendation systems alongside LLMs, we acknowledge the need to further explore the performance variations among weak users and the impact of candidate item selection on LLM effectiveness. Moving forward, our work lays the groundwork for continued research into refining the adaptation of LLMs, ensuring their responsible deployment across diverse user populations and application scenarios." } ], "Sujit Gujar": [ { "url": "http://arxiv.org/abs/1401.3884v1", "title": "Redistribution Mechanisms for Assignment of Heterogeneous Objects", "abstract": "There are p heterogeneous objects to be assigned to n competing agents (n >\np) each with unit demand. It is required to design a Groves mechanism for this\nassignment problem satisfying weak budget balance, individual rationality, and\nminimizing the budget imbalance. This calls for designing an appropriate rebate\nfunction. When the objects are identical, this problem has been solved which we\nrefer as WCO mechanism. We measure the performance of such mechanisms by the\nredistribution index. We first prove an impossibility theorem which rules out\nlinear rebate functions with non-zero redistribution index in heterogeneous\nobject assignment. Motivated by this theorem, we explore two approaches to get\naround this impossibility. In the first approach, we show that linear rebate\nfunctions with non-zero redistribution index are possible when the valuations\nfor the objects have a certain type of relationship and we design a mechanism\nwith linear rebate function that is worst case optimal. In the second approach,\nwe show that rebate functions with non-zero efficiency are possible if\nlinearity is relaxed. 
We extend the rebate functions of the WCO mechanism to\nheterogeneous objects assignment and conjecture them to be worst case optimal.", + "authors": "Sujit Gujar, Yadati Narahari", + "published": "2014-01-16", + "updated": "2014-01-16", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT" + ], + "main_content": "Introduction Consider that p resources are available and each of n > p agents is interested in utilizing one of them. It is desirable that we assign the resources to the agents who value them the most. Since the classical Vickery-Clarke-Groves mechanisms (Vickrey, 1961; Clarke, 1971; Groves, 1973) have attractive properties such as dominant strategy incentive compatibility (DSIC) and allocative e\ufb03ciency (AE), Groves mechanisms are quite appealing to use in this context. However, in general, a Groves mechanism need not be budget balanced. That is, the total transfer of money in the system may not be zero. So the system will be left with a surplus or de\ufb01cit. Using Clarke\u2019s (1971) mechanism, we can ensure under fairly weak conditions, that there is no de\ufb01cit of money (that is the mechanism is weakly budget balanced). In such a case, the system or the auctioneer will be left with some money. Often, the surplus money is not really needed in many social settings such as allocations by the Government among its departments, etc. Since strict budget balance cannot coexist with DSIC and AE (Green-La\ufb00ont theorem, see Green & La\ufb00ont, 1979), we would like to redistribute the surplus to the participants as far as possible, preserving DSIC and AE. This idea was originally proposed by La\ufb00ont (1979). The total payment made by the mechanism as a redistribution will be referred to as the rebate to the agents. c \u20dd2011 AI Access Foundation. All rights reserved. \fGujar & Narahari In this paper, we consider the following problem. There are n agents and p heterogeneous objects (n > p > 1). Each agent desires one object out of these p objects. 
Each agent\u2019s valuation for any of the objects is independent of his valuations for the other objects. Valuations of the di\ufb00erent agents are also mutually independent. Our goal is to design a mechanism for assignment of the p objects among the n agents which is allocatively e\ufb03cient, dominant strategy incentive compatible, and maximizes the rebate (which is equivalent to minimizing the budget imbalance). In addition, we would like the mechanism to satisfy feasibility and individual rationality. Thus, we seek to design a Groves mechanism for assigning p heterogeneous objects among n agents satisfying: 1. Feasibility (F) or weak budget balance. That is, the total payment to the agents should be less than or equal to the total received payment. 2. Individual Rationality (IR), which means that each agent\u2019s utility by participating in the mechanism should be non-negative. 3. Minimizes budget imbalance. We call such a Groves mechanism that redistributes Clarke\u2019s Payment as Groves redistribution mechanism or simply redistribution mechanism. Designing a redistribution mechanism involves the design of an appropriate rebate function. If in a redistribution mechanism, the rebate function for each agent is a linear function of the valuations of the remaining agents, we refer to such a mechanism as a linear redistribution mechanism (LRM). In many situations, design of an appropriate LRM reduces to a problem of solving a linear program. Due to the Green-La\ufb00ont theorem , we cannot guarantee 100% redistribution for all type pro\ufb01les. So a performance index for the redistribution mechanism would be the worst case redistribution, that is, the fraction of the surplus which is guaranteed to be redistributed irrespective of the bid pro\ufb01les. This fraction will be referred to as redistribution index in the rest of the paper. The advantage of worst case analysis is that, it does not require any distributional information on the type sets of the agents. 
It is desirable that the rebate function is deterministic and anonymous. A rebate function is said to be anonymous if two agents having the same bids get the same rebate. Also, when valuation spaces are identical for all the agents, without loss of generality, we can restrict our attention to the anonymous rebate functions. Thus, the aim is to design an anonymous, deterministic rebate function which maximizes the redistribution index and satis\ufb01es feasibility and individual rationality. Our work in this paper seeks to non-trivially extend the results of Moulin (2009) and Guo and Conitzer (2009) who have independently designed a Groves mechanism in order to redistribute the surplus when objects are identical (homogeneous objects case). Their mechanism is deterministic, anonymous, and has maximum redistribution index over all possible Groves redistribution mechanisms. We will refer to their mechanism as the worst case optimal (WCO) mechanism. The WCO Mechanism is a linear redistribution mechanism. In this paper, we concentrate on designing a linear redistribution mechanism for the heterogeneous objects case. 132 \fRedistribution Mechanisms 1.1 Relevant Work As it is impossible to achieve allocative e\ufb03ciency, DSIC, and strict budget balance simultaneously, we have to compromise on one of these properties. Faltings (2005) and Guo and Conitzer (2008a) achieve budget balance by compromising on AE. If we are interested in preserving AE and DSIC, we have to settle for a non-zero surplus or a non-zero de\ufb01cit of the money (budget imbalance) in the system. To reduce budget imbalance, various rebate functions have been designed by Bailey (1997), Cavallo (2006), Moulin (2009), and Guo and Conitzer (2009). Moulin (2009) and Guo and Conitzer (2009) designed a Groves redistribution mechanism for assignment of p homogeneous objects among n > p agents with unit demand. 
Guo and Conitzer (2009) generalize the work of an earlier paper (Guo & Conitzer, 2007) to multi-unit demand for identical items. In the work of Guo and Conitzer (2008b), the authors designed a redistribution mechanism which is optimal in the expected sense for the homogeneous objects setting; it therefore requires some distributional information over the type sets of the agents. De Clippel and co-authors (2009) use the idea of destroying some of the items to maximize the agents' utilities. A preliminary version of the results presented in this paper has appeared in our earlier papers (Gujar & Narahari, 2009, 2008). 1.2 Contributions and Outline Our objective in this paper is to design a Groves redistribution mechanism for assignment of heterogeneous objects with unit demand. To the best of our knowledge, this is the first attempt to design a redistribution mechanism for assignment of heterogeneous objects. First, we investigate the question of existence of a linear rebate function for redistribution of surplus in assignment of heterogeneous objects. Our result shows that in general, when the domain of valuations for each agent is R^p_+, it is impossible to design a linear rebate function with non-zero redistribution index for the heterogeneous setting. However, we can relax the assumption of independence of valuations of different objects to get a linear rebate function with non-zero redistribution index. Another way to get around the impossibility theorem is to relax the linearity requirement of a rebate function. In particular, our contributions in this paper can be summarized as follows. \u2022 We first prove the impossibility of existence of a linear rebate function with non-zero redistribution index for heterogeneous settings, when the domain of valuations for each agent is R^p_+ and the valuations for the objects are independent. 
\u2022 When the objects are heterogeneous but the values for the objects of an agent can be derived from one single number, we design a Groves redistribution mechanism that is linear, anonymous, deterministic, feasible, individually rational, and e\ufb03cient. In addition, the mechanism is worst case optimal with non-zero redistribution index. \u2022 We show the existence of a non-linear rebate function that has a non-zero redistribution index. \u2022 We propose a mechanism, HETERO, which extends Moulin/WCO mechanism for heterogeneous settings. We conjecture HETERO to have non-zero redistribution index and to be worst case optimal. 133 \fGujar & Narahari The paper is organized as follows. In Section 2, we introduce the notation followed in the paper and describe relevant background work from the literature. We also explain the WCO mechanism there. In Section 3, we state and prove our impossibility result. We derive an extension of the WCO mechanism for heterogeneous objects but with single dimensional private information in Section 4. The impossibility result does not rule out possibility of non-linear rebate functions with strictly positive redistribution index. We show this with a redistribution mechanism, BAILEY-CAVALLO, which is Bailey\u2019s mechanism (1997) applied to the settings under consideration in Section 5. We design another non-linear rebate function, HETERO, that actually matches with Moulin\u2019s rebate function when the objects are identical. We describe the construction of HETERO in Section 5. We have carried out simulations to provide empirical evidence for our conjecture regarding HETERO. The experimental setup and results are described in Section 6. We conclude the paper in Section 7 and provide some directions for future work. In our analysis, we need an ordering of the bids of the agents which we de\ufb01ne in Appendix A. The proofs of some of the lemmas in the paper are presented in Appendix B. 2. 
Preliminaries and Notation In this section we will \ufb01rst de\ufb01ne the notation used in the paper and preliminaries about the redistribution mechanisms. 2.1 The Model and Notation The notation used is summarized in Table 1. Where the context is clear, we will use t, ti, ri, k, and vi to indicate t(b), ti(b), ri(b), k(b), and vi(k(b)) respectively. In this paper, we assume that the payment made by agent i is of the form ti(\u00b7) \u2212ri(\u00b7), where ti(\u00b7) is agent i\u2019s payment in the Clarke pivotal mechanism (1971). We refer to P i ti, as the total Clarke payment or the surplus in the system. In general, we assume there are n agents and p distinct objects. We also assume that the allocation rule satis\ufb01es the allocative e\ufb03ciency (AE) property. 2.2 Important De\ufb01nitions We provide a few important de\ufb01nitions here in a conceptual way. De\ufb01nition 1 (DSIC) We say a mechanism is Dominant Strategy Incentive Compatible (DSIC) if it is a best response for each agent to report its type truthfully, irrespective of the types reported by the other agents. De\ufb01nition 2 (Allocative E\ufb03ciency) We say a mechanism is allocatively e\ufb03cient (AE) if the mechanism chooses, in every given type pro\ufb01le, an allocation of objects among the agents such that sum of the valuations1 of the allocated agents is maximized. De\ufb01nition 3 (Redistribution Mechanism) We refer to a Groves mechanism as a Groves redistribution mechanism or simply redistribution mechanism, if it allocates objects to the 1. Sum of the valuations of the allocated agents in an allocation is also referred as total value or value of the allocation. 134 \fRedistribution Mechanisms n Number of agents N Set of the agents = {1, 2, . . . , n} p Number of objects i Index for an agent, i = 1, 2, . . . , n j Index for object, j = 1, 2, . . . 
, p R+ Set of positive real numbers \u0398i The space of valuations of agent i, \u0398i = Rp + bi Bid submitted by agent i, = (bi1, bi2, . . . , bip) \u2208\u0398i b (b1, b2, . . . , bn), the bid vector K The set of all allocations of p objects to n agents, each getting at most one object k(b) An allocation, k(\u00b7) \u2208K, corresponding to the bid pro\ufb01le b k\u2217(b) An allocatively e\ufb03cient allocation when the bid pro\ufb01le is b k\u2217 \u2212i(b) An allocatively e\ufb03cient allocation when the bid pro\ufb01le is b and agent i is excluded from the system vi(k(b)) Valuation of the allocation k to the agent i, when b is the bid pro\ufb01le v v : K \u2192R, the valuation function, v(k(b)) = P i\u2208N vi(k(b)) ti(b) Payment made by agent i in the Clarke pivotal mechanism, when the bid pro\ufb01le is b, ti(b) = vi(k\u2217(b)) \u2212 \u0000v(k\u2217(b)) \u2212v(k\u2217 \u2212i(b)) \u0001 t(b) The Clarke payment, that is, the total payment received from all the agents, t(b) = P i\u2208N ti(b) t\u2212i The Clarke payment received in the absence of the agent i ri(b) Rebate to agent i when bid pro\ufb01le is b e The redistribution index of the mechanism, = infb:t(b)\u0338=0 P ri(b) t(b) Table 1: Notation: redistribution mechanisms agents in an allocatively e\ufb03cient way and redistributes the Clarke surplus in the system in the form of rebates to the agents such that the net payment made by each agent still follows the Groves payment structure. De\ufb01nition 4 (Linear Rebate Function) We say the rebates to an agent follow a linear rebate function if the rebate is a linear combination of bid vectors of all the remaining agents. Moreover, if a redistribution mechanism uses linear rebate functions for all the agents, we say the mechanism is a linear redistribution mechanism. 
Definition 5 (Redistribution Index) The redistribution index of a redistribution mechanism is defined to be the worst case fraction of Clarke's surplus that gets redistributed among the agents. That is, e = inf_{b : t(b) != 0} [ sum_i r_i(b) / t(b) ]. 2.3 Optimal Worst Case Redistribution when Objects are Identical When the objects are identical, every agent i has the same value for each object, call it v_i. Without loss of generality, we will assume v_1 >= v_2 >= ... >= v_n. In Clarke's pivotal mechanism, the first p agents will receive the objects and each of these p agents will pay v_{p+1}. So, the surplus in the system is p v_{p+1}. For this situation, Moulin (2009) and Guo and Conitzer (2009) have independently designed a redistribution mechanism. Guo and Conitzer (2009) maximize the worst case fraction of the total surplus which gets redistributed. This mechanism is called the WCO mechanism. Moulin (2009) minimizes the ratio of budget imbalance to the value of an optimal allocation, that is, the value of an allocatively efficient allocation. The WCO mechanism coincides with Moulin's feasible and individually rational mechanism. Both the above mechanisms work as follows. After receiving bids from the agents, the bids are sorted in decreasing order. The first p agents receive the objects. Each agent's Clarke payment is calculated, say t_i. Every agent i pays p_i = t_i - r_i, where r_i is the rebate function for agent i: r_i^{WCO} = c_{p+1} v_{p+2} + c_{p+2} v_{p+3} + ... + c_{n-1} v_n, for i = 1, ..., p+1; r_i^{WCO} = c_{p+1} v_{p+1} + ... + c_{i-1} v_{i-1} + c_i v_{i+1} + ... + c_{n-1} v_n, for i = p+2, ..., n (1) where, writing C(n, k) for the binomial coefficient, c_i = (-1)^{i+p-1} * [ (n-p) C(n-1, p-1) * sum_{j=i}^{n-1} C(n-1, j) ] / [ i C(n-1, i) * sum_{j=p}^{n-1} C(n-1, j) ], for i = p+1, ..., n-1. (2) Suppose y_1 >= y_2 >= ... >= y_{n-1} are the bids of the (n-1) agents excluding agent i; then, equivalently, the rebate to agent i is given by r_i^{WCO} = sum_{j=p+1}^{n-1} c_j y_j. (3) The redistribution index of this mechanism is e*, where e* is given by e* = 1 - C(n-1, p) / sum_{j=p}^{n-1} C(n-1, j). This is an optimal mechanism, since there is no other mechanism which can guarantee more than an e* fraction of redistribution in the worst case. Before we proceed to present our impossibility theorem, we state the following theorem by Guo and Conitzer (2009), which will be used to design our mechanism. Theorem 1 (Guo & Conitzer, 2009) For any x_1 >= x_2 >= ... >= x_n >= 0, a_1 x_1 + a_2 x_2 + ... + a_n x_n >= 0 iff sum_{i=1}^{j} a_i >= 0 for all j = 1, 2, ..., n. 3. Impossibility of Linear Rebate Function with Non-Zero Redistribution Index We have just reviewed the design of a redistribution mechanism for homogeneous objects. We have seen that the WCO mechanism is a linear function of the types of agents. We now explore the general case. In the homogeneous case, the bids are real numbers which can be arranged in decreasing order. The Clarke surplus is a linear function of these ordered bids. For the heterogeneous scenario, this would not be the case. Each bid b_i belongs to R^p_+; hence, there is no unique way of defining an order among the bids. Moreover, the Clarke surplus is not a linear function of the received bids in the heterogeneous case. So, we cannot expect any linear/affine rebate function of types to work well at all type profiles. We will prove this formally. We first generalize a theorem from the work of Guo and Conitzer (2009). The context in which Guo and Conitzer stated and proved the theorem is the homogeneous setting. We show that this result holds true in the heterogeneous objects case also. 
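The homogeneous WCO mechanism reviewed above can be sketched numerically as follows. This is an illustrative implementation of the coefficients of Eq. (2), the rebates of Eq. (3), and the index e* (function names are ours, not the paper's; exact arithmetic via rationals):

```python
from fractions import Fraction
from math import comb

def wco_coeffs(n, p):
    """Coefficients c_{p+1}, ..., c_{n-1} of Eq. (2) for n agents, p identical objects."""
    S = sum(comb(n - 1, j) for j in range(p, n))  # sum_{j=p}^{n-1} C(n-1, j)
    c = {}
    for i in range(p + 1, n):
        T = sum(comb(n - 1, j) for j in range(i, n))  # sum_{j=i}^{n-1} C(n-1, j)
        sign = (-1) ** (i + p - 1)
        c[i] = Fraction(sign * (n - p) * comb(n - 1, p - 1) * T,
                        i * comb(n - 1, i) * S)
    return c

def wco_rebates(bids, p):
    """Rebate of each agent per Eq. (3): r_i = sum_{j=p+1}^{n-1} c_j y_j,
    where y_1 >= ... >= y_{n-1} are the other agents' bids sorted decreasingly."""
    n = len(bids)
    c = wco_coeffs(n, p)
    rebates = []
    for i in range(n):
        y = sorted(bids[:i] + bids[i + 1:], reverse=True)
        rebates.append(sum(c[j] * y[j - 1] for j in range(p + 1, n)))
    return rebates

def wco_index(n, p):
    """Worst-case redistribution index e* = 1 - C(n-1, p) / sum_{j=p}^{n-1} C(n-1, j)."""
    return 1 - Fraction(comb(n - 1, p), sum(comb(n - 1, j) for j in range(p, n)))
```

For example, with n = 3 agents and p = 1 object, e* = 1/3; at the bid profile (6, 3, 0) the Clarke surplus is 3 and exactly 1 (i.e., e* times the surplus) is redistributed, which is the worst case for that n and p.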
The symbol \u227d denotes the order over the bids of the agents, as de\ufb01ned in the A.2. Theorem 2 In the Groves redistribution mechanism, any deterministic, anonymous rebate function f is DSIC i\ufb00, ri = f(v1, v2, . . . , vi\u22121, vi+1, . . . , vn) \u2200i \u2208N (4) where, v1 \u227dv2 \u227d. . . \u227dvn. Proof: \u2022 The \u201cif\u201d part: If ri takes the form given by equation (4), then the rebate of agent i is independent of his valuation. The allocation rule satis\ufb01es allocative e\ufb03ciency. So, the mechanism is still Groves and hence DSIC. The rebate function de\ufb01ned is deterministic. If two agents have the same bids, then, as per the ordering de\ufb01ned in Appendix, \u227d, they will have the same ranking. Suppose agents i and i + 1 have the same bids. Thus vi \u227dvi+1 and vi+1 \u227dvi. So, ri = f(v1, v2, . . . , vi\u22121, vi+1, . . . , vn) and ri+1 = f(v1, v2, . . . , vi, vi+2, . . . , vn). Since vi = vi+1, ri = ri+1. Thus the rebate function is anonymous. \u2022 The \u201conly if\u201d part: For the mechanism to be strategyproof, the rebate function for agent i should be independent of his bid. So, ri should depend on only v\u2212i. So, for deterministic rebate function, ri = fi(v\u2212i). Now, we desire anonymous rebate function. That is, rebate should be independent of the identity of the agent. Thus, if vi = vj, then ri = rj. With out loss of generality, say vi = vi+1, then v\u2212i = v\u2212(i+1). So, ri = ri+1 implies, fi = fi+1. Similarly fi+1 = fi+2 and so on. Thus, ri = f(v\u2212i) \u2200i \u2208N. \u25a1 We now state and prove the main result of this paper. Theorem 3 If a redistribution mechanism is feasible and individually rational, then there cannot exist a linear rebate function which simultaneously satis\ufb01es all the following properties: 137 \fGujar & Narahari \u2022 DSIC \u2022 deterministic \u2022 anonymous \u2022 non-zero redistribution index. 
Proof : Assume to the contrary that there exists a linear function, say f, which satis\ufb01es the above properties. Let v1 \u227dv2 \u227d. . . \u227dvn. Then according to Theorem 2, for each agent i, ri = f(v1, v2, . . . , vi\u22121, vi+1, . . . , vn) = (c0, ep) + (c1, v1) + . . . + (cn\u22121, vn) where, ci = (ci1, ci2, . . . , cip) \u2208Rp, ep = (1, 1, . . . , 1) \u2208Rp, and (\u00b7, \u00b7) denotes the inner product of two vectors in Rp. Now, we will show that the worst case performance of f will be zero. To this end, we will study the structure of f, step by step. Observation 1: Consider type pro\ufb01le (v1, v2, . . . , vn) where v1 = v2 = . . . = vn = (0, 0, . . . , 0). For this type pro\ufb01le, the total Clarke surplus is zero and ri = (c0, ep) \u2200i \u2208N. Individual rationality implies, (c0, ep) \u22650 (5) Feasibility implies the total redistributed amount is less than the surplus, that is, X i ri = n(c0, ep) \u2a7d0 (6) From, (5) and (6), it is easy to see that, (c0, ep) = 0. Observation 2: Consider type pro\ufb01le (v1, v2, . . . , vn) where v1 = (1, 0, 0, . . . , 0) and v2 = . . . , vn = (0, 0, . . . , 0). For this type pro\ufb01le, r1 = 0 and if i \u0338= 1, ri = c11 \u22650 for individual rationality. For this type pro\ufb01le, it can be seen through straight forward calculations that the Clarke surplus is zero. Thus, for feasibility, P i ri = (n \u22121)c11 \u2264t = 0. This implies, c11 = 0. In the above pro\ufb01le, by considering v1 = (0, 1, 0, . . . , 0), we get c12 = 0. Similarly, one can show c13 = c14 = . . . = c1p = 0. Observation 3: Continuing on the same lines as above with, v1 = v2 = . . . = vi = ep, and vi+1 = (1, 0 . . . , 0) or (0, 1, 0 . . . , 0), . . . or (0, . . . , 0, 1), we get, ci+1 = (0, 0, . . . , 0) \u2200i \u2264p \u22121. Thus, ri = \uf8f1 \uf8f2 \uf8f3 (cp+1, vp+2) + . . . + (cn\u22121, vn) : if i \u2264p + 1 (cp+1, vp+1) + . . . + (ci\u22121, vi\u22121) +(ci, vi+1) + . . . 
+ (cn\u22121, vn) : otherwise (7) Thus a rebate function in any linear redistribution mechanism has to be necessarily of the form in the Equation (7). We now claim that the redistribution index of such a mechanism 138 \fRedistribution Mechanisms is zero. For any individually rational redistribution mechanism, the trivial lower bound on redistribution index is zero. We prove that in a linear redistribution mechanism, there exists a type pro\ufb01le, at which the fraction of the Clarke surplus that gets redistributed is zero. Consider the type pro\ufb01le: v1 = (2p \u22121, 2p \u22122, . . . , p + 1, p) v2 = (2p \u22122, 2p \u22123, . . . , p, p \u22121) . . . vp\u22121 = (p + 1, p, . . . , 3, 2) vp = (p, p \u22121, . . . , 2, 1) and vp+1 = vp+2 . . . = vn = (0, 0, . . . , 0). Now it can be seen, through straightforward calculations of the Clarke payments, that, with this type pro\ufb01le, agent 1 pays (p \u22121), agent 2 pays (p \u22122), . . . , agent (p \u22121) pays 1 and the remaining agents pay 0. Thus, the Clarke payment received is non-zero but it can be seen that ri = 0 for all the agents. Hence, the redistribution index for any linear redistribution mechanism has to be zero. \u25a1 The above theorem provides a disappointing piece of news. It rules out the possibility of a linear redistribution mechanism for heterogeneous settings which will have non-zero redistribution index. However, there are two ways to get around it. 1. The domain of types under which Theorem 3 holds is, \u0398i = Rp +, \u2200i \u2208N. One idea is to restrict the domain of types. In Section 4, we design a worst case optimal linear redistribution mechanism when the valuations of agents for the heterogeneous objects have a certain type of relationship. 2. Explore the existence of a rebate function which is not linear but yields a non-zero redistribution index. We explore this in Section 5. 
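The type profile used at the end of the proof can be checked numerically. Below is a small exhaustive-search sketch (helper names are ours; ties among efficient assignments are broken in favor of the first one found in permutation order, under which the claimed payments arise). It reproduces the claim that, at this profile, agent i pays p - i for i = 1, ..., p - 1 while the remaining agents pay nothing, so the Clarke surplus is strictly positive:

```python
from itertools import permutations

def best_assignment(vals, agents, p):
    """Exhaustively find a value-maximizing assignment of p objects to distinct
    agents (the first maximizer in permutation order breaks ties)."""
    best_val, best_map = 0, {}
    for chosen in permutations(agents, p):
        val = sum(vals[a][j] for j, a in enumerate(chosen))
        if val > best_val:
            best_val, best_map = val, {a: j for j, a in enumerate(chosen)}
    return best_val, best_map

def clarke_payments(vals, p):
    """Clarke payment of agent i: t_i = v(k*_{-i}) - (v(k*) - v_i(k*))."""
    n = len(vals)
    v_star, alloc = best_assignment(vals, range(n), p)
    pays = []
    for i in range(n):
        v_minus_i, _ = best_assignment(vals, [a for a in range(n) if a != i], p)
        v_i = vals[i][alloc[i]] if i in alloc else 0
        pays.append(v_minus_i - (v_star - v_i))
    return pays

def proof_profile(n, p):
    """The profile from the proof: v_i = (2p-i, ..., p-i+1) for i <= p, zero otherwise."""
    vals = [tuple(2 * p - i - j + 1 for j in range(1, p + 1)) for i in range(1, p + 1)]
    vals += [tuple(0 for _ in range(p))] * (n - p)
    return vals
```

For instance, with p = 2 and n = 4 the payments are (1, 0, 0, 0), so the surplus is 1; yet, as the proof shows, every rebate of the form in Equation (7) is zero at this profile.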
It should be noted that our impossibility result holds true when we are de\ufb01ning a linear rebate functions as in De\ufb01nition 4. Our result may not hold for other types of linearity. For example, sort bid components of other (n \u22121) agents and de\ufb01ne rebate function to be linear combination of these (n \u22121)p elements. At this point, we have not explored such linear rebate functions. 4. A Redistribution Mechanism for Heterogeneous Objects when Valuations have a Scaling Based Relationship Consider a scenario where the objects are not identical but the valuations for the objects are related and can be derived by a single parameter. As a motivating example, consider there is a website where people can put up their ads for free and assume that there are p slots available for advertisements and there are n agents interested in displaying their ads. Naturally, every agent will have a higher preference for a higher slot. Another motivating example could be, there is university web site which has p slots to display news about 139 \fGujar & Narahari various departments. De\ufb01ne click through rate of a slot as the number of times the ad is clicked, when the ad is displayed in that slot, divided by the number of impressions. Let the click through rates for slots be \u03b31 \u2265\u03b32 \u2265\u03b33 . . . \u2265\u03b3p. Assume that each agent has the same value for each click by the user, say vi. So, the agent\u2019s value for the jth slot will be \u03b3jvi. Let us use the phrase valuations with scaling based relationship to describe such valuations. We de\ufb01ne this more formally below. De\ufb01nition 6 We say the valuations of the agents have scaling based relationship if there exist positive real numbers \u03b31, \u03b32, \u03b33, . . . , \u03b3p > 0 such that, for each agent i \u2208N, the valuation for object j, say \u03b8ij, is of the form \u03b8ij = \u03b3jvi, where vi \u2208R+ is a private signal observed by agent i. 
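Under Definition 6, an efficient allocation is simply assortative: sort the agents by their private signals and give the object with the j-th largest gamma to the agent with the j-th largest signal (by the rearrangement inequality, since the gammas are sorted decreasingly). This makes Clarke payments easy to compute; the sketch below (our own helper names) checks the direct welfare-difference computation against the closed form t_i = sum_{j=i}^{p} (gamma_j - gamma_{j+1}) v_{j+1}, with gamma_{p+1} = 0, that is used in the mechanism of this section:

```python
def scaling_clarke_payments(gammas, sigs):
    """Clarke payments when theta_ij = gamma_j * v_i (gammas sorted decreasingly).
    Efficient allocation is assortative; assumes n - 1 >= p and distinct signals."""
    p, n = len(gammas), len(sigs)

    def welfare(signals):
        s = sorted(signals, reverse=True)
        return sum(g * x for g, x in zip(gammas, s))

    v_star = welfare(sigs)
    order = sorted(range(n), key=lambda a: -sigs[a])  # agents by decreasing signal
    pays = [0] * n
    for rank, a in enumerate(order):
        v_minus = welfare([sigs[b] for b in order if b != a])
        gained = gammas[rank] * sigs[a] if rank < p else 0
        pays[a] = v_minus - (v_star - gained)
    return pays

def closed_form_payment(gammas, sigs, i):
    """t_i = sum_{j=i}^{p} (gamma_j - gamma_{j+1}) * v_{j+1}, gamma_{p+1} = 0 (1-indexed)."""
    p = len(gammas)
    v = sorted(sigs, reverse=True)       # v[0] = v_1 >= v[1] = v_2 >= ...
    g = list(gammas) + [0]               # append gamma_{p+1} = 0
    return sum((g[j - 1] - g[j]) * v[j] for j in range(i, p + 1))
```

When all gammas are equal, this collapses to the homogeneous case: each of the p winners pays gamma times the (p+1)-th highest signal.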
Without loss of generality, we assume, \u03b31 \u2265\u03b32 \u2265\u03b33 . . . \u2265\u03b3p > 0. (For simplifying equations, we will assume that there are (n \u2212p) virtual objects, with \u03b3p+1 = \u03b3p+2 = . . . = \u03b3n = 0). We immediately note that the homogeneous setting is a special case that arises when \u03b31 = \u03b32 = \u03b33 = . . . = \u03b3p > 0 For the above setting, we design a Groves mechanism which is almost budget balanced and optimal in the worst case. Our mechanism is similar to that of Guo and Conitzer (2009) and our proof uses the same line of arguments. 4.1 The Proposed Mechanism We will use a linear rebate function. We propose the following mechanism: \u2022 The agents submit their bids. \u2022 The bids are sorted in decreasing order. \u2022 The highest bidder will be allotted the \ufb01rst object, the second highest bidder will be allotted the second object, and so on. \u2022 Agent i will pay ti \u2212ri, where ti is the Clarke payment and ri is the rebate. ti = p X j=i (\u03b3j \u2212\u03b3j+1)vj+1 \u2022 Let agent i\u2019s rebate be, ri = c0 + c1v1 + . . . + ci\u22121vi\u22121 + civi+1 + . . . + cn\u22121vn ci\u2019s are de\ufb01ned as follows. The mechanism is required to be individually rational and feasible. \u2022 The mechanism will be individually rational i\ufb00ri \u22650 \u2200i \u2208N. That is, \u2200i \u2208N, c0 + c1v1 + . . . + ci\u22121vi\u22121 + civi+1 + . . . + cn\u22121vn \u22650. 140 \fRedistribution Mechanisms \u2022 The mechanism will be feasible if the total redistributed payment is less than or equal to the surplus. That is, P i ri \u2264t = P i ti or t \u2212P i ri \u22650, where, t = p X j=1 j(\u03b3j \u2212\u03b3j+1)vj+1. With the above setup, we now derive c0, c1, . . . , cn\u22121 that will maximize the fraction of the surplus which is redistributed among the agents. Step 1: First, we claim that c0 = c1 = 0. This can be proved as follows. Consider the type pro\ufb01le, v1 = v2 = . . . = vn = 0. 
For this type pro\ufb01le, individual rationality implies ri = c0 \u22650 and t = 0. So for feasibility, P i ri = nc0 \u2264t = 0. That is, c0 should be zero. Similarly, by considering type pro\ufb01le v1 = 1, v2 = . . . = vn = 0, we get c1 = 0. Step 2: Using c0 = c1 = 0, \u2022 The feasibility condition can be written as: n\u22121 X j=2 ( (j \u22121)(\u03b3j\u22121 \u2212\u03b3j) \u2212(j \u22121)cj\u22121 \u2212(n \u2212j)cj ) vj \u2212(n \u22121)cn\u22121vn \u22650 (8) \u2022 The individual rationality condition can be written as c2v2 + . . . + ci\u22121vi\u22121 + civi+1 + . . . + cn\u22121vn \u22650 (9) Step 3: When we say our mechanism\u2019s redistribution index is e, we mean, P i ri \u2265et, that is, n\u22121 X j=2 \u0010 \u2212e(j \u22121)(\u03b3j\u22121 \u2212\u03b3j) + (j \u22121)cj\u22121 + (n \u2212j)cj \u0011 vj + (n \u22121)cn\u22121vn \u22650 (10) Step 4: De\ufb01ne \u03b21 = \u03b31 \u2212\u03b32, and for i = 2, . . . , n \u22121, let \u03b2i = i(\u03b3i \u2212\u03b3i+1) + \u03b2i\u22121. Now, inequalities (8), (9), and (10) have to be satis\ufb01ed for all values of v1 \u2265v2 \u2265. . . \u2265vn \u22650. By Theorem (1), we need to satisfy the following set of inequalities: Pj i=2 ci \u22650 \u2200j = 2, . . . n \u22121 e\u03b21 \u2264(n \u22122)c2 \u2264\u03b21 e\u03b2i\u22121 \u2264n Pi\u22121 j=2 cj + (n \u2212i)ci \u2264\u03b2i\u22121 i = 3, . . . , p e\u03b2p \u2264n Pi\u22121 j=2 cj + (n \u2212i)ci \u2264\u03b2p i = p + 1, . . . , n \u22121 e\u03b2p \u2264n Pn\u22121 j=2 cj \u2264\u03b2p Now, the mechanism designer wishes to design a mechanism that maximizes e subject to the above constraints. De\ufb01ne xj = Pj i=2 ci for j = 2, . . . , n \u22121. This is equivalent to solving the following linear program. 141 \fGujar & Narahari maximize e s.t. e\u03b21 \u2264(n \u22122)x2 \u2264\u03b21 e\u03b2i\u22121 \u2264ixi\u22121 + (n \u2212i)xi \u2264\u03b2i\u22121 i = 3, . . . 
, p e\u03b2p \u2264ixi\u22121 + (n \u2212i)xi \u2264\u03b2p i = p + 1, . . . , n \u22121 e\u03b2p \u2264nxn\u22121 \u2264\u03b2p xi \u22650 \u2200i = 2, . . . , n \u22121 (11) \u25a1 So, given n and p, the social planner will have to solve the above optimization problem and determine the optimal values of e, c2, c3, . . . , cn\u22121. It would be of interest to derive a closed form solution for the above problem. The discussion above can be summarized as the following theorem. Theorem 4 When the valuations of the agents have scaling based relationship, for any p and n > p+1, the linear redistribution mechanism obtained by solving LP (11) is worst case optimal among all Groves redistribution mechanisms that are feasible, individually rational, deterministic, and anonymous. This mechanism is an example of a mechanism having non-zero redistribution index. Proof: The worst case optimality of the mechanism can be proved following the line of arguments of Guo and Conitzer (2009). As per the impossibility Theorem 3, there is no linear redistribution mechanism for general heterogeneous setting having non-zero e\ufb03ciency. However, when objects have scaling based relationship, the linear redistribution mechanism, that is obtained by solving LP (11) has non-zero e\ufb03ciency at least for some (n, p) instances. This is obtained by actually solving the LP (for example, using MATLAB) for various values of n and p. This certainly proves that, at least for n = 10, 12, 14, p = 2, 3, 4, . . . , 8 and when valuations have scaling based correlation, the worst case optimal mechanism given by the LP (11) has non-zero redistribution index. Now we obtain an upper bound on the redistribution index of a redistribution mechanism LP (11). Claim 1 If e\u2217is the solution of the LP (11), then e\u2217\u2264min \u001aA B , B A \u001b where, A = P i=1,3,5,... \u03b2i\u22121 \u0012 n i \u0013 and B = P i=2,4,6,... \u03b2i\u22121 \u0012 n i \u0013 . 
Proof: The LP (11) can be written as
$$\text{maximize } e \quad \text{s.t. } e\beta \le Mx \le \beta, \; x \ge 0$$
where $x = (x_2, x_3, \dots, x_{n-1}) \in \mathbb{R}_+^{n-2}$, $\beta = (\beta_1, \beta_2, \dots, \beta_p, \beta_p, \dots, \beta_p) \in \mathbb{R}_+^{n-1}$, and
$$M = \begin{bmatrix} n-2 & 0 & 0 & \cdots & 0 \\ 3 & n-3 & 0 & \cdots & 0 \\ 0 & 4 & n-4 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & n-1 & 1 \\ 0 & \cdots & 0 & 0 & n \end{bmatrix}$$
Now, $y = (y_1, y_2, \dots, y_{n-1}) \in \mathbb{R}^{n-1}$ is in the range of $M$ iff
$$\binom{n}{2} y_1 + \binom{n}{4} y_3 + \cdots = \binom{n}{3} y_2 + \binom{n}{5} y_4 + \cdots \qquad (12)$$
Now,
$$e^* \beta_i \binom{n}{i+1} \le \binom{n}{i+1} (Mx)_i \le \binom{n}{i+1} \beta_i \quad \forall i \in \{1, 2, 3, \dots, n-1\}.$$
Summing these inequalities over odd $i$ and using (12), we get $e^* A \le B$; summing over even $i$, we get $e^* B \le A$. This proves our claim. □

We verified using MATLAB, for $n = 10, 12, 14$ and $p = 2, 3, \dots, 8$, that the redistribution index of the proposed mechanism is in fact $e^* = \min\left\{ \frac{A}{B}, \frac{B}{A} \right\}$.

5. Non-linear Redistribution Mechanisms for the Heterogeneous Setting

We should note that the homogeneous objects case is a special case of the heterogeneous objects case in which each bidder submits the same bid for all objects. Thus, we cannot expect any redistribution mechanism to perform better than in the homogeneous objects case. For $n \le p+1$, the worst case redistribution is zero in the homogeneous case, and so it is in the heterogeneous case (Guo & Conitzer, 2009; Moulin, 2009). So, we assume $n > p+1$. In this section, we propose two redistribution mechanisms with non-linear rebate functions.
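The bound of Claim 1 is cheap to evaluate. The sketch below is our illustration; we adopt the convention $\beta_0 = 0$ (an assumption on our part, so that the $i = 1$ term of $A$ vanishes):

```python
from math import comb

def claim1_bound(beta, n):
    # beta = (beta_1, ..., beta_{n-1}); we set beta_0 = 0 by convention.
    # A sums beta_{i-1} * C(n, i) over odd i, and B does the same over even i.
    b = [0.0] + list(beta)   # b[i] holds beta_i, with b[0] = beta_0 = 0
    A = sum(b[i - 1] * comb(n, i) for i in range(1, n + 1) if i % 2 == 1)
    B = sum(b[i - 1] * comb(n, i) for i in range(1, n + 1) if i % 2 == 0)
    return min(A / B, B / A)
```

For $n = 5$ and $\beta = (0.5, 1, 1, 1)$, this gives $A = 11$ and $B = 10$, so the bound is $10/11$.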
We construct a redistribution scheme by applying the mechanism proposed by Bailey (1997) to the heterogeneous setting; we refer to this proposed mechanism on heterogeneous objects as the BAILEY-CAVALLO redistribution mechanism. It is crucial to note that the non-zero redistribution index of the BAILEY-CAVALLO mechanism does not trivially follow from that of the mechanism in the work of Bailey. We also rewrite the WCO mechanism and extend its rebate functions to the heterogeneous objects setting; we call this mechanism HETERO. In each of the mechanisms, BAILEY-CAVALLO and HETERO, the objects are assigned to the agents who value them the most. The Clarke payments are collected from the agents and the surplus is redistributed among the agents according to the rebate functions defined in the mechanism. Hence, both are Groves redistribution mechanisms and therefore DSIC. As stated above, for $n \le p+1$, the redistribution index of any redistribution mechanism has to be zero. For the case $n > p+1$, the redistribution index of any linear redistribution mechanism has to be zero (Theorem 3). We prove, for $n \ge 2p+1$, that BAILEY-CAVALLO has a non-zero redistribution index. We only conjecture that HETERO is worst case optimal, that is, that no mechanism can have a better redistribution index than HETERO. We also conjecture that HETERO's redistribution index is the same as that of WCO, which is non-zero when $n > p+1$. Thus, for $n \in \{p+2, p+3, \dots, 2p\}$, there is still no redistribution mechanism for which a non-zero redistribution index is proved.

5.1 BAILEY-CAVALLO Mechanism

First, consider the case $p = 1$. Let the valuations of the agents for the object be $v_1 \ge v_2 \ge \dots \ge v_n$. The agent with the highest valuation receives the object and pays the second highest bid.
Cavallo (2006) proposed the rebate function
$$r_1 = r_2 = \frac{1}{n} v_3, \qquad r_i = \frac{1}{n} v_2 \quad \text{for } i > 2.$$
A similar mechanism was independently proposed by Porter et al. (2004). Motivated by this scheme, we propose a scheme for the heterogeneous setting. Suppose agent $i$ is excluded from the system, and let $t_{-i}$ be the Clarke surplus in the resulting system (defined in Table 1). Define
$$r_i^B = \frac{1}{n} t_{-i} \quad \forall i \in N \qquad (13)$$
- As the Clarke surplus is always positive, $r_i^B \ge 0$ for all $i$. Thus, this scheme satisfies individual rationality.
- $t_{-i} \le t \; \forall i$ (revenue monotonicity). So $\sum_i r_i^B = \sum_i \frac{1}{n} t_{-i} \le n \cdot \frac{1}{n} t = t$. Thus, this scheme is feasible. (The revenue monotonicity follows from the fact that the valuations are non-negative and preferences are unit demand. Gul and Stacchetti (1999) showed that with unit demand preferences, the VCG payments coincide with the smallest Walrasian prices, which in turn do not decrease with the addition of an agent. Thus, the addition of an agent cannot decrease the total payments.)²

We now show that the BAILEY-CAVALLO scheme has a non-zero redistribution index if $n \ge 2p+1$. First we state two lemmas; the proofs are given in Appendix B. These lemmas are useful in designing redistribution mechanisms for the heterogeneous setting as well as in the analysis of the mechanisms. Lemma 2 is used to show that the redistribution index of the BAILEY-CAVALLO mechanism is non-zero. Lemma 1 is used to find an allocatively efficient outcome for the settings under consideration; it is also useful in determining the Clarke payments.

Lemma 1  If we sort the bids of all the agents for each object, then:
1. An optimal allocation, that is, an allocatively efficient allocation, will consist of agents having bids among the $p$ highest bids for each object.
2. Consider an optimal allocation $k^*$.
If any of the $p$ agents receiving objects in $k^*$ is dropped, then there always exists an allocation $k^*_{-i}$ that is optimal (on the remaining $n-1$ agents) and allocates objects to the remaining $p-1$ agents. The objects that these $p-1$ agents receive in $k^*_{-i}$ may not, however, be the same as the objects they are allocated in $k^*$.

2. We thank the anonymous reviewer for pointing out this reference.

Lemma 2  There are at most $2p$ agents involved in deciding the Clarke payment.

Note: When the objects are identical, the bids of $p+1$ agents are involved in determining the Clarke payments.

Now, we show that the redistribution index of the BAILEY-CAVALLO mechanism is non-zero.

Theorem 5  When there is a sufficient number of agents (in particular, $n > 2p$), the BAILEY-CAVALLO redistribution mechanism has a non-zero redistribution index.

Proof: In Lemma 2, we have shown that at most $2p$ agents are involved in determining the Clarke surplus. Thus, for any type profile, there are $n - 2p$ agents for whom $t_{-i} = t$, which implies that at least $\frac{n-2p}{n} t$ is redistributed. That is, the redistribution index of the mechanism is at least $\frac{n-2p}{n} > 0$. □

Note that the above mechanism may not be worst case optimal. When objects are identical, the WCO mechanism performs better in the worst case than the above mechanism, so we suspect that in the heterogeneous setting as well, the above mechanism is not optimal in the worst case. In the next subsection, we explore another rebate function, namely HETERO.

5.2 HETERO: A Redistribution Mechanism for the Heterogeneous Setting

When the objects are identical, the WCO mechanism is given by equation (3). We give a novel interpretation to it. Consider the scenario in which one agent is absent from the scene.
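Theorem 5 can be spot-checked numerically. The following sketch is our illustration, assuming unit-demand agents whose valuations are given as an $n \times p$ matrix; Clarke payments are computed as marginal externalities via an assignment solver, and the BAILEY-CAVALLO rebates are $r_i^B = t_{-i}/n$ as in equation (13):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def welfare(V):
    # Value of the efficient (welfare-maximizing) unit-demand assignment.
    if V.size == 0:
        return 0.0
    rows, cols = linear_sum_assignment(V, maximize=True)
    return float(V[rows, cols].sum())

def clarke_surplus(V):
    # Total Clarke payment: each agent pays the externality it imposes,
    # i.e. (welfare of others without it) - (welfare of others with it).
    if V.shape[0] == 0:
        return 0.0
    rows, cols = linear_sum_assignment(V, maximize=True)
    total = float(V[rows, cols].sum())
    gets = dict(zip(rows, cols))
    t = 0.0
    for i in range(V.shape[0]):
        w_others_without_i = welfare(np.delete(V, i, axis=0))
        w_others_with_i = total - (V[i, gets[i]] if i in gets else 0.0)
        t += w_others_without_i - w_others_with_i
    return t

def bailey_cavallo_rebates(V):
    # r_i^B = t_{-i} / n, with t_{-i} the Clarke surplus without agent i.
    n = V.shape[0]
    return np.array([clarke_surplus(np.delete(V, i, axis=0)) / n
                     for i in range(n)])
```

On random instances with $n = 6$ and $p = 2$, the rebates come out non-negative, sum to at most $t$, and redistribute at least a fraction $\frac{n-2p}{n}$ of the surplus, as Theorem 5 predicts.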
Then the Clarke payment received is either $p v_{p+1}$ or $p v_{p+2}$, depending upon which agent is absent. If we remove two agents, the surplus is $p v_{p+1}$, $p v_{p+2}$, or $p v_{p+3}$, depending upon which two agents are removed. Until $n - p - 1$ agents are removed, we get a non-zero surplus. If we remove $n - p$ or more agents from the system, there is no need for any mechanism for the assignment of the objects. So, we consider the cases in which we remove $k$ agents, where $1 \le k < n - p$. Now let $t_{-i,k}$ be the average payment received when agent $i$ is removed along with $k$ other agents, that is, a total of $k+1$ agents including $i$ are removed; the average is taken over all possible selections of $k$ agents from the remaining $n-1$ agents. We can rewrite the WCO mechanism in terms of $t_{-i}$ and $t_{-i,k}$. Observe that $t_{-i}$ and $t_{-i,k}$ can be defined in the heterogeneous setting as well. We propose to use the rebate function
$$r_i^H = \alpha_1 t_{-i} + \sum_{k=2}^{n-p-1} \alpha_k \, t_{-i,k-1} \qquad (14)$$
where the $\alpha_k$ are suitable weights assigned to the surplus generated when a total of $k$ agents are removed from the system. Different choices of the $\alpha_k$ yield different mechanisms; we choose the $\alpha_k$ as follows.

5.2.1 The Equivalence of HETERO and WCO when Objects are Identical

It is desirable that HETERO match the WCO mechanism when the objects are homogeneous. So we choose the $\alpha$'s in equation (14) in a way that ensures that, when the objects are identical, $r_i^H$ in equation (14) equals $r_i^{WCO}$ in equation (3) for all type profiles. Since the rebate is a function of the remaining $n-1$ bids, we can write it as $r_i = f(x_1, x_2, \dots, x_{n-1})$, where $x_1, x_2, \dots, x_{n-1}$ are the bids without agent $i$, in decreasing order. Note that in this case each bidder submits a bid $b_i \in \mathbb{R}_+$.
Now we can write $t_{-i,k-1}$, $r_i^H$, and $r_i^{WCO}$ in terms of $x_1, x_2, \dots, x_{n-1}$ as
$$t_{-i,k-1} = \sum_{l=0}^{k-1} \frac{\binom{p+l}{p} \binom{n-p-2-l}{k-1-l}}{\binom{n-1}{k-1}} \, x_{p+1+l}$$
$$r_i^H = \sum_{k=1}^{n-p-1} \alpha_k \, t_{-i,k-1} \qquad (15)$$
$$r_i^{WCO} = \sum_{l=0}^{n-p-1} c_{p+1+l} \, x_{p+1+l} \qquad (16)$$
where $c_i$, $i = p+1, p+2, \dots, n-1$, are given by equation (2). Consider the type profile $(x_1 = 1, x_2 = 1, \dots, x_{p+1} = 1, x_{p+2} = 0, \dots, x_{n-1} = 0)$. For HETERO to agree with WCO, the coefficients of $x_{p+1}$ in equations (15) and (16) should be the same. Now consider the type profile $(x_1 = 1, x_2 = 1, \dots, x_{p+2} = 1, x_{p+3} = 0, \dots, x_{n-1} = 0)$. As the coefficients of $x_{p+1}$ in equations (15) and (16) are the same, the coefficients of $x_{p+2}$ should also be equal in the two equations. Thus, the coefficients of $x_{p+1}, x_{p+2}, \dots, x_{n-1}$ in equations (15) and (16) should agree. Let $L = n - p - 1$. Thus, for $i = p+1, \dots, n-1$,
$$c_i = \sum_{k=0}^{n-i-1} \alpha_{L-k} \, \frac{\binom{i-1}{p} \binom{n-i-1}{k}}{\binom{n-1}{p+1+k}} \qquad (17)$$
The above system of equations yields, for $i = 1, 2, \dots, L$,
$$\alpha_i = (-1)^{i+1} \frac{(L-i)! \, p!}{(n-i)!} \, \chi \sum_{j=0}^{L-i} \left\{ \binom{i+j-1}{j} \sum_{l=p+i+j}^{n-1} \binom{n-1}{l} \right\} \qquad (18)$$
where $\chi$ is given by
$$\chi = \frac{(n-p) \binom{n-1}{p-1}}{\sum_{j=p}^{n-1} \binom{n-1}{j}}.$$

5.2.2 Properties of HETERO

As the HETERO mechanism matches WCO when the objects are identical, HETERO satisfies individual rationality and feasibility in the homogeneous case. These two properties, however, remain to be shown in the heterogeneous case.
Conjecture 1  The HETERO mechanism satisfies individual rationality and feasibility, is worst case optimal, and has the same redistribution index as WCO.

5.2.3 Intuition Behind Individual Rationality of HETERO

We have to show that for each agent $i$, $r_i^H \ge 0$ at all type profiles. For convenience, we suppress the index $i$ and write $r = r_i^H$, $\Gamma_1 = t_{-i}$, and $\Gamma_j = t_{-i,j-1}$ for $j = 2, \dots, L$. The rebate is then given by $r = \sum_j \alpha_j \Gamma_j$, and we have to show that $r \ge 0$. Note that $\Gamma_1 \ge \Gamma_2 \ge \dots \ge \Gamma_L \ge 0$: the $\Gamma$'s are monotone because the absence of more agents either decreases the VCG payments or leaves them unchanged. So, if $\sum_{i=1}^{j} \alpha_i \ge 0$ for all $j = 1, \dots, L$, individual rationality would follow from Theorem 1. We observe that, in general, this is not true. The important observation is that, though the $\Gamma_i$'s are decreasing positive real numbers, they are related; for example, we can show that if $\Gamma_1 > 0$, then $\Gamma_2 > 0$. In our experiments, described in the next section, we keep track of the ratio $\frac{\Gamma_2}{\Gamma_1}$ and observe that it lies in $[0.5, 1]$, whereas for Theorem 1 to be applicable this ratio could be any value in $[0, 1]$. Thus, though the $\alpha$'s alternate in sign, the relation among the $\Gamma$'s keeps $r$ from becoming negative, and keeps it within limits such that the total rebate to the agents is less than or equal to the total Clarke payment. It remains to show individual rationality analytically in the general case. We are, however, only able to show it in the following cases.

1. The case $p = 2$: (i) if $n = 4$, $\alpha_1 = \frac{1}{4}$; (ii) if $n = 5$, $\alpha_1 = 0.27273$, $\alpha_2 = -0.18182$; (iii) if $n = 6$, $\alpha_1 = 0.29487$, $\alpha_2 = -0.25641$, $\alpha_3 = 0.12821$.

2. The case $p = 3$: (i) if $n = 5$, $\alpha_1 = \frac{1}{5}$; (ii) if $n = 6$, $\alpha_1 = 0.21875$, $\alpha_2 = -0.15625$. (iii)
If $n = 7$, $\alpha_1 = 0.23810$, $\alpha_2 = -0.21429$, $\alpha_3 = 0.11905$.

By Theorem 1, it follows that in the above cases the proposed mechanism satisfies individual rationality.

5.2.4 Feasibility and Worst Case Optimality of HETERO

Similarly, we believe that the $\alpha$'s adjust the rebate functions optimally such that HETERO remains feasible, is worst case optimal, and has the same redistribution index as WCO. Though we do not have an analytical proof, we provide some empirical evidence for the conjecture in Section 6.

6. Experimental Analysis

We perform our experiments in two sets. In the first set, we consider bids that are real numbers; in the second set, we consider bidders submitting binary bids. We use these experiments to provide empirical evidence for Conjecture 1.

6.1 Empirical Evidence for Individual Rationality of HETERO

Solving equations (18) is a challenging task. Though the new mechanism is an extension of the Moulin and WCO mechanisms, we are not able to prove individual rationality and feasibility of HETERO analytically. We therefore seek empirical evidence.

6.1.1 Simulation 1

We consider various combinations of $n$ and $p$. For each agent and each object, the valuation is generated as a uniform random variable in $[0, 100]$. We run our simulations for the following combinations: for $p = 2$, $n = 5, 6, \dots, 14$; for $p = 3$, $n = 7, 8, \dots, 14$; and for $p = 4$, $n = 9, 10, \dots, 14$. For each combination of $n$ and $p$, we randomly generated 100,000 bid profiles and evaluated our mechanism, keeping track of its worst case performance over these profiles. Our mechanism was feasible and individually rational on all 100,000 bid profiles. The redistribution index of our mechanism is upper bounded by that of the WCO mechanism. We observed that the worst case performance over these 100,000 random bid profiles was the same as that of WCO.
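The monotonicity of the $\Gamma$'s invoked in Section 5.2.3 can be checked by brute force in the homogeneous case, where the Clarke surplus with $p$ identical objects is $p$ times the $(p+1)$-th highest remaining bid. The sketch below is our illustration, not the simulation code used for the experiments:

```python
from itertools import combinations

def surplus_identical(bids, p):
    # Clarke surplus with p identical objects: each of the p winners pays
    # the (p+1)-th highest bid among the participating agents.
    b = sorted(bids, reverse=True)
    return p * b[p] if len(b) > p else 0.0

def gammas(bids, i, p):
    # Gamma_1 = t_{-i}; Gamma_j = t_{-i,j-1}, the surplus averaged over all
    # ways of removing agent i together with j-1 further agents.
    others = [v for j, v in enumerate(bids) if j != i]
    n = len(bids)
    out = []
    for j in range(1, n - p):   # j = 1 .. L, with L = n - p - 1
        vals = [surplus_identical([v for k, v in enumerate(others)
                                   if k not in S], p)
                for S in combinations(range(len(others)), j - 1)]
        out.append(sum(vals) / len(vals))
    return out
```

For bids $(9, 7, 5, 4, 2, 1)$ with $p = 2$ and agent 1 removed, this gives the decreasing sequence $\Gamma = (8, 5.6, 3.2)$, consistent with the monotonicity argument above.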
This is a strong indication that our mechanism will perform well in general.

6.1.2 Simulation 2: Bidders with Binary Valuations

Suppose each bidder's valuation for each object is either 0 or 1. Then there are $2^{np}$ possible bid profiles. We ran an experiment to evaluate our mechanism over all possible bid profiles of agents with binary valuations, for $p = 2$ and $n = 5, 6, \dots, 12$. We found that the mechanism is feasible and individually rational, and that its worst case performance is the same as that of the WCO mechanism. As indicated earlier, no mechanism can perform better than the WCO mechanism in the worst case, and our mechanism performs equally well. Thus, though an analytical proof is elusive, for binary valuation settings with $p = 2$ and $n = 5, 6, \dots, 12$, our mechanism is worst case optimal.

6.2 BAILEY-CAVALLO vs HETERO

In this subsection, we compare the worst case redistribution index of BAILEY-CAVALLO with that of HETERO for a varying number of objects when there are 10 agents in the system. That is, we study the worst case redistribution index for various $p$ when $n = 10$, the worst case being taken over 50,000 randomly generated bid profiles. The comparison is depicted in Figure 1. The redistribution index of WCO is an upper bound for any redistribution mechanism in the heterogeneous setting. Since the simulations are not exhaustive, the exact worst case performance of the mechanisms could be worse than observed; in the simulations, however, we never encountered a situation where HETERO is worse than WCO. We can see from Figure 1 that the BAILEY-CAVALLO mechanism's worst case performance is better than that of HETERO for $p = 3, 4, 5, 6, 7$, where the worst case is the worst over the 50,000 randomly generated bid profiles in our simulations.
The other observation we made in our simulations is that most of the time (70%), BAILEY-CAVALLO redistributes more of the VCG surplus than HETERO, even though its worst case performance is worse than that of HETERO. These observations also lead to a question that Cavallo (2008) raised in the context of dynamic redistribution mechanisms: do we really need a highly sophisticated mechanism that is worst case optimal, when a simple mechanism performs quite well in general?

Figure 1: Redistribution index vs. number of objects ($p$) for HETERO, BAILEY-CAVALLO, and WCO, with $n = 10$ agents.

7." + }, + { + "url": "http://arxiv.org/abs/0902.0524v3", + "title": "An Optimal Multi-Unit Combinatorial Procurement Auction with Single Minded Bidders", + "abstract": "The current art in optimal combinatorial auctions is limited to handling the\ncase of single units of multiple items, with each bidder bidding on exactly one\nbundle (single minded bidders). This paper extends the current art by proposing\nan optimal auction for procuring multiple units of multiple items when the\nbidders are single minded. The auction minimizes the cost of procurement while\nsatisfying Bayesian incentive compatibility and interim individual rationality.\nUnder appropriate regularity conditions, this optimal auction also satisfies\ndominant strategy incentive compatibility.", + "authors": "Sujit Gujar, Y Narahari", + "published": "2009-02-03", + "updated": "2010-04-24", + "primary_cat": "cs.GT", + "cats": [ + "cs.GT" + ], + "main_content": "Introduction

1.1 Motivation and Background

Auction based mechanisms are extremely relevant in modern day electronic procurement systems [2, 15] since they enable a promising way of automating negotiations with suppliers and achieving the ideal goals of procurement efficiency and cost minimization.
In many cases it may be beneficial to allow the suppliers to bid on combinations of items rather than on single items. Such auctions are called combinatorial auctions. Simply defined, a combinatorial auction is a mechanism where bidders can submit bids on combinations of items. The winner determination problem is to select a winning set of bids such that each item to be bought is included in at least one of the selected bids and the total cost of procurement is minimized. In this paper, our interest is in multi-unit combinatorial procurement auctions, where a buyer is interested in procuring multiple units of multiple items. In the mechanism design literature, an optimal auction refers to an auction which optimizes a performance metric (for example, maximizes revenue to a seller or minimizes cost to a buyer) subject to two critical game theoretic properties: (1) incentive compatibility and (2) individual rationality. Incentive compatibility comes in two forms: dominant strategy incentive compatibility (DSIC) and Bayesian incentive compatibility (BIC). DSIC guarantees that reporting true valuations (or costs, as the case may be) is a best response for each bidder, irrespective of the valuations (or costs) reported by the other bidders. BIC is a much weaker property which ensures that truth revelation is a best response for each bidder whenever the other bidders are also truthful. Individual rationality (IR) is a property which assures non-negative utility to each participant in the mechanism, thus ensuring voluntary participation. The IR property may be (1) ex-ante IR (if the bidders decide on participation even before knowing their exact types, i.e., valuations or costs), (2) interim IR (if the bidders decide on participation just after observing their types), or (3) ex-post IR (if the bidders can withdraw even after the game is over). For more details on these concepts, the reader is referred to [5, 6, 12, 14].
1.2 Contributions and Outline

In his seminal work, Myerson [14] characterized an optimal auction for selling a single unit of a single item. Extending his work has been attempted by several researchers, and there have been generalizations for multi-unit single item auctions [11, 9, 7]. Armstrong [1] characterized an optimal auction for two objects where the type sets are binary. Malakhov and Vohra [11] studied an optimal auction for single item, multi-unit procurement auctions using a network interpretation. An implicit assumption in the above papers is that the sellers have limited capacity for the item; they also assume that the valuation sets are discrete. Kumar and Iyengar [9] and Gautam, Hemachandra, Narahari, and Prakash [7] have proposed an optimal auction for multi-unit, single item procurement. Recently, Ledyard [10] has looked at single unit combinatorial auctions in the presence of single minded bidders. A single minded bidder is one who bids only on a particular subset of the items. Ledyard's auction, however, does not handle multiple units of multiple items, and this motivates our current work, which extends Ledyard's auction to the case of procuring multiple units of multiple items. The following are our specific contributions.

1. We characterize Bayesian incentive compatibility and interim individual rationality for procuring multiple units of multiple items when the bidders are single minded, by deriving a necessary and sufficient condition.

2. We design an optimal auction that minimizes the cost of procurement while satisfying Bayesian incentive compatibility and interim individual rationality.

3. We show, under appropriate regularity conditions, that the proposed optimal auction also satisfies dominant strategy incentive compatibility.

Some of the results presented here appeared in our paper [8]. The rest of the paper is organized as follows.
First, we explain our model in Section 2 and describe the notation that we use; we also outline certain essential technical details of optimal auctions from the literature. In Section 3, we present the three contributions listed above. Section 4 concludes the paper.

2 The Model

We consider a scenario in which there is a buyer and multiple sellers. The buyer is interested in procuring a set of distinct objects, I, and wants multiple units of each object; she specifies her demand for each object. The sellers are single minded: each seller is interested in selling a specific bundle of the objects. We illustrate through an example below.

Example 2.1. Consider a buyer interested in buying 100 units of A, 150 units of B, and 200 units of C. Assume that there are three sellers. Seller 1 might be interested in providing 70 units of the bundle {A, B}, that is, 70 units of A and 70 units of B as a bundle. Because he is single minded, he does not bid for any other bundle; we also assume that he supplies equal numbers of A and B. Similarly, seller 2 may provide a bid for 100 units of the bundle {B, C}, and the bid from seller 3 may be 125 units of the bundle {A, C}.

The sellers are capacitated, i.e., there is a maximum quantity of the bundle of interest they could supply. A bid therefore specifies a unit cost of the bundle and the maximum quantity that can be supplied. After receiving these bids, the buyer determines the allocation and payment as per the auction rules. We summarize below the important assumptions in the model.

- The sellers are single minded.
- The sellers can collectively fulfill the demands specified by the buyer.
- The sellers are capacitated, i.e., they cannot supply beyond the capacity specified in their bids.
- A seller will never inflate his capacity, as this can be detected: if he fails to supply a quantity exceeding his capacity, he incurs a penalty which is a deterrent to inflating his capacity. This is an important assumption.
- Whenever the buyer buys anything from a seller, she procures the same number of units of each of the items in the seller's bundle of interest.
- All the participants are rational and intelligent.

Table 1 shows the notation that will be used in the rest of the paper.

Table 1: Notation

$I$ : set of items the buyer is interested in buying, $\{1, 2, \dots, m\}$
$D_j$ : demand for item $j$, $j = 1, \dots, m$
$N$ : set of sellers, $\{1, 2, \dots, n\}$
$c_i$ : true cost of producing one unit of the bundle of interest to seller $i$, $c_i \in [\underline{c}_i, \bar{c}_i]$
$q_i$ : true capacity for the bundle which seller $i$ can supply, $q_i \in [\underline{q}_i, \bar{q}_i]$
$\hat{c}_i$ : cost reported by seller $i$
$\hat{q}_i$ : capacity reported by seller $i$
$\theta_i$ : true type of seller $i$, i.e., his cost and capacity, $\theta_i = (c_i, q_i)$
$b_i$ : bid of seller $i$, $b_i = (\hat{c}_i, \hat{q}_i)$
$b$ : bid vector $(b_1, b_2, \dots, b_n)$
$b_{-i}$ : bid vector without seller $i$, i.e., $(b_1, \dots, b_{i-1}, b_{i+1}, \dots, b_n)$
$t_i(b)$ : payment to seller $i$ when the submitted bid vector is $b$
$T_i(b_i)$ : expected payment to seller $i$ when he submits bid $b_i$; the expectation is over all possible $b_{-i}$
$x_i = x_i(b)$ : quantity of the bundle procured from seller $i$ when the bid vector is $b$
$X_i(b_i)$ : expected quantity of the bundle procured from seller $i$ when he submits bid $b_i$; the expectation is over all possible $b_{-i}$
$f_i(c_i, q_i)$ : joint probability density function of $(c_i, q_i)$
$F_i(c_i, q_i)$ : cumulative distribution function of $f_i(c_i, q_i)$
$f_i(c_i \mid q_i)$ : conditional probability density of the production cost given that the capacity of seller $i$ is $q_i$
$F_i(c_i \mid q_i)$ : cumulative distribution function of $f_i(c_i \mid q_i)$
$H_i(c_i, q_i)$ : virtual cost function for seller $i$, $H_i(c_i, q_i) = c_i + \frac{F_i(c_i \mid q_i)}{f_i(c_i \mid q_i)}$
$\rho_i(b_i)$ : expected offered surplus to seller $i$ when his bid is $b_i$
$u_i(b, \theta_i)$ : utility to seller $i$ when the bid vector is $b$ and his type is $\theta_i$
$U_i(b_i, \theta_i)$ : expected utility to seller $i$ when he submits bid $b_i$ and his type is $\theta_i$; the expectation is over all possible $b_{-i}$

2.1 Some Preliminaries

The problem of designing an optimal mechanism was first studied by Myerson [14] and Riley and Samuelson [16]. Myerson's work is more general and considers the setting of a seller trying to sell a single unit of a single object to one of several possible buyers. Note that here, unlike in the rest of the paper, the auctioneer is the seller and his objective is to maximize revenue. (In the rest of the paper, the auctioneer will be a buyer and her objective will be to minimize the cost of procurement.) In this particular setting, as per the notation defined in Table 1, $m = 1$ and $D_1 = 1$ (so $q_i$ is 1 for all the agents and is no longer private information), and $F_i$, $H_i$ defined in Table 1 become functions of a single variable. The buyer's private information is the maximum cost he is willing to pay, which we denote $\theta_i$, with $\theta_i \in \Theta_i = [\underline{\theta}_i, \bar{\theta}_i]$. Myerson [14] characterizes all auction mechanisms that are Bayesian incentive compatible and interim individually rational in this setting.
From this, he derives the allocation rule and the payment function for the optimal auction mechanism, using an interesting notion called the virtual cost function, defined as
$$H_i(\theta_i) = \theta_i - \frac{1 - F_i(\theta_i)}{f_i(\theta_i)}$$
He has shown that an optimal auction is one with the allocation rule
$$x_i(\theta) = \begin{cases} 1 & \text{if } H_i(\theta_i) > \max\big\{ 0, \max_{j \ne i} H_j(\theta_j) \big\} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$
and
$$T_i(\theta_i) = E_{b_{-i}}\big( u_i(\theta) - \theta_i x_i(\theta) \big) = U_i(\theta_i) - \theta_i X_i(\theta_i) = \int_{\underline{\theta}_i}^{\theta_i} X_i(s)\,ds - \theta_i X_i(\theta_i) \qquad (2)$$
One such payment rule is given by
$$t_i(\theta_i, \theta_{-i}) = \left( \int_{\underline{\theta}_i}^{\theta_i} x_i(s, \theta_{-i})\,ds \right) - \big( \theta_i x_i(\theta) \big) \quad \forall \theta$$
Any auction for a single unit of a single item which satisfies equations (1) and (2) is optimal, i.e., it maximizes the seller's revenue and is BIC and IIR.

Regularity Assumption: If $H_i(\theta_i)$ is increasing in $\theta_i$, we say that the virtual cost function is regular, or that the regularity condition holds. Under this assumption, one such optimal auction is:
1. Collect bids from the buyers.
2. Sort them according to their virtual costs.
3. If the highest virtual cost is positive, allocate the object to the corresponding bidder.
4. The winner, say $i$, pays
$$t_i(\theta_{-i}) = \inf\{ \theta_i \mid H_i(\theta_i) > 0 \text{ and } H_i(\theta_i) > H_j(\theta_j) \;\forall j \ne i \}$$

From the payment rule, it is a dominant strategy for each bidder to bid truthfully under the regularity assumption. When the bidders are symmetric, i.e., $F_i$ is the same for all $i$, the above optimal auction is Vickrey's second price auction [17]. Myerson's work can be easily extended to the case of multi-unit auctions with unit demand, but problems arise when the unit-demand assumption is relaxed: we move into a setting of multi-dimensional type information, which makes truth elicitation non-trivial.
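To make the single item rule above concrete, the following sketch (our illustration, assuming symmetric bidders with valuations uniform on $[0, 1]$, for which $H(\theta) = 2\theta - 1$ is regular) implements the allocation and the threshold payment of steps 1 to 4; in this symmetric case it reduces to a second price auction with reserve $0.5$:

```python
def myerson_uniform_auction(theta):
    # Symmetric bidders with theta_i ~ U[0,1]: H(t) = 2t - 1 is increasing,
    # so the regularity condition holds. The winner is the bidder with the
    # highest (positive) virtual value; the payment is the smallest bid that
    # would still win, i.e. max(reserve 0.5, second-highest bid).
    H = [2 * t - 1 for t in theta]
    i = max(range(len(theta)), key=lambda j: H[j])
    if H[i] <= 0:
        return None, 0.0      # no sale: every virtual value is non-positive
    second = max((t for j, t in enumerate(theta) if j != i), default=0.0)
    return i, max(0.5, second)
```

For bids $(0.9, 0.7, 0.3)$, bidder 0 wins and pays $0.7$; for $(0.6, 0.2)$, bidder 0 wins and pays the reserve $0.5$; for $(0.4, 0.3)$, the object is not sold.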
Several attempts have addressed this problem, albeit under some restrictive assumptions [11, 9, 7]. It is assumed, for example, that even though the seller is selling multiple units (or even multiple objects), the type information of the entities is still one dimensional [3, 4, 18]. Researchers have also worked on extending Myerson's work to an optimal auction for multiple objects, where the private information may not be single dimensional. Armstrong [1] solved this problem for the two object case, when the type sets are binary, by enumerating all incentive compatibility conditions. Recently, Ledyard [10] characterized an optimal multi-object, single unit auction when bidders are single minded.

3 Optimal Multi-Unit Combinatorial Procurement Auction

We start this section with an example illustrating that in a multi-unit, multi-item procurement auction, the suppliers may have an incentive to misreport their costs.

Example 3.1. Suppose the buyer has a requirement for 1000 units. Also, suppose that there are four suppliers with $(c_i, q_i)$ values of $S_1: (10, 500)$, $S_2: (8, 500)$, $S_3: (12, 800)$, and $S_4: (6, 500)$. Suppose the buyer conducts the classic $k$-th price auction, where the payment to a supplier equals the cost of the first losing supplier. In this case, the sellers are able to do better by misreporting their types. To see this, suppose all suppliers truthfully bid both their costs and quantities. The allocation would then be $S_1: 0$, $S_2: 500$, $S_3: 0$, $S_4: 500$, which minimizes the total payment; under this allocation the payment to $S_4$ would be $10 \times 500 = 5000$ currency units. However, if $S_4$ bids his quantity as 490, the allocation changes to $S_1: 10$, $S_2: 500$, $S_3: 0$, $S_4: 490$, giving him a payment of $12 \times 490 = 5880$ currency units; thus incentive compatibility does not hold.
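Example 3.1 can be verified mechanically. The sketch below is our reading of the $k$-th price rule described in the example: allocate greedily in increasing order of reported cost, and pay every winner, per unit, the reported cost of the first fully losing supplier:

```python
def kth_price_auction(demand, bids):
    # bids: list of (reported_cost, reported_capacity) pairs.
    # Allocate greedily by increasing reported cost until demand is met;
    # the first supplier left with no allocation sets the per-unit price.
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])
    alloc, left, price = [0] * len(bids), demand, None
    for i in order:
        if left == 0:
            price = bids[i][0]   # first fully losing supplier sets the price
            break
        take = min(bids[i][1], left)
        alloc[i] = take
        left -= take
    return alloc, price

# Truthful reports: S1 (10,500), S2 (8,500), S3 (12,800), S4 (6,500)
truthful = [(10, 500), (8, 500), (12, 800), (6, 500)]
alloc, price = kth_price_auction(1000, truthful)
payment_s4_truthful = price * alloc[3]        # 10 * 500 = 5000

# S4 under-reports its capacity as 490
deviation = [(10, 500), (8, 500), (12, 800), (6, 490)]
alloc2, price2 = kth_price_auction(1000, deviation)
payment_s4_deviating = price2 * alloc2[3]     # 12 * 490 = 5880
```

$S_4$'s profit rises from $5000 - 6 \cdot 500 = 2000$ to $5880 - 6 \cdot 490 = 2940$ by under-reporting his capacity, reproducing the deviation in the example.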
Thus it is evident that such uniform price mechanisms are not applicable when both the unit cost and the maximum quantity are private information. The intuitive explanation is that by under-reporting their capacities, the suppliers create an artificial scarcity of resources in the system. Such fictitious shortages force the buyer to overpay for the use of the virtually limited resources.

We also make another observation here. Suppose seller 4 bids (6, 600). Then the buyer will order 600 units from him at a cost of 10 per unit. His capacity being 500, he would not be able to supply the remaining 100 units. If he bids (6, 1000), he will be paid only 8 per unit and the buyer will order 1000 units from him. This indicates that our assumption that a seller will not inflate his capacity is quite natural.

We are interested in designing an optimal mechanism for a buyer that satisfies Bayesian incentive compatibility (BIC) and individual rationality (IR). BIC means that bidding truthfully is a best response for each seller when all other sellers bid truthfully. IR means that each seller obtains a non-negative payoff by participating in the mechanism. More formally (see Table 1 for notation), $\forall i \in N$ and $\forall \theta_i \in [\underline{c}_i, \bar{c}_i] \times [\underline{q}_i, \bar{q}_i]$,

$U_i(\theta_i, \theta_i) \geq U_i(b_i, \theta_i) \quad \forall b_i$ (BIC) (3)
$U_i(\theta_i, \theta_i) \geq 0$ (IR) (4)

The IR condition above corresponds to interim individual rationality.

3.1 Necessary and Sufficient Conditions for BIC and IR

To make the sellers report their types truthfully, the buyer has to offer them incentives. We propose the following incentive, motivated by paying a seller more than what he claims to be the total cost of producing the ordered quantity.
$\forall i \in N$, $\rho_i(b_i) = T_i(b_i) - \hat{c}_i X_i(b_i)$, where $b_i = (\hat{c}_i, \hat{q}_i)$

$\Rightarrow U_i(b_i, \theta_i) = T_i(b_i) - c_i X_i(b_i) = \rho_i(b_i) - (c_i - \hat{c}_i) X_i(b_i)$ (5)

With the above offered incentive, we now state and prove the following theorem.

Theorem 3.1. Any mechanism in the presence of single minded, capacitated sellers is BIC and IR iff
1. $\rho_i(b_i) = \rho_i(\bar{c}_i, \hat{q}_i) + \int_{\hat{c}_i}^{\bar{c}_i} X_i(t, \hat{q}_i)\,dt$
2. $\rho_i(b_i)$ is non-negative and non-decreasing in $\hat{q}_i$, $\forall \hat{c}_i \in [\underline{c}_i, \bar{c}_i]$
3. The quantity seller $i$ is asked to supply, $X_i(c_i, q_i)$, is non-increasing in $c_i$, $\forall q_i \in [\underline{q}_i, \bar{q}_i]$.

Proof. A similar theorem is presented by Kumar and Iyengar [9] for the case of multi-unit single item procurement auctions. Using the notion of a single minded bidder [10], we state and prove a result for a wider setting. To prove the necessity part of the theorem, we first observe that

$U_i(b_i, \theta_i) = U_i(\hat{c}_i, \hat{q}_i, c_i, q_i) = T_i(b_i) - c_i X_i(b_i)$

and BIC $\Rightarrow U_i(\hat{c}_i, \hat{q}_i, c_i, q_i) \leq U_i(c_i, q_i, c_i, q_i)$, $\forall (\hat{c}_i, \hat{q}_i)$ and $(c_i, q_i) \in \Theta_i$. In particular,

$U_i(\hat{c}_i, q_i, c_i, q_i) \leq U_i(c_i, q_i, c_i, q_i)$

Without loss of generality, assume $\hat{c}_i > c_i$. Rearranging terms yields

$U_i(\hat{c}_i, q_i, c_i, q_i) = U_i(\hat{c}_i, q_i, \hat{c}_i, q_i) + (\hat{c}_i - c_i) X_i(\hat{c}_i, q_i)$
$\Rightarrow \frac{U_i(\hat{c}_i, q_i, \hat{c}_i, q_i) - U_i(c_i, q_i, c_i, q_i)}{\hat{c}_i - c_i} \leq -X_i(\hat{c}_i, q_i)$

Similarly, using $U_i(c_i, q_i, \hat{c}_i, q_i) \leq U_i(\hat{c}_i, q_i, \hat{c}_i, q_i)$,

$-X_i(c_i, q_i) \leq \frac{U_i(\hat{c}_i, q_i, \hat{c}_i, q_i) - U_i(c_i, q_i, c_i, q_i)}{\hat{c}_i - c_i} \leq -X_i(\hat{c}_i, q_i).$ (6)

Taking the limit $\hat{c}_i \to c_i$, we get

$\frac{\partial U_i(c_i, q_i, c_i, q_i)}{\partial c_i} = -X_i(c_i, q_i).$ (7)

Equation (6) implies $X_i(c_i, q_i)$ is non-increasing in $c_i$. This proves statement 3 of the theorem in the forward direction.
When the seller bids truthfully, Equation (5) gives

$\rho_i(c_i, q_i) = U_i(c_i, q_i, c_i, q_i).$ (8)

For BIC, Equation (7) must hold, so

$\rho_i(c_i, q_i) = \rho_i(\bar{c}_i, q_i) + \int_{c_i}^{\bar{c}_i} X_i(t, q_i)\,dt$ (9)

This proves claim 1 of the theorem. BIC also requires

$q_i \in \arg\max_{\hat{q}_i \in [\underline{q}_i, q_i]} U_i(c_i, \hat{q}_i, c_i, q_i) \quad \forall c_i \in [\underline{c}_i, \bar{c}_i]$

(Note that $\hat{q}_i \in [\underline{q}_i, q_i]$ and not $[\underline{q}_i, \bar{q}_i]$, as it is assumed that a bidder will not over-report his capacity.) This implies that for all $c_i$, $\rho_i(c_i, q_i)$ should be non-decreasing in $q_i$. The IR conditions (Equations (4) and (8)) imply $\rho_i(c_i, q_i) \geq 0$. This proves statement 2 of the theorem. Thus, the three conditions are necessary for BIC and IR.

We now prove that they are sufficient for BIC and IR. Assume all three conditions are true. Then

$U_i(\theta_i, \theta_i) = \rho_i(c_i, q_i) \geq 0,$

so the IR property is satisfied. Further,

$U_i(b_i, \theta_i) = \rho_i(\hat{c}_i, \hat{q}_i) + (\hat{c}_i - c_i) X_i(\hat{c}_i, \hat{q}_i)$
$= \rho_i(\bar{c}_i, \hat{q}_i) + \int_{\hat{c}_i}^{\bar{c}_i} X_i(t, \hat{q}_i)\,dt + (\hat{c}_i - c_i) X_i(\hat{c}_i, \hat{q}_i)$
$= \rho_i(\bar{c}_i, \hat{q}_i) + \int_{c_i}^{\bar{c}_i} X_i(t, \hat{q}_i)\,dt - \int_{c_i}^{\hat{c}_i} X_i(t, \hat{q}_i)\,dt + (\hat{c}_i - c_i) X_i(\hat{c}_i, \hat{q}_i)$
$\leq \rho_i(c_i, \hat{q}_i)$ as $X_i$ is non-increasing in $c_i$
$\leq \rho_i(c_i, q_i) = U_i(\theta_i, \theta_i)$ as $\rho_i$ is non-decreasing in $q_i$

This proves the sufficiency of the three conditions.

3.2 Allocation and Payment Rules of the Optimal Auction

The buyer's problem is to solve

$\min E_b \sum_{i=1}^{n} t_i(b)$

subject to
1. $t_i(b) = \rho_i(b) + \hat{c}_i x_i(b)$
2. All three conditions in Theorem 3.1 hold.
3. She procures at least $D_j$ units of each item $j$.

Since expectation is a linear operator, the buyer's problem is to minimize $\sum_{i=1}^{n} E_{b_i} T_i(\hat{c}_i, \hat{q}_i)$.
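The characterization in Theorem 3.1 can be sanity-checked numerically: fix a monotone allocation $X(c, q)$, construct $\rho$ from condition 1 with $\rho(\bar{c}, q) = 0$, and confirm via Equation (5) that no cost misreport improves the seller's utility. The setup below (allocation rule, cost range, type values) is entirely hypothetical.

```python
# Numerical check of Theorem 3.1 for a single hypothetical seller:
# take an allocation X(c, q) non-increasing in c, build rho from
# condition 1 with rho(C_HI, q) = 0, and confirm that no cost
# misreport c_hat beats truthful bidding.

C_LO, C_HI = 1.0, 10.0

def X(c, q):
    # Allocation non-increasing in cost c, capped by reported capacity q.
    return min(q, 100.0 * (C_HI - c) / (C_HI - C_LO))

def rho(c_hat, q_hat, n=1000):
    # Condition 1 with rho(C_HI, q) = 0: midpoint-rule integral of X
    # from c_hat up to C_HI.
    h = (C_HI - c_hat) / n
    return sum(X(c_hat + (k + 0.5) * h, q_hat) for k in range(n)) * h

def utility(c_hat, q_hat, c_true):
    # Equation (5): U = rho(b) - (c_true - c_hat) * X(b)
    return rho(c_hat, q_hat) - (c_true - c_hat) * X(c_hat, q_hat)

c_true, q_true = 4.0, 60.0
u_truth = utility(c_true, q_true, c_true)
for c_hat in [2.0, 3.0, 5.0, 7.0]:
    # Truthful bidding is (weakly) optimal, up to integration error.
    assert utility(c_hat, q_true, c_true) <= u_truth + 1e-3
```

Note that under-reporting the cost (e.g. $\hat{c} = 2$) leaves utility unchanged here because the capacity cap binds, while over-reporting ($\hat{c} = 5, 7$) strictly hurts — exactly the envelope structure the proof exploits.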
Condition 1 of the theorem has to hold, which implies that the $i$th term in the summation is

$\int_{\underline{q}_i}^{\bar{q}_i} \int_{\underline{c}_i}^{\bar{c}_i} \left( c_i X_i(c_i, q_i) + \rho_i(\bar{c}_i, q_i) + \int_{c_i}^{\bar{c}_i} X_i(t, q_i)\,dt \right) f_i(c_i, q_i)\,dc_i\,dq_i$

However,

$\int_{\underline{c}_i}^{\bar{c}_i} \left( \int_{c_i}^{\bar{c}_i} X_i(t, q_i)\,dt \right) f_i(c_i, q_i)\,dc_i = \int_{\underline{c}_i}^{\bar{c}_i} X_i(c_i, q_i) F_i(c_i \mid q_i) f_i(q_i)\,dc_i$

Condition 2 of Theorem 3.1 requires $\rho_i(\bar{c}_i, q_i) \geq 0$, and the buyer wants to minimize the total payment, so she sets $\rho_i(\bar{c}_i, q_i) = 0$ for all $q_i$ and all $i$. Her problem is therefore to solve

$\min \sum_{i=1}^{n} \int_{\underline{q}_i}^{\bar{q}_i} \int_{\underline{c}_i}^{\bar{c}_i} \left( c_i + \frac{F_i(c_i \mid q_i)}{f_i(c_i \mid q_i)} \right) X_i(c_i, q_i) f_i(c_i, q_i)\,dc_i\,dq_i$

that is,

$\min \sum_{i=1}^{n} \int_{\underline{q}_i}^{\bar{q}_i} \int_{\underline{c}_i}^{\bar{c}_i} H_i(c_i, q_i) X_i(c_i, q_i) f_i(c_i, q_i)\,dc_i\,dq_i$

where $H_i(c_i, q_i)$ is the virtual cost function defined in Table 1. Define $\bar{c} = (\bar{c}_1, \ldots, \bar{c}_n)$, $c = (c_1, \ldots, c_n)$ and $\underline{c} = (\underline{c}_1, \ldots, \underline{c}_n)$; similarly define $\bar{q}$, $q$ and $\underline{q}$. Let $dc = dc_1 \cdots dc_n$, $dq = dq_1 \cdots dq_n$ and $f(c, q) = \prod_{i=1}^{n} f_i(c_i, q_i)$. Her problem now reduces to

$\min \int_{\underline{q}}^{\bar{q}} \int_{\underline{c}}^{\bar{c}} \left( \sum_{i=1}^{n} H_i(c_i, q_i)\, x_i(c_i, q_i) \right) f(c, q)\,dc\,dq$

s.t.
1. $\forall i$, $X_i(c_i, q_i)$ is non-increasing in $c_i$, $\forall q_i$.
2. The buyer's minimum requirement of each item is satisfied.

This is an optimal auction for the buyer in the presence of single minded sellers. In the next subsection, we present an optimal auction under regularity conditions.

3.3 Optimal Auction under Regularity Assumption

First, we assume that $H_i(c_i, q_i) = c_i + \frac{F_i(c_i \mid q_i)}{f_i(c_i \mid q_i)}$ is non-increasing in $q_i$ and non-decreasing in $c_i$. This is the same regularity assumption made by Kumar and Iyengar [9]. With this assumption, the buyer's optimal auction, when bidder $i$ submits bid $(c_i, q_i)$, is

$\min \sum_{i=1}^{n} x_i H_i(c_i, q_i)$

subject to
1. $0 \leq x_i \leq q_i$, where $x_i$ denotes the quantity of bundle $\bar{x}_i$ that seller $i$ has to supply.
2.
Buyer's demands are satisfied. In addition, $X_i(c_i, q_i)$ must be non-increasing in $c_i$, $\forall q_i$ and $\forall i$. After this problem has been solved, the buyer pays each seller $i$ the amount

$t_i = c_i x_i^* + \int_{c_i}^{\bar{c}_i} x_i(t, q_i)\,dt$ (10)

where $x_i^*$ is what agent $i$ has to supply after solving the above problem. We illustrate the optimal mechanism with an example.

Example 3.2. Suppose the buyer is interested in buying 100 units of {A, C, D} and 250 units of {B}. Seller 1 ($S_1$) is interested in providing $q_1 = 100$ units of bundle {A, B}; seller 2 ($S_2$), $q_2 = 100$ units of {B}; seller 3 ($S_3$), $q_3 = 150$ units of {B, C, D}; and seller 4 ($S_4$) is interested in up to $q_4 = 120$ units of {A, B, C, D}. The unit costs of the respective bundles are $c_1 = 100$, $c_2 = 50$, $c_3 = 70$ and $c_4 = 110$. Each seller submits his bid as $(c_i, q_i)$. After receiving the bids, the buyer solves

$\min\ x_1 H_1(100, 100) + x_2 H_2(50, 100) + x_3 H_3(70, 150) + x_4 H_4(110, 120)$

s.t.
$x_i \geq 0, \quad i = 1, 2, 3, 4$
$x_1 \leq 100, \quad x_2 \leq 100, \quad x_3 \leq 150, \quad x_4 \leq 120$
$x_1 + x_2 \geq 100$ (11)
$x_1 + x_2 + x_3 + x_4 \geq 250$ (12)
$x_3 + x_4 \geq 100$ (13)

Equation (11) ensures that at least 100 units of A are procured, Equation (12) that at least 250 units of B are procured, and Equation (13) that at least 100 units of C and D are procured. After solving this optimization problem, she determines the payments according to Equation (10). It can be seen that for each seller $i$, the best response is to bid truthfully irrespective of what the others bid. Thus, this mechanism enjoys the stronger property of dominant strategy incentive compatibility (DSIC), which is much stronger than BIC. This property is a direct consequence of the result proved by Mookherjee and Stefan [13], who gave monotonicity conditions for DSIC implementation of a BIC mechanism. Under these regularity assumptions, $x_i$ satisfies these conditions, so we have a DSIC mechanism.
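The optimization problem of Example 3.2 is a small linear program and can be handed to any LP solver. The sketch below uses scipy; since the example does not specify the priors $F_i$, the virtual cost values $H_i(c_i, q_i)$ are hypothetical stand-ins.

```python
# LP for Example 3.2 under the regularity assumption. The virtual costs
# H_i(c_i, q_i) depend on the priors F_i, which the example leaves
# unspecified; the numbers below are hypothetical stand-ins.
import numpy as np
from scipy.optimize import linprog

H = [120.0, 60.0, 85.0, 130.0]    # assumed virtual costs H_i(c_i, q_i)
caps = [100, 100, 150, 120]       # reported capacities q_i

# Demand constraints (11)-(13), written as A_ub @ x <= b_ub:
#   x1 + x2           >= 100   (item A)
#   x1 + x2 + x3 + x4 >= 250   (item B)
#             x3 + x4 >= 100   (items C and D)
A_ub = [[-1, -1,  0,  0],
        [-1, -1, -1, -1],
        [ 0,  0, -1, -1]]
b_ub = [-100, -250, -100]

res = linprog(H, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, q) for q in caps])
# res.x gives the allocation x_i*; payments then follow Equation (10).
```

With these stand-in virtual costs the solver allocates $x^* = (0, 100, 150, 0)$: $S_2$ and $S_3$ have the lowest virtual costs, and their capacities exactly cover constraints (11)–(13).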
In the next section, we consider XOR bidding with the unit demand case.

4 An Optimal Auction when Bidders are XOR Minded

Consider a supplier who can manufacture some of the items required by the buyer, say A, B, C, D. However, with the machinery he has, at any time he can manufacture either A, D or B, C, but no other combination simultaneously. Thus he can supply A, D as a bundle or B, C as a bundle, but not both; that is, he is interested in XOR bidding.

Definition 4.1 (XOR Minded Bidder). We say a bidder is XOR minded if he is interested in supplying either of two disjoint subsets of the items auctioned, but not both.

To simplify the analysis, in this section we restrict ourselves to the unit demand case: the buyer is interested in buying a single unit of each of the items in I, and hence there are no capacity constraints. We formally state our assumptions.
• The bidders are XOR minded.
• For each bidder, the costs of the two bundles of his interest are independent.
• The two bundles for which each seller will submit an XOR bid are known.
• The sellers can collectively supply the items required by the buyer.
• The buyer and the sellers are strategic.
• Free disposal: if the buyer procures more than one unit of an item, she can dispose of it freely.

With the above assumptions, we now discuss an extension of the current art of designing optimal combinatorial auctions to the presence of XOR minded bidders. Though we assume the bidders are XOR minded, the BIC characterization and the auction designed here work even when the bidders are a mix of single minded and XOR minded.

4.1 Notation

As $q_i = 1$ for each bidder, we drop capacity from the types and bids of all agents. Each agent reports a cost for each of the two bundles of his interest, i.e., he bids two real numbers, and we need to compute virtual costs for both bundles.
Thus, we need appropriate modifications to some of the notation used in the paper; we summarize the new notation for this section in Table 2. Each agent submits two different bids on two different bundles, and we use $j$ to refer to the bundle.

Table 2: Notation: XOR Minded Bidders
$j$ — bundle index, $j = 1$ or $2$
$B_{ij}$ — the $j$th bundle of items for which agent $i$ is bidding, $j = 1, 2$
$c_{ij}$ — true cost of production of $B_{ij}$ to seller $i$; $c_{ij} \in [\underline{c}_i, \bar{c}_i]$; $c_i = (c_{i1}, c_{i2})$
$\theta_i$ — true type, i.e., costs for $i$: $\theta_i = (c_{i1}, c_{i2})$
$b_i$ — bid of seller $i$: $b_i = (\hat{c}_{i1}, \hat{c}_{i2})$
$x_{ij} = x_{ij}(b)$ — indicator variable for whether $B_{ij}$ is to be procured from seller $i$ when the bid vector is $b$
$X_{ij}(b_i)$ — probability that $B_{ij}$ is procured from seller $i$ when he submits bid $b_i$; expectation is taken over all possible values of $b_{-i}$
$f_{ij}(c_{ij})$ — probability density function of $c_{ij}$
$F_{ij}(c_{ij})$ — cumulative distribution function of $c_{ij}$
$H_{ij}(c_{ij})$ — virtual cost function for seller $i$, bundle $B_{ij}$: $H_{ij}(c_{ij}) = c_{ij} + \frac{F_{ij}(c_{ij})}{f_{ij}(c_{ij})}$

4.2 Optimal Auctions When Bidders Are XOR Minded

We first characterize the BIC and IIR mechanisms for the setting under consideration, and then design an optimal auction in Section 4.2.2.

4.2.1 BIC and IIR: Necessary and Sufficient Conditions

The utility of agent $i$ is

$U_i(b_i, \theta_i) = -c_{i1} X_{i1} - c_{i2} X_{i2} + T_i(b_i, \theta_i)$

Using arguments similar to those in the proof of Theorem 3.1, for any mechanism in the presence of XOR minded bidders, the necessary conditions for BIC are

$\frac{\partial U(\cdot)}{\partial c_{i1}} = -X_{i1}(c_{i1}, c_{i2}), \qquad \frac{\partial U(\cdot)}{\partial c_{i2}} = -X_{i2}(c_{i1}, c_{i2})$ (14)

with $X_{ij}(c_{i1}, c_{i2})$ non-increasing in $c_{ij}$, $j = 1, 2$. We make the assumption that

$\frac{\partial X_{i1}}{\partial c_{i2}} = \frac{\partial X_{i2}}{\partial c_{i1}}$ (15)

In general, this assumption is not necessary for a mechanism to be truthful. However, if Equation (15) holds, we can solve the PDE (14) analytically.
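For concreteness, the virtual cost in Table 2 has a closed form under an assumed uniform prior: if $c_{ij} \sim U[lo, hi]$, then $F(c)/f(c) = c - lo$, so $H(c) = 2c - lo$. The numbers below are purely illustrative.

```python
# Virtual cost H_ij(c) = c + F_ij(c)/f_ij(c) from Table 2, evaluated
# under an assumed uniform prior c_ij ~ U[lo, hi]:
# F(c) = (c - lo)/(hi - lo), f(c) = 1/(hi - lo), so F(c)/f(c) = c - lo.

def H_uniform(c, lo, hi):
    assert lo <= c <= hi
    return c + (c - lo)          # = 2c - lo, increasing in c (regular)

# A hypothetical seller with bundle costs (c_i1, c_i2) = (40, 55) and
# priors U[30, 80] enters the buyer's objective with virtual costs:
h1 = H_uniform(40.0, 30.0, 80.0)   # 50.0
h2 = H_uniform(55.0, 30.0, 80.0)   # 80.0
```

Note that this $H$ is strictly increasing in $c$, so the uniform prior automatically satisfies the regularity assumption used in the next subsection.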
We can now state the following theorem.

Theorem 4.1. Under assumption (15), a necessary and sufficient condition for a mechanism to be BIC and IIR in the presence of XOR minded bidders is:
1. $T_i(\cdot) = c_{i1} X_{i1} + c_{i2} X_{i2} + \int_{(c_{i1}, c_{i2})}^{(\bar{c}_i, \bar{c}_i)} \nabla U_i(\cdot)\, d\theta_i$
2. $U_i(\bar{c}_i, \bar{c}_i) \geq 0$.

4.2.2 Optimal Auction with Regularity Assumption

Suppose we assume that $H_{ij}$ is non-decreasing in $c_{ij}$ for each $i, j$; this is the same regularity assumption as Myerson's [14]. Following the same treatment of the buyer's problem as in Section 3.3 reduces it to:

$\min \sum_{i=1}^{n} \sum_{j=1}^{2} x_{ij} H_{ij}(c_{ij})$

subject to
1. $x_{ij} \in \{0, 1\}$, where $x_{ij}$ indicates whether supplier $i$ supplies his $j$th bundle or not.
2. $x_{i1} + x_{i2} \leq 1$ (XOR minded bidder).
3. All the items are procured. (16)

Now we show that at the optimal allocation, assumption (15) holds. For an agent $i$, fix $\theta_{-i}$ and consider the square of his types $[\underline{c}_i, \bar{c}_i] \times [\underline{c}_i, \bar{c}_i]$. When he bids $b_i = (\bar{c}_i, \bar{c}_i)$, he does not win any item. If he decreases his bid on $c_{ij}$, he wins the bundle $B_{ij}$ at some lower bid, and at any lower bid for $B_{ij}$ he continues to win. Also, being XOR minded, he cannot win both bundles. Thus the square of the type set can be partitioned into three regions, $R_1$, $R_2$ and $R_3$, as shown in Figure 1. When his type is in region $R_j$, he is asked to supply $B_{ij}$, $j = 1, 2$; when it is in $R_3$, he is not among the winning agents. Except on the boundary between $R_1$ and $R_2$, assumption (15) holds. Hence, though we do not use (15) as a necessary condition, it is satisfied by the optimization problem (16). Thus OCAX is an optimal combinatorial auction for the buyer in the presence of XOR minded bidders.

Figure 1: X-OR Bidding

4.2.3 The Case when the Regularity Assumption is not Satisfied

Though we do not solve the buyer's problem of optimal mechanism design without the regularity assumption, we offer some thoughts on it.
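The winner determination problem (16) is a small integer program; for a handful of XOR minded sellers it can be solved by brute force over the per-seller choices. The bundles and virtual cost values below are hypothetical.

```python
# Brute-force sketch of the winner determination problem (16) for XOR
# minded bidders: pick at most one bundle per seller, cover all items,
# and minimize total virtual cost. Bundles and H values are hypothetical.
from itertools import product

items = {"A", "B", "C", "D"}
# (bundle_1, H_1, bundle_2, H_2) per seller; XOR: choose j=0, j=1 or none.
sellers = [({"A", "D"}, 90.0, {"B", "C"}, 80.0),
           ({"A", "B"}, 70.0, {"C", "D"}, 95.0),
           ({"B"},      30.0, {"D"},      40.0)]

best_cost, best_choice = float("inf"), None
for choice in product([None, 0, 1], repeat=len(sellers)):  # per-seller pick
    covered, cost = set(), 0.0
    for s, j in zip(sellers, choice):
        if j is not None:                 # x_{i1} + x_{i2} <= 1 by design
            covered |= s[2 * j]
            cost += s[2 * j + 1]
    if covered >= items and cost < best_cost:  # all items procured
        best_cost, best_choice = cost, choice
```

Here the minimum-virtual-cost covering choice is seller 1's second bundle {B, C}, seller 2's first bundle {A, B}, and seller 3's second bundle {D}, at total virtual cost 190; free disposal absorbs the duplicate unit of B.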
If we could assume (15), we could design an optimal auction very similar to OCAS in the presence of XOR minded bidders. The challenge is that we can neither use (15) as a necessary condition nor simply assume it. However, it may happen that condition (15) holds in an optimal auction. We are still working on this." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file