Self-Rewarding Language Models (2024)

Weizhe Yuan1,2  Richard Yuanzhe Pang1,2  Kyunghyun Cho2
Xian Li1  Sainbayar Sukhbaatar1  Jing Xu1  Jason Weston1,2

1 Meta 2 NYU

Abstract

We posit that to achieve superhuman agents, future models require superhuman feedback in order to provide an adequate training signal. Current approaches commonly train reward models from human preferences, which may be bottlenecked by human performance level; moreover, these separate frozen reward models cannot learn to improve during LLM training. In this work, we study Self-Rewarding Language Models, where the language model itself is used via LLM-as-a-Judge prompting to provide its own rewards during training. We show that during Iterative DPO training, not only does instruction following ability improve, but also the ability to provide high-quality rewards to itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a model that outperforms many existing systems on the AlpacaEval 2.0 leaderboard, including Claude 2, Gemini Pro, and GPT-4 0613. While there is much left still to explore, this work opens the door to the possibility of models that can continually improve in both axes.

1 Introduction

Aligning Large Language Models (LLMs) using human preference data can vastly improve the instruction following performance of pretrained models (Ouyang et al., 2022; Bai et al., 2022a). The standard approach of Reinforcement Learning from Human Feedback (RLHF) learns a reward model from these human preferences. The reward model is then frozen and used to train the LLM using RL, e.g., via PPO (Schulman et al., 2017). A recent alternative is to avoid training the reward model at all, and directly use human preferences to train the LLM, as in Direct Preference Optimization (DPO; Rafailov et al., 2023). In both cases, the approach is bottlenecked by the size and quality of the human preference data, and in the case of RLHF the quality of the frozen reward model trained from them as well.

In this work, we instead propose to train a self-improving reward model that, rather than being frozen, is continually updated during LLM alignment, in order to avoid this bottleneck. The key to such an approach is to develop an agent that possesses all the abilities desired during training, rather than separating them out into distinct models such as a reward model and a language model. In the same way that pretraining and multitask training of instruction following tasks allow task transfer by training on many tasks at once (Collobert and Weston, 2008; Radford et al., 2019; Ouyang et al., 2022), incorporating the reward model into that same system allows task transfer between the reward modeling task and the instruction following tasks.

We thus introduce Self-Rewarding Language Models, that both (i) act as instruction following models generating responses for given prompts; and (ii) can generate and evaluate new instruction following examples to add to their own training set. We train these models using an Iterative DPO framework similar to that recently introduced in Xu et al. (2023). Starting from a seed model, in each iteration there is a process of Self-Instruction creation whereby candidate responses are generated by the model for newly created prompts, and are then assigned rewards by that same model. The latter is implemented via LLM-as-a-Judge prompting, which can also be seen as an instruction following task. A preference dataset is built from the generated data, and the next iteration of the model is trained via DPO, see Figure 1.

In our experiments, we start with a Llama 2 70B (Touvron et al., 2023) seed model fine-tuned on Open Assistant (Köpf et al., 2023), and then perform the above training scheme. We find that not only does the instruction following performance improve from Self-Rewarding LLM alignment compared to the baseline seed model, but importantly the reward modeling ability, which is no longer fixed, improves as well. This means that the model during iterative training is able, at a given iteration, to provide a higher quality preference dataset to itself than in the previous iteration. While this effect likely saturates in real-world settings, it provides the intriguing possibility of obtaining reward models (and hence LLMs) that are superior to ones that could have been trained from the original human-authored seed data alone.

2 Self-Rewarding Language Models

Our approach first assumes access to a base pretrained language model, and a small amount of human-annotated seed data. We then build a model that aims to possess two skills simultaneously:

  1. Instruction following: given a prompt that describes a user request, the ability to generate a high quality, helpful (and harmless) response.

  2. Self-Instruction creation: the ability to generate and evaluate new instruction-following examples to add to its own training set.

These skills are used so that the model can perform self-alignment, i.e., they are the components used to iteratively train itself using AI Feedback (AIF).

Self-instruction creation consists of generating candidate responses and then the model itself judging their quality, i.e., it acts as its own reward model, replacing the need for an external one. This is implemented via the LLM-as-a-Judge mechanism (Zheng et al., 2023b), i.e., by formulating the evaluation of responses as an instruction following task. This self-created AIF preference data is used as a training set.

[Figure 1: Overview of the Self-Rewarding Language Models approach.]

Our overall self-alignment procedure is an iterative one, which proceeds by building a series of such models, with the aim that each improves over the last. Importantly, because the model can both improve its generation ability, and act as its own reward model through the same generation mechanism, this means the reward model itself can improve through these iterations, deviating from standard practices where the reward model is fixed (Ouyang et al., 2022). We believe this can increase the ceiling of the potential for self-improvement of these learning models going forward, removing a constraining bottleneck.

We describe these steps in more detail below. An overview of the approach is illustrated in Figure 1.

2.1 Initialization

Seed instruction following data

We are given a seed set of human-authored (instruction prompt, response) general instruction following examples that we use for training in a supervised fine-tuning (SFT) manner, starting from a pretrained base language model. Subsequently this will be referred to as Instruction Fine-Tuning (IFT) data.

Seed LLM-as-a-Judge instruction following data

We also assume we are provided a seed set of (evaluation instruction prompt, evaluation result response) examples which can also be used for training. While this is not strictly necessary, as the model using IFT data will already be capable of training an LLM-as-a-Judge, we show that such training data can give improved performance (see Appendix A.3 for supporting results). In this data, the input prompt asks the model to evaluate the quality of a given response to a particular instruction. The provided evaluation result response consists of chain-of-thought reasoning (a justification), followed by a final score (in our experiments out of 5). The exact prompt format we chose is given in Figure 2, which instructs the LLM to evaluate the response using five additive criteria (relevance, coverage, usefulness, clarity and expertise), covering various aspects of quality. Subsequently this will be referred to as Evaluation Fine-Tuning (EFT) data.
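To make the judge format concrete, the snippet below is an illustrative sketch of an additive five-point evaluation prompt of this kind. The exact wording we use is the prompt in Figure 2; only the five criteria names above are taken from that description, and the rest of the template is an assumption for illustration.

```python
# Hypothetical sketch of an additive 5-point LLM-as-a-Judge prompt.
# The five criteria names come from the text above; the template wording
# itself is illustrative, not the exact prompt of Figure 2.
JUDGE_TEMPLATE = """Review the user's instruction and the candidate response.
Award points additively, up to a maximum of 5:
- 1 point if the response is relevant to the instruction.
- 1 point if it covers a substantial portion of what was asked.
- 1 point if it answers the core question in a useful way.
- 1 point if it is clearly written and well organized.
- 1 point if it reflects expert knowledge with no extraneous content.

Instruction:
{instruction}

Response:
{response}

After a brief justification, end with a line of the form "Score: <0-5>".
"""

def build_judge_prompt(instruction: str, response: str) -> str:
    """Fill in the judge template for one (instruction, response) pair."""
    return JUDGE_TEMPLATE.format(instruction=instruction, response=response)
```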

We use both these seed sets together during training.

2.2 Self-Instruction Creation

Using the model we have trained, we can make it self-modify its own training set. Specifically, we generate additional training data for the next iteration of training.

This consists of the following steps:

  1. Generate a new prompt: We generate a new prompt $x_i$ using few-shot prompting, sampling prompts from the original seed IFT data, following the approach of Wang et al. (2023) and Honovich et al. (2023). (In our main experiments, responses and rewards, items (2) and (3), are generated by the model we have trained, but prompt generation is done by a model fixed in advance. However, we show in Appendix A.5 that prompts can also be generated by the newly trained model in each iteration.)

  2. Generate candidate responses: We then generate $N$ diverse candidate responses $\{y_i^1, \ldots, y_i^N\}$ for the given prompt $x_i$ from our model using sampling.

  3. Evaluate candidate responses: Finally, we use the LLM-as-a-Judge ability of our same model to evaluate its own candidate responses with scores $r_i^n \in [0, 5]$ (exact prompt given in Figure 2). A minimal code sketch of this loop is given after the list.
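The sketch below walks through these three steps. The callables `generate_prompt`, `generate_response`, and `judge_score` stand in for calls to the fixed prompt-generation model and to the model being trained; they are assumptions about the interface rather than our actual implementation.

```python
import random

def self_instruction_creation(generate_prompt, generate_response, judge_score,
                              seed_prompts, n_candidates=4, n_judge_samples=3):
    """One pass of self-instruction creation (steps 1-3 above), as a sketch."""
    # Step 1: few-shot prompt a fixed model with demonstrations sampled
    # from the seed IFT prompts to obtain a new instruction prompt x_i.
    demos = random.sample(seed_prompts, k=8)
    x_i = generate_prompt(demos, temperature=0.6, top_p=0.9)

    # Step 2: sample N diverse candidate responses from the model being trained.
    candidates = [generate_response(x_i, temperature=0.7, top_p=0.9)
                  for _ in range(n_candidates)]

    # Step 3: score each candidate with the same model via LLM-as-a-Judge,
    # averaging several sampled judgments to reduce variance.
    scores = [
        sum(judge_score(x_i, y) for _ in range(n_judge_samples)) / n_judge_samples
        for y in candidates
    ]
    return x_i, candidates, scores
```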

2.3 Instruction Following Training

As previously described, training is initially performed with the seed IFT and EFT data (Section 2.1). This is then augmented with additional data via AI (Self-)Feedback.

AI Feedback Training

After performing the self-instruction creation procedure, we can augment the seed data with additional examples for training, which we refer to as AI Feedback Training (AIFT) data.

To do this, we construct preference pairs, which are training data of the form (instruction prompt $x_i$, winning response $y_i^w$, losing response $y_i^l$). To form the winning and losing pair we take the highest and lowest scoring responses from the $N$ evaluated candidate responses (see Section 2.2), following Xu et al. (2023), discarding the pair if their scores are the same. These pairs can be used for training with a preference tuning algorithm. We use DPO (Rafailov et al., 2023).
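A minimal sketch of this pair construction, given the scored candidates from the self-instruction creation step, is:

```python
def build_preference_pair(prompt, candidates, scores):
    """Form (prompt, winning response, losing response) from scored candidates,
    taking the highest- and lowest-scoring responses and discarding ties."""
    best = max(range(len(candidates)), key=lambda i: scores[i])
    worst = min(range(len(candidates)), key=lambda i: scores[i])
    if scores[best] == scores[worst]:
        return None  # highest and lowest scores tie: no usable pair
    return {"prompt": prompt,
            "chosen": candidates[best],
            "rejected": candidates[worst]}
```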

2.4 Overall Self-Alignment Algorithm

Iterative Training

Our overall procedure trains a series of models $M_1, \dots, M_T$ where each successive model $t$ uses augmented training data created by the $(t-1)$-th model. We thus define AIFT($M_t$) to mean AI Feedback Training data created using model $M_t$.

Model Sequence

We define the models, and the training data they use as follows:

  • $M_0$: Base pretrained LLM with no fine-tuning.

  • $M_1$: Initialized with $M_0$, then fine-tuned on the IFT+EFT seed data using SFT.

  • $M_2$: Initialized with $M_1$, then trained with AIFT($M_1$) data using DPO.

  • $M_3$: Initialized with $M_2$, then trained with AIFT($M_2$) data using DPO.

This iterative training resembles the procedure used in Pairwise Cringe Optimization and specifically is termed Iterative DPO, introduced in Xu et al. (2023); however, an external fixed reward model was used in that work.
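Putting the pieces together, the driver below sketches the overall schedule for the model sequence above; `sft_train`, `dpo_train`, and `build_aift_data` are assumed stand-ins for the corresponding training and data-creation routines, not our actual code.

```python
def self_rewarding_training(base_model, ift_data, eft_data,
                            sft_train, dpo_train, build_aift_data,
                            num_iterations=3):
    """Iterative DPO with a self-improving reward model (sketch)."""
    # M1: supervised fine-tuning of the base model on the IFT + EFT seed data.
    model = sft_train(base_model, ift_data + eft_data)
    for _ in range(2, num_iterations + 1):
        # AIFT(M_t): preference pairs generated and scored by the current model.
        aift_data = build_aift_data(model)
        # M_{t+1}: DPO training on the self-generated preference pairs.
        model = dpo_train(model, aift_data)
    return model
```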

3 Experiments

3.1 Experimental Setup

Base Model

In our experiments we use Llama 2 70B (Touvron etal., 2023) as our base pretrained model.

3.1.1 Seed Training Data

IFT Seed Data

We use the human-authored examples provided in the Open Assistant dataset (Köpf et al., 2023) for instruction fine-tuning. Following Li et al. (2024) we use 3,200 examples, by sampling only first conversational turns in the English language that are high-quality, based on their human annotated rank (choosing only the highest rank 0). In our experiments, we compare to a model fine-tuned from the base model using only this data via supervised fine-tuning, and refer to it as our SFT baseline.

EFT Seed Data

The Open Assistant data also provides multiple ranked human responses per prompt from which we can construct evaluation fine-tuning data. We split this into train and evaluation sets, and use it to create LLM-as-a-Judge data. This is done by placing it in the input format given in Figure 2, which consists of the scoring criteria description, and the given instruction and response to be evaluated. (Note that the prompt, derived from Li et al. (2024), mentions "utilizing web search", but our model is not actually capable of this action.) For training targets, chain-of-thought justifications and final scores out of 5 are not directly provided, so we use the SFT baseline to generate such output evaluations for each input, and accept them into the training set if the ranking of their scores agrees with the human rankings in the dataset. We resample the training set by discarding some of the data that receives the most common score so that the scores are not too skewed, as we observe many samples receive a score of 4. This results in 1,630 train and 531 evaluation examples (which do not overlap with the IFT data).
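One plausible reading of this acceptance rule is sketched below: an instruction's generated evaluations are kept only if ordering its responses by the model's scores reproduces the human ranking. The tie handling here is an assumption, not necessarily the exact rule used above.

```python
def accept_eft_evaluations(model_scores, human_ranks):
    """Return True if the model's score ordering agrees with the human ranking
    (rank 0 = best) for one instruction's responses. Sketch only."""
    by_model = sorted(range(len(model_scores)), key=lambda i: -model_scores[i])
    by_human = sorted(range(len(human_ranks)), key=lambda i: human_ranks[i])
    return by_model == by_human
```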

3.1.2 Evaluation Metrics

We evaluate the performance of our self-rewarding models along two axes: their ability to follow instructions, and their ability as a reward model (ability to evaluate responses).

Instruction Following

We evaluate head-to-head performance between various models using GPT-4 (Achiam et al., 2023) as an evaluator over 256 test prompts (which we refer to as IFT test data) derived from various sources following Li et al. (2024), using the AlpacaEval evaluation prompt (Li et al., 2023). We try the prompt in both orders comparing pairwise, and if the GPT-4 evaluations disagree we count the result as a tie. We also perform a similar evaluation with humans (authors). We additionally report results in the AlpacaEval 2.0 leaderboard format, which is evaluated over 805 prompts, and compute the win rate against the baseline GPT-4 Turbo model based on GPT-4 judgments. Further, we report results on MT-Bench (Zheng et al., 2023b), a set of challenging multi-turn questions in various categories from math and coding to roleplay and writing, which uses GPT-4 to grade the model responses out of 10. Finally, we also test the models on a set of 9 NLP benchmarks: ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), HellaSwag (Zellers et al., 2019), SIQA (Sap et al., 2019), PIQA (Bisk et al., 2020), GSM8K (Cobbe et al., 2021), MMLU (Hendrycks et al., 2021), OBQA (Mihaylov et al., 2018) and NQ (Kwiatkowski et al., 2019).

Reward Modeling

We evaluate the correlation with human rankings on the evaluation set we derived from the Open Assistant dataset, as described in Section 3.1.1. Each instruction has on average 2.85 responses with given rankings. We can thus measure the pairwise accuracy, which is how many times the order of the ranking between any given pair agrees between the model's evaluation and the human ranking. We also measure the exact match count, which is how often the total ordering is exactly the same for an instruction. We also report the Spearman correlation and Kendall's $\tau$. Finally, we report how often the responses that the model scores a perfect 5 out of 5 are rated as the highest ranked by humans.
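A sketch of these reward modeling metrics for a single instruction's responses is given below, using scipy for the correlation measures. Counting ties as disagreements in the pairwise accuracy is one possible convention, not necessarily the one used to produce the reported numbers.

```python
from itertools import combinations
from scipy.stats import spearmanr, kendalltau

def reward_modeling_metrics(model_scores, human_ranks):
    """Compare model scores against human ranks (rank 0 = best) for one instruction."""
    pairs = list(combinations(range(len(model_scores)), 2))
    agree = sum(
        1 for i, j in pairs
        if (model_scores[i] - model_scores[j]) * (human_ranks[j] - human_ranks[i]) > 0
    )
    pairwise_acc = agree / len(pairs)

    # Exact match: the model's total ordering reproduces the human ordering.
    exact = sorted(range(len(model_scores)), key=lambda i: -model_scores[i]) == \
            sorted(range(len(human_ranks)), key=lambda i: human_ranks[i])

    # Negate ranks so both sequences increase with quality before correlating.
    quality = [-r for r in human_ranks]
    spearman, _ = spearmanr(model_scores, quality)
    tau, _ = kendalltau(model_scores, quality)
    return pairwise_acc, exact, spearman, tau
```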

3.1.3 Training Details

Instruction following training

The training hyperparameters we use are as follows. For SFT we use learning rate 5.5e-6, which decays (cosine) to 1.1e-6 at the end of training, batch size 16 and dropout 0.1. We only calculate the loss on target tokens instead of the full sequence. For DPO we use learning rate 1e-6, which decays to 1e-7, batch size 16, dropout 0.1, and a $\beta$ value of 0.1. We perform early stopping by saving a checkpoint every 200 steps and evaluating generations using Claude 2 (Anthropic, 2023) on 253 validation examples derived from various sources following Li et al. (2024). This is evaluated pairwise against the previous step's generations using the AlpacaEval evaluation prompt format (Li et al., 2023).
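For reference, a minimal sketch of the standard DPO objective (Rafailov et al., 2023) with $\beta = 0.1$, as used here, is shown below; the inputs are per-example sequence log-probabilities summed over target tokens.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs (1-D tensors of log-probs)."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between the implicit rewards of chosen and rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```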

Self-Instruction creation

To generate new prompts we use a fixed model, Llama 2-Chat 70B with 8-shot prompting following Self-Instruct (Wang et al., 2023), where we sample six demonstrations from the IFT data and two from the model-generated data, and use decoding parameters $T = 0.6$, $p = 0.9$. We use their prompt template for non-classification tasks and apply the same filtering techniques, including the ROUGE-L (Lin, 2004) similarity check, keyword filtering, and length filtering. Except for the prompt generation part, the other parts of the creation pipeline (generating the response, and evaluating it) use the Self-Rewarding model being trained. For candidate response generation we sample $N = 4$ candidate responses with temperature $T = 0.7$, $p = 0.9$. When evaluating candidate responses, as there is variance to these scores, in our experiments we also use sampled decoding (with the same parameters) and generate these evaluations multiple (3) times and take the average. We added 3,964 such preference pairs to form the AIFT($M_1$) dataset used to train $M_2$ via DPO, and 6,942 pairs to form AIFT($M_2$) used to train $M_3$.
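Extracting and averaging the sampled judge scores can be done with simple parsing, as sketched below; the "Score: X" output format is an assumption about how the final score is surfaced, following the style of the Figure 2 prompt.

```python
import re

def parse_score(judge_output: str):
    """Pull the final 0-5 score out of a judge generation, if present."""
    match = re.search(r"[Ss]core:\s*([0-5](?:\.\d+)?)", judge_output)
    return float(match.group(1)) if match else None

def average_judge_score(judge_outputs):
    """Average scores over several sampled judgments, ignoring parse failures."""
    scores = [s for s in map(parse_score, judge_outputs) if s is not None]
    return sum(scores) / len(scores) if scores else None
```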

3.2 Results

3.2.1 Instruction Following Ability

Head-to-head performance results are provided in Figure 3.

EFT+IFT seed training performs similarly to IFT alone

We find that adding the Evaluation Fine-Tuning (EFT) task to training does not impact instruction following performance compared to using Instruction Fine-Tuning (IFT) data alone, with an almost equal head-to-head result (30.5% wins vs. 30.9% wins). This is a positive result because it means the increased capability of a model to self-reward does not affect its other skills. We can thus use IFT+EFT training as Iteration 1 ($M_1$) of our Self-Rewarding model, and then run further iterations.

Iteration 2 ($M_2$) improves over Iteration 1 ($M_1$) and SFT Baseline

Iteration 2 of Self-Rewarding training ($M_2$) provides superior instruction following to Iteration 1 ($M_1$), with 55.5% wins for $M_2$ compared to only 11.7% for $M_1$ in a head-to-head evaluation. It provides similar gains over the SFT Baseline as well (49.2% wins vs. 14.5% wins). Clearly, there is a large jump in performance from $M_1$ to $M_2$ by using the preference data AIFT($M_1$) provided by the reward model from Iteration 1.

Iteration 3 ($M_3$) improves over Iteration 2 ($M_2$)

We see a further gain in Iteration 3 over Iteration 2, with 47.7% wins for $M_3$ compared to only 12.5% for $M_2$ in a head-to-head evaluation. Similarly, the win rate over the SFT Baseline for $M_3$ increases to 62.5% wins vs. 9.8%, i.e., winning more often than the $M_2$ model did. Overall, we see large gains from $M_2$ to $M_3$ through training using the preference data AIFT($M_2$) provided by the reward model from Iteration 2.

Self-Rewarding models perform well on AlpacaEval 2 leaderboard

We evaluate our models in the AlpacaEval 2.0 leaderboard format, with results given in Table 1. We observe the same findings as in the head-to-head evaluations: training iterations yield improved win rates, in this case over GPT-4 Turbo, from 9.94% in Iteration 1, to 15.38% in Iteration 2, to 20.44% in Iteration 3. Our Iteration 3 model outperforms many existing models in this metric, including Claude 2, Gemini Pro, and GPT-4 0613. We show some selected models from the leaderboard in the table. We note that many of those competing models contain either proprietary alignment data (which is typically large, e.g., over 1M annotations in Touvron et al. (2023)) or use targets that are distilled from stronger models. In contrast, our Self-Rewarding model starts from a small set of seed data from Open Assistant, and then generates targets and rewards from the model itself for further iterations of training.

Table 1: AlpacaEval 2.0 leaderboard results (win rate over GPT-4 Turbo, judged by GPT-4).

Model | Win Rate

Self-Rewarding 70B
  Iteration 1 ($M_1$) | 9.94%
  Iteration 2 ($M_2$) | 15.38%
  Iteration 3 ($M_3$) | 20.44%

Selected models from the leaderboard
  GPT-4 0314 | 22.07%
  Mistral Medium | 21.86%
  Claude 2 | 17.19%
  Gemini Pro | 16.85%
  GPT-4 0613 | 15.76%
  GPT 3.5 Turbo 0613 | 14.13%
  LLaMA2 Chat 70B | 13.87%
  Vicuna 33B v1.3 | 12.71%
  Humpback LLaMa2 70B | 10.12%
  Guanaco 65B | 6.86%
  Davinci001 | 2.76%
  Alpaca 7B | 2.59%
[Figure 4: AlpacaEval win rates broken down by instruction category.]
Fine-grained analysis

As described earlier, the overall performance of the model in AlpacaEval improves with each iteration of training. It would be interesting to break down the overall performance improvement to see exactly what type of tasks these improvements come from. Therefore, we cluster the instructions in the AlpacaEval test set into different groups based on three perspectives: (1) instruction category, (2) instruction complexity, and (3) expected response length. We achieve this by using GPT-4. The detailed statistics of the breakdown and the prompting techniques we used for getting this breakdown can be found in Appendix A.6. Results for the instruction category are given in Figure 4, and the other two in Appendix Figure 11. From the results we can conclude that (i) Self-Rewarding models can substantially improve the win rate in most categories, but there are some tasks for which this approach does not improve, such as mathematics and logical reasoning, indicating that our current training approach mainly allows the models to better utilize their existing knowledge. (ii) Through Self-Rewarding model training, the model's win rate increases on almost all tasks of different complexity, and especially on slightly more difficult tasks (complexity of 5, 6, 7 out of 10). (iii) The models also show a steady increase in the win rate on tasks with instructions with different expected response lengths.

Data distribution analysis

We perform a t-SNE (van der Maaten and Hinton, 2008) visualization of the IFT, EFT and AIFT($M_1$) data, shown in Appendix A.1. We find good overlap between the IFT and AIFT($M_1$) examples, which is desired, while the EFT examples lie in a different part of the embedding space, which can help explain why they would not affect IFT performance. We observe that generations from $M_1$ on AlpacaEval have an average length of 1092, for $M_2$ they are 1552, and for $M_3$ they are 2552, so the model is learning to generate longer responses, which we note may be a factor in relative performance.

[Figure 5: Human evaluation results comparing the SFT baseline to $M_1$, $M_2$, and $M_3$.]
Table 2: MT-Bench results (scored out of 10 by GPT-4).

Model | Overall Score | Math, Code & Reasoning | Humanities, Extraction, STEM, Roleplay & Writing
SFT Baseline | 6.85 | 3.93 | 8.60
$M_1$ | 6.78 | 3.83 | 8.55
$M_2$ | 7.01 | 4.05 | 8.79
$M_3$ | 7.25 | 4.17 | 9.10
Table 3: NLP benchmark results (higher is better).

Model | ARC-Challenge (↑) | HellaSwag (↑) | GSM8K (↑) | MMLU (↑) | NQ (↑)
Llama 2 | 57.40 | 85.30 | 56.80 | 68.90 | 25.30
SFT Baseline | 55.97 | 85.17 | 50.72 | 69.76 | 34.35
$M_1$ | 57.51 | 84.99 | 60.27 | 69.34 | 35.48
$M_2$ | 54.51 | 84.27 | 59.29 | 69.31 | 33.07
$M_3$ | 53.13 | 83.29 | 57.70 | 69.37 | 31.86
Human evaluation

To examine whether human judgments align with automatic evaluation results, we conduct human evaluations that compare SFT baseline generations with the generations from each iteration of Self-Rewarding training, i.e., models $M_1$, $M_2$, and $M_3$. Specifically, we randomly select 50 instructions from the IFT test set. Each instruction corresponds to three pairs of generations (i.e., baseline vs. $M_1$, baseline vs. $M_2$, baseline vs. $M_3$). For each pair of generations, we assign them to three different annotators (blind evaluation performed by the authors) to make a pairwise judgment, and take a majority vote to decide which generation is better. The human evaluation results are shown in Figure 5. We find that Self-Rewarding models from later iterations show a larger advantage over the SFT baseline model, which is consistent with GPT-4's judgments, and demonstrates the effectiveness of our iterative training procedure.

MT-Bench performance further validates these results

We report performance on MT-Bench in Table 2 for the SFT baseline and iterations of the Self-Rewarding model. We again see improvements across the iterations of training from $M_1$ to $M_3$, from 6.78 (out of 10) up to 7.25, with larger relative gains in the humanities, STEM, roleplay, writing and extraction categories, and smaller gains in the math, code and reasoning categories. We expect that the latter is due to the seed prompts we use from Open Assistant tending to underemphasize the reasoning-based tasks. We note also that these improvements are in spite of our method using and constructing prompts that only involve a single turn, given that the MT-Bench benchmark itself is a multi-turn evaluation.

Self-rewarding models did not lose ability on NLP Benchmarks

As shown in Table 3, the performance on most NLP benchmark tasks evaluated is roughly similar to the baselines, with further detailed results on more datasets given in Appendix Table 9 that follow the same pattern. We hypothesize that, given that our training data (seed data and synthetically generated data) are based on the Open Assistant prompts, which may not be especially relevant to the skills needed in the Table 3 tasks, it is expected that the task performance stays roughly similar, or may even drop. For example, in InstructGPT training (Ouyang et al., 2022) they found that "during RLHF fine-tuning, we observe performance regressions compared to GPT-3 on certain public NLP datasets", which they refer to as an "alignment tax." A clear future direction is to extend the self-rewarding paradigm to these types of tasks, by relying not only on seed prompts from Open Assistant, but also on seed prompts found in a larger variety of datasets.

3.2.2 Reward Modeling Ability

Reward modeling evaluation results are provided in Table 4.

EFT augmentation improves over SFT baseline

Firstly, we find that adding Evaluation Fine-Tuning (EFT) data into training, which gives examples to the model of how to act as an LLM-as-a-Judge, naturally improves its performance compared to training with Instruction Fine-Tuning (IFT) data alone. IFT data covers a wide range of general instruction tasks, and so does endow the SFT Baseline with the ability to evaluate responses; however, EFT data gives more examples of this specific task. We find improvements across all five metrics measured when using IFT+EFT vs. IFT alone, e.g., the pairwise accuracy agreement with humans increases from 65.1% to 78.7%.

Table 4: Reward modeling ability of the Self-Rewarding models across training iterations, measured against human rankings on the Open Assistant-derived evaluation set.

Metric | SFT Baseline (IFT) | Iter 1, $M_1$ (IFT+EFT) | Iter 2, $M_2$ (IFT+EFT+AIFT($M_1$)) | Iter 3, $M_3$ (IFT+EFT+AIFT($M_1$)+AIFT($M_2$))
Pairwise acc. (↑) | 65.1% | 78.7% | 80.4% | 81.7%
5-best % (↑) | 39.6% | 41.5% | 44.3% | 43.2%
Exact Match % (↑) | 10.1% | 13.1% | 14.3% | 14.3%
Spearman corr. (↑) | 0.253 | 0.279 | 0.331 | 0.349
Kendall $\tau$ corr. (↑) | 0.233 | 0.253 | 0.315 | 0.324
Reward Modeling ability improves with Self-Training

We find that performing a round of self-reward training improves the ability of the model at providing self-rewards for the next iteration, in addition to its improved instruction following ability. Model $M_2$ (Iteration 2) is trained using the reward model from $M_1$ (Iteration 1), but provides improved performance on all five metrics compared to $M_1$. For example, pairwise accuracy improves from 78.7% to 80.4%. Iteration 3 ($M_3$) improves several of these metrics further compared to $M_2$, for example pairwise accuracy increases from 80.4% to 81.7%. This performance gain is achieved despite there being no additional EFT data provided, and the examples created during the Self-Instruction creation loop do not tend to look like LLM-as-a-Judge training examples. We hypothesize that because the model is becoming better at general instruction following, it nevertheless also improves at the LLM-as-a-Judge task.

Importance of the LLM-as-a-Judge Prompt

In these experiments we used the LLM-as-a-Judge prompt format shown in Figure 2. In preliminary experiments we also tried various other prompts to decide the most effective one to use. For example, we tried the prompt proposed in Li et al. (2024), which also proposes a 5-point scale, but describes the options as multiple choice in a range of quality buckets, see Appendix Figure 7. In contrast, our prompt describes the points as additive, covering various aspects of quality. We find a large difference between these two prompts when using the SFT Baseline, e.g., 65.1% pairwise accuracy for ours, and only 26.6% pairwise accuracy for theirs. See Appendix A.2 for further details.

4 Related Work

Automatically improving or self-correcting large language models is becoming a major focus of research. A recent survey from Pan et al. (2023) attempts to summarize the topic. However, this is a rapidly moving area, and there are already promising new works not covered there.

Reinforcement Learning from Human Feedback (RLHF)

Preference learning approaches such as in Ziegler et al. (2019); Stiennon et al. (2020); Ouyang et al. (2022); Bai et al. (2022a) train a fixed reward model from human preference data, and then use the reward model to train via reinforcement learning (RL), e.g., via Proximal Policy Optimization (PPO) (Schulman et al., 2017). Thus, the reward signal in a certain sense already comes from a model even in these works, but distilled from human data. Nevertheless, this is commonly referred to as RL from Human Feedback (RLHF). Methods such as Direct Preference Optimization (DPO) (Rafailov et al., 2023) avoid training the reward model entirely, and instead directly train the LLM using human preferences. Several other such competing methods exist as well (Zhao et al., 2023; Zheng et al., 2023a; Yuan et al., 2023), including Pairwise Cringe Optimization (PCO) (Xu et al., 2023). PCO uses an iterative training approach similar to the one in our work, except with a fixed reward model, and that work also showed that Iterative DPO improves over DPO using the same scheme. We note that other works have developed iterative preference training schemes as well, e.g., Adolphs et al. (2023); Gulcehre et al. (2023); Xiong et al. (2023).

Reinforcement Learning from AI Feedback (RLAIF)

Constitutional AI (Bai et al., 2022b) uses an LLM to give feedback and refine responses, and uses this data to train a reward model. This fixed, separate reward model is then used to train the language model via RL, called "RL from AI Feedback" (RLAIF). Lee et al. (2023) compare RLAIF and RLHF procedures and find the methods they compare perform roughly equally. They use an "off-the-shelf" LLM to perform LLM-as-a-Judge prompting to build a training set to train a fixed reward model, which is then used for RL training. They also experiment with using the fixed but separate LLM-as-a-Judge model directly, which the authors report is computationally expensive due to using it within PPO training (rather than the offline step in the iterative approach we use in our work, which is relatively computationally cheap). Finally, SPIN (Chen et al., 2024b) recently showed they can avoid reward models entirely in an Iterative DPO-like framework by using human labels as the winning response in a pair, and the last iteration's generations as the losing response in the pair. The authors note this has the limitation that once the model generations reach human performance, they are bottlenecked. Further, each input prompt is required to have a human annotated response, in contrast to our work.

Improving LLMs via data augmentation (and curation)

Several methods have improved LLMs by (self-)creating training data to augment fine-tuning. Self-Instruct (Wang et al., 2023) is a method for self-instruction creation of prompts and responses, which can be used to improve a base LLM. We make use of a similar technique in our work, and then use our self-reward model to score them. Several approaches have also created training data by distilling from powerful LLMs, and shown a weaker LLM can then perform well. For example, Alpaca (Taori et al., 2023) fine-tuned a Llama 7B model with text-davinci-003 instructions created in the style of self-instruct. Alpagasus (Chen et al., 2024a) employed a strong LLM-as-a-Judge (ChatGPT) to curate the Alpaca dataset and filter to a smaller set, obtaining improved results. Instruction Backtranslation (Li et al., 2024) similarly augments and curates training data, but augmenting via backtranslating from web documents to predict prompts. The curation is done by the LLM(-as-a-Judge) itself, so can be seen as an instance of a self-rewarding model, but in a specialized setting. Reinforced Self-Training (ReST) (Gulcehre et al., 2023) uses a fixed, external reward to curate new high-quality examples to iteratively add to the training set, improving performance. In our experiments, we found that adding only positive examples in a related manner did not help, whereas preference pairs did help (see Appendix Section A.4 for details).

LLM-as-a-Judge

Using LLM-as-a-Judge prompting to evaluate language models has become a standard approach (Dubois et al., 2023; Li et al., 2023; Fernandes et al., 2023; Bai et al., 2023; Saha et al., 2023), and is being used to train reward models or curate data as well, as described above (Lee et al., 2023; Chen et al., 2024a; Li et al., 2024). While some works such as Kim et al. (2023) create training data to train an LLM to perform well as a judge, to our knowledge it is not common to combine this training with general instruction following skills as in our work.

5 Conclusion

We have introduced Self-Rewarding Language Models, models capable of self-alignment via judging and training on their own generations. The method learns in an iterative manner, where in each iteration the model creates its own preference-based instruction training data. This is done by assigning rewards to its own generations via LLM-as-a-Judge prompting, and using Iterative DPO to train on the preferences. We showed that this training both improves the instruction following capability of the model, as well as its reward-modeling ability across the iterations. While there are many avenues left unexplored, we believe this is exciting because this means the model is better able to assign rewards in future iterations for improving instruction following – a kind of virtuous circle. While this improvement likely saturates in realistic scenarios, it still allows for the possibility of continual improvement beyond the human preferences that are typically used to build reward models and instruction following models today.

6 Limitations

While we have obtained promising experimental results, we currently consider them preliminary because there are many avenues yet to explore, among them the topics of further evaluation, including safety evaluation, and understanding the limits of iterative training.

We showed that the iterations of training improve both instruction following and reward modeling ability, but only ran three iterations in a single setting. A clear line of further research is to understand the “scaling laws” of this effect both for more iterations, and with different language models with more or less capabilities in different settings.

We observed an increase in length in model generations, and there is a known correlation between length and estimated quality, which is a topic that should be understood more deeply in general, and in our results in particular as well. It would also be good to understand if so-called "reward-hacking" can happen within our framework, and in what circumstances. As we are using both a language model as the training reward, and a language model for final evaluation (GPT-4) in some of our benchmarks, even if they are different models, this may require a deeper analysis than we have provided. While the human evaluation we conducted did provide validation of the automatic results, further study could bring more insights.

Another clear further avenue of study is to conduct safety evaluations – and to explore safety training within our framework. Reward models have been built exclusively for safety in existing systems (Touvron et al., 2023), and a promising avenue here would be to use the LLM-as-a-Judge procedure to evaluate for safety specifically in our self-rewarding training process. Given that we have shown that reward modeling ability improves over training iterations, this could mean that the safety of the model could potentially improve over time as well, with later iterations being able to catch and mitigate more challenging safety situations that earlier iterations cannot.

References

  • Achiam etal. (2023)Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, FlorenciaLeoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, etal.GPT-4 technical report.arXiv preprint arXiv:2303.08774, 2023.
  • Adolphs etal. (2023)Leonard Adolphs, Tianyu Gao, Jing Xu, Kurt Shuster, Sainbayar Sukhbaatar, and Jason Weston.The CRINGE loss: Learning what language not to model.In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8854–8874, Toronto, Canada, July 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.acl-long.493.URL https://aclanthology.org/2023.acl-long.493.
  • Anthropic (2023)Anthropic.Claude 2.https://www.anthropic.com/index/claude-2, 2023.
  • Bai etal. (2022a)Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, etal.Training a helpful and harmless assistant with reinforcement learning from human feedback.arXiv preprint arXiv:2204.05862, 2022a.
  • Bai etal. (2022b)Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, etal.Constitutional AI: Harmlessness from AI feedback.arXiv preprint arXiv:2212.08073, 2022b.
  • Bai etal. (2023)Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He, Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao, Haozhe Lyu, Jiayin Zhang, Juanzi Li, and Lei Hou.Benchmarking foundation models with language-model-as-an-examiner.In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.URL https://openreview.net/forum?id=IiRHQ7gvnq.
  • Bisk etal. (2020)Yonatan Bisk, Rowan Zellers, RonanLe Bras, Jianfeng Gao, and Yejin Choi.Piqa: Reasoning about physical commonsense in natural language.In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
  • Chen etal. (2024a)Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, etal.AlpaGasus: Training a better alpaca with fewer data.In The Twelfth International Conference on Learning Representations, 2024a.URL https://openreview.net/forum?id=FdVXgSJhvz.
  • Chen etal. (2024b)Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu.Self-play fine-tuning converts weak language models to strong language models.arXiv preprint arXiv:2401.01335, 2024b.
  • Clark etal. (2018)Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.Think you have solved question answering? Try ARC, the AI2 reasoning challenge.arXiv preprint arXiv:1803.05457, 2018.
  • Cobbe etal. (2021)Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman.Training verifiers to solve math word problems.arXiv preprint arXiv:2110.14168, 2021.
  • Collobert and Weston (2008)Ronan Collobert and Jason Weston.A unified architecture for natural language processing: Deep neural networks with multitask learning.In Proceedings of the 25th International Conference on Machine Learning, pages 160–167, 2008.
  • Dubois etal. (2023)Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and TatsunoriB Hashimoto.Alpacafarm: A simulation framework for methods that learn from human feedback.arXiv preprint arXiv:2305.14387, 2023.
  • Fernandes etal. (2023)Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André Martins, Graham Neubig, Ankush Garg, Jonathan Clark, Markus Freitag, and Orhan Firat.The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation.In Philipp Koehn, Barry Haddow, Tom Kocmi, and Christof Monz, editors, Proceedings of the Eighth Conference on Machine Translation, pages 1066–1083, Singapore, December 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.wmt-1.100.URL https://aclanthology.org/2023.wmt-1.100.
  • Gulcehre etal. (2023)Caglar Gulcehre, TomLe Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, etal.Reinforced self-training (rest) for language modeling.arXiv preprint arXiv:2308.08998, 2023.
  • Hendrycks etal. (2021)Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.Measuring massive multitask language understanding.In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.URL https://openreview.net/forum?id=d7KBjmI3GmQ.
  • Honovich etal. (2023)OrHonovich, Thomas Scialom, Omer Levy, and Timo Schick.Unnatural instructions: Tuning language models with (almost) no human labor.In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14409–14428, Toronto, Canada, July 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.acl-long.806.URL https://aclanthology.org/2023.acl-long.806.
  • Kim etal. (2023)Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, etal.Prometheus: Inducing fine-grained evaluation capability in language models.arXiv preprint arXiv:2310.08491, 2023.
  • Köpf etal. (2023)Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, NguyenMinh Duc, Oliver Stanley, Richárd Nagyfi, etal.OpenAssistant conversations–democratizing large language model alignment.arXiv preprint arXiv:2304.07327, 2023.
  • Kwiatkowski etal. (2019)Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, KristinaN. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov.Natural questions: a benchmark for question answering research.Transactions of the Association of Computational Linguistics, 2019.
  • Lee etal. (2023)Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi.RLAIF: Scaling reinforcement learning from human feedback with ai feedback.arXiv preprint arXiv:2309.00267, 2023.
  • Li etal. (2024)Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis.Self-alignment with instruction backtranslation.In The Twelfth International Conference on Learning Representations, 2024.URL https://openreview.net/forum?id=1oijHJBRsT.
  • Li etal. (2023)Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and TatsunoriB. Hashimoto.Alpacaeval: An automatic evaluator of instruction-following models.https://github.com/tatsu-lab/alpaca_eval, 2023.
  • Lin (2004)Chin-Yew Lin.ROUGE: A package for automatic evaluation of summaries.In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.URL https://aclanthology.org/W04-1013.
  • Mihaylov etal. (2018)Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal.Can a suit of armor conduct electricity? a new dataset for open book question answering.In EMNLP, 2018.
  • Ouyang etal. (2022)Long Ouyang, Jeffrey Wu, XuJiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, etal.Training language models to follow instructions with human feedback.Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
  • Pan etal. (2023)Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and WilliamYang Wang.Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies.arXiv preprint arXiv:2308.03188, 2023.
  • Radford etal. (2019)Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, etal.Language models are unsupervised multitask learners.OpenAI blog, 1(8):9, 2019.
  • Rafailov etal. (2023)Rafael Rafailov, Archit Sharma, Eric Mitchell, ChristopherD Manning, Stefano Ermon, and Chelsea Finn.Direct preference optimization: Your language model is secretly a reward model.In Thirty-seventh Conference on Neural Information Processing Systems, 2023.URL https://openreview.net/forum?id=HPuSIXJaa9.
  • Saha etal. (2023)Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li.Branch-solve-merge improves large language model evaluation and generation.arXiv preprint arXiv:2310.15123, 2023.
  • Sap etal. (2019)Maarten Sap, Hannah Rashkin, Derek Chen, RonanLe Bras, and Yejin Choi.Socialiqa: Commonsense reasoning about social interactions.CoRR, abs/1904.09728, 2019.URL http://arxiv.org/abs/1904.09728.
  • Schulman etal. (2017)John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.Proximal policy optimization algorithms.arXiv preprint arXiv:1707.06347, 2017.
  • Stiennon etal. (2020)Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and PaulF Christiano.Learning to summarize with human feedback.Advances in Neural Information Processing Systems, 33:3008–3021, 2020.
  • Taori etal. (2023)Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and TatsunoriB. Hashimoto.Stanford alpaca: An instruction-following llama model.https://github.com/tatsu-lab/stanford_alpaca, 2023.
  • Touvron etal. (2023)Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, etal.Llama 2: Open foundation and fine-tuned chat models.arXiv preprint arXiv:2307.09288, 2023.
  • Vander Maaten and Hinton (2008)Laurens Vander Maaten and Geoffrey Hinton.Visualizing data using t-SNE.Journal of machine learning research, 9(11), 2008.
  • Wang etal. (2023)Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, NoahA. Smith, Daniel Khashabi, and Hannaneh Hajishirzi.Self-instruct: Aligning language models with self-generated instructions.In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada, July 2023. Association for Computational Linguistics.doi: 10.18653/v1/2023.acl-long.754.URL https://aclanthology.org/2023.acl-long.754.
  • Xiong etal. (2023)Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, and Tong Zhang.Gibbs sampling from human feedback: A provable kl-constrained framework for rlhf.arXiv preprint arXiv:2312.11456, 2023.
  • Xu etal. (2023)Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, and Jason Weston.Some things are more cringe than others: Preference optimization with the pairwise cringe loss.arXiv preprint arXiv:2312.16682, 2023.
  • Yuan etal. (2023)Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang.RRHF: Rank responses to align language models with human feedback.In Thirty-seventh Conference on Neural Information Processing Systems, 2023.URL https://openreview.net/forum?id=EdIGMCHk4l.
  • Zellers etal. (2019)Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.Hellaswag: Can a machine really finish your sentence?In Anna Korhonen, DavidR. Traum, and Lluís Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics, 2019.doi: 10.18653/V1/P19-1472.URL https://doi.org/10.18653/v1/p19-1472.
  • Zhao etal. (2023)Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and PeterJ Liu.SLiC-HF: Sequence likelihood calibration with human feedback.arXiv preprint arXiv:2305.10425, 2023.
  • Zheng etal. (2023a)Chujie Zheng, Pei Ke, Zheng Zhang, and Minlie Huang.Click: Controllable text generation with sequence likelihood contrastive learning.In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 1022–1040, Toronto, Canada, July 2023a. Association for Computational Linguistics.doi: 10.18653/v1/2023.findings-acl.65.URL https://aclanthology.org/2023.findings-acl.65.
  • Zheng etal. (2023b)Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, ZiLin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, JosephE. Gonzalez, and Ion Stoica.Judging LLM-as-a-judge with MT-bench and chatbot arena.In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023b.URL https://openreview.net/forum?id=uccHPGDlao.
  • Ziegler etal. (2019)DanielM Ziegler, Nisan Stiennon, Jeffrey Wu, TomB Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving.Fine-tuning language models from human preferences.arXiv preprint arXiv:1909.08593, 2019.

Appendix A Appendix

A.1 Distributions of IFT, EFT and AIFT data

[Figure 6: t-SNE visualization of the distributions of IFT, EFT and AIFT($M_1$) instructions and responses (two panels).]

We have plotted the distribution of instructions for IFT, EFT and AIFT($M_1$) data, and the distribution of responses for IFT, EFT and AIFT($M_1$) data in Figure 6. It is clear that the IFT data and EFT data come from very different distributions, while the IFT and AIFT($M_1$) data come from similar distributions.

A.2 EFT Prompts

The EFT prompt which we use in our main experiments is shown in Figure 2.

Other EFT prompts we have tried

At first, we took the EFT prompt from Li et al. [2024] as shown in Figure 7. However, we found that this prompt was not as effective as our additive score-counting prompt because the model needed to treat the task as a multiple-choice problem, and it was difficult for the model to break down this multiple-choice problem into sub-problems involving evaluating various aspects of the response. When using the model trained on 3,200 IFT data only, its performance on the EFT test set using our additive score-counting prompt and the prompt from Li et al. [2024] is shown in Table 5.

Table 5: EFT test set performance of the SFT baseline with the multiple-choice prompt of Li et al. [2024] vs. our additive score-counting prompt.

Metric | Multiple Choice prompt | Ours
Pairwise accuracy (↑) | 26.6% | 65.1%
5-best % (↑) | 23.5% | 39.6%
Exact Match % (↑) | 1.1% | 10.1%
Spearman corr. (↑) | -0.18 | 0.25
Kendall $\tau$ corr. (↑) | -0.16 | 0.23

A.3 Self-rewarding Models Using IFT Data Only

To demonstrate the importance of the EFT data, we also trained a series of models starting with the model trained only on the IFT data. The following is the model sequence.

  • $M_0$: Base pretrained LLM with no fine-tuning.

  • $M_1'$: Initialized with $M_0$, then fine-tuned on the IFT seed data only using SFT.

  • $M_2'$: Initialized with $M_1'$, then trained with AIFT($M_1'$) data using DPO.

  • $M_3'$: Initialized with $M_2'$, then trained with AIFT($M_2'$) data using DPO.

Since we did not use EFT data to train this series of models, they were not always able to score the responses according to the format, and even when they did, the scores given typically converged to 4. Therefore, even when starting from the same number of generated new prompts, we could only collect a very small number of valid training samples for DPO. In total, we collected 541 pairs to form the AIFT($M_1'$) dataset used to train $M_2'$ via DPO, and 429 pairs to form AIFT($M_2'$) used to train $M_3'$. The win rates are shown in Figure 8. From the figure we can conclude that EFT data helps to get better performance in the same number of iterations, and the gap in performance between the model trained with EFT data and the model trained without EFT data widens in the later iterations.

[Figure 8: Head-to-head win rates over the SFT baseline for models trained without EFT data.]

[Figure 9: Prompt used to obtain instruction categories for the AlpacaEval test set (see Appendix A.6).]

Table 6: Instruction category distribution of the AlpacaEval test set.

Category | Number | Percentage
Science / Technology / Engineering | 134 | 16.65%
Professional / Business / Marketing | 77 | 9.57%
Social Interaction / Relationships / Human Behavior | 68 | 8.45%
Miscellaneous / Other | 61 | 7.58%
Mathematics / Logical Reasoning | 52 | 6.46%
Cooking / Recipes | 48 | 5.96%
Software Development / Coding / Algorithms | 44 | 5.47%
Travel / Geography / Exploration | 41 | 5.09%
Literature / Writing / Communication | 39 | 4.84%
History / Social Studies | 38 | 4.72%
Entertainment / Media Analysis | 34 | 4.22%
Language Learning / Linguistics | 32 | 3.98%
Music / Audio / Arts | 30 | 3.73%
DIY Projects / Hobbies | 24 | 2.98%
Technology / Gadgets / Consumer Products | 20 | 2.48%
Gaming / Game Development | 18 | 2.24%
Exercise / Health / Wellness | 16 | 1.99%
Philosophy / Ethics / Ideology | 15 | 1.86%
Sports / Athletics / Physical Activity | 12 | 1.49%
Strategy / Problem-Solving / Critical Thinking | 2 | 0.24%
Table 7: Instruction complexity distribution of the AlpacaEval test set.

Complexity | Number | Percentage
3 | 238 | 29.57%
2 | 206 | 25.59%
4 | 122 | 15.16%
6 | 79 | 9.81%
5 | 68 | 8.45%
7 | 41 | 5.09%
1 | 34 | 4.22%
8 | 14 | 1.74%
9 | 3 | 0.37%
Table 8: Expected response length distribution of the AlpacaEval test set.

Expected Length | Number | Percentage
1-3 sentences | 361 | 44.84%
1 paragraph | 269 | 33.42%
1 sentence | 143 | 17.76%
2 paragraphs | 31 | 3.85%
3 or more paragraphs | 1 | 0.13%
[Figures 10 and 11: prompt used to cluster the AlpacaEval instructions, and fine-grained win rates by instruction complexity and expected response length.]

A.4 Preference optimization outperforms augmenting with positive examples only

We also tried an alternative self-training procedure of adding high-quality self-instruction creation examples to supervised fine-tuning (without preference optimization), rather than DPO. In this variant, we add additional examples of (instruction prompt, response) curated by the model to the seed set for supervised fine-tuning, following other approaches [Li et al., 2024, Adolphs et al., 2023, Gulcehre et al., 2023], rather than constructing preference data. In this setup we only add examples where the candidate response was evaluated to give a perfect score of $r_i^n = 5$. Unfortunately we could not find a configuration where this approach helped. For example, adding 11,254 such examples that scored 5 out of 5, and optimizing the mixing weight in training, still yielded a head-to-head with the SFT Baseline of 29% wins vs. 30% wins, i.e., no improvement.

A.5 Augmented Prompt Generation Using Newly Trained Models

In our experiments, for time efficiency, we have created a fixed pool of augmented prompts in advance using Llama 2-Chat 70B. In a real interactive system, ideally, those prompts could come from real users so that we can ensure the models are trained to align with real user requirements. Here, we also examine whether our newly trained Self-Rewarding models in each iteration can generate new prompts through in-context learning, instead of using Llama 2-Chat 70B. To check this, we constructed 30 prompts with in-context examples using the original seed IFT data as described in Section 2.2 and tested whether $M_1$, $M_2$ and $M_3$ still possess in-context learning ability and can generate high quality instructions. According to manual inspection, all models can generate novel instructions given in-context examples in all 30 cases. However, for $M_2$ and $M_3$, the model is likely to first generate a few instructions, then generate a separator, and then start responding to the instructions, so some postprocessing might be necessary.

A.6 AlpacaEval Test Sample Clustering

We used the GPT-4 (gpt-4-1106-preview) model to categorize the instructions in the AlpacaEval test set into clusters from three perspectives: (1) instruction category, (2) instruction complexity, and (3) expected response length. To obtain instruction categories for the AlpacaEval test set, we used the prompt in Figure 9 and obtained 20 categories in total. Then, to cluster the instructions into different groups, we use the prompt in Figure 10 for each test example. The corresponding statistics are given in Table 6, Table 7, and Table 8. The fine-grained results on instruction complexity and expected response length are given in Figure 11.

Table 9: Detailed NLP benchmark results. ARC-easy, ARC-challenge, HellaSwag, SIQA and PIQA are commonsense reasoning tasks; GSM8K is a math task; MMLU, OBQA and NQ are world knowledge tasks.

Model | ARC-easy | ARC-challenge | HellaSwag | SIQA | PIQA | GSM8K (em) | MMLU (macro_avg/acc) | OBQA (acc_comp) | NQ (em)
Llama 2 | 80.20 | 57.40 | 85.30 | 50.70 | 82.80 | 56.80 | 68.90 | 60.20 | 25.30
SFT Baseline | 76.49 | 55.97 | 85.17 | 51.48 | 82.59 | 50.72 | 69.76 | 57.80 | 34.35
$M_1$ | 78.14 | 57.51 | 84.99 | 53.02 | 82.92 | 60.27 | 69.34 | 57.60 | 35.48
$M_2$ | 74.84 | 54.51 | 84.27 | 51.23 | 81.94 | 59.29 | 69.31 | 57.60 | 33.07
$M_3$ | 72.35 | 53.13 | 83.29 | 49.28 | 80.79 | 57.70 | 69.37 | 58.40 | 31.86

Table 10: Detailed MT-Bench results per category (scored out of 10 by GPT-4).

Model | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities | Overall
SFT Baseline | 8.83 | 8.15 | 5.30 | 3.00 | 3.50 | 6.90 | 9.18 | 9.95 | 6.85
$M_1$ | 9.10 | 7.65 | 4.35 | 3.05 | 4.10 | 7.20 | 8.93 | 9.85 | 6.78
$M_2$ | 9.10 | 8.00 | 4.60 | 3.30 | 4.25 | 7.65 | 9.40 | 9.80 | 7.01
$M_3$ | 9.58 | 8.73 | 4.80 | 3.50 | 4.20 | 7.80 | 9.45 | 9.95 | 7.25

A.7 NLP Benchmark Results and MT-Bench Results

We provide the detailed model performance on a number of NLP benchmarks in Table 9 and on MT-Bench in Table 10. In particular, some NLP benchmarks including ARC-Challenge, HellaSwag, SIQA, PIQA, and OBQA are all text completion tasks. In these tasks, given the multiple choice options, we choose the option corresponding to the highest log probability scored by the models as the final answer. As such, the objective of these particular tasks is quite different from what our algorithm tries to optimize, so the results on these tasks may not reflect the true capability of our models.
