---
license: mit
datasets:
- hendrydong/preference_700K
pipeline_tag: text-classification
---

# Introduction

The Generalizable Reward Model (GRM) aims to enhance the generalization ability of reward models for LLMs by regularizing their hidden states.

Paper: [Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs](https://arxiv.org/abs/2406.10216).

The introduced text-generation regularization markedly improves the accuracy of learned reward models across a variety of out-of-distribution tasks and effectively alleviates the over-optimization issue in RLHF (even with corrupted preference data), offering a more reliable and robust preference learning paradigm.

This reward model is fine-tuned from [llama3_8b_instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [hendrydong/preference_700K](https://huggingface.co/datasets/hendrydong/preference_700K) dataset. A distilled Bradley–Terry (BT) model trained on the features of this GRM is available at [Ray2333/GRM-llama3-8B-distill](https://huggingface.co/Ray2333/GRM-llama3-8B-distill).
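For intuition, the objective described above pairs a standard Bradley–Terry preference loss on the reward head with a text-generation (SFT) loss on the chosen response that regularizes the shared hidden states. The snippet below is a minimal illustrative sketch of that combined objective, not the released training code: the function name, tensor shapes, and the weight `alpha` are placeholder assumptions (see the paper for the exact formulation and variants).

```python
import torch.nn.functional as F

def grm_sftreg_loss(chosen_reward, rejected_reward,
                    lm_logits_chosen, chosen_ids, alpha=0.01):
    # Bradley-Terry preference loss on the scalar rewards of the
    # chosen / rejected responses, each of shape (batch,)
    preference_loss = -F.logsigmoid(chosen_reward - rejected_reward).mean()

    # SFT-style text-generation regularization: the retained language-modeling
    # head (sharing the backbone's hidden states) must still predict the chosen
    # response tokens; lm_logits_chosen is (batch, seq, vocab) and chosen_ids is
    # (batch, seq) with prompt positions set to -100
    sft_loss = F.cross_entropy(
        lm_logits_chosen[:, :-1].reshape(-1, lm_logits_chosen.size(-1)),
        chosen_ids[:, 1:].reshape(-1),
        ignore_index=-100,
    )
    return preference_loss + alpha * sft_loss
```

Only the preference term trains the reward head; the regularization term keeps the backbone's hidden states predictive of text, which is what the `sftreg` suffix refers to.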
## Evaluation

We evaluate GRM on the [reward model benchmark](https://huggingface.co/spaces/allenai/reward-bench), where it raises the average score of the SOTA 8B Bradley–Terry model from 84.7 to 87.0.

| Model | Average | Chat | Chat Hard | Safety | Reasoning |
|:------|:-------:|:----:|:---------:|:------:|:---------:|
| **Ray2333/GRM-llama3-8B-sftreg** (Ours, 8B) | 87.0 | 98.6 | 67.8 | 89.4 | 92.3 |
| [**Ray2333/GRM-llama3-8B-distill**](https://huggingface.co/Ray2333/GRM-llama3-8B-distill) (Ours, 8B) | 86.1 | 98.3 | 68.4 | 86.1 | 91.3 |
| openai/gpt-4-0125-preview | 85.9 | 95.3 | 74.3 | 87.2 | 86.9 |
| sfairXC/FsfairX-LLaMA3-RM-v0.1 (8B) | 84.7 | 99.4 | 65.1 | 87.8 | 86.4 |

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# load the tokenizer and the reward model (the custom GRM head needs trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Ray2333/GRM-llama3-8B-sftreg')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/GRM-llama3-8B-sftreg',
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map=0,
)

message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"},
]
message_template = tokenizer.apply_chat_template(message, tokenize=False)
# the templated string looks like this:
# "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nI'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nSorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

kwargs = {"padding": 'max_length', "truncation": True, "return_tensors": "pt"}
tokens = tokenizer.encode_plus(message_template, **kwargs)

with torch.no_grad():
    output = reward_model(
        tokens["input_ids"].to(reward_model.device),
        attention_mask=tokens["attention_mask"].to(reward_model.device),
    )
    # the reward head outputs one scalar score for the whole conversation
    reward_tensor = output.logits.reshape(-1)
    reward = reward_tensor.cpu().detach().item()
```

## Citation

If you find this model helpful for your research, please cite GRM:

```
@article{yang2024regularizing,
  title={Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs},
  author={Yang, Rui and Ding, Ruomeng and Lin, Yong and Zhang, Huan and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.10216},
  year={2024}
}
```