105 changes: 81 additions & 24 deletions applications/Chat/evaluate/README.md
@@ -12,12 +12,13 @@ pip install -r requirements.txt

## Evaluation Pipeline

The whole evaluation pipeline consists of two methods:
The whole evaluation pipeline consists of three methods:

1. `GPT Evaluation`: evaluates model predictions using GPT models.
* Compare the performance of two different models (battle).
* Rate the model according to pre-defined metrics using prompting design.
2. `Automatic Evaluation`: evaluates model predictions using automatic metrics.
3. `UniEval`: evaluates model predictions using UniEval models (English only).

### Evaluation Category

@@ -75,7 +76,9 @@ GPT evaluation uses GPT models to evaluate the prediction of different models an

GPT models evaluate the quality of model predictions based on the given prompt words and give a score between 1 and 5.

> **NOTE:** Even for the same metric, the details of its prompt words and CoT(Chain-of-Thought) can differ based on which category you want to evaluate. For example, prompt words for metric `correctness` showed here is "The answer should be in line with common sense, life experience, etc."(this is for category `brainstorming`), but for category `extraction`, prompt words can be "Answers should extract the required information accurately and should not contain any incorrect or misleading information." You can find all the prompt words and CoT(Chain-of-Thought) in `prompt/evaluation_prompt`.
> **NOTE 1:** Even for the same metric, the details of its prompt words and CoT (Chain-of-Thought) can differ based on which category you want to evaluate. For example, the prompt words for the metric `correctness` shown here are "The answer should be in line with common sense, life experience, etc." (this is for category `brainstorming`), but for category `extraction` the prompt words can be "Answers should extract the required information accurately and should not contain any incorrect or misleading information." You can find all the prompt words and CoT (Chain-of-Thought) in `prompt/evaluation_prompt`.

> **NOTE 2:** To add customized metrics, you can refer to [FAQ](#faq).
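
As a rough illustration of the rating call described above, the sketch below sends one evaluation prompt to the OpenAI chat API and reads back a single score. It is only a sketch, not the pipeline's actual code: the model name, prompt text and response handling are placeholders, the real prompts are assembled from `prompt/evaluation_prompt`, and it assumes the pre-1.0 `openai` Python package is installed with an API key configured.

```python
import openai  # assumption: pre-1.0 `openai` package, API key already set

# Placeholder prompt: the real one is built from the category's prompt words, metric and CoT steps.
evaluation_prompt = (
    "You are a good assistant. Please rate the given answer to the question below.\n"
    "Question: ...\nAnswer: ...\nMetric: ...\nEvaluation steps: ...\n"
    "Reply with a single score from 1 to 5."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4"
    messages=[{"role": "user", "content": evaluation_prompt}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])  # e.g. "4"
```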

#### Automatic Evaluation

@@ -85,7 +88,7 @@ There are two ways to obtain reference answers:
* For instructions coming from human-designed problems, such as roleplay and chat, the reference answers are generated by GPT-3.5.
* For instructions related to classic NLP problems, such as classification, extraction and summarization, the reference answers are collected from open-source datasets with target answers.

There are 5 types of automatic evaluation metrics listed in the table below:
There are 6 types of automatic evaluation metrics listed in the table below:

| Automatic Evaluation Metric | Description |
| :---------------------------------: | :----------------------------------------------------------- |
@@ -94,6 +97,25 @@ There are 5 types of automatic evaluation metrics listed in the table below:
| Distinct | Measures the diversity of generated text by counting the unique n-grams. |
| BERTScore | Measures the semantic similarity between tokens of predictions and references with BERT. |
| Precision<br/> Recall<br/> F1 Score | Measure the number of overlaps between prediction and reference (designed for the classification and extraction categories). |
| CHRF | Measures the similarity of character n-grams between prediction and reference. |
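
The snippet below is a minimal, self-contained sketch of two of these metrics (Distinct-n and CHRF) to make the definitions concrete. It is not the pipeline's own metric implementation; it assumes the `sacrebleu` package is available, and the prediction/reference strings are illustrative only.

```python
import sacrebleu  # assumption: sacrebleu is installed

prediction = "the cat sat on the mat"
reference = "a cat was sitting on the mat"

def distinct_n(text: str, n: int = 2) -> float:
    """Distinct-n: ratio of unique n-grams to all n-grams in the generated text."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print("Distinct-2:", distinct_n(prediction))
# CHRF compares character n-grams of the prediction against the reference.
print("CHRF:", sacrebleu.sentence_chrf(prediction, [reference]).score)
```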

#### UniEval Evaluation

UniEval converts all evaluation tasks of different dimensions (metrics) into Boolean QA problems and utilizes the model to answer with "Yes" or "No". Compared with similarity-based metrics such as ROUGE and BLEU, UniEval can achieve a more comprehensive evaluation. In addition, UniEval demonstrates the ability to transfer to unseen dimensions and tasks.

In our evaluation pipeline, two pre-trained UniEval evaluators are used. One is [unieval-sum](https://huggingface.co/MingZhong/unieval-sum) and the other is [unieval-dialog](https://huggingface.co/MingZhong/unieval-dialog). The two models cover three tasks: `summarization`, `dialogue` and `data2text`. Each task has its own evaluation dimensions.

| UniEval Model | Task | Dimension(Metric) |
| :------------: | :----------------- | :--- |
| unieval-sum | summarization | coherence: whether the summary is coherent<br/>consistency: whether the claim is consistent with the given document<br/>fluency: whether the paragraph is fluent<br/>relevance: whether the summary is relevant to the reference |
| unieval-sum | data2text | naturalness: whether the utterance is fluent<br/>informativeness: whether the utterance is informative according to the reference |
| unieval-dialog | dialogue | naturalness: whether the response is natural in the dialogue<br/>coherence: whether the response is coherent in the dialogue history<br/>understandability: whether the response is understandable in the dialogue |

> **NOTE 1:** Task "data2text" uses the same model as task "summarization".

> **NOTE 2:** In the UniEval paper, the `unieval-sum` model demonstrates the best transfer ability, so you can evaluate your customized metrics with this model. Details on adding customized metrics can be found in the [FAQ](#faq).

> **NOTE 3:** We do not include all metrics provided by UniEval in our pipeline, because the data structure and content of the instructions we want to evaluate are not suitable for direct use of some UniEval metrics.
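
The snippet below is a rough, self-contained sketch of this Boolean-QA scoring idea using `transformers`. It is not the repository's `unieval` implementation: the exact question templates and score computation live in `unieval/`, and the dimension, question text and example response here are illustrative only.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# UniEval evaluators are T5-based; load the dialogue evaluator (illustrative model path).
tokenizer = AutoTokenizer.from_pretrained("MingZhong/unieval-dialog")
model = T5ForConditionalGeneration.from_pretrained("MingZhong/unieval-dialog")

# Each dimension becomes a Boolean QA problem answered with "Yes" or "No".
text = ("question: Is this a natural response in the dialogue? </s> "
        "response: Sure, I can help you book a table for two tonight.")
inputs = tokenizer(text, return_tensors="pt")

yes_id = tokenizer("Yes", add_special_tokens=False).input_ids[0]
no_id = tokenizer("No", add_special_tokens=False).input_ids[0]

with torch.no_grad():
    # Look only at the first decoded token and compare P("Yes") with P("No").
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    score = torch.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()

print(f"naturalness ≈ {score:.3f}")  # P(Yes) / (P(Yes) + P(No))
```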

## Evaluation Process

@@ -215,47 +237,60 @@ The following is an example of a Chinese GPT evaluation prompt. In an evaluation

#### Configuration

The following is an example of a Chinese config file. The configuration file can control how the pipeline evaluates the model. You need to specify GPT evaluation metrics and automatic metrics in key `GPT` and `Metrics`. You can find an example Chinese config file in `config`.
The following is an example of an English config file. The configuration file controls how the pipeline evaluates the model. You need to specify the GPT evaluation metrics, automatic metrics and UniEval metrics under the keys `GPT`, `Metrics` and `UniEval` (English only). You can find an example English config file in `config`.

```json
{
"language": "cn",
"language": "en",
"path_for_UniEval": {
"summarization": "path to unieval-sum model",
"dialogue": "path to unieval-dialog model",
"data2text": "path to unieval-sum model"
},
"category": {
"brainstorming": {
"GPT": ["relevance", "creativity", "practicality", "correctness"],
"Metrics": ["Distinct"]
"Metrics": ["Distinct"],
"UniEval": ["summarization-fluency", "data2text-naturalness", "data2text-informativeness"]
},
"chat": {
"GPT": [ "relevance", "naturalness", "engagingness", "reasonableness"],
"Metrics": ["Distinct"]
"Metrics": ["Distinct"],
"UniEval": ["dialogue-naturalness", "dialogue-coherence", "dialogue-understandability"]
}
}
}
```

`"language"`: the language used to evaluate the model capability. We only support Chinese `"cn"` for now.

`"path_for_UniEval"`: path to the UniEval model.

`"category"`: the category/categories needed to evaluate the model capability.

`"GPT"`: the metrics you want to use for GPT evaluation.

`"Metrics"`: the metrics you want to use for automatic metrics evaluation.

`"UniEval"`: the metrics you want to use for UniEval metrics evaluation. The metric has to be in the `"{task}-{metric}"` format because different tasks have same metrics such as naturalness and coherence.

You can remove a key such as `"Metrics"` to skip evaluating answers with the corresponding evaluation metrics.
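
As a minimal sketch of how such a config can be consumed (not the pipeline's actual loading code in `eval.py`), missing keys are simply skipped:

```python
import json

# Hypothetical loading sketch; the file name and structure follow the example above.
with open("config/config_en.json", encoding="utf-8") as f:
    config = json.load(f)

for category, settings in config["category"].items():
    gpt_metrics = settings.get("GPT", [])       # empty if the "GPT" key was removed
    auto_metrics = settings.get("Metrics", [])  # empty if the "Metrics" key was removed
    unieval_metrics = settings.get("UniEval", [])  # English only
    print(category, gpt_metrics, auto_metrics, unieval_metrics)
```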

You can create your own config file based on the available settings listed in the following table.

| "category" | "GPT" | "Metrics" |
| :--------------: | :---------------------: | :---------: |
| "brainstorming" | "language organization" | "BLEU" |
| "chat" | "relevance" | "ROUGE" |
| "classification" | "creativity" | "Distinct" |
| "closed_qa" | "practicality" | "BERTScore" |
| "extraction" | "correctness" | "Precision" |
| "generation" | "naturalness" | "Recall" |
| "open_qa" | "engagingness" | "F1 score" |
| "rewriting" | "reasonableness" | |
| "roleplay" | "diversity" | |
| "summarization" | "fidelity" | |
| | "conciseness" | |
| "category" | "GPT" | "Metrics" | "UniEval" |
| :--------------: | :---------------------: | :---------: | :--------------------------: |
| "brainstorming" | "language organization" | "BLEU" | "dialogue-naturalness" |
| "chat" | "relevance" | "ROUGE" | "dialogue-coherence" |
| "classification" | "creativity" | "Distinct" | "dialogue-understandability" |
| "closed_qa" | "practicality" | "BERTScore" | "data2text-naturalness" |
| "extraction" | "correctness" | "Precision" | "data2text-informativeness" |
| "generation" | "naturalness" | "Recall" | "summarization-coherence" |
| "open_qa" | "engagingness" | "F1 score" | "summarization-consistency" |
| "rewriting" | "reasonableness" | "CHRF" | "summarization-fluency" |
| "roleplay" | "diversity" | | "summarization-relevance" |
| "summarization" | "fidelity" | | |
| | "conciseness" | | |

> **NOTE:** For categories that don't have standard answers, such as `brainstorming`, you should avoid using automatic metrics such as `BLEU` and `ROUGE`, which are based on similarity measures; use `Distinct` instead in your config file.

@@ -290,23 +325,36 @@ For example, if you want to add a new metric `persuasiveness` into category `bra
"id": 1,
"category": "brainstorming",
"metrics": {
"persuasiveness": "说服力(1-5):XXX"
"persuasiveness": "persuasiveness(1-5):a short description for persuasiveness"
},
"CoT": {
"persuasiveness": "XXX\n\n说服力:"
"persuasiveness": "CoT for persuasiveness\n\npersuasiveness:"
},
"prompt": "你是一个好助手。请你为下面“头脑风暴”问题的答案打分。\n\n问题如下:\n\n{question}\n\n答案如下:\n\n{answer}\n\n评分的指标如下:\n\n{metric}\n\n请你遵照以下的评分步骤:\n\n{steps}"
"prompt": "You are a good assistant. Please rate the given answer to the \"brainstorming\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
}
}
```

</details>

<details><summary><b>How can I add a new UniEval evaluation metric?</b></summary>

For example, if you want to add a new metric `persuasiveness` to the task `data2text`, you should add a Boolean QA question about the metric in the function `add_question` in `unieval/utils.py`. Please note that how effectively the model evaluates this metric is unknown, and you may need to run some experiments to test whether the model is capable of evaluating it.

```python
if task == 'data2text':
    if dimension == 'persuasiveness':
        cur_input = 'question: Is this a persuasive utterance </s> utterance: ' + output[i]
```

</details>

## To Do

- [x] Add evaluation for English capability
- [ ] Support UniEval
- [x] Support UniEval
- [x] Support GPT-4 evaluation
- [ ] Support GPT evaluation with reference in the prompt

## Citations

@@ -327,4 +375,13 @@
archivePrefix={arXiv},
primaryClass={cs.CL}
}

@misc{zhong2022unified,
title={Towards a Unified Multi-Dimensional Evaluator for Text Generation},
author={Ming Zhong and Yang Liu and Da Yin and Yuning Mao and Yizhu Jiao and Pengfei Liu and Chenguang Zhu and Heng Ji and Jiawei Han},
year={2022},
eprint={2210.07197},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
12 changes: 8 additions & 4 deletions applications/Chat/evaluate/config/config_cn.json
@@ -34,7 +34,8 @@
"Metrics": [
"Precision",
"Recall",
"F1 score"
"F1 score",
"CHRF"
]
},
"closed_qa": {
@@ -46,7 +47,8 @@
"Metrics": [
"BLEU",
"ROUGE",
"BERTScore"
"BERTScore",
"CHRF"
]
},
"extraction": {
@@ -58,7 +60,8 @@
"Metrics": [
"Precision",
"Recall",
"F1 score"
"F1 score",
"CHRF"
]
},
"generation": {
@@ -116,7 +119,8 @@
"Metrics": [
"BLEU",
"ROUGE",
"BERTScore"
"BERTScore",
"CHRF"
]
}
}
73 changes: 69 additions & 4 deletions applications/Chat/evaluate/config/config_en.json
@@ -1,5 +1,10 @@
{
"language": "en",
"path_for_UniEval": {
"summarization": "path to unieval-sum",
"dialogue": "path to unieval-dialog",
"data2text": "path to unieval-sum"
},
"category": {
"brainstorming": {
"GPT": [
@@ -11,6 +16,11 @@
],
"Metrics": [
"Distinct"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"chat": {
@@ -23,6 +33,14 @@
],
"Metrics": [
"Distinct"
],
"UniEval": [
"summarization-fluency",
"dialogue-naturalness",
"dialogue-coherence",
"dialogue-understandability",
"data2text-naturalness",
"data2text-informativeness"
]
},
"classification": {
@@ -34,7 +52,13 @@
"Metrics": [
"Precision",
"Recall",
"F1 score"
"F1 score",
"CHRF"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"closed_qa": {
@@ -46,7 +70,13 @@
"Metrics": [
"BLEU",
"ROUGE",
"BERTScore"
"BERTScore",
"CHRF"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"extraction": {
@@ -58,7 +88,13 @@
"Metrics": [
"Precision",
"Recall",
"F1 score"
"F1 score",
"CHRF"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"generation": {
@@ -71,6 +107,11 @@
"BLEU",
"ROUGE",
"BERTScore"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"open_qa": {
@@ -81,6 +122,11 @@
],
"Metrics": [
"Distinct"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"rewriting": {
@@ -93,6 +139,11 @@
"BLEU",
"ROUGE",
"BERTScore"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"roleplay": {
@@ -104,6 +155,11 @@
],
"Metrics": [
"Distinct"
],
"UniEval": [
"summarization-fluency",
"data2text-naturalness",
"data2text-informativeness"
]
},
"summarization": {
@@ -116,7 +172,16 @@
"Metrics": [
"BLEU",
"ROUGE",
"BERTScore"
"BERTScore",
"CHRF"
],
"UniEval": [
"summarization-coherence",
"summarization-consistency",
"summarization-fluency",
"summarization-relevance",
"data2text-naturalness",
"data2text-informativeness"
]
}
}
2 changes: 1 addition & 1 deletion applications/Chat/evaluate/eval.py
@@ -40,7 +40,7 @@ def main(args):

# initialize evaluator
evaluator = Evaluator(metrics_per_category, battle_prompt, gpt_evaluation_prompt, args.gpt_model,
config["language"])
config["language"], config.get("path_for_UniEval", None))
if len(args.model_name_list) == 2:
answers1 = jload(args.answer_file_list[0])
answers2 = jload(args.answer_file_list[1])