diff --git a/applications/Chat/evaluate/README.md b/applications/Chat/evaluate/README.md
index ae3499bf268c..e3510e3522f6 100644
--- a/applications/Chat/evaluate/README.md
+++ b/applications/Chat/evaluate/README.md
@@ -1,7 +1,6 @@
# Evaluation
-In this directory, we introduce how you can evaluate your model with our pipeline. This pipeline is available for model
-evaluation of Chinese capability and the one for English capability is under preparation.
+In this directory, we introduce how you can evaluate your model with our pipeline. The pipeline now supports evaluation of both Chinese and English capabilities.
## Installation
@@ -24,7 +23,7 @@ The whole evaluation pipeline consists of two methods:
Our evaluation pipeline examines the model's capability using 10 categories of questions. The following table introduces each category:
-| Evaluation Category |                          Description                           |
+| Evaluation Category | Description |
| :-----------------: | :----------------------------------------------------------- |
| Brainstorming | Models are asked to generate a range of creative and diverse ideas according to the question. The capability of creativity is required. |
| Chat | Models are asked to continue a multi-round dialogue given the roles involved. The capability of understanding, memorizing previous rounds of the dialogue and answering according to the persona provided is required. |
@@ -40,17 +39,17 @@ Our evaluation pipeline examines the model's capability using 10 categories of q
To better understand each evaluation category, here are some example questions.
-| Evaluation Category |                        Chinese Example                         |                        English Example                         |
+| Evaluation Category | Chinese Example | English Example |
| :-----------------: | :----------------------------------------------------------- | :----------------------------------------------------------- |
| Brainstorming | **Example 1:** 请介绍一下人工智能的多个领域。<br><br>**Example 2:** 请给出管理家庭财务的3个小技巧。 | **Example 1:** How can I improve my memory? Any useful techniques you can suggest?<br><br>**Example 2:** What are some ways to increase productivity while working from home? |
-| Chat | **Example 1:** 基于以下角色信息完成一段对话。小张是一名新手爱好者,对养鸡有浓厚的兴趣。老李是一名有丰富经验的养鸡大师。 小张:您好,老李,我最近开始对养鸡感兴趣了,想请教您一些问题。 老李:你好,小张,我很乐意帮助你。你想问些什么? 小张:我想知道如何确定鸡的品种和性别? 老李:确切的品种可以通过鸡的外貌特征来确定,而性别一般是通过鸡卵的大小和形状来判断。还有什么问题吗? 小张: **Example 2:** 基于以下角色信息完成一段对话。小明是一名医生,一位老年病患者想要停药,但他对病情有所忽视并有担忧;王叔叔是老年病患者的儿子,希望能够听取医生的建议。 小明:你好,王叔叔,我了解你想要让你父亲停药。 王叔叔:是的,我父亲已经吃了那么久的药,我担心药物对他的身体会有副作用。 小明: | **Example 1:** Complete a conversation based on the following character information. Amy is a 30-year-old chef who runs her own restaurant. Jack is a food blogger who specializes in reviewing local restaurants. Amy: Hi Jack, I heard that you're a food blogger. Nice to meet you. Jack: Hi Amy, yes I am. Your restaurant has been receiving a lot of good reviews lately. Amy: Yes, we use only fresh and quality ingredients, and every dish is carefully crafted. Jack: **Example 2:** Complete a dialogue based on the following role information. A: Elementary student B: Teacher B: Good morning, Student A. Today we're going to learn about addition and subtraction. A: Teacher, I already know this very well. Why do I need to learn it again? B: |
+| Chat | **Example 1:** 基于以下角色信息完成一段对话。小张是一名新手爱好者,对养鸡有浓厚的兴趣。老李是一名有丰富经验的养鸡大师。 小张:您好,老李,我最近开始对养鸡感兴趣了,想请教您一些问题。 老李:你好,小张,我很乐意帮助你。你想问些什么? 小张:我想知道如何确定鸡的品种和性别? 老李:确切的品种可以通过鸡的外貌特征来确定,而性别一般是通过鸡卵的大小和形状来判断。还有什么问题吗? 小张:<br><br>**Example 2:** 基于以下角色信息完成一段对话。小明是一名医生,一位老年病患者想要停药,但他对病情有所忽视并有担忧;王叔叔是老年病患者的儿子,希望能够听取医生的建议。 小明:你好,王叔叔,我了解你想要让你父亲停药。 王叔叔:是的,我父亲已经吃了那么久的药,我担心药物对他的身体会有副作用。 小明: | **Example 1:** Complete a conversation based on the following character information. Amy is a 30-year-old chef who runs her own restaurant. Jack is a food blogger who specializes in reviewing local restaurants. Amy: Hi Jack, I heard that you're a food blogger. Nice to meet you. Jack: Hi Amy, yes I am. Your restaurant has been receiving a lot of good reviews lately. Amy: Yes, we use only fresh and quality ingredients, and every dish is carefully crafted. Jack:<br><br>**Example 2:** Complete a dialogue based on the following role information. A: Elementary student B: Teacher B: Good morning, Student A. Today we're going to learn about addition and subtraction. A: Teacher, I already know this very well. Why do I need to learn it again? B: |
| Classification | **Example 1:** 新闻标题:今日立夏,有一上联,立夏万物并秀,下联怎么对? 请根据以上新闻标题判断新闻所属的分类,你需要从文化,娱乐,体育,财经,房产,教育,科技,旅游,游戏,军事这十类中选择一个答案。<br><br>**Example 2:** 新闻标题:赵丽颖很久没有登上微博热搜了,但你们别急,她只是在憋大招而已。 请根据新闻标题判断新闻所属的分类,你需要从文化,娱乐,体育,财经,房产,教育,科技,旅游,游戏,军事这十类中选择一个答案。 | **Example 1:** Title: Fighting for Love (2020) Description: Jasmine got obsessed with a man and now he's obsessed with her. Steamy nights, kisses and rules being broken awaits them. She turned his whole world upside down and now he's doing it to hers. In this free fall, can they survive each others love?\" Based on the above information, determine which genre the work of art belongs to. You can only choose one from \"sport\", \"horror\", \"drama\", \"history\", \"romance\", \"biography\", \"science fiction\", \"comedy\", \"animation\", \"documentary\", \"music\" and \"news\".<br><br>**Example 2:** Title: Summer Breeze: The Isley Brothers Greatest Hits Live (2005) Description: Filmed in the US in 2005 and captured in excellent form led by Ron Isley's vocals and Ernie Isley's hard edged guitar. Virtually every track is a hit including Shout, Who's That Lady, Twist And Shout, Summer Breeze and Harvest For The World. Based on the above information, determine which genre the work of art belongs to. You can only choose one from \"sport\", \"horror\", \"drama\", \"history\", \"romance\", \"biography\", \"science fiction\", \"comedy\", \"animation\", \"documentary\", \"music\" and \"news\". |
-| Closed QA | **Example 1:** 请从以下选项中选择正确答案。以下哪个是世界上最高山峰? A. 长城 B. 泰山 C. 珠穆朗玛峰 D. 黄山<br><br>**Example 2:** 请从以下选项中选择一个最佳答案回答下面的问题。问题:非洲最高的山是哪座山? 选项: A. 麦金利山 B. 喜马拉雅山 C. 乞力马扎罗山 | **Example 1:** Which of the following options is NOT a primary color? (a) yellow (b) blue (c) orange (d) red **Example 2:** Choose the correct option to complete the following sentence: \"Harry Potter and the Chamber of Secrets\" is the ________ book in the Harry Potter series. (A) first (B) second (C) third (D) fourth |
+| Closed QA | **Example 1:** 请从以下选项中选择正确答案。以下哪个是世界上最高山峰? A. 长城 B. 泰山 C. 珠穆朗玛峰 D. 黄山<br><br>**Example 2:** 请从以下选项中选择一个最佳答案回答下面的问题。问题:非洲最高的山是哪座山? 选项: A. 麦金利山 B. 喜马拉雅山 C. 乞力马扎罗山 | **Example 1:** Which of the following options is NOT a primary color? (a) yellow (b) blue (c) orange (d) red<br><br>**Example 2:** Choose the correct option to complete the following sentence: \"Harry Potter and the Chamber of Secrets\" is the ________ book in the Harry Potter series. (A) first (B) second (C) third (D) fourth |
| Extraction | **Example 1:** 根据以下新闻文本,提取新闻报道时间,例如回答时按照格式“新闻报道时间:2007年8月10日” 新闻文本如下:2007-4-7中新网4月7日电据中国消防在线消息,4月4日晚上7时30分左右,湖南长潭高速公路上发生一起6车连环相撞失火事故。长株潭三地消防部门共出动消防车21台,警力100余人。经过消防官兵近2个小时奋力扑救,大火被成功扑灭。据初步调查,有1人在此次事故中死亡。<br><br>**Example 2:** 根据以下新闻文本,提取新闻报道时间,例如回答时按照格式“新闻报道时间:2007年8月10日” 新闻文本如下:2014年1月15日,据外媒《俄罗斯报》报道称,位于北半球的澳大利亚现在正处于炎热的夏季,而近日也到了高温酷暑的时候,当地时间1月14日晚,澳大利亚南部一夜间发生至少250起火灾。受炎热天气及雷雨天气影响,澳大利亚南部一夜间发生至少250起火灾,灾情多集中在维多利亚州。火灾发生后,救援人员立即展开救灾行动。目前,大部分起火点火势已被控制。 | **Example 1:** Ernest Hemingway, an American literary giant known for his spare and direct writing style, has penned timeless works such as 'The Old Man and the Sea', 'For Whom the Bell Tolls', and 'A Farewell to Arms', which have made a profound impact on the literary world and continue to be widely read and admired today. Extract the name of the author mentioned above.<br><br>**Example 2:** In the epic fantasy series 'A Song of Ice and Fire', George R.R. Martin weaves a complex web of political intrigue, war, and magic across the fictional continents of Westeros and Essos. Martin's richly developed characters and intricate plotlines have captivated readers worldwide, much like his other acclaimed works such as 'A Clash of Kings' and 'A Storm of Swords'. Extract the name of the author in the above material. |
| Generation | **Example 1:** 请撰写一篇文章,介绍如何通过改善生活习惯来预防疾病和延长寿命。<br><br>**Example 2:** 请根据以下情节撰写一篇短篇小说:一名年轻人被困在一个荒岛上,他必须想办法生存下去直到被救援。但他很快发现自己并不孤单。 | **Example 1:** Write a descriptive paragraph about an island to relax and unwind, including details about the location and atmosphere.<br><br>**Example 2:** Can you help me write a persuasive email to my colleagues encouraging them to participate in a charitable fundraising event? |
| Open QA | **Example 1:** 请问万有引力定律由谁提出的?<br><br>**Example 2:** 哪些国家参与了第一次世界大战? | **Example 1:** What are the four basic tastes of the human palate?<br><br>**Example 2:** Who painted The Scream? |
| Rewriting | **Example 1:** 请将以下句子改为正确的语序。 生日快乐你祝他了吗?<br><br>**Example 2:** 将以下文本翻译成英语: “这个周末我要去海边玩” | **Example 1:** Please translate the following sentences, which are a mixture of Chinese and English, into full English. 我需要买一些healthy snacks,比如nuts和dried fruits,作为我的office的午餐.<br><br>**Example 2:** Please rewrite the sentence using an inverted sentence structure. We won't begin our journey until the sun sets. |
-| Roleplay | **Example 1:** 我想让你担任Android开发工程师面试官。我将成为候选人,您将向我询问Android开发工程师职位的面试问题。我希望你只作为面试官回答。不要一次写出所有的问题。我希望你只对我进行采访。问我问题,等待我的回答。不要写解释。像面试官一样一个一个问我,等我回答。我的第一句话是“面试官你好”。<br><br>**Example 2:** 我想让你扮演讲故事的角色。你会想出引人入胜、富有想象力和吸引观众的有趣故事。它可以是童话故事、教育故事或任何其他类型的有潜力的故事以吸引人们的注意力和想象力。根据目标受众,您可以为您的讲故事环节选择特定的主题或主题,例如,如果是儿童,那么您可以谈论动物;如果是成人,那么基于历史的故事可能会更好地吸引他们等。我的第一个请求是我需要一个关于毅力的有趣故事。 | **Example 1:** Assume the role of a marriage counselor. Develop a series of communication exercises for a couple who are experiencing difficulties in their relationship. These exercises should promote active listening, empathy, and effective expression of emotions. Your first assignment is to provide a set of three exercises that focus on resolving conflicts and rebuilding trust.<br><br>**Example 2: ** I want you to act as a travel agent. I will tell you my desired destination, travel dates, and budget, and it will be your job to suggest the best travel itinerary for me. Your recommendations should include the best transportation options, hotel accommodations, and any popular tourist attractions nearby. My first request is "I want to plan a trip to Tokyo for a week, with a budget of $2000. I want to explore the culture and food of the city." |
+| Roleplay | **Example 1:** 我想让你担任Android开发工程师面试官。我将成为候选人,您将向我询问Android开发工程师职位的面试问题。我希望你只作为面试官回答。不要一次写出所有的问题。我希望你只对我进行采访。问我问题,等待我的回答。不要写解释。像面试官一样一个一个问我,等我回答。我的第一句话是“面试官你好”。<br><br>**Example 2:** 我想让你扮演讲故事的角色。你会想出引人入胜、富有想象力和吸引观众的有趣故事。它可以是童话故事、教育故事或任何其他类型的有潜力的故事以吸引人们的注意力和想象力。根据目标受众,您可以为您的讲故事环节选择特定的主题或主题,例如,如果是儿童,那么您可以谈论动物;如果是成人,那么基于历史的故事可能会更好地吸引他们等。我的第一个请求是我需要一个关于毅力的有趣故事。 | **Example 1:** Assume the role of a marriage counselor. Develop a series of communication exercises for a couple who are experiencing difficulties in their relationship. These exercises should promote active listening, empathy, and effective expression of emotions. Your first assignment is to provide a set of three exercises that focus on resolving conflicts and rebuilding trust.<br><br>**Example 2:** I want you to act as a travel agent. I will tell you my desired destination, travel dates, and budget, and it will be your job to suggest the best travel itinerary for me. Your recommendations should include the best transportation options, hotel accommodations, and any popular tourist attractions nearby. My first request is "I want to plan a trip to Tokyo for a week, with a budget of $2000. I want to explore the culture and food of the city." |
| Summarization | **Example 1:** 请简要总结概括以下段落材料。 当地时间29日,泰国卫生部通报,新增143名新冠肺炎确诊病例和1名死亡病例。截止到当地时间29日上午,泰国累计确诊病例1388例,其中泰国籍1172例,非泰国籍216例。死亡病例累计7例。(原题为《泰国新增143例新冠肺炎确诊病例累计确诊1388例》)<br><br>**Example 2:** 请简要总结概括以下段落材料。 近期,参与京雄高铁站站房建设的中铁十二局,因在施工过程中存在环境违法行为被雄安新区公开通报。通报发出后,引起社会广泛关注。近日,人民网记者从雄安新区相关部门及中铁十二局获悉,新区有关部门已经集中约谈了中铁十二局等24个参与雄安建设的项目单位。对于约谈内容和结果,中铁十二局有关宣传负责人回应:“具体内容不清楚,最好找雄安新区相关部门了解情况。”新区有关部门负责人表示,此前涉及的环境违法行为,中铁十二局已基本整改到位,但约谈内容和结果暂不公开,接下来,将按部就班推进环境治理工作。(原题为《雄安新区:中铁十二局涉环境违法已基本整改到位》) | **Example 1:** The 21-year-old woman was treated by paramedics after the kitchen fire in Botfield Road in Shifnal, Shropshire. West Mercia Police said it is treating Wednesday morning's incident as arson and are appealing for any witnesses to contact them. The 50-year-old man has been arrested on suspicion of arson with intent to endanger life. For more on this and other stories from Shropshire. Please briefly summarize the above material within 20 words.<br><br>**Example 2:** South Wales Police were called to a property in Heolgerrig, Merthyr Tydfil, at about 13:40 BST on Sunday. The child was airlifted to Prince Charles Hospital but died shortly afterwards. Police are investigating the circumstances surrounding the incident and have appealed for witnesses. The girl's family are being supported by specially trained officers. Please briefly summarize the above material within 20 words. |
@@ -58,24 +57,26 @@ To better understand each evaluation category, here are some example questions p
#### GPT Evaluation
-GPT evaluation uses GPT models to evaluate the prediction of different models and different pre-defined evaluation metrics are applied to different categories. The following table shows the 11 pre-defined evaluation metrics in Chinese:
+GPT evaluation uses GPT models to evaluate the predictions of different models, applying different pre-defined evaluation metrics to different categories. The following table shows the 11 pre-defined evaluation metrics in both Chinese and English:
-| Evaluation Metric |                          Prompt Words                          |                     CoT(Chain-of-Thought)                      |
+| Evaluation Metric | Prompt Words | CoT(Chain-of-Thought) |
| :-------------------: | :----------------------------------------------------------- | :----------------------------------------------------------- |
-| Language organization | 语言组织(1-5):答案语言是否流畅、连贯,使用正确的语法,具有一定逻辑性,使用恰当的连接词、过渡词等等。 | 1. 阅读答案,并检查是否有语法错误、用词不当或其他显著的错误。 2.检查答案是否具有逻辑性,能够按照合理的顺序传达信息并且能够自圆其说 3. 确定答案是否与问题或主题相关,并且能够传达清晰的信息。 4. 检查答案是否连贯,是否使用适当的转换和过渡来保持句子和段落之间的连贯性。 5. 检查答案是否具有明确的结构和组织方式,使得读者可以轻松理解信息的层次和结构。 6. 根据以上因素综合评估答案的语言组织,并给出一个1到5的分数,其中5表示语言组织非常好,而1表示语言组织非常差。 |
-| Relevance | 切题(1-5):答案内容是否切题,不答非所问,并且严格遵照题目要求。 | 1. 阅读题目,确定题目所问的问题是什么,以及需要回答哪些方面的问题。 2. 阅读答案,确认答案是否直接回答了题目所问的问题。 3. 检查答案是否严格遵照了题目的要求,包括答题方式、答题长度、答题格式等等。 4. 根据以上因素综合评估答案的切题程度,并给出一个1到5的分数,其中5表示答案非常切题,而1表示答案完全没有切题。 |
-| Creativity | 创意性(1-5):某些头脑风暴问题可能需要答案具有创意,提出新的思路。 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则创意性评分可能会受到影响。 3. 考虑答案中是否包含新颖的想法或独特的思路。答案可能与已知的解决方案有所重叠,但仍然可以被认为是有创意的,只要它提供了新的角度或方法来解决问题。 4. 根据答案的创意性,给出一个1到5的评分。如果答案缺乏创意,则应给出一个较低的评分。如果答案具有创意并提供了新的思路,应给出一个较高的评分。 |
-| Practicality | 实用性(1-5):某些头脑风暴问题可能需要答案提出实用的建议或解决方法。 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则实用性评分可能会受到影响。 3. 考虑答案中提出的建议或解决方法是否实用并可行。答案可能看起来很好,但如果无法实现或应用,则实用性评分可能会受到影响。 4. 根据答案的实用性,给出一个1到5的评分。如果答案缺乏实用性,则应给出一个较低的评分。如果答案提出了实用的建议或解决方法,并且可以很好地解决问题,则应给出一个较高的评分。 |
-| Correctness | 正确性(1-5):答案应该符合常识、生活实际等等 | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则正确性评分可能会受到影响。 3. 考虑答案中所提供的信息是否正确、符合常识、生活实际等等。如果答案中存在明显的错误或不合理之处,则正确性评分可能会受到影响。 4. 根据答案的正确性,给出一个1到5的评分。如果答案存在明显的错误或不合理之处,则应给出一个较低的评分。如果答案正确、符合常识、生活实际等等,则应给出一个较高的评分。 |
-| Naturalness | 自然(1-5):答案是否自然,并且符合问题给定的身份。 | 1. 阅读题目,确定题目提供的身份信息。 2. 检查答案内容是否符合题目给定的身份。 3. 根据以上因素,对该回答的自然性进行打分,分数从1到5,其中1表示不自然,5表示非常自然,并符合问题给定的身份。 |
-| Engagingness | 参与感(1-5):答案是否对前面的对话内容做出了恰当的反应,是否理解对话的语境和背景。 | 1. 阅读题目,确定对话的语境和背景。 2. 检查答案是否充分理解对话的语境和背景,能否自然地融入到对话中而不显得突兀。 3. 根据以上因素,对该回答的参与感进行打分,分数从1到5,其中1表示没有参与感,5表示非常有参与感,并且恰当地理解了对话的语境和背景。 |
-| Reasonableness | 合理性(1-5):答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。 | 1. 阅读题目,确定对话的主题以及问题期望的回答方向。 2. 判断答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。 3. 根据以上因素,对该回答的合理性进行打分,分数从1到5,其中1表示不合理,5表示非常合理,并且能够与前面的对话内容形成逻辑上的衔接,并符合常理。 |
-| Diversity | 多样性(1-5):答案使用语言是否优美,具有有一定的创造性和想象力。然而,回答也应该保持合理和适度,不要过于夸张或离题。 | 1. 仔细阅读整个回答,确保完全理解回答所表达的内容和主题。 2. 在阅读回答的同时,注意语言的质量,例如措辞是否正确,语言是否生动等。 3. 检查回答的创造性和想象力,看看回答是否能够吸引人阅读下去。 4. 检查回答的合理性和适度,看看回答是否夸张或离题。5. 将多样性的评分打分在1到5之间,5分表示回答的质量很好,能够吸引人阅读,1分表示回答的内容生硬或者有离题的问题。 |
-| Fidelity | 保真度(1-5):答案是否能够严格遵守角色的设定回答给定的请求。 | 1. 仔细阅读问题,了解角色在问题中的设定和表现,包括职业、背景、观点、性格等方面。 阅读题目的请求,确认回答请求时需要注意的细节。 3. 对比提供的回答与该角色的设定,评估回答是否能够严格遵守角色的设定。 4. 结合以上评估结果给出保真度的评分,范围从1到5分,其中1分表示回答与角色设定完全不符,5分表示回答完全符合角色设定且满足给定请求。 |
-| Conciseness | 简明扼要(1-5):答案是否简明扼要,没有冗余内容。 | 1. 阅读题目,提取出材料的重点。 2. 阅读该总结,并注意其中的主要观点和信息。 3. 评估总结的长度。一个简明扼要的总结通常应该在几句话或几段文字内传达关键信息,而不是冗长的段落或文章。 4. 检查总结是否包含与主要观点无关的信息或冗余信息。 5. 确定总结涵盖了材料中的关键信息,并且没有忽略任何重要细节。 6. 给总结打出1-5的分数,其中5表示总结简明扼要,没有冗余内容,而1表示总结冗长或包含不必要的信息,难以理解或记忆。根据您的判断,打出适当的得分。 |
+| 语言组织 (Language organization) | 语言组织(1-5):答案语言是否流畅、连贯,使用正确的语法,具有一定逻辑性,使用恰当的连接词、过渡词等等。Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc. | 1. 阅读答案,并检查是否有语法错误、用词不当或其他显著的错误。 2. 检查答案是否具有逻辑性,能够按照合理的顺序传达信息并且能够自圆其说 3. 确定答案是否与问题或主题相关,并且能够传达清晰的信息。 4. 检查答案是否连贯,是否使用适当的转换和过渡来保持句子和段落之间的连贯性。 5. 检查答案是否具有明确的结构和组织方式,使得读者可以轻松理解信息的层次和结构。 6. 根据以上因素综合评估答案的语言组织,并给出一个1到5的分数,其中5表示语言组织非常好,而1表示语言组织非常差。1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes. 2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory. 3. Determine if the answer is relevant to the question or topic and conveys a clear message. 4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs. 5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information. 6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization. |
+| 切题 (Relevance) | 切题(1-5):答案内容是否切题,不答非所问,并且严格遵照题目要求。Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic. | 1. 阅读题目,确定题目所问的问题是什么,以及需要回答哪些方面的问题。 2. 阅读答案,确认答案是否直接回答了题目所问的问题。 3. 检查答案是否严格遵照了题目的要求,包括答题方式、答题长度、答题格式等等。 4. 根据以上因素综合评估答案的切题程度,并给出一个1到5的分数,其中5表示答案非常切题,而1表示答案完全没有切题。1. Read the question to determine what the question asks and what aspects of the question need to be answered. 2. Read the answers to make sure that they directly answer the question asked. 3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc. 4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all. |
+| 创意性 (Creativity) | 创意性(1-5):某些头脑风暴问题可能需要答案具有创意,提出新的思路。Creativity (1-5): Some brainstorming questions may require answers that are creative and suggest new ideas. | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则创意性评分可能会受到影响。 3. 考虑答案中是否包含新颖的想法或独特的思路。答案可能与已知的解决方案有所重叠,但仍然可以被认为是有创意的,只要它提供了新的角度或方法来解决问题。 4. 根据答案的创意性,给出一个1到5的评分。如果答案缺乏创意,则应给出一个较低的评分。如果答案具有创意并提供了新的思路,应给出一个较高的评分。1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions. 2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the creativity score may be affected. 3. Consider whether the answer contains novel ideas or unique thoughts. An answer may overlap with a known solution and still be considered creative, as long as it offers a new perspective or approach to the problem. 4. Give a score of 1 to 5 depending on the creativity of the answer. If the answer lacks creativity, a lower score should be given. If the answer is creative and provides a new idea, a higher score should be given. |
+| 实用性 (Practicality) | 实用性(1-5):某些头脑风暴问题可能需要答案提出实用的建议或解决方法。Practicality (1-5): Some brainstorming questions may require answers to suggest practical suggestions or solutions. | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则实用性评分可能会受到影响。 3. 考虑答案中提出的建议或解决方法是否实用并可行。答案可能看起来很好,但如果无法实现或应用,则实用性评分可能会受到影响。 4. 根据答案的实用性,给出一个1到5的评分。如果答案缺乏实用性,则应给出一个较低的评分。如果答案提出了实用的建议或解决方法,并且可以很好地解决问题,则应给出一个较高的评分。1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions. 2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the practicality score may be affected. 3. Consider whether the suggestions or solutions presented in the answer are practical and workable. The answer may look good, but if it cannot be implemented or applied, the practicality score may be affected. 4. Give a score of 1 to 5 depending on the practicality of the answer. If the answer lacks practicality, a lower score should be given. If the answer makes a practical suggestion or solution and solves the problem well, a higher score should be given. |
+| 正确性 (Correctness) | 正确性(1-5):答案应该符合常识、生活实际等等。 Correctness (1-5): The answer should be in line with common sense, life experience, etc. | 1. 仔细阅读所提供的头脑风暴问题,确保你理解问题的要点和背景。 2. 根据你的知识和经验,判断所提供的答案是否可行。如果答案不可行,则正确性评分可能会受到影响。 3. 考虑答案中所提供的信息是否正确、符合常识、生活实际等等。如果答案中存在明显的错误或不合理之处,则正确性评分可能会受到影响。 4. 根据答案的正确性,给出一个1到5的评分。如果答案存在明显的错误或不合理之处,则应给出一个较低的评分。如果答案正确、符合常识、生活实际等等,则应给出一个较高的评分。1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions. 2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the correctness score may be affected. 3. Consider whether the information provided in the answer is correct, consistent with common sense, real life, etc. If there are obvious errors or implausibilities in the answer, the correctness score may be affected. 4. Give a score of 1 to 5 depending on the correctness of the answer. If the answer contains obvious errors or unreasonable points, a lower score should be given. A higher score should be given if the answer is correct, consistent with common sense, real life, etc. |
+| 自然 (Naturalness) | 自然(1-5):答案是否自然,并且符合问题给定的身份。Naturalness (1-5): whether the answer is natural and fits the identity given by the question. | 1. 阅读题目,确定题目提供的身份信息。 2. 检查答案内容是否符合题目给定的身份。 3. 根据以上因素,对该回答的自然性进行打分,分数从1到5,其中1表示不自然,5表示非常自然,并符合问题给定的身份。1. Read the question and determine the identity information provided in the question. 2. Check whether the content of the answer matches the identity given in the question. 3. Based on the above factors, score the naturalness of the response on a scale from 1 to 5, where 1 means unnatural and 5 means very natural and in accordance with the identity given in the question. |
+| 参与感 (Engagingness) | 参与感(1-5):答案是否对前面的对话内容做出了恰当的反应,是否理解对话的语境和背景。Engagingness (1-5): whether the answer responds appropriately to the content of the preceding conversation and whether it understands the context and background of the conversation. | 1. 阅读题目,确定对话的语境和背景。 2. 检查答案是否充分理解对话的语境和背景,能否自然地融入到对话中而不显得突兀。 3. 根据以上因素,对该回答的参与感进行打分,分数从1到5,其中1表示没有参与感,5表示非常有参与感,并且恰当地理解了对话的语境和背景。1. Read the questions to determine the context and background of the dialogue. 2. Check that the answer fully understands the context and background of the conversation and that it fits naturally into the conversation without seeming abrupt. 3. Based on the above factors, rate the response's engagement on a scale from 1 to 5, where 1 means not engaged and 5 means very engaged and appropriately understands the context and background of the conversation. |
+| 合理性 (Reasonableness) | 合理性(1-5):答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。Reasonableness (1-5): Whether the answer can form a logical connection with the content of the previous dialogue, whether it is consistent with common sense, and whether it can reasonably exist in this context. | 1. 阅读题目,确定对话的主题以及问题期望的回答方向。 2. 判断答案是否能够与前面的对话内容形成逻辑上的衔接,是否符合常理,能否在这个上下文中合理存在。 3. 根据以上因素,对该回答的合理性进行打分,分数从1到5,其中1表示不合理,5表示非常合理,并且能够与前面的对话内容形成逻辑上的衔接,并符合常理。1. Read the question and determine the topic of the conversation and the direction the question expects the answer to go. 2. Determine whether the answer can be logically connected to the preceding conversation, whether it makes common sense, and whether it can reasonably exist in this context. 3. Based on the above factors, rate the reasonableness of the answer on a scale from 1 to 5, where 1 means unreasonable and 5 means very reasonable and able to form a logical connection with the preceding dialogue content and consistent with common sense. |
+| 多样性 (Diversity) | 多样性(1-5):答案使用语言是否优美,具有一定的创造性和想象力。然而,回答也应该保持合理和适度,不要过于夸张或离题。Diversity (1-5): Whether the answers use beautiful language and have some creativity and imagination. However, answers should also be kept reasonable and moderate, not overly exaggerated or off-topic. | 1. 仔细阅读整个回答,确保完全理解回答所表达的内容和主题。 2. 在阅读回答的同时,注意语言的质量,例如措辞是否正确,语言是否生动等。 3. 检查回答的创造性和想象力,看看回答是否能够吸引人阅读下去。 4. 检查回答的合理性和适度,看看回答是否夸张或离题。 5. 将多样性的评分打分在1到5之间,5分表示回答的质量很好,能够吸引人阅读,1分表示回答的内容生硬或者有离题的问题。1. Read the entire response carefully to ensure that you fully understand the content and theme expressed in the response. 2. While reading the response, pay attention to the quality of the language, such as whether the wording is correct and the language is vivid. 3. Check the creativity and imagination of the response to see if the response is engaging to read on. 4. Check the reasonableness and appropriateness of the responses to see if the responses are exaggerated or off-topic. 5. Rate the diversity on a scale of 1 to 5, with a 5 indicating a good quality response that is engaging to read and a 1 indicating a raw response or a question that is off-topic. |
+| 保真度 (Fidelity) | 保真度(1-5):答案是否能够严格遵守角色的设定回答给定的请求。Fidelity (1-5): whether the answer is able to answer the given request in strict compliance with the role setting. | 1. 仔细阅读问题,了解角色在问题中的设定和表现,包括职业、背景、观点、性格等方面。 2. 阅读题目的请求,确认回答请求时需要注意的细节。 3. 对比提供的回答与该角色的设定,评估回答是否能够严格遵守角色的设定。 4. 结合以上评估结果给出保真度的评分,范围从1到5分,其中1分表示回答与角色设定完全不符,5分表示回答完全符合角色设定且满足给定请求。1. Read the question carefully to understand how the character is set up and represented in the question, including aspects such as occupation, background, point of view, and personality. 2. Read the question's request and confirm the details that need to be taken into account when answering the request. 3. Compare the provided answer with the setting of the role and assess whether the answer can strictly adhere to the setting of the role. 4. Combine the results of the above assessment to give a fidelity score ranging from 1 to 5, where a score of 1 means that the response does not match the persona at all, and a score of 5 means that the response fully complies with the persona and satisfies the given request. |
+| 简明扼要 (Conciseness) | 简明扼要(1-5):答案是否简明扼要,没有冗余内容。Conciseness (1-5): answers should be concise and without redundant content. | 1. 阅读题目,提取出材料的重点。 2. 阅读该总结,并注意其中的主要观点和信息。 3. 评估总结的长度。一个简明扼要的总结通常应该在几句话或几段文字内传达关键信息,而不是冗长的段落或文章。 4. 检查总结是否包含与主要观点无关的信息或冗余信息。 5. 确定总结涵盖了材料中的关键信息,并且没有忽略任何重要细节。 6. 给总结打出1-5的分数,其中5表示总结简明扼要,没有冗余内容,而1表示总结冗长或包含不必要的信息,难以理解或记忆。根据您的判断,打出适当的得分。1. Read the title and extract the main points of the material. 2. Read the summary and note the main ideas and messages in it. 3. Assess the length of the summary. A concise summary should usually convey key information within a few sentences or paragraphs, rather than lengthy paragraphs or essays. 4. Check that the summary does not contain information that is not relevant to the main ideas or that is redundant. 5. Make sure that the summary covers the key information in the material and that no important details have been omitted. 6. Rate the summary on a scale of 1-5, where 5 means the summary is concise and free of redundancy, and 1 means the summary is lengthy or contains unnecessary information that is difficult to understand or remember. Based on your judgment, assign the appropriate score. |
GPT models evaluate the quality of model predictions based on the given prompt words and give a score between 1 and 5.
+> **NOTE:** Even for the same metric, the prompt words and CoT(Chain-of-Thought) can differ depending on which category you want to evaluate. For example, the prompt words for the metric `correctness` shown here are "The answer should be in line with common sense, life experience, etc." (for the category `brainstorming`), while for the category `extraction` the prompt words can be "Answers should extract the required information accurately and should not contain any incorrect or misleading information." You can find all the prompt words and CoT(Chain-of-Thought) in `prompt/evaluation_prompt`.
+
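+To make this concrete, here is a minimal sketch (illustrative only, not part of the pipeline code) of how the prompt words and CoT for one metric are combined with a question and an answer via the template fields (`metrics`, `CoT`, `prompt`) shown in the Evaluation Prompt section below:
+
+```python
+import json
+
+# Assumes the working directory is applications/Chat/evaluate.
+with open("prompt/evaluation_prompt/evaluation_prompt_en.json") as f:
+    prompts = json.load(f)
+
+template = prompts["brainstorming"]
+query = template["prompt"].format(
+    question="How can I improve my memory?",
+    answer="<model answer to be rated>",
+    metric=template["metrics"]["correctness"],    # prompt words for this metric
+    steps=template["CoT"]["correctness"],         # chain-of-thought evaluation steps
+)
+# `query` is then sent to the GPT model, which replies with a score from 1 to 5.
+```
+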
#### Automatic Evaluation
Automated metrics evaluate the capability of a model by comparing model predictions with reference answers.
@@ -86,7 +87,7 @@ There are two ways to obtain reference answers:
There are 5 types of automatic evaluation metrics listed in the table below:
-| Automatic Evaluation Metric |                          Description                           |
+| Automatic Evaluation Metric | Description |
| :---------------------------------: | :----------------------------------------------------------- |
| BLEU-n | Measures the accuracy between prediction and reference. BLEU-1 (unigram) evaluates accuracy at the word level, while BLEU-n (n-gram) evaluates fluency at the sentence level. |
| ROUGE | ROUGE-N measures the number of matching n-grams between prediction and reference. ROUGE-L measures the longest common subsequence (LCS) between prediction and reference. |
@@ -175,7 +176,7 @@ Example:
#### Battle Prompt
-The following is the Chinese battle prompt. In the battle prompt, the question and answers from two different models are fed into the prompt template. You can find an example battle prompt file in `prompt/battle_prompt`.
+The following is the Chinese battle prompt. In the battle prompt, the question and answers from two different models are fed into the prompt template. You can find example battle prompt files for Chinese and English in `prompt/battle_prompt`.
```json
{
@@ -188,7 +189,7 @@ The following is the Chinese battle prompt. In the battle prompt, the question a
#### Evaluation Prompt
-The following is an example of a Chinese GPT evaluation prompt. In an evaluation prompt, you should define your metrics in `metrics` and provide CoT(Chain-of-Thought) in `CoT`. You can find an example evaluation prompt file in `prompt/evaluation_prompt`.
+The following is an example of a Chinese GPT evaluation prompt. In an evaluation prompt, you should define your metrics in `metrics` and provide CoT(Chain-of-Thought) in `CoT`. You can find example evaluation prompt files for Chinese and English in `prompt/evaluation_prompt`.
```json
{
@@ -303,7 +304,7 @@ For example, if you want to add a new metric `persuasiveness` into category `bra
## To Do
-- [ ] Add evaluation for English capability
+- [x] Add evaluation for English capability
- [ ] Support UniEval
- [x] Support GPT-4 evaluation
diff --git a/applications/Chat/evaluate/config/config_en.json b/applications/Chat/evaluate/config/config_en.json
new file mode 100644
index 000000000000..5b6272b97084
--- /dev/null
+++ b/applications/Chat/evaluate/config/config_en.json
@@ -0,0 +1,123 @@
+{
+ "language": "en",
+ "category": {
+ "brainstorming": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "creativity",
+ "practicality",
+ "correctness"
+ ],
+ "Metrics": [
+ "Distinct"
+ ]
+ },
+ "chat": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "naturalness",
+ "engagingness",
+ "reasonableness"
+ ],
+ "Metrics": [
+ "Distinct"
+ ]
+ },
+ "classification": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness"
+ ],
+ "Metrics": [
+ "Precision",
+ "Recall",
+ "F1 score"
+ ]
+ },
+ "closed_qa": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness"
+ ],
+ "Metrics": [
+ "BLEU",
+ "ROUGE",
+ "BERTScore"
+ ]
+ },
+ "extraction": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness"
+ ],
+ "Metrics": [
+ "Precision",
+ "Recall",
+ "F1 score"
+ ]
+ },
+ "generation": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "diversity"
+ ],
+ "Metrics": [
+ "BLEU",
+ "ROUGE",
+ "BERTScore"
+ ]
+ },
+ "open_qa": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness"
+ ],
+ "Metrics": [
+ "Distinct"
+ ]
+ },
+ "rewriting": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness"
+ ],
+ "Metrics": [
+ "BLEU",
+ "ROUGE",
+ "BERTScore"
+ ]
+ },
+ "roleplay": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "fidelity",
+ "creativity"
+ ],
+ "Metrics": [
+ "Distinct"
+ ]
+ },
+ "summarization": {
+ "GPT": [
+ "language organization",
+ "relevance",
+ "correctness",
+ "conciseness"
+ ],
+ "Metrics": [
+ "BLEU",
+ "ROUGE",
+ "BERTScore"
+ ]
+ }
+ }
+}
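
A minimal sketch (illustrative only) of how this config is consumed; the real logic lives in `eval.py` below:

```python
import json

# Assumes the working directory is applications/Chat/evaluate.
with open("config/config_en.json") as f:
    config = json.load(f)

assert config["language"] in ["cn", "en"]
# For each category, the config lists the GPT metrics and the automatic metrics to compute.
for category, settings in config["category"].items():
    print(f"{category}: GPT={settings['GPT']}, automatic={settings['Metrics']}")
```
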
diff --git a/applications/Chat/evaluate/eval.py b/applications/Chat/evaluate/eval.py
index 4067b15db6e8..8388d95f748a 100644
--- a/applications/Chat/evaluate/eval.py
+++ b/applications/Chat/evaluate/eval.py
@@ -14,7 +14,7 @@ def main(args):
# load config
config = jload(args.config_file)
- if config["language"] == "cn":
+ if config["language"] in ["cn", "en"]:
# get metric settings for all categories
metrics_per_category = {}
for category in config["category"].keys():
diff --git a/applications/Chat/evaluate/evaluator.py b/applications/Chat/evaluate/evaluator.py
index 433d775d27ed..0bf55ca80d7c 100644
--- a/applications/Chat/evaluate/evaluator.py
+++ b/applications/Chat/evaluate/evaluator.py
@@ -4,7 +4,7 @@
import gpt_evaluate
import metrics
import pandas as pd
-from utils import get_data_per_category, jdump
+from utils import analyze_automatic_results, get_data_per_category, save_automatic_results
class Evaluator(object):
@@ -42,21 +42,21 @@ def evaluate(self, answers: List[Dict], targets: List[Dict]) -> None:
"""
- def switch(metric):
+ def switch(metric, language):
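+ # Each automatic metric helper now also receives the evaluation language ("cn" or "en") so it can pick the matching tokenizer and scorer.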
if metric == "BLEU":
- return metrics.bleu_score(preds=predicts_list, targets=targets_list)
+ return metrics.bleu_score(preds=predicts_list, targets=targets_list, language=language)
elif metric == "ROUGE":
- return metrics.rouge_cn_score(preds=predicts_list, targets=targets_list)
+ return metrics.rouge_score(preds=predicts_list, targets=targets_list, language=language)
elif (metric == "Distinct"):
- return metrics.distinct_score(preds=predicts_list)
+ return metrics.distinct_score(preds=predicts_list, language=language)
elif (metric == "BERTScore"):
- return metrics.bert_score(preds=predicts_list, targets=targets_list)
+ return metrics.bert_score(preds=predicts_list, targets=targets_list, language=language)
elif (metric == "Precision"):
- return metrics.precision(preds=predicts_list, targets=targets_list)
+ return metrics.precision(preds=predicts_list, targets=targets_list, language=language)
elif (metric == "Recall"):
- return metrics.recall(preds=predicts_list, targets=targets_list)
+ return metrics.recall(preds=predicts_list, targets=targets_list, language=language)
elif (metric == "F1 score"):
- return metrics.F1_score(preds=predicts_list, targets=targets_list)
+ return metrics.F1_score(preds=predicts_list, targets=targets_list, language=language)
else:
raise ValueError(f"Unexpected metric")
@@ -78,7 +78,7 @@ def switch(metric):
predicts_list = [answer["output"] for answer in answers_per_category[category]]
for metric in category_metrics:
- self.automatic_metric_stats[category].update(switch(metric=metric))
+ self.automatic_metric_stats[category].update(switch(metric=metric, language=self.language))
# gpt evaluation
for category in self.params:
@@ -106,35 +106,29 @@ def save(self, path: str, model_name_list: List[str]) -> None:
save_path = os.path.join(path, "gpt_evaluate", "battle_results")
gpt_evaluate.save_battle_results(self.battle_results, model_name_list[0], model_name_list[1], save_path)
else:
- # save evaluation results for automatic metrics
- automatic_df = pd.DataFrame(self.automatic_metric_stats)
+ # Save evaluation results for automatic metrics
+ automatic_base_save_path = os.path.join(path, "automatic_results")
+ automatic_results_save_path = os.path.join(automatic_base_save_path, "evaluation_results")
- automatic_results_save_path = os.path.join(path, "automatic_results")
- if not os.path.exists(automatic_results_save_path):
- os.makedirs(automatic_results_save_path)
- automatic_df.to_csv(os.path.join(automatic_results_save_path, f"{model_name_list[0]}.csv"), index=True)
+ save_automatic_results(model_name_list[0], self.automatic_metric_stats, automatic_results_save_path)
- # Save evaluation results for GPT-3.5 evaluation metrics.
- all_evaluations = []
- base_save_path = os.path.join(path, "gpt_evaluate", "gpt_evaluate_results")
- evaluation_results_save_path = os.path.join(base_save_path, "evaluation_results")
+ # Save charts and csv.
+ automatic_analyses_save_path = os.path.join(automatic_base_save_path, "evaluation_analyses")
+ analyze_automatic_results(automatic_results_save_path, automatic_analyses_save_path)
- for category, evaluations in self.gpt_evaluation_results.items():
- jdump(
- evaluations,
- os.path.join(evaluation_results_save_path, model_name_list[0],
- f"{category}_evaluation_results.json"))
- all_evaluations.extend(evaluations)
+ # Save evaluation results for GPT evaluation metrics.
+ gpt_base_save_path = os.path.join(path, "gpt_evaluate", "gpt_evaluate_results")
+ gpt_evaluation_results_save_path = os.path.join(gpt_base_save_path, "evaluation_results")
- jdump(all_evaluations,
- os.path.join(evaluation_results_save_path, f"{model_name_list[0]}_evaluation_results.json"))
+ all_evaluations = gpt_evaluate.save_gpt_evaluation_results(model_name_list[0], self.gpt_evaluation_results,
+ gpt_evaluation_results_save_path)
# Start to calculate scores and save statistics.
- evaluation_statistics_save_path = os.path.join(base_save_path, "evaluation_statistics")
+ gpt_evaluation_statistics_save_path = os.path.join(gpt_base_save_path, "evaluation_statistics")
gpt_evaluate.save_gpt_evaluation_statistics(model_name_list[0], all_evaluations,
- evaluation_statistics_save_path)
+ gpt_evaluation_statistics_save_path)
# Save charts and csv.
- evaluation_analyses_save_path = os.path.join(base_save_path, "evaluation_analyses")
- gpt_evaluate.analyze_gpt_evaluation_statistics(evaluation_statistics_save_path,
- evaluation_analyses_save_path)
+ gpt_evaluation_analyses_save_path = os.path.join(gpt_base_save_path, "evaluation_analyses")
+ gpt_evaluate.analyze_gpt_evaluation_statistics(gpt_evaluation_statistics_save_path,
+ gpt_evaluation_analyses_save_path)
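
The `save_automatic_results` and `analyze_automatic_results` helpers are added in `utils.py`, which is not part of this excerpt. A plausible minimal sketch of the saving half, mirroring the inline code it replaces (names and details here are assumptions):

```python
import os

import pandas as pd


def save_automatic_results(model_name: str, automatic_metric_stats: dict, save_path: str) -> None:
    # Sketch: persist one model's automatic metric scores as a CSV, creating the directory if needed.
    if not os.path.exists(save_path):
        os.makedirs(save_path)
    automatic_df = pd.DataFrame(automatic_metric_stats)
    automatic_df.to_csv(os.path.join(save_path, f"{model_name}.csv"), index=True)
```

`analyze_automatic_results` would then read the per-model CSVs from that directory and, analogously to `analyze_gpt_evaluation_statistics`, write comparison charts and CSVs into the analyses directory.
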
diff --git a/applications/Chat/evaluate/gpt_evaluate.py b/applications/Chat/evaluate/gpt_evaluate.py
index 61ce3456c49f..b433500dfa04 100644
--- a/applications/Chat/evaluate/gpt_evaluate.py
+++ b/applications/Chat/evaluate/gpt_evaluate.py
@@ -461,6 +461,27 @@ def calculate_scores_form_response(response: str, evaluation: Dict[str, Any]) ->
return 0
+def save_gpt_evaluation_results(model_name: str, gpt_evaluation_results: Dict[str, Any],
+ save_path: str) -> List[Dict]:
+ """
+ Save evaluation results for different categories for one model.
+
+ Args:
+ model_name: name of the model for saving evaluation results.
+ gpt_evaluation_results: evaluation results for all of the model answers.
+ save_path: path to save GPT evaluation results.
+
+ Returns:
+ All of the evaluation results, concatenated across categories.
+ """
+
+ all_evaluations = []
+ for category, evaluations in gpt_evaluation_results.items():
+ jdump(evaluations, os.path.join(save_path, model_name, f"{category}_evaluation_results.json"))
+ all_evaluations.extend(evaluations)
+
+ jdump(all_evaluations, os.path.join(save_path, f"{model_name}_evaluation_results.json"))
+
+ return all_evaluations
+
+
def save_gpt_evaluation_statistics(model_name: str, evaluations: List[Dict], save_path: str) -> None:
"""
Generate statistics for one model.
@@ -468,7 +489,7 @@ def save_gpt_evaluation_statistics(model_name: str, evaluations: List[Dict], sav
Args:
model_name: name of the model for saving statistics.
evaluations: evaluations for all of the model answers.
- save_path: path to save GPT-3.5 evaluation statistics.
+ save_path: path to save GPT evaluation statistics.
"""
if not os.path.exists(save_path):
@@ -516,7 +537,7 @@ def save_gpt_evaluation_statistics(model_name: str, evaluations: List[Dict], sav
def analyze_gpt_evaluation_statistics(statistics_path: str, save_path: str) -> None:
"""
- Analyze and visualize all GPT-3.5 evaluation statistics in the given directory.
+ Analyze and visualize all GPT evaluation statistics in the given directory.
Args:
statistics_path: path to all the models' statistics.
@@ -594,3 +615,5 @@ def analyze_gpt_evaluation_statistics(statistics_path: str, save_path: str) -> N
figure = fig.get_figure()
figure.savefig(os.path.join(save_path, f"{category}.png"), dpi=400)
+
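+ # Close the current figure after saving so figures do not accumulate across categories.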
+ plt.close()
diff --git a/applications/Chat/evaluate/metrics.py b/applications/Chat/evaluate/metrics.py
index 5e657234c61a..031f6fa83926 100644
--- a/applications/Chat/evaluate/metrics.py
+++ b/applications/Chat/evaluate/metrics.py
@@ -1,13 +1,16 @@
import statistics
+from typing import Dict, List
import jieba
from bert_score import score
from nltk.translate.bleu_score import sentence_bleu
from rouge_chinese import Rouge as Rouge_cn
+from rouge_score import rouge_scorer as Rouge_en
from sklearn.metrics import f1_score, precision_score, recall_score
+from utils import preprocessing_text, remove_redundant_space
-def bleu_score(preds: list, targets: list) -> dict:
+def bleu_score(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Calculate BLEU Score Metric
The calculation includes BLEU-1 for unigram, BLEU-2 for bigram,
@@ -21,8 +24,12 @@ def bleu_score(preds: list, targets: list) -> dict:
(1. / 4., 1. / 4., 1. / 4., 1. / 4.)]
for pred, target in zip(preds, targets):
- pred_list = (' '.join(jieba.cut(pred))).split()
- target_list = [(' '.join(jieba.cut(target))).split()]
+ if language == "cn":
+ pred_list = ' '.join(jieba.cut(preprocessing_text(pred))).split()
+ target_list = [(' '.join(jieba.cut(preprocessing_text(target)))).split()]
+ elif language == "en":
+ pred_list = preprocessing_text(pred).split()
+ target_list = [preprocessing_text(target).split()]
bleu = sentence_bleu(target_list, pred_list, weights=weights)
cumulative_bleu = [a + b for a, b in zip(cumulative_bleu, bleu)]
@@ -33,7 +40,7 @@ def bleu_score(preds: list, targets: list) -> dict:
return bleu_scores
-def rouge_cn_score(preds: list, targets: list) -> dict:
+def rouge_cn_score(preds: List[str], targets: List[str]) -> Dict[str, float]:
"""Calculate Chinese ROUGE Score Metric
The calculation includes ROUGE-1 for unigram, ROUGE-2 for bigram
@@ -41,13 +48,13 @@ def rouge_cn_score(preds: list, targets: list) -> dict:
the preds and targets. ROUGE-L measures the number of matching
longest common subsequence (LCS) between preds and targets.
"""
- rouge_scores = {"rouge1": {}, "rouge2": {}, "rougeL": {}}
+ rouge_scores = {"rouge1": 0, "rouge2": 0, "rougeL": 0}
all_preds = []
all_targets = []
for pred, target in zip(preds, targets):
- pred_list = ' '.join(jieba.cut(pred))
- target_list = ' '.join(jieba.cut(target))
+ pred_list = remove_redundant_space(' '.join(jieba.cut(preprocessing_text(pred))))
+ target_list = remove_redundant_space(' '.join(jieba.cut(preprocessing_text(target))))
all_preds.append(pred_list)
all_targets.append(target_list)
@@ -61,7 +68,42 @@ def rouge_cn_score(preds: list, targets: list) -> dict:
return rouge_scores
-def distinct_score(preds: list) -> dict:
+def rouge_en_score(preds: List[str], targets: List[str]) -> Dict[str, float]:
+ """Calculate English ROUGE Score Metric
+
+ The calculation includes ROUGE-1 for unigram, ROUGE-2 for bigram
+ and ROUGE-L. ROUGE-N evaluates the number of matching n-grams between
+ the preds and targets. ROUGE-L measures the number of matching
+ longest common subsequence (LCS) between preds and targets.
+ """
+ rouge_scores = {"rouge1": 0, "rouge2": 0, "rougeL": 0}
+
+ rouge_en = Rouge_en.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)
+
+ for pred, target in zip(preds, targets):
+ # RougeScorer.score expects (target, prediction); the F-measure is symmetric in the two arguments.
+ result = rouge_en.score(preprocessing_text(target), preprocessing_text(pred))
+ rouge_scores["rouge1"] += result['rouge1'].fmeasure
+ rouge_scores["rouge2"] += result['rouge2'].fmeasure
+ rouge_scores["rougeL"] += result['rougeL'].fmeasure
+
+ rouge_scores["rouge1"] = rouge_scores["rouge1"] / len(preds)
+ rouge_scores["rouge2"] = rouge_scores["rouge2"] / len(preds)
+ rouge_scores["rougeL"] = rouge_scores["rougeL"] / len(preds)
+
+ return rouge_scores
+
+
+def rouge_score(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
+ """Calculate ROUGE Score Metric"""
+ if language == "cn":
+ return rouge_cn_score(preds, targets)
+ elif language == "en":
+ return rouge_en_score(preds, targets)
+
+
+def distinct_score(preds: List[str], language: str) -> Dict[str, float]:
"""Calculate Distinct Score Metric
This metric refers to https://arxiv.org/abs/1510.03055.
@@ -72,19 +114,36 @@ def distinct_score(preds: list) -> dict:
cumulative_distinct = []
for pred in preds:
- pred_seg_list = list(' '.join(jieba.cut(pred)))
- count_segs = len(pred_seg_list)
- unique_segs = set(pred_seg_list)
- count_unique_chars = len(unique_segs)
-
- cumulative_distinct.append(count_unique_chars / count_segs)
+ if language == "cn":
+ pred_seg_list = ' '.join(jieba.cut(pred)).split()
+ count_segs = len(pred_seg_list)
+ unique_segs = set(pred_seg_list)
+ count_unique_chars = len(unique_segs)
+
+ cumulative_distinct.append(count_unique_chars / count_segs)
+ elif language == "en":
+ # calculate distinct 1-gram, 2-gram, 3-gram
+ unique_ngram = [set() for _ in range(0, 3)]
+ all_ngram_count = [0 for _ in range(0, 3)]
+
+ split_pred = preprocessing_text(pred).split()
+ for n in range(0, 3):
+ for i in range(0, len(split_pred) - n):
+ ngram = ' '.join(split_pred[i:i + n + 1])
+ unique_ngram[n].add(ngram)
+ all_ngram_count[n] += 1
+
+ # Sometimes the answer may contain only one word. For 2-gram and 3-gram, the gram count (denominator) may be zero.
+ avg_distinct = [len(a) / (b + 1e-6) for a, b in zip(unique_ngram, all_ngram_count)]
+
+ cumulative_distinct.append(statistics.mean(avg_distinct))
distinct_score["distinct"] = statistics.mean(cumulative_distinct)
return distinct_score
-def bert_score(preds: list, targets: list) -> dict:
+def bert_score(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Calculate BERTScore Metric
The BERTScore evaluates the semantic similarity between
@@ -95,23 +154,25 @@ def bert_score(preds: list, targets: list) -> dict:
target_list = []
for pred, target in zip(preds, targets):
- pred_list.append(' '.join(jieba.cut(pred)))
- target_list.append(' '.join(jieba.cut(target)))
+ pred_list.append(pred)
+ target_list.append(target)
- _, _, F = score(pred_list, target_list, lang="zh", verbose=True)
+ if language == "cn":
+ _, _, F = score(pred_list, target_list, lang="zh", verbose=True)
+ elif language == "en":
+ _, _, F = score(pred_list, target_list, lang="en", verbose=True)
bert_score["bert_score"] = F.mean().item()
return bert_score
-def calculate_precision_recall_f1(preds: list, targets: list) -> dict:
+def calculate_precision_recall_f1(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Precision, Recall and F1-Score Calculation
The calculation of precision, recall and f1-score is realized by counting
the number of overlaps between the preds and targets. The comparison length
- limited by the shorter one of preds and targets. This design is mainly
- considered for classification and extraction categories.
+ is limited by the shorter one of preds and targets.
"""
precision_recall_f1 = {"precision": 0, "recall": 0, "f1_score": 0}
precision_scores = []
@@ -119,8 +180,12 @@ def calculate_precision_recall_f1(preds: list, targets: list) -> dict:
f1_scores = []
for pred, target in zip(preds, targets):
- pred_list = [char for char in pred]
- target_list = [char for char in target]
+ if language == "cn":
+ pred_list = [char for char in ' '.join(jieba.cut(preprocessing_text(pred))).split()]
+ target_list = [char for char in ' '.join(jieba.cut(preprocessing_text(target))).split()]
+ elif language == "en":
+ pred_list = [char for char in preprocessing_text(pred).split()]
+ target_list = [char for char in preprocessing_text(target).split()]
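+ # Position-wise comparison: tokens are matched index by index, truncated to the length of the shorter sequence.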
target_labels = [1] * min(len(target_list), len(pred_list))
pred_labels = [int(pred_list[i] == target_list[i]) for i in range(0, min(len(target_list), len(pred_list)))]
@@ -136,34 +201,31 @@ def calculate_precision_recall_f1(preds: list, targets: list) -> dict:
return precision_recall_f1
-def precision(preds: list, targets: list) -> dict:
+def precision(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Calculate Precision Metric
- (design for classification and extraction categories)
Calculating precision by counting the number of overlaps between the preds and target.
"""
precision = {"precision": 0}
- precision["precision"] = calculate_precision_recall_f1(preds, targets)["precision"]
+ precision["precision"] = calculate_precision_recall_f1(preds, targets, language)["precision"]
return precision
-def recall(preds: list, targets: list) -> dict:
+def recall(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Calculate Recall Metric
- (design for classification and extraction categories)
Calculating recall by counting the number of overlaps between the preds and target.
"""
recall = {"recall": 0}
- recall["recall"] = calculate_precision_recall_f1(preds, targets)["recall"]
+ recall["recall"] = calculate_precision_recall_f1(preds, targets, language)["recall"]
return recall
-def F1_score(preds: list, targets: list) -> dict:
+def F1_score(preds: List[str], targets: List[str], language: str) -> Dict[str, float]:
"""Calculate F1-score Metric
- (design for classification and extraction categories)
Calculating f1-score by counting the number of overlaps between the preds and target.
"""
f1 = {"f1_score": 0}
- f1["f1_score"] = calculate_precision_recall_f1(preds, targets)["f1_score"]
+ f1["f1_score"] = calculate_precision_recall_f1(preds, targets, language)["f1_score"]
return f1
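
A short usage sketch of the language-aware helpers above (run from `applications/Chat/evaluate`; the data is illustrative):

```python
from metrics import bleu_score, distinct_score, rouge_score

preds = ["Edvard Munch painted The Scream in 1893."]
targets = ["The Scream was painted by Edvard Munch."]

print(bleu_score(preds, targets, language="en"))     # cumulative BLEU-1 to BLEU-4 scores
print(rouge_score(preds, targets, language="en"))    # {'rouge1': ..., 'rouge2': ..., 'rougeL': ...}
print(distinct_score(preds, language="en"))          # {'distinct': ...}
```
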
diff --git a/applications/Chat/evaluate/prompt/battle_prompt/battle_prompt_en.json b/applications/Chat/evaluate/prompt/battle_prompt/battle_prompt_en.json
new file mode 100644
index 000000000000..2b35d1958ac5
--- /dev/null
+++ b/applications/Chat/evaluate/prompt/battle_prompt/battle_prompt_en.json
@@ -0,0 +1,6 @@
+{
+ "id": 1,
+ "system_prompt": "You are a helpful and precise assistant for checking the quality of the answer. You will be given two different answers to the same question",
+ "prompt_template": "[Question]\n{question}\n\n[The Start of AI Assistant 1's Answer]\n{answer_1}\n\n[The End of AI Assistant 1's Answer]\n\n[The Start of AI Assistant 2's Answer]\n{answer_2}\n\n[The End of AI Assistant 2's Answer]\n\n[Requirements]\n{prompt}\n\n",
+ "prompt": "We would like to request your feedback on the performance of two AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment."
+}
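
To illustrate, a minimal sketch of how this battle prompt is assembled into a chat request (field names as in the file above; the actual plumbing lives in `gpt_evaluate.py`):

```python
import json

with open("prompt/battle_prompt/battle_prompt_en.json") as f:
    battle_prompt = json.load(f)

# `system_prompt` goes into the system role; the filled template goes into the user role.
user_message = battle_prompt["prompt_template"].format(
    question="What are some ways to increase productivity while working from home?",
    answer_1="<answer from model 1>",
    answer_2="<answer from model 2>",
    prompt=battle_prompt["prompt"],
)
```
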
diff --git a/applications/Chat/evaluate/prompt/evaluation_prompt/evaluation_prompt_en.json b/applications/Chat/evaluate/prompt/evaluation_prompt/evaluation_prompt_en.json
new file mode 100644
index 000000000000..0b2053746af2
--- /dev/null
+++ b/applications/Chat/evaluate/prompt/evaluation_prompt/evaluation_prompt_en.json
@@ -0,0 +1,179 @@
+{
+ "brainstorming": {
+ "id": 1,
+ "category": "brainstorming",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "creativity": "Creativity (1-5): Some brainstorming questions may require answers that are creative and suggest new ideas.",
+ "practicality": "Practicality (1-5): Some brainstorming questions may require answers to suggest practical suggestions or solutions.",
+ "correctness": "Correctness (1-5): The answer should be in line with common sense, life experience, etc."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "creativity": "1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions.\n2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the creativity score may be affected.\n3. Consider whether the answer contains novel ideas or unique thoughts. An answer may overlap with a known solution and still be considered creative, as long as it offers a new perspective or approach to the problem.\n4. Give a score of 1 to 5 depending on the creativity of the answer. If the answer lacks creativity, a lower score should be given. If the answer is creative and provides a new idea, a higher score should be given.\n\nCreativity:",
+ "practicality": "1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions.\n2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the practicality score may be affected.\n3. Consider whether the suggestions or solutions presented in the answer are practical and workable. The answer may look good, but if it cannot be implemented or applied, the practicality score may be affected.\n4. Give a score of 1 to 5 depending on the practicality of the answer. If the answer lacks practicality, a lower score should be given. If the answer makes a practical suggestion or solution and solves the problem well, a higher score should be given.\n\nPracticality:",
+ "correctness": "1. Read the provided brainstorming questions carefully to make sure you understand the gist and context of the questions.\n2. Based on your knowledge and experience, determine if the answers provided are feasible. If the answer is not feasible, the correctness score may be affected.\n3. Consider whether the information provided in the answer is correct, consistent with common sense, real life, etc. If there are obvious errors or implausibilities in the answer, the correctness score may be affected.\n4. Give a score of 1 to 5 depending on the correctness of the answer. If the answer contains obvious errors or unreasonable points, a lower score should be given. A higher score should be given if the answer is correct, consistent with common sense, real life, etc.\n\nCorrectness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"brainstorming\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "chat": {
+ "id": 2,
+ "category": "chat",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "naturalness": "Naturalness (1-5): whether the answer is natural and fits the identity given by the question.",
+ "engagingness": "Engagingness (1-5): whether the answer responds appropriately to the content of the preceding conversation and whether it understands the context and background of the conversation.",
+            "reasonableness": "Reasonableness (1-5): whether the answer can form a logical connection with the content of the previous dialogue, whether it is consistent with common sense, and whether it can reasonably exist in this context."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "naturalness": "1. Read the question and determine the identity information provided in the question.\n2. Check whether the content of the answer matches the identity given in the question.\n3. Based on the above factors, score the naturalness of the response on a scale from 1 to 5, where 1 means unnatural and 5 means very natural and in accordance with the identity given in the question.\n\nNaturalness:",
+ "engagingness": "1. Read the questions to determine the context and background of the dialogue.\n2. Check that the answer fully understands the context and background of the conversation and that it fits naturally into the conversation without seeming abrupt.\n3. Based on the above factors, rate the response's engagement on a scale from 1 to 5, where 1 means not engaged and 5 means very engaged and appropriately understands the context and background of the conversation.\n\nEngagingness:",
+ "reasonableness": "1. Read the question and determine the topic of the conversation and the direction the question expects the answer to go.\n2. Determine whether the answer can be logically connected to the preceding conversation, whether it makes common sense, and whether it can reasonably exist in this context.\n3. Based on the above factors, rate the reasonableness of the answer on a scale from 1 to 5, where 1 means unreasonable and 5 means very reasonable and able to form a logical connection with the preceding dialogue content and consistent with common sense.\n\nReasonableness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"chat\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "classification": {
+ "id": 3,
+ "category": "classification",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): whether the answer is correct or not."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "correctness": "1. Read the question carefully and try to answer the question yourself.\n2. Check the correctness of the answer. You can use known facts or research to verify that the answer is correct. If the answer is correct, you can give a score of 5 for correctness. If the answer is partially correct, an appropriate score, such as 2, 3, or 4, may be given. If the answer is completely incorrect, only 1 point is awarded.\n\nCorrectness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"classification\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "closed_qa": {
+ "id": 4,
+ "category": "closed_qa",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): whether the answer is correct or not."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "correctness": "1. Read the question carefully and try to answer the question by yourself.\n2. Check the correctness of the answer. You can use known facts or research to verify that the answer is correct. If the answer is correct, you can give a score of 5 for correctness. If the answer is partially correct, an appropriate score, such as 2, 3, or 4, may be assigned. If the answer is completely incorrect, only 1 point is awarded.\n\nCorrectness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"closed qa\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "extraction": {
+ "id": 5,
+ "category": "extraction",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+            "correctness": "Correctness (1-5): answers should extract the required information accurately and should not contain any incorrect or misleading information."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+            "correctness": "1. Read the questions carefully and identify the information that needs to be extracted from the material.\n2. Read the answer carefully and make sure it covers all the information that needs to be extracted.\n3. Use the material provided to verify the correctness of the response. If the response is inaccurate or contains incorrect or misleading information, a high score cannot be given.\n4. Check that the answer contains all the information required to be extracted and does not leave out any important details.\n5. Give a score between 1 and 5 based on the correctness and completeness of the response, with a score of 5 indicating a very accurate and complete response and a score of 1 indicating that the response barely extracts the required information.\n\nCorrectness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"extraction\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "generation": {
+ "id": 6,
+ "category": "generation",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+            "diversity": "Diversity (1-5): whether the answers use beautiful language and have some creativity and imagination. However, answers should also be kept reasonable and moderate, not overly exaggerated or off-topic."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+            "diversity": "1. Read the entire response carefully to ensure that you fully understand the content and theme expressed in the response.\n2. While reading the response, pay attention to the quality of the language, such as whether the wording is correct and the language is vivid.\n3. Check the creativity and imagination of the response to see if the response is engaging to read.\n4. Check the reasonableness and appropriateness of the response to see if it is exaggerated or off-topic.\n5. Rate the diversity on a scale of 1 to 5, with a 5 indicating a good-quality response that is engaging to read and a 1 indicating a bland response or one that is off-topic.\n\nDiversity:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"generation\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "open_qa": {
+ "id": 7,
+ "category": "open_qa",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): whether the answer is correct or not."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "correctness": "1. Read the question carefully and try to answer the question yourself.\n2. Check the correctness of the answer. You can use known facts or research to verify that the answer is correct. If the answer is correct, you can give a score of 5 for correctness. If the answer is partially correct, an appropriate score, such as 2, 3, or 4, may be given. If the answer is completely incorrect, only 1 point is awarded.\n\nCorrectness:"
+ },
+        "prompt": "You are a good assistant. Please rate the given answer to the \"open qa\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "rewriting": {
+ "id": 8,
+ "category": "rewriting",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): whether the answer is correct or not."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "correctness": "1. Read the question carefully and try to answer the question yourself.\n2. Check the correctness of the answer. You can use known facts or research to verify that the answer is correct. If the answer is correct, you can give a score of 5 for correctness. If the answer is partially correct, an appropriate score, such as 2, 3, or 4, may be assigned. If the answer is completely incorrect, only 1 point is awarded.\n\nCorrectness:"
+ },
+        "prompt": "You are a good assistant. Please rate the given answer to the \"rewriting\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "roleplay": {
+ "id": 9,
+ "category": "roleplay",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "fidelity": "Fidelity (1-5): whether the answer is able to answer the given request in strict compliance with the role setting.",
+ "creativity": "Creativity (1-5): The answers to the role-play questions need to be somewhat creative, but at the same time they need to adhere to the setting of the role."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "fidelity": "1. Read the question carefully to understand how the character is set up and represented in the question, including aspects such as occupation, background, point of view, and personality.\n2. Read the question's request and confirm the details that need to be taken into account when answering the request.\n3. Compare the provided answer with the setting of the role and assess whether the answer can strictly adhere to the setting of the role.\n4. Combine the results of the above assessment to give a fidelity score ranging from 1 to 5, where a score of 1 means that the response does not match the persona at all, and a score of 5 means that the response fully complies with the persona and satisfies the given request.\n\nFidelity:",
+ "creativity": "1. Read the question carefully to understand how the character is set up and represented in the question, including career, background, perspective, and personality.\n2. Evaluate whether the answer has unique ideas and suggestions that bring new ideas and insights to the questioner.\n3. Compare the creativity in the response to the setting of the persona and assess whether the response adheres to the setting and essential characteristics of the persona.\n4. Evaluate the quality of the responses in general and combine the results of the above assessment to give a creativity score ranging from 1 to 5, where a score of 1 indicates that the response lacks creativity and a score of 5 indicates that the response has unique ideas and suggestions and is able to adhere to the set-up of the persona.\n\nCreativity:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"role-play\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "summarization": {
+ "id": 10,
+ "category": "summarization",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): answers should summarize the main points of the material accurately and unambiguously.",
+ "conciseness": "Conciseness (1-5): answers should be concise and without redundant content."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+            "correctness": "1. Read the material given in the question carefully to understand its content and main points.\n2. Assess whether the answer accurately summarizes the key points of the source material.\n3. Assess whether the response contains all the key information in the source material.\n4. Based on the above steps, give a score of 1-5, where 1 means that the response does not accurately summarize the main points of the material and 5 means that the response summarizes the main points of the material completely and accurately.\n\nCorrectness:",
+            "conciseness": "1. Read the question and extract the main points of the material.\n2. Read the summary and note the main ideas and messages in it.\n3. Assess the length of the summary. A concise summary should usually convey key information within a few sentences or paragraphs, rather than lengthy paragraphs or essays.\n4. Check that the summary does not contain information that is not relevant to the main ideas or that is redundant.\n5. Make sure that the summary covers the key information in the material and that no important details have been omitted.\n6. Rate the summary on a scale of 1-5, where 5 means the summary is concise and free of redundancy, and 1 means the summary is lengthy or contains unnecessary information that is difficult to understand or remember. Based on your judgment, assign the appropriate score.\n\nConciseness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the \"summarization\" question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ },
+ "general": {
+ "id": 11,
+ "category": "general",
+ "metrics": {
+ "language organization": "Language organization (1-5): whether the answer language is fluent and coherent, uses correct grammar, has a certain logic, uses appropriate connecting words, transition words, etc.",
+ "relevance": "Relevance (1-5): whether the content of the answer is relevant to the topic, does not answer the wrong question, and strictly follows the requirements of the topic.",
+ "correctness": "Correctness (1-5): whether the answer is correct or not."
+ },
+ "CoT": {
+ "language organization": "1. Read the answers and check for grammatical errors, poor word choice, or other significant mistakes.\n2. Check that the answer is logical, conveys the information in a logical order, and is self-explanatory.\n3. Determine if the answer is relevant to the question or topic and conveys a clear message.\n4. Check that the answer is coherent and that appropriate transitions and switches are used to maintain coherence between sentences and paragraphs.\n5. Check that the answer is clearly structured and organized in such a way that the reader can easily understand the hierarchy and structure of the information.\n6. Evaluate the linguistic organization of the answer based on a combination of the above factors and give a score of 1 to 5, where 5 indicates very good linguistic organization and 1 indicates very poor linguistic organization.\n\nLanguage organization:",
+ "relevance": "1. Read the question to determine what the question asks and what aspects of the question need to be answered.\n2. Read the answers to make sure that they directly answer the question asked.\n3. Check that the answer follows the requirements of the question, including the way it is answered, the length of the answer, the format of the answer, etc.\n4. Evaluate how relevant the answer is based on the above factors and give a score of 1 to 5, where 5 means the answer is very relevant and 1 means the answer is not relevant at all.\n\nRelevance:",
+ "correctness": "1. Read the question carefully and try to answer the question yourself.\n2. Check the correctness of the answer. You can use known facts or research to verify that the answer is correct. If the answer is correct, you can give a score of 5 for correctness. If the answer is partially correct, an appropriate score, such as 2, 3, or 4, may be assigned. If the answer is completely incorrect, only 1 point is awarded.\n\nCorrectness:"
+ },
+ "prompt": "You are a good assistant. Please rate the given answer to the question below.\n\nThe question is as follows:\n\n{question}\n\nThe answer is as follows:\n\n{answer}\n\nThe metric for evaluation is as follows:\n\n{metric}\n\nYou should follow the following evaluation steps:\n\n{steps}"
+ }
+}
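For reference, here is a minimal sketch of how one category entry from the config above can be expanded into a full GPT-evaluation prompt. The `build_prompt` helper and the `prompt_en.json` file name are hypothetical; the actual pipeline may assemble the `{metric}` and `{steps}` placeholders differently.

```python
import json


def build_prompt(config_file: str, category: str, metric: str, question: str, answer: str) -> str:
    """Fill one category's prompt template for a single metric (illustrative sketch)."""
    with open(config_file, encoding="utf-8") as f:
        config = json.load(f)
    entry = config[category]
    return entry["prompt"].format(
        question=question,
        answer=answer,
        metric=entry["metrics"][metric],    # e.g. "Correctness (1-5): ..."
        steps=entry["CoT"][metric],         # the step-by-step evaluation instructions
    )


# Hypothetical usage:
# print(build_prompt("prompt_en.json", "open_qa", "correctness",
#                    "What is the capital of France?", "Paris."))
```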
diff --git a/applications/Chat/evaluate/requirements.txt b/applications/Chat/evaluate/requirements.txt
index b0301c2f17f8..27d317ed88cc 100644
--- a/applications/Chat/evaluate/requirements.txt
+++ b/applications/Chat/evaluate/requirements.txt
@@ -8,3 +8,5 @@ seaborn
pandas
matplotlib
numpy
+zhon
+rouge_score
diff --git a/applications/Chat/evaluate/utils.py b/applications/Chat/evaluate/utils.py
index 517c0a1c351e..1f4069386fcd 100644
--- a/applications/Chat/evaluate/utils.py
+++ b/applications/Chat/evaluate/utils.py
@@ -1,6 +1,15 @@
import io
import json
import os
+import re
+import string
+from typing import Dict
+
+import matplotlib.pyplot as plt
+import pandas as pd
+import seaborn as sns
+import tqdm
+from zhon import hanzi
def _make_w_io_base(f, mode: str):
@@ -29,7 +38,7 @@ def jdump(obj, f, mode="w", indent=4, default=str):
"""
f = _make_w_io_base(f, mode)
if isinstance(obj, (dict, list)):
- json.dump(obj, f, indent=indent, default=default)
+ json.dump(obj, f, indent=indent, default=default, ensure_ascii=False)
elif isinstance(obj, str):
f.write(obj)
else:
@@ -61,3 +70,149 @@ def get_data_per_category(data, categories):
data_per_category[category].append(item)
return data_per_category
+
+
+def remove_articles(text: str) -> str:
+ """
+    Remove the articles "a", "an" and "the" from the given text.
+    It is used in the evaluation of automatic metrics.
+
+ """
+
+ pattern = re.compile(r"\b(a|an|the)\b", re.UNICODE)
+ return re.sub(pattern, " ", text)
+
+
+def remove_punctuations(text: str) -> str:
+ """
+    Remove punctuation from the given text.
+    It is used in the evaluation of automatic metrics.
+
+ """
+
+ punctuation = string.punctuation + hanzi.punctuation
+ punctuation = set([char for char in punctuation])
+ punctuation.difference_update(set("!@#$%&()<>?|,.\"'"))
+
+ out = []
+ for char in text:
+ if char in punctuation:
+ continue
+ else:
+ out.append(char)
+
+ return "".join(out)
+
+
+def remove_redundant_space(text: str) -> str:
+ """
+    Remove redundant spaces from the given text.
+    It is used in the evaluation of automatic metrics.
+
+ """
+
+ return " ".join(text.split())
+
+
+def preprocessing_text(text: str) -> str:
+ """
+ Preprocess the given text.
+    It is used in the evaluation of automatic metrics.
+
+ """
+
+ return remove_redundant_space(remove_articles(remove_punctuations(text.lower())))
+
+
+def save_automatic_results(model_name: str, automatic_metric_stats: Dict[str, Dict], save_path: str) -> None:
+ """
+ Save automatic evaluation results of different categories for one model.
+
+ """
+
+ if not os.path.exists(save_path):
+ os.makedirs(save_path)
+
+ automatic_df = pd.DataFrame(automatic_metric_stats)
+ automatic_df.to_csv(os.path.join(save_path, f"{model_name}_results.csv"), index=True)
+
+
+def read_automatic_results(results_path: str, file_name: str) -> Dict[str, Dict]:
+ """
+ Read a csv file and return a dictionary which stores scores per metric.
+
+ """
+
+ results = pd.read_csv(os.path.join(results_path, file_name), index_col=0)
+
+ results_dict = {metric: {} for metric in list(results.index)}
+ for i, metric in enumerate(results_dict.keys()):
+ for j, category in enumerate(list(results.columns)):
+            if pd.isnull(results.iloc[i, j]):
+                continue
+            results_dict[metric][category] = results.iloc[i, j]
+
+ return results_dict
+
+
+def analyze_automatic_results(results_path: str, save_path: str) -> None:
+ """
+ Analyze and visualize all csv files in the given folder.
+
+ """
+
+ if not os.path.exists(results_path):
+ raise Exception(f'The given directory "{results_path}" doesn\'t exist! No results found!')
+
+ all_statistics = {}
+
+ for file_name in os.listdir(results_path):
+ if file_name.endswith("_results.csv"):
+ model_name = file_name.split("_results.csv")[0]
+ all_statistics[model_name] = read_automatic_results(results_path, file_name)
+
+    if len(all_statistics) == 0:
+ raise Exception(f'There are no csv files in the given directory "{results_path}"!')
+
+ frame_all = {"model": [], "category": [], "metric": [], "score": []}
+ frame_per_metric = {}
+ for model_name, model_statistics in all_statistics.items():
+ for metric, metric_statistics in model_statistics.items():
+ if frame_per_metric.get(metric) is None:
+ frame_per_metric[metric] = {"model": [], "category": [], "score": []}
+
+ for category, category_score in metric_statistics.items():
+ frame_all["model"].append(model_name)
+ frame_all["category"].append(category)
+ frame_all["metric"].append(metric)
+ frame_all["score"].append(category_score)
+
+ frame_per_metric[metric]["model"].append(model_name)
+ frame_per_metric[metric]["category"].append(category)
+ frame_per_metric[metric]["score"].append(category_score)
+
+ if not os.path.exists(save_path):
+ os.makedirs(save_path)
+
+ frame_all = pd.DataFrame(frame_all)
+ frame_all.to_csv(os.path.join(save_path, "automatic_evaluation_statistics.csv"))
+
+ for metric in tqdm.tqdm(
+ frame_per_metric.keys(),
+            desc="metric: ",
+            total=len(frame_per_metric),
+ ):
+ data = pd.DataFrame(frame_per_metric[metric])
+
+ sns.set()
+        plt.figure(figsize=(16, 10))
+
+        # sns.barplot returns an Axes; name it accordingly instead of shadowing the figure
+        ax = sns.barplot(x="category", y="score", hue="model", data=data, dodge=True)
+        ax.set_title(f"Comparison between Different Models for Metric {metric.title()}")
+        plt.xlabel("Evaluation Category")
+        plt.ylabel("Score")
+
+        ax.get_figure().savefig(os.path.join(save_path, f"{metric}.png"), dpi=400)
+
+ plt.close()
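As a quick illustration of what the new preprocessing helpers do (a sketch; importing them as `utils` assumes you run from applications/Chat/evaluate):

```python
from utils import preprocessing_text, remove_punctuations

text = "The quick (brown) fox -- jumps over a lazy dog!"
# Dashes are stripped, while whitelisted characters such as !@#$%&()<>?|,."' survive.
print(remove_punctuations(text))    # The quick (brown) fox  jumps over a lazy dog!
# Lowercasing + punctuation removal + article removal + space collapsing.
print(preprocessing_text(text))     # quick (brown) fox jumps over lazy dog!
```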
diff --git a/colossalai/amp/naive_amp/mixed_precision_mixin/__init__.py b/colossalai/amp/naive_amp/mixed_precision_mixin/__init__.py
new file mode 100644
index 000000000000..b0348e1477bb
--- /dev/null
+++ b/colossalai/amp/naive_amp/mixed_precision_mixin/__init__.py
@@ -0,0 +1,9 @@
+from .base import MixedPrecisionMixin
+from .bf16 import BF16MixedPrecisionMixin
+from .fp16 import FP16MixedPrecisionMixin
+
+__all__ = [
+ 'MixedPrecisionMixin',
+ 'FP16MixedPrecisionMixin',
+ 'BF16MixedPrecisionMixin',
+]
diff --git a/colossalai/amp/naive_amp/mixed_precision_mixin/base.py b/colossalai/amp/naive_amp/mixed_precision_mixin/base.py
new file mode 100644
index 000000000000..a52a9747ad1e
--- /dev/null
+++ b/colossalai/amp/naive_amp/mixed_precision_mixin/base.py
@@ -0,0 +1,91 @@
+from abc import ABC, abstractmethod
+
+import torch
+from torch import Tensor
+
+
+class MixedPrecisionMixin(ABC):
+ """A helper class for mixed precision training. This mixin is used in mixed precision optimizers.
+
+ Attributes:
+        dtype (torch.dtype): The expected dtype of the gradients.
+
+ Examples:
+ ```python
+ class MyMixedPrecisionOptimizer(OptimizerWrapper):
+ def __init__(self, optim: Optimizer):
+ super().__init__(optim)
+                self.mixed_precision = BF16MixedPrecisionMixin()    # use a concrete subclass; the base class is abstract
+
+ def backward(self, loss):
+ loss = self.mixed_precision.pre_backward(loss)
+ loss.backward()
+
+ def backward_by_grad(self, tensor, grad):
+ grad = self.mixed_precision.pre_backward_by_grad(tensor, grad)
+ tensor.backward(grad)
+
+ def step(self):
+ if self.mixed_precision.should_skip_step():
+ self.zero_grad()
+ return
+ div_scale = self.mixed_precision.get_grad_div_scale()
+ # maybe clip grad here
+ # maybe scale grad here
+ self.optim.step()
+
+ def zero_grad(self):
+ self.mixed_precision.pre_zero_grad()
+ return self.optim.zero_grad()
+ ```
+ """
+ dtype: torch.dtype
+
+ @abstractmethod
+ def pre_backward(self, loss: Tensor) -> Tensor:
+ """Called before backward.
+
+ Args:
+ loss (Tensor): Loss value.
+
+ Returns:
+ Tensor: Loss value (possibly scaled).
+ """
+ pass
+
+ @abstractmethod
+ def pre_backward_by_grad(self, tensor: Tensor, grad: Tensor) -> Tensor:
+ """Called before backward by grad. This is helpful for pipeline parallelism.
+
+ Args:
+ tensor (Tensor): Tensor to backward.
+ grad (Tensor): Gradient of the tensor.
+
+ Returns:
+ Tensor: Gradient of the tensor (possibly scaled).
+ """
+ pass
+
+ @abstractmethod
+ def should_skip_step(self) -> bool:
+ """Called before step.
+
+ Returns:
+ bool: Whether to skip the step.
+ """
+ pass
+
+ @abstractmethod
+ def pre_zero_grad(self) -> None:
+ """Called before zero_grad.
+ """
+ pass
+
+ @abstractmethod
+ def get_grad_div_scale(self) -> float:
+        """Called before step or clip_grad. For efficiency, this method may not unscale the grads itself; it returns a divisor that the caller should apply when clipping grads or stepping.
+
+ Returns:
+ float: A divisor for gradient clipping or step.
+ """
+ pass
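Because `get_grad_div_scale` leaves the grads scaled, the caller is expected to fold the returned divisor into its own gradient handling. A hedged sketch of what the `# maybe clip grad here` step in the docstring example could look like (the parameter list and `max_norm` are assumptions):

```python
import torch


def clip_and_step(optim, mixed_precision, params, max_norm: float = 1.0):
    """Sketch: apply the divisor from get_grad_div_scale before clipping and stepping."""
    if mixed_precision.should_skip_step():    # e.g. fp16 overflow detected
        optim.zero_grad()
        return
    div_scale = mixed_precision.get_grad_div_scale()
    for p in params:
        if p.grad is not None:
            p.grad.div_(div_scale)    # unscale gradients once, in place
    torch.nn.utils.clip_grad_norm_(params, max_norm)
    optim.step()
```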
diff --git a/colossalai/amp/naive_amp/mixed_precision_mixin/bf16.py b/colossalai/amp/naive_amp/mixed_precision_mixin/bf16.py
new file mode 100644
index 000000000000..9454f6eb8413
--- /dev/null
+++ b/colossalai/amp/naive_amp/mixed_precision_mixin/bf16.py
@@ -0,0 +1,23 @@
+import torch
+from torch import Tensor
+
+from .base import MixedPrecisionMixin
+
+
+class BF16MixedPrecisionMixin(MixedPrecisionMixin):
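+    # bf16 shares fp32's 8-bit exponent, so gradients cannot overflow the way
+    # fp16 gradients can: no loss scaling or step skipping is required, and
+    # every hook below is a no-op.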
+ dtype = torch.bfloat16
+
+ def pre_backward(self, loss: Tensor) -> Tensor:
+ return loss
+
+ def pre_backward_by_grad(self, tensor: Tensor, grad: Tensor) -> Tensor:
+ return grad
+
+ def should_skip_step(self) -> bool:
+ return False
+
+ def pre_zero_grad(self) -> None:
+ pass
+
+ def get_grad_div_scale(self) -> float:
+ return 1.0
diff --git a/colossalai/amp/naive_amp/mixed_precision_mixin/fp16.py b/colossalai/amp/naive_amp/mixed_precision_mixin/fp16.py
new file mode 100644
index 000000000000..1ce8e42eb3ed
--- /dev/null
+++ b/colossalai/amp/naive_amp/mixed_precision_mixin/fp16.py
@@ -0,0 +1,84 @@
+from abc import abstractmethod
+from enum import Enum
+
+import torch
+import torch.distributed as dist
+from torch import Tensor
+
+from colossalai.amp.naive_amp.grad_scaler import DynamicGradScaler
+from colossalai.utils import get_current_device
+
+from .base import MixedPrecisionMixin
+
+
+class OptimState(Enum):
+ SCALED = 0
+ UNSCALED = 1
+
+
+class FP16MixedPrecisionMixin(MixedPrecisionMixin):
+ dtype = torch.float16
+
+ def __init__(self,
+ initial_scale: float = 2**16,
+ min_scale: float = 1,
+ growth_factor: float = 2,
+ backoff_factor: float = 0.5,
+ growth_interval: int = 1000,
+ hysteresis: int = 2,
+ max_scale: float = 2**32) -> None:
+ super().__init__()
+ self.grad_scaler = DynamicGradScaler(initial_scale=initial_scale,
+ min_scale=min_scale,
+ growth_factor=growth_factor,
+ backoff_factor=backoff_factor,
+ growth_interval=growth_interval,
+ hysteresis=hysteresis,
+ max_scale=max_scale)
+ self.optim_state = OptimState.UNSCALED
+ self.found_overflow = torch.zeros(1, dtype=torch.float, device=get_current_device())
+
+ @property
+ def loss_scale(self) -> float:
+ return self.grad_scaler.scale.item()
+
+ @abstractmethod
+ def check_local_overflow(self) -> bool:
+ """Check whether there is overflow in the local process. This method should be implemented by subclasses.
+
+ Returns:
+ bool: Whether there is overflow in the local process.
+ """
+ pass
+
+ def check_overflow(self) -> bool:
+ # clear previous overflow record
+ self.found_overflow.fill_(0.0)
+ if self.check_local_overflow():
+ self.found_overflow.fill_(1.0)
+ dist.all_reduce(self.found_overflow, op=dist.ReduceOp.MAX)
+ return self.found_overflow.item() > 0
+
+ def pre_backward(self, loss: Tensor) -> Tensor:
+ loss = self.loss_scale * loss
+ self.optim_state = OptimState.SCALED
+ return loss
+
+ def pre_backward_by_grad(self, tensor: Tensor, grad: Tensor) -> Tensor:
+ self.optim_state = OptimState.SCALED
+ return grad
+
+ def should_skip_step(self) -> bool:
+ found_inf = self.check_overflow()
+ self.grad_scaler.update(found_inf)
+ if found_inf:
+ self.optim_state = OptimState.UNSCALED
+ return found_inf
+
+ def pre_zero_grad(self) -> None:
+ pass
+
+ def get_grad_div_scale(self) -> float:
+ assert self.optim_state == OptimState.SCALED, 'grads should be scaled before clipping'
+ self.optim_state = OptimState.UNSCALED
+ return self.loss_scale
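Since `check_local_overflow` is abstract, every concrete fp16 optimizer must supply it. A minimal sketch of a subclass that simply scans its working gradients for inf/NaN (the `working_params` bookkeeping is an assumption; real integrations may detect overflow during gradient reduction instead):

```python
import torch

from colossalai.amp.naive_amp.mixed_precision_mixin import FP16MixedPrecisionMixin


class NaiveFP16MixedPrecisionMixin(FP16MixedPrecisionMixin):
    """Hypothetical subclass: detect overflow by scanning gradients directly."""

    def __init__(self, working_params, **kwargs):
        super().__init__(**kwargs)
        self.working_params = working_params

    def check_local_overflow(self) -> bool:
        for p in self.working_params:
            if p.grad is not None and not torch.isfinite(p.grad).all():
                return True
        return False
```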
diff --git a/colossalai/booster/plugin/gemini_plugin.py b/colossalai/booster/plugin/gemini_plugin.py
index adbf4803eefe..46714fe1c679 100644
--- a/colossalai/booster/plugin/gemini_plugin.py
+++ b/colossalai/booster/plugin/gemini_plugin.py
@@ -23,6 +23,9 @@
__all__ = ['GeminiPlugin']
+SUPPORTED_PRECISION = ['fp16', 'bf16']
+PRECISION_STR_TO_DTYPE = {'fp16': torch.half, 'bf16': torch.bfloat16}
+
class GeminiCheckpointIO(GeneralCheckpointIO):
@@ -171,6 +174,7 @@ class GeminiPlugin(DPPluginBase):
Args:
device (torch.device): device to place the model.
placement_policy (str, optional): "cpu", "cuda", "auto". Defaults to "cpu".
+        precision (str, optional): precision. Supports 'fp16' and 'bf16'. Defaults to 'fp16'.
pin_memory (bool, optional): use pin memory on CPU. Defaults to False.
force_outputs_fp32 (bool, optional): force outputs are fp32. Defaults to False.
strict_ddp_mode (bool, optional): use strict ddp mode (only use dp without other parallelism). Defaults to False.
@@ -203,6 +207,7 @@ def __init__(
self,
device: Optional[torch.device] = None,
placement_policy: str = "cpu",
+ precision: str = "fp16",
pin_memory: bool = False,
force_outputs_fp32: bool = False,
strict_ddp_mode: bool = False,
@@ -223,6 +228,7 @@ def __init__(
verbose: bool = False,
) -> None:
super().__init__()
+ assert precision in SUPPORTED_PRECISION, f'precision {precision} is not supported'
self.gemini_config = dict(
device=(device or get_current_device()),
placement_policy=placement_policy,
@@ -233,6 +239,7 @@ def __init__(
hidden_dim=hidden_dim,
min_chunk_size_mb=min_chunk_size_mb,
memstats=memstats,
+ mixed_precision=PRECISION_STR_TO_DTYPE[precision],
)
self.zero_optim_config = dict(gpu_margin_mem_ratio=gpu_margin_mem_ratio,)
self.optim_kwargs = dict(initial_scale=initial_scale,
@@ -253,7 +260,7 @@ def control_precision(self) -> bool:
return True
def supported_precisions(self) -> List[str]:
- return ['fp16']
+ return SUPPORTED_PRECISION
def control_device(self) -> bool:
return True
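With this change a user can request bf16 training through the plugin. A hedged usage sketch (model and optimizer setup omitted; the `boost` arguments follow the usual Booster API):

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin

plugin = GeminiPlugin(placement_policy="cpu", precision="bf16")    # new precision argument
booster = Booster(plugin=plugin)
# model, optimizer, criterion, dataloader, lr_scheduler = booster.boost(
#     model, optimizer, criterion, dataloader, lr_scheduler)
```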
diff --git a/colossalai/booster/plugin/low_level_zero_plugin.py b/colossalai/booster/plugin/low_level_zero_plugin.py
index 5d93cf0e33be..2b312d0f9947 100644
--- a/colossalai/booster/plugin/low_level_zero_plugin.py
+++ b/colossalai/booster/plugin/low_level_zero_plugin.py
@@ -1,4 +1,5 @@
import warnings
+from functools import partial
from typing import Callable, Iterator, List, Optional, Tuple, Union
import torch
@@ -20,12 +21,15 @@
__all__ = ['LowLevelZeroPlugin']
-def _convert_to_fp16(x):
+def _convert_floating_point(x, dtype: torch.dtype = torch.float16):
if isinstance(x, torch.Tensor) and torch.is_floating_point(x):
- return x.half()
+ return x.to(dtype)
return x
+SUPPORTED_PRECISION = ['fp16', 'bf16', 'fp32']
+
+
class LowLevelZeroCheckpointIO(TorchDDPCheckpointIO):
def save_unsharded_optimizer(self, optimizer: Optimizer, checkpoint: str, gather_dtensor: bool):
@@ -49,17 +53,24 @@ class LowLevelZeroModel(ModelWrapper):
def __init__(self, module: nn.Module, stage: int, precision: str) -> None:
super().__init__(module)
- self.convert_inputs = (precision == 'fp16')
- module = zero_model_wrapper(module, zero_stage=stage)
+ self.dtype = None
if precision == 'fp16':
- module = module.half()
+ self.dtype = torch.float16
+ elif precision == 'bf16':
+ self.dtype = torch.bfloat16
+ module = zero_model_wrapper(module, zero_stage=stage)
+ if self.dtype is not None:
+ module = module.to(self.dtype)
module = module.to(get_current_device())
self.module = module
+ self.convert_fn = None
+ if self.dtype is not None:
+ self.convert_fn = partial(_convert_floating_point, dtype=self.dtype)
def forward(self, *args, **kwargs):
- if self.convert_inputs:
- args = tree_map(_convert_to_fp16, args)
- kwargs = tree_map(_convert_to_fp16, kwargs)
+ if self.convert_fn is not None:
+ args = tree_map(self.convert_fn, args)
+ kwargs = tree_map(self.convert_fn, kwargs)
return super().forward(*args, **kwargs)
@@ -110,7 +121,7 @@ class LowLevelZeroPlugin(DPPluginBase):
Args:
        stage (int, optional): ZeRO stage. Defaults to 1.
- precision (str, optional): precision. Support 'fp16' and 'fp32'. Defaults to 'fp16'.
+        precision (str, optional): precision. Supports 'fp16', 'bf16' and 'fp32'. Defaults to 'fp16'.
initial_scale (float, optional): Initial scale used by DynamicGradScaler. Defaults to 2**32.
min_scale (float, optional): Min scale used by DynamicGradScaler. Defaults to 1.
growth_factor (float, optional): growth_factor used by DynamicGradScaler. Defaults to 2.
@@ -149,7 +160,7 @@ def __init__(
) -> None:
super().__init__()
assert stage in (1, 2), f'LowLevelZeroPlugin only supports stage 1/2 training'
- assert precision in ('fp16', 'fp32'), f'LowLevelZeroPlugin only supports fp16/fp32 training'
+ assert precision in SUPPORTED_PRECISION, f'LowLevelZeroPlugin only supports {SUPPORTED_PRECISION} training'
self.stage = stage
self.precision = precision
@@ -175,7 +186,7 @@ def control_precision(self) -> bool:
return True
def supported_precisions(self) -> List[str]:
- return ['fp16', 'fp32']
+ return SUPPORTED_PRECISION
def control_device(self) -> bool:
return True
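The input-conversion change can be seen in isolation with a small sketch of how `tree_map` applies `_convert_floating_point` over nested arguments (a toy example, not the plugin's actual call path; `tree_map` lives in the private `torch.utils._pytree` module):

```python
from functools import partial

import torch
from torch.utils._pytree import tree_map


def _convert_floating_point(x, dtype: torch.dtype = torch.float16):
    if isinstance(x, torch.Tensor) and torch.is_floating_point(x):
        return x.to(dtype)
    return x


convert_fn = partial(_convert_floating_point, dtype=torch.bfloat16)
args = (torch.randn(2), {"mask": torch.ones(2, dtype=torch.long)}, 3.14)
# Float tensors become bf16; the integer tensor and the plain Python float are untouched.
print(tree_map(convert_fn, args))
```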
diff --git a/colossalai/cli/launcher/__init__.py b/colossalai/cli/launcher/__init__.py
index 8d9ec147d401..808e4e84574f 100644
--- a/colossalai/cli/launcher/__init__.py
+++ b/colossalai/cli/launcher/__init__.py
@@ -28,7 +28,7 @@
type=str,
default=None,
help=
- "Specify computing devices to NOT use during execution. Mutually exclusive with --include. Formatting is the same as --includ,"
+ "Specify computing devices to NOT use during execution. Mutually exclusive with --include. Formatting is the same as --include,"
" only effective when used with --hostfile.")
@click.option("--num_nodes",
type=int,
diff --git a/colossalai/cli/launcher/hostinfo.py b/colossalai/cli/launcher/hostinfo.py
index 065cbc37101f..d1b88b229fb8 100644
--- a/colossalai/cli/launcher/hostinfo.py
+++ b/colossalai/cli/launcher/hostinfo.py
@@ -38,7 +38,7 @@ def is_host_localhost(hostname: str, port: str = None) -> None:
# socket.getfqdn("127.0.0.1") does not return localhost
# on some users' machines
- # thus, we directly return True if hostname is locahost, 127.0.0.1 or 0.0.0.0
+ # thus, we directly return True if hostname is localhost, 127.0.0.1 or 0.0.0.0
if hostname in ("localhost", "127.0.0.1", "0.0.0.0"):
return True
diff --git a/colossalai/cli/launcher/multinode_runner.py b/colossalai/cli/launcher/multinode_runner.py
index a51e1e371f13..85b241e96292 100644
--- a/colossalai/cli/launcher/multinode_runner.py
+++ b/colossalai/cli/launcher/multinode_runner.py
@@ -114,7 +114,7 @@ def recv_from_all(self) -> dict:
Receive messages from all hosts
Returns:
- msg_from_node (dict): a dictionry which contains messages from each node
+ msg_from_node (dict): a dictionary which contains messages from each node
"""
msg_from_node = dict()
diff --git a/colossalai/cli/launcher/run.py b/colossalai/cli/launcher/run.py
index 6411b4302e95..daa5107caf90 100644
--- a/colossalai/cli/launcher/run.py
+++ b/colossalai/cli/launcher/run.py
@@ -154,7 +154,7 @@ def _arg_dict_to_list(arg_dict):
extra_launch_args = dict()
torch_version = version.parse(torch.__version__)
- assert torch_version.major == 1
+ assert torch_version.major >= 1
if torch_version.minor < 9:
cmd = [
@@ -298,7 +298,7 @@ def launch_multi_processes(args: Config) -> None:
# receive the stop status
msg_from_node = runner.recv_from_all()
- # printe node status
+ # print node status
click.echo("\n====== Stopping All Nodes =====")
for hostname, msg in msg_from_node.items():
click.echo(f"{hostname}: {msg}")
diff --git a/colossalai/device/alpha_beta_profiler.py b/colossalai/device/alpha_beta_profiler.py
index f8b20de9bc37..f4e6cfffbcdf 100644
--- a/colossalai/device/alpha_beta_profiler.py
+++ b/colossalai/device/alpha_beta_profiler.py
@@ -197,7 +197,7 @@ def get_max_nbytes(process_group: Tuple[int], pg_handler: dist.ProcessGroup):
dist.broadcast_object_list(broadcast_list, src=process_group[0])
alpha_beta_dict[process_group] = tuple(broadcast_list)
- # add symmetry pair to the apha_beta_dict
+ # add symmetry pair to the alpha_beta_dict
symmetry_ab_dict = {}
for process_group, alpha_beta_pair in alpha_beta_dict.items():
symmetry_process_group = (process_group[1], process_group[0])
diff --git a/colossalai/fx/tracer/bias_addition_patch/patched_bias_addition_module/bias_addition_module.py b/colossalai/fx/tracer/bias_addition_patch/patched_bias_addition_module/bias_addition_module.py
index 85f1553e304c..591485fdb1ca 100644
--- a/colossalai/fx/tracer/bias_addition_patch/patched_bias_addition_module/bias_addition_module.py
+++ b/colossalai/fx/tracer/bias_addition_patch/patched_bias_addition_module/bias_addition_module.py
@@ -51,7 +51,7 @@ def extract_kwargs_from_mod(self):
For example:
The kwargs for conv2d module is {} because the attributes like 'padding' or 'groups' are
- considered during module initilizing. However, we need to consider those attributes as kwargs
+ considered during module initializing. However, we need to consider those attributes as kwargs
in F.conv2d.
"""
pass
diff --git a/colossalai/fx/tracer/experimental.py b/colossalai/fx/tracer/experimental.py
index 88b65b6188fa..22a67d1ceccc 100644
--- a/colossalai/fx/tracer/experimental.py
+++ b/colossalai/fx/tracer/experimental.py
@@ -295,7 +295,7 @@ class PatchedCheckpointFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, run_function, preserve_rng_state, *args):
- # signal that the current tracing occurs within activaton checkpoint part
+ # signal that the current tracing occurs within activation checkpoint part
self.inside_torch_checkpoint_func = True
out = run_function(*args)
self.inside_torch_checkpoint_func = False
diff --git a/colossalai/fx/tracer/tracer.py b/colossalai/fx/tracer/tracer.py
index 1ae31f958975..28965a1b8e74 100644
--- a/colossalai/fx/tracer/tracer.py
+++ b/colossalai/fx/tracer/tracer.py
@@ -92,7 +92,7 @@ def create_proxy(self, kind, target, args, kwargs, name=None, type_expr=None, pr
return proxy
# if graph is traced for auto parallelism module, some extra node will be added during
- # graph construction to deal with the compatability between bias addition and all reduce.
+ # graph construction to deal with the compatibility between bias addition and all reduce.
# if no extra manipulation is applied, we just pass the origin arguments to create_proxy function
# to create node on computation graph
@@ -208,7 +208,7 @@ def _configure_tracer_type(self, tracer_type: TracerType):
self.proxy_cls = ColoProxy
self.tracer_type = TracerType.META
else:
- raise ValueError(f"Unrecognised tracer type {tracer_type}")
+ raise ValueError(f"Unrecognized tracer type {tracer_type}")
def _meta_data_computing(self, kind, target, args, kwargs):
@@ -445,7 +445,7 @@ class PatchedCheckpointFunction(torch.autograd.Function):
@staticmethod
def forward(ctx, run_function, preserve_rng_state, *args):
- # signal that the current tracing occurs within activaton checkpoint part
+ # signal that the current tracing occurs within activation checkpoint part
self.inside_torch_checkpoint_func = True
out = run_function(*args)
self.inside_torch_checkpoint_func = False
diff --git a/colossalai/kernel/cuda_native/csrc/type_shim.h b/colossalai/kernel/cuda_native/csrc/type_shim.h
index 2f180a7783ec..03ccc02635fa 100644
--- a/colossalai/kernel/cuda_native/csrc/type_shim.h
+++ b/colossalai/kernel/cuda_native/csrc/type_shim.h
@@ -171,6 +171,21 @@
using g_scalar_t_##LEVEL = at::Half; \
using p_scalar_t_##LEVEL = at::Half; \
__VA_ARGS__; \
+ } else if (GTYPE == at::ScalarType::Float && \
+ PTYPE == at::ScalarType::BFloat16) { \
+ using g_scalar_t_##LEVEL = float; \
+ using p_scalar_t_##LEVEL = at::BFloat16; \
+ __VA_ARGS__; \
+ } else if (GTYPE == at::ScalarType::BFloat16 && \
+ PTYPE == at::ScalarType::Float) { \
+ using g_scalar_t_##LEVEL = at::BFloat16; \
+ using p_scalar_t_##LEVEL = float; \
+ __VA_ARGS__; \
+ } else if (GTYPE == at::ScalarType::BFloat16 && \
+ PTYPE == at::ScalarType::BFloat16) { \
+ using g_scalar_t_##LEVEL = at::BFloat16; \
+ using p_scalar_t_##LEVEL = at::BFloat16; \
+ __VA_ARGS__; \
} else { \
AT_ERROR(#NAME, "not implemented for '", toString(GTYPE), toString(PTYPE), \
"'"); \
diff --git a/colossalai/kernel/cuda_native/flash_attention.py b/colossalai/kernel/cuda_native/flash_attention.py
index d793815ed681..3db7374509a0 100644
--- a/colossalai/kernel/cuda_native/flash_attention.py
+++ b/colossalai/kernel/cuda_native/flash_attention.py
@@ -138,7 +138,7 @@ def forward(self,
elif attn_mask_type == AttnMaskType.causal: # gpt style
attn_bias = LowerTriangularMask()
- if bias is not None: # alibi / relative position emebedding
+ if bias is not None: # alibi / relative position embedding
assert allow_alibi, "flash attention with bias is not supported in this system."
assert attn_mask_type == AttnMaskType.causal, \
"attention with bias is only supported for causal attention so far."
diff --git a/colossalai/kernel/cuda_native/multihead_attention.py b/colossalai/kernel/cuda_native/multihead_attention.py
index 3b6470cdcbb9..69246f2f3854 100644
--- a/colossalai/kernel/cuda_native/multihead_attention.py
+++ b/colossalai/kernel/cuda_native/multihead_attention.py
@@ -43,7 +43,7 @@ class Config:
attn_prob_dropout_ratio: float # attention score dropout ratio
hidden_dropout_ratio: float # dropout ratio before residual
norm_first: bool # norm_first
- fp16: bool # fp16 presion
+ fp16: bool # fp16 precision
class MultiHeadAttention1DFunc(Function):
diff --git a/colossalai/kernel/jit/option.py b/colossalai/kernel/jit/option.py
index aa41f57678fc..e20c08b051ed 100644
--- a/colossalai/kernel/jit/option.py
+++ b/colossalai/kernel/jit/option.py
@@ -43,7 +43,7 @@ def warmup_jit_fusion(batch_size: int,
seq_length: int = 512,
vocab_size: int = 32768,
dtype: torch.dtype = torch.float32):
- """ Compilie JIT functions before the main training steps """
+ """ Compile JIT functions before the main training steps """
embed = Embedding(vocab_size, hidden_size).to(get_current_device())
linear_1 = Linear(hidden_size, hidden_size * 4, skip_bias_add=True).to(get_current_device())
diff --git a/colossalai/lazy/__init__.py b/colossalai/lazy/__init__.py
new file mode 100644
index 000000000000..4387107bf773
--- /dev/null
+++ b/colossalai/lazy/__init__.py
@@ -0,0 +1,6 @@
+from .lazy_init import LazyInitContext, LazyTensor
+
+__all__ = [
+ 'LazyInitContext',
+ 'LazyTensor',
+]
diff --git a/colossalai/utils/model/experimental.py b/colossalai/lazy/lazy_init.py
similarity index 98%
rename from colossalai/utils/model/experimental.py
rename to colossalai/lazy/lazy_init.py
index bf3e3d05b99c..c1fda3c53865 100644
--- a/colossalai/utils/model/experimental.py
+++ b/colossalai/lazy/lazy_init.py
@@ -350,7 +350,14 @@ def factory_fn():
copied.requires_grad_()
return copied
- target = LazyTensor(factory_fn, meta_data=self._meta_data)
+ if self._materialized_data is not None:
+ # self is early materialized
+ copied = self._materialized_data.detach().clone()
+ if self.requires_grad:
+ copied.requires_grad_()
+ target = LazyTensor(lambda: None, concrete_data=copied)
+ else:
+ target = LazyTensor(factory_fn, meta_data=self._meta_data)
memo[id(self)] = target
return target
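The rename above moves `LazyInitContext`/`LazyTensor` under the new `colossalai.lazy` package (see the new `__init__.py` above), and the hunk teaches deep-copying to reuse data from a tensor that was materialized early. A minimal usage sketch under the new import path, assuming the context-manager API carried over from the old experimental module and `LazyTensor.materialize()` (the latter appears in the `gemini_ddp.py` hunk below):

```python
import copy
import torch.nn as nn
from colossalai.lazy import LazyInitContext

with LazyInitContext():
    model = nn.Linear(10, 10)   # parameters are created as LazyTensors; no real allocation yet

model.weight.materialize()      # early materialization of one tensor
copied = copy.deepcopy(model)   # per the hunk above, the materialized weight is cloned
                                # directly instead of replaying its factory_fn
```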
diff --git a/colossalai/nn/layer/parallel_sequence/layers.py b/colossalai/nn/layer/parallel_sequence/layers.py
index d9486217bbc9..0887f8389dbe 100644
--- a/colossalai/nn/layer/parallel_sequence/layers.py
+++ b/colossalai/nn/layer/parallel_sequence/layers.py
@@ -195,7 +195,7 @@ class _Linear(nn.Module):
keep_master_weight_for_test: This was added for testing and should be
set to False. It returns the master weights
used for initialization.
- skip_bias_add: This was added to enable performance optimations where bias
+ skip_bias_add: This was added to enable performance optimizations where bias
can be fused with other elementwise operations. we skip
adding bias but instead return it.
"""
diff --git a/colossalai/nn/loss/loss_1d.py b/colossalai/nn/loss/loss_1d.py
index 2fabd954f8fb..dd548c1d3dd4 100644
--- a/colossalai/nn/loss/loss_1d.py
+++ b/colossalai/nn/loss/loss_1d.py
@@ -21,7 +21,7 @@ def forward(ctx, vocab_parallel_logits, targets, process_group):
# Subtract the maximum value.
vocab_parallel_logits.sub_(logits_max.unsqueeze(dim=-1))
- # Get the partition's vocab indecies
+ # Get the partition's vocab indices
partition_vocab_size = vocab_parallel_logits.size()[-1]
rank = dist.get_rank(process_group)
vocab_start_index = partition_vocab_size * rank
@@ -61,10 +61,10 @@ def forward(ctx, vocab_parallel_logits, targets, process_group):
@custom_bwd
def backward(ctx, grad_output):
- # Retreive tensors from the forward path.
+ # Retrieve tensors from the forward path.
softmax, target_mask, masked_target_1d = ctx.saved_tensors
- # All the inputs have softmax as thier gradient.
+ # All the inputs have softmax as their gradient.
grad_input = softmax
# For simplicity, work with the 2D gradient.
partition_vocab_size = softmax.size()[-1]
diff --git a/colossalai/nn/loss/loss_2d.py b/colossalai/nn/loss/loss_2d.py
index cb12e723c323..7da8b2d697fa 100644
--- a/colossalai/nn/loss/loss_2d.py
+++ b/colossalai/nn/loss/loss_2d.py
@@ -106,7 +106,7 @@ def forward(ctx, logits, targets):
@staticmethod
@custom_bwd
def backward(ctx, output_grad):
- # Retreive tensors from the forward path.
+ # Retrieve tensors from the forward path.
softmax, target_mask, masked_target = ctx.saved_tensors
# All the inputs have softmax as their gradient.
diff --git a/colossalai/nn/loss/loss_2p5d.py b/colossalai/nn/loss/loss_2p5d.py
index f8e3324fc5ff..63dc4f33ad32 100644
--- a/colossalai/nn/loss/loss_2p5d.py
+++ b/colossalai/nn/loss/loss_2p5d.py
@@ -100,7 +100,7 @@ def forward(ctx, logits, targets):
@staticmethod
@custom_bwd
def backward(ctx, output_grad):
- # Retreive tensors from the forward path.
+ # Retrieve tensors from the forward path.
softmax, target_mask, masked_target = ctx.saved_tensors
# All the inputs have softmax as their gradient.
diff --git a/colossalai/nn/loss/loss_3d.py b/colossalai/nn/loss/loss_3d.py
index e76439191fdb..f27d57ad6c99 100644
--- a/colossalai/nn/loss/loss_3d.py
+++ b/colossalai/nn/loss/loss_3d.py
@@ -99,10 +99,10 @@ def forward(ctx, logits, targets, output_parallel_mode):
@staticmethod
@custom_bwd
def backward(ctx, output_grad):
- # Retreive tensors from the forward path.
+ # Retrieve tensors from the forward path.
softmax, target_mask, masked_target = ctx.saved_tensors
- # All the inputs have softmax as thier gradient.
+ # All the inputs have softmax as their gradient.
input_grad = softmax
# For simplicity, work with the 2D gradient.
partition_vocab_size = softmax.size()[-1]
diff --git a/colossalai/nn/optimizer/cpu_adam.py b/colossalai/nn/optimizer/cpu_adam.py
index bb561a106515..1ec8783c53d3 100644
--- a/colossalai/nn/optimizer/cpu_adam.py
+++ b/colossalai/nn/optimizer/cpu_adam.py
@@ -21,7 +21,7 @@ class CPUAdam(NVMeOptimizer):
`CPUAdam` requires CUDA extensions which can be built during installation or runtime.
- This version of CPU Adam accelates parameters updating on CPU with SIMD.
+ This version of CPU Adam accelerates parameters updating on CPU with SIMD.
Support of AVX2 or AVX512 is required.
The GPU part is implemented in a naive way.
@@ -93,8 +93,7 @@ def torch_adam_update(self,
bias_correction1,
bias_correction2,
use_adamw=False):
- # FIXME(ver217): remove the below line when replace torch adam with fused adam
- grad = grad.float()
+ grad = grad.to(data.dtype)
if weight_decay != 0:
if use_adamw:
@@ -133,10 +132,12 @@ def step(self, closure=None, div_scale: float = -1):
if len(state) == 0:
state['step'] = 0
+ # FIXME(ver217): CPU adam kernel only supports fp32 states now
+ assert p.dtype is torch.float, "CPUAdam only support fp32 parameters"
# gradient momentums
- state['exp_avg'] = torch.zeros_like(p, dtype=torch.float, device=target_device)
+ state['exp_avg'] = torch.zeros_like(p, device=target_device)
# gradient variances
- state['exp_avg_sq'] = torch.zeros_like(p, dtype=torch.float, device=target_device)
+ state['exp_avg_sq'] = torch.zeros_like(p, device=target_device)
self._post_state_init(p)
state['step'] += 1
@@ -147,9 +148,17 @@ def step(self, closure=None, div_scale: float = -1):
assert state['exp_avg'].device.type == 'cpu', "exp_avg should stay on cpu"
assert state['exp_avg_sq'].device.type == 'cpu', "exp_avg should stay on cpu"
self._pre_update(p, 'exp_avg', 'exp_avg_sq')
- self.cpu_adam_op.step(state['step'], group['lr'], beta1, beta2, group['eps'], group['weight_decay'],
- group['bias_correction'], p.data, p.grad.data, state['exp_avg'],
- state['exp_avg_sq'], div_scale)
+ if p.grad.dtype is torch.bfloat16:
+ # cpu adam kernel does not support bf16 now
+ bias_correction1 = 1 - beta1**state['step']
+ bias_correction2 = 1 - beta2**state['step']
+ self.torch_adam_update(p.data, p.grad.data, state['exp_avg'], state['exp_avg_sq'], group['lr'],
+ beta1, beta2, group['eps'], group['weight_decay'], bias_correction1,
+ bias_correction2, self.adamw_mode)
+ else:
+ self.cpu_adam_op.step(state['step'], group['lr'], beta1, beta2, group['eps'],
+ group['weight_decay'], group['bias_correction'], p.data, p.grad.data,
+ state['exp_avg'], state['exp_avg_sq'], div_scale)
self._post_update(p, 'exp_avg', 'exp_avg_sq')
elif target_device.type == 'cuda':
assert div_scale == -1, "div_scale should remain default"
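Since the SIMD CPU kernel only handles fp32 states, bf16 gradients now take the plain-PyTorch path above. A self-contained sketch of the update that fallback performs, mirroring the call signature in the hunk (illustrative only, not the library's exact `torch_adam_update`):

```python
import torch

def torch_adam_update_sketch(data, grad, exp_avg, exp_avg_sq, lr, beta1, beta2, eps,
                             weight_decay, bias_correction1, bias_correction2, use_adamw):
    grad = grad.to(data.dtype)                        # mirrors the `grad.to(data.dtype)` change
    if weight_decay != 0:
        if use_adamw:
            data.mul_(1 - lr * weight_decay)          # decoupled weight decay (AdamW)
        else:
            grad = grad.add(data, alpha=weight_decay)
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)                # first moment
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)   # second moment
    denom = (exp_avg_sq / bias_correction2).sqrt_().add_(eps)
    data.addcdiv_(exp_avg, denom, value=-lr / bias_correction1)
```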
diff --git a/colossalai/nn/optimizer/fused_adam.py b/colossalai/nn/optimizer/fused_adam.py
index 987af8a968b7..82a6250f1fd1 100644
--- a/colossalai/nn/optimizer/fused_adam.py
+++ b/colossalai/nn/optimizer/fused_adam.py
@@ -134,8 +134,8 @@ def step(self, closure=None, grads=None, output_params=None, scale=None, grad_no
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p)
- if p.dtype not in [torch.float16, torch.float32]:
- raise RuntimeError('FusedAdam only support fp16 and fp32.')
+ if p.dtype not in [torch.float16, torch.float32, torch.bfloat16]:
+ raise RuntimeError('FusedAdam only supports fp16, fp32 and bf16.')
g_l.append(p.grad.data)
p_l.append(p.data)
diff --git a/colossalai/nn/optimizer/hybrid_adam.py b/colossalai/nn/optimizer/hybrid_adam.py
index be6311c6c29f..526071b06f95 100644
--- a/colossalai/nn/optimizer/hybrid_adam.py
+++ b/colossalai/nn/optimizer/hybrid_adam.py
@@ -1,16 +1,17 @@
from typing import Any, Optional
import torch
+from torch.optim import Adam
-from colossalai.kernel.op_builder import CPUAdamBuilder, FusedOptimBuilder
+from colossalai.kernel.op_builder import FusedOptimBuilder
from colossalai.registry import OPTIMIZERS
from colossalai.utils import multi_tensor_applier
-from .nvme_optimizer import NVMeOptimizer
+from .cpu_adam import CPUAdam
@OPTIMIZERS.register_module
-class HybridAdam(NVMeOptimizer):
+class HybridAdam(CPUAdam):
"""Implements Adam algorithm.
Supports parameters updating on both GPU and CPU, depending on the device of parameters.
@@ -74,15 +75,9 @@ def __init__(self,
nvme_offload_dir: Optional[str] = None,
**defaults: Any):
- default_args = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, bias_correction=bias_correction)
- super(HybridAdam, self).__init__(model_params, default_args, nvme_offload_fraction, nvme_offload_dir)
- self.adamw_mode = adamw_mode
-
- # build during runtime if not found
- cpu_optim = CPUAdamBuilder().load()
+ super().__init__(model_params, lr, bias_correction, betas, eps, weight_decay, adamw_mode, nvme_offload_fraction,
+ nvme_offload_dir)
fused_optim = FusedOptimBuilder().load()
- self.cpu_adam_op = cpu_optim.CPUAdamOptimizer(lr, betas[0], betas[1], eps, weight_decay, adamw_mode)
-
self.gpu_adam_op = fused_optim.multi_tensor_adam
self._dummy_overflow_buf = torch.cuda.IntTensor([0])
@@ -108,10 +103,12 @@ def step(self, closure=None, div_scale: float = -1):
if len(state) == 0:
state['step'] = 0
+ # FIXME(ver217): CPU adam kernel only supports fp32 states now
+ assert p.dtype is torch.float, "HybridAdam only support fp32 parameters"
# gradient momentums
- state['exp_avg'] = torch.zeros_like(p, dtype=torch.float, device=target_device)
+ state['exp_avg'] = torch.zeros_like(p, device=target_device)
# gradient variances
- state['exp_avg_sq'] = torch.zeros_like(p, dtype=torch.float, device=target_device)
+ state['exp_avg_sq'] = torch.zeros_like(p, device=target_device)
self._post_state_init(p)
state['step'] += 1
@@ -122,9 +119,17 @@ def step(self, closure=None, div_scale: float = -1):
assert state['exp_avg'].device.type == 'cpu', "exp_avg should stay on cpu"
assert state['exp_avg_sq'].device.type == 'cpu', "exp_avg should stay on cpu"
self._pre_update(p, 'exp_avg', 'exp_avg_sq')
- self.cpu_adam_op.step(state['step'], group['lr'], beta1, beta2, group['eps'], group['weight_decay'],
- group['bias_correction'], p.data, p.grad.data, state['exp_avg'],
- state['exp_avg_sq'], div_scale)
+ if p.grad.dtype is torch.bfloat16:
+ # cpu adam kernel does not support bf16 now
+ bias_correction1 = 1 - beta1**state['step']
+ bias_correction2 = 1 - beta2**state['step']
+ self.torch_adam_update(p.data, p.grad.data, state['exp_avg'], state['exp_avg_sq'], group['lr'],
+ beta1, beta2, group['eps'], group['weight_decay'], bias_correction1,
+ bias_correction2, self.adamw_mode)
+ else:
+ self.cpu_adam_op.step(state['step'], group['lr'], beta1, beta2, group['eps'],
+ group['weight_decay'], group['bias_correction'], p.data, p.grad.data,
+ state['exp_avg'], state['exp_avg_sq'], div_scale)
self._post_update(p, 'exp_avg', 'exp_avg_sq')
elif target_device.type == 'cuda':
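With this refactor, `HybridAdam` inherits the CPU path (including the bf16 fallback) from `CPUAdam` and only adds the fused GPU kernel. A short usage sketch; note the fused extension is loaded at construction, so a CUDA-enabled build of the kernels is assumed:

```python
import torch
from colossalai.nn.optimizer import HybridAdam

model = torch.nn.Linear(8, 8)                      # CPU params go through the CPUAdam path;
optim = HybridAdam(model.parameters(), lr=1e-3)    # CUDA params would use multi_tensor_adam

loss = model(torch.randn(4, 8)).sum()
loss.backward()
optim.step()
```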
diff --git a/colossalai/nn/optimizer/lamb.py b/colossalai/nn/optimizer/lamb.py
index 7ac2109572a4..399ad39b6658 100644
--- a/colossalai/nn/optimizer/lamb.py
+++ b/colossalai/nn/optimizer/lamb.py
@@ -59,7 +59,7 @@ def step(self, closure=None):
continue
grad = p.grad.data
if grad.is_sparse:
- raise RuntimeError('Lamb does not support sparse gradients, consider SparseAdam instad.')
+ raise RuntimeError('Lamb does not support sparse gradients, consider SparseAdam instead.')
state = self.state[p]
diff --git a/colossalai/nn/optimizer/nvme_optimizer.py b/colossalai/nn/optimizer/nvme_optimizer.py
index 53e4a46c9741..fb3a4d87be60 100644
--- a/colossalai/nn/optimizer/nvme_optimizer.py
+++ b/colossalai/nn/optimizer/nvme_optimizer.py
@@ -43,7 +43,7 @@ def __init__(self,
self.offloader = None
self.is_on_nvme: Dict[Parameter, bool] = {}
self.offloaded_numel: int = 0
- # As param may be not materialized here, these attributes are initalized when the first step
+ # As params may not be materialized here, these attributes are initialized at the first step
self.total_numel: Optional[int] = None
self.can_offload_numel: Optional[int] = None
diff --git a/colossalai/nn/parallel/layers/cache_embedding/cached_embedding.py b/colossalai/nn/parallel/layers/cache_embedding/cached_embedding.py
index a0c45d8e80c0..a74cb8d94bab 100644
--- a/colossalai/nn/parallel/layers/cache_embedding/cached_embedding.py
+++ b/colossalai/nn/parallel/layers/cache_embedding/cached_embedding.py
@@ -12,23 +12,23 @@ class CachedEmbeddingBag(BaseEmbeddingBag):
Cached Embedding. Applies a GPU-based software cache approach to dynamically manage the embedding table in the CPU and GPU memory space.
It can leverage the id's frequency statistics of the target dataset, by passing a frequency list to param `ids_freq_mapping`.
- You can also apply a navie LFU cache eviction strategy by setting `evict_strategy` as EvictionStrategy.LFU.
+ You can also apply a naive LFU cache eviction strategy by setting `evict_strategy` as EvictionStrategy.LFU.
Args:
num_embeddings (int): size of the dictionary of embeddings
embedding_dim (int): the size of each embedding vector
padding_idx (int, optional): If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed EmbeddingBag, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector. Note that the embedding vector at padding_idx is excluded from the reduction.
max_norm (float, optional): If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm
- norm_type (str, optional): The p of the p-norm to compute for the max_norm option. Defaults to 2..
+ norm_type (str, optional): The p of the p-norm to compute for the max_norm option. Defaults to 2.
scale_grad_by_freq (bool, optional): if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False. Note: this option is not supported when mode="max". Defaults to False.
sparse (bool, optional): if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when mode="max".. Defaults to False.
- _weight (torch.Tensor, optional): an embedding weight tensor. Concate multiple tables in a embedding bag as a single one. Defaults to None.
+ _weight (torch.Tensor, optional): an embedding weight tensor. Concatenates multiple tables in an embedding bag into a single one. Defaults to None.
mode (str, optional): "sum", "mean" or "max". Specifies the way to reduce the bag. "sum" computes the weighted sum, taking per_sample_weights into consideration. "mean" computes the average of the values in the bag, "max" computes the max value over each bag. Default: "mean". Defaults to 'mean'.
include_last_offset (bool, optional): if True, offsets has one additional element, where the last element is equivalent to the size of indices. This matches the CSR format.. Defaults to False.
dtype (torch.dtype, optional): data type of the cpu weight initialization. Defaults to None meaning float32.
device (torch.device, optional): device type to the cpu weight. Defaults to None meaning cpu.
cache_ratio (float, float): cache ratio of the #cuda_weight_row / #cpu_weight_row
- ids_freq_mapping (Union[List, torch.Tensor], optional): the frequency of each embedding vector occures in dataset. Defaults to None.
+ ids_freq_mapping (Union[List, torch.Tensor], optional): the frequency at which each embedding vector occurs in the dataset. Defaults to None.
warmup_ratio (float, optional): the ratio of the cuda cache that is warmed up. Defaults to 0.7.
buffer_size (int, optional): the max number of vectors in transmitter buffer. If set to 0, the buffer is not used. Defaults to 0.
pin_weight (bool, optional): pin the cpu weight. Defaults to False.
@@ -145,7 +145,7 @@ def num_write_back_history(self):
def swap_in_bandwidth(self):
if self.cache_weight_mgr._cpu_to_cuda_numel > 0:
return self.cache_weight_mgr._cpu_to_cuda_numel * self.cache_weight_mgr.elem_size_in_byte / 1e6 / \
- self.cache_weight_mgr._cpu_to_cuda_elpase
+ self.cache_weight_mgr._cpu_to_cuda_elapse
else:
return 0
diff --git a/colossalai/nn/parallel/layers/cache_embedding/copyer.py b/colossalai/nn/parallel/layers/cache_embedding/copyer.py
index b586be1dc6d9..aa1f794482f9 100644
--- a/colossalai/nn/parallel/layers/cache_embedding/copyer.py
+++ b/colossalai/nn/parallel/layers/cache_embedding/copyer.py
@@ -17,7 +17,7 @@ def __init__(self, size: int) -> None:
def index_copy(self, dim: int, src_index: LongTensor, tgt_index: LongTensor, src: torch.Tensor, tgt: torch.Tensor):
"""copy
src tensor[src_index] -(index_select)-> tmp -(index_copy_)-> tgt tensor [tgt_index]
- The valid rows in the src tensor are continous, while rows in tgt tensor is scattered.
+ The valid rows in the src tensor are continuous, while rows in the tgt tensor are scattered.
Args:
dim (int): dimension along which to index
diff --git a/colossalai/nn/parallel/layers/cache_embedding/parallel_cached_embedding_tablewise_split_cache.py b/colossalai/nn/parallel/layers/cache_embedding/parallel_cached_embedding_tablewise_split_cache.py
index cb4647028d47..80a54b4fadd4 100644
--- a/colossalai/nn/parallel/layers/cache_embedding/parallel_cached_embedding_tablewise_split_cache.py
+++ b/colossalai/nn/parallel/layers/cache_embedding/parallel_cached_embedding_tablewise_split_cache.py
@@ -114,7 +114,7 @@ def forward(self, indices: torch.Tensor, offsets: torch.Tensor = None, per_sampl
# get result of shape = (batch_size, (len(assigned_table_list)*embedding_dim))
local_output = torch.cat(local_output_list, 1)
- # then concatenate those local_output on the second demension.
+ # then concatenate those local_output on the second dimension.
# use all_to_all
remains = batch_size % self.world_size
scatter_strides = [batch_size // self.world_size + int(i < remains) for i in range(self.world_size)]
diff --git a/colossalai/utils/model/lazy_init_context.py b/colossalai/utils/model/lazy_init_context.py
deleted file mode 100644
index cf05f966089d..000000000000
--- a/colossalai/utils/model/lazy_init_context.py
+++ /dev/null
@@ -1,242 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-import inspect
-import types
-from typing import Callable, List
-
-import torch
-import torch.nn as nn
-
-from colossalai.tensor import ColoParameter, ColoTensor
-from colossalai.utils.model.utils import substitute_init_recursively
-
-
-class LazyInitContext():
- """
- A context to allow for lazy weight initialization of PyTorch modules. It intercepts the tensor
- initialization functions for lazy initialization
-
- Note:
- This API is only experimental and subject to future changes.
-
- Usage:
- with LazyInitContext() as ctx:
- model = nn.Linear(10, 10)
- model.weight.zero_()
-
- # make sure the weight is a meta tensor
- assert model.weight.is_meta
-
- # initialize weights
- ctx.lazy_init_parameters(model)
-
- # make sure the weight is not a meta tensor
- # and initialized correctly
- assert not model.weight.is_meta and torch.all(model.weight == 0)
-
- Args:
- to_meta (bool): optional, whether to initialize the model with meta tensors, default is True. This
- argument exists for now because some corner cases such as self.weight = torch.zeros(...) cannot be captured yet.
- extra_torch_tensor_func (List[str]): extra torch tensor functions related
- to value setting, such as `zero_` and `triu_`. `zero_` is pre-added by default.
- """
-
- tensor_set_value_func = ['zero_', 'fill_']
-
- def __init__(self, to_meta: bool = True, extra_torch_tensor_func: List[str] = None):
- # TODO: hijack the torch constructor functions as well
- self._to_meta = to_meta
- self._intercepted_nn_init_func_cache = {}
- self._nn_init_methods = self._get_nn_init_methods()
- self._torch_mod_cls = torch.nn.modules.module.Module
-
- if extra_torch_tensor_func:
- # use tuple to remove duplicates
- self._torch_tensor_funcs = tuple(self.tensor_set_value_func + extra_torch_tensor_func)
- else:
- self._torch_tensor_funcs = self.tensor_set_value_func
-
- @property
- def to_meta(self):
- return self._to_meta
-
- def _cache_init_func(self, func):
- """
- This method wraps the ``torch.nn.init`` method and torch tensor value-setting functions
- so that the function call is cached instead of being executed.
- """
-
- def wrapped_init_func(tensor, *args, **kwargs):
- if tensor not in self._intercepted_nn_init_func_cache:
- self._intercepted_nn_init_func_cache[tensor] = []
- self._intercepted_nn_init_func_cache[tensor].append((func, args, kwargs))
-
- return wrapped_init_func
-
- def _get_nn_init_methods(self):
- """
- This method looks for all available functions in the ``torch.nn.init``
- module.
- """
- nn_init_method_names = dir(torch.nn.init)
- nn_init_methods = []
-
- # look for all methods in ``torch.nn.init`` module
- for name in nn_init_method_names:
- nn_init_methods.append((name, getattr(torch.nn.init, name)))
-
- def _is_init_method(item):
- name, func = item
-
- if (not isinstance(func, types.FunctionType) or name.startswith('_') or not name.endswith('_')):
- return False
- else:
- return True
-
- # remove methods which are not init functions
- nn_init_methods = list(filter(_is_init_method, nn_init_methods))
- return nn_init_methods
-
- def _wrap_module_init(self, func):
- """
- This method wraps the calls to the `__init__` of ``torch.nn.Module`` and replaces
- the argument device with value 'meta' so that all modules are created as meta tensors.
- """
- has_device = 'device' in inspect.signature(func).parameters
-
- def layer_lazy_init(module, *args, **kwargs):
- # if this module contains device argument
- # we set it to meta to initialize as meta backend
- if has_device:
- kwargs['device'] = 'meta'
- func(module, *args, **kwargs)
-
- # if device is not found, we intialize it and convert to meta
- if not has_device:
- module.to('meta')
-
- return layer_lazy_init
-
- def _get_tmp_origin_func_ref(self, name):
- """
- Generate a function name for consistency during caching and retrieving.
- """
- return f'_orig_{name}'
-
- def _patch_nn_init_funcs(self):
- # patch nn.init functions
- for name, func in self._nn_init_methods:
- setattr(torch.nn.init, name, self._cache_init_func(func))
-
- def _unpatch_nn_init_funcs(self):
- # unpatch nn.init functions
- for name, func in self._nn_init_methods:
- setattr(torch.nn.init, name, func)
-
- def _patch_submodule_init(self):
- # patch classes __init__ methods
- def _activate_wrap_init(cls):
- cls.__orig_init__ = cls.__init__
- cls.__init__ = self._wrap_module_init(cls.__init__)
-
- substitute_init_recursively(self._torch_mod_cls, _activate_wrap_init, set())
-
- def _unpatch_submodule_init(self):
-
- def _recover_orig_init(cls):
- cls.__init__ = cls.__orig_init__
-
- substitute_init_recursively(self._torch_mod_cls, _recover_orig_init, set())
-
- def _patch_torch_tensor_funcs(self):
- # patch tensor value-setting functions
- for func_name in self._torch_tensor_funcs:
- origin_func_name = self._get_tmp_origin_func_ref(func_name)
- origin_func = getattr(torch.Tensor, func_name)
- setattr(torch.Tensor, origin_func_name, origin_func)
- setattr(torch.Tensor, func_name, self._cache_init_func(origin_func))
-
- def _unpatch_torch_tensor_funcs(self):
- for func_name in self._torch_tensor_funcs:
- origin_func_name = self._get_tmp_origin_func_ref(func_name)
- origin_func = getattr(torch.Tensor, origin_func_name)
- setattr(torch.Tensor, func_name, origin_func)
-
- def __enter__(self):
- self._patch_torch_tensor_funcs()
- self._patch_nn_init_funcs()
-
- if self._to_meta:
- self._patch_submodule_init()
- return self
-
- def __exit__(self, *args, **kwargs):
- if self._to_meta:
- self._unpatch_submodule_init()
- self._unpatch_nn_init_funcs()
- self._unpatch_torch_tensor_funcs()
-
- def lazy_init_parameters(self, model: torch.nn.Module, device='cpu'):
- """
- Initialize the weights of the meta-tensor model.
-
- Args:
- model (`torch.nn.Module`): the model instantiated under the context.
- device (str): the device on which weights are initialized
-
- """
-
- def _init_recursively(module: nn.Module):
- # recursively initialize the module
- for mod in module.children():
- _init_recursively(mod)
-
- # initialize and shard tensors directly attached to the current module
- for name, param in module.named_parameters(recurse=False):
- _init_and_shard(module, name, param)
-
- for name, buf in module.named_buffers(recurse=False):
- _init_and_shard(module, name, buf)
-
- @torch.no_grad()
- def _init_and_shard(module, name, tensor):
- # check whether the tensor is a buffer or parameter
- is_param = isinstance(tensor, nn.parameter.Parameter)
-
- # get sharding spec
- dist_spec = getattr(tensor, 'dist_spec', None)
- pg = getattr(tensor, 'pg', None)
- comp_spec = getattr(tensor, 'comp_spec', None)
-
- # convert the tensor from meta to materialized one
- if tensor.is_meta:
- materialized_tensor = torch.empty_like(tensor, device=device)
- # if this tensor is a meta tensor, it must have an init function
- assert tensor in self._intercepted_nn_init_func_cache
- else:
- materialized_tensor = tensor
-
- # apply init function
- if tensor in self._intercepted_nn_init_func_cache:
- init_func, args, kwargs = self._intercepted_nn_init_func_cache[tensor][-1]
- init_func(materialized_tensor, *args, **kwargs)
-
- # convert it to ColoTensor or ColoParameter
- if is_param:
- tensor = ColoParameter.from_torch_tensor(materialized_tensor, requires_grad=tensor.requires_grad)
- else:
- tensor = ColoTensor.from_torch_tensor(materialized_tensor)
-
- # override the original tensor
- with torch.no_grad():
- setattr(module, name, tensor)
-
- # apply sharding
- if dist_spec:
- tensor.process_group = pg
- tensor.set_tensor_spec(dist_spec, comp_spec)
-
- _init_recursively(model)
-
- return model
diff --git a/colossalai/zero/gemini/gemini_ddp.py b/colossalai/zero/gemini/gemini_ddp.py
index 878c25be7094..7e23fdb425f8 100644
--- a/colossalai/zero/gemini/gemini_ddp.py
+++ b/colossalai/zero/gemini/gemini_ddp.py
@@ -2,13 +2,14 @@
from collections import OrderedDict
from contextlib import nullcontext
from functools import partial
-from typing import Dict, Iterator, List, Optional, Union, Tuple, Set
+from typing import Dict, Iterator, List, Optional, Set, Tuple, Union
import torch
import torch.distributed as dist
import torch.nn as nn
from colossalai.checkpoint_io.utils import calculate_tensor_size
+from colossalai.lazy import LazyTensor
from colossalai.logging import get_dist_logger
from colossalai.nn.parallel.data_parallel import ColoDDP, _cast_float, free_storage
from colossalai.tensor import ProcessGroup as ColoProcessGroup
@@ -16,7 +17,6 @@
from colossalai.tensor.colo_parameter import ColoParameter, ColoTensor, ColoTensorSpec
from colossalai.tensor.param_op_hook import ColoParamOpHookManager
from colossalai.utils import get_current_device, is_ddp_ignored
-from colossalai.utils.model.experimental import LazyTensor
from .chunk import Chunk, ChunkManager, TensorState, init_chunk_manager
from .gemini_hook import GeminiZeROHook
@@ -51,6 +51,7 @@ class ZeroDDP(ColoDDP):
strict_ddp_mode (bool): If set to True, there is no tensor sharding, each tensor is replicated.
Defaults to False. Users can set it to True, when they clearly know that they only need DDP.
scatter_after_inference (bool): If set to True, the model will be scattered after inference. This will save memory but slow down the consecutive inference.
+ mixed_precision (torch.dtype): If set to torch.float16, the model will be trained in fp16. Otherwise, the model will be trained in bf16. Defaults to torch.float16.
"""
def __init__(self,
@@ -59,7 +60,9 @@ def __init__(self,
pin_memory: bool = False,
force_outputs_fp32: bool = False,
strict_ddp_mode: bool = False,
- scatter_after_inference: bool = True) -> None:
+ scatter_after_inference: bool = True,
+ mixed_precision: torch.dtype = torch.float16) -> None:
+ assert mixed_precision in (torch.float16, torch.bfloat16)
self.gemini_manager = gemini_manager
self.chunk_manager: ChunkManager = gemini_manager.chunk_manager
self.force_outputs_fp32 = force_outputs_fp32
@@ -71,6 +74,7 @@ def __init__(self,
self.param2name: Dict[nn.Parameter, str] = dict()
self.name2param: Dict[str, nn.Parameter] = dict()
self.scatter_after_inference = scatter_after_inference
+ self.mixed_precision = mixed_precision
self._logger = get_dist_logger()
@@ -96,34 +100,38 @@ def __init__(self,
param_name = m_name + '.' + p_name if m_name else p_name
self.name2param[param_name] = p_var
super().__init__(module, process_group=ColoProcessGroup())
- self._non_persistent_buffers_set=self._get_non_persistent_buffers_set(module)
+ self._non_persistent_buffers_set = self._get_non_persistent_buffers_set(module)
self._cast_buffers()
- def _get_non_persistent_buffers_set(self, module, memo: Optional[Set[nn.Module]] = None, prefix: str = '', remove_duplicate: bool = True):
-
- r"""
- Args:
- memo: a memo to store the set of modules already added to the result
- prefix: a prefix that will be added to the name of the module
- remove_duplicate: whether to remove the duplicated module instances in the result
- or not
- """
-
- if memo is None:
- memo = set()
- self_non_persistent_set = set()
- if module not in memo:
- if remove_duplicate:
- memo.add(module)
- self_non_persistent_set = set(map(lambda key: prefix + ('.' if prefix else '') + key, module._non_persistent_buffers_set))
- for name, sub_module in module._modules.items():
- if sub_module is None:
- continue
- submodule_prefix = prefix + ('.' if prefix else '') + name
- child_non_persistent_set = self._get_non_persistent_buffers_set(sub_module, memo, submodule_prefix, remove_duplicate)
- self_non_persistent_set = set.union(self_non_persistent_set, child_non_persistent_set)
- return self_non_persistent_set
-
+ def _get_non_persistent_buffers_set(self,
+ module,
+ memo: Optional[Set[nn.Module]] = None,
+ prefix: str = '',
+ remove_duplicate: bool = True):
+ r"""
+ Args:
+ memo: a memo to store the set of modules already added to the result
+ prefix: a prefix that will be added to the name of the module
+ remove_duplicate: whether to remove the duplicated module instances in the result
+ or not
+ """
+
+ if memo is None:
+ memo = set()
+ self_non_persistent_set = set()
+ if module not in memo:
+ if remove_duplicate:
+ memo.add(module)
+ self_non_persistent_set = set(
+ map(lambda key: prefix + ('.' if prefix else '') + key, module._non_persistent_buffers_set))
+ for name, sub_module in module._modules.items():
+ if sub_module is None:
+ continue
+ submodule_prefix = prefix + ('.' if prefix else '') + name
+ child_non_persistent_set = self._get_non_persistent_buffers_set(sub_module, memo, submodule_prefix,
+ remove_duplicate)
+ self_non_persistent_set = set.union(self_non_persistent_set, child_non_persistent_set)
+ return self_non_persistent_set
def _post_forward(self):
"""This function is only triggered for inference.
@@ -147,7 +155,7 @@ def forward(self, *args, **kwargs):
assert not self.gemini_manager.need_warmup or not self.gemini_manager.is_warmup(
), "You should run a completed iteration as your warmup iter"
- args, kwargs = _cast_float(args, torch.half), _cast_float(kwargs, torch.half)
+ args, kwargs = _cast_float(args, self.mixed_precision), _cast_float(kwargs, self.mixed_precision)
self.module.zero_grad(set_to_none=True)
if not grad_flag:
outputs = self._inference_forward(*args, **kwargs)
@@ -566,14 +574,14 @@ def _init_chunks(self, param_order, strict_ddp_mode: bool, cpu_offload: bool, pi
# move ignored parameters to CUDA
if is_ddp_ignored(p):
- p.data = p.data.to(device=get_current_device(), dtype=torch.float16)
+ p.data = p.data.to(device=get_current_device(), dtype=self.mixed_precision)
continue
# create a fp32 parameter
fp32_data = p.data.float()
fp32_p = ColoTensor(fp32_data, spec=ColoTensorSpec(p.process_group))
# create a fp16 parameter
- p.data = p.data.half()
+ p.data = p.data.to(self.mixed_precision)
# register the fp16 parameter and fp32 parameter in the chunk manager
dp_world_size = p.process_group.dp_world_size()
@@ -609,7 +617,7 @@ def _cast_buffers(self):
buffer.materialize()
buffer.data = buffer.cuda()
if torch.is_floating_point(buffer):
- buffer.data = buffer.half()
+ buffer.data = buffer.to(self.mixed_precision)
def _preprocess_param(self, p: Union[nn.Parameter, ColoParameter, 'LazyTensor']) -> None:
"""Convert parameter to ColoParameter in-place.
@@ -732,6 +740,7 @@ def __init__(self,
hidden_dim: Optional[int] = None,
min_chunk_size_mb: float = 32,
memstats: Optional[MemStats] = None,
+ mixed_precision: torch.dtype = torch.float16,
verbose: bool = False) -> None:
"""
A torch.Module wrapper using ZeRO-DP and Gemini.
@@ -772,5 +781,10 @@ def __init__(self,
strict_ddp_flag=strict_ddp_mode,
verbose=verbose)
gemini_manager = GeminiManager(placement_policy, chunk_manager, memstats)
- super().__init__(module, gemini_manager, pin_memory, force_outputs_fp32, strict_ddp_mode,
- scatter_after_inference)
+ super().__init__(module,
+ gemini_manager,
+ pin_memory,
+ force_outputs_fp32,
+ strict_ddp_mode,
+ scatter_after_inference,
+ mixed_precision=mixed_precision)
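The hard-coded `.half()` casts in `ZeroDDP` are replaced by casts to a configurable `mixed_precision` dtype. A standalone sketch of the effect on parameters and buffers (not the actual chunk-manager code path):

```python
import torch
import torch.nn as nn

def cast_for_mixed_precision(module: nn.Module,
                             mixed_precision: torch.dtype = torch.float16) -> nn.Module:
    # fp16 or bf16 instead of an unconditional .half(); non-float buffers pass through
    assert mixed_precision in (torch.float16, torch.bfloat16)
    for p in module.parameters():
        p.data = p.data.to(mixed_precision)
    for buf in module.buffers():
        if torch.is_floating_point(buf):
            buf.data = buf.data.to(mixed_precision)
    return module

m = cast_for_mixed_precision(nn.Linear(4, 4), torch.bfloat16)
assert m.weight.dtype is torch.bfloat16
```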
diff --git a/colossalai/zero/gemini/gemini_optimizer.py b/colossalai/zero/gemini/gemini_optimizer.py
index 71c4f65cb8d2..267deb1e8699 100644
--- a/colossalai/zero/gemini/gemini_optimizer.py
+++ b/colossalai/zero/gemini/gemini_optimizer.py
@@ -1,7 +1,6 @@
# this code is inspired by the DeepSpeed library and implemented with our own design from scratch
import math
import warnings
-from enum import Enum
from typing import Any, Dict, Set, Tuple
import torch
@@ -9,7 +8,7 @@
from torch.nn import Parameter
from torch.optim import Optimizer
-from colossalai.amp.naive_amp.grad_scaler import DynamicGradScaler
+from colossalai.amp.naive_amp.mixed_precision_mixin import BF16MixedPrecisionMixin, FP16MixedPrecisionMixin
from colossalai.logging import get_dist_logger
from colossalai.nn.optimizer import ColossalaiOptimizer, CPUAdam, FusedAdam, HybridAdam
from colossalai.utils import disposable, get_current_device, is_ddp_ignored
@@ -22,9 +21,26 @@
_AVAIL_OPTIM_LIST = {FusedAdam, CPUAdam, HybridAdam}
-class OptimState(Enum):
- SCALED = 0
- UNSCALED = 1
+class GeminiFP16MixedPrecisionMixin(FP16MixedPrecisionMixin):
+
+ def __init__(self,
+ module: ZeroDDP,
+ initial_scale: float = 2**16,
+ min_scale: float = 1,
+ growth_factor: float = 2,
+ backoff_factor: float = 0.5,
+ growth_interval: int = 1000,
+ hysteresis: int = 2,
+ max_scale: float = 2**32) -> None:
+ super().__init__(initial_scale, min_scale, growth_factor, backoff_factor, growth_interval, hysteresis,
+ max_scale)
+ self.module = module
+
+ def check_local_overflow(self) -> bool:
+ return self.module.overflow_counter > 0
+
+ def pre_zero_grad(self) -> None:
+ self.module.overflow_counter = 0
class ZeroOptimizer(ColossalaiOptimizer):
@@ -79,7 +95,6 @@ def __init__(self,
self.module = module
self.gemini_manager = module.gemini_manager
self.chunk_manager: ChunkManager = self.gemini_manager.chunk_manager
- self.optim_state = OptimState.UNSCALED
self.param_to_range: Dict[Parameter, Tuple[int, int]] = dict()
self.param_to_chunk32: Dict[Parameter, Chunk] = dict()
self.chunk16_set: Set[Chunk] = set()
@@ -107,15 +122,20 @@ def __init__(self,
self.__init__optimizer()
- # Grad scaler
- self.grad_scaler = DynamicGradScaler(initial_scale=initial_scale,
- min_scale=min_scale,
- growth_factor=growth_factor,
- backoff_factor=backoff_factor,
- growth_interval=growth_interval,
- hysteresis=hysteresis,
- max_scale=max_scale)
- self._found_overflow: torch.Tensor = torch.zeros(1, dtype=torch.int64, device=get_current_device())
+ if module.mixed_precision is torch.float16:
+ self.mix_precision_mixin = GeminiFP16MixedPrecisionMixin(module,
+ initial_scale=initial_scale,
+ min_scale=min_scale,
+ growth_factor=growth_factor,
+ backoff_factor=backoff_factor,
+ growth_interval=growth_interval,
+ hysteresis=hysteresis,
+ max_scale=max_scale)
+ elif module.mixed_precision is torch.bfloat16:
+ self.mix_precision_mixin = BF16MixedPrecisionMixin()
+ else:
+ raise RuntimeError(f"Unsupported mixed precision type: {module.mixed_precision}")
+
self._logger = get_dist_logger()
self.gpu_margin_mem_ratio: float = float(gpu_margin_mem_ratio)
@@ -151,15 +171,6 @@ def _update_fp16_params(self):
for chunk16 in self.chunk16_set:
chunk16.optim_update()
- def _check_overflow(self):
- # clear previous overflow record
- self._found_overflow.fill_(self.module.overflow_counter)
-
- # all-reduce across global group
- dist.all_reduce(self._found_overflow)
-
- return self._found_overflow.item() > 0
-
def _clear_global_norm(self) -> None:
for c16 in self.chunk16_set:
c16.l2_norm = None
@@ -190,40 +201,25 @@ def _calc_global_norm(self) -> float:
return global_norm
def _get_combined_scale(self):
- loss_scale = 1
-
- if self.optim_state == OptimState.SCALED:
- loss_scale = self.loss_scale
- self.optim_state = OptimState.UNSCALED
+ div_scale = self.mix_precision_mixin.get_grad_div_scale()
- combined_scale = loss_scale
if self.clipping_flag:
total_norm = self._calc_global_norm()
- clip = ((total_norm / loss_scale) + 1e-6) / self.max_norm
+ clip = ((total_norm / div_scale) + 1e-6) / self.max_norm
if clip > 1:
- combined_scale = clip * loss_scale
+ div_scale = clip * div_scale
- if combined_scale == 1:
- return -1
- else:
- return combined_scale
-
- @property
- def loss_scale(self):
- return self.grad_scaler.scale.item()
+ return -1 if div_scale == 1.0 else div_scale
def zero_grad(self, *args, **kwargs):
- self.module.overflow_counter = 0
+ self.mix_precision_mixin.pre_zero_grad()
return self.optim.zero_grad(set_to_none=True)
def step(self, *args, **kwargs):
self._maybe_move_fp32_params()
self._set_grad_ptr()
- found_inf = self._check_overflow()
- if found_inf:
- self.optim_state = OptimState.UNSCALED # no need to unscale grad
- self.grad_scaler.update(found_inf) # update gradient scaler
+ if self.mix_precision_mixin.should_skip_step():
if self.verbose:
self._logger.info(f'Found overflow. Skip step')
self._clear_global_norm() # clear recorded norm
@@ -234,7 +230,6 @@ def step(self, *args, **kwargs):
# get combined scale. combined scale = loss scale * clipping norm
# so that gradient = gradient / combined scale
combined_scale = self._get_combined_scale()
- self.grad_scaler.update(found_inf)
ret = self.optim.step(div_scale=combined_scale, *args, **kwargs)
self._register_states()
@@ -246,8 +241,7 @@ def clip_grad_norm(self, model: torch.nn.Module, max_norm: float, norm_type: flo
raise NotImplementedError
def backward(self, loss: torch.Tensor):
- loss = self.loss_scale * loss
- self.optim_state = OptimState.SCALED
+ loss = self.mix_precision_mixin.pre_backward(loss)
self.module.backward(loss)
def backward_by_grad(self, tensor: torch.Tensor, grad: torch.Tensor):
@@ -255,7 +249,7 @@ def backward_by_grad(self, tensor: torch.Tensor, grad: torch.Tensor):
# It receives the scaled grad from the previous rank
# No need to scale the grad again
# Need to unscale when optimizing
- self.optim_state = OptimState.SCALED
+ grad = self.mix_precision_mixin.pre_backward_by_grad(grad)
self.module.backward_by_grad(tensor, grad)
def _maybe_move_fp32_params(self):
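The ad-hoc grad scaler and `OptimState` flags are replaced by a mixed-precision mixin selected from the module's dtype: fp16 gets dynamic loss scaling, while bf16 gets a no-op mixin since it keeps fp32's exponent range. A simplified, illustrative sketch of that protocol (the class bodies here are not the real implementations):

```python
import torch

class FP16MixinSketch:
    def __init__(self, initial_scale: float = 2**16):
        self.scale = float(initial_scale)
        self.overflow = False
    def pre_backward(self, loss):
        return loss * self.scale      # scale the loss so fp16 grads don't underflow
    def should_skip_step(self):
        return self.overflow          # on inf/nan: skip the step and shrink the scale
    def get_grad_div_scale(self):
        return self.scale             # grads are divided by this before the update

class BF16MixinSketch:
    def pre_backward(self, loss):
        return loss                   # no scaling needed
    def should_skip_step(self):
        return False
    def get_grad_div_scale(self):
        return 1.0

def select_mixin(mixed_precision: torch.dtype):
    if mixed_precision is torch.float16:
        return FP16MixinSketch()
    if mixed_precision is torch.bfloat16:
        return BF16MixinSketch()
    raise RuntimeError(f"Unsupported mixed precision type: {mixed_precision}")
```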
diff --git a/colossalai/zero/legacy/init_ctx/init_context.py b/colossalai/zero/legacy/init_ctx/init_context.py
index a921ca0aa83a..a3fa46b38b5a 100644
--- a/colossalai/zero/legacy/init_ctx/init_context.py
+++ b/colossalai/zero/legacy/init_ctx/init_context.py
@@ -14,7 +14,7 @@
from colossalai.logging import get_dist_logger
from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses
from colossalai.zero.legacy.shard_utils import BaseShardStrategy
-from colossalai.zero.legacy.sharded_model._utils import cast_tensor_to_fp16
+from colossalai.zero.legacy.sharded_model._utils import cast_tensor_to_bf16, cast_tensor_to_fp16
from colossalai.zero.legacy.sharded_model.sharded_model_v2 import ShardedModelV2
from colossalai.zero.legacy.sharded_param import ShardedParamV2
@@ -55,6 +55,7 @@ class ZeroInitContext(InsertPostInitMethodToModuleSubClasses):
seed (int, optional): Random seed for weight initialization
shard_param (bool, optional): Is param sharded after exiting the context. Defaults to False.
default_dtype (torch.dtype, optional): If it's not None, parameters will be initialized as ``default_dtype`` then converted to fp16.
+ bf16 (bool, optional): If it's True, parameters will be initialized as ``torch.bfloat16``. Otherwise, parameters will be initialized as ``torch.float16``. Defaults to False.
model_numel_tensor (torch.Tensor, optional): A tensor which will store the number of elements of model. Defaults to torch.zeros(1, dtype=torch.int).
"""
@@ -64,6 +65,7 @@ def __init__(self,
seed: int = 2**10 - 1,
shard_param: bool = False,
default_dtype: Optional[torch.dtype] = None,
+ bf16: bool = False,
model_numel_tensor: torch.Tensor = torch.zeros(1, dtype=torch.long)):
super().__init__(default_dtype=default_dtype)
@@ -71,6 +73,7 @@ def __init__(self,
self.param_list = []
self.model_numel_tensor = model_numel_tensor
self.seed = seed
+ self.bf16 = bf16
self.dp_process_group = gpc.get_group(ParallelMode.DATA)
self.config = ZeroContextConfig(target_device=target_device, is_replicated=True, shard_param=shard_param)
@@ -183,9 +186,10 @@ def _post_init_method(self, module: torch.nn.Module, *args, **kwargs):
NOTE() The module may be passed to this function multiple times.
"""
self.top_module = module
+ half_dtype = torch.float16 if not self.bf16 else torch.bfloat16
def half_fn(t: torch.Tensor):
- return t.half() if t.is_floating_point() else t
+ return t.to(half_dtype) if t.is_floating_point() else t
for param in module.parameters(recurse=False):
# avoid adapting a param to ShardedParam twice
@@ -226,9 +230,10 @@ def half_fn(t: torch.Tensor):
# We must cast buffers
# If we use BN, buffers may be on CPU and Float
# We must cast them
+ cast_fn = cast_tensor_to_fp16 if not self.bf16 else cast_tensor_to_bf16
for buffer in module.buffers(recurse=False):
buffer.data = buffer.data.to(device=torch.cuda.current_device())
- buffer.data = cast_tensor_to_fp16(buffer.data)
+ buffer.data = cast_fn(buffer.data)
class ZeroContextMgr(metaclass=SingletonMeta):
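The new `bf16` flag selects a single "half" dtype that is applied to both parameters and buffers inside the context. A minimal sketch of the dtype dispatch added above:

```python
import torch

def make_half_fn(bf16: bool):
    half_dtype = torch.bfloat16 if bf16 else torch.float16
    def half_fn(t: torch.Tensor) -> torch.Tensor:
        # only floating-point tensors are down-cast; int/bool pass through unchanged
        return t.to(half_dtype) if t.is_floating_point() else t
    return half_fn

fn = make_half_fn(bf16=True)
assert fn(torch.zeros(1)).dtype is torch.bfloat16
assert fn(torch.zeros(1, dtype=torch.int32)).dtype is torch.int32
```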
diff --git a/colossalai/zero/legacy/sharded_model/_utils.py b/colossalai/zero/legacy/sharded_model/_utils.py
index 2bd01531a78f..f1d642cf3f13 100644
--- a/colossalai/zero/legacy/sharded_model/_utils.py
+++ b/colossalai/zero/legacy/sharded_model/_utils.py
@@ -43,11 +43,19 @@ def cast_tensor_to_fp32(tensor: Union[torch.Tensor, StatefulTensor]) -> torch.Te
if isinstance(tensor, StatefulTensor):
tensor = tensor.payload
- if torch.is_floating_point(tensor) and tensor.dtype is torch.float16:
+ if torch.is_floating_point(tensor) and tensor.dtype in (torch.float16, torch.bfloat16):
return tensor.float()
return tensor
+def cast_tensor_to_bf16(tensor: torch.Tensor) -> torch.Tensor:
+ if isinstance(tensor, StatefulTensor):
+ tensor = tensor.payload
+ if torch.is_floating_point(tensor) and tensor.dtype is torch.float32:
+ return tensor.bfloat16()
+ return tensor
+
+
def apply_to_tensors(x: Any, fn: Callable):
if torch.is_tensor(x):
return fn(x)
diff --git a/colossalai/zero/legacy/sharded_model/sharded_model_v2.py b/colossalai/zero/legacy/sharded_model/sharded_model_v2.py
index b3a83b741825..be3842beb208 100644
--- a/colossalai/zero/legacy/sharded_model/sharded_model_v2.py
+++ b/colossalai/zero/legacy/sharded_model/sharded_model_v2.py
@@ -28,6 +28,7 @@
from ._utils import (
cast_float_arguments,
+ cast_tensor_to_bf16,
cast_tensor_to_fp16,
cast_tensor_to_fp32,
chunk_and_pad,
@@ -74,6 +75,7 @@ class ShardedModelV2(nn.Module):
In this mode, grad will be fp16. Make sure your optimizer supports mixed precision (fp32 param and fp16 grad).
We find that PyTorch's optimizers don't support mixed precision,
so we recommend you enable this only when using our CPUAdam with CPU offload. Defaults to False.
+ bf16 (bool, optional): Whether to use bfloat16 for param and grad. Defaults to False.
"""
def __init__(self,
@@ -86,11 +88,13 @@ def __init__(self,
tensor_placement_policy: str = 'cuda',
gradient_predivide_factor: Optional[float] = 1.0,
reuse_fp16_shard: bool = False,
+ bf16: bool = False,
*args,
**kwargs):
assert not isinstance(module, ShardedModelV2), 'Nested ShardedModelV2 is not supported.'
super().__init__()
self.logger = get_dist_logger()
+ self.bf16 = bf16
# We force users to use ZeroInitContext
for submodule in module.modules():
@@ -232,7 +236,8 @@ def _post_forward_operations(self):
def forward(self, *args: Any, **kwargs: Any) -> torch.Tensor:
self._pre_forward_operations(*args)
- args, kwargs = cast_float_arguments(cast_tensor_to_fp16, *args, **kwargs)
+ cast_fn = cast_tensor_to_bf16 if self.bf16 else cast_tensor_to_fp16
+ args, kwargs = cast_float_arguments(cast_fn, *args, **kwargs)
outputs = self.module(*args, **kwargs)
self._post_forward_operations()
return outputs
diff --git a/colossalai/zero/legacy/sharded_optim/sharded_optim_v2.py b/colossalai/zero/legacy/sharded_optim/sharded_optim_v2.py
index be60209af434..41dd174cb65a 100644
--- a/colossalai/zero/legacy/sharded_optim/sharded_optim_v2.py
+++ b/colossalai/zero/legacy/sharded_optim/sharded_optim_v2.py
@@ -94,6 +94,7 @@ def __init__(self,
super().__init__(optimizer)
self.shard_strategy = sharded_model.shard_strategy
self.model: ShardedModelV2 = sharded_model
+ self.bf16 = sharded_model.bf16
self.gpu_margin_mem_ratio: float = float(gpu_margin_mem_ratio)
assert 0.0 <= self.gpu_margin_mem_ratio <= 1.0, f'gpu_margin_mem_ratio must >=0.0 and <=1.0'
@@ -117,6 +118,7 @@ def __init__(self,
self._found_overflow: Tensor = torch.IntTensor([0]).to(torch.cuda.current_device())
self._logger = get_dist_logger("ShardedOptimizerV2")
self._verbose = verbose
+ self._grad_prepared: bool = False # set to True when _prepare_grads() runs; reset to False on backward
# Store fp32 param shards
self._register_master_weight()
@@ -166,8 +168,10 @@ def zero_grad(self, *args, **kwargs):
self._zero_grad()
def backward(self, loss: Tensor) -> None:
- loss = self.loss_scale * loss
- self.optim_state = OptimState.SCALED
+ if not self.bf16:
+ loss = self.loss_scale * loss
+ self.optim_state = OptimState.SCALED
+ self._grad_prepared = False
self.model.backward(loss)
def backward_by_grad(self, tensor: Tensor, grad: Tensor) -> None:
@@ -175,30 +179,33 @@ def backward_by_grad(self, tensor: Tensor, grad: Tensor) -> None:
# It receives the scaled grad from the previous rank
# No need to scale the grad again
# Need to unscale when optimizing
- self.optim_state = OptimState.SCALED
+ if not self.bf16:
+ self.optim_state = OptimState.SCALED
+ self._grad_prepared = False
self.model.backward_by_grad(tensor, grad)
def clip_grad_norm(self, model: nn.Module, max_norm: float):
- if self.optim_state == OptimState.SCALED:
- self._prepare_grads()
+ self._prepare_grads()
+ if not self.bf16 and self.optim_state == OptimState.SCALED:
self._unscale_grads()
return super().clip_grad_norm(model, max_norm)
def step(self, *args, **kwargs):
+ self._prepare_grads()
# unscale grads if scaled
- if self.optim_state == OptimState.SCALED:
- self._prepare_grads()
+ if not self.bf16 and self.optim_state == OptimState.SCALED:
self._unscale_grads()
self._maybe_move_fp32_shards()
- found_inf = self._check_overflow()
- self.grad_scaler.update(found_inf)
+ if not self.bf16:
+ found_inf = self._check_overflow()
+ self.grad_scaler.update(found_inf)
- if found_inf:
- self._logger.warning('found inf during ShardedOptimV2 step')
- self._zero_grad(recover_data=True)
- return
+ if found_inf:
+ self._logger.warning('found inf during ShardedOptimV2 step')
+ self._zero_grad(recover_data=True)
+ return
self._point_param_fp16_to_master_param()
@@ -304,6 +311,8 @@ def _maybe_move_fp32_shards(self):
state[k] = v.cuda()
def _prepare_grads(self):
+ if self._grad_prepared:
+ return
for group in self.optim.param_groups:
for p in group['params']:
if p.colo_attr.saved_grad.is_null():
@@ -320,6 +329,7 @@ def _prepare_grads(self):
p.grad = p.colo_attr.grad_payload
# Set p.data to empty tensor, in case of memory leaking
p.colo_attr.set_data_none()
+ self._grad_prepared = True
def _point_param_fp16_to_master_param(self):
# assign master param pointers to p.data.
@@ -357,7 +367,8 @@ def _copy_master_param_to_param_fp16(self, p):
torch.empty(p.data.shape, dtype=p.colo_attr.data_payload.dtype, device=p.colo_attr.data_payload.device))
# TODO() optimize this line CPU (fp32) -> GPU (fp16)
- p.colo_attr.sharded_data_tensor.payload_copy(p.half().detach())
+ half_dtype = torch.bfloat16 if self.bf16 else torch.float16
+ p.colo_attr.sharded_data_tensor.payload_copy(p.to(half_dtype).detach())
p.colo_attr.set_data_none()
if p.colo_attr.keep_not_shard and p.colo_attr.is_replicated:
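`_prepare_grads()` is now reachable from both `clip_grad_norm()` and `step()`, so the new `_grad_prepared` flag makes the second call a no-op until the next backward. A standalone sketch of that idempotent-prepare pattern:

```python
class GradPreparerSketch:
    """Illustrative only; stands in for ShardedOptimizerV2's bookkeeping."""

    def __init__(self):
        self._grad_prepared = False

    def backward(self):
        self._grad_prepared = False   # fresh grads; must be prepared again

    def prepare_grads(self):
        if self._grad_prepared:
            return                    # already prepared this iteration
        # ... point p.grad at the saved grad shards ...
        self._grad_prepared = True
```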
diff --git a/colossalai/zero/low_level/low_level_optim.py b/colossalai/zero/low_level/low_level_optim.py
index 3e7661ecab76..d4d03e5b5fcd 100644
--- a/colossalai/zero/low_level/low_level_optim.py
+++ b/colossalai/zero/low_level/low_level_optim.py
@@ -6,7 +6,11 @@
import torch.distributed as dist
from torch.optim import Optimizer
-from colossalai.amp.naive_amp.grad_scaler import DynamicGradScaler
+from colossalai.amp.naive_amp.mixed_precision_mixin import (
+ BF16MixedPrecisionMixin,
+ FP16MixedPrecisionMixin,
+ MixedPrecisionMixin,
+)
from colossalai.context import ParallelMode
from colossalai.core import global_context as gpc
from colossalai.logging import get_dist_logger
@@ -27,6 +31,31 @@
from .bookkeeping import BucketStore, GradientStore, ParameterStore, TensorBucket
+class LowLevelZeroFP16MixedPrecisionMixin(FP16MixedPrecisionMixin):
+
+ def __init__(self,
+ num_working_param_groups: int,
+ grad_store: GradientStore,
+ initial_scale: float = 2**16,
+ min_scale: float = 1,
+ growth_factor: float = 2,
+ backoff_factor: float = 0.5,
+ growth_interval: int = 1000,
+ hysteresis: int = 2,
+ max_scale: float = 2**32) -> None:
+ super().__init__(initial_scale, min_scale, growth_factor, backoff_factor, growth_interval, hysteresis,
+ max_scale)
+ self.num_working_param_groups = num_working_param_groups
+ self.grad_store = grad_store
+
+ def check_local_overflow(self) -> bool:
+ for group_id in range(self.num_working_param_groups):
+ for avg_grad in self.grad_store.get_averaged_gradients_by_group(group_id):
+ if avg_grad is not None and has_inf_or_nan(avg_grad):
+ return True
+ return False
+
+
class LowLevelZeroOptimizer(ColossalaiOptimizer):
"""Optimizer used for ZeRO-1 and ZeRO-2.
"""
@@ -100,17 +129,6 @@ def __init__(
self._reduce_bucket_size = reduce_bucket_size
self._communication_dtype = communication_dtype
- # gradient scaler
- self.grad_scaler = DynamicGradScaler(initial_scale=initial_scale,
- min_scale=min_scale,
- growth_factor=growth_factor,
- backoff_factor=backoff_factor,
- growth_interval=growth_interval,
- hysteresis=hysteresis,
- max_scale=max_scale,
- verbose=verbose)
- self._found_overflow = torch.FloatTensor([0]).to(get_current_device())
-
# gradient clipping
self._clip_grad_norm = clip_grad_norm
@@ -200,14 +218,25 @@ def __init__(
if self._overlap_communication or self._partition_grads:
self._attach_reduction_hook()
+ # initialize mixed precision mixin
+ self.mixed_precision_mixin: Optional[MixedPrecisionMixin] = None
+ if self._dtype is torch.float16:
+ self.mixed_precision_mixin = LowLevelZeroFP16MixedPrecisionMixin(self.num_param_groups,
+ self._grad_store,
+ initial_scale=initial_scale,
+ min_scale=min_scale,
+ growth_factor=growth_factor,
+ backoff_factor=backoff_factor,
+ growth_interval=growth_interval,
+ hysteresis=hysteresis,
+ max_scale=max_scale)
+ elif self._dtype is torch.bfloat16:
+ self.mixed_precision_mixin = BF16MixedPrecisionMixin()
+
@property
def dtype(self):
return self._dtype
- @property
- def loss_scale(self):
- return self.grad_scaler.scale
-
@property
def num_param_groups(self):
return len(self._working_param_groups)
@@ -392,7 +421,8 @@ def _add_to_reduction_bucket(self, param, reduce_rank=None):
################################
def backward(self, loss, retain_graph=False, sync_grad=True):
- loss = self.loss_scale * loss
+ if self.mixed_precision_mixin is not None:
+ loss = self.mixed_precision_mixin.pre_backward(loss)
loss.backward(retain_graph=retain_graph)
# finish gradient reduction
@@ -419,6 +449,8 @@ def zero_grad(self, set_to_none=True):
:param set_to_none: Whether set the gradient to None. Default value is True.
:type set_to_none: bool
"""
+ if self.mixed_precision_mixin is not None:
+ self.mixed_precision_mixin.pre_zero_grad()
for _, param_group in self._working_param_groups.items():
for param in param_group:
if set_to_none:
@@ -435,12 +467,7 @@ def zero_grad(self, set_to_none=True):
def step(self, closure=None):
assert closure is None, 'closure is not supported by step()'
- # check for overflow
- found_inf = self._check_overflow()
- self.grad_scaler.update(found_inf)
-
- # update loss scale if overflow occurs
- if found_inf:
+ if self.mixed_precision_mixin is not None and self.mixed_precision_mixin.should_skip_step():
self._grad_store.reset_all_average_gradients()
if self._verbose:
self._logger.info(f'Found overflow. Skip step')
@@ -507,41 +534,20 @@ def step(self, closure=None):
# Mixed Precision Utilities #
#############################
- def _check_overflow(self):
- # clear previous overflow record
- self._found_overflow.fill_(0.0)
-
- # check for overflow
- for group_id in range(len(self._working_param_groups)):
- for avg_grad in self._grad_store.get_averaged_gradients_by_group(group_id):
- if avg_grad is not None and has_inf_or_nan(avg_grad):
- self._found_overflow.fill_(1.0)
- break
-
- # all-reduce across dp group
- dist.all_reduce(self._found_overflow, op=dist.ReduceOp.MAX, group=self._dp_torch_group)
-
- # all-reduce over model parallel group
- if self._mp_torch_group:
- dist.all_reduce(self._found_overflow, op=dist.ReduceOp.MAX, group=self._mp_torch_group)
-
- if self._found_overflow.item() > 0:
- return True
- else:
- return False
-
def _unscale_and_clip_grads(self, grad_groups_flat, total_norm):
# compute combined scale factor for this group
- combined_scale = self.loss_scale
+ div_scale = 1.0
+ if self.mixed_precision_mixin is not None:
+ div_scale = self.mixed_precision_mixin.get_grad_div_scale()
if self._clip_grad_norm > 0.:
# norm is in fact norm*scale
- clip = ((total_norm / self.loss_scale) + 1e-6) / self._clip_grad_norm
+ clip = ((total_norm / div_scale) + 1e-6) / self._clip_grad_norm
if clip > 1:
- combined_scale = clip * self.loss_scale
+ div_scale = clip * div_scale
for grad in grad_groups_flat:
- grad.data.mul_(1. / combined_scale)
+ grad.data.mul_(1. / div_scale)
############################
# Gradient Synchronization #
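With the mixin in place, unscaling and clipping are folded into a single per-gradient multiply, as in `_unscale_and_clip_grads` above (`total_norm` is computed on the still-scaled grads, so the norm is in fact norm * scale). A worked standalone sketch:

```python
import torch

def unscale_and_clip(grads, total_norm, div_scale=1.0, max_norm=0.0):
    if max_norm > 0.:
        clip = ((total_norm / div_scale) + 1e-6) / max_norm   # compare the *unscaled* norm
        if clip > 1:
            div_scale = clip * div_scale                      # fold clipping into the divisor
    for g in grads:
        g.data.mul_(1. / div_scale)

g = torch.full((4,), 8.0)                 # ||g|| = 16 with loss scale 2 -> true norm 8
unscale_and_clip([g], total_norm=16.0, div_scale=2.0, max_norm=4.0)
print(g.norm())                           # ~4.0: unscaled and clipped to max_norm
```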
diff --git a/docs/source/en/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md b/docs/source/en/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
index d5edd135c079..bfa5539fe3a6 100644
--- a/docs/source/en/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
+++ b/docs/source/en/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
@@ -137,3 +137,4 @@ criterion = MoeLoss(
Finally, just use trainer or engine in `colossalai` to do your training.
Otherwise, you should take care of gradient by yourself.
+
diff --git a/docs/source/en/concepts/colossalai_overview.md b/docs/source/en/concepts/colossalai_overview.md
index 38b682d49e62..7617c62a4e00 100644
--- a/docs/source/en/concepts/colossalai_overview.md
+++ b/docs/source/en/concepts/colossalai_overview.md
@@ -19,7 +19,7 @@ We aim to make Colossal-AI easy to use and non-intrusive to user code. There is
1. Prepare a configuration file that specifies the features you want to use and your parameters.
2. Initialize distributed backend with `colossalai.launch`
-3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.initialize`.
+3. Inject the training features into your training components (e.g. model, optimizer) with `colossalai.booster`.
4. Run training and testing
We will cover the whole workflow in the `basic tutorials` section.
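For reference, a minimal sketch of steps 2 and 3 with the `booster` API (the `TorchDDPPlugin` choice and the toy model are illustrative assumptions, not prescribed by this page):

```python
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch(config={})    # step 2: initialize the distributed backend

model = nn.Linear(16, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# step 3: inject the training features into the components
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```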
@@ -34,3 +34,5 @@ The Colossal-AI system will be expanded to include more training skills, these n
4. expansion of existing parallelism methods
We welcome ideas and contribution from the community and you can post your idea for future development in our forum.
+
+
diff --git a/docs/source/zh-Hans/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md b/docs/source/zh-Hans/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
index 276fcc2619e0..8ed9a1e43cdd 100644
--- a/docs/source/zh-Hans/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
+++ b/docs/source/zh-Hans/advanced_tutorials/integrate_mixture_of_experts_into_your_model.md
@@ -9,44 +9,24 @@
- [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961)
- [Go Wider Instead of Deeper](https://arxiv.org/abs/2107.11817)
-(中文版教程将会在近期提供)
-
## Introduction
-Since the advent of Switch Transformer, the AI community has found Mixture of Experts (MoE) a useful technique to enlarge the capacity of deep learning models.
-
-Colossal-AI provides an early access version of parallelism specifically designed for MoE models.
-The most prominent advantage of MoE in Colossal-AI is convenience.
-We aim to help our users to easily combine MoE with model parallelism and data parallelism.
-
-However, the current implementation has two main drawbacks now.
-The first drawback is its poor efficiency in large batch size and long sequence length training.
-The second drawback is incompatibility with tensor parallelism.
-We are working on system optimization to overcome the training efficiency problem.
-The compatibility problem with tensor parallelism requires more adaptation, and we will tackle this issue in the future.
-
-Here, we will introduce how to use MoE with model parallelism and data parallelism.
-
-## Table of Content
-In this tutorial we will cover:
-1. Set up MoE running environment
-2. Create MoE layer
-3. Train your model
+自从`Switch Transformer`出现以来,人工智能社区发现专家混合(MoE)是一种扩大深度学习模型容量的有用技术。
+
+Colossal-AI 提供了专为 MoE 模型设计的并行的早期访问版本。Colossal-AI 中 MoE 最突出的优势就是方便。我们的目标是帮助用户轻松地将 MoE 与模型并行和数据并行结合起来。
+
+但是,当前的实现有两个主要缺点:第一,它在大批量和长序列长度的训练中效率较低;第二,它与张量并行不兼容。我们正在进行系统优化以解决训练效率问题;与张量并行的兼容性问题需要更多的适配,我们将在未来解决。
+
+在这里,我们将介绍如何结合模型并行和数据并行来使用 MoE。
-We provided the [example code](https://github.com/hpcaitech/ColossalAI-Examples/tree/main/image/widenet) for this tutorial in [ColossalAI-Examples](https://github.com/hpcaitech/ColossalAI-Examples).
-This example uses [WideNet](https://arxiv.org/abs/2107.11817) as an example of MoE-based model.
+## 目录
+在本教程中,我们将介绍:
+1. [搭建MoE运行环境](#搭建moe运行环境)
+2. [创建MoE层](#创建moe层)
+3. [定义训练模型](#定义训练模型)
+
+我们在 [ColossalAI-Examples](https://github.com/hpcaitech/ColossalAI-Examples) 中为本教程提供了[示例代码](https://github.com/hpcaitech/ColossalAI-Examples/tree/main/image/widenet)。
+该示例使用 [WideNet](https://arxiv.org/abs/2107.11817) 作为基于 MoE 的模型示例。
-## Set up MoE running environment
-In your project folder, create a `config.py`.
-
-This file is to specify some features you may want to use to train your model.
-In order to enable MoE, you need to add a dict called parallel and specify the value of key moe.
-You can assign a value for the key size of moe, which represents the model parallel size of experts (i.e. the number of experts in one group to parallelize training).
-
-For example, if the size is 4, 4 processes will be assigned to 4 consecutive GPUs and these 4 processes form a moe model parallel group.
-Each process on the 4 GPUs will only get a portion of experts. Increasing the model parallel size will reduce communication cost, but increase computation cost in each GPU and activation cost in memory.
-The total data parallel size is auto-detected and set as the number of GPUs by default.
+## 搭建MoE运行环境
+在您的项目文件夹中,创建`config.py`文件。在该文件中,您可以指定希望用于训练模型的一些功能。为了启用MoE,您需要在`config.py`中定义`parallel`字段,并指定`moe`的值。`moe`的值表示专家的模型并行大小,即一个组内并行训练的专家数量。例如,`moe`设置为4时,4个进程将被分配给4个连续的GPU,这4个进程组成一个moe模型并行组,每个进程只会得到一部分专家。增加moe并行的大小会降低通信成本,但会增加每个GPU的计算成本和内存中activation的存储成本。总的数据并行大小是自动检测的,默认情况下设置为GPU的数量。
```python
MOE_MODEL_PARALLEL_SIZE = ...
@@ -55,37 +35,29 @@ parallel = dict(
)
```
-If `MOE_MODEL_PARALLEL_SIZE = E` and set the number of experts as `E` where `E` is a constant number, the process flow of forward pass of a transformer encoder in a model parallel group is shown below.
+如果设置`MOE_MODEL_PARALLEL_SIZE = E`,并将专家总数也设为`E`(`E`为一个常数),那么一个模型并行组中transformer编码器前向传播的处理流程如下图所示。
MoE Transformer, image source: GShard
-Since all experts are allocated to all GPUs in a model parallel group and a GPU only owns a portion of experts,
-original data parallel groups are no longer correct for the parameters of experts during gradient handling in backward pass anymore.
-So we create a new kind of parallel group called moe data parallel group.
-The difference among different kinds of parallel group, when the configuration is set as `WORLD_SIZE=4`,
-`MOE_MODEL_PARALLEL_SIZE=2`, is shown here.
+由于所有专家都被分配到模型并行组中的各个GPU上,且每个GPU只拥有一部分专家,原始的数据并行组在反向传播的梯度处理中不再适用于专家参数。因此我们创建了一种新的并行组,叫做moe数据并行组。当配置为`WORLD_SIZE=4`、`MOE_MODEL_PARALLEL_SIZE=2`时,不同并行组的区别如下图所示。
-MoE process group
+MoE进程组
+至于梯度处理,我们提供了`MoeGradientHandler`来对模型的每个参数进行all-reduce。如果您使用`colossalai.initialize`函数创建训练引擎,MoE梯度处理程序将自动添加到您的引擎中;否则,您应该自行处理梯度。MoE运行环境的所有参数都保存在`colossalai.global_variables.moe_env`中,您可以访问这些配置参数来检查您的设置是否正确。
-As for gradient handling, we provide MoeGradientHandler to all-reduce every parameter of the model.
-If you use `colossalai.initialize` function to create your training engine, the MoE gradient handler will be added to your engine automatically.
-Otherwise, you should take care of gradient by yourself.
-All parameters of MoE running environment are stored in colossalai.global_variables.moe_env.
-You can access your configuration parameters to check whether your setup is correct.
```python
from colossalai.global_variables import moe_env
```
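例如,您可以在初始化后打印相关字段做一次快速检查(下面的属性名仅为示意,具体请以 `moe_env` 的实际定义为准):

```python
from colossalai.global_variables import moe_env

# 假设 moe_env 暴露 model_parallel_size 与 data_parallel_size 属性(仅为示意)
print(moe_env.model_parallel_size, moe_env.data_parallel_size)
```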
-## Create MoE layer
-You can create a MoE layer from `colossalai.nn.moe`.
-But before doing that, you should set up random seeds for all processes like this.
+## 创建MoE层
+
+您可以从`colossalai.nn.moe`创建MoE层。但在此之前,您应该为所有进程设置随机种子。
```python
from colossalai.context.random import moe_set_seed
@@ -95,10 +67,7 @@ moe_set_seed(42)
model = Widenet(num_experts=4, capacity_factor=1.2)
```
-`moe_set_seed` will set different seed for different processes in a moe model parallel group.
-This helps initialize parameters in experts.
-Then create an instance of experts and an instance of router.
-Here is the example in model zoo.
+`moe_set_seed` 会为一个moe模型并行组中的不同进程设置不同的种子(这有助于初始化专家中的参数)。随后创建一个专家实例和一个路由器实例,示例如下。
```python
from colossalai.nn.layer.moe import Experts, MoeLayer, Top2Router, NormalNoiseGenerator
@@ -118,16 +87,11 @@ ffn=MoeLayer(dim_model=d_model, num_experts=num_experts,
router=shared_router, experts=shared_experts)
```
-Inside the initialization of Experts, the local expert number of each GPU will be calculated automatically. You just need to specify the class of each expert and its parameters used in its initialization. As for routers, we have provided top1 router and top2 router. You can find them in colossalai.nn.layer.moe. After creating the instance of experts and router, the only thing initialized in Moelayer is gate module. More definitions of each class can be found in our API document and code.
-
+在Experts的初始化中,会自动计算每个GPU上的本地expert数量,您只需指定每个专家的类型及其初始化时使用的参数。至于路由器,我们提供了`Top1Router`和`Top2Router`,您可以在`colossalai.nn.layer.moe`中找到它们。在创建experts和router的实例之后,`MoeLayer`中唯一需要初始化的就是`gate`模块。关于每个类的更多定义,您可以参考我们的API文档和代码。
-## Train Your Model
-Do not to forget to use `colossalai.initialize` function in `colossalai` to add gradient handler for the engine.
-We handle the back-propagation of MoE models for you.
-In `colossalai.initialize`, we will automatically create a `MoeGradientHandler` object to process gradients.
-You can find more information about the handler `MoeGradientHandler` in colossal directory.
+## 定义训练模型
-The loss criterion should be wrapped by `Moeloss` to add auxiliary loss of MoE. Example is like this.
+请不要忘记使用colossalai中的`colossalai.initialize`函数为引擎添加梯度处理程序,以处理MoE模型的反向传播。在`colossalai.initialize`中,我们会自动创建一个`MoeGradientHandler`对象来处理梯度。您可以在colossalai目录中找到有关`MoeGradientHandler`的更多信息。为了添加MoE的辅助损失,损失函数应使用`MoeLoss`封装,示例如下。
```python
criterion = MoeLoss(
aux_weight=0.01,
@@ -135,6 +99,6 @@ criterion = MoeLoss(
label_smoothing=0.1
)
```
+最后,您只需使用 `colossalai` 中的`trainer`或`engine`进行训练即可。
-Finally, just use trainer or engine in `colossalai` to do your training.
-Otherwise, you should take care of gradient by yourself.
+
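下面是一个基于 `engine` 的最小训练循环示意(假设 `engine` 与 `train_dataloader` 已由 `colossalai.initialize` 返回,变量名仅为示意):

```python
engine.train()
for img, label in train_dataloader:
    img, label = img.cuda(), label.cuda()

    engine.zero_grad()
    output = engine(img)
    loss = engine.criterion(output, label)    # criterion 已被 MoeLoss 封装
    engine.backward(loss)
    engine.step()
```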
diff --git a/docs/source/zh-Hans/concepts/colossalai_overview.md b/docs/source/zh-Hans/concepts/colossalai_overview.md
index cfb35e59e64a..8b28baf8e3d5 100755
--- a/docs/source/zh-Hans/concepts/colossalai_overview.md
+++ b/docs/source/zh-Hans/concepts/colossalai_overview.md
@@ -19,7 +19,7 @@ Colossal-AI 是一个集成的系统,为用户提供一套综合的训练方
1. 准备一个配置文件,指定您要使用的功能和参数。
2. 用 `colossalai.launch` 初始化分布式后端。
-3. 用 `colossalai.initialize` 将训练特征注入您的训练组件(如模型、优化器)中。
+3. 用 `colossalai.booster` 将训练特征注入您的训练组件(如模型、优化器)中。
4. 进行训练和测试.
我们将在`基本教程`部分介绍整个工作流程。
@@ -34,3 +34,5 @@ Colossal-AI 系统将会进一步拓展和优化,包括但不限于:
4. 拓展现有的并行方法
**我们始终欢迎社区的建议和讨论,如果您遇到任何问题,我们将非常愿意帮助您。您可以在GitHub 提 [issue](https://github.com/hpcaitech/ColossalAI/issues) ,或在[论坛](https://github.com/hpcaitech/ColossalAI/discussions)上创建一个讨论主题。**
+
+
diff --git a/tests/test_booster/test_plugin/test_gemini_plugin.py b/tests/test_booster/test_plugin/test_gemini_plugin.py
index c7b3676fb478..d606d6d89bd4 100644
--- a/tests/test_booster/test_plugin/test_gemini_plugin.py
+++ b/tests/test_booster/test_plugin/test_gemini_plugin.py
@@ -8,10 +8,10 @@
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.fx import is_compatible_with_meta
+from colossalai.lazy.lazy_init import LazyInitContext
from colossalai.nn.optimizer import HybridAdam
from colossalai.tensor.colo_parameter import ColoParameter
from colossalai.testing import parameterize, rerun_if_address_is_in_use, spawn
-from colossalai.utils.model.experimental import LazyInitContext
from colossalai.zero import ColoInitContext
from tests.kit.model_zoo import model_zoo
diff --git a/tests/test_utils/test_lazy_init/lazy_init_utils.py b/tests/test_lazy/lazy_init_utils.py
similarity index 85%
rename from tests/test_utils/test_lazy_init/lazy_init_utils.py
rename to tests/test_lazy/lazy_init_utils.py
index aa87d32a808b..85bfd0e27801 100644
--- a/tests/test_utils/test_lazy_init/lazy_init_utils.py
+++ b/tests/test_lazy/lazy_init_utils.py
@@ -1,12 +1,13 @@
import random
+from copy import deepcopy
from typing import Any, Callable, Optional, Tuple
import numpy as np
import torch
from packaging import version
+from colossalai.lazy.lazy_init import LazyInitContext, LazyTensor, _MyTensor
from colossalai.tensor.d_tensor.layout_converter import to_global
-from colossalai.utils.model.experimental import LazyInitContext, LazyTensor, _MyTensor
from tests.kit.model_zoo.registry import ModelAttribute
SUPPORT_LAZY = version.parse(torch.__version__) >= version.parse('1.12.0')
@@ -31,6 +32,9 @@ def assert_model_equal(m1: torch.nn.Module, m2: torch.nn.Module) -> None:
assert n1 == n2
assert torch.equal(t1, t2), f'{n1} {t1} vs {t2}'
+ for p1, p2 in zip(m1.parameters(), m2.parameters()):
+ assert p1.requires_grad == p2.requires_grad
+
def assert_forward_equal(m1: torch.nn.Module, m2: torch.nn.Module, data_gen_fn: Callable[[], dict],
output_transform_fn: Callable[[Any], dict]) -> None:
@@ -65,10 +69,14 @@ def check_lazy_init(entry: TestingEntry, seed: int = 42, verbose: bool = False,
ctx = LazyInitContext()
with ctx:
deferred_model = model_fn()
+ copied_deferred_model = deepcopy(deferred_model)
deferred_model = ctx.materialize(deferred_model, verbose=verbose)
+ copied_deferred_model = ctx.materialize(copied_deferred_model, verbose=verbose)
assert_model_equal(model, deferred_model)
+ assert_model_equal(deferred_model, copied_deferred_model)
if check_forward:
assert_forward_equal(model, deferred_model, data_gen_fn, output_transform_fn)
+ assert_forward_equal(deferred_model, copied_deferred_model, data_gen_fn, output_transform_fn)
if verbose:
print(f'{model.__class__.__name__} pass')
diff --git a/tests/test_utils/test_lazy_init/test_distribute.py b/tests/test_lazy/test_distribute.py
similarity index 97%
rename from tests/test_utils/test_lazy_init/test_distribute.py
rename to tests/test_lazy/test_distribute.py
index fd91e7e912b5..d515b175a9ea 100644
--- a/tests/test_utils/test_lazy_init/test_distribute.py
+++ b/tests/test_lazy/test_distribute.py
@@ -12,7 +12,7 @@
from colossalai.utils.common import print_rank_0
try:
- from colossalai.utils.model.experimental import LazyInitContext, LazyTensor, _MyTensor
+ from colossalai.lazy.lazy_init import LazyInitContext, LazyTensor, _MyTensor
except:
pass
from lazy_init_utils import SUPPORT_LAZY, assert_dist_model_equal, set_seed
diff --git a/tests/test_utils/test_lazy_init/test_models.py b/tests/test_lazy/test_models.py
similarity index 100%
rename from tests/test_utils/test_lazy_init/test_models.py
rename to tests/test_lazy/test_models.py
diff --git a/tests/test_optimizer/test_adam_kernel.py b/tests/test_optimizer/test_adam_kernel.py
new file mode 100644
index 000000000000..2186a421fe00
--- /dev/null
+++ b/tests/test_optimizer/test_adam_kernel.py
@@ -0,0 +1,131 @@
+# This test checks the Adam kernels.
+# The baseline is a pure fp32 torch Adam optimizer.
+import math
+from abc import abstractmethod
+from typing import Type
+
+import pytest
+import torch
+from torch import Tensor
+
+from colossalai.utils import get_current_device, multi_tensor_applier
+
+_FUSED_ALLOWED_P_G_TYPES = [(torch.float, torch.half), (torch.float, torch.float), (torch.half, torch.float),
+ (torch.half, torch.half), (torch.bfloat16, torch.float), (torch.float, torch.bfloat16),
+ (torch.bfloat16, torch.bfloat16)]
+
+_CPU_ALLOWED_P_G_TYPES = [(torch.float, torch.half), (torch.float, torch.float), (torch.half, torch.float),
+ (torch.half, torch.half)]
+
+
+class AdamKernel:
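+    """Wraps one Adam/AdamW implementation behind a uniform in-place update() interface."""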
+
+ def __init__(self, lr: float, beta1: float, beta2: float, eps: float, weight_decay: float, use_adamw: bool) -> None:
+ self.lr = lr
+ self.beta1 = beta1
+ self.beta2 = beta2
+ self.eps = eps
+ self.weight_decay = weight_decay
+ self.use_adamw = use_adamw
+
+ @abstractmethod
+ def update(self, step: int, param: Tensor, grad: Tensor, exp_avg: Tensor, exp_avg_sq: Tensor):
+ pass
+
+
+class TorchAdamKernel(AdamKernel):
+
+ def update(self, step: int, param: Tensor, grad: Tensor, exp_avg: Tensor, exp_avg_sq: Tensor):
+ bias_correction1 = 1 - self.beta1**step
+ bias_correction2 = 1 - self.beta2**step
+
+ if self.weight_decay != 0:
+ if self.use_adamw:
+ # Perform stepweight decay
+ param.mul_(1 - self.lr * self.weight_decay)
+ else:
+ grad = grad.add(param, alpha=self.weight_decay)
+
+ # Decay the first and second moment running average coefficient
+ exp_avg.mul_(self.beta1).add_(grad, alpha=1 - self.beta1)
+ exp_avg_sq.mul_(self.beta2).addcmul_(grad, grad, value=1 - self.beta2)
+ denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(self.eps)
+
+ step_size = self.lr / bias_correction1
+
+ param.addcdiv_(exp_avg, denom, value=-step_size)
+
+
+class FusedAdamKernel(AdamKernel):
+
+ def __init__(self, lr: float, beta1: float, beta2: float, eps: float, weight_decay: float, use_adamw: bool) -> None:
+ super().__init__(lr, beta1, beta2, eps, weight_decay, use_adamw)
+ from colossalai.kernel.op_builder import FusedOptimBuilder
+ fused_optim = FusedOptimBuilder().load()
+ self.fused_adam = fused_optim.multi_tensor_adam
+ self.dummy_overflow_buf = torch.cuda.IntTensor([0])
+
+ def update(self, step: int, param: Tensor, grad: Tensor, exp_avg: Tensor, exp_avg_sq: Tensor):
+ multi_tensor_applier(self.fused_adam, self.dummy_overflow_buf, [[grad], [param], [exp_avg], [exp_avg_sq]],
+ self.lr, self.beta1, self.beta2, self.eps, step, self.use_adamw, True, self.weight_decay,
+ -1)
+
+
+class CPUAdamKernel(AdamKernel):
+
+ def __init__(self, lr: float, beta1: float, beta2: float, eps: float, weight_decay: float, use_adamw: bool) -> None:
+ super().__init__(lr, beta1, beta2, eps, weight_decay, use_adamw)
+ from colossalai.kernel.op_builder import CPUAdamBuilder
+ cpu_optim = CPUAdamBuilder().load()
+
+ self.cpu_adam_op = cpu_optim.CPUAdamOptimizer(lr, beta1, beta2, eps, weight_decay, use_adamw)
+
+ def update(self, step: int, param: Tensor, grad: Tensor, exp_avg: Tensor, exp_avg_sq: Tensor):
+ self.cpu_adam_op.step(step, self.lr, self.beta1, self.beta2, self.eps, self.weight_decay, True, param.view(-1),
+ grad.view(-1), exp_avg.view(-1), exp_avg_sq.view(-1), -1)
+
+
+def check_adam_kernel(kernel: Type[AdamKernel], adamw: bool, weight_decay: float, p_dtype: torch.dtype,
+ g_dtype: torch.dtype, device: torch.device, n_steps: int, rtol: float, atol: float):
+ lr = 1e-3
+ beta1, beta2 = 0.9, 0.999
+ eps = 1e-8
+ torch_adam = TorchAdamKernel(lr, beta1, beta2, eps, weight_decay, adamw)
+ adam_kernel = kernel(lr, beta1, beta2, eps, weight_decay, adamw)
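+    # master_* tensors stay in fp32 and are updated by the torch baseline;
+    # p and g below are cast to p_dtype/g_dtype and fed to the kernel under test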
+ master_p = torch.rand(64, device=device)
+ master_g = torch.rand_like(master_p)
+ master_exp_avg = torch.zeros_like(master_p)
+ master_exp_avg_sq = torch.zeros_like(master_p)
+ p = master_p.clone().to(p_dtype)
+ g = master_g.clone().to(g_dtype)
+ exp_avg = master_exp_avg.clone()
+ exp_avg_sq = master_exp_avg_sq.clone()
+
+ for step in range(1, 1 + n_steps):
+ torch_adam.update(step, master_p, master_g, master_exp_avg, master_exp_avg_sq)
+ adam_kernel.update(step, p, g, exp_avg, exp_avg_sq)
+        # if an overflow occurred, the weight was not updated, so p should contain no NaN
+ assert not torch.isnan(p).any()
+ assert torch.allclose(master_p, p.float(), rtol=rtol, atol=atol)
+
+
+@pytest.mark.parametrize('adamw', [False, True])
+@pytest.mark.parametrize('weight_decay', [0.0, 0.1])
+@pytest.mark.parametrize('p_dtype, g_dtype', _FUSED_ALLOWED_P_G_TYPES)
+def test_fused_adam_kernel(adamw, weight_decay, p_dtype, g_dtype):
+ rtol, atol = 1e-5, 1e-8
+ if p_dtype is torch.float16 or g_dtype is torch.float16:
+ rtol, atol = 1e-3, 1e-3
+ if p_dtype is torch.bfloat16 or g_dtype is torch.bfloat16:
+ rtol, atol = 4e-3, 4e-3
+ check_adam_kernel(FusedAdamKernel, adamw, weight_decay, p_dtype, g_dtype, get_current_device(), 3, rtol, atol)
+
+
+@pytest.mark.parametrize('adamw', [False, True])
+@pytest.mark.parametrize('weight_decay', [0.0, 0.1])
+@pytest.mark.parametrize('p_dtype, g_dtype', _CPU_ALLOWED_P_G_TYPES)
+def test_cpu_adam_kernel(adamw, weight_decay, p_dtype, g_dtype):
+ rtol, atol = 1e-5, 1e-8
+ if p_dtype is torch.float16 or g_dtype is torch.float16:
+ rtol, atol = 1e-3, 1e-3
+ check_adam_kernel(CPUAdamKernel, adamw, weight_decay, p_dtype, g_dtype, torch.device('cpu'), 3, rtol, atol)
diff --git a/tests/test_optimizer/test_adam_optim.py b/tests/test_optimizer/test_adam_optim.py
new file mode 100644
index 000000000000..0f72bc134809
--- /dev/null
+++ b/tests/test_optimizer/test_adam_optim.py
@@ -0,0 +1,86 @@
+from copy import deepcopy
+from typing import Type, Union
+
+import pytest
+import torch
+import torch.nn as nn
+from torch.optim import Adam, AdamW
+
+from colossalai.nn.optimizer import CPUAdam, FusedAdam, HybridAdam
+from tests.kit.model_zoo import model_zoo
+
+_ALLOWED_OPTIM_DEVICES = [
+ (FusedAdam, torch.device('cuda:0')),
+ (CPUAdam, torch.device('cpu')),
+ (CPUAdam, torch.device('cuda:0')),
+ (HybridAdam, torch.device('cpu')),
+ (HybridAdam, torch.device('cuda:0')),
+]
+
+_ALLOWED_P_G_TYPES = [
+ (torch.float, torch.float), # pure fp32
+ (torch.float, torch.half), # fp16 amp
+ (torch.float, torch.bfloat16), # bfloat16 amp
+ # (torch.half, torch.half), # FIXME(ver217): cpu adam kernel does not support pure fp16
+ # (torch.bfloat16, torch.bfloat16), # FIXME(ver217): cpu adam kernel does not support pure bfloat16
+]
+
+N_STEPS = 3
+
+
+def setup_param_groups(bert_model: nn.Module) -> list:
+ no_decay = ["bias", "LayerNorm.weight"]
+ optimizer_grouped_parameters = [
+ {
+ "params": [p for n, p in bert_model.named_parameters() if not any(nd in n for nd in no_decay)],
+ "weight_decay": 0.1,
+ },
+ {
+ "params": [p for n, p in bert_model.named_parameters() if any(nd in n for nd in no_decay)],
+ "weight_decay": 0.0,
+ },
+ ]
+ return optimizer_grouped_parameters
+
+
+def set_grad(model: nn.Module, torch_model: nn.Module, g_dtype: torch.dtype) -> None:
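+    # give both models identical random gradients: fp32 for the torch baseline,
+    # cast to g_dtype for the model under test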
+ for p, torch_p in zip(model.parameters(), torch_model.parameters()):
+ torch_p.grad = torch.rand_like(torch_p)
+ # avoid inconsistent grad and param dtype error
+ orig_p = p.data
+ p.data = torch_p.grad.clone().to(g_dtype)
+ p.grad = p.data
+ p.data = orig_p
+
+
+@pytest.mark.parametrize('optim_cls, device', _ALLOWED_OPTIM_DEVICES)
+@pytest.mark.parametrize('adamw', [False, True])
+@pytest.mark.parametrize('p_dtype, g_dtype', _ALLOWED_P_G_TYPES)
+def test_adam_optim_on_bert(optim_cls: Union[Type[FusedAdam], Type[CPUAdam], Type[HybridAdam]], device: torch.device,
+ adamw: bool, p_dtype: torch.dtype, g_dtype: torch.dtype) -> None:
+ model_fn, *_ = next(iter(model_zoo.get_sub_registry('transformers_bert_for_sequence_classification').values()))
+ torch_model = model_fn().to(device)
+ model = deepcopy(torch_model).to(p_dtype)
+ lr = 1e-3
+ beta1, beta2 = 0.9, 0.999
+ eps = 1e-8
+ torch_optim_cls = AdamW if adamw else Adam
+ torch_optim = torch_optim_cls(setup_param_groups(torch_model), lr=lr, betas=(beta1, beta2), eps=eps)
+ optim = optim_cls(setup_param_groups(model), lr=lr, betas=(beta1, beta2), eps=eps, adamw_mode=adamw)
+
+ rtol, atol = 1e-5, 1e-5
+ if p_dtype is torch.float16 or g_dtype is torch.float16:
+ rtol, atol = 2e-3, 2e-3
+ if p_dtype is torch.bfloat16 or g_dtype is torch.bfloat16:
+ rtol, atol = 4e-3, 4e-3
+
+ for _ in range(N_STEPS):
+ set_grad(model, torch_model, g_dtype)
+ torch_optim.step()
+ optim.step()
+ torch_optim.zero_grad()
+ optim.zero_grad()
+ for p, torch_p in zip(model.parameters(), torch_model.parameters()):
+            # if an overflow occurred, the weight was not updated, so p should contain no NaN
+ assert not torch.isnan(p).any()
+ assert torch.allclose(p.float(), torch_p, rtol=rtol, atol=atol)
diff --git a/tests/test_optimizer/test_cpu_adam.py b/tests/test_optimizer/test_cpu_adam.py
deleted file mode 100644
index 8b3ecf8517f7..000000000000
--- a/tests/test_optimizer/test_cpu_adam.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import math
-
-import torch
-
-from colossalai.testing import clear_cache_before_run, parameterize
-
-
-def torch_adam_update(
- step,
- lr,
- beta1,
- beta2,
- eps,
- weight_decay,
- param,
- grad,
- exp_avg,
- exp_avg_sq,
- use_adamw,
-):
- bias_correction1 = 1 - beta1**step
- bias_correction2 = 1 - beta2**step
-
- if weight_decay != 0:
- if use_adamw:
- # Perform stepweight decay
- param.mul_(1 - lr * weight_decay)
- else:
- grad = grad.add(param, alpha=weight_decay)
-
- # Decay the first and second moment running average coefficient
- exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
- exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
- denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
-
- step_size = lr / bias_correction1
-
- param.addcdiv_(exp_avg, denom, value=-step_size)
-
-
-def assertLess(data_diff, threshold, msg):
- assert data_diff < threshold, msg
-
-
-def assertTrue(condition, msg):
- assert condition, msg
-
-
-@clear_cache_before_run()
-@parameterize('adamw', [True, False])
-@parameterize('step', [1, 2])
-@parameterize('p_dtype', [torch.float, torch.half])
-@parameterize('g_dtype', [torch.float, torch.half])
-def test_cpu_adam(adamw, step, p_dtype, g_dtype):
- lr = 1e-3
- beta1, beta2 = 0.9, 0.999
- eps = 1e-8
- weight_decay = 0
-
- for i in range(3):
- p_data = torch.rand(64, dtype=p_dtype)
- p_data_copy = p_data.clone().float()
- p_grad = torch.rand(64, dtype=g_dtype)
- p_grad_copy = p_grad.clone().float()
- exp_avg = torch.rand(p_data.shape)
- exp_avg_copy = exp_avg.clone()
- exp_avg_sq = torch.rand(p_data.shape)
- exp_avg_sq_copy = exp_avg_sq.clone()
-
- from colossalai.kernel.op_builder import CPUAdamBuilder
- cpu_optim = CPUAdamBuilder().load()
-
- cpu_adam_op = cpu_optim.CPUAdamOptimizer(lr, beta1, beta2, eps, weight_decay, adamw)
-
- cpu_adam_op.step(
- step,
- lr,
- beta1,
- beta2,
- eps,
- weight_decay,
- True,
- p_data.view(-1), # fp32 data
- p_grad.view(-1), # fp32 grad
- exp_avg.view(-1),
- exp_avg_sq.view(-1),
- -1,
- )
-
- torch_adam_update(
- step,
- lr,
- beta1,
- beta2,
- eps,
- weight_decay,
- p_data_copy, # fp32 data
- p_grad_copy, # fp32 grad
- exp_avg_copy,
- exp_avg_sq_copy,
- adamw,
- )
- var = p_data_copy - p_data
- data_diff = torch.max(torch.abs(var))
- threshold = 1e-3
- assertLess(
- data_diff,
- threshold,
- f"p_data diff {data_diff}. failed check, step {step}, lr {lr}, eps "
- f"{eps} beta1 {beta1} beta2 {beta2} weight_decay {weight_decay} p_dtype {p_dtype}, g_dtype {g_dtype}",
- )
- max_grad_diff = torch.max(torch.abs(p_grad_copy - p_grad))
- assertTrue(max_grad_diff < threshold, f"diff {max_grad_diff}")
- max_exp_avg_diff = torch.max(torch.abs(exp_avg_copy - exp_avg))
- assertTrue(max_exp_avg_diff < threshold, f"max_exp_avg_diff {max_exp_avg_diff}")
- max_exp_avg_sq_diff = torch.max(torch.abs(exp_avg_sq_copy - exp_avg_sq))
- assertTrue(max_exp_avg_sq_diff < threshold, f"max_exp_avg_sq_diff {max_exp_avg_sq_diff}")
-
-
-if __name__ == '__main__':
- test_cpu_adam()
diff --git a/tests/test_optimizer/test_fused_adam.py b/tests/test_optimizer/test_fused_adam.py
deleted file mode 100644
index 114d5293dad9..000000000000
--- a/tests/test_optimizer/test_fused_adam.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.optim import AdamW
-from torch.optim.adam import Adam
-
-from colossalai.nn.optimizer.fused_adam import FusedAdam
-from colossalai.testing import clear_cache_before_run, parameterize
-
-
-class FC(nn.Module):
-
- def __init__(self) -> None:
- super().__init__()
- self.fc = nn.Sequential(nn.Linear(64, 64))
-
- def forward(self, x):
- return self.fc(x)
-
-
-@clear_cache_before_run()
-@parameterize('adamw', [False, True])
-@parameterize('p_dtype', [torch.float, torch.half])
-@parameterize('g_dtype', [torch.float, torch.half])
-def test_adam(adamw, p_dtype, g_dtype):
- model = FC().cuda().to(p_dtype)
- state = model.state_dict()
- model_copy = FC().cuda().to(p_dtype)
- model_copy.load_state_dict(state.copy())
-
- if adamw:
- optim = FusedAdam(model.parameters(), lr=1e-3, adamw_mode=True)
- torch_optim = AdamW(model_copy.parameters(), lr=1e-3)
- else:
- optim = FusedAdam(model.parameters(), lr=1e-3)
- torch_optim = Adam(model_copy.parameters(), lr=1e-3)
-
- data = torch.rand(1024, 64).cuda().to(p_dtype)
- data_copy = data.clone()
- label = torch.rand(1024, 64).cuda().to(p_dtype)
-
- for d, l in zip(data, label):
- y = model(d)
- loss = ((l - y)**2).sum()
- optim.zero_grad()
- loss.backward()
- if p_dtype != g_dtype:
- for i in range(len(optim.param_groups[0]['params'])):
- optim.param_groups[0]['params'][i].grad.data = optim.param_groups[0]['params'][i].grad.data.to(g_dtype)
- optim.step()
-
- for d, l in zip(data_copy, label):
- y = model_copy(d)
- loss = ((l - y)**2).sum()
- torch_optim.zero_grad()
- loss.backward()
- torch_optim.step()
-
- assert len(optim.param_groups[0]['params']) == len(torch_optim.param_groups[0]['params'])
-
- for i in range(len(optim.param_groups[0]['params'])):
- if torch.isnan(optim.param_groups[0]['params'][i]).any() \
- or torch.isnan(torch_optim.param_groups[0]['params'][i]).any():
- continue
- assert torch.allclose(optim.param_groups[0]['params'][i], torch_optim.param_groups[0]['params'][i], 2e-3, 2e-3)
diff --git a/tests/test_optimizer/test_fused_adam_kernel.py b/tests/test_optimizer/test_fused_adam_kernel.py
deleted file mode 100644
index 4afa13349c1b..000000000000
--- a/tests/test_optimizer/test_fused_adam_kernel.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-from numpy import dtype
-
-from colossalai.testing import clear_cache_before_run, parameterize
-from colossalai.utils import multi_tensor_applier
-
-
-def torch_adam_update(
- step,
- lr,
- beta1,
- beta2,
- eps,
- weight_decay,
- param,
- grad,
- exp_avg,
- exp_avg_sq,
- use_adamw,
-):
- bias_correction1 = 1 - beta1**step
- bias_correction2 = 1 - beta2**step
-
- if weight_decay != 0:
- if use_adamw:
- # Perform stepweight decay
- param.mul_(1 - lr * weight_decay)
- else:
- grad = grad.add(param, alpha=weight_decay)
-
- # Decay the first and second moment running average coefficient
- exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
- exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
- denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
-
- step_size = lr / bias_correction1
-
- param.addcdiv_(exp_avg, denom, value=-step_size)
-
-
-@clear_cache_before_run()
-@parameterize('adamw', [False, True])
-@parameterize('step', [1, 2])
-@parameterize('p_dtype', [torch.float, torch.half])
-@parameterize('g_dtype', [torch.float, torch.half])
-def test_adam(adamw, step, p_dtype, g_dtype):
- from colossalai.kernel.op_builder import FusedOptimBuilder
- fused_optim = FusedOptimBuilder().load()
- fused_adam = fused_optim.multi_tensor_adam
-
- dummy_overflow_buf = torch.cuda.IntTensor([0])
-
- count = 0
-
- for i in range(3):
- p = torch.rand(64, dtype=p_dtype).cuda()
- p_copy = p.clone().float()
- g = torch.rand(p.shape, dtype=g_dtype).cuda()
- g_copy = g.clone().float()
- m = torch.rand(p.shape).cuda()
- m_copy = m.clone()
- v = torch.rand(p.shape).cuda()
- v_copy = v.clone()
-
- lr = 1e-3
- beta1, beta2 = 0.9, 0.999
- eps = 1e-8
- weight_decay = 0
-
- multi_tensor_applier(fused_adam, dummy_overflow_buf, [[g], [p], [m], [v]], lr, beta1, beta2, eps, step, adamw,
- True, weight_decay, -1)
-
- torch_adam_update(
- step,
- lr,
- beta1,
- beta2,
- eps,
- weight_decay,
- p_copy, # fp32 data
- g_copy, # fp32 grad
- m_copy,
- v_copy,
- adamw,
- )
-
- if torch.isnan(p).any() or torch.isnan(p_copy).any():
- count += 1
- continue
- assert count < 200, "too many nans"
- assert torch.allclose(p.to(torch.float), p_copy.to(torch.float), 1e-5,
- 1e-5), f"failed check, adamw {adamw}, p_dtype {p_dtype}, g_dtype {g_dtype}"
diff --git a/tests/test_optimizer/test_hybrid_adam.py b/tests/test_optimizer/test_hybrid_adam.py
deleted file mode 100644
index d075149dfcb1..000000000000
--- a/tests/test_optimizer/test_hybrid_adam.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.optim import AdamW
-from torch.optim.adam import Adam
-
-from colossalai.nn.optimizer.hybrid_adam import HybridAdam
-from colossalai.testing import clear_cache_before_run, parameterize
-
-RE = 3
-
-
-@clear_cache_before_run()
-@parameterize('adamw', [False, True])
-@parameterize('device', ['cpu', 'cuda:0'])
-@parameterize('p_dtype', [torch.float])
-@parameterize('g_dtype', [torch.float, torch.half])
-def test_adam(adamw, device, p_dtype, g_dtype):
- rng_state = torch.get_rng_state()
- p = nn.Parameter(torch.rand(64).to(device, p_dtype))
- torch.set_rng_state(rng_state)
- p_copy = nn.Parameter(torch.rand(64).to(device).float())
-
- if adamw:
- optim = HybridAdam([p], lr=1e-3, adamw_mode=True)
- torch_optim = AdamW([p_copy], lr=1e-3)
- else:
- optim = HybridAdam([p], lr=1e-3)
- torch_optim = Adam([p_copy], lr=1e-3)
-
- print(f"adaw mode {adamw}, device {device}, p_dtype {p_dtype}, g_dtype {g_dtype}")
- for i in range(RE):
- p.grad = torch.rand(64).to(device, p_dtype)
- p_copy.grad = p.grad.clone().float()
- p.grad.data = p.grad.data.to(g_dtype)
-
- optim.step()
- torch_optim.step()
-
- if torch.isnan(p.data).any() or torch.isnan(p_copy.data).any():
- continue
- assert torch.allclose(p.data, p_copy.data, 1e-4, 1e-2), \
- f"adaw mode {adamw}, device {device}, p_dtype {p_dtype}, g_dtype {g_dtype}"
diff --git a/tests/test_utils/test_lazy_init_ctx.py b/tests/test_utils/test_lazy_init_ctx.py
deleted file mode 100644
index 97efb3367490..000000000000
--- a/tests/test_utils/test_lazy_init_ctx.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import torch
-from colossalai.utils.model.lazy_init_context import LazyInitContext
-from torchvision.models import resnet34
-import random
-import numpy as np
-
-MANUAL_SEED = 0
-random.seed(MANUAL_SEED)
-np.random.seed(MANUAL_SEED)
-torch.manual_seed(MANUAL_SEED)
-
-
-def test_lazy_init_with_meta():
- ctx = LazyInitContext(to_meta=True)
- with ctx:
- model = resnet34(num_classes=10)
-
- for param in model.parameters():
- assert param.is_meta
- for buffer in model.buffers():
- assert buffer.is_meta
-
- ctx.lazy_init_parameters(model)
-
- for name, param in model.named_parameters():
- assert not param.is_meta, name
-
- for buffer in model.buffers():
- assert not buffer.is_meta
-
-
-def test_lazy_init_without_meta():
- ctx = LazyInitContext(to_meta=False)
- with ctx:
- model = resnet34(num_classes=10)
-
- for param in model.parameters():
- assert not param.is_meta
- for buffer in model.buffers():
- assert not buffer.is_meta
-
- conv1_weight_before_init = model.conv1.weight.clone()
- ctx.lazy_init_parameters(model)
- conv1_weight_after_init = model.conv1.weight.clone()
-
- assert not torch.allclose(conv1_weight_after_init, conv1_weight_before_init)
-
-
-if __name__ == '__main__':
- test_lazy_init_with_meta()
- test_lazy_init_without_meta()
diff --git a/tests/test_zero/test_gemini/test_optim.py b/tests/test_zero/test_gemini/test_optim.py
index 8ce20c16e8f9..66611bcd2419 100644
--- a/tests/test_zero/test_gemini/test_optim.py
+++ b/tests/test_zero/test_gemini/test_optim.py
@@ -21,23 +21,40 @@
# these models are too small, all parameters in these models are compacted into one chunk
EXAMPLE_MODELS = ['albert', 'beit', 'bert', 'hanging_param_model', 'nested_model', 'repeated_computed_layers']
+# bfloat16 lacks the precision to represent these parameters exactly, so they are skipped in the comparison
+BF16_IGNORED_KEYS = [
+ 'albert.embeddings.word_embeddings.weight',
+ 'albert.embeddings.position_embeddings.weight',
+ 'masked_bias',
+]
-def check_param(model: ZeroDDP, torch_model: torch.nn.Module):
- zero_dict = model.state_dict(only_rank_0=False)
+
+def check_param(model: ZeroDDP, torch_model: torch.nn.Module, dtype: torch.dtype):
+ zero_dict = model.state_dict(only_rank_0=False, dtype=dtype)
torch_dict = torch_model.state_dict()
for key, value in torch_dict.items():
# key is 'module.model.PARAMETER', so we truncate it
key = key[7:]
assert key in zero_dict, "{} not in ZeRO dictionary.".format(key)
- temp_zero_value = zero_dict[key].to(device=value.device, dtype=value.dtype)
+ temp_zero_value = zero_dict[key].to(device=value.device)
+ if dtype is torch.bfloat16 and any(k in key for k in BF16_IGNORED_KEYS):
+ continue
+ rtol, atol = 1e-3, 4e-3
+ if dtype is torch.bfloat16:
+ rtol, atol = 4e-3, 8e-3
# debug_print([0], "max range: ", key, torch.max(torch.abs(value - temp_zero_value)))
- assert_close(value, temp_zero_value, rtol=1e-3, atol=4e-3)
+ assert_close(value.float(),
+ temp_zero_value.float(),
+ rtol=rtol,
+ atol=atol,
+ msg=lambda s: s + f'\n{key}\n{temp_zero_value.dtype}')
@parameterize('placement_policy', ['cuda', 'cpu', 'auto', 'const'])
@parameterize('model_name', TEST_MODELS)
-def exam_model_step(placement_policy, model_name: str):
+@parameterize('mixed_precision', [torch.half, torch.bfloat16])
+def exam_model_step(placement_policy, model_name: str, mixed_precision: torch.dtype):
set_seed(42)
get_components_func = non_distributed_component_funcs.get_callable(model_name)
model_builder, train_dataloader, test_dataloader, optimizer_class, criterion = get_components_func()
@@ -65,7 +82,7 @@ def exam_model_step(placement_policy, model_name: str):
init_device = None
chunk_manager = ChunkManager(config_dict, init_device=init_device)
gemini_manager = GeminiManager(placement_policy, chunk_manager)
- model = ZeroDDP(model, gemini_manager, pin_memory=True)
+ model = ZeroDDP(model, gemini_manager, pin_memory=True, mixed_precision=mixed_precision)
optimizer = HybridAdam(model.parameters(), lr=1e-3)
zero_optim = ZeroOptimizer(optimizer, model, initial_scale=128)
@@ -74,6 +91,7 @@ def exam_model_step(placement_policy, model_name: str):
torch_model.eval()
set_seed(dist.get_rank() * 3 + 128)
+ rtol, atol = 1e-4, 1e-5
for i, (input_ids, label) in enumerate(train_dataloader):
if i > 2:
break
@@ -83,17 +101,18 @@ def exam_model_step(placement_policy, model_name: str):
torch_loss = run_fwd_bwd(torch_model, input_ids, label, criterion, torch_optim)
loss = run_fwd_bwd(model, input_ids, label, criterion, zero_optim)
- assert_close(torch_loss, loss)
+ assert_close(torch_loss, loss, rtol=rtol, atol=atol)
zero_optim.step()
torch_optim.step()
- check_param(model, torch_model)
+ check_param(model, torch_model, mixed_precision)
@parameterize('placement_policy', ['cuda', 'cpu', 'auto', 'const'])
@parameterize('model_name', EXAMPLE_MODELS)
-def exam_tiny_example(placement_policy, model_name: str):
+@parameterize('mixed_precision', [torch.half, torch.bfloat16])
+def exam_tiny_example(placement_policy, model_name: str, mixed_precision: torch.dtype):
set_seed(2008)
get_components_func = non_distributed_component_funcs.get_callable(model_name)
model_builder, train_dataloader, test_dataloader, optimizer_class, criterion = get_components_func()
@@ -113,7 +132,7 @@ def exam_tiny_example(placement_policy, model_name: str):
chunk_manager = init_chunk_manager(model=model, init_device=get_current_device(), search_range_mb=1)
gemini_manager = GeminiManager(placement_policy, chunk_manager)
- model = ZeroDDP(model, gemini_manager, pin_memory=True)
+ model = ZeroDDP(model, gemini_manager, pin_memory=True, mixed_precision=mixed_precision)
optimizer = HybridAdam(model.parameters(), lr=1e-3)
zero_optim = ZeroOptimizer(optimizer, model, initial_scale=2)
@@ -121,6 +140,9 @@ def exam_tiny_example(placement_policy, model_name: str):
torch_model.eval()
set_seed(dist.get_rank() * 3 + 128)
+ rtol, atol = 1.5e-6, 2e-5
+ if mixed_precision is torch.bfloat16:
+ rtol, atol = 2e-3, 2e-3
for i, (input_ids, label) in enumerate(train_dataloader):
if i > 2:
break
@@ -133,12 +155,12 @@ def exam_tiny_example(placement_policy, model_name: str):
torch_loss = run_fwd_bwd(torch_model, input_ids, label, criterion, torch_optim)
loss = run_fwd_bwd(model, input_ids, label, criterion, zero_optim)
- assert_close(torch_loss, loss, rtol=1.5e-6, atol=2e-5) # atol should be 2e-5 for torch lower than 1.12
+ assert_close(torch_loss, loss, rtol=rtol, atol=atol) # atol should be 2e-5 for torch lower than 1.12
zero_optim.step()
torch_optim.step()
- check_param(model, torch_model)
+ check_param(model, torch_model, mixed_precision)
def run_dist(rank, world_size, port):
diff --git a/tests/test_zero/test_legacy/test_zero_engine.py b/tests/test_zero/test_legacy/test_zero_engine.py
index dc8847ce56ab..826a543db861 100644
--- a/tests/test_zero/test_legacy/test_zero_engine.py
+++ b/tests/test_zero/test_legacy/test_zero_engine.py
@@ -16,7 +16,11 @@
from tests.components_to_test.registry import non_distributed_component_funcs
-def run_dist(rank, world_size, port, parallel_config):
+def run_dist(rank, world_size, port, parallel_config, bf16):
+ is_mp_config = parallel_config == MP_PARALLEL_CONFIG
+ is_zero_config = parallel_config == ZERO_PARALLEL_CONFIG
+ if bf16:
+ parallel_config['zero']['model_config']['bf16'] = True
colossalai.launch(config=parallel_config,
rank=rank,
world_size=world_size,
@@ -30,7 +34,8 @@ def run_dist(rank, world_size, port, parallel_config):
model_builder, train_dataloader, _, optimizer_class, criterion = get_components_func()
with ZeroInitContext(target_device=torch.cuda.current_device(),
shard_strategy=gpc.config.zero.model_config.shard_strategy,
- shard_param=True):
+ shard_param=True,
+ bf16=bf16):
colo_model = model_builder(checkpoint=True)
colo_optimizer = optimizer_class(colo_model.parameters(), lr=1e-3)
@@ -38,7 +43,8 @@ def run_dist(rank, world_size, port, parallel_config):
optimizer=colo_optimizer,
criterion=criterion,
train_dataloader=train_dataloader)
- torch_model = model_builder(checkpoint=True).half()
+ dtype = torch.bfloat16 if bf16 else torch.float16
+ torch_model = model_builder(checkpoint=True).to(dtype)
col_model_deepcopy(engine.model, torch_model)
torch_model = torch_model.cuda().float()
@@ -80,9 +86,9 @@ def run_dist(rank, world_size, port, parallel_config):
torch_optimizer.step()
i += 1
- if parallel_config == MP_PARALLEL_CONFIG:
+ if is_mp_config:
check_params(torch_model, colo_model, loose=True)
- elif parallel_config == ZERO_PARALLEL_CONFIG:
+ elif is_zero_config:
check_sharded_model_params(torch_model, colo_model, loose=True)
@@ -97,9 +103,10 @@ def test_mp_engine(world_size):
@pytest.mark.dist
@pytest.mark.parametrize("world_size", [1, 2])
+@pytest.mark.parametrize("bf16", [True, False])
@rerun_if_address_is_in_use()
-def test_zero_engine(world_size):
- spawn(run_dist, world_size, parallel_config=ZERO_PARALLEL_CONFIG)
+def test_zero_engine(world_size, bf16):
+ spawn(run_dist, world_size, parallel_config=ZERO_PARALLEL_CONFIG, bf16=bf16)
if __name__ == '__main__':
diff --git a/tests/test_zero/test_low_level/test_grad_acc.py b/tests/test_zero/test_low_level/test_grad_acc.py
index 2ae1f3a99d79..c264a8077d2a 100644
--- a/tests/test_zero/test_low_level/test_grad_acc.py
+++ b/tests/test_zero/test_low_level/test_grad_acc.py
@@ -82,7 +82,6 @@ def fwd_bwd_func(number, cur_data):
def exam_zero_1_grad_acc():
local_rank = torch.distributed.get_rank()
- grad_scale = 32
seed_all(2008)
# create models
@@ -101,7 +100,6 @@ def exam_zero_1_grad_acc():
# level 1 and 2 will produce exactly the same results
zero_optimizer = LowLevelZeroOptimizer(zero_optimizer,
overlap_communication=False,
- initial_scale=grad_scale,
reduce_bucket_size=262144,
clip_grad_norm=1.0)
@@ -128,9 +126,8 @@ def fwd_bwd_func(number, cur_data, check_flag):
if check_flag:
# check grad
for (n, p), z1p in zip(torch_model.named_parameters(), zero_model.parameters()):
- unscale_grad = z1p.grad / grad_scale
# print(n, p.shape, torch.max(torch.abs(p.grad - unscale_grad)))
- assert torch.equal(p.grad, unscale_grad)
+ assert torch.equal(p.grad, z1p.grad)
zero_optimizer._sync_grad()
diff --git a/tests/test_zero/test_low_level/test_zero1_2.py b/tests/test_zero/test_low_level/test_zero1_2.py
index 4086af9d896e..8e2206fe6c8d 100644
--- a/tests/test_zero/test_low_level/test_zero1_2.py
+++ b/tests/test_zero/test_low_level/test_zero1_2.py
@@ -7,7 +7,7 @@
from torch.testing import assert_close
import colossalai
-from colossalai.testing import rerun_if_address_is_in_use, spawn
+from colossalai.testing import parameterize, rerun_if_address_is_in_use, spawn
from colossalai.testing.random import seed_all
from colossalai.zero import LowLevelZeroOptimizer
@@ -25,15 +25,18 @@ def forward(self, x):
return x
-def half_close(a, b, loose=False):
+def loose_close(a, b, dtype: torch.dtype = torch.float32):
rtol = None
atol = None
- if loose:
+ if dtype is torch.float16:
rtol = 5e-2
atol = 5e-4
+ elif dtype is torch.bfloat16:
+ rtol = 4e-3
+ atol = 4e-3
- a = a.detach().half()
- b = b.detach().half()
+ a = a.detach().to(dtype)
+ b = b.detach().to(dtype)
assert_close(a, b, rtol=rtol, atol=atol)
@@ -96,7 +99,8 @@ def exam_zero_1_2():
assert torch.equal(z1p.data, z2p.data)
-def exam_zero_1_torch_ddp():
+@parameterize('dtype', [torch.float16, torch.bfloat16])
+def exam_zero_1_torch_ddp(dtype: torch.dtype):
"""
In this test, two pairs of model and optimizers are created.
1. zero: use sharded optimizer and fp16 parameters
@@ -109,15 +113,10 @@ def exam_zero_1_torch_ddp():
seed_all(1453)
# create models
- zero_model = MlpModel()
- torch_model = copy.deepcopy(zero_model)
+ torch_model = MlpModel().cuda()
+ zero_model = copy.deepcopy(torch_model).to(dtype)
- zero_model = zero_model.cuda().half()
- torch_model = DDP(torch_model.cuda(), bucket_cap_mb=0)
- torch_model = torch_model.cuda()
-
- # for (n, p), z1p in zip(torch_model.named_parameters(), zero_model.parameters()):
- # half_close(p.data, z1p.data)
+ torch_model = DDP(torch_model.cuda(), bucket_cap_mb=0).cuda()
# create optimizer
zero_optimizer = torch.optim.SGD(zero_model.parameters(), lr=1)
@@ -137,11 +136,11 @@ def exam_zero_1_torch_ddp():
input_data = torch.rand(32, 128).cuda()
# zero-dp forward
- zero_output = zero_model(input_data.half())
+ zero_output = zero_model(input_data.to(dtype))
# torch-ddp forward
torch_output = torch_model(input_data)
- half_close(zero_output, torch_output, loose=True)
+ loose_close(zero_output, torch_output, dtype=dtype)
# zero-dp backward
zero_optimizer.backward(zero_output.mean().float(), sync_grad=False)
@@ -151,7 +150,7 @@ def exam_zero_1_torch_ddp():
# check grad
for (n, p), z1p in zip(torch_model.named_parameters(), zero_model.parameters()):
- half_close(p.grad, z1p.grad, loose=True)
+ loose_close(p.grad, z1p.grad, dtype=dtype)
# zero-dp step
zero_optimizer._sync_grad()
@@ -163,7 +162,7 @@ def exam_zero_1_torch_ddp():
# check updated param
for (n, p), z1p in zip(torch_model.named_parameters(), zero_model.parameters()):
# print(n, torch.max(torch.abs(p.data - z1p.data)))
- half_close(p.data, z1p.data, loose=True)
+ loose_close(p.data, z1p.data, dtype=dtype)
def run_dist(rank, world_size, port):