
Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation

📌 NAACL 2025 Findings - This repository provides the source code & dataset used in our paper:

Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation.

📩 If you have any questions or issues, feel free to ask!

Overview

(Figure: model overview.)

⚡ Preliminaries

Before running the code, make sure you have access to the following:

🔗 Required APIs

1️⃣ ChatGPT API - required for GPT-based experiments
2️⃣ LLaMA2 & LLaMA3 API - required for LLaMA-based models

📂 Datasets

The original datasets used in our study can be found at the links below:

Dataset | Source
Argotario | 🔗 Link
Logic (edu_train, edu_dev, edu_test) | 🔗 Link
Propaganda | 🔗 Link
CLIMATE & COVID-19 | 🔗 Link

📌 Preprocessed datasets can be found in the data folder.

βš™οΈ Generating Augmented Data

To generate Contextual Augmentation, run:

python make_case.py

To generate Reformulated Queries, run:

python make_case_query.py

How to Run the Code

Before running the experiments, create a result directory.

📂 All results will be saved as text files in this result directory.
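For example, assuming the default directory name result used above:

```shell
# Create the output directory; experiment results are written here as text files.
mkdir -p result
```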

1. Running GPT-Series Models

python fallacy_gpt_{data/...}.py

Replace {data/...} with one of: PROPAGANDA, ARGOTARIO, LOGIC, CLIMATE, or COVID-19.

2. Running LLaMA-Series Models

python fallacy_llama3_{data/...}.py

Replace {data/...} with one of: PROPAGANDA, ARGOTARIO, LOGIC, CLIMATE, or COVID-19.

3. Running RoBERTa-Base Fine-Tuning

python fine-tune-LM_concat_{data/...}.py

Replace {data/...} with one of: PROPAGANDA, ARGOTARIO, LOGIC, CLIMATE, or COVID-19.
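The three script-naming patterns above can be sketched as a small driver. This is a hypothetical helper, not part of the repository, and the exact filenames (including dataset casing) should be checked against the repo before running:

```python
import subprocess

# Dataset names as listed in this README; the repo's actual filename casing may differ.
DATASETS = ["PROPAGANDA", "ARGOTARIO", "LOGIC", "CLIMATE", "COVID-19"]

# Script name patterns for the three model families described above.
PATTERNS = [
    "fallacy_gpt_{}.py",         # 1. GPT-series models
    "fallacy_llama3_{}.py",      # 2. LLaMA-series models
    "fine-tune-LM_concat_{}.py", # 3. RoBERTa-base fine-tuning
]

def build_commands():
    """Return the `python <script>` command for every (model family, dataset) pair."""
    return [
        ["python", pattern.format(dataset)]
        for pattern in PATTERNS
        for dataset in DATASETS
    ]

if __name__ == "__main__":
    for cmd in build_commands():
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run each experiment
```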

Citation

If this work is helpful in your research, we would appreciate it if you could cite our paper as follows:

@inproceedings{jeong-etal-2025-large,
    title = "Large Language Models Are Better Logical Fallacy Reasoners with Counterargument, Explanation, and Goal-Aware Prompt Formulation",
    author = "Jeong, Jiwon  and
      Jang, Hyeju  and
      Park, Hogun",
    editor = "Chiruzzo, Luis  and
      Ritter, Alan  and
      Wang, Lu",
    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2025",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-naacl.384/",
    pages = "6918--6937",
    ISBN = "979-8-89176-195-7",
}
