
Conversation

Member

@arfon arfon commented Dec 7, 2025

This pull request updates the JOSS policy on AI usage for both authors and reviewers, replacing the previous interim guidance with a comprehensive and detailed AI Usage Policy. The changes clarify permitted uses, mandatory disclosure requirements, and accountability for AI-assisted contributions in both the submission and review processes.

@arfon arfon marked this pull request as draft December 7, 2025 21:06

<p>AI is not allowed for conversational interactions between authors and editors or reviewers unless it is being used for translation purposes.</p>

<p>Authors remain fully responsible for the accuracy, originality, licensing, and ethical/legal compliance of all submitted materials. Failure to provide a complete and accurate disclosure of AI usage may be considered an ethical breach. Consequences can include desk rejection, mandatory revisions, and post-publication correction or withdrawal. In cases of intentional misrepresentation or non-disclosure, JOSS reserves the right to notify the authors' institutions, funders, and/or relevant professional or scholarly societies in accordance with standard research-integrity practices.</p>
Member


I think comments here might not be sufficient for the critical conversation that is necessary. But for a start, what happens when LLM-generated content is clearly infringing the work of others? Suppose a human types // sparse matrix transpose, presses tab, and gets a page of code that still has namespaces from an existing library with copyright/attribution stripped. What if the algorithmic system obfuscates just enough that it isn't obvious to the human that it's plagiarizing an existing library and violating that project's license? A human doing that knowingly has committed scholarly misconduct, but is mere disclosure of LLM use a cheat code granting plausible deniability?

What are our standards for due diligence, both from submitters and for JOSS to protect the integrity of our publication? We need to defend against the self-inflicted epistemic corruption and denial-of-service attack that some publication venues are now mired in.

COPE has a whole flowchart for ghost authorship. Ghost authorship is scholarly misconduct because it misrepresents the epistemic relation of the authors to the submission. (It is not about awarding credit to unnamed contributors: ghost authors give consent, and their business model depends on that.) The effect of the proposed policy is to allow ghost authorship when it's laundered through an LLM. That is an epistemic disaster.

Cc: @oliviaguest, who has done extensive scholarship on these issues and cares greatly about JOSS.

Member Author


Somewhat similar comment to the one I gave below to @sneakers-the-rat:

This policy goes hand-in-hand with a major scope update that I won't link to here but will share in Slack with you.

I'm not sure if you've already reviewed that, but I think it makes sense to review them both together.


<ul>
<li><strong>Tool use:</strong> The tools/models used (and versions) and where they were used (code, paper text, docs).</li>
<li><strong>The nature and scope of assistance:</strong> e.g., code generation, refactoring, test scaffolding, copy-editing, drafting.</li>
Contributor


I don't think this is sufficient for disclosure: the disclosure statement could just read "AI used to generate code, docs, and tests", which doesn't give a reviewer the information they need to evaluate sections of AI-generated code, or even to know which sections were generated and which were written by a human.

I think a disclosure policy like this needs to have language of intent in it that motivates the response: e.g. "the disclosure should provide enough detail about the use of AI to guide their review..."

Or "... Must differentiate sections that were wholly generated from those that were written by a human"

Etc.

Returning to the purpose of the disclosure policy: reviewers need to know what they're reviewing so they can make informed choices about how they spend their (volunteer) time. A package that's 100% AI-generated and read over once by the author is different from a package where AI generated the boilerplate and the human took it from there. There are some mentions of this in the current language, but the floor of the disclosure requirement has to cover this distinction.

Member Author

@arfon arfon Dec 8, 2025


@sneakers-the-rat – please note that this policy goes hand-in-hand with a major scope update that I won't link to here but will share in Slack with you.

I'm not sure if you've already reviewed that, but I think it makes sense to review them both together.

<ul>
<li><strong>Tool use:</strong> The tools/models used (and versions) and where they were used (code, paper text, docs).</li>
<li><strong>The nature and scope of assistance:</strong> e.g., code generation, refactoring, test scaffolding, copy-editing, drafting.</li>
<li><strong>Confirmation of review:</strong> Authors must assert that human authors reviewed, edited, and validated all AI-assisted outputs and made the core design decisions.</li>
Contributor


"Reviewers have the right to discontinue review if there are clear violations of this affirmation, e.g. large sections of dead or nonsensical code"

@sneakers-the-rat
Contributor

One overall comment: I think a policy like this needs to be framed in terms of its goals:

  • JOSS should not be used as a means of outsourcing the labor of primary human review of AI generated code
  • Disclosure is to allow reviewers to make informed decisions about what they choose to review, and ensure authors receive reviews from reviewers who share their ethics and values w.r.t. generative AI
  • Generative AI expands the labor of reviewing substantially, as it often generates hyperverbose code with endless fallback conditions, unused code, etc. Authors must minimize the labor required to review their work by cleaning bloated code before submission and making disclosure detailed enough for a reviewer to navigate substantial portions of generated code.

@lwasser

lwasser commented Dec 15, 2025

Hi - I'm commenting here so I can watch this evolve. We are also working on a policy and I'd love to see us align with JOSS on it. Thank you for all of the work here! This is a tricky topic for us all!

@arfon arfon merged commit 9743a93 into main Jan 5, 2026
2 checks passed
@arfon arfon deleted the ai-policy-updates branch January 5, 2026 11:47