Merged
Changes from all commits (22 commits)
525fda0
fix:compatibility
Mar 12, 2026
58bda57
Merge branch 'master' into compat
richarddushime Mar 12, 2026
2ecf31a
Merge branch 'master' into compat
richarddushime Mar 13, 2026
b03da92
Add working link checker workflow using lychee
LukasWallrich Mar 18, 2026
0a0f3a8
Merge branch 'master' into fix/link-checker-workflow
LukasWallrich Mar 18, 2026
7efb9eb
Fix lychee action path: lycheeverse/lychee-action
LukasWallrich Mar 18, 2026
d221f1d
Crawl full site via sitemap instead of single URL
LukasWallrich Mar 18, 2026
96f8f47
Merge remote-tracking branch 'origin/compat' into fix/link-checker-wo…
LukasWallrich Mar 18, 2026
e947b0e
Use portable grep/sed for sitemap URL extraction
LukasWallrich Mar 18, 2026
3a475ff
Check all links (internal + external) using build artifact
LukasWallrich Mar 18, 2026
17f3995
Replace deprecated --base with --base-url
LukasWallrich Mar 18, 2026
f2f2aba
Fix malformed author fields causing broken URLs
LukasWallrich Mar 18, 2026
b3883c9
Exclude internal forrt.org links and bot-blocking publishers
LukasWallrich Mar 18, 2026
be5eaa1
Accept 403 globally instead of excluding publishers
LukasWallrich Mar 18, 2026
b0347ea
Replace publisher DOI URLs with doi.org and flag remaining ones
LukasWallrich Mar 18, 2026
927d2e0
Broaden publisher URL detection to include ScienceDirect, JSTOR, etc.
LukasWallrich Mar 18, 2026
b3766d3
Collapse 403 errors into a separate section in issue report
LukasWallrich Mar 18, 2026
c92e49c
Deduplicate errors and shorten issue report
LukasWallrich Mar 18, 2026
8a3043d
Truncate 403 and publisher URL lists to fit GitHub issue body limit
LukasWallrich Mar 18, 2026
d147ff4
Show page locations for broken links, uncollapse publisher section
LukasWallrich Mar 18, 2026
861d868
Compact publisher URL output: show file:line + URL only
LukasWallrich Mar 18, 2026
38e1f2e
Merge branch 'master' into fix/link-checker-workflow
richarddushime Mar 19, 2026
171 changes: 171 additions & 0 deletions .github/workflows/link-check.yaml
@@ -0,0 +1,171 @@
name: Link Checker

# =======================
# Website Link Validation
# =======================
# Purpose: Downloads the latest built site and checks all links (internal + external)
# Triggers: Weekly on Mondays at 01:30 UTC or manual dispatch
# Reports: Creates a GitHub issue with label "link-check" when broken links are found
# Config: See .lychee.toml for exclusion patterns and request settings

on:
  schedule:
    # Runs at 01:30 UTC every Monday
    - cron: '30 1 * * 1'
  workflow_dispatch:

permissions:
  contents: read
  issues: write
  actions: read

concurrency:
  group: link-check-${{ github.ref }}
  cancel-in-progress: true

jobs:
  link-check:
    name: Check Links
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Download latest build artifact
        uses: dawidd6/action-download-artifact@07ab29fd4a977ae4d2b275087cf67563dfdf0295
        with:
          workflow: deploy.yaml
          name: forrt-website-.*
          name_is_regexp: true
          path: /tmp/site
          github_token: ${{ secrets.GITHUB_TOKEN }}
          search_artifacts: true
          if_no_artifact_found: fail

      - name: Run lychee link checker
        id: lychee
        uses: lycheeverse/lychee-action@v2
        with:
          args: "--config .lychee.toml --base-url https://forrt.org /tmp/site"
          output: /tmp/lychee/out.md
          fail: false
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Process lychee output
        if: steps.lychee.outputs.exit_code != 0
        run: |
          python3 << 'PYEOF'
          import re

          with open("/tmp/lychee/out.md") as f:
              content = f.read()

          lines = content.split("\n")

          # Keep the summary table (everything before "## Errors per input")
          summary_lines = []
          error_lines = []
          in_errors = False
          for line in lines:
              if line.startswith("## Errors per input"):
                  in_errors = True
                  continue
              if in_errors:
                  error_lines.append(line)
              else:
                  summary_lines.append(line)

          # Parse errors, tracking which pages each URL appears on.
          # Each dict maps url -> {"status": str, "pages": [str]}
          main_errors = {}       # non-403
          forbidden_errors = {}  # 403
          current_page = ""

          for line in error_lines:
              # Track section headers (### Errors in /tmp/site/.../index.html)
              page_match = re.match(r"^### Errors in /tmp/site/[^/]+/(.+)", line)
              if page_match:
                  # Convert file path to URL path
                  path = page_match.group(1)
                  path = re.sub(r"/index\.html$", "/", path)
                  current_page = f"/{path}"
                  continue

              m = re.match(r"^\* \[(\w+)\] <([^>]+)>", line)
              if not m:
                  continue
              status, url = m.group(1), m.group(2)

              target = forbidden_errors if status == "403" else main_errors
              if url not in target:
                  target[url] = {"status": status, "pages": []}
              if current_page and current_page not in target[url]["pages"]:
                  target[url]["pages"].append(current_page)

          # Build output
          output = "\n".join(summary_lines).rstrip()
          output += "\n\n## Broken links\n\n"

          if main_errors:
              for url, info in main_errors.items():
                  pages = info["pages"]
                  page_str = f" (in {pages[0]})" if len(pages) == 1 else f" (in {len(pages)} pages)"
                  output += f"* [{info['status']}] <{url}>{page_str}\n"
          else:
              output += "No broken links found (excluding 403s).\n"

          if forbidden_errors:
              output += f"\n<details>\n<summary>403 Forbidden ({len(forbidden_errors)} URLs — likely bot-blocking, not broken)</summary>\n\n"
              output += "These sites block automated requests. The links may still be valid.\n"
              output += "Showing first 100 — see workflow logs for full list.\n\n"
              for i, (url, info) in enumerate(forbidden_errors.items()):
                  if i >= 100:
                      output += f"\n*... and {len(forbidden_errors) - 100} more*"
                      break
                  output += f"* <{url}>\n"
              output += "\n</details>\n"

          with open("/tmp/lychee/out.md", "w") as f:
              f.write(output)
          PYEOF

      - name: Find publisher URLs that should use doi.org
        id: doi-check
        run: |
          # Search source markdown files (excluding glossary, which is auto-generated)
          # for direct publisher URLs that should use https://doi.org/ instead.
          PUBLISHERS='(journals\.sagepub|tandfonline|psycnet\.apa|onlinelibrary\.wiley|link\.springer|academic\.oup|sciencedirect|jstor\.org|journals\.lww|royalsocietypublishing)\.(com|org)'
          # Extract just file:line and the URL itself (not full line content)
          MATCHES=$(grep -rno --include='*.md' -E \
            "https?://[^ )\"']*(${PUBLISHERS})/[^ )\"']*(doi/|article|fulltext)[^ )\"']*" \
            content/ --exclude-dir=content/glossary | sort -u || true)
          if [ -n "$MATCHES" ]; then
            COUNT=$(echo "$MATCHES" | wc -l)
            {
              echo ""
              echo "## Publisher URLs that should use doi.org ($COUNT found)"
              echo ""
              echo "The following links point directly to publisher websites instead of using"
              echo "\`https://doi.org/{DOI}\` format. Publishers often block automated requests,"
              echo "making these URLs uncheckable. Please replace them with doi.org links."
              echo "If the DOI is not visible in the URL, look it up on https://search.crossref.org"
              echo ""
              echo '```'
              echo "$MATCHES"
              echo '```'
            } >> /tmp/lychee/out.md
            echo "found=true" >> "$GITHUB_OUTPUT"
          else
            echo "found=false" >> "$GITHUB_OUTPUT"
          fi

      - name: Create issue from lychee output
        if: steps.lychee.outputs.exit_code != 0 || steps.doi-check.outputs.found == 'true'
        uses: peter-evans/create-issue-from-file@v5
        with:
          title: "Link Checker Report"
          content-filepath: /tmp/lychee/out.md
          labels: link-check
          token: ${{ secrets.GITHUB_TOKEN }}
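The "Process lychee output" step above turns on two regular expressions: one for page section headers, one for error lines. A minimal standalone sketch of that parsing logic, run on an invented sample of lychee report text (the sample is illustrative, not real report content):

```python
import re

# Invented sample mimicking lychee's "Errors per input" section
sample = """### Errors in /tmp/site/forrt-website/educators-corner/index.html
* [404] <https://example.org/gone>
* [403] <https://journals.sagepub.com/doi/10.1177/123>
"""

main_errors, forbidden_errors = {}, {}  # url -> {"status": ..., "pages": [...]}
current_page = ""
for line in sample.splitlines():
    page = re.match(r"^### Errors in /tmp/site/[^/]+/(.+)", line)
    if page:
        # file path -> URL path: educators-corner/index.html -> /educators-corner/
        current_page = "/" + re.sub(r"/index\.html$", "/", page.group(1))
        continue
    m = re.match(r"^\* \[(\w+)\] <([^>]+)>", line)
    if not m:
        continue
    status, url = m.group(1), m.group(2)
    target = forbidden_errors if status == "403" else main_errors  # split out 403s
    target.setdefault(url, {"status": status, "pages": []})["pages"].append(current_page)

print(main_errors)       # the 404 lands in the main "Broken links" list
print(forbidden_errors)  # the 403 lands in the collapsed section
```

The same split drives the issue layout: non-403 errors are listed with their page locations, while 403s are collapsed as likely bot-blocking rather than broken.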
50 changes: 50 additions & 0 deletions .lychee.toml
@@ -0,0 +1,50 @@
# Lychee link checker configuration
# https://lychee.cli.rs/usage/config/
#
# Used by the link-check GitHub Actions workflow.
# Checks all links (internal + external) in the built HTML files.

# ---------------------
# Exclusions
# ---------------------
# Patterns to exclude from link checking (common false positives)
exclude = [
    # Internal links — already verified as local files in the build artifact
    "forrt\\.org",

    # Placeholder / example domains
    "example\\.com",
    "localhost",
    "127\\.0\\.0\\.1",

    # Social media sites that block automated requests
    "linkedin\\.com",
    "twitter\\.com",
    "x\\.com",

    # Web Archive — often slow or flaky
    "web\\.archive\\.org",

    # GitHub edit links with templated paths
    "github\\.com/.*/edit/",
]

# ---------------------
# Request settings
# ---------------------
# Accept 2xx/3xx and 429 (rate limiting)
# Note: 403 is NOT accepted — those are separated into a collapsed section
# by the workflow's post-processing step, since many publishers block bots.
accept = ["100..=399", "429"]

# Timeout per request in seconds
timeout = 30

# Maximum number of retries per link
max_retries = 3

# Maximum concurrent requests
max_concurrency = 16

# Do not show progress bar (cleaner CI output)
no_progress = true
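The `exclude` entries above are regular expressions matched anywhere in the URL, which is why the dots are escaped. A small sketch of that matching behaviour (the URLs are invented for illustration, and the helper is mine, not lychee's API):

```python
import re

exclude = [
    r"forrt\.org", r"example\.com", r"localhost", r"127\.0\.0\.1",
    r"linkedin\.com", r"twitter\.com", r"x\.com",
    r"web\.archive\.org", r"github\.com/.*/edit/",
]

def is_excluded(url: str) -> bool:
    # Unanchored regex search, mirroring how lychee applies exclude patterns
    return any(re.search(p, url) for p in exclude)

print(is_excluded("https://forrt.org/glossary/"))                # True: internal link
print(is_excluded("https://github.com/forrt/repo/edit/main/x"))  # True: edit link
print(is_excluded("https://doi.org/10.1177/1948550616673876"))   # False: gets checked
```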
2 changes: 1 addition & 1 deletion content/authors/berit-t-barthelmes-msc/_index.md
@@ -4,7 +4,7 @@ name: "Berit T. Barthelmes, M.Sc."

 # Username (this should match the folder name)
 authors:
-- Name "Berit T. Barthelmes, M.Sc."
+- "Berit T. Barthelmes, M.Sc."

 # Is this the primary user of the site?
 superuser: false
2 changes: 1 addition & 1 deletion content/clusters/cluster3.md
@@ -164,7 +164,7 @@ PsuTeachR's [Data Skills for Reproducible Science](https://psyteachr.github.io/m

 ***Includes tools such as statcheck.io, GRIM, and SPRITE***

-* Brown, N. J., & Heathers, J. A. (2016). The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. Social Psychological and Personality Science, 1948550616673876. http://journals.sagepub.com/doi/pdf/10.1177/1948550616673876
+* Brown, N. J., & Heathers, J. A. (2016). The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. Social Psychological and Personality Science, 1948550616673876. https://doi.org/10.1177/1948550616673876

 * Nuijten, M. B., Van Assen, M. A. L. M., Hartgerink, C. H. J., Epskamp, S., & Wicherts, J. M. (2017). The validity of the tool “statcheck” in discovering statistical reporting inconsistencies. Preprint retrieved from https://psyarxiv.com/tcxaj/.
@@ -121,7 +121,7 @@ As another example, when I illustrate the smallest-effect-size-of-interest workf

 ![Notes](fig9.webp "Notes")


-One last issue that comes out of the simulations is the number of assumptions that one must make in the process of doing a simulation study. This includes both statistical assumptions, such as the size of the standard deviation of the outcome measure, and non-statistical assumptions, such as the length of time it takes for a typical participate in the study (a fact that is necessary to accurately estimate the number of participants who can participate in a lab-based study, for example). I argue that pilot studies are useful for developing good values for these assumptions. Pilot studies are _not_ useful for directly estimating the value of the target effect size itself ([Albers & Lakens, 2018](https://www.sciencedirect.com/science/article/pii/S002210311630230X?casa_token=OETt_Sm5VFEAAAAA:-9rK8QScds9e0A1siznusvdtvl0-yC2WpBVWe7ztdGkZ8eVILbyqWMC5WmcsAxHWp6X7X7voPeA)); in any case it is better to power to a smallest effect size of interest than the expected effect size.
+One last issue that comes out of the simulations is the number of assumptions that one must make in the process of doing a simulation study. This includes both statistical assumptions, such as the size of the standard deviation of the outcome measure, and non-statistical assumptions, such as the length of time it takes for a typical participate in the study (a fact that is necessary to accurately estimate the number of participants who can participate in a lab-based study, for example). I argue that pilot studies are useful for developing good values for these assumptions. Pilot studies are _not_ useful for directly estimating the value of the target effect size itself ([Albers & Lakens, 2018](https://www.sciencedirect.com/science/article/pii/S002210311630230X)); in any case it is better to power to a smallest effect size of interest than the expected effect size.

 ![Workflow 2](fig10.webp "Workflow 2")
@@ -46,7 +46,7 @@ We thus gathered people interested in qualitative open science research in educa

 ![Criteria for Reporting Qualitative Studies](Fig1.webp "Criteria for Reporting Qualitative Studies")


-3. **Open Materials.** Thanks to the internet, researchers have websites and repositories where they can upload the tools for others to access. In qualitative research, this might mean interview protocols, memos, coding notebooks, tools (such as Nvivo or R packages), or even the data itself. This provides a sort of audit trail so others can verify the results of the research. There is no all or nothing here; open materials, much like the rest of these open science practices, exist along a spectrum. Not only what researchers share is on a spectrum; researchers can also dictate who may access the open materials. Perhaps it’s the entire public, but it could just be people who want to verify findings (i.e., dissertation committees, participants, reviewers). Below you can see how [Bowman and Keene (2018)](https://www.tandfonline.com/doi/pdf/10.1080/08824096.2018.1513273) described open science practices as a layered onion with the innermost layer being the most transparent. However, no matter what or to whom materials are shared, researchers must include their plan within their consent procedures and IRB protocols to not violate any ethical boundaries.
+3. **Open Materials.** Thanks to the internet, researchers have websites and repositories where they can upload the tools for others to access. In qualitative research, this might mean interview protocols, memos, coding notebooks, tools (such as Nvivo or R packages), or even the data itself. This provides a sort of audit trail so others can verify the results of the research. There is no all or nothing here; open materials, much like the rest of these open science practices, exist along a spectrum. Not only what researchers share is on a spectrum; researchers can also dictate who may access the open materials. Perhaps it’s the entire public, but it could just be people who want to verify findings (i.e., dissertation committees, participants, reviewers). Below you can see how [Bowman and Keene (2018)](https://doi.org/10.1080/08824096.2018.1513273) described open science practices as a layered onion with the innermost layer being the most transparent. However, no matter what or to whom materials are shared, researchers must include their plan within their consent procedures and IRB protocols to not violate any ethical boundaries.


 ![Conceptual Onion of Open Science Practices](Fig2.webp "Conceptual Onion of Open Science Practices")
2 changes: 1 addition & 1 deletion content/educators-corner/010-Neurodiversity/index.md
@@ -58,7 +58,7 @@ People with disabilities are more likely to be excluded from the academic workfo

 Many gatekeepers determine whether an individual is neurodivergent and these processes are driven by individuals who are neurotypical. As a result, referral time for these services vary widely, from 4 weeks to 201 weeks within the UK (Lloyd, 2019) and if a person does not fit the criteria, the individual can be ignored and may not receive the much-needed help that they require. This can lead to poor self-esteem, unemployment (e.g. around 22% in autistic people are in any type of employment; see Figure 2 in [fact sheet](https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/disability/articles/outcomesfordisabledpeopleintheuk/2020)). As a result, neurodivergent individuals may blame themselves for the difficulties they encounter, as opposed to the barriers that society has placed on them.

-Despite this, people of different neurodivergent conditions or families of the people with the conditions have begun meeting and talking to each other about their experiences and one common shared experience is a history of misinterpretation and mistreatment by the dominant neurotypical cultures and its institutions such as academia. As a result of centuries of oppression of disabled people worldwide and a hyper-normalised environment, in addition to seeing the disproportionate impact of the coronavirus pandemic on disabled students (see this amazing [paper ](https://link.springer.com/article/10.1007/s10639-021-10559-3)by Dr Joanna Zawadka), many neurodivergent and disabled staff feel discouraged in an environment that should aim to support them. They do not feel like they belong, their differences are seen as an impairment and their voice does not seem to matter. They do not see themselves represented in psychological science, academia, business, teaching or elsewhere.
+Despite this, people of different neurodivergent conditions or families of the people with the conditions have begun meeting and talking to each other about their experiences and one common shared experience is a history of misinterpretation and mistreatment by the dominant neurotypical cultures and its institutions such as academia. As a result of centuries of oppression of disabled people worldwide and a hyper-normalised environment, in addition to seeing the disproportionate impact of the coronavirus pandemic on disabled students (see this amazing [paper ](https://doi.org/10.1007/s10639-021-10559-3)by Dr Joanna Zawadka), many neurodivergent and disabled staff feel discouraged in an environment that should aim to support them. They do not feel like they belong, their differences are seen as an impairment and their voice does not seem to matter. They do not see themselves represented in psychological science, academia, business, teaching or elsewhere.

 We are a group of early-career neurotypical and neurodivergent researchers that are a part of the Framework of Open Reproducible Research and Training (FORRT) community, aiming to make academia and the open scholarship community more open to neurodiversity. Everyone, no matter what they identify with, is welcome in this group. We aim to discuss how open scholarship can be intersected with the neurodiversity movement, and emphasise how differences should be highlighted and accepted, whilst supporting the idea of accessibility. Our neurodiversity team is a group that currently consists of individuals that have autism, dyspraxia/DCD, speech-language differences, ADHD, dyslexia, or are neurotypical allies. If you have these or other neurominorities and wish to be part of the team, you are more than welcome to join!