Release SelectivBench on Hugging Face #1

@NielsRogge

Description

Hi @YounesBouhadjar 🤗

I'm Niels and I work as part of the open-source team at Hugging Face. I discovered your work on arXiv and was wondering whether you would like to submit it to hf.co/papers to improve its discoverability. If you are one of the authors, you can submit it at https://huggingface.co/papers/submit.

The paper page lets people discuss your paper and find its artifacts (your dataset, for instance). You can also claim the paper as yours, which will show up on your public profile at HF, and add GitHub and project page URLs.

Would you like to host the SelectivBench benchmark you've released on https://huggingface.co/datasets?
I see that the data is currently generated locally via scripts. Hosting a canonical version of the benchmark, or a loading script for it, on Hugging Face will give your work more visibility and make it easier to discover. It would also allow people to do:

```python
from datasets import load_dataset

dataset = load_dataset("your-hf-org-or-username/SelectivBench")
```

If you're interested, here's a guide: https://huggingface.co/docs/datasets/loading. We support loading scripts for procedural/synthetic data, which seems like a great fit for your rule-based grammar approach.
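To make the idea concrete, here is a minimal sketch of what publishing a procedurally generated benchmark could look like. The toy grammar, rule names, and record fields below are hypothetical stand-ins (the real generation rules live in your repo's scripts); the sketch just shows generating sequences locally and writing them to JSONL, a format the Hub ingests directly.

```python
import json
import random

# Hypothetical toy grammar, standing in for SelectivBench's actual
# rule-based generator. Each non-terminal maps to candidate expansions.
RULES = {
    "S": [["A", "B"], ["A", "C"]],
    "A": [["a"]],
    "B": [["b"]],
    "C": [["c"]],
}

def expand(symbol, rng):
    """Recursively expand a symbol into a flat token sequence."""
    if symbol not in RULES:          # terminal symbol
        return [symbol]
    tokens = []
    for sym in rng.choice(RULES[symbol]):
        tokens.extend(expand(sym, rng))
    return tokens

def generate_records(n, seed=0):
    """Generate n example records with a fixed seed for reproducibility."""
    rng = random.Random(seed)
    return [{"id": i, "sequence": expand("S", rng)} for i in range(n)]

records = generate_records(100)

# One JSON object per line -- this file can be uploaded to a dataset repo as-is.
with open("selectivbench.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

Alternatively, `datasets.Dataset.from_list(records).push_to_hub("your-hf-org-or-username/SelectivBench")` would upload the same data directly, no file handling needed.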

Besides that, there's the dataset viewer, which lets people quickly explore the generated sequences and transitions in the browser.

Once uploaded, we can also link the dataset to the paper page (read here) so people can discover your work.

Let me know if you're interested or need any guidance.

Kind regards,

Niels
