Sampling/scoring models reversed in Fast-DetectGPT code #6

@SkyKingL

Description

Hi, thanks for the great repo!

In the paper, the authors describe the Fast-DetectGPT setup as follows:

“We chose the optimal settings reported by the authors, using GPT-Neo-2.7b as the scoring model and GPT-J-6b as the reference model.”

So Fast-DetectGPT should use GPT-J-6b as the sampling (reference) model and GPT-Neo-2.7b as the scoring model.

However, in the current implementation the roles appear to be reversed: GPT-J is used for scoring and GPT-Neo-2.7b for sampling.
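To make the distinction concrete, here is a minimal sketch of Fast-DetectGPT's analytic sampling discrepancy (not this repo's actual code; the function name, defaults, and Hugging Face hub ids are illustrative). The point is which role each model plays: the reference model supplies the sampling distribution q(·|x_<i), while the scoring model supplies log p(x_i|x_<i) for the observed tokens.

```python
# Illustrative sketch only -- assumes transformers + torch; hub ids are my guesses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sampling_discrepancy(text,
                         reference_name="EleutherAI/gpt-j-6B",     # sampling / reference model
                         scoring_name="EleutherAI/gpt-neo-2.7B"):  # scoring model
    # Both models share the GPT-2 BPE vocabulary, so a single tokenizer is used here
    # (a simplification; in practice you would also load on GPU / in half precision).
    tok = AutoTokenizer.from_pretrained(scoring_name)
    ref_model = AutoModelForCausalLM.from_pretrained(reference_name).eval()
    score_model = AutoModelForCausalLM.from_pretrained(scoring_name).eval()

    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        ref_logits = ref_model(ids).logits[:, :-1]      # q(. | x_<i): sampling distribution
        score_logits = score_model(ids).logits[:, :-1]  # p(. | x_<i): scoring distribution

    # GPT-J pads its vocabulary; truncate both to the shared token range.
    v = min(ref_logits.size(-1), score_logits.size(-1))
    ref_logits, score_logits = ref_logits[..., :v], score_logits[..., :v]

    labels = ids[:, 1:]
    log_p = torch.log_softmax(score_logits, dim=-1)
    q = torch.softmax(ref_logits, dim=-1)

    # log-likelihood of the observed tokens under the scoring model
    ll = log_p.gather(-1, labels.unsqueeze(-1)).squeeze(-1).sum(dim=-1)
    # mean and variance of log p over alternative tokens drawn from the reference model
    mu = (q * log_p).sum(dim=-1).sum(dim=-1)
    var = (q * log_p.square()).sum(dim=-1) - (q * log_p).sum(dim=-1).square()
    return ((ll - mu) / var.sum(dim=-1).sqrt()).item()
```

If the two model names are swapped when these roles are constructed, the discrepancy statistic is normalized under the wrong sampling distribution, which would not match the setting quoted from the paper.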
