create codemeta evaluation benchmark #1

@alee

Description

create a dataset of codemeta examples drawn from representative communities

  • identify candidate set from each registry / repository
  • generate codemeta or pull existing codemeta
  • "grade" codemeta (manually?)
  • include alternative representations (e.g., datacite.json?)
  • include context in the README.md file
  • organize examples in an examples directory, with one directory per registry / repository, e.g. (see the grading sketch after the tree):
examples
├── all.csv
├── ascl
├── comses
│   └── artificial-anasazi
│       ├── README.md
│       └── codemeta.json
├── csdms
├── rsd
└── zenodo
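
As a first pass at the "grade" step, here is a minimal sketch of an automated check over this layout, assuming each example directory holds a codemeta.json and that all.csv aggregates per-example results; the baseline terms and CSV columns below are illustrative, not a settled rubric:

```python
import csv
import json
from pathlib import Path

# Illustrative baseline; the actual grading rubric is an open question.
BASELINE_TERMS = ["name", "description", "author", "codeRepository", "license"]

def grade_example(codemeta_path: Path) -> dict:
    """Load one codemeta.json and report which baseline terms are present."""
    with codemeta_path.open() as f:
        record = json.load(f)
    missing = [t for t in BASELINE_TERMS if not record.get(t)]
    return {
        "registry": codemeta_path.parent.parent.name,  # e.g. "comses"
        "example": codemeta_path.parent.name,          # e.g. "artificial-anasazi"
        "present": len(BASELINE_TERMS) - len(missing),
        "missing": ";".join(missing),
    }

def main(examples_dir: Path = Path("examples")) -> None:
    rows = [grade_example(p) for p in sorted(examples_dir.glob("*/*/codemeta.json"))]
    if not rows:
        return
    with (examples_dir / "all.csv").open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```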

provide guidance on how to consistently map ambiguous concepts to codemeta terms (see the sketch after this list):

  • peer reviewed -> Review / reviewBody / reviewPart
  • dependencies / softwareRequirements
  • related publications
  • preferred citation
  • narrative documentation
  • others?
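
A hedged sketch of how a few of these mappings could look in a single record. softwareRequirements and referencePublication are existing codemeta terms, and review was added in codemeta 3.0, but the shapes and values below are assumptions for discussion, not recommendations:

```python
# Provisional mapping sketch; every choice here is up for debate in this issue.
record = {
    "@context": "https://w3id.org/codemeta/3.0",
    "@type": "SoftwareSourceCode",
    "name": "artificial-anasazi",
    # peer reviewed -> schema.org Review, via the codemeta 3.0 "review" term
    "review": {
        "@type": "Review",
        "reviewBody": "Passed peer review; reviewer notes would go here.",
    },
    # dependencies -> softwareRequirements
    "softwareRequirements": ["NetLogo >= 6.2"],  # illustrative value
    # related publications -> referencePublication
    "referencePublication": {
        "@type": "ScholarlyArticle",
        "name": "Understanding Artificial Anasazi",  # illustrative value
    },
}
```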

should we recommend including fields with no corresponding codemeta term in a schema.org additionalProperty?
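
If so, here is a minimal sketch assuming schema.org's PropertyValue shape; the field name below is invented purely for illustration:

```python
record = {
    "@context": "https://w3id.org/codemeta/3.0",
    "@type": "SoftwareSourceCode",
    "name": "artificial-anasazi",
    # A source field with no corresponding codemeta term, carried as a
    # schema.org PropertyValue; "peerReviewStatus" is an invented example.
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "peerReviewStatus", "value": "certified"}
    ],
}
```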

examples from CSDMS:

examples from Caltech:

examples from OSSci Innovation Sprint:
