Experiment: context size #37

@gonzalobenegas

Description

Hypothesis or Goal

A small context size (e.g., 128, 256, or 512 bp) could be enough for good performance on variant effect prediction. Models with a small context size could afterwards be fine-tuned on longer-range tasks and still reach good performance, perhaps using a hierarchical model where this gLM operates at high resolution but low context, and subsequent layers operate at lower resolution but higher context (see the sketch below).
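
A minimal PyTorch sketch of what that hierarchy could look like. Everything here is hypothetical: the class name, layer counts, dimensions, mean pooling, and the omission of positional encodings are illustrative choices, not taken from the training code.

```python
import torch
import torch.nn as nn

class HierarchicalGLM(nn.Module):
    """Hypothetical two-level gLM: a high-resolution encoder whose
    attention is restricted to short windows, followed by a
    low-resolution encoder attending across pooled window embeddings.
    Positional encodings are omitted for brevity."""

    def __init__(self, vocab_size=6, d_model=256, window=256,
                 n_local_layers=8, n_global_layers=4):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, d_model)
        local = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(local, num_layers=n_local_layers)
        glob = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.global_encoder = nn.TransformerEncoder(glob, num_layers=n_global_layers)

    def forward(self, tokens):
        # tokens: (batch, seq_len) nucleotide ids; seq_len must be a
        # multiple of the window size.
        b, seq_len = tokens.shape
        n_win = seq_len // self.window
        x = self.embed(tokens)                     # (b, seq_len, d_model)
        x = x.view(b * n_win, self.window, -1)     # split into windows
        x = self.local_encoder(x)                  # attention within each window only
        pooled = x.mean(dim=1).view(b, n_win, -1)  # one embedding per window
        return self.global_encoder(pooled)         # long-range, low-resolution attention

model = HierarchicalGLM()
out = model(torch.randint(0, 6, (2, 1024)))  # 1024 bp = 4 windows of 256 bp
print(out.shape)                             # torch.Size([2, 4, 256])
```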

Links

Training code
Analysis code
Wandb: 1

Results

  • On VEP (see Experiment: promoters YOLO run #21), there is no clear difference between 512 bp and 256 bp context (the latter was trained with double the batch size, so tokens per batch are equal; see the arithmetic sketch below).
[Figure: VEP performance, 512 bp vs. 256 bp context]
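
For reference, the equal-tokens setup works out as below. The absolute batch sizes are illustrative; only the 2:1 ratio comes from the run description.

```python
# Halving the context while doubling the batch size keeps tokens per
# batch constant. Batch sizes here are illustrative, not the run's values.
ctx_512, batch_512 = 512, 128
ctx_256, batch_256 = 256, 256
assert ctx_512 * batch_512 == ctx_256 * batch_256  # 65,536 tokens per batch
```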
