nitinthedreamer/Learning-generative-principles-of-a-symbol-system

This repository contains the results, training/testing samples, and code used in the computational model sections.

Results/

We used 100 models with randomly initialized weights: 50 were tested directly on the Which-Is-N task without any training, and 50 were trained on the numbers used in two children's behavioral studies. The 50 untrained models are referred to as untrained/ and the 50 trained models as kid_trained/.
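As a rough illustration of this bootstrap setup (not the repository's exact code; `build_model` is a hypothetical stand-in for the actual model constructor), each model can be seeded separately so that it starts from distinct random weights:

```python
import torch

def bootstrap_models(build_model, n=100, seed=0):
    """Instantiate n models, each with a distinct random initialization."""
    models = []
    for i in range(n):
        torch.manual_seed(seed + i)   # different random weights per model
        models.append(build_model())
    # first half stays untrained; second half gets trained on the kids' numbers
    return models[:n // 2], models[n // 2:]
```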

We used two different measures to compute the models' performance on the Which-Is-N task:

    Raw probabilities of every target word generated by the model. These results are in Results/Probabilities/.
    Edit distance between the model-generated target/foil label and the true target label (a minimal sketch of this computation follows the list). These results are in Results/Edit_distance/.
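The edit-distance measure is the standard Levenshtein distance. A minimal sketch, assuming the usual dynamic-programming formulation rather than the repository's exact implementation:

```python
def edit_distance(pred, target):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn pred into target (Levenshtein distance)."""
    m, n = len(pred), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                          # delete all of pred[:i]
    for j in range(n + 1):
        dp[0][j] = j                          # insert all of target[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == target[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# edit_distance("tree", "three") == 1  (one insertion)
```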

We also include the probability that the model's attention falls in each region (left or right) of the image when generating each word. These files are in Results/Attention_measure/.
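One simple way to derive such a measure from a spatial attention map is to sum the normalized attention weights over each half of the image. This is a hedged sketch under our own naming; the repository's actual computation may differ:

```python
import numpy as np

def left_right_attention(alpha):
    """alpha: 2-D spatial attention map (H x W) for one generated word.
    Returns the attention mass falling in the left and right halves."""
    alpha = np.asarray(alpha, dtype=float)
    alpha = alpha / alpha.sum()          # normalize to a probability map
    mid = alpha.shape[1] // 2
    return alpha[:, :mid].sum(), alpha[:, mid:].sum()
```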

Training_and_testing_items/

Raw images for the training/validation/testing numbers are in images_raw/, and the corresponding CSV files are in csv/. A few pre-processing steps are necessary before the model can take these numbers as input, so the processed files used for running the models are in images_processed/.
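As an illustration of the kind of pre-processing an image-captioning model typically requires (the exact steps are in the repository's scripts; the function name and 256-pixel size here are assumptions), each raw image is resized to a fixed resolution and reordered to channels-first for PyTorch:

```python
from PIL import Image
import numpy as np

def preprocess_image(path, size=256):
    """Load a raw number image, resize it, and convert to (C, H, W)."""
    img = Image.open(path).convert("RGB").resize((size, size))
    arr = np.asarray(img).transpose(2, 0, 1)   # (H, W, C) -> (C, H, W)
    return arr.astype(np.uint8)
```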

Code/

This folder includes all the code we used for training, testing, and generating result files. The model and training procedure are based on code from https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning; the scripts here are adapted from that repository.

Python 3.6 and PyTorch 0.4.1 are used for these scripts.

Training

python train_bootstrap.py -m [PATH_TO_MODEL_OUTPUT] -d ../Training_and_testing_items/images_processed/ -n [NUM_OF_MODELS]
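For example, to train 50 models (matching the 50 kid-trained models above) and write them to an illustrative saved_models/ directory (the output path is just a placeholder):

python train_bootstrap.py -m saved_models/ -d ../Training_and_testing_items/images_processed/ -n 50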

Testing Trained Models

Test newly trained models:

python caption_bootstrap.py -m [PATH_TO_SAVED_MODELS] \
    -wm ../Training_and_testing_items/images_processed/WORDMAP_coco_1_cap_per_img_1_min_word_freq.json \
    -i ../Training_and_testing_items/images_raw/test_new/ -n 50 \
    -oa [PATH_TO_SAVE_RAW_ATTENTION] \
    -g ../Training_and_testing_items/images_raw/test_new/num_labels.json \
    -t ../Training_and_testing_items/csv/test_trial_new.csv \
    -ot [PATH_TO_SAVE_TRIAL_LEVEL_RESULTS] \
    -r [PATH_TO/OVERALL_ACC.csv] \
    -st [new or old] -et [edit or prob] \
    -op [PATH_TO_SAVE_RAW_PROBABILITY]

To test our trained models, please download our models here and then run the above command.
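For instance, to evaluate 50 trained models on the new test set with the edit-distance measure (all output paths below are illustrative placeholders, not paths the repository prescribes):

python caption_bootstrap.py -m saved_models/ \
    -wm ../Training_and_testing_items/images_processed/WORDMAP_coco_1_cap_per_img_1_min_word_freq.json \
    -i ../Training_and_testing_items/images_raw/test_new/ -n 50 \
    -oa results/attention/ \
    -g ../Training_and_testing_items/images_raw/test_new/num_labels.json \
    -t ../Training_and_testing_items/csv/test_trial_new.csv \
    -ot results/trial_level.csv \
    -r results/overall_acc.csv \
    -st new -et edit \
    -op results/raw_probabilities/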

The model and training code above build on https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning; more information can be found in that repository's README.md.
