Hey there!
I'm having some issues following the guide "2. Using NILINKER for inference".
- (SOLVED) On 2.1, running `./get_NILINKER_use_data.sh` is not enough: the code still errors out, saying it requires other files that are downloaded by `get_EvaNIL_preparation_data.sh` (which in turn downloads them to `./kb_files`, while the code expects them in `./data/kb_files`).
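In case it helps others, the workaround I used was roughly the following (the script name and paths are taken from the error above; adjust if your layout differs):

```shell
# Hypothetical workaround sketch: run the extra download script (if present),
# then move its output from ./kb_files to ./data/kb_files, where the code looks.
if [ -x ./get_EvaNIL_preparation_data.sh ]; then
    ./get_EvaNIL_preparation_data.sh   # downloads into ./kb_files
fi
mkdir -p data
if [ -d kb_files ]; then
    mv kb_files data/kb_files
fi
```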
- When loading the model I'm getting:
```
ValueError                                Traceback (most recent call last)
Cell In[1], line 7
      4 target_kb = 'medic'
      5 top_k = 10 # Top-k candidates to return for input entity
----> 7 nilinker = load_model(target_kb, top_k=top_k)
      9 entity = "parkinsonian disorders"
     11 top_candidates = nilinker.prediction(entity)

File ~/projects/tests/NILINKER/src/NILINKER/predict_nilinker.py:33, in load_model(partition, top_k)
     31 model.compile(run_eagerly = True)
     32 model.built = True
---> 33 model.load_weights(model_dir + "best.h5")
     34 print("-----> NILINKER ready!")
     36 return model

File ~/.virtualenvs/test_nilinker/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    119 filtered_tb = _process_traceback_frames(e.__traceback__)
    120 # To get the full stack trace, call:
    121 # keras.config.disable_traceback_filtering()
--> 122 raise e.with_traceback(filtered_tb) from None
    123 finally:
    124     del filtered_tb

File ~/.virtualenvs/test_nilinker/lib/python3.10/site-packages/keras/src/legacy/saving/legacy_h5_format.py:357, in load_weights_from_hdf5_group(f, model)
    355     layer_names = filtered_layer_names
    356 if len(layer_names) != len(filtered_layers):
--> 357     raise ValueError(
    358         "Layer count mismatch when loading weights from file. "
    359         f"Model expected {len(filtered_layers)} layers, found "
    360         f"{len(layer_names)} saved layers."
    361     )
    363 for k, name in enumerate(layer_names):
    364     g = f[name]

ValueError: Layer count mismatch when loading weights from file. Model expected 0 layers, found 4 saved layers.
```
Any clue as to what might be going wrong here?
Thank you in advance!