[Question] Difference between two different ways to train multi-GPU index #3636
Replies: 2 comments
The first example builds an index and moves it to the GPU prior to training.
Oh I see, I did not realize that. I noticed that some indices are not yet possible to build on GPUs; for example, it seems method 1 does not work with certain index types. Would a general rule of thumb be to use method 1 whenever possible, and fall back to method 2 if not? If not, is there some other guidance one can follow? Thank you!
Summary
I have seen two different examples of how to train an IVF ANN index on multiple GPUs, and I'm wondering what the difference between the two is.
Platform
OS:
Faiss version:
Installed from:
Faiss compilation options:
Running on:
Interface:
Reproduction instructions
The first example is from the benchmarks:
https://github.com/facebookresearch/faiss/blob/master/benchs/bench_gpu_sift1m.py
and the second is from a popular gist: https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6
I'm wondering why we call `index.train` on the return value of `index_cpu_to_gpu` in the first example, but call `index.train` directly on the `index_factory` result in the second example.