RAM usage during training of the IVF65536,PQ64 index #3659
Replies: 2 comments
2 remarks:
Thanks for the very useful remarks, @mdouze! Both the "OPQ64_256,IVF65536,PQ64" and the "OPQ64_512,IVF65536,PQ64" index trained successfully on my machine with the 20M vectors. Which one do you recommend? If I understand correctly, the rotation matrix of the "OPQ64_512" index (1024 x 512) is larger than that of the "OPQ64_256" index (1024 x 256). Will the larger rotation matrix (more trainable parameters) lead to better kNN search accuracy? Thanks
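To make the comparison concrete, here is a small numpy sketch of the linear transforms the two factory strings imply. The shapes follow faiss's `OPQ{M}_{d_out}` naming; the matrices below are random stand-ins, not trained OPQ rotations:

```python
import numpy as np

d_in = 1024  # input vector dimensionality
rng = np.random.default_rng(0)
x = rng.standard_normal((5, d_in)).astype("float32")  # a few dummy vectors

# OPQ64_256: projects 1024d -> 256d, so PQ64 encodes 4d sub-vectors
A_256 = rng.standard_normal((d_in, 256)).astype("float32")
# OPQ64_512: projects 1024d -> 512d, so PQ64 encodes 8d sub-vectors
A_512 = rng.standard_normal((d_in, 512)).astype("float32")

y_256 = x @ A_256
y_512 = x @ A_512
print(y_256.shape, A_256.size)  # (5, 256), 262144 parameters
print(y_512.shape, A_512.size)  # (5, 512), 524288 parameters
```

The larger transform has more parameters and keeps more of the input dimensions, but each of the 64 subquantizers then has to compress an 8d sub-vector instead of a 4d one into a single byte, so more parameters do not automatically mean better recall.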
Summary
I intended to train an "IVF65536,PQ64" index with 20 million 1024d vectors on a server with 400 GB of RAM and 4 x V100 (16 GB) GPUs.
It ran out of memory. I know these vectors alone already take ~80 GB of RAM before doing anything else.
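For reference, the ~80 GB figure for the raw vectors checks out with a quick back-of-the-envelope calculation:

```python
# 20M vectors x 1024 float32 components x 4 bytes each
n, d = 20_000_000, 1024
raw_bytes = n * d * 4
print(raw_bytes)            # 81920000000 bytes, i.e. 81.92 GB
print(raw_bytes / 1024**3)  # ~76.3 GiB
```

Anything the training procedure allocates (training subsample copies, GPU staging buffers, the index itself) comes on top of this baseline.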
My questions are
Thanks
Platform
OS: Ubuntu 18.04.6 LTS
Faiss version: 1.7.2
Installed from: pip install faiss-gpu
Faiss compilation options:
Running on:
Interface:
Reproduction instructions
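The original reproduction steps are not included above. As a stand-in, here is a hedged numpy sketch of one common way to bound training RAM: train the coarse quantizer on a random subsample rather than all 20M vectors (the `64` oversampling factor, and the `xb`/`index` names mentioned in the note below, are assumptions, not from the original post):

```python
import numpy as np

n, d = 20_000_000, 1024
nlist = 65536
# k-means only needs a bounded training set; a common rule of thumb
# is on the order of 30*nlist to 256*nlist training points.
n_train = min(n, 64 * nlist)  # 64 is an assumed, tunable factor

rng = np.random.default_rng(0)
train_ids = rng.choice(n, size=n_train, replace=False)
print(n_train)                        # 4194304 training vectors
print(n_train * d * 4 / 1024**3)      # 16.0 GiB for the subsample
```

A hypothetical `index.train(xb[train_ids])` would then touch ~16 GiB of training data instead of the full ~80 GB.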