Description
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): YES
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: NO
- TensorFlow installed from (source or binary): binary (downloaded from https://oss.sonatype.org/)
- TensorFlow version (use command below): 2.3.1
- Python version: 3.7.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.1
- GPU model and memory: Tesla K80 (compute capability 3.7); also tested on a Tesla V100 (compute capability 7.0)
Here is the result of the capture script:
tf_env.txt
Describe the current behavior
We tested the new TensorFlow Java API (not the legacy one), i.e. the release published in October 2020. We ran it on several machines, including an Azure Databricks NC6_v3 cluster and Azure virtual machines (the attached capture log is from a virtual machine). When no GPU is available, the library falls back to CPU, which is fine. However, we also measured the execution time of some example processing (a few vector operations), and there is no significant difference between GPU and CPU runs. It looks as if the GPU is not used even when it is present: we tried two graphics cards, a Tesla K80 (compute capability 3.7) and a Tesla V100 (compute capability 7.0), and in both cases the processing times were the same.
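The attached program is the authoritative reproduction; the sketch below only illustrates the shape of the measurement (class name, vector size, and the exact ops are illustrative, not copied from the attachment):

```java
import java.util.Arrays;

import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;

public class VectorBenchmark {
  public static void main(String[] args) {
    float[] data = new float[1_000_000];
    Arrays.fill(data, 1.5f);

    try (Graph g = new Graph(); Session s = new Session(g)) {
      Ops tf = Ops.create(g);
      // Tiny graph: element-wise ops on a large vector.
      Operand<TFloat32> a = tf.constant(data);
      Operand<TFloat32> b = tf.constant(data);
      Operand<TFloat32> result = tf.math.mul(tf.math.add(a, b), b);

      // Warm-up run so one-time initialization is not measured.
      s.runner().fetch(result).run().forEach(Tensor::close);

      long start = System.nanoTime();
      for (int i = 0; i < 100; i++) {
        s.runner().fetch(result).run().forEach(Tensor::close);
      }
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      System.out.println("100 runs took " + elapsedMs + " ms");
    }
  }
}
```

The reported timings come from running this kind of loop on a CPU-only machine and on the GPU machines listed above.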
Describe the expected behavior
Execution times should be significantly shorter when the program is executed on a machine with a GPU.
Code to reproduce the issue
We used the following Java program and POM:
HelloTensorFlow_java.txt
pom_xml.txt
The source was compiled to a class file and run with the following command:
java -classpath protobuf-java-3.8.0.jar:ndarray-0.2.0.jar:javacpp-1.5.4.jar:javacpp-1.5.4-linux-x86_64.jar:tensorflow-core-api-0.2.0.jar:tensorflow-core-api-0.2.0-linux-x86_64-gpu.jar:tensorflow-core-platform-gpu-0.2.0.jar:. HelloTensorFlow
The listed libraries were downloaded from https://oss.sonatype.org/.
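As a quick sanity check that this classpath is complete and the native library from the linux-x86_64-gpu artifact actually loads, a one-line version check can be run first; the snippet below is a hypothetical helper, not part of the attached reproduction:

```java
import org.tensorflow.TensorFlow;

// Hypothetical smoke test: TensorFlow.version() requires the native TensorFlow
// library to load, so a missing or mismatched jar on the classpath fails here
// rather than later in the benchmark.
public class VersionCheck {
  public static void main(String[] args) {
    System.out.println("TensorFlow native runtime version: " + TensorFlow.version());
  }
}
```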
Other info / logs
The enclosed program produces the following log:
log.txt
The log shows that the GPU was present and recognized.
However, the execution time did not differ between runs with and without the GPU.
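A further diagnostic that could narrow this down is to enable device placement logging when creating the session, so the runtime reports which device each op is actually assigned to. Assuming the 0.2.0 Session constructor accepts a ConfigProto (as newer releases do), a sketch would be:

```java
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.op.Ops;
import org.tensorflow.proto.framework.ConfigProto;
import org.tensorflow.types.TFloat32;

public class DevicePlacementCheck {
  public static void main(String[] args) {
    // Assumption: this release exposes a Session constructor taking a ConfigProto.
    // With log_device_placement enabled, the native runtime prints the device
    // (e.g. /device:GPU:0 or /device:CPU:0) chosen for every op.
    ConfigProto config = ConfigProto.newBuilder()
        .setLogDevicePlacement(true)
        .build();

    try (Graph g = new Graph(); Session s = new Session(g, config)) {
      Ops tf = Ops.create(g);
      Operand<TFloat32> sum = tf.math.add(tf.constant(1.0f), tf.constant(2.0f));
      try (Tensor<?> t = s.runner().fetch(sum).run().get(0)) {
        // If the Add op is logged on /device:CPU:0 despite a visible GPU,
        // that would explain the identical timings.
      }
    }
  }
}
```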