Questions tagged [cuda]
23 questions
4
votes
1 answer
CUDA compatibility of GTX 1650 Ti versus GTX 1650
I am confused about CUDA compatibility. I am studying deep learning and looking for a laptop to buy. One laptop has a GTX 1650 Ti and another has a GTX 1650. Will both be able to use the GPU for model training, or only the second one?
I checked for gpu…
user105772
- 43
- 1
- 3
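Both cards are Turing-generation parts with compute capability 7.5, so either can run CUDA workloads; a minimal sanity check, assuming PyTorch is installed:

import torch

# Confirm PyTorch can see a usable CUDA device and report what it is
print(torch.cuda.is_available())            # True if GPU and driver are usable
print(torch.cuda.get_device_name(0))        # e.g. "GeForce GTX 1650"
print(torch.cuda.get_device_capability(0))  # (7, 5) for both the 1650 and 1650 Ti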
4
votes
3 answers
Torch CUDA not able to identify GPU on AWS g4dn.xlarge
I have created an EC2 instance with a GPU (g4dn.xlarge) and launched it. I want to run some PyTorch-based code from the command line. While pycuda is able to identify the GPU, PyTorch is not able to.
import pycuda.driver as…
Shruti
- 55
- 1
- 5
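A frequent cause of this symptom is a CPU-only PyTorch wheel rather than a missing GPU; a quick check, as a sketch:

import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel was installed
print(torch.version.cuda)         # None on CPU-only builds
print(torch.cuda.is_available())  # should be True on a g4dn.xlarge with a CUDA wheel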
2
votes
0 answers
Why is TensorFlow LSTM training slower on a machine with far better components?
Training an LSTM using the exact same code and dataset on two machines with different components yields different training times. In my case, however, the results were the opposite of what was expected. Is there reasoning…
Keanu Correia
- 21
- 1
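One thing worth ruling out is a silent CPU fallback; a minimal sketch to log where ops actually run (note too that Keras LSTM layers only use the fast fused cuDNN kernel when left at their default activations):

import tensorflow as tf

# Print the device each op is placed on; a "better" machine quietly running
# on CPU kernels is a common cause of unexpectedly slow training
tf.debugging.set_log_device_placement(True)
print(tf.config.list_physical_devices('GPU'))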
2
votes
0 answers
Keras multi-GPU seems to heavily load one of the cards
I'm using Keras (TF backend) to train a neural net; I'm accelerating with GPUs using the multi-GPU options in Keras.
For some reason, the program seems to heavily load one of the cards and the others only lightly. See the output from nvidia-smi…
Dan Scally
- 1,784
- 8
- 26
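The old keras.utils.multi_gpu_model path was often reported to load one card more heavily than the rest; the commonly suggested alternative is tf.distribute.MirroredStrategy, sketched here with a hypothetical toy model:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicate across all visible GPUs
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer='adam', loss='mse')
# model.fit(...) now splits each batch evenly across the replicas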
2
votes
1 answer
"model.to('cuda:6')" becomes (nvidia-smi) GPU 4, same with any other "cuda:MY_GPU", only "cuda:0" becomes GPU 0. How do I get rid of this mapping?
Strange mapping: example
In the following example, the first column is the device chosen in the code; the second column is the GPU that actually does the work instead:
0:0 1234 MiB
1:2 1234 MiB
2:7 1234 MiB
3:5 2341 MiB
4:1 3412 MiB
5:3 3412 MiB
6:4 3412 MiB
7:6 3412…
questionto42
- 215
- 1
- 10
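This usually happens because CUDA enumerates devices fastest-first by default, while nvidia-smi lists them in PCI bus order; setting CUDA_DEVICE_ORDER makes the two numberings agree. A minimal sketch:

import os
# Must be set before CUDA is initialised, i.e. before the first torch.cuda call
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"

import torch
model = torch.nn.Linear(4, 4).to("cuda:6")  # now lands on the GPU nvidia-smi calls 6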
2
votes
1 answer
Parallel data preprocessing
I am looking for a suggestion. Is it possible to implement data preprocessing steps like missing-value imputation, outlier detection, normalization, and label encoding in parallel? Can I use CUDA/OpenMP/MPI programming for data…
Encipher
- 381
- 1
- 11
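These steps are embarrassingly parallel across columns, so even before reaching for CUDA/OpenMP/MPI, a CPU-parallel sketch with joblib illustrates the idea (the impute_column helper is hypothetical):

import numpy as np
from joblib import Parallel, delayed

X = np.random.rand(1000, 8)
X[X < 0.05] = np.nan  # inject some missing values

def impute_column(col):
    # Replace NaNs with the column mean
    col = col.copy()
    col[np.isnan(col)] = np.nanmean(col)
    return col

# One imputation task per column, spread over all CPU cores
cols = Parallel(n_jobs=-1)(delayed(impute_column)(X[:, j]) for j in range(X.shape[1]))
X_imputed = np.column_stack(cols)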
2
votes
3 answers
How do I install CUDA GPU support for Visual Studio 2022 on Windows 10?
I cannot find the Visual Studio 2019 version, and every time I try to install CUDA 11.2.2 on my laptop it warns me that I haven't installed Visual Studio. I've tried installing the C++ add-ons (Mobile and Desktop development with C++) but…
rutvik jere
- 21
- 1
- 1
- 3
1
vote
0 answers
Using GPU-accelerated libSVM in Python
I have been using libSVM in a Python notebook to classify my dataset; one run takes approximately 5 hours, so 5-fold cross-validation will take more than a day.
I am planning to switch to a GPU-accelerated libSVM. I could find gpu…
khushi
- 111
- 2
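One GPU option sometimes suggested here is ThunderSVM, which exposes a scikit-learn-style interface; a sketch, assuming the thundersvm package and its SVC class are installed and working:

from thundersvm import SVC  # GPU-backed, scikit-learn-style API
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = SVC(kernel="rbf", C=1.0)  # constructor arguments mirror sklearn's SVC
clf.fit(X, y)
print(clf.predict(X[:5]))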
0
votes
1 answer
CUDA for PyTorch and CUDA for TensorFlow
I want to install PyTorch, and for that I visited the official PyTorch website, which gives me a command to install it with CUDA:
pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio===0.9.0 -f…
Adarsh Wase
- 105
- 4
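Note that the +cu111 wheels bundle their own CUDA runtime, so no separately installed system CUDA toolkit has to match; a quick way to see what the wheel shipped with:

import torch

print(torch.version.cuda)         # CUDA version the wheel was built against, e.g. "11.1"
print(torch.cuda.is_available())  # only the NVIDIA driver needs to be recent enough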
0
votes
1 answer
Is it worth upgrading CUDA and cuDNN with older GPUs?
New CUDA 11.x versions add support for the TF32 format and other new features for newer cards (RTX 30xx, A100, etc.).
Is it worth upgrading to CUDA 11.x if you have a GTX 1050 or an RTX 2080 (which has tensor cores)?
Could it be that the new features only add…
Emil
- 308
- 1
- 8
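TF32 in particular needs compute capability 8.0 (Ampere), which neither a GTX 1050 (6.1) nor an RTX 2080 (7.5) has; a sketch to check what your card reports, assuming PyTorch:

import torch

major, minor = torch.cuda.get_device_capability(0)
# TF32 kernels require an Ampere-or-newer part (compute capability >= 8.0)
print(f"compute capability {major}.{minor}, TF32 capable: {major >= 8}")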
0
votes
1 answer
Unable to use the pip package obtained from building TensorFlow 2.3 from source
I've managed to build TensorFlow 2.3 from source, following these instructions:
https://towardsdatascience.com/how-to-compile-tensorflow-2-3-with-cuda-11-1-8cbecffcb8d3
But when I install the resulting pip package in a new conda environment and import…
latida
- 51
- 3
0
votes
1 answer
Why does my GPU immediately run out of memory when I try to run this code?
I am trying to write a neural network that will train on plays by Shakespeare and then write its own passages. I am using PyTorch. For some reason, my GPU immediately runs out of memory. Note that I am not running it on my own GPU; I am running it using…
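For a character-level model like this, a common culprit is keeping the recurrent hidden state attached to the autograd graph across batches, so memory grows every step; a diagnostic sketch:

import torch

print(torch.cuda.memory_allocated() / 1e6, "MB allocated")
print(torch.cuda.memory_reserved() / 1e6, "MB reserved")
# If these climb every batch, detach the recurrent state between batches, e.g.:
# hidden = tuple(h.detach() for h in hidden)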
0
votes
1 answer
Why doesn't the GPU utilise system memory?
I have noticed that, when training huge deep learning models on consumer GPUs (like the GTX 1050 Ti), the network often doesn't work.
The reason is that the GPU just doesn't have enough memory to train the network. This problem has…
neel g
- 227
- 1
- 5
- 11
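Frameworks allocate device memory directly and will not transparently spill to system RAM; what you can do is stop TensorFlow grabbing the whole card up front, as in this sketch:

import tensorflow as tf

# Claim GPU memory only as needed instead of reserving (almost) all of it at startup
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)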
0
votes
1 answer
Why is CUDA not available?
I was trying to install PyTorch with CUDA on my system. My GPU is a little old: an NVIDIA GeForce GT 720M, and my driver version is 391.35.
When I installed cudatoolkit in Anaconda, it installed CUDA 11.3. Then I went to…
Ali.A
- 159
- 8
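The likely explanation is twofold: driver 391.35 is far too old for the CUDA 11.3 runtime, and the GT 720M is a Fermi-class part (compute capability 2.1) that current PyTorch builds no longer ship kernels for. A quick diagnostic sketch:

import torch

print(torch.version.cuda)          # runtime the wheel was built with, e.g. "11.3"
print(torch.cuda.is_available())   # False when the driver cannot serve that runtime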
0
votes
0 answers
Which is better: TensorFlow 2.3.0 with GPU or TensorFlow 2.18.0 with CPU only?
How fast is TensorFlow 2.3.0 with a GPU compared to TensorFlow 2.18.0 with CPU only?
Hardware
Laptop: MacBook Pro 15-inch 2012 64-bit.
OS: Windows 10 Pro 22H2
Processor: Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
GPU: GeForce GT 750M,…
DrJerryTAO
- 101
- 2
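The honest answer is to measure on your own workload; a crude timing sketch that runs on either version:

import time
import tensorflow as tf

x = tf.random.normal((4096, 4096))

start = time.time()
for _ in range(10):
    y = tf.matmul(x, x)
_ = y.numpy()  # force execution before stopping the clock
print(f"10 matmuls: {time.time() - start:.2f}s on", tf.config.list_physical_devices())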