How does ConvTranspose in pytorch with groups > 1 work?
I’m trying to understand how PyTorch’s ConvTranspose works with groups > 1, mainly the calculation between the grouped transposed-conv weights and the padded input. I’ve experimented with my own code, but I can’t work out how the result is calculated.
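One way to see what grouped transposed convolution computes is to reproduce it manually: with `groups=g`, the input channels are split into `g` groups, each group gets its own slice of the weight tensor (shape `(in_channels, out_channels // groups, kH, kW)`), and the per-group outputs are concatenated. A minimal sketch (the channel sizes are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# grouped transposed conv: 4 input channels, 6 output channels, 2 groups
g = nn.ConvTranspose2d(4, 6, kernel_size=3, groups=2, bias=False)

# equivalent: two independent transposed convs, one per input-channel group
a = nn.ConvTranspose2d(2, 3, kernel_size=3, bias=False)
b = nn.ConvTranspose2d(2, 3, kernel_size=3, bias=False)
# ConvTranspose2d weight has shape (in_channels, out_channels // groups, kH, kW)
a.weight.data = g.weight.data[:2]   # weights for the first input-channel group
b.weight.data = g.weight.data[2:]   # weights for the second input-channel group

x = torch.randn(1, 4, 5, 5)
out_grouped = g(x)
out_manual = torch.cat([a(x[:, :2]), b(x[:, 2:])], dim=1)
print(torch.allclose(out_grouped, out_manual, atol=1e-5))  # True
```

So each group of input channels only ever contributes to its own group of output channels; nothing mixes across groups.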
How to calculate similarity/distance in a Siamese network (pytorch)
How exactly do you calculate the similarity/distance in a Siamese network, and classify them after?
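A common setup is a shared encoder applied to both inputs, a distance (or similarity) between the two embeddings, and a threshold to decide "same" vs "different". A minimal sketch, where the encoder and the threshold value are illustrative assumptions (the threshold is normally tuned on a validation set):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# hypothetical shared encoder; any network producing an embedding works
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))

x1 = torch.randn(8, 1, 28, 28)
x2 = torch.randn(8, 1, 28, 28)
e1, e2 = encoder(x1), encoder(x2)

# Euclidean distance between embeddings (the quantity used by contrastive loss)
dist = F.pairwise_distance(e1, e2)   # shape: (8,)
# cosine similarity is a common alternative
sim = F.cosine_similarity(e1, e2)    # shape: (8,), values in [-1, 1]

# classify by thresholding the distance
threshold = 1.0  # assumed value for illustration
pred_same = (dist < threshold).long()
print(dist.shape, sim.shape, pred_same.shape)
```

With contrastive loss you threshold the Euclidean distance; with a learned classification head you could instead feed `torch.abs(e1 - e2)` into a small linear layer and train it with binary cross-entropy.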
How to put same input/output value for autoencoder? (pytorch)
I want to use an autoencoder’s latent vector as a feature extractor. For that, I want to feed the same image to the autoencoder model as both the input and the target output.
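One simple way to do this is a `Dataset` whose `__getitem__` returns the same tensor twice, so the training loop can treat the second element as the reconstruction target. A minimal sketch with random stand-in images:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class AutoencoderDataset(Dataset):
    """Returns each image as both the input and the reconstruction target."""
    def __init__(self, images):
        self.images = images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = self.images[idx]
        return x, x  # input and target are identical

images = torch.randn(10, 1, 28, 28)  # stand-in for a real image set
loader = DataLoader(AutoencoderDataset(images), batch_size=4)
x, target = next(iter(loader))
print(torch.equal(x, target))  # True
```

Equivalently, you can skip the paired dataset entirely and just compute the loss as `criterion(model(x), x)` inside the training loop.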
Pytorch -> Is it possible to obtain meaningful results on the intermediate layers of a neural network? [closed]
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [64, 64, 358, 12]
I’m trying to run a Graph Neural Network model called “STFGNN” (available on GitHub at https://github.com/lwm412/STFGNN-Pytorch/tree/main?tab=readme-ov-file) on Kaggle. However, I’m encountering several issues:
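The error itself says that `nn.Conv1d` only accepts 2D `(C, L)` or 3D `(N, C, L)` input, while the model is passing it a 4D tensor of shape `[64, 64, 358, 12]`. Without knowing which dimension STFGNN intends as the length, one common workaround is to fold an extra dimension into the batch before the conv; this is an illustrative sketch, not the repo's actual fix:

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=64, out_channels=32, kernel_size=3, padding=1)

x = torch.randn(64, 64, 358, 12)  # the shape from the error message

# conv(x)  # raises: Expected 2D (unbatched) or 3D (batched) input to conv1d

# fold the last dimension into the batch so Conv1d sees (N * 12, 64, 358);
# which dimension to fold depends on what it represents in STFGNN
n, c, l, k = x.shape
x3d = x.permute(0, 3, 1, 2).reshape(n * k, c, l)
out = conv(x3d)
print(out.shape)  # torch.Size([768, 32, 358])
```

The result can be reshaped back with `out.reshape(n, k, 32, l).permute(0, 2, 3, 1)` if the original layout is needed downstream.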
Some layers of my deep learning model are initialized in the model class but are not used in the forward pass, which causes different results
Some layers of my deep learning model are initialized in the model class but are never used in the forward pass. I found that when the code for these layers is kept, the performance differs from the result when that code is deleted. Is this phenomenon caused by the initialization of the model? An example is as follows:
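This behaviour is usually explained by random initialization order rather than by the unused layers doing anything: constructing an extra layer consumes values from the global RNG, so every layer created after it starts from different initial weights, even though the extra layer never appears in `forward()`. A minimal sketch demonstrating the effect:

```python
import torch
import torch.nn as nn

def build(with_unused):
    torch.manual_seed(0)
    if with_unused:
        _ = nn.Linear(10, 10)  # never used, but consumes RNG draws at init
    used = nn.Linear(10, 10)
    return used

w_without = build(False).weight.clone()
w_with = build(True).weight.clone()
print(torch.equal(w_without, w_with))  # False: the unused layer shifted the RNG state
```

So with the same seed, adding or removing the unused layers changes the starting point of training, which can lead to measurably different final performance.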
Tuple object has no attribute size, error running PyTorch in Google Colab
I am learning to use PyTorch and I am running it on Google Colab. The script I am using is the following, where I have a toy dataset that I’ve created and an LSTM model that I want to train. I am getting several errors that I am not able to solve.
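The "'tuple' object has no attribute 'size'" error with an LSTM almost always comes from forgetting that `nn.LSTM` returns a tuple `(output, (h_n, c_n))`, not a single tensor. A minimal sketch (the layer sizes are illustrative):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=8, batch_first=True)
x = torch.randn(4, 10, 5)  # (batch, seq_len, features)

result = lstm(x)
# result is a tuple (output, (h_n, c_n)); result.size() would raise
# AttributeError: 'tuple' object has no attribute 'size'
output, (h_n, c_n) = result
print(output.size())  # torch.Size([4, 10, 8])
print(h_n.size())     # torch.Size([1, 4, 8])
```

The same error appears later in the pipeline if the raw tuple is passed into a loss function or another layer that expects a tensor, so unpack the tuple right at the LSTM call.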
Issue with the pytorch interpolate
I am new to deep learning and model training. In my training loop I am trying to compute the segmentation loss for an image, so before the cross-entropy loss calculation I used interpolate to resize the output that came out of the model.
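For segmentation, the usual pattern is to interpolate the model's logits to the label resolution (rather than resizing the image or labels), and to remember that `F.cross_entropy` wants logits of shape `(N, C, H, W)` against integer labels of shape `(N, H, W)`. A minimal sketch with made-up sizes:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 64, 64)           # model output: (N, classes, H, W)
labels = torch.randint(0, 5, (2, 256, 256))  # ground-truth masks at full resolution

# resize logits to the label resolution; bilinear is fine for logits
# (if you resized the labels instead, you would need mode="nearest"
#  to keep them valid integer class ids)
logits_up = F.interpolate(
    logits, size=labels.shape[-2:], mode="bilinear", align_corners=False
)
loss = F.cross_entropy(logits_up, labels)
print(logits_up.shape, loss.item())
```

A frequent source of interpolate errors is passing a 3D tensor where a 4D `(N, C, H, W)` one is expected, or passing float labels to `cross_entropy`; checking both shapes and dtypes right before the loss call usually pinpoints the problem.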
prefetch_factor option could only be specified in multiprocessing.let num_workers > 0 to enable multiprocessing
I am a newbie in this forum.
I hope you are doing well.
I have some problems running code from GitHub:
https://github.com/pdenailly/Probabilistic_forecasting
But I have a problem when I run the file example_bike.py
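That error is raised by `torch.utils.data.DataLoader` when `prefetch_factor` is passed while `num_workers=0` (no worker processes, so there is nothing to prefetch into). The fix is to either set `num_workers > 0` or drop `prefetch_factor` from the `DataLoader` call in the script. A minimal sketch of both options on a toy dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.arange(10.0))

# raises ValueError: prefetch_factor option could only be specified in multiprocessing
# bad = DataLoader(ds, batch_size=2, num_workers=0, prefetch_factor=2)

# fix 1: enable worker processes so prefetching applies
loader = DataLoader(ds, batch_size=2, num_workers=2, prefetch_factor=2)

# fix 2: drop prefetch_factor entirely when num_workers=0
loader0 = DataLoader(ds, batch_size=2, num_workers=0)

total = sum(batch[0].shape[0] for batch in loader0)
print(total)  # 10
```

In the linked repo, the `DataLoader` arguments would be wherever `example_bike.py` (or a config it reads) constructs its loaders; which of the two fixes is appropriate depends on whether multi-process loading is actually wanted there.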