Tag Archive for python, deep-learning, pytorch

How does ConvTranspose in PyTorch with groups > 1 work?

I’m trying to understand the workflow of ConvTranspose in PyTorch with groups > 1, focusing mainly on how the grouped transposed-convolution weights are combined with the padded input. I’ve experimented with my own code, but I can’t work out how the result is computed.
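One way to see what groups > 1 does is to reproduce a grouped transposed convolution by hand: split the input channels into groups, run an independent transposed convolution per group with the matching weight slice, and concatenate the results. The shapes below are illustrative, not from the original question.

```python
import torch
import torch.nn.functional as F

# With groups=g, conv_transpose2d splits the input channels into g groups and
# runs an independent transposed convolution per group; the per-group outputs
# are concatenated along the channel dimension.
# The weight shape is (in_channels, out_channels // groups, kH, kW).

torch.manual_seed(0)
g = 2
x = torch.randn(1, 4, 5, 5)    # 4 input channels, 2 per group
w = torch.randn(4, 3, 3, 3)    # total out_channels = 3 * g = 6

grouped = F.conv_transpose2d(x, w, groups=g)

# Manually reproduce it: each group sees 2 input channels and its own
# slice of the weight, yielding 3 output channels per group.
parts = []
for i in range(g):
    xi = x[:, i * 2:(i + 1) * 2]   # input channels of group i
    wi = w[i * 2:(i + 1) * 2]      # matching slice along the weight's dim 0
    parts.append(F.conv_transpose2d(xi, wi))
manual = torch.cat(parts, dim=1)

print(torch.allclose(grouped, manual))  # True
```

So the groups never mix: group i of the input only ever contributes to group i of the output.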

Some layers of my deep learning model are initialized in the model class but not used in the forward pass, which causes different results

Some layers of my deep learning model are initialized in the model class but are not used in the forward pass. I found that when the code for these layers is kept, the performance differs from the result obtained when that code is deleted. Is this phenomenon caused by model initialization? An example is as follows:

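The original example was not preserved, so here is a minimal sketch of one likely cause: constructing an extra (unused) layer consumes random-number-generator state, so even with the same seed, the layers that are actually used end up with different initial weights. The module names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WithUnused(nn.Module):
    def __init__(self):
        super().__init__()
        # Initialized here but never called in forward().
        self.unused = nn.Linear(10, 10)
        self.used = nn.Linear(10, 2)

    def forward(self, x):
        return self.used(x)

class WithoutUnused(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(10, 2)

    def forward(self, x):
        return self.used(x)

torch.manual_seed(0)
a = WithUnused()
torch.manual_seed(0)
b = WithoutUnused()

# Even with the same seed, the shared layer's weights differ, because
# constructing `unused` advanced the RNG before `used` was initialized.
print(torch.equal(a.used.weight, b.used.weight))  # False
```

If this is the cause, initializing the used layers before the unused ones (or seeding immediately before each layer's init) should make the runs match again.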

Issue with PyTorch interpolate

I am new to deep learning and model training. In my training loop I am trying to compute the segmentation loss for an image, so before the cross-entropy loss calculation I used interpolate to downsize the output that came from the model.
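A minimal sketch of that pattern, resizing the model's output logits to the target mask's size before computing cross-entropy; the shapes and class count are illustrative assumptions, not from the original question.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 64, 64)         # (batch, num_classes, H, W) from the model
target = torch.randint(0, 5, (2, 32, 32))  # ground-truth class-index mask, smaller size

# Resize the logits to the mask's spatial size. Interpolating float logits
# with 'bilinear' is common; never bilinearly interpolate an integer label
# mask, since that produces invalid in-between class values.
resized = F.interpolate(logits, size=target.shape[-2:],
                        mode="bilinear", align_corners=False)

# cross_entropy expects logits of shape (N, C, H, W) and an integer
# target of shape (N, H, W).
loss = F.cross_entropy(resized, target)
print(loss.shape)  # scalar: torch.Size([])
```

A common pitfall here is a shape mismatch: if the loss errors out, check that the target is a LongTensor of class indices with no channel dimension.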