Quantization effects on the gradient descent algorithm
I'm writing a research paper on the effects of quantization on the gradient descent algorithm when precision is reduced from floating point to fixed-point arithmetic, for instance 16 bits. If anyone could shed some light on how gradient descent behaves in such a restricted bit configuration, I would be very grateful 🙂
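To make the question concrete, here is a toy sketch of what I mean (not from any specific paper): gradient descent on a 1-D quadratic, once with ordinary floats and once where the gradient, the scaled update, and the weight are each rounded and saturated to an assumed signed 16-bit Q7.8 fixed-point format (8 fractional bits). The format choice and the loss function are illustrative assumptions, not a claim about any particular hardware.

```python
import numpy as np

def quantize(x, frac_bits=8, total_bits=16):
    """Round to the nearest fixed-point value with `frac_bits` fractional
    bits, then saturate to the signed `total_bits` range (Q7.8 here).
    The format is an illustrative assumption, not a fixed standard."""
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1)) / scale
    hi = (2 ** (total_bits - 1) - 1) / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Toy problem: minimize f(w) = (w - 1.5)^2, whose gradient is 2*(w - 1.5).
w_float, w_fixed = 0.0, 0.0
lr = 0.1
for _ in range(100):
    # Full-precision update.
    w_float -= lr * 2 * (w_float - 1.5)
    # Quantized update: gradient, scaled step, and weight all live in Q7.8.
    grad = quantize(2 * (w_fixed - 1.5))
    w_fixed = quantize(w_fixed - quantize(lr * grad))

print(w_float, w_fixed)
```

The float version converges to the minimizer 1.5 to machine precision, while the fixed-point version stalls once `lr * grad` rounds below half an LSB (1/512 here), leaving a residual error on the order of the quantization step. That stalling behavior near the optimum is exactly the kind of effect I'd like references or analysis for.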