Increasing the size of a neural network layer without compiling a new model in TensorFlow
I am training a narrow 3-layer TensorFlow neural network with layer sizes (input_size, small_number_hidden_units, output_size), and I want to use the learned weights as a blueprint for the initial conditions of a wider 3-layer network with layer sizes (input_size, large_number_hidden_units, output_size). My goal is to piggyback on the solution found by the narrow model so the wider model is less costly to train. Apart from the number of hidden units, the two models have identical architectures.
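To make the setup concrete, here is a minimal framework-agnostic NumPy sketch of one common way to seed the wider hidden layer: copy the narrow weights into the first small_number_hidden_units slots and give the extra units small random incoming weights and zero outgoing weights, so the wider network initially computes (almost) the same function as the narrow one. The helper name `widen_layer` is hypothetical, not part of any API.

```python
import numpy as np

def widen_layer(W1_small, b1_small, W2_small, new_width, rng=None):
    """Seed a wider hidden layer from a trained narrow one.

    W1_small: (input_size, small) input->hidden weights
    b1_small: (small,)            hidden biases
    W2_small: (small, output_size) hidden->output weights
    """
    rng = rng or np.random.default_rng(0)
    input_size, small = W1_small.shape
    output_size = W2_small.shape[1]

    # Copy learned input->hidden weights into the first `small` columns;
    # initialize the extra units with small random values so they can
    # break symmetry during further training.
    W1_big = np.zeros((input_size, new_width))
    W1_big[:, :small] = W1_small
    W1_big[:, small:] = rng.normal(0.0, 0.01, (input_size, new_width - small))

    b1_big = np.zeros(new_width)
    b1_big[:small] = b1_small

    # New hidden units get zero outgoing weights, so at initialization
    # the wider network's output matches the narrow network's output.
    W2_big = np.zeros((new_width, output_size))
    W2_big[:small, :] = W2_small
    return W1_big, b1_big, W2_big
```

In Keras you could then build the wider model as usual and load these arrays into its layers with `layer.set_weights(...)`, which avoids hand-wiring a new graph.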