I'm researching the scientific literature on (deep) machine learning approaches tried for chess before AlphaGo.
Does anybody know whether fully connected deep neural network architectures were tried before the convolutional neural network approach was adopted? Was it a question of processing power, or of superfluous parameters (i.e., CNNs do as well as fully connected DNNs on chess data sets)?
This thread is a long shot: I'm hoping to get the gist of my question answered without having to read implementation discussions ad infinitum.
The motivation for the question is the input encoding and the local-correlation assumptions underlying CNNs, whereas fully connected DNNs are the true zero-assumption approach (it would be nice to know whether DNN approaches behave like CNNs, which would support the CNN bias).
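To make the encoding question concrete, here is a minimal sketch (my own illustration, not taken from any particular paper) of the common 12-plane board encoding: a CNN consumes the stacked 8x8 planes and so bakes in spatial locality, while a fully connected DNN sees only the flattened vector, with no built-in notion of which squares are adjacent.

```python
import numpy as np

# One binary 8x8 plane per (piece type, colour) pair: white then black.
PIECES = "PNBRQKpnbrqk"

def encode_board(piece_at):
    """piece_at: dict mapping (rank, file) -> piece letter, e.g. 'N'."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for (rank, file), piece in piece_at.items():
        planes[PIECES.index(piece), rank, file] = 1.0
    return planes

# Starting-position back ranks only, for brevity.
position = {(0, f): p for f, p in enumerate("RNBQKBNR")}
position.update({(7, f): p.lower() for f, p in enumerate("RNBQKBNR")})

planes = encode_board(position)  # CNN input: shape (12, 8, 8), locality kept
flat = planes.reshape(-1)        # DNN input: shape (768,), locality discarded
print(planes.shape, flat.shape)
```

The point of the sketch is that both nets see exactly the same information; only the CNN's weight sharing encodes the assumption that nearby squares are correlated.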
Literature pointers to shorten my search would be appreciated, as would any comments on how I have it all wrong, even in asking such a question.