Abstract: The talk will present new results that help deep neural networks scale to learn and recall even more patterns. These results extend ordinary unidirectional backpropagation to the more general case of bidirectional backpropagation and replace current ReLU hidden neurons with new NoVa or nonvanishing neurons that perturb a logistic sigmoid. NoVa neurons mitigate the problem of “vanishing gradients” in very deep neural networks. Bidirectional backpropagation trains a classifier or regression network both forward and backward through the same network of synapses and neurons. It reveals a hidden regressor that runs in the backward direction of an ordinary deep classifier network. Bayesian bidirectional backpropagation further allows prior probabilities to shape the network’s global posterior probability structure and improve classification accuracy. These and related changes allow deep networks to learn and recall far more patterns than current deep networks can. They may also suggest new electro-optical or other hardware structures that fully exploit these techniques.
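The abstract describes NoVa neurons only as a perturbation of a logistic sigmoid whose gradient does not vanish. A minimal sketch of one plausible such activation, assuming it adds a linear term to a scaled sigmoid (the function name `nova` and the coefficients `a`, `b`, `c` are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nova(x, a=1.0, b=1.0, c=1.0):
    """Illustrative perturbed logistic sigmoid: a*x + b*sigmoid(c*x)."""
    return a * x + b * sigmoid(c * x)

def nova_grad(x, a=1.0, b=1.0, c=1.0):
    """Derivative a + b*c*s*(1-s): bounded below by a > 0, so it never vanishes."""
    s = sigmoid(c * x)
    return a + b * c * s * (1.0 - s)

# Even deep in the saturation region the gradient stays at or above a = 1,
# unlike a plain sigmoid whose derivative decays toward 0 there.
x = np.linspace(-20.0, 20.0, 101)
print(nova_grad(x).min())
```

The linear term is what keeps the derivative bounded away from zero, which is the stated mechanism for mitigating vanishing gradients in very deep networks.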
Bio: Dr. Olaoluwa (Oliver) Adigun is an adjunct lecturer and a postdoctoral researcher in the Department of Electrical and Computer Engineering at the University of Southern California (USC), Los Angeles. He obtained his doctoral degree in Electrical and Computer Engineering under the supervision of Professor Bart Kosko at USC. Dr. Adigun has served as a research intern in machine intelligence at Amazon, Google AI, and Microsoft, and was the co-recipient of the Best Paper Award at the 2017 International Joint Conference on Neural Networks (IJCNN-2017). He won the 2018 Best Teaching Assistant Award from the Viterbi School of Engineering, USC. His research interests include machine learning, probabilistic modeling, and nonlinear signal processing.