LearningBackprop
ThoughtStorms Wiki
Context: MachineLearning, NeuralNetworks
This is wild. Back-prop (back-propagation) is how a neural network sends information about its error (the gap between what it currently produces and what its trainers want it to produce) backwards through its layers, so that each weight can be nudged in the right direction. It's a crucial part of how these networks learn.
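To make that concrete, here's a minimal sketch of ordinary back-prop (my own illustrative example, not anything from the page's sources): a tiny network with one input, two tanh hidden units, and one linear output, trained on y = 2x. The backward pass pushes the output error back through each layer via the chain rule and nudges every weight accordingly.

```python
import math, random

# Illustrative sketch of back-propagation (all names/values here are
# my own example, not from the wiki): 1 input -> 2 tanh hidden units
# -> 1 linear output, trained on the toy target y = 2x.
random.seed(0)
w1 = [random.uniform(-1, 1) for _ in range(2)]   # input -> hidden weights
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]   # hidden -> output weights
b2 = 0.0
lr = 0.1
data = [(x / 10.0, 2 * x / 10.0) for x in range(-10, 11)]

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(2)]
    return h, sum(w2[i] * h[i] for i in range(2)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

before = mse()
for _ in range(200):
    for x, y in data:
        h, y_hat = forward(x)
        err = y_hat - y                # dL/dy_hat for L = 0.5 * err**2
        # backward pass: send the error back through each layer
        for i in range(2):
            dpre = err * w2[i] * (1 - h[i] ** 2)  # chain rule through tanh
            w2[i] -= lr * err * h[i]
            w1[i] -= lr * dpre * x
            b1[i] -= lr * dpre
        b2 -= lr * err
after = mse()
print(after < before)  # the error should shrink as the net learns
```

The point of the sketch is the backward pass: the "information about the error" the page mentions is exactly the `err * w2[i] * (1 - h[i]**2)` term being carried back from the output to the hidden layer.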
Here are some people who claim to have an algorithm that can learn back-prop itself (presumably via some lower-level "learning method"), avoiding some of the overhead of sending the error information backwards.
Not sure how this can really work, but I'm making a page for it while I try to understand it.