
If the use-case data is not (a subset of) the training data, Deep Learning blows up spectacularly. And since we do not understand why Deep Learning works, we have no clue how to fix this: no clue how to make Deep Learning algorithms at all robust. A human brain has a hundred billion neurons, and each neuron and its interconnections are roughly equivalent to perhaps 100,000 transistors. The iPhone's A11 has 4 billion transistors, which means roughly 2,500,000 iPhones wired together to approach brain-like levels of complexity (the back-of-envelope arithmetic is sketched in code after the quotation below). Add to that a billion years of the genetic algorithm tuning networks of neurons... and anything like human-level intelligence, or even the robustness of behavior characteristic of insects, still looks far off:

Hal Hodson: DeepMind and Google: The Battle to Control Artificial Intelligence: "The power of reinforcement learning and the preternatural ability of DeepMind’s computer programs.... But... if the virtual paddle were moved even fractionally higher, the program would fail. The skill learned by DeepMind’s program is so restricted that it cannot react even to tiny changes to the environment that a person would take in their stride.... Releasing programs perfected in virtual space into the wild is fraught with difficulty.... Success within virtual environments depends on the existence of a reward function.... Unfortunately, the real world doesn’t offer simple rewards.... It is rare for human brains to receive explicit feedback about the success of a task while in the midst of it.... Current and former researchers at DeepMind and Google, who requested anonymity due to stringent non-disclosure agreements, have also expressed scepticism that DeepMind can reach AGI.... The focus on achieving high performance within simulated environments makes the reward-signal problem hard to tackle. Yet this approach is at the heart of DeepMind..."
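The figures in the opening paragraph are order-of-magnitude guesses, but the arithmetic itself is easy to check. A quick sketch in Python, where every number is the rough assumption used in the text rather than a measurement:

```python
# Back-of-envelope check: how many iPhone-class chips to match a brain,
# granting the (very rough) equivalences assumed above.
neurons = 100e9                # ~1e11 neurons in a human brain
transistors_per_neuron = 1e5   # assumed equivalent of one neuron plus its wiring
a11_transistors = 4e9          # Apple A11, roughly 4 billion transistors

brain_equivalent = neurons * transistors_per_neuron   # 1e16 transistors
iphones_needed = brain_equivalent / a11_transistors
print(f"{iphones_needed:,.0f} iPhones")               # 2,500,000 iPhones
```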
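Hodson's moved-paddle example can also be reproduced in miniature. What follows is a hypothetical sketch (tabular Q-learning on a toy corridor, nothing like DeepMind's actual systems): the agent masters the task it was trained on, then fails completely when the goal moves by a single cell. Note too that everything hinges on one hand-coded line assigning reward: exactly the explicit reward function that, as Hodson notes, the real world does not supply.

```python
import numpy as np

N = 12              # corridor cells 0..11; the agent always starts in cell 0
ACTIONS = (-1, +1)  # step left, step right

def run_episode(Q, goal, learn, rng, alpha=0.5, gamma=0.95, max_steps=100):
    """Random behavior while learning (off-policy), greedy at test time."""
    s = 0
    for t in range(max_steps):
        a = rng.integers(2) if learn else int(np.argmax(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N - 1)
        r = 1.0 if s2 == goal else 0.0   # the hand-coded reward function
        if learn:                        # standard Q-learning update
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == goal:
            return t + 1    # steps taken to reach the goal
    return None             # timed out: the policy never got there

rng = np.random.default_rng(0)
Q = np.zeros((N, 2))
for _ in range(500):        # train with the goal fixed at cell 9
    run_episode(Q, goal=9, learn=True, rng=rng)

print(run_episode(Q, goal=9,  learn=False, rng=rng))  # 9: solved perfectly
print(run_episode(Q, goal=10, learn=False, rng=rng))  # None: one-cell shift breaks it
```

The trained Q-table holds no values for states beyond the old goal, so the greedy policy oscillates just short of the new one; retraining, not reasoning, is the only way out.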


#noted
