#038 - Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned - 2021

Details

Title : #038 - Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned
Author(s): Machine Learning Street Talk
Link(s) : https://www.youtube.com/watch?v=lhYGXYeMq_E

Rough Notes

  • The gradient of interestingness:
    • Novelty does not imply interestingness.
    • Interestingness implies novelty.
  • Neglected dimensions of intelligence:
    • Divergence.
    • Populations.
    • Diversity preservation.
    • Stepping stone collection.
    • Environment.
  • Campbell's law: the more a quantitative metric is used for decision-making, the more it invites pressures that corrupt what it measures.
  • Creativity is a search problem, and explicit objectives block that search.
  • The stepping stones to intelligence do not (#DOUBT or may not?) resemble intelligence.
  • Exploration in Kenneth's work is completely different from exploration in the context of reinforcement learning (RL).
  • Novelty search can be a proxy for interestingness (a minimal sketch appears after these notes).
  • It is not clear how to formalize curiosity. (#NOTE Maybe Schmidhuber's work is a step in the right direction?)
  • Humans know great art and music when we see it; these latent capabilities could perhaps be made more explicit with the right kind of assistance.
  • Open-endedness: not how to learn something, but how to learn everything.
  • Evolution created everything in one run.
  • Coevolutionary algorithms still get stuck; GANs ultimately interpolate between training samples (i.e. they do not introduce anything genuinely new).
  • What explains open-endedness and complexity explosions? We are beginning to understand: divergence instead of convergence. The system must generate both the problems and the solutions; we organisms are simultaneously the problems (i.e. we create opportunities) and the solutions.
  • (#NOTE See the minimal criterion coevolution paper, the POET algorithm, and gradient-informed mutation operators; an MCC-style sketch follows these notes.)
  • Interestingness implies accumulating information.
  • The whole point is that we don't know how to reach a solution from scratch, and the path is often counter-intuitive. If we knew the stepping stones, we would just follow them.
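
A minimal sketch of novelty search (Lehman & Stanley), referenced in the bullets above. Everything here is an assumption for illustration: a toy 2D behavior space where the genome is its own behavior characterization, Gaussian mutation, and hand-picked constants (K_NEAREST, ARCHIVE_THRESHOLD). The real algorithm plugs in a domain-specific behavior characterization.

#+begin_src python
import math
import random

K_NEAREST = 5            # neighbors used for the novelty score (assumed)
ARCHIVE_THRESHOLD = 0.3  # behaviors more novel than this enter the archive (assumed)

def behavior(genome):
    # Stand-in behavior characterization: here the genome *is* the behavior.
    return genome

def novelty(b, population_behaviors, archive):
    # Novelty = mean distance to the k nearest neighbors among the current
    # population plus the archive of past novel behaviors (stepping stones).
    dists = sorted(math.dist(b, other) for other in population_behaviors + archive)
    nearest = dists[1:K_NEAREST + 1]  # skip the zero distance to itself
    return sum(nearest) / len(nearest)

def mutate(genome):
    return tuple(g + random.gauss(0, 0.1) for g in genome)

population = [(random.random(), random.random()) for _ in range(20)]
archive = []

for generation in range(50):
    behaviors = [behavior(g) for g in population]
    scores = [novelty(b, behaviors, archive) for b in behaviors]
    # Sufficiently novel behaviors are archived as stepping stones.
    archive.extend(b for b, s in zip(behaviors, scores) if s > ARCHIVE_THRESHOLD)
    # Selection is by novelty alone: no task objective appears anywhere.
    ranked = sorted(zip(scores, population), reverse=True)
    parents = [g for _, g in ranked[:10]]
    population = [mutate(random.choice(parents)) for _ in range(20)]
#+end_src

Note the divergent pressure: once a region of behavior space has entered the archive, revisiting it scores poorly, so the search keeps moving somewhere new rather than converging on an objective.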
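A toy sketch of minimal criterion coevolution (MCC) from the #NOTE above. Assumptions for illustration: "environments" are target numbers, "agents" are scalar guesses, and the minimal criterion is |agent - env| < TOLERANCE. The actual paper coevolves mazes and neuroevolved maze solvers (POET extends the same spirit to generated terrains); only the structure is kept here.

#+begin_src python
import random

TOLERANCE = 0.2  # assumed minimal criterion threshold for this toy domain

def meets_criterion(agent, env):
    # Hypothetical minimal criterion: the agent "solves" the environment.
    return abs(agent - env) < TOLERANCE

def mutate(x):
    return x + random.gauss(0, 0.1)

agents = [random.random() for _ in range(10)]
envs = [random.random() for _ in range(10)]

for generation in range(100):
    # An agent reproduces iff it satisfies at least one environment, and an
    # environment reproduces iff at least one agent satisfies it. There is
    # no ranking and no objective, only the minimal criterion: problems
    # (envs) and solutions (agents) create opportunities for each other.
    viable_agents = [a for a in agents if any(meets_criterion(a, e) for e in envs)]
    viable_envs = [e for e in envs if any(meets_criterion(a, e) for a in agents)]
    if viable_agents:
        agents.extend(mutate(random.choice(viable_agents)) for _ in range(5))
    if viable_envs:
        envs.extend(mutate(random.choice(viable_envs)) for _ in range(5))
    # Bounded queues cap population size; the oldest individuals die first.
    agents = agents[-20:]
    envs = envs[-20:]
#+end_src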
