Deep learning
In an interview last week, AI pioneer Geoff Hinton said: “Deep learning is going to be able to do everything.” Here is an excerpt from that interview:
**And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?**
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters or synapses. What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it’s still tiny compared to the brain.
**When you say the scale, do you mean bigger neural networks, more data, or both?**
Both. There’s a sort of discrepancy between what happens in computer science and what happens with people. People have a huge amount of parameters compared with the amount of data they’re getting. Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better.
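Hinton's scale comparison is easy to sanity-check. Here is a quick back-of-the-envelope sketch using the rough figures quoted above (order-of-magnitude numbers, not exact counts):

```python
# Order-of-magnitude figures quoted in the interview, not exact counts.
human_brain_synapses = 100e12  # ~100 trillion synapses
gpt3_parameters = 175e9        # ~175 billion parameters

ratio = human_brain_synapses / gpt3_parameters
print(f"The brain has ~{ratio:.0f}x more synapses than GPT-3 has parameters")
# ~571x, i.e. on the order of a thousand times larger
```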
Deep learning has come a long way in the past few years and changed a lot of minds, including Geoff Hinton's.
Here are some good resources to get started on Deep Learning:
- Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville - This book by the founding fathers of deep learning covers the theoretical foundations and will give you a solid understanding of the fundamentals.
- Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD by Jeremy Howard and Sylvain Gugger - This is a very practical book with lots of code samples.
- Deep Learning with Python by François Chollet - Another hands-on book, built around Keras and written by the creator of that library.
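To get a feel for what these books teach, here is a minimal sketch of a small feed-forward network in PyTorch. The layer sizes and the random batch are illustrative placeholders, not taken from any of the books:

```python
import torch
from torch import nn

# A minimal feed-forward network: the kind of "hello world" model
# that practical introductions typically start from.
model = nn.Sequential(
    nn.Linear(784, 128),  # e.g. a flattened 28x28 image as input
    nn.ReLU(),
    nn.Linear(128, 10),   # e.g. 10 output classes
)

x = torch.randn(32, 784)                      # a batch of 32 random "images"
targets = torch.randint(0, 10, (32,))         # random class labels
logits = model(x)                             # forward pass
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()                               # backprop: gradients for every parameter
print(loss.item())
```

Everything beyond this toy example, such as loading real datasets, optimizers, and training loops, is exactly what the practical books above walk through step by step.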