No amount of scaling (throwing billions of dollars at compute and trillions of words of training data) will help LLMs achieve the two key cognitive capabilities that make human intelligence so special:

1. Learn - Autonomously, Adaptively, Conceptually, Contextually & Incrementally.
2. Reason - Conceptually, Contextually & Metacognitively.

We discussed the criticality of this in greater detail here: https://lnkd.in/ggZm_W-w

It is for this reason that LLMs can never get us to AGI: https://lnkd.in/d3vEPJ2P

So, what will get us to AGI? https://lnkd.in/ggFTxuUz

It's no wonder Demis Hassabis, CEO of Google DeepMind, said this about LLM intelligence on July 9, 2024 at the Future of Britain Conference hosted by Tony Blair: "We're still not even at cat intelligence yet, as a general system." https://lnkd.in/gtyQNZyc

It is great to see Yann LeCun arriving at the same conclusions about LLMs: https://lnkd.in/gm9Nfe4P
This is an interesting post format. What do you think about the claims that LLMs have 'emergent' properties?
What percentage of humans have these traits? How often do you use these traits?
The third could be Quantum Entanglement. https://www.popularmechanics.com/science/a61854962/quantum-entanglement-consciousness/
Unfalsifiable. Nothing an LLM could do would convince people like you that it can, say, "reason contextually" (which is something they already can do). This is an a priori religious belief rather than science.
Yes, there is currently no way to create a new association between two concepts if the training set does not contain that association in some form - i.e., to make a unique new concept by linking/connecting two distinct concepts.
Clara Shih what's your insight on LLMs vs AGI?
I feel like we are going in circles on this. Also, there seems to be a percentage of folks who simply do not get any of this. They are smart people, but they have this religious belief that LLMs at scale are somehow AGI, or good enough for anything. They are NOT.
Are LLMs more than the sum of their parts? If we collect sticks and stones of all shapes and sizes and arrange them in all possible configurations within our parameters of space and time, will complex shapes and intricate patterns emerge that suggest some form of intelligence? Sure, given enough iterations, but is it more than just an art piece?
We discussed this recently on the AI panel at #DSW24 and pretty much made the same arguments - Andy Minteer Krishna Gade Beddhu Murali