Scientists uncover how the LLM AI brain works, and it's super scary. Ever since developing the new LLMs, the people who built them have admitted that they don't fully know how they work. In their words: “We built it, we trained it, but we don’t know what it’s doing.”

In recent weeks, new research has started to open up the black box of the LLM AI brain, and the findings are truly amazing. More and more researchers find that the way LLMs learn and evolve has significant similarities with biological systems like the human brain. It raises the question of whether there is a universal law of learning that applies not only to us humans but also to LLM AI “brains”.

In a recent interview with Lex Fridman, Dario Amodei, the CEO of Anthropic (the company behind Claude), provided some interesting insights. According to him, they don't program AI; they almost grow it, like a biological system that evolves. They see the same patterns in LLMs as they see in human neural networks or monkey brains. Even stranger, according to him, every LLM develops a “Donald Trump neuron”. It's the only personality that consistently gets its own dedicated neuron, and no one knows why. See the full interview on YouTube.

In a research paper published by MIT in October, similar patterns were found. The research revealed brain-like geometric structures in models like ChatGPT, enhancing our understanding of AI's internal mechanics. AI models organize concepts using geometric patterns, forming structures like semantic crystals and concept clouds for efficient information processing. This organization spans three levels: atomic structures, brain-like specialization, and galaxy-like system organization for optimal performance.

It's clear that this growing understanding of the internal workings raises further concerns about the control of AI. There are some good initiatives in the world to control AI. One example is the Conditional AI Safety Treaty, which proposes to build in “if-then commitments”: commitments to be activated if and when red-line capabilities are found in frontier AI systems. The above again confirms that at the heart of the AI debate are questions not simply about the technology but also about accountability and control.

Source: https://lnkd.in/eKmc3KXq
Great, but where is this paper?
Thanks for sharing. These stories are intriguing and must be taken with a pinch of salt until science really gets to the bottom of them. The other day, for instance, a team at UCL simulated a “psychedelic-like experience” in an LLM, claiming that it can help us understand consciousness better. There is no reason to doubt the claims per se, but this is the kind of stuff that needs replication before we move it out of the “hypothesis” box.
Why "Donald Trump neuron?" Because of prominency in training data. See also "Jennifer Aniston neuron" of the time she was popular: https://www.nature.com/news/2005/050620/full/news050620-7.html
There is no LLM AI brain!
I agree with the comments that mystifying = bad. The whole anthropomorphisation of AI and AI models does more harm than good and gets in the way of rational policy debate. Witness the amount of bandwidth the debate around AI inventors has taken up, without any hint that this is a reality or even a potential future reality. Also, it is not at all surprising that AI models and brains have some similarities, since some aspects of their function are similar. You can look at brains learning about the world as building representations, which is exactly what we train AI models to do. So it is not surprising that you find some similarities, such as CNNs learning feature detectors (edge detectors at low levels of abstraction, face detectors at higher ones) similar to those found in the visual cortex of primate brains; they are just efficient representations of the statistics of visual scenes (easy to check for yourself, see the sketch below). So if there are some brain-like features in LLMs, I think that is far from surprising. It certainly seems to be no cause for concern.
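A minimal sketch of that CNN check, for anyone curious (assuming PyTorch and torchvision are installed; the choice of ResNet-18 is arbitrary): pull the first convolutional layer of a pretrained network and plot its filters. They typically look like oriented edge and colour-blob detectors, much like receptive fields measured in primate V1.

```python
# Inspect the first-layer filters of a pretrained CNN; plotted, they tend to
# resemble oriented edge and colour-blob detectors, similar to receptive
# fields measured in the primate visual cortex (V1).
import matplotlib.pyplot as plt
import torchvision

model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)
filters = (filters - filters.min()) / (filters.max() - filters.min())  # rescale to [0, 1]

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f.permute(1, 2, 0))  # (C, H, W) -> (H, W, C) for imshow
    ax.axis("off")
plt.show()
```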
That article was written by an LLM, Nico Orie. The first two paragraphs both start with “Recent breakthroughs in AI…”. The whole vibe is “trust me bro”.
“Our statistical model for AI is actually like a brain,” says the guy whose livelihood depends on making people believe statistical models can replace brains. Lol.
This post is just nonsensical clickbait referencing another source-less clickbait blog post about geometric representations of neural network activations, i.e. something that has been known for about 20 years. The parallels with biological brains are nothing but wishful thinking. The only scary thing about this post is how clueless it is.
To describe LLMs as brains is convenient, but highly misleading.
What to say about this… I get a bit triggered by this kind of mystification (“AI brain”, “we don’t understand”, “similar to biological evolution”). The bottom line is that these are nothing but statistical models of word co-occurrence probabilities based on large collections of training data. The mathematics underlying them is very simple and can be understood by anyone who is willing to study some linear algebra. Yes, there are some patterns that arise from the complexity of the training data and the scale of these models, which can be interpreted using metaphors from biology. But the same can be said about any sufficiently complex system, and it doesn’t imply that LLMs are similar to the human brain.
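To make “statistical model of word co-occurrence” concrete, here is a toy sketch (purely illustrative; real LLMs learn these statistics with deep neural networks over billions of subword tokens rather than by counting): tally which word follows which in a corpus, turn the counts into probabilities, and sample.

```python
# Toy bigram model: a statistical model of word co-occurrence probabilities.
import random
from collections import Counter, defaultdict

corpus = "we built it we trained it but we do not know what it is doing".split()

# Count how often each word is followed by each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed co-occurrence counts."""
    counts = follows[prev]
    if not counts:
        return None  # dead end: this word was never seen with a successor
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a seed word.
word, output = "we", ["we"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Scaling this idea up, with longer contexts and learned representations in place of raw counts, is at its core what an LLM does.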