One aspect that speaks strongly against human cognition working like LLMs is that LLMs always think at the same speed: predicting the next token always takes the same amount of computation, so there is no extra deliberation on harder problems. Whether you ask whether 1 is prime or whether 123093275755303 is prime, the next token is predicted after the same number of computational steps.
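A rough sketch of that point (the FLOPs-per-token formula below is only the usual back-of-envelope estimate, and the model dimensions are illustrative assumptions, not any particular model):

```python
# Per-token compute in a decoder-only transformer is fixed by model size and
# context length, not by how "hard" the question is. The ~2*N-FLOPs-per-token
# rule of thumb is an approximation; the 7B-parameter sizes are made up.

def flops_per_token(n_params: int, n_ctx: int, d_model: int, n_layers: int) -> int:
    """Rough forward-pass FLOPs for one generated token:
    ~2 FLOPs per parameter, plus attention over the current context."""
    return 2 * n_params + 2 * n_layers * n_ctx * d_model

easy = "Is 1 prime?"
hard = "Is 123093275755303 prime?"

for prompt in (easy, hard):
    # Only the number of tokens matters; what they say does not.
    n_ctx = len(prompt.split())
    print(prompt, "->", flops_per_token(n_params=7_000_000_000,
                                        n_ctx=n_ctx,
                                        d_model=4096,
                                        n_layers=32), "FLOPs")
```

Both prompts are the same length, so both next-token predictions cost the model exactly the same amount of work.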
Although apparently LLMs get better at math when you ask them to explain how they arrived at the solution.

@lain I think it may be because the LLM is giving itself more context to correctly predict the next token: predicting the correct next step of a math problem should only need information about the previous steps, whereas predicting the tokens of the final answer from the initial statement alone may be much harder.
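A toy sketch of that intuition (the generate() loop below is a made-up stand-in for a real decoder, not any actual API, and the step-by-step text is just a placeholder):

```python
# Every generated token is one full forward pass, so an answer spelled out step
# by step buys the model more total compute and more of its own intermediate
# text to condition on.

def generate(prompt: str, completion_tokens: list[str]) -> int:
    """Pretend decoding loop: one fixed-cost forward pass per emitted token,
    each pass seeing the prompt plus everything emitted so far."""
    context = prompt.split()
    passes = 0
    for tok in completion_tokens:
        passes += 1          # one forward pass, same cost every time
        context.append(tok)  # the step just written becomes new context
    return passes

direct = ["<answer>"]
step_by_step = ("First check divisibility by small primes , then by larger "
                "candidates , then state the answer .").split()

print("direct answer:", generate("Is 123093275755303 prime?", direct), "passes")
print("step by step: ", generate("Is 123093275755303 prime?", step_by_step), "passes")
```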
