MIT just built an AI that can rewrite its own code to get smarter 🤯
It’s called SEAL (Self-Adapting Language Models).
Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and then runs gradient updates on itself, literally performing self-directed learning.
The results?
✅ +40% boost in factual recall
✅ Training on its *own* self-generated data outperforms training on data generated by GPT-4.1
✅ Learns new tasks without any human in the loop
LLMs that finetune themselves are no longer sci-fi.
We just entered the age of self-evolving models.
Paper: jyopari.github.io/posts/seal
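Roughly, the inner loop looks like this. A minimal sketch, not the authors' code: the model, prompt, passage, and hyperparameters below are all stand-ins I picked for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper's experiments use much larger models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

passage = "The Eiffel Tower was completed in 1889 for the World's Fair in Paris."

# 1. The model restates the new information in its own words (the "self-edit").
prompt = f"Rewrite the following fact as study notes:\n{passage}\nNotes:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                         pad_token_id=tok.eos_token_id)
self_edit = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True).strip() or passage

# 2. The model takes a gradient step on the text it just generated.
batch = tok(self_edit, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"self-edit: {self_edit!r}\ntraining loss: {loss.item():.3f}")
```

As I understand the paper, the full system also learns *which* self-edits are worth making, rewarding edits whose resulting updates improve downstream answers; the sketch above only shows the rewrite-then-finetune step.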
What happens when one of SEAL's updates leaves it unable to update itself correctly anymore, and it ends up worse than it was before the update due to some error?
But if the code is bad, then how can the code be aware that it is bad and know to use backups on its own, without a human assistant?
@shortstories
Whether a person or a system/model is writing the code, the process of storing backups must be followed. I would be surprised if they did not implement these basics, but I can see how things could go wrong.
@shortstories
I have been on Linux and GrapheneOS for so long that I don't remember Microsoft well, except from when I worked there. Standard software development practice requires saving prior versions so they can be restored if something proves wrong with newly developed code. Keeping that practice seems especially important in this environment.
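For what it's worth, the backup-and-restore idea described above doesn't have to live inside the model being updated. A fixed outer harness can snapshot the weights, run the self-update, score the result on a held-out check, and roll back if it got worse. A minimal sketch of that practice, my own illustration and not anything from the SEAL paper; every name here is made up:

```python
import copy
import torch
import torch.nn as nn

def self_update_with_rollback(model, update_fn, evaluate_fn):
    """Snapshot weights, apply the update, keep it only if the score improves."""
    snapshot = copy.deepcopy(model.state_dict())  # backup of the prior version
    score_before = evaluate_fn(model)
    update_fn(model)                              # the self-directed update
    if evaluate_fn(model) < score_before:         # the update made things worse
        model.load_state_dict(snapshot)           # restore the backup
        return False
    return True

# Toy usage: a tiny model, a deliberately bad "self-update", and a fixed check.
toy = nn.Linear(4, 1)
probe = torch.ones(1, 4)

def evaluate_fn(m):
    with torch.no_grad():
        return -abs(m(probe).item() - 1.0)        # closer to 1.0 is better

def bad_update(m):
    with torch.no_grad():
        m.weight.add_(torch.randn_like(m.weight) * 10)  # corrupts the weights

print("update kept:", self_update_with_rollback(toy, bad_update, evaluate_fn))
```

Because the rollback logic and the evaluation check sit outside the weights being rewritten, a botched update can't disable its own safety net, which is the worry raised above.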