MIT just built an AI that can generate its own training data and update its own weights to get smarter 🤯
It’s called SEAL (Self-Adapting Language Models).
Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and runs gradient updates on itself, literally performing self-directed learning.
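Here's a rough sketch of what that loop looks like in code. This is a toy, not the paper's actual implementation: the model, prompt format, and learning rate are all stand-ins I picked for illustration.

```python
# Minimal sketch of a SEAL-style self-update loop, assuming a HuggingFace causal LM.
# "gpt2" and the prompt below are illustrative stand-ins, not from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def generate_self_edit(passage: str) -> str:
    """Step 1: the model restates new information in its own words."""
    prompt = f"Rewrite the following as a list of standalone facts:\n{passage}\n"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    # Keep only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def self_update(self_edit: str) -> None:
    """Step 2: run a gradient update on the self-generated text."""
    batch = tok(self_edit, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

passage = "SEAL was introduced by researchers at MIT in 2025."
self_update(generate_self_edit(passage))  # model trains on its own restatement
```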
The results?
✅ +40% boost in factual recall
✅ Its self-generated training data outperforms data written by GPT-4.1
✅ Learns new tasks without any human in the loop
LLMs that finetune themselves are no longer sci-fi.
We just entered the age of self-evolving models.
Paper: jyopari.github.io/posts/seal
What happens when one of SEAL's updates breaks its ability to update itself correctly, leaving it worse off than it was before the update due to some error?
@shortstories
I expect it will still snapshot prior versions for rollback when needed.
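The paper doesn't say how (or whether) they checkpoint, but the guard you'd want is simple enough to sketch. Everything here is hypothetical: assume a PyTorch model, some self-update routine, and a held-out eval that returns a score.

```python
# Hypothetical snapshot-and-rollback guard around a self-update.
import copy
import torch

def guarded_self_update(model, apply_update, evaluate) -> bool:
    """Snapshot weights, apply the self-update, roll back if eval regresses."""
    snapshot = copy.deepcopy(model.state_dict())  # checkpoint the prior version
    score_before = evaluate(model)
    apply_update(model)  # the model's own gradient update
    if evaluate(model) < score_before:
        model.load_state_dict(snapshot)  # revert the bad update
        return False
    return True
```

The catch is that this only protects against regressions your eval can see, which is exactly the gap the original question points at.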
Remember when Microsoft Windows did that? It still got progressively worse, and the restore points took up more and more storage.
@shortstories
Whether a person or a system/model is writing the updates, the basic discipline of storing backups still applies. I would be surprised if they did not implement these basics, but I can see how things could go wrong.