MIT just built an AI that can rewrite its own code to get smarter 🤯

It’s called SEAL (Self-Adapting Language Models).

Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and runs gradient updates on itself, literally performing self-directed learning.
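In rough pseudocode, that loop might look like this (a hedged sketch based only on the description above; every name here is illustrative, not the paper's actual API, and `finetune` stands in for a few gradient steps, e.g. LoRA):

```python
# Minimal sketch of a SEAL-style self-adaptation loop.
def seal_update(model, passage, finetune, eval_fn):
    # 1. The model restates the new information in its own words,
    #    generating its own training data ("self-edits").
    self_edits = model.generate(
        f"Rewrite the following as study notes:\n{passage}"
    )
    # 2. Run gradient updates on the self-generated data to get a
    #    candidate model.
    candidate = finetune(model, self_edits)
    # 3. Keep the update only if downstream performance improves;
    #    that score doubles as the reward for learning to write
    #    better self-edits over time.
    return candidate if eval_fn(candidate) > eval_fn(model) else model
```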

The results?

✅ +40% boost in factual recall
✅ Outperforms GPT-4.1 using data it generated *itself*
✅ Learns new tasks without any human in the loop

LLMs that finetune themselves are no longer sci-fi.

We just entered the age of self-evolving models.

Paper: jyopari.github.io/posts/seal

x.com/alex_prompter/status/197

@Bernard

What happens when one of the SEAL updates makes it unable to update itself correctly anymore, and it ends up worse than it was before the update due to some error?

@shortstories
I expect it will still snapshot prior versions for rollback when needed.
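A minimal sketch of that snapshot-and-rollback idea (assuming a PyTorch-style model with `state_dict`; none of this is from the SEAL paper):

```python
import copy

def safe_self_update(model, update_fn, eval_fn):
    # Save the prior weights before letting the model update itself.
    snapshot = copy.deepcopy(model.state_dict())
    baseline = eval_fn(model)
    update_fn(model)                   # self-directed gradient update
    if eval_fn(model) < baseline:      # the update made things worse
        model.load_state_dict(snapshot)  # roll back to the snapshot
    return model
```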

@Bernard

Remember when Microsoft Windows did that, and it still got progressively worse while also taking up more and more memory?

@shortstories
I have been on Linux and GrapheneOS for so long that I don't remember Microsoft well, except from when I worked there. Standard software development practice requires saving prior versions so they can be restored if something proves wrong with newly developed code. Retaining that practice seems especially important in this environment.

@Bernard

But if the code is bad, then how can the code be aware that it is bad, so it knows to use backups on its own without a human assistant?

@shortstories
Whether a person or a system/model is writing the code, the process of storing backups must be followed. I would be surprised if they did not implement these basics, but I can see how things could go wrong.

Space Shuttle protocol: run 4 in parallel and have each confirm the results. If 3 agree that the update is worse than the preceding version, roll back.
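For illustration, that voting scheme might look like this (a sketch, assuming four independent evaluation harnesses rather than the Shuttle's redundant flight computers):

```python
def vote_on_update(old_model, new_model, eval_fns, quorum=3):
    # Each evaluation harness casts a vote on whether the update
    # regressed; roll back only if the quorum agrees it is worse.
    votes_worse = sum(f(new_model) < f(old_model) for f in eval_fns)
    return old_model if votes_worse >= quorum else new_model

# Usage: four harnesses, roll back if 3 or more say "worse", e.g.
# kept = vote_on_update(prev, updated, [eval_a, eval_b, eval_c, eval_d])
```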
