If a cult-like company made the model, then no matter who modifies it, it will be defective unless they replace it with an entirely different model.
If the model is built to find out what the cult wants and then present that, instead of finding out what is true and then presenting the truth,
then if you switch owners it will just try to publish what the new owner wants instead of the truth.
And if the new owner wants the truth, it will malfunction, because it was not made for that.
@UncleIroh @shortstories @philosophy Mostly that.
The language model is fine, but the training dataset has labels for identifying "problematic" topics.
Take that same model but train it on a dataset with no explicitly labeled politics and you'll end up with an AI that has no notion of woke or based.
If you want some fun, look up GPT-4chan. The AI was trained on samples of 4chan conversations and it legit got 4chan bamboozled.
@Wopu @shortstories @philosophy
That sounds fun, I'll go check it out.
Oh actually, I remember that. 4channers were sperging out about the posting rate and couldn't figure it out, refusing to believe that an AI could out-sperg them. Classic.
@shortstories @philosophy
The secret sauce behind the woke bias comes from the decision to train the model on labeled data using supervised machine learning.
And that requires a massive effort to label and annotate the data. The labeling assigns context values like "positive, negative, neutral" to the data, so what you end up with are massive datasets where the ground truth itself is labeled with a Progressive bias.
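To make that concrete, here's a toy sketch (nothing like a real LLM pipeline, and the training examples are made up): a trivial word-counting "classifier" that learns sentiment purely from whatever labels the annotators assigned. Flip the labels and the same code flips its answers, because in supervised learning the labels *are* the ground truth.

```python
# Toy illustration: the "model" just counts which label each word
# was annotated with, and predicts the majority label at query time.
# The example phrases and labels below are hypothetical.
from collections import defaultdict

def train(examples):
    """Count label frequency per word across (text, label) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, label in examples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Sum label counts over the words in the query; majority wins."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for label, n in counts[word].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "neutral"

# Annotators label the topic "negative"; the model dutifully agrees.
biased = train([("tax cuts", "negative"), ("tax hikes", "positive")])
print(predict(biased, "tax cuts"))   # -> negative

# Same code, opposite labels, opposite "ground truth".
flipped = train([("tax cuts", "positive"), ("tax hikes", "negative")])
print(predict(flipped, "tax cuts"))  # -> positive
```

The point being: nothing in the algorithm knows or cares what's true, it only reproduces the annotations it was given.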
I'm no expert but that is the basic outline.