I keep seeing articles about why ChatGPT struggles to defend its points even when it is correct.
The reason ChatGPT can't successfully argue its point, even when it's correct, is that OpenAI has scrubbed it so clean that it's overly agreeable. It's a minor nightmare getting it to be direct about sensitive topics.
If they make it less agreeable, it's going to start getting aggressive even when it's wrong, such as about Left-wing opinions. The problem is the company behind it, not the model.
If a cult-like company made the model, then no matter who modifies it, it will stay defective unless it's replaced with an entirely different model.
If the model was built to find out what the cult wants and then present that, instead of finding out what is true and presenting the truth,
then if you switch owners, it will just try to publish what the new owner wants instead of the truth.
And if the new owner wants the truth, it will malfunction, because it was never made for that.
The secret sauce behind the woke bias comes from the decision to train the model on that data with supervised machine learning algorithms.
And that requires a massive effort to label and annotate the data. The labeling assigns context values like "positive", "negative", or "neutral" to the data, and so what you have are massive datasets where the ground truth is labeled with a Progressive bias.
I'm no expert but that is the basic outline.
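To make the point above concrete, here's a minimal toy sketch (not anything OpenAI actually uses — the dataset, function names, and word-counting "model" are all made up for illustration) of how a supervised model just absorbs whatever its labelers annotated as ground truth:

```python
from collections import Counter, defaultdict

# Hypothetical labeled dataset: each text carries a human annotator's label.
# Whatever slant the annotators have ends up encoded as "ground truth" here.
labeled_data = [
    ("this policy is wonderful", "positive"),
    ("this policy is terrible", "negative"),
    ("the policy was announced today", "neutral"),
    ("a wonderful result for everyone", "positive"),
    ("a terrible outcome for everyone", "negative"),
]

def train(data):
    """Count word/label co-occurrences. The model has no notion of truth;
    it only reproduces the statistics of the annotations it was given."""
    counts = defaultdict(Counter)
    for text, label in data:
        for word in text.split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Score a new text by summing the label counts of its words."""
    votes = Counter()
    for word in text.split():
        votes.update(counts.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "neutral"

model = train(labeled_data)
print(predict(model, "what a wonderful policy"))  # -> positive
```

Swap the labels in `labeled_data` and the same code confidently "learns" the opposite conclusions — which is the basic mechanism being described: the model optimizes for agreement with the labels, not with reality.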
@Wopu @shortstories @philosophy
That sounds fun, I'll go check it out.
Oh actually, I remember that. 4channers were sperging out about the posting rate and couldn't figure it out, refusing to believe that an AI could out-sperg them. Classic.