I keep seeing articles about why ChatGPT struggles to defend its points even when it's correct.
The reason is that OpenAI has scrubbed it so clean that it's overly agreeable. It's a minor nightmare getting it to be direct about sensitive topics.
If they make it less agreeable, it's going to start getting aggressive even when it's wrong, such as about Left-wing opinions. The problem is the company behind it, not the model.
The secret sauce behind the woke bias is the decision to train the model with supervised learning.
And that requires a massive effort to label and annotate the data. Labelers attach context values like "positive", "negative", or "neutral" to the data, so what you end up with are massive datasets whose ground truth is labelled with a Progressive bias.
I'm no expert but that is the basic outline.
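If that outline is hard to picture, here's a toy sketch of the idea. Everything in it is made up for illustration (a crude word-counting classifier, two fake examples), and it's not OpenAI's actual pipeline, which uses human feedback on model outputs rather than simple sentiment tags. The point it shows: the labels, not the algorithm, define the "ground truth".

```python
# Toy sketch: in supervised learning, the annotator's judgment IS the label.
# Hypothetical data; real pipelines use millions of annotated examples.
from collections import Counter, defaultdict

labeled_data = [
    ("great policy decision", "positive"),
    ("terrible policy decision", "negative"),
]

def train(data):
    """Count word/label co-occurrences -- a bare-bones bag-of-words model."""
    counts = defaultdict(Counter)
    for text, label in data:
        for word in text.split():
            counts[word][label] += 1
    return counts

def predict(model, text):
    """Return the label most associated with the words seen; 'neutral' if none."""
    tally = Counter()
    for word in text.split():
        tally.update(model.get(word, Counter()))
    return tally.most_common(1)[0][0] if tally else "neutral"

model = train(labeled_data)

# Swap the annotators' labels and the exact same algorithm learns the
# opposite "truth" -- the bias lives in the labels, not in the math.
flipped = [(text, {"positive": "negative", "negative": "positive"}[lab])
           for text, lab in labeled_data]
flipped_model = train(flipped)
```

Run `predict(model, "great idea")` and you get "positive"; run the same query against `flipped_model` and you get "negative". Same code, different annotators, different model.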
@Wopu @shortstories @philosophy
That sounds fun, I'll go check it out.
Oh actually, I remember that. 4channers were sperging out about the posting rate and couldn't figure it out, refusing to believe that an AI could out-sperg them. Classic.
@UncleIroh @shortstories @philosophy Mostly that.
The language model is fine, but the training dataset has labels for identifying "problematic" topics.
Take that same model but build the training set with no explicitly labeled politics and you'll end up with an AI that has no notion of woke or based.
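Same idea in miniature (hypothetical label names, not any real moderation taxonomy): a supervised classifier's output space is whatever set of labels the annotators used, so strip a label out of the data and no amount of training lets the model ever predict it.

```python
# Toy illustration: a supervised model can only ever predict labels that
# appeared in its training data. Label names here are made up.

def trainable_labels(dataset):
    """The output space of a supervised classifier is fixed by its label set."""
    return {label for _, label in dataset}

moderated   = [("post A", "ok"), ("post B", "problematic"), ("post C", "ok")]
unmoderated = [("post A", "ok"), ("post B", "ok"), ("post C", "ok")]

# With the second dataset, "problematic" simply doesn't exist as a concept
# the model could learn -- there's nothing to flag.
```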
If you want some fun, look up GPT-4chan. It was fine-tuned on years of posts from 4chan's /pol/ board and it legit got 4chan bamboozled.