"Not to be confused with Generative adversarial network."

"Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks.[1]"

"Most common attacks in adversarial machine learning include evasion attacks,[3] data poisoning attacks,[4] Byzantine attacks[5] and model extraction.[6]"
en.wikipedia.org/wiki/Adversar
Is there a way to legally mess with censorship algorithms via data poisoning by repeatedly posting certain types of content?


"Data poisoning
Poisoning consists of contaminating the training dataset with data designed to increase errors in the output. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms with potentially malicious intent. "

"On social medias, disinformation campaigns attempt to bias recommendation and moderation algorithms, to push certain content over others."

en.wikipedia.org/wiki/Adversar
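On the question above, a hypothetical sketch of how that bias could arise (my example, not any real platform's algorithm): if a feed ranks posts by raw engagement counts, a coordinated group repeatedly boosting one item can outrank genuinely popular content, and any moderation model trained on those engagement signals would inherit the same skew. The scoring rule and numbers below are invented.

# Hypothetical toy ranking skewed by coordinated boosting
organic_likes = {"post_a": 120, "post_b": 95, "post_c": 80}
coordinated_likes = {"post_c": 300}  # campaign repeatedly boosts post_c

scores = dict(organic_likes)
for post, likes in coordinated_likes.items():
    scores[post] = scores.get(post, 0) + likes

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # post_c now ranks first despite the least organic interest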

Would this influence what is and is not censored?
