"Not to be confused with Generative adversarial network."
"Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks.[1]"
"Most common attacks in adversarial machine learning include evasion attacks,[3] data poisoning attacks,[4] Byzantine attacks[5] and model extraction.[6]"
https://en.wikipedia.org/wiki/Adversarial_machine_learning
Is there a legal way to interfere with censorship algorithms through data poisoning, e.g. by repeatedly posting certain types of content?
"Data poisoning
Poisoning consists of contaminating the training dataset with data designed to increase errors in the output. Given that learning algorithms are shaped by their training datasets, poisoning can effectively reprogram algorithms with potentially malicious intent. "
"On social medias, disinformation campaigns attempt to bias recommendation and moderation algorithms, to push certain content over others."
https://en.wikipedia.org/wiki/Adversarial_machine_learning#Data_poisoning
Would this influence what is and is not censored?
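To make the quoted mechanism concrete, here is a minimal sketch of data poisoning by label flipping, using scikit-learn: a generic classifier stands in for a learned moderation model, and a growing fraction of its training labels is flipped to simulate contaminated training data. The synthetic dataset, the flip fractions, and the stand-in model are all assumptions for illustration, not a description of any real platform's pipeline.

```python
# A toy sketch of label-flipping data poisoning with scikit-learn.
# The synthetic dataset, the flip fractions, and the stand-in classifier are
# assumptions for illustration; this is not any real platform's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for "flag vs. don't flag".
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def clean_test_accuracy(flip_fraction):
    """Flip a fraction of the training labels, retrain, score on clean test data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # the "contaminated" portion of the data
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:.0%} of training labels flipped -> "
          f"clean test accuracy {clean_test_accuracy(frac):.3f}")
```

Label flipping is only one poisoning strategy; injecting crafted, mislabeled examples into the training set works on the same principle, which is closer to what the quoted passage about disinformation campaigns describes.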