Why does Arch Wiki, a public open source documentation system, not want a crawler to index their site? People can scream AI all they want, but the admins are also destroying any new attempts to break into the search engine market. Do they think Google/Bing/Yandex don't already get past this, or do their servers return different results for the big search bots?
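(For context: challenge systems of this kind usually do treat the big search bots differently, by allowlisting known crawlers by user agent and verifying the claim against reverse DNS before serving the challenge. A rough sketch of such a check; the bot list and domains here are illustrative, not the Arch Wiki's actual configuration:)

```typescript
import { promises as dns } from "node:dns";

// Illustrative only: real deployments use each engine's published
// verification domains and usually confirm with a forward lookup too.
const KNOWN_BOTS: Record<string, string[]> = {
  Googlebot: ["googlebot.com", "google.com"],
  bingbot: ["search.msn.com"],
};

// True if the user agent claims to be a known crawler AND the client
// IP reverse-resolves into that crawler's domain.
async function isVerifiedCrawler(userAgent: string, ip: string): Promise<boolean> {
  for (const [token, domains] of Object.entries(KNOWN_BOTS)) {
    if (!userAgent.includes(token)) continue;
    try {
      const hosts = await dns.reverse(ip); // e.g. "crawl-66-249-66-1.googlebot.com"
      return hosts.some((h) => domains.some((d) => h === d || h.endsWith("." + d)));
    } catch {
      return false; // no PTR record: treat as unverified
    }
  }
  return false; // unverified clients get served the JavaScript challenge
}
```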

People used to be able to view the Arch Wiki without JavaScript. Now they can't. 😡

@djsumdog

That makes it hard to archive, and maybe impossible to archive with web.archive.org, also known as the Wayback Machine

Additionally, by clicking "I am human" they can spy on you

@shortstories @djsumdog Arguably not much more than they can spy on you anyway [especially if you have scripts enabled]
The main objection is that javascript is fucking shit and makes everything worse.

@Zergling_man @djsumdog

If I had to guess, it is not JavaScript itself that is the problem but JavaScript libraries, plus trying to keep everything updated

If you write code that does certain basic things, needs no constant updating, and does not touch a library, then once it works it should continue to work

Once someone pulls a library into the code, working code can malfunction when the library changes
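A minimal sketch of that failure mode, assuming a hypothetical third-party padding package (any dependency whose API can change underneath you would do):

```typescript
// Dependency-free version: uses only the language's built-in string
// API, so it keeps working for as long as the runtime itself does.
function padId(id: number): string {
  return String(id).padStart(6, "0");
}

// Library-dependent version (hypothetical package): if a new release
// renames the function or reorders its parameters, this call site
// breaks even though none of our own code changed.
//
//   import { padLeft } from "some-pad-library";
//   const padded = padLeft(String(id), 6, "0");

console.log(padId(42)); // "000042"
```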

@shortstories @djsumdog Browse the web with scripts disabled, then browse it with scripts enabled.
Observe, with your own eyes, the performance difference.

@Zergling_man @djsumdog

I would suggest there are two reasons for the difference

1. Having more code to run slows everything down, in exchange for whatever additional feature the code provides

2. The problem is not primarily JavaScript itself, apart from the mistake of ever adding the library mechanism. I would suggest the problem is people who are bad at computer programming writing the code in JavaScript with libraries, because they do not know how to program without them

@shortstories @djsumdog It's just 1. It's all 1.
If it were actually an "additional feature" it would make sense.
99% of the time it is not. Like loading a form; as if there isn't a standard way to do that already.
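(The "standard way" here being a plain HTML form served with the page, which works with scripts disabled. The script-driven pattern being criticized looks roughly like this sketch; the endpoint and element ID are made-up examples:)

```typescript
// Sketch of the script-driven anti-pattern: fetching form markup at
// runtime and injecting it into the page, instead of serving static
// HTML such as <form action="/subscribe" method="post">...</form>.
async function loadForm(): Promise<void> {
  const response = await fetch("/fragments/signup-form.html"); // hypothetical endpoint
  const markup = await response.text();
  const slot = document.querySelector("#form-slot"); // hypothetical element
  if (slot) {
    slot.innerHTML = markup; // with scripts disabled, this slot stays empty
  }
}

loadForm();
```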

@Zergling_man @djsumdog
If someone does not know how to program something, or is too lazy to do it themselves, they will use someone else's library

These libraries can change at any time

So code that once worked can stop working when the library is changed

These libraries allow people who do not know what they are doing to look competent and slip in bad code that will malfunction later, after they have already been paid for the job.

I would suggest that these libraries are an additional serious problem

I have no idea what you're arguing. The problem I had was that I couldn't view a static website because it requires JavaScript, NOT for functionality, not even for DDoS protection, but to solve a proof of work, because they don't want to be scraped by "AI" ... an open source documentation site not wanting to be scraped. Let that sink in.
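For readers unfamiliar with the mechanism: interstitials like this typically implement a hashcash-style proof of work, where the browser must find a nonce whose hash clears a difficulty target before the real page is served. A minimal sketch of the idea; the challenge string and difficulty below are illustrative values, not the Arch Wiki's actual parameters:

```typescript
// Hashcash-style proof of work: find a nonce such that
// SHA-256(challenge + nonce) starts with `difficultyBits` zero bits.
async function solveChallenge(challenge: string, difficultyBits: number): Promise<number> {
  const encoder = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const data = encoder.encode(challenge + nonce);
    const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", data));
    if (leadingZeroBits(digest) >= difficultyBits) {
      return nonce; // sent back to the server as proof the work was done
    }
  }
}

function leadingZeroBits(bytes: Uint8Array): number {
  let bits = 0;
  for (const byte of bytes) {
    if (byte === 0) {
      bits += 8;
      continue;
    }
    return bits + Math.clz32(byte) - 24; // clz32 counts zeros in a 32-bit word
  }
  return bits;
}

// Cheap for one human visitor, expensive at crawler scale; and
// impossible for a script-less browser or the Wayback Machine.
solveChallenge("example-challenge-token", 16).then((n) => console.log("nonce:", n));
```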

@djsumdog @Zergling_man

I think they wanted to avoid moral accountability or accountability for being caught making mistakes

I think some people infiltrated it and made it into a cult

And maybe it is also part of their planned cyber pandemic

They do not want people to have easy open source alternatives

They want a monopoly so they can control the lines of communication and prevent access to information that would lead people to doubt their blind faith in mainstream media


@djsumdog @Zergling_man

If they can ruin information about open source software, ruin future versions of open source software, and remove past versions of it, then they can more easily maintain their monopoly

@Zergling_man @djsumdog

They could be a well-intentioned but misinformed plant who was duped by a third party
