@j @dcc @p @lanodan @hj @sun
I wanted to add: AI has no intelligence. Intelligence means understanding, grasping something that is intelligible. To understand something implies a Mind that can understand. AI means Artificial Intelligence: it simulates understanding, it simulates a mind, but it has no mind. No understanding, no intelligence.

@irie @j @dcc @p @lanodan @hj what's the problem if it's not really intelligence, so long as it can do stuff you want? I told it the other day to write me a complete embedded Raspberry Pi Pico project with WiFi to control an LCD panel, and it did it all and it worked perfectly.
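(For a sense of scale, here's a minimal sketch of that kind of project in MicroPython for a Pico W. Everything concrete in it is an assumption for illustration, not anything from the thread: the SSID/password placeholders, the pin choices, the 16x2 LCD at I2C address 0x27, and the third-party lcd_api/i2c_lcd driver, which is not built into MicroPython and would have to be copied to the board.)

```python
# MicroPython on a Raspberry Pi Pico W -- hypothetical sketch.
# Assumes the community lcd_api/i2c_lcd driver files are on the board
# and a 16x2 character LCD sits on I2C0 at address 0x27.
import network
import socket
from machine import Pin, I2C
from i2c_lcd import I2cLcd  # third-party driver, not built in

SSID = "your-ssid"          # placeholder credentials
PASSWORD = "your-password"

# Join the WiFi network and wait for an address.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(SSID, PASSWORD)
while not wlan.isconnected():
    pass

# Pico I2C0 defaults: GP0 = SDA, GP1 = SCL.
i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400_000)
lcd = I2cLcd(i2c, 0x27, 2, 16)
lcd.putstr("IP: " + wlan.ifconfig()[0])

# Tiny HTTP listener: GET /?msg=hello prints "hello" on the panel.
s = socket.socket()
s.bind(("0.0.0.0", 80))
s.listen(1)
while True:
    conn, _ = s.accept()
    req = conn.recv(1024).decode()
    if "GET /?msg=" in req:
        msg = req.split("GET /?msg=", 1)[1].split(" ", 1)[0]
        lcd.clear()
        lcd.putstr(msg.replace("%20", " "))  # minimal URL decoding only
    conn.send(b"HTTP/1.0 200 OK\r\n\r\nok")
    conn.close()
```

Hitting http://<board-ip>/?msg=hello from the LAN would then put "hello" on the panel.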
@sun @irie @dcc @hj @j @lanodan @p this tbh
if it can turn my pseudocode into properly formatted javascript that does what i was describing, who gives a fuck where it lands on the spectrum of consciousness and agency?
@All_bonesJones @dcc @j @p @irie @hj @sun Yeah, although that bit is like Google Translate / DeepL, and of course it can do it. Not reliably, of course, but we all know that about Google Translate / DeepL, or at least we quickly learn it.

But that kind of stuff is different from the AI side of things, which is purely just marketing, and marketing that tends to go away once things become tools.
Like how Siri is barely considered an AI anymore and is treated more as a speech-to-text interface.
@All_bonesJones @sun @dcc @hj @irie @j @lanodan Well, that was more or less what Hamming and the others thought, and I'm inclined to agree: it's not a meaningful question. The thing is there, however you want to classify it.

@sun @dcc @j @p @lanodan @hj there's no problem with using mindless AI for whatever work you want it to do; that's the ideal usage for it imo. The problem arises when people expect it to understand things, or expect it to "develop" consciousness. This error stems from how modern neuroscience theorises human consciousness (as a network), so they cannot see why a complex-enough artificial network wouldn't achieve consciousness too, which is all nonsense of course (dead ≠ living)

@irie @sun @dcc @j @lanodan @hj

> The problem arises when people expect it to understand things, or expect it to "develop" consciousness.

You can't stop that. They treat computers as magical and now the computer talks.
> They treat computers as magical and now the computer talks.

Yeah, it's been pretty much doomed that way since the moment computers were mass-marketed.
@irie @j @dcc @lanodan @hj @sun

> No understanding, no intelligence.

Well, that's the thing, right? Define "intelligence" in objective terms.

Back when AI was a respectable field (it only manages to be respectable long enough for a breakthrough before the hype overwhelms it; this is what happened 50 years ago, so if that's any indicator, we'll have an "AI winter" where the research resumes when the money dries up--the money dicks care about the industry and not the research--and then it'll go back to being respectable again), there was an observation that however you define "intelligence", a machine will eventually be able to do it. That is, an objective definition of intelligence means an objective benchmark for it. (I forget who originated the observation, but I got it from Hofstadter, and I recommend reading "Gödel, Escher, Bach" and then ignoring everything he has done since, because GEB is entertaining and interesting and gets at a few of the problems in AI while also doing a tour of philosophy and math--enough to stay interesting but not so much that it ruins non-programmers' enjoyment of the book, and this is really rare.)

Hamming's lectures at the Naval Postgraduate School are all available to the public; I got them from Youtube. In particular, he did two about AI, and basically everything he said holds: people are more interested in whether or not a machine can replace a human than in what a human can do when given an intelligent machine as a tool. (Hamming was a rung up from Doug McIlroy, who was the boss of dmr, ken, bwk, and everyone else at Bell Labs. In addition to the things the lab did while Hamming was in charge of it--fiberoptics among them--Hamming personally developed automatic error correction, among other achievements. Brilliant guy. *All* the lectures are interesting, but the AI bits are relevant here.)

That's something Jobs observed, also; he trotted this anecdote out in at least two interviews. Someone had left a copy of "Scientific American" on the table at the office, and it had a chart on efficiency of movement: how many calories per mile? By a long shot, humans performed the worst, least efficient, and birds of prey performed the best, because they can lock their wings and just coast on air currents and float up there with almost no effort. But someone had added one other entry to the chart: a human on a bicycle. If you give the human a bicycle, we pass the birds of prey. So he said he thought of computers like that: a bicycle for your mind. And, long before he decided to fuck everything up and lock it all down, that was more or less what Apple was making.
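(Since "automatic error correction" is load-bearing there, a minimal sketch of a Hamming(7,4) code -- my illustration, not anything from the lectures. Four data bits get three parity bits, and any single flipped bit can be located and fixed.)

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits so that any
# single-bit error can be located and corrected automatically.
def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]  # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # covers positions 4,5,6,7
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    # Re-run each parity check; the failing checks spell out the
    # error's position in binary (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

code = encode([1, 0, 1, 1])
code[5] ^= 1                      # flip one bit "in transit"
assert decode(code) == [1, 0, 1, 1]  # recovered anyway
```

The trick is in how the parity checks overlap: each check covers the positions whose index has a particular bit set, so the set of failing checks is the error's address.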

So what Hamming said was that you've got to make up your mind whether or not you believe in a soul, and you've got to then be aware of how that will fuck you up: a person with an ardent belief in a metaphysical component to humanity will consistently underestimate machines and miss opportunities, and a person that believes very strongly that man is just a biological machine is going to consistently *over*estimate machines, inevitably over-investing and falling over.

I'm in either Hamming's or Hofstadter's camp (it's the same camp; they arrived at it from opposite ends; I think Gosper or Greenblatt or somebody--I'm tired and I will continue fucking up attributions--said something like "I don't want to simulate a human: I want to make something different"): whether the machine is "intelligent" or not doesn't really matter. There the machine is, and this is what it can do, and this is what you can use it for.

I think I've mentioned C. elegans and OpenWorm about a million times, but it's got just a few hundred neurons and we can simulate it with only about 80% accuracy. We are not going to get near simulating a human without hollowing out the moon and replacing it with vector machines. (And even then, we'll have to worry about heat.) But the machine to date has been very good at doing things that are tedious or impossible for a human to do. Now there are LLMs, and they predict tokens, and they're pretty good at it; anyone smart will be looking at what new niche we can put the machines into so humans don't have to do it any more. But, as I said about as soon as I saw a paragraph written by a machine, so far the things are mostly used for spam.

(And see my other remarks about the machines obsoleting propaganda by giving anyone that can buy enough GPUs an entire fleet of words-of-mouth to replace the traditional broadcast outlets: the CIA doesn't need to buy newspapers any more, it just has to roll out a few hundred thousand bots, and then everyone can have their own tailor-made propaganda machine addressing their specific concerns. We all hop to fedi, and it's decentralized and they can't co-opt it or take it down at will...and this place is even easier to flood with bots. I can't wait to say "Huh, that's weird" about eighty times and then have someone leak that the government was using a bot army for surveillance and astroturfing, and that all the weird shit was neatly explained by a revolution in influence operations.)
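(To make "they predict tokens" concrete, a toy sketch: a bigram table over words. This is nothing like a production LLM's learned weights over subword tokens, but the contract is the same -- context in, next-token guess out. The corpus here is made up for illustration.)

```python
# Toy "token prediction": count which word follows which, then always
# guess the most frequent follower. LLMs do the same job with learned
# weights over subword tokens instead of raw counts.
from collections import Counter, defaultdict

corpus = ("the machine is there the machine can do things "
          "the machine predicts tokens").split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    """Return the most likely next token, or None if the word is unseen."""
    dist = follows.get(word)
    return dist.most_common(1)[0][0] if dist else None

print(predict("the"))      # -> 'machine' (follows "the" three times)
print(predict("machine"))  # -> 'is' (ties broken by first occurrence)
```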
@p @dcc @j @irie @lanodan @hj @sun
My GPU is smarter than me. I realized this two years ago when I was ERPing with it. I plan on selling it on eBay before it sucks me back in.
@p @dcc @j @irie @lanodan @hj @sun
No children were involved, imaginary or otherwise.
@p @dcc @j @irie @hj @sun I kind of feel like you could have a definition of intelligence that a machine pretty much can't meet, only mimic.

For example, Darwin Awards are like "stupidity awards", so you could put identifying danger as one aspect, as seen with lab rats too.
And I'd count communicating with others and teaching as a form of intelligence too, because helping your peers avoid danger effectively makes you stronger.
Then I'd add things like records and history, because you got good enough at immediate preservation that you can try to plan more long-term than a lifetime.

Meanwhile, computers/software are ridiculously good at destroying themselves all the time.
And so far it seems like what's there is low-level, immediately reactive stuff like error correction and basic forms of pattern recognition in filters, which feels more like an immune system kind of thing: your body learning stuff rather than you yourself.
@lanodan @dcc @hj @irie @j @sun Well, I mean, that's still informal, right? I think the essence there is that "consciousness" and "intelligence" have thus far evaded a *formal* definition, and they are not going to get any more solid now that robots are continuing to inch towards the latter.
@p @dcc @j @irie @hj @sun Yeah, that's the most annoying bit about it, and part of why I tend to avoid using "intelligent", especially in technical discussions; there are usually more precise words without getting wordier.