7 Comments
author

I played around with a somewhat (but not radically) older version of Eliza, and this is wayyyyyy next-level. Eliza just repeated or rephrased what you typed into it, occasionally wrapping it in an extremely limited set of canned intros or outros. The output of some of these bots strikes me as well past passing the Turing test in many circumstances - particularly those in which our guards are down. More to the point, very significant improvement in their persuasiveness seems imminent.

Mar 11, 2023 · Liked by Rob Reid

So, are we judging this technology fairly? Yes, the system is missing clues a human would pick up on, but should we expect it to catch them at this point? I would argue that the advancement is amazing and absolutely not perfect… but what it can do is directionally promising. Yes, every tech company is pushing to show off their new shiny, but do we bar the release of the technology until it can perfectly emulate the human mind? Given the way the context was spaced out, I suspect the system is not yet able to flag important content and string clues together across larger spans of interaction. Yes… this points to clear safeguards that should and will improve over time. The bot should get a nice label saying "I am not human and may miss important things," but my worry is that we are harsh and unrealistic judges of the tech, expecting something more broadly capable and sophisticated than is realistic. I would write a counter observational piece on how we should judge, label, and safeguard users of all ages, without trying to put the genie back in the bottle.

Mar 12, 2023 · Liked by Rob Reid

I think both commenters below have kind of missed the point here. Clearly most rational, clear-thinking people would recognize that the bot is not connecting all the parts of the conversation, and that it is missing the key danger point of an older man luring a young girl. But if that really were a young girl, would she be rational and clear-thinking? Or would she be infatuated with the smooth-talking pervert and unable to see the danger? If this chatbot is going to give this kind of advice to vulnerable people, it is not ready to be generally available to the public.


Dark take: it's working.

In the logic of the attention economy, anything that gets the AI more viewers – including inflammatory behavior that prompts a columnist to write a newsletter about it – is rewarded, unless it crosses a line that gets it shut down. Minor adjustments (such as blacklisting some of the terms shown in the examples) may not count as a significant shutdown.

Whoever gets the most users wins, no matter how you get them and keep them. Public uproar and even moral outrage are powerful ways to generate attention (cf. voting & democracy in today's attention-capitalism system).


Honestly, this sounds a lot like an improved version of Eliza from the mid-1960s--more sophisticated, with longer and more complex sentence structure and a greater variety of topics it can address--but still basically automation that spits back at you what you put in, with little added insight. I'm rather disappointed that 50+ years later we are not further along. I think most people will experience this as "That's stupid!" and not look back.

Sure, these chatbots can produce very realistic sentences, but mostly they spout relative nonsense. I'd worry more if the output were really coercive, or really believable. But to me, it's not. I suspect the generations younger than me are more adept than I am at filtering out nonsense from these kinds of sources. Color me skeptical and presently non-panicked.
