Discussion about this post

Rob Reid:

I played around with a somewhat (but not radically) older version of ELIZA, and this is wayyyyyy next-level. ELIZA just repeated or rephrased what you typed into it, occasionally wrapped in an extremely limited set of intros or outros. The output of some of these bots strikes me as well past passing the Turing Test in many circumstances - particularly those in which our guards are down. More to the point, very significant improvement in their persuasiveness seems imminent.

Fig:

So, are we judging the technology fairly? Yes, the system is missing clues a human would pick up on, but should we expect it to catch them at this point? I would argue that the advancement is amazing and absolutely not perfect… but what it can do is directionally promising. Yes, every tech company is pushing to show their new shiny, but do we bar the release of the technology until it can perfectly emulate the human mind? From the way the context was spaced out, I suspect the system is not yet able to flag important content and string clues together across larger spans of interaction. Yes… this points to clear safeguards that should and will improve over time. The bot should get a nice label saying "I am not human and may miss important things," but my worry is that we are harsh and unrealistic judges of the tech, expecting a broader and more sophisticated technology than is realistic. I would write a counter observational piece on how we should judge, label, and safeguard users of all ages, without trying to put the genie back in the bottle.

5 more comments...
