• Question: Will AI eventually pass the Turing Test?

    Asked by anon-377267 on 10 Jan 2024.
    • Photo: Carl Peter Robinson

      Carl Peter Robinson answered on 10 Jan 2024:


      Yes, I believe so. Look at systems like ChatGPT that can produce coherent textual responses to the prompts we give them. I still think Large Language Models (LLMs) like ChatGPT have a little way to go before they can fool an expert into believing they are having a conversation with another human, but they are getting there. That is only because they have been trained on vast amounts of [questionably-gathered] data, most of which was generated by humans, which makes their task of predicting which word should follow the previous one much easier. Despite this, there are still telltale signs in these LLMs’ responses that give them away. Finally, I don’t believe the Turing Test is still a relevant test for determining intelligent behaviour, not with the way these AI tools (I use the term ‘tools’ very deliberately) have been created. Turing couldn’t have foreseen them. Had he known LLMs would become a thing, he would have postulated a different, more relevant test for intelligence, as these LLMs are not intelligent.

    • Photo: Beatriz Costa Gomes

      Beatriz Costa Gomes answered on 11 Jan 2024:


      I think an AI made to pass the Turing Test will pass the Turing Test. The question, however, could be whether any AI will pass the Turing Test, and that I don’t think so! It’s just a matter of which tasks the AI was trained to do.

    • Photo: Gareth Hartwell

      Gareth Hartwell answered on 11 Jan 2024:


      The Turing Test is actually not very specific, so it depends on how you interpret it. But I think that, for a very long time to come, it will be possible for humans to invent questions that an AI would answer in a way that reveals it isn’t human.

      As a trivial example, if you ask ChatGPT general questions now, the responses seem quite human-like. But if you ask it to decrypt a simple coded message, it makes a complete mess of it, and it is pretty clear that it doesn’t really understand the concept. (I tried this over Christmas when I was stuck on a puzzle involving codes in a Christmas quiz.) While you can design AIs to understand more and more concepts over time, I believe there will always be some concepts that they can’t easily understand and apply.
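      To make the example above concrete: the answer doesn’t say which cipher the quiz used, so this sketch assumes a simple Caesar shift, one of the most common puzzle ciphers. A short program can decode it mechanically by trying all 26 shifts, which is exactly the kind of systematic procedure an LLM often fumbles when asked to do it "in its head".

```python
# A minimal Caesar-shift decoder (an assumed example cipher, not
# necessarily the one from the quiz mentioned above).
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

def brute_force(ciphertext: str) -> list[str]:
    """Try all 26 possible shifts; a human picks the readable candidate."""
    return [caesar_shift(ciphertext, s) for s in range(26)]

# Hypothetical ciphertext: "Hello, world!" encoded with a shift of 3.
candidates = brute_force("Khoor, zruog!")
```

      Scanning the 26 candidates for the one that reads as English solves the puzzle reliably, with no "understanding" required — which is why a machine failing at it is such a telling sign.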

    • Photo: Andrew Maynard

      Andrew Maynard answered on 11 Jan 2024:


      More and more experts are beginning to realise that the Turing Test isn’t a great way of evaluating modern AI. Many would say that some AIs have already passed it (the example given in the Royal Institution lectures didn’t really show how capable current AI is). The bigger question is what we mean by artificial intelligence (we don’t have a good definition of this yet) and, once we know what we’re looking for, how we will test for it!
