Comment by ArekDymalski

2 years ago

This article inspires a fundamental question: "What do we expect/want AI to work like?" Do we want a photocopying machine that provides verbatim copies, or are we willing to accept that intelligence is connected to creativity and interpretation, so the resulting output will be processed and might contain errors, omissions, etc.? To be honest, the same applies to humans. There's this passage in the article:

>If a large-language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory?

In the above passage we can easily swap "large-language model" for "Professor Jean Tirole" and ponder how high we set the bar for AI. Can we accept AI only if it is flawless and "more intelligent" (whatever that means) than all humans?