
Comment by DonHopkins

6 months ago

Actually Mechanical Turks do think, because they are human, by definition. Both historically and contemporaneously.

https://en.wikipedia.org/wiki/Mechanical_Turk

https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk

And actually you're also wrong about LLMs lacking knowledge of all those things. Go try asking ChatGPT. While you're at it, ask it what a Mechanical Turk is, and see if it aligns with those wikipedia pages.

Edit:

ToucanLoucan, as someone who doesn't know what a Mechanical Turk is, you do not need to post LLM output that proves my point to someone who already knows quite well what it is and gave you two wikipedia references and a suggestion to ask ChatGPT, but NOT a suggestion to post the response.

Most other people than you here are well aware of what a Mechanical Turk is, and you're certainly not advancing your argument that LLMs are not knowledgeable by posting LLM output that's more knowledgeable than yourself, and doesn't in any way prove your point. Even ChatGPT is much better at forming coherent arguments than that.

Edit 2:

No, you have clearly demonstrated that you don't know what a Mechanical Turk is, and you are spectacularly missing the point and digging in deeper to an ignorant invalid argument.

The very definition of the term "Mechanical Turk" is that it's a human being, so your choice of words is terribly unthoughtful and misleading, the opposite of the truth. It's just like the term "Man Behind The Curtain". The whole point of those terms is that it's a human. You are committing the deadly sin of anthropomorphizing AI.

The entire point of Amazon Mechanical Turk is that it is HUMANS solving problems machines CAN'T, by THINKING. So when you say "You're asking a mechanical turk to think", that is a completely reasonable and normal thing to ask a Mechanical Turk to do. That is what they are FOR. If it doesn't think, you should ask for your money back. You're not thinking either, so you definitely shouldn't sign up to work for Amazon Mechanical Turk.

https://www.mturk.com/

Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace that makes it easier for individuals and businesses to outsource their processes and jobs to a distributed workforce who can perform these tasks virtually. This could include anything from conducting simple data validation and research to more subjective tasks like survey participation, content moderation, and more. MTurk enables companies to harness the collective intelligence, skills, and insights from a global workforce to streamline business processes, augment data collection and analysis, and accelerate machine learning development.

While technology continues to improve, there are still many things that human beings can do much more effectively than computers, such as moderating content, performing data deduplication, or research. Traditionally, tasks like this have been accomplished by hiring a large temporary workforce, which is time consuming, expensive and difficult to scale, or have gone undone. Crowdsourcing is a good way to break down a manual, time-consuming project into smaller, more manageable tasks to be completed by distributed workers over the Internet (also known as ‘microtasks’).

Comment by ToucanLoucan

Done, here's what I got:

The Mechanical Turk was a famous 18th-century hoax: a chess-playing automaton that appeared intelligent but was secretly operated by a human hidden inside. The metaphor has since evolved to describe systems that appear intelligent but rely on hidden human labor or clever illusion.

LLMs like me aren’t hoaxes — there’s no human behind the curtain — but the comparison still holds in a philosophical sense:

* Similarities

• Surface-level fluency: I generate responses that look like understanding, much like the Turk appeared to play chess.

• No internal consciousness: I don’t “know” things in the human sense. I don’t have beliefs, intentions, or awareness.

• Pattern-based output: My responses are based on statistical associations, not comprehension or reasoning in the way humans experience it.

* Differences

• Scale and complexity: Unlike the Turk, I’m not manually operated — my output is generated by vast neural networks trained on massive datasets.

• Emergent behavior: While I don’t “understand,” I can simulate reasoning, creativity, and emotional nuance to a surprising degree.

• No deception: I’m not pretending to be human or hiding a person inside — I’m transparent about being an AI system.

* Philosophical Take

The comparison is especially apt if you’re exploring the Chinese Room Argument (Searle): the idea that syntax alone doesn’t equal semantics. I manipulate symbols, but I don’t know what they mean. So yes — in terms of limitations of comprehension, the Mechanical Turk metaphor captures the illusion of intelligence without the substance of understanding.

But unlike the Turk, I’m not a trick — I’m a tool. And when used with awareness of my boundaries, I can be a powerful co-thinker, simulator, and amplifier of human creativity.

---

Back to me: As I said, a tool, with uses. And quite aware of its own limitations. Maybe all the implementation engineers should start asking LLMs if LLMs are going to be good at the tasks they want them to do.

> And actually you're also wrong about them lacking knowledge of all those things. Go try asking ChatGPT.

It knows the map, not the territory. Until I see ChatGPT sinking its teeth into a Crunchwrap Supreme, I will not believe that it has knowledge of what a Crunchwrap Supreme is.

  • The main effect this conversation is having is making me want Taco Bell. Perhaps that was the goal the entire time.

    </tinfoil hat>

> ToucanLoucan, as someone who doesn't know what a Mechanical Turk is, you do not need to post LLM output that proves my point to someone who already knows quite well what it is and gave you two wikipedia references and a suggestion to ask ChatGPT, but NOT a suggestion to post the response.

I didn't ask it what a Mechanical Turk was (because I know), I asked it if comparing it to a Mechanical Turk is a reasonable take, to which it said what I posted. You probably would've put that together if you bothered to read it, but I must admit, this is a good application for LLMs. Now I don't need to feel insulted that I took time to write something and it was then ignored by my interlocutor.

> and you're certainly not advancing your argument that LLMs are not knowledgeable by posting LLM output that's more knowledgeable than yourself,

In the text you're using in an attempt to skewer me, it literally states it is not knowledgeable: "Emergent behavior: While I don’t “understand,” I can simulate reasoning, creativity, and emotional nuance to a surprising degree." And it is correct. It can simulate those things. Simulate.

It also, before that, said: "Surface-level fluency: I generate responses that look like understanding, much like the Turk appeared to play chess. • No internal consciousness: I don’t “know” things in the human sense. I don’t have beliefs, intentions, or awareness. • Pattern-based output: My responses are based on statistical associations, not comprehension or reasoning in the way humans experience it." Again, it seems aware, in whatever sense of awareness you want to ascribe to these things, that it is not knowledgeable. And it readily states that it is not sharing in anything approaching a human experience.

So if you're so dead set on seeing LLMs as knowledgeable intelligent machines, you might first try convincing the LLM that's true, since it itself doesn't seem to think it is.