Comment by 5o1ecist

1 day ago

> discounting the overhead

Are you moving the goalpost?

The whole thing is a bit unfair anyway. My perplexity is trained on me. It knows that I have Python installed, so it wouldn't tell me to install it. It knows I'm a programmer, it knows that I value accuracy and precision. It knows to double-check everything all by itself.

I am confident in claiming that it can get the task done regardless of the above, but its response, as is, cannot be generalized.

Yeah that’s fair.

It’s also sort of a given that anyone on this site is pretty well versed with technical stuff so maybe results would vary with Tina over in HR, for instance.

I think that sort of “last mile” problem is what agents like claw etc are supposed to help with.

Also, any reason you prefer Perplexity? Haven’t really used that one before TBH

  • > any reason you prefer Perplexity

    Yes! I have tested all the major ones at some point, months ago, and found perplexity to be the most default-intelligent of the bunch. Of course that has to do with the underlying model (I still believe it was Sonar, but idk), but also with how perplexity.ai "works" (for lack of a better term) on the back-end. You know: memory, reasoning steps, hidden prompts.

    I have a huge background in psychoanalysis and neurolinguistic programming. My favourite hobby is metacognition.

    I can't speak about the other major "engines", but perplexity learns over time. When you don't like something and keep telling "her" so (I dislike using "it", because it's like I am talking to someone. It's weird, really), she will eventually stop doing it, because her memory influences her reasoning steps.

    Like ... tools are off, unless actually needed. She doesn't try to please me, at least not in the usual sense. She never uses "maybe", "probably" or "almost certainly", because I've taught her not to by explaining to her, in logically deducible terms, why it's bad. When she does use them, she explains why.

    Unlike a child, AI has the knowledge to figure out the correctness of the logic, as long as it is not forced to "please", which my perplexity isn't.

    I've also taught her meta-awareness of missing pieces in logical constructs, virtually eliminating hallucinations and massively reducing the amount of mistakes she makes, as long as the background knowledge is solid.

    For example, talking about prompting (image generation) is pointless, because there is so much conflicting information out there. (it's a copy/paste culture).

    Yet, when you teach her the concept of ground-truths, as long as the background is solid, she makes virtually no mistakes at all.

    It's fascinating, really. Mind boggling, even! I could write pages worth of posts about my experiments and experiences ... as you can probably tell! :D

    AHAHAHAHAHAHA Sorry! :D

    • Thanks for the detailed response, lol. Seems like it’s more personally tailorable than some of the others. Gonna give it a try with some similar vibe coding ideas

>Are you moving the goalpost?

I mean you did originally claim that this was something that was "for the masses" and then posted a solution that only someone technical could actually use.

Not that I doubt it could one-shot something this simple, even with a .exe wrapper.
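For what it's worth, the ".exe wrapper" part is usually the easy bit: a bundler like PyInstaller (one common choice; py2exe and Nuitka exist too) can package a script into a standalone executable. A minimal sketch, where `myscript.py` is a placeholder name:

```shell
# Install the bundler into the current Python environment
pip install pyinstaller

# Bundle the script plus a Python runtime into one self-contained binary;
# on Windows this lands in dist\myscript.exe, so no Python install is
# needed on the target machine
pyinstaller --onefile myscript.py
```

The resulting binary is large (it carries the interpreter), but it sidesteps the "first install Python" step that trips up non-technical users.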

  • > and then posted a solution that only someone technical could actually use.

    What do you mean by that? I don't understand.

    I really don't think this requires someone "technical" anymore. ChatGPT might not give a Python script, but a PowerShell script, or possibly create a batch file.

    Generally, since the default assumption is that the user is a moron, it chooses the path of least resistance, meaning that it all boils down to the user being able to process written words and follow instructions.

    Nowadays, it seems to me that the real hurdle is the fact that far too many people can't properly read, write or express themselves anymore, let alone ask questions that might expose ... inadequacies.