Comment by peteforde

8 hours ago

I guess we have to agree to disagree, because I am not particularly interested in chemistry and ChatGPT has been extraordinarily helpful in demystifying electronics. Having 24/7 access to a patient person who can unpack the difference between TTL and CMOS logic or when you'd choose a buffer instead of a Schmitt trigger without belittling you for not already knowing what they know is awesome and not going to get anyone even slightly killed.

What are you doing with TTL logic in 2026, out of curiosity?

(I’m not saying it’s not used, but the only thing I’d use TTL for is building old circuits out of the Forrest Mims books.)

  • Reasonable question and hopefully an interesting answer...

    The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.

    I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.

    LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!

    • What was the answer?

      I’m curious at this point what the quality of the answer was, since you made a point about LLM use not turning your brain to mush.

      I’ve not really used LLMs to answer questions, since they haven’t gotten me the answers I wanted, but maybe I’m just set in my ways.

Electronics can kill too. IIRC capacitors in CRTs are particularly deadly. Though I suppose someone using LLMs only as a first step, much like Wikipedia, is probably at much less risk than someone using them as their only source.

  • Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.

    But raw dogging capacitors in CRTs is such an overt straw man in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?

    If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.

> Having 24/7 access to a patient person

It’s not a person. You understand that, right? I have to ask, considering the number of people who are “dating” and wanting to marry chatbots.

It’s a tool. There’s no reason to anthropomorphise it.

  • I'm glad that you brought that up, because I actually hovered on my response precisely because of those words. Specifically, I wondered if I could reliably count on someone showing up to say something patronizing and unnecessary.

    This particular combination of snark, faux-concern and pedantry doesn't help the point you're trying to make about my loving AI wife.

    • It was not my intention to be patronising or snarky, nor am I the least bit concerned for you (faux or otherwise). Though on a reread, I do see how my reply could come across as unkind. I regret that and apologise for it. It was not my intention, but it was my mistake. I should’ve made it shorter:

      > It’s not a person, it’s a tool. There’s no reason to anthropomorphise it.