Comment by peteforde
2 hours ago
Thanks so much for the additional context. You've given me more to dig into.
What I would like to know from you is:
1. On the whole, is the information you see presented more or less coherent and useful? Is it better to have this information than not to have it at all?
2. Where does this land in terms of your expectations? Did anything surprise you?
It's clear from your reply that you know what you're talking about, while I'm still clawing my way up from nothing... so it makes sense that you have fewer things you need to ask about.
I've bootstrapped my entire EE skillset over the past 2-3 years, largely with the help of LLMs to interrogate. It's helped me design and build my first product. I'm confident that without these tools, it's not a question of how long it would have taken so much as the truth: it would have died on the vine.
Follow-up: https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...
I asked it about the AHC family equivalent and it recommended against using it, suggesting either AHCT or sticking with HCT. For what it's worth, the reference board that I'm tracing uses an HCT, so the LLM isn't wrong.
Note that at the time I'm writing this, I have an extremely fuzzy understanding of the difference between these three... but I'm working through it.
I’m mostly just curious about how people use LLMs to learn. I don’t know what your goals are, and even if your goals were the same as mine, I don’t know how LLMs stack up against the way I learned (mostly from books). At least, not long-term. I’m not that good at electronics; I’m just a hobbyist who went through Forrest Mims’ mini-notebooks and later Horowitz and Hill.
What I like about information from humans is that humans are always trying to figure out how to say things that are relevant and informative. By “relevant”, I mean that we try to avoid saying things that don’t help you. By “informative”, I mean that we try to include information that you want to know, even if you didn’t specifically ask for it.
Picking on the chat for a moment—when you started out with the question, my first thought was, “This person is specifically asking about HC versus HCT, but maybe they want a broader overview of logic families, and maybe they want to understand which logic family to pick for their hobby project.” That’s an example where I think ChatGPT could have identified something that you wanted to know, but didn’t. (It wasn’t as informative as it could have been.)
Then there are times when ChatGPT gave you information of dubious relevance.
> Important: On the HCT125, the enable is active-LOW.
I don’t think that’s contextually important. It’s like saying, “Important: On the Honda Civic, the gas tank is on the left.” That’s contextually important when you’re at the gas station, but not when you’re buying a car.
I’m not sure why the LLM is recommending the TTL-compatible chips. IMO, the right thing to do here is probably to run everything at 3.3V, unless you have something that specifically needs 5V. When everything is at 3.3V, you don’t have to think about level shifting and you can just pick a very boring logic family like AHC. But I don’t know what you’re building. Likewise, I would lean towards using normal CMOS logic levels, unless I had a specific reason to choose TTL-compatible. The regular CMOS versions have better noise margin, because the threshold is in the optimum place—right in the middle.
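To make the noise-margin point concrete, here's a rough sketch comparing HC and HCT at 5 V. The VOH/VOL/VIH/VIL numbers are typical datasheet limits I'm assuming for illustration (check the actual datasheets for your specific parts and load conditions):

```python
# Rough noise-margin comparison at VCC = 5 V.
# All voltage limits below are assumed typical datasheet values, not authoritative.

def noise_margins(voh_min, vol_max, vih_min, vil_max):
    """Return (high-side, low-side) noise margins in volts."""
    return voh_min - vih_min, vil_max - vol_max

# 74HC: CMOS input thresholds, roughly 0.7*VCC and 0.3*VCC
hc_high, hc_low = noise_margins(voh_min=4.4, vol_max=0.33,
                                vih_min=3.5, vil_max=1.5)

# 74HCT: TTL-compatible input thresholds, fixed at 2.0 V and 0.8 V
hct_high, hct_low = noise_margins(voh_min=4.4, vol_max=0.33,
                                  vih_min=2.0, vil_max=0.8)

print(f"HC:  high {hc_high:.2f} V, low {hc_low:.2f} V")
print(f"HCT: high {hct_high:.2f} V, low {hct_low:.2f} V")
```

With these numbers, HC comes out roughly balanced (about 0.9 V high-side, 1.2 V low-side), while HCT is lopsided: a huge high-side margin but under 0.5 V on the low side, because the TTL switching threshold sits way down near 1.4 V instead of mid-supply. That asymmetry is what "the threshold is in the optimum place" is getting at.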