Comment by bayindirh
3 days ago
I have a similar experience. I seldom use it, mostly to test its current state, and it generally (85% of the time) gives wrong answers. Then I discuss this with a couple of friends:
Me: I tried $AI recently, I asked $question, it hallucinated.
Them: But it sucks at that.
Me: Then what is it good at? It would be useful if it helped me out of a ditch.
Them: It depends on the domain...
These guys are not evangelists or anything, but colleagues who want to reduce their workloads. If it can't help with what I need, then how can it help me at all?
At the end of the day, I don't plan to use this in a daily capacity, but with all the resources poured into it, it's still underwhelming.
A friend of mine has copilot integrated with his storage appliance that all the business docs are hosted on for his firm. He says it's amazing.
My company uses SharePoint, and Copilot can digest all of the documents I have access to on it, plus OneDrive, Teams, Outlook, etc. across my tenant. Most of the time, it's pretty useless.
There must be some reason for these two disparate experiences. It's the same product offering. I couldn't tell you.
Reminds me of a bounty I received recently. Someone essentially exposed a Bedrock agent, which had access to the company's internal documents, to the internet, unauthenticated. It even had the reports and notes for other bug bounties that had been reported to them.
I mean, anything with Sharepoint will be terrible. No amount of AI can fix that mess.
I too feel this way.
Tell Claude what you do and ask it where it can be the most helpful. It is true that the tool has to be learned, and it won't help everywhere. If you are doing web dev just to make a tool, it is purely magical. I've found it to be mostly useless for making good Helm charts.
I generally use them for researching things which I was unable to find anywhere else. With Gemini, for example, I have two extreme examples:
I asked about a concept in tango music, with a long prompt explaining what I was looking for. It brought back a single Spanish YouTube video explaining it perfectly, alongside a slightly wrong summary; the video itself was spot on, and I got what I needed.
Then I asked something else about a musical instrument, again with a very detailed prompt, and it gave me a very confident answer suggesting that mine was broken and needed to be serviced. I then e-mailed the maker of said instrument, giving the same model number (and providing a serial) and asking the same question, and got a reply saying it's supposed to do that and is perfectly fine. It turned out that Gemini had hallucinated pretty wildly.
For programming I don't use AI at all. I have a habit of reading library references and writing code directly, RTFM'ing the official docs of whatever I'm working with. It provides more depth, and I nail the correct usage in less time.
The opposite happened to me. I asked Gemini about a type of Vietnamese dance called "nhảy sạp" and it returned a good-sounding summary along with a video it claimed explained the dance and how it worked. The video was from the Knowledge Academy and titled "What is SAP?"