Comment by the_snooze
3 days ago
>Workspace AI includes things like email summaries in Gmail, generated designs for spreadsheets and videos, an automated note-taker for meetings, the powerful NotebookLM research assistant, and writing tools across apps.
Maybe I'm just an old curmudgeon stuck in my ways, but I haven't found much compelling value in these use cases in my day-to-day work. For summaries and note-taking specifically, I feel they're solving the wrong problem: it's not that I have all this information that I really want to go through, but it's that I have too much information and it's become all noise.
The real solution to too much email is fewer and higher-priority emails. The real solution to too many meetings is fewer and more-focused meetings. These tools paper over the root cause of the problem, which is that people/organizations cannot (or are unwilling to) be clear about communication priorities and say "maybe this email/meeting isn't a good use of time after all."
How is AI in email a good thing?!
There's a cartoon going around where in the first frame, one character points to their screen and says to another: "AI turns this single bullet point list into a long email I can pretend I wrote".
And in the other frame, there are two different characters, one of them presumably the receiver of the email sent in the first frame, who says to their colleague: "AI makes a single bullet point out of this long email I can pretend I read".
It's true: Why should I bother to read something you didn't bother to write?
There's a trend of people replying to posts/tweets/etc. with 'I asked ChatGPT and it said...'
It's the modern equivalent of LMGTFY. The OP could just as easily have written the same prompt themselves. The difference is that LMGTFY was an expression of irritation, smugness and hazing. The ChatGPT reply is just garrulous laziness. I expect and hope we'll develop social rules that mean this type of reply will be seen as passé.
3 replies →
Yes, when I see something written by AI I don't read it. It's a waste of time.
7 replies →
My expectation is that:
1: people will use ChatGPT to write their formal emails based on a casually written text
2: people will use ChatGPT to convert their emails from formal text to summaries
3: this will get automated by email providers
4: eventually the automation will be removed and we'll just talk in plain language again
2 replies →
It is funny, but it is genuinely an enormous waste of energy and money.
You can run it through AI to summarize it down to a sentence or two. It's like the telephone game but with computers.
15 replies →
My email is disliked due to its brevity; turning the single clear and concise sentence into a multi-paragraph treatise might just lead to promotions, raises and bonuses which I can trickle down through the economy.
I think this underrates how many emails are literally just replies of "sounds good". Small snippet replies seem to be the vast majority of automatically suggested responses in Gmail.
A reply of "sounds good" means the initial email has been read and its contents agreed upon. How would AI improve upon this?
- sending "sounds good" even when the recipient hasn't, in fact, read the initial email => catastrophic alternative
- writing an elaborate email explaining in luxurious detail why it in fact sounds good => not catastrophic, but costing time on the other side to read and understand, with zero added value
1 reply →
Sounds good.
Email is a dated form of communication; that's why every other messaging platform will let you just like and heart stuff.
23 replies →
Proton has a nice feature for writing emails.
They specifically offer a grammar/spell check and can also change tone (formal/informal) and length. The length one I have never used, but the grammar/spell check is a godsend that I use almost always.
You're aware we've had grammar/spell check since (checks) 1961 right? It's built right into your operating system.
1 reply →
Maybe you aren't in a space where it would be useful, but not everyone who has to write an email is a great and concise writer.
I worked with groups of tradespeople who had poor literacy; they had to write emails, and some of them were very poorly written. AI would have helped these people a great deal, both in providing information and in being able to understand what was coming back to them.
I worked with engineers daily for around 40 years and now I work with tradespeople daily. In general the tradespeople are better communicators.
Formal writing is just that.
Alice: Hey, Bob, I finished the job, pay me
Letter: Blah blah blah, Bob, blah blah blah, $$$, blah blah blah
Bob: Oh, Alice is done, hey Charlie, pay her
Letter: Blah blah blah, Charlie, blah blah blah, Alice, blah, $$$, blah blah
Charlie: Ok, Alice is paid
Letter: Blah blah, Alice, blah blah, $$$, blah blah, bank account, blah
Alice: kthx
Letter: Blah blah blah...
I like this version of the same joke (unfortunately no idea what the source is): https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fw...
Found it: https://marketoonist.com/2023/03/ai-written-ai-read.html
It almost can't be a good thing. LLMs are only useful when given all the relevant context. When you write an email, the context is mostly in your head.
It isn't, though; it's in all the meetings that happened beforehand and all the documents around them.
The biggest productivity boost I ever managed was using Whisper to convert meetings to text and then a big model to summarize what happened.
Then I can chat with the docs and meetings about who decided what, when, and why. It's a superpower that I could only implement because I'm in the C-suite and could tell everyone else to get bent if they didn't like it—and gave babysitters to the rest of the C-suite.
Having visibility and ownership for decisions is a huge deal when everyone has access to it.
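To make that concrete, here's a minimal sketch of the kind of pipeline I mean, using the open-source openai-whisper package and the OpenAI chat API; the model names, file path, and prompt are illustrative rather than our exact setup:

```python
# Minimal sketch: transcribe a meeting recording with Whisper, then summarize
# with a chat model. Model names, paths, and the prompt are placeholders.
import whisper                # pip install openai-whisper
from openai import OpenAI     # pip install openai

def summarize_meeting(audio_path: str) -> str:
    # 1. Speech-to-text with a local Whisper model.
    stt = whisper.load_model("medium")
    transcript = stt.transcribe(audio_path)["text"]

    # 2. Summarize decisions and owners with a chat model.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Summarize this meeting transcript: decisions made, "
                "who owns what, and open questions.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_meeting("standup-2025-01-08.m4a"))
```

The "chat with the docs and meetings" part then sits on top of the saved transcripts and summaries, e.g. by dumping them into whatever document store or retrieval tool you already use.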
2 replies →
My experience with LLMs expanding on bullet points is that they often enough misrepresent my intentions as a writer. Often in infuriatingly subtle ways.
Same when summarizing, just less frequently.
As someone who cares about precision and clarity in my writing, I do not use LLMs in the context of communication.
If you're a non native speaker trying to get the tone just right with recipients whom you don't know, it's invaluable.
Sometimes I would spend 15 minutes writing a 3 or 4-line email of this kind. Not anymore.
> How is AI in email a good thing?!
> There's a cartoon going around (...)
Both frames of the cartoon represent a real perceived need: for the sender, the need to inflate the message to "look nice" because "people expect it", and then for the recipient, the need to summarize the nice-looking message to get the actual point they care about.
Hopefully the use of AI in email will make that cartoon (and the underlying message) widespread, and lead to people finally realizing what they failed to realize all these decades: just send the goddamn bullet point. We don't need AI in e-mails. We just need to stop wasting each other's time.
EDIT: and riffing off rpigab's comment downthread, https://news.ycombinator.com/item?id=42723756 -- I hope that in the future people will feel comfortable sending the goddamn prompt instead of AI-generated e-mails. It carries all the information and much less noise.
I mean, using LLMs makes sense if you actually need to communicate in prose - for many, myself included, it's much easier to evaluate whether some text sounds right, than to write it that way in the first place, so LLMs are useful in evolving and refactoring your own writing (and learning how to write better from it, over time). But that is rarely the case in transactional or business communication - for that, just send the prompt.
Google seems to have an advantage here; as the client on both ends in many emails, they could just check if this AI expand/summarize process is occurring and if so just send the bullet point (or if they want to be really clever just pass the bullet point through a thesaurus, so nobody will notice even if the sender happens to see what the recipient got).
Oh boy the future is so underwhelming.
1 reply →
Given how much compute these models take to run, I don't think there's any value in that.
Do you happen to have a link for that comic?
Not the person you asked, but I too enjoy good web comics.
https://marketoonist.com/2023/03/ai-written-ai-read.html
2 replies →
what are people even worried about here? they're just trying things to see whether they're useful. don't expand your emails into long prose if it adds no value for you and they will focus on other things.
This is so funny I scream-laughed just reading over it XD
someday 99% of all computing power is going to be used to generate and summarize vast amounts of text.
The most inefficient protocol of the internet.
This was literally in the initial gmail demo about AI :D
Really? Wow. And they think if they're pointing it out, it absolves them somehow? Like those companies that used to have Dilbert cartoons pinned on cubicle walls?
Right now at Amazon we are going through the annual feedback cycle where you have to write strengths and growth areas for your colleagues. You will usually have to do ~12 of those.
I don't use ChatGPT for those, but it is the epitome of what you are describing: people will take a single sentence, ask some LLM to blow it up to the correct length, and in the process make it a complete waste of time for everyone.
My guess is that with long-form text losing value due to LLMs, we will see a return of very succinct 1-2 line employee feedback.
This is one of the few places I have gotten value out of the LLM. I tell it about my relationship to the colleague and what we worked on, in a very quick rough way. Then I tell it we are writing peer review and the actual review prompt. It gives quite good results that aren't just BS, but I didn't have to spend the time phrasing it perfectly. Because I do want my peer reviews to reflect well on both me and the colleague.
I get where you are coming from with this, but in my opinion being able to give feedback in a clear and concise fashion is a skill that people should have. LLMs will help you elaborate but they will also add their own flair by choosing the actual work. You can think "wow that's actually what a better person of me would have written" but you are biasing yourself based on what the LLM understood of your prompt focusing on form over substance.
But as the other comments mention it might just all be bullshit anyway.
1 reply →
> that aren't just BS
Having been on the receiving end of many of these, it absolutely is pure BS and I lose all respect for anyone who themselves have so little respect for their colleague's time as to subject them to the AI-written slop instead of actual genuine feedback.
The whole fucking point is to give them actionable feedback, both good and bad, for them to work on themselves, not some generic hallucinated summary of some bullet points you haphazardly threw together. I can copy/paste the review prompt into ChatGPT myself, thank you very much, I don't need you to do it for me and to pass it off as your own genuine thoughts.
2 replies →
If a colleague gave me LLM responses instead of genuine feedback I would never ask them for a review again. Which may be what they were going for. But sadly this is not what I wanted.
Be better. Someone respected your opinion enough to go out and ask for it. Take a minute to reflect.
> we will see a return of very succinct 1-2 line employee feedback.
This would be a great outcome in a lot of areas!
Why even start with a single sentence? They're asking you to come up with excuses ("growth areas") to fire twelve of your colleagues. It's a waste of your time, and you should figure out with your colleagues and manager exactly what text you need to generate to deal with this silliness.
Why do you think this is what performance review cycles are?
3 replies →
> people will take a single sentence, ask some LLM to blow it up the correct length and in the process make it a complete waste of time for everyone.
It's more complicated than this.
The short form isn't actually the best form. It's incomplete. The LLM is being used to decompress, because that can be difficult to do. Blindly using an LLM isn't the solution, but it can be part of an effective workflow to write good feedback.
Also, I'm sure some people take a brief, complete idea and expand it into an entire paragraph because they have some warped perception. That's bad, but I don't think most people are doing that, because most people don't see any reason to.
I bet the reviews are evaluated by AI too—AI writes, AI evaluates, what could go wrong? :)
I hope it drives a cultural revival in appreciation of laconicism.
I just exited the toilet following 2.5 hours of back-to-back meetings, and was looking forward to actually getting some work done when the product owner grabbed me for a conversation about priorities for the sprint planning session that's scheduled in a couple of hours.
In this week so far (first week back from Christmas / New Year leave) I've spent maybe half a day total on work that could be classified as "progress". The rest of the time has been meetings and the required meeting follow-up work.
There's no point in Sprint Planning or considering adding priorities to the current plate. It's full. But nobody has time to eat things off the plate because we're always in meetings to work out how we can eat off the plate more efficiently.
/rant
I've come back from holidays angry. Things gotta change.
The secret is to add every meeting into your Jira as a task, and then close it once the meeting is done.
Equally, instead of talking about meetings as detracting from your work, start talking about them as the work.
When your manager asks about your milestones, or accomplishments, or success stories, make meeting attendance front and center.
When discussing software development, bug fixing, etc in the meetings, point out that you won't actually do any of it. Point out that 20+ hours of your week is in meetings, 10 hours of admin (reading, writing, updating tickets), 5 hours of testing etc.
"This task will take 40 hours. At 1 hour per week I expect to be done in October sometime. If all goes to plan'
Yes, it seems cynical, but actually it has real outcomes. Firstly your "productivity" goes up. (As evidenced by your ticket increase.)
Secondly your mental state improves. By acknowledging (to yourself) that you are fundamentally paid to attend meetings, you can relax in your own productivity.
Thirdly by making your time allocations obvious to your manager, you place the burden for action on him.
If you convince your colleagues to do the same, you highlight the root problem, while moving the responsibility to fix it off your plate.
Thank you for this!
I was just thinking about how for the people requesting all of these meetings, the meetings are the work. If they don’t meet / waste everyone’s time, they are… unproductive.
For engineers, meetings are the non-productive part and are not counted anywhere.
Adding them to Jira and accounting for their cost is the way. Businesses understand money. Meetings are expensive.
Does your company log meetings as tickets?
3 replies →
Have you considered setting more meetings with various stakeholders to discuss how to prioritize time for the next 2 weeks? And then follow up check in meetings every 2 days to change direction in an agile way?
You really have to schedule a meeting to discuss an upcoming meeting, so the upcoming meeting can be more efficient.
(yes this happened to me before)
3 replies →
I don't blame you for getting angry.
How big's the org? This setup feels unavoidable past a certain company size as growth attracts grifters who then call meetings atop meetings to appear useful.
Unless you own the shop I don't see the issue - good money for a day's work a week?
It's more a case of team-member churn, requiring a near-constant re-establishment of work practices, alongside a number of over-officiated processes that are in a constant state of being re-engineered for efficiency because they're a constant source of "time drain away from actual progress". There's also a lot of tech debt that has only recently (in the past three years) been really focused on to grow out of. There's also a lot of complexity to the system(s) we work with, and the combination of complexity and tech debt is neither pretty nor easy.
> Unless you own the shop I don't see the issue - good money for a day's work a week?
Yeah, except I have a visceral feeling of pressure to make progress and I don't want to be "one of those people" who don't work towards some kind of improvement. I had a bit of a rant today, and one of the leaders agreed with basically all of my points, although they said that there's a limited amount that can change in the immediate due to existing priorities. However, I'm still going to dedicate some time every day to map out how to improve on the status quo - this will further inhibit my actual task progress, but in the pursuit of a loftier goal (so, yes, potentially making it worse, but it'll feel like I might make things better...).
I had a few use cases with searching and organizing emails I would have used. For example, I wanted a table of all my Lyft rides from a certain year with distances driven, start/end locations, cost, etc. All that info is available in the email you get after riding, so I figured Gemini could read my emails and organize the info.
Turns out it doesn't work at all. It gave me a random selection of rides, was missing info in some of them, and, worst of all, didn't realize it was giving me bad info. Pretty disappointing.
That's the glaring issue with all of these AI "features". If it can't be trusted to produce something that is both accurate and complete, it's generating negative work for whoever has to track down and fix the problems. Maybe some people like cleaning up sloppy work from their coworkers more than just doing the damn thing, but I personally hate spending time on that and GenAI adds a whole bunch more of it to every process it gets shoved into.
I take a slightly different approach - I usually have AI assist in writing a script that does the task I want to do, instead of AI doing the task directly. I find it is much easier for me to verify the script does what I want and then run it myself to get guaranteed good output, vs verifying the AI output if it did the task directly.
I mean if I'm going to proof-read the full task output from the AI, I might as well do the task by hand... but proof-reading a script is much quicker and easier.
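For the Lyft-receipts example upthread, the kind of script I mean might look like this. It's a rough sketch that assumes the receipts were exported as .eml files (e.g. via Google Takeout); the folder name and regex patterns are illustrative guesses, not Lyft's actual email format, and would need tuning against the real messages. The point is that once the script is verified, its output is deterministic and checkable:

```python
# Rough sketch: parse exported Lyft receipt emails into a CSV instead of
# asking an LLM to extract the data directly. Folder name and regexes are
# illustrative assumptions, not Lyft's actual email format.
import csv
import email
import re
from email import policy
from pathlib import Path

FIELDS = {
    "date": re.compile(r"Ride on (\w+ \d{1,2}, \d{4})"),
    "cost": re.compile(r"Total[^$]*\$(\d+\.\d{2})"),
    "miles": re.compile(r"(\d+(?:\.\d+)?)\s*mi\b"),
}

def parse_receipt(path: Path) -> dict:
    msg = email.message_from_bytes(path.read_bytes(), policy=policy.default)
    part = msg.get_body(preferencelist=("plain", "html"))
    body = part.get_content() if part else ""
    row = {"file": path.name}
    for name, pattern in FIELDS.items():
        m = pattern.search(body)
        row[name] = m.group(1) if m else ""  # leave blank rather than guess
    return row

def main() -> None:
    rows = [parse_receipt(p) for p in sorted(Path("lyft_emails").glob("*.eml"))]
    with open("rides.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", *FIELDS])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```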
I used Gemini to do a similar task and, for whatever reason, I found it performed better when I broke down the task into individual steps.
These LLMs excel at making more. More emails with more words. More blog posts with more fluff. Making it open to more people means more usage, which means more numbers being more, which means more money for the people building these systems.
I don't see what I get out of 80% of these products. It's just more noise.
Google's implementation of AI really shows the innovator's dilemma in action.
These features are just so rudimentary you just know a bunch of MBAs from McKinsey came up with them over 7 months and $25m.
I find AI meeting transcripts and summaries to be one of the most genuinely useful things to come out of this era of LLM tools. Being able to see a quick summary of what was decided or who was supposed to do what next is just so helpful, either for refreshing your memory after the weekend or just because people aren’t all that great at taking and sharing notes.
I prefer to take succinct notes on paper or eInk and cut the noise while I'm in the meeting. I'm better focused and keep the meeting to what really matters. A colleague sent me one of those summaries, and it didn't make sense. For me it can't replace a good system, precise notes and useful, on-point meetings. Maybe for people who have useless meetings they must attend it's better?
It's nice if you're the one presenting or leading the meeting, and/or if the person you've asked to take notes is not especially diligent. I've also been sent a photograph of someone's handwritten notes after a meeting and found it...not terribly useful.
1 reply →
Indeed.
That does sound generally useful. Out of interest: do you ever see a one-hour meeting summed up so briefly that the participants question why they spent an hour on it (or, more realistically, question whether the LLM understood the meeting at all)?
Even when meetings are summed up, which I think they should be, you frequently see that no real progress was made, someone did all the work before the meeting started and this is now just a one hour sign off, or everything is simply pushed to the next meeting.
I can hardly wait to use it as an excuse. "Oh sorry I didn't do that because it wasn't in the AI summary" ;)
I had the opposite experience recently. I was sent a summary of a sales video call, and the summary stated that we had promised to deliver, in 2 weeks, something that was not nearly ready! I was panicking, but then started to doubt that the person in question would make such an irresponsible promise (though not... completely sure, if you know what I mean). Fortunately the summary included links to timestamps in the video call, so I watched it. From the video it was clear he was talking completely hypothetically and not promising anything at all! The AI completely failed to pick up the nuance and almost made me change team priorities for the next sprint. Glad I verified it.
So, instead of the people in the meeting spending a few minutes writing up a few notes to send to you about actionable next steps, you got to waste your time on the artificially intelligent fuck up.
These are human problems desperate for magical ways to do less work.
> For summaries and note-taking specifically, I feel they're solving the wrong problem: it's not that I have all this information that I really want to go through, but it's that I have too much information and it's become all noise.
I think this really encapsulates something that I hadn't been able to put my finger on in regards to LLM summarization. What it seems to indicate is that, if you need a computer to summarize a large amount of text that someone has sent to you, there are two likely possibilities:
1) The information is incredibly dense/important/technical/complex. This necessitates the extreme length of the message - (think: technical documents, research papers, a rough draft of a legal notice, or your will.) For these sorts of things, you should not rely on an LLM to summarize it, because it may miss key details of the message.
2) The person sending it to you is bad at communicating, in which case the solution is help them learn better communication, rather than "de-noising" their clumsy wording into something comprehensible.
"But what if its number 2, but it's coming from your boss?"
Then I see two obvious points to consider:
First, you should absolutely be telling them about the problem, regardless of the position they hold. You can phrase it in a way that isn't rude: "Hey boss, I saw (message) but I'm not 100% sure of the intent. I've actually noticed that with (other time)... I usually try to front-load the action items up front, and put the specifics lower down. Anyway, to make sure I'm tracking, you're talking about (action) on (thing), right?"
Second, until (or unless) their communication style is de-noised, then part of your job is being able to "translate" their instructions. Using an AI to do that for you is a bad idea because, at some point or another they're going to be trying to speak to you in-person, or by phone.
Not having dealt with their mannerisms in an unfiltered way might lead to you being "out of practice" and struggling much harder to figure out what they're trying to convey.
Well, that’s because you’re thinking as someone who likely has a stake in quality/specific outcomes actually happening. Or was raised/grew up in an environment where that was important.
Notably, in my experience there is a high correlation with that background and being curmudgeonly. Mainly because that means someone has been responsible for outcomes, regardless of feelings. And something often has to give, and it’s usually feelings. It’s also hard to not be cranky or even angry if someone has to constantly be the one ‘not having fun’ or cleaning up messes so the whole thing doesn’t fall apart.
There is huge market demand for exactly what you're complaining about, which is faking things happening as convincingly as possible, precisely because being clear, concise, etc. helps with seeing the root cause of problems, and if someone is worried they are (or legitimately is) a root cause of the problem, of course they'll consider that bad.
For example, a good sign of a badly led organization is that it’s always busy, but never seems to get anything done. Everything is an emergency, so nothing really gets fixed, etc.
Or there are constant meetings and emails, but nothing gets decided.
People will pay good money for the right kind of wallpaper that makes that ugly wall look pretty again.
> The real solution to too much email is fewer and higher-priority emails.
Sure, and that's an actionable solution if you can control the actions of everyone else who emails you.
I don’t use it often either, but sometimes it is very useful. When I caught Covid last fall my wife incorrectly thought I had it three times. I was using a beta Google Gemini, and paying for it, and I asked “read my @gmail and tell me the date ranges when I have had Covid.”
That worked, but to be honest I have tried similar things more recently that didn’t work. Perhaps there is a routing model up front that decides whether or not to use a lot of compute for any given query?
Google also plans on charging more money for APIs for code completion plugins for IntelliJ IDEs, etc. this year.
I would like to see AI pricing models be sustainable, not give things away for free, and have lots of control over when I use a lot of compute. I actually have this right now because I usually use LLM APIs and write my own agents for specific tasks.
Congrats on winning an argument against your wife. Billions well spent.
> I haven't found much compelling value in these use cases in my day-to-day work.
If my experience with Microsoft Office Copilot is any indication, these features produce very confusing, low-quality content if they are not completely wrong and useless. Used it once and never touched them again. (My company is still paying for this and rolling this out widely despite many reports of how unhelpful they are.) I doubt Google Workspace can do any better.
> it's not that I have all this information that I really want to go through, but it's that I have too much information and it's become all noise.
I tend to agree, except these two things are kind of the same thing. It can make going through the noise easier by intelligently filtering out the noise or finding you the signal. Search. It doesn't necessarily need to eliminate the noise.
Maybe AI would be better if it prevented the noise, and it's definitely going to add noise (expanding a few basic thoughts into an email with lots of fluff), but it can also solve it.
I'm kind of a cynic, so I'd say that the Workspace customer isn't you, the person who's using Workspace. It's your big company's SVP of IT or whoever who wants to spend money to adopt cool AI stuff so that he can say that he did AI stuff.
I'm in this role for my company.
There is no value in a bloated autocompletion tool.
There is value for concise drafts.
I wish Google would cut the PMs and bean counters, resurrect some of their better projects, and trim their fat instead of cutting their sinews.
I also find that summarising content helps me digest it better. I have to fully understand the source in order to write the summary. The process of writing a summary is of immense value. Sometimes the summary itself is of minimal value.
I’m getting a lot of value out of NotebookLM drafting documents. If I’ve got a bunch of notes that need to be in a coherent design doc, it can give me a good enough first draft for me to edit into shape. Alternatively when I’ve got a design doc for something, but need to submit, say, a work request to another org, NotebookLM can take my doc and turn it into another format based on a doc template pretty nicely.
These outputs still require editing for sure, but each one can easily save me half the time to write these things.
I only use NotebookLM a couple times a month, but when I use it I get value from it. I wanted to put out a new edition of a book I wrote last year so I ingested the PDF for the previous version of my book and some notes on what I was thinking of adding. Then in Chat mode I asked for suggestions of interesting topics that I didn’t think of and a few other questions, then got a short summary that I used as a checklist for things that I might add.
I probably spent 20 minutes doing this and got value for my 20 minutes.
I feel quite the opposite.
I’m not a native english speaker, but working at US subsidiary I must produces reports in english etc - and having an LLM proofread my texts for me is great.
LLMs are a new modality in computing. If you need it, they are great. But just like Excel/Sheets have limited applications, an LLM with data has limited use as well.
I agree. I don't want all my existing work apps to take on LLM features I don't need.
At the same time, I tried the Gemini Research feature last night, via the Gemini webapp, and was resoundingly impressed. From a vague description, it found the open source project I was looking for, provided ample links, and gave a pretty good summary of the project.
deets: https://news.ycombinator.com/item?id=42706997
I really want to like Gemini Deep Research, but I have had a pretty low ROI with it. It fails because it has no ability to evaluate the quality of sources, so some SEO'd-to-hell page has equal weight to the deep-dive blog post of a highly invested individual. It's also very hard to steer unless you provide paragraphs of context; if you provide too little, it might hyper-focus on something you said and go into some random rabbit hole of research.
Man if only there was a company out there specializing in the ranking page quality on the web…
I totally agree. I upgraded to the AI-enabled version of Google One because they gave a couple week free trial. I found it totally useless, and it reeked of "Some PM said we had to stuff AI in everywhere".
Note I do use ChatGPT pretty frequently, but I've found it much more useful to have a separate space for the kinds of conversations I have with ChatGPT.
What if there was something that communicated the company's top priorities, helped everyone align and stay organized without so many meetings, and gave concise drafts for your to-dos? Would that be something you'd try?
I for one would love to try.
Yeah I’m tired of workspace getting more expensive and me getting zero additional value from it. I don’t want this, didn’t ask for it, and it actively annoys me.
management uses them to fluff up their emails and I use them to boil the emails down to actionable bullet points.
Enshittification #353: solving customers' problems has poor ROI
It's decent at search