Comment by gitremote
16 hours ago
Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.
In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements. I work in backend API development, which is completely different from fullstack development that includes some backend work. If you ask PMs about backend requirements, they will dodge you, and if you ask front-end or web developers, they are waiting for you to provide them the API. The hardest part is understanding the requirements. It's not because of illiteracy; it's because software development is a lot more than coding and requires critical thinking to discover the requirements.
> Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.
As a Professor of English who teaches programming to humanities students, the writer has had an extremely interesting and unusual academic career [1]. He sounds awesome, but I think it's fair to suggest he may not have much experience of large scale commercial software development or be particularly well placed to predict what will or will not work in that environment. (Not that he necessarily claims to, but it's implicit in strong predictions about what the "future of programming" will be.)
[1] https://stephenramsay.net/about/
Hard to say, but backing his claim that he has been programming since the '90s, his CV shows he was working on stuff that's clearly beyond basic undergraduate skill level since the early 2000s. I'd be willing to bet he has more years under his belt than most HN users. I mean, I'm considered old here, in my mid 30's, and this guy has been programming for most of my life. Though that doesn't necessarily imply experience, or more specifically, experience in what.
That said, I think people really underappreciate how diverse programmers actually are. I started in physics and came over when I went to grad school. While I wouldn't expect a physicist to do super well on leetcode problems, I've seen those same people write incredible code optimized for HPC systems, and they're really good at tracing bottlenecks (it's a skill that translates from physics really, really well). Hell, the best programmer I've ever met got that way because he was doing his PhD in mechanical engineering. He's practically the leading expert in data streaming for HPC systems and gained this skill because he needed more performance for his other work.
There are a lot of different types of programmers out there, but I think it's too easy to think the field is narrow.
>I'm considered old here, in my mid 30's
I'm 62, and I'm not old yet, you're just a kid. ;-)
Seriously, there are some folks here who started on punch cards and/or paper tape in the 1960s.
38 there. If you didn't suffer Win9x's 'stability', then editing X11 config files by hand, getting mad with ALSA/Dmix, writing new ad-hoc drivers for weird BTTV tuners reusing old known ones for $WEIRDBRAND, you didn't live.
> I mean I'm considered old here, in my mid 30's
sigh
>As a Professor of English who teaches programming to humanities students
That is the strangest thing I've heard today.
The world of the Digital Humanities is a lot of fun (and one I've been a part of, teaching programming to Historians and Philosophers of Science!) It uses computation to provide new types of evidence for historical or rhetorical arguments and data-driven critiques. There's an art to it as well, showing evidence for things like multiple interpretations of a text through the stochasticity of various text extraction models.
From the author's about page:
> I discovered digital humanities (“humanities computing,” as it was then called) while I was a graduate student at the University of Virginia in the mid-nineties. I found the whole thing very exciting, but felt that before I could get on to things like computational text analysis and other kinds of humanistic geekery, I needed to work through a set of thorny philosophical problems. Is there such a thing as “algorithmic” literary criticism? Is there a distinct, humanistic form of visualization that differs from its scientific counterpart? What does it mean to “read” a text with a machine? Computational analysis of the human record seems to imply a different conception of hermeneutics, but what is that new conception?
https://stephenramsay.net/about/
That was such a strange aspect. If you will excuse my use of the tortured analogy of comparing programming to wood working: there is a lot of talk about hand tools versus power tools, but among people who aren't in a production capacity--not making cabinets for a living, not making furniture for a living--you see people choosing to exclusively use hand tools because they just enjoy it more. There isn't pressure that "you must use power tools or else you're in self-denial about their superiority." Well, at least among people who actually practice the hobby. You'll find plenty of armchair woodworkers in the comments section on YouTube. But I digress. For someone who claims to enjoy programming for the sake of programming, it was a very strange statement to make about coding.
I very much enjoy the act of programming, but I'm also a professional software developer. Incidentally, I've almost always worked in fields where subtly wrong answers could get someone hurt or killed. I just can't imagine either giving up my joy in the former case or abdicating my responsibility to understand my code in the latter.
And this is why the wood working analogy falls down. The scale at which damage can occur due to the decision to use power tools over hand tools is, for most practical purposes, limited to just myself. With computers, we can share our fuck ups with the whole world.
Nicely put. The wood working analogy does work.
Exactly. I don't think people understand anymore why programming languages even came about. Lots of people don't understand why a natural language is not suitable for programming, and by extension for prompting an LLM.
I have done strict back-end, strict front-end, full stack, QA automation, and some devops as well. I worked in an all-Linux shop where great senior devs encouraged us to always strive for better software all around. I think you're right: it mostly depends on your mindset and how much you expose yourself to the craft. I can tackle obscure front-end things sometimes better than back-end issues, despite hating front-end, because I know enough to be dangerous. (My first job in tech really had me doing everything imaginable.)
I find that LLMs boost my productivity because I've always had a sort of architectural mindset. I love looking up projects that solve specific problems and keeping them in the back of my mind. It turns out I was building myself up for instructing LLMs on how to build software for me: they take several months' worth of effort and spit it out in a few hours.
Speaking of vibe coding in archaic languages, I'm using LLMs to understand old Shockwave Lingo to translate it to a more modern language, so I can rebuild a legacy game in a modern language. Maybe once I spin up my blog again I'll start documenting that fun journey.
> Speaking of vibe coding in archaic languages
Well, I think we can call C archaic when most developers write in something that, one, isn't C; two, isn't a language itself written in C; and three, isn't running on something written in C :)
If we take the most popular programming languages and look at what their reference (or most popular) implementations are written in, then we get things like CPython in C, V8 (JavaScript) in C++, HotSpot (Java) in C++, PHP in C, and MRI Ruby in C.
Far from archaic indeed. We're still living in the C/C++ world.
(Python has exited the chat)
Ah lingo, where the programming metaphor was a theatre production!
> it takes several months worth of effort and spits it out in a few hours
lol
Hehe. In the "someone should make a website"™ department: use a crap ton of legacy protocols and plugins, semi-interoperable with modern ones, while offering legacy browsers loaded with legacy plugins something usable to test with, i.e.,
- SSL 2.0-TLS 1.1, HTTP/0.9-HTTP/1.1, ftp, WAIS, gopher, finger, telnet, rwho, TinyFugue MUD, UUCP email, SHOUTcast streaming some public domain radio whatever
- <blink>, <marquee>, <object>, XHTML, SGML
- Java <applet>, Java Web Start
- MSJVM/J++, ActiveX, Silverlight
- Flash, Shockwave (of course), Adobe Air
- (Cosmo) VRML
- Joke ActiveX control or toolbar that turns a Win 9x/NT-XP box into a "real" ProgressBar95. ;)
(Gov't mandated PSA: Run vintage {good,bad}ness with care.)
To be fair, we have Flash emulators that run in modern browsers, and a Shockwave one as well, though it seems to be slowing down a bit in traction. Man, VRML brought me back. Don't forget VBScript!
why even write webpages or apps anymore just prompt an LLM everytime a user makes a request and write the page to send to the user :D
The thing is that some imagined AI that can reliably produce reliable software will also likely be able to be smart enough to come up with the requirements on its own. If vibe coding is that capable, then even vibe coding itself is redundant. In other words, vibe coding cannot possibly be "the future", because the moment vibe coding can do all that, vibe coding doesn't need to exist.
The converse is that if vibe coding is the future, that means we assume there are things the AI cannot do well (such as come up with requirements), at which point it's also likely it cannot actually vibe code that well.
The general problem is that once we start talking about imagined AI capabilities, both the capabilities and the constraints become arbitrary. If we imagine an AI that does X but not Y, we could just as easily imagine an AI that does both X and Y.
My bet is that it will be good enough to devise the requirements.
They already can brainstorm new features and make roadmaps. If you give them more context about the business strategy/goals then they will make better guesses. If you give them more details about the user personas / feedback / etc they will prioritize better.
We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.
If you were to start a new company with an AI assistant and feed it every piece of information (which it structures / summarizes / synthesizes etc. in a systematic way), then even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, etc. It's the dream of "big data".
If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.
I agree with the first part which is basically 'being able to do a software engineers full job' is basically ASI/AGI complete.
But I think it is certainly possible that we reach a point/plateau where everything is just 'english -> code' compilation but that 'vibe coding' compilation step is really really good.
It's possible, but I don't see any reason to assume that it's more likely that machines will be able to code as well as working programmers yet not be able to come up with requirements or even ideas as well as working PMs. In fact, why not the opposite? I think that currently LLMs are better at writing general prose, offering advice etc.., than they are at writing code. They are better at knowing what people generally want than they are at solving complex logic puzzle that require many deduction steps. Once we're reduced to imagining what AI can and cannot do, we can imagine pretty much any capability or restriction we like. We can imagine something is possible, and we can just as well choose to imagine it's not possible. We're now in the realm of, literally, science fiction.
The only reason to imagine that plateau is because it’s painful to imagine a near future where humans have zero economic value.
This is the most coherent comment in this thread. People who believe in vibe coding but not in generalizing it to “engineering”... brother the LLMs speak English. They can even hold conversations with your uncle.
Yup. I would never be able to give my Jira tickets to an LLM because they're too damn vague or incomplete. Getting the requirements first needs 4 rounds of lobbying with all stakeholders.
We had a client who'd create incredibly detailed Jira tickets. Their lead developer (also their only developer) would write exactly how he'd want us to implement a given feature, and what the expected output would be.
The guy is also a complete tool. I'd point out that what he described wasn't actually what they needed, and that their functionality was ... strange and didn't actually do anything useful. We'd be told to just do as we where being told, seeing as they where the ones paying the bills. Sometimes we'd read between the lines, and just deliver what was actually needed, then we'd be told just do as we where told next time, and they'd then use the code we wrote anyway. At some point we got tired of the complaining and just did exactly as the tasks described, complete with tests that showed that everything worked as specified. Then we where told that our deliveries didn't work, because that wasn't what they'd asked for, but couldn't tell us where we misunderstood the Jira task. Plus the tests showed that the code functioned as specified.
Even if the Jira tasks are in a state where it seems like you could feed them directly to an LLM, there's no context (or incorrect context) and how is a chatbot to know that the author of the task is a moron?
Maybe you'll appreciate having it pointed out to you: you should work on your usage of "where" vs "were".
Every time I've received overly detailed JIRA tickets like this it's always been significantly more of a headache than the vague ones from product people. You end up with someone with enough tech knowledge to have an opinion, but separated enough from the work that their opinions don't quite work.
> how is a chatbot to know that the author of the task is a moron?
Does it matter?
The chatbot could deliver exactly what was asked for (even if it wasn't what was needed) without any angst or interpersonal issues.
Don't get me wrong. I feel you. I've been there, done that.
OTOH, maybe we should leave the morons to their shiny new toys and let them get on with specifying enough rope to hang themselves from the tallest available structure.
Are you working at OpenAI?
"The guy is also a complete tool." - Who says Hackers news is not filled with humor?
Who says an LLM can’t be taught or given a system prompt that enables them to do this?
Agentic AI can now do 20 rounds of lobbying with all stakeholders as long as it's over something like Slack.
A significant part of my LLM workflow involves having the LLM write and update tickets for me.
It can make a vague ticket precise and that can be an easy platform to have discussions with stakeholders.
I like this use of LLM because I assume both the developer and ticket owner will review the text and agree to its contents. The LLM could help ensure the ticket is thorough and its meaning is understood by all parties. One downside is verbosity, but the humans in the loop can edit mercilessly. Without human review, these tickets would have all the downsides of vibe coding.
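As a sketch of what that workflow can look like in practice: only the prompt assembly is shown below, since the model call itself would go through whatever chat API you use, and a human reviews the result. All names and prompt wording here are invented for illustration.

```typescript
// Hypothetical sketch of the "LLM tidies the ticket" step. The key idea
// is that the LLM is asked to surface ambiguity as open questions for
// the humans, rather than guess at the requirements.
interface Ticket {
  title: string;
  description: string;
}

function buildRefinePrompt(ticket: Ticket): string {
  return [
    "Rewrite this ticket so it has explicit acceptance criteria.",
    "List any ambiguity as an open question instead of guessing.",
    "Keep it short; humans will edit mercilessly.",
    `Title: ${ticket.title}`,
    `Description: ${ticket.description}`,
  ].join("\n");
}
```

The output would then go back into the ticket as a draft for both the developer and the ticket owner to review and edit.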
Thank you for sharing this workflow. I have low tolerance for LLM written text, but this seems like a really good use case.
Wait until you learn that the people on the other side of your ticket updates are also using LLMs to respond. It's LLMs talking to LLMs now.
A significant part of my workflow is getting a ticket that is ill-defined or confused and rewriting it so that it is something I can do or not do.
From time to time I have talked over a ticket with an LLM and gotten back what I think is a useful analysis of the problem, and when I put it into the ticket's text or comments, I find my peeps tend to think these are TL;DR.
Claude Code et al. ask clarifying questions in plan mode before implementing. This will eventually extend to Jira comments.
You think the business line stakeholder is going to patiently hang out in JIRA, engaging with an overly cheerful robot that keeps "missing the point" and being "intentionally obtuse" with its "irrelevant questions"?
This is how most non-technical stakeholders feel when you probe for consistent, thorough requirements and a key professional skill for many more senior developers and consultants is in mastering the soft skills that keep them attentive and sufficiently helpful. Those skills are not generic sycophancy, but involve personal attunement to the stakeholder, patience (exercising and engendering), and cycling the right balance between persistence and de-escalation.
Or do you just mean there will be some PM who acts as a proxy for the stakeholder on the ticket, but still needs to get them onto the phone and into meetings so the answers can be secured?
Because in the real world, the former is outlandish and the latter doesn't gain much.
What do you mean by eventually?
this already exists.
>In my work, the bigger bottleneck to productivity is that very few people can correctly articulate requirements.
Agreed.
In addition, on the other side of the pipeline, code reviews are another bottleneck. We could have more MRs in review thanks to AI, but we can't really move at the speed of LLM's outputs unless we blindly trust it (or trust another AI to do the reviews, at which point what are we doing here at all...)
This means your difficulty is not programming per se, but that you are working in a very suboptimal industry / company / system. With all due respect, you use programming at work, but true programming is the act of creating a system that you or your team designed and want to bring to life. Confusing the reality of writing code for a living at some company with Programming with a capital P produces a lot of misunderstanding.
To be honest I've never worked in an environment that seemed too complex. On my side my primary blocker is writing code. I have an unending list of features, protocols, experiments, etc. to implement, and so far the main limit was the time necessary to actually write the damn code.
That sounds like papier-mâché more than bridge building: forever pasting more code on as ideas and time permit, without the foresight to engineer or architect toward some cohesive long-term vision.
Most software products built that way seem to move fast at first but become monstrous abominations over time. If those are the only places you keep finding yourself in, be careful!
There are a wide number of small problems for which we do not need bridges.
As a stupid example, I hate the functionality that YouTube has to maintain playlists. However, I don't have the time to build something by hand. It turns out that the general case is hard, but the "for me" case is vibe codable. (Yes, I could code it myself. No, I'm not going to spend the time to do so.)
Or, using the Jira API to extract the statistics I need instead of spending a Thursday night away from the family or pushing out other work.
Or, any number of tools that are within my capabilities but not within my time budget. And there's more potential software that fits this bill than software that needs to be bridge-stable.
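The Jira example is a good illustration of the "within my capabilities but not my time budget" category: once the issues are fetched, the statistics are mostly simple aggregation. A rough sketch, assuming issues already retrieved from Jira's REST search endpoint (`GET /rest/api/2/search`, which returns an `issues` array with `fields.status.name` on each issue); the HTTP call itself is omitted, since any client with an API token works.

```typescript
// Count tickets per status from a Jira REST search response. Only the
// fields this sketch needs are typed; real responses carry much more.
interface JiraIssue {
  key: string;
  fields: { status: { name: string } };
}

function countByStatus(issues: JiraIssue[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const issue of issues) {
    const name = issue.fields.status.name;
    counts[name] = (counts[name] ?? 0) + 1;
  }
  return counts;
}
```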
I don’t want to imply this is your case, because of course I’ve no idea how you work. But I’ve seen way too often, the reason for so many separate features is:
A) As stated by the parent comment, the ones doing requirements management are doing a poor job of abstracting the requirements, and what could be done as one feature suddenly turns into 25.
B) In a similar manner as A, all solutions imply writing more and more code, and parts are never refactored or abstracted away.
My guess would be that the long list is maybe not self-contained features (although it still can be; I know I have more feature ideas than I can deliver in the next couple of years myself), but behaviors or requirements of one or a handful of product feature areas.
When you start getting down into the weeds, there can be tons and tons of little details around state maintenance, accessibility, edge cases, failure modes, alternate operation modes etc.
That all combines to make lots of code that is highly interconnected, so you need to write even more code to test it. Sometimes much more than even the target implementation's code.
Hehe. Try working for some telecoms dealing with GSM, UMTS, LTE and 5G.
or banking. or finance. or manufacturing. or $other_enterprise_lob_area.
source: been there, done some of that.
Man I wish this was my job. I savor the days when I actually don’t have to do requirements gathering and can just code.
Unfortunately a lot of it is also because of illiteracy.
Lots of people hide the fact that they struggle with reading and a lot of people hide or try to hide the fact they don’t understand something.
I don’t mind the coding, it’s the requirements gathering and status meetings I want AI to automate away. Those are the parts I don’t like and where we’d see the biggest productivity gains. They are also the hardest to solve for, because so much of it is subjective. It also often involves decisions from leadership which can come with a lot of personal bias and occasionally some ego.
This is like the reverse centaur form of coding. The machine tells you what to make, and the human types the code to do the thing.
Well, when put like that it sounds pretty bad too.
I was thinking more that the human would tell the machine what to make. The machine would help flesh out the idea into actual requirements, and make any decisions the humans are too afraid or indecisive to make. Then the coding can start.
I don't think the author would disagree with you. As you point out, coding is just one part of software development. I understand his point to be that the coding portion of the job is going to be very different going forward. A skilled developer is still going to need to understand frameworks and tradeoffs so that they can turn requirements into a potential solution. It's just that they might not be coding up the implementation.
I constantly run into issues where features are planned and broken down outside-in, and it always makes perfect sense if you consider it in terms of the pure user interface and behaviour. It completely breaks down when you consider the API, or the backend, is a cross-cutting concern across many of those tidy looking tasks and cannot map to them 1:1 without creating an absolute mess.
Trying to insert myself, or the right backend people, into the process, is more challenging now than it used to be, and a bad API can make or break the user experience as the UI gets tangled in the web of spaghetti.
It hobbles the effectiveness of whatever you could get an LLM to do because you’re already starting on the backfoot, requirements-wise.
> very few people can correctly articulate requirements
This is the new programming. Programming and requirements are both a form of semantics. One conveys meaning to a computer at a lower level, the other conveys it to a human at a higher level. Well now we need to convey it at a higher level to an LLM so it can take care of the lower-level translation.
I wonder if the LLM will eventually skip the programming part and just start moving bits around in response to requirements?
We have a machine that turns requirements into code. It's called a compiler. What happened to programming after the invention of the compiler?
My solution as a consultant was to build some artifact that we could use as a starting point. Otherwise, you're sitting around spinning your wheels and billing big $ and the pressure is mounting. Building something at least allows you to demonstrate you are working on their behalf with the promise that it will be refined or completely changed as needed. It's very hard when you don't get people who can send down requirements, but that was like 100% of the places I worked. I very seldom ran into people who could articulate what they needed until I stepped up, showed them something they could sort of stand on, and then go from there.
Mythical Man Month had it all--build one to throw away.
I like my requirements articulated so clearly and unambiguously that an extremely dumb electronic logic machine can follow every aspect of the requirements and implement them "perfectly" (limited only by the physical reliability of the machine).
Aka "coding". I see what you mean ;)
Yeah, the hardest part is understanding the requirements. But it then still takes hours and hours and hours to actually build the damn thing.
Except that now it still takes me the same time to understand the requirements ... and then the coding takes 1/2 or 1/3 of the time. The coding also takes maybe 1/3 of the effort now, so I leave my job less burned out.
Context: web app development agency.
I really don't understand this "if it does not replace me 100% it's not making me more productive" mentality. Yeah, it's not a perfect replacement for a senior developer ... but it is like putting the senior developer on a bike and pretending that it's not making them go any faster because they are still using their legs.
The solo projects I do are 10x, the team projects I do maybe 2-3x in productivity. I think in big companies it's much much less.
Highest gains are definitely in full-stack frameworks (like Next.js) with a database ORM, and in building large features in one go, without having to go back and forth with stakeholders or colleagues.
This feels like addressing a point TFA did not make. TFA talks mostly about vibe-coding speeding up coding, whereas your comment is about software development as a whole. As you point out, coding is just one aspect of engineering and we must be clear about what "productivity" we are talking about.
Sure, there are the overhypers who talk about software engineers getting entirely replaced, but I get the sense those are not people who've ever done software development in their lives. And I have not seen any credible person claiming that engineering as a whole can be done by AI.
On the other hand, the most grounded comments about AI-assisted programming everywhere are about the code, and maybe some architecture and design aspects. I personally, along with many other commenters here and actual large-scale studies, have found that AI does significantly boost coding productivity.
So yes, actual software engineering is much more than coding. But note that even if coding is, say, only 25% of engineering (there are actually studies about this), putting a significant dent in that is still a huge boost to overall productivity.
Also, requirements engineering in general isn't being done correctly.
I'm the last guy to be enthused about any "ritualistic" seeming businessy processes. Just let me code...
However, some things do need actually well-defined, adhered-to processes, where all parties are aware of and agree with the protocol.
This is like saying the typewriter won’t make a newspaper company more productive because the biggest bottlenecks are the research and review processes rather than the typing. It’s absolutely true, but it was still worth it to go up to typewriters, and the fact that people were spending less effort and time on the handwriting part helps all aspects of energy levels etc across their job.
Convince your PMs to use an LLM to help "breadboard" their requirements. It's a really good use case. They can ask their dumb questions they are afraid to and an LLM will do a decent job of parsing their ideas, asking questions, and putting together a halfway decent set of requirements.
PMs wouldn't be able to ask the right questions. They have zero experience with developer experience (DevEx) and they only have experience with user experience (UX).
You can hope that an LLM might have some instructions related to DevEx in its prompt at least. There's no way to completely fix stupid, any more than you can convince a naive vibecoder that just vibing a new Linux-compatible kernel written entirely in Zig is a feasible project.
How does the LLM get all the required knowledge about the domain and the product to ask relevant questions?
Give it access to the codebase and a text file with all relevant business knowledge.
"the bigger bottleneck to productivity is that very few people can correctly articulate requirements."
I've found the same thing. I just published an AI AUP (acceptable use policy) for my company, and most of it is teaching folks HOW to use AI.
You can vibe ask the requirements. Not even kidding.
AI is making coding so cheap, you can now program a few versions of the API and choose what works better.
I write a library which is used by customers to implement integrations with our platform. The #1 thing I think about is not
> How do I express this code in Typescript?
it's
> What is the best way to express this idea in a way that won't confuse or anger our users? Where in the library should I put this new idea? Upstream of X? Downstream of Y? How do I make it flexible so they can choose how to integrate this? Or maybe I don't want to make it flexible - maybe I want to force them to use this new format?
> Plus making sure that whatever changes I make are non-breaking, which means that if I update some function with new parameters, they need to be made optional, so now I need to remember, downstream, that this particular argument may or may not be `undefined` because I don't want to break implementations from customers who just upgraded the most recent minor or patch version
The majority of the problems I solve are philosophical, not linguistic
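The non-breaking-change concern in particular has a very concrete shape. A minimal sketch of the pattern described above, with invented function and option names: v1 shipped `fetchOrders(customerId)`, and v2 adds an optional options bag so existing callers keep compiling, at the cost of handling `undefined` everywhere downstream.

```typescript
// Invented library function illustrating a backward-compatible change:
// the new parameter is optional, so callers on the old minor version
// still type-check, but downstream code must treat it as possibly absent.
interface FetchOrdersOptions {
  includeArchived?: boolean;
}

function fetchOrders(customerId: string, options?: FetchOrdersOptions): string {
  // `options` may be undefined for callers who haven't upgraded.
  const archived = options?.includeArchived ?? false;
  return archived
    ? `/customers/${customerId}/orders?archived=true`
    : `/customers/${customerId}/orders`;
}
```

The design decision (optional parameter vs. a new function vs. a major version bump) is exactly the philosophical part; the TypeScript is the easy bit.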
This is one reason I think spec driven development is never really going to work the way people claim it should. It's MUCH harder to write a truly correct, comprehensive, and useful spec than the code in many cases.
Sounds like you work with inexperienced PMs who are not doing their job. Did you try having a serious conversation about this pattern with them? I'm pretty sure some communication would go a long way towards getting you into a better collaboration groove.
I've been doing API development for over ten years and have worked at different companies. Most PMs are not technical, and it's the development team's job to figure out the technical specifications for the APIs we build. If you press the PMs, they will ask the engineering/development manager for the written technical requirements, and if the manager is not technical, they will assign it to the developers/engineers. Technical requirements for an API are really a system design question.
and in reality, all the separate roles should be deprecated
we vibe requirements to our ticket tracker with an api key, vibe code ticket effort, and manage the state of the tickets via our commits and pull requests and deployments
just teach the guy the product manager is shielding you from not to micromanage and all the frictions are gone
in this same year I've worked at an organization that didn't allow AI use at all, and by Q2, Co-Pilot was somehow solving their data security concerns (gigglesnort)
in a different organization none of those restrictions are there, and the productivity boost is an order of magnitude greater
If AI doesn't make you more productive you're using it wrong, end of story.
Even if you don't let it write a single line of code: from collecting information, inspecting code, reviewing requirements, reviewing PRs, finding bugs, hell, even researching information online, there are so many things it does well and fast that if you're not leveraging it, you're either in denial or have AI skill issues, period.
Not to refute your point but I’ve met overly confident people with “AI skills” who are “extremely productive” with it, while producing garbage without knowing, or not being able to tell the difference.
You're describing a lack of care and a lack of professionalism. Fire these people. That has nothing to do with the tools; the problem is the person using them.
I've not really seen this outside of extremely junior engineers. On the flip side, I've seen plenty of seniors who can't figure out how to interact with AI tools and come away thinking they are useless, when just watching them for a bit makes it really clear the issue is the engineer.
Garbage to whom? Are we talking about something that the user shudders to think about, or something more like a product the user loves, but behind the scenes the worst code ever created?
They just shovel the garbage on someone else who has to fact check and clean it up.
you can say that about overly confident people with "xyz" skills.
It sounds like you're the one in denial? AI makes some things faster, like working in a language I don't know very well. It makes other things slower, like working in a language I already know very well. In both cases, writing code is a small percentage of the total development effort.
No I'm not, I'm just sick of these edgy takes where AI does not improve productivity when it obviously does.
Even if you limit your AI use to finding information online through deep research, it's such a time saver and productivity booster that it makes a lot of difference.
The list of things it can do for you is massive, even if you don't have it write a single line of code.
Yet the counter-argument is like "bu..but.. my colleague is pushing slop and it's not good at writing code for me". Come on: then use it for the things it's good at, not the things you find unsatisfactory.
My company mandates AI usage and logs AI usage metrics as input to performance evaluation, so I use it every day. It's a Copilot subscription, though.
why though? are they just using it as a proxy for "is 'gitremote' working today?"