Comment by morelandjs

3 days ago

I have mixed thoughts on this. These thoughts are my own. On the one hand, it’s objectively silly to pretend that we’ve solved the age-old problem of measuring developer productivity. Metric-obsessed leadership can also be intolerable and counterproductive, and it’s a good way to paint yourself into a corner, undervaluing your best talent while overvaluing your mediocre talent.

That said, I’m kind of having a blast using CC in a corporate setting with all the connectors at our disposal, and I’m baffled by how little some of my coworkers know about what’s available and what the capabilities are. So it’s clear that some encouragement may be prudent for those slower to embrace new technologies, but I’m not sure token-counting and token-maxing are the answer.

Could you list some of the capabilities you use that bring value, besides “summarize my email”?

  • Yes, we can crawl our entire internal documentation via LLM. Want to know if someone is already working in the space of your latest idea? Ask Claude; it hits the internal search APIs and finds docs and references directly relevant to your query. Previously this took a lot of effort, because there are a lot of separate document stores. I can also query Slack, Outlook, etc. I don’t understand the cynicism in your comment.

  • Not OP, but within Amazon we have pretty good connectors around integrating with our task system (so you can pretty easily ask your GenAI tool "look up the next item in our sprint board, let me know if you have any clarifying questions, but otherwise start implementing it"). We have decent integration with internal wiki and search systems, so it's easier now to figure out the best Amazon way to do some coding task. And Amazon being a big doc-writing company, there are lots of great tools for helping improve all phases of writing.

  • I found it very useful running a TDD workflow the other day. It created a test plan, generated tests, documented them, implemented and modified existing code, and added structured logging. It also identified really good refactor candidates and explained them to me after I noted a core design issue in the code we were modifying. This wasn't autonomous: I spent some time correcting it and sending it in new directions. Still, it was a pretty nice feeling not to have to manually configure Logback (it one-shotted a nice basic config), not to have to write a bunch of repetitive test setup code, etc. It even pulled in a newer JUnit feature that I didn't know about that was perfect for what I was doing. Definitely not the silver bullet a lot of people are trying to sell, but still a very powerful tool.

  • A company requires a specific % of code coverage but doesn't give developers enough time to actually write tests. AI can be used to generate the tests needed to pass the code-coverage bar and avoid being fired for not working fast enough.
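
For reference, the “nice basic config” in the TDD comment above is plausibly close to Logback’s stock console setup. A generic sketch (the appender and encoder class names are real Logback classes; the pattern and level are illustrative assumptions, not the exact file the model generated):

```xml
<!-- logback.xml: minimal console logging setup (illustrative sketch) -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- timestamp, thread, level, abbreviated logger name, message -->
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Dropped on the classpath as `logback.xml`, SLF4J loggers pick it up automatically; for the structured-logging part, the encoder would typically be swapped for a JSON encoder such as the one from logstash-logback-encoder.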