Comment by malshe

2 days ago

I will give my perspective as an academic who writes R and Python code for data analysis (including a lot of data cleaning).

1. I find AI-written code verbose and inelegant, which makes it difficult to troubleshoot. I take pride in my own code and share it confidently with my doctoral students and coauthors. I can't share AI-written or AI-assisted code with the same confidence, let alone pride.

2. AI often takes shortcuts and writes terrible code. This is especially true for Bayesian models. My first check with any Bayesian model is to recover the parameters from a simulated dataset; if the code fails that check, there is no point in going forward. I recently used Opus 4.5, Gemini 3.0, and GPT 5.2 to write fairly simple code for a random-parameters dynamic panel model. There are already published papers that have done it. All three failed numerous times, and I got it to work only after a lot of handholding.

3. AI helps tremendously when building web apps that make my analysis more actionable. Many reviewers now want to see something in action, so Streamlit or Shiny is the way to go.
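
The parameter-recovery check in point 2 can be sketched like this. To keep it self-contained, this uses a toy normal-mean model and a hand-rolled Metropolis sampler; the model, the sampler, and all names here are illustrative stand-ins for whatever model and probabilistic-programming tool (Stan, PyMC, brms, etc.) you actually use, not the random-parameters dynamic panel code discussed above:

```python
import numpy as np

# Parameter-recovery check: simulate data from known ("true")
# parameters, fit the model to that data, and confirm the
# estimates land near the truth before touching real data.

rng = np.random.default_rng(42)

# 1. Simulate a dataset with a known true parameter.
true_mu, sigma, n = 2.5, 1.0, 500   # sigma treated as known
y = rng.normal(true_mu, sigma, size=n)

# 2. "Fit" the model: random-walk Metropolis on mu with a flat
#    prior (log-posterior = log-likelihood up to a constant).
def log_post(mu):
    return -0.5 * np.sum((y - mu) ** 2) / sigma**2

draws, mu_cur, lp_cur = [], 0.0, log_post(0.0)
for _ in range(5000):
    mu_prop = mu_cur + rng.normal(0, 0.1)   # proposal step
    lp_prop = log_post(mu_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        mu_cur, lp_cur = mu_prop, lp_prop   # accept
    draws.append(mu_cur)

post = np.array(draws[1000:])   # drop burn-in

# 3. Recovery check: the posterior mean should sit close to
#    true_mu. If it doesn't, the code (or model) is wrong.
post_mean = post.mean()
assert abs(post_mean - true_mu) < 0.2, "failed to recover mu"
print(f"true mu = {true_mu}, posterior mean = {post_mean:.3f}")
```

The tolerance is deliberately loose (the posterior standard deviation here is roughly 1/sqrt(n) = 0.045); the point is a sanity check that the fitting code is not broken, not a precise calibration test. For real models you would recover every parameter, not just one.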