Comment by n8henrie
2 days ago
I don't know what exponentially weighted covariance is, but I've had pretty good luck converting time-series analyses from pandas to polars (for patient presentations to my emergency department -- patients per hour, per day, per shift, etc.). resample has a direct (and easier, IMO) replacement in polars: group_by_dynamic.
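For the patients-per-hour case, the polars side looks roughly like this (frame and column names are invented for illustration, not from my actual code):

```python
import polars as pl

# hypothetical arrivals frame: one row per patient presentation
visits = pl.DataFrame(
    {"ts": ["2024-01-01 08:15", "2024-01-01 08:40", "2024-01-01 10:05"]}
).with_columns(pl.col("ts").str.to_datetime("%Y-%m-%d %H:%M"))

# group_by_dynamic buckets rows into fixed windows, much like resample("1h")
per_hour = (
    visits.sort("ts")
    .group_by_dynamic("ts", every="1h")
    .agg(pl.len().alias("patients"))
)
```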
I've had trouble determining whether one timestamp falls between two others across tens of thousands of rows. The polars team suggested I use a massive cross product and filter (roughly the sketch below), which worked, aside from the memory requirement. In pandas, by contrast, I was able to sort the timestamps and thereby only needed to compare each one against the preceding/following few, based on the index of the last match.
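The suggested pattern was roughly this shape (data made up; the real frames were much larger, which is where the memory problem comes from):

```python
import polars as pl
from datetime import datetime

# made-up frames: does each event timestamp fall inside any window?
events = pl.DataFrame({"ts": [datetime(2024, 1, 1, 9, 30)]})
windows = pl.DataFrame(
    {
        "start": [datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10)],
        "end": [datetime(2024, 1, 1, 10), datetime(2024, 1, 1, 11)],
    }
)

# the cross join materializes every (event, window) pair -- n * m rows --
# before the filter throws most of them away
matched = (
    events.join(windows, how="cross")
    .filter((pl.col("ts") >= pl.col("start")) & (pl.col("ts") < pl.col("end")))
)
```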
The other issue I've had with resampling is polars automatically dropping time periods with zero events, giving me a null (or no row at all) instead of a zero for the count of events in those periods, which then drops out of downstream aggregations. This has caught me a few times.
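The workaround I've landed on is to upsample the aggregated frame back onto a regular grid and make the zeros explicit (sketch with invented data):

```python
import polars as pl
from datetime import datetime

# hypothetical hourly counts with no row at all for 09:00
per_hour = pl.DataFrame(
    {
        "ts": [datetime(2024, 1, 1, 8), datetime(2024, 1, 1, 10)],
        "patients": [2, 1],
    }
)

# upsample re-inserts the missing hours as null rows; fill_null then
# makes the zero counts explicit so they survive later aggregations
per_hour_filled = (
    per_hour.upsample(time_column="ts", every="1h")
    .with_columns(pl.col("patients").fill_null(0))
)
```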
But other than that I've had good luck.
> cross product and filter
`.join_where()`[1] was also added recently.
[1]: https://docs.pola.rs/api/python/stable/reference/dataframe/a...
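Something like this, if I'm reading the docs right (data invented), with no explicit cross join in user code:

```python
import polars as pl
from datetime import datetime

# the same made-up interval-matching problem as above
events = pl.DataFrame({"ts": [datetime(2024, 1, 1, 9, 30)]})
windows = pl.DataFrame(
    {
        "start": [datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10)],
        "end": [datetime(2024, 1, 1, 10), datetime(2024, 1, 1, 11)],
    }
)

# join_where takes the inequality predicates directly
matched = events.join_where(
    windows,
    pl.col("ts") >= pl.col("start"),
    pl.col("ts") < pl.col("end"),
)
```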
I'm curious how polars' group_by_dynamic is easier than resample in pandas. In pandas, if I want to resample to a monthly frequency anchored to the last business day of the month, I'd write:
> my_df.resample("BME").apply(...)
Done. I don't think it gets any easier than that. Every time I tried something similar with polars, I got bogged down in calendar-treatment hell and large, obscure, SQL-like contraptions.
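To make that concrete, a runnable version with invented data (`"BME"` is the business-month-end alias in pandas >= 2.2; older versions spell it `"BM"`):

```python
import pandas as pd

# made-up daily series for illustration
idx = pd.date_range("2024-01-01", "2024-03-31", freq="D")
df = pd.DataFrame({"value": range(len(idx))}, index=idx)

# "BME" anchors each bucket to the last business day of the month,
# so all of the calendar logic lives in the frequency alias
monthly = df.resample("BME").apply("mean")
```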
Edit: original tone was unintentionally combative - apologies.
Totally fair. And thank you for the rewording (sincerely). I haven't used polars for anything business or finance related, so this is likely one of many blind spots for me.
Reviewing my work, I only needed an hourly aggregation, which was similarly easy in polars and pandas (I misspoke when I said it was easier) -- what I found easier was grouping by time data that wasn't amenable to `resample`.
In polars I had no problems using a regular group_by on a `pl.col(...).dt` expression (something like the sketch below), whereas in pandas I remember struggling to do the same, even though it seemed like it should be straightforward.
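From memory, it was something of this shape (frame invented):

```python
import polars as pl
from datetime import datetime

# made-up presentations frame
visits = pl.DataFrame(
    {
        "ts": [
            datetime(2024, 1, 1, 8, 15),
            datetime(2024, 1, 1, 8, 40),
            datetime(2024, 1, 2, 8, 5),
        ]
    }
)

# group on an arbitrary function of the timestamp -- here hour-of-day
# across all dates, which a fixed resample frequency doesn't express
by_hour_of_day = (
    visits.group_by(pl.col("ts").dt.hour().alias("hour"))
    .agg(pl.len().alias("n"))
    .sort("hour")
)
```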
Sorry, I wish I could remember more details; I wrote the pandas code probably 5 years ago and only converted it to polars about a year ago, so it's possible that I've just gotten better at Python in the meantime (though I was writing much more Python back then). And of course a rewrite is likely to feel easier the second time around.
The other confounding issue is that the eager pandas code regularly crashed with OOM errors and took several minutes to run, whereas polars handles the same workload very well (which I'm sure is to some degree polars optimizing things I could have done manually), and this made iterating on the codebase feel much less onerous.