Comment by banashark
3 years ago
FWIW I asked a similar question in the discussion section of the repo, specifically since there is a column in the output to help with this sort of sorting (Implementation Type) and received an answer: https://github.com/TechEmpower/FrameworkBenchmarks/discussio...
The issue here seems to be one of data display and the intent behind the information.
From what David Fowler has to say (https://twitter.com/spetzu/status/1592255871199096833), it does seem as though the engineering teams are using the benchmarks in an expected way: multiple implementations to see, piece by piece, "what perf impact is there to removing X part of the code".
The benchmarks display, however, is a comparison between frameworks. For that comparison to be realistic, the types of implementations would need to be the same (hence my question about the "Implementation Approach" column). Instead, you can find various quotes online of people comparing frameworks based on the techempower scores (usually just fortunes or plaintext), which is disingenuous at best.
I think a more granular, more prominently surfaced implementation-approach column could help alleviate this.
The other valuable data that doesn't have the best UX for access is: how is a single framework test doing over time? You can head to tfb-status and download the data for each run, but then you'll need to correlate commit ids with your own changes to build a chart.
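For anyone wanting to do that correlation by hand, here's a rough Python sketch. It assumes you've already downloaded each run's results.json from tfb-status into per-run directories; the framework name and the field names (git/commitId, rawData, totalRequests) are assumptions about the file layout on my part, not a documented API.

    # Hypothetical sketch: track one framework's fortunes throughput across
    # locally downloaded tfb-status runs. Schema field names are assumptions.
    import json, glob

    framework = "example-framework"   # hypothetical framework key
    test_type = "fortune"

    points = []
    for path in sorted(glob.glob("runs/*/results.json")):
        with open(path) as f:
            run = json.load(f)
        # Assumed: each run records the benchmarked commit under "git" -> "commitId".
        commit = run.get("git", {}).get("commitId", "unknown")[:8]
        # Assumed: per-test raw samples live under "rawData" -> test type -> framework.
        samples = run.get("rawData", {}).get(test_type, {}).get(framework, [])
        if samples:
            best = max(s.get("totalRequests", 0) for s in samples)
            points.append((commit, best))

    # Print a commit-ordered series you can paste into any charting tool.
    for commit, total in points:
        print(f"{commit}  {total}")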
Either way, I do like the idea of the benchmarks, and I've learned some interesting things diving into some of the more stripped-down framework implementations found within.