Comment by bcrosby95
20 hours ago
> Grok ended up performing the best while DeepSeek came close to second. Almost all the models had a tech-heavy portfolio which led them to do well. Gemini ended up in last place since it was the only one that had a large portfolio of non-tech stocks.
I'm not an investor or researcher, but this triggers my spidey sense... it seems to imply they aren't measuring what they think they are.
Yeah I mean if you generally believe the tech sector is going to do well because it has been doing well you will beat the overall market. The problem is that you don’t know if and when there might be a correction. But since there is this one segment of the overall market that has this steady upwards trend and it hasn’t had a large crash, then yeah any pattern seeking system will identify “hey this line keeps going up!” Would it have the nuance to know when a crash is coming if none of the data you test it on has a crash?
It would almost be more interesting to specifically train the model on half the available market data, then test it on another half. But here it’s like they added a big free loot box to the game and then said “oh wow the player found really good gear that is better than the rest!”
Edit: from what I casually remember, a hedge fund can beat the market for 2-4 years, but at 10 years and up its chances of beating the market drop to very close to zero. Since LLMs have not been around for that long, it is going to be difficult to test this without somehow segmenting the data.
Would that work for LLMs, though? They hypothetically trained on newspapers from the second half of the data, so they have knowledge of "future" events.
> It would almost be more interesting to specifically train the model on half the available market data, then test it on another half.
Yes, ideally you’d have a model trained only on data up to some date, say January 1, 2010, and then start running the agents in a simulation where you give them each day’s new data (news, stock prices, etc.) one day at a time.
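A minimal sketch of that setup (hypothetical, not what the article did): freeze a training cutoff, then replay the post-cutoff days to the agent one at a time. The `walk_forward` name and the payload format are made up for illustration.

```python
from datetime import date

# Hypothetical walk-forward split: the model may only learn from data
# before the cutoff; everything on or after it is replayed one day at a time.
def walk_forward(records, cutoff=date(2010, 1, 1)):
    """records: (day, payload) pairs sorted by day; payload could be
    that day's news, prices, etc."""
    train = [(d, p) for d, p in records if d < cutoff]
    replay = [(d, p) for d, p in records if d >= cutoff]
    return train, replay

train, replay = walk_forward([
    (date(2009, 6, 1), "news + prices"),
    (date(2011, 3, 2), "news + prices"),
])
```

The key point is that nothing on the replay side can ever leak into training, which is exactly what pretraining on post-cutoff newspapers would violate.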
I suspect trading firms have already done this to the maximum extent that it's profitable to do so. I think if you were to integrate LLMs into a trading algorithm, you would need to incorporate more than just signals from the market itself. For example, I hazard a guess you could outperform a model that operates purely on market data with a model that also includes a vector embedding of a selection of key social and news media accounts or other information sources that have historically been difficult to encode until LLMs.
I mean, ultimately this is an exercise in frustration, because if you do that you will have trained your model on market patterns that might no longer be in place. For example, regulations changed after the 2008 recession. So do market dynamics actually work the same in 2025 as in 2005? I honestly don’t know, but intuitively I would say it is possible that they do not.
I think a potentially better way would be to segment the market up to today but take half or 10% of all the stocks and make only those available to the LLM. Then run the test on the rest. This accounts for rules and external forces changing how markets operate over time. And you can do this over and over picking a different 10% market slice for training data each time.
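One way to sketch that repeated 10% slicing (function name, tickers, and setup are all hypothetical):

```python
import random

def ticker_splits(tickers, frac=0.1, rounds=5, seed=0):
    """Yield (train, test) ticker sets, holding out a different random
    slice for training each round; a sketch of the idea above, where
    the time axis stays intact and only the cross-section is split."""
    rng = random.Random(seed)
    k = max(1, int(len(tickers) * frac))
    for _ in range(rounds):
        train = set(rng.sample(tickers, k))
        yield train, set(tickers) - train
```

Each round gives a fresh training slice, so performance could be averaged across rounds rather than trusting any single split.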
But then your problem is that if you exclude, say, Intel from your training data and AMD from your testing data, their ups and downs don’t really make sense, since they are direct competitors. And if you separate by market segment, then training the model on software tech companies might not actually tell you accurately how it would do for commodities or currency trading. Or maybe I am wrong and trading is trading no matter what you are trading.
As an old friend investor I know always says: 'It is really easy to make money in the market when everyone is doing it, just try to not lose it when they lose it'.
> a hedge fund can beat the market for 2-4 years but at 10 years and up their chances of beating the market go to very close to zero
In that case the winning strategy would be to switch hedge funds every 3 years.
The problem is that you don't know in advance which will be doing well when.
Except you don't know which fund is going to "go on a hot streak" or when the magic will end. The original statement only holds when looking at historical data; it's not predictive.
For a nice historic perspective on hedge funds and the industry as a whole, read Mallaby's "More Money Than God".
You believe in the tech sector because technology always goes well and is what humans strive to achieve, not because it has done well recently. It always has.
When does the tech sector become the computer sector?
Agriculture would have been considered tech 200 years ago.
A more sound approach would have been to do a monte carlo simulation where you have 100 portfolios of each model and look at average performance.
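A rough sketch of what that could look like, treating each model as a distribution of daily returns (the mu/sigma values are placeholders, not estimates from the article):

```python
import random
import statistics

def average_final_value(mu, sigma, n_portfolios=100, n_days=250, seed=0):
    """Monte Carlo sketch: simulate many portfolios whose daily returns
    are drawn from a normal distribution, then average the terminal
    values. mu/sigma per model would have to be estimated somehow;
    here they are purely illustrative."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_portfolios):
        value = 1.0
        for _ in range(n_days):
            value *= 1.0 + rng.gauss(mu, sigma)
        finals.append(value)
    return statistics.mean(finals)
```

Averaging over 100 runs damps out the single lucky (or unlucky) draw that a one-portfolio experiment measures.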
Grok would likely have an advantage there as well: it's got better coupling to X/Twitter, a better web search index, and fewer safety guardrails in pretraining and system prompt modifications that distort reality. It's easy to envision random market realities that would trigger ChatGPT or Claude into adjusting their output to be more politically correct. DeepSeek would be subject to the most pretraining distortion, but would have the least distortion in practice if a random neutral host were selected.
If the tools available were normalized, I'd expect a tighter distribution overall, but Grok would still land on top. Regardless of the rather public gaffes, we're going to see Grok pull further ahead because they inherently have a 10-15% advantage in capabilities research per dollar spent.
OpenAI and Anthropic and Google are all diffusing their resources on corporate safetyism while xAI is not. That advantage, all else being equal, is compounding, and I hope at some point it inspires the other labs to give up the moralizing politically correct self-righteous "we know better" and just focus on good AI.
I would love to see a frontier lab swarm approach, though. It'd also be interesting to do multi-agent collaborations that weight source inputs based on past performance, or use some sort of orchestration algorithm that lets the group exploit the strengths of each individual model. Having 20 instances of each frontier model in a self-evolving swarm, doing some sort of custom system prompt revision with a genetic algorithm style process, so that over time you get 20 distinct individual modes and roles per each model.
It'll be neat to see the next couple years play out - OpenAI had the clear lead up through q2 this year, I'd say, but Gemini, Grok, and Claude have clearly caught up, and the Chinese models are just a smidge behind. We live in wonderfully interesting times.
I know that Musk deserving a lifetime achievement award at the Adult Video Network awards over Riley Reid is definitely an indication of minimal "system prompt modification that distort[s] reality."
OTOH it has the richest man in the world actively meddling in its results when they don't support his politics.
> fewer safety guardrails in pretraining and system prompt modification that distort reality.
Really? Isn't Grok's whole schtick that it's Elon's personal altipedia?
While not strictly stocks, it would be interesting to see them trade on game economies like EVE, WoW, RuneScape, Counter Strike, PoE, etc.
Indeed, and also a "model" does not mean anything per se: you have hundreds of different prompts, you can layer agents on top, and you can use temperature settings that will lead to different outcomes. The number of dimensions to explore is huge.
I'd like to see this study replicated during a bear market.
Agreed. While I don’t see it outperforming long held funds, it’d be interesting to see if they could pick up on negative signals in the news feed, and also any potential advantage of not being emotional about its decisions.
Yeah the timeframe is crucial here. The experiment began as Trump launched his tariff tweets which caused a huge downward correction and then a large uptrend. Buying almost anything tech at the start of this would have made money.
S&P 500 is also tech heavy and notoriously difficult to beat over the long run
They're not measuring performance in the context of when things happened and the era they happened in. I think it's only showing recent performance and popularity. To actually evaluate how these do, you need to be able to correct the model and retrain it per different time periods and then measure how it would do. Then you'll get better information from the backtesting.
I don't feel like they measured anything. They just confirmed that tech stocks in the US did pretty well.
They measured the investment facility of all those LLMs. That's pretty much what the title says. And they had dramatically different outcomes. So that tells me something.
They "proved" that US tech stocks did better than portfolios with fewer US tech stocks over a recent, very short time range. 1. You didn't know that? 2. What are you going to do with this "new information"?
I mean, what it kinda tells me is that people talk about tech stocks the most, so that's what was most prevalent in the training data, so that's what most of the LLMs said to invest in. That's the kind of strategy that works until it really doesn't.
It shows nothing. This is a bullshit stunt that should be obvious to anyone who has placed a few trades.
I mean, run the experiment during a different trend in the market and the results would probably be wildly different. This feels like chartists [1] but lazier.
[1] https://www.investopedia.com/terms/c/chartist.asp
If you've ever read a blog on trading when LSTMs came out, you'd have seen all sorts of weird stuff with predicting the price at t+1 on a very bad train/test split, where the author would usually say "it predicts t+1 with 99% accuracy compared to t", and the graph would be an exact copy with a t+1 offset.
So eye-balling the graph looks great, almost perfect even, until you realize that in real-time the model would've predicted yesterday's high on today's market crash and you'd have lost everything.
If you feed in prices, i.e. 280.1, 281.5, 281.9, ..., you are going to get some pretty good-looking results when it comes to predicting the next day's price (t+1) within a margin of +/- a percent or so.
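That trap is easy to reproduce: a "model" that simply echoes yesterday's price scores a tiny average error on calm daily data like the toy series above, while being useless for trading. The series here is made up for illustration.

```python
# Naive persistence baseline on a toy price series: predict t+1 = t.
prices = [280.1, 281.5, 281.9, 279.8, 283.0]
preds = prices[:-1]               # yesterday's price as today's "prediction"
actual = prices[1:]
errors = [abs(p - a) / a for p, a in zip(preds, actual)]
mean_err = sum(errors) / len(errors)  # well under 1% on this kind of data
```

Eyeballed on a chart, those predictions track the actual series almost exactly, just shifted one day, which is precisely the offset-copy graph described above.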
To be fair to chartists, they try to identify if they are in a bear market or one is coming and get out early.
Probably hitching onto sycophancy for the parent company and getting lucky as a result... that Grok September rally aligns somewhat with TSLA, for instance.
We had this discussion in previous posts about congressional leaders who had the risk appetite to go tech heavy and therefore outperformed normal congress critters.
Going heavy on tech can be rewarding, but you are taking on more risk of losing big in a tech crash. We all know that, and if you don't have that money to play riskier moves, its not really a move you can take.
Long term it is less of a win if a tech bubble builds and pops before you can exit (and you can't wait it out to re-inflate).
They didn't just outperform "normal" congress critters.. they also outperformed nearly every hedge fund on the planet. But they (meaning, of course, just one person and their spouse) are obviously geniuses.
They also outperformed themselves before being in a leader position...
Hedge funds’ goals are often not to maximize profit, but to provide returns uncorrelated with the rest of some benchmark market. This is useful for the wealthy as it means you can better survive market crashes.
Hedge funds suck though. They don’t invest in FAANG, they do risky stuff that doesn’t pay off, you are still comparing incomparable things.
I’m obviously a genius because 90% of my stock is in tech, most of us on HN are geniuses in your opinion?
This is a wildly disingenuous interpretation of that study.
“Using transaction-level data on US congressional stock trades, we find that lawmakers who later ascend to leadership positions perform similarly to matched peers beforehand but outperform them by 47 percentage points annually after ascension. Leaders’ superior performance arises through two mechanisms. The political influence channel is reflected in higher returns when their party controls the chamber, sales of stocks preceding regulatory actions, and purchase of stocks whose firms receiving more government contracts and favorable party support on bills. The corporate access channel is reflected in stock trades that predict subsequent corporate news and greater returns on donor-owned or home-state firms.”
https://www.nber.org/papers/w34524
Also, studying for eight months is not useful. Loads of traders do this well for eight months and then do shit for the next five years. And tellingly, they didn't beat the S&P 500: they invested in something else that beat the S&P 500. And the one that didn't invest in that something did worse than the S&P 500.
What this tells me is they were lucky to have picked something that would beat the market for now.