InvestAI etc. - 6/18/2024
AI for investors, explained in simple terms. An open thread updated weekly.
Topics discussed this week:
Training vs. Inference
Nvidia Cyclicality
Paper: “ChatGPT is bullshit”
Metrics of some robotics and AI companies
Training vs. Inference
>
RICHARD >
I was thinking some more about Nvidia, the flag bearer of the entire AI market mania, and in particular about its cyclicality, which you discussed in TWL #209 and TWL #221. At the time, you wrote:
There were periods [in the past] when Nvidia revenues stalled or grew only slowly, for example in 2010-11, 2013-14, and 2015-16. In each of these instances, the stock either went nowhere or fell by 50% or more. The difficulty with Nvidia is figuring out when the party ends. Quick surges tend to be followed by periods of stagnation, in a cycle of two to three years.
>
SAMI >
TWL #221 was less than four weeks ago, and the stock is up 38% since. We are still in the “quick surge” phase for the stock, and perhaps for revenues as well.
>
RICHARD >
Yes, there is no use playing the hero and calling a top while the mania is in full swing. But a turning point will inevitably occur sooner or later. In this vein, investors should pay closer attention to the difference between training (building the AI models) and inference (actually running them). Both require big farms of GPUs (graphics processing units, made by Nvidia and others), but there is a long time lag between the two.
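For readers who want to see the mechanics, here is a minimal sketch of the two workloads, assuming PyTorch; the tiny model, the random data, and the step count are illustrative placeholders, not any real AI system:

```python
# A toy contrast of the two GPU workloads. The model and data are
# placeholders; real AI models have billions of parameters, not eight.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                          # stand-in for a large model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training: repeated passes over data, with gradients computed and weights
# updated on every step. This is the up-front, compute-heavy phase.
x, y = torch.randn(64, 8), torch.randn(64, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                              # backpropagation: training only
    optimizer.step()

# Inference: one forward pass per query, weights frozen, no gradients.
# This demand arrives later, once the trained model is actually deployed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8))
```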
>
SAMI >
Can you explain this a bit more: training vs. inference, and the lag between the two?