InvestAI etc. - 7/30/2024
AI for investors, explained in simple terms. An open thread updated weekly.
Topics discussed this week:
How LLMs reduced costs and enabled slop
Which websites contribute to the training of LLMs
AI Governance and Regulation
The PUBLIC LIST updated
The NONPUBLIC LIST
AI links to consider
How LLMs reduced costs and enabled slop
>
SAMI >
We have been talking about possible inflection points in the AI mania. One of these could come from a phenomenon previously observed with other new technologies: costs fall as the technology improves and usage increases. You wrote to me a couple of days ago that AI costs are now also coming down.
>
RICHARD >
Yes, more than coming down. They are plummeting. OpenAI’s latest model, GPT-4o, was optimized for speed and efficiency. The cost of a token has been reduced by 85 to 90% from last year’s most powerful model, GPT-4 Turbo. A token is a unit of input or output, typically a word or a fragment of a word. Inputs to an LLM are sequences of tokens that are processed for training or inference.
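To make per-token pricing concrete, here is a minimal sketch, in Python, of how an API bill scales with token counts and what an 85 to 90% price cut does to it. The dollar figures are illustrative placeholders, not OpenAI’s published rates.

```python
# Minimal sketch: how per-token pricing turns into a bill.
# The prices below are illustrative placeholders, NOT OpenAI's actual rates.

ILLUSTRATIVE_PRICE_PER_1M_INPUT_TOKENS = 5.00    # dollars, hypothetical
ILLUSTRATIVE_PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # dollars, hypothetical

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API call from its token counts."""
    return (input_tokens / 1_000_000) * ILLUSTRATIVE_PRICE_PER_1M_INPUT_TOKENS \
         + (output_tokens / 1_000_000) * ILLUSTRATIVE_PRICE_PER_1M_OUTPUT_TOKENS

# A prompt of roughly 1,000 tokens answered with roughly 500 tokens:
print(f"${request_cost(1_000, 500):.4f} per request")

# An ~88% price cut scales the whole bill by the same factor.
print(f"${request_cost(1_000, 500) * 0.12:.4f} after the reduction")
```

The point of the sketch is that a price cut of this size flows straight through: a customer running millions of such requests sees their bill shrink by the same 85 to 90%.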
>
SAMI >
So, the costs of operating an LLM (for a provider like OpenAI) or of using an LLM (for a GPT user) are coming down rapidly.
>
RICHARD >
Yes, OpenAI passes on the cost reductions. Here is what Olivier Godement, the head of API Product at the company, said:
We’re not really in the business of maximizing revenue margins. We’re in the business of enabling people to just build more, to try out more use cases and see what sticks. Every time there is some cost optimization, we pass on the price to our customers. My intuition is that we are nowhere near the ceiling on both intelligence and cost.
Combined with the model’s improved capability, this is great news for OpenAI’s customers.