InvestAI etc. - 10/29/2024
AI for investors, explained in simple terms. An open thread updated weekly.
Topics discussed this week:
Is AI killing the MBA?
AI and the future of this newsletter
Foundation Models as the GPU bubble bursts
Future consumer AI behemoths
The public LIST updated
AI links to consider
Is AI killing the MBA?
>
RICHARD >
You pointed me to this Forbes piece, "How AI Is Killing the Harvard MBA," which describes how major banks like JP Morgan Chase and Goldman Sachs are building tools like ChatCFO and IndexGPT to do many of the tasks that were until now assigned to freshly minted MBA associates. Here is an excerpt that sums it up:
Just about all of the big name investment banks and venture capital firms - from [Goldman Sachs, Morgan Stanley, JP Morgan], Citigroup, HSBC and Barclays to Sequoia Capital, Andreessen Horowitz, Tiger Global Management are investing in startups that are building AI applications and infrastructure and quietly building (or buying) applications that can sift through vast amounts of data to discover the next big startup, assess financial health of their investment targets, determine market potential, and perform predictive analytics and decision support. These applications also have algorithms to forecast success rates and make quicker decisions.
>
SAMI >
I found the article thought-provoking, but I also thought that it was a headline in search of a rationale. One way to approach this question is to ask what the primary mission of a business school really is. Is it 1) to create cohorts of a competent managerial class, all with similar skills, or 2) to foster professional networks and connections between future managers and business leaders?
The typical b-school brochure probably spends more time on 1) but the reality is closer to 2). This ambiguity serves a good purpose for both sides, the school and the student. The school can claim that it imparts knowledge and wisdom that can be found nowhere else, thereby confirming its academic pedigree and justifying its high tuitions. And the student can pay those tuitions with the confidence that he/she is not only getting that knowledge and wisdom, but also joining an elite group of movers and shakers.
>
RICHARD >
The tools being developed by the banks are apt to replace the first, the cohorts of a competent managerial class all with similar skills. AI threatens primarily this aspect of a business education. In fact, the Harvard model of teaching through case studies is not unlike an LLM’s training and inference. It typically seeks to answer the following question: “Based on what you have seen at hundreds of other companies in the past that faced similar challenges (training), what do you think is the best course of action for company XYZ (inference)?”
>
SAMI >
Exactly. An LLM can be trained on the accumulated experience of many similar companies and then propose the set of decisions most conducive to success. But that leaves the second, unspoken mission of a business school: to foster professional networks and create connections between future managers and business leaders. Is this also replaceable by robots?
Parenthetically, I recall a fellow MBA a few years ago describing the MBA program as “a necessary waste of time.” Of course, a school like Harvard, Wharton or any other business school would object to this statement. They would say, how can anyone call it a waste of time when we are “imparting knowledge and wisdom that can be found nowhere else?”
But I think that my friend was onto something. No right-thinking person would plunk down over $150,000 in college expenses, plus the opportunity cost of foregoing two years of salary, merely to acquire the knowledge and skills taught at business school. You can do this through night classes, or nowadays through free YouTube classes, if you are diligent enough.
Instead, an important reason to go to business school is to obtain an entry ticket to a professional network that would otherwise be difficult to penetrate. To use the analogy of a concert or sporting event, the better the business school, the better your seat will be inside that professional network. The top five schools get the front middle seats.
There is no way for AI to replicate this. Yes, AI tools will accelerate the work of humans. In some cases, they will even replace the work of humans. But the ultimate dealmaker at a bank and the ultimate private banker will still be human. And the dealmaker and the private banker both want to work with humans who are lower in the chain: interns, associates, vice-presidents, etc. (vice-president at an investment bank is a middle-management position). That is because humans need inspiration and motivation from other humans.
Finally, there is one other consideration, which is that humans in a hierarchy bring trust and accountability. I once worked with a portfolio manager who joked (or maybe he was serious?) that the only reason to hire analysts is to blame them when something goes wrong. Although flippant, this remark speaks to accountability, which is difficult to pinpoint when robots are running everything from top to bottom. “Sorry, our robot screwed up the valuation,” is not an excuse that will assuage an upset client of a major investment bank.
So in my view, the MBA is here to stay, but its specific function will evolve as it always has. AI will be an assistant and accelerator, but not a full substitute.
>
RICHARD >
In a way, we have already seen some of this transition to robots in investment management. An indexed fund is essentially a fund that is on autopilot, like a pre-programmed robot. We have had indexed funds for decades and they have gained in popularity, but people are still paying for analysts, investment managers, financial advisors and private bankers. If anything, employment in these fields has grown, not declined.
>
AI and the future of this newsletter
>
RICHARD >
This all made me think about the newsletter business in general, and about our own newsletters: mine at Personal Science, yours at The Wednesday Letter, and our joint effort here with InvestAI.
The challenge we all face is how to remain relevant, not just to current readers, but to anyone interested in how AI will affect their own jobs, especially in a world where a future version of ChatGPT will be better than any of us at finding, summarizing, and analyzing consequential information. Who's going to read us when it'll be possible to subscribe to personalized, high-quality, and comprehensive information about any topic of interest?
>
SAMI >
People who want specifically your voice and mine will continue to read us. When I subscribe to a newsletter, I do so to get that author’s voice and tone and way of thinking, not just the knowledge that he shares. Over time, it becomes a voice that I trust and that becomes a part of my weekly routine, at least for a while.
An AI-generated newsletter is intrinsically limited. If we think of a spectrum between ‘information’ on one side and ‘voice’ on the other, we could say that ‘inference’ is in the middle. An AI-generated newsletter would fall somewhere between ‘information’ and ‘inference’, whereas a good human newsletter in the future will fall between ‘inference’ and ‘voice’.
>
RICHARD >
I hear you, but this voice needs to be supplemented with information and analysis that are fresh. One question is how or whether we ought to shift the mission of this newsletter given what we've mutually learned about AI.
>
SAMI >
I am all about it. It is important to keep things fresh, to keep moving forward with new ideas. We need to integrate AI into our own efforts.
>
RICHARD >
Yes, and I can think of two options. The first, as you regularly note, is to emphasize the “human element”, the unique voice. People enjoy good, personable writing by real people. Maybe top content creators will need to insert more friendly, relatable anecdotes that only a fellow human could find interesting. For example, we could go all Tom Friedman style, with stories about taxi drivers and bellhops, dropping names here and there about some famous person we saw or imagined.
>
SAMI >
Hah yes. A good part of Friedman’s success, and the success of a Noah Smith or a Heather Cox Richardson is their ability to humanize their columns, while also offering strong information and analysis. It is in the very name of their substacks, Noahpinion (I am Noah and this is my opinion), and Letters from an American (who am I? I am an American and a writer).
The futurist Marshall McLuhan (who coined the phrase ‘the global village’ among other things) once said: “People don't read the morning newspaper, they slip into it like a warm bath.” So, do we want to be in the hard information+analysis business or in the warm bath business? I would think both, like a good familiar morning newspaper.
>
RICHARD >
Indeed, and with a fresh twist, which is to move our newsletter from ‘how’ to ‘what.’
As I wrote in the past, AI will force all of us to stop thinking ‘how’ to do work and instead think more about ‘what’ to do. Although voice is important, as you noted, a weekly newsletter like ours, no matter how good we make the content, will eventually be out-competed by well-done LLMs that can beat us with quality, up-to-date content, if for no other reason than that these LLMs can sift through new information much faster than we can. So, we should find ways to get to the source of what our readers want to accomplish and help them do that instead.
For example, it’s becoming ridiculously easy to develop simple web apps, to the point where somebody with no programming experience can whip up an iPhone app in minutes (see this week’s AI Links below).
Our readers have a variety of reasons they choose to spend time with us each week, but most of them I assume hope to find some insight into AI that will be valuable to their career or job. What if, instead of explaining how to use AI to do something, we offered to work directly with our readers—as clients—to actually do the something?
>
SAMI >
I agree. With your expertise in technology and mine in investing, we can offer advice on how to implement an AI strategy. The future evolution of InvestAI can also be an example of how we combine AI’s information+inference component with the human inference+voice component. I like to think that we have already done that to some degree in previous postings.
>
Foundation Models as the GPU bubble bursts
>
RICHARD >
Amid all the excitement over new AI capabilities, it’s easy to lose track of a major difference between the kinds of LLM systems out there. Most of us see LLMs through applications, like ChatGPT, Bing Chat, or zillions of generative AI-based chat apps like those for customer service. But these applications are all built on just a few foundation models, like those from OpenAI, Anthropic, Meta, and a few other big companies.
These models are “foundational” because they form the basis of all the other generative AI applications. Foundation models, generally referred to by their generation numbers (GPT-3, GPT-4, etc.), are notoriously difficult and expensive to train. Nvidia’s exponential growth comes from its status as supplier of choice to everyone who needs the thousands of top-of-the-line GPU chips required to train the foundation models, and then even more chips to do the “inference” required to run them for a particular application.
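The app-versus-foundation-model layering can be sketched in a few lines. The `foundation_model` function below is a toy stand-in, not a real API, and the two “apps” are hypothetical; the point is only that many very different products can be thin prompt wrappers around one shared, expensive-to-train model.

```python
# Toy sketch of the layering: many "apps", one shared foundation model.
# foundation_model() is a stand-in, not a real API.

def foundation_model(prompt: str) -> str:
    # Placeholder for an expensive pretrained model (GPT-4, Claude, Llama, ...).
    return f"[model response to: {prompt}]"

def customer_service_app(question: str) -> str:
    # A "chat app" is often just a prompt template over the shared model.
    return foundation_model(f"You are a support agent. Answer: {question}")

def deal_screening_app(filing: str) -> str:
    # A very different product, same foundation model underneath.
    return foundation_model(f"Assess the financial health described in: {filing}")

print(customer_service_app("Where is my order?"))
print(deal_screening_app("10-K excerpt"))
```

Training the model is the capital-intensive step; each additional application on top of it is comparatively cheap, which is why so few foundation models support so many products.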
A year or two ago, all the Big Tech incumbents were investing heavily into building their own foundation models, thinking this would give them a “moat” and a shot at the winner-takes-all ultimate prize. But now, it’s beginning to look like there are too many foundation models, leading Marc Andreessen and others to wonder whether we’re seeing a “race to the bottom” as deep-pocketed competitors fall over themselves to build what’s beginning to look like a commodity.
>
SAMI >
I saw that from Andreessen, but the piece that you linked from Building Our Future also notes that a16z, the Andreessen Horowitz VC firm, continues to invest in new foundation models.
>
RICHARD >
Yes but the space is getting crowded. As evidence, the GPU Rental Bubble has popped, with rentals plunging from $8/hr last year to under $2/hr now. An Nvidia investor presentation from 2023 (last year!) suggested that you could buy and then rent out a GPU and you'd net a profit of $4/hr. A lot of companies took that trade, but now that bubble has popped. Most of the plunge is driven by the release of Meta's Llama 3+ series of foundation models that make it far more cost-effective for developers to build private, proprietary LLMs without needing expensive GPUs for training. Another problem for GPU resellers is that the majors (OpenAI, etc.) now own their own clusters and don't resell any excess capacity.
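The arithmetic behind that trade, and its collapse, is easy to sketch. Only the $8/hr and $2/hr rental rates come from the discussion above; the GPU cost, hourly operating cost, and utilization figures below are illustrative assumptions of mine, not numbers from the linked article.

```python
# Illustrative GPU-rental payback math. The $8/hr and $2/hr rates are from
# the text; GPU_COST, OPEX_PER_HOUR, and UTILIZATION are assumed round numbers.

GPU_COST = 30_000        # assumed all-in cost of one H100 (hardware + hosting), USD
OPEX_PER_HOUR = 1.00     # assumed power/colo/ops cost per GPU-hour, USD
UTILIZATION = 0.80       # assumed fraction of hours actually rented out

HOURS_PER_YEAR = 24 * 365

def annual_profit(rental_rate: float) -> float:
    """Net profit per GPU per year at a given hourly rental rate."""
    rented_hours = HOURS_PER_YEAR * UTILIZATION
    return rented_hours * (rental_rate - OPEX_PER_HOUR)

def payback_years(rental_rate: float) -> float:
    """Years to recoup the GPU purchase; infinite if the rate loses money."""
    profit = annual_profit(rental_rate)
    return GPU_COST / profit if profit > 0 else float("inf")

for rate in (8.00, 2.00):  # last year's rate vs. today's
    print(f"${rate:.2f}/hr -> payback in {payback_years(rate):.1f} years")
```

Under these assumptions the payback period stretches from well under a year at $8/hr to several years at $2/hr, which is the difference between a money machine and a stranded asset once hardware depreciation is counted.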
>
SAMI >
The article you linked states that capex on foundation model training is “the fastest depreciating asset in history.” How long will it be before this filters through to the entire GPU universe, and in particular to Nvidia’s revenues?
READ MORE > > > Foundational Model vs. LLM: Understanding the Differences
READ MORE > > > $2 H100s: How the GPU Rental Bubble Burst
>
Future consumer AI behemoths
>
RICHARD >
Rex Woodbury is right to wonder whether not enough people are paying attention to consumer AI. The biggest tech-oriented companies of past computing breakthroughs, the ones that surpassed $100 billion in market capitalization, are all household or ‘consumer’ names like Microsoft, Apple, Google, Facebook/Meta, Uber, etc.
Each tech wave in the past started with promising potential for incumbents who saw dollar signs for their existing businesses. Think about IBM and its PC, telecom providers who invested in networking for the early internet, or traditional mobile phone players solving the wrong problems when smartphones arrived. But the really big winners ended up being new companies that saw an opening in the consumer space, companies like Apple, Amazon and Meta. It’s reasonable to expect that the current tech wave will also see the rise of new AI companies unleashed by LLMs that will pass the large incumbents, and that some will be consumer companies.
>
SAMI >
I am always reminded that Intel, a company that never sold directly to the consumer, successfully positioned itself as a consumer brand through its ‘Intel Inside’ campaign back in the 1990s. People who knew nothing about semiconductors were suddenly insisting on only buying personal computers that had Intel chips inside. But chips from AMD or others were just as good or in some cases better.
>
RICHARD >
As with previous tech generations, it’s impossible to tell which consumer startup of today will turn into the next Google or Facebook, but it’s a safe bet that the eventual winners have already been founded (or soon will be) and that they are, as we speak, planting the seeds of their future meteoric rise.
In retrospect it seems obvious that the marriage of smartphones and GPS would bring about an Uber, or that Airbnb could arise from the same social features that began connecting new smartphone users to each other.
I don’t have any good bets myself. There are too many startups out there. But, when it comes to the entertainment or social media subspace, there are already some incredible AI-generated video memes like this one.
>
SAMI >
Very timely and entertaining indeed, two days before Halloween and seven days before the US election. Our politicians have never looked as cool. Of course, anything would look good with John Fogerty’s masterpiece as soundtrack.
We keep track of AI small caps in our LIST segment below. Perhaps one of the companies with market cap below $10 billion will be a consumer AI giant, but it is as likely that the future winners are still unlisted.