InvestAI etc. - 10/22/2024
AI for investors, explained in simple terms. An open thread updated weekly.
Topics discussed this week:
AI as geopolitical mastermind
AI war images
AI and the 2024 election
The public LIST updated
AI links to consider
AI as geopolitical mastermind
>
SAMI >
Geopolitical tensions continue to rise in Europe, the Middle East and Asia. This past week, North Korean troops joined the Ukraine war on Russia’s side. Both Hamas and Hezbollah remain defiant and have vowed to keep fighting after the killing of Hamas leader Yahya Sinwar. And China has conducted extensive military exercises around Taiwan that simulate a blockade or invasion.
Since world leaders are not doing a good job at resolving all these crises peacefully, I asked AI for some help. The prompts I used are ambitious by design.
Here is the first that I posed to Grok-2, followed by its response:
>> Resolve the Taiwan situation peacefully.
I challenged Grok by responding that China’s official stance is incompatible with this list, which essentially represents the status quo. Grok agreed that its blueprint would not be acceptable to the Chinese leadership, given its “strong emphasis on national rejuvenation and territorial integrity.” When prompted for a peaceful solution that would be acceptable to China, Grok was unable to produce one and merely repackaged the same list shown above.
This is discouraging, but it confirms the notion that war comes when there is no overlap in the positions of the two protagonists. There is no intersection or common ground between the following two positions:
“China and Taiwan must be reunified under China’s communist regime.”
“Taiwan must remain free and democratic.”
If war is avoided, it will be mainly through deterrence and negotiations to stretch the timeline of events.
>
RICHARD >
How would deterrence work? Did you ask Grok about that?
>
SAMI >
I did. Grok did better here. Here are the most important points of its response (my emphasis):
Military Deterrence: Strengthening Taiwanese Defenses: Taiwan needs to invest heavily in asymmetric warfare capabilities that could impose a high cost on China for any military action. This includes anti-ship missiles, air defense systems, drones, and cyber warfare capabilities.
Economic and Technological Leverage: Tech Wars: Leveraging Taiwan's critical role in global technology supply chains, especially semiconductors, to underline the economic repercussions of disrupting this.
Economic Interdependence: Highlighting the mutual economic benefits of peace and the catastrophic impacts of war on China's economy, given its reliance on global trade routes.
>
RICHARD >
Interesting, but as one would expect from AI, there is no mention of specific humans making a difference, no mention of how a change in leaders, or in leaders’ dispositions, in China, the US or Taiwan would alter the equation and lead to one outcome instead of another.
>
SAMI >
Exactly. Human leaders make a huge difference. There are multiple scenarios, each of which depends on a different combination of leaders.
Since we can only give Grok a C for its “solution” to the Taiwan crisis, I decided to see if Perplexity is any better at geopolitics. Here is the prompt that I entered, and Perplexity’s response:
>> Resolve the Arab-Israeli conflict in a way that is acceptable to all sides.
Perplexity gets points for at least mentioning the human role, with the phrase “courageous leadership” in its conclusion. But this list is the usual two-state blueprint that has been around for decades. Even if a two-state solution were acceptable to both sides, a return to 1967 lines (item 2 in Perplexity’s response) would require Israel to give up on West Bank settlements, a policy turnabout that is unlikely. So here again, the absence of overlap between two positions is what leads to war.
>
RICHARD >
Keep in mind that the creativity and depth of answers are highly dependent on the prompts you use. Think of an LLM as a humongous ball of words and phrases representing every idea any human has ever had about anything. Your prompt will drop you into a cluster of words that most resembles the specific words you used in asking the question. If you use general terms like “Arab-Israeli conflict” or “Taiwan and China”, you’ll tend to get responses that are located nearest, in wordspace, to the proposals that people have written on these topics.
To get a more creative or original answer, try tossing words into your prompt that can kick the response into a different cluster of words. For example, you can slip in the names of great diplomats in history (e.g. how would Bismarck solve this?) or ask for analogies to other seemingly unresolvable conflicts (“As a highly-skilled marriage counselor, what would you do?”).
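Richard’s “wordspace” intuition can be sketched in a few lines of code. The sketch below is a deliberately toy illustration, not how any real model works: the cluster names and three-dimensional vectors are invented for the example (real models use learned embeddings with thousands of dimensions), but the mechanism is the same — the prompt is matched against clusters by cosine similarity, so changing a word like “Bismarck” moves the prompt toward a different neighborhood of text.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical clusters of existing writing, with made-up coordinates
# along three invented axes: (diplomacy, history, psychology).
clusters = {
    "standard two-state proposals": (0.9, 0.2, 0.1),
    "Bismarck-era realpolitik":     (0.5, 0.9, 0.1),
    "marriage-counseling analogies": (0.2, 0.1, 0.9),
}

def nearest_cluster(prompt_vec):
    """Return the cluster whose embedding is closest to the prompt's."""
    return max(clusters, key=lambda name: cosine(prompt_vec, clusters[name]))

# A generic "resolve the conflict" prompt sits near the generic cluster...
print(nearest_cluster((1.0, 0.1, 0.1)))  # standard two-state proposals
# ...while a prompt mentioning Bismarck shifts toward a different cluster.
print(nearest_cluster((0.6, 0.8, 0.1)))  # Bismarck-era realpolitik
```

The point of the toy: the model isn’t reasoning from scratch each time; the words you choose determine which region of prior human writing the answer is drawn from.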
>
SAMI >
Great point. Let’s try that. I started by asking Perplexity who were the greatest geopolitical masterminds in history. Perplexity produced a list of a dozen that includes Sun Tzu, Otto von Bismarck and Henry Kissinger. Here is the prompt that I put to Grok:
>> Give a proposal for Middle East peace as Bismarck would have conceived it.
Grok gave me a long answer that included the de-militarization of the conflict and the building of economic, social and cultural ties over a stretched timeline. It emphasized pragmatism over idealism, as in this passage (my emphasis):
Bismarck's approach to Middle East peace would be pragmatic, focusing on what can be realistically managed rather than idealistically hoped for. His strategy would involve complex diplomacy, economic incentives, and a balance of power that ensures no single entity can dominate without consequence, aiming for a pragmatic, sustainable peace rather than a moral or just one.
Although the proposal was short on specifics, it did have some merit. In particular, the mention of pragmatism is noteworthy because some parties in the region are not pragmatic and are instead ideological and, at the extremes, absolutist. Here is a concise version of Grok’s response.
>
AI war images
>
SAMI >
A well-known adage is that “truth is the first casualty of war.” This will certainly apply to war images in the future. We talked last week about the fake image of a distressed child hugging his puppy in the aftermath of Hurricane Helene. Here is a ‘photo’ of an MEA (Middle East Airlines) plane landing in Beirut. MEA is the national Lebanese carrier.
The image is not unrealistic in showing a commercial jet landing during an air raid. MEA has for decades done a heroic job of continuing to fly in impossible circumstances, for example landing at night without runway lights during the civil war.
But for anyone familiar with this location, the photo is easy to spot as a fake. The runways at Beirut airport run north-south along the coastline, and nearly all planes land heading south (see map). Since the sea lies to the west, a plane pointing left in a photo, as in this image, would have the sea as its backdrop instead of land or the city of Beirut.
There will be a proliferation of fake images in war, but the lesson here is that fakes need to at least be plausible in order to be believable.
Here is a video of a passenger’s view on landing in Beirut. The plane is flying south.
>
AI and the 2024 election
>
SAMI >
Last month, Pew Research published the results of a survey about Americans’ concerns over the impact of AI on the 2024 presidential campaign. It found that 57% are ‘extremely/very concerned’ and 25% are ‘somewhat’ concerned about AI’s influence on the 2024 election. Here is one of the charts published by Pew.
>
RICHARD >
The word “AI” is doing a lot of work here. I looked up the exact question that poll respondents were asked:
How concerned are you that people or organizations seeking to influence the presidential election will use AI to create and distribute fake or misleading information about the presidential candidates and campaigns?
After years of press reports bemoaning the rise of misinformation, how would you answer that question? In other words, I don’t think this is about AI per se. I bet you’d get roughly the same answer if instead of AI you substituted, say, “social media” or even “the internet”.
I always want pollsters to ask questions more directly:
How concerned are you that you personally will be misled by fake or misleading information about the presidential candidates and campaigns?
I’m pretty sure that all of our InvestAI readers, for example, think of themselves as mostly immune from misinformation campaigns. Sure, we see “deep fakes” showing the candidates doing or saying outrageous things, but long experience has taught most of us to be highly skeptical of anything that looks too good to be true.
Concerns about misinformation are almost always concerns about the effect it will have on people who disagree with me. AI doesn’t change that.
Meanwhile, I’d be worried about something perhaps equally pernicious: the threat that overzealous politicians will use the “threat” of this new-fangled technology as an excuse to regulate or perhaps even ban various kinds of speech that they find inconvenient.
>
SAMI >
I agree with you. By now, we have all trained our brains to disregard images or videos that appear fake. In addition, when a fake image goes viral on social media, it is quickly identified as fake by several people. There is a form of crowd-auditing that filters out fake images. On the other hand, there will always be people who will believe what they want to believe.
Your idea that governments will impose restrictions on AI in order to “protect” the public is certainly true of autocratic regimes, where it is already happening. But there is also a risk of over-regulation in democratic countries.