Nation pieces: inflation, AI
I had a couple of pieces in (on?) The Nation recently.
The first is on inflation, which is real, not easy to solve, and a potential problem for a green agenda.
The standard remedy—raising interest rates and provoking a recession—would be disastrous in an economy still recovering from the Covid shock. But we can’t deny that huge deficit spending and an infusion of trillions of dollars conjured out of nothing have something to do with the problem. The deficit spending financed a remarkably generous, though too temporary, aid package. It boosted household incomes despite sudden and massive job loss in the early months of the pandemic. That aid is still keeping millions of households afloat and has left many others with unusually large savings balances.
It would be a crime to take those benefits away, but an immense amount of purchasing power was introduced into an economy that was stretched to the limit, with workers in some areas hard to find, taut global supply chains vulnerable to interruption (a lesson for labor militants!), a preference for keeping only the thinnest possible stock of inventories, and a public infrastructure ragged from decades of underinvestment.
And the second is a review of former Theranos board member Henry Kissinger and former Google CEO Eric Schmidt chewing the fat about AI at the Council on Foreign Relations.
Just for a moment, let’s cede the point that AI is something big that is changing the way we live. Schmidt and especially Kissinger worry about what this means for being human. (It’s weird when the architect of the secret bombing of Cambodia becomes the humanist on the program, but such are the politics of elite organizations.) Over the next 15 years, Schmidt claims, computers will increasingly set their own agenda, exploring paths and producing results beyond the intention or understanding of their human programmers. What will this do to our sense of ourselves, Schmidt asks, “if we’re not the top person in intelligence anymore?”
One response might be, “Well, maybe don’t let them go there?” But the authors will have none of that. “Once AI’s performance outstrips that of humans for a given task, failing to apply that AI, at least as an adjunct to human efforts, may appear increasingly as perverse or even negligent,” they declare. Will we delegate our war-making capacities to machines—not merely in guiding weapons to their targets but in deciding whether to attack in the first place? Schmidt apparently thinks so, though he acknowledges that there are some complexities. “So, you’re in a war and the computer correctly calculates that to win the war you have to allow your aircraft carrier to be sunk, which would result in the deaths of 5,000 people, or what have you…. Would a human make that decision? Almost certainly not. Would the computer be willing to do it? Absolutely.”
I meant to add: Henry Kissinger would have made such a decision.