John Horgan interviews Eliezer Yudkowsky
When the University of Chicago polls 50 top economists on subjects like fiscal stimulus and the minimum wage, I am often appalled by the results. In contrast, I wish Eliezer Yudkowsky were made King of the World (assuming there had to be a King of the World, which I'm opposed to). This is from Scientific American:
Horgan: If you were King of the World, what would top your “To Do” list?
Yudkowsky: I once observed, “The libertarian test is whether, imagining that you’ve gained power, your first thought is of the laws you would pass, or the laws you would repeal.” I’m not an absolute libertarian, since not everything I want would be about repealing laws and softening constraints. But when I think of a case like this, I imagine trying to get the world to a condition where some unemployed person can offer to drive you to work for 20 minutes, be paid five dollars, and then nothing else bad happens to them. They don’t have their unemployment insurance phased out, have to register for a business license, lose their Medicare, be audited, have their lawyer certify compliance with OSHA rules, or whatever. They just have an added $5.
I’d try to get to the point where employing somebody was once again as easy as it was in 1900. I think it can make sense nowadays to have some safety nets, but I’d try to construct every safety net such that it didn’t disincent or add paperwork to that simple event where a person becomes part of the economy again.
I’d try to do all the things smart economists have been yelling about for a while but that almost no country ever does. Replace investment taxes and income taxes with consumption taxes and land value tax. Replace minimum wages with negative wage taxes. Institute NGDP level targeting regimes at central banks and let the too-big-to-fails go hang. Require loser-pays in patent law and put copyright back to 28 years. Eliminate obstacles to housing construction. Copy and paste from Singapore’s healthcare setup. Copy and paste from Estonia’s e-government setup. Try to replace committees and elaborate process regulations with specific, individual decision-makers whose decisions would be publicly documented and accountable. Run controlled trials of different government setups and actually pay attention to the results. I could go on for literally hours.
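A quick aside on the "NGDP level targeting" item, since the distinction from ordinary growth-rate targeting is easy to miss: a level target commits the central bank to a fixed path for nominal GDP, so any shortfall has to be made up later rather than forgiven. Here is a minimal sketch of that difference; the 5% path and the sample NGDP figures are my own assumptions for illustration, not anything from the interview:

```python
# Illustrative sketch of NGDP level targeting versus growth-rate targeting.
# The 5% target path and the sample NGDP figures below are assumptions
# chosen for illustration; they do not come from the interview.

BASE_NGDP = 100.0      # nominal GDP index in year 0 (assumed)
TARGET_GROWTH = 0.05   # assumed 5% per-year target path

def level_target(year: int) -> float:
    """The committed level path: BASE_NGDP growing 5% per year."""
    return BASE_NGDP * (1 + TARGET_GROWTH) ** year

# Suppose NGDP undershoots in year 1, then merely resumes 5% growth.
actual_ngdp = [100.0, 102.0, 102.0 * 1.05]  # years 0-2 (assumed data)

for year, actual in enumerate(actual_ngdp):
    gap = level_target(year) - actual
    print(f"year {year}: target {level_target(year):6.2f}, "
          f"actual {actual:6.2f}, shortfall {gap:5.2f}")

# The shortfall persists (3.00 in year 1, 3.15 in year 2) even though
# growth is back to 5%. A growth-rate targeter would call year 2
# "on target"; a level targeter is committed to making up the gap.
```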
And I also liked this, which makes the current political circus seem pretty unimportant by comparison:
There is a conceivable world where there is no intelligence explosion and no superintelligence. Or where, a related but logically distinct proposition, the tricks that machine learning experts will inevitably build up for controlling infrahuman AIs carry over pretty well to the human-equivalent and superhuman regime. Or where moral internalism is true and therefore all sufficiently advanced AIs are inevitably nice. In conceivable worlds like that, all the work and worry of the Machine Intelligence Research Institute comes to nothing and was never necessary in the first place, representing some lost number of mosquito nets that could otherwise have been bought by the Against Malaria Foundation.
There’s also a conceivable world where you work hard and fight malaria, where you work hard and keep the carbon emissions to not much worse than they are already (or use geoengineering to mitigate mistakes already made). And then it ends up making no difference because your civilization failed to solve the AI alignment problem, and all the children you saved with those malaria nets grew up only to be killed by nanomachines in their sleep. (Vivid detail warning! I don’t actually know what the final hours will be like and whether nanomachines will be involved. But if we’re happy to visualize what it’s like to put a mosquito net over a bed, and then we refuse to ever visualize in concrete detail what it’s like for our civilization to fail AI alignment, that can also lead us astray.)
I think that people who try to do thought-out philanthropy, e.g., Holden Karnofsky of GiveWell, would unhesitatingly agree that these are both conceivable worlds we prefer not to enter. The question is just which of these two worlds is more probable as the one we should avoid. And again, the central principle of rationality is not to disbelieve in goblins because goblins are foolish and low-prestige, or to believe in goblins because they are exciting or beautiful. The central principle of rationality is to figure out which observational signs and logical validities can distinguish which of these two conceivable worlds is the metaphorical equivalent of believing in goblins.
I think it’s the first world that’s improbable and the second one that’s probable. I’m aware that in trying to convince people of that, I’m swimming uphill against a sense of eternal normality – the sense that this transient and temporary civilization of ours that has existed for only a few decades, that this species of ours that has existed for only an eyeblink of evolutionary and geological time, is all that makes sense and shall surely last forever. But given that I do think the first conceivable world is just a fond dream, it should be clear why I don’t think we should ignore a problem we’ll predictably have to panic about later. The mission of the Machine Intelligence Research Institute is to do today that research which, 30 years from now, people will desperately wish had begun 30 years earlier.
Vote Eliezer Yudkowsky, King of the World
PS. Long ago I used to read the paper version of Scientific American, and its economics articles were consistently awful. Have things improved?