On March 11, 2011, an earthquake with a magnitude of 9.0 unleashed a tsunami, with waves reaching 130 feet in places, that struck the eastern coast of Japan with immense power and crippled the Fukushima nuclear power plant. It exposed Japan and the rest of the world to the most serious nuclear crisis since Chernobyl. The world is, in fact, still grappling with the crisis. Earlier this year, Japan began releasing treated radioactive water from the damaged facility into the Pacific Ocean.

What needs to be remembered is that the designers, engineers and policymakers responsible for safeguarding the Fukushima facility had in fact contemplated the possibility of a tsunami and equipped the plant with a seawall designed to withstand a "one-in-1,000-year" tsunami.

Unfortunately, the tsunami that hit Fukushima was a one-in-10,000-year event.

As designers, builders and policymakers contemplate regulating artificial intelligence, the story of Fukushima is worth keeping in mind.

Like the benefits of nuclear power, the benefits of artificial intelligence are immense. It would be imprudent not to explore how society might leverage the technology for the benefit of humankind.

At the same time, it is unwise to deny AI's many risks. It is, for example, within the realm of possibility that an unstable, radicalized individual with only a limited understanding of biotechnology could use artificial intelligence to create and unleash upon the world a novel, hyper-virulent airborne pathogen that could make life on this planet unsustainable.

Many thoughtful experts have offered differing opinions on whether artificial intelligence poses an existential threat to society. Supporters of looser regulation typically argue that while such threats are possible, the odds of them materializing are extremely low.

My position is simple: Regulators and legislators must enact and enforce the tightest regulations and the strongest laws with regard to AI. I say this not because I am anti-technology or anti-business, but because I understand math and probabilities. Even if the odds of artificial intelligence posing an existential threat to humanity are a mere 0.01 percent, or one in 10,000, we must take that threat seriously.
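For readers who want the arithmetic spelled out, here is a minimal sketch of the percentage-to-odds conversion; the one-in-10,000 figure is this essay's hypothetical, not an estimate of the actual risk.

```python
# Illustrative arithmetic only; the 1-in-10,000 figure is the essay's
# hypothetical, not a measured estimate of AI risk.
odds = 10_000                     # "one in 10,000"
as_percent = 100 / odds           # = 0.01 percent
print(f"1 in {odds:,} = {as_percent}%")     # 1 in 10,000 = 0.01%

# A probability written as 0.0001 percent would instead mean 1 in 1,000,000:
implied_odds = round(100 / 0.0001)
print(f"0.0001% = 1 in {implied_odds:,}")   # 0.0001% = 1 in 1,000,000
```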

Why? First, history reminds us that rare, low-probability "Black Swan" events do occasionally occur, and that even thoughtful safeguards, such as the wall at Fukushima, can prove insufficient after the fact. Second, and more important, a simple cost-benefit analysis of AI doesn't come out in favor of light regulation.

When the cost of something happening is quite literally incalculable (i.e., the loss of all human life), it is hard to fathom why a society wouldn't do everything in its power to guard against that possibility, however remote. To do anything less is not merely to invite disaster; it is a recipe for it.
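To make the cost-benefit point concrete, here is a minimal expected-value sketch; both inputs are illustrative assumptions rather than estimates.

```python
# A minimal expected-value sketch of the cost-benefit argument above.
# Both inputs are illustrative assumptions, not estimates.
p_catastrophe = 1 / 10_000           # hypothetical chance of the worst case
cost_if_it_happens = float("inf")    # "quite literally incalculable"

expected_loss = p_catastrophe * cost_if_it_happens
print(expected_loss)  # inf: any nonzero probability of an unbounded loss
                      # makes the expected loss unbounded as well
```

However small the probability, multiplying it by an unbounded cost leaves the expected loss unbounded, which is the essay's case against light regulation.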

Jack Uldrich is the author of "A Smarter Farm: How AI is Revolutionizing the Future of Agriculture." He formerly served as director of the Minnesota Office of Strategic and Long-Range Planning under Gov. Jesse Ventura.