Looking for cheaper nuclear energy? Turn the design process into a game for artificial intelligence

Researchers have shown that more efficient nuclear reactors can be designed using deep reinforcement learning.

In the United States, nuclear power provides more carbon-free energy than solar and wind power combined, making it a significant player in the fight against climate change.

U.S. nuclear plants are aging, however, and operators are under pressure to streamline operations in order to compete with coal and gas plants.

Deep in the reactor core, where power is produced, is one of the main places to cut costs.

When the fuel rods that drive the reactions are optimally placed there, reactors burn less fuel and need less maintenance.

Through decades of trial and error, nuclear engineers have learned to design better layouts that prolong the life of costly fuel rods. Artificial intelligence may now help them do just that.

MIT and Exelon researchers show that an AI system can be trained to generate thousands of optimal configurations that extend the life of each fuel rod by about 5 percent, saving a typical power plant an estimated $3 million a year.

Working in a safe, virtual environment, the AI system can also find optimal solutions faster than a human designer and can easily adjust its designs.

Their results were published in the journal Nuclear Engineering and Design in December 2020.

“This technology can be applied to any nuclear reactor in the world,” the researchers say. “By improving the economics of nuclear power, which provides 20 percent of the electricity generated in the U.S., we can help limit the growth of global carbon emissions and attract the best young talent to this important clean energy sector.”

In a standard reactor, the fuel rods are lined up on a grid, like chess pieces on a board, according to their uranium and gadolinium oxide content: the radioactive uranium drives the reactions, while the rare-earth gadolinium slows them down.

In an ideal arrangement, these opposing effects balance each other out and drive efficient reactions.
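
To make the idea concrete, one simple way to represent such a layout in software is as a small grid in which each cell records a rod’s uranium enrichment and gadolinium content. The sketch below is purely illustrative; the `Rod` class, the grid size, and all the numbers are invented for demonstration and are not taken from the paper.

```python
# Illustrative only: a toy representation of a fuel-rod layout as a grid.
# Each cell holds a rod's uranium enrichment and gadolinium oxide content;
# the specific values below are made up for demonstration.
from dataclasses import dataclass

@dataclass
class Rod:
    enrichment: float   # uranium-235 enrichment, in percent
    gadolinium: float   # gadolinium oxide content, in percent (0 if none)

# A 4x4 corner of a hypothetical assembly: lower-enrichment rods toward the
# edge, a couple of gadolinium "poison" rods spread out in the interior.
layout = [
    [Rod(2.0, 0.0), Rod(2.0, 0.0), Rod(2.5, 0.0), Rod(2.5, 0.0)],
    [Rod(2.0, 0.0), Rod(4.0, 0.0), Rod(4.0, 6.0), Rod(4.0, 0.0)],
    [Rod(2.5, 0.0), Rod(4.0, 0.0), Rod(4.5, 0.0), Rod(4.0, 6.0)],
    [Rod(2.5, 0.0), Rod(4.0, 0.0), Rod(4.0, 0.0), Rod(4.5, 0.0)],
]

poison_rods = sum(rod.gadolinium > 0 for row in layout for rod in row)
print(f"Poison rods in this corner: {poison_rods}")
```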

Engineers have tried using traditional algorithms to improve human-devised layouts, but even a simple 100-rod configuration has an astronomical number of possible arrangements to test, and they have had only modest success so far.
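
To get a feel for “astronomical,” a quick back-of-the-envelope count helps. The sketch below uses invented numbers, 100 rod positions and 17 poison rods, just to show the scale; it ignores enrichment choices entirely, which would make the space far larger still.

```python
# Illustrative back-of-the-envelope count of how quickly the search space grows.
# Even ignoring enrichment levels and only choosing which of 100 rod positions
# hold, say, 17 gadolinium "poison" rods, the count is already on the order of
# 10^18. The numbers here are for illustration only.
from math import comb

positions = 100   # rod positions in a simple assembly
poison = 17       # a hypothetical number of poison rods

print(f"Ways to place {poison} poison rods among {positions} positions: "
      f"{comb(positions, poison):.3e}")
```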

The researchers wondered if the screening process could be accelerated by deep reinforcement learning, an AI technique that has achieved superhuman success in games like chess and Go.

Deep reinforcement learning combines deep neural networks, which identify patterns in large amounts of data, with reinforcement learning, in which learning is tied to a reward signal, such as winning a game, as in Go, or achieving a high score, as in Super Mario Bros.
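
As a rough illustration of the reward-driven loop at the heart of reinforcement learning, the toy sketch below uses simple tabular Q-learning on a made-up two-choice problem. It is not the researchers’ method: a real deep reinforcement learning system would replace the lookup table with a neural network and the toy rewards with feedback from a reactor simulator.

```python
# A minimal, illustrative reinforcement-learning loop (tabular Q-learning on a
# made-up one-step problem). The agent must discover which action earns the
# higher reward purely from the reward signal it receives.
import random

ACTIONS = ["A", "B"]             # two hypothetical choices
REWARDS = {"A": 0.2, "B": 1.0}   # action B is better; the agent must learn this

q_values = {a: 0.0 for a in ACTIONS}
learning_rate, exploration = 0.1, 0.2

for episode in range(1000):
    # Explore occasionally; otherwise pick the action with the highest estimate.
    if random.random() < exploration:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    reward = REWARDS[action]  # the "score" the agent receives
    # Nudge the estimate toward the observed reward.
    q_values[action] += learning_rate * (reward - q_values[action])

print(q_values)  # the estimate for "B" should end up close to 1.0
```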

In this case, the researchers trained their agent to place the fuel rods under a set of constraints, earning more points with each favorable move.

Each constraint or rule selected by the researchers represented decades of expert experience grounded in the laws of physics.

For example, the agent could score points by placing rods with low uranium content at the edges of the array, to slow the reactions there; by spreading out the gadolinium “poison” rods to maintain even burn levels; and by limiting the number of poison rods to between 16 and 18.
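
A scoring scheme of this kind might look roughly like the sketch below, which awards points for the three rules just described. The function name, the thresholds, and the point values are all invented for illustration; they are not the reward function used in the paper.

```python
# Illustrative only: a toy scoring function for a fuel-rod layout, rewarding
# the kinds of rules described above. All thresholds and point values are
# invented for demonstration.

def score_layout(enrichment, gadolinium):
    """enrichment and gadolinium are 2-D lists (one value per rod position);
    gadolinium[i][j] > 0 marks a 'poison' rod."""
    rows, cols = len(enrichment), len(enrichment[0])
    score = 0.0

    # Rule 1: lower-enrichment rods on the edges slow the reactions there.
    edge = [enrichment[i][j] for i in range(rows) for j in range(cols)
            if i in (0, rows - 1) or j in (0, cols - 1)]
    score += sum(1.0 for e in edge if e < 3.0)

    # Rule 2: keep the total number of poison rods in a target band (16-18).
    poison = [(i, j) for i in range(rows) for j in range(cols)
              if gadolinium[i][j] > 0]
    if 16 <= len(poison) <= 18:
        score += 10.0

    # Rule 3: reward spreading poison rods apart (for even burn) by penalizing
    # adjacent pairs of poison rods.
    adjacent = sum(1 for (i, j) in poison for (k, l) in poison
                   if abs(i - k) + abs(j - l) == 1)
    score -= adjacent  # each adjacent pair is counted twice; fine for a toy score

    return score

# Example: a 10x10 grid with uniform enrichment and no poison rods scores 0.0;
# layouts that follow the rules above would score higher.
grid = [[4.0] * 10 for _ in range(10)]
no_gad = [[0.0] * 10 for _ in range(10)]
print(score_layout(grid, no_gad))
```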

“After you build in rules, the neural networks start acting very well,” says the study’s lead author, Majdi Radaideh, a postdoctoral fellow in Shirvan’s lab. “We don’t waste time on random processes anymore. It was fun watching them learn to play the game the way a person would have done it.”

Through reinforcement learning, AI has learned to play increasingly complex games as well as or better than humans.

But in the real world, its capabilities remain largely untested. Here, the researchers demonstrate that reinforcement learning has potentially important practical applications.

“This study is an exciting example of transferring an AI technique for playing board and video games to help us solve practical problems in the world,” says study co-author Joshua Joseph, a researcher at MIT Quest for Intelligence.

Exelon is now testing a beta version of the AI system in a simulated environment that includes one assembly of a boiling water reactor and about 200 assemblies of a pressurized water reactor, the most common reactor type in the world.
