At the outset, Prof. Washio outlined key approaches such as machine learning and deep learning, emphasizing that “AI demonstrates excellent interpolation ability in data-rich domains, but it cannot make correct judgments when faced with unknown conditions.” He continued, “AI can find optimal solutions within given data, but it is powerless where data do not exist. Therefore, we should not aim for full automation through AI; it should be positioned as a collaborative tool that humans must monitor and evaluate.”
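This interpolation-versus-extrapolation point is easy to see in a few lines of code. The sketch below illustrates the general claim and is not material from the lecture: a flexible model (a polynomial here, standing in for any learner) is fit to data covering only x in [0, 1], then queried both inside and outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data covers only x in [0, 1]: the "data-rich domain".
x_train = np.linspace(0.0, 1.0, 30)
y_train = np.sin(2.5 * np.pi * x_train) + rng.normal(0.0, 0.02, x_train.size)

# A flexible model (high-degree polynomial as a stand-in for any learner).
coeffs = np.polyfit(x_train, y_train, deg=9)

for x in (0.3, 0.8, 1.5, 2.0):  # the last two lie outside the training range
    pred = np.polyval(coeffs, x)
    truth = np.sin(2.5 * np.pi * x)
    tag = "interpolation" if x <= 1.0 else "EXTRAPOLATION"
    print(f"x={x:.1f} ({tag}): prediction {pred:+8.2f}, truth {truth:+.2f}")
```

Inside the training range the predictions track the truth; outside it they can diverge arbitrarily, with no built-in signal that anything is wrong, which is exactly the failure mode described above.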
Turning to generative AI, exemplified by ChatGPT, Prof. Washio explained its underlying mechanism: “Generative AI is essentially a massive probabilistic model; it does not engage in creative thinking as humans do.” He cautioned that “although such systems can generate text that sounds plausible, they may produce errors when applied to unfamiliar contexts.” He further warned, “The ‘answers’ produced by AI are merely extensions of the most statistically likely word sequences. Unless users understand this nature, misuse could undermine safety culture itself.”
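To make “extensions of the most statistically likely word sequences” concrete, here is a minimal sketch of autoregressive generation. The toy bigram table is invented for illustration; a real large language model conditions on the entire context with a neural network, but the generation loop (sample a likely next token, append it, repeat) is the same.

```python
import random

# Toy bigram model: P(next word | current word), invented for illustration.
bigram_probs = {
    "<s>":     {"the": 0.6, "a": 0.4},
    "the":     {"reactor": 0.5, "pump": 0.3, "valve": 0.2},
    "a":       {"reactor": 0.4, "sensor": 0.6},
    "reactor": {"is": 0.7, "trips": 0.3},
    "pump":    {"is": 0.8, "fails": 0.2},
    "valve":   {"is": 1.0},
    "sensor":  {"is": 1.0},
    "is":      {"safe": 0.6, "running": 0.4},
    "safe":    {"</s>": 1.0},
    "running": {"</s>": 1.0},
    "trips":   {"</s>": 1.0},
    "fails":   {"</s>": 1.0},
}

def generate(max_len=10):
    """Sample a sentence token by token: the 'answer' is nothing more
    than a chain of statistically likely continuations."""
    word, out = "<s>", []
    for _ in range(max_len):
        dist = bigram_probs[word]
        word = random.choices(list(dist), weights=dist.values())[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the reactor is safe"
```

Nothing in this loop checks whether the output is true; it only tracks what is probable given the training data, which is why plausible-sounding errors appear in unfamiliar contexts.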
As an example of practical application, Prof. Washio introduced his joint research with the National Institute of Advanced Industrial Science and Technology (AIST) and NEC Corporation on analyzing stray light in space telescopes. Using an AI algorithm that automatically explores risk conditions, the project identified hazardous scenarios 100,000 times more efficiently than random search. He noted, “This methodology could be applied to automatically extract unanticipated accident scenarios in nuclear power plants.”
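The report does not give the algorithm’s details, but why guided exploration can beat uniform random search by orders of magnitude is straightforward to sketch. In the hypothetical example below (the hazard region, parameters, and budget are all invented, not the actual stray-light analysis), an adaptive loop concentrates later samples near previously discovered hazards:

```python
import random

def is_hazard(x, y):
    """Hypothetical stand-in for an expensive simulation that flags a
    hazardous condition (the real work evaluated stray-light scenarios)."""
    return 0.88 < x < 0.92 and 0.08 < y < 0.12   # ~0.16% of the space

def random_search(budget):
    """Baseline: sample conditions uniformly at random."""
    samples = [(random.random(), random.random()) for _ in range(budget)]
    return [p for p in samples if is_hazard(*p)]

def adaptive_search(budget, step=0.05):
    """Explore at random until a hazard is found, then spend most of the
    remaining budget probing near known hazards."""
    found = []
    for _ in range(budget):
        if found and random.random() < 0.8:           # exploit a known hazard
            sx, sy = random.choice(found)
            x = min(1.0, max(0.0, sx + random.uniform(-step, step)))
            y = min(1.0, max(0.0, sy + random.uniform(-step, step)))
        else:                                         # explore uniformly
            x, y = random.random(), random.random()
        if is_hazard(x, y):
            found.append((x, y))
    return found

random.seed(0)
print("hazards found by random search:  ", len(random_search(20000)))
print("hazards found by adaptive search:", len(adaptive_search(20000)))
```

When hazardous scenarios occupy a tiny fraction of the condition space, each confirmed hazard guides where the next expensive evaluation is spent; that is the basic reason such methods can dominate random sampling.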
He further described collaborations with Osaka University, where AI optimized chemical reaction conditions and achieved high yields from limited experimental data. In another project with Nissan Motor Co., AI analyzed plant operation data to automatically calibrate simulation models, improving their consistency with real-world operations. Prof. Washio stated, “Process optimization and high-precision operational planning through AI can contribute to the safe operation of nuclear facilities.”
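As a rough illustration of optimizing process conditions from limited data (the lecture gave no implementation details), one common pattern is surrogate-based optimization: fit a cheap model to the few experiments run so far, propose the condition that model predicts is best, run that experiment, and repeat. The yield curve below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(temp_c):
    """Hypothetical noisy yield curve standing in for a real reaction;
    the true optimum (near 75 °C) is unknown to the optimizer."""
    return 80.0 - 0.03 * (temp_c - 75.0) ** 2 + rng.normal(0.0, 1.0)

# Start from just a few experiments, as in the limited-data setting.
temps = [40.0, 60.0, 100.0]
yields = [run_experiment(t) for t in temps]

for _ in range(5):
    # Surrogate: a quadratic fit to every experiment run so far.
    a, b, c = np.polyfit(temps, yields, deg=2)
    # Propose the surrogate's predicted optimum (vertex of the parabola),
    # clipped to the feasible range, then actually run that experiment.
    t_next = float(np.clip(-b / (2.0 * a), 20.0, 120.0)) if a < 0 else 120.0
    temps.append(t_next)
    yields.append(run_experiment(t_next))

best = int(np.argmax(yields))
print(f"best condition found: {temps[best]:.1f} °C, yield {yields[best]:.1f}%")
```

Work of this kind typically uses Gaussian-process surrogates and acquisition functions that trade off exploration against exploitation, but the fit-propose-measure loop shown here is the core idea.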
During the Q&A session, a Swedish researcher asked whether AI could eventually “inherit human competence.” Prof. Washio replied unequivocally: “AI can handle knowledge and data, but it cannot replicate human judgment or insight.” He added, “The key issue is how we design AI outputs and connect them to decision-making in human society. This depends not only on technology but also on organizational systems, social institutions, and human dialogue.” Another participant suggested that “AI might become an entity that nurtures competence through education or meetings,” to which Prof. Washio responded that such a discussion “belongs to future philosophical and ethical debates.”
An engineer from the United States raised a practical question about the difficulty of reading data from obsolete media such as floppy disks and CD-ROMs. Prof. Washio stressed that “the maintenance of AI systems and databases should not be left to companies alone. In the future, public management by governments will become necessary.” He concluded that “information and AI models should be preserved as part of social infrastructure.”
Finally, when asked whether AI can explore the unknown, Prof. Washio answered, “The fundamental limitation of AI lies in its inability to quantify the unknown.” He explained, “AI can assist in discovering the unknown, but it cannot create it on its own. That is precisely why combining human scientific intuition with AI’s analytical power is essential.”
In closing, Prof. Washio remarked, “The nuclear industry tends to be overly cautious about introducing new technologies due to its emphasis on safety. Yet to enhance safety, we must pursue innovation with a ‘conservatively exploratory’ mindset.” He concluded, “By understanding the limitations of AI and integrating its strengths with human judgment, we can build the foundation for the next generation of safety culture.” The lecture ended with resounding applause from the audience.