The process of constructing a Phenomena Identification and Ranking Table (PIRT) originated as part of the U.S. NRC’s Code Scaling, Applicability, and Uncertainty (CSAU) evaluation methodology. CSAU was a demonstration methodology for the use of best-estimate simulation codes in licensing of nuclear power plants under rules approved by the U.S. NRC in September 1988. The PIRT process was created as a systematic and documented means of completing a CSAU exercise with a limited amount of resources. Phenomena and processes are ranked in the PIRT based on their influence on primary safety criteria, and efforts are focused on the most important of these. This process has proven valuable in other contexts, and its specification has been broadened over the years (see Ref. 2). In recent years the value of the PIRT process has been recognized outside the nuclear safety community as an important component of any validation process.
The PIRT process begins with some crucial steps performed by the organization needing the PIRT. First, the objectives of the exercise must be clearly documented. One key conclusion of Wilson and Boyack is that the value of the final PIRT is directly proportional to the degree of detail in the initial specification of a transient scenario and the system in which the scenario occurs. An organization will work more efficiently through a series of specific PIRT exercises (e.g., a DVI line break in the AP600) than by trying to cover a range of analyses with a single very general PIRT exercise (e.g., a small break loss of coolant accident in a pressurized water reactor).
With well-defined objectives, scenario, and system in hand, the next step is selection of the panel of experts. This should begin with the selection of a panel coordinator. In addition to relevant technical expertise, this individual needs to be experienced in the PIRT process and to have strong interpersonal skills, including the ability to gracefully sort relevant from irrelevant team member contributions. The coordinator should have direct access to the management members who requested the PIRT, access to staff outside the panel who can perform studies needed to clarify the importance of any given phenomenon, and sufficient wisdom to use these resources effectively.
The panel should have the necessary breadth and depth to handle the problem as defined. Depth is achieved by carefully selecting high quality experts. Breadth is obtained by attention to each individual’s fields of expertise. At least one member should have a primary focus in each of the following areas, relevant to the scenario and system under study:
The panel of experts begins by reviewing the objectives, system, and scenario, and then defining the parameters of interest. For a large break loss of coolant accident (LBLOCA) in a given PWR, the critical parameter is peak clad temperature. In other cases the list of parameters of interest could be much longer, and might be modified as phenomena and processes are identified and ranked.
With this initial groundwork in place, the next phase is identification of relevant existing information, primarily experimental data and results of related analysis. This relies heavily on the knowledge and experience of panel members, but can also be scheduled to permit research by available staff.
The central work follows with identification of the phenomena and processes associated with the system under the specified scenario. Wilson and Boyack recommend starting by identifying high level system processes (e.g., depressurization, debris transport). Next, some structure is supplied by dividing the scenario into time phases in which the dominant processes do not change significantly, and splitting the system into components or subsystems, which can be expected to spatially isolate some key phenomena. This provides a matrix of zones in time and space for which all plausible phenomena and processes can be identified. Some or all of the steps to this point could be handled without assembling the panel in one location. However, a face-to-face brainstorming session is needed at this point to assemble the initial list and move on to ranking of importance.
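As one illustration, the time-phase and component decomposition described above can be sketched as a simple matrix of zones. All phase, component, and phenomenon names here are hypothetical, not taken from any specific PIRT exercise:

```python
# Hypothetical sketch of a PIRT phenomena-identification matrix:
# scenario time phases crossed with system components, each zone
# accumulating candidate phenomena during the brainstorming session.

phases = ["blowdown", "refill", "reflood"]       # scenario time phases
components = ["core", "downcomer", "cold_leg"]   # spatial subdivisions

# Every (phase, component) zone starts empty.
matrix = {(p, c): [] for p in phases for c in components}

# Illustrative entries added by the panel.
matrix[("blowdown", "core")].append("critical flow at the break")
matrix[("reflood", "core")].append("quench front propagation")

# Each zone is reviewed for plausible phenomena, even if some stay empty.
for zone, phenomena in sorted(matrix.items()):
    print(zone, phenomena)
```

The point of the matrix is completeness: every zone is visited, so phenomena that matter only in one phase or one component are less likely to be overlooked.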
The ranking process is iterative, both within the initial panel session and on a longer time scale as more information becomes available from experiments and analysis. A good starting point is to rank phenomena and processes as having low, medium, or high significance. When more resolution is required, panels have split each of these categories into three subdivisions, giving a nine-level scale, or simply split the high and low categories into two subdivisions, giving a five-level scale. It is common, after the discussion associated with the first round of ranking, to realize that phenomena considered early in the process were under- or over-emphasized. This results in further discussion and shuffling before the first draft of the PIRT is produced. Discussions may also expose a clear lack of available knowledge, and result in requests for specific sensitivity calculations before release of a final PIRT.
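The scale refinement described above can be made concrete with a small sketch. The category names and the mapping are illustrative assumptions, since panels choose their own labels:

```python
# Hypothetical sketch of refining a three-level PIRT ranking into the
# five-level scale mentioned above: the high and low categories are
# each split in two, while medium is kept whole.

def refine_to_five(rank):
    """Map a three-level rank to its candidate five-level categories;
    the panel then decides which subdivision each item belongs in."""
    return {
        "low": ["very low", "low"],
        "medium": ["medium"],
        "high": ["high", "very high"],
    }[rank]
```

A nine-level scale would follow the same pattern, splitting all three categories into three subdivisions each.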
Interpretation of the PIRT depends on the details of the objectives. If the PIRT is used to aid design of an experiment, rankings reflect the need for accurate measurements and for care in scaling, so that each phenomenon's effect in a full-scale system is properly captured. If the PIRT is to be used to improve modelling in a simulation code, ranking addresses the level of detail required in special models programmed for the phenomenon or process. If the PIRT is directed towards a sensitivity study, the ranking permits a practical statistical analysis. Phenomena with low importance may be dropped from the uncertainty analysis, or their impact estimated with bounding calculations. Highly ranked phenomena are treated individually, and perturbations of the underlying models are properly included in the statistical methodology. Treatment of phenomena with a medium ranking is decided on a case by case basis.
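The triage of a ranked list for a sensitivity study, as described above, can be sketched as follows. The phenomenon names and their rankings are hypothetical, chosen only to show the mechanics:

```python
# Illustrative triage of a ranked phenomena list for a sensitivity
# study: high-ranked items enter the statistical analysis individually,
# low-ranked items are dropped or bounded, and medium-ranked items are
# handled case by case. All entries are hypothetical.

pirt = {
    "critical flow at the break": "high",
    "quench front propagation": "high",
    "cold leg wall heat transfer": "medium",
    "dissolved gas release": "low",
}

treat_statistically = [p for p, r in pirt.items() if r == "high"]
case_by_case = [p for p, r in pirt.items() if r == "medium"]
bound_or_drop = [p for p, r in pirt.items() if r == "low"]
```

The practical payoff is dimensionality reduction: only the high-ranked phenomena need full statistical treatment, which keeps the uncertainty analysis tractable.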
The ranking table itself is only a useful overview of the process; the primary value rests in the full documentation produced by the panel of experts. Sections of the document provide:
As already indicated, creation of a PIRT is an iterative process. After it is first applied, results of requested experiments, sensitivity studies, or other simulations may require revisions to the original PIRT and associated documentation. However, the value of the PIRT process lies not in absolute accuracy at any given point in time, but in its rational guidance for the allocation of limited resources to a complex research process.
Boyack, B. et al., “Quantifying Reactor Safety Margins: Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large-Break, Loss-of-Coolant