Verification and Validation

 

The title of this lecture may seem redundant to you.  However, in the field of numerical simulation each of these words has a very distinct definition.  Although there isn't universal agreement on the details of these definitions, a check of other references will show fairly standard agreement on their usage.  The simplest definitions that I've seen are given in Roache's book on Verification and Validation (V&V).  To paraphrase him, Verification demonstrates that you are solving the equations right.  Validation demonstrates that you are solving the right equations.


One thing to remember about V&V is that it is applied in two very distinct ways.  The simulation software must go through a verification process, and the models in that software validated over some specified range of conditions.  If you are using a commercial or public domain package, take a careful look at the documentation of the V&V process associated with the software.  A similar V&V process is necessary for each input model developed for the simulation tool.  As part of verification, values used for geometry, initial conditions and boundary conditions must be carefully checked and documented.  Also, as appropriate, mesh and time step sensitivity studies must be performed to bound the error associated with discrete approximations to the differential equations.  Validation of the calculation involves establishing the range of physical conditions obtained from the calculation, and quoting and/or performing comparisons of results from the same software with experiments that span that range of conditions.

 

I've already introduced seven steps in problem solution:

 

    1. Determine the appropriate mathematical model
    2. Classify the partial differential equations
    3. Transform the mathematical model
    4. Select the grid pattern
    5. Form the finite difference equations
    6. Select the solution algorithm
    7. Perform auxiliary calculations


Verification and Validation can be considered as an eighth step, verifying and validating the results of the simulation.  However, you must also consider them as procedures applied during each of the above steps.

 

Validation is the process of determining if the physical approximations introduced in the first step are adequate for the range of situations currently being simulated.  An important fine point here is that you validate a simulation code over a specific (and limited) set of conditions.  For each new result that you produce, you should document the fact that the physical conditions fall within the range of conditions already validated.  If they don’t, you need to qualify your presentation of results clearly noting where you are extrapolating beyond the existing validated region.  Ideally when the bounds of existing validation are exceeded, additional experiments are analyzed (and if necessary designed and performed).

 

Verification covers steps four through seven, but must precede Validation.  Don’t expect reasonable conclusions when comparing to experimental results if you’ve got serious errors due to:

  1. Selection of mesh spacing or time steps that are too large;
  2. Selection of convergence criteria for iterative equation solution that are too loose;
  3. Programming errors in the computer code;
  4. Errors in the specification of input for the simulation of the experiment;
  5. Errors in understanding of code output.


The presence of such errors is a particular problem if a finite volume, finite difference, or finite element code is used in the process of determining values for coefficients (a.k.a. fudge factors) in engineering correlations used to model specific physical processes (e.g. heat transfer coefficients, turbulence models, …). This process is sometimes called “tuning” or “calibration”.  I have seen a number of instances where models were tuned to operate well on a “reasonable” spatial discretization, but later when good mesh convergence studies were done, comparisons to experimental data became unacceptable.  In such cases, the code developers have canceled discretization errors with errors in one or more physical models. I have also seen cases where engineering correlations were adapted without change from the literature and produced poor matches to data on a “standard” spatial discretization, but performed very well on an adequately refined mesh.

 

Some of you who are really alert might wonder about steps two and three of the problem solution process.  Classification of equations normally feeds into selection of solution algorithms.  An error here would be detected during verification.  Transformation of a basic set of equations requires both Validation and Verification.  You could be using an incorrect set of transformation equations, or you could have a bug in your implementation of a correct transformation.

 

Computational simulation of physical systems has been around for a very long time, and Computational Fluid Dynamics is often considered to be a mature field.  Why at this stage of the game do I put such a strong emphasis on V&V?  Much of the answer is in Mahaffy’s first law of human nature:

 

Nobody’s perfect, and most people drastically underestimate their distance from that state.

 

Until relatively recently, most computer based simulations have been performed by the authors of the simulation codes.  All too often, code developers have been too enthralled by the beauty of their theories, or too vested in the success of large programming projects, to look too deeply at results.  I've seen the consequences in comments from representatives of one of my sponsors over the years, the U.S. Navy.  CFD has frequently been referred to as "Colorful Fluid Dynamics," and by other less flattering terms.  Too many CFD predictions made for them were later discovered to have significant errors.

 

The need for V&V goes deeper than over-inflated self-esteem.  Another fundamental human trait feeding the need for V&V is that we see what we want to see (Mahaffy's Fourth Law of Human Nature).  This is not always a bad trait; when it's working for people, we refer to them as visionaries.  However, when it's not working, people end up in metaphorical dead end alleys, or their victims simply end up dead.  Humanity has come up with various ways of dealing with this behavior.  I view Science as the collection of procedures and knowledge developed over millennia to overcome the negative aspects of this trait.  It permits us to see what is really there, rather than what we want to see.  To make progress you need a balance between seeing what you want to see and checking what you thought you saw with good scientific practice.


At its heart V&V is just good science.  However, neither Verification nor Validation is a simple process.  These two lectures are meant to give you a brief introduction, and hopefully keep you from getting into too much trouble.  If you find yourself seriously involved in either activity, take the time to at least read Roache's book and Oberkampf's various reports on the subjects.  Oberkampf's SANDIA reports are free via the DOE Information Bridge.  Roache's book on V&V is available from Hermosa Publishers and well worth the price.  A discussion with recent references can also be found in "Best Practice Guidelines for the use of CFD in Nuclear Reactor Safety Applications," by Mahaffy et al.

Errors

 

As with Verification and Validation, specific meanings are attached to terms used in the discussion of error.  Generally the word "error" is only used to describe a source of inaccuracy that can in principle be corrected or limited to any desired level.  The five items listed above fall into that category.  These errors are frequently subdivided into recognized errors (hopefully just 1 and 2 in the above list) and unrecognized errors (items 3-5 above).

 

Inaccuracies in physical model implementations are normally due to lack of knowledge of underlying processes, the fundamentally stochastic nature of those processes (e.g. turbulence), or low-precision experimental measurements of key quantities.  In these instances the term "uncertainty" is used rather than "error."  If you've ever looked at the data underlying various engineering correlations, you can appreciate this problem.  However, even state quantities such as conductivity have an experimental basis and an associated experimental uncertainty.  At some point it is important to determine the sensitivity of key outputs from a simulation to these uncertainties.

 

I will focus the remainder of this section on what can be done about the five sources of error listed above, and those resulting from a specific choice of a physical model.  My discussion relies on over 30 years of personal experience, and insights derived from Roache’s V&V book and numerous SANDIA National Laboratory Reports by Oberkampf and his colleagues. 

 

Mesh and Time Step Size Sensitivity Studies

 

Mesh and time step sensitivity studies lead to an estimate of the error associated with discretization, and are also important in procedures used to detect software errors.  Roache and Oberkampf have good discussions of this error analysis based upon Richardson Extrapolation.  It basically boils down to fitting a curve to a sequence of results and extrapolating beyond those results to estimate the limiting answer at zero mesh length or time step.  Consider a sequence of three mesh lengths or time step sizes (from smallest to largest) h1, h2, and h3.  Normally the sequence is generated with a constant refinement ratio r = h3/h2 = h2/h1.  Let f1, f2, and f3 be the computed results at the same point in space and time for the three corresponding values of h.  Taking a cue from Taylor series expansions, we look for an expression for f as a function of h in the form:

f(h) ≈ f_exact + C h^p

hence,

f1 = f_exact + C h1^p,    f2 = f_exact + C h2^p,    f3 = f_exact + C h3^p.

Subtracting the equations in pairs gives

f2 - f1 = C (h2^p - h1^p)    and    f3 - f2 = C (h3^p - h2^p),

hence

(f3 - f2) / (f2 - f1) = (h3^p - h2^p) / (h2^p - h1^p) = r^p,    so    p = ln[ (f3 - f2) / (f2 - f1) ] / ln(r).

Note that if the refinement ratio is not constant we can still solve for p, but it is much more difficult because the equation becomes transcendental and must be solved iteratively.  Also notice that if the values of f are not monotonic, the formula won't work.  Although it is possible to have non-monotonic convergence, you will need results on more than three grids (or time steps) to convince me of any error estimate in such situations.

 

Given a value of p, the equations for the two finest meshes can be solved for the remaining unknowns.  As a result the error on the finest mesh can be estimated as:

E1 ≈ (f2 - f1) / (r^p - 1)

Note that if you have faith in the value of p obtained from a Taylor series truncation error analysis, you can use this expression with results from just two meshes to give an error estimate.  However, this is a dangerous approach.  With just two meshes (or time step sizes) you can’t always be certain that your spacing is small enough that higher order terms in the Taylor expansion are insignificant. 

 

These formulas for error and order of accuracy are relatively easy to implement for time step sensitivity and finite difference mesh sensitivity studies where the refined grids contain the points evaluated on the coarser grids.  However, for finite volume methods, if I double the number of volumes, the volume centers don't match between the two levels of refinement.  Since f1 and f2 must be compared at the same points in space and time, interpolation is required on one of the grids.  Be careful that your interpolation is sufficiently accurate that the calculated value of p tells you about the order of accuracy of your finite volume approximation rather than the order of accuracy of your interpolation.

 

Roache notes that the above equation is not always a reliable bound on error.  He recommends multiplying any such error estimate by a "Factor of Safety" (Fs).  Values of this factor range from a high of 3 for a two mesh study to a low of 1.25 for a three mesh study confirming convergence of the mesh.  Use of 3 corresponds to replacement of an error estimate for a second order method with one for a first order method.  Roache also recommends reporting error in terms of a Grid Convergence Index (GCI):

GCI = Fs |f2 - f1| / ( |f1| (r^p - 1) )
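As a concrete illustration, here is a minimal Python sketch of these formulas, assuming three solution values at a single point from meshes generated with a constant refinement ratio; the numerical values in the example are hypothetical.

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy p from three solutions f1 (finest),
    f2, f3 (coarsest) generated with a constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson_error(f1, f2, r, p):
    """Estimated discretization error in the finest-grid result f1."""
    return (f2 - f1) / (r**p - 1.0)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Grid Convergence Index for the fine grid, as a fraction of f1."""
    eps = (f2 - f1) / f1
    return Fs * abs(eps) / (r**p - 1.0)

# Example: a quantity computed on three meshes, refinement ratio 2
f1, f2, f3 = 0.9713, 0.9702, 0.9658   # hypothetical values, finest to coarsest
r = 2.0
p = observed_order(f1, f2, f3, r)
print("observed order p  =", p)
print("error estimate    =", richardson_error(f1, f2, r, p))
print("GCI (three grids) =", gci_fine(f1, f2, r, p, Fs=1.25))
```

Before trusting the output, check that the three values are monotonic and that the observed p is reasonably close to the formal order of your method; a large mismatch usually means the coarsest mesh is outside the asymptotic range.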

Iteration Convergence Errors

I'll say more about this later during the discussion of solution procedures for flow equations.  Any time you solve a system of nonlinear equations, and frequently when you solve a large system of linear equations, you will use an iterative solution procedure and will need to set criteria to declare convergence and end the iteration.  Check to see that the convergence criteria are reasonable.  For example, for a method with a slow convergence rate, look directly at the equation residuals rather than the change in independent variables.  The simplest study of sensitivity to iteration convergence is to drop all convergence criteria by an order of magnitude and measure changes in key simulation state variables.  If a Newton iteration is used, samples should be taken of key variables over a full series of iterations to check for quadratic convergence.  If quadratic convergence is rarely if ever observed, move on to looking for programming errors.
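A minimal Python sketch of such a check, assuming you can wrap your residual and Jacobian evaluations as functions; the 2x2 system here is purely illustrative.  For quadratic convergence the residual norm at each iteration should be roughly proportional to the square of the previous one.

```python
import numpy as np

def newton_with_history(F, J, x0, tol=1e-12, max_iter=20):
    """Newton iteration that records the residual norm at each step so the
    convergence rate can be inspected afterward."""
    x = np.array(x0, dtype=float)
    norms = []
    for _ in range(max_iter):
        r = F(x)
        norms.append(np.linalg.norm(r))
        if norms[-1] < tol:
            break
        x = x - np.linalg.solve(J(x), r)
    return x, norms

# Hypothetical 2x2 nonlinear system, just to exercise the check
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

x, norms = newton_with_history(F, J, [1.0, 0.5])
# For quadratic convergence, ||r_{k+1}|| ~ C ||r_k||**2, so this ratio should
# level off near a constant until round-off takes over.
for k in range(1, len(norms)):
    print(k, norms[k], norms[k] / norms[k-1]**2)
```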

 

Software Errors

If you are a code developer, these are the things that give you nightmares.  However, there are systematic things that you can do, using review, careful programming practice, and testing, to cut the number of errors and get a good night’s sleep.

Quality Assurance

 

The first thing to remember about programming errors is that they occur regardless of programming practices.  Testing procedures must be in place to minimize the number of bugs that survive for any significant time.  Quality Assurance (QA) procedures are one way to control the introduction of bugs and to formalize the test procedures used to localize them.  However, I don't recommend rigorous adherence to international standards for QA programs.  At some point the system becomes rigid enough that the best scientist/programmers leave to find a better work environment, and the project under formal QA is doomed to mediocrity at best.

 

The three components of QA are documentation, testing, and review.  Written standards for these components should be established at the beginning of a project and accepted by all involved.  Documentation of a new simulation capability usually begins with a simple statement of requirements for what must be modeled, what approximations are and are not acceptable, and the form of implementation.  A complete written description of the underlying mathematical model provides a basis for Verification activities.  A clear description of any experimental basis for the model aids Validation.  Good validation testing compares against data beyond the set originally used to generate the model.  A clear description of any uncertainties in the model can be valuable in later studies of sensitivity of results to model uncertainties. 

 

Basic documentation should also include a clear written description of the model’s software implementation. This aids later review or modifications of the programming.  Relatively little effort is normally expended here describing the coding implementing the model itself.  More time should be spent documenting flow of data, revisions to data structures, and definitions of important variables.

 

The final piece of the basic documentation is a test plan.  Here a careful explanation is provided of a set of test problems that clearly exercise all new and revised programming.  The tests should cleanly isolate individual features of the new capability, and demonstrate that the software correctly implements the underlying model.  For revisions to physical models, relevant tests against experiments should also be specified.

 

This documentation should be generated in two drafts.  The first precedes actual implementation of the software, and the second is issued as a final report including the final form implemented and the results of all proposed tests.  It should be accompanied by two phases of independent review, the first focusing on the viability of the proposed approach, and the second focusing on the completeness of testing.  My experience has been that even without review, generation of this documentation significantly cuts the number of programming errors introduced into the final product.  The act of describing an implementation in words forces a careful review of the software.  More importantly, a systematic written description of a test procedure ensures that very little can slip through the testing process.

 

Documentation must also exist at a more automated level via a source code configuration management procedure.  This starts with a systematic record of all changes, dates of change, and individuals responsible for the changes.  When under software control, this level of code management lets you remove old updates to a program if they are found to be inappropriate, and maintain specialized versions of a base code.  These capabilities have been used for a long time on large software projects.  The current favorite configuration control tool is CVS, which is GNU open source software, and free.  However, you might also consider the improvements to CVS available in a management tool named Subversion.  The project under which I do most of my research uses CVS at its heart, but extends capabilities via a web page that provides links to all accepted versions of the software, related documentation, test problems, and supporting scripts for version generation and execution of test problems.

 

The act of bringing a new simulation capability under configuration control (creating a new code version) should provide the most rigorous review for code errors.  However, this is largely a function of the individual appointed to be the configuration control manager.  Success of a software project often depends on the quality of the individual doing that job.  He or she must have the breadth of technical experience to understand all documentation associated with updates.  He or she should also be well versed in testing procedures and basic scientific method in order to judge the completeness of the test sets submitted with each update.

 

Any new problems submitted with an update should be included in a regression test suite.  This is also a major line of defense against introduction of coding errors.  In a complicated simulation code, it's much easier than you might think to introduce your own amazing improvement and unintentionally cripple another portion of the program.  However, if that portion went through the documentation and testing procedure that I've described, its specific test problems were embedded into the regression test set.  By running the regression test set for each new change, bugs affecting older capabilities are detected very quickly, and corrected before being accepted into the official program.  The project where I do most of my software development started seven years ago with a regression test set of about 50 problems.  It's now up to about 1500 problems, taking about 3 hours to run on a high end Intel based workstation.  The rate of increase of computer speed and adaptation to use of parallel clusters will keep our testing productive and growing through the useful life of the software.
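A regression harness does not need to be elaborate.  The Python sketch below assumes a hypothetical layout in which each case directory holds an input file, a baseline.json of key results, and a solver executable named my_solver; adapt the names, the results format, and the comparison tolerance to your own project.

```python
import json, subprocess, sys
from pathlib import Path

TOL = 1.0e-12   # relative tolerance; use 0.0 for bit-for-bit checks

def run_case(case_dir: Path):
    """Run one regression case and return a dict of key result values.
    Assumes the solver writes results.json; adjust to your code's actual I/O."""
    subprocess.run(["./my_solver", "input.inp"], cwd=case_dir, check=True)
    return json.loads((case_dir / "results.json").read_text())

def compare(new, baseline, tol=TOL):
    """Return a list of (key, new, old) triples that differ beyond tol."""
    bad = []
    for key, old in baseline.items():
        val = new.get(key)
        if val is None or abs(val - old) > tol * max(abs(old), 1.0):
            bad.append((key, val, old))
    return bad

failures = 0
for case in sorted(Path("regression").iterdir()):   # one subdirectory per test case
    baseline = json.loads((case / "baseline.json").read_text())
    diffs = compare(run_case(case), baseline)
    failures += bool(diffs)
    print(f"{case.name}: {'ok' if not diffs else 'FAIL ' + str(diffs)}")
sys.exit(1 if failures else 0)
```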

Method of Manufactured Solutions (MMS)

  

Roache and Oberkampf are both advocates of the method of manufactured solutions as a way to verify coding.  I've tried it and also consider it to be very valuable.  The idea is fairly simple.  Start with the basic PDE (or system of PDEs) in the mathematical model for your problem.  For example, consider a 1-D transient conduction problem with constant properties and a source term:

∂T/∂t = α ∂²T/∂x² + q(x,t)

The next thing that you do is pick a solution T(x,t) that you like and run it through the differential operators.  For example, take T(x,t) = 300 + t (9 - x²), which is linear in time and quadratic in space.  So all I've got to do is set

q(x,t) = ∂T/∂t - α ∂²T/∂x² = (9 - x²) + 2αt,

declare initial conditions T(x,0)=300 and boundary conditions T(3,t)=300 and T(-3,t)=300, and I've got a conduction problem that I can feed to my finite difference or finite volume code with a known answer.  This is a nice choice for testing methods that are at least first order accurate in time and second order accurate in space.  Because the solution is linear in time and quadratic in space, the truncation error of such methods vanishes, and when they are functioning correctly they will reproduce the solution to machine accuracy.
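The symbolic work is easy to automate.  The Python sketch below uses sympy and assumes the constant-property form of the conduction equation and the example solution written above; sp.ccode (or sp.fcode for Fortran) converts the manufactured source term into an expression you can paste into the source-term routine of the code under test.

```python
import sympy as sp

x, t, alpha = sp.symbols("x t alpha", positive=True)

# Manufactured temperature field: linear in t, quadratic in x,
# equal to 300 at t = 0 and at x = +/- 3.
T = 300 + t * (9 - x**2)

# Source term that makes T an exact solution of  T_t = alpha*T_xx + q
q = sp.simplify(sp.diff(T, t) - alpha * sp.diff(T, x, 2))

print(q)             # -> 2*alpha*t - x**2 + 9
print(sp.ccode(q))   # C expression for the source-term routine
```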

 

If you want to test more aggressively with non-zero derivatives at all orders, move away from polynomials.  For a conduction problem, there are simple analytic solutions available.  If I go for a solution with q=0, then it's going to be a Fourier expansion.  I can isolate one term in the expansion, for example

T(x,t) = 300 + cos(πx/6) exp(-α(π/6)² t),

by judicious choice of the boundary conditions and initial conditions:

T(x,0) = 300 + cos(πx/6),    T(3,t) = T(-3,t) = 300.

This particular choice could be considered as testing against an analytic solution.  However, I have manipulated initial and boundary conditions to manufacture a solution to the conduction problem.


For best results from a manufactured solution, follow the guidelines given by Roache and by Salari and Knupp: build the solution from smooth analytic functions, make sure every term in the governing equations makes a non-trivial contribution, keep the derivatives bounded, and stay within the range of conditions (e.g. positive temperatures) that the code is designed to accept.


 

When checking against solutions like this one, it is important to perform mesh and time step convergence studies as described above.  Subtle bugs may be hidden on a coarse mesh or with a time step that is too large.  For more details on the application of MMS, see the expanded summary that I have provided and the SANDIA report by Salari and Knupp.

Basic Programming Practices

Before you write a new program, understand the information that you need to input, what you need to  output, and any temporary storage that you need to accomplish your calculations.  Based on this survey, design a set of data structures that will contain all of this information in your program.  If speed is important for your application, you should design two or three variants of the more complicated data structures (e.g. derived type arrays containing components that are pointers to other arrays).  Set up a test program to time the access to the different options.  Elegance often comes at a price you can't afford.  As you implement your solution algorithm, follow some basic guidelines for readable programs, so that it is easier to spot errors.  To the maximum extent possible, divide the elements of your calculation into subprograms (functions or subroutines) that can be individually tested.  Your main program and frequently the next layer of subprograms should be nothing more than outlines calling other subprograms to do the actual calculations.
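As one illustration of that advice, the Python sketch below times a simple reduction over an array-of-structures layout against a structure-of-arrays layout; the field names and sizes are arbitrary, and the point is only that such comparisons are cheap to set up before committing to a data structure design.

```python
import timeit
import numpy as np

N = 200_000

# Variant 1: list of per-cell dictionaries (array of structures)
cells = [{"rho": 1.0, "u": 2.0, "p": 101325.0} for _ in range(N)]

# Variant 2: one array per field (structure of arrays)
rho = np.ones(N)
u = np.full(N, 2.0)
p = np.full(N, 101325.0)

def momentum_aos():
    return sum(c["rho"] * c["u"] for c in cells)

def momentum_soa():
    return float(np.dot(rho, u))

print("array of structs :", timeit.timeit(momentum_aos, number=10))
print("struct of arrays :", timeit.timeit(momentum_soa, number=10))
```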


Evolutionary Programming

 

The best way to avoid coding bugs that I know is through the practice of evolutionary programming.   There is next to nothing completely new under the sun.  Whether you know it or not, any simulation tool that you are likely to write will be an extension of something already in existence. If you can obtain source code that implements a large subset of your goal, I recommend that you either gradually change that software to meet your goals, or start the creation of your new product so that it should in principle match results with the older program for some set of test problems. 

 

If you start with new code, first check to make certain that you can reproduce the results of the older code to within machine round-off error.  You should not expect to match results exactly, because your programming will probably implement expressions that are formally identical but use a different ordering of arithmetic operations to get the result.  Slight differences in source code can also influence whether or not intermediate results are temporarily stored in 80-bit CPU registers rather than 64-bit memory.  Either changes in the order of operations or changes in storage precision generally produce results that differ in the low order bits.  To get a quick feel for the level of impact to be expected from this change in round-off error, compile either the old or the new program twice, once fully optimized and once unoptimized, and compare the two sets of results.  Optimization will change both the order of operations and the storage of intermediate quantities from what is used in unoptimized compilation.

 

From this point, the new code and adapted code approaches follow the same path.  Add new features in a way that separates three classes of changes:

  1. Code modifications that preserve results to the last bit (e.g. data structure changes, restructuring that does not alter the order of computational operations);
  2. Code modifications that only produce changes due to different machine round-off (e.g. substitution of one direct matrix solution package for another, or other code restructuring that changes the order of arithmetic operations); and
  3. Code modifications that produce significant changes in results (e.g. new turbulence model, switch to higher order difference equations).

The first change class is easy to test and debug, although you may need to suppress compiler optimization to see the exact match.  The second is more difficult, in that you really do need to confirm that differences in results can be attributed to differences in round-off error.  The third is where you apply techniques described above such as the Method of Manufactured Solutions for changes to numerical methods, and rigorous validation to check changes in physical models.
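A small helper makes the first two classes easy to check automatically.  The Python sketch below is one way to do it, with hypothetical output file names and a round-off threshold that you would tune to your own problem and precision.

```python
import numpy as np

def classify_change(old, new, roundoff=1.0e-12):
    """Crudely classify the difference between two result arrays:
    'identical' (class 1), 'round-off' (class 2), or 'significant' (class 3).
    The round-off threshold is problem dependent; 1e-12 relative is only a
    starting guess for 64-bit arithmetic on a well-conditioned problem."""
    old, new = np.asarray(old), np.asarray(new)
    if np.array_equal(old, new):
        return "identical"
    rel = np.max(np.abs(new - old) / np.maximum(np.abs(old), 1.0e-30))
    return "round-off" if rel < roundoff else f"significant (max rel diff {rel:.2e})"

# Example: compare stored profiles from the old and new versions
old = np.loadtxt("old_profile.dat")   # hypothetical output files
new = np.loadtxt("new_profile.dat")
print(classify_change(old, new))
```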

 

As you review discrepancies between results of new and old programs, remember that you may find bugs in the old program.  You have no guarantee that it is perfect. 

Numerical Jacobians

 

As a final suggestion for detection of programming errors, I want to briefly reference material covered in a later lecture on numerically generated Jacobians.  When developing a solution to a set of nonlinear equations, always create at least one version of your program that generates elements of the Jacobian matrix numerically for comparison against any analytic expressions for the same elements.  When creating analytic derivatives use features available in Mathematica, Maple, or MacSyma to do the necessary symbolic differentiation and to automatically convert the results to Fortran or C expressions.  Also consider the use of automatic differentiation tools such as ADIFOR (http://www-fp.mcs.anl.gov/autodiff/).
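A minimal sketch of such a comparison in Python, with a hypothetical residual function and hand-coded Jacobian standing in for your own; a forward-difference approximation is the simplest choice, though a centered difference gives a tighter check.

```python
import numpy as np

def numerical_jacobian(F, x, eps=1.0e-7):
    """Forward-difference approximation to the Jacobian of F at x."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(F(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = eps * max(abs(x[j]), 1.0)
        xp = x.copy()
        xp[j] += dx
        J[:, j] = (np.asarray(F(xp)) - f0) / dx
    return J

# Hypothetical residual function and its hand-coded analytic Jacobian
F = lambda x: np.array([x[0]**2 * x[1] - 2.0, x[0] + np.sin(x[1])])
J_analytic = lambda x: np.array([[2*x[0]*x[1], x[0]**2],
                                 [1.0, np.cos(x[1])]])

x0 = np.array([1.3, 0.7])
diff = np.abs(numerical_jacobian(F, x0) - J_analytic(x0))
print("max |numerical - analytic| =", diff.max())  # should be small, of order the differencing step
```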

Input Model Errors

 

Input model errors fall into two general categories.  The first is due to entry of incorrect information into the input file.  In my experience this is the largest source of correctable errors in a simulation.  As a code developer, I have dealt with a huge number of code bug reports, at least 90% of which have turned out to be errors in the code user's input file.  Mostly these are typographic errors introduced as the input model is created.  They are also frequently related to failure to read guidelines for creation of the input model.  At times these errors are a secondary result of errors in documentation of the system being modeled or errors in documentation of the code's input file structure.  Input to any simulation tool can in a broad sense be viewed as a programming language, and this type of error is fully analogous to a software error in the simulation code itself.  The procedures above (except for MMS and Numerical Jacobians) are applicable to minimizing or locating input errors.  In particular, good QA for each input file, and separately for the simulation code's input manual, is crucial.

 

The second error category is uncertainty in system geometry and initial and boundary conditions.  These are often difficult to quantify, but when the uncertainty bounds are known the impact can be assessed with a carefully designed sensitivity analysis.

 

Errors in Understanding Code Output

 

In my experience this is a relatively small source of problems.  Usually this is simply a case of assuming the wrong units for a value, or associating a value with the wrong point in space or time. The root causes are normally failure to read documentation or errors in documentation.

 

Validation

 

I don’t claim to be an expert on validation, and will keep my comments on the subject very brief.  If you need to perform serious validation of results from a computer simulation, start by reading work by William Oberkampf and his associates, and by Patrick Roache.  It’s also worth doing a search of the validation literature in the Nuclear Reactor Safety (NRS) community.  Nuclear plant regulators (e.g. U. S. Nuclear Regulatory Commission) have taken validation very seriously for a very long time.  One valuable lesson from NRS is that new experiments should be analyzed before experimental results are published.  There are too many ways to use a modern simulation code as a curve fitting tool.  Ideally release of most experimental results should be delayed for some period after the test, so that system configuration and measurements needed for boundary conditions to a calculation can be provided as accurately as possible without revealing other measurements of state information within the system.

 

Validation of a simulation must be performed at several levels of detail.  Appropriate separate effects experiments must be modeled to capture the quality of local modeling capabilities (e.g. nucleate boiling, specific chemical reactions).  Tests on individual components of a system should also be used in the validation process (e.g. pump, wing).  Full system capabilities should be checked against scaled test facilities (e.g. scale model of a plane in a wind tunnel), and whenever possible data from the full scale system should be compared against simulations.  As you start to build a list of all processes, components, and states of the full system that may need testing, you will realize that neither the budget nor the time exists to fully validate every modeling capability that might influence the results of your simulation.  This is the point at which you gather a group of experts and construct a Phenomena Identification and Ranking Table (PIRT).  In this process you take the long list and rank phenomena by importance (high, medium, or low) to the particular scenario being simulated and validated.

 

In addition to ranking the importance of phenomena, the PIRT panel of experts documents the adequacy of models within the simulation code for each phenomenon, the adequacy of the code verification and validation test sets, the adequacy of existing experimental data for validation, and the need for additional data.  Metrics for quantitative judgments of adequacy of the simulation are also reviewed and, if necessary, alternate metrics are suggested.

 

I have participated in one PIRT process and reviewed others.  At first glance the process seemed to me to be too qualitative to be effective.  However, the structure of the process does work very well and produces a validation plan that is very effective in the real world of deadlines and finite resources.  One key to success with PIRT is that it is an iterative process.  During the validation process, results must be reviewed with the understanding that later analysis or experimental results may increase the ranking of a phenomenon and require revisions to recommendations.

 

Validation metrics are a subject of ongoing research and debate.  Researchers try to come up with ways to measure the quality of predicted results.  I believe that the key to success here is the use of the plural, metrics.  Although I may come up with a single final metric, it will be a weighted average of other metrics that account for various aspects of the match to data.  The predictive computer codes that I use, like those used by most practitioners of CFD and structural analysis, are deterministic.  If I repeat a calculation with the same initial and boundary conditions, I get the same answer.  I want to start with a metric that considers the match of a deterministic calculation, using my best understanding of initial and boundary conditions, to the data (maybe some variant of an L2 norm applied to the vector of data points).  In the process, I will consider experimental error, but I need to distinguish between experimental error due to some systematic bias in the measurements and random experimental error.  That part of experimental error resulting from bias needs to be identified and used to quantify how seriously I take my initial metric.  If I've got a set of error bars that bound systematic bias in an experiment, I would make certain that I understood the nature of the bias.  If appropriate, I could draw one "experimental" curve through the top of the error bars and a second through the bottom, and apply the same metric(s) to each that I used for comparison to the data as measured.  Presentation of the best and worst of the three metrics would tell me something useful.
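As a sketch of what such a metric might look like, the Python fragment below applies a relative L2 norm to a hypothetical calculation-data pair, then repeats the comparison against curves drawn through the top and bottom of the systematic error bars; all numbers are invented for illustration.

```python
import numpy as np

def l2_metric(calc, data):
    """Relative L2 norm of the calculation-data difference at the data points."""
    calc, data = np.asarray(calc), np.asarray(data)
    return np.linalg.norm(calc - data) / np.linalg.norm(data)

# Hypothetical measured profile, its systematic-bias half-widths, and the prediction
data = np.array([310.0, 335.0, 352.0, 360.0, 363.0])
bias = np.array([3.0, 3.0, 4.0, 4.0, 4.0])
calc = np.array([312.0, 331.0, 355.0, 365.0, 362.0])

metrics = {
    "as measured": l2_metric(calc, data),
    "bias high":   l2_metric(calc, data + bias),
    "bias low":    l2_metric(calc, data - bias),
}
for name, m in metrics.items():
    print(f"{name:12s} {m:.4f}")
print("best/worst:", min(metrics.values()), max(metrics.values()))
```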

 

I could do something similar for random experimental error, but with less faith in the meaning of the results.  Consider a comparison in which the mean data values and the calculation have different trends; that difference will be reflected to some degree in the metrics just described.  However, if the error bars reflect truly random error, I'd conclude that I've potentially got a pretty good match.  A metric involving the percentage of the time that the calculation falls outside the data error bounds might be appropriate.

Although the simulation code is deterministic, the calculation you perform with it is not: its inputs are uncertain.  The next level of metrics accounts for uncertainty in the experiment's boundary conditions, initial conditions, and geometry.  In some CFD applications you might be surprised at the impact of small deviations between reported and actual geometries.  To the extent that these input uncertainties are available, a statistically based sensitivity study should be performed to put uncertainty bounds on calculated results.  Metrics could be developed to say something about the overlap of the region within the experimental error bounds and the region within the calculation's uncertainty bounds.
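A minimal sketch of such a study in Python, with a stand-in function playing the role of the simulation code and assumed distributions for a few uncertain inputs; in practice each sample would be a full code run, and the sample size would be chosen by a formal statistical method rather than the round number used here.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 100   # rough bounds only; formal methods set the required sample size

def run_simulation(inlet_T, inlet_flow, wall_gap):
    """Stand-in for the real code: returns a key output for one input set.
    Replace with a call that writes an input file and runs the solver."""
    return 350.0 + 0.8 * (inlet_T - 300.0) - 5.0 * (inlet_flow - 1.0) + 40.0 * wall_gap

# Sample the uncertain inputs from their assumed distributions
inlet_T    = rng.normal(300.0, 2.0, N_SAMPLES)    # K
inlet_flow = rng.normal(1.0, 0.05, N_SAMPLES)     # kg/s
wall_gap   = rng.uniform(0.001, 0.003, N_SAMPLES) # m, geometry tolerance

results = np.array([run_simulation(T, m, g)
                    for T, m, g in zip(inlet_T, inlet_flow, wall_gap)])
lo, hi = np.percentile(results, [2.5, 97.5])
print(f"95% of sampled runs fall between {lo:.1f} and {hi:.1f}")
```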

 

There is another aspect of a calculation’s uncertainty that should be treated in addition to input.   Physical models within the computer program contain parameters derived from experiments and subject to uncertainty.  For a really important prediction like whether or not the core of a nuclear reactor will melt, I need to determine the impact of these internal uncertainties on prediction of critical state variables (such as maximum metal temperatures in a reactor core).  In this instance I’m not trying to make a direct statement about the quality of a simulation code and result.  I need to construct an envelope for highly probable states, and then determine if any unacceptable results fall within that envelope.  A number of statistical methodologies have been developed to minimize the number of simulations that must be done to construct such an envelope.  I recommend that you look at papers authored by D’Auria, de Crecy,  and G. E. Wilson for some specific approaches and books by Gamerman and Cullen for general information.  D’Auria has an unusual approach that has a great deal of merit when the number of uncertain model parameters is very large, or the level of uncertainty in these parameters can’t be adequately specified (very frequently the case).

 

 

V&V References

 

  1. AIAA, "AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations," AIAA Report G-077-1998, 1998.
  2. Casey, M., and Wintergerste, T. (ed.), "Special Interest Group on 'Quality and Trust in Industrial CFD' Best Practice Guidelines, Version 1," ERCOFTAC Report, 2000.
  3. Casey, M., and Wintergerste, T., "The best practice guidelines for CFD - A European initiative on quality and trust," American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP, Vol. 448, No. 1, pp. 1-10, 2002.
  4. Roache, P. J., "Verification and Validation in Computational Science and Engineering," Hermosa Publishers, pp. 403-412, 1998.
  5. Menter, F., "CFD Best Practice Guidelines for CFD Code Validation for Reactor-Safety Applications," European Commission, 5th EURATOM Framework Programme, Draft Report, EVOL-ECORA-D1, February 2002.
  6. WS Atkins Consultants, "Best Practices Guidelines for Marine Applications of CFD," MARNET-CFD Report, 2002.
  7. Oberkampf, W. L., and Trucano, T. G., "Verification and Validation in Computational Fluid Dynamics," Sandia National Laboratory Report SAND2002-059, 2002.
  8. Salari, K., and Knupp, P., "Verification by the Method of Manufactured Solutions," Sandia National Laboratory Report SAND2000-0144, 2000.
  9. Oberkampf, W. L., and Aeschliman, D. P., "Joint Computational/Experimental Aerodynamics Research on a Hypersonic Vehicle: Part 1, Experimental Results," AIAA Journal, Vol. 30, No. 8, pp. 2000-2009, 1992.
  10. Walker, M. A., and Oberkampf, W. L., "Joint Computational/Experimental Aerodynamics Research on a Hypersonic Vehicle: Part 2, Computational Results," AIAA Journal, Vol. 30, No. 8, pp. 2010-2016, 1992.
  11. Oberkampf, W. L., Aeschliman, D. P., Henfling, J. F., and Larson, D. E., "Surface Pressure Measurements for CFD Code Validation in Hypersonic Flow," AIAA Paper No. 95-2273, 26th Fluid Dynamics Conference, San Diego, CA, 1995.
  12. Roy, C. J., McWherter-Payne, M. A., and Oberkampf, W. L., "Verification and Validation for Laminar Hypersonic Flowfields," AIAA 2000-2550, Fluids 2000 Conference, Denver, CO, 2000.
  13. Oberkampf, W. L., Trucano, T. G., and Hirsch, C., "Verification, Validation and Predictive Capability in Computational Engineering and Physics," Foundations for Verification and Validation in the 21st Century Workshop, Johns Hopkins Applied Physics Laboratory, October 2002, SANDIA National Laboratory Report SAND2003-3769, 2003.
  14. Hatton, L., "The T Experiments: Errors in Scientific Software," IEEE Computational Science & Engineering, Vol. 4, No. 2, pp. 27-38, 1997.
  15. Oberkampf, W. L., Blottner, F. G., and Aeschliman, D. P., "Methodology for Computational Fluid Dynamics Code Verification/Validation," AIAA Paper 95-2226, 26th AIAA Fluid Dynamics Conference, San Diego, California, 19-22 June 1995.
  16. Wilson, G. E., and Boyack, B. E., "The Role of the PIRT in Experiments, Code Development and Code Applications Associated With Reactor Safety Assessment," Nuclear Engineering and Design, Vol. 186, pp. 23-37, 1998.
  17. de Crecy, A., "Determination of the Uncertainties of the Constitutive Relationships in the Cathare 2 Code," Proceedings of the 1996 4th ASME/JSME International Conference on Nuclear Engineering, ICONE-4, May 1996.
  18. Cullen, A. C., and Frey, H. C., Probabilistic Techniques in Exposure Assessment: A Handbook for Dealing with Variability and Uncertainty in Models and Inputs, Plenum Press, New York, 1999.
  19. Gamerman, D., Markov Chain Monte Carlo, Chapman & Hall, London, 1997.
  20. Wilson, G. E., Boyack, B. E., Catton, I., Duffey, R. B., Griffith, P., Katsma, K. R., Lellouche, G. S., Levy, S., Rohatgi, U. S., Wulff, W., and Zuber, N., "Quantifying Reactor Safety Margins Part 2: Characterization of Important Contributors to Uncertainty," Nuclear Engineering and Design, Vol. 119, pp. 17-31, 1990.
  21. D'Auria, F., Debrecin, N., and Galassi, G. M., "Outline of the Uncertainty Methodology Based on Accuracy Extrapolation," Nuclear Technology, Vol. 109, No. 1, pp. 21-37, 1995.

 

 

