Shiny apps are becoming more prevalent as a way to automate statistical products and share them with others who do not know R. This tutorial will cover Shiny app syntax and how to create basic Shiny apps. Participants will create basic apps by working through several examples and explore how to change and improve these apps. Participants will leave the session with the tools to create more complex applications of their own. Participants will need a computer with R, RStudio, and the shiny R package installed.
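For orientation, the sketch below shows the minimal structure of a single-file Shiny app of the kind built in the tutorial: a ui object describing the layout and a server function that renders the outputs. It is an illustrative example (the slider and histogram are hypothetical), not the tutorial's own materials.

    # Minimal single-file Shiny app: a slider controls the sample size of a histogram
    library(shiny)

    ui <- fluidPage(
      titlePanel("Histogram demo"),
      sliderInput("n", "Number of observations:", min = 10, max = 500, value = 100),
      plotOutput("hist")
    )

    server <- function(input, output) {
      output$hist <- renderPlot({
        hist(rnorm(input$n), main = paste("n =", input$n), xlab = "value")
      })
    }

    shinyApp(ui = ui, server = server)

Saving this as app.R and calling shiny::runApp() from the same directory launches the app in a browser.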
Determination of Power for Complex Experimental Designs
Mr. Pat Whitcomb
Wednesday, March 21
10:30 AM
Power tells us the probability of rejecting the null hypothesis for an effect of a given size, and helps us select an appropriate design prior to running the experiment.
The key to computing power for an effect is determining the size of the effect. We describe a general approach for sizing effects that covers a wide variety of designs including two-level factorials, multilevel factorials with categorical levels, split-plot and response surface designs. The application of power calculations to DoE is illustrated by way of several case studies. These case studies include both continuous and binomial responses. In the case of response surface designs, the fitted model is usually used for drawing contour maps, 3D surfaces, making predictions, or performing optimization. For these purposes, it is important that the model adequately represent the response behavior over the region of interest. Therefore, power to detect individual model parameters is not a good measure of what we are designing for. A discussion and pertinent examples will show attendees how the precision of the fitted surface (i.e. the precision of the predicted response) relative to the noise is a critical criterion in design selection. In this presentation, we introduce a process to determine if the design has adequate precision for DoE needs.
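As a rough illustration of the idea (not the presenter's method or software), the base-R sketch below estimates by simulation the power to detect an effect of a given size on one factor of a replicated two-level factorial; the design size, effect size, and noise level are all hypothetical.

    # Simulation-based power for detecting a main effect of size delta on factor A
    # in a replicated 2^2 factorial (base R only; all settings are illustrative)
    set.seed(1)
    power_sim <- function(delta, sigma = 1, reps = 4, alpha = 0.05, nsim = 2000) {
      design <- expand.grid(A = c(-1, 1), B = c(-1, 1))
      design <- design[rep(seq_len(nrow(design)), reps), ]   # replicate the runs
      rejections <- replicate(nsim, {
        y <- (delta / 2) * design$A + rnorm(nrow(design), sd = sigma)
        fit <- lm(y ~ A + B, data = design)
        summary(fit)$coefficients["A", "Pr(>|t|)"] < alpha
      })
      mean(rejections)                                       # estimated power
    }
    power_sim(delta = 1)   # power to detect a one-standard-deviation effect

The same simulate-and-fit loop extends to other design types and response distributions by swapping in the appropriate design matrix and model.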
Operational Testing of Cyber Systems
Mr. Paul Johnson
Thursday, March 22
10:00 AM
Previous operational tests that included cybersecurity focused on vulnerabilities discovered at the component level and on ad hoc system-level exploitation attacks during adversarial assessments.
The subsequent evaluations of those vulnerabilities and attacks, as they relate to the overall resilience of the system, were largely qualitative in nature and chock-full of human-centered biases, making them unreliable estimators of system resilience in a cyber-contested environment. To mitigate these shortcomings, this tutorial will present an approach for more structured operational tests based on common search algorithms, together with more rigorous quantitative measurements and analysis based on actuarial methods for estimating resilience.
Demystifying Data Science
Dr. Alyson Wilson
Thursday, March 22
10:00 AM
Data science is the new buzzword: it is being touted as the solution for everything from curing cancer to self-driving cars.
How is data science related to traditional statistical methods? Is data science just another name for “big data”? In this mini-tutorial, we will begin by discussing what data science is (and is not). We will then discuss some of the key principles of data science practice and conclude by examining the classes of problems and methods that are included in data science.
Statistics Boot Camp
Dr. Stephanie Lane
Wednesday, March 21
1:15 PM
In the test community, we frequently use statistics to extract meaning from data. These inferences may be drawn with respect to topics ranging from system performance to human factors.
In this mini-tutorial, we will begin by discussing the use of descriptive and inferential statistics. We will continue by discussing commonly used parametric and nonparametric statistics within the defense community, ranging from comparisons of distributions to comparisons of means. We will conclude with a brief discussion of how to present your statistical findings graphically for maximum impact.
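As a small taste of the kinds of comparisons covered, the base-R sketch below runs one parametric and two nonparametric two-sample tests on hypothetical miss-distance data; the data are simulated for illustration only.

    # Parametric vs. nonparametric two-sample comparisons (hypothetical data)
    set.seed(42)
    baseline <- rgamma(30, shape = 2, scale = 5)   # legacy system miss distances
    upgraded <- rgamma(30, shape = 2, scale = 4)   # upgraded system miss distances

    t.test(baseline, upgraded)        # parametric comparison of means (Welch t-test)
    wilcox.test(baseline, upgraded)   # nonparametric rank-sum comparison
    ks.test(baseline, upgraded)       # nonparametric comparison of full distributions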
Robust Parameter Design
Dr. Geoff Vining
Thursday, March 22
3:00 PM
The Japanese industrial engineer Genichi Taguchi introduced the concept of robust parameter design in the 1950s.
Since then, it has seen widespread, successful use in the automotive and aerospace industries. Engineers have applied this methodology to both physical and computer experimentation. This tutorial provides a basic introduction to these concepts, with an emphasis on how robust parameter design provides a proper basis for the evaluation and confirmation of system performance. The goal is to show how to modify basic robust parameter designs to meet the specific needs of the weapons testing community. This tutorial targets systems engineers, analysts, and program managers who must evaluate and confirm complex system performance. The tutorial illustrates new ideas that are useful for the evaluation and confirmation of the performance of such systems.
What students will learn:
• The basic concepts underlying robust parameter design
• The importance of the statistical concept of interaction to robust parameter design
• How statistical interaction is the key concept underlying much of the evaluation and confirmation of system performance, particularly of weapon systems (see the sketch below)
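The sketch below is a minimal base-R illustration of that last point (hypothetical factors and data, not the tutorial's own example): a control-by-noise interaction in a fitted model is what allows a control setting to dampen the effect of a noise factor.

    # A control-by-noise interaction is the statistical engine of robust parameter design
    set.seed(7)
    dat <- expand.grid(control = c(-1, 1), noise = c(-1, 1), rep = 1:5)
    dat$y <- 10 + 1 * dat$control - 2 * dat$noise +
             1.5 * dat$control * dat$noise + rnorm(nrow(dat), sd = 0.5)

    fit <- lm(y ~ control * noise, data = dat)
    summary(fit)    # the control:noise coefficient estimates the interaction

    # Crossing or converging lines indicate a control setting that reduces the
    # influence of the noise factor; parallel lines indicate no interaction
    with(dat, interaction.plot(x.factor = noise, trace.factor = control, response = y))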
Exploratory Data Analysis
Dr. Jim Filliben
Thursday, March 22
3:00 PM
After decades of seminal methodological research on the subject—accompanied by a myriad of applications—John Tukey formally created the statistical discipline known as EDA with the publication of his book “Exploratory Data Analysis” in 1977.
The breadth and depth of this book were staggering, and its impact pervasive, running the gamut from today’s routine teaching of box plots in elementary schools to the core philosophy of data exploration “in-and-for-itself” embedded in modern-day statistics and AI/ML. As important as EDA was at its inception, it is even more essential now, with data sets increasing in both complexity and size. Given a science and engineering problem/question, and given an existing data set, we argue that the most important deliverable in the problem-solving process is data-driven insight; EDA visualization techniques lie at the core of extracting that insight. This talk has three parts:
1. Data Diamond: In light of the focus of DATAWorks on sharing essential methodologies for operational testing and evaluation, we first present a problem-solving framework (simple in form but rich in content) constructed and fine-tuned over four decades of scientific and engineering problem-solving: the data diamond. This data-centric structure has proved essential for systematically approaching a variety of research and operational problems, for determining whether the data on hand have the capacity to answer the question at hand, and for identifying weaknesses in the total experimental effort that might compromise the rigor and correctness of derived solutions.
2. EDA Methods and the Block Plot: We discuss the EDA graphical tools that have proved most important and insightful (for the presenter) in attacking the wide variety of physical, chemical, biological, engineering, and information-technology problems encountered in the NIST environment. Aside from the more commonly known EDA tools in use, we discuss the virtues and applications of the block plot, a tool specifically designed for the “comparative” problem type: ascertaining whether a (yes/no) conclusion about the statistical significance of a single factor under study is in fact robustly true over the variety of other factors (material, machine, method, operator, environment, etc.) that coexist in most systems. The testing of Army bulletproof vests is used as an example.
3. 10-Step DEX Sensitivity Analysis: Since the rigor and robustness of testing and evaluation conclusions are dictated not only by the choice of (post-data) analysis methodologies but, more importantly, by the choice of (pre-data) experiment design methodologies, we demonstrate a recommended procedure for the important “sensitivity analysis” problem: determining which factors most affect the output of a multi-factor system. The deliverable is a ranked list (ordered by magnitude) of main effects (and interactions). Design-wise, we demonstrate the power and efficiency of orthogonal, fractionated two-level designs for this problem; analysis-wise, we present a structured 10-step graphical analysis that provides detailed data-driven insight into what “drives” the system, what optimal settings exist for the system, what prediction model exists for the system, and what direction future experiments should take to further optimize the system. The World Trade Center collapse analysis is used as an example.
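As a rough companion to part 3 (a generic base-R illustration, not Dr. Filliben's 10-step Dataplot procedure or the World Trade Center data), the sketch below builds a 2^(4-1) fractional factorial with generator D = ABC and returns a ranked list of main-effect magnitudes for a hypothetical response.

    # 2-level fractional factorial screening: ranked main effects (illustrative only)
    set.seed(3)
    design <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))
    design$D <- design$A * design$B * design$C          # generator D = ABC
    design$y <- 5 + 3 * design$B - 2 * design$D +       # hypothetical response:
                rnorm(nrow(design), sd = 0.5)           # B and D drive the system

    fit <- lm(y ~ A + B + C + D, data = design)
    effects <- 2 * coef(fit)[-1]                        # main effect = 2 x coefficient
    sort(abs(effects), decreasing = TRUE)               # ranked sensitivity list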
Quality Control and Statistical Process Control
Dr. Ron Fricker
Thursday, March 22
1:15 PM
This mini-tutorial will introduce attendees to the foundational principles of Statistical Process Monitoring (also known as Statistical Process Control or SPC).
Attendees will learn about some of the classical univariate monitoring techniques such as the Shewhart, EWMA, and CUSUM control charts, as well as some multivariate monitoring techniques such as the Hotelling T², MEWMA, and MCUSUM control charts. The mini-tutorial will conclude by illustrating more complicated methods with defense applications, including network monitoring and biosurveillance.
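For concreteness, the base-R sketch below builds a Shewhart X-bar chart by hand on hypothetical subgroup data (packages such as qcc automate this); it is an illustration of the classical chart, not the mini-tutorial's own code.

    # Hand-rolled Shewhart X-bar chart on hypothetical data (25 subgroups of size 5)
    set.seed(11)
    m <- 25; n <- 5
    x <- matrix(rnorm(m * n, mean = 10, sd = 1), nrow = m)
    x[m, ] <- x[m, ] + 2.5                              # inject a shift in the last subgroup

    xbar   <- rowMeans(x)
    sbar   <- mean(apply(x, 1, sd))
    c4     <- sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)  # unbiasing constant
    center <- mean(xbar)
    ucl    <- center + 3 * sbar / (c4 * sqrt(n))
    lcl    <- center - 3 * sbar / (c4 * sqrt(n))

    plot(xbar, type = "b", xlab = "Subgroup", ylab = "Subgroup mean",
         ylim = range(c(xbar, ucl, lcl)), main = "Shewhart X-bar chart")
    abline(h = c(lcl, center, ucl), lty = c(2, 1, 2))
    which(xbar > ucl | xbar < lcl)                      # subgroups that signal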
In a well-designed randomized experiment, the operational tester may manipulate several factors of interest to isolate their causal effects on system performance. The need to draw causal inference about factors not under the researchers’ control, on the other hand, calls for a specialized set of techniques developed for observational studies. The persuasiveness and adequacy of such an analysis depend in part on the ability to recover metrics from the data that would approximate those of an experiment. This tutorial will provide a brief overview of the common problems that arise from the lack of randomization, as well as suggested approaches for the rigorous analysis of observational studies.
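One commonly used approach for such analyses (offered here as a generic base-R sketch with hypothetical variable names, not necessarily the method taught in the tutorial) is inverse-probability weighting with a propensity score estimated by logistic regression.

    # Propensity-score weighting when "treatment" was not randomly assigned
    set.seed(5)
    n        <- 500
    baseline <- rnorm(n)                                # pre-treatment covariate
    treated  <- rbinom(n, 1, plogis(0.8 * baseline))    # assignment depends on the covariate
    outcome  <- 1 + 0.5 * treated + 1.2 * baseline + rnorm(n)
    dat      <- data.frame(outcome, treated, baseline)

    ps <- glm(treated ~ baseline, family = binomial, data = dat)$fitted.values
    w  <- ifelse(dat$treated == 1, 1 / ps, 1 / (1 - ps))   # inverse-probability weights

    coef(lm(outcome ~ treated, data = dat))["treated"]              # naive estimate
    coef(lm(outcome ~ treated, data = dat, weights = w))["treated"] # weighted estimate

The weighted estimate approximates the comparison a randomized experiment would have produced, provided the propensity model includes the relevant confounders.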
Evolving Statistical Tools
Dr. Matthew Avery, Dr. Stephanie Lane, Dr. Tyler Morgan-Wall, Dr. Jason Sheldon, Dr. Benjamin Ashwell & Dr. Kevin Kirshenbaum
Wednesday, March 21
3:00 PM
In this session, researchers from the Institute for Defense Analyses (IDA) present a collection of statistical tools designed to meet ongoing and emerging needs for planning, designing, and evaluating operational tests.
We first present a suite of interactive applications hosted on testscience.org that are designed to address common analytic needs in the operational test community. These freely available resources include tools for constructing confidence intervals, computing statistical power, comparing distributions, and computing Bayesian reliability.
Next, we discuss four dedicated software tools:
JEDIS – a JMP Add-In for automating power calculations for designed experiments
skpr – an R package for generating optimal experimental designs and easily evaluating power for normal and non-normal response variables
ciTools – an R package for quickly and simply generating confidence intervals and quantifying uncertainty for simple and complex linear models (see the sketch following this list)
nautilus – an R package for visualizing and analyzing aspects of sensor performance, such as detection range and track completeness
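As a small illustration of the ciTools workflow referenced above (based on its documented add_ci() and add_pi() functions; the data and model here are hypothetical):

    # Appending interval estimates to a data frame with ciTools (illustrative data)
    library(ciTools)

    dat <- data.frame(speed = runif(50, 10, 30))
    dat$range <- 5 + 2 * dat$speed + rnorm(50, sd = 3)   # notional sensor data
    fit <- lm(range ~ speed, data = dat)

    head(add_ci(dat, fit, alpha = 0.05))   # fitted values with 95% confidence bounds
    head(add_pi(dat, fit, alpha = 0.05))   # 95% prediction bounds for new observations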