Abstract

Qualitative data are commonly collected in higher, graduate and postgraduate education; however, perhaps especially in the quantitative sciences, utilization of these qualitative data for decision-making can be challenging. One method for the analysis of qualitative data is the degrees of freedom analysis (DoFA), first published in 1975. Given its origins in political science and its application mainly in business contexts, the DoFA method is unlikely to be discoverable or used to understand survey or other educational data obtained from teaching, training or evaluation. This article therefore introduces and demonstrates the DoFA with modifications specifically to support educational research and decision-making, with examples in bioinformatics. DoFA identifies and aligns theoretical or applied principles with qualitative evidence. The demonstrations include two hypothetical examples and a case study of the role of scaffolding in an independent project (‘capstone’) of a graduate course in biostatistics. Included to promote inquiry, inquiry-based learning and the development of research skills, the capstone is often scaffolded (instructor-supported and therefore formative), although it is intended to be summative. The case analysis addresses the question of whether the scaffolding provided for a capstone assignment affects its utility for formative or summative assessment. The DoFA is also used to evaluate the relative efficacies of other models for scaffolding the capstone project. These examples are intended both to explain this method and to demonstrate how it can be used to make decisions within a curriculum or for bioinformatics training.

Introduction

Survey and other qualitative data are commonly collected in higher, graduate and postgraduate education, e.g. as the end-of-term evaluation of instruction. Campbell and Nehm [1] point out that, while there were many papers published on genomics and bioinformatics education (GBE) research between 1995 and 2010, few of these related to postgraduate training or education outside of formal educational settings. One reason for this may be that the data collected from training are in the form of surveys and tend not to be the type of valid or reliable assessment of whether the training has had “a meaningful impact” on the learners (p. 530). However, surveys are common in higher education, while methodology for the appropriate analysis of such data remains relatively uncommon in GBE.

Utilization of qualitative data, such as survey results, for decision-making can be challenging. Educational decisions can include, “is my assessment aligned with my teaching goals?”, “should I use X or Y teaching method?”, or “will changing the curriculum help to achieve specific teaching goals?”. An established method for the analysis of qualitative data to inform decisions is the degrees of freedom analysis (DoFA), initially published in 1975 [2–7] but used almost exclusively in business applications. As such, it is unlikely to be recognized or even discoverable by those in other fields seeking to understand survey or other educational data obtained from teaching, training, assessment, evaluation or other common contexts. Additionally, qualitative methods such as DoFA may not be within the scope of ‘analysis tools’ that many quantitative investigators find useful (or find at all).

However, DoFA is a potentially important tool for both research and decision-making in the context of training and education. Fundamentally, DoFA identifies and aligns theoretical or applied principles with qualitative evidence; it can therefore provide organization and structure for formulating educational decisions, collecting evidence about them and analyzing them. For example, disciplines such as biology, statistics, medicine and economics may be contemplating whether and how to integrate computational methods, training or courses into established degree or certificate programs [8, 9]; evidence that can both support the decision and provide information about how, when and to what extent the integration should be done will necessarily be qualitative (as evidence for educational decisions about topical coverage and course offerings often is). The DoFA method can support the use of educational theory in decisions about teaching and learning across disciplines; it can also help leverage (or identify data collection options for) classroom research. The DoFA method is therefore introduced in this article with modifications that promote its utility in educational decision-making and research. The modified method is demonstrated with two hypothetical examples and a case study about the role and structure of scaffolding in the final, independent project (“capstone”) of a graduate course in biostatistics for life science students.

DoFA method and modifications for educational decision-making and research

The DoFA [2] is a method of qualitative analysis that was originally intended for theory building [2–7]. The method uses a matrix to align qualitative data (observations) with theory or theoretical predictions; in this manner, the relative strengths of evidence for (or against) competing theories can be evaluated. However, the method can accommodate a wide variety of “data”, including summaries of literature [10] and interview results [11]; it is therefore also useful for understanding a wide array of evidence (from a variety of sources) for and against a particular hypothesis or, in educational decision-making, pedagogic strategy [10]. Most of the DoFAs that have been published to date articulate or follow these steps (taken from [7], p. 244):

1. Investigator becomes familiar with the existing knowledge base about the phenomenon of interest.

1 A. Familiarity with this knowledge base identifies at least one theory; theoretical features are used to construct the prediction matrix (Step 2).

2. Create a prediction matrix, which captures all of the relevant theoretical elements of one or more (possibly competing) theoretical frameworks. The theoretical elements, or predictions, for or against which the evidence will be weighed, make up the columns of this matrix.

3. Data relating to the theory/theories and their predictions are collected in the rows. At this point, a matrix with columns representing theories, and rows representing evidence (data), has been constructed.

4. Trained judges evaluate each piece of evidence (collected systematically in Step 3) and determine (independently) whether a given piece of evidence provides support for one (or more) of the theories. Judges’ ratings fill the cells of the matrix. In this step, ratings are “yes” (1), “no” (0) or “partly” (0.5), characterizing the alignment of the evidence with each element of the theory/theories. The judges need to have been trained to an acceptable level of skill in the evaluation of the evidence, and they must also be familiar with the theories (columns).

4 A. The agreement among judges must be assessed. Only one of three options can be chosen: the judges reach consensus; the average of their ratings (0, 0.5, 1) is used; or the evidence is considered “uninformative”—which is itself a result.

5. The “degrees of freedom” are then computed by summing the “points” in each column, giving the column marginals; the theory (column) with the highest total evidence support is the “best supported theory”. Moreover, depending on how well developed the theory used to create the predictions is, the support or alignment of the collected data can also be summed in the marginal for each row. Row marginals then indicate which pieces of evidence were most useful in distinguishing the columns; if a piece of evidence receives the same rating (Step 4) in every column, it does not distinguish among the column options (and, if that shared rating is positive, its row marginal will be high). The marginals can be analyzed with a chi-squared test if desired, to support the choice of one theory over another. (A computational sketch of Steps 4–5 follows this list.)
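The original method prescribes no software, but Steps 4, 4A and 5 are simple arithmetic. The following minimal Python sketch shows one way to implement them; the theories, evidence items, judges' ratings and the chi-squared formulation are hypothetical illustrations, not part of the published procedure.

```python
# Minimal sketch of Steps 4, 4A and 5: averaging two judges' ratings and
# computing the column ("degrees of freedom") and row marginals of a
# prediction matrix. Theories, evidence items and ratings are hypothetical.
import numpy as np
from scipy.stats import chisquare

theories = ["Theory A", "Theory B"]
evidence = ["Survey item", "Interview summary", "Literature finding"]

# One matrix per judge: rows = evidence, columns = theories;
# allowed values are 0 ("no"), 0.5 ("partly") and 1 ("yes").
judge1 = np.array([[1.0, 0.0],
                   [0.5, 0.5],
                   [1.0, 0.0]])
judge2 = np.array([[1.0, 0.0],
                   [0.5, 1.0],
                   [0.5, 0.0]])

ratings = (judge1 + judge2) / 2          # Step 4A: average across judges
col_marginals = ratings.sum(axis=0)      # Step 5: total support per theory
row_marginals = ratings.sum(axis=1)      # how informative each row was

best = theories[int(np.argmax(col_marginals))]
print(dict(zip(theories, col_marginals)), "-> best supported:", best)
print(dict(zip(evidence, row_marginals)))

# Optional: chi-squared goodness of fit of the column totals against equal
# support for every theory -- one way to read "analyzed as a chi squared";
# interpret cautiously, since these totals are not counts.
chi2, p = chisquare(col_marginals)
print(f"chi2={chi2:.2f}, p={p:.3f}")
```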

These steps, followed in nearly all published business applications, support the development or selection of theory. However, in education and training, the focus is rarely to formulate new theories of education; therefore, one adaptation of the DoFA method is to take features of existing educational and cognitive frameworks and construct a prediction matrix that enables exploration of the alignment, or consistency, of data from educational activities with those theoretical frameworks or their principles. The method can also be used to organize and synthesize otherwise difficult-to-summarize qualitative data, such as those from surveys. To apply the DoFA method to support decision-making in educational contexts (hypothetical and case study examples follow), these modifications are recommended:

A. The decision(s) to be made should be the prediction matrix columns, and the rows should represent some existing educational theoretical framework based on which decisions can be made, and evaluated. Then, the evidence (rows) is assembled, and its contribution to the decision-making is made explicit.

This modification is subtle, but important because it can be difficult (and in some cases undesirable, beyond the scope of the problem, or both) to identify the theory underpinning decisions about teaching and learning in education and training. Instead, this modification promotes the alignment of the evidence with the educational decision that is motivating the study. However, a specific decision may not yet be formulated at the time the data are/were collected; the method (with this modification) can also support the alignment of evidence (rows) with structural or other features of the education or training challenge being faced. To do this, at least two options or alternatives representing the decision to be made must be identified. For example, if a program is interested in whether to integrate experience with bioinformatics (EwB) training opportunities into a “traditional” biology undergraduate course, the two obvious options are “integrate experience” and “leave teaching as it is”. Alternatively, this specific decision can be framed as having three options, “integrate experience with bioinformatics into every course”, “integrate bioinformatics into some courses” and “do not integrate bioinformatics”. For the standard DoFA, data and theory must already be in their analyzable states; this modification makes DoFA available to decision makers before the collection of data and possibly even before pedagogic or cognitive frameworks have been identified (which is not possible for the business application).

B. A second modification to the DoFA method is to identify at least one theoretical or empirical framework to evaluate the alignment of each decision option (columns; e.g. to fully, partially or not integrate bioinformatics experience) with the data—which, again, can be relevant literature [12, 13] or survey results [14] or other qualitative information (see Example 3 below). By articulating the educational decision to be made (columns) and setting out to evaluate the alignment of the evidence in the literature or from surveys (rows) for or against the options that the decision entails, these modifications to the DoFA procedure concretely and explicitly promote evidence-informed, and thereby justifiable, decision-making in teaching and assessment (consistent with principles of actionable evidence [15, 16]).

Table 1 shows the modifications as described, with notes on their importance and use. Modification A permits the alignment and subsequent evaluation of evidence with options under consideration—depending on the data that were (or can be) collected and the need for a decision. Modification B can help to identify what evidence is lacking, possibly suggesting classroom action research questions or data that should be collected from students to support, or evaluate, the decision that is made. Neither of these arises in the original formulation of the method, which begins only after data and theory are articulated and complete. With these modifications, the identification of what evidence may be lacking can itself be a result of the DoFA, pointing to next steps or potentially the need for pilot data. Importantly, Modification B may yield a second DoFA matrix ([11], and see Example 3 below): the first aligns the evidence with the decisions, and the second aligns the decisions with theoretical (or other) information that can provide additional validity evidence for the decision suggested by the first DoFA matrix (a toy sketch of such matrices appears below).
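As a concrete illustration of Modifications A and B (decision options as columns; theory elements or survey evidence as rows; and a possible second, chained matrix), the following Python sketch builds two toy prediction matrices. Every label and rating in it is invented for illustration.

```python
# Toy prediction matrices for Modifications A and B. Decision options are
# the COLUMNS; theory elements (matrix 1) or survey evidence (matrix 2) are
# the ROWS. All labels and ratings are invented for illustration.
import pandas as pd

options = ["Integrate EwB", "Leave teaching as it is"]

# Matrix 1 (Modification A): rows are elements of an educational framework;
# ratings use the a priori scale 1 = full, 0.5 = partial, 0 = no alignment.
matrix1 = pd.DataFrame(
    [[1.0, 0.0],
     [0.5, 0.0],
     [0.0, 0.0]],
    index=["Prior knowledge helps", "Supports motivation", "Supports mastery"],
    columns=options)
print(matrix1)
print("Column marginals:", matrix1.sum(axis=0).to_dict())

# Matrix 2 (Modification B): the same decision options (columns) aligned
# with a second kind of evidence, e.g. hypothetical survey findings.
matrix2 = pd.DataFrame(
    [[1.0, 0.0],
     [1.0, 0.5]],
    index=["Students request bioinformatics", "Employers value EwB"],
    columns=options)
print("Column marginals:", matrix2.sum(axis=0).to_dict())
```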

Table 1

Steps in DoFA: original (theory building/testing) and modifications for educational decisions

Step 1
Original (Woodside, 2010): Investigator becomes familiar with the existing knowledge base about the phenomenon of interest. Familiarity with this knowledge base may identify competing theories, which are used to construct the prediction matrix (Step 2).
Modification (this article): Have/gain familiarity with the existing knowledge base about the phenomenon of interest, sufficient to describe at least two options that represent the theory to be tested or decision to be made (columns). The prediction matrix is started here in Step 1.
Notes on modifications: The decision(s) to be made should be the columns of this matrix; the rows (created in Step 2) should then represent an educational theoretical framework based on which decisions can be made and evaluated. This modification permits the alignment and subsequent evaluation of the options under consideration. It is crucial not to conflate what appears in the rows with what appears in the columns.

Step 2
Original: Create a prediction matrix, which captures all of the relevant theoretical elements of one or more (competing) theoretical frameworks. The theoretical elements, or predictions for or against which the evidence will be weighed, make up the columns of this matrix. Data (Step 3) make up the rows.
Modification: Add rows to help make the decision from Step 1. Identify at least one theory or framework that can inform, or help justify, the decision; alternatively, add observations (data) as rows. This prediction matrix permits a visual and computational alignment of the decision (columns) with the evidence to be reviewed (rows; either theoretical features or data).
Notes on modifications: The second modification to the DoFA method is to identify at least one theoretical or empirical framework to evaluate the alignment of each decision option (columns) with the theory, survey results or other qualitative information. Nonoverlapping theory elements should be articulated clearly, so that their alignment with the decision options can be perceived.

Step 3
Original: Data relating to the theory/theories and their predictions are collected, and each observation becomes one row; alternatively, cases or groups are summarized in each row. At this point, a matrix with columns representing theories and rows representing evidence has been constructed.
Modification: Evaluation of the alignment of the features of the chosen theory (or theories, in multiple matrices) with the decision options is now possible. The rating system to be used (e.g. 0 for “no alignment”; 0.5 for “some” or “possible alignment”; and 1 for “full alignment”) should be determined before the evaluation (Step 4).
Notes on modifications: It is possible to identify alignment between different options and different theoretical frameworks, or between each option of the decision and theory in one matrix and survey responses in another.

Step 4
Original: Trained judges (at least two) evaluate each piece of evidence (rows) and independently determine whether a given piece of evidence provides support for one (or more) of the theories. Judges fill in the cells of the matrix with “yes” (1), “no” (0) or “partly” (0.5). The judges need to have been trained to an acceptable level of skill in the evaluation of the evidence, and they must also be familiar with the theories (columns).
Modification: At least one independent judge evaluates each theory element or observation (rows) with respect to the decision options (columns), according to the a priori rating scale. It is helpful to consult an expert on education theory (e.g. an institutional resource or colleague); otherwise, consensus among those involved in the decision-making itself (at least two) is advisable. Including an explanation of the rating in each cell can help explicate the choice of ratings.
Notes on modifications: In classroom- or course-based analyses, finding an independent judge who is sufficiently familiar with the evidence and the decision to be made can be challenging. For course-specific data, it is important that the judge (the instructor) be sufficiently objective for plausible and interpretable results. Collaborators with expertise in the educational context of the decision to be made can be as important as those with expertise in educational or cognitive theories.

Step 4A
Original: The agreement among judges must be assessed; either they must reach consensus; or the average of their ratings is used; or the evidence must be considered “uninformative”.

Step 5
Original: The “degrees of freedom” can be computed by summing the “points” in each column, the marginals. Column marginals help identify the theory (column) with the highest total evidence support, the “best supported theory”. Row marginals can be useful to identify the most “theoretically consistent” observations, if that is of interest. If useful, a chi-squared statistic can be computed and its P-value estimated.
Modification: The “degrees of freedom” are computed as the column marginals; however, simple visualization (e.g. one column has all NO/0s and the other has a mix) may make marginals redundant. Column marginals are important because they highlight the decision option that is most consistent with theory; row marginals are less so.
Notes on modifications: The filled-in prediction matrix, and not its statistical analysis, can support decision-making without marginals, or even point to a need for more data or another theory. The identification of theory elements (rows) that are not aligned with a decision (row marginal = 0) can help determine whether additional theories are needed or whether one decision option is simply not consistent (aligned) with theory. Although a chi-squared analysis is possible, it is not interpretable in the decision-making context.

Example 1. “Should we incorporate experience with bioinformatics?”

As a hypothetical example using the modified DoFA method, consider a biology department facing a decision about whether, or to what extent, to incorporate EwB into a course or a curriculum. This oversimplified example is not especially authentic but can be informative with respect to the method. It is essential to articulate what exactly it will mean to “integrate experience with bioinformatics”—e.g. will it entail change in all course structures, will it change the curricular sequencing (order of courses or topics within a course) and/or perhaps the assessments that are used? What exactly will it look like to “integrate experience with bioinformatics”? Possibly more importantly, what does it mean for the instructor or program to “leave teaching as it is”? This is the first opportunity to formally consider the details of the decision whose support motivated the analysis in the first place, which is not a feature of the standard approach to DoFA. The decisions (options) will organically become the columns of the eventual degrees of freedom prediction matrix; thus, Modification A is important for the DoFA in decision-making in this example.

It might be desirable to examine the evidence that EwB is consistent with key principles of andragogy, or with the development and promotion of expertise (in the given content area); each of these entails respective theoretical principles (the rows). Articulating the principles associated with any of these (andragogy, promotion of expertise) will have the result of aligning the specific evidence to show what support (if any) is available for the options in the decision to be made.

The hypothetical decision about whether to integrate EwB (columns) can be explored with a prediction matrix that uses the framework provided by the seven principles of “how learning works”, which is a synthesis of the empirical literature on learning in higher education published by Ambrose et al. [12]. These seven principles are:

  1. Students’ prior knowledge can help or hinder learning.

  2. How students organize knowledge influences how they learn and apply what they know.

  3. Students’ motivation determines, directs and sustains what they do to learn.

  4. To develop mastery, students must acquire component skills, practice integrating them and know when to apply what they have learned.

  5. Goal-directed practice coupled with targeted feedback enhances the quality of students’ learning.

  6. Students’ current level of development interacts with the social, emotional and intellectual climate of the course to impact learning.

  7. To become self-directed learners, students must learn to monitor and adjust their approaches to learning.

([12], pp. 4–6).

Organizing this list as rows with the decision options (do/do not integrate EwB) as the columns, a DoFA prediction matrix is created (Table 2). The table was completed from the perspective of cognitive psychology (the author’s background); instructors without this background may complete the ratings by consensus across faculty or perhaps in consultation with a center for education excellence at their local institution.

Table 2

Aligning decision options (columns) about whether to integrate experiences with bioinformatics (EwB) into biology undergraduate courses with principles of learning (rows; adapted from [12])

For each principle of learning (rows; [12]), the cells give the alignment with “Integrate EwB” (option a) and “Continue course(s) without any mention of bioinformatics” (option b):

1. Prior knowledge can be helpful.
Integrate EwB: Yes. Also assumes that the “prior knowledge” from this course (i.e. the EwB) will support future engagement with bioinformatics topics and tools.
Continue without bioinformatics: No. Not introducing what is an important aspect of modern biology suggests a reliance on students to independently seek, find and integrate knowledge of bioinformatics.

2. Knowledge organization supports learning and application of new knowledge.
Integrate EwB: Yes—the EwB must necessarily focus on the need for application of new knowledge (that may not yet exist); the source of the new knowledge need not be specified—preparing students to want/expect to learn new things is an essential feature.
Continue without bioinformatics: No. Maintaining a separation of “traditional” biological information and new/modern methods and ideas does not promote organizing biological knowledge to easily accommodate bioinformatics or computational information.

3. Promotes motivation to learn/sustain learning.
Integrate EwB: Partially—EwB may be authentic, and problems that frame and motivate learning may arise from exposure to EwB, but whether it is sustained is undetermined.
Continue without bioinformatics: No. Without exposure to EwB, longer-term commitment to learning may be hampered and potentially limited (not promoted) because future work will most likely require some EwB.

4. Mastery is supported (opportunities to acquire component skills, practice integration and learn when to apply them).
Integrate EwB: No—mastery is not possible with superficial exposure, but introducing the idea that ongoing learning is necessary is a critical purpose for EwB.
Continue without bioinformatics: No—the role of bioinformatics in modern biology is important for mastery in the domain; ignoring this role, or assuming later training will suffice, undermines the likelihood of mastery.

5. Goal-directed practice with formative feedback provided.
Integrate EwB: Partially—exposure to short-term training as the EwB highlights the need for goal-directed learning and practice, but formative feedback may or may not be included.
Continue without bioinformatics: Partially—this may be achieved with traditional biology training, but may be limited to traditional biology and will not permit engagement with bioinformatics.

6. Course climate supports learning.
Integrate EwB: Depends on instructor and course structure.
Continue without bioinformatics: Depends on instructor and course structure.

7. Students will learn to monitor and adjust their approaches to learning.
Integrate EwB: Partially—depending on how embedded the EwB is, and how much emphasis is placed on the importance of these experiences, this skill set may be initiated but may not be fully realized.
Continue without bioinformatics: Partially—students may develop this skill set but may not apply it to bioinformatics until after graduation.

(a) Assumes that EwB is fully integrated into this course—with introduction of, and practice with, the experience and the tools ongoing throughout the course, and not simply mentioned in a single class meeting.

(b) Assumes that students do not seek out bioinformatics training themselves (possibly because it remains, or appears, orthogonal to success in a traditional biology program).
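Although, as discussed below, Table 2 can be read without computing marginals, its qualitative cells map directly onto the 0/0.5/1 scale when a numeric summary is wanted. A minimal Python sketch follows; treating the “depends on instructor” row as unscorable is an assumption of the sketch, not part of the method.

```python
# Numeric view of Table 2: map the qualitative cells onto the 0/0.5/1 scale.
# The "depends on instructor and course structure" row has no defined score
# in the text, so it is omitted here (an assumption of this sketch).
score = {"yes": 1.0, "partially": 0.5, "no": 0.0}

# Cell values for the six scorable principles, in row order (1-5 and 7).
integrate_ewb = ["yes", "yes", "partially", "no", "partially", "partially"]
without_ewb   = ["no",  "no",  "no",        "no", "partially", "partially"]

print("Integrate EwB:", sum(score[c] for c in integrate_ewb))       # 3.5
print("Continue without EwB:", sum(score[c] for c in without_ewb))  # 1.0
```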


The body of Table 2 (i.e. not the marginals) shows that the principles of learning outlined by Ambrose et al. [12] may be partially met by the integration of EwB; but if EwB is integrated into only one class meeting, or not fully engaged with by the students, these principles may not be sufficiently met. The table also shows that, where both integrating EwB and leaving it out are “partially” aligned with a principle, the downside of integrating EwB is less negative than that of continuing without it. Similarly, where neither decision is aligned with one of the principles (i.e. “promotes mastery”), the reasons differ: for integrating EwB, the misalignment stems from the use of “exposure” rather than “full integration”, whereas for leaving EwB out of the biology course or curriculum, it is because future learning may be curtailed. Thus, not only can yes/partially/no values be entered into the DoFA matrix, the rationales for each answer can also be used to qualify the ratings (possibly leading to different point allocations for different “types of no” or “degrees of partially”), which would then be reflected in the marginals; one hypothetical weighting is sketched below. Marginals are not needed to summarize the data in Table 2. This highlights a difference between DoFA modified for decision-making and DoFA for theory building, where the marginals are needed to summarize the data.
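For instance, a weighting that penalizes a “no” rooted in superficial exposure less than a “no” that curtails future learning might look like the following sketch; every weight value here is invented for illustration.

```python
# Hypothetical rationale-weighted ratings: different "types of no" receive
# different penalties. All weight values are invented for illustration.
weights = {
    "yes": 1.0,
    "partially": 0.5,
    "no (superficial exposure only)": 0.25,  # less negative type of "no"
    "no (future learning curtailed)": 0.0,
}

integrate = ["yes", "yes", "partially", "no (superficial exposure only)",
             "partially", "partially"]
without = ["no (future learning curtailed)"] * 4 + ["partially", "partially"]

print(sum(weights[c] for c in integrate))  # 3.75 (vs 3.5 unweighted)
print(sum(weights[c] for c in without))    # 1.0
```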

Finally, Table 2 shows that, before integrating EwB (if that is the option that ends up being best supported), a survey to determine whether the teaching context promotes motivation to learn/sustain knowledge with EwB versus without EwB could easily be incorporated; then, any intervention to integrate EwB can be evaluated in a concrete and formal way. Formal plans to evaluate the results of whichever decision is taken (to integrate EwB or to continue without it) should be initiated (e.g. following [17]). Even with just these two options, a DoFA like the one shown in Table 2 can be informative about educational decisions to be made, and also how to determine whether intended effects of those decisions are achieved (promoting actionable evidence [15, 16] for either decision).

Thus, the DoFA method can bring together—as well as promote the collection of—diverse information, and also highlight what additional information is needed (e.g. to clarify what “integrate EwB” entails). Further, this same decision (incorporate/continue without EwB) can be explored in DoFA tables with other criteria in addition to/instead of these seven principles of learning. For example, we could also, or next, contemplate whether and to what extent either decision (integrate/continue without EwB) is consistent with principles of learning outcomes articulation ([16]; see [18] for an example) or with key features of assessment validity [1].

Example 2. “Would a (new) capstone experience align our program with international survey results on important bioinformatics training needs?”

Example 2 might be seen to follow from Example 1 (if the decision taken was in fact “yes, we should integrate EwB into our curriculum”), but could also be the decision currently under consideration. Specifically, if a degree, certificate or training program did want to incorporate EwB, one option is to add a capstone experience to the program. A “capstone” is a typically independent project in which each student synthesizes prior learning within the program into a presentation, paper or other work product. Independent research projects are key capstone activities in undergraduate majors and Coursera specializations, and are the main objective of some master’s and most doctoral programs. The purpose of the capstone may be to demonstrate (summative) or develop (formative) independence, discipline-specific research skills or the completion of a self-initiated project.

Capstone projects are typically summative assignments—that is, representing “…the knowledge that has been accrued after the learning has ended” ([19], p. 212). The capstone can vary widely in its learning goals, including (but not limited to): (a) “experience” with either research or independent thinking; (b) synthesis of prior learning; (c) generating something novel or extending existing work in a novel direction; or (d) some combination of (a)–(c).

As outlined in Example 1, the decision to “integrate a capstone” will require details about exactly how that will be done. However, before mounting that effort, it is worthwhile to determine whether the capstone project will in fact achieve some or any specific teaching or learning goals. Nine learning objectives for a capstone were identified based on the Boyer Commission Report (1998) [20] and the Educational Effectiveness Working Groups at UC Berkeley (2003) [21]. These objectives, presented here in a general format so as to be applicable to end-of-degree, end-of-term and end-of-course capstones, are to:

1. Teach research skills

2. Assess possession of research skills

3. Assess learning of research skills

4. Provide experience with inquiry

5. Assess/estimate independence in research skills

6. Engage inquiry-based learning

7. Teach inquiry-based writing; and that it should

8A. Function formatively; some may also or instead desire that it should

8B. Function summatively

Table 3 is a DoFA table that explores how well a capstone designed to meet these nine specific objectives (laid out in the columns) is aligned with domains that have been identified as unmet needs for bioinformatics through two national surveys, conducted in the United States [22] and in Australia [14], which appear in the rows.

Table 3

Alignment of capstone objectives with bioinformatics resource needs survey results (United States/Australia)

Capstone objectives (columns): (1) Teach research skills; (2) Assess possession of research skills; (3) Assess learning of research skills; (4) Provide experience with inquiry; (5) Assess/estimate independence in research skills; (6) Engage inquiry-based learning; (7) Teach inquiry-based writing; (8) Function formatively; (9) Function summatively.

Domains on the NSF/EMBL-ABR survey (rows), with the capstone objectives each domain aligns with (“x” in the original matrix) listed by number:

Publish data to the community: none
Maintain sufficient data storage: none
Share data with colleagues: none
Update/use updated analysis software: none
Train on data management and metadata: 8, 9
Bioinformatics analysis and support: 1
Train on basic computing and scripting: 1–6, 8, 9
Search for data and discover relevant data sets: 1–6, 8, 9
Multistep analysis workflows or pipelines: 1, 4
Train on integration of multiple data types: 1–6, 8, 9
Use cloud computing: none
Train on scaling analysis to cloud or high-performance computing: 1–6, 8, 9
Column totals (degrees of freedom): 6, 4, 4, 5, 4, 4, 0, 5, 5
Domains on the NSF/EMBL-ABR surveyCapstone objectives
Teach research skillsAssess possession of research skillsAssess learning of research skillsProvide experience with inquiryAssess/estimate independence in research skillsEngage inquiry- based learningTeach inquiry- based writingFunction formativelyFunction summatively
Publish data to the community
Maintain sufficient data storage
Share data with colleagues
Update/use updated analysis software
Train on data management and metadataxx
Bioinformatics analysis and supportx
Train on basic computing and scriptingxxxxxxxx
Search for data and discover relevant data setsxxxxxxxx
Multistep analysis workflows or pipelinesxx
Train on integration of multiple data typesxxxxxxxx
Use cloud computing
Train on scaling analysis to cloud or high-performance computingxxxxxxxx
644544055
Table 3

Alignment of capstone objectives with bioinformatics resource needs survey results (United States/Australia)

Domains on the NSF/EMBL-ABR surveyCapstone objectives
Teach research skillsAssess possession of research skillsAssess learning of research skillsProvide experience with inquiryAssess/estimate independence in research skillsEngage inquiry- based learningTeach inquiry- based writingFunction formativelyFunction summatively
Publish data to the community
Maintain sufficient data storage
Share data with colleagues
Update/use updated analysis software
Train on data management and metadataxx
Bioinformatics analysis and supportx
Train on basic computing and scriptingxxxxxxxx
Search for data and discover relevant data setsxxxxxxxx
Multistep analysis workflows or pipelinesxx
Train on integration of multiple data typesxxxxxxxx
Use cloud computing
Train on scaling analysis to cloud or high-performance computingxxxxxxxx
644544055
Domains on the NSF/EMBL-ABR surveyCapstone objectives
Teach research skillsAssess possession of research skillsAssess learning of research skillsProvide experience with inquiryAssess/estimate independence in research skillsEngage inquiry- based learningTeach inquiry- based writingFunction formativelyFunction summatively
Publish data to the community
Maintain sufficient data storage
Share data with colleagues
Update/use updated analysis software
Train on data management and metadataxx
Bioinformatics analysis and supportx
Train on basic computing and scriptingxxxxxxxx
Search for data and discover relevant data setsxxxxxxxx
Multistep analysis workflows or pipelinesxx
Train on integration of multiple data typesxxxxxxxx
Use cloud computing
Train on scaling analysis to cloud or high-performance computingxxxxxxxx
644544055

Importantly, the bioinformatics needs surveys were not conducted with the purpose of informing curriculum design or decision-making; the DoFA method, as modified, nonetheless permits their integration into this decision-making. Because no details have yet been articulated about exactly how the capstone might be implemented, Table 3 contains ‘x’s, rather than “scores” or ratings, wherever there might be an opportunity for alignment; numeric ratings are not appropriate here, but the matrix is still useful. The incorporation of a capstone might achieve all nine objectives (columns; see Example 3 below), but these achievements are unlikely to address the first NSF/EMBL-ABR-identified unmet bioinformatics need, publishing data to the community: students are unlikely to generate such data, so the capstone may achieve many objectives, but not that one. However, the need to “train on data management and metadata” could be included as a feature of the capstone experience, and if so, it could plausibly function in either a formative or a summative fashion. “Bioinformatics analysis and support” could be achieved by teaching research skills, a key objective of the capstone; again, the alignment will depend on exactly how it is integrated.

Because Table 3 explores whether the capstone objectives (columns) can also help meet internationally recognized unmet bioinformatics needs (rows), the column marginals are computed by treating each ‘x’ as one point, without consideration of how strongly each need would be met. The column marginals show that one capstone objective (“teach inquiry-based writing”) is unlikely to align with any of these unmet needs, although inquiry-based writing experiences might be created specifically to meet them, e.g. around the search for, and discovery of, relevant data sets. As at most 7 of the 12 bioinformatics needs could be addressed with a capstone experience, a capstone may not be an ideal modification of the curriculum for addressing these unmet needs. Conversely, because bioinformatics skills do not exist in a vacuum, Table 3 can be useful for challenging instructors to ensure that all of the capstone objectives are met in ways that also achieve some learning relating to all of these unmet needs.
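To make this scoring rule concrete, the following is a minimal sketch in Python (the choice of language, and the encoding of the matrix as a mapping from each survey domain to the indices of the objectives marked ‘x’, are assumptions for illustration; DoFA itself prescribes no software):

```python
# Minimal sketch: Table 3 as a mapping from each survey domain to the
# indices of the capstone objectives marked 'x'; column marginals are
# computed by treating each 'x' as one point (no weighting).

objectives = [
    "Teach research skills", "Assess possession of research skills",
    "Assess learning of research skills", "Provide experience with inquiry",
    "Assess/estimate independence in research skills",
    "Engage inquiry-based learning", "Teach inquiry-based writing",
    "Function formatively", "Function summatively",
]

alignment = {
    "Publish data to the community": [],
    "Maintain sufficient data storage": [],
    "Share data with colleagues": [],
    "Update/use updated analysis software": [],
    "Train on data management and metadata": [7, 8],
    "Bioinformatics analysis and support": [0],
    "Train on basic computing and scripting": [0, 1, 2, 3, 4, 5, 7, 8],
    "Search for data and discover relevant data sets": [0, 1, 2, 3, 4, 5, 7, 8],
    "Multistep analysis workflows or pipelines": [0, 3],
    "Train on integration of multiple data types": [0, 1, 2, 3, 4, 5, 7, 8],
    "Use cloud computing": [],
    "Train on scaling analysis to cloud or high-performance computing": [0, 1, 2, 3, 4, 5, 7, 8],
}

marginals = [0] * len(objectives)
for marked in alignment.values():
    for j in marked:
        marginals[j] += 1  # each 'x' contributes one point to its column

for name, total in zip(objectives, marginals):
    print(f"{name}: {total}")  # reproduces the column totals 6,4,4,5,4,4,0,5,5
```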

These examples do not include observed data, so an authentic example of the application of the method to decision-making follows in the next sections. In this last example, the method is used to study the question of whether scaffolding (described below) in the capstone limits the potential for useful summative assessment, and if so, to what extent. Unlike the other two examples, the exact implementation of the capstone is known (and described below), so the method is used to determine whether the assessment as implemented is aligned with the capstone objectives. If it is aligned, the analysis provides evidence of this; if it is not, the analysis provides direction for what might need to change so that the assignment and its learning objectives will be better aligned in the future.

Scaffolding is instructor-mediated construction of knowledge or skills, where these are specific learning goals [23, 24]. The instructor targets individual students with specific modeling of the target skill or behavior (e.g. conceptual understanding, a specific skill), providing just enough instruction for the individual to develop the target. Originally proposed in the early 1970s and still current [25], scaffolding is a pedagogic construct. If the capstone is scaffolded, and thereby formative, the extent of scaffolding each student requires could be considered to generate “actionable evidence” [15, 16] that helps the instructor better prepare learners for success on the project.

Scaffolding in a written project (or a project involving writing) comes in the form of draft reviews, often individualized, that the instructor or grader provides for each student to facilitate the optimal final product. Formal scaffolding can take place through feedback on drafts, through self-assessment using a predefined rubric [26], or through a combination of these.

Scaffolding may be important in capstone projects, but the extent to which scaffolding is provided may limit the potential for the project to serve as a summative assessment. A formative assessment, in contrast to the summative type, is intended to provide specific feedback to the student in real time, to facilitate and improve learning; a formative assessment therefore necessarily cannot also provide summative evidence about whether, or how much, learning has taken place. In that sense, this example can both answer the instructor’s question (“is the assessment aligned with the learning goals?”) and address an educational research question about the role of scaffolding in the capstone experience.

Method

Degrees of freedom analysis

The steps for executing a DoFA, with the previously identified modifications, were followed using observational data collected from a course (described below).

Example 3. Case study of data collected from a graduate course.

A capstone project was used as the ‘Final Exam’ for a 12-student, graduate-level introductory biostatistics course. For 3 of the final 4 weeks of the 15-week semester, a new component of this multipart assignment was the focal point of email-based one-on-one discussions between the instructor and the student:

  1. Identify a research question that is relevant to the student, including the motivation and background for the question;

  2. Identify the appropriate statistical test to answer the question, including a description of data required (type/amount) and consideration of assumptions and contingencies for the specified statistical method; and

  3. Integrate elements 1 and 2 into an overall design of the student’s ‘dream study’, including the refined question (in light of statistics and data), the final choice of inference test plus a contingency plan (i.e. in case assumptions are not met, or sensitivity analyses), plus a power calculation (a sketch of such a calculation follows this list).
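The power calculation in subtask 3 can be done with standard software; the following is a minimal sketch using Python’s statsmodels package, where the two-sample t-test, effect size, alpha and target power are hypothetical choices for illustration and are not taken from the assignment:

```python
# A hypothetical power calculation of the kind subtask 3 requires.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()  # power analysis for a two-sample t-test
n_per_group = analysis.solve_power(
    effect_size=0.5,          # hypothesized Cohen's d (assumed)
    alpha=0.05,               # significance level (assumed)
    power=0.80,               # target power (assumed)
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~63.8
```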

In the fourth week, the graded assignment was a 10–15 min presentation based on the work completed in weeks 1–3; this component did not receive any input from the instructor. Students had 1 week to complete each subtask; subtasks were completed iteratively, with as much scaffolding as the instructor judged warranted: students submitted work that was returned with comments or suggestions until both instructor and student were satisfied with each subtask.

The four-part assignment was intended to authentically assess, encourage and model a stepwise approach to analysis, clear and careful thinking, and the statistical methods and fluency that the course was meant to develop in students.

Creating the DoFA prediction matrix

A DoFA prediction matrix was constructed by using the nine capstone objectives (DoFA Step 1), which are taken as the predictions that are plausible if the capstone is functioning as intended, given the cases (data in rows) represented by 12 Master’s degree students completing the capstone project at the end of one course (DoFA Step 2). The data (DoFA Step 3) were observations of the 12 presentations, rated by the instructor according to whether they could be useful for determining whether any of the nine objectives were met for that student [Yes (Y) = 1; No (N) = 0; Partially (P) = 0.5] (DoFA Step 4). In this example, only the marginals of the columns were important for decision-making. Marginals of the rows would summarize the individual student’s consistency with the capstone objectives outlined above; however, these would not be informative about student performance because the columns relate to the assignment, and not to its execution (i.e. student grades on the assignment were derived with a presentation-specific rubric). Similarly, the student-level summaries (row marginals) would not permit summarization of the consistency of the assignment with the column features.
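The construction of this matrix and its column marginals can be made explicit with a minimal sketch (Python; the encoding below is an assumption for illustration, using the rating patterns reported in Table 4):

```python
# Case-level DoFA prediction matrix (cf. Table 4), scored Y=1, P=0.5, N=0.
score = {"Y": 1.0, "P": 0.5, "N": 0.0}

# One string of nine ratings per case, ordered by the nine capstone objectives.
single_scaffolded = "PPPYNYPYN"   # pattern observed for Cases 1-9 (Model 2)
new_final_project = "YYYYYYYYY"   # pattern observed for Cases 10-12 (Model 4)
cases = [single_scaffolded] * 9 + [new_final_project] * 3

# Column marginals summarize how informative the assignment is about each
# objective across all 12 cases (row marginals are not used; see text).
column_marginals = [sum(score[case[j]] for case in cases) for j in range(9)]
print(column_marginals)  # e.g. 7.5 for "teach research skills"
```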

The evidence derived from the student case-level data was then summarized in a second DoFA prediction matrix. Starting again at DoFA Step 1, further specification of (and familiarity with) scaffolding created four options for the decision about the role of scaffolding in the capstone. The second DoFA matrix examined alignment with the capstone objectives [20, 21], now placed in the rows, because the decision is now about which model of scaffolding (column options) to use, from four general models for the inclusion of scaffolding in the capstone:

  1. Set aside a single, unscaffolded and thereby summative, capstone at the end of the program (or course) as the sole inquiry-based exercise (e.g. in Coursera specializations).

  2. Set aside a single scaffolded capstone opportunity at the end of the program or course as the sole inquiry-based exercise (i.e. the intention of this course).

  3. Provide a series (>1) of discrete, equally scaffolded capstone exercises over time (e.g. some PhD programs that require multiple publications by each student to be synthesized into one thesis).

  4. Provide a series (>1) of discrete capstone exercises over time with more scaffolding for the first and none for the last (e.g. some PhD programs require a “master’s thesis” which is heavily scaffolded, and then require the PhD thesis to be independently done).

These examples of the integration of scaffolding into capstone experiences represent approaches that are in use across disciplines, and can be observed at universities across the United States.

Example 3 results

The capstone in the course was designed as a single, scaffolded opportunity (i.e. scaffolding Model 2). The data from student presentations are shown in DoFA prediction matrices in Table 4.

Table 4

Cases and their consistency with/support for capstone learning objectives

| Cases | Teach research skills | Assess possession of research skills | Assess learning of research skills | Provide experience with inquiry | Estimate independent research skills | Engage inquiry-based learning | Teach inquiry-based writing | Capstone functions formatively | Capstone functions summatively |
|---|---|---|---|---|---|---|---|---|---|
| Cases where capstone was a single, scaffolded experience | | | | | | | | | |
| Case 1 | P | P | P | Y | N | Y | P | Y | N |
| Case 2 | P | P | P | Y | N | Y | P | Y | N |
| Case 3 | P | P | P | Y | N | Y | P | Y | N |
| Case 4 | P | P | P | Y | N | Y | P | Y | N |
| Case 5 | P | P | P | Y | N | Y | P | Y | N |
| Case 6 | P | P | P | Y | N | Y | P | Y | N |
| Case 7 | P | P | P | Y | N | Y | P | Y | N |
| Case 8 | P | P | P | Y | N | Y | P | Y | N |
| Case 9 | P | P | P | Y | N | Y | P | Y | N |
| Capstone as a series of >1 experiences with more scaffolding early and none at the end | | | | | | | | | |
| Case 10 | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Case 11 | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Case 12 | Y | Y | Y | Y | Y | Y | Y | Y | Y |

Notes: Consistency of the case with each capstone learning objective: Y = yes (consistent with/supportive of that objective); P = partially (partially consistent with/supportive of that objective); N = no (neither consistent with nor supportive of that objective).

Two of the four possible scaffolding models, not just the intended one, were observed in the data. The final in-class presentation followed Model 2 for just 9 of the 12 students. Unexpectedly, the other three students (without notifying the instructor) developed a new project after completing the three subtasks with extensive scaffolding; only these three followed scaffolding Model 4, providing an unanticipated opportunity to explore how scaffolding affects the ability to study whether a capstone satisfies its objectives. Students whose final presentation followed the plan established over the course of completing the three assignment subparts clearly showed that they could integrate the elements of a complex argument into a coherent whole. However, only the three who created new study designs provided the summative assessment that was intended. The fact that only two of these three achieved the target skill set provides actionable evidence for the instructor: the course as given does not provide sufficient training in self-assessment about study planning, and the capstone project as intended is not summative.

Given the data from Table 4 about two of four scaffolding-in-capstone models, Table 5 shows the alignment with the capstone objectives achieved by the four models.

Table 5

Consistency of scaffolding models with objectives of a capstone experience

| Nine objectives of the capstone | Model 1: One opportunity without scaffolding (a) | Model 2: One opportunity with scaffolding | Model 3: Multiple opportunities with equivalent scaffolding (a) | Model 4: Multiple opportunities with decreasing scaffolding |
|---|---|---|---|---|
| 1. Teach research skills | No | Partially | Yes | Yes |
| 2. Assess possession of research skills | Yes | Partially | Yes | Yes |
| 3. Assess learning of research skills | No | Partially | Partially | Yes |
| 4. Provide experience with inquiry | Partially/no | Yes | Yes | Yes |
| 5. Assess/estimate independent research skills | Yes | No | Partially | Yes |
| 6. Engage inquiry-based learning | No | Yes | Yes | Yes |
| 7. Teach inquiry-based writing | No | Partially | Yes | Yes |
| 8. Capstone functions formatively | No | Yes | Yes | Yes |
| 9. Capstone functions summatively | Yes | No | No | Yes |
| Total consistency score | 3/9 | 4/9 | 6/9 | 9/9 |

(a) These models were not observed in the data; their “hits” and “misses” are inferred and could be tested in a future empirical study.

Evidence was obtained for two of the four models of scaffolding, and the structure of the matrix, following the modifications to the DoFA method in Table 1, permits evidence-informed, logical inferences about the alignment of the other two models with the capstone objectives. Table 5 shows that these two unobserved models involve the least (one opportunity, no scaffolding) and the second-greatest (multiple opportunities, equal scaffolding) alignment with the capstone objectives. Although these models were not observed in the current case, they can be observed in capstone assignments in use today, and the DoFA results for their alignment with the capstone objectives are interpretable and plausible; Table 5 thus provides evidence about how and why a capstone should be included in a course or program of study.

The analysis also shows that, although it succeeded as a formative assessment, the capstone in this case failed as a summative assessment for the skills of greatest interest. This is only demonstrable within the DoFA (Table 5), and would not have been observable had the DoFA not followed the modifications articulated in Table 1, which allowed models of scaffolding, rather than theory, to be better understood. This case study suggests that only Model 4 permits the assessment of learning of research skills; this design also results in the greatest alignment (9/9) with the objectives of using a capstone (according to [20, 21]). As designed, the scaffolded, one-time assignment discussed in the case analysis achieves only 4 of the 9 capstone learning objectives. If a summative assessment of whether research skills have been learned is one of the purposes, or the purpose, of including a capstone, the project should follow Model 4 in terms of scaffolding.

Discussion and conclusions

The DoFA method, with slight modifications in construction, can be used to summarize the qualitative data that are commonly collected in higher, graduate and postgraduate education and training. The two hypothetical examples used the results of an extensive review of the literature on adult learning ([12], Example 1), and of an expert panel report on the learning objectives of the capstone [20, 21] together with two national surveys of unmet bioinformatics needs ([14, 22], Example 2). The third example featured observational data about an actual assessment and a research question.

The DoFA method enables the analysis and summarization of qualitative data in a systematic way, but effort is required to identify the pedagogical principles or educational theories against which decision options can be evaluated for their alignment. It can be challenging for decision makers to articulate these decisions, or their options, sufficiently for this evaluation of alignment with relevant theory; however, this articulation process can be leveraged to increase buy-in from faculty ([27], ch. 1) as well as from students. Both of these aspects of the method (i.e. identifying relevant theory, and explicating decisions to be made) can benefit from specific expertise in the educational/cognitive domains, and/or from consensus among instructors on the options or on the ratings of how consistent the options or the data are with dimensions of the selected theories. As with all qualitative research, transparent reporting of methods and full consideration of plausible alternatives in the analysis are needed for interpretable, defensible results.
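Where consensus among instructors is used, the degree of agreement between raters can be checked before the matrix is interpreted. The following is a minimal, hypothetical sketch (plain Python; the ratings are invented for illustration) using simple proportion agreement, for which a chance-corrected statistic such as Cohen’s kappa could be substituted:

```python
# Hypothetical ratings of nine predictions by two instructors.
rater_a = ["Y", "P", "N", "Y", "P", "Y", "N", "Y", "Y"]
rater_b = ["Y", "P", "P", "Y", "P", "Y", "N", "Y", "N"]

# Proportion of predictions on which the two raters agree exactly.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Proportion agreement: {agreement:.2f}")  # 7/9 = 0.78
```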

Decision-making can use qualitative data, even for training in quantitative sciences like bioinformatics. The DoFA method can summarize qualitative evidence, without collecting data (Example 1), based on survey results (Example 2), or with observed data (Example 3) to support planning and decision-making in courses or curricula. The method can also promote formal evaluation of those decisions, encouraging evidence-informed excellence in bioinformatics training and education.

Key Points

  • Survey data are commonly collected in education and training; however, perhaps especially in the quantitative sciences, utilization of these qualitative data for decision-making can be challenging; nevertheless, it can be done.

  • An established method for the analysis of qualitative data to inform decisions is the DoFA, initially published in 1975; this qualitatively focused method is unlikely to be discovered by quantitative scientists, but it is important for analyzing and interpreting survey and other educational data obtained from teaching or training, for example in bioinformatics.

  • The method identifies and aligns theoretical or applied principles with qualitative data, transforming survey and evaluation results, and similar data, into interpretable results for evidence-informed decision-making in education and training.

  • Important aspects of using this method include (a) effort is required to identify pedagogical principles or educational theories with which decision options can be evaluated for their alignment; (b) it can be challenging for decision makers to articulate decisions, or their options, sufficiently for this evaluation of alignment with relevant theory; and (c) expertise and/or consensus among instructors may be needed for interpretable results. These features can promote reliability and validity of educational decisions and support formal evaluation of the outcomes of those decisions.

Rochelle E. Tractenberg is a cognitive scientist focusing on learning and assessment in higher education, and a research methodologist accredited as a Professional Statistician by the American Statistical Association. Collaborative for Research on Outcomes and Metrics; Departments of Neurology; Biostatistics, Bioinformatics & Biomathematics; and Rehabilitation Medicine, Georgetown University Medical Center, Washington, DC.

References

1. Campbell CE, Nehm RH. A critical analysis of assessment quality in genomics and bioinformatics education research. CBE Life Sci Educ 2013;12:530–41.

2. Campbell DT. “Degrees of freedom” and the case study. Comp Polit Stud 1975;8:178–93.

3. Wilson EJ, Wilson DT. “Degrees of freedom” in case research of behavioral theories of group buying. In: Houston MJ (ed), Advances in Consumer Research, Vol. 15. Provo, UT: Association for Consumer Research, 1988, 587–94.

4. Wilson EJ, Vlosky RP. Partnering relationship activities: building theory from case study research. J Bus Res 1997;39:59–70.

5. Wilson EJ, Woodside AG. Degrees-of-freedom analysis of case data in business marketing research. Ind Mark Manage 1999;28:215–29.

6. Woodside AG, Wilson EJ. Case study research for theory-building. J Bus Ind Mark 2003;18:493–508.

7. Woodside AG. Case Study Research: Theory, Methods and Practice. Bingley: Emerald Group, 2010.

8. National Research Council. A New Biology for the 21st Century: Ensuring the United States Leads the Coming Biology Revolution. Washington, DC: National Academies Press, 2009.

9. De Veaux RD, Agarwal M, Averett M, et al. Curriculum guidelines for undergraduate programs in data science. Annu Rev Stat Appl 2017;4:15–30. doi:10.1146/annurev-statistics-060116-053930. http://www.amstat.org/asa/files/pdfs/EDU-DataScienceGuidelines.pdf (date last accessed 2 January 2017).

10. Tractenberg RE, Gordon M. Supporting evidence-informed teaching in biomedical and health professions education through knowledge translation: an inter-disciplinary literature review. Teach Learn Med 2017;29:268–79. http://www.tandfonline.com/doi/full/10.1080/10401334.2017.1287572 (published online 30 March 2017).

11. Tractenberg RE, FitzGerald KT, Collmann J. Evidence of sustainable learning with the Mastery Rubric for Ethical Reasoning. Educ Sci 2017;7(1):2. doi:10.3390/educsci7010002. http://www.mdpi.com/2227-7102/7/1/2

12. Ambrose SA, Bridges MW, DiPietro M, et al. How Learning Works: Seven Research-Based Principles for Smart Teaching. San Francisco, CA: Jossey-Bass, 2010.

13. Moseley D, Baumfield V, Elliott J, et al. Frameworks for Thinking: A Handbook for Teaching and Learning. Cambridge: Cambridge University Press, 2005.

14. Schneider MV, Flannery M, Griffin P. Survey of Bioinformatics and Computational Needs in Australia, 2016. https://dx.doi.org/10.6084/m9.figshare.4307768.v1 (date last accessed 21 March 2017).

15. Hutchings P, Kinzie J, Kuh GD. Evidence of student learning: what counts and what matters for improvement. In: Kuh GD, Ikenberry SO, Jankowski NA (eds), Using Evidence of Student Learning to Improve Higher Education. Somerset, NJ: Jossey-Bass, 2015, 27–50.

16. National Institute for Learning Outcomes Assessment. Higher Education Quality: Why Documenting Learning Matters. Urbana, IL: University of Illinois; Indiana University; National Institute for Learning Outcomes Assessment, 2016.

17. William and Flora Hewlett Foundation. Evaluation Principles and Practices: An Internal Working Paper, 2012. http://www.hewlett.org/wp-content/uploads/2016/08/EvaluationPrinciples-FINAL.pdf (date last accessed 5 February 2017).

18. Tractenberg RE. How the Mastery Rubric for Statistical Literacy can generate actionable evidence about statistical and quantitative learning outcomes. Educ Sci 2017;7(1):3. doi:10.3390/educsci7010003. http://www.mdpi.com/2227-7102/7/1/3/pdf

19. Nilson LB. Teaching at Its Best: A Research-Based Resource for College Instructors, 2nd edn. Bolton, MA: Anker Publishing Company, 2003.

20. Boyer Commission on Educating Undergraduates in the Research University. Reinventing Undergraduate Education: A Blueprint for America's Research Universities. Stony Brook, NY: Boyer Commission, 1998. http://files.eric.ed.gov/fulltext/ED424840.pdf (date last accessed 4 March 2015).

21. Educational Effectiveness Working Groups, UC Berkeley. UC Berkeley Educational Effectiveness Report, 2003, 4–17. californiacompetes.org/wp-content/uploads/2012/01/UC%20Berkeley%20-%20Self%20Study.pdf (date last accessed 4 March 2015).

22. Barone L, Williams J, Micklos D. Unmet needs for analyzing biological big data: a survey of 704 NSF principal investigators. bioRxiv 2017;108555. doi:10.1101/108555 (date last accessed 21 March 2017).

23. Vygotsky LS. Mind in Society. Cambridge, MA: Harvard University Press, 1978.

24. Chaiklin S. The zone of proximal development in Vygotsky's analysis of learning and instruction. In: Kozulin A, Gindis B, Ageyev V, et al. (eds), Vygotsky's Educational Theory and Practice in Cultural Context. Cambridge: Cambridge University Press, 2003.

25. Wells G, Edwards A (eds). Pedagogy in Higher Education: A Cultural Historical Approach. Cambridge: Cambridge University Press, 2015.

26. Stevens DD, Levi A. Introduction to Rubrics, 2nd edn. Sterling, VA: Stylus Publishing, 2012.

27. Diamond RM. Designing and Assessing Courses and Curricula, 3rd edn. San Francisco, CA: Jossey-Bass, 2008.