Rochelle E Tractenberg, Degrees of freedom analysis in educational research and decision-making: leveraging qualitative data to promote excellence in bioinformatics training and education, Briefings in Bioinformatics, Volume 20, Issue 2, March 2019, Pages 416–425, https://doi.org/10.1093/bib/bbx106
Abstract
Qualitative data are commonly collected in higher, graduate and postgraduate education; however, perhaps especially in the quantitative sciences, using these qualitative data for decision-making can be challenging. One method for the analysis of qualitative data is the degrees of freedom analysis (DoFA), first published in 1975. Given its origins in political science and its application mainly in business contexts, the DoFA method is unlikely to be discoverable or used to understand survey or other educational data obtained from teaching, training or evaluation. This article therefore introduces and demonstrates the DoFA, with modifications specifically to support educational research and decision-making, using examples in bioinformatics. DoFA identifies and aligns theoretical or applied principles with qualitative evidence. The demonstrations include two hypothetical examples and a case study of the role of scaffolding in the independent project (‘capstone’) of a graduate course in biostatistics. Included to promote inquiry, inquiry-based learning and the development of research skills, the capstone is often scaffolded (instructor-supported, and therefore formative), although it is intended to be summative. The case analysis addresses the question of whether the scaffolding provided for a capstone assignment affects its utility for formative or summative assessment. The DoFA is also used to evaluate the relative efficacy of other models for scaffolding the capstone project. These examples are intended both to explain the method and to demonstrate how it can be used to make decisions within a curriculum or for bioinformatics training.
Introduction
Survey and other qualitative data are commonly collected in higher, graduate and postgraduate education, e.g. as end-of-term evaluations of instruction. Campbell and Nehm [1] point out that, while many papers on genomics and bioinformatics education (GBE) research were published between 1995 and 2010, few of these related to postgraduate training or to education outside of formal educational settings. One reason for this may be that the data collected from training take the form of surveys, and tend not to be the type of valid or reliable assessment of whether the training has had “a meaningful impact” on the learners (p. 530). However, surveys are common in higher education, while methodology for the appropriate analysis of those data is relatively uncommon in GBE.
Using qualitative data, such as survey results, for decision-making can be challenging. Educational decisions can include “is my assessment aligned with my teaching goals?”, “should I use teaching method X or Y?” or “will changing the curriculum help to achieve specific teaching goals?”. An established method for the analysis of qualitative data to inform decisions is the degrees of freedom analysis (DoFA), initially published in 1975 [2–7] but used almost exclusively in business applications. As such, it is unlikely to be recognized, or even discoverable, by those in other fields seeking to understand survey or other educational data they obtain from teaching, training, assessment, evaluation or other common contexts. Additionally, qualitative methods such as DoFA may not be within the scope of ‘analysis tools’ that many quantitative investigators find useful (or find at all).
However, DoFA is a potentially important tool for both research and decision-making in the context of training and education. Fundamentally, DoFA identifies and aligns theoretical or applied principles with qualitative evidence; so, it can provide organization and structure for formulating, collecting evidence about and analyzing educational decisions. For example, disciplines such as biology, statistics, medicine and economics may be contemplating whether and how to integrate computational methods, training or courses into established degree or certificate programs [8, 9]; evidence that can both support the decision and inform how, when and to what extent the integration should be done will necessarily be qualitative (as evidence for educational decisions about topical coverage and course offerings often is). The DoFA method can support the use of educational theory in decisions about teaching and learning across disciplines; it can also help leverage (or identify data collection options for) classroom research. The DoFA method is therefore introduced and demonstrated in this article, with modifications that promote its utility in educational decision-making and research. The modified method is demonstrated with two hypothetical examples and a case study about the role and structure of scaffolding in the final, independent project (“capstone”) of a graduate course in biostatistics for life science students.
DoFA method and modifications for educational decision-making and research
The DoFA [2] is a method of qualitative analysis that was originally intended for theory building [2–7]. As originally formulated, the DoFA uses a matrix to align qualitative data (observations) with theory or theoretical predictions; in this manner, the relative strengths of evidence for (or against) competing theories can be evaluated. However, the method can accommodate a wide variety of “data”, including summaries of literature [10] and interview results [11]; it is therefore also useful for understanding a wide array of evidence (from a variety of sources) for and against a particular hypothesis or, in educational decision-making, pedagogic strategy [10]. Most of the DoFAs published to date articulate or follow these steps (taken from [7], p. 244):
1. Investigator becomes familiar with the existing knowledge base about the phenomenon of interest.
1A. Familiarity with this knowledge base identifies at least one theory; theoretical features are used to construct the prediction matrix (Step 2).
2. Create a prediction matrix, which captures all of the relevant theoretical elements of one or more (competing) theoretical frameworks. The theoretical elements, or predictions, about which the evidence will be reviewed (for or against), make up the columns of this matrix.
3. Data relating to the theory/theories and their predictions are collected in the rows. At this point, a matrix with columns representing theories, and rows representing evidence (data), has been constructed.
4. Trained judges evaluate each piece of evidence (collected systematically in Step 3) and determine (independently) whether a given piece of evidence provides support for one (or more) of the theories. Judges’ ratings fill the cells of the matrix. In this step, ratings are “yes” (1), “no” (0) or “partly” (0.5), characterizing the alignment of the evidence with each element of the theory/theories. The judges need to have been trained to an acceptable level of skill in the evaluation of the evidence, and they must also be familiar with the theories (columns).
4A. The agreement among judges must be assessed, and one of three options chosen: the judges reach consensus; the average of their ratings (0, 0.5, 1) is used; or the evidence is deemed “uninformative”, which is itself a result.
5. The “degrees of freedom” are then computed by summing the “points” in each column (the column marginals); the theory (column) with the highest total evidence support is the “best-supported theory”. Moreover, depending on how advanced the theory used to create the predictions is, the alignment of the collected data can also be summed in the marginal for each row. Row marginals then give an idea of which pieces of evidence were most useful in distinguishing the columns: if a piece of evidence receives the same rating (Step 4) in every column, its row marginal shows that it does not distinguish among the column options. If desired, the marginals can be analyzed with a chi-square test to support the choice of one theory over another.
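To make the mechanics of Steps 2–5 concrete, the following is a minimal sketch in Python, assuming the pandas and SciPy libraries are available; the theory names, evidence labels and ratings are invented for illustration and do not come from any real study.

```python
import pandas as pd
from scipy.stats import chisquare

# Steps 2-3: prediction matrix skeleton -- columns are competing theories,
# rows are pieces of qualitative evidence (all labels hypothetical).
evidence = ["interview A", "survey item 1", "survey item 2", "field note B"]
theories = ["theory X", "theory Y"]

# Step 4: two trained judges independently rate each piece of evidence
# against each theory: 1 = "yes", 0.5 = "partly", 0 = "no".
judge1 = pd.DataFrame([[1.0, 0.0], [0.5, 0.5], [1.0, 0.0], [0.0, 1.0]],
                      index=evidence, columns=theories)
judge2 = pd.DataFrame([[1.0, 0.0], [0.5, 0.0], [1.0, 0.5], [0.0, 1.0]],
                      index=evidence, columns=theories)

# Step 4A: here, disagreement is resolved by averaging the judges' ratings.
matrix = (judge1 + judge2) / 2

# Step 5: the "degrees of freedom" are the column marginals; the column with
# the largest total is the best-supported theory. A row rated the same in
# every column does not discriminate among the theories.
col_marginals = matrix.sum(axis=0)
row_marginals = matrix.sum(axis=1)
print(matrix, col_marginals, sep="\n\n")
print("Best-supported theory:", col_marginals.idxmax())

# Optional: chi-square test of whether support is evenly split across columns.
print(chisquare(col_marginals))
```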
These steps, which appear in nearly all published applications (almost all addressing business questions), support the development or selection of theory. However, in education and training, the focus is rarely to formulate new theories of education; therefore, one adaptation of the DoFA method is to take features of existing educational and cognitive frameworks and construct a prediction matrix that enables exploration of the alignment, or consistency, of data from educational activities with those theoretical frameworks or their principles. The method can also be used to organize and synthesize otherwise difficult-to-summarize qualitative data, such as survey results. To apply the DoFA method to support decision-making in educational contexts (hypothetical and case study examples follow), these modifications are recommended:
A. The decision(s) to be made should be the prediction matrix columns, and the rows should represent some existing educational theoretical framework based on which decisions can be made, and evaluated. Then, the evidence (rows) is assembled, and its contribution to the decision-making is made explicit.
This modification is subtle but important, because it can be difficult (and in some cases undesirable, beyond the scope of the problem, or both) to identify the theory underpinning decisions about teaching and learning in education and training. Instead, this modification promotes the alignment of the evidence with the educational decision that is motivating the study. However, a specific decision may not yet be formulated at the time the data are/were collected; the method (with this modification) can also support the alignment of evidence (rows) with structural or other features of the education or training challenge being faced. To do this, at least two options or alternatives representing the decision to be made must be identified. For example, if a program is interested in whether to integrate experience with bioinformatics (EwB) training opportunities into a “traditional” biology undergraduate course, the two obvious options are “integrate experience” and “leave teaching as it is”. Alternatively, this decision can be framed as having three options: “integrate experience with bioinformatics into every course”, “integrate bioinformatics into some courses” and “do not integrate bioinformatics”. For the standard DoFA, data and theory must already be in their analyzable states; this modification makes DoFA available to decision makers before the collection of data, and possibly even before pedagogic or cognitive frameworks have been identified (neither of which is possible for the business application).
B. A second modification to the DoFA method is to identify at least one theoretical or empirical framework to evaluate the alignment of each decision option (columns; e.g. to fully, partially or not integrate bioinformatics experience) with the data—which, again, can be relevant literature [12, 13] or survey results [14] or other qualitative information (see Example 3 below). By articulating the educational decision to be made (columns) and setting out to evaluate the alignment of the evidence in the literature or from surveys (rows) for or against the options that the decision entails, these modifications to the DoFA procedure concretely and explicitly promote evidence-informed, and thereby justifiable, decision-making in teaching and assessment (consistent with principles of actionable evidence [15, 16]).
Table 1 shows the modifications as described, with notes on their importance and use (a brief computational sketch follows the table). Modification A permits the alignment, and subsequent evaluation, of evidence with the options under consideration, depending on the data that were (or can be) collected and the need for a decision. Modification B can help to identify what evidence is lacking, possibly suggesting classroom action research questions or data that should be collected from students to support, or evaluate, the decision that is made. Neither of these arises in the original formulation of the method, which takes place after data and theory are complete and articulated. With these modifications, identifying what evidence is lacking can itself be a result of the DoFA, pointing to next steps or potentially to the need for pilot data. Importantly, Modification B may yield a second DoFA matrix ([11], and see Example 3 below): the first aligns the evidence with the decisions, and the second aligns the decisions with theoretical (or other) information that can provide additional validity and evidence for the decision suggested by the first DoFA matrix.
Table 1. Steps in DoFA: original (theory building/testing) and modifications for educational decisions

| Original (Woodside, 2010) | Modification (this article) | Notes on modifications |
|---|---|---|
| 1. Investigator becomes familiar with the existing knowledge base about the phenomenon of interest. Familiarity with this knowledge base may identify competing theories, which are used to construct the prediction matrix (Step 2) | 1. Have/gain familiarity with the existing knowledge base about the phenomenon of interest, sufficient to describe at least two options that represent the theory to be tested or decision to be made (columns). The prediction matrix is started here in Step 1 | The decision(s) to be made should be the columns of this matrix; the rows (created in Step 2) should then represent an educational theoretical framework based on which decisions can be made, and evaluated. This modification permits the alignment and subsequent evaluation of the options under consideration. It is crucial not to conflate what appears in the rows with what appears in the columns |
| 2. Create a prediction matrix, which captures all of the relevant theoretical elements of one or more (competing) theoretical frameworks. The theoretical elements, or predictions about which the evidence to be reviewed will provide evidence (for or against), make up the columns of this matrix. Data (Step 3) make up the rows | 2. Add rows to help make the decision from Step 1. Identify at least one theory or framework that can inform, or help justify, the decision; alternatively, add observations (data) as rows. This prediction matrix permits a visual and computational alignment of the decision (columns) with the evidence to be reviewed (rows; either theoretical features or data) | The second modification to the DoFA method is to identify at least one theoretical or empirical framework to evaluate the alignment of each decision option (columns) with the theory, survey results or other qualitative information. Nonoverlapping theory elements should be articulated clearly, so their alignment with the decision options can be perceived |
| 3. Data relating to the theory/theories and their predictions are collected, and each observation becomes one row; alternatively, cases or groups are summarized in each row. At this point, a matrix with columns representing theories and rows representing evidence has been constructed | 3. Evaluation of the alignment of the features of the chosen theory (or theories, in multiple matrices) with the decision options is now possible. The rating system to be used (e.g. 0 for “no alignment”; 0.5 for “some” or “possible alignment”; and 1 for “full alignment”) should be determined before the evaluation (Step 4) | It is possible to identify alignment between different options and different theoretical frameworks, or between each option of the decision and theory in one matrix and survey responses in another |
| 4. Trained judges (at least two) evaluate each piece of evidence (rows) and independently determine whether a given piece of evidence provides support for one (or more) of the theories. Judges fill in the cells of the matrix with “yes” (1), “no” (0) or “partly” (0.5). The judges need to have been trained to an acceptable level of skill in the evaluation of the evidence, and they must also be familiar with the theories (columns). 4A. The agreement among judges must be assessed: either they reach consensus, the average of their ratings is used, or the evidence is considered “uninformative” | 4. At least one independent judge evaluates each theory element or observation (rows) with respect to the decision options (columns), according to the a priori rating scale. It is helpful to consult an expert on education theory (e.g. an institutional resource or colleague); otherwise, consensus among those involved in the decision-making itself (at least two) is advisable. Including an explanation of the rating in each cell can help explicate the choice of ratings | In classroom- or course-based analyses, finding an independent judge who is sufficiently familiar with the evidence and the decision to be made can be challenging. For course-specific data, it is important that the judge (the instructor) be sufficiently objective for plausible and interpretable results. Collaborators with expertise in the educational context of the decision to be made can be as important as those with expertise in educational or cognitive theories |
| 5. The “degrees of freedom” can be computed by summing the “points” in each column, the marginals. Column marginals help identify the theory (column) with the highest total evidence support, the “best-supported theory”. Row marginals can be useful to identify the most “theoretically consistent” observations, if that is of interest. If useful, a chi-square statistic can be computed and its P-value estimated | 5. The “degrees of freedom” are computed as the column marginals; however, simple visualization (e.g. one column has all “no”/0s and the other has a mix) may make marginals redundant. Column marginals are important because they highlight the decision option that is most consistent with theory; row marginals are less so | The filled-in prediction matrix, and not its statistical analysis, can support decision-making without marginals, or even point to a need for more data or another theory. The identification of theory elements (rows) that are not aligned with any decision (row marginal = 0) can help determine whether additional theories are needed or whether one decision option is simply not consistent (aligned) with theory. Although a chi-square analysis is possible, it is not interpretable in the decision-making context |
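As a sketch of the modified structure in Table 1, the matrix below puts two hypothetical decision options in the columns and elements of an educational framework in the rows, and stores a rationale alongside each numeric rating (as the notes for Step 4 suggest). All names, ratings and rationales are illustrative assumptions, not results.

```python
import pandas as pd

# Modified Step 1: the decision options become the columns.
options = ["integrate EwB", "leave teaching as it is"]

# Modified Step 2: elements of an educational framework become the rows.
principles = ["prior knowledge helps", "motivation is promoted",
              "mastery is supported"]

# Modified Steps 3-4: rate alignment (0 / 0.5 / 1), keeping the rationale
# for each rating in a parallel table of the same shape.
ratings = pd.DataFrame([[1.0, 0.0], [0.5, 0.0], [0.0, 0.0]],
                       index=principles, columns=options)
rationale = pd.DataFrame(
    [["EwB builds prior knowledge", "relies on self-study"],
     ["authentic problems motivate", "future needs unmet"],
     ["exposure is too superficial", "domain role ignored"]],
    index=principles, columns=options)

# Modified Step 5: column marginals point to the option most consistent with
# the framework; a row of zeros flags a principle that no option satisfies.
print(ratings.sum(axis=0))
unmet = ratings.index[ratings.sum(axis=1) == 0]
print("Principles no option satisfies:", list(unmet))
print(rationale.loc[unmet[0], :])  # why neither option aligns
```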
Example 1. “Should we incorporate experience with bioinformatics?”
As a hypothetical example using the modified DoFA method, consider a biology department facing a decision about whether, or to what extent, to incorporate EwB into a course or a curriculum. This oversimplified example is not especially authentic but is informative with respect to the method. It is essential to articulate what exactly it will mean to “integrate experience with bioinformatics”: e.g. will it change all course structures, will it change the curricular sequencing (the order of courses, or of topics within a course) and/or perhaps the assessments that are used? What exactly will it look like to “integrate experience with bioinformatics”? Possibly more importantly, what does it mean for the instructor or program to “leave teaching as it is”? This is the first opportunity to formally consider the details of the decision whose support motivated the analysis in the first place, which is not a feature of the standard approach to DoFA. The decision options will organically become the columns of the eventual degrees of freedom prediction matrix; thus, Modification A is important for the DoFA in decision-making in this example.
It might be desirable to examine the evidence that EwB is consistent with key principles of andragogy, or with the development and promotion of expertise (in the given content area); each of these entails its own theoretical principles (the rows). Articulating the principles associated with any of these (andragogy, promotion of expertise) aligns the specific evidence with the options, showing what support (if any) is available for each option in the decision to be made.
The hypothetical decision about whether to integrate EwB (columns) can be explored with a prediction matrix that uses the framework provided by the seven principles of “how learning works”, which is a synthesis of the empirical literature on learning in higher education published by Ambrose et al. [12]. These seven principles are:
Students’ prior knowledge can help or hinder learning.
How students organize knowledge influences how they learn and apply what they know.
Students’ motivation determines, directs and sustains what they do to learn.
To develop mastery, students must acquire component skills, practice integrating them and know when to apply what they have learned.
Goal-directed practice coupled with targeted feedback enhances the quality of students’ learning.
Students’ current level of development interacts with the social, emotional and intellectual climate of the course to impact learning.
To become self-directed learners, students must learn to monitor and adjust their approaches to learning.
([12], pp. 4–6).
Organizing this list as rows, with the decision options (do/do not integrate EwB) as the columns, creates a DoFA prediction matrix (Table 2). The table was completed from the perspective of cognitive psychology (the author’s background); instructors without this background may complete the ratings by consensus across faculty, or perhaps in consultation with their institution’s center for teaching excellence.
Table 2. Aligning decision options (columns) about whether to integrate experiences with bioinformatics (EwB) into biology undergraduate courses with principles of learning (rows; adapted from [12])

| Principles of learning [12] | Integrate EwB^a | Continue course(s) without any mention of bioinformatics^b |
|---|---|---|
| Prior knowledge can be helpful | Yes. Also assumes that the “prior knowledge” from this course (i.e. the EwB) will support future engagement with bioinformatics topics and tools | No. Not introducing what is an important aspect of modern biology suggests a reliance on students to independently seek, find and integrate knowledge of bioinformatics |
| Knowledge organization supports learning and application of new knowledge | Yes. The EwB must necessarily focus on the need for application of new knowledge (that may not yet exist); the source of the new knowledge need not be specified: preparing students to want/expect to learn new things is an essential feature | No. Maintaining a separation of “traditional” biological information and new/modern methods and ideas does not promote organizing biological knowledge to easily accommodate bioinformatics or computational information |
| Promotes motivation to learn/sustain learning | Partially. EwB may be authentic, and problems that frame and motivate learning may arise from exposure to EwB, but whether it is sustained is undetermined | No. Without exposure to EwB, longer-term commitment to learning may be hampered and potentially limited (not promoted), because future work will most likely require some EwB |
| Mastery is supported (opportunities to acquire component skills, practice integration and learn when to apply them) | No. Mastery is not possible with superficial exposure, but introducing the idea that ongoing learning is necessary is a critical purpose for EwB | No. The role of bioinformatics in modern biology is important for mastery in the domain; ignoring this role, or assuming later training will supply it, undermines the likelihood of mastery |
| Goal-directed practice with formative feedback provided | Partially. Exposure to short-term training as the EwB highlights the need for goal-directed learning and practice, but formative feedback may or may not be included | Partially. This may be achieved with traditional biology training, but may be limited to traditional biology and will not permit engagement with bioinformatics |
| Course climate supports learning | Depends on instructor and course structure | Depends on instructor and course structure |
| Students will learn to monitor and adjust their approaches to learning | Partially. Depending on how embedded the EwB is, and how much emphasis is placed on the importance of these experiences, this skill set may be initiated but may not be fully realized | Partially. Students may develop this skill set but may not apply it to bioinformatics until after graduation |

^a Assumes that EwB is fully integrated into the course, with introduction, practice and ongoing mention of the experience and the tools throughout the course, not simply a mention in a single class meeting.

^b Assumes that students do not seek out bioinformatics training themselves (possibly because it remains, or appears, orthogonal to success in a traditional biology program).
The body of Table 2 (i.e. not the marginals) shows that the principles of learning outlined by Ambrose et al. [12] may be partially met by the integration of EwB; but if EwB is integrated into only one class meeting, or not fully engaged with by the students, then these principles may not be sufficiently met. The table also shows that, where both integrating EwB and leaving it out are “partially” aligned with a principle of learning, the downside of integrating EwB is less negative than that of continuing without it. Similarly, where neither decision is aligned with a principle (i.e. “mastery is supported”), the reasons differ: for integrating EwB, it is a function of “exposure” rather than “full integration”, while for leaving EwB out of the biology course or curriculum, it is that future learning may be curtailed. Thus, not only can yes/partially/no values be entered into the DoFA matrix; the rationale for each rating can also be recorded to qualify it (possibly leading to different point allocations for different “types of no” or “degrees of partially”), which would then be reflected in the marginals. Marginals are not needed to summarize the data in Table 2. This highlights a difference between DoFA applied/modified for decision-making and DoFA for theory building, where the marginals are needed to summarize the data.
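Although marginals are not needed to summarize Table 2, a short sketch shows how its qualitative ratings could be scored if a numeric summary were wanted. The 0/0.5/1 mapping, and the choice to score “depends” as 0.5 for both options, are assumptions rather than part of the method.

```python
# Ratings read off the rows of Table 2, in order; "depends" is treated as
# uninformative and (by assumption) scored 0.5 for both options.
score = {"yes": 1.0, "partially": 0.5, "depends": 0.5, "no": 0.0}

ratings = {
    "integrate EwB": ["yes", "yes", "partially", "no",
                      "partially", "depends", "partially"],
    "continue without EwB": ["no", "no", "no", "no",
                             "partially", "depends", "partially"],
}

marginals = {option: sum(score[r] for r in row)
             for option, row in ratings.items()}
print(marginals)  # {'integrate EwB': 4.0, 'continue without EwB': 1.5}
```

As the printed marginals confirm, the numeric summary agrees with what is already visible in the table body: integration is the better-aligned option, so the marginals add little here.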
Finally, Table 2 shows that, before integrating EwB (if that is the option that ends up being best supported), a survey to determine whether the teaching context promotes motivation to learn/sustain knowledge with EwB versus without EwB could easily be incorporated; then, any intervention to integrate EwB can be evaluated in a concrete and formal way. Formal plans to evaluate the results of whichever decision is taken (to integrate EwB or to continue without it) should be initiated (e.g. following [17]). Even with just these two options, a DoFA like the one shown in Table 2 can be informative about educational decisions to be made, and also how to determine whether intended effects of those decisions are achieved (promoting actionable evidence [15, 16] for either decision).
Thus, the DoFA method can bring together—as well as promote the collection of—diverse information, and also highlight what additional information is needed (e.g. to clarify what “integrate EwB” entails). Further, this same decision (incorporate/continue without EwB) can be explored in DoFA tables with other criteria in addition to/instead of these seven principles of learning. For example, we could also, or next, contemplate whether and to what extent either decision (integrate/continue without EwB) is consistent with principles of learning outcomes articulation ([16]; see [18] for an example) or with key features of assessment validity [1].
Example 2. “Would a (new) capstone experience align our program with international survey results on important bioinformatics training needs?”
Example 2 might be seen to follow from Example 1 (if the decision taken was in fact “yes, we should integrate EwB into our curriculum”), but it could also be the decision currently under consideration. Specifically, if a degree, certificate or training program did want to incorporate EwB, one option is to add a capstone experience to the program. A “capstone” is typically an independent project in which each student synthesizes prior learning within the program into a presentation, paper or other work product. Independent research projects are key capstone activities in undergraduate majors and Coursera specializations, and are also the main objective of some master’s and most doctoral programs. The purpose of the capstone may be to demonstrate (summative) or develop (formative) independence, discipline-specific research skills or the completion of a self-initiated project.
Capstone projects are typically summative assignments—that is, representing “…the knowledge that has been accrued after the learning has ended” ([19], p. 212). The capstone can vary widely in its learning goals, including (but not limited to): (a) “experience” with either research or independent thinking; (b) synthesis of prior learning; (c) generating something novel or extending existing work in a novel direction; or (d) some combination of (a)–(c).
As outlined in Example 1, the decision to “integrate a capstone” will require details about exactly how that will be done. However, before mounting that effort, it is worthwhile to determine whether the capstone project will in fact achieve some or any specific teaching or learning goals. Nine learning objectives for a capstone were identified based on the Boyer Commission Report (1998) [20] and the Educational Effectiveness Working Groups at UC Berkeley (2003) [21]. These objectives, presented here in a general format so as to be applicable to end-of-degree, end-of-term and end-of-course capstones, are to:
1. Teach research skills
2. Assess possession of research skills
3. Assess learning of research skills
4. Provide experience with inquiry
5. Assess/estimate independence in research skills
6. Engage inquiry-based learning
7. Teach inquiry-based writing; and that it should
8A. Function formatively; some may also or instead desire that it should
8B. Function summatively
Table 3 is a DoFA table that explores how well a capstone designed to meet these nine specific objectives (laid out in the columns of Table 3) aligns with domains that have been identified as unmet needs for bioinformatics through two national surveys, conducted in the United States [22] and in Australia [14], which appear in the rows.
Table 3. Alignment of capstone objectives with bioinformatics resource needs survey results (United States/Australia)

| Domains on the NSF/EMBL-ABR survey | Teach research skills | Assess possession of research skills | Assess learning of research skills | Provide experience with inquiry | Assess/estimate independence in research skills | Engage inquiry-based learning | Teach inquiry-based writing | Function formatively | Function summatively |
|---|---|---|---|---|---|---|---|---|---|
| Publish data to the community |  |  |  |  |  |  |  |  |  |
| Maintain sufficient data storage |  |  |  |  |  |  |  |  |  |
| Share data with colleagues |  |  |  |  |  |  |  |  |  |
| Update/use updated analysis software |  |  |  |  |  |  |  |  |  |
| Train on data management and metadata |  |  |  |  |  |  |  | x | x |
| Bioinformatics analysis and support | x |  |  |  |  |  |  |  |  |
| Train on basic computing and scripting | x | x | x | x | x | x |  | x | x |
| Search for data and discover relevant data sets | x | x | x | x | x | x |  | x | x |
| Multistep analysis workflows or pipelines | x |  |  | x |  |  |  |  |  |
| Train on integration of multiple data types | x | x | x | x | x | x |  | x | x |
| Use cloud computing |  |  |  |  |  |  |  |  |  |
| Train on scaling analysis to cloud or high-performance computing | x | x | x | x | x | x |  | x | x |
| Column totals | 6 | 4 | 4 | 5 | 4 | 4 | 0 | 5 | 5 |
Importantly, the bioinformatics needs surveys were not conducted with the purpose of informing curriculum design or decision-making; the DoFA method, as modified, nonetheless permits their integration into this decision-making. Because no details have yet been articulated about exactly how the capstone might be implemented, Table 3 contains just 'x's where there might be opportunities for alignment, rather than "scores" or ratings; numeric ratings are not appropriate here, but the matrix can still be useful. The incorporation of a capstone might achieve all nine objectives (columns; see Example 3 below), but these achievements are unlikely to address the first NSF/EMBL-ABR-identified "unmet bioinformatics need", publishing data to the community. Specifically, students are unlikely to generate such data, so the capstone may achieve many objectives, but not that one. However, the NSF/EMBL-ABR need "train on data management and metadata" could be included as a feature of the capstone experience, and if so, it could plausibly function in either a formative or a summative fashion. "Bioinformatics analysis and support" could be achieved by teaching research skills, a key objective of the capstone; again, the alignment will depend on exactly how the capstone is integrated.
Because Table 3 is included as an exploration of whether the capstone objectives (columns) can also help meet internationally recognized unmet bioinformatics needs (rows), the column marginals are computed by treating each 'x' as one point, without consideration of how strongly each need would be met. The column marginals show that one capstone objective ("teach inquiry-based writing") is unlikely to align with any of these unmet needs, although inquiry-based writing experiences might be created specifically to meet them, e.g. around the search for data and the discovery of relevant data sets. As at most 6 of the 12 bioinformatics needs could be addressed with a capstone experience, a capstone may not be an ideal modification of the curriculum for addressing these unmet needs. Conversely, because bioinformatics skills do not exist in a vacuum, Table 3 can be useful for challenging instructors to ensure that all of the capstone objectives are met in ways that also achieve some learning relating to each of these unmet needs.
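To make this marginal computation concrete, the short Python sketch below (hypothetical; not part of the original analysis) encodes a few rows of the Table 3 alignment matrix as 0/1 indicators and tallies the column totals, treating each 'x' as one point exactly as described above.

```python
# Minimal sketch of the Table 3 marginal computation: each 'x' is one
# point, and column totals summarize how many unmet needs each capstone
# objective could plausibly address.

objectives = [
    "Teach research skills", "Assess possession of research skills",
    "Assess learning of research skills", "Provide experience with inquiry",
    "Assess/estimate independence in research skills",
    "Engage inquiry-based learning", "Teach inquiry-based writing",
    "Function formatively", "Function summatively",
]

# Rows: NSF/EMBL-ABR survey domains; 1 = possible alignment ('x'), 0 = none.
# Only a subset of the 12 Table 3 rows is shown here, for brevity.
alignment = {
    "Publish data to the community":          [0, 0, 0, 0, 0, 0, 0, 0, 0],
    "Train on data management and metadata":  [0, 0, 0, 0, 0, 0, 0, 1, 1],
    "Bioinformatics analysis and support":    [1, 0, 0, 0, 0, 0, 0, 0, 0],
    "Train on basic computing and scripting": [1, 1, 1, 1, 1, 1, 0, 1, 1],
}

# Column marginals: one point per 'x', no weighting by strength of alignment.
totals = [sum(col) for col in zip(*alignment.values())]
for objective, total in zip(objectives, totals):
    print(f"{objective}: {total}")
```

With all 12 rows entered, this reproduces the bottom row of Table 3 (6, 4, 4, 5, 4, 4, 0, 5, 5).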
These examples do not include observed data, so an authentic example of the application of the method to decision-making follows in the next sections. In this last example, the method is used to study the question of whether scaffolding (described below) in the capstone limits the potential for useful summative assessment, and if so, to what extent. Unlike the other two examples, the exact implementation of the capstone is known (and described below), so the method is used to determine if the assessment as implemented is aligned with the capstone objectives. If it is aligned, the analysis provides evidence of this; if it is not aligned, the analysis provides direction for what might need to change so that the assignment and its learning objectives will be better aligned in future.
Scaffolding is instructor-mediated construction of knowledge or skills, where these are specific learning goals [23, 24]. The instructor targets individual students with specific modeling of the target skill/behavior (e.g. conceptual understanding, specific skill), and just enough instruction for the individual to develop the target. Originally proposed in the early 1970s and still current [25], scaffolding is a pedagogic construct. If the capstone is scaffolded, and thereby formative, the extent of scaffolding each student requires could be considered to generate "actionable evidence" [15, 16] for the instructor to better prepare learners for success on the project.
Scaffolding in a written project (or a project involving writing) comes in the form of—often individualized—draft reviews that the instructor or grader provides for each student to facilitate the optimal final product. Formal scaffolding can take place through the provision of feedback on drafts, through self-assessment using a predefined rubric [26] or a combination of these.
Scaffolding may be important in capstone projects, but the extent to which scaffolding is provided may limit the potential for the project to serve as a summative assessment. A formative assessment, in contrast to the summative type, is intended to provide specific feedback to the student in real time, to facilitate and improve learning; formative assessment therefore cannot, on its own, provide summative evidence about whether or how much learning has taken place. In that sense, this example can both answer the instructor's question ("is the assessment aligned with the learning goals?") and address an educational research question about the role of scaffolding in the capstone experience.
Method
Degrees of freedom analysis
As described above, the steps of a DoFA, with the previously identified modifications, were followed using observational data collected from a course (described below).
Example 3. Case study of data collected from a graduate course.
A capstone project was used as the 'Final Exam' for a 12-student, graduate-level introductory biostatistics course. For 3 of the final 4 weeks of the 15-week semester, a new component of this multipart assignment was the focal point of email-based one-on-one discussions between the instructor and each student:
1. Identify a research question that is relevant to the student, including the motivation and background for the question;
2. Identify the appropriate statistical test to answer the question, including a description of the data required (type/amount) and consideration of assumptions and contingencies for the specified statistical method; and
3. Integrate elements 1 and 2 into an overall design of the student's 'dream study', including the refined question (in light of statistics and data), the final choice of inference test plus a contingency plan (i.e. in case assumptions are not met, or sensitivity analyses), plus a power calculation.
In the fourth week, the graded assignment was a 10–15 min presentation based on work completed in weeks 1–3; this component did not receive any input from the instructor. Students had 1 week to complete each subtask in the assignment; subtasks were iteratively completed with as much scaffolding as was warranted (according to the instructor): students submitted work that was returned with comments or suggestions until both instructor and student were satisfied with each subtask.
The four-part assignment was intended to authentically assess the presence of (and to encourage and model) a stepwise approach to analysis, clear and careful thinking, and the statistical methods and fluency that the course was meant to develop in students.
Creating the DoFA prediction matrix
A DoFA prediction matrix was constructed using the nine capstone objectives (DoFA Step 1), which are taken as the predictions that are plausible if the capstone is functioning as intended, given the cases (data in rows) represented by 12 Master's degree students completing the capstone project at the end of one course (DoFA Step 2). The data (DoFA Step 3) were observations of the 12 presentations, rated by the instructor according to whether each presentation could be useful for determining whether any of the nine objectives was met for that student [Yes (Y) = 1; No (N) = 0; Partially (P) = 0.5] (DoFA Step 4). In this example, only the column marginals were important for decision-making. Row marginals would summarize an individual student's consistency with the capstone objectives outlined above, but they would not be informative about student performance, because the columns relate to the assignment rather than to its execution (student grades on the assignment were derived with a presentation-specific rubric); nor would these student-level summaries permit summarization of the assignment's consistency with the column features.
The evidence derived from the student case-level data was then summarized in a second DoFA prediction matrix. Starting again at DoFA Step 1, further specification of (and familiarity with) scaffolding created four options for the decision about the role of scaffolding in the capstone. The second DoFA matrix examined alignment with the capstone objectives [20, 21], now in the rows, because the decision is now about which model of scaffolding (the column options) to use, drawn from four general models for the inclusion of scaffolding in the capstone:
1. Set aside a single, unscaffolded (and thereby summative) capstone at the end of the program or course as the sole inquiry-based exercise (e.g. in Coursera specializations).
2. Set aside a single, scaffolded capstone opportunity at the end of the program or course as the sole inquiry-based exercise (i.e. the intention of this course).
3. Provide a series (>1) of discrete, equally scaffolded capstone exercises over time (e.g. some PhD programs require multiple publications by each student, to be synthesized into one thesis).
4. Provide a series (>1) of discrete capstone exercises over time, with more scaffolding for the first and none for the last (e.g. some PhD programs require a heavily scaffolded "master's thesis", and then require the PhD thesis to be completed independently).
These examples of the integration of scaffolding into capstone experiences represent approaches that are in use across disciplines, and can be observed at universities across the United States.
Example 3 results
The capstone in the course was designed as a single, scaffolded opportunity (i.e. scaffolding Model 2). The data from student presentations are shown in DoFA prediction matrices in Table 4.
Table 4. DoFA prediction matrix: consistency of each case (student presentation) with the nine learning objectives of the capstone

| Cases | Teach research skills | Assess possession of research skills | Assess learning of research skills | Provide experience with inquiry | Estimate independent research skills | Engage inquiry-based learning | Teach inquiry-based writing | Capstone functions formatively | Capstone functions summatively |
|---|---|---|---|---|---|---|---|---|---|
| *Cases where capstone was a single, scaffolded experience* |  |  |  |  |  |  |  |  |  |
| Case 1 | P | P | P | Y | N | Y | P | Y | N |
| Case 2 | P | P | P | Y | N | Y | P | Y | N |
| Case 3 | P | P | P | Y | N | Y | P | Y | N |
| Case 4 | P | P | P | Y | N | Y | P | Y | N |
| Case 5 | P | P | P | Y | N | Y | P | Y | N |
| Case 6 | P | P | P | Y | N | Y | P | Y | N |
| Case 7 | P | P | P | Y | N | Y | P | Y | N |
| Case 8 | P | P | P | Y | N | Y | P | Y | N |
| Case 9 | P | P | P | Y | N | Y | P | Y | N |
| *Capstone as a series of >1 experiences with more scaffolding early and none at the end* |  |  |  |  |  |  |  |  |  |
| Case 10 | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Case 11 | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| Case 12 | Y | Y | Y | Y | Y | Y | Y | Y | Y |
Notes: Consistency of the case with each capstone learning objective: Y = yes (consistent with/supportive of that objective); P = partially (partially consistent with/supportive of that objective); N = no (neither consistent with nor supportive of that objective).
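As a minimal sketch of DoFA Steps 3 and 4 as implemented here, the Python snippet below (hypothetical variable names; the scoring Y = 1, P = 0.5, N = 0 is as stated in the Method) converts the Table 4 ratings into numbers and computes the column marginals used for decision-making.

```python
# Sketch of DoFA Steps 3-4: map the instructor ratings (Y/P/N) onto the
# stated numeric scores and compute column marginals over the 12 cases.
SCORES = {"Y": 1.0, "P": 0.5, "N": 0.0}

# One row per student case, one rating per capstone objective (9 columns).
# Cases 1-9 followed scaffolding Model 2; cases 10-12 followed Model 4.
ratings = (
    [["P", "P", "P", "Y", "N", "Y", "P", "Y", "N"]] * 9  # cases 1-9
    + [["Y"] * 9] * 3                                    # cases 10-12
)

numeric = [[SCORES[r] for r in case] for case in ratings]

# Only the column marginals matter for this decision (row marginals would
# describe individual students, which is not the question here).
column_marginals = [sum(col) for col in zip(*numeric)]
print(column_marginals)  # [7.5, 7.5, 7.5, 12.0, 3.0, 12.0, 7.5, 12.0, 3.0]
```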
Two of the four possible scaffolding models, not just the intended one, were observed in the data. The final in-class presentation followed Model 2 for just 9 of the 12 students. Unexpectedly, the other three students (without notifying the instructor) developed a new project after completing the three subtasks with extensive scaffolding. Only these three students followed scaffolding Model 4, providing an unanticipated opportunity to explore how scaffolding affects the ability to study whether a capstone satisfies its objectives. Students whose final presentation followed the plan established over the course of completing the three assignment subparts clearly showed that they could integrate the elements of a complex argument into a coherent whole. However, only the three who created new study designs provided the summative assessment that was intended. That only two of these three achieved the target skill set provides actionable evidence for the instructor: the course as given does not provide sufficient training in self-assessment about study planning, and the capstone project as intended is not summative.
Given the data from Table 4 about two of the four scaffolding-in-capstone models, Table 5 shows the alignment of all four models with the capstone objectives.
Table 5. Alignment of the four models of the capstone experience with the nine capstone objectives

| Nine objectives of the capstone | Model 1: One opportunity without scaffolding^a | Model 2: One opportunity with scaffolding | Model 3: Multiple opportunities with equivalent scaffolding^a | Model 4: Multiple opportunities with decreasing scaffolding |
|---|---|---|---|---|
| 1. Teach research skills | No | Partially | Yes | Yes |
| 2. Assess possession of research skills | Yes | Partially | Yes | Yes |
| 3. Assess learning of research skills | No | Partially | Partially | Yes |
| 4. Provide experience with inquiry | Partially/no | Yes | Yes | Yes |
| 5. Assess/estimate independent research skills | Yes | No | Partially | Yes |
| 6. Engage inquiry-based learning | No | Yes | Yes | Yes |
| 7. Teach inquiry-based writing | No | Partially | Yes | Yes |
| 8. Capstone functions formatively | No | Yes | Yes | Yes |
| 9. Capstone functions summatively | Yes | No | No | Yes |
| Total consistency score | 3/9 | 4/9 | 6/9 | 9/9 |
^a These models were not observed in the data; their "hits" and "misses" are inferred and could be tested in a future empirical study.
Evidence was obtained for two of the four models of scaffolding, and the structure of the matrix, following the modifications to the DoFA method in Table 1, permits evidence-informed, logical inferences about the alignment of the other two models with the capstone objectives. Table 5 shows that these two (unobserved) models involve the least (no scaffolding) and the second-most (multiple opportunities, equal scaffolding) alignment with the capstone objectives. Although these two models were not observed in the current case, both can be observed in use today, and the DoFA results for their alignment with the capstone objectives are interpretable and plausible; Table 5 thereby provides evidence about how, and why, a capstone should be included in a course or program of study. The analysis also shows that, although it succeeded as a formative assessment, the capstone in this case failed as a summative assessment of the skills of greatest interest. This is only demonstrable within the DoFA (Table 5), and would not have been observable had the DoFA not followed the modifications articulated in Table 1, which allowed models of scaffolding, rather than theory, to be examined. The case study suggests that only Model 4 permits the assessment of learning of research skills; this design also results in the greatest alignment (9/9) with the objectives of using a capstone (according to [20, 21]). As designed, the scaffolded, one-time assignment discussed in the case analysis achieves only 4 of the 9 capstone learning objectives. If a summative assessment of whether research skills have been learned is one of, or the, purpose of including a capstone, the project should follow Model 4 in terms of scaffolding.
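For readers who want to produce this kind of summary for their own DoFA, the sketch below shows the generic scoring step under an assumed weighting (Yes = 1, Partially = 0.5, No = 0); the exact weighting behind the Table 5 totals is not specified in the text, so the weights, model names and ratings here are illustrative only.

```python
# Generic sketch of the DoFA "total consistency score" step: given
# qualitative ratings of how well each option aligns with each objective,
# tally a summary score per option. The weights are an assumption for
# illustration, not the published computation.
from typing import Dict, List

WEIGHTS = {"Yes": 1.0, "Partially": 0.5, "No": 0.0}  # assumed weighting

def consistency_score(ratings: List[str], weights: Dict[str, float] = WEIGHTS) -> float:
    """Sum the weighted ratings for one option across all objectives."""
    return sum(weights[r] for r in ratings)

# Hypothetical example: one rating per capstone objective (9 objectives).
model_ratings = {
    "Model A": ["No", "Yes", "No", "Partially", "Yes", "No", "No", "No", "Yes"],
    "Model B": ["Yes"] * 9,
}
for model, ratings in model_ratings.items():
    print(f"{model}: {consistency_score(ratings)}/9")  # e.g. Model B: 9.0/9
```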
Discussion and conclusions
The DoFA method, with slight modifications in construction, can be used to summarize the qualitative data that are commonly collected in higher, graduate and postgraduate education and training. The two hypothetical examples that were discussed used the results of an extensive review of the literature on adult learning ([12], Example 1) and of the report of an expert panel [20, 21] on the learning objectives of the capstone and two national surveys of unmet bioinformatics needs ([14, 22], Example 2). The third example featured observational data about an actual assessment and a research question.
The DoFA method enables the analysis and summarization of qualitative data in a systematic way, but effort is required to identify the pedagogical principles or educational theories against which decision options can be evaluated for alignment. It can be challenging for decision makers to articulate these decisions, or their options, sufficiently for this evaluation of alignment with relevant theory; however, this articulation process can be leveraged to increase buy-in from faculty ([27], ch. 1) as well as from students. Both of these aspects of the method (i.e. identifying relevant theory, and explicating the decisions to be made) can benefit from specific expertise in the educational/cognitive domains, and/or from consensus among instructors about the options or the ratings of how consistent the options or the data are with dimensions of the selected theories. As with all qualitative research, transparent reporting of methods and full consideration of plausible alternatives in the analysis are needed for interpretable, and defensible, results.
Decision-making can use qualitative data, even for training in quantitative sciences like bioinformatics. The DoFA method can summarize qualitative evidence, without collecting data (Example 1), based on survey results (Example 2), or with observed data (Example 3) to support planning and decision-making in courses or curricula. The method can also promote formal evaluation of those decisions, encouraging evidence-informed excellence in bioinformatics training and education.
Key Points

Survey data are commonly collected in education and training; however, perhaps especially in the quantitative sciences, using these qualitative data for decision-making can be challenging, but it can be done.

An established method for the analysis of qualitative data to inform decisions is the DoFA, initially published in 1975; this qualitative method is unlikely to be discovered by quantitative scientists, but it is valuable for analyzing and interpreting survey and other educational data obtained from teaching or training, for example in bioinformatics.

The method identifies and aligns theoretical or applied principles with qualitative data, transforming survey and evaluation results, and similar data, into interpretable results for evidence-informed decision-making in education and training.

Important aspects of using this method include: (a) effort is required to identify the pedagogical principles or educational theories against which decision options can be evaluated for alignment; (b) it can be challenging for decision makers to articulate decisions, or their options, sufficiently for this evaluation; and (c) expertise and/or consensus among instructors may be needed for interpretable results. These features can promote the reliability and validity of educational decisions and support formal evaluation of the outcomes of those decisions.
Rochelle E. Tractenberg is a cognitive scientist focusing on learning and assessment in higher education, and a research methodologist accredited as a Professional Statistician by the American Statistical Association. Collaborative for Research on Outcomes and Metrics; Departments of Neurology; Biostatistics, Bioinformatics & Biomathematics; and Rehabilitation Medicine, Georgetown University Medical Center, Washington, DC.