Zumbach, Ortler, Deibl, and Moser: Using Prompts to Scaffold Metacognition in Case-Based Problem Solving within the Domain of Attribution Theory

Abstract

Case-based problem solving is a core approach to fostering knowledge acquisition, especially during the learning process by which novices gradually become experts within a domain. This study investigated whether metacognitive scaffolding leads to better learning outcomes than learning without such support in a case-based learning environment. In addition, we examined the interaction between prior domain knowledge and prior metacognitive abilities and skills. Within a one-factorial design, we explored the role of metacognitive prompting during the learning phase. A pre- and post-test were administered assessing knowledge acquisition, metacognitive skills, and cognitive load. Results indicate no main effect of prompting and no interaction between prior knowledge and prompting. However, metacognitive prompting enables learners who already possess metacognitive abilities to activate these during problem solving and, thus, to score higher in the knowledge post-test than learners with low metacognitive abilities and no metacognitive prompting.

INTRODUCTION

Self-regulated learning has become a central part of modern society. It can be seen as a proactive process that helps students to acquire knowledge and skills. Within this process, self-regulated learners set their own learning goals, select and deploy adequate strategies, and reflect on their effectiveness. The crucial question is “whether a learner displays personal initiative, perseverance, and adaptive skill” (Zimmerman, 2008, p. 167). The qualities of self-regulated learners range from advantageous motivational feelings and beliefs to metacognitive strategies (Hardy, Day & Steele, 2019; Zimmerman, 2008). With the increase of digital resources, different approaches and methods for fostering self-regulated learning have been developed, e.g. training, scripting, or prompting. While most of these approaches aim directly at supporting learners on a cognitive level, more and more learner support addresses metacognition, an important aspect of self-regulated learning (Hardy et al., 2019; Winne & Azevedo, 2014). Learner support within the cognitive domain usually refers to processes that support elaboration or direct information processing, rehearsal, etc. With the term metacognition, we refer here either to the abilities of learners or to the advice to monitor and adjust (cognitive) information processing before, during, and after learning.
Supporting metacognition is particularly important in self-regulated learning. Zheng (2016), for example, found in this regard that a computer-based learning environment is ideal for supporting self-regulated learning, but that students still need supportive elements to scaffold their activities. A meta-analysis conducted by the same author found positive effects of self-regulated learning scaffolds, e.g. metacognitive support, on academic performance in computer-based learning environments (Reiser & Tabak, 2014; Zumbach, Rammerstorfer & Deibl, 2020). In addition, metacognition is crucial for successful problem solving, as it regulates and directs all problem-relevant cognitive processes (Hardy et al., 2019; Winne & Azevedo, 2014). Research on supporting metacognitive processes during computer-supported problem solving is still scarce, in particular in the domain of psychology (Bannert, Hildebrand & Mengelkamp, 2009). This study addresses the question of whether metacognitive scaffolding in a computer-based, problem-based learning scenario requiring students to solve a case-based problem on attribution theory leads to better learning outcomes than learning without such support.

Metacognition as a crucial aspect of self-regulated learning

Research on metacognition has become an important part of contemporary educational and instructional research. There are two major categories of metacognition (Winne & Azevedo, 2014). One category refers to the ability of learners to diagnose their learning process and outcomes (i.e., their “sensitivity” related to learning). The second category refers to learners’ performance and is what is genuinely described as metacognition; it includes, e.g., the knowledge and application of learning strategies. A more precise definition of metacognition is provided by Flavell (1979), who introduced the term metacognitive knowledge. Metacognitive knowledge can refer to three domains or variables. The first includes knowledge about the learner her- or himself (“personal variables”, e.g. knowledge about a person’s abilities, strengths, and weaknesses). The second includes knowledge about the task a learner is confronted with (“task variables”, e.g. whether a task is judged as difficult or rather easy). The third includes knowledge about information-processing or problem-solving strategies that might be applied, and about which ones are likely to be effective and which ones are not (“strategy variables”).
The use of cognitive strategies and metacognition during self-regulated learning is crucial to learning (Eckhardt, Urhahne, Conrad & Harms, 2013). Self-regulated learners have to make important decisions about what to study and when, whether to continue or terminate studying, and how long to focus on specific learning material. Learners with strong self-regulated learning skills are reflective, intentional, and autonomous learners who benefit from problem solving (Greene, Bolick & Robertson, 2010). Unfortunately, as research has shown, students often fail to regulate their learning processes (Ohtani & Hisasaka, 2018): the activation and application of such strategies is rather poor, and students often face severe difficulties in regulating their own learning process and in finding and applying appropriate strategies (Winne & Azevedo, 2014). With regard to cognitive and metacognitive strategies, students often fail to apply and maintain these strategies during self-regulated learning, although they would contribute to deeper learning and, thus, to higher learning gains (Bannert, Hildebrand & Mengelkamp, 2009; Reiser & Tabak, 2014). Wild (2005) differentiates cognitive strategies into revision, elaboration, and organization strategies. They differ from metacognitive strategies because they refer directly to what learners do with the learning material; metacognitive strategies do not refer to the learning material itself but rather to a learner’s ability to control his or her cognitive processes.
The use of metacognitive strategies in self-regulated learning scenarios is also emphasized within the conceptual framework for Problem-Based Learning (PBL) suggested by Little and McMillan (2016). In this framework, metacognitive processing acts not only as a helpful strategy for students within their learning progress and process but also as an assessment instrument for improving course quality and sustainability, especially within PBL programs.
Metacognitive strategies help students to control their learning process. They imply planning, monitoring, and controlling of the learning process (Winne & Azevedo, 2014). Research suggests that regulation of cognition leads to better use of attentional resources, better use of existing strategies, and a greater awareness of comprehension breakdowns (Schraw, 1998). However, students are often not able to regulate their learning activities (Azevedo, 2009; Bannert & Mengelkamp, 2013), and thus guidance or instruction is needed, especially in learning situations with simulations. Bannert, Hildebrand, and Mengelkamp (2009) argue that metacognitive support for increasing students’ learning competence is more effective than systematic instruction.
Taken together, there is theoretical and empirical evidence that cognitive and metacognitive strategies contribute to sustainable self-regulated learning. As a central framework and theoretical model, Winne and Hadwin (1998) suggest that metacognitive processes in terms of control, monitoring, and evaluation are a core part of the cognitive system.

Fostering metacognition

As metacognition supports and enhances learning (Winne & Azevedo, 2014), fostering metacognition is one central aim of instructional approaches. There are different approaches and strategies for doing so. In general, these instructional strategies can be divided into direct and indirect approaches. Direct approaches include training strategies, where learners are almost always given explanations about the nature of self-regulated learning, the nature of cognitive and metacognitive strategies, and how these strategies can be applied. Usually, these strategies are applied exemplarily within the training context. Such direct approaches aim to create awareness and knowledge about metacognition and usually train its application. Bannert et al. (2009, p. 830) point out that “(…) for students lacking metacognitive competence (so-called mediation deficit, e.g. Hasselhorn, 2006), direct training is necessary in order to extensively teach the metacognitive knowledge skills.” In a study by Koch (2001), a training of metacognitive strategies for reading comprehension was applied. Results revealed that the training enhanced reading comprehension and performance in a knowledge test compared to a group without training. While other research shows no impact of metacognitive training or even negative outcomes (Jing, 2006), metacognitive training that is additionally accompanied by incentives for using these strategies and by continuous feedback on strategy use seems to contribute to learning (Miller & Geraci, 2011).
There are also indirect strategies, where learners are not necessarily explicitly informed about metacognitive strategies themselves but rather are required to apply pre-given strategies. There are many ways to design this kind of scaffolding, but one central way is to script learners’ behavior by providing prompts during a learning phase. These prompts usually require learners to fill in pre-given forms that include a specific strategy to be applied to the current learning task (“Please read the following text and formulate in two or three sentences what you understood and what you did not. Consider also what you need to know to understand the text completely.”). According to Lin, Hmelo, Kinzer and Secules (1999), prompting includes several strategies that might be applied during learning: process displays, process prompts, process models, and reflective social discourse. Within the area of metacognition, the use of process prompts is one major approach to fostering learning. Bannert (2004) defines prompts as “(…) instructional measures integrated in the learning context which ask students to carry out specific metacognitive activities” (p. 2). The intention of instructional prompts is not to teach new information, but rather to “(…) support the recall and execution of student’s knowledge and skills” (Bannert, 2009, p. 140). Thus, students receive precise instructions they should consider during the learning phase in order to draw attention to certain aspects of the learning process (Reiser & Tabak, 2014). Students who already possess metacognitive skills can also be supported by these strategies: a study by Bannert and Mengelkamp (2013), for example, showed that prompting enhances students’ activation of learning strategies.
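To make the mechanics of such process prompting concrete, the following minimal sketch (in Python, with invented strategy wordings and form fields; it is not the software used in any of the cited studies) shows how a prompting screen could require learners to choose one pre-given strategy and adapt it to the page they just read.

```python
# Sketch of indirect metacognitive prompting: after every page turn the
# learner picks one pre-given strategy and adapts it to the current page.
# Strategy wordings and the log format are illustrative assumptions.

STRATEGIES = [
    "Summarize in two or three sentences what you understood and what you did not.",
    "Note what you still need to know to understand this page completely.",
    "Plan which part of the material you will study next, and why.",
]

def prompt_after_page_turn(page_title: str) -> dict:
    """Display the prompting screen for one page and collect the learner's entry."""
    print(f"\nYou just finished: {page_title}")
    for number, strategy in enumerate(STRATEGIES, start=1):
        print(f"  [{number}] {strategy}")
    choice = int(input("Choose a strategy (number): "))
    entry = input("Apply the chosen strategy to what you just read: ")
    return {"page": page_title, "strategy": STRATEGIES[choice - 1], "entry": entry}

# Usage: collect one prompt entry per visited page.
# log = [prompt_after_page_turn(page) for page in ["Case: Benno", "Causal dimensions"]]
```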
Especially within the area of multimedia learning, many studies relate to self-explanation prompts. This might be due to the fact that “self-explanation may be particularly well suited to multimedia learning because generated inferences form connections both within and between verbal and non-verbal representations” (Van Meter, Cameron & Waters, 2017, p. 188). Kim and Hannafin (2011) showed that metacognitive scaffolds (e.g. self-questions) can enhance learning with simulations and improve scientific reasoning and learning outcomes. Piksööt and Sarapuu (2015) found question prompts that support students’ self-monitoring of their own learning progress to be an effective instructional approach.
Nevertheless, we have to note that “metacognitive prompting” might be a misleading term. The prompting itself can fulfil two functions: it can activate genuinely metacognitive thinking processes (e.g. monitoring one’s own attentional processes), but it can also act as a metacognitive process itself and activate subsequent cognitive processes. Thus, metacognitive prompting or scaffolding can enable both metacognitive and cognitive processes (Reiser & Tabak, 2014).
However, additional support via prompting demands a certain amount of cognitive resources from learners. Berthold, Röder, Knörzer, Kessler and Renkl (2011) found a double-edged effect of prompts: besides positive effects on learning, prompts caused cognitive load to reach the upper limit of working-memory capacity, which is particularly detrimental in terms of procedural aspects. In addition, Bannert (2004) showed that this might be particularly problematic for students with low prior domain knowledge, because they often struggle to use prompts in adequate ways.

Assessing metacognition

Assessment of metacognition and metacognitive processes involves different strategies and methods. Beginning in the 1980s, a number of instruments using different methods have been developed (see Zimmerman, 2008): self-report inventories of students’ strategies such as the LASSI (Learning and Study Strategies Inventory; Weinstein, Schulte & Palmer, 1987), structured interviews like the Self-Regulated Learning Interview Scale (SRLIS; Zimmerman & Martinez-Pons, 1988), and questionnaires like the Motivated Strategies for Learning Questionnaire (MSLQ; Meijs, Neroni, Gijselaers et al., 2019). The MSLQ has been designed to assess cognitive and metacognitive strategy use, including planning, monitoring, and controlling of one’s learning processes during self-regulated learning.

Supporting problem solving in case-based learning environments

As shown above, metacognition is a crucial aspect of self-regulated learning. In addition, metacognitive processes play an essential role in successful problem solving. A successful problem solver uses his or her cognitive and metacognitive skills for analyzing a problem, for developing, choosing, and applying the most promising problem-solving strategy, and for evaluating his or her own progress and possible solutions (Ohtani & Hisasaka, 2018; Winne & Azevedo, 2014). Effective problem solvers, i.e., experts in contrast to novices within a domain, show different problem-solving strategies that make them effective. Experts usually have extensive schemata within their domain that are available for problem solving. Confronted with a familiar problem, they can easily activate an existing problem-solving schema, adapt it to the current problem, and thus solve it. Experts, compared to novices, spend more time analyzing a given problem and usually rely on a forward problem-solving strategy for familiar problems (Boshuizen, Gruber & Strasser, 2020). Experts are able to control, monitor, and evaluate their own problem solving and thus use metacognitive strategies more frequently than novices (Richey & Nokes-Malach, 2015). Prior knowledge or expertise therefore seems to be a strong predictor of successful problem solving. Nevertheless, on the way to becoming a successful problem solver, especially when solving cases, instructional support is helpful. Ertmer and Koehler (2014) showed that students benefit from case studies when instructional prompts keep them focused on relevant details of the presented cases. A study by Harkrider, MacDougall, Bagdasarov, et al. (2013) examined the influence of case presentation and prompting. Results reveal that when cases are presented sequentially, prompts that ask learners to structure the learning material can contribute to better learning outcomes, provided the use of prompts is not exaggerated. When cases are presented simultaneously, unstructured prompts (e.g. asking learners to compare cases for similarities) contributed to elaborated sense-making strategies. Taken together, fostering case-based problem solving through scaffolding seems to support learning processes when applied sensibly and contributes to deeper learning than learning without such instructional aids (Kim & Hannafin, 2011).
Taken together, the theoretical models and empirical evidence presented so far lead us to assume that metacognitive processes, applied adequately, can foster self-regulated learning. As direct training of metacognitive strategies seems to be less effective than more implicit approaches, this research aims at analyzing the effects of indirect, scaffolded metacognitive support in an applied case- and problem-based learning environment. Such learning environments are usually characterized by a focus on ill-structured domains and demand a high level of self-regulated learning skills from students. We also assume that almost all students (especially in higher education) have already developed a more or less elaborate explicit and/or implicit set of self-regulated learning skills. Nevertheless, students often fail to activate these skills, especially during applied problem solving. Therefore, the following study was designed to examine how students can be supported in activating (metacognitive) learning strategies by means of prompting.

Research questions and hypotheses

There is a body of evidence that prompting might support metacognitive processes that contribute to deeper and more successful learning (Bannert & Mengelkamp, 2013). Thus, we assume that the use of active metacognitive prompting should also foster the application of these strategies within a hypermedia-based problem-solving scenario when learning with cases. The term “active” means that such prompts are not completely predetermined; rather, learners have to adapt pre-given strategies to the current learning material. We assume that learners in the prompting condition will demonstrate greater knowledge acquisition than learners in the non-prompting condition (Hypothesis 1). As suggested by Bannert (2004), cognitive load can play a role when prompting metacognitive strategies and should be taken into account. We therefore further investigate the role of cognitive load as a potential influence on learning processes.
In addition, prior knowledge (i.e., the factual and procedural knowledge within a certain domain) is one of the strongest predictors of successful learning (Hattie, 2008). However, there are inconsistent results regarding the role of prior knowledge in supporting metacognition. Bannert (2004), for example, found that learners with low prior knowledge within a domain might have difficulties in using prompts successfully, which may also be due to the additional cognitive load. Kapa (2001), on the other hand, found that metacognitive support had a greater positive effect on students with low prior knowledge than on students with high prior knowledge. Additionally, research on problem solving has shown that experts use metacognitive strategies more frequently than novices do (Richey & Nokes-Malach, 2015). In other words, the use of metacognitive strategies becomes more likely the greater the prior domain expertise. Against this backdrop, we further address the role of prior knowledge and assume that instructional metacognitive support might be helpful on the way to becoming a successful problem solver, especially when solving cases. Thus, learners with no or little prior knowledge should benefit more from metacognitive support through prompts than experienced learners. This leads us to the assumption that the effect of prompting depends on learners’ prior knowledge: learners with low prior knowledge should demonstrate significantly higher knowledge acquisition in the prompting condition than in the non-prompting condition, while for learners with high prior knowledge there should be no significant difference between the two conditions (Hypothesis 2).
Finally, it can be assumed that not only prior knowledge within a domain might have an impact on subsequent learning processes, but also prior knowledge related to metacognitive strategies: a higher level of metacognitive strategy knowledge might promote the use of such strategies and thus contribute to higher learning achievement. Nevertheless, it can also be assumed that the application of such strategies is not automatic. Learners with knowledge about metacognitive strategies might be supported by metacognitive prompts that help them to activate and apply this knowledge in a problem-solving situation (Bannert et al., 2009; Bannert & Mengelkamp, 2013). Thus, we assume that, in the prompting condition, the common use of learning strategies should be a stronger predictor of knowledge acquisition than in the non-prompting condition (Hypothesis 3).

METHOD

Sample and design

Participants were 40 volunteers (n = 27 university students, mainly psychology students; n = 13 nonacademic participants). The average age was 25.90 years (SD = 3.91); 30 participants were female and 10 male. The study used a one-factorial pre-/post-test experimental design. Participants were randomly assigned to one of two conditions (with or without prompting), with 20 participants per cell.
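A minimal sketch of balanced random assignment to the two conditions (Python; the participant identifiers and the fixed seed are illustrative, not taken from the study):

```python
import random

def assign_balanced(participant_ids, seed=42):
    """Shuffle participants and split them into two equally sized conditions."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)   # fixed seed only so the sketch is reproducible
    half = len(ids) // 2
    return {"with_prompting": ids[:half], "without_prompting": ids[half:]}

groups = assign_balanced(range(1, 41))   # 40 volunteers -> 20 per cell
print(len(groups["with_prompting"]), len(groups["without_prompting"]))   # 20 20
```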

Material

The learning environment designed for this study was a case- and problem-based learning scenario on attribution theory and learning. It was designed as a hypermedia learning environment that allowed learners to navigate freely through the content. The program started with the case of “Benno”, a high-school student whose school performance dropped significantly after he moved with his mother to a new town. In addition to the written case description, three video files were accessible, presenting three perspectives on the reasons for Benno’s problems (the perspectives of Benno, his mother, and his teacher). Each of these videos presented different explanations based on attribution theory (Weiner, 1994). Attribution theory provides a framework for how humans explain events or actions in daily life and how they determine the causes of such events; it tries to explain why humans do what they do. Following Weiner (1994), the most important factors affecting attributions are ability, effort, task difficulty, and luck. These attributions can be classified along three causal dimensions: locus of control (internal vs. external), stability (stable vs. unstable), and controllability (controllable vs. not controllable by a person). Within the scenario used here, each of the presented perspectives attributed Benno’s problems in school to different causes.
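This classification can be made explicit in a small lookup table. The following sketch encodes Weiner’s four classic causes along the three causal dimensions named above; the dimension values follow the standard textbook classification, while the function name and example are ours:

```python
# Weiner's four classic attributions, encoded along the three causal
# dimensions described above (standard textbook classification).
ATTRIBUTIONS = {
    # cause:             (locus,      stability,   controllability)
    "ability":           ("internal", "stable",    "uncontrollable"),
    "effort":            ("internal", "unstable",  "controllable"),
    "task difficulty":   ("external", "stable",    "uncontrollable"),
    "luck":              ("external", "unstable",  "uncontrollable"),
}

def describe(cause: str) -> str:
    locus, stability, controllability = ATTRIBUTIONS[cause]
    return f"{cause}: locus={locus}, stability={stability}, controllability={controllability}"

# e.g. a perspective blaming Benno's low effort implies an internal,
# unstable, and controllable cause:
print(describe("effort"))
```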
Learners had access to five pages within the learning environment that presented the theoretical background of attribution theory. This material was taken from textbook chapters and adapted for screen presentation. Learners could navigate freely between the case presentation, the video presentations, and the theoretical background pages (Figure 1). All written and spoken texts were presented in German.

Measures and instruments

Knowledge test

In order to assess participants’ knowledge acquisition, a test was constructed consisting of 16 items (Cronbach’s Alpha = .75) based on the content of the learning environment. The test consisted of fill-in-the-blank questions, open questions, and multiple-choice questions (e.g. “Which different kinds of attribution can be distinguished?”). Every question in the pre-test had the option “I do not know the answer”. All questions addressed the learning objectives of the learning environment, i.e., all questions and possible answers related to attribution theory. The knowledge test was administered as pre- and post-test with identical questions; the maximum score was 16 points. To ensure objectivity, participants’ answers were evaluated by a researcher following a pre-defined sample solution; an answer was rated as correct only if it matched the sample solution.
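A minimal scoring sketch (Python; the item identifiers and scoring key are hypothetical, since the actual test items are not reproduced here):

```python
def score_knowledge_test(answers: dict, scoring_key: dict) -> int:
    """Sum score: one point per answer that matches the pre-defined sample solution.

    The pre-test option "I do not know the answer" simply earns no point,
    like any other non-matching answer.
    """
    return sum(1 for item, answer in answers.items() if answer == scoring_key.get(item))

# Hypothetical two-item example (the real test had 16 items, max. 16 points):
key = {1: "locus of control, stability, controllability", 2: "internal"}
responses = {1: "locus of control, stability, controllability", 2: "I do not know the answer"}
print(score_knowledge_test(responses, key))   # -> 1
```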

Common use of learning strategies

The LIST questionnaire for measuring students’ learning strategies (Wild & Schiefele, 1994) was applied in order to assess students’ common use of learning strategies. For the purpose of our study, we used three subscales: cognitive strategies (31 items; α = .81; e.g. “I try to make connections between the contents of related domains or classes”), metacognitive strategies (11 items; α = .72; e.g. “I try to plan which content of a certain domain I do or don’t have to learn”), and resource-oriented learning strategies (28 items; α = .83; e.g. “I do literature research when my own notes are not sufficient”). The instrument uses a 5-point rating scale (1 = rarely; 5 = very often) on which participants indicate how often they use the particular strategies. The LIST was applied in the post-test only.
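The reported internal consistencies follow the standard Cronbach’s alpha formula; a sketch on simulated responses (the real item-level data are not published):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated responses of 40 participants to the 11 metacognitive-strategy
# items on the 5-point LIST scale:
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(40, 11))
print(round(cronbach_alpha(simulated), 2))
```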

Cognitive Load

In the post-test, a slightly adapted version of the NASA-TLX (Task Load Index; Hart & Staveland, 1988) was used. Measuring participants’ cognitive load is of high interest here, as it provides information about participants’ information processing. The NASA-TLX is a well-established instrument and, in the version used here, consists of five self-report items (task requirements, effort in understanding the content, expectation of success, effort in navigation, and stress). Participants indicated their answers on a five-point rating scale (1 = completely agree; 5 = completely disagree). The scale had an original internal consistency of α = .70; after exclusion of two items assessing expectation of success, internal consistency was α = .88.

Procedure

Participants were randomly assigned to one of the two conditions (with prompting / without prompting). After assignment, the pre-test was administered. Subsequently, participants started with the learning environment, presented on standard IBM-compatible personal computers (Figure 1). Every participant received the following instruction: “The learning environment presented on the computer has been designed to enhance your knowledge acquisition within the domain of attribution theory. Your objective is to learn as much as you can about this topic. This will be tested in a post-test.” In the prompting condition, participants were additionally informed about the nature and use of metacognitive strategies. They were also instructed to choose one of the strategies presented on the prompting screen that appeared after each page turn within the software. After choosing a strategy, they had to fill in the form on the prompting screen by adapting the chosen strategy to their current learning progress (Figure 2). There was no time limit for working with the program; on average, participants in the standard condition needed about 30 minutes and participants in the prompting condition about 45 minutes. When participants left the learning environment, the post-test was administered.

RESULTS

For all data analyses, values of single items were aggregated into scale values. For the knowledge tests, each correct answer was counted with a value of “1” and a sum score was computed for the pre-test and post-test, respectively. For all other scales, the mean value of the single items was computed. Tests for normal distribution revealed that all dependent variables were normally distributed.
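As a sketch of this scoring and screening pipeline (Python, on simulated stand-in data; the raw study data are not published), sum scores and scale means can be checked for normality with, for example, a Shapiro-Wilk test:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated stand-in for the study data: knowledge sum scores (0-16) and
# a cognitive-load scale mean (1-5) for 40 participants.
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "pre":  rng.integers(0, 9, 40),          # knowledge pre-test sum score
    "post": rng.integers(4, 13, 40),         # knowledge post-test sum score
    "load": rng.uniform(1, 5, 40).round(2),  # NASA-TLX scale mean
})

# Shapiro-Wilk normality check for each dependent variable.
for variable in ["pre", "post", "load"]:
    w, p = stats.shapiro(data[variable])
    print(f"{variable}: W = {w:.2f}, p = {p:.3f}")
```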
The results for knowledge acquisition revealed an increase from pre- to post-test. This increase was statistically significant across all groups (F(1, 38) = 163.58, p < 0.001; η2 = .81). However, a MANCOVA using the knowledge pre-test as a covariate, condition as the independent variable, and the knowledge post-test and cognitive load as dependent variables revealed no significant overall main effect (F(2, 36) = 0.13, p = 0.88; η2 = .007). The single contrasts also confirmed no group differences with regard to knowledge acquisition (F(1, 37) = 0.15, p = 0.70; η2 = .004) or cognitive load (F(1, 37) = 0.08, p = 0.78; η2 = .002). The descriptive values indicate a slightly higher knowledge post-test score in favor of the prompting condition as well as a slightly lower cognitive load value (Table 1). Prior knowledge as a covariate showed an overall effect (F(2, 36) = 15.22, p < 0.001; η2 = .46). Prior knowledge had a significant impact on performance in the knowledge post-test (F(1, 37) = 21.22, p < 0.001; η2 = .36) and on cognitive load (F(1, 37) = 6.64, p = 0.001; η2 = .26). Correlation analyses revealed a moderate correlation between prior knowledge and post-test knowledge (r = .60; p < 0.001) and a moderate correlation between prior knowledge and cognitive load (r = .52; p < 0.001).
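The reported analysis corresponds to a MANCOVA with the post-test and cognitive load as dependent variables, condition as factor, and the pre-test as covariate; a sketch with statsmodels on simulated stand-in data (variable names are ours, not from the study):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from statsmodels.multivariate.manova import MANOVA

# Simulated stand-in data: 20 participants per condition.
rng = np.random.default_rng(2)
data = pd.DataFrame({
    "pre": rng.integers(0, 9, 40),
    "condition": ["prompting"] * 20 + ["control"] * 20,
})
data["post"] = data["pre"] + rng.normal(4.4, 2.0, 40)   # post-test depends on pre-test
data["load"] = rng.normal(2.6, 0.8, 40)                 # cognitive-load scale mean

# Multivariate tests for the condition factor and the pre-test covariate.
print(MANOVA.from_formula("post + load ~ C(condition) + pre", data=data).mv_test())

# Covariate-outcome correlations, as reported in the text.
print(pearsonr(data["pre"], data["post"]))
print(pearsonr(data["pre"], data["load"]))
```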
These results do not support the first hypothesis: prompting itself did not affect learning outcomes to a significant extent. In order to test Hypothesis 2, a median split was performed using the median (= 4.0) of the knowledge pre-test results. A 2x2-factorial ANOVA with prior knowledge (high vs. low) and prompting (with vs. without) as independent variables and knowledge post-test results as the dependent measure was computed. The results revealed a significant effect of prior knowledge (F(1, 36) = 14.24, p = 0.001; η2 = .28) and no significant main effect of prompting (F(1, 36) = 1.38, p = 0.249; η2 = .04). The interaction between the two independent variables narrowly missed significance (F(1, 36) = 3.46, p = 0.071; η2 = .09).
The descriptive data reveal that especially learners with low prior knowledge benefitted from prompting (Table 2), although the assumed interaction did not reach significance.
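The median split and the 2x2 ANOVA can be sketched as follows with statsmodels (simulated stand-in data; the study’s actual pre-test median was 4.0):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data: 20 participants per prompting condition.
rng = np.random.default_rng(3)
data = pd.DataFrame({
    "pre": rng.integers(0, 9, 40),
    "condition": ["prompting"] * 20 + ["control"] * 20,
})
data["post"] = data["pre"] + rng.normal(4.4, 2.0, 40)

# Median split on the pre-test, then a 2x2 between-subjects ANOVA with
# prior knowledge (high vs. low) and prompting (with vs. without).
data["prior"] = np.where(data["pre"] > data["pre"].median(), "high", "low")
model = smf.ols("post ~ C(prior) * C(condition)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term
```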
Because the number of participants within the cells was very small, an additional single comparison of the effects of prompting was conducted using Mann-Whitney U tests on the knowledge post-test results within the group with low prior knowledge. The results reveal a significant one-sided effect (U = 19; p = 0.038, one-sided). The same analysis for the group with high prior knowledge did not reach significance (U = 48.5; p = 0.501).
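This non-parametric follow-up corresponds to a one-sided Mann-Whitney U test within the low-prior-knowledge subgroup; a sketch with SciPy (the scores are invented, only the cell sizes mirror Table 2):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Invented post-test scores for the low-prior-knowledge subgroup;
# cell sizes mirror Table 2 (n = 11 with prompting, n = 7 without).
with_prompting = np.array([7, 8, 9, 6, 8, 10, 7, 9, 8, 7, 6])
without_prompting = np.array([4, 6, 5, 7, 8, 5, 6])

# One-sided test: do prompted low-prior-knowledge learners score higher?
u_statistic, p_value = mannwhitneyu(with_prompting, without_prompting,
                                    alternative="greater")
print(f"U = {u_statistic}, one-sided p = {p_value:.3f}")
```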
In order to test the third hypothesis, linear regression models were computed. To this end, we used performance in the knowledge post-test as the dependent variable, and prior knowledge and skills in the use of metacognitive strategies (i.e., self-reported prior use of metacognitive strategies as assessed by the LIST questionnaire) as predictors. For the prompting condition, a stepwise linear regression resulted in both of these variables being significant predictors (prior knowledge: standardized Beta = 0.40, p = 0.02; prior use of metacognitive strategies: standardized Beta = -0.58, p = 0.002; R2 = 0.57, F = 11.26, p < 0.001).
The model for the condition without prompting was also significant (R2 = 0.54, F = 20.79, p < 0.001), but revealed only prior knowledge (standardized Beta = 0.73, p < 0.001), and not prior use of metacognitive strategies (standardized Beta = -0.15, p = 0.362), as a significant predictor. Thus, the results confirm Hypothesis 3, suggesting that metacognitive strategies and prompting together can contribute to the application of subsequent learning strategies, which, in turn, contribute to enhanced knowledge acquisition.
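The per-condition regressions can be sketched as follows (Python/statsmodels on simulated stand-in data; for simplicity, the sketch fits both predictors directly instead of running a stepwise selection, and z-standardizes all variables so that the OLS coefficients are standardized betas):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: post-test score, prior knowledge, and
# self-reported use of metacognitive strategies (LIST scale mean).
rng = np.random.default_rng(4)
data = pd.DataFrame({
    "condition": ["prompting"] * 20 + ["control"] * 20,
    "pre": rng.integers(0, 9, 40).astype(float),
    "metacog": rng.uniform(1, 5, 40),
})
data["post"] = data["pre"] + rng.normal(4.4, 2.0, 40)

# One regression per condition; z-standardizing every variable turns the
# OLS coefficients into standardized betas.
for condition, subset in data.groupby("condition"):
    z = subset[["post", "pre", "metacog"]].apply(lambda s: (s - s.mean()) / s.std())
    fit = smf.ols("post ~ pre + metacog", data=z).fit()
    print(condition, fit.params.round(2).to_dict(), f"R2 = {fit.rsquared:.2f}")
```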

DISCUSSION

This study examined the influence of metacognitive support and prior knowledge on knowledge acquisition within a hypermedia case-based learning environment. There is a large body of evidence that metacognition is a central key to initiating, applying, monitoring, and reflecting on learning, especially in self-regulated learning (Eckhardt et al., 2013). There is also evidence that long-term training of metacognitive strategies helps to support learners (Koch, 2001; Miller & Geraci, 2011). While such direct training strategies are not always beneficial (Jing, 2006), rather indirect approaches like the prompting of metacognitive strategies have been shown to support self-regulated learning, especially in hypermedia learning environments (Bannert, 2004; Bannert & Mengelkamp, 2013).
The aim of our study was to replicate these findings on the one hand and to take into account the effect of students’ cognitive load on knowledge acquisition on the other. In contrast to the findings of Bannert and colleagues, we could not replicate a direct impact of metacognitive prompting on knowledge acquisition: learners in the prompting condition did not demonstrate greater knowledge acquisition than learners in the non-prompting condition (Hypothesis 1). A basic difference in the treatment might be a possible explanation: in the studies by Bannert and colleagues, learners had to navigate through a large hypermedia learning environment, and prompting was applied during navigation and mainly referred to this navigation. The learning environment used in this study was rather well structured, and navigation was clear and implied a linear course. Thus, the prompting was designed to support solving/explaining the problem presented within the case-based scenario. What is more, the prompts were not adapted to learners’ needs, so every participant in the prompting group received the same amount and type of prompting. We could therefore not guarantee that all prompts provided were actually needed by the participants.
As additionally assumed in the first hypothesis, we could not find any influence of students’ cognitive load on their learning outcomes. Students’ self-reported cognitive load was in the medium range across all study conditions. We can only speculate why there is no difference here: on the one hand, it can be seen as positive that the prompting did not demand further cognitive resources; on the other hand, it might indicate that the students did not use the prompts intensively. For further research, an intelligent tutoring system such as the one used by Taub, Azevedo, Bouchet and Khosravifar (2014) might be better suited for this purpose.
The design of the prompts, as suggested by Nokes and colleagues (2011), is another important aspect in relation to students’ prior knowledge, as different prompts stimulate different cognitive processes. The effectiveness of prompting varies depending on students’ prior knowledge as well as on the nature of the task and the learning content. It might be possible that the learning environment itself was not complex enough for the prompting to be sustainably effective in general or, at least, to provide added value.
However, the assumed interaction effect between prompting and prior knowledge likewise failed to reach significance in our study (Hypothesis 2). We assumed that learners with no or little prior knowledge would benefit from metacognitive prompts because these provide central strategies for coping with the information presented within the learning scenario. While the missing interaction effect did not provide evidence for this assumption, a closer non-parametric analysis indeed revealed that learners with low prior knowledge benefited from metacognitive support. For those who already possessed some experience within the learning domain, by contrast, the likelihood of performing better in the knowledge post-test was significantly higher but remained unaffected by metacognitive prompting.
While the generally low level of prior knowledge did not provide ideal conditions for testing this hypothesis, the prerequisites for testing the third hypothesis were given: we assumed that the prompting rather facilitates and activates those learners who already possess and apply metacognitive strategies (Hypothesis 3). Indeed, the findings suggest such an aptitude-treatment interaction, revealing that in the prompting condition, performance in the knowledge post-test was significantly and positively influenced by the availability of metacognitive skills and prior knowledge. It is likely that the prompting was able to activate this knowledge so that it could be applied to the learning environment. In the condition without prompting, the only significant predictor of performance in the knowledge post-test was prior knowledge. Thus, we assume that there was no stimulus and no need for participants to apply their metacognitive repertoire. In other words, the prompting of metacognitive skills contributed to enhanced knowledge acquisition among learners who already possessed the corresponding skills. This is in accordance with prior research showing that especially learners with knowledge about metacognitive strategies benefit from metacognitive support (Bannert et al., 2009; Bannert & Mengelkamp, 2013). However, the activation of these skills within the learning environment is crucial. These findings imply that it might be helpful to first provide a solid repertoire of metacognitive abilities (e.g. by means of training) and then, as a second step, provide scaffolding approaches like prompting.
While this assumption could be confirmed, limitations of the experimental design did not allow us to validate the interaction of expertise and metacognitive prompting. A subsequent quasi-experimental design with real experts and novices might be able to analyze possible interaction effects.
Another limitation derives from the chosen research approach: the short-term intervention used here was able to provide first, basic insights into how prior domain knowledge and metacognitive skills can interact in applied problem solving. Nevertheless, the findings from this study only apply to learners with rather low prior knowledge within the chosen domain. It might be worthwhile to investigate the interplay between metacognitive abilities and knowledge acquisition within a domain or a program over the long run in order to examine how these variables change and interact over time.
In addition, process data (e.g. thinking-aloud protocols) might help to analyze the basic mechanisms of metacognitive support during applied problem solving. Such data might help us to understand how students navigate through the learning program; with this information, we might be able to identify critical moments in the learning program where students need support, which might in turn allow us to prepare the right prompts for these situations in further studies.
Finally, these findings might contribute to the further design of problem-solving course formats or PBL: the outcomes of this study suggest that novice learners can benefit from the fostering of metacognitive processes by means of scaffolding. This approach can be effective when self-regulated learning skills are available but cannot successfully be activated or applied. Thus, it seems crucial to provide opportunities to develop and apply these skills by means of instructional support or devices. Especially at the beginning of courses or programs that require self-regulated learning skills, such support might improve students’ problem-solving abilities. With increasing expertise within a domain and increasing metacognitive abilities, such additional instructional support might become increasingly unnecessary and can be faded out.

Notes

Conflict of interest

The authors declared no conflict of interest.

Funding

None.

Data availability

Not applicable.

Authors’ contributions

JZ designed the study and wrote most parts of the manuscript; CO conducted the study and contributed to the research design; ID was involved in data analyses and wrote minor parts of the manuscript; SM also conducted data analyses and wrote some parts. All authors read and approved the final manuscript.

Figure 1. Sample screen from the learning material.
Figure 2. Example of a metacognitive strategy prompt.
Table 1.
Means (standard deviations in brackets) of the dependent variables. Cognitive load scores range from 1 (highest) to 5 (lowest) self-reported cognitive load.

                      Without Prompting   With Prompting
Knowledge pre-test    3.90 (2.38)         3.75 (2.78)
Knowledge post-test   8.23 (2.73)         8.38 (1.91)
Cognitive load        2.65 (0.76)         2.56 (0.84)
Table 2.
Means (standard deviations in brackets) of knowledge post-test results with prior knowledge as additional independent variable.

                      Low Prior Knowledge                   High Prior Knowledge
                      Without Prompting   With Prompting    Without Prompting   With Prompting
Knowledge post-test   5.86 (2.38)         7.81 (1.98)       9.50 (1.99)         9.06 (1.67)
                      N = 7               N = 11            N = 13              N = 9

REFERENCES

Azevedo, R. (2009). Theoretical, conceptual, methodological, and instructional issues in research on metacognition and self-regulated learning: A discussion. Metacognition and Learning, 4, 87–95.

Bannert, M. (2004). Designing metacognitive support for hypermedia learning. In H. Niegemann, D. Leutner & R. Brünken (Eds.), Instructional design for multimedia-learning (pp. 19–30). Münster: Waxmann.

Bannert, M. (2009). Promoting self-regulated learning through prompts: A discussion. German Journal of Educational Psychology, 23, 139–145.
Bannert, M., Hildebrand, M., & Mengelkamp, C. (2009). Effects of a metacognitive support device in learning environments. Computers in Human Behavior, 25, 829–835.

Bannert, M., & Mengelkamp, C. (2013). Scaffolding hypermedia learning through metacognitive prompts. In R. Azevedo & V. Aleven (Eds.), International Handbook of Metacognition and Learning Technologies (pp. 171–186). Amsterdam: Springer Science.
Berthold, K., Röder, H., Knörzer, D., Kessler, W., & Renkl, A. (2011). The double-edged effects of explanation prompts. Computers in Human Behavior, 27, 69–75.
Boshuizen, H., Gruber, H., & Strasser, J. (2020). Knowledge restructuring through case processing: The key to generalise expertise development theory across domains? Educational Research Review, 29.

Eckhardt, M., Urhahne, D., Conrad, O., & Harms, U. (2013). How effective is instructional support for learning with computer simulations? Instructional Science, 41, 105–124.
Ertmer, P. A., & Koehler, A. A. (2014). Online case-based discussions: Examining coverage of the afforded problem space. Educational Technology Research and Development, 62, 617–636.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive developmental inquiry. American Psychologist, 34, 906–911.
Greene, J. A., Bolick, C. M., & Robertson, J. (2010). Fostering historical knowledge and thinking skills using hypermedia learning environments: The role of self-regulated learning. Computers & Education, 54(1), 230–243.
Hardy, J. H., Day, E. A., & Steele, L. M. (2019). Interrelationships Among Self-Regulated Learning Processes: Toward a Dynamic Process-Based Model of Self-Regulated Learning. Journal of Management, 45(8), 3146–3177.
Harkrider, L. N., MacDougall, A. E., Bagdasarov, Z., Johnson, J. F., Thiel, C. E., & Mumford, M. D., et al. (2013). Structuring case-based ethics training: How comparing cases and structured prompts influence training effectiveness. Ethics & Behavior, 23, 179–198.
Hart, S. G., & Staveland, L. E. (1988). Development of a multi-dimensional workload rating scale: Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam, The Netherlands: Elsevier.
Hasselhorn, M. (2006). Metakognition [Metacognition]. In D. H. Rost (Ed.), Concise Dictionary of Educational Psychology (3rd ed., pp. 480–485). Weinheim: Psychologie Verlags Union.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Jing, H. (2006). Learner resistance in metacognition training? An exploration of mismatches between learner and teacher agendas. Language Teaching Research, 10(1), 95–117.

Kapa, E. (2001). A metacognitive support during the process of problem solving in a computerized environment. Educational Studies in Mathematics, 47, 317–336.

Kim, M. C., & Hannafin, M. J. (2011). Scaffolding problem solving in technology-enhanced learning environments (TELEs): Bridging research and theory with practice. Computers & Education, 56, 403–417.
Koch, A. (2001). Training in metacognition and comprehension of physics texts. Science Education, 85, 758–768.
Lin, X., Hmelo, C., Kinzer, C. K., & Secules, T. J. (1999). Designing technology to support reflection. Educational Technology Research and Development, 47, 43–62.
Little, P., & McMillan, M. (2016). Determining the sustainability of a model of PBL: A Conceptual framework. Journal of Problem-Based Learning, 3(1), 1–8.
Meijs, C., Neroni, J., Gijselaers, H., Leontjevas, R., Kirschner, P., & de Groot, R. (2019). Motivated strategies for learning questionnaire part B revisited: New subscales for an adult distance education setting. Internet & Higher Education, 40, 1–12.
Miller, T. N., & Geraci, L. (2011). Training metacognition in the classroom: the influence of incentives and feedback on exam predictions. Metacognition Learning, 6, 303–314.
Nokes, T.J., Hausmann, R. G., VanLehn, K., & Gershman, S. (2011). Testing the instructional fit hypothesis: the case of self-explanation prompts. Instructional Science, 39, 645–666.
Ohtani, K., & Hisasaka, T. (2018). Beyond intelligence: a meta-analytic review of the relationship among metacognition, intelligence, and academic performance. Metacognition Learning, 13, 179–212.
Piksööt, J., & Sarapuu, T. (2015). Supporting students’ knowledge transfer in modeling activities. Journal of Educational Computing Research, 50, 213–229.
Reiser, B., & Tabak, I. (2014). Scaffolding. In R. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (Cambridge Handbooks in Psychology, pp. 44-62). Cambridge: Cambridge University Press.
Richey, J. E., & Nokes-Malach, T. J. (2015). Comparing four instructional techniques for promoting robust knowledge. Educational Psychology Review, 27, 181–218.

Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26(1-2), 113–125.
Taub, M., Azevedo, R., Bouchet, F., & Khosravifar, B. (2014). Can the use of cognitive and metacognitive self-regulated learning strategies be predicted by learners’ levels of prior knowledge in hypermedia-learning environments? Computers in Human Behavior, 39, 356–367.
Van Meter, P. N., Cameron, C., & Waters, J. R. (2017). Effects of response prompts and diagram comprehension ability on text and diagram learning in a college biology course. Learning and Instruction, 49, 188–198.
Weiner, B. (1994). Motivationspsychologie [Psychology of Motivation]. Weinheim: Psychologie Verlags-Union.

Weinstein, C. E., Schulte, A. C., & Palmer, D. R. (1987). LASSI: Learning and Study Strategies Inventory. Clearwater, FL: H & H Publishing.

Winne, P., & Hadwin, A. (1998). Studying as self-regulated learning. In D. J. Hacker & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Hillsdale, NJ: Erlbaum.

Wild, K.-P. (2005). Individuelle Lernstrategien von Studierenden. Konsequenzen für die Hochschuldidaktik und die Hochschullehre [Individual learning strategies. Consequences for didactics and teaching of higher education]. Contributions to Teacher Training, 23, 191–206.

Wild, K.-P., & Schiefele, U. (1994). Lernstrategien im Studium: Ergebnisse zur Faktorenstruktur und Reliabilität eines neuen Fragebogens [Learning strategies for studying: results of factor structure and reliability of a new questionnaire]. Journal for Differential Psychology and Diagnostics in Psychology, 15, 185–200.

Winne, P., & Azevedo, R. (2014). Metacognition. In R. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (Cambridge Handbooks in Psychology, pp. 63-87). Cambridge: Cambridge University Press.

Zheng, L. (2016). The effectiveness of self-regulated learning scaffolds on academic performance in computer-based learning environments: A meta-analysis. Asia Pacific Education Review, 17(2), 187–202.
Zimmerman, B. J., & Martinez-Pons, M. (1988). Construct validation of a strategy model of student self-regulated learning. Journal of Educational Psychology, 80, 284–290.
Zimmerman, B. J. (2008). Investigating Self-Regulation and Motivation: Historical Background, Methodological Developments, and Future Prospects. American Educational Research Journal, 45(1), 166–183.
Zumbach, J., Rammerstorfer, L., & Deibl, I. (2020). Cognitive and metacognitive support in learning with a serious game about demographic change. Computers in Human Behavior, 103, 120–129.