Development of a Framework for Problem Domain Transference in Health-Related Problem Based Learning and Assessment
Summons, Harmon, Park, Colloc, Yeom, Inder, and Pitt

Abstract

Purpose

To investigate whether a knowledge-based framework and architecture, used for a virtual patient in a specific health domain, can exploit transfer learning to speed the development of virtual patients for problem-based training and assessment in other health domains.

Methods

Analysis of a case study, based on a virtual patient used in the training of pharmacy students, to determine the viability of using generic ontological knowledge that can be transferred to virtual patients in other health domains.

Results

Areas of the virtual pharmacy patient knowledge base, together with their corresponding expected student questions, were identified as generic to other health domains. If the case-study framework were used to develop a new virtual patient for problem-based learning and assessment in a new health domain, these generic target questions could speed up the development of learning stimuli in future projects involving different health domains, such as nurse training in pain management.

Conclusions

With some modification, the framework of the case-study virtual patient was found to support generic expected student questions that could be re-used in virtual patients simulating new clinical conditions.

INTRODUCTION

Problem-based learning (PBL) is a major pedagogical approach in education for healthcare. It uses real-life case scenarios, interactivity and guidance to help students develop skills in critical thinking, knowledge transfer and problem-solving (Wood, 2003).
There has been a long history in the development of learning paradigms in healthcare, with some differences in their interpretation and implementation. In some instances, PBL and case-based learning (CBL) have much in common, although PBL stimulus material is more exploratory and CBL is more often intensively guided by an instructor:
“CBL uses a guided inquiry method and provides more structure during small-group sessions unlike PBL which is an open inquiry approach where facilitators play a minimal role and do not guide the discussion, even when learners explore tangents” (Seitia et al., 2011).
Learner satisfaction and educational attainment resulting from the paradigms often depend on the nature of implementation, and this can produce difficulties in comparison and evaluation. While a study on fourth-year medical students comparing CBL and PBL approaches to learning a topic involving eating disorders found no significant difference in learning outcomes between the two approaches (Katsikitis et al., 2002), more recent studies with medical students comparing CBL (implemented as guided inquiry) and PBL (implemented as open inquiry) found CBL was preferred to PBL (Srinivasan et al., 2007; Seitia et al., 2011).

Problem-based learning in nurse education

The aim of PBL in nurse education is to “improve clinical reasoning skills through problem solving and critical thinking among students” (Wosinski et al., 2018). An early meta-analysis of the literature around PBL in nurse education concluded that the methodology had positive effects on learner training satisfaction, education and skills (Shin & Kim, 2013). However, other studies from that time reported inconclusive results regarding improvements from the use of PBL in nurse education (Zhang, 2014). A 2018 systematic review of undergraduate nursing students’ satisfaction with PBL and its effectiveness as a teaching method indicated that there were inconsistent results reported in the literature, which may have been related to a “…lack of homogeneity of PBL practice in nursing education, the tutor’s role, activities performed and the personal learning environment” (Wosinski et al., 2018, p.68). The systematic review of Sayyah et al. (2017) showed that “using PBL may have a positive effect on the academic achievement of undergraduate medical courses” and suggested that “…teachers and medical education decision makers give more attention on using this method for effective and proper training” (Sayyah et al., 2017, p.691).
While PBL has been introduced as part of a useful paradigm shift in current nursing programs, its implementation sometimes intersects with case-based learning (CBL). A recent systematic review of PBL in undergraduate nursing programs, undertaken by Wosinski et al. (2018), included among its five findings that during PBL “the nursing tutor models clinical reasoning and leadership skills”, “nursing students acquire skills that foster clinical reasoning” and, when “used as intended, nursing students understand its purpose and process” (2018, p. 67). The authors also concluded that tutors needed to be trained to guide students through the PBL process. The pure PBL paradigm is intended for unguided exploration by learners, with feedback and assessment of the quality of their learning decisions. It can nonetheless be useful to provide some guidance through feedback at various stages of a PBL process. This leaves the learner open choices for their ongoing progression, but also provides more information for their subsequent decision-making.

Problem-based learning and technology in health education

Virtual Learning Environments (VLEs) and Virtual Patients (VPs) include a broad range of IT tools and systems that may be implemented in differing modalities, and can address differing learning areas, competencies and educational roles (Harmon et al., 2021). Virtual patients are based on artificial intelligence (AI) architectures and knowledge-model frameworks (Colloc & Sybord, 2003) and are implemented in various ways in health and medical applications, such as chatbots in disease education and prevention (Pereira & Diaz, 2019). The term ‘virtual patient’ in healthcare education is used as a “broad umbrella term for computer-based programs to simulate real-life clinical scenarios” (Hege et al., 2019).
According to Bearman & Cesnik (2001), the aim of a virtual patient is to respond and answer questions from a student in much the same way as a real patient would. The difficulty in the design and implementation of a VP is that the student can ask a question in many ways, including ways not directly related to the condition that the VP is simulating. Historically, VP designs have varied widely, but two major approaches have been: i) a narrative structure, based on decision trees and cause-and-effect scenarios, with a student being guided through correct/incorrect choices; or ii) a problem-solving structure developing clinical reasoning and diagnostic accuracy, where a student has to collect information and make a decision based on their findings (Bearman & Cesnik, 2001).
Traditional methods of improving students’ interpersonal and history-taking skills in the health and medical professions have included employing actors as simulated patients in tutorial practice sessions, assessments and oral examinations, such as the objective structured clinical assessment/examination (OSCA/OSCE) in medicine, health and nursing (Serpell, 2009; APHRA, 2020). The literature has long evidenced the educational advantages of this practice, especially the ease with which repetition provides for standardisation of the scenarios (Wind et al., 2004; Tamblyn et al., 2007; Zayyan, 2011). There are significant resource advantages in using computerised VPs rather than actors, such as a reduction in training time, lower costs compared with employing real actors, and the ability to easily modify the VP appearance, involving characteristics such as race, gender and age.
Technology has been integrated into contemporary health education; in PBL methods this facilitates more automated feedback and guidance. For example, the use of case-of-the-week (COW) problems is widespread in both formative and summative clinical online assessment (Marques & Correia, 2017). A COW is an online clinical exercise, developed as a clinical vignette of a real-world problem. The COW is presented to students (individually or in teams), who are then required to find the most appropriate answers, often to multiple-choice questions relating to the problem (Marques & Correia, 2017).
Peddle et al. (2019) studied the effects of undergraduate nursing students’ exposure to web-based virtual patients. They concluded that the interactions influenced students’ knowledge, attitudes and practices of non-technical skills, encouraging students to learn through making mistakes and providing socialisation towards their future professional role.
Virtual patients are being used to improve communication and interpersonal skills, which are vital for students in the health professions (Banski, 2018). Advances in artificial intelligence (AI) technology and techniques have enabled VPs to be designed to develop medical students’ information gathering and history-taking skills. For example, a VP that presented a 3-D patient image and used natural language recognition (NLR) to test Ohio State University medical students’ interview skills in differential diagnosis achieved a 79%-86% level of accuracy in its responses to student questions (Maicher et al., 2017). The web version of the VP was constructed with a Unity game creation engine and students typed questions to the VP, which responded with text to the students.
Research in question-generating systems is promising, but these generally create questions using natural language processing (NLP) methods that require underlying natural language understanding (NLU) systems. The NLU systems are either rule-based systems, such as that of Maicher et al. (2017), which may be limited to very specific domains, or those based on machine learning, exemplified by Kenny et al. (2010), or deep learning, exemplified by Zini et al. (2019), that require a large amount of data for training and implementation.
Normally, deep learning applications require a large amount of data that has been labelled for specific categories (supervised learning). ‘Transfer learning’ is an artificial intelligence machine learning technique in which an already trained machine learning model is applied as a basis for a different, but related, problem. It can be used to reduce the development time of new deep learning neural networks, especially when limited training data is available for a new application. The weights and architecture of a deep learning network that has been trained on a general problem, e.g., tuberculosis detection in lung X-rays, can form the basis of an architecture with pre-trained weights for a new problem, such as detection of coal workers’ pneumoconiosis (CWP) in lung X-rays.
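The weight re-use described above can be sketched briefly. The following Python fragment (using the PyTorch and torchvision libraries, which are assumptions of this illustration and not part of the VPP work) freezes a backbone pre-trained on a general task and retrains only a new classification head for the related task; the ImageNet weights stand in for the ‘general problem’ of the X-ray example:

  # Minimal transfer-learning sketch; libraries and task are illustrative.
  import torch
  import torch.nn as nn
  from torchvision import models

  # Backbone pre-trained on a general problem (ImageNet weights stand in
  # for the hypothetical tuberculosis detector of the example).
  model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

  # Freeze the pre-trained weights so only the new head is trained.
  for param in model.parameters():
      param.requires_grad = False

  # Replace the final layer with a new head for the related problem
  # (e.g., binary detection of coal workers' pneumoconiosis).
  model.fc = nn.Linear(model.fc.in_features, 2)

  # Only the new head's parameters are given to the optimiser.
  optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

Training then proceeds on the (smaller) new dataset, updating only the replaced head while retaining the general features learned on the original problem.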

METHODS

This paper investigates the possibility of applying transfer learning to re-use the knowledge of an existing VP that was successfully used for PBL with pharmacy students (Newby et al., 2011). Training the VP to simulate different conditions in the specific pharmacy domain was very time consuming. The framework and knowledge base developed for the pharmacy VP were intended to be expandable and re-usable in other health domains, to alleviate the time-consuming training necessary for a new VP in a different health domain.
The investigation takes the form of a case study of an earlier design and implementation of a VP, the virtual pharmacy patient (VPP), to determine whether it contains generic content applicable for transfer to VPs that are based on a similar framework but used in other health domains. The generic content investigated lies in the ontology of the VPP knowledge base, consisting of a domain lexicon and knowledge of domain questions, their appropriate answers, and their sequencing and interaction with other domain questions and answers.
The VPP incorporates a proven framework that was successfully implemented in practice, and had multiple human-computer interface modes (designers, learners, implementers, administrators) that provide a high degree of generality as a PBL framework for applying it to other health domains. This case study focused on the design and teaching principles of the VPP. The VPP was chosen to see whether the principles and architecture that comprise its existing knowledge-base framework could be applied to the development of virtual patients, using a similar framework, in different health domains, thereby shortening VP development time and providing an initial labelled training data set for VPs that may employ architectures other than the VPP’s, such as deep learning.
The VPP had different interfaces for: i) the problem learners (university pharmacy students), who accessed the VPP for real-time interviews and problem-solving exercises and received feedback from the VPP on their performance; ii) the problem designers (university lecturers in the pharmacy domain), who provided the domain problems, the learner questions expected from the students, and appropriate VPP answers to those questions; and iii) the implementers and moderators (university tutors and course administrators) of the learning experience when the VPP was provided to students, who received feedback and analysis of both individual and class learner performance. Using the feedback generated by the VPP for individuals and for aggregated class performance, tutors and lecturers provided additional feedback and guidance to individual students and to the class.

Case study: the virtual pharmacy patient system

The VPP, used for assessment of pharmacy students’ communication, history-taking and diagnostic skills, was developed under an Australian Learning and Teaching Council (ALTC) grant. It was alpha tested at the University of Tasmania and later successfully implemented as an assessment tool at three other Australian universities in 2010 (Newby et al., 2011). Although a learning mode for the VPP was developed, it was not activated in the model employed for the university pharmacy student assessments. Activating this learning mode would have allowed the VPP to adjust its recognition of student questions to include wide variations of questioning and incorrect grammar, something that the pharmacy domain experts did not want, as they expected correct grammar from students. Overall, in the three university implementations for pharmacy student assessments, the VPP took free-speech student questions as input and achieved a question recognition accuracy of 62% for domestic students and 52% for international students, which was competitive with the world’s best recognition at the time (Newby et al., 2011, pp. 8, 56): the DIgital ANimated Avatar (DIANA), created by the University of Florida (Lok et al., 2006), and the Keele University avatar (Connelly, 2008; Keele University, 2007), built for Monash University as part of its ePharm program and demonstrated in 2009 at the Monash Pharmacy Education Symposium in Prato, Italy.
The main reason for choosing the VPP as the basis for this case study is that its design strategy was scalable and generic, intended for transfer to other health domains. Although the domain scenarios (health conditions) used for the initial VPP implementation were limited to three conditions, specifically conditions that are diagnosed by pharmacists, the virtual patient system itself is scalable to conditions and domains other than those related to pharmacy. The domain content is initially determined by the domain teachers in their roles as administrators of the knowledge content of the system domain; however, the knowledge base of the virtual patient system can be expanded both by domain teachers and by students when it is used for formative training. This gives the virtual patient system the potential to be used in most health disciplines where structured questioning is important. In addition, the architecture provides detailed individual and aggregated assessment and feedback for both the learner and the assessor, as well as an assessment of the appropriate sequencing and style of student questions regarding the patient's condition (Summons et al., 2009; Summons et al., 2011; Park & Summons, 2013).
The initial concept for the VPP was to develop it as an application using training data from past pharmacy objective structured clinical examinations (OSCEs). However, no labelled examples of oral OSCE videos or transcripts were available to enable supervised neural-net learning, so the development instead built a generalised system that would construct an ontology for a domain (in this case, the pharmacy conditions), consisting of a domain lexicon and knowledge of domain questions and appropriate answers, which could later be used for AI student-question recognition techniques and training. Hence, the framework design for the VPP system took portability and scalability into other health domains into account.
In the VPP, pharmacy domain experts identified typical patient assessment questions regarding a health domain condition (cough, constipation and gastro-oesophageal reflux disease for the VPP). The VPP framework termed these expected questions ‘target questions’. Responses to these questions were developed with domain experts for each of the conditions, and also for varying severities of the conditions (mild, moderate and acute).
Variations in the way in which a target question could be phrased, and alternate ways of asking it, were termed ‘alias questions’. For a specific domain there can be many aliases associated with a particular target question; however, each target question was matched against only one aspect of the domain. One key aspect of the VPP architecture was the provision of generic target questions that could be transferred to other domains.
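A minimal sketch of this target/alias structure, in Python, is given below; the class, field names and sample questions are illustrative assumptions rather than the VPP’s actual data structures:

  # Illustrative representation of a target question with per-severity
  # answers and alias phrasings; not the VPP's actual schema.
  from dataclasses import dataclass, field

  @dataclass
  class TargetQuestion:
      text: str                                    # canonical expected question
      category: str                                # domain category/sub-category
      answers: dict = field(default_factory=dict)  # severity -> VP response
      aliases: list = field(default_factory=list)  # alternative phrasings

  duration_start = TargetQuestion(
      text="When did the cough begin?",
      category="Duration/Start",
      answers={"mild": "About two days ago.",
               "moderate": "Just over a week ago.",
               "acute": "Almost a month ago now."},
      aliases=["When did you first notice the cough?",
               "How long ago did the cough start?"],
  )

Matching any alias (or the canonical text) maps the student’s question to the single target question, and hence to one aspect of the domain.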
The main problem in communications between a pharmacy student user and the VPP system, as with other virtual patient systems, was allowing students to ask questions in free speech rather than selecting from a limited question set, requiring the VPP to recognise the question that a student asked. The students’ conversation was not limited to questions and so may not have been specific to the health domain under consideration. For example, a student might greet the VPP and say ‘hello, my name is Peter, how can I help you?’, or ‘good morning, how are you?’ The student’s conversation, especially a question relating to the domain, had to be recognised by the VPP to enable it to be mapped to a specific target question, if applicable, so that the VPP could provide a suitable response to the student, or answer a student’s specific domain question.
As indicated earlier, the learning mode was not included in the VPP initial testing; however, the VPP design does incorporate a learning-mode capability. This capability is based on the artificial intelligence simple hill-climbing approach (Javatpoint, 2021) to learn new alias questions for an existing target question. When a user repeatedly asked a question that the system could not recognise, the VPP assumed that the student was either asking a question for which it had no target question, or asking alias questions for a target question that were not yet in its knowledge base.
The design assumption was that the student is asking a question about ‘something’ that corresponds to an existing target question. If the phrasing of the question was not recognised by the system, the student would re-phrase the question but would still be asking about ‘something’, albeit in a slightly different way. When a correctly recognised question was entered, the student was presented with all of their unrecognised questions (since their last correctly recognised question) and asked to indicate whether any of them corresponded to the currently entered and recognised question. In this manner the VP acts in a training mode and ‘learns’ alternative phrasings for its list of expected target questions, adding new alias questions that match a target question and building its lexicon for future matches between student questions and expected questions. Any unknown questions that remain unmatched to existing target questions are flagged for later investigation by the teacher or knowledge engineer/assessment creator, who can liaise with domain experts to either add a new target question, together with its appropriate alias questions, or add the student’s questions to a more appropriate existing target question’s set of aliases.
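This learning-mode behaviour can be sketched as follows, in Python; the matcher, the student confirmation prompt and the response lookup are passed in as assumed callables, since the VPP’s internal interfaces are not reproduced here:

  # Hypothetical sketch of the alias-learning loop described above.
  unrecognised = []   # questions buffered since the last recognised question
  flagged = []        # unmatched questions awaiting teacher review

  def handle_question(question, match, confirm, answer_for):
      """match(question) -> target question or None (assumed matcher);
      confirm(old, target) -> bool (assumed student prompt);
      answer_for(target) -> str (assumed severity-specific response)."""
      target = match(question)
      if target is None:
          unrecognised.append(question)
          return "I'm sorry, could you say that another way?"
      # A recognised question: offer buffered questions as candidate aliases.
      for old in unrecognised:
          if confirm(old, target):
              target.aliases.append(old)   # 'learn' a new alias phrasing
          else:
              flagged.append(old)          # teacher/knowledge-engineer triage
      unrecognised.clear()
      return answer_for(target)

Questions the student declines to link remain flagged for the assessment creator, matching the triage process described above.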

Virtual pharmacy patient user interfaces

The VPP has three interfaces corresponding to each of the participant roles in the assessment: assessment creator/manager (teacher), assessment moderator (tutor or instructor), and assessed learner (pharmacy student).

Student (Learner) interface

The VPP has an interface for students who are being assessed on their style of communication (selection of closed-ended and open-ended questions, repetition and sequencing of questions) and their ability to ask questions pertinent to the information gathering required to diagnose three specific conditions (cough, constipation and gastro-oesophageal reflux disease or GORD) and their severities (mild, moderate and acute). In the VPP trials, students did nine patient assessment sessions. In each assessment they were presented with an image of a person (a 3-D talking head with limited expressions) having a specific condition and severity, until all nine combinations of the three conditions and three severities had been assessed. The VPP student interface, with male and female example patients, is shown in Figure 1. Students input their questions to the VPP as typed text, owing to the immaturity of speech recognition at the time and the classroom assessment environment of multiple students being assessed simultaneously, with the VPP answering as audio speech. Based on the questions asked, individual students received written feedback at the end of each assessment session on the effectiveness of their communication, with indications of areas that needed to be worked on (Figure 2).
This feedback allows students to examine areas that have been missed during their assessment (Figure 3A), along with providing them with feedback on more appropriate questioning style with open or closed questions (Figure 3B).

Tutor interface

The VPP provides reports to tutors, acting as assessment moderators, and to pharmacy lecturers as overall class managers. The reports provide details of individual student performance on specific patient condition assessments, individual student transcripts of assessments, and aggregated class performance over specific conditions, categories and sub-categories. The aggregated report indicates how many students attempted each category for each condition/category/sub-category. This allows the class teacher to get an overall view of class performance and indicates areas in which students require remediation. An example of the aggregated report is shown in Figure 4.
Individual student performance reports allow a tutor to see which conditions and severities have been attempted by a particular student. An example of the assessment history of a specific student (student ID 32-N002) is shown in Figure 5. The tutor can examine a complete transcript of each assessment session for that student, showing the actual text of the questions the student entered to the VPP and the VPP responses, as well as a report that indicates the target questions the VPP matched to the student’s questions and the answers given by the VPP. These reports are shown in Figure 6.

Assessment domain creator/manager interface

Teachers/lecturers take the role of assessment creators/managers and are the system administrators responsible for the content of the clinical domains to be assessed. They can create new, or modify existing, condition domains, categories and sub-categories (Figure 7A); specify and modify the types of expected target questions associated with particular conditions/categories/sub-categories; create or modify the answer (VPP response), the answer type (closed- or open-ended), and the text label for the patient image facial expression (a description sent to the image software module, such as ‘smile’ or ‘frown’) to be generated by the virtual patient image for different severity levels of a condition (Figure 7B); or specify alternative ‘alias’ questions for target questions (Figure 7C).
Other screens allow teachers to indicate the style of questions required from the student for a particular assessment category or domain (starting with open-ended questions, or a greeting, etc.). They can specify the assessment of a student’s question sequence by indicating which questions are required as follow-up questions when specific VPP answers are given. The VPP provides teachers with a list of unmatched questions from specific assessments (Figure 7D). After examining assessment results, teachers can add alias questions to existing target questions, or create a new target question with appropriate aliases, for future assessments. This illustrates the scalability of the VPP, which maintains a dynamic ontology and increases its recognition of student questioning, especially of questions that were not anticipated by the domain experts. The ability of the VPP framework to increase its ontology with use also potentially provides a richer source of transfer learning to other domains.

RESULTS

The VPP framework was found to be advantageous in terms of its assessment and feedback to both students and instructors. The VPP target questions were analysed to investigate commonalities between the three assessment conditions in its knowledge base. Several areas were found to be generic in the nature and content of their corresponding target questions.
Domain experts converted the domain dimensions of health and medical conditions into the framework of the VPP, structured as categories and sub-categories that a student was expected to investigate during the assessment. Although categories and sub-categories were created for a specific health domain, new categories and/or sub-categories could be created depending on the analysis of assessment results by domain experts, as indicated previously. Some categories consisted of standard areas that might apply, and would be expected to be questioned by a student, across many conditions, thus facilitating transfer learning. These included areas such as ‘Medications Taken’, ‘Duration’ (of condition), ‘Other Symptoms’ and ‘General Opening Questions’. These categories were also broken down into sub-categories; for example, the category ‘Duration’ was broken into the sub-categories ‘Start’, ‘Existence’ and ‘Length’ of the condition. Sub-categories enabled finer reporting, allowed for scalability and transferability from the pharmacy domain, and catered for analysis logic to determine appropriate sequencing and style (open-ended or close-ended questions) of student questions. Sub-categories were sometimes created to distinguish the open-ended and closed-ended expected questions contained in a category; for example, ‘Frequency of Cough’ in the cough condition was subdivided into ‘Frequency of Cough-Closed’ and ‘Frequency of Cough-Open’. Other categories were particular to a specific condition, such as the category ‘Normal Bowel Movements’ for the constipation condition; thus the VPP framework was able to accommodate areas particular to new health domains.
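As an illustration, the generic and condition-specific categories described above might be laid out as follows; the Python mapping shape is an assumption, while the names are taken from the examples in the text:

  # Generic categories (with sub-categories) expected to transfer across
  # health domains; layout is illustrative, names follow the text.
  generic_categories = {
      "General Opening Questions": [],
      "Medications Taken": [],
      "Other Symptoms": [],
      "Duration": ["Start", "Existence", "Length"],
      "Frequency of Cough": ["Frequency of Cough-Closed",
                             "Frequency of Cough-Open"],
  }

  # Condition-specific categories are added per domain, e.g. constipation:
  condition_categories = {"Normal Bowel Movements": []}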
The VPP categories were populated with (target and alias) questions that a student would be expected to ask to ensure they had investigated that category. There are many dimensions that are common across healthcare and medical scenarios. These dimensions have specific target questions that can be applied generically across scenarios. The most fundamental area, consisting of variables from many dimensions, is that used when taking the demographics and history of a patient. Details of gender, age or date of birth, name, address, height, weight and many other variables contribute to this area of information gathering. In many cases additional information that may not be available from a real patient, such as blood type or blood pressure, may be input to the system as part of the scenario, to be provided by a VP either in the form of responses to student questions or as a prepared medical chart/history displayed by the VP.
One of the fundamental dimensions, common to all systems, is that of time. Temporal relationships form the basis of many clinical questions (Colloc & Summons, 2015). The system used in the case-study VP is based on the interval algebra developed by Allen (1983), modified by temporal anchor points. Common target questions relating to a specific condition X that establish the existence of a condition (the association of the condition with a person), an anchor point (the beginning of the condition) and a duration for the condition (to the present time) would include:
Do you have X? When did X begin? and How long have you had X?
There would be many questions that correspond to the target questions, such as:
When did you first notice X? and Have you had X for a long time?
The foundational work of James Allen (Allen, 1983) defined an interval algebra, consisting of thirteen interval relations, that provides a calculus for temporal reasoning based on relationships between time intervals. Allen’s interval algebra can be used to express relationships between symptoms or signs that may occur before, starting with, during, or even after a specific condition X. Temporal duration measurements are generally expressed on ordinal, interval or ratio scales; however, a nominal value is sometimes used implicitly to indicate an interval, such as ‘pregnancy’, where the classification is ‘pregnant’ or ‘not pregnant’ in answer to the question “are you pregnant?”.
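The interval comparison underlying such questions can be illustrated compactly. The Python function below classifies two (start, end) intervals into one of Allen’s thirteen relations; it is an illustrative implementation keyed to anchor points, not the VPP’s actual temporal module:

  # Classify the Allen (1983) relation between intervals a and b, each a
  # (start, end) pair with start < end; illustrative implementation.
  def allen_relation(a, b):
      a1, a2 = a
      b1, b2 = b
      if a2 < b1: return "before"
      if a2 == b1: return "meets"
      if a1 == b1 and a2 == b2: return "equal"
      if a1 == b1: return "starts" if a2 < b2 else "started-by"
      if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
      if b1 < a1 and a2 < b2: return "during"
      if a1 < b1 and b2 < a2: return "contains"
      if a1 < b1 < a2 < b2: return "overlaps"
      # Remaining cases are inverses of relations detected above.
      inverse = {"before": "after", "meets": "met-by",
                 "overlaps": "overlapped-by"}
      return inverse[allen_relation(b, a)]

  # e.g., a symptom interval ending exactly as condition X begins:
  print(allen_relation((1, 3), (3, 9)))   # "meets"

Anchor points (such as the onset of condition X) fix one end of an interval, so that answers like ‘it began a week ago’ can be placed on the timeline and compared.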
Another fundamental target question dimension is the magnitude or intensity of a specific condition X. This may be expressed either in ranges or by an absolute value. The magnitude dimension can be expressed by variables that come from either ordinal (advanced, moderate, mild), interval (temperature reading), or ratio scales (pain score).
Frequency is another target question dimension. It can be expressed either as the number of occurrences/repetitions of the condition X, or as a measurement for a factor or variable related with condition X. It can be expressed as ordinal (never, sometimes, often) or ratio (heartrate).
There are other questions that may be more specific to the domain under consideration but are still generic in nature. These may include questions regarding medication, any presenting symptoms, things that relieve or aggravate the condition, requests for a description of a symptom, sign or condition, allergies, and questions regarding past medical history.
An example of questions requiring open-ended (O), close-ended (C), or both (DB or double-barrelled) answers from the VPP case study is given in Figure 8.
While most of the dimensions above are easily translated into new domains or new clinical conditions, there are also questions in the VPP that may have dependencies within, or between, the categories/sub-categories for a specific clinical condition. The VPP framework provides the capability, as shown in Figure 9, for the assessment creator to create reasoning logic for sequencing: questions that are required to be asked following a specific question from the same category (intra-category logic rules) or from a different category (inter-category logic rules). These enforce logic rules for the expected sequencing of student questions. The rules depend on the answer given by the virtual patient to a student question. For example, if the student asks a specific question such as ‘Are you on medication?’ and the virtual patient’s answer is ‘Yes’, then follow-up questions regarding the nature of the medication, or what symptoms the medication is for, are generally required from the student. The converse is also true: if a symptom is described by the VPP, the student would be expected to ask follow-up questions on whether medication is being taken for it. These rules are generally specific to a domain, but there are generic question forms that can be generated, for instance ‘What medication do you take?’ as an expected target question.
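A sequencing rule of this kind can be sketched in Python as below; the rule shape and function are hypothetical, though the medication example mirrors the one just given:

  # Hypothetical inter-category sequencing rule: a 'Yes' to the medication
  # question makes two follow-up target questions mandatory.
  sequencing_rules = [
      {"trigger_target": "Are you on medication?",
       "trigger_answer": "Yes",
       "required_followups": ["What medication do you take?",
                              "What do you take the medication for?"]},
  ]

  def outstanding_followups(asked, vp_answers, rules):
      """Return follow-up questions the student has not yet asked."""
      required = []
      for rule in rules:
          if vp_answers.get(rule["trigger_target"]) == rule["trigger_answer"]:
              required += [q for q in rule["required_followups"]
                           if q not in asked]
      return required

An intra-category rule would have the same shape, with the trigger and follow-up questions drawn from a single category.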
The VPP was seen to be capable of providing a rich ontology in terms of generic expected student questions, associated VP answers, and generic reasoning logic that included sequencing and interrelationships between expected student questions, as well as between questions expected to be asked by a student following specific VP answers. The VPP structure supported the creation of generic target questions, which could be transferred to virtual patients employing the VPP framework but simulating different clinical conditions. The transfer would include all alias questions mapped to the generic target questions, as well as the reasoning logic for sequencing of expected target questions and for reasoning about student questioning following VPP responses, for example indicating that the VPP had already responded to a repeated student question, or expecting close-ended or follow-up questions after a VPP response. Some additional, but minimal, programming would be required to translate the generic target and alias questions to a specific condition, for example to transfer the ‘X’ in a target question from cough to pain.
These could be incorporated through transfer learning within the knowledge base of a VP that employed the same framework and architecture as the VPP, but in a different health domain. Alternatively, if the new domain VP was based on another architecture, such as a machine learning neural net or a deep learning model, then the alias questions could be used as a labelled input dataset for supervised learning. The target questions would be the desired outputs, representing classifications in the new domain that corresponded to generic VPP categories and sub-categories. Both results would significantly hasten virtual patient development in the new domain.
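For the alternative-architecture case, a hedged sketch of treating (alias question, target question) pairs as a labelled dataset follows, in Python; scikit-learn is an assumed library here, and the sample questions and labels are illustrative:

  # Aliases become training inputs; their target questions become labels.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  aliases = ["When did you first notice the pain?",
             "Have you had the pain for a long time?",
             "Are you taking anything for it?",
             "What medicines are you on?"]
  targets = ["duration_start", "duration_length",
             "medication_taken", "medication_taken"]

  classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
  classifier.fit(aliases, targets)   # supervised learning on transferred pairs

  print(classifier.predict(["How long have you had the pain?"]))

In practice many more alias examples per target question would be needed, but the transferred ontology supplies exactly this labelled seed data.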

CONCLUSIONS

This paper used the VPP framework and architecture as a case study in a specific health domain to investigate whether it possessed mechanisms capable of providing parts of an ontology that could be used to shorten the development of VPs in different health domains. The scalability of the VPP knowledge base for a specific domain was demonstrated, in terms of mechanisms to maintain and expand its expected student target questions, categories and sub-categories, as well as its capability for increasing student question recognition and associated VPP answers through evolution of the alias questions for specific target questions via its learning ability. The scalability and learning ability that apply to the generic components of the VPP increase the ontology that can be created and made available for transfer to other domains. The framework and implementation of the VPP were seen to be capable of generating generic components that may be applied across health domains for different clinical conditions.
Future work is indicated to provide a proof-of-concept assessment of the efficiency of the transfer learning. This might be achieved through an implementation of the pharmacy VPP and a test of the generic components transferred to a VP in a new health domain, for example a VP used in a PBL formative assessment of nursing students’ knowledge of acute pain management for a gastro-intestinal patient.

Figure 1. Male and female virtual patients for the virtual pharmacy patient student assessment.
Figure 2. Individual student feedback.
Figure 3. Unreported questions (A) and open/closed question (B) feedback.
Figure 4. Aggregated class assessment data reports.
Figure 5. Student assessment history.
Figure 6. Individual student questions covered and actual student text transcript.
Figure 7. (A-D) Assessment creation and management interfaces.
Figure 8. Examples of virtual pharmacy patient alias questions and answers.
Figure 9. Reasoning logic rule creation for expected question sequencing.

REFERENCES

Allen, J.F. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11), 832–843.

APHRA (2020). What is the OSCE? Nursing and Midwifery Board, Australian Health Practitioner Regulation Agency. Retrieved February 12, 2021, from https://www.nursingmidwiferyboard.gov.au/Accreditation/IQNM/Examination/Objective-structured-clinical-exam.aspx

Banski, F., Beilby, J., Quail, N., Allen, P.J., Brundage, S.B., & Spitalnick, J. (2018). A clinical educator’s experience using a virtual patient to teach communication and interpersonal skills. Australasian Journal of Educational Technology, 34(3), 60–73.

Bearman, M., Cesnik, B., & Liddell, M. (2001). Random comparison of ‘virtual patient’ models in the context of teaching clinical communication skills. Medical Education, 35(9), 824–832.

Colloc, J., & Sybord, C. (2003). A multi-agent approach to involve multiple knowledge models and the case base reasoning approach in decision support systems. Proceedings of the 35th IEEE Southeastern Symposium on System Theory (SSST’03), Morgantown, USA, 247–251.

Colloc, J., & Summons, P. (2015). An analogical model to design time in clinical objects. Journées RITS, SGBM Dourdan (pp. 121–123).

Connelly, D. (2008). Avatars help Keele students hone skills [Electronic version]. The Pharmaceutical Journal, 280, 249. Retrieved June 28, 2010, from http://www.pharmj.com/pdf/articles/pj_20080301avatars.pdf

Harmon, J., Pitt, V., Summons, P., & Inder, K.J. (2021). Use of artificial intelligence and virtual reality within clinical simulation for nursing pain education: A scoping review. Nurse Education Today, 97, 104700.

Hege, I., Kononowicz, A.A., Tolks, D., Edelbring, S., & Kuehlmeyer, K. (2019). A qualitative analysis of virtual patient descriptions in healthcare education based on a systematic literature review. BMC Medical Education, 16(146), 1–11.

Javatpoint. (2021). Hill climbing algorithm in artificial intelligence. Retrieved June 12, 2021, from https://www.javatpoint.com/hill-climbing-algorithm-in-ai

Katsikitis, M., Hay, P.J., Barrett, R.J., & Wade, T. (2002). Problem- versus case-based approaches in teaching medical students about eating disorders: A controlled comparison. Educational Psychology, 22(3), 277–283.

Keele University. (2007). Virtual patient demonstration. Retrieved June 28, 2010, from keele.ac.uk/schools/pharm/explore/vp.htm

Kenny, P.G., Parsons, T.D., & Garrity, P. (2010). Virtual patients for virtual sick call medical training. Proceedings of the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) (pp. 1–13).

Lok, B., Ferdig, R., Raij, A., Johnsen, K., Dickerson, R., Coutts, J., et al. (2006). Applying virtual reality in medical communication education: Current findings and potential teaching and learning benefits of immersive virtual patients. Virtual Reality, 10(3), 185–195.

Maicher, K., Danforth, D., Price, A., Zimmerman, L., Wilcox, B., Liston, B., et al. (2017). Developing a conversational virtual standardized patient to enable students to practice history-taking skills. Simulation in Healthcare, 12(2), 124–131.

Marques, P.A.O., & Correia, N.C.M. (2017). Nursing education based on hybrid problem-based learning: The impact of PBL-based clinical cases on a pathophysiology course. Journal of Nursing Education, 56(1), 60.

Newby, D.A., Jin, J.S., Summons, P.F., Athauda, R.I., Park, M., Schneider, J.J., et al. (2011). Development of a computer-generated digital patient for teaching and assessment in pharmacy: Final report. Australian Learning and Teaching Council, 75.

Park, M., & Summons, P. (2013). An efficient virtual patient image model: Interview training in pharmacy. International Journal of Bio-Science and Bio-Technology, 5, 137–146.

Peddle, M., Bearman, M., McKenna, L., & Nestel, D. (2019). Exploring undergraduate nursing student interactions with virtual patients to develop ‘non-technical skills’ through case study methodology. Advances in Simulation, 4(2), 1–11.

Pereira, J., & Díaz, Ó. (2019). Using health chatbots for behavior change: A mapping study. Journal of Medical Systems, 43(135).

Sayyah, M., Shirbandi, K., Saki-Malehi, A., & Rahim, F. (2017). Use of a problem-based learning teaching model for undergraduate medical and nursing education: A systematic review and meta-analysis. Advances in Medical Education and Practice, 8, 691–700.

Seitia, S., Bobby, Z., Ananthanarayanan, P., Radhika, M., Kavitha, M., & Prashanth, T. (2011). Case based learning versus problem based learning: A direct comparison from first year medical students perspective. WebmedCentral Medical Education, 2(6).

Serpell, J.W. (2009). Evolution of the OSCA-OSCE-clinical examination of the Royal Australasian College of Surgeons. ANZ Journal of Surgery, 79(3), 161–168.

Shin, I-S., & Kim, J-H. (2013). The effect of problem-based learning in nursing education: A meta-analysis. Advances in Health Sciences Education, 18(5).

Srinivasan, M., Wilkes, M., Stevenson, F., Nguyen, T., & Slavin, S. (2007). Comparing problem-based learning with case-based learning: Effects of a major curricular shift at two institutions. Academic Medicine, 82, 74–82.

Summons, P.F., Newby, D.A., Athauda, R.I., & Park, M. (2011). Modelling a simulated pharmacy patient. Proceedings of the 9th International Industrial Simulation Conference (ISC’2011) (pp. 68–72), Venice, 6–8 June.

Summons, P.F., Newby, D.A., Athauda, R.I., Park, M., Shaw, P., Pranata, I., et al. (2009). Design strategy for a scalable virtual pharmacy patient. Proceedings of the 20th Australasian Conference on Information Systems (pp. 96–110), Melbourne, 2–4 December, Australia.

Tamblyn, R., Abrahamowicz, M., Dauphinee, D., Wenghofer, E., Jacques, A., Klass, D., et al. (2007). Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA, 298(8), 993–1001.

Wind, L.A., Van Dalen, J., Muijtjens, A.M., & Rethans, J.J. (2004). Assessing simulated patients in an educational setting: The MaSP (Maastricht Assessment of Simulated Patients). Medical Education, 38(1), 39–44.

Wood, D.F. (2003). Problem based learning. British Medical Journal, 326(7384), 328–330.

Wosinski, J., Belcher, A.E., Dürrenberger, Y., Allin, A-C., Stormacq, C., & Gerson, L. (2018). Facilitating problem-based learning among undergraduate nursing students: A qualitative systematic review. Nurse Education Today, 60, 67–74.

Zayyan, M. (2011). Objective structured clinical examination: The assessment of choice. Oman Medical Journal, 26(4), 219–222.

Zhang, W. (2014). Problem based learning in nursing education. Advances in Nursing, 1, 1–5.

Zini, J.E., Rizk, Y., Awad, M., & Antoun, J. (2019). Towards a deep learning question-answering specialized chatbot for objective structured clinical examinations. Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), 1–9.