1.0 Introduction

1.1 Background to the UGME Program

The College of Medicine at the University of Saskatchewan offers a four-year undergraduate medical education program. Information regarding admission to the program is available online.

The MD program is designed to ensure that participants graduate with a common foundation of knowledge, skills, values and attitudes. This general professional education prepares undifferentiated graduates for subsequent education in primary or specialty care areas. Those with a research interest may consider applying to the MD/MSc or MD/PhD program.

The curriculum is under the direction of the Curriculum Committee, which reports directly to the Faculty Council of the College of Medicine.

The educational approach underlying our curricular planning is learning-centered, making use of increasingly complex and relevant cases within the following broad approaches: Cooperative, Active, Self-Directed and/or Experiential learning (i.e., CASE-based). Students benefit from early and frequent patient contact, a solid grounding in the basic biomedical sciences, and the frequent use of integrated case studies to link basic and clinical science learning. Our graduates are known for their sound clinical competence and initiative as they enter postgraduate residency education programs across the country. A Statement of Educational Philosophy was presented to Faculty Council in March 2010.

Years One and Two of the program run from late August to May. Clerkship begins in third year. Year Three runs from August to mid-August the following year, followed immediately by Year Four from mid-August through April.

1.2 Purpose of the Evaluation

This evaluation strategy is implemented by the Program Evaluation Sub-Committee, a sub-committee of the Curriculum Committee that reports to the Curriculum Committee Chair.

Mandate
To establish formal, ongoing program evaluation procedures to demonstrate the extent to which the College of Medicine is achieving its educational objectives. This strategy complies with Accreditation elements 8.4 and 8.5, which pertain to evaluation of program effectiveness, as stated below:

8.4. A medical school collects and uses a variety of outcome data, including national norms of accomplishment, to demonstrate the extent to which medical students are achieving medical education program objectives and to enhance medical education program quality. These data are collected during program enrollment and after program completion.

8.5. In evaluating medical education program quality, a medical school has formal processes in place to collect and consider medical student evaluations of their courses, clerkships, and teachers, and other relevant information.

To achieve these elements, several sources of data are gathered, including measures of student satisfaction with their courses, clerkship rotations, and instructors, as well as outcome data from a variety of sources. These data will be used by the Curriculum Committee, its sub-committees, and the Year and other committees and working groups in curriculum design.

Our Statement of Educational Philosophy (March 2010) states, “We will use the most advanced and effective practices of evaluation to determine at both the course and program levels the extent to which (a) the intended curriculum has been implemented and (b) goals and objectives of our program have been realized.”

OBJECTIVES:

  • Provide on a regular basis a variety of high quality and timely (a) outcome data and analyses (including national examinations of accomplishment) and (b) student evaluations of courses, clerkships, and instructors to the Curriculum Committee so that it may:
    • monitor the extent to which the planned changes to the UGME curriculum have been implemented
    • ensure that current and future curriculum changes meet program goals and objectives
  • Monitor the implementation of the UGME Program Evaluation Strategy

2.0 Approach to the Evaluation

2.1 Key Principles

The development and implementation of the UGME Program Evaluation Strategy is based on the following key principles:

2.1.1 Collaborative
The strategy presented in this document takes a collaborative approach to the evaluation of the UGME Program. The evaluation has been, and will continue to be, a negotiated process (Guba & Lincoln, 1989; Louie, Byrne, & Wasylenki, 1996; O’Sullivan, 2004). It is characterized by a significant degree of collaboration among key stakeholders, including administration, faculty, and students, in both its development and implementation (Cousins, Donohue, & Bloom, 1996; Stern, 1996). Because responsibility and decision making are shared by key stakeholders, the evaluation is responsive to the needs of the UGME Program as well as those of program stakeholders (O’Sullivan, 2004). It is anticipated that this collaborative approach will increase stakeholder cooperation and involvement in the evaluation, improve receptivity to the findings, and serve to build evaluation capacity within the College of Medicine.

2.1.2 Centralized
This strategy involves a centralized system administered through the Undergraduate Medical Education Office. It should be noted that the evaluation of the UGME Program is the responsibility of the MD Program Evaluation Sub-Committee, which reports directly to the Curriculum Committee. The centralization of the evaluation process will facilitate the overall evaluation of the undergraduate curriculum as well as curricular change (Gerrity & Mahaffy, 1998).

2.1.3 Reflective
The UGME Program Evaluation Strategy is designed to promote reflective practice. As part of the reflective process, Year Chairs and Course Directors are required to respond to student feedback. In this way, the evaluation will be central to curricular change and ongoing program development (Hendry, Cumming, Lyon, & Gordon, 2001; Louie et al., 1996; Spratt & Walls, 2003).

2.1.4 Student Involvement
Similar to evaluation strategies currently employed by the University of Manitoba and the University of British Columbia, the UGME Program Evaluation Strategy is characterized by considerable student involvement. As such, it facilitates curricular improvement and student learning through the integration of the curriculum planning and change processes (Louie et al., 1996). Students are actively involved in the ongoing evaluation and monitoring of courses and clinical rotations. They are encouraged to express their opinions and to provide feedback on content and pedagogical strategies as well as to make suggestions for improving the exchange of information.

2.1.5 Timely
The importance of acknowledging and responding to feedback in a timely fashion is recognized by the evaluation strategy (Hendry et al., 2001). Timely feedback may, when appropriate, allow students to “witness changes to a course as they experience it, rather than moving on without ever knowing whether their recommendations had any effect” (p. 336). As well, the evaluation system supports staff development by providing practical, timely feedback to faculty. Information about the implementation and outcomes of the UGME Program will be communicated to key stakeholders, including program administrators, faculty and students, on a regular basis (Smith, Herbert, Robinson, & Watt, 2001; Stern, 1996; University of Saskatchewan, 2002).

2.1.6 Reliable and Valid
In order to ensure the reliability and validity of the findings of the evaluation of the UGME Program, data and methodological triangulation will be employed (Coombes, 2000; Milburn, Fraser, Secker, & Pavis, 1995; Whitman & Cockayne, 1984). Data will be examined from different sources and over time and a combination of qualitative and quantitative research methods will be used. In addition, all evaluation instruments will be designed in consultation with key stakeholders. Summary reports will be reviewed by key stakeholders in order to validate the findings.

2.1.7 Professional Standards
Our Statement of Educational Philosophy (March 2010) states, “We will use the most advanced and effective practices of evaluation to determine at both the course and program levels the extent to which (a) the intended curriculum has been implemented and (b) goals and objectives of our program have been realized.”
The evaluation of the UGME Program is therefore guided by the standards established by the Joint Committee on Standards for Educational Evaluation (Fitzpatrick, Sanders, & Worthen, 2004; Issel, 2004; Joint Committee on Standards for Educational Evaluation, 1994). Specifically, the evaluation will be: (1) informative, timely, and will meet the needs of key stakeholders (Utility Standard); (2) realistic, prudent, diplomatic, and economical (Feasibility Standard); (3) conducted legally and ethically protecting the rights of those involved (Propriety Standard); and (4) comprehensive and will communicate the findings accurately and appropriately (Accuracy Standard).

2.2 Metaevaluation

The UGME Program Evaluation Strategy will be monitored on an ongoing basis by the MD Program Evaluation Sub-Committee to ensure that: (1) the design is feasible; (2) activities are completed as planned and in a timely manner; and (3) instruments and products (data and reports) are of high quality (Fitzpatrick et al., 2004; Scriven, 1991). The strategy will be modified as needed and as appropriate.

2.3 Evaluation Model

The model developed for the purpose of the evaluation of the UGME Program (see Figure 1) provides for the collection of formative (process and outcome) as well as summative (outcome) data. Formative data will be used to monitor the process of curricular change, to suggest and support additional changes to the curriculum, and to help understand what was done to achieve program outcomes by identifying gaps between program outcomes and implementation objectives (Gerrity & Mahaffy, 1998; O’Sullivan, 2004; Scriven, 1991). Furthermore, process evaluation data will provide a context for interpreting the findings of the outcome and impact evaluation (Issel, 2004). Formative outcome evaluation data, on the other hand, will primarily serve to answer the question: To what extent were the outcome objectives of the UGME Program achieved? (Nestel, 2002; Patton, 1998). It is anticipated that all formative data will be timely, concrete, and useful. Findings will be communicated to program administrators, faculty, and students on a regular basis.

Summative evaluation data will assist program administrators when making judgments about the overall merit (or worth) of the UGME Program and to assess the achievement of outcome objectives (Fitzpatrick et al., 2004; O’Sullivan, 2004; Rossi, Freeman, & Lipsey, 1999). These data may also be used, for example, to determine the generalizability of curricular changes, the need for further restructuring of the curriculum, and/or the allocation of resources (Rossi et al., 1999; Scriven, 1991). Summative data will be used by external evaluators for accreditation purposes.

This strategy will consist primarily of process and outcome evaluations. However, some specific sources of data will also assess the unmet needs of medical students, reflecting needs assessment. The three evaluation components are discussed below.

Needs Assessment
Needs assessments will help to identify and measure the level of unmet needs within the UGME program at the U of S. Essentially, needs assessments will detect areas in which students may need additional training or preparation. Measures which may help detect areas of unmet need include the Program Level Objectives self-assessment (i.e., items which receive low overall ratings may be areas of unmet need) and comments provided through the SCRC and SMSS.

Process Evaluation
Process evaluation components of the evaluation framework will determine the extent to which the UGME curriculum is being implemented as intended. Specifically, this will examine the extent to which various intended aspects of the UGME program are:

  • actually being delivered
  • to the intended students
  • in the intended amount
  • at the intended level of quality

Specifically, the intended and actual goals, objectives, inputs, activities, and outputs of the UGME will be identified. Then, any discrepancies between what is intended and what is actually delivered will be highlighted. Measures included in the process evaluation component of this framework include course evaluations, examination reviews, and feedback from the SCRC.

Outcome Evaluation
Outcome evaluations measure the extent to which students are achieving various outcomes in accordance with the UGME’s goals and objectives. Such outcomes may include performance on the MCCQE and achievement of the College’s goals and objectives as measured through self-assessments and PGY1 evaluations.

2.4 Objectives of the Evaluation

Based on the evaluation model presented above, the following objectives were developed for the UGME Program Evaluation Strategy:

Formative Evaluation (Implementation Issues)
1. To assess the extent to which the curriculum is implemented as intended.
2. To assess the extent of the vertical and horizontal integration of content and competencies across the curriculum/Years.
3. To determine the extent to which the specified competencies were incorporated within the planned UGME curriculum.
4. To identify factors that facilitated as well as inhibited the implementation of the UGME Program.

Formative Evaluation (Outcomes/Impacts)
5. To identify Best Practices as they relate to the implementation of the program.
6. To identify the strengths and weaknesses of the UGME Program.
7. To determine the overall level of satisfaction of key stakeholders with the undergraduate medical program as appropriate.
8. To evaluate the extent to which the goals/objectives of individual courses and clinical clerkships are achieved.
9. To determine the level of knowledge/skill retention by students over time.
10. To determine the extent to which the specified competencies were acquired.
11. To determine the extent to which the program improved students’ educational skills (e.g., approaches to learning, communication skills, acquisition of information).

Summative Evaluation
12. To identify unanticipated outcomes related to the UGME Program.
13. To identify the most relevant knowledge/skills acquired through the program.
14. To evaluate the extent to which the overall goals/objectives of the UGME Program were achieved.
15. To assess the preparation of the graduates for clinical careers.
16. To identify curriculum content that will meet the needs of current and, possibly, future clinical practice.

Recommendations
17. To provide feedback to the MD Program Evaluation Sub-Committee and Curriculum Committee to facilitate the future development and/or implementation of the UGME Program.

3.0 Methodology/Sources of Data

Table 1 presents an overview of the methodology/sources of data for the UGME Program Evaluation Strategy. Please see Appendix A for guidelines for evaluations and educational research conducted outside of this framework.

(Table 1 is provided in the PDF version of this strategy.)

3.2 Internal Sources of Data

3.2.1 Program Level Objectives Self-Assessment
The College has several stated program level objectives reflecting the Physician as Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar, and Professional. In order to better understand the extent to which the College is achieving these objectives, students complete self-assessments rating themselves currently, retrospectively as of the first day of clerkship, and retrospectively as of the first day of medical school. Students complete this self-assessment after core rotations have finished. Research indicates that aggregate self-assessments may serve as accurate indicators of performance (D’Eon et al., 2008; D’Eon & Trinder, 2013; Peterson et al., 2012). This source of data complies with Accreditation elements 8.4 and 8.5 as it involves student evaluations of the College’s Program Level Objectives and serves as a source of outcome data. Comparisons of the responses given by Regina and Saskatoon students help satisfy Accreditation element 8.7, which requires students at all instructional sites to have comparable educational experiences.
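For illustration only, a minimal sketch of how class-level (aggregate) means at each time point might be computed from such retrospective pre/post self-assessment ratings; the rating scale, field names, and values below are hypothetical and are not drawn from the actual instrument.

    import statistics

    # Hypothetical ratings (1-5) from one class on a single program objective
    # (e.g., "Communicator"): current, retrospective to the first day of
    # clerkship, and retrospective to the first day of medical school.
    ratings = [
        {"now": 4, "clerkship_start": 3, "med_school_start": 1},
        {"now": 5, "clerkship_start": 3, "med_school_start": 2},
        {"now": 4, "clerkship_start": 4, "med_school_start": 2},
        {"now": 5, "clerkship_start": 4, "med_school_start": 1},
    ]

    # Aggregate (class-level) means, in the spirit of the grouped
    # self-assessment evidence cited above.
    for point in ("med_school_start", "clerkship_start", "now"):
        mean = statistics.mean(r[point] for r in ratings)
        print(f"{point}: {mean:.2f}")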

3.2.2 Course Evaluations
In compliance with Accreditation element 8.5, a formal process of collecting and using student evaluation data has been established. Each course is evaluated every second year unless the course has undergone significant changes or the Course Director requests an evaluation. Typically, courses that receive negative evaluations or have notable site differences are evaluated yearly until issues have been addressed. For an established curriculum, this sampling process results in at least half of the courses receiving a formal evaluation each year.

The evaluation form is administered to students using One45. Course evaluations are sent out the day of the final assessment and usually left open for four weeks after the course so students have the chance to comment on the exam. A sampling methodology is used where approximately one third of the students in Saskatoon and half of the students in Regina are selected to complete the evaluation. This method is intended to reduce evaluation fatigue and has been found to result in high response rates and reliable responses (Kreiter & Lakshman, 2005).
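As a rough sketch of the site-stratified sampling described above (roughly one third of Saskatoon students and half of Regina students), assuming only that a class list is available for each site; the student identifiers below are hypothetical and any integration with One45 is out of scope.

    import random

    # Approximate sampling fractions described in the strategy:
    # roughly one third of Saskatoon students, half of Regina students.
    SAMPLING_FRACTIONS = {"Saskatoon": 1 / 3, "Regina": 1 / 2}

    def draw_evaluation_sample(students_by_site, seed=None):
        """Randomly select students at each site to receive a course evaluation."""
        rng = random.Random(seed)
        sample = {}
        for site, students in students_by_site.items():
            fraction = SAMPLING_FRACTIONS.get(site, 1.0)  # default: sample everyone
            n = max(1, round(len(students) * fraction))
            sample[site] = rng.sample(students, n)
        return sample

    # Hypothetical class lists for illustration only.
    class_lists = {
        "Saskatoon": [f"stu{i:03d}" for i in range(1, 76)],
        "Regina": [f"stu{i:03d}" for i in range(76, 106)],
    }
    selected = draw_evaluation_sample(class_lists, seed=1)
    print({site: len(ids) for site, ids in selected.items()})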

An evaluation report is generated and sent to the Course Director, the appropriate Year Chair, the Associate Dean Education, the Assistant Dean Academic, the Assistant Dean Curriculum, the Assistant Dean Quality, and the Chairs of the Curriculum Delivery, Assessment, and Curriculum Quality Review Sub-Committees, as well as other appropriate personnel at relevant sites. The Chair of the Assessment Sub-Committee is also sent a file listing courses that may have assessment concerns. For courses with students at multiple sites, responses given by students at different sites are compared, which meets the conditions of Accreditation element 8.7. Proposed major changes that impact curricular mapping are made in consultation with the Curriculum Specialist and brought to the Year Committee for approval. Once approved by the Year Committee, changes are then submitted to the Curriculum Quality Review Sub-Committee (CQRSC) for approval. Once approved by the CQRSC, recommendations are presented to the Curriculum Committee. If approved, the changes are then implemented.

The roles and responsibilities of key stakeholders are summarized below as are the sequential steps involved in the course evaluation process (Figure 5).

3.2.3 Clerkship Rotation Evaluations
In compliance with CACMS element 8.5, a formal process for collecting and using student evaluations of clerkship rotations has been established. Clerks are sent a standard clerkship rotation evaluation via One45 at the end of each rotation. Currently, clerks evaluate every rotation they complete. Results from each rotation are collated every three months, alternating between basic site-specific reports created through One45 and more complex reports comparing Regina and Saskatoon campuses. The basic reports are sent to the appropriate rotation director and coordinators as well as the Clerkship Chair. Any serious issues arising from these reports will be addressed as appropriate by the Clerkship Committee, with any resulting proposed curricular changes being addressed in the same manner as the more thorough rotation reports, described below.

Site evaluation summaries comparing rotations at different campuses are generated to meet the requirements of element 8.7, which states that students at all sites must have equivalent experiences. These reports, along with reports comparing all rotations, are sent to the Clerkship Chair, appropriate Rotation Directors, appropriate tri-site Rotation Coordinators, the Associate Dean Education, the Assistant Dean Academic, the Assistant Dean Curriculum, the Assistant Dean Quality, and the Chairs of the Curriculum Delivery, Assessment, and Curriculum Quality Review Sub-Committees, as well as other appropriate personnel at different sites. A list of rotations that may have assessment concerns is sent to the Chair of the Assessment Sub-Committee. Rotation Directors complete Rotation Evaluation response forms that they submit to the Clerkship Chair. Findings are then discussed at clerkship meetings. The Clerkship Chair may further review evaluations of all rotations, identify rotations that have potential problems, and schedule meetings with the appropriate Rotation Directors to advise them of identified issues. The Rotation Directors may then meet with the tri-site Rotation Coordinators to develop the process for implementing major changes to a rotation, working with departments to bring the changes into effect. Proposed major changes that impact curricular mapping are made in consultation with the Curriculum Specialist and brought to the Clerkship Committee for approval. Once approved by the Clerkship Committee, changes are then submitted to the Curriculum Quality Review Sub-Committee (CQRSC) for approval. Once approved by the CQRSC, recommendations are presented to the Curriculum Committee. If approved by the Curriculum Committee, changes are implemented by the Rotation Directors and appropriate departments.

Electives are evaluated in a similar manner to that described above. Students are sent an evaluation form through One45 at the end of each elective, with specific forms for internal and external electives. Results for internal electives are distributed following a similar process to that described above. To help protect student anonymity, both internal and external elective reports are aggregated until at least three students have completed the same elective. After that point, elective-specific results are released.

The roles and responsibilities of key stakeholders are summarized below, as are the sequential steps involved in the rotation evaluation process.

3.2.4 Instructor Evaluations
In compliance with Accreditation element 8.5, a formal process for collecting and using information from student evaluations of their instructors has been established. Instructor evaluations are collected primarily for program evaluation and course improvement purposes, with aggregate results for a course reported to Year Committees and the Curriculum Committee. Results for individual instructors are provided to the instructor in question as well as their Most Responsible Planner (MRP), the faculty member with the most direct responsibility for the activities of a particular instructor at a particular site. MRPs are typically a course or module director or coordinator. Below is a summary of the instructor evaluation process. Please see the complete instructor evaluation framework for a more comprehensive description.

Classroom teaching sessions with three or more instructors
Instructor evaluations are completed for all instructors who have taught at least three hours within a course or module, but only for courses/modules that are scheduled to be evaluated in the current academic year. Exceptions may be made on a course-by-course basis. Instructor evaluations are typically administered once per month. Students are typically taught by multiple instructors in a course, some of whom only teach one or two sessions. Completing evaluations more frequently allows students to provide feedback soon after being taught by a specific instructor, with the goal of obtaining more accurate feedback. UGME staff responsible for sending evaluations regularly obtain schedules indicating when instructors complete their teaching in specific courses. Each instructor on that list is evaluated by approximately 33% of students. For modules that are less than two months in duration, instructor evaluation questions may be completed at the same time as the standard module evaluation.

Classroom teaching sessions with fewer than three instructors
For courses with one or two instructors, instructor evaluation items are completed for all instructors, regardless of the number of hours taught, at the same time as the standard course evaluation at the end of the course.

Small group/clinical sessions
Evaluations are administered at the end of small group sessions via the system of record. Each instructor who teaches at least three hours to the same group of students in Year 1 or two hours to the same group of students in Year 2 is evaluated by 100% of the students in their small group. Exceptions may be made on a course-by-course basis.

Clerkship
Instructor evaluations are sent to each student upon the completion of each of their rotations and in-province electives to assess the preceptors with whom they spent the most time during the rotation. This is determined in consultation with the Departments.

Selected Topics
Each Selected Topic session is evaluated by 1/3 of the students.

Aggregate instructor evaluation results are included in standard course evaluation reports and are reported at the end of each course. Individual feedback is provided at appropriate intervals throughout the course. Aggregate instructor evaluation results are also included in rotation evaluation reports. The roles and responsibilities of key stakeholders are summarized below as are the sequential steps involved in the course evaluation process (Figures 4 and 5).

3.2.5 Overall Year 1 and 2 Evaluations Completed by Students
At the end of the academic year, students in Years 1 and 2 evaluate their overall experience that year. Results are shared with the Year Chair, Associate Dean UGME, Assistant Dean Curriculum, Assistant Dean Quality, Assistant Dean Academic, Chairs of the Assessment, Curriculum Delivery, and Curriculum Quality Review Sub-Committees as well as other relevant stakeholders. This is in compliance with CACMS element 8.5. For Year 2, Regina and Saskatoon results are compared, which is in accordance with element 8.7.

3.2.6 Overall Year 1 and 2 Evaluations Completed by Instructors
At the end of the academic year, instructors in Years 1 and 2 evaluate their experience teaching in a course in the 2+2 Curriculum. Instructors who teach in multiple courses are asked to complete multiple evaluations. Results are shared with the appropriate Year Chair. This is in compliance with Accreditation element 8.4.

3.2.7 Overall Clerkship Evaluations
Clerks evaluate their overall clerkship experience on items pertaining to how well the clerkship met its objectives and perceived preparation for residency. These questions are given at the end of core rotations and electives and are typically included with the Program Level Objectives Survey. Results are shared with the Year Chair, Associate Dean UGME, Assistant Dean Curriculum, Assistant Dean Quality, Assistant Dean Academic, Chairs of the Assessment, Curriculum Delivery, and Curriculum Quality Review Sub-Committees as well as other relevant stakeholders. This is in compliance with Accreditation element 8.5.

3.2.8 Student Advancement and Graduation Rates
The Program Evaluation Sub-Committee is provided with data from the Student Academic Management Committee (SAMC) regarding student advancement and graduation rates each year. The Program Evaluation Sub-Committee will review the results and present them to the Curriculum Committee in compliance with element 8.4. Other data, such as the percentage of students who pass their NBME exams on the first write, may also be included in this report.

3.2.9 Feedback on Residency Performance of Graduates
PGY1 assessment data for U of S graduates who were accepted into a residency program at the U of S are obtained on a yearly basis. Results for U of S graduates and those who received their MD elsewhere are compared for each program. These data provide outcome data as to how well our graduates are performing in residency compared to those who received their undergraduate training elsewhere. These data are reported on an annual basis in compliance with element 8.4. Results are shared with the Curriculum Committee, Associate Dean, PGME, and other relevant stakeholders.

3.2.10 Student Feedback
Members of the Student Curriculum Review Committee (SCRC) sit on the Program Evaluation Sub-Committee. They are kept informed of evaluation results and will bring these to the attention of other SCRC members and students in general as required. They will also bring any student concerns to the attention of the Program Evaluation Sub-Committee.

Members of the SMSS who deal with curriculum-related issues sit on various committees (e.g., Year Committees, Systems Committees). They will bring issues related to the evaluation back to the SCRC as required. They will also bring any student concerns to the attention of the various committees as required.

3.2.11 Grade Comparisons between Campuses
Statistical analyses are conducted to compare grades between Regina and Saskatoon students for appropriate courses and rotations. This is done on an annual basis to help meet CACMS element 8.7. Results are shared with appropriate Year Chairs and the Curriculum Committee.
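The strategy does not specify which statistical tests are used for these grade comparisons; one plausible approach, shown here as a sketch on hypothetical grades, is an independent-samples (Welch's) t-test.

    from scipy import stats

    # Hypothetical final grades for one course at each campus; the values and
    # the choice of Welch's t-test are illustrative assumptions only.
    regina_grades = [78, 82, 75, 88, 91, 69, 84, 80, 77, 86]
    saskatoon_grades = [81, 79, 85, 90, 72, 88, 76, 83, 79, 87, 74, 82]

    t_stat, p_value = stats.ttest_ind(regina_grades, saskatoon_grades, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")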

3.3 External Sources of Data

3.3.1 MCC Qualifying Examinations
Performance on the Medical Council of Canada Qualifying Examination (MCCQE Part I and Part II) is tracked over time. Graduates’ average scores are compared to those of all candidates as well as those trained at other Canadian medical schools. This meets the requirements of element 8.4 as it demonstrates, through the use of national norms of accomplishment, U of S graduate performance in comparison to other Canadian medical graduates. It also meets element 8.7, as the overall performance of Regina and Saskatoon graduates is compared. Results are shared with the Curriculum Committee and other relevant stakeholders.

3.3.2 Canadian Medical School Graduation Questionnaire
The results of the Canadian Medical School Graduation Questionnaire (AFMC) are tracked over time. Reports are generated showing areas of improvement and decline from the previous year as well as areas that are higher and lower than the national average. The Program Evaluation Sub-Committee will review the graduation questionnaire reports on a yearly basis and forward them to the appropriate committees.

3.3.3 Canadian Post-M.D. Education Registry (CAPER) Data
CAPER data are reviewed to identify residency match results, specialty choices, and practice locations of graduates. The Program Evaluation Sub-Committee will create a report on an annual basis and forward it to appropriate committees and individuals to help meet CACMS element 8.4.

3.3.4 College of Physicians and Surgeons of Saskatchewan (CPSS) Register
The CPSS register is searched to identify which graduates are located in Saskatchewan, what their practice setting type is, and their practice location. Reports will also include the proportion of our graduates who are practicing in rural/remote areas and areas with a high Aboriginal population. Results are shared with appropriate stakeholders. This is in compliance with CACMS element 8.4.

3.4 Internal/External Sources of Data

3.4.1. Correlation between MCCQE Scores and Grades
In order to understand which courses are most associated with MCCQE Part I performance, correlation coefficients are computed and regression analyses conducted relating grades in all undergraduate courses to MCCQE performance.
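A minimal sketch of such an analysis, assuming de-identified course grades and MCCQE Part I scores are available in tabular form; the course names, values, and the use of pandas and statsmodels are illustrative assumptions, not a description of the actual analysis pipeline.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical de-identified data: grades in two courses and MCCQE Part I
    # scores for a small cohort. Course names and values are illustrative only.
    df = pd.DataFrame({
        "Foundations": [78, 85, 69, 91, 74, 88, 80, 83],
        "ClinicalSkills": [82, 79, 71, 94, 70, 85, 77, 86],
        "MCCQE_I": [512, 540, 470, 585, 488, 560, 520, 545],
    })

    # Pearson correlation between each course grade and MCCQE Part I performance.
    print(df.corr()["MCCQE_I"].drop("MCCQE_I"))

    # Multiple regression of MCCQE Part I scores on course grades.
    X = sm.add_constant(df[["Foundations", "ClinicalSkills"]])
    model = sm.OLS(df["MCCQE_I"], X).fit()
    print(model.params)     # intercept and per-course coefficients
    print(model.rsquared)   # proportion of variance explained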

3.4.2. Learning Environment
In compliance with CACMS element 3.5, reports specific to the learning environment are created annually. Reports include responses to learning environment questions from course and rotation evaluations, aggregate instructor evaluation results, anonymized comments from course and rotation evaluations, and Graduation Questionnaire data on student mistreatment.

Reports are shared with the Program Evaluation Sub-Committee, the Curriculum Committee, the Associate Dean Education, the Assistant Dean Academic, the Assistant Dean Curriculum, the Assistant Dean Quality, Unified Department Heads, the Vice Dean Education, and other appropriate stakeholders.

3.4.3. Program Efficacy Review
Accreditation element 8.3 states: “The medical education program objectives, learning objectives, content, and instructional and assessment methods are subject to ongoing monitoring, review, and revision by the curriculum committee to ensure that the curriculum functions effectively as a whole such that medical students achieve the medical education program objectives.”

To help meet this element, internal and external data that are used to measure students’ attainment of the College’s Program Level Objectives are reviewed on an annual basis during a Curriculum Committee retreat. These sources of data include: student assessments, Program Objectives Self-Assessment, MCCQE I and II, and the AFMC-GQ. Retreat attendees review data and provide feedback on the extent to which the program objectives are being met. A report is created following the review that is provided to the Program Evaluation Sub-Committee, Curriculum Committee, and other key stakeholders in the College of Medicine.

Bibliography

  • Bax, N.D.S., & Godfrey, J. (1997). Identifying core skills for the medical curriculum. Medical Education, 31, 347-351.
  • Cohen, S., Kamarck, T., & Mermelstein, R. (1983). A global measure of perceived stress. Journal of Health and Social Behavior, 24, 385-396.
  • College of Medicine (2004). Information guide 2004-2005: Undergraduate medical students. Saskatoon, SK: College of Medicine, University of Saskatchewan.
  • Coombes, Y. (2000). Combining quantitative and qualitative approaches to evaluation. In M. Thorogood & Y. Coombes (Eds.), Evaluating health promotion: Practice and methods. New York, NY: Oxford University Press Inc.
  • Cousins, J.B., Donohue, J.J., & Bloom, G.A. (1996). Collaborative evaluation in North America: Evaluators’ self-reported opinions, practices, and consequences. Evaluation Practice, 17(3), 207-226.
  • D’Eon, M., Sadownik, L., Harrison, A., & Nation, J. (2008). Using self-assessments to detect workshop success: Do they work? American Journal of Evaluation, 29, 92-98.
  • D’Eon, M.F., & Trinder, K. (2013). Evidence for the validity of grouped self-assessments in measuring the outcomes of educational programs. Evaluation and the Health Professions. doi: 10.1177/0163278713475868.
  • Fitzpatrick, J.L., Sanders, J.R., & Worthen, B.R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson Education, Inc.
  • Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: SAGE Publications, Inc.
  • Gerrity, M., & Mahaffy, J. (1998). Evaluating change in medical school curricula: How did we know where we were going? Academic Medicine, 73(Suppl. 9), S55-S59.
  • Guglielmino, L. (1977). Development of the Self-Directed Learning Readiness Scale. Doctoral Dissertation. University of Georgia.
  • Hendry, G., Cumming, R., Lyon, P., & Gordon, J. (2001). Student-centred course evaluation in a four-year, problem-based medical programme: Issues in collection and management of feedback. Assessment and Evaluation in Higher Education, 26(4), 327-339.
  • Issel, L.M. (2004). Health program planning and evaluation: A practical, systematic approach for community health. Mississauga, ON: Jones and Bartlett Publishers, Inc.
  • Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (2nd ed.). Thousand Oaks, CA: SAGE Publications, Inc.
  • Kreiter, C. D., & Lakshman, V. (2005). Investigating the use of sampling for maximising the efficiency of student-generated faculty teaching evaluations. Medical Education, 39, 171-175.
  • Louie, B., Byrne, N., & Wasylenki, D. (1996). From feedback to reciprocity: Developing a student-centered approach to course evaluation. Evaluation and the Health Professions, 19(2), 231-242.
  • Milburn, K., Fraser, E., Secker, J., & Pavis, S. (1995). Combining methods in health promotion research: Some considerations about appropriate use. Health Education Journal, 54, 347-356.
  • Nestel, D. (2002). Development of an evaluation model for an introductory module on social medicine. Assessment and Evaluation in Higher Education, 27(4), 301-308.
  • O’Sullivan, R. (2004). Practicing evaluation: A collaborative approach. Thousand Oaks, CA: SAGE Publications, Inc.
  • Pabst, R., & Rothkotter, H. (1997). Retrospective evaluation of undergraduate medical education by doctors at the end of their residency time in hospitals: Consequences for the anatomical curriculum. The Anatomical Record, 249, 431-434.
  • Patton, M.Q. (1998). Utilization-focused evaluation. Newbury Park, CA: SAGE Publications, Inc.
  • Peterson, L.N., Eva, K.W., Rusticus, S.A., & Lovato, C.Y. (2012). The readiness for clerkship survey: Can self-assessment data be used to evaluate program effectiveness? Academic Medicine, 87(10), 1355-1360.
  • Rossi, P.H., Freeman, H.E., & Lipsey, M.W. (1999). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: SAGE Publications, Inc.
  • Scriven, M. (1991). Evaluation Thesaurus (4th ed.). Newbury Park, CA: SAGE Publications, Inc.
  • Smith, C., Herbert, D., Robinson, W., & Watt, K. (2001). Quality assurance through a Continuous Curriculum Review (CCR) Strategy: Reflections on a pilot project. Assessment and Evaluation in Higher Education, 26(5), 489-502.
  • Spratt, C., & Walls, J. (2003). Reflective critique and collaborative practice in evaluation: Promoting change in medical education. Medical Teacher, 25(1), 82-88.
  • Stern, E. (1996). Developmental approaches to programme evaluation: An independent evaluator’s perspective. In Evaluating and reforming education systems. Paris: Organization for Economic Co-Operation and Development.
  • Sukkar, M. (1984). An approach to the question of relevance of medical physiology courses. Medical Education, 18, 217-221.
  • University of Saskatchewan (2002). Principles of evaluation of teaching at the University of Saskatchewan. Saskatoon, SK: Author.
  • Whitman, N.A., & Cockayne, T.W. (1984). Evaluating medical school courses: A user-centered handbook. Salt Lake City, UT: University of Utah School of Medicine.