  • Research article
  • Open access

Constructive alignment in a graduate-level project management course: an innovative framework using large language models


Constructive alignment is a learning design approach that emphasizes the direct alignment of the intended learning outcomes, instructional strategies, learning activities, and assessment methods to ensure students are engaged in a meaningful learning experience. This pedagogical approach provides clarity and coherence, helping students understand how their learning activities and assessments connect with the overall course objectives. This paper explores the use of constructive alignment principles in designing a graduate-level Introduction to Project Management course by leveraging Large Language Models (LLMs), specifically ChatGPT. We introduce an innovative framework that embodies an iterative process to define the course learning outcomes, learning activities and assessments, and lecture content. We show that the implemented framework in ChatGPT was adept at autonomously establishing the course's learning outcomes, delineating assessments with their respective weights, mapping learning outcomes to each assessment method, and formulating a plan for learning activities and the course's schedule. While the framework can significantly reduce the time instructors spend on initial course planning, the results demonstrate that ChatGPT often lacks the specificity and contextual awareness necessary for effective implementation in diverse classroom settings. Therefore, the role of the instructor remains crucial in customizing and finalizing the course structure. The implications of this research are broad, providing insights for educators and curriculum designers looking to integrate LLM systems into course development without compromising effective pedagogical practices.


Constructive alignment is a learning design approach that plays a pivotal role in higher education by ensuring a coherent link among learning outcomes, instructional strategies, learning activities and assessment methods (Biggs, 1996). It forms the basis for developing courses that maximize learning experiences, deepen subject matter understanding, and amplify transparency by clarifying expectations and assessment criteria for students (Morselli, 2018). This alignment approach promotes active student engagement, nurturing a sense of responsibility and motivation toward achieving educational objectives (Zhang et al., 2022). Through an iterative process of reflection and refinement, constructive alignment enhances the educational model, focusing on student-centered approaches and strategically equipping learners for success in a dynamically competitive global environment (Biggs, 1996; Loughlin et al., 2021; Roßnagel et al., 2021).

While constructive alignment offers potential benefits, some educators find it challenging to integrate into their courses. Wikhamn (2017) underscores a few challenges to incorporating this approach in higher education, such as the time-consuming nature of the approach, especially when it necessitates a comprehensive redesign of courses or entire curricula. The intricacy involved in coordinating learning outcomes with instructional strategies and assessment methods, as well as gaps in training and lack of knowledge in implementing the approach, add to the challenges. These challenges are further pronounced in engineering education, as engineering programs often delve deep into technical and design content. Aligning detailed technical and design content with broader learning outcomes can be difficult, as it requires translating these complex concepts into comprehensible, learner-centric objectives (Dym et al., 2005). Therefore, it becomes imperative to identify and assess tools that can support instructors in applying high-quality constructive alignment in their courses.

Large Language Models (LLMs) are advanced artificial intelligence systems designed to understand, generate, and interact with human language at a large scale. These models are trained on vast datasets of text, enabling them to learn a wide range of language patterns, structures, and nuances. LLMs use deep learning techniques, particularly neural networks, to process and produce language in a way that can mimic human-like understanding and responses (Hadi et al., 2023). Some examples of LLMs include ChatGPT (OpenAI), BERT (Google), ELECTRA (Google), RoBERTa (Meta), and ERNIE (Baidu). As LLMs continue to permeate various aspects of the education realm, integrating their use into course design holds immense promise (Rahman & Watanobe, 2023). The strengths of LLMs can be integrated into teaching to unlock opportunities that could aid in personalizing learning experiences and tailoring educational content to individual course needs. These systems can be versatile assistants, providing instant access to a vast array of educational resources and aiding in crafting engaging lectures and course materials. By integrating LLMs into course design, educators can optimize their time, innovate instructional strategies, and ultimately contribute to a more interactive and impactful learning environment for students.

This article presents an innovative framework for integrating LLMs into the implementation of constructive alignment in higher education courses. We emphasize ChatGPT because of its widespread use and its proficiency in fostering conversational engagement, a process in which interactive dialogue systems engage users in meaningful conversations and enhance learning through dynamic interaction. Through an illustrative case study from a graduate-level project management course, we highlight the benefits and challenges of adopting LLMs to transform instructional strategies and bolster consistency in course development. Our research emphasizes how technology can be pivotal in elevating the educational journey, aiding instructors in refining course design, and enriching the student learning experience.

Literature review

Biggs’ (1996) seminal work on constructive alignment introduces a strategy for designing and delivering high-quality learning in higher education. Constructive alignment combines constructivist learning theory and instructional design literature (Biggs, 1996). On the one hand, constructivism focuses on the notion that the learners’ activities provide opportunities to create meaning (Ertmer & Newby, 1993). On the other hand, instructional design emphasizes that a course's objectives and the targets for assessing student performance must be aligned. Biggs (1996) argues that integrating constructivism and instructional design can happen in three aspects. First, the intended learning outcomes are clearly stated, intertwined with specific course content, and indicate appropriate performances. Second, the instructional strategies immerse students in scenarios that are likely to provoke these anticipated performances. Third, the assessment methods used to evaluate students are based on the targeted performances. The core principles of constructive alignment remain relatively constant; however, refinements, variations, and developments have built upon Biggs’ original work. Boud (2007) extends Biggs’ model not only to encompass consistency of purpose between the proximate elements of programs but also to look well beyond the point of graduation to seek alignment with longer-term purposes. Trigwell and Prosser (2014) argue that introducing qualitative differences into any of the three constructive alignment aspects—specifically, intended learning outcomes, instructional strategies, and assessment methods—has been observed to correlate with variations in the quality of student learning.
For example, when educators describe that their instructional strategies aim to encourage students to deepen or modify their existing conceptions and to challenge their comprehension, rather than devoting substantial time to presenting information, it tends to result in students being more inclined to adopt deeper approaches to learning. According to Magnusson and Rytzler (2019), constructive alignment’s focus on goal orientation and standardization aligns well with the market-driven ideals prevalent in higher education today.

Several researchers have utilized the constructive alignment approach to improve students’ learning or identify course design flaws. Croy (2018) used constructive alignment to update the presentation rubric to include individual and group performance to align with the course goals, enhancing students’ satisfaction with the course. McCann (2017) implemented constructive alignment in an economics course, where the instructor found that combining formal and informal feedback was essential to promote students’ reflection on theory application and deepen their learning. Lasrado and Kaul (2021) used constructive alignment in a business and quality management course to introduce innovative ways of designing authentic tasks and aligning them with subject learning outcomes. Chan and Lee (2021) utilized constructive alignment to assess holistic attributes in Hong Kong’s engineering education, revealing a reliance on indirect assessment methods. Their study emphasized the difficulty of transitioning to direct assessments without well-designed, standardized marking rubrics. Caution is warranted when applying Chan and Lee’s (2021) findings to institutions with different characteristics; instructors must consider the unique features of each educational context. From prior research, it can be inferred that implementing a more robust constructive alignment approach is associated with enhanced student learning experiences, demonstrating adaptability across various course contexts.

Nonetheless, gaps in practical applications of constructive alignment still exist. A noticeable gap persists in applying constructive alignment principles to course design (Abejuela et al., 2022; Hailikari et al., 2021; Kumar et al., 2022; Maia & dos Santos, 2022; Tobiason, 2022; Zhang et al., 2022). According to the authors cited above, the limitations of applying constructive alignment theory include the time to restructure, organize, and prepare the course and lectures, and to ensure the curriculum meets institutional quantitative reporting and grading requirements. Cain and Babar (2016) and Teater (2011) mention that educators may face resistance from students when moving from a traditional structure to more formative feedback practices. Ruge et al. (2019) analyzed the challenges instructors may face in implementing a constructive alignment approach in their courses in Australian universities; they identified factors such as lack of institutional support, limited resources, resistance by individual academics to changing pedagogical approaches, and lack of faculty engagement as possible barriers.

Regarding instructional design in interdisciplinary graduate education, Borrego and Cutler (2010) found that constructive alignment between intended learning outcomes, assessment methods, and learning experiences is severely lacking. Many higher education courses, especially in engineering, prioritize technical knowledge and skills, often overlooking the alignment of learning outcomes, instructional strategies, and assessment methods needed for a deeper theoretical understanding and practical application. This deficiency can impede students’ ability to connect theoretical concepts with real-world engineering challenges, affecting their readiness for the dynamic demands of the profession. Addressing this gap may be time-consuming and challenging for educators. However, leveraging innovative technologies such as ChatGPT can aid instructors in implementing constructive alignment effectively.

The literature on the application of LLMs in education reveals a broad spectrum of uses. Initial reviews have highlighted the nascent exploration of chatbots in educational settings, underscoring the importance of developing effective learning designs and strategies (Hwang & Chang, 2021; Kasneci et al., 2023). From an instructor’s perspective, recent studies have concentrated on leveraging technology to automate student assessments, provide adaptive feedback, and generate teaching materials, all of which demonstrate a shift toward more interactive and responsive educational tools (Sailer et al., 2023; Bernius et al., 2022; Moore et al., 2022; Sarsa et al., 2022; Zhu et al., 2020).

The adaptability of LLMs in creating diverse educational content, such as question–answer pairs and mathematics word problems, underscores their broad applicability across various educational contexts (Qu et al., 2021; Shen et al., 2021; Wang et al., 2021). Tack and Piech (2022) provide an in-depth analysis of conversational agents in educational dialogues, offering insights into their strengths and areas for improvement relative to human interactions. Additionally, Wang et al. (2023) have pioneered an innovative use of LLMs in analyzing qualitative Student Evaluation of Teaching (SET) comments, enriching the interpretation of SET scores.

LLMs have also been used to foster curiosity and question-asking skills among undergraduate students and to explain coding concepts in computing education (MacNeil et al., 2022). Additionally, peer learning has been augmented through LLMs, as evidenced by Jia et al. (2021), who employ Bidirectional Encoder Representations from Transformers (BERT) to evaluate peer assessments, promoting a collaborative learning atmosphere. This body of work collectively illustrates the expanding role of LLMs in enhancing educational practices through automation, personalized learning experiences, and support for both instructors and students.

Conversational AI, including LLMs, has proven valuable in language education; it serves various applications such as acting as a conversational partner, addressing language learning anxiety, providing feedback, and offering scaffolds during language learning (Ji et al., 2022; El Shazly, 2021; Jeon, 2021; Lin & Mubarok, 2021). This diverse body of research underscores the transformative potential of LLMs in shaping various facets of the educational landscape. To our knowledge, this is the first peer-reviewed published work that explicitly addresses the integration of LLM technologies in the context of constructive alignment within a graduate-level project management course design. This study points to an untapped area of research and application where the potential of LLMs, particularly ChatGPT, in aligning learning outcomes, instructional strategies, and assessment methods remains to be explored. The proposed framework for using LLMs aims to assist instructors in implementing a constructive alignment approach in course design and promoting a more comprehensive and application-oriented education for students.


Constructive alignment synergizes intended course learning outcomes, instructional strategies, and assessment methods. The overarching framework for establishing constructive alignment using LLMs such as ChatGPT within higher education is illustrated in Fig. 1. Although there is no strict sequence for developing constructive alignment in a course, this study delineates a four-phase approach. Initially, in the Preliminary Phase, the user focuses on gathering information about the course requirements and inputting it into ChatGPT. In the Learning Outcomes Phase, the framework formulates the intended learning outcomes for the course; the framework then transitions to the Course Assessment Phase. The final phase involves structuring the course lecture schedule and activities; we call this the Course Schedule and Active Learning Strategies Phase. At each phase, there is an opportunity to revisit previous stages and check the information provided by ChatGPT to ensure coherence and consistency across all course components. Furthermore, given the risks associated with the misuse of AI (IEEE, n.d.) as well as equity, diversity, and inclusion (EDI) concerns (Meyer et al., 2023) associated with LLMs, users should be aware that the output from these models can be biased and inaccurate.

Fig. 1 Suggested constructive alignment framework to be implemented in ChatGPT

The prompts used in ChatGPT are based on the prompt improvement pattern suggested by White et al. (2023). In this pattern, ChatGPT is requested to ask for additional information to enhance the results. The framework proposed in this research necessitates instructor validation, ensuring that every stage is independently valuable and contributes to a cohesive system that enhances learning outcomes. In addition, the critical role of instructor oversight of ChatGPT’s answers is paramount in addressing the challenge of inconsistency and variability inherent in hyper-scale LLMs. This oversight includes continuous validation of the information provided by ChatGPT, ensuring the educational experience remains aligned, responsive, and consistently high quality.
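The prompt improvement pattern can be expressed as a simple template. The sketch below is a minimal illustration of the idea, assuming hypothetical helper names (`build_prompt`, `refine_prompt`) that are not from White et al. (2023): each prompt ends by explicitly inviting the model to request missing information, and the user's answers are folded into a refined prompt on the next iteration.

```python
# Sketch of the prompt improvement pattern used throughout the framework:
# the model is explicitly asked what additional information it needs, and
# the user's answers are appended to form a refined prompt. All names here
# are illustrative, not from White et al. (2023) or the ChatGPT API.

def build_prompt(task: str, context: str) -> str:
    """Assemble a prompt that invites the model to request more information."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        "Before answering, list any additional information you need "
        "from me to improve the quality of your answer."
    )

def refine_prompt(previous_prompt: str, answers: dict[str, str]) -> str:
    """Fold the user's answers to the model's questions into a refined prompt."""
    details = "\n".join(f"- {q}: {a}" for q, a in answers.items())
    return f"{previous_prompt}\n\nAdditional information:\n{details}"

# Example: a Phase 2 (learning outcomes) prompt for the case-study course.
prompt = build_prompt(
    task="Define five learning outcomes for this course.",
    context="Graduate-level Introduction to Project Management, based on PMBOK.",
)
refined = refine_prompt(prompt, {
    "Course Duration & Intensity": "One semester, two 1.25-hour weekly sessions",
    "Prerequisites": "None",
})
```

The refinement loop repeats (Initial Prompt – Additional Information – Refined Results) until the instructor judges the output satisfactory.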

Phase 1: Preliminary phase

The first step in this framework is to gather information about the course description. Usually, this information is available in university calendars and course descriptions from previous years. This is also a good time to revisit the program-level learning outcomes (PLOs) and any accreditation requirements to understand how the individual course fits into the overall aims of the credential program. Ultimately, each individual course supports student attainment of the PLOs.

Phase 2: Learning outcomes phase

Using the information gathered during the Preliminary Phase, the user should outline the course’s context and description and request from ChatGPT any additional information that might enhance the resultant learning outcomes of the course. This step can be done multiple times (Initial Prompt – Additional Information – Refined Results) until the user is satisfied with the course learning outcomes and, if relevant and available, the accreditation requirements and PLOs.

Phase 3: Course assessment phase

In the third phase, the prompt will define the assessment methods for the course. The context is based on Phase 2 plus any additional information the user considers relevant. In the first prompt of this phase, we suggest asking ChatGPT what additional information it requires to produce the results. Once again, this step can be done multiple times (Initial Prompt – Additional Information – Refined Results) until the user is satisfied with the course assessment methods. Once the assessment is concluded, the user can ask ChatGPT to regenerate the course learning outcomes to better align with the assessment methods. In addition, the user can request ChatGPT to suggest rubrics for the assessments or questions for quizzes, assignments, or final exams. It is important to reinforce that in this phase, the role of the instructor’s expertise is paramount. Such skill ensures that the assessment methods are not only comprehensive and aligned with the learning outcomes but also customized to fulfill the requirements of the course.

Phase 4: Course schedule and active learning strategies phase

In this phase, the focus is on constructing the course schedule and delineating active learning strategies consistent with the course learning outcomes and assessment methods. Initially, users may provide additional details such as the number of classes, student presentations, guest lectures, and other activities planned for the course duration. Analogous to the preceding phases, users are advised to inquire about additional information that might be essential to refine the results. It is crucial that the lectures align coherently with the intended learning outcomes and assessment methods. Results, ideally, should be organized in a tabulated format. Based on the preliminary suggestions from ChatGPT, users can eliminate superfluous or redundant lectures and recommend topics for inclusion. As a culmination of this phase, users can request ChatGPT to provide pedagogical strategies that resonate with the course’s learning outcome and assessment methods and foster active student engagement during lectures.

Should the user determine that the course’s active learning strategies and assessment methods do not coherently align with the course schedule, they can submit a revised prompt for further refinement. This iterative process continues until the user deems the outputs from ChatGPT to be in congruence with all pivotal areas of the course.
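The iterative review loop described above amounts to a consistency check: before accepting ChatGPT's output, confirm that every learning outcome is covered by at least one assessment and at least one lecture. The sketch below illustrates such a check with toy data, not the actual course mapping.

```python
# Illustrative alignment check for the iterative review step: every learning
# outcome should be addressed by at least one assessment AND at least one
# lecture. The outcomes, assessments, and lectures below are toy examples.

outcomes = {"LO1", "LO2", "LO3"}
assessments = {
    "Group project": {"LO1", "LO3"},
    "Quizzes": {"LO1", "LO2"},
    "Final exam": {"LO2", "LO3"},
}
lectures = {
    "Project initiation": {"LO1"},
    "Planning techniques": {"LO2"},
    "Risk and close-out": {"LO3"},
}

def uncovered(outcomes, mapping):
    """Return outcomes not addressed by any item in the mapping."""
    covered = set().union(*mapping.values())
    return outcomes - covered

gaps = uncovered(outcomes, assessments) | uncovered(outcomes, lectures)
print("Misaligned outcomes:", sorted(gaps) or "none")
```

If `gaps` is non-empty, a revised prompt targeting the uncovered outcomes would be submitted to ChatGPT, and the check repeated.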

Framework application: introduction to project management course

To explore the proposed framework, we employed a case study centred on a course instructed by the lead author, illustrating the use of ChatGPT v.4 to structure the course’s learning outcomes, assessment methods, and teaching activities using a constructive alignment approach. Introduction to Project Management is a graduate-level course designed for Master of Engineering (MEng) students. Typically taken during the first semester of their program, this course draws students from diverse engineering disciplines, including civil, mechanical, software, electrical, and petroleum engineering, providing a rich context for applying and evaluating the framework. Based on the Project Management Body of Knowledge, the course content covers the basic aspects of each of the project’s five main phases: initiation, planning, team formation, control, and close-out.

Transitioning from an online to an in-person modality, the instructor, with previous experience teaching this course, faced the challenge of adapting to the dynamic classroom environment and ensuring the engagement of approximately 360 students split into three classes. The course calendar description is provided in Fig. 2.

Fig. 2 Introduction to Project Management Calendar Course Description

The instructor systematically input data about the course into ChatGPT and engaged in conversation with the tool to assess and validate the framework using AI-generated content. The instructor then compared the results generated by ChatGPT with the previous year’s course materials to evaluate the effectiveness and alignment of the AI-enhanced approach. This comparison involved several key steps:

  • Alignment with Learning Outcomes: The instructor first reviewed the learning outcomes defined with the assistance of ChatGPT, comparing them to those from the previous year. The focus was on ensuring that the AI-generated outcomes were aligned with the course objectives as suggested by the department and enhanced to better support students’ understanding and mastery of the subject matter. This step was crucial for validating the effectiveness of ChatGPT in refining or expanding the educational goals of the course.

  • Assessment Methods Analysis: The instructor then examined the assessment methods, including assignments, quizzes, and exams, designed with ChatGPT’s input. Each AI-assisted assessment was compared against the previous year’s assessments to identify improvements in how well they measured student learning relative to the defined outcomes.

  • Lecture Content Review: Finally, the course content, including lecture materials and teaching activities suggested by ChatGPT, was compared with that of the previous year. The instructor evaluated the relevance and comprehensiveness of the AI-enhanced content, looking for evidence of improvement in how well the materials supported the learning outcomes.

Learning outcomes phase

The initial prompt inputted into ChatGPT aimed to establish five distinct learning outcomes. Along with a detailed course description, there was an emphasis on integrating content from both the sixth and seventh editions of the Project Management Body of Knowledge (PMBOK) book (Project Management Institute 2017, 2021). Additionally, the user inquired about any additional information necessary to enhance the results for the course learning outcomes, as depicted in Fig. 3. ChatGPT requested details on areas such as "Depth of Coverage," "Assessment Method," "Course Duration & Intensity," "Target Audience," "Prerequisites," and "Key Textbooks and Resources." Table 1 offers a thorough breakdown of these topics.

Fig. 3 First prompt inputted into ChatGPT to define the course learning outcomes

Table 1 Additional information required by ChatGPT

Utilizing the parameters set by ChatGPT, a refined prompt was inputted into ChatGPT, aiming to refine the course learning outcomes (Fig. 4). The course is structured as an introductory exploration into engineering project management, emphasizing the practical application of fundamental concepts via group projects. It is conducted over one semester, entails two 1.25-h weekly sessions, and is tailored for a diverse cohort of fresh graduates and mid-career professionals. No prerequisites are mandated, and the PMBOK serves as the primary instructional resource. The preliminary learning outcomes are presented in Table 2.

Fig. 4 Second prompt inputted into ChatGPT to define the course learning outcomes

Table 2 ChatGPT preliminary course learning outcomes

Assessment phase

To align with the learning outcomes, a constructive alignment approach was adopted to structure the course assessment methods. An initial prompt, illustrated in Fig. 5, was inputted to seek guidance on the assessment methods. Additionally, ChatGPT was consulted about any additional information needed to define the assessment methods, as outlined in Table 3. It is worth noting that in the initial iteration, ChatGPT recommended including a student participation and engagement mark for the course. However, we requested that this recommendation be omitted due to the challenges associated with mandating student lecture attendance and the inconclusive evidence on whether grading participation meaningfully improves student engagement or learning (Paff, 2015). This example highlights the importance of carefully reviewing all ChatGPT responses.

Fig. 5 First prompt inputted to ChatGPT to define the course assessment methods

Table 3 Additional information required to define the course assessment methods

Incorporating the information requested by ChatGPT, a new prompt was inputted into ChatGPT (Fig. 6). The in-person teaching method for the Introduction to Project Management course incorporates case studies and formative assessments. Given the course’s size, individual submissions should be automatically graded, primarily through quizzes. Students should further apply their knowledge through a simulation game, where they plan and evaluate a specific project to assess performance, risks, and adherence to time and budget constraints. Though guest lectures enhance the content, they are not factored into assessments.

Fig. 6 Second prompt inputted to ChatGPT to define the course assessment methods

The final list of assessment methods suggested by ChatGPT is presented in Table 4. This list encompasses an array of assessment methods, including group presentations, simulation-based exercises, quizzes, and a final exam. As requested in the prompt, each assessment method is aligned with specific course learning outcomes, with ChatGPT furnishing an accompanying description and designated weightage.

Table 4 List of course assessment methods and alignment with course learning outcomes
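A programmatic sanity check can catch common LLM slips in an assessment plan of this kind, such as weights that do not sum to 100% or an assessment left unmapped to any outcome. The sketch below uses placeholder weights and outcome labels, not the actual values from Table 4.

```python
# Sanity check on an LLM-generated assessment plan: weights must sum to
# 100% and every assessment must map to at least one learning outcome.
# Weights and outcome codes below are placeholders, not the Table 4 values.

assessment_plan = {
    "Group presentation":  {"weight": 25, "outcomes": ["LO3", "LO5"]},
    "Simulation exercise": {"weight": 25, "outcomes": ["LO4"]},
    "Quizzes":             {"weight": 20, "outcomes": ["LO1", "LO2"]},
    "Final exam":          {"weight": 30, "outcomes": ["LO1", "LO2", "LO3"]},
}

total = sum(a["weight"] for a in assessment_plan.values())
assert total == 100, f"Weights sum to {total}%, expected 100%"

for name, a in assessment_plan.items():
    assert a["outcomes"], f"{name} is not mapped to any learning outcome"
```

A failed assertion would prompt another refinement iteration with ChatGPT rather than manual correction alone.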

Revisiting the course learning outcomes

A new prompt was inputted into ChatGPT requesting an update of the course learning outcomes based on the course assessment methods (Fig. 7). The list is presented in Table 5. In the first iteration, the fifth outcome emphasizes the integration of "various aspects of project management" to collaboratively develop, implement, and monitor a project plan that considers stakeholder needs and potential challenges; whereas, in the second iteration, the fifth outcome highlights the synthesis and presentation of comprehensive project analyses and recommendations. The second iteration focuses more on the consolidation of knowledge and presentation rather than the execution of a plan.

Fig. 7 Prompt inputted into ChatGPT to align course learning outcomes and assessment methods

Table 5 ChatGPT updated list of course learning outcomes

Moreover, the revised version of the learning outcomes evidently incorporates elements from Bloom's Taxonomy (Armstrong, 2010). Although employing Bloom's Taxonomy in the creation of learning outcomes is widely respected, it is not mandatory to include an outcome for every level, particularly in graduate-level courses. Furthermore, the use of terms such as “understand” or “acquire a foundational understanding” is generally not recommended as best practice (Potter & Kustra, 2012). The updated learning outcomes also introduce the explicit use of simulation tools for evaluating project scenarios, which is encapsulated in the fourth outcome: "Evaluate Project Scenarios Using Simulation Tools." This learning outcome is distinct from the others mentioned in the first iteration, and emerges from the assessment details supplied to ChatGPT, which showcases ChatGPT's aptitude to recalibrate course learning outcomes in response to additional information provided by the user.
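The advice against vague verbs such as "understand" (Potter & Kustra, 2012) lends itself to a simple automated screen of LLM-generated outcomes. The sketch below is illustrative; the discouraged-verb list is a small sample, not an exhaustive or authoritative one.

```python
# Flag learning outcomes whose leading verb is often discouraged as
# unmeasurable (e.g., "understand"). The verb list is illustrative only.

DISCOURAGED = {"understand", "know", "learn", "appreciate", "acquire"}

def flagged_outcomes(outcomes: list[str]) -> list[str]:
    """Return outcomes that open with a discouraged verb."""
    return [o for o in outcomes if o.split()[0].lower() in DISCOURAGED]

outcomes = [
    "Understand the fundamentals of project management",
    "Apply basic project management techniques",
    "Evaluate project scenarios using simulation tools",
]
print(flagged_outcomes(outcomes))
# Only the first outcome is flagged; "Apply" and "Evaluate" are measurable verbs.
```

Such a screen would only triage outcomes for instructor review; judging whether a verb fits a graduate-level course remains the instructor's call.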

Course schedule and active learning strategies phase

After defining the course learning outcomes and assessment methods, a prompt was inputted to structure the course schedule (Fig. 8). The course is a 12-week course with two lectures per week. ChatGPT was asked to include guest lectures and the sessions reserved for group presentations in the course schedule. The prompt also asked for additional information required to improve the results, as outlined in Table 6.

Fig. 8 Prompt inputted into ChatGPT requesting to structure the lecture content

Table 6 Additional information required to define the content of the lectures

Utilizing the information required by ChatGPT and based on the preliminary list of lecture topics, a refined prompt was inputted into ChatGPT to define the course schedule and lecture content (Fig. 9). The prompt aimed for ChatGPT to refine the course schedule and align it with the course learning outcomes and assessment methods. ChatGPT originally scheduled three lectures for reviewing course material, which the instructor found excessive. The final list of lectures is structured in six areas: Foundations and Fundamentals, Core Project Management Techniques, Integration and Advanced Topics, Guest Speakers, Student Presentations, and Course Closure. ChatGPT suggested possible guest lecture topics and how they would align with the course learning outcomes. The course schedule is presented in Table 7.

Fig. 9 Prompt inputted into ChatGPT requesting the final course schedule and lecture content

Table 7 Course schedule and lecture content aligned with learning outcomes and assessment methods

In this phase, ChatGPT included solely the content of Lecture 10 in the final exam. Ideally, the final exam should encompass content from all the course lectures, so students could be evaluated on their individual understanding of the entire course content. When queried about this discrepancy, ChatGPT responded, “Not all content is included in the final exam due to the need for diversified assessment methods, which evaluate different skills and knowledge types throughout the course. Limiting exam content allows for deeper exploration of specific topics, reduces student stress, and promotes continuous engagement. Furthermore, practical skills or sessions focusing on reflection and feedback might not align with traditional exam formats. By focusing the exam on core concepts, educators ensure meaningful evaluation while considering time and logistical constraints.” Therefore, if the instructor considers it necessary, additional prompts would be required to ensure that all lecture content is included in the final exam.

Active learning strategies

Following the definition of the course schedule and lecture content, ChatGPT was asked to provide five active learning strategies aligned with the course learning activities and assessment methods (Fig. 10). Table 8 outlines five key classroom strategies designed to reinforce project management concepts. The strategies predominantly target the learning outcome of "Applying Basic Project Management Techniques," except for the "Risk Identification Activity" in Lecture 13, which focuses on evaluating project scenarios using simulation tools. Each strategy is linked to specific assessment methods, primarily group projects and simulation games, ensuring a holistic approach to teaching and evaluating fundamental project management principles.

Fig. 10 Prompt inputted into ChatGPT to request five learning classroom activities

Table 8 Active learning strategies suggested by ChatGPT

Comparison of the framework results provided by ChatGPT with the actual course and discussion of its limitations

In this section, the results from ChatGPT are compared with the lead author’s course syllabus to discuss discrepancies and similarities.

Course learning outcomes

The actual course learning outcomes for the Introduction to Project Management course are presented in Table 9. Whereas the instructor's approach focuses more on practical skills such as project planning and team management, ChatGPT’s outcomes rest more on a theoretical foundation, drawing from the PMBOK guidelines suggested by the course instructor and incorporating broader concepts, such as human resource dynamics and simulation tools, into the course learning outcomes. While the instructor's outcomes are effective for immediate, hands-on skills, incorporating elements of ChatGPT's outcomes could enhance the course's depth, preparing students for a more diverse range of project management challenges.

Table 9 Introduction to Project Management course learning outcomes defined by the course instructor and ChatGPT

Course assessments

During the development of course assessments, ChatGPT overlooked some constraints the instructor faced, such as limited teaching assistants and large class sizes. The recommendations provided by ChatGPT were mostly conventional assessment methods, with no innovative approaches such as ungrading. The final assessment list mainly featured common assessment methods or assessments initially suggested by the course instructor. It is also worth noting that the game-based assessment was an original idea from the course instructor, later adopted into ChatGPT's suggested assessment methods.

Course schedule and active learning strategies

Upon comparing the course schedule proposed by ChatGPT with the course schedule for the original course, ENGG 684: Introduction to Project Management, it was observed that of the 16 lectures suggested by ChatGPT (excluding guest lectures, presentations, and final course/review exam), 13 lectures (or 81%) aligned closely or were identical in content to the topics presented in the actual course.

The active learning strategies proposed by ChatGPT for the Introduction to Project Management course are somewhat commonplace within this educational context. This observation suggests that a more detailed understanding of the specific course context is needed to effectively tailor and enhance these strategies. Providing additional information about the course's unique aspects could enable the development of more customized and impactful active learning strategies.

Employing ChatGPT for constructive alignment implementation in higher education: advantages, limitations, and future work

Implementing constructive alignment in course planning is a challenge in higher education. For reasons such as instructors’ lack of time (Simper, 2020), educators’ and students’ resistance to change (Cain & Babar, 2016), and lack of institutional support (Ruge et al., 2019), achieving the desired integration and coherence across curriculum components can be difficult. Amidst these challenges, the advent of AI and LLMs in education presents a transformative opportunity. As highlighted by Holmes et al. (2019), AI technologies offer the promise of personalizing learning and streamlining educational processes, a promise that ChatGPT has begun to fulfill through its adaptive learning algorithms and feedback integration capabilities. Unlike traditional AI tools discussed by Luckin and Holmes (2016), which primarily focus on static content delivery, ChatGPT’s reinforcement learning mechanism allows it to continually refine course structures based on iterative feedback, thus providing a dynamic tool for educators in the initial stages of aligning course learning outcomes, assessments, and activities. In this scenario, the framework proposed in this research using ChatGPT can reduce the time instructors spend on initial course planning, provide a starting point for syllabus creation, and enable more focus on customization and direct student engagement. This aspect of ChatGPT is particularly beneficial in the complex and often time-intensive task of creating or revising graduate-level courses. Furthermore, given that constructive alignment has been shown to facilitate deeper learning and improve student performance in courses (Vanfretti & Farrokhabadi, 2014), applying this framework could result in enhanced academic outcomes, including increased student engagement, higher achievement levels, and a more profound understanding of course material.

The framework can also enhance the implementation of constructive alignment in higher education by facilitating integration with existing pedagogical models. For instance, the EdVEE model by Trowsdale and McKay (2023), which enhances the visualization and sharing of the alignment among learning outcomes, content, teaching activities, and assessments, could be integrated with the proposed framework to assist instructors in defining course learning outcomes, assessments, and schedule. The framework could likewise be combined with the serious game design principles outlined by Kalmpourtzis and Romero (2020); this integration could assist in defining learning outcomes and aligning them with game-based learning environments, demonstrating the framework’s adaptability and potential to enhance educational practices across a variety of teaching methodologies and tools.

Despite these advantages, the use of ChatGPT in constructing graduate courses comes with notable limitations. As Kasneci et al. explain, “a clear strategy within educational systems and a clear pedagogical approach with a strong focus on critical thinking and strategies for fact-checking are required to integrate and take full advantage of LLMs in learning and assessment settings and teaching curricula” (2023, p. 1). The suggestions provided by ChatGPT in the Introduction to Project Management course, while useful, often lack the specificity and contextual awareness necessary for effective implementation in diverse classroom settings. For instance, the recommended active learning activities and assessments may not account for constraints such as class size and resource availability (e.g., the number of teaching assistants). Implementing these suggestions without modifications could lead to practical challenges. Furthermore, while ChatGPT can provide a starting point for developing the course learning outcomes, assessments, and schedule, it still requires substantial input and validation from an experienced course instructor. The tool’s suggestions are often generic and may not align perfectly with the unique requirements of a specific course or its students. Therefore, the role of the instructor remains crucial in customizing and finalizing the course structure; the instructor must ensure that the outcomes, assessments, and activities are not only aligned but also practically feasible and contextually relevant. In addition, users should be aware of the potential biases in LLMs, as highlighted by Meyer et al. (2023), and the risk of data privacy breaches, as cautioned by the IEEE (n.d.), when providing sensitive or personal information.
It is essential to exercise discretion and implement robust data protection measures to safeguard against unauthorized access to personal information from organizations, students, and personnel, and ensure the ethical use of technology.

The framework introduced in this research opens several paths for future exploration. First, future research can focus on how to craft detailed prompts that incorporate EDI into learning outcomes, assessments, and course schedules, and thereby enhance support for students from minority groups. Second, the framework offers scope for expansion to include the creation of rubrics designed for diverse course assessments, providing a systematic approach to evaluation. Third, there is an opportunity to examine the framework’s suitability for courses with scarce online materials, evaluating its adaptability and versatility. Lastly, given the varied accreditation standards of higher-education courses, the framework can be adapted to encompass the graduate attributes required by accrediting organizations, ensuring its broad relevance and utility in different educational settings.


Exploring constructive alignment in educational frameworks highlights the importance of structured instructional design. The framework suggested in this paper, implemented through a case study in the Introduction to Project Management course, showcases how the ChatGPT model’s processes align with the constructive alignment approach proposed by Biggs (1996). The case study shows that this framework offers a potential method for aligning course learning outcomes, assessment methods, and active learning classroom activities with the course schedule. The resulting course schedule is more than 80% similar to the original course, and the assessment methods are aligned with the course schedule and learning outcomes.

Therefore, the proposed framework can be adopted by other instructors to examine the outcomes of LLMs such as ChatGPT in diverse educational contexts. If proven viable, this framework has the potential to decrease the workload for instructors and help them adopt a constructive approach in course design.

While the proposed framework offers promising insights, it is essential to recognize the inherent challenges of LLMs, including occasional lapses in context comprehension and potential inaccuracies, which underline the importance of human intervention and review. The strengths of the LLM system in this application to course design were balanced by the authors’ nuanced understanding of project management principles and teaching expertise. It is also worth noting that this study centered on a single course, limiting its broader applicability. As LLM technologies continue to advance, the relevance and implications of these findings will likely need to be revisited and updated.

Availability of data and materials

All data generated or analysed during this study are included in this published article.


  • Abejuela, H. J. M., Castillon, H. T., & Sigod, M. J. G. (2022). Constructive alignment of higher education curricula. Asia Pacific Journal of Social and Behavioral Sciences, 20.

  • Armstrong, P. (2010). Bloom’s Taxonomy. Vanderbilt University Center for Teaching. Retrieved November 11, 2023, from

  • Bernius, J. P., Krusche, S., & Bruegge, B. (2022). Machine learning based feedback on textual student answers in large courses. Computers and Education: Artificial Intelligence, 3, 100081.


  • Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.


  • Borrego, M., & Cutler, S. (2010). Constructive alignment of interdisciplinary graduate curriculum in engineering and science: An analysis of successful IGERT proposals. Journal of Engineering Education, 99, 355–369.

  • Boud, D. (2007). Reframing assessment as if learning were important. In N. Falchikov & D. Boud (Eds.), Rethinking assessment in higher education: Learning for the longer term (pp. 27–44). Routledge.


  • Cain, A., & Babar, M. A. (2016). Reflections on Applying Constructive Alignment with Formative Feedback for Teaching Introductory Programming and Software Architecture. 2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C), pp. 336–345

  • Chan, C. K. Y., & Lee, K. K. W. (2021). Constructive alignment between holistic competency development and assessment in Hong Kong engineering education. Journal of Engineering Education, 110(2), 437–457.


  • Croy, S. R. (2018). Development of a group work assessment pedagogy using constructive alignment theory. Nurse Education Today, 61, 49–55.


  • Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005). Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94(1), 103–120.


  • El Shazly, R. (2021). Effects of artificial intelligence on English speaking anxiety and speaking performance: a case study. Expert Systems, 38(3), e12667.


  • Ertmer, P. A., & Newby, T. J. (1993). Behaviorism, cognitivism, constructivism: Comparing critical features from an instructional design perspective. Performance Improvement Quarterly, 6(4), 50–72.


  • Hadi, M. U., Al Tashi, Q., Qureshi, R., Shah, A., Muneer, A., Irfan, M., … Mirjalili, S. (2023). Large language models: a comprehensive survey of its applications, challenges, limitations, and future prospects.

  • Hailikari, T., Virtanen, V., Vesalainen, M., & Postareff, L. (2021). Student perspectives on how different elements of constructive alignment support active learning. Active Learning in Higher Education, 23(3).

  • Holmes, W., Bialik, M., Fadel, C. (2019). Artificial Intelligence in Education. Promises and Implications for Teaching and Learning. Center for Curriculum Redesign, Boston, MA

  • Hwang, G.-J., & Chang, C.-Y. (2021). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 31(7), 1–14.


IEEE. (n.d.). The IEEE global initiative for ethical considerations in artificial intelligence and autonomous systems. Accessed 14 Nov 2023

  • Jeon, J. (2021). Chatbot-assisted dynamic assessment (ca-da) for L2 vocabulary learning and diagnosis. Computer Assisted Language Learning, 36(7), 1338–1364.


  • Ji, H., Han, I., & Ko, Y. (2022). A systematic review of conversational AI in language education: focusing on the collaboration with human teachers. Journal of Research on Technology in Education, 55(1), 48–63.


  • Jia, Q., Cui, J., Xiao, Y., Liu, C., Rashid, P. & Gehringer, E F. (2021). ALL-IN-ONE: Multi-Task Learning BERT models for Evaluating Peer Assessments.

  • Kalmpourtzis, G., & Romero, M. (2020). Constructive alignment of learning mechanics and game mechanics in serious game design in higher education. International Journal of Serious Games, 7(4), 361.


  • Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T. … Kasneci, G. (2023). ChatGPT for Good? On opportunities and challenges of large language models for education.

  • Kumar, S. S., James, M., & Case, J. (2022). Engineering design for community impact: Investigating constructive alignment in an innovative service-learning course. 2022 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 2022, pp. 1–5

  • Lasrado, F., & Kaul, N. (2021). Designing a curriculum in light of constructive alignment: a case study analysis. Journal of Education for Business, 96(1), 60–68.


  • Lin, C.-J., & Mubarok, H. (2021). Learning Analytics for Investigating the Mind Map-Guided AI Chatbot Approach in an EFL Flipped Speaking Classroom. Educational Technology & Society, 24(4), 16–35.

  • Loughlin, C., Lygo-Baker, S., & Lindberg-Sand, A. (2021). Reclaiming constructive alignment. European Journal of Higher Education, 11(2), 119–136.


  • Luckin, R., & Holmes, W. (2016). Intelligence Unleashed: An argument for AI in Education. UCL Knowledge Lab: London, UK

  • MacNeil, S., Tran, A., Mogil, D., Bernstein, S., & Ross, E. (2022). Generating diverse code explanations using the GPT-3 large language model. Proceedings of the 2022 ACM Conference on International Computing Education Research., 2, 37–39.


  • Magnússon, G., & Rytzler, J. (2019). Approaching higher education with Didaktik: University teaching for intellectual emancipation. European Journal of Higher Education, 9(2), 190–202.


  • Maia, D., & dos Santos, S.C. (2022). Monitoring students’ professional competencies in PBL: a proposal founded on constructive alignment and supported by AI technologies. 2022 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 2022, pp. 1–8

  • McCann, M. (2017). Constructive alignment in economics teaching: a reflection on effective implementation. Teaching in Higher Education, 22(3), 336–348.


  • Meyer, J. G., Urbanowicz, R. J., Martin, P. C. N., O’Connor, K., Li, R., Peng, P.-C., Bright, T., Tatonetti, N., Won, K. J., Gonzalez-Hernandez, G., & Moore, J. H. (2023). ChatGPT and large language models in academia: Opportunities and challenges. BioData Mining, 16, 20.


  • Moore, S., Nguyen, H. A., Bier, N., Domadia, T., & Stamper, J. (2022). Assessing the quality of student-generated short answer questions using GPT-3. Educating for a new future: Making sense of technology-enhanced learning adoption: 17th European conference on technology enhanced learning, Toulouse, France, 2022, p. 243-257

  • Morselli, D. (2018). Teaching a sense of initiative and entrepreneurship with constructive alignment in tertiary non-business contexts. Education Training, 60(2), 122–138.


  • Paff, L. A. (2015). Does grading encourage participation? Evidence & Implications. College Teaching, 63(4), 135–145.


  • Potter, M. K., & Kustra, E. (2012). A primer on learning outcomes and the SOLO taxonomy.

  • Project Management Institute. (2017). A guide to the Project Management Body of Knowledge (PMBOK guide) (6th ed.). Project Management Institute.

  • Project Management Institute. (2021). A guide to the Project Management Body of Knowledge (PMBOK guide) (7th ed.). Project Management Institute.

  • Qu, F., Jia, X., & Wu, Y. (2021). Asking questions like educational experts: Automatically generating question-answer pairs on real-world examination data. Proceedings of the 2021 conference on empirical methods in natural language processing. 2583–2593.

  • Rahman, M. M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13, 5783.


  • Roßnagel, C. S., Fitzallen, N., & Lo Baido, K. (2021). Constructive alignment and the learning experience: Relationships with student motivation and perceived learning demands. Higher Education Research & Development, 40(4), 838–851.


  • Ruge, G., Tokede, O., & Tivendale, L. (2019). Implementing constructive alignment in higher education – cross-institutional perspectives from Australia. Higher Education Research & Development, 38(4), 833–848.


  • Sailer, M., Bauer, E., Hofmann, R., Kiesewetter, J., Glas, J., Gurevych, I., & Fischer, F. (2023). Adaptive feedback from artificial neural networks facilitates pre-service teachers’ diagnostic reasoning in simulation-based learning. Learning and Instruction, 83(2023), 101620.


  • Sarsa, S., Denny, P., Hellas, A., & Leinonen, J. (2022). Automatic generation of programming exercises and code explanations using large language models. Proceedings of the 2022 ACM conference on international computing education research, 1, 27–43.


  • Shen, J., Yin, Y., Li, L., Shang, L., Jiang, X., Zhang, M., & Liu, Q. (2021). Generate & Rank: a multi-task framework for math word problems. Findings of the Association for Computational Linguistics: EMNLP, 2021, 2269–2279.


  • Simper, N. (2020). Assessment thresholds for academic staff: Constructive alignment and differentiation of standards. Assessment & Evaluation in Higher Education, 45(7), 1016–1030.


  • Tack, A., & Piech, C. (2022). The AI teacher test: Measuring the pedagogical ability of blender and GPT-3 in educational dialogues. Proceedings of the 15th international conference on educational data mining. 522–529.

  • Teater, B. A. (2011). Maximizing Student Learning: A Case Example of Applying Teaching and Learning Theory in Social Work Education. Social Work Education, 30(5), 571–585.

  • Tobiason, G. (2022). Going small, going carefully, with a friend: Helping faculty adopt lesson-level constructive alignment through non-evaluative peer observation. Active Learning in Higher Education.


  • Trigwell, K., & Prosser, M. (2014). Qualitative variation in constructive alignment in curriculum design. Higher Education, 67(2), 141–154.


  • Trowsdale, D., & McKay, A. (2023). EdVee: A visual diagnostic and course design tool for constructive alignment. Teaching & Learning Inquiry, 11(January).

  • Vanfretti, L., & Farrokhabadi, M. (2014). Consensus-based course design and implementation of constructive alignment theory in a power system analysis course. European Journal of Engineering Education, 40(2), 206–221.


  • Wang, Z., Lan, A., & Baraniuk, R. (2021). Math word problem generation with mathematical consistency and problem context constraints. Proceedings of the 2021 conference on empirical methods in natural language processing. 5986–5999.

  • Wang, Z., Denny, P., Leinonen, J., & Luxton-Reilly, A. (2023). Leveraging large language models for analysis of student course feedback. In Proceedings of the 16th Annual ACM India Compute Conference (COMPUTE '23). pp. 76–79.

  • White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., … Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv [Cs.SE]. Retrieved from

  • Wikhamn, B. R. (2017). Challenges of adopting constructive alignment in action learning education. Action Learning: Research and Practice, 14(1), 18–28.


  • Zhang, H., Su, S., Zeng, Y., & Lam, J. F. I. (2022). An experimental study on the effectiveness of students' learning in scientific courses through constructive alignment: A case study from an MIS course. Education Sciences, 12(5), 338.


  • Zhu, M., Liu, O. L., & Lee, H.-S. (2020). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Computers & Education, 143, 103668.



Gratitude is extended to two anonymous reviewers whose valuable feedback contributed to enhancing the quality of this work.


The authors did not receive support from any organization for the submitted work.

Author information

Authors and Affiliations



The authors confirm their contribution to the paper as follows: study conception and design: Estacio Pereira; data collection: Estacio Pereira; analysis and interpretation of results: Estacio Pereira, Sumaya Nsair, Leticia Radin Pereira, Kimberley Grant; draft manuscript: Estacio Pereira, Sumaya Nsair, Leticia Radin Pereira. All authors reviewed the results and approved the final version of the manuscript.

Corresponding author

Correspondence to Estacio Pereira.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Pereira, E., Nsair, S., Pereira, L.R. et al. Constructive alignment in a graduate-level project management course: an innovative framework using large language models. Int J Educ Technol High Educ 21, 25 (2024).
