§4.1

Cross‑cutting patterns

Across RQ1, RQ2A, and RQ2B, three major patterns recurred. The first pattern was the way modular design and continuous student feedback worked together to support timely changes. Dividing the course into separate modules spanning education, industry, ethics, and accessibility, with units on prompt engineering, image generation, video generation, sound generation, research tools, vibe coding, agents, and related topics, meant that I could revise one unit at a time without disrupting the entire syllabus. After each module, I reviewed surveys, written reflections, and student projects. When students reported confusion or struggled with a task, I simplified instructions, added a short demo, or moved content to a different week. When tools changed pricing, access, or interfaces, I substituted or added a different GenAI tool while keeping the same learning objectives. This pattern of ongoing adjustment showed that modularity and feedback provided a practical way for the course to stay responsive.

The second pattern was the central role of learner choice and multiple entry points in the curriculum design across different learning contexts. The for-credit university courses operated under external pressures such as grades and degree progress; the global courses carried no credit or grades and were free to join. Offering choices in both settings, such as letting participants choose which GenAI tools to use for a given assignment, which topic to focus on, and whether to engage through written, visual, or multimodal outputs, was a design move I retained across all four iterations. This pattern aligns with what self-determination theory predicts about autonomy, competence, and relatedness as supports for engagement. When an assignment's design surfaced clear connections to learner-stated interests, the course evaluation feedback for that assignment was more positive, and the assignment was retained in the next iteration.

The third pattern was the consistent role of shared GenAI principles and ethics. Even though specific tools and models changed quickly, the course kept returning to the same core ideas: responsible use, fairness, transparency, safety, careful checking of outputs, and prompt‑engineering skills that transfer across GenAI tools. When one tool changed or disappeared, I could swap in another and still teach the curriculum through the same core concepts. When students introduced new GenAI applications, I could examine them through the same questions about bias, ownership, accessibility, and human-AI collaboration. This pattern suggests that in fast‑moving technological fields, what stays consistent is the set of shared principles and the ways of evaluating new tools against them.