Theme stability across contexts
In this section I test the four-theme architecture against learners and observers outside my own classroom. The test matters because if the architecture only appears in materials I produced, it could be read as a researcher's overlay; if it appears in independent voices across age groups and contexts, it has external validity.
I present five sources of cross-context evidence: K-12 children, university undergraduates, journalists, online workshop attendees, and my own public-facing writing across delivery channels.
A.5.1 K-12 children at UW KidsTeam surface the same themes in a co-design context
In July 2024 I participated in three days of UW KidsTeam co-design sessions with children and multiple Youth Advisory Board (YAB) sessions with teens at the University of Washington. The sessions investigated children's and teens' opportunities and challenges with generative AI in schools. The KidsTeam methodology was led by the UW team; I was a participating collaborator, not the facilitator. I did not bring my four-theme framework into the sessions as a category for the children to use, so the framework's appearance in their reasoning is near-independent of my own curriculum: the methodology was someone else's, the children were not my students, and the theme analysis (KT-THEMES) was the UW team's. It is not, however, fully independent observation, since I was in the room. With that caveat in place, the convergence between the children's themes and the architecture I had built is meaningful evidence for the architecture's reach beyond my classroom.
The KidsTeam research themes document (KT-THEMES) catalogues what children and teens surfaced. Within "challenges," they named hallucinations such as "images produced with a third arm" (KT-THEMES-C5), which maps to the Ethics theme as a question about reliability and to the Accessibility theme as a question about whether AI-generated images can represent human bodies fairly. They named lack of emotion and human connection (KT-THEMES-C4), which maps onto the Ethics and Education theme cluster. They named cheating (KT-THEMES-C2), which is a recurring Ethics concern in classroom contexts.
Within "opportunities," they named teachers allowing students to use generative AI for paragraphs with mistakes to correct (KT-THEMES-O1), which maps onto the Education theme. They named producing faster work (KT-THEMES-O3), which maps onto the Industry theme through the productivity frame. They named checking math homework (KT-THEMES-O2), which maps onto the Education theme as an instructional application.
The children and teens at UW were not coached toward my framework, yet the framework's themes appear in their reasoning. This is the strongest external-validation evidence the cross-iteration corpus carries.
A.5.2 Undergraduate cohorts in my own courses apply the framework in final-project work
The Iteration 1 Final Project Requirements (FP-1) distribute nine reflection questions across the four themes. Students who completed the final project were therefore prompted to engage with all four themes, and their work is part of the dialogue-with-informants-beyond-self evidence base.
The Iteration 2 student teach-out presentations (SP-2 series) extend this by giving students an opportunity to teach back. The recorded teach-outs cover DeepfakeAI (SP-2.DEEPFAKE, an Ethics topic with Industry implications), Sintra.ai (SP-2.SINTRA, an Industry topic), Haley Phillips's teach-out (SP-2.HALEY), Dakota A's teach-out (SP-2.DAKOTA), and AI and Robotics integration (SP-2.ROBOTICS, an Industry topic with Education implications). That the student-chosen topics distribute across all four themes is evidence that the framework matched what students cared about, rather than something I had to insist on.
Specific named undergraduates also surface in the cross-corpus material in ways that document theme engagement. Ashley Stafford appeared in the Iteration 1 Week 1 deck (DK-1.W01) and was subsequently quoted in the Aspen Public Radio article (AP-2024-05-16) as the named student voice on what the course offered. Ethan Cuenca appeared in the Iteration 1 Week 1 deck (DK-1.W01) and was publicly credited in my Keep Up Newsletter Episode 3 (KN-EP3-Q1, "I learned about Soundful from one of my students during class") as the student whose tool discovery shaped Iteration 2's curriculum. Both students are documented engagements with the framework across multiple artifacts.
A.5.3 Journalists frame my work in theme terms
The Aspen Public Radio article (AP-2024-05-16), published one week after Iteration 1 ended, frames my course in terms that map onto the four-theme architecture. The article's title is "Could AI be the next college teaching assistant? Some Colorado professors believe so." The framing is squarely within the Education theme. The article also reports on my course's industry-application orientation and on its accessibility implications, which map onto the Industry and Accessibility themes respectively.
External journalism is not coached. The article's framing is the journalist's own organization of the material I had been teaching, and the framing lands on the architecture I had built. I treat this as additional cross-context evidence that the framework speaks intelligibly to non-academic audiences.
The article carries named quotes of me and of Ashley Stafford; I cite the article at source-level (AP-2024-05-16) in this dissertation.
A.5.4 Online workshop attendees engage with the framework in their feedback
The Iteration 3 Luma feedback corpus (LF-3, with sub-IDs LF-3.R01 through LF-3.R29) carries twenty-nine evaluative responses, nine of which include text feedback. The text feedback engages the framework substantively.
One four-star reviewer wrote (text preserved verbatim from the source):
"I think it was a good discussion regarding how to use the different AI image generation tools. A course work based on some of the neural networks behind them could be a great one."
The response is a constructive critique that surfaces a depth-versus-breadth tension within the Education theme. The reviewer wanted more on the neural-network mechanics under the tools (a deeper Education-theme dive into model architecture) and offered that the workshop succeeded as a tool-walkthrough (the breadth orientation). The tension is the kind of finding that analytic autoethnography surfaces from cross-context feedback. I return to it in §C.4.6.
A.5.5 My own public-facing writing across channels re-articulates the framework
The framework reappears in my own public-facing writing across the cross-iteration channels.
The Keep Up Newsletter (KN-EP1 on image generation, KN-EP2 on research tools, KN-EP3 on sound tools) uses topic-clusters that align with the framework's domain coverage. Episode 1 sits within the Industry and Education themes (image generation as a craft and as a tool). Episode 2 sits within the Education and Industry themes (research tools as productive aids). Episode 3 sits within the Industry and Ethics themes (sound tools and the question of musician displacement).
The Keep Up Podcast (KP-EP2, KP-EP3) re-articulates the same theme clusters in audio form, building on the newsletter's groundwork.
The CU RMACC webinar (WB-2026-03-03), delivered ten weeks before my defense via the federal NAIRR Pilot platform, is the most recent public-facing synthesis of my work. It is also organized around the four-theme architecture, presenting the framework to a federal research audience as the coherent organizing principle of my pioneering practice.
A.5.6 What the cross-context evidence supports
The four-theme architecture is not a researcher's analytic imposition. It is the framework I built into my teaching from the K-12 origin (§A.2), and it is the framework that resurfaces, with varying degrees of independence, across five other contexts: children at UW KidsTeam, undergraduates in my own courses applying it in final-project work and teach-outs, journalists at Aspen Public Radio, online attendees at the Iteration 3 workshop, and federal research audiences via the RMACC webinar.
I claim modest construct validity for the framework on this basis. The framework is not the only valid lens on generative-AI pedagogy. It is, however, a framework that has held its shape across age groups, institutional contexts, delivery modes, and audience types. Chapter D (§D.4) develops the framework's presence across eight documented contexts as a substantive finding in its own right.