Opening a New Hatch to Undiscovered Space

Category: Assessment

Reflecting on Experiential Practicum 391

My EDUC 391 practicum experience at College Heights Secondary School was both a positive and robust learning experience. I had the pleasure of teaching two units of the Chemistry 11 curriculum to a fantastic group of young individuals. Throughout the three weeks, 391 teacher candidates (TCs) were required to meet in fixed groups of three, called "Triads," to discuss and compare the progression of our practicum experiences. During my Triad's weekly meetings, I learned about the successes and issues that my peers faced in an English 11/12 class and a K/1 class. Although anticipated, it was fascinating to hear about the main areas of focus required for different age groups and subjects. For example, students in the K/1 class were highly diverse in terms of learning abilities, maturity, and socialization. Several students had Individual Education Plans (IEPs) and required specific supports to facilitate their learning; consequently, the emphasis there was on ongoing classroom management. In the English 11/12 class, I learned of a student who was taking the class for the third and final time. This student read and wrote at a problematically low grade level but had been turning their academic focus around in recent years, taking agency over their learning with the goal of successfully graduating from high school. According to the TC, the student supported themselves with a decent-paying job that they enjoyed and hoped to continue after high school; however, their continued employment was contingent on attaining a high school diploma, and this English course was a strict requirement. Because senior English is a requisite for graduation, the diversity of academic abilities across students is large, which required the TC to place a high emphasis on differentiated learning. The demographic in my Chemistry 11 class was certainly not homogeneous; however, because Chemistry 11 and 12 are elective courses that require higher academic competence, the diversity in both academic abilities and IEPs was smaller than in my peer TCs' classes. As such, I placed a high emphasis on personally knowing and understanding the concepts being taught so they could be presented to students in dynamic, multifaceted formats that satisfied the diversity of learners in the classroom.

The students I worked with in my Chemistry 11 classroom ranged greatly in their individual strengths and focus; therefore, it was especially important for me to provide multifaceted formats for learning and to confidently handle all types of questions and confusions associated with the chemistry concepts. What went well during my practicum was my ability to quickly create connections with students as individuals, obtain feedback on their learning, and then use this feedback to adapt subsequent lessons. It quickly became clear to me that building relationships and trust with students would be a tool for increasing learning efficiency. Simply knowing students' names made classroom management easier. For example, when certain students became talkative during lessons, I could use their names in examples or analogies to regain their attention without directly telling them to "stop talking." Because the students were already familiar with each other and my CT, I used something called "Name Tents" to expedite the process of making individual connections. Name Tents serve two functions: they act as an identifier and as a communication tool. They are folded pieces of paper (in a standing tent shape) on which students write and decorate their names and then display at their spaces. Inside the "tent" is a space for student comments and teacher responses. Near the end of each class, I gave students a prompt to comment on in their Name Tents (e.g., What is one thing you would like me to know about you? If you could have a conversation with anyone, dead, alive, or fictional, who would it be? What from today's lesson worked well for your learning?), which I would respond to and return the following day. After a single use of the Name Tents, I went from teaching in a classroom of strangers to having students stay after class and engage me in excited conversation over shared interests (music, cars, "Peaky Blinders"). An advantage of this activity, beyond building relationships through individual communication, was that the focus of the prompts could shift from personal inquiry to educational inquiry, making the tents a tool for students to report feedback.

Template of the communication section of the Name Tents. Students write their comment in response to the day's prompt, and below it the teacher responds. Often these back-and-forths would continue for more than a day around a single comment.

To exemplify how I obtained and used formative assessment and feedback to enhance learning during my practicum, I will describe some of the tools I used during the bonding unit. Content for this unit was delivered through note packages that I created using information from multiple textbooks and additional resources. As note packages go, I made them as dynamic as possible: they included fill-in-the-blank sections, images, drawing sections, diagrams, analogies, textbook definitions, student definitions, predictions, pattern recognition, practice problems (with extending questions), and more. Within these notes, I wrote prompts that had students get into assigned, numbered groups of three, go to the correspondingly numbered whiteboard in the room, and work through problems. The use of whiteboards was based on the Vertical Non-Permanent Surfaces (VNPS) methodology in Thinking Classrooms, proposed by SFU education professor Peter Liljedahl. His methodology also includes a "visibly random groups" component that was not used because of the COVID grouping restrictions in classrooms at the time. The nine whiteboards located around the room's perimeter gave me an efficient route to circulate, acquire feedback on student understanding, and provide differentiated learning assistance. They also let me use groups that were successfully completing problems as peer-learning resources for groups that were having difficulty and waiting for assistance. During the lesson on polar covalent bonds and molecular polarity, the student work displayed on the whiteboards revealed that I had underestimated the time it would take to cover this concept. In addition to the feedback from whiteboards, students were asked to complete a "Muddiest Point Card" (below) as an exit slip at the end of class.

Muddiest Point Card used for feedback on lessons during 391. These cards prompt students to explain the least clear components of the lesson, the parts that were "clear as mud."

Over that weekend, I used the Muddiest Point Cards to create a focused assignment that had students progressively work towards conceptualizing polarity. The assignment had students clearly demonstrate an understanding of symmetry and used familiar concepts like directional force to conceptualize how electronegative atoms pull electron density. It also had students use molecular modelling kits to build the molecules they were describing geometrically. The assignment was marked formatively and returned to students with no grade but a great deal of feedback. Students were subsequently given an "Understanding Check-In," a formative quiz based on the material from the whiteboards and the assignment. Students were asked to treat it like a quiz when writing but understood explicitly that they would be marking it themselves and that it would not affect their grade; rather, it would be used to focus the upcoming review for the summative unit test. I created the quiz so that each question addressed a specific component of their learning (symmetry, Lewis structures, bonding based on electronegativity, partial charges, etc.); it was then easy to tally up each section and weight the review appropriately. Review materials included conceptual checklists of everything we had covered, practice problems, PhET simulations, educational videos, molecular modelling kits, Plickers multiple-choice questions, and a lab that I co-created with my CT, which had students apply the theory they had learned but summatively assessed them on two curricular competencies. The Plickers application was a particularly valuable formative tool because it provided and saved immediate graphical class data on student answers and survey questions, which I could easily use to adjust the weighting of review topics.
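For readers curious about the tallying step, the sketch below shows one way it could be done programmatically. The concept tags and miss counts are hypothetical, invented purely for illustration; they are not data from my class, and I tallied by hand during practicum.

```python
from collections import Counter

# Hypothetical mapping from quiz question number to the concept it checks.
question_concept = {
    1: "symmetry",
    2: "Lewis structures",
    3: "electronegativity",
    4: "partial charges",
    5: "symmetry",
}

# Hypothetical self-marked results: question number -> number of students
# who missed that question (made-up numbers).
misses = {1: 4, 2: 11, 3: 7, 4: 9, 5: 2}

# Tally misses per concept.
concept_misses = Counter()
for question, count in misses.items():
    concept_misses[question_concept[question]] += count

# Weight review time in proportion to each concept's share of total misses.
total = sum(concept_misses.values())
for concept, count in concept_misses.most_common():
    print(f"{concept}: {count / total:.0%} of review time")
```

Because each question maps to exactly one concept, the per-section totals fall out of a single pass over the results, and the proportions translate directly into how much review time each concept deserves.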

Throughout practicum, phrases like "it's not what you say, it's what they do" and "pre-assessments, formative assessments, and feedback are only valuable if they are applied to the learning" continuously occupied my mind. Tools like the Name Tents, exit slips, Plickers, and the others mentioned above allowed me to robustly adapt my teaching to fit the needs of the class and of individuals. Although I am only beginning the practice of extracting and using classroom data to facilitate education, I believe that I effectively implemented ongoing, bidirectional learning and that it had a positive result for the students and for myself as a developing educator.

During practicum, I spent a lot of time refining my pacing. Even before starting, I predicted that pacing would be a component of teaching that would need to be worked out through experience. Although I am quite capable of estimating the time it would take me to deliver a presentation to a group of people, teaching includes more unknowns than I was able to anticipate. I also began my first day of teaching without any clarity as to what these Chemistry 11 students did know, should know, and could know. For example, I anticipated that teaching the concept of polarity would take no more than 30 minutes for the majority of the class to understand; in reality, it took several days and a great deal of different learning tools.

During the first week, it quickly became clear that a great deal of classroom efficiency was lost when transitions within lessons were sloppy, and that the energy and mental state of the class greatly influenced learning efficiency. On my second day of practicum, my CT offered me some suggestions for material to get through, and I created a lesson plan containing 8 or 9 components to fit into 1.3 hours in hopes of satisfying her recommendation. Not only did I miss my pacing goal, but the learning also felt rushed, superficial, and ineffective; it felt terrible. Afterwards, I refused to cram lessons the way I had that Tuesday, for the sake of the students and my own sanity. During student labs, I realized how the classroom's physical arrangement influenced time efficiency. For example, by effectively spacing laboratory components like materials, equipment, and waste containers around the classroom, I could reduce the bottlenecks where students wasted time waiting. During each subsequent lab, I worked on refining the classroom layout and on techniques to keep students engaged and on track.

As I continue my development as a professional educator, I will keep getting better at pacing. Timers, growing familiarity with teaching certain concepts, viewing pacing through a holistic lens, and the physical set-up of classrooms are all areas I am refining so that I can plan lesson pacing more precisely.

As this semester concludes and the next begins, I approach the 490 practicum. During 391, I produced multiple assessment rubrics and had the opportunity to experiment with several different assessment approaches; however, assessment was not the focus of the 391 practicum, and I was therefore not responsible for the overall assessment and reporting. Furthermore, a somewhat explosive situation arose in response to students receiving their interim reports during my practicum. It demonstrated a strong disconnect between holistic assessments based on proficiency scales and the percentage- and grade-based reporting system on interims. As this experience appears in other writings, it will not be detailed here. The takeaway, however, is that I was able to observe how the practice and development of certain assessment styles could robustly represent learning, but that without explicit understanding among students, parents, and teachers, assessment can halt learning in its tracks and provoke anxiety, anger, and ego. I am curious how assessment strategies that are unfamiliar to students are best approached from the start, and how evidence of learning is best documented by teachers for parents, students, and administrators.

Curriculum and Assessment

In his TED Talk titled How to escape education's Death Valley, Sir Ken Robinson uses California's Death Valley as a metaphor for the effect of learning conditions on education. Death Valley is considered the driest, hottest part of America, where nothing grows until, once in a blue moon, mass rainfall floods the landscape, awakening dormant seeds that sprout and bloom into a magnificent valley of life. Robinson says that teaching is a creative profession, not a delivery system, and that given the proper conditions of possibility, expectation, opportunity, relationship, innovation, and creativity, learning is as inevitable as life in Death Valley after a downpour. Education that is narrowly focused, restricted, and conformist, however, dries the learning out of education, leaving behind a dormant, arid landscape of monotony. It is a great metaphor because, at some point, every one of us has experienced curiosity: the spark that drives true learning, the engine of achievement, the water that brings the desert to life. Nevertheless, sparking curiosity in a collective of individual learners, within a curricular framework that must provide systematic assessment and reporting to the bureaucratic officials in charge of funding, can be a hurdle.

Before getting into curricula, I would like to consider the differences between, and values of, summative and formative assessment. Formative assessment is commonly an informal, ongoing tool used by teachers to orient themselves to where a student is in their learning process. This style of assessment is not meant to be high stakes but to diagnose the strengths and weaknesses of both the learner and the instruction so that the education is purposeful and effective. Conversely, summative assessment may be thought of as the formal, cumulative outcome of learning intentions following instruction: the assessed product of what the student knows after the learning has occurred. These assessment types are often modelled in the K-12 education system, particularly in high schools, as I have come to observe through my practicum experience this past month. The generalized model I have observed follows a similar process: a lesson (full of required content based on the curriculum), a work block (where students are given time to work on assignments, tasks, practice problems, quizzes, etc.), and finally a unit test covering the unit's important material. During lessons, where content is transmitted, teachers constantly perform formative assessment by posing questions to the class and judging the answers and understanding, observing levels of attention and body language during instruction, and continuously communicating with students individually or as a collective. During work blocks, students have the time and resources to engage with the material and discover deficiencies in their own learning; furthermore, they are able to ask teachers for help or further explanation individually. Teachers use this time for formative assessment as well, noting which students are struggling or motoring through content so they can make individual adjustments, and additionally using students' work as a metric for where their understanding is. Formative assessment also helps orient teachers to where discontinuities in learning stem from: if no one in the class appears to grasp the material following a lesson, perhaps corrections should focus on the instruction rather than the individual learner's inability to understand, or vice versa. After the teacher makes appropriate efforts to mitigate the discrepancies discovered through formative assessment, summative assessment is issued in the form of a written test so students may show their understanding in a standardized format.

Personally, I see clear advantages and shortcomings in the way formative and summative assessment are being used in the classes I've observed this year, and these observations are independent of the educators' abilities to perform effectively within this framework. On one hand, I see a written unit test as a summative tool that tells the teacher how well students write tests in a particular format; this can be entirely independent of what students know or understand because it leaves out human variables like whether a student was bullied that day, whether they ate breakfast that morning, whether they memorized the material or understood it, and so on. Furthermore, we know from behavioural psychology and studies like the Candle Problem that high-stakes, extrinsic rewards and punishments (contingent motivators), like grades, restrict creativity and dull cognition by narrowing focus. This phenomenon may be contrary to expectations; however, it tells us that the model of testing, when based on contingent motivators, is best fitted to assessing rote or mechanical tasks, and that testing creative and critical thought with contingent motivators is in itself a flawed measure of achievement. On the other hand, I see an obvious need for summative assessment in that there must be some metric providing structured feedback on the effectiveness of the educational process and the ability of the learner. But if the curriculum is content driven, then how could one possibly assess a student's learning, which may include creativity and critical thinking, without narrowing focus through contingent motivators and without getting rid of summative assessment? Change the curriculum.

Curriculum may be thought of as the scaffolding that supports the space where learning can be explored through guided and planned intention. Many models of curriculum exist, and I suspect many of us have knowingly or unknowingly been subjected to several of them. Academic post-secondary institutions often model their curriculum with a syllabus, which might include a timeline of headings describing content to be covered through a series of lectures or assignments, eventually leading to a form of summative assessment like a final exam. The largest shortcoming I have experienced with this style of curriculum is that it does not require the learner or teacher to be curious, critical, or even knowledgeable about the body of knowledge being transmitted. For example, during my undergraduate degree I attended a genetics course in which the professor read directly off PowerPoint slides borrowed from a different professor who had previously taught the course; furthermore, these slides had been adapted from the textbook publisher's materials. As the slides were read to the class, questions from students would occasionally arise around the material, and more often than not, students were deferred to asking again after class unless the answer happened to be within the slides being read. I will never forget the lecture following our second midterm. The syllabus stated that we would cover a third of the content before the first midterm, another third before the second midterm, and the final third before the cumulative final exam. Due to poor time management, the lecture following the second midterm landed in the final week of classes, meaning two lectures remained to complete 33.3% of the content; with the miraculous speed of pressing the "next" button in PowerPoint Presenter, we "covered" all the material in the course and endured the final exam that Saturday morning. In this model, there was no need for us to be curious, no need for the professor to hold any knowledge of the subject, and no need to attend lectures (because we can all read). There was no formative assessment (even questions went unanswered), and the summative assessment tested our ability to rote-memorize words from slides. When I reflect on that experience, I realize that although the quality of education I received in that course was lacking, to put it politely, it easily satisfied the requirements of the curricular model. The information we needed was transmitted to us via prepared slides, and we were assessed on our ability to reproduce that content, end of story.

In contrast, during my undergraduate honours thesis, I experienced a curricular model far closer to curriculum as praxis. Curriculum as praxis can be conceptualized as committed action and engagement in learning, embodying qualities that lead toward human emancipation. On praxis, Mark K. Smith writes that "[i]t is the action of people who are free, who are able to act for themselves." In my honours thesis, I was given that exact gift of freedom: to act and engage in the learning because it mattered and meant something. The research topic of my thesis was decided collaboratively by myself and my PI, Dr. Sarah Gray. Afterwards, I was given an intimidating level of autonomy with high-stakes responsibility in how I used my time and accomplished my work. For example, I was permitted to purchase thousands of dollars' worth of laboratory equipment and materials on Dr. Gray's account, but the onus was on me to give purposeful and precise reasoning for every decision I made. As a time commitment, there was a minimum number of hours I was to spend in the lab per week (although how and when I allocated my time was up to me), but honestly that never entered my consideration because I practically lived in the lab. When performing biochemical and molecular-biological experiments, I learned through peers in the lab, reading the literature, and trial and error; there was no manual, no instruction, and total responsibility for my work. Dr. Gray held lab meetings once per week, where she would sit with a notebook and rapidly absorb and assess where we were in our experiments and work. It was always an intimidating experience because she had high expectations for us, and we were highly motivated to provide something useful to the group. That was her method of formative assessment, because the quality of our work, our efficiency, and our level of understanding were exposed weekly in what we presented and how we answered her questions. Furthermore, we were constantly humbled by how little we really knew when she applied her wealth of professional knowledge to our child-like scientific minds; we respected her. The work was by far the hardest and most valuable part of my undergraduate degree. I engaged in it to a nearly obsessive level because it mattered to me, because it was my own, and because it was part of something greater than myself. My engagement was also tied to my respect for Dr. Gray and my peers in the lab; we were contributing to a body of scientific knowledge that might offer solutions to the obesity epidemic, so the work translated readily into human emancipation. The summative assessment of my thesis was centered on my research proposal, the experimental work throughout the course, the final thesis document, and a final seminar where I presented my work to a panel of scientists with appropriate specializations. The onus of the research proposal was on me to create an argument that would permit me to conduct the research at all. The experimental work was my decision, and the quality was what I judged to be good enough; after all, I was the one who would be defending my work in the end. The final paper was my creation, my argument, my work, and my findings. Finally, my defense seminar was the manifestation of all the work I had accomplished and the story it told, transmitted through my personality, in front of far more qualified scientists. The summative assessment was nothing like a written test; it was an extremely high-stakes judgement of a year's worth of knowing, doing, and understanding within a guided framework full of autonomy and creative thinking.

The current BC curriculum, under which I am being trained, can be modeled as praxis if implemented appropriately. It is built around three concepts: Know, the content students are expected to know; Do, the curricular competencies students are expected to demonstrate; and Understand, the big ideas students are expected to understand. The core competencies are proficiencies developed so students can engage in deeper learning at an intellectual, personal, social, and emotional level; they include the critical and creative thinking, communication, social and personal awareness, and responsibility competencies. The Big Ideas are the generalized principles that students will understand, the theory that can be applied to practice. And finally, the content is the stuff students need to know, but here it is not the centre of the curriculum as it was in my genetics class. I was fortunate to ask Dr. Christine Ho Younghusband how she would use the new curriculum when teaching something like adding and subtracting fractions (a content item I figured would be difficult to apply core competencies like critical and creative thinking to; how many ways can you creatively add fractions anyway…). Brilliantly, she said that the content, that is, the fraction operation, was not the focus but a vehicle for applying critical thinking. It made perfect sense once she said it, because in the real world we don't go around collecting information (adaptations) that may one day come in handy for something; we go around encountering problems and adapt based on what each problem is. So the problem is not the ability to learn fractions; the problem is a real situation that fractions can solve when they are applied critically and creatively in the solution! With this model, students can engage with their learning on a more personalized level and develop an education directly associated with things that matter, like acknowledging relationships to place and community, or why taking on responsibility has value in developing moral identity and purpose. Allowing students to discover and engage in their learning is what I experienced in my undergraduate honours thesis, and it truly contained the element of human emancipation that is so central to curriculum as praxis. Having personal experience and understanding of how this model can potentiate learning gives me a framework for application as a future educator, and I must say, it's exciting.