## PCMI 2012: Experiments with our Data Group

This will be my seventh and final year with the Secondary School Teachers’ Program (SSTP) at the Park City Mathematics Institute. It has been the most transformative and influential professional development experience of my career. I was a participant for three years and have been an SSTP staff member for four. If you have the chance to spend three weeks in Park City, Utah, and think deeply about teaching mathematics, then you should apply. If you look at the PCMI-SSTP Site at the Math Forum, you’ll see how much goes on during this program.

There are too many things to talk about, but I will focus on the first two days of what may prove to be a very exciting experiment: The Data and Chance Working Group. The goal for every teacher in this group is to create some resource that they (and other teachers) can use with students or teachers. Sometimes we get teachers with little or no experience with data and probability. Sometimes we get experienced AP Statistics teachers. It varies every year. This year, most of our participants were on the “new” end of teaching data in schools.

Carol Hattan, the working group leader, and I tried something new. We posed one simple prompt: “Let’s think about some Field Day events from school…

• Which ones could we replicate here at PCMI?
• What research questions might we ask?”

Who knew that would work! Within the first two days of class, the seven participants dug deep into many fundamental issues in statistics and data analysis:

**Important statistical ideas discussed on the first day**

• The importance of creating a clear, well-defined research question
• The need for specific, clear, repeatable protocols to answer your research question
• Designing protocols and controls on the measurement variables to reduce bias and variation in the results
• Choosing appropriate measures of our key variables: “amount of practice” and “cup stacking success”
• Choosing appropriate visual displays to see patterns in the data
• Describing data distributions correctly (identifying centers, spreads, shapes)
• Defining appropriate variables to measure improvement (first stack time – last stack time)
• Interpreting the slope of a linear model predicting last stack time from first stack time
• Distinguishing between what happened in a sample and whether we can generalize to a larger population
• Using simulations as a tool to see if sample results are too extreme to be attributed to random chance alone
• Interpreting the results of a simulation
• Executing all of these tasks on a variety of technology platforms (Excel, Fathom, “by hand,” R, Maple)
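To give a flavor of the simulation idea in the list above, here is a minimal sketch of a randomization test in Python. The cup-stacking times below are entirely made up for illustration; they are not the group’s actual data, and the sign-flipping approach is just one common way to simulate a “no real improvement” null hypothesis.

```python
import random

# Hypothetical paired times (seconds): first attempt vs. last attempt
# for seven participants. These numbers are invented for this sketch.
first = [14.2, 11.8, 16.5, 13.1, 12.7, 15.9, 14.8]
last = [11.9, 10.4, 13.2, 12.8, 10.1, 13.5, 12.0]

# Improvement for each person, using the list's definition:
# first stack time minus last stack time.
diffs = [f - l for f, l in zip(first, last)]
observed = sum(diffs) / len(diffs)

# Null hypothesis: practice doesn't matter, so each person's difference
# is equally likely to be positive or negative. Flip signs at random
# many times and count how often the simulated mean improvement is at
# least as large as the one we observed.
random.seed(1)
trials = 10_000
count = 0
for _ in range(trials):
    simulated = [d * random.choice((-1, 1)) for d in diffs]
    if sum(simulated) / len(simulated) >= observed:
        count += 1

p_value = count / trials
print(f"observed mean improvement: {observed:.2f} s")
print(f"estimated p-value: {p_value:.4f}")
```

A small p-value here would suggest the improvement is too large to chalk up to random chance, which is exactly the kind of conclusion the participants were wrestling with; the same logic could be carried out in Fathom, R, or by hand with coin flips.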

We didn’t simply pose those questions and walk away. We listened carefully to their conversations and let the participants work through and answer their own questions. They were extremely thoughtful, engaged, and successful at arriving at good answers and resolutions. Once in a while we would step in to suggest a good resource or nudge them in a more productive direction, but the conversations were deep, interesting, and productive.

More about this later… We’re only two days in, and I’m very jazzed. This was a bit of a pilot for something I want to try a couple of years from now at my school: a project-based research course using data. Stay tuned.