In this episode, Andrew will dissect how we as art teachers can fit into the current “big data” educational paradigm. Art teachers are all too familiar with the buzzwords: Professional Learning Communities, common formative and summative assessments, student learning objectives, and standardized tests. Everything in education seems to be about hard, quantifiable data. But that shouldn’t leave art teachers on the outside looking in, waving the white flag, or sticking their heads in the sand and saying that data isn’t relevant. AOE’s good friend and go-to data diva Sarah Dougherty joins the show to talk about how art teachers can make data work for them… in short, how to make data sexy! Sarah will help answer how art teachers can make data important, fun, and meaningful, and ultimately make it work in the art room.
As Andrew and Sarah dig into data they discuss what good data and data collection looks like in the art room (6:30), what kind of assessments are best for getting that kind of authentic data (8:45), and how data can drive the decision making process in your PLC (11:30). Full episode transcript below.
Andrew: Welcome to Art Ed Radio; the podcast for art teachers. This show is produced by the Art of Education. I’m your host Andrew McCormick.
Today we’re discussing how we as art teachers can fit into the current educational paradigm. You know the one I’m talking about; the big data paradigm; PLCs, common formative assessments, student learning objectives, standardized tests. Everything seems to be about hard, quantifiable data. This can honestly leave art teachers out in the cold a little bit. Fear not, because in a second, I’m going to bring on my favorite go-to data and assessment guru, Sarah Dougherty, to talk about how we can make data work for us.
Sarah: Hi. My name is Sarah Dougherty, and I am the visual arts curriculum coordinator for Des Moines public schools.
Andrew: We’re going to talk about how we can make data sexy. I know that’s just a buzzword to attract the eyes and ears of our AOE fans out there. What I’m really talking about is making data important, fun, and meaningful. In short, making it work for us.
Sarah: I think using data is important. What else would we use to drive our instruction?
Andrew: In this episode, we’ll tackle the big ideas: What does good, sexy data in an art room look like? What kinds of assessments can we create to generate that kind of data, and how do we actually use it? Finally, some concrete takeaways to start creating that sexy data to drive your curriculum choices, whether you’re an island or you’re working in a group in a PLC.
Let’s talk about data. The reality is that accountability and standardization are two pillars of education. We can’t ignore data. And it shouldn’t just be a game that we play, where we say we look at data but really it’s a dog and pony show we put on for our administrators: “Oh, yeah, we’re looking at data,” when really we’re not.
I know data should drive our decisions in the classroom, but I think we need to ask the question, “What kind of data?” What does relevant data look like in the classroom? It shouldn’t be a one-size-fits-all decree from the school district, handed down from on high. It’s got to be something created by art teachers, for art teachers, that’s really meaningful, or we’re just spinning our wheels.
It makes no sense to tie my curriculum decisions and program evaluations to standardized math and reading scores. You know those goofy, overly specific rubrics that say, “Identify all the vocab words on the big summative post-test,” or “Do 7 different types of line.” We do so much more than that. Why wouldn’t we want our data to reflect this?
I think we need fewer rubrics and assessments that are trying to quantify a more qualitative art experience. We all know this game. If you can create 7 values you get an A, and 6 is a B, and 5 is a C. Or, you need 5 different types of line, or show me 7 different types of implied texture. I think people glom onto these ideas because there are numbers involved. It’s easy to slip these things into a rubric and in some ways look legitimate to our administrators.
Like I said, we do so much more than this. Wouldn’t we want our data to reflect that? We need to create rubrics, may I even say alternative assessments like portfolios or self-assessments, that reflect the depth and breadth of the real work that we’re doing.
I think the number of lines a student creates is far less interesting and applicable than an assessment that can give feedback to students about their critical thought, creativity, and collaboration.
To me, it seems like a no-brainer that we would want to create rubrics that reflect the more open-ended National Core Arts Standards and 21st-century skill sets. If you look at those things, they are very open-ended. Why, then, are we trying to pigeonhole ourselves into this quantitative box?
Creating assessments that hit these standards isn’t all foo fooey and hippy dippy. You can still describe what mastery, emerging mastery, and proficiency look like. I know that this language is starting to flirt with standards-based grading, which is another concept near and dear to my heart. Wouldn’t this data be more relevant and more sexy? Wouldn’t you like that data to help guide you versus some hocus pocus of standardized test scores?
If you’re as into data and assessment as I am, make sure you check out AOE’s online course, Showing Student Growth in Art. Whether you’re looking for a lane change, some grad credits, or even some PD hours, I think you’ll really enjoy this course. As with all AOE courses, with Showing Student Growth you’ll get a chance to get reflective with your practice and create a number of new tools that will take your teaching to the next level.
Showing Student Growth is a 2-credit class that begins at the start of every month. Learn more by going to theartofed.com and clicking on the Courses tab. Now, let’s bring on my good friend Sarah Dougherty.
All right, Sarah, welcome to the show. We are talking sexy data, making it work for us. You are my go-to data guru. I want to start off by playing devil’s advocate here. Why should I really give a crap about sexy data? Can’t I just keep doing the same old dog and pony show that I’ve been doing for years?
Sarah: I think you can do that, but you shouldn’t because how do you know that same old dog and pony show is working for our kids? To know that requires a little bit of measurement.
Measurement is like a dirty word sometimes in the art room, but it absolutely doesn’t have to be. It’s so important that we just don’t keep doing something for the sake of doing it. It’s important to use a little data and assess not just kids’ work, but our own work so we know that what we’re doing is meaningful and relevant for kids and we know that kids are learning. That’s our primary job.
Andrew: If we do the same old dog and pony show, we’re relying on our best guesses, and hunches, and assumptions, but we really don’t know if we’re doing it right, or doing it well.
Sarah: Yeah, exactly.
Andrew: You’ve convinced me that I should make it work for me. That’s easier said than done. Can you give me some specific examples of some teachers out there that are making sexy data and making it work for them, or maybe some stuff that you’ve done?
Sarah: Yeah, I think the teachers that I have seen and worked with that are using it best are finding ways for kids to own the data, for kids to track their progress over time to use goal setting. That’s different from our thoughts around data in the past like having a multiple choice question and answer sheet and then dividing kids into the red, and the yellow, and the green. That’s the stuff that turns people off.
What’s really exciting is when we see kids looking at their scores over time and setting goals, saying, “I’m at a 2, but I know I need to do these 3 things to get a 3,” or when teachers are able to use that data to have conversations with kids about what they need to do and where to go next. I’ve used this phrase before because I love it so much: “Rather than the autopsy, have a monitor for the heartbeat of the learning and the artwork that’s happening in your room.”
You, as the clinician of the art room want to monitor that heartbeat to make sure that things are healthy, that things are robust. You don’t want to wait until there’s a flat line and figure out what went wrong. Using that and making on the spot decisions is really important. That goes back to that whole not waiting til the end piece, looking at that data.
Andrew: I really like that analogy. I’ve said that to my kids before and I’ve said it to my parents. The big, clunky, doesn’t-really-work-for-us rubric at the end, the summative one, really is like an autopsy. The kids are done and they’re checked out and they’re ready to move on. They’re not going to change anything or learn anything.
That’s where I think some formative assessments along the way come in, some rubrics, some really authentic things that we can create. We’ve talked off mic a little bit about how our assessments and our data look different from other teachers’. Specifically, can you say what those things look like? Is it just a better rubric, or what is a different assessment tool that gets us better data?
Sarah: I think a better rubric is absolutely the case. Sometimes we look at rubrics, which are great ways to get a little data and to measure student progress, but we make them super quantified. I’m thinking about a little kid who can do 5 lines and then 7 lines. They move up the rubric by just adding more lines, or demonstrating more of something, which is not really showing a whole lot of growth.
Same for a secondary example like a kid who can make a composition touch 3 sides of the paper. “Oops, you touched 2 sides of the paper. I’m going to give you a lower grade.” I think that’s where we have a disconnect. That doesn’t really tell us very much about learning. The data from that doesn’t make sense for us.
In order to get data that makes sense, that helps us drive instruction, or helps students make decisions, the tool by which we glean that data has got to be smarter. I think the rubric is terrific, and I think other content areas are wising up to that. The rubric has got to be about developmental skills, the foundational skills. I have the foundations. I know the vocabulary. I can do a little application.
The next move up on the scale in the rubric would be, “I can do that, and I can apply it, and make inferences.” and going beyond. I think it is a lot about creating better tools with which to measure and then collect the data.
Andrew: I think it’s got to look a little bit different. It can’t just be, “Here’s a project. Here’s a rubric, moving on. Here’s a project. Here’s a rubric, moving on.” I think of it this way. We have these big, essential skills, or these goals, or these learning objectives. Really your projects are multiple data points. You could say, “I scored the student, the artist out of this on this project and here’s why.”
On a different project, same skill. It’s a this and here’s why. The data really does look different and you have to maybe explain a little bit. Maybe that becomes self-assessments. Maybe that becomes multiple check-in points on a portfolio. This segues into one of the things that I think is so important about data, which is working with the PLC. That’s something that’s near and dear to my heart, and I know you’ve talked before at some of the AOE conferences about making your PLC work.
It’s really a no-brainer. Like-minded teachers teaching the same thing, sharing best practices, but data is such a huge part of that. Like you said earlier, sometimes we as art teachers get a little bit turned off by data because it seems not relevant to us. Can you describe for us what a really high-achieving art PLC looks like when they’re creating and using their data?
Sarah: First, I think it’s all about establishing a climate and culture where you can call people out and have real conversations. I think if we, especially those of us who have …. It’s not true just of elementary teachers, but it’s especially true I find in elementary buildings, people are afraid to have critical conversations with each other.
Andrew: We’re too nice.
Sarah: We are too nice.
Andrew: No one wants to be mean or anything.
Sarah: I think it’s beyond that we’re too nice. I think it’s that we are in the fight for these kids’ lives. It feels like if we say something, or challenge something that someone does, we’re really betraying the only allies we have in this fight. I think that’s a little bit melodramatic, but that’s literally what we’re doing. We are preparing kids for their lives.
If we’re not in that fight together, and we say something that ticks someone else off, there can be a cultural danger in your school. First of all, you’ve just got to establish that climate and culture that we’re here to do what’s best for kids. What’s best for kids is to take a critical look at our practices, our own practices, and to be open to that. That, I think, is the first part of a high-functioning PLC: being able to ask questions like, “Why did you do it that way?” or give a suggestion and things like that.
I think the other thing that you point out, which is really smart, is using those Rick DuFour questions and starting out with, “What do we want kids to know and be able to do?” You were talking about how we get really project-specific rubrics and then we can’t measure anything over time. You’re absolutely right.
If we have a line item on our rubric that’s applying paint techniques on this project and then the next project we have one that’s behavior-based, following directions and then on the next project we have something else, there’s no way to measure progress over time.
We want to give kids multiple opportunities to get better at something. We cannot do one-stop shops. “If you didn’t get it in this project, sorry, you’re never going to get it again.” What a hopeless message to our kids.
Andrew: I’ve seen some rubrics that just have horrible criteria. Some of this was being a teacher of pre-service teachers. One of them was like, “How much did you enjoy the project? Did it seem like you really enjoyed the project, or only kind of enjoyed the project?”
Number 1, as a teacher, how do you know? You can guess, but you don’t really know. I actually think this gets to something that came up in a PLC talk I was having just recently, which is that a high-functioning art PLC really needs to take emotion, and guesswork, and hunches, and beliefs out of it. We really do need to be data-driven and research-driven.
You can’t just say, “I like doing this.” If the research and the data says that doing this, or assessing this isn’t the greatest, then we shouldn’t be doing it. If you really do think it is the greatest, then let’s prove it. Let’s see your research. Show me the data and convince me that this is what we need to be doing.
I don’t want to turn this talk exclusively into PLCs, but I think the data is such a big part of that. We got to get on the same page and make good data so we’re making good choices.
Sarah: I think after you have that climate and culture … actually, I don’t think we have the luxury of saying, “First, we’re going to spend a year on climate and culture, then we’re going to do work.” You have to establish these things at the same time. Your second action, or your action right along with that, would be to ask yourself, “What do we want kids to know and be able to do?”
The national standards are terrific, but it’s hard to drill them all the way down to what kids would be doing to demonstrate them. If you have a whole bunch of rubrics, bring them together and figure out what those common threads are. What am I assessing over and over? You can distill those down to the most important things.
From there, I think, the really concrete stuff is to develop a general rubric that you can use over, and over, and over again that would apply to every single project that you’re doing. That doesn’t mean that you can’t have …
Andrew: Amen. I want ….
Sarah: … specific [crosstalk 00:16:12].
Andrew: I want that on a tee shirt. I fight for that all the time. I think some people push back against that. Like you said, you’ve got to have a longitudinal sample. You have to provide students an opportunity to improve something. If you don’t, and every project is something different, that’s a schizophrenic classroom right there.
Sarah: Absolutely. Can you imagine if you were, for instance, going to get your driver’s license and the exam was over different things every single time? I think that’s also a really great step. Going back to your question about what are good teachers doing. They’re not hiding the criteria from kids.
We did this terrific explanation, or this terrific demonstration in one of our PDs where we had 3 teachers go up to the front of the stage. We said, “You’re going to clap and the rest of the room is going to grade you on it.” The people up on the stage had no idea what the criteria were. Even we as the people who were assessing them didn’t know what the criteria were. It was this big confusion. That’s actually often what happens in classrooms.
Once the teachers on stage knew what the criteria were, once they had a clear set of criteria, the acceleration of learning, of how to clap and what we were looking for, was incredible. You could do it right away. If you make sure not only that you establish that as a PLC, but that you have ways to help kids understand what it is they’re learning and what specific steps they need to take to engage with that learning, to get better and to improve, you’re going to see just enormous progress in your students when there’s clarity and transparency around that.
Andrew: That’s one of the things one of my partners in crime at the junior high I teach at and I have been working on: with our learning management system, making our rubrics the same, making them visible. I’ve been a firm believer for a long time that students can’t hit targets if they can’t see them. You’ve got to make them transparent and visible.
Maybe that’s step 1 of making data sexy: we’ve got to make it visible. We’ve got to agree upon it. We’ve got to make sure that kids can see it so that they can achieve it, so that we can measure it.
That sounds so simple, but because we’re a subjective field, I think a lot of us as teachers can muddy the waters with what we like: “This is really important,” but “I don’t agree that’s important.” You’ve got to define those steps.
Sarah, I know we could talk PLCs til the cows come home, but I want to wrap this up on some very concrete specific things. If there’s a teacher out there listening to this podcast, and they’re saying, “Man, tell me about it. All I hear all day long from the admins is what’s your data tell you and let data drive your decision.”
What are 2 or 3 very specific things that you think that teacher could do tomorrow that could start, improve what they’re doing and make data sexy for them?
Sarah: Number 1, boil it down to 3 to 5 things that your kids need to learn to be successful at the next level, whatever that is. If you teach 4th grade, boil it down. What are 3 to 5 things that kids need to know in 4th grade art so that they can be successful in 5th grade? If you teach an AP class, what are 3 to 5 things they need to know to be successful and pass that AP exam?
Once you’ve boiled that down, I think you can create some rubrics, rubrics that move developmentally through stages, not quantified. I think the third thing you do is don’t forget the most important stakeholder. You start by communicating with the student about those expectations and having them be a part of their own assessment process.
We like to think we’re a subjective group. We shouldn’t be. We’re objectively grading and assessing learning. That doesn’t stop us from having opinions about what’s good art and what’s not good art, but that’s not our job. Our job is to assess learning over time.
Andrew: That’s awesome. Talking about my colleague at the junior high, we’ve been trying to think about that too. We try to get our students involved with self-assessment at the end, but what about at the beginning? What if we told them, “Guys, here are the 5 or 6 essential skills we’re looking for over the whole year, and in this project that we introduced yesterday, what do you guys think this essential skill looks like?”
We’re trying to get them to help us actually create the rubric at the beginning so that we have some buy in. We’re still giving them some of the criteria, but we’re letting them fill out what quality, and what learning, and evidence of learning looks like. It’s been a really interesting process.
Sarah: Yeah, it’s a great first step also into a more choice-based classroom. I think that is almost always more meaningful for both teachers and students. I have a ton of questions all the time about how do I reach standards in a choice-based classroom, or how do I assess kids in a choice-based classroom? This is the way to do it.
Andrew: You just wormed your way into another podcast episode, the choice-based classroom and assessment.
Hey, Sarah, thanks so much for coming on and sharing all your great insights. I’m sure we’ll be in touch with more questions about choice-based and assessments down the road.
Sarah: Sounds good.
Andrew: All right. Thanks, Sarah. We’ll see you later.
Sarah: Thank you. All right.
Andrew: Bye. Holy cow, I love Sarah Dougherty. What a trove of knowledge about assessment and data and PLCs. Ultimately, while I know that this show’s about sexy data and making it relevant for us, it’s also really about better instruction and providing your students with the best, most authentic art experience that we can as teachers. Some of that starts with really authentic feedback. Who wouldn’t want that?
My big takeaway: think about what’s really big-picture important for your students and for your art program. Is it the specific, quantifiable, play-the-game-like-everyone-else data points and rubrics that we can easily slip into a dog and pony show for our administrators? Or are we trying to show growth in critical thinking, and creativity, and passion, and grit, and perseverance, and voice?
We can create the assessments that show that. We can make that data work for us. Those are the things that take a different set of assessment tools. They’re probably going to look different from other teachers in the building. That’s okay. Don’t be shy. Make it work. Make it work for you. Make that data sexy.
Art Ed Radio is developed, produced, and supported by the Art of Education with audio engineering by Michael Crocker. If you want to support the show and enjoy what we’re doing, please subscribe on iTunes. Leave some comments and write a review. We especially appreciate those 5-star reviews.
New episodes of Art Ed Radio are released every Tuesday. Additional content can be found under the Podcast tab on theartofed.com. Join us next week as we’ll be talking about how to streamline your grading practices with technology. Thanks for listening.