ISD581 2016: Media Debate Discussion


Continuing the discussion from Readings: The Media Debate:

The Media Debate

How does the instructional medium influence instruction? Some say that it doesn’t matter what kind of truck delivers your cans of beans to the grocery store. I suggest that refrigerated trucks allow us to eat seafood in the summer.

This 27-page 2007 article sums up the debate fairly well. The take-home message is that a study comparing delivery of the "same" instruction in one medium (e.g., a face-to-face lecture) with another (e.g., a MOOC) is a waste of time. If the instruction really is the same, there will be no difference. If there is a difference, it is because the instruction was different. Instead, design instruction to take advantage of the affordances of the medium. (If you do not know what an affordance is, click the link and read the definition on Wikipedia, see also this short piece, and mention that in your response so that I know whether to make a bigger deal about affordances in the future.)

Warnick, Bryan R., & Burbules, Nicholas C. (2007). Media comparison studies: Problems and possibilities. Teachers College Record, 109(11), 2483-2510.

NOTE: If the above link will not work even when redirecting through the proxy server, you may need to search for the article through your library’s database. This link might work for USA affiliates, but I also have made the PDF available here.

Further Reading

Though the entirety of this decades-old debate is beyond the scope of interest for many readers, I include it here for those who may be interested. If you are pursuing a PhD, these references might be worth taking note of. This exchange, though from long ago, is still frequently cited. I especially like Kozma’s response.

What to do

  • Read Warnick & Burbules (2007).
  • Reply (not reply-as-linked-topic) to this message with a 200-500 word response by Thursday.
  • Reply to at least 2 people’s responses. At least one reply must be after Thursday.

What I find most intriguing about comparing media in an educational setting is the unintended learning outcomes. Both the intended and the unintended outcomes can benefit or suffer through the use of media, as Warnick and Burbules (2007) explain.

I can completely relate to this concept. When we train new pilots to fly a Coast Guard helicopter, we use many tools, including CBTs and full-motion flight simulators with high-definition visuals. The beauty of using the simulator to complete flight events is that the instructor has complete control over the mission. We can manipulate the time of day, the weather, and the aircraft systems. Additionally, simulators don’t have the maintenance requirements that actual aircraft do, which makes them much more cost-effective to use.

However, there is a distinct advantage to completing events in the actual aircraft. There, students are exposed to the vibrations, smells, noise, and sheer power of the situation, which evokes real feelings and fears. When you crash a flight simulator, at most the screen might turn red (signaling a catastrophic crash), whereas in the actual aircraft the crash would be unsurvivable. Everyone in aviation has this understanding when using simulators.

As an example, I took a very experienced aviator out for his first flight in the MH-60 (our version of the Black Hawk helicopter). Prior to this flight he had flown various other helicopters and completed the ground school portion of our training, which includes approximately 12 flights in the simulator. He was ready to go, and he did very well through starting the engines. However, when he engaged the rotors, he thought the vibration felt severe compared to what he was accustomed to, and he immediately completed an emergency engine shutdown. There was nothing wrong with the vibrations; there is just a lot more vibration in this helicopter than he was accustomed to. The unintended consequence of using the simulator as the training medium was that it does not model vibration very well.

So, what I took away about how instructional media affect instruction comes back to learning objectives. More importantly, do those learning objectives include both the intended and the unintended learning outcomes that different media could influence? Without capturing a complete list of learning outcomes, you can never fully compare or appreciate different media options for a training event. If you only focus on the intended outcomes, then, as Clark claimed, media will never make a difference (and he is wrong!).


I’m having trouble getting the article. Is anyone else having problems?

EDIT: Even with the EZProxy, I can’t access most of the articles using the DOI links (I’m getting the ‘please pay’ message). Does anyone have suggestions?

1 Like

@pfaffman I thought you would want to know it is issue number 11 from 2007.

@M_McCoy follow the link in the NOTE section above, click on "Find Teachers College Record from the publisher", then go to 2007, issue number 11, and click on the PDF version.


Epic fail, @pfaffman. Sorry folks, my desire not to provide semi-illegal copies to you has been outweighed by TCR’s bad web site design. I created a new category that only USA people (actually ISD581 people for now) can access and uploaded the paper there for your convenience. I humbly repent.

Thanks for the heads up, @M_McCoy. (Hey, look! When you type a @name, you see people’s names in the suggestions. Discourse is always getting better.)

I prefer giving people direct links to the publisher’s site wherever possible. This makes the links as widely useful as possible (because, you know, TONS of people are going to want to come to this site and see what readings I recommend). Also, having students access articles through the proxy server makes it possible for the library to gather stats about which journals are used and how much (whether they can and do use those stats to make decisions about which journals to keep, or to justify their budgets, I cannot say).

For these reasons, I consider it wrong to make copies of articles for you to download. I would like to claim that it is faster for me to give you the link than to download the article myself and upload it somewhere that only you can get it (e.g., here or Sakai), but I probably spend more time triple-checking that things like this don’t happen.

And then, I somehow messed up when making these edits and for a time had the text for FAQ: Participating in Long Discussions pasted in here. And I didn’t even go to a parade yesterday.

1 Like

Were you in the helicopter with him? I am just wondering because, as his instructor, why did he not ask you about the vibrations? It seems to me that before doing an emergency shutdown, he would ask the person sitting right next to him and teaching him. Maybe an emergency engine shutdown is not a big deal, but it does sound like one.

1 Like

How does instructional media influence instruction?

I disagree with the idea that media do not influence learning. For those who argue that media have little influence on learning, Warnick and Burbules (2007) state that one of the most controversial analogies often referred to is Clark’s medical analogy, in which media are compared to the various ways that medicine can be administered. While I agree that the medication will eventually have the same effect, the method of administration does matter. Consider a patient with a severe allergic reaction. That patient requires IV administration; swallowing a pill may be impossible, as the throat could close, and the patient could die before the pill dissolves. In other words, the manner of delivery is important to the student and the situation. Educational media are more than just a delivery system. The context of use matters.

The use of technology itself creates new possibilities. Forcing that technology to fit the usual methods limits those possibilities; forcing the media to be used in a traditional teaching manner limits the potential for learning. If we consider the concept of information transfer, media can certainly facilitate that process. Some students may prefer to begin in the middle, without the constraint of a “locked” course. That freedom affects their learning, and a well-designed course affords that interaction. Affordance, in terms of e-learning, means the design gives clues about navigation and use. Affordance, like learning itself, is influenced by the student’s past experiences.


Does the instructional medium influence instruction? In short, yes. But, in my opinion, not as much as some people would assume. I think I fall more in line with Clark on this issue than Kozma.

Clark (1983) makes a good point about the difference between instructional delivery systems and instructional strategies. Dick and Carey (2015) also stated that effective instructional strategies will benefit learners in any delivery system that is used. Chunking, graphic organizers, elaboration, sequencing, and paraphrasing are instructional strategies that have been shown to benefit learners, so the delivery system is not inconsequential (Warnick & Burbules (2007) would not say it is), but it is not as important as the instructional strategy used.

Warnick & Burbules (2007) stated that using media helps instructors to “investigate the new possibilities that media present” (p. 19). They give the example of a business class whose learning objective is to compare business practices throughout the world. Warnick & Burbules (2007) seem to suggest that the learning objectives play a critical role in media selection, and I would agree with this. The media should be selected to fit your learning goals, needs, and budget. Kozma (1991) replied to Clark with the assumption that the cognitive process is important to consider when developing instructional practices; Kozma thinks that Clark’s positions should be altered.

In a way, I agree with Kozma. I think that the instructional space can influence the learning. I feel that we have experienced that in 581. The challenges that we have done would not have been possible in Sakai. We had to change our educational space to meet the objectives of the course. So, did the instructional medium influence our instruction? Yes. But could we have achieved the goal in another way? Yes, I think so. @pfaffman stated in his syllabus, "Students will become facile with using electronic tools to create online instruction and the theories that guide it." I feel that Discourse is doing that for us, but I also feel that we could achieve this objective with another delivery system.


It’s interesting hearing this argument with a background in mental health. In counseling, it’s common to be trained in multiple modalities. Let’s use the example of a woman who has a phobia of dogs. Using a cognitive-behavioral framework, a therapist would help her identify her thoughts regarding dogs and analyze her actions around them. They would work to change her beliefs and behavior. The psychodynamic approach would assume that childhood traumas (generally around her relationship to her mother) are manifesting in this fear. A biological approach would include anti-anxiety medication and education to help her change her physiology in the situation.

Most therapists learn about 6-10 different counseling theories and choose 1-3 that personally ring true. Although certain modalities are shown to be superior for very specific problems, research has shown that the therapeutic alliance is the greatest predictor of success. Even so, future therapists continue to train in, and offer, multiple frameworks.

There is an understanding that unlimited variables account for a client’s success in therapy. Only some of these are controllable (body language, word choice, validations, etc.), and the literature works to understand these things. Ultimately, we believe that we are just influencing patient growth in certain ways, and I imagine that learning is the same. Phenomenology and other qualitative research methods are used to learn about that which is ‘too big’ to be measured. Whether talking to first-time African-American fathers in the delivery room, incarcerated adolescents, or elderly lesbian women in nursing homes, sometimes the experience teaches more than the results of a post-test.

Educators may add video to a course based on the results of a large study. This may work for an instructor in an inner-city school but fail miserably in an affluent community. What if an instructor’s students wear glasses, have ADHD, don’t have technology at home, don’t like the instructor, have a medical or psychological illness, are in a class with 30 other students, or with 3 other students, and so on? The variables go on and on, and will change in every environment.

I think that, in understanding what works, researchers should take a qualitative approach. Reading through student narratives may provide more information than a controlled lab experiment. Technology has changed so quickly that studies from the 1990s and early 2000s aren’t necessarily even relevant.

Although education forms the backbone of nursing, my heart goes out to all of my classmates (and instructor). You all have your work cut out for you!

What are your thoughts on media in education?

  • It’s generally useful; the research is ‘confused.’
  • It’s probably right; the instruction is what matters.
  • I like ice cream. :icecream:

0 voters

1 Like

I wasn’t a fan of the medical examples used (both the medication and the medical students). Although the medication does get administered (or the students learn anatomy), the other variables are too important to leave out of consideration. Even the same medication may routinely have different effects in different people, and there is no way to accurately predict how traumatic a cadaver lab may be.

I agree with you, and think of media as an enhancement to education. Media use seems to be less like the actual Tylenol, and more like the gel coating.


While reading Warnick and Burbules’s (2007) analysis of the great media debate, I made several notes in the margin. I noted one word in particular several times: “individual.” As therapists, we are trained to look at the individual and their needs, since treatment isn’t “one size fits all.” I have never understood this concept for clothing, and in truth the concept doesn’t translate well to any aspect of our lives (certainly not people), so why would it in education? Perhaps we’re really looking for a “one size fits most” option; this is still not ideal, because an individual’s success depends on other factors. As such, the analogies didn’t really resonate with me. I also felt as though the question of using or not using media was being implied.

Regarding the media debate, I agree with aspects of both sides of the argument. However, I predominantly agree with the authors (Warnick & Burbules, 2007) that better, more refined research is needed. It is pointless to keep performing research that investigates the same parameters when the outcome is always the same. We know that tools like media are beneficial, because learning occurred and wasn’t hindered. However, research to determine the best media-and-method pairing to enhance learning is needed. Research that investigates media use in a more dynamic way is warranted, in order to determine other effects that media may have on learning that aren’t easily identified or measured, and that possibly correlate with individual differences. Overall, I am of the opinion that there are parameters that should be clearly defined and investigated, which suggests that continued research on media use is valid.


Warnick and Burbules referenced a medical study about medical students learning anatomy by virtual dissection versus dissecting real cadavers. I can relate to this study because I have my 5th graders dissect cow eyes every year. I have them do this because our standards say that they need to learn the parts and functions of the eye, and then we have to learn how eyesight problems are corrected. The first year I taught this standard, I showed my students a virtual dissection from the Exploratorium. This website shows a person dissecting and talking about a cow’s eye, because it is the closest thing to a human eye that we can dissect as 5th graders. I was also able to use a virtual dissection tool in which students could click on parts of the eye with a scalpel blade; that part would then be separated from the eye, and students could read about its function. This website is no longer active online, but the video below shows what it used to do. I still show the video to the students, but it doesn’t show the full functionality of the website.

The first year I taught this, with only the virtual activities, my students did well on the classroom unit test but didn’t seem to do well on the end-of-year state-required science test, with only about 35% proficient on the eye-standards section. So, with that in mind, the next year I decided that I wanted my students to actually be a part of the dissection, not just virtual-visual participants, and I ordered cow eyes from a biology science distributor. I still had my students watch the virtual items I described above, but this time the students also had a real eye to look at, analyze, and then dissect. The immediate classroom grades didn’t improve much over the previous school year; however, when it came time for state testing, my students excelled at this standard, changing it from a weakness the previous year to a strength the next, with about 89% proficient on the eye standards. This is my sixth year teaching, and since those first two years my students have continued to excel at this standard on the state test.

With the real eye, my students are able to see each part, smell it, hear the crunch of the scissors, feel the different textures like the vitreous humor (the jelly in the eye that helps it keep its shape), and generally see how each part is connected to the others. I think this is very valuable in learning how the eye works. However, I don’t feel that the physical dissection alone would be as helpful either; I think being able to mix the virtual with the physical instruction was the reason for the large improvement. I don’t believe that virtual media alone is the answer to instruction. It should be used as a tool for good instruction based on solid learning objectives.

Warnick and Burbules stated that a common problem in comparing two media is that criterion-based tests might not be able to measure important differences between the two. In my way of thinking, if a person truly learns something, the type of test shouldn’t matter. The medical study that Warnick and Burbules referenced seems to assume it has to be one medium or the other, not both. Why does it have to be one or the other? I can see the benefit for medical students of using a virtual dissection tool early in their classes, because I imagine that cadavers aren’t cheap or easy to find. Then, after students have become accustomed to the virtual models, they should be able to move to the physical dissections. In the end, I think that instruction should be based on what works for your students, not just what some research study seems to prove true. It just makes sense to me to take the best of both worlds and make it work for my students.


A qualitative approach could help you get a better understanding of how the students feel about the use of media. Unfortunately, in today’s climate, where finances dictate much of the decision making, qualitative answers often get left behind. Imagine being a school administrator requesting funding for adopting a different type of media in your school. You can provide the board with all the qualitative data that supports your request, but it seems what they really want to know is how the media will affect test scores, that is, quantitative data. “By using this media we see an increase of over 20%…” That justifies the money and the change.


I would love to imagine that school boards make data-driven decisions, but there is little evidence to suggest that they do. I suspect that they don’t even make decisions based on what they themselves believe about technology, but I have yet to do that study (ask what skills students should have; ask which tools support which skills; ask which tools they support buying). There is no data showing that smartboards help kids learn or teachers teach, but they are now in 80% of classrooms.

But you are right that for most people, quantitative data are privileged over qualitative data, so much so that people are willing to use a quantitative measure that doesn’t actually measure what kids are learning.


Here’s the idea: use the ACT Aspire, which ostensibly measures “college and career readiness” (which, you know, isn’t based on whatever standards we will otherwise be requiring teachers to teach to, and never mind that some students do not want to go to college).

Should it pass, it will be fairly devastating to teachers and will pretty much remove any fiscal incentive for people to get master’s degrees in education. And then, as Audrey Amrein-Beardsley points out in the link above, it will set the state up for a bunch of expensive lawsuits.

But, mostly, people would say that “what works” would be measured quantitatively and “how it works” is more likely to be measured qualitatively.


## How is instruction helped by the use of media?

After reading the article Media comparison studies: Problems and possibilities by Warnick & Burbules (2007), I agree that multiple media avenues should be used by educators. Making information available through lectures, educational practice, computers, and/or other delivery methods is fundamental to teaching.

When a new skill is taught, such as the difference between an acid and a base in an elementary science class, many techniques are used to deliver the information as thoroughly as possible. When I first introduced acids to my students, I used pictures as examples. We listed common acids that could be found around the house, beginning with those most familiar to the students. We discussed each acid, wrote notes, drew pictures, and identified each acid on the pH scale. I then introduced bases to the students following the same instructional pattern of discussion, note taking, picture drawing, and identification on the pH scale. This lesson’s method of instruction was procedural.

When the hands-on educational experience began in the science lab, the information was examined using a form of media. The students first used devices to combine various acids with a base and see the reaction. They could see the reaction, but they could not feel, hear, or smell it. This type of media was useful, but the students still needed more to get a better understanding.

The final lesson allowed the students to test various substances with litmus paper to identify each on the pH scale as either an acid or a base. The substances were in separate, unlabeled containers. This also gave them their first chance to smell or feel the substances; the students were surprised at how strong some of the acids smelled and how slippery some of them were. They were putting together the “medium” of notes and lecture with the “media” of examining the substances for themselves. This gave the students a more conceptual learning experience.

The outcome for this lesson compared the procedural and conceptual learning of students through the use of media and methods. The students who used the multiple media were able to make better comparisons between an acid and a base. They were able to explore and get a more in-depth understanding by testing and handling the substances. The results of the experience were an important part of the course goals and the overall learning experience.

1 Like

I enjoyed reading about your students’ use of real cow eyes for dissection in the classroom at such a young age. What an impact that will have on their learning.

This is amazing: being able to understand why the parts are so thick and how everything is put together by seeing it, hearing it, and smelling it for themselves.
As your test results show, without this type of media your students did not really have a good conceptual understanding. Way to go, TEACHER!

1 Like

While I enjoyed the article, I feel like it covered a lot. Many of the ideas shared piqued my interest, but I would like to focus on the idea of comparing old and new media technologies. I would never have thought to consider the way a technology or medium has impacted society while comparing media for learning outcomes.

Warnick and Burbules (2007) gave the example of comparing the speed of walking and driving a particular distance. When the culture of a society is to walk, as in Beijing, China, the way the community is built, designed, and operates reflects this. When the culture of a society is to drive, as in the majority of the United States, the communities reflect this by being spread out and expansive. Comparing the “speed” of these technologies would be extremely difficult given the pre-existing impact they have had on their surroundings; the findings would be skewed.

I particularly like this example because it hits close to home. My employer offers hardware replacement on broken devices. There are various levels of replacement such as 2hr, 4hr, and next business day support. For the majority of the world we offer all 3 levels of hardware replacement. However, in places such as Beijing, China, we do not. In cities like Beijing and Hong Kong we can only offer next business day support.

In the traditional way of thinking, using a car to deliver a replacement device should be faster than walking, but in these cases the cities are so densely populated that a car would never be able to deliver the device in a 2- or 4-hour window. That is why the majority of our products sold in China come with only next-business-day hardware replacement. With this type of example in mind, how could I compare the “speed” of walking versus driving when the entire society is built for the pedestrian?

1 Like

I disagree completely with Clark’s idea “that media are ‘mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition’” (1994, p. 27). I teach six classes of general Physical Science a day. According to his statement, if I were to successfully teach a concept using one type of media, all of the classes would succeed. However, this is simply not possible. Many things interfere, such as IEP requirements, individualizing lesson plans to the students, functioning equipment (on that particular day), whether a child is in a decent mood or has had a rough night/day/week, or whether a child is even present due to behavioral issues. Depending on these issues or accommodations, I have to change my methods (whether that includes media or not) to adjust to those students. Most of the time, I am able to keep the majority on track using the same instructional media; some days, that is not the case.

Another problem that may arise is that a student may not be comfortable using a particular medium. For example, all of my students are supposed to have iPad access; however, not all of them know how to properly USE the iPad in a classroom setting, other than to play games. They are quite adept at finding, downloading, and playing games. I could plan an entire lesson using the iPad and end up spending most of that lesson teaching most (not all) of the students how to use the iPad, where or how to log in, what to press, how to send an assignment, and so on. I plan to tackle this issue next school year with some instructional iPad lessons or mini-lessons at the beginning of the year. (Praying for classroom sets of iPads! Right now they go home with the kids, which is a nightmare.) Some students do not have the discipline or attention span to learn via a media tool.

I have incorporated games into lessons strictly for those students. An example: I created a stand-alone Blendspace lesson on the metric system for my first ISD classes. Several students I chose to participate in that process LOVED games, and I was constantly telling them to put the games away. They loved the metric-conversion challenge games and enjoyed competing against each other and themselves.

Another issue I had was with the medical analogy and its failure to account for timing. Sure, all of those methods will accomplish the same objective; however, one may accomplish it faster than another, and in an emergency, a pill dissolving is not ideal. The same idea pertains to my students: a concept may be learned much more quickly and efficiently if it is taught through a different medium, or a different method, for that particular student. And a particular method cannot always be duplicated through a particular medium. (However, I do not think the ability to duplicate those methods is too far away.)


The Warnick and Burbules article put media comparison studies in a critical, reflective framework for me. One especially memorable piece of knowledge that I took away from previous coursework was that numerous studies have shown there is no significant difference in the effectiveness of instruction delivered through different media. Media comparison research seemed so straightforward: use the same instructional content, only vary the delivery system as the independent variable, and the dependent variables - test scores - would indicate which delivery system is more effective. Except there was no difference, as study after study found. Now, Warnick and Burbules have explained the reasons for the common “no significant difference” finding and called into question the concepts and assumptions on which this research is based.

After reading the article, I wonder: is it possible to conduct a viable technology comparison study? It seems this is an area that does not lend itself to experimental research. Perhaps the purpose of media comparison studies should be interpretive, conducted using qualitative research methods, with a goal of understanding the media being investigated in light of the purposes of the instruction.

Comparison studies have been too simplistic. The entire context of use must be taken into account. Therefore, it seems evident that technology comparison studies are not comparing the same things at all. As the article pointed out, the focus has been on the use of an objective measurement, usually pre- and post-instruction scores, to compare media effects on learning outcomes. This approach overlooks much of the intended and unintended purposes of instruction.

The decades-long debate over the usefulness of media comparison studies is likely to continue. Both critics and proponents should expand their focus to reconsider their views of this line of research. Proponents may wish to investigate media and instructional outcomes from a holistic view, taking into account purposes and goals beyond test scores. Using a different research protocol may be more productive than continuing to employ one that has repeatedly found “no significant difference” between media. Opponents, too, should consider that differences are manifested in ways other than test scores. Media does seem to matter!


### How does the instructional medium influence instruction?

To be honest, the entire time I was reading the article I kept thinking, “How can the instructional medium NOT influence instruction?” We learn through experience, and when instruction is given through a different teaching medium, that experience leaves a permanent mark in shaping our learning. As humans, we naturally tend to compare similar experiences. So, even if it is not scientific, we naturally form an opinion about whether one method worked better than another. Does that mean we base our opinions on erroneous misconceptions? Probably, but those opinions, whether factual or not, will influence what we think, learn, and act upon.

The detailed explanations by Warnick & Burbules (2007) made it apparent that comparing media can be extraordinarily problematic. Comparing media or methods of instruction is not a pure science and is very difficult to replicate for scientific evaluation. There are too many variables that affect how media can influence instruction, and it is difficult to control them all.

I disagree with Clark that there is no difference in the method of delivery. In my job in Career Services at the university, we are consistently working on getting information out to students about career-related events. We don’t just use one method to send information to students; we use many. We send email announcements, run (expensive) ads in the Vanguard, post information on our website, and do blasts on all of our different social media sites (Facebook, Pinterest, Instagram, and Twitter). We are reaching the same audience, but not every student will read their email, check the website, see the flyer posted on the bulletin board, or view the different social media sites. Hopefully we can catch most of them by presenting the information through different media outlets.