Readings: E-Learning: Chapter 7


@BeckyWheeler @Dan

Applying the Redundancy Principle: Explain Visuals with Words in Audio OR Text: Not Both (pages 132–149)

### What to look for

Note “Redundant on-screen text: When to lose it and when to use it” (p. 142). Pay close attention to the discussion of the redundancy principle from the cognitive theory of multimedia learning perspective (pp. 138–139, Figure 7.4).

### Something to think about

Overall, adding printed text to a narrated graphic overloads the visual processing channel and interferes with learning. But there are certain boundary conditions under which redundant on-screen text actually improves learning.

### What to do

Read the chapter. Write your response to the prompts below by Thursday. Reply to at least 2 people’s posts. :heart: at least 3 that seem interesting, provocative, that you agree with, or that you otherwise wish to acknowledge. At least one of your responses should be written after Thursday, when everyone has had a chance to post. Ideally, you’ll participate on at least three different days.

  • Clark and Mayer (2011) recommend omitting redundant on-screen text in most e-learning. Discuss some reasons why it is generally not a good idea to add printed on-screen text to a narrated graphic. Also, describe an example of an exception to the redundancy principle. What is a situation in which concurrent on-screen graphics, on-screen text, and audio narration might enhance learning?

  • Give at least one example each of a good and a bad application of the redundancy principle that you have experienced. How will you apply what you have learned in this chapter in your future study or profession?


Learners generally do not benefit from having graphics, printed words, and spoken narration presented together. When all three are presented, the learner’s visual channel tends to be overloaded: learners pay too much attention to the printed words and not enough to the graphics. Another problem is that when listening to spoken words while reading printed words, the learner will try to compare and reconcile the printed and spoken material. This strains cognitive processing, and more information is lost than gained.

There are four exceptions to this rule according to Clark and Mayer (2011).

  1. You can have printed words on a page with no graphics.
  2. You can have printed words if the presentation moves at a slower pace.
  3. You can have printed words if your learners must work hard to understand the spoken language, as when they are English language learners.
  4. You can add just key words to the graphics.

A good example that I have seen of the redundancy principle was in a biology class. The teacher displayed an image of the heart. It was simple and black and white, similar to the image below.

In the explanation, the teacher began with a very basic picture of the heart. Then, as she spoke, she added key words to show the anatomy. Next, she added arrows, like flashing dotted lines, to show where the blood was flowing. This was an in-person presentation, so the audio was her speaking to the class, but I liked the way she started out simply, then added the words, then added the movement of the arrows. I was able to take each part in: I could analyze the graphic, see where the vocabulary words fit in, and then hear her explanation of how the blood moved through the heart and picked up oxygen. The presentation was so good that although it was presented to me in high school, I still remember it today.

When I was looking for the above graphic, I found many bad examples.

I feel that both of these examples have too many words. I am unable to focus on the graphic. I think the lesson that I originally learned worked so well because of the simplicity of the graphic.


The main reason we should not include printed on-screen text with a narrated graphic is that the learner will not be able to pay attention to the graphic while busy reading the words. It’s as simple as that. This idea connected well with Chapter 6, where C&M discussed how the visual channel must process both the graphic and the written words at the same time, diminishing the attainment of learning outcomes. In Chapter 7, they build on this by discussing how a learner engages in deeper cognitive processing when connecting pictorial and verbal models.

One example of an exception to the redundancy principle: on a slide that contains only text (no graphics or animations), you can include audio. The same information is presented verbally and visually at the same time, but since that is the only information that has to be processed on the slide, it is fine. One example of this would be a learning-objectives slide.

Another example of an exception can be seen below. When the graphic needs a few keywords to help learners make connections, you can include both written and spoken words. This is not really redundant because the written text is not identical to the spoken text, but merely a supporting piece of information.

C&M discussed using a technique called signaling to assist the learner in processing the information presented in a visual representation. I believe the graphic below is a good example of when you could use this idea. If the learner were presented with this graphic without the labels, it would be difficult to follow. The graphic also presents the technical vocabulary associated with cloud types. I could see adding audio to have students follow along through the different cloud levels. Before moving on, students could reflect on when they have encountered the different cloud types. Students would need the written words to discuss their experiences, because it is highly unlikely they would remember the terms from audio alone.

This example is from a SlideShare I found on tornado formation. Based on tonight’s weather, I thought this was appropriate to look up. It did not include audio, but I think the graphic can be used to explain why audio would not be appropriate for this slide. The slide has written words explaining the three steps in the graphics. If I included audio with this slide, the viewer would be too busy reading along to focus on the graphics. This would limit the attainment of learning outcomes. A lot of the pre-made elementary education PowerPoint presentations on the internet do just this and break the redundancy principle.

Note: The two images are from presentations that do not contain audio commentary, but I think audio could be used to increase learning outcomes. Both images are linked to their original locations.

Looking forward, I will be sure to consider when I include written text. I know that I would like my objectives at the beginning of an e-learning session to contain both written and spoken words, because there would be no graphics. When I use animations, I will be sure to include audio; I’m pretty sure I would do that anyway. I would change how I use still frames: previously, I would have used written descriptions, but now I would strongly consider using spoken words instead.


I agree with you that taking a plain graphic and gradually adding text/explanations of the graphic makes it much easier to process. And you aren’t on visual stimulation overload with a graphic that is inundated with descriptions.


I thought it was funny as I read the redundancy principles. The first one says don’t use text, graphics, and audio together (redundancy), and the second says to use text, graphics, and audio only when deemed necessary. I was thinking, how are they going to justify this after explaining how wrong it is to use all three! But as I read on, it all made sense. Kudos to Clark and Mayer for not leading me astray! Of course, this was done similarly in previous chapters, but for the redundancy principle there are more cases in which it is safe to incorporate into instruction.

It is generally not a good idea to add printed on-screen text to a narrated graphic because it can lead to extraneous cognitive processing. The graphic and printed text together overload the visual channel, while the narration is the only process operating through the ears. We need to limit the multimedia load on each sensory channel to just one source, unless more is deemed necessary and appropriate for learning. Not only does evidence support this, but the “cognitive theory of multimedia learning predicts that learners will learn more deeply from multimedia presentations in which redundant on-screen text is excluded rather than included” (Clark & Mayer, 2011, p. 139).

There are particular circumstances in which you would incorporate redundancy into instruction to enhance learning. These include: when there are no graphics on the page (for example, an objectives page with bulleted objectives and narration of those objectives); when the language is unfamiliar to the learner; when the printed words are unobtrusive; or when the printed words signal where to look on the screen. A bad application of the redundancy principle would be a presentation on how to operate new software that includes graphics of the screens and text on how to operate it while the narration for the graphic plays simultaneously, with no learner control. To make that scenario a good application, include the graphics of the new software, allow the student to press a play/start button to listen to the narration, and, after the narration has ended, display the text for review.

I will use this information to consider if I am cognitively overloading the learners or using multimedia for special circumstances as stated above. I will evaluate the instruction by determining if I am creating it based on complementing all learning styles or adding media to truly enhance instruction. I see this principle being useful in the creation of presentations and online websites and software such as Scalar.


Hahaha! I agree. I also thought that it was contradictory to say don’t do it, and then to say, well only in some circumstances. I think the most important idea that Clark and Mayer were trying to get to though is that generally all three are not as beneficial as just graphics and narrations. But, sometimes, printed words can add to the overall development of the learning. I feel that this kind of distinction is what I am trying to learn in the IDD program: what is best, why and why not, and where is there room for an exception.


## Chapter 7

As I mentioned in my chapter 6 post, we sometimes use courses that allow the end user to select audio or text versions of a course. It’s one way or the other. C&M (2011) suggest that in general, it’s best to avoid redundant on-screen text. I have seen redundancy effectively used in e-learning when the audio and written text are not exactly the same. They complement each other, but either could stand alone. Redundancy can be used to reinforce difficult or complex topics. Hearing the words would allow you to study the graphics, and the written words provide memory support.

I have taught several different basic healthcare topics. One of the courses used a hippopotamus image every time HIPAA was discussed. Students always missed the test question: “What does the acronym HIPAA stand for?” Eventually I figured out that it was because they associated HIPAA with the hippo. Notice HIPAA has 1 “P” and hippo has 2. The spoken word was not enough. The situation was much improved with the addition of written explanation. (I hated that hippo. :confused:)


Lots of similar ideas from chapter 6 in this chapter. We need to be mindful when creating online learning that uses a combination of graphics, audio, and printed text. We have limited cognitive throughput, and as C&M stated, combining printed text with audio narration can hamper the learner’s ability to make connections in the material. Making these connections helps improve the learning process.

One idea I did like was providing an “audio off” button that, when activated, replaces the narration with on-screen text. As discussed in chapter 6, maybe headphones are not an option and a quieter environment is needed that precludes the use of audio. Additionally, if the learner is going back to the training material looking for a reference or a specific topic, printed text might serve as a better means than listening to the audio.

There are always going to be cases where you want to combine graphics, audio, and text. I think about some of the learning programs my kids use to help with reading, spelling, and comprehension. In these situations, all three helped maintain their attention and allowed them to properly hear the pronunciation while seeing the spelling of the word.

When I was reading this chapter all I could think about is when I watch a movie with subtitles on. I pay more attention to reading the subtitles (even if the movie is in English) than watching the actual movie.

These are all good suggestions to take into account when you find yourself developing online training.


### Redundancy Principle

C&M (2011) cited multiple research articles investigating the redundancy effect: when on-screen text and audio narration of that same text are presented to the learner simultaneously, learning is hindered due to the brain’s “limited capacity” to process the same information twice.

Redundancy Principle in Use

Books on tape/CD (or, in my case, records) and electronic devices like LeapFrog (paired with books) are, to me, good examples of redundancy supporting the learning process for reading, because the beginning reader is likely to have difficulty processing the spoken words alone. There will likely be pictures, but there is usually ample time allotted to look at them.

A bad example…the only thing that comes to mind for me is that any time I attend a training and the presenter reads verbatim what is on the screen, I find it unnecessary and actually a bit rude. Even worse is when the presenter prompts the audience to read on their own and then reads it aloud to them anyway. I’ll allow for significant points being emphasized, but when the majority of the presentation is delivered in that format, it is distracting.


I will limit my use of multimedia pairing text and audio to only key points, allow ample time for processing, use only text or pictures (not both), and be sure to consult the research for evidence-based practices and guidelines for e-learning.


Clark & Mayer (2011) go into great detail explaining why on-screen text and audio, along with graphics, should not be used simultaneously. Presenting the same words as on-screen text and audio is redundancy, and the redundancy principle recommends against it. Basically, it results in a major overload of cognitive processing and hinders learning. Clark & Mayer (2011) state that research supports the redundancy principle, but later explain there are exceptions to this rule. For example, on-screen text can be used in conjunction with audio when there are simple or no graphics, when there is a language barrier or difficulty with technical terms, when using key words for signaling (Clark & Mayer, 2011), or when giving reference information.

An example of breaking the redundancy principle to enhance learning was a PowerPoint presentation I recently gave to students about the Cooperative Education and Internship Program. I had a simple background graphic on the slide with the presentation objectives listed while briefly verbally explaining each point we would be going over during the presentation. I believe the audio and graphic, in conjunction with the text, served the purpose of reinforcing the objectives to the students and allowed them to know what to expect from the presentation.

I have experienced both good and bad examples of the redundancy principle. An example of a bad application of the redundancy principle that I have witnessed several times is when the presenter reads word for word only the on-screen text from slides with graphics. I have always found this type of presentation to be so frustrating and boring to listen to. As Clark & Mayer (2011) state it is because I am comparing the on-screen text with the verbal information. A good example of applying the redundancy principle is a presentation with thought provoking graphics, limited or no text and the presenter speaking or pre-recorded audio.

I plan to use the redundancy principle when putting together career development presentations for students. I believe proper execution of the redundancy principle will help ensure that students have a greater ‘take away’ from the information presented to them. Hopefully it will also allow for greater engagement from the students.


In my experience when presenters either don’t know the material well or are nervous they fall back on the need to “read the slides”. Reading is not teaching. It’s demoralizing to the audience. PowerPoint is a good tool, but in the wrong hands can be detrimental to the audience. As much as possible I tell the students in the courses I instruct I would rather they dictate where the class goes, not the PowerPoint slides. Obviously, I have an idea of the main topics that need to be covered, but if a student led discussion covers those topics rather than me doing all the talking… I’m all for it. That proves to me that the learners are making a connection to the material.


### Redundancy principle

Based on research, Clark and Mayer (2011) recommend avoiding the use of printed text along with on-screen graphics and narration all at the same time. One of the main reasons is that learners tend to pay more attention to the printed words when they are presented at the same time as graphics, which leads to extraneous processing. Even though it is recommended to avoid using on-screen text with graphics and narration, it is sometimes used in special circumstances. A few of the acceptable situations are when learners have disabilities, when there are very few key words on the screen, when the presentation moves at a slow pace, and when there are no animations, graphics, or photos on the screen.

As I have mentioned before, I have had instructors in the past overload their PowerPoints with too many words. It makes it nearly impossible to comprehend the material.


I found a video that incorporates graphics, narration, and little on-screen text. I find it to be a good application of the redundancy principle. Even though it does have on-screen text, the words are key terms placed right next to the anatomical structures they label. What do you think? Too many words? Maybe it would be better to use arrows pointing to the anatomy instead of both speaking each term and showing it on screen.

I think this next example is a poor application of the redundancy principle. There are far too many words in the video, on top of being able to see him and hear him speak.


### Applying what I have learned

I will apply this principle to my future profession because I agree with Clark and Mayer (2011) that it is an overload of information when there are too many printed words on the screen along with graphics and narration.


I have actually not watched a movie with subtitles (that I really wanted to see) because I didn’t feel like reading. :eyeglasses:


## Chapter 7 - Applying the Redundancy Principle

I think chapters 6 and 7 are very similar. As a matter of fact, I had to change some of my chapter 6 post because I started talking about chapter 7 material. As for why it’s not a good idea to have printed text along with a narrated graphic, I completely agree that it could overload the visual processing system, as shown in Figure 7.4 on page 138. I have several students who are low readers, and if there is text on the screen, they would be too distracted by reading the text to watch the graphic. I completely relate to the redundancy principle in my life because of closed captioning on the TV. My father loves to watch TV with closed captioning on, but it drives me crazy!!! :grimacing: I am so distracted by reading the captioning and finding grammatical errors that I miss the acting going on in the movie. :rage: This meme made me laugh.

A good use of redundancy would be having students read along in their chapter book while an audio recording of the book plays in the background. The students have only the text of the book in their visual processing and the narration in their auditory processing. I think this could help students hear the pronunciations of words, as well as get practice reading fluently. However, if it were a younger student’s picture book, I think having the text along with the narrated graphic would be distracting. For example, in the video below, a person reads straight from the book while also showing the pictures, and the video moves so quickly that it is hard to keep up with everything all at one time.

I think this video pretty much sums up a good example of using the redundancy principle, because it shows limited text on the diagram as the presenter explains how the heart functions. If there were no text on this diagram, I wouldn’t know what part the person is referring to while she is speaking.


Based on what I read from C&M (2011), it is generally not a good idea to add printed on-screen text to a narrated graphic because it can cause an overload in the working memory. Our eyes cannot follow both the text and graphics. We tend to focus on one or the other.

An exception to the redundancy principle would be the Duolingo app. I have really enjoyed using this app, and it uses on-screen graphics, on-screen text, and audio during the lessons. I do not feel overloaded when learning Spanish through it: I complete lessons at my own pace and can review previous ones. I actually feel like I am learning at a sufficient rate with this program.

I had a hard time coming up with a bad example of redundancy that I have experienced. Maybe because I have been out of college for so long. I found this video

that the presenter labeled as a non-redundant video on YouTube. I disagree, because the presenter reads directly from the text on the screen. I completely lost focus on the graphics as I read the text along with the presenter. A way to fix this would be to present the planets one at a time, with the on-screen text limited to the name of the planet being identified as the speaker talks. This would keep focus on the individual planets without the distraction of all the text on one screen.


I agree. There is definite overload with the audio and the text. It would be difficult to retain information if this were a new topic to the learner. The copyright notice throughout is also a distraction.


Redundancy of narration and text is usually not suggested because it threatens to overload visual processing, and it risks the learner spending limited cognitive capacity decoding rather than comprehending accurately and analyzing the information more deeply. There are some exceptions, though: when there is no pictorial portion, when the learning is learner-paced rather than time-constrained, when the language or terminology is unfamiliar (so unknown spoken words cause more cognitive overload), or when only some key terms are put in text.

I honestly have no memory of positive or negative encounters with this. I think any time I have been asked to follow along with narration while having text, I have had to choose one and tune out the other. Otherwise I find myself jumping around from my reading spot to wherever the narration is, which obviously disrupts comprehension of the material. The last several chapters have definitely reinforced that narration with pictorials is best, with perhaps text for less familiar words. I will follow all of these principles as I, hopefully, create multimedia lessons in the future.


The redundancy effect occurs whenever an e-learning design incorporates written text, exact audio narration of that text, and graphics or animations. C&M recommend not using redundant on-screen text in e-learning in order to “avoid overloading the visual channel of working memory” (Clark & Mayer, 2011, p. 133). The fear is that the learners’ visual channel would become overloaded, not enough attention would be given to the graphics, and learners might try to match up the written text with the simultaneous narration, thereby neglecting the graphics again. The learning-styles hypothesis, based on the information-acquisition theory, suggests that redundant presentation could work; however, it assumes that people learn by filling their memory with information, and it is not supported by available research.

An example of an exception to the redundancy principle would be a situation where there is plenty of time for the learner to process the pictorial presentation as well as the written and narrated text. One situation in which simultaneous on-screen text, graphics, and narration might enhance learning is a presentation of the states-of-matter phase cycle. The presentation could show short printed phrases next to each part of the graphic while the narration speaks the same text.

One good example of the redundancy principle that I have experienced is when a graphic has highlighted portions of text and the narration speaks those same small phrases along with more detailed explanation. Below is an example I feel could meet all the requirements of the redundancy principle.

One bad example of the redundancy principle I have experienced is when I am cognitively processing a graphic or animation while trying to read the on-screen text and listen to the narrated text at the same time. I found this video that was okay at first, but more and more text started to appear on the screen in addition to the water molecule he started to draw, which appeared beneath the on-screen text at approximately 1:51. The graphic he ends up drawing is hardly visible at all.


I am actually one of those ~~people~~ weirdos that uses subtitles (English only) for everything I watch on Netflix, simply because my hearing is not that great! Not bad enough for a hearing aid, but bad enough that I miss a lot if I don’t use the subtitles. I’m so used to having subtitles that when I watch regular programming I have to remind myself there won’t be words on the screen. Honestly, for me, the words are not distracting, but helpful.


Clark and Mayer (2011) give a few reasons why printed text and narrated graphics should not be used simultaneously. The reason that stood out to me is that familiar text can at times take the learner’s attention away from the narrated graphic. The learner becomes engrossed in reading the printed text and misses out on the narrated graphic altogether. However, when done correctly, concurrent printed text and narrated graphics can enhance learning. The key difference is familiarity versus unfamiliarity of the text. If a word or words are subject-matter specific and not likely found in a learner’s vocabulary, then it is OK to have redundancy. The unfamiliar words or phrases should be noted on the graphic and reviewed during the narration.

To be honest, I really cannot remember the last time I saw a training that was narrated and had printed text. Maybe it is because people are not usually overzealous? Every training I have seen has used one or the other, narration or printed text, along with lots of broken contiguity principles! In my personal world of learning foreign languages, though, it is not uncommon for me to see an animation or depiction, hear the phrase being spoken in, let’s say, Japanese, and then see rōmaji (romanized Japanese) at the bottom of the screen. This falls into Clark and Mayer’s (2011) special-situation category. I am not a native Japanese speaker, so seeing the rōmaji allows me to better process what I am seeing and hearing in the foreign-language lesson.