The Medium is the Message

Marshall McLuhan’s famous declaration “The medium is the message” never made sense to me. It sounded cool, but on the surface there was not enough there to offer much of an explanation. It seemed one of those things other people understood and used, but I did not. Perhaps I had missed the class or not read the book in which the famous phrase was explained.

The expression came up again in the book club I joined while we were reading a book by Johns (The Science of Reading). A sizeable proportion of one chapter considers McLuhan’s famous proposal and provides a reference to his first use of the phrase. The original mention was a comment he made at a conference and then continued to develop.

The page is not a conveyor belt for pots of message; it is not a consumer item so much as a producer of unique habits of mind and highly specialized attitudes to person and country, and to the nature of thought itself (…) Let us grant for the moment that the medium is the message. It follows that if we study any medium carefully we shall discover its total dynamics and its unreleased powers.

Print, by permitting people to read at high speed and, above all, to read alone and silently, developed a totally new set of mental operations.

Johns’ book traces the history of the study of reading as a science, with particular attention to how reading, and the methods by which reading skill is developed, became a political issue. My effort to connect any of this to McLuhan rests on considering different media and what McLuhan had to say specifically about reading. I have come to think about reading as a generative activity, a topic I write about frequently. From this perspective, reading is an external task that gives priority to certain internal behaviors. In contrast to some other media, reading allows personal control of speed. A reader can take in information quickly or pause to reflect. A reader can reread. Text sometimes requires readers to generate imagery, in contrast to having imagery offered to them, as would be the case with video. Reading cannot transfer a complete experience from author to reader; much is constructed by the reader based on existing knowledge. Reading also has a social component. In most cases it involves an implied interaction with an author, but also with others who have interpreted the same input and who often interact to share personal interpretations.

What McLuhan had to say about media now reminds me of the notion of affordances. Affordance refers to the potential actions or uses that an object or environment offers an individual, based on its design and the individual’s perception of it. The term was coined by psychologist James J. Gibson in the context of ecological psychology to describe the possibilities for action the environment provides. Affordances can be either obvious (like a door handle that affords pulling) or less obvious, depending on how the individual perceives and interacts with the object or environment. It is the less obvious type that applies to our expectations for texts and how we anticipate texts being used. Features such as control over speed and the ability to pause, with a medium that remains static while we reflect, are more like the obvious affordances Gibson proposes.

Those who reject a media effect

Having reached what I hope is an appropriate understanding of McLuhan’s famous insight, I realized that I have encountered a contradictory argument commonly taught within one of my fields of practice (educational technology). This controversy concerns what tends to be called the media effect.

The “media effect” refers to the idea that the medium or technology used to deliver instruction (such as television, computers, or textbooks) has a significant impact on learning outcomes. This concept suggests that different media can produce different levels of learning or change the way people learn.

This perspective was challenged by Richard Clark in his influential 1983 article, “Reconsidering Research on Learning from Media.” Clark argued that the media itself does not influence learning; rather, it is the instructional methods and content delivered through the media that determine learning outcomes. Clark famously stated, “media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.”

Clark’s challenge to the media effect emphasized that it’s the instructional design, the way content is presented, and the interaction between learners and content that are crucial for learning, not the medium through which the instruction is delivered.

I always struggled when teaching this position. Instructional designers are expected to consider this argument, but my interpretation never allowed me to understand why this would be true. If I wanted to teach someone the cross-over dribble, wouldn’t it make more sense to begin by showing the move rather than describing it with text? I understand that each of us learns through our own cognitive actions, but how we access inputs (external representations) would seem to matter in what our cognitive behaviors have to work with. When you ask advanced students to deal with arguments such as Clark’s that challenge actions they might be prone to take, it is common to match the challenging position with a source that offers a counterargument. I paired Clark’s paper with a paper written by Robert Kozma. If you are inclined to pursue this controversy, I recommend this combination.

Does it matter?

Possibly. I think we are experiencing changes in how we experience information. Most of us encounter more and more video, both for entertainment and for learning. It is worth considering how we might be influenced by the medium of input. If we increasingly try to learn from video, how do we process the video experience with the kind of control we can exercise over text?

References:

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445–459.

Johns, A. (2023). The science of reading: Information, media, and mind in modern America. University of Chicago Press.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.

*


YouTube Annotation with Glasp

I take a lot of notes and have done so for years. I have tried many different tools over this period. Social annotation is a subcategory of these tools that allows users to share their highlights and notes. The idea is that sharing notes allows individuals to find resources they have not personally explored and to offer their own discoveries to others. Glasp serves these purposes.

I have written about Glasp on several previous occasions. A unique feature allows built-in AI to “chat” not only with your own notes, but also with the annotations stored by others.

Glasp combines a Profile page, the online location providing access to the content you have collected (see above), with a browser extension that provides the means to highlight and annotate the content viewed within your browser. Kindle content is imported automatically. Glasp could serve as the storage location for all of your notes, but I export notes to Obsidian to take advantage of more advanced features.

I don’t spend a lot of time collecting information from YouTube because most of my writing is based on books and journal articles. There are exceptions when I review tutorials for software tools and want to keep track of specific tactics. I understand that others use YouTube extensively, and I wanted to explore the capabilities of Glasp with this information source. The following video is my effort to describe how notes and highlights are generated from YouTube content.


Processing video for Personal Knowledge Management

Johns’ “The Science of Reading” explores the historical and scientific journey of reading as a science and a practice. Much of my professional life as a researcher focused on reading and reading skills, so I was aware of some of the history of the research and theory. What my background lacked was the broader perspective on what was expected of reading as a determinant of culture and as the basis for citizenship and commercial and scientific advancement. The political perspective associated with assumptions about which specific skills were necessary for the general advancement of nations was an angle I had not considered.

The closest I can come to explaining some of the insights I encountered might be compared to present assumptions concerning political arguments over why “educated” citizens can believe the things they believe and even what should be excluded from classroom consideration to prevent what some see as undesirable outcomes. Those of us involved in the nitty-gritty of the learning and improvement of the skills of reading are often oblivious to broader questions of what the general population may expect the skill to accomplish or the problems the acquisition of a skill may create.

A historical perspective provides not only a way to see transitions in a skill and how that skill is developed, but also, in this case, a way to consider that a skill exists in a reciprocal relationship with knowledge and culture. For example, political values, arguably a part of culture, have varied in demanding that a specific form of communication be prioritized, and this priority justifies support as a means of accomplishing valued goals. Who needs to develop a specific communication skill, what information should this skill target, and how will the use of this skill be controlled? More to the point of this post, are we in an era in which reading is coming to the end of its reign in this broader capacity, and are we seeing the early stages of a transition to a different means of recording and transmitting knowledge and culture? Are we in the midst of this transition without acknowledging it and, perhaps more importantly, without supporting and shaping its direction?

Perhaps asking whether we are moving on from reading seems radical, but these thoughts came to me as I have watched my grandchildren and truthfully most of my relatives spend hours exploring videos on their phones. The time children and adolescents spend on YouTube and other video content exceeds by a considerable margin the time they spend reading. It seems this reality has to be acknowledged. I tried to locate some specific data and found that the results of a recent Gallup poll indicate adolescents report spending an average of 1.9 hours daily on YouTube alone. Adults may be different, but I would wager when they encounter a skill they must execute they are far more likely to see if YouTube has something to offer rather than search for and read the manual that provides related information. I understand that what may seem a similar reaction has been associated with television viewing because everyone spent and spends so much time watching television, but how we make use of televised content seems different and less responsive to transitory personal interests than online video.

A modest proposal

OK. I have not abandoned reading and I rely on reading professionally. I must read journal articles and books to perform my occupational role. Scientific research demands the sharing and reading of text documents in a specific format and with a required approach to citing related sources so that any arguments made can be evaluated based on existing research findings and theory. At this point, I am bound by this approach. However, the process by which the findings of this formal research process reach potential practitioners is not so rigid. Classroom educators can read articles and blog posts in which proposed instructional activities based on the findings of the research community are offered, but they can also listen to and watch podcasts and YouTube presentations. They can take courses (e.g., Coursera) and interactive classes (e.g., Zoom) that rely on video. We all have been taught to read (and write), but what about the development of skills that optimize learning from video?

For several years now, I have been interested in the role of Personal Knowledge Management (PKM) in self-directed learning. Part of this interest has involved the exploration of specific digital tools that support the processing of information within the context of PKM. The PKM perspective can be applied to traditional educational settings, but it also encourages a long-term perspective, the situation all of us face once we are no longer involved in courses that require us to learn to pass examinations and produce projects that demonstrate our learning. Our challenge is remembering the potentially useful specifics that earlier information sources provided and finding personally useful connections within this great volume of information.

PKM is about tools and tactics. What processes (tactics) allow us to store (internally and externally) a residue from our reflection on the information we have experienced? What external activities (tools) can facilitate storage and processing?

There are plenty of tools and plenty of suggested tactics proposed by the PKM community. My focus here is on the less extensive attention given to video and on the even more limited set of digital tools used during the initial video experience. How does a video viewer capture ideas for later use? How can skills unique to this approach be learned?

Why an integrated digital note-taking tool?

While watching an informative video, why not just take notes in a notebook next to your laptop or tablet? Why not just open a simple word-processing app in a second window on your laptop? My answer would be that an integrated digital tool links the context between the original video and individual notes in ways that anticipate future issues and uses. Note-taking is a far from perfect process, and recovering a missing piece of information necessary to fix a confusing note requires being able to reexamine a specific segment of the original video. I first wrote about the importance of preserving context when describing apps that allowed the sound from lectures to be recorded within note-taking apps. These apps automatically link each note to a time stamp connecting it to a specific point in the audio recording. I even suggested that when note-takers realize they have missed something they should have written down, they simply enter something like ??? in their notes as a signal to later check the recorded audio for something important that was not captured in the notes.

I have a different reason for proposing the importance of digital notes. I use digital note-taking systems that allow me to quickly search and find notes I may have taken years ago. Students are not in this situation, but in a course with only a midterm and a final exam, the delays are long enough to involve a sizable amount of content to review and a time frame likely to increase memory-retrieval challenges. Digital notes make searching simple and allow integration and cross-referencing of content over time to be relatively easy. For those of us now managing large amounts of information outside of a formal, short-term academic setting, such challenges are often described and addressed as Personal Knowledge Management (PKM).
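The search advantage of digital notes is easy to demonstrate. A minimal sketch, assuming only a folder of markdown files (the directory name and function are illustrative, not part of any particular tool):

```python
import re
from pathlib import Path

def search_notes(notes_dir, term):
    """Return (filename, line) pairs for note lines matching a search term,
    case-insensitively, across all markdown files in a folder."""
    hits = []
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    for path in sorted(Path(notes_dir).rglob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if pattern.search(line):
                hits.append((path.name, line.strip()))
    return hits
```

Tools such as Obsidian provide this kind of retrieval (plus linking and tagging) out of the box; the point of the sketch is only that notes stored as text remain searchable years later.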

Reclipped

There are several tools available to annotate videos. My favorite is ReClipped. This tool is an extension that is added to the Chrome browser and is activated when a video source the tool can be used with appears in the browser. When the extension has been added, an icon will appear in the icon bar at the top of your browser and the appearance of this icon will change when it has been activated by the presence of video content within the browser. When active with YouTube, additional icons will appear in YouTube below and to the right of the window displaying the video (see the following image with ReClipped icons identified by a red box). (Note: the video used in this example was created by Dr. Dan Alosso and associated with an online book club he runs.)

I have written about ReClipped before in my series about layering tools. I define a layering tool as one that allows additions to be overlaid on existing online content without actually modifying that content as sent from the host server. I wrote previously about ReClipped as a way an instructor could add content (questions, comments) to a video so that the composite of the original video and the additions could be presented to students to supplement their learning. The difference here is that a learner is adding the additions for personal use.

To keep this as simple as possible, I will focus on one tool — the pencil. The pencil represents the note tool (see the icons with the pencil tool enclosed in a red box below the video window). Clicking on the pencil creates a time stamp in the panel to the right of the video window allowing the user to enter a note associated with that time stamp (see examples in the image). I tend to click the pencil, pause the video, and then enter my notes. Pausing the presentation is obviously an option not available when listening to a live lecture and solves all kinds of issues that learners face in the live lecture setting.

The save and export buttons are also important. ReClipped will archive your annotations for you when you save, but I am more interested in exporting my annotations so I can use them within my broader Personal Knowledge Management strategy. I use a tool called Obsidian to collect all of my notes and to work with this large collection in other ways (reworking, linking, tagging). I also make use of an AI tool (Smart Connections) to “chat” with my collection of notes.

ReClipped allows the notes associated with a given video to be exported in several formats (e.g., pdf). I export notes in markdown because this is the format Obsidian prefers for import. Markdown is a lightweight formatting style, something like the HTML used in creating web pages. Such formatting allows the incorporation of other information with the text (e.g., links). For example, one of the entries included in the example I have displayed is exported as the text string that appears below.

– [08:43](https://www.youtube.com/watch?v=ukJtbtb8Tb4&t=523s) levels of notes — fleeting, literature, permanent — literature vs permanent is a matter of connecting to what you already know vs summarization. Permanent note has been “filtered by our interest”

When stored in Obsidian it appears as the following image (this is an image and not active).

Within Obsidian, the link is active and will cause the browser to return to the video stored in YouTube at the location identified by the time stamp. So, if necessary, I can review the video at the point associated with the note I created. This link will simulate that experience. One issue with time stamps: a time stamp is created after the content it references. You listen and then decide to create a note. Reestablishing the context for a note thus requires using the time-stamp link to activate the video and then scrubbing backward a bit to view the relevant material.
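Because the exported note line carries the time stamp in its URL (the `t=523s` parameter in the example above), the backward scrub could even be automated. A minimal sketch, assuming exported lines follow the `[mm:ss](url&t=Ns)` pattern shown above; the function name and 30-second offset are my own illustrations, not a ReClipped feature:

```python
import re

# Matches a "[mm:ss](...t=Ns...)" note link as exported in the example above
NOTE_PATTERN = re.compile(
    r"\[(?P<stamp>\d{1,2}:\d{2}(?::\d{2})?)\]"
    r"\((?P<url>[^)]*[?&]t=(?P<secs>\d+)s[^)]*)\)"
)

def rewind_note_link(line, offset=30):
    """Rewrite the t=...s parameter in an exported note line so the link
    starts `offset` seconds earlier, replaying the lead-up to the note."""
    m = NOTE_PATTERN.search(line)
    if not m:
        return line  # leave lines without a time-stamp link untouched
    secs = max(int(m.group("secs")) - offset, 0)
    new_url = re.sub(r"([?&]t=)\d+s", rf"\g<1>{secs}s", m.group("url"))
    return line.replace(m.group("url"), new_url)
```

Run over an exported note file before import, this would make every link land shortly before the moment the note was taken.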

ReClipped allows other content (e.g., screen captures) from a video to be collected while viewing. Taking and exporting notes is straightforward and easy for me to explain in a reasonable amount of time.

There is a free version of ReClipped and the paid unlimited version is $2 a month. Note that ReClipped is presently free to teachers and students.

Research

I try to ground my speculation concerning the application of digital tools and techniques in unique learning situations with links to relevant research. In this case, my preference would be for studies comparing traditional note-taking from video with taking notes using integrated digital note-taking tools similar to ReClipped. I have been unable to locate the type of studies I had hoped to find. I did locate some studies evaluating the effectiveness of scratch-built tools, typically incorporating some type of guided study tactic (see the Fang and colleagues reference as an example). Though important work, learner application of more flexible and accessible tools seems a different matter and needs to be evaluated separately.

Putting this all together

If you agree with the argument that we will increasingly rely on video content for the skills and information we want to learn, my basic suggestion is that we think more carefully about how to optimize learning from such content and teach/learn skills appropriate to this content and context. Digital tools such as ReClipped allow notes to be taken while viewing videos. These notes can be exported and stored within a Personal Knowledge Management system for reflection and connection with information from other sources. This post suggests that experience with such tools under educator supervision would provide learners the skills needed to take a more active approach to learning from videos they encounter.

References:

Fang, J., Wang, Y., Yang, C. L., Liu, C., & Wang, H. C. (2022). Understanding the effects of structured note-taking systems for video-based learners in individual and social learning contexts. Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–21.

Johns, A. (2023). The science of reading: Information, media, and mind in modern America. University of Chicago Press.


Adobe Spark Video

Note: Adobe has replaced Spark tools with Adobe Express. Spark Video is part of Adobe Express.

Adobe Spark Video is a great tool for students to use to create videos. It is especially useful because it works through a web browser and hence runs well on Chromebooks.

The following is the page you will encounter when you connect. You are going to want to create an account.

You can create various types of projects with Adobe Spark. My tutorial describes the slideshow.

The following video takes you through the basics of creating with Adobe Spark.

Here is the final product from the project described above.


Create a YouTube Playlist

Educators may want to assign a collection of YouTube videos to students for a project or study assignment. This tutorial will explain how this is done and relies mostly on a series of images.

I see this process in three stages – create a playlist, add videos to the playlist, share the playlist with a specific audience. The process works a little differently depending on whether you want to use videos you have created or videos created by others.

Stages 1 and 2 using videos created by others.

Beneath a video from another source, you will find this save icon. The save icon brings up the option of adding to an existing playlist or creating a new playlist. You would first create a new playlist with a video you wanted to use and then continue to add additional videos. The order of selection can be modified at a later stage so you don’t have to worry about the order when first creating the list.

Working with your own videos or a mix of content from your own creations and existing videos seems to work a little differently. To create the list and add your own creations, work through the YouTube Studio.

Within the Studio, you can then identify a video you have created to be added, open the video as if to edit, and then use the playlist feature to add to an existing playlist.

The final step is to share the list with students. Note in this image the share button (left) and the list of selected videos on the right. A key feature of this list of videos is the opportunity to reorder the videos. You drag the video with the small parallel lines icon to change the position. The share icon offers the opportunity to share to various outlets or allows the copying of the URL for sharing with specific individuals.
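For anyone who wants to script these stages rather than click through them, the same three steps map onto the YouTube Data API v3 (`playlists.insert`, `playlistItems.insert`). As a hedged sketch, the functions below only build the request bodies and the shareable URL; the authenticated API calls themselves (e.g., via google-api-python-client) are assumed and not shown, and the function names are my own:

```python
def playlist_body(title, privacy="unlisted"):
    """Stage 1: body for youtube.playlists().insert(part='snippet,status')."""
    return {"snippet": {"title": title},
            "status": {"privacyStatus": privacy}}

def playlist_item_body(playlist_id, video_id):
    """Stage 2: body for youtube.playlistItems().insert(part='snippet'),
    adding one video (yours or someone else's) to the playlist."""
    return {"snippet": {"playlistId": playlist_id,
                        "resourceId": {"kind": "youtube#video",
                                       "videoId": video_id}}}

def share_url(playlist_id):
    """Stage 3: the URL pattern for sharing a playlist with students."""
    return f"https://www.youtube.com/playlist?list={playlist_id}"
```

For a classroom playlist, "unlisted" privacy keeps the list off public search while still letting anyone with the link view it.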

A sample playlist focused on my own efforts to explain Layering services was created using this process.


Time-lapse video with iMotion

I am considering this the third contribution to my series on Classroom Gardens. It is related to the other two posts, which concern indoor hydroponic gardening, only in that time-lapse video is an interesting way to demonstrate plant growth, and variations of such a project would be easy to implement.

This is my setup for capturing the video of plant growth. The equipment toward the back of the image is the hydroponic garden and you can also see some young plants. Positioned in front and to the right of this garden is an iPad.

Time-lapse video requires a fixed location for the camera and steady control of its focus. This device (I wish I knew the name) holds an iPad. The video I provide was taken over a couple of weeks, so you need to consider how you will create an environment allowing careful positioning of the camera. As long as no one bumps the iPad, this holder does the trick. A traditional tripod serves a similar purpose when time-lapse video is taken with a camera. It is also necessary to plug the iPad into a power source; the iPad remains active during the entire process and would run down its battery otherwise.

The app used for this process was iMotion for Schools. In the video tutorial that follows I incorrectly claim iMotion for Schools is the same price as iMotion Pro. I find different prices; I paid $3.99, but the iMotion for Schools page says $5.99.

iMotion for Schools Tutorial

Here is the video created with iMotion.

The video you see here has been altered. The original video contained segments of black frames generated during the night when the lights for the hydroponic garden were off. One thing I do not explain in the tutorial, which was already getting a little long, is the opportunity to edit the video within the app (see the tools available when the completed video is open). There are tools for adding and removing individual frames. I used the delete frame tool to remove the blank frames. In the video, you see phases of smooth growth and then jumps. The jumps are caused by growth that occurred during the night, when the darkness prevented the recording of these changes.

One hint – you have to do this on the fly, so I slowed the frame rate to 1 frame per second to delete frames and sped it back up to 16 frames per second before exporting the video. I don’t have an explanation for the flickering you see in the first section of the video. Because the growing lettuce fills the screen toward the end of the video and the flickering is no longer present, I assume the flickering was caused by the exposed lighting.
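The playback frame rate also determines how long a multi-week capture plays back: total captured frames divided by the playback rate. A minimal sketch of the arithmetic; the 60-frames-per-hour capture interval in the example is an assumption for illustration, since the post does not state the interval used:

```python
def playback_seconds(capture_days, frames_per_hour, playback_fps=16):
    """Playback length of a time-lapse: total captured frames
    divided by the playback frame rate (16 fps used here, as in the post)."""
    total_frames = capture_days * 24 * frames_per_hour
    return total_frames / playback_fps
```

For example, two weeks captured at one frame per minute and played back at 16 fps yields 14 × 24 × 60 / 16 = 1260 seconds, or about 21 minutes, which is why deleting the overnight black frames also usefully shortens the result.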


Loom

Loom is a free Chrome extension that records the content appearing in Chrome as a video AND superimposes a smaller video of you on what is captured from the Chrome screen. I see it as a great way to create tutorials, but it has many possible applications.

Here is a video describing the use of Loom. I am proud of the technique I came up with to generate this video. I used QuickTime to record the section of the screen within which I was using Loom, while Loom simultaneously recorded what appeared within Chrome.

Here is the video generated by Loom. You can match it to the “how to do it” video that appears above.

One important part of the Loom process is not well explained in the first video. At the end of the video, you will see a few seconds of the screen that appears when you end a recording in Loom. This screen shows two options for sharing what has been recorded. One is the link for the content stored by Loom. If you want to do something with the video yourself (for example, put it on Facebook), the download button offers the opportunity to save the video to your computer.
