Desirable Difficulty

Despite my heavy reliance on cognitive psychology when researching and explaining classroom study tactics, I had not encountered the phrase desirable difficulty until I became interested in the research comparing handwritten and keyboard notetaking. I discovered the idea while reviewing studies by Luo and colleagues and by Mueller and Oppenheimer. Several studies have claimed students are better off taking notes by hand than on a laptop, despite being able to record information significantly faster when using a keyboard.

Since having a more complete set of notes would seem an advantage, the combination of more notes with poorer performance is counterintuitive. Researchers speculated that learners who understood they had to make decisions about what they had time to record selected information more carefully and possibly summarized rather than recorded verbatim what they heard. This focus on what could be described as deeper processing seemed like an example of desirable difficulty. The researchers also proposed that faster keyboard recording involved shallower cognitive processing.

Note: I remain a fan of more complete notes, and the methodology used in studies demonstrating better performance from handwritten notes needs to be considered carefully. I will expand on this argument at the end of this post.

Desirable difficulty, an idea attributed to Robert Bjork, has been used to explain a wide variety of retention phenomena. Bjork suggested that retrieval strength and storage strength are distinct phenomena and that learners can be misled when an approach to learning is evaluated based on retrieval strength. I find these phrases a bit confusing as applied, but I understand the logic. Students cramming for an exam make a reasonable example. Cramming results in what may seem to be successful learning (retrieval strength) but produces poorer retention over an extended period of time (storage strength). Students may understand and accept the disadvantages of cramming, so it is not necessary that the distinction be unrecognized by learners. In a more recent book on learning for the general public, Daniel Willingham suggests that the brain is really designed to avoid rather than embrace thinking because thinking is effortful. The human tendency is to rely on memory rather than thinking. Desirable difficulty may be a way to explain why some situations that require thinking are more productive than something more rote.

Increasing difficulty to improve retention

There are multiple tactics for productively increasing difficulty that I tend to group under the heading of generative learning. I describe generative activities as external tasks intended to increase the probability of productive cognitive (mental) behaviors. I suppose desirable difficulty is even more specific, differentiating external tasks along a difficulty dimension. So in the following list of tasks, it is useful to imagine more and less difficult versions. Often the less difficult version is the option learners choose to apply. In connecting these tactics with personal experience, I recommend you use flashcards to conceptualize what the easier and the more challenging applications would be. Then, move beyond flashcards to other study tactics and consider whether you can identify similar contrasts.

Retrieval Practice: Testing oneself on the material rather than passively reviewing notes is considered retrieval practice. The classic empirical demonstration of retrieval practice (the testing effect) compared reviewing content with responding to questions. Even when controlling for study time, spending some time on questions was superior. With the flashcard applications I recommended you consider, answering multiple-choice questions would be less challenging than answering short-answer questions (recognition vs. recall).

Spacing (Distributed Practice): Instead of cramming, spreading out study sessions over time is more productive. This method helps improve long-term retention and understanding. Spacing allows some retrieval challenges to develop and the learner must work harder to locate the desired information in memory. See my earlier description of Bjork’s distinction between retrieval strength and storage strength. 

Interleaving: Mixing different types of problems or subjects in one study session. For example, alternating between math problems and reading passages rather than focusing on one at a time. A simple flashcard version of this recommendation might be shuffling the deck between cycles through the deck. Breaking up the pattern of the review task increases the difficulty and requires greater cognitive effort. 
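The three tactics above can be combined in a single flashcard routine. The sketch below is my own minimal illustration of a Leitner-style schedule (the boxes and intervals are arbitrary assumptions, not drawn from the research cited): answering from memory is retrieval practice, the expanding review intervals provide spacing, and shuffling the due cards is a simple form of interleaving.

```python
import random

def schedule_session(deck, day):
    """Return the cards due for retrieval practice on a given day.

    Each card is a dict with 'front', 'back', and 'box' (1-3).
    Higher boxes are reviewed less often (spacing); shuffling the
    due cards breaks up any predictable order (interleaving).
    """
    intervals = {1: 1, 2: 3, 3: 7}  # box 1 daily, box 2 every 3 days, box 3 weekly
    due = [card for card in deck if day % intervals[card["box"]] == 0]
    random.shuffle(due)  # interleave: avoid reviewing in a fixed sequence
    return due

def grade(card, correct):
    """Promote a correctly recalled card to a higher box; demote a miss to box 1."""
    card["box"] = min(3, card["box"] + 1) if correct else 1
    return card
```

The design choice worth noticing is that success makes a card appear less often, which is exactly the point of desirable difficulty: the next retrieval attempt happens after forgetting has had time to set in.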

Other thoughts

First, the concept of committing to more challenging tasks is broader than the well-researched examples I provide here. Writing and teaching could be considered examples in that both require an externalization of knowledge that is both generative and evaluative. It is too easy to fake it and make assumptions when the actual creation of a product is not required.

Second, desirable difficulty seems to me a guiding principle that does not explain all of the actual cognitive mechanisms involved. The specific mechanisms may vary with the activity – some might be motivational, some evaluative (metacomprehension), and some at the level of basic cognitive operations. For example, creating retrieval challenges probably prompts an attempt to find alternate or new connections among stored elements of information. In trying to put a name with a face, you might attempt to remember the circumstances in which you met or worked with the person, and this may activate a connection you do not typically use and that is not automatic. After being retired for 10 years, when trying to remember the names of coworkers I sometimes picture the arrangement of our offices and work my way down the appropriate hallway, and this sometimes helps me recall names.

I did say I was going to return to the use of desirable difficulty as a justification for the advantage of taking notes by hand. If keyboarding allows faster data entry than handwriting, in theory keyboarding would leave more time for thinking and paraphrasing, the very advantages attributed to the slower recording method. Awareness and commitment would seem to be the issues here. I would also think complete notes would have greater long-term value than sparse notes. One always has the opportunity to think while studying, and a more complete set of notes provides more external content to work with.

References:

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.

Luo, L., Kiewra, K. A., Flanigan, A. E., & Peteranetz, M. S. (2018). Laptop versus longhand note taking: Effects on lecture notes and achievement. Instructional Science, 46(6), 947–971.

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159–1168.

Willingham, D. T. (2021). Why don’t students like school?: A cognitive scientist answers questions about how the mind works and what it means for the classroom. John Wiley & Sons.


YouTube Annotation with Glasp

I take a lot of notes and have done so for years, trying many different tools along the way. Social annotation is a subcategory of these tools that allows users to share their highlights and notes. The idea is that sharing notes allows individuals to find resources they have not personally explored and to offer their own discoveries to others. Glasp serves these purposes.

I have written about Glasp on several previous occasions. A unique feature allows its built-in AI to “chat” not only with your own notes but also with the annotations stored by others.

Glasp is a combination of a Profile page that is the online location allowing access to the content you have collected (see above) and a browser extension that provides the means to highlight and annotate the content viewed within your browser. Kindle content is imported automatically. Glasp could provide the storage location for all of your notes, but I export notes to Obsidian to take advantage of more advanced features.

I don’t spend a lot of time collecting information from YouTube because most of my writing is based on books and journal articles. There are exceptions when I review tutorials for software tools and want to keep track of specific tactics. I understand that others use YouTube extensively, and I wanted to explore the capabilities of Glasp with this information source. The following video is my effort to describe how notes and highlights are generated from YouTube content.


Processing video for Personal Knowledge Management

Johns’s “The Science of Reading” explores the historical and scientific journey of reading as a science and a practice. Much of my professional life as a researcher focused on reading and reading skills, and as a consequence I was aware of some of the history of the research and theory. What my own perspective lacked was the broader view of what was expected of reading as a determinant of culture and as the basis for citizenship and for commercial and scientific advancement. The political perspective associated with assumptions about which specific skills were necessary for the general advancement of nations was an angle I had not considered.

The closest I can come to explaining some of the insights I encountered is a comparison to present political arguments over why “educated” citizens can believe the things they believe, and even over what should be excluded from classroom consideration to prevent what some see as undesirable outcomes. Those of us involved in the nitty-gritty of learning and improving the skills of reading are often oblivious to broader questions of what the general population may expect the skill to accomplish or the problems the acquisition of a skill may create.

A historical perspective provides not only a way to see transitions in a skill and how that skill is developed, but also, in this case, a way to consider that a skill exists in a reciprocal relationship with knowledge and culture. For example, political values, arguably a part of culture, have varied in demanding that a specific form of communication be prioritized, thus justifying support as a means for accomplishing prioritized goals. Who needs to develop a specific communication skill, what information should this skill target, and how will the use of this skill be controlled? More to the point of this post, are we in an era in which reading is coming to the end of its reign in this broader capacity, and are we seeing the early stages of a transition to a different means for recording and transmitting knowledge and culture? Are we in the midst of this transition without acknowledging it and, perhaps more importantly, without supporting and shaping its direction?

Perhaps asking whether we are moving on from reading seems radical, but these thoughts came to me as I have watched my grandchildren and, truthfully, most of my relatives spend hours exploring videos on their phones. The time children and adolescents spend on YouTube and other video content exceeds by a considerable margin the time they spend reading. It seems this reality has to be acknowledged. I tried to locate some specific data and found that the results of a recent Gallup poll indicate adolescents report spending an average of 1.9 hours daily on YouTube alone. Adults may be different, but I would wager that when they encounter a skill they must execute they are far more likely to see if YouTube has something to offer rather than search for and read the manual that provides related information. I understand that a similar reaction was once associated with television because people spend so much time watching it, but how we make use of televised content seems different and less responsive to transitory personal interests than online video.

A modest proposal

OK. I have not abandoned reading, and I rely on reading professionally. I must read journal articles and books to perform my occupational role. Scientific research demands the sharing and reading of text documents in a specific format and with a required approach to citing related sources so that any arguments made can be evaluated based on existing research findings and theory. At this point, I am bound by this approach. However, the process by which the findings of this formal research process reach potential practitioners is not so rigid. Classroom educators can read articles and blog posts in which proposed instructional activities based on the findings of the research community are offered, but they can also listen to and watch podcasts and YouTube presentations. They can take courses (e.g., Coursera) and interactive classes (e.g., Zoom) that rely on video. We all have been taught to read (and write), but what about the development of skills that optimize learning from video?

For several years now, I have been interested in the role of Personal Knowledge Management (PKM) in self-directed learning. Part of this interest has involved the exploration of specific digital tools that support the processing of information within the context of PKM. The PKM perspective can be applied to traditional educational settings, but it also encourages a long-term perspective, which is the environment all of us face once no longer involved in courses that require us to learn to pass examinations and produce projects that demonstrate our learning. Our challenge is remembering, when they might be useful, the specifics that earlier exposure to information sources provided, and finding personally useful connections within this great volume of information.

PKM is about tools and tactics. What processes (tactics) allow us to store (internally and externally) a residue from our reflection on the information we have experienced? What external activities (tools) can facilitate storage and processing?

There are plenty of tools and plenty of related suggestions for tactics proposed by the PKM community. My focus here is on the less extensive focus on video and the even more limited focus on digital tools that are used during the initial video experience. How does a video viewer capture ideas for later use? How can skills unique to this approach be learned?

Why an integrated digital note-taking tool?

While watching an informative video, why not just take notes in a notebook next to your laptop or tablet? Why not just open a simple word-processing app in a second window on your laptop? My answer would be that an integrated digital tool links each note to its context in the original video in ways that anticipate future issues and uses. Note-taking is a far from perfect process, and recovering a missing piece of information needed to fix a confusing note requires being able to reexamine a specific segment of the original video. I first wrote about the importance of preserving context when describing apps that allowed the sound from lectures to be recorded within note-taking apps. These apps automatically link any note taken with a time stamp connecting the note to a specific point in the audio recording. I even suggested that when note-takers realize they have missed something they know they should have written down, they simply enter something like ??? in their notes as a signal to later check the recorded audio for something important that never made it into the notes.

I have a different reason for proposing the importance of digital notes. I use digital note-taking systems that allow me to quickly search and find notes I may have taken years ago. Students are not in this situation, but the delays in a course with only a midterm and a final exam are long enough to involve a sizable amount of content to review and a time frame likely to increase memory retrieval challenges. Digital notes make searching simple and allow integration and cross-referencing of content over time to be relatively easy. For those of us now managing large amounts of information outside of a formal and short-term academic setting, such challenges are often described and addressed as Personal Knowledge Management (PKM).

ReClipped

There are several tools available to annotate videos. My favorite is ReClipped. This tool is an extension added to the Chrome browser that is activated when a supported video source appears in the browser. When the extension has been added, an icon will appear in the icon bar at the top of your browser, and the appearance of this icon will change when it has been activated by the presence of video content. When active with YouTube, additional icons will appear below and to the right of the window displaying the video (see the following image with ReClipped icons identified by a red box). (Note: the video used in this example was created by Dr. Dan Alosso and associated with an online book club he runs.)

I have written about ReClipped before in my series about layering tools. I define a layering tool as one that allows additions to be overlaid on existing online content without actually modifying that content as sent from the host server. I wrote previously about ReClipped as a way an instructor could add content (questions, comments) to a video so that the composite of the original video and the additions could be presented to students and supplement their learning. The difference here is that a learner is adding the additions for personal use.

To keep this as simple as possible, I will focus on one tool — the pencil. The pencil represents the note tool (see the icons with the pencil tool enclosed in a red box below the video window). Clicking on the pencil creates a time stamp in the panel to the right of the video window allowing the user to enter a note associated with that time stamp (see examples in the image). I tend to click the pencil, pause the video, and then enter my notes. Pausing the presentation is obviously an option not available when listening to a live lecture and solves all kinds of issues that learners face in the live lecture setting.

The save and export buttons are also important. ReClipped will archive your annotations for you when you save, but I am more interested in exporting my annotations so I can use them within my broader Personal Knowledge Management strategy. I use a tool called Obsidian to collect all of my notes and to work with this large collection in other ways (reworking, linking, tagging). I also make use of an AI tool (Smart Connections) to “chat” with my collection of notes.

ReClipped allows the notes associated with a given video to be exported in several formats (e.g., PDF). I export notes in markdown because this is the format Obsidian prefers for import. Markdown is a lightweight formatting style, something like the HTML used to create web pages, and it allows the incorporation of other information with text (e.g., links). For example, one of the entries included in the example I have displayed is exported as the text string that appears below.

– [08:43](https://www.youtube.com/watch?v=ukJtbtb8Tb4&t=523s) levels of notes — fleeting, literature, permanent — literature vs permanent is a matter of connecting to what you already know vs summarization. Permanent note has been “filtered by our interest”
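Because the export is plain text with a predictable shape, the pieces of each note are easy to recover programmatically. The following Python sketch (my own illustration; the exact export format may vary between ReClipped versions) splits a line like the one above into its time stamp, link, and note text:

```python
import re

# One exported ReClipped note line looks like:
#   - [MM:SS](https://www.youtube.com/watch?v=ID&t=SECONDSs) note text
# (the leading dash may be exported as "-" or "–")
LINE_RE = re.compile(r"[-–]\s*\[(?P<stamp>[\d:]+)\]\((?P<url>[^)]+)\)\s*(?P<note>.*)")

def parse_note(line: str) -> dict:
    """Split one exported note line into time stamp, link, seconds, and text."""
    m = LINE_RE.match(line.strip())
    if not m:
        raise ValueError("not a ReClipped note line")
    # Convert MM:SS (or HH:MM:SS) into total seconds for sorting or searching.
    seconds = 0
    for part in m.group("stamp").split(":"):
        seconds = seconds * 60 + int(part)
    return {"stamp": m.group("stamp"), "url": m.group("url"),
            "seconds": seconds, "note": m.group("note")}
```

Notice that the seconds computed from the visible stamp (08:43 is 523 seconds) match the `t=523s` parameter in the exported YouTube link, which is what lets the link reopen the video at the right spot.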

When stored in Obsidian it appears as the following image (this is an image and not active).

Within Obsidian, the link is active and will cause the browser to return to the video stored in YouTube at the location identified by the time stamp. So, if necessary, I can review the video at the point associated with the note I created. One issue with time stamps: the creation of a time stamp follows the content the stamp references. You listen and then decide to create a note. Reestablishing the context for a note thus requires that you use the time-stamp link to activate the video and then scrub backward a bit to view the relevant material.

ReClipped allows other content (e.g., screen captures) from a video to be collected while viewing. Taking and exporting notes is straightforward and easy for me to explain in a reasonable amount of time.

There is a free version of ReClipped and the paid unlimited version is $2 a month. Note that ReClipped is presently free to teachers and students.

Research

I try to ground my speculation concerning the application of digital tools and techniques in unique learning situations with links to relevant research. In this case, my preference would be for studies comparing traditional note-taking from video with taking notes using integrated digital note-taking tools similar to ReClipped. I have been unable to locate the type of studies I had hoped to find. I did locate some studies evaluating the effectiveness of scratch-built tools typically incorporating some type of guided study tactic (see the Fang and colleagues reference as an example). Though important work, learner application of more flexible and accessible tools seems a different matter and needs to be evaluated separately.

Putting this all together

If you agree with the argument that we will increasingly rely on video content for the skills and information we want to learn, my basic suggestion is that we think more carefully about how to optimize learning from such content and teach/learn skills appropriate to this content and context. Digital tools such as ReClipped allow notes to be taken while viewing videos. These notes can be exported and stored within a Personal Knowledge Management system for reflection and connection with information from other sources. This post suggests that experience with such tools under educator supervision would provide learners the skills needed to take a more active approach to learning from videos they encounter.

References:

Fang, J., Wang, Y., Yang, C. L., Liu, C., & Wang, H. C. (2022). Understanding the effects of structured note-taking systems for video-based learners in individual and social learning contexts. Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–21.

Johns, A. (2023). The science of reading: Information, media, and mind in modern America. University of Chicago Press.


Update to AI-Supported Social Annotation with Glasp

Social annotation is a digital and collaborative practice in which multiple users interact with text or video through comments, highlights, and discussions directly linked to specific parts of the source. This practice extends the traditional act of reading and watching into a participatory activity, allowing individuals to engage with both the text and each other in educational ways.

Whether functioning in a formal educational setting or an informal one, learners can benefit from social annotation in multiple ways. It can transform reading from a solitary into a communal act, encouraging students to engage more deeply with texts. Students can pose questions, share interpretations, and challenge each other’s views directly on the digital document. This interaction not only enhances comprehension and critical thinking but also builds a sense of community among learners. Potentially, educators can also participate, guiding discussions or reacting to student comments.

Beyond the classroom, social annotation is used in research and professional fields to streamline collaboration. Researchers and professionals use annotation tools to review literature, draft reports, and provide feedback. This collaborative approach can accelerate project timelines and improve the quality of work by efficiently incorporating multiple areas of expertise and viewpoints.

I have written previously about social annotation as a subcategory of my interest in technology tools that allow layering, and even earlier in descriptions of specific annotation tools such as Hypothesis. As now seems the case with many digital topics, social annotation eventually expanded to incorporate AI. This post updates my description of the AI capabilities of Glasp. Glasp is a free tool used to annotate web pages, link comments to videos, and import annotations from Kindle books. It functions as a browser extension when layering comments and highlights on web pages and videos. The accumulated body of additions is available through a website, which is where the AI capability is applied as a mechanism for interacting with the collected content and for connecting with other Glasp users.

The following content is divided into two sections. The first section focuses on the AI capabilities applied to personally collected content and the content collected by others. The second section explains how to locate the content of others who have used Glasp to collect content designated as public. This second section describes capabilities I have personally found very useful. As a retired individual, I no longer have access to colleagues I might interact with frequently. Collaborative tools are only useful when collaborators are available and developing connections can be a challenge.

Interacting with stored annotations using AI

The following image displays the personal browser view from the Glasp site. The middle column consists of thumbnails representing multiple web pages that have been annotated, and the right-hand column displays the highlighted material from the selected source (no notes were added to the source I used for this example). The red box was added to this image to bring your attention to the “Ask digital clone” button. This image is what you would see when connecting to my site to interact with my content. The button would read “Ask your clone” if I were connecting to my own account to interact with my content. Here is a link you can use to interact with my content. After you have read just a bit further, return and use this link to duplicate my example and then try a few requests of your own.

The next image displays what happens when the “Ask digital clone” button is selected. You should see a familiar AI interface with a text box at the bottom (red box) for initiating an interaction. I know the type of content I have read so I have generated a prompt I know should be relevant to the content I have annotated. 

The prompt will generate a response if relevant information is available. However, here is what I find most useful. The response will be associated with a way to identify sources (see red box). Typically, I am most interested in reviewing original material from which I can then write something myself.

The link to relevant highlights should produce something that looks like the following.

Locating content saved by others

Glasp offers a capability that addresses the issue I identified earlier. How do you locate others to follow?

The drop-down menu under your image in the upper right-hand corner of the browser display should contain an option “Find like-minded people”. This option will attempt to identify others with some overlap in interests based on the type of content you have annotated. So, you must start by building at least a preliminary collection of annotated sites yourself. If you have no content, there is nothing available to use as the basis for a match.

Glasp should then generate something like the following. You can click on someone from this display to query their existing public material. If you then want to follow that individual, their site should contain a “Follow” button.

Summary

I hope this is enough to get you started. You can use the link to my account to explore. It seems unlikely to me that Glasp will always be free. They must have development and infrastructure costs. For now, the company has offered an interesting approach that has grown in capability during the time I have used it.


GPS and GIS: Interpreting Data in Relationship to Place

I have long been interested in cell phone photography as a way students could learn about GPS and GIS. The capabilities of our phones in this regard are probably now taken for granted and consequently ignored as a learning opportunity. If anything, the capability of connecting the phone to a location marked in captured images may be considered a security risk rather than a capability to be applied when useful.

Global Positioning Systems (GPS) and Geographic Information Systems (GIS) allow an investigator to search for interpretations of data related to place. GPS uses the signals from multiple satellites to allow an individual with a GPS device (hardware) to determine the location of that device (place) in terms of precise latitude, longitude, and altitude. Put another way, the device allows you to determine exactly where you are standing on the earth, typically to within about 10 feet. You may be familiar with GPS navigation because you have GPS hardware installed in your car or your phone. These devices know where you are and can establish a route between where you are and where you would like to go. The most basic function of a GPS device is to locate itself in three-dimensional space (longitude, latitude, and altitude), but most devices also map that location (show it on a map), navigate, and store data (e.g., the coordinates of designated locations). Smartphones can use true GPS (the signals from satellites) but may also determine their location by triangulating the phone’s position relative to multiple cell phone towers. The triangulation process with cell towers is similar to the satellite-based process but less accurate.
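To give a concrete sense of what latitude/longitude coordinates support, here is a minimal Python sketch (my own illustration, not tied to any particular device) of the haversine formula, the standard way to compute the great-circle distance between two coordinate pairs on the earth's surface:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean radius of the earth

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points
    given in decimal degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

This is the kind of calculation a navigation app performs behind the scenes when it answers a question such as "how close am I to the nearest gas station?"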

GIS is a software tool that allows the user to see the relationship between “layers” of information. In most cases, one layer is a map. Other layers could be the presence or quantity of an amazingly diverse set of things— e.g., voters with different political preferences, cases of a specific disease, fast food stores, a growth of leafy spurge, or nitrate concentration in water. The general idea is to expose patterns in the data that allow the researcher to speculate about possible explanations. 

With the addition of specialized software, some GPS devices and smartphones provide similar capabilities. The device knows your location and can identify restaurants, gas stations, or entertainment options nearby. The field of location-based information is expanding at a rapid pace. One approach involves providing useful, location-specific information. For example, how close am I to the nearest gas station? A second approach allows the user to offer his or her location as information. Being able to locate a phone can be useful if the phone is lost and some applications allow parents to locate their children using the location of the phone the child is carrying. Sometimes, individuals want to make their present location available in case others on a designated list of “friends” may be in the vicinity providing the opportunity for a face-to-face meeting. Obviously, there can be significant privacy concerns related to sharing your location.

A great example of student use of GPS, GIS, and the Internet is the GLOBE program (http://www.globe.gov/). GLOBE is an international program led by a collaborative group of U.S. federal agencies (NOAA, NSF, NASA, EPA). Over 140 colleges and 10,000 schools from over 90 countries are also involved. GLOBE involves students in authentic projects led by scientists in the areas of air quality, land cover, water quality, soil characteristics, and atmospheric sciences. 

In the GLOBE program, students in classes taught by specially trained teachers work with scientists to collect data according to precisely defined protocols. The advantage to the scientists is the massive and distributed data collection system made available by the Internet. Data gathered from precise locations (identified with GPS) can be integrated (with GIS) on an international scale. Students have access to educational materials and learn by contributing to authentic projects.

The GLOBE projects are presented in ways that have local relevance and have been matched to K–12 standards. While the topics of study most closely address standards in the areas of math and science, the international scope of the project also involves students with world geography, diverse cultures, and several languages (the project home page is available in seven languages). The data are also available online, and groups of educators are encouraged to propose and pursue related projects.

Readily available software and hardware also allow educators to design projects that are not dependent on formal, large-scale programs. We have all become much more familiar with GPS devices, and many of us own navigation devices or phones that could be used in educational projects. Many digital cameras tag images with GPS coordinates. Once we have a way of determining location, we might then consider what data we can match to location. Fancy equipment is not always necessary. Sometimes the data are what we see. Do you know what a Dutch Elm tree looks like? Have any Dutch Elms in your community survived? Where are they located? There are also many easy ways to use location to attach data, in this case photos, to maps. For example, Google Photos offers some amazing capabilities. If you store cell phone pictures in Google Photos, try searching for a location (e.g., Chicago). Google Photos knows where some things are located (e.g., the Bean), but it will also return photos based on the embedded EXIF data that includes GPS information.

Probes you may already own: your phone and data collection

Your cell phone has an interesting feature. It can store the exact location at which each picture was taken, along with other information, in the same file that contains the image. These data are stored as part of EXIF (exchangeable image file format). You may know that some images are accompanied by information such as the camera used to take the picture, the aperture, the shutter speed, etc. This is EXIF data. The longitude and latitude (I can never remember which is which) can also be stored as EXIF data.
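EXIF stores GPS latitude and longitude as degree/minute/second values plus a reference letter (N/S or E/W). A minimal sketch of converting those values to the signed decimal degrees that mapping services expect (the function name is my own, and the sample coordinates only approximate the White House area):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds plus a
    reference letter (N/S or E/W) to signed decimal degrees;
    south and west become negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Roughly 38°53'52"N, 77°02'11"W -- near the White House fence.
lat = dms_to_decimal(38, 53, 52, "N")
lon = dms_to_decimal(77, 2, 11, "W")
print(round(lat, 4), round(lon, 4))  # 38.8978 -77.0364
```

Libraries such as Pillow can read the raw EXIF tags from an image file; the conversion above is the step that turns those tags into a point you can drop on a map.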

I have a record of my first experience exploring these capabilities. I was using a camera with GPS capabilities rather than a phone, but it is the personal insights about these capabilities that are relevant here. It was 2009, and my wife and I were in Washington, DC, for a conference. We spent some time visiting the local landmarks, and I took the following picture while we were standing at the fence that surrounds the White House. I think tourists would no longer be allowed in this location, but that was a different time.

I used the EXIF data to add the photo to Google Maps. In the following image, you can see the image and the mapped location in street view. At first the map information confused me – no White House. Then I realized we were standing on Pennsylvania Ave. on the other side of the fence, shooting through the trees to frame the picture. The pin marks where we stood, looking through the fence, over the flower bed, toward the White House. I have often said I have a different understanding of technology because I have always been a heavy tech user and experienced life as technological capabilities were added. I was there before and after, and thus have a sense of how things changed. When technological capabilities are already there, you often learn to use them without a need to understand what is happening and without the sense of amazement that can motivate you to seek understanding and creative applications.

Mapping photo collections with Google services

The following is a tutorial. The specific example that is the basis for the tutorial is not intended to be relevant to classroom use, but the example is authentic and the processes described should transfer. Now retired, my wife and I were wintering in Kauai when I decided to write this post. I do a lot of reading and writing in coffee shops, and I had decided to begin collecting photos of the various shops I frequented. Others would probably focus on tourist attractions, but coffee shops were the feature that attracted me.

Mapping photos in Google is best understood as using two interrelated Google services – Google Photos and Google MyMaps. If you are cost conscious and are not interested in advanced features or storing a lot of images, you can get away with the free tiers of these Google tools. The two-stage process involves first storing and isolating the images you want to map (Google Photos) and then importing this collection to be layered on a Google map (MyMaps).

Creating an album in Google Photos

The first step involves the creation of an album to isolate a subset of photos. In the following image, you should find a column of icons on the left-hand border. The icon within the red box is used to create an album.

Selecting this icon should then display the Create album button at the top of the display.

Name the new album

Now, return to your photos and, for each photo you want to map, use the drop-down menu to add that photo to the appropriate album.

Continue until you have identified all of the images you want to map.

MyMaps

Google MyMaps (https://www.google.com/maps/d/) provides a means to create multiple personal maps by layering content (e.g., images) on top of the basic Google map. Using the link provided here, sign in to your Google account and decide what you want to name your new personal map.

If you are adding images with GPS data, the process will automatically place each image you provide at the appropriate location. It makes sense to me to begin by moving the area I am interested in onto the screen. In this case, I am adding images to the island of Kauai.

The following image shows a panel you will see in several of the images that follow. The first use of this panel is to enter a name for the map I am creating.

The text box for entering the name is revealed by selecting the original value (Untitled map), which opens a box where you can add the name you intend.

The next step is to add the layer that will contain the photos on top of this map.

The approach I am taking is to add all of the images at once, and this is accomplished by referencing the Google Photos album that already exists. I select “coffee shops” from my albums.

MyMaps does not assume I intend to use all of the images in the designated album, so I must now select the images I want to add to the map. When finished, select “Insert”.

This generates the finished product.

MyMaps gives you a way to share your map with others. Try this link to see the map I have just created. Selecting one of the thumbnails appearing on the map should open a larger view. Give it a try. Without the password, your access should be “read only”.

Summary

Google Photos and Google MyMaps allow students to explore GPS and GIS. Images taken with smartphones can be added to a Google map, allowing authentic GIS projects. Care should be taken to understand how to turn the GPS location data attached to photos on and off.


A use for Obsidian Unlinked Mentions

Have you had the experience of coming across an application feature and wondering why a software designer decided to go to the trouble of creating and then shipping that feature? Somewhere I encountered a comment on an Obsidian feature called an Unlinked Mention. It took me some time to find it and then even more time to understand why it exists. I am still not certain how it is to be used and why there aren’t similar features that would be more useful. I have come up with one way it offers some value, so I will explain what seems like a hack and hope others find my description helpful in encouraging similar or additional uses.

Note: My description and proposed actions are based on Obsidian on a computer. Some of the actions I describe I could not get to work on my iPad. 

So, I think an unlinked mention is supposed to be understood as something like a backlink. In Obsidian, when you create a link between two notes (A – B), Obsidian recognizes but does not automatically display the backlink (B – A). For a given note (A), you can get Obsidian to display any backlinks to that note using the backlinks option in the right-hand panel of the Obsidian display. For the note that is active in the middle panel, the right-hand panel should indicate linked mentions and unlinked mentions. You may have to select which you want displayed, and it is possible nothing will be displayed for either option. The linked mentions are the backlinks, and you can select and display the backlinked notes from this display.

The unlinked mentions are other notes that contain the exact phrase you have used to title Note A. Who knew? Why? Maybe I never quite understood the power of a title or how my notes were supposed to be titled. I have tried to think about this and I still don’t get it.
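My rough mental model of the feature can be expressed in code. This is only an approximation of what Obsidian actually does (its matching logic is surely more sophisticated), using a made-up miniature "vault" represented as a dictionary of note titles and bodies:

```python
def unlinked_mentions(vault, title):
    """Return the names of notes whose body contains `title` as plain
    text without an explicit [[title]] wiki-link -- an approximation
    of what Obsidian lists as unlinked mentions for that note."""
    target = title.lower()
    hits = []
    for name, body in vault.items():
        lowered = body.lower()
        if name != title and target in lowered and f"[[{target}]]" not in lowered:
            hits.append(name)
    return hits

# A miniature vault: one blank "metacognition" note plus book highlights.
vault = {
    "metacognition": "",
    "Book A highlights": "Highlights on metacognition and planning.",
    "Book B highlights": "Already linked: [[metacognition]].",
    "Book C highlights": "Nothing relevant here.",
}
print(unlinked_mentions(vault, "metacognition"))  # ['Book A highlights']
```

The key point the sketch captures is that the match is driven entirely by the note's title appearing verbatim in other notes, which is why the hack below works.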

Here is my hack and, I think, a way to take advantage of unlinked mentions. Start with a blank note and add a title likely to appear within content you have stored in other notes. To be worth the effort, your word or phrase would have to be something you want to investigate. I used the word “metacognition” because this is an important concept in the applied cognitive psychology research I read and attempt to apply to educational uses of technology. I have notes about this concept, but the greatest value I found in this hack was taking advantage of all of the Kindle notes and highlights I had stored in Obsidian via Readwise. In my account, there are more than 200 books’ worth of notes and highlights, and the content for each book is often several pages long. I create notes myself as I read, but there is all of this additional content that may contain things I might find useful. Certainly, several of these books would contain content, especially highlights, focused on metacognition.

Once I have my new note with the simple title “metacognition” and look under unlinked mentions in the right-hand column for this note, I have lots of entries. At this point, my note is still blank, but I can now access many other mentions of metacognition from this list of unlinked mentions. If I select one of these mentions, a “link” button appears, and if I select this button, Obsidian generates a forward link in the A document (the one containing the mention) and adds the A document to my blank B document as a backlink. The B note is still blank.

Here comes the hack. One of the core plugins for Obsidian is called Backlinks (use the gear icon in the panel on the left), and it contains a toggle that will display backlinks at the bottom of a note (see the following image). Now you can display backlinks on your blank note that allow access to the unlinked content you have linked. See the second image below.

The process I have described is a way to generate a collection of links on a topic that would not be available without this hack. It is the process that finds specific mentions of a concept within much larger bodies of content (the highlights from Kindle books) that I find useful. Give it a try.


Why not ask for help? Have the benefits of technology-augmented studying been demonstrated?

I have written posts for Medium for a few months now. It is clear that some of my most popular posts concern note-taking and personal knowledge management. I have a history with the topic of note-taking, having conducted research with college students grounded in my more general background in the cognitive processing of learning. When I post, it is often about evaluating specific digital note-taking practices or knowledge management concepts against basic cognitive principles. What about how learning works justifies the specific practices that the self-help authors advocating smart/atomic notes or a second brain recommend?

I asked Dall-E to help come up with an image depicting the type of learning I had in mind. My prompt asked for an image showing an adult using a computer and note-taking to learn a skill that was something they had not studied in school. I decided I needed something specific so I requested an image focused on learning to bake bread.

As I have explained in some of these posts, I think some claims made for digital note-taking lack empirical support in the context to which the self-help writers propose their tactics apply. 

A couple of observations about the framework within which nearly all (maybe all) existing research was conceptualized. The research I am familiar with focuses on learning within a formal educational setting. Whether the participants are middle school or graduate students, note-taking is largely a practice for dealing with information inputs determined by others, with the goals for the learner being storing, understanding, and applying this information to examinations, projects, and papers assigned by others. The time frame, with perhaps the exception of licensing exams or graduate preliminary examinations, is at most a few weeks in length. Proposals such as Ahrens’s Smart Notes or Forte’s Second Brain propose unique tactics and imagine the use of notes over an extended period. Implications of these differences do not seem to be tested, or at least are not examined directly, by existing research.

The vocabulary of the multiple authors proposing new systems and tactics can be an issue by itself. I am trying to understand the difference between smart notes, atomic notes, and permanent notes. For example, Ahrens titles his book Smart Notes but then describes fleeting notes, literature notes, and permanent notes. There is a process here – fleeting notes can become permanent notes through a personalization process similar to what Forte, in his book about the second brain, called progressive summarization. I threw in personalization because that is what I call the process of rephrasing and emphasizing based on what the learner knows (again, similar to certain properties of progressive summarization). I think I should be able to apply labels if I think my label communicates meaning more clearly.

What am I looking for? I am searching for research literature that examines tactics used with these digital services as applied to learner-determined goals. Starting from a long-standing and nuanced literature defining the cognitive benefits associated with note-taking, note-reviewing, highlighting, basic memory, and application, what can be understood about self-directed learning? What basic descriptive data are available on the common use of the various affordances of digital services? What types of notes do users actually create? Do users make use of tags and links when they attempt to use the notes they have created, or do they simply search? Are notes reviewed periodically and new connections found, as recommended by the self-help gurus?

I have tried the various tools scholars use to explore the literature (Research Rabbit, Elicit, Google Scholar, Litmaps, etc.) with no luck. All I need is one or two quality studies of the type I have in mind, and finding related work should then be easy. Before I give up completely and decide advances in this area will proceed by logic and salesmanship, I decided maybe I should just ask for help. Maybe the wisdom of the crowd really exists. If you think you can provide a lead, please do so. I am not putting down those who imagine strategies for learning they think are unique and creative, but at some point I want to see the data. Am I missing something, or is there just nothing there? If there is nothing there, why is this the case?

References

Ahrens, S. (2022). _How to take smart notes: One simple technique to boost writing, learning and thinking_. Sönke Ahrens.

Forte, T. (2022). _Building a second brain: A proven method to organize your digital life and unlock your creative potential_. Atria Books.
