Read laterally or the checklist is not enough

I am in the midst of a book update and am sharing some issues here as I write about them elsewhere. This recommendation is related to the development of online content evaluation skills.

We have long advocated that learners be aware of certain characteristics of online content they should consider to determine the quality of a resource. Many, including me, have offered a checklist of things to check.

However, bad faith content creators have become more sophisticated, making the items to check about a source of less use. Wineburg and colleagues now argue that online content that seems questionable should be evaluated using the techniques of fact checkers. Wineburg describes a productive strategy as lateral reading – open a few extra tabs in your browser and search for additional information related to claims you question. I can see a classroom demonstration of this skill making use of a projector: after reading a document together, open a few tabs and ask students what issues they wonder about and what search terms they might recommend for cross-checking.

Mike Caulfield offers an online resource with a large section explaining lateral reading strategies.


Do students cyberbully each other while in school?

I am doing the background research necessary to update our textbook. I see a textbook for preservice educators and educators updating their skills as a combination of explaining relevant research and classroom practices that may expand existing learner experiences and offering suggestions for how this background might be applied.

When you do a review of this type, certain things have a tendency to jump out. One category of such revelations would be those that seem to represent reversals of what you claimed earlier. If my perception of such seeming reversals is correct, I would be misleading readers if I did not adjust what I claimed in the previous edition.

Here is a description of one such inconsistency. As with so many educational issues, writers have to consider the evidence carefully to sort out what the research actually says. For example, are the methodologies of the studies or the data the cause of what seem to be contradictory findings? Since a textbook is a secondary source involving a great deal of integration and summarization, what is the best way to fairly describe what is known so others can make informed decisions?

Here is an example.

The issue that concerns me involves cyberbullying and what role schools should play in addressing this problem. When I wrote the last edition of our textbook, I used research from Wolak, Mitchell, and Finkelhor (2007) that provided data on the locations of bullies who were cyberbullying their peers. This study is somewhat dated, but the data on the location from which the harassment occurred were precise and relevant. Why? Well, kids likely cyberbully kids they know through school, BUT they were rarely reported as doing so from school. Why does this matter? Issues came up when schools took action against the bullies, with parents and lawyers claiming that this was not an issue for schools to address. As a consequence, schools could certainly have programs to discuss the general problem, but were reluctant to address specific students who were identified as being involved.

A recent study based on the observations of classroom teachers (Vega & Robb, 2019) seems to make a different claim. The teachers responding to this national survey claimed that they were aware of cyberbullying originating within the school. I quote from this study below.

Approximately one out of 10 teachers (13 percent) said that cyberbullying occurred in their classrooms “frequently” or “very frequently,” and three out of 10 (34 percent) said it occurred at least “occasionally.”

Now, the teacher responses to survey questions are a little difficult to interpret definitively. Did the teachers actually see comments on computers or phones, or did they just hear about incidents from their interactions with students? Legally, it probably makes a difference. Wolak et al. (2007) asked precisely about location.

Is this distinction worth writing about? I am still trying to decide.

Vega, V., & Robb, M. B. (2019). The Common Sense census: Inside the 21st-century classroom. Retrieved from https://www.commonsensemedia.org/sites/default/files/uploads/research/2019-educator-census-inside-the-21st-century-classroom_1.pdf

Wolak, J., Mitchell, K. J., & Finkelhor, D. (2007). Does online harassment constitute bullying? An exploration of online harassment by known peers and online-only contacts. Journal of Adolescent Health, 41(6), S51-S58.


Complex things cannot be totally simplified

I consider myself a fairly experienced tech person. I used to make heavy use of tech including running my own server and writing the code necessary to conduct my research. Now, in retirement, I consider myself a tech writer. This background still does not mean all tech projects come easily to me. Such is my present situation.

Given the current state of the online world and our (my wife's and my) heavy involvement in it, we need to take things seriously. We finally decided to improve our security by purchasing a password manager and moving to more complex passwords. I am still in the process of trying to create a completely successful implementation.

My struggles stem from a couple of things. First, I am an impulsive problem solver. When something doesn’t work, I try something else. I often forget which of several attempts was successful, making it no easier to solve the same problem a second time. Second, I have created for myself a very complex tech environment. Here are the factors that seem to be relevant to my present challenge:

  • Within a couple of weeks, I use 7 different devices (tablets, phone, computers)
  • These devices use multiple operating systems – MacOS, iOS, Chrome OS
  • On these multiple devices I use four different browsers (Chrome, Firefox, Safari, Brave)
  • I have probably 30 or so online accounts. Some I use frequently and some seldom. Sometimes I know the login and password and sometimes I do not. I often rely on stored passwords for those accounts I cannot enter from memory.

The concept of a password manager is easy enough to understand. You open a service and go to the place to change the password. You create and store a more complex password in the password manager, then copy and paste this new password into the account you want to update and save the change. You should then be able to use this service from any other device on which you have installed the password manager, since the password manager shares the new passwords you have created across your devices.
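To make the idea of a “more complex password” a bit more concrete, here is a minimal Python sketch of the kind of random generation a password manager performs when it suggests a new password. The length and character set are my own illustrative choices, not those of any particular product.

    import secrets
    import string

    def generate_password(length=20):
        """Return a random password drawn from letters, digits, and punctuation."""
        # Illustrative character set and length; real password managers let you adjust both.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        # The secrets module uses a cryptographically strong random source,
        # which is what makes the generated password hard to guess.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # paste the result into the site's change-password form

The generation step is the easy part; as described below, the real complications come from where the resulting passwords end up being stored.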

I have decided the problems I am having are due to the technology trying to be too helpful. Depending on how you have set up your operating system and browsers, your passwords can be stored and autofilled. Sometimes this information is stored in multiple places on the same machine and shared across machines. So, your operating system may store this information and a browser on the same device may also store the passwords. Changing the password for a site and storing this adjustment in the password manager does not necessarily change the stored passwords on your device. In some cases, I had probably 50 stored passwords associated with a browser. I could delete them all and turn off autofill, but my original plan was not to change every password in the password manager, partly because I have relied on the stored passwords in various browsers. I hope this makes sense and explains how this process can end up being more complicated than those who promote password managers make it sound.

My present strategy has been to delete the existing stored information on my devices for the specific sites I want to give more complex passwords. The new passwords stored by the password manager are then saved and shared among my multiple devices. This process has to be repeated for each browser.

A password manager and complex passwords are good ideas. I certainly encourage you to make the effort. Depending on how complex your own tech world happens to be, you may have an easy time making the adjustment or be in for considerable troubleshooting. If you have created something that approximates my own situation, the one suggestion I would offer is to be aware of the multiple ways old passwords may be stored on your devices.


Think like a practitioner – the generalization of an idea


I would suggest that educators have become familiar with an example of the concept of thinking like a practitioner but may be missing the bigger picture. The example presently in vogue is coding (programming), as in the proposal that computational thinking represents a generalizable set of thinking skills (procedural skills) applied by programmers and possibly developed through learning to program at an age- and experience-appropriate level.

Before schools dive in head first, it might be prudent to explore comparable areas of practice that could develop important knowledge and thinking skills (I like to use the term procedural skills). What would be the most productive and efficient uses of this concept in classroom settings? If time and resources are limited, what types of practice (as in the activity of a category of practitioner) should be prioritized?

Do we have other practitioner experiences in classrooms? I have been thinking about this question for some time. In my thinking, I have found it useful to differentiate knowledge from skills. What is it a programmer knows? What is it a programmer has to do to program? Substitute a different practitioner for programmer and consider the knowledge and skill distinction I have identified. Do students exploring these other areas of practice develop important knowledge and procedural skills?

I started thinking about this when exposed to the training of a group of practitioners I knew little about – historians. I was not a fan of the study of history as I experienced it in high school. At a later point in life, I learned a little about the training of historians and became familiar with what I, as a trained research psychologist, would call a “methods” course. Future historians took a course I recall being called “The historians’ craft”. Essentially this course developed the skills and expectations by which the historian turns sources or data (photographs, diaries, interviews, etc.) into explanations of historical phenomena. How do you maintain objectivity? How do you identify trends in causation that can be differentiated from the perspectives of the individuals offering the artifacts being examined? Is it possible there are multiple accounts of history that are legitimate?

What historians do is more than learn about what other historians have concluded. What others have written might be described as background knowledge. What historians do is acquire background knowledge and combine it, through rigorous thinking and careful data acquisition procedures, to create more advanced and accurate accounts of the past. I spent my career as an educational researcher (psychologist) and was additionally trained at the undergraduate level as a biologist. I started to appreciate that all of these areas of practice involve considerable overlap when it comes to the knowledge base that must be acquired and the practitioner-specific data collection and analysis skills that must be applied.

My point – I think most professions at a core level involve knowledge and skill development. When it comes to generalizability, is computational thinking superior to historical thinking? Since we expect students to take several history courses already, perhaps by including opportunities to “do history” we might develop some very important critical thinking skills. How do you avoid bias in personal thinking? How do you come to a conclusion that reflects multiple perspectives? We could certainly use citizens with skills such as these. How do these skills stack up against skills such as breaking a problem down into subcomponents and algorithmic thinking (claims made for the type of thinking developed via programming)?

I think the “think like a practitioner” approach works well for some practitioners you might not consider. How about think like an author? Yes, everyone is “taught” to write, but actually writing to communicate is different. Consider that writing to inform often requires that you learn about a topic. For example, I am not making up what I am writing here. I have learned about topics such as “authentic tasks” and the training of historians in order to communicate through the procedural skills of writing. Writing to communicate also involves higher order thinking skills of multiple types. Like programming, it requires identification and organization of the parts of a whole. It involves the exercise of critical thinking. Given the multiple and often conflicting positions on an issue, what position can I best take and defend?

There is an instructional argument for the value of “writing to learn” that is consistent with the development of writing skills, thinking skills, and cross-curricular content knowledge. Whatever happened to students spending time writing to learn? It is not an “in” approach at the moment, but it will probably resurface when coding to learn fades.

How about teaching? Teachers are certainly a category of practitioners. Teachers must develop both content knowledge and pedagogical knowledge (procedural skills). Some of these procedural skills involve explaining and developing a different type of understanding than is required by just a primitive level of knowing. Teaching to learn is not really more difficult to imagine than writing to learn and offers many of the same benefits. Both have general utility across content areas at a level of frequency of possible use I just don’t see for programming.

This post is getting long, but I hope you can see where I am going with this. What would a scalable level of functioning like a biologist look like? Students do labs, but how closely does a carefully orchestrated lab experience compare to what a biologist actually does? What would “authentic” research look like for any area of practice that can be associated with a course-specific content area?

Etc.

Etc.

I think it possible we have become fixated on programming because it is a new content area for K12, seems directly associated with a profession that is seen as lucrative, and seems to offer unique potential. I don’t see programming as offering unique potential or necessarily developing cognitive skills that are unique. In my opinion, what programming does offer is a ready-made scalable practitioner experience, and this is attractive. Kids can code in Scratch. Kids can code simple robots. Students can take complete programming courses at the secondary level. If these opportunities get educators and administrators excited, I wish they would widen their vision a bit, appreciate the similarities of logic I have identified, and recognize the practitioner opportunities that could be associated with many existing courses/content areas.

Note: If this perspective is of interest, some of my original thinking was seeded by the following article.

Brown, J., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32–42.
