The following comments are based on a recent book by Lee McIntyre – Post-Truth. I suggest it as a useful read for any educator interested in information literacy, science denial, and similar topics. I think it supports my observation that teaching students to evaluate the credibility of the online resources they encounter is not enough to ensure they develop an accurate representation of their world. A broader approach to online literacy is required. The book offers some suggestions, but it is possibly most useful in identifying the multiple factors that have created the information environment in which we presently function.
I will not attempt to write a summary of the multiple issues McIntyre’s book identifies. The resulting text would be far too long for a blog post. Instead, I will outline some of the key factors the book identifies, as I interpret them. The result is more an outline of factors than a complete description. When I offer my own observations or comments, I will do so within brackets so my ideas can be differentiated from my summary of the author’s comments.
Factors creating the post-truth environment:
Personal factors – Asch conformity [influence of group position], confirmation bias [The author seems to propose a brain predisposed to support existing personal perspectives – I see this as consistent with a constructivist model of learning.]
Decline of mainstream media due to falling ad revenue and readers/viewers going elsewhere; financial difficulties reduce the number of actual reporters, and the shared information experience declines as readers/viewers no longer encounter a common set of stories.
Rise of 24/7 news, with much more time spent on opinion relative to actual news, the embrace of a perspective/bias, and the promotion of conflict to attract interest/viewers. Presenting both sides of a story when one side has little credibility confuses viewers and feeds personal biases – balance should not be confused with objectivity.
Social media – selection of friends, bias in what friends share, and bias in what the service feeds readers to suit their values.
The cautious approach of formal science – science is always questioning and suggesting that more research would be helpful, which misleads the public as to what is actually known.
Generation of purposeful falsehoods likely to be embraced by target group. Falsehoods given greater reach by sympathetic audience and bots.
Some ideas regarding what can be done
One remedy is heterogeneous group interaction – develop an appreciation for the scrutiny of others [This fits the value of argumentation]
It is important to call out, rather than ignore, lies no matter how obvious the falsehood.
Invest in a quality news source. Free is not free and not suited to meaningful learning.
Technology offers learners some study skill opportunities that were not available until recently. A vast literature investigating highlighting and notetaking exists, but few K-12 educators have been trained to help their students learn to use these study skills effectively. While some may offer advice on taking notes, highlighting has been largely ignored because marking up content intended for use by other students in the future was forbidden. Digital content eliminates this problem, but the opportunities content in digital form offers have been largely ignored.
My own familiarity with highlighting and notetaking goes back to the late 1970s and 1980s. My impression is that these study strategies were heavily investigated during that time frame because of the interest in generative strategies. Interest seemed to wane, but I sense a return of some of these ideas.
I recommend two recent sources:
Miyatsu, T., Nguyen, K., & McDaniel, M. A. (2018). Five popular study strategies: Their pitfalls and optimal implementations. Perspectives on Psychological Science, 13(3), 390-407.
Miyatsu and colleagues make an interesting point about study strategy research. They suggest that researchers have focused on developing new study techniques, but these new techniques have been largely ignored by students. Miyatsu and colleagues recommend that greater attention be focused on the study strategies students actually use and how these strategies might be optimized.
Highlighting and annotating (simplified notetaking) fit well with my interest in opportunities to apply online layering.
Here is a quick perspective on the highlighting and notetaking research.
The potential benefits of both techniques are approached as resulting from generative processing (activity while reading or listening) and from external storage (material that improves later review or studying). Of course, these are interrelated, as better highlighting and notetaking should improve later review (I will comment later on whether this relationship still holds). A quick summary might be that a) the benefit of notetaking appears to lie in review and b) the benefit of highlighting appears to lie in the generative act of highlighting itself. I cannot offer an explanation of why these strategies appear to work in different ways.
One further comment related to my reference to layering is that highlights and notes can be provided rather than generated by students. Providing highlights and annotations can benefit review and may be a way to teach a better generative approach. One finding of these more recent reviews of the literature is that K-12 students do not benefit from highlighting opportunities while college students do. This could be because younger students have not practiced the technique and, when given the opportunity, do not highlight in an effective way. They do benefit when important content is highlighted for them.
With notetaking, more generative strategies (paraphrasing vs. verbatim) improve the benefits of the notetaking process, but verbatim notes are more effective for external storage (review). I think this could be improved by using apps that allow notetaking while recording presentations. The notes taken within such apps are timestamped, allowing review of the original recorded content when the notes seem confusing. Students using this approach could also just enter a marker, e.g., ???, in their notes when confused, rather than overload working memory, and use this marker to return to that spot in the recording for more careful thought when studying. The notes could even be improved later using this same approach.
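To make the idea concrete, here is a minimal sketch of how timestamped notes with a confusion marker might work. The Note structure and the ??? marker are hypothetical illustrations of the general approach, not the data format of any particular notetaking app.

```python
# A minimal sketch (hypothetical data structure, not any specific app's API)
# of how timestamped notes could let a student jump back to confusing
# moments in a recorded lecture when studying.

from dataclasses import dataclass

@dataclass
class Note:
    timestamp_sec: float   # position in the recording when the note was taken
    text: str              # the note itself, possibly just a "???" confusion marker

def confusion_points(notes: list[Note], marker: str = "???") -> list[float]:
    """Return the recording timestamps the student flagged as confusing."""
    return [n.timestamp_sec for n in notes if marker in n.text]

# Usage: notes taken live during a lecture, reviewed later.
notes = [
    Note(95.0, "Working memory holds only a few chunks at once"),
    Note(312.5, "???"),   # too fast to paraphrase; flag it and move on
    Note(540.0, "Verbatim notes aid review (external storage)"),
]
print(confusion_points(notes))  # -> [312.5]: replay the recording from here while studying
```

The point of the design is that the student defers the hard thinking: flagging a confusing moment takes almost no working memory during the lecture, and the timestamp makes it cheap to revisit exactly that portion of the recording later.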
If you don’t have access to a college library, you may be unable to read the Miyatsu paper, but the second reference is online and offers some useful analysis.
I am now retired, but I still enjoy the beginning of a new semester. Every few days or so I check Amazon to see if anyone has purchased our textbook. Sales are nothing like earlier times when we were selling an expensive paper version through a publishing company, but the present circumstances are more about feeling relevant.
Our transition from a big publishing company to self-publishing was originally motivated by our interest in a different model for the college textbook. The motive was partially to offer a less expensive textbook (I called it the $29 textbook project), but also to offer a different approach more suited to the content – technology in education. We thought it ironic that our original project sought to develop classroom technology integration skills in educators using a book. We wanted to differentiate the content sources: a primer rather than a full book for the more static information, plus web content to improve the recency of information and to offer actual demonstrations and examples. We also thought it made more sense for authors to write continuously rather than intensely once every three years.

We went back and forth with our publishers for several years and could never get to the point of implementing our project. Book companies see efficiencies in using standardized tools and approaches. For example, they wanted to offer professionally shot and edited videos of a topic such as problem-based learning that they could use in several of their education books. These videos would not necessarily involve technology. We wanted to offer videos of the problem-based activities featuring the teachers who implemented them and whom we used as examples in our book. We wanted to create the opportunity for the educators who adopted our book to share among themselves. For example, we were promoting the idea of an “interactive syllabus” – just a web page serving as a course syllabus and linking to the tasks and resources used to augment assigned readings. There is no reason to treat the book authors as the experts when profs and students have their own experiences and tasks that could be shared through our Primer.
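For readers curious what an “interactive syllabus” might amount to in practice, here is a minimal sketch. It simply generates a web page that links each week’s assigned reading to related online tasks and resources; the topics, readings, and example.com URLs are hypothetical placeholders, not our actual course materials.

```python
# A minimal sketch of the "interactive syllabus" idea: a plain web page that
# serves as the course syllabus and links each week's assigned reading to the
# online tasks and resources that augment it. All content below is placeholder.

weeks = [
    {"topic": "Getting started with classroom technology",
     "reading": "Primer, Chapter 1",
     "links": {"demo": "https://example.com/ch1-demo",
               "task": "https://example.com/ch1-task"}},
    {"topic": "Problem-based learning with technology",
     "reading": "Primer, Chapter 2",
     "links": {"classroom video": "https://example.com/pbl-video"}},
]

def syllabus_html(weeks):
    """Build a simple HTML page listing weekly topics, readings, and resource links."""
    items = []
    for number, week in enumerate(weeks, start=1):
        anchors = ", ".join(f'<a href="{url}">{label}</a>'
                            for label, url in week["links"].items())
        items.append(f"<li><b>Week {number}: {week['topic']}</b><br>"
                     f"Reading: {week['reading']}<br>Resources: {anchors}</li>")
    return ("<html><body><h1>Course Syllabus</h1><ul>\n"
            + "\n".join(items) + "\n</ul></body></html>")

# Write the page; an instructor could post this anywhere students can reach it.
with open("syllabus.html", "w") as handle:
    handle.write(syllabus_html(weeks))
```

Nothing about this requires special tools, which was part of the appeal: any instructor able to post a web page could extend it with their own tasks and with student contributions.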
Eventually, we agreed our priorities were not compatible and after 5 successful editions we were given full control of the copyright on our book to do with what we wanted.
So, we became self-publishers and have tried to offer a scaled-down version of some of our ideas. We went from selling a $140 book and receiving royalties of 12% on the wholesale value to selling a $9 ebook and receiving 70% minus a fee based on the download size (a rough comparison is sketched below). One frustration is that the number of instructors adopting our book is limited and, while we don’t know for sure, the activity associated with our online content seems unrelated to use of the book. For example, one would expect to see content associated with early chapters used early in a semester and other indicators of that nature. We can only guess at why this is the case because we really don’t have a way to ask the adopting profs, as the book reps who contact profs for the commercial textbook companies would. I still think the diversity of resources and a closer link between book and supplemental content are good ideas, but we have found over the years that it can take time for new ideas to be implemented.
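For what it is worth, here is a rough back-of-envelope comparison of the per-copy royalties under the two arrangements. The list prices and royalty rates come from the paragraph above; the wholesale fraction and the download fee are my own assumptions, included only to make the arithmetic concrete.

```python
# Rough per-copy royalty comparison. List prices and royalty rates are from the
# post; the wholesale fraction and download fee below are assumed values.

publisher_list_price = 140.00
assumed_wholesale_fraction = 0.75          # assumption: wholesale value relative to list price
publisher_royalty = 0.12 * publisher_list_price * assumed_wholesale_fraction

ebook_list_price = 9.00
assumed_download_fee = 0.50                # assumption: per-copy delivery fee tied to file size
self_pub_royalty = 0.70 * ebook_list_price - assumed_download_fee

print(f"Publisher edition, per copy:    ${publisher_royalty:.2f}")   # $12.60 under these assumptions
print(f"Self-published ebook, per copy: ${self_pub_royalty:.2f}")    # $5.80 under these assumptions
```

Under these assumptions the ebook earns roughly half the per-copy royalty of the publisher edition, so the lower price point depends on volume or on motives other than income.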
Attempting to explain how learning and thinking work is not easy. After doing this for a living, I am still uncertain what is the best approach. We all know we know things and we all know we learn, but just how we did and do this is often not obvious. To help understand, I would suggest we make use of abstractions that simplify, but still have utility. To have value, a model or abstraction must be consistent with data (observations of how a process works) and be useful in suggesting testable practices.
I could use the jargon of my profession and its theoretical perspectives, but I propose that the concepts of thinking and model both meet the conditions I have identified. Thinking implies that learning requires mental work (at least most kinds of academic learning). Easy-to-understand attributes of thinking are helpful. The learner is the one who thinks (or not), so learning ultimately depends on the activity of the learner no matter the circumstances external to the learner. However, other people and experiences outside of ourselves can encourage us to think. We think more easily about things with which we are already familiar; hence, what we already know shapes how easily and how successfully we can think. The thinking we do that uses existing knowledge and skills can result in the modification and improvement of what we already know, and this is what most mean by learning. These are the basics.
What we accomplish by learning means several different things. We know things after learning that we did not know before. We can do things after learning that we did not do before. We acquire understanding and capabilities. We also acquire information which may or may not be part of understanding.
The idea of a model is helpful in thinking about both understanding and capabilities. The construction of models of understanding and models as actions is a natural process. By natural I mean we as individuals construct models continuously, whether to deal with the mysteries of daily life or because of formal education.

To help students understand and think about the commitment we continuously make to models, I like to ask students what they mean by the concept of a theory. A theory is one type of model. Students often use the word theory in a condescending way, perhaps dismissing the information (models) acquired in a course as theoretical, implying that theories are not useful. To the contrary, I claim, theories are how we think, and if we do not acquire theories from formal instruction, we make them up based on personal experience. A theory, personal or formal, is the abstraction we use to understand, to predict, and to decide on action when we encounter a unique life experience – pretty much any new input from the world.

What is kind of cool (interesting) is that individuals are quite capable of storing contradictory or at least inconsistent personal and formal theories about the same phenomena. In other words, we may develop one interpretation of a certain kind of situation based on our life experiences that is different from an interpretation we are taught. How can this happen and what can be done about it? This can happen because an external experience activates one internal model or the other. This is one of the frustrations of education. We can help learners acquire a model of the way something works in the world. They can understand this model and use it effectively within the classroom setting, but they can still revert to their own primitive model of how something works when encountering circumstances in their daily life to which the more formal theory should have been applied.
This was a very long introduction to get to the core issue. What are the models educators use to guide their work in helping students learn? This could be a question of whether educators activate formal models or personal models to guide their practice. Given what I have said about formal models and naive models (the term applied to models built from field experience without the use of formal guidance), this could be a great topic to consider. I will have to save a discussion of this distinction for another time. Here, I want to share my personal bias when it comes to the utility of several formal theories.
Models of learning
Somewhere in the preparation of educators, most are exposed to multiple models of learning. At the least, they have probably been told about behaviorism, cognition, and constructivism. Recently, some preparation experiences may include some biology – brain structure and function. Certainly, biology and biochemistry have the potential to describe learning most accurately. I think an important issue is whether a more accurate description advances education or not. My personal opinion at this point in time is that our understanding of how the brain functions in learning is rather crude, and I am aware of very little that improves on what other models describe and explain. I have an undergraduate major in biology, but that was a long time ago, and what I know now is roughly the brain biology included in your average Introduction to Psychology textbook. The one exception I can think of to my claim that there is nothing unique about a biological perspective is neural plasticity – the finding that long-term experiences of a given type can restructure the brain to predispose individuals to different patterns of mental behavior. I believe this idea is helpful. However, the interpretation of this phenomenon within education has also been generalized in ways that I think are inaccurate and certainly not a basis for significant changes in practice.
Here is my short list of models (actually categories of models) of learning and a very brief comment on what I see as the core mechanism of each.
Behaviorism – emphasis on external events and consequences that increase and decrease the frequency of behaviors.
Cognitive – constructivist (macro) and information processing (micro) – mental activity under the control of the learner. Thinking develops internal models.
Biological – chemical and biological action and storage (internal). Learning results in changes in the brain (vague) and must be accomplished by a combination of chemical and electrochemical actions taking place in physical structures, some of which are specialized to accomplish certain things.
I find the concept of fidelity useful in understanding learning. Fidelity could be defined as the exactness of fit between a model and the actual thing or process. One might think that the more exact the fit, the better. We have learned from research on the use of simulations in learning that this is not the case. With simulations, in the early stages of learning, too much realism (match) can overload, confuse, and in some cases produce unproductive emotions. For example, the training of pilots does not begin by putting a novice in the pilot’s seat and letting him/her explore. The experience would be overwhelming and certainly terrifying. Typically, training makes use of experiences in simulators that simplify the experience to a limited number of actions and possible reactions. Using various techniques and equipment, more and more realism and experiences are added until the more experienced individual can deal with the emotions and complexity of full application.
I see a similarity in the usefulness of models of learning. Behaviorism offers little insight into mental behavior (I think supporters leave that to the biological researchers) and is really more a model of instruction (manipulation of external events). I regard behavioral models as useful for understanding and investigating incentives. I see biological models as eventually having the potential for high fidelity, but at present these models are mostly descriptive. At best, the future might provide a level of understanding that encourages practices by finding out how to produce a given combination of chemical and anatomical conditions. I see the cognitive models as most useful, but differing in level of fidelity, with information processing models offering a more detailed level of process clarity. Constructivism offers a broad perspective, but may or may not be sufficient to propose useful interventions. Especially when what seems like a productive process is not, analysis based on information processing models is often useful.
Models of learning, models of instruction
Comparisons of approaches generated from models of how learning happens are important. Approaches may differ in the external events created, but any event allows “thinking” by learners. The relative effectiveness of different approaches is what matters. Books, lectures, worksheets, life experiences, or tasks of one format or another all offer some type of input that learners will attempt to process. The capacity to point to idiosyncratic cases of students who learned from this or that experience is not really justification for much of anything. It is the relative productivity of one approach in contrast to another, within defined requirements for a common set of learning circumstances (group size, time allowed, variations in past experience, etc.), that provides the basis for application.
Arguing that one model of instruction based on a given model of learning is superior seems pointless without data allowing those who must evaluate these claims to do so. Models offer different ways to think about learning. These can be helpful in the design of learning experiences, but ultimately it is the response of learners exposed to these experiences that matters.
[Image purchased through the Noun Project]
I have been reading a lot lately about computational thinking and trying to decide what I think about this concept and exactly what it is. To be clear, computational thinking as an educational goal and learning to program as an educational goal should be differentiated. “Should” is my opinion, but I think it is fair to suggest that, as goals, these concepts are about accomplishing different things.
My thinking on “computational thinking” goes back a long way. I read all of the Seymour Papert books and included a chapter on programming in the book my wife and I have written for many years. I understood that Papert had a broad vision of what programming could accomplish, but I also read the research challenging the position that programming experiences provided as an educational activity accomplished what I now see described as “computational thinking”. It is not just me making this connection (see this recent description of Papert’s work), and it is not just me who thinks the “coding to develop skills other than coding skills” position is not really that well supported.
In thinking about the recent resurgence of the notion that learning to code offers broad benefits, I have identified what I see as important and interrelated sub-issues. The first is whether a notion such as computational thinking adds anything to the existing prioritization of “higher order” thinking skills. When you get right down to it, both computational thinking and higher order thinking are abstract and likely interpreted in different ways by different people. The issue, it seems to me, is that administrators and practitioners buy into a simplistic version of “computational thinking” not necessarily, and hopefully not, intended by the theorists/researchers and maybe even the evangelists promoting this cause. This simplification may encourage unsuccessful implementations in which the intended skills are not developed and not even taught.
I have two issues related to implementations of what Papert decided to call “constructionism”. Note that constructionism as a philosophy/learning theory (or whatever it should be called) is not the same as constructivism. The constructs can be related, but constructivism is broader and has a different focus. Constructivism argues that individuals create their own understanding. This is mental activity, and exactly how it works is vaguely defined beyond suggesting that learning amounts to bringing external experiences into contact with existing knowledge, potentially resulting in the modification of that knowledge. Simply put – bringing together amounts to thinking. No one can think for you; hence you ultimately learn only if you make the effort to think. Constructivism avoids the details. I rely on the understanding of thinking as cognition to handle the details. I don’t see cognition and constructivism as necessarily at odds.
Constructionism (Papert’s idea) emphasized external building (coding) to influence internal thinking. What counts as external building is unclear to me. Those promoting the broader concept of making expand the notion. I also think it obvious that external activity does not necessarily equate to productive thinking. You can certainly have thinking without building (I hope you think when reading this post) and you can have building without thinking (my favorite example is the amount of chemistry we learn from baking).
I have two questions when I think about constructionism and computational thinking.
Question 1: Are these skills and the suggested methods for developing them really new? I don’t think I really know, but I have an opinion. The question matters because we should not be starting over if we already know important things about what it takes. I apply this question in considering two concepts – making (Papert’s constructionism) and computational thinking.
The maker movement reminds me of all of the past work on generative learning. Generative learning was a broader concept (in my opinion) suggesting that it may be productive for some learners to be engaged in external activities to encourage productive thinking behaviors. For example, if learning results from the integration of existing knowledge with new information/experiences, we have long asked learners questions to encourage them to make the effort to find such a connection. Being prompted with a question may seem very different from making the effort to create a program that makes the computer do a specific thing, but both are external tasks intended to encourage productive mental behaviors. What have we learned from research on many types of tasks about when the tasks are actually generative and when they are not?
Higher order thinking is not new. Problem-solving, as an example, is not a unitary process. Many efforts have been made to identify the components of problem-solving, to propose how these component skills might be developed, to evaluate whether these efforts to encourage learning tend to be successful, and to determine whether development in one domain transfers to others. Note that the argument for developing computational thinking for all assumes transfer. It is more than developing programming skills. It seems to me that the subskills and dispositions associated with computational thinking have yet to receive nearly as much attention as that devoted to problem-solving, but I see planning, understanding that debugging is not failure, abstraction and instantiation, problem identification, and problem-solving frequently mentioned. Are these skills unique to coding, and is coding really the most efficient way to develop these skills within the school environment? Do we know anything relevant from past work?
Question 2: Why the enthusiasm for unproven ideas (coding to develop computational thinking, computational thinking as unique from other ways to describe higher order thinking) and why the superficial attempts to implement?
Regarding the enthusiasm for these “new” ideas, here are a couple of ideas that may apply to administrators and classroom educators.
I have been reading Dan Lyons’s recent book Lab Rats. The book criticizes the misapplication of processes and ways of thinking that have resulted from the digital economy. One of the issues Lyons focuses on is the willingness of management to move from unproven practice to unproven practice, resulting in serious disruption to organizations and to employees. Why? Lyons proposes that these companies exist within a fearful environment. Change and competition produce uncertainty and fear of what the future might bring. There is a sense of urgency that something must happen, making decision makers easy prey for evangelists touting this or that new approach. At least trying something new gives the impression of doing something.
My second thought might be described as “who sweats the details?” A related position is that we all sweat different details. Careful reading of Papert or Aspinall (a new computational thinking evangelist) and the existing relevant research should produce a careful and realistic picture of what classroom application would look like. I have had the time and the inclination to review this content at this level. I can sweat these details as a function of my professional responsibilities. Few classroom teachers can make this commitment because they must sweat other details. The danger that can result from these different emphases is a resistance to considering what these two different professions (academic and practitioner) require and a reluctance to appreciate what each offers. I don’t pretend to be able to tell a fourth-grade teacher working with a group of students I have never met how to do many things teachers must do on a daily basis. However, I feel perfectly justified telling this teacher that a couple of experiences allowing students to explore Scratch or Ozobot programming will result in no benefit to higher order thinking. At best, I think I could explain the basics of the experiences that might have a chance of being successful, but even then I am doubtful of transfer to most “problem solving” tasks these students will encounter. The teacher would then have to decide whether he/she feels it practical to make the adjustments necessary to meet these basics.
Is this egotistical? Some might think so, but I think it reflects an honest description of what different professionals have committed to do. There would certainly be exceptions, but individuals would need to be brutally honest in considering whether they qualify as an exception or not.