AI reduces skill learning

When a technology offers both advantages and disadvantages, the decision-making process can be quite complicated, especially when oversight cannot be guaranteed. For example, many states now ban cell phones in schools, making a use such as telling parents that the schedule for after-school activities has changed difficult. The advantages and disadvantages vary with the field of application, and my interests have mainly been focused on education. Just to be clear, by this I mean learning in general, not just the type of learning that occurs under supervision or is associated with educational institutions.

The generic educational situation that raises concern involves tasks undertaken to encourage both skill development and knowledge, and includes a requirement that the existence of some product demonstrates the task has been attempted. In educational settings, such products might result from homework or class activities, or simply from visible demonstrations of activity. The issue with AI is that in many cases, such as problem sets or documents of various types, these same products could be generated by AI, avoiding the cognitive activity of the learners. The phrase “cognitive offloading” has been used to describe this alternative form of product creation. Teachers might simply call it cheating. Cognitive offloading itself can be a desirable or undesirable option, requiring decisions regarding when it is appropriate and efficient, and when it is a detriment.

While cognitive offloading to avoid learning tasks seems an obvious problem, little actual research exists to demonstrate the damage done. Some would argue that if technology can replace an activity and that technology is readily available, why bother to “learn” the skill in the first place? Why learn information if your cellphone allows you to search for it when needed? Why learn basic calculation skills when your cellphone can also perform mathematical operations? There are responses to these challenges, sometimes offered by students or parents, but this analysis would take this post in a direction I did not intend.

Here, I want to focus on learning to write and writing to learn by discussing a different learning task. This may sound unnecessary, but at present, there is a reason to take this approach. The justification for being indirect is that writing is a complex skill consisting of multiple subskills, and we learn to become competent at even a basic level over years, not weeks or hours. We are investigating an alternative to the traditional methods of instruction that can be subverted now, and we cannot rely on experience to help us evaluate and tease apart how the development of subskills is impacted. The insights and evidence of the potential damage done would take too long to emerge. As one perspective, consider the lingering impact of COVID on learning. What about the move to online learning did we not anticipate, and what consequences are we still trying to mitigate?

AI in Learning to Code

Shen and Tamkin had an opportunity to investigate the impact of AI with adult programmers learning to make use of a new library. Think of a library as a collection of functions (tools to perform specific and commonly used tasks). Instead of having to write code to accomplish common tasks each time a programmer encounters a need, libraries allow programmers to call prewritten code snippets. It takes some work to make use of a library – what functions are available, how do you call the function you want, what inputs and outputs are involved and how are these integrated with the code you write yourself? The researchers recognized that the learning coders had to do to make use of a new library provided an opportunity to study how AI could help and hinder learning a complex process. 
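To make the library idea concrete, here is a small Python sketch of my own (the names `mean_by_hand` and `scores` are hypothetical illustrations, not from the study). Without a library, the programmer writes and debugs every step; with one, the work shifts to learning the interface: what the function is called, what inputs it expects, and what it returns.

```python
import statistics  # a prewritten library from Python's standard distribution


def mean_by_hand(values):
    """Compute an average without library help: the programmer
    writes, and must debug, every step."""
    total = 0
    for v in values:
        total += v
    return total / len(values)


scores = [88, 92, 79]
hand = mean_by_hand(scores)       # code the learner wrote and must maintain
lib = statistics.mean(scores)     # prewritten, already-tested library code
print(hand, lib)                  # both approaches produce the same average
```

The tradeoff Shen and Tamkin exploit is visible even here: calling `statistics.mean` is faster and safer, but the learner who never writes the loop gets no practice with the steps (and the debugging) the library hides.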

Shen and Tamkin studied actual programmers as they worked to learn a new library. They suggested that the process be viewed as a tutorial including both background information and simple programming tasks. Programmers were assigned to a control and a treatment group, with the treatment group having access to AI. The learning phase concluded with an assessment evaluating multiple concepts and skills. Video of treatment group participants was collected to document how each individual used AI and worked on the programming exercises.  

The researchers found that the two groups did not differ significantly in the time spent learning, which they found surprising. On the post-test, the largest group differences were in debugging skills. Smaller skill differences were found for code reading and conceptual understanding. Those without access to AI made more coding errors on the practice tasks, spent more time practicing debugging, and ended up with better skills on the outcome evaluation. How AI was used differed greatly, with some participants simply asking AI to solve the coding challenges and others asking only higher-level questions of the AI tool. Some users had the AI tool solve the coding challenges and then retyped the solutions themselves (rather than copying and pasting). This was not an effective strategy.

Generalizing from the coding study

I have spent considerable time both coding and writing and I have always found the processes to have similarities. While others may find this a strange observation, I have always said that coding and writing were the two professional tasks I learned I could not perform later in the evening if I wanted to get a good night’s sleep. Reading was fine. Grading was fine. Something about both coding and writing was cognitively stimulating, making it difficult to sleep. 

The application of AI to complex skills is interesting, but difficult to study. Clearly, a single skill would seem very unlikely to be developed if a learner could completely substitute AI for practicing the skill. However, it seems possible that learning a multiple-component skill such as reading or coding might benefit from replacing specific components with AI under certain circumstances. We have limited cognitive capacity and substitution for some components of a complex task could allow the remaining components to receive more attention until well learned.

Learning to write might represent an example. I have often referred to Flower and Hayes’ writing process model when describing the components of writing and writing to learn. The use of AI to offer content as the basis for a writing task, and perhaps even to suggest a structure to guide the organization of a writing product, could free up capacity to focus on lower-level skills such as spelling, grammar, and coherent paragraphs. In contrast, I typically use Grammarly while I write to allow me to move more quickly while relying on this AI tool to alert me to possible spelling and grammatical improvements.

A related perspective comes from Shen and Tamkin’s qualitative observations of how learners focused their AI use and how those differences related to what was or was not learned. Debugging is an important lower-level coding skill, and having AI debug code appeared to limit a coder’s ability to debug when working without AI.

Suggestions for Learning to Write and Writing to Learn

AI can support both “learning to write” (developing writing skill) and “writing to learn” (using writing to deepen understanding), but best practices should differ depending on which writing skills are the goal.

Learning to write: skill development

Here AI should be thought of as a coach, not a ghostwriter.

Emphasize feedback: Tools like Grammarly give immediate feedback on grammar, syntax, cohesion, and organization, helping students revise iteratively while concepts are still fresh.

Structure and separate subprocesses: Generative tools can help students brainstorm ideas, outline structures, or identify expectations for different types of writing (e.g., sample introductions, transitions).

Process-first policies: “Write first, AI second” approaches ask students to draft independently, then use AI for critique and revision. When coders used AI in the Shen and Tamkin study, this was the general theme that seemed most successful.

Writing to learn: thinking with text

When the goal is conceptual understanding of content knowledge, AI is best used to amplify reflection, not replace it.

Clarifying concepts for the writer: Students can ask AI to reexplain readings, generate examples, or pose practice questions, then respond in their own words, using writing as a space to consolidate understanding.

Challenge personal understanding: AI can generate counterarguments, alternative explanations, or “what if” scenarios that students must address in writing, pushing them beyond summary toward analysis. Why do others disagree with the summary I am creating? What can I offer to support my position and what are the limitations of the alternative?

Shared design principles

Some guidelines apply to both goals for writing. Across both purposes, similar design choices matter.

Make process visible: Require artifacts – notes, outlines, draft histories, and brief process memos about when and how AI was used. Document the transition from any use of AI to products the student has generated.

Align AI roles with goals: For skills (learning to write), let AI focus on feedback, exemplars, and mechanics; for content learning (writing to learn), keep generative help outside the main composing space and treat it as a prompt.

Previous analysis of technology and the writing process

Sources:

Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition & Communication, 32(4), 365-387.

Shen, J., & Tamkin, A. (2026). How AI impacts skill formation. arXiv preprint arXiv:2601.20245. (This study has not yet been formally published.)


The Rise and Fall of the Twitter EdChat

For over a decade, the hashtag #Edchat allowed educators using Twitter to gather, share, and commiserate. However, as the platform formerly known as Twitter transitioned into X, the landscape of this digital discussion site shifted dramatically. Drawing from recent research, this post explores how educators have used #Edchat over time, the stressors inherent in this social media use, and the history of a community in transition. Researchers understand that for a variety of reasons, Twitter and Twitter chats are far less influential than they were a few years ago. They argue that studying the edchat phenomenon historically may have value for other social media platforms, especially any that hope to involve educators.

The Golden Era of #Edchat: Purpose and Participation

In its prime, #Edchat provided a means for informal professional development. Research by Willet (2019) and later Willet and colleagues analyzed hundreds of thousands of tweets to understand exactly how and why educators were using the platform. The study identified several key types of engagement:

  • Resource Sharing: The most common use case, where teachers curated and distributed lesson plans, EdTech tools, links, and articles.
  • Pedagogical Debate: Scheduled weekly chats allowed for deep dives into specific topics, from classroom management to the integration of AI.
  • Social Support: Perhaps most importantly, it provided a space for “digital social support,” helping teachers feel less isolated in their professional struggles.

This era was defined by a sense of “augmented intelligence,” in which the collective knowledge of the network enhanced individual teachers’ expertise.

2008–2014: The Golden Era of Synchronous Twitter Connection

The #Edchat phenomenon began in October 2008. This early era was defined by the weekly Tuesday night chat, a highly structured synchronous event that became a must for thousands of educators. An agreement on Tuesday night did not result from any official declaration, but once started, Tuesday night became the default for those wanting to participate in a common chat. 

I encourage anyone interested in the topic of teacher use of social media to read the two references to Willet and colleagues I provide here. These researchers had access to what I have heard described as the Twitter firehose, which was available until 2023, and downloaded over 15 million tweets containing the #edchat hashtag across 15 years for analysis. Unlike researchers who implemented projects of a much smaller scale and made use of human raters to classify and quantify characteristics of such chats, Willet and colleagues used data analysis tools that quantified specific characteristics (questions, responses, links, secondary tags, retweets) and even tools that attempted to identify themes based on terms appearing in the tweets. These characteristics were mapped against years to identify trends.

This approach allowed the researchers to address questions that could only be answered at this massive scale. What trends could be observed over the history of the edchat phenomenon that might otherwise have been overlooked in the data? For example, the way edchats evolved interacted with the capabilities of the Twitter platform. Tags were a user-applied innovation that was later integrated into the tool as a capability. Tuesday night became the impromptu time for edchats, which took on a formalized approach. A chat leader would provide a series of prompts identified as Q1, Q2, Q…, and participants would respond using R1, R2, R…. Other tweets could be added within the rough synchronous time frame defined by the prompts. Because chats were stored, others might review the session at their leisure and add their own contributions.

Just to be clear for those unfamiliar with the reason for this format, this experience was based on a kludge of sorts. By searching for #edchat, you could follow the sequence of questions, responses, and related comments in real time, separate from other Twitter chatter. The hashtag functioned as a sort of portal, focusing Twitter use on the chat content and turning Twitter from an asynchronous into a synchronous tool. Tools other than Twitter, such as Tweetdeck (no longer available), even allowed users to create a multi-column display, with individual columns focused on specific tags and updated automatically. Only the #edchat contributions would then be displayed within one column. These tools became popular as an easy way to turn the Twitter feed into a synchronous experience unique to those using the #edchat hashtag.

Research shows that between 2009 and 2014, these Tuesday sessions saw significantly higher engagement than other days, characterized by genuine dialogue and a high volume of questions and replies. Teachers weren’t just “knowledge telling”; they were building communities of practice and exploring new pedagogical ideas in real-time.

The researchers had a special interest in the frequency of questions and replies, and the ratio between the two, assuming these variables would be a good way to assess interaction. In addition, how did these variables differ between Tuesday and other days, assuming this would be related to the higher likelihood of synchronous interaction on Tuesdays? Replies made up a higher proportion on Tuesdays and were significantly higher in the earlier years. My interpretation differs from the researchers’. They argued that there was a decrease in interaction in later years. My interpretation is that the chats drifted away from the formal structure of questions interspersed with participants’ answers. Relying on the massive scale and automated methods employed rather than human raters following the give and take of individual sessions may have led to different interpretations. 

2014–2018: The Shift from Dialogue to Broadcasting

At its peak in 2017–2018, #Edchat was a massive digital footprint, averaging 120,000 tweets per month and involving roughly 200,000 different users. However, beneath these impressive numbers, the nature of the interaction was shifting. Starting around 2014, Willet and other researchers observed a transition from authentic conversation toward broadcast-style communication.

Several key trends marked this transformation:

The Rise of the Link: While early chats focused on natural discussion, later years saw a sharp increase in posts containing hyperlinks to external content, suggesting the platform was becoming a repository for resource sharing rather than deep discussion.

Retweet Dominance: Retweets began to outnumber original posts, and the percentage of questions receiving replies dropped significantly. Individuals could use retweets to bring the chat content to their own followers who were not chat participants.

Exploitation: As the hashtag grew in popularity, it became a target for spam and self-promotion. By 2018, the community faced a spike in problematic content and a decline in “authenticity scores” as commercial interests exploited the tag for marketing.

The transition to less interaction and greater influencer dominance may also be related to the active/passive distinction that researchers have begun to study in social media activity. A focus on Twitter chats as a source of resources is consistent with this research topic.

You can still find the use of #edchat on X, Bluesky, and Mastodon instances. The tag is typically used now to indicate educational content and is seldom used within a chat sequence. A few chats can still be found, now often using more idiosyncratic tags.

The Paradox of Digital Support

I wrote a series of posts beginning in 2013, focused on edchats, mostly questioning the information value of the process. I appreciated the camaraderie the chats offered, but the limit on the number of characters Twitter allowed, along with my reaction to the content included in such chats, led me to believe the experience was very inefficient, and I thus proposed tactics I thought would increase the professional development value of the experiences. 

Edchats were often included as one experience within the graduate technology course I taught, and I proposed, without success, that students might analyze the content of such chats as a potential thesis project. My suggestion at that time was that video-based systems (e.g., Skype, Zoom) would allow a much more productive approach.

2018–2023: Volatility, X, and Fragmentation

The decline of #Edchat accelerated after 2018, driven by platform volatility and the eventual transition of Twitter into X. Changes in leadership and algorithm priorities disrupted the organic reach of educational hashtags. As the environment became more polarized, many educators began to migrate to other platforms like Instagram, Mastodon, or niche, specialized communities that better served their specific needs.

By 2023, the once-unified #Edchat community had largely fragmented. This decline highlights a critical vulnerability: digital spaces on commercial platforms often lack the stability and continuity of traditional professional development. When profit extraction and algorithmic shifts override user experience, the community suffers.

Lessons for the Future

The history of #Edchat is a reminder that while platforms change, the human need for collaboration remains constant. The legacy of this 15-year experiment suggests that for future digital communities to succeed, they must:

1. Prioritize Active Participation: Moving beyond passive consumption is essential to avoid the stress of social comparison.

2. Foster Authentic Dialogue: Successful communities require mechanisms that encourage genuine interaction over simple content broadcasting.

3. Shift to Knowledge Building: The goal of any digital faculty lounge should be to move from merely “telling” knowledge to collaboratively building it.

Perhaps online interaction among educators isn’t gone; it is simply evolving. As educators move toward new tools, the story of #Edchat serves as both a testament to the power of digital connection and a cautionary tale about the challenges of sustaining authentic community in commercial environments.

I have tried to identify where those educators interested in online interaction with peers went. I could not find the type of quantified data provided by Willet, but other researchers (Greenhow and colleagues) have suggested that Facebook groups and Instagram have become favorite sites for interaction.

Sources:

Greenhow, C., Galvin, S. M., Brandon, D. L., & Askari, E. (2020). A decade of research on K–12 teaching and teacher learning with social media: Insights on the state of the field. Teachers College Record, 122(6), 1-72.

Willet, K. (2019). Revisiting how and why educators use Twitter: Tweet types and purposes in #Edchat. Journal of Research on Technology in Education, 51(3), 273-289.

Willet, K., Carpenter, J., & Na, H. (2025). Ex-Edchat: Historic retrospective of X/Twitter #Edchat. Computers & Education, 241, 1-18.


Evaluating the Consequences of School Choice

The education of learners in the K-12 range is undergoing a noticeable shift as the concept of “school choice” has become a political wedge issue, and state-level legislative policy decisions have given parents and their kids greater control over where and under what conditions students are educated. While often framed from the perspective of the individual family, I have always felt this perspective is too narrow; decisions made for the benefit of one family also impact the experiences of others. In various ways, we are all stakeholders, whether as citizens living in a society dependent on an educated population, as financial contributors through state and local taxes, or as students and families involved in school experiences.

While I have followed the issue of school choice for years, this post was prompted by a recent EdWeek article – As School Choice Goes Universal, What New Research Is Showing. Before I comment on the article, here is some background on K-12 school choice.

School choice is the opportunity for parents to enroll their child in an elementary or secondary school other than the assigned school based on their home address. There are multiple variations. The optional school may be another public school in a different district or a public magnet or charter school. Funding for charter schools, which typically operate with a separate board from the public school district, may be public or private. Private schools can be further differentiated as parochial (religious emphasis) or independent. 

My personal interest is mainly in the impact of tax-based funding, as I see funding as a nearly zero-sum variable – schools compete for a fixed pot of money dependent on student enrollment. Private schools rely on tuition paid by families, contributions, and, increasingly, tax money collected and then distributed to the school in which the student is enrolled. When school choice sends tax money to a private school, the approach may take the form of a voucher or a family-controlled educational savings account (ESA). A voucher might be thought of as the cut of tax money a school receives from the state – the per-pupil expenditure – though by my understanding typically not the local share based on property taxes. An ESA provides funds parents can use for several forms of educational assistance – tutoring, textbooks, and private school tuition (source – Overview of Public and Private School Choice Options).

According to the EdWeek source I identified previously, 18 states now allow all students to use state funds to attend private schools, and approximately 1.5 million students participate across the 30 states that allow at least some students access to similar programs. An example of the type of restriction that accounts for the difference might be an allowance only for students attending what are considered low-performing schools to use such resources.

My interest has been in the academic achievement of both students who leave their designated public school and a more nuanced issue of what happens to public schools when a sizeable number of students and the funds associated with these students leave. Despite all of the research on such topics, the EdWeek analysis notes that only one of the 18 states identified as providing all students access to state funds made use of the same standardized achievement test in both public and private schools. Obviously, having large numbers of participants from both public and private institutions taking the same tests would offer the cleanest and most powerful comparison of academic impact. 

Proposed advantages and disadvantages of each approach to K12 education

At present, the political winds seem to be blowing toward greater parental choice. This is the case despite the lack of consistent findings on whether such choice is of greater benefit to student achievement. Depending on the sample of students used and the method used to quantify achievement, studies generate every possible outcome. The EdWeek attempt at a summary concludes that “Preliminary studies on earlier iterations of these programs have shown ‘neutral to negative’ effects on state test scores, though some programs, like Ohio’s EdChoice, have demonstrated positive outcomes regarding graduation and college-going rates.”

What follows is my attempt to summarize the advantages and disadvantages based on two books by Diane Ravitch that I have read. Dr. Ravitch is a defender of public education, as you can probably tell from the titles of her books, but the pros and cons here are more about the arguments than the data. Ravitch’s work is heavily focused on research findings, but as I have already indicated, the subissues are so complex that it is very difficult to offer general conclusions. The issue that interests me – what happens to the public schools when students leave (the final item below under the cons of private schools) – has not, to my knowledge, been a focus, as an appropriate methodology would be very difficult to put together.

Pros of Public Schools:

  • Universal Service and Stability: Public schools are charged with serving all children, providing not just education but also essential social services like nutritious meals, medical care, and mental health counseling. 
  • Accountability and Transparency: Public schools operate under strict state regulations and testing mandates, ensuring a level of transparency regarding student progress and the use of taxpayer funds that can be absent in the private sector.
  • Professionalism: Public schools generally require higher standards for teacher certification and may provide due process (tenure) to protect academic freedom.

Cons of Public Schools:

  • Impact of Poverty and Segregation: The biggest “con” of the public system is often beyond its direct control. Concentrated poverty and racial segregation significantly drive lower academic performance, and schools alone cannot solve these structural societal issues.
  • Curriculum Narrowing: Due to high-stakes testing mandates, public schools may reduce time for the arts, history, and physical education to focus more on tested subjects like math and reading.
  • Bureaucracy and Funding Disparities: Public schools are often burdened by intrusive regulations and suffer from persistent underfunding, particularly in districts with low property tax bases.

Pros of Private Schools:

  • Autonomy and Flexibility: Private schools enjoy the freedom to design their own curricula and select their own testing methods, allowing them to cater to specific educational philosophies or religious values.
  • Personalization and Choice: Families can select schools that align with their specific needs or interests, whether through small religious schools or specialized academies. This “market-driven” approach appeals to values of freedom and optimism.

Cons of Private Schools:

  • Lack of Oversight: Some private schools receiving public funds are not accredited and are exempt from state accountability systems. This makes it difficult for the public to evaluate if tax dollars are producing academic results.
  • Selective Enrollment: Unlike public schools, private institutions can be selective. Critics argue this leads to greater segregation and less equity, as schools may shun students with the highest needs or those who are the “toughest challenge” to teach.
  • Draining Public Resources: Every dollar diverted to a private school voucher is a dollar removed from the public school system, which still incurs fixed costs such as building maintenance and teacher salaries. This can lead to increased class sizes and program cuts in the remaining public schools.

Summary

Parents and their kids are being allowed greater control over where they attend school. School choice comes with multiple pros and cons, and these issues have been difficult to evaluate because public and private schools in most states are not required to use the same achievement tests. My personal interest is in what happens to the public schools that lose students to private schools when choice is allowed.

Sources for Pro and Con Section

Ravitch, D. (2014). Reign of error: The hoax of the privatization movement and the danger to America’s public schools. Vintage.

Ravitch, D. (2020). Slaying Goliath: The passionate resistance to privatization and the fight to save America’s public schools. Vintage.


AI, Cheating, and Writing to Learn

One thing I miss as a retired academic is going to the office daily and having the chance to share ideas on common interests. My background was in educational psychology and topics such as AI and learning would not only be relevant to me, but also to the people I had lunch with and passed in the halls. I would have been interested in my colleagues’ take on the pros and cons of AI in classroom settings. Were they concerned about cheating? Had they encountered students who cheated and how did they know for sure that their suspicions were justified? Had they modified the assignments they had always used or perhaps abandoned these tactics as untrustworthy? 

I still find myself thinking about such topics and despite no longer having firsthand experience, I wonder what I would do should I still be working. When something is that important in your life and self-view, it doesn’t leave you, and I cannot help but continue to explore such topics and share my findings and opinions through outlets like this. 

Beyond the internet, AI poses a tremendous challenge, with both its opportunities and its risks. Cheating obviously falls in the risks category. It challenges how accomplishments are evaluated and the results shared with learners and other interested parties (e.g., employers, those involved in competitive selection processes for limited-enrollment programs, the instructor in subsequent courses). It also poses a challenge to our efforts to craft assignments we are confident will contribute to student learning. If tasks are not completed as we assume, we cannot trust the markers we use to evaluate what students know, nor can we rely on them to guide our decisions about when to move on and what we can assume will make sense in new instruction. 

Without my colleagues, I now must rely more on what I can read or find online to form my own opinions. This is a difficult and relatively recent problem and little I would regard as proven seems to be available. There is plenty of advice and personal perspectives and folks willing to offer books on the topic. I might as well offer my own perspective on a specific instructional situation, since, at present, ideas focused on specific tasks in a specific type of classroom are the only ones I don’t immediately find myself arguing with. 

The Opposite of Cheating

I have been reading The opposite of cheating: Teaching for integrity in the age of AI (Gallant & Rettinger). It is well written and well referenced, but it is the type of source I find myself both rejecting and applauding when it comes to specific recommendations. Typically, a negative reaction stems from the impracticality of a suggestion given my own circumstances. It would not be reasonable to expect authors to create a master model differentiating when a specific idea can be applied, as that would add too much complexity; readers need to be active participants in finding what they should take from a resource. Anyway, this book made a point that sparked what follows.

Writing to Learn

Written products played a significant role in some of the courses I taught. I assumed the products were a) an incentive to read the sources I expected students to read and to listen to presentations I and other students made, b) a way to demonstrate understanding and, depending on the assignment, to consider applications, c) a task that involved the student in thinking in ways that led to understanding and retention, and d) a way to evaluate students. Having students write in isolation is one of those common tasks that has come under suspicion because of AI.

Back to The Opposite of Cheating. One of the authors’ general suggestions is to evaluate the process, not the product. I wrote a post some time ago making a very similar point. I think it is helpful to explain why emphasizing what I would describe as subprocesses allows not only what might be described as surveillance, but also a superior instructional approach.

Similarity to the strategy of showing your work

Yes, the requirement, probably most familiar from math classes, that you show your work was partially a check on whether a student had done the work, but just as important, it created a record of the processing involved. Both the student and the teacher had access to the student’s externalized thinking. This visible record might be used by the student when the process breaks down and must be adjusted. It also provided someone else the opportunity to follow the student’s logic. The concept of externalized thinking has many applications for those who propose that cognitive research is useful to educational issues.

My long term interests in showing your work have focused more on writing and a specific application of writing often called writing to learn. Given a writing to learn task, assigned by a teacher or taken on as a personal strategy, a student could, of course, simply start writing or feed a prompt to their AI tool of choice. Here again, the “show your work” strategy can serve as both a check that you have done the work and a benefit to deeper thinking. 

I have been influenced by the logic and justification of advocates of personal knowledge management and the second brain. These concepts, when considered carefully, are clearly process-oriented: engaging purposefully and thoughtfully in specific processes benefits the products they produce, and externalizing processes that could be performed internally enables them to be performed more skillfully. I have long been interested in the Writing Process Model (Flower & Hayes) and variations. These researchers sought to develop a model that identifies the processes of writing and how the processes interact to create a written product. One benefit they proposed for such a model was the identification of component skills, allowing more efficient development of proficiency in individual skills. Identification of processes could guide both the topics researchers pursue and the instructional practices relevant to the classroom.

Connection with the topic of mitigating cheating

Let me start with this claim: an externalization requirement can serve both the purpose of ensuring that a process has been executed by a person and the educational goal associated with the assigned task. I think this works well when the goal is writing to learn.

I already indicated that writing to learn (or learning to write) can be broken down into subprocesses. Rather than relying on the Writing Process Model, allow me to offer a simpler approach for this situation.

In order to complete a writing to learn assignment, a student must:

Read the content

Identify what she feels are the important ideas in the content

Process this collection of ideas to understand and apply them

I assume this is acceptable as a gross-level description. If an educator relies only on the product turned in, with AI the educator must guess whether any of these tasks had actually been performed by a given student.

Those of us who make use of Personal Knowledge Management tools engage in these processes even though we are not accountable to an educator responsible for our skill and knowledge development. We do these things because we believe they deepen our understanding and strengthen our ability to craft better products.

While we read, we use a variety of tools that would allow someone else to verify that we have in fact read the material.

Most of these tools involve highlighting and annotation as part of the reading process. The highlights and notes serve as an external representation of what we regard as important ideas in the content.

We then extract highlights and annotations from the original context so we can store and manipulate these elements more effectively. Having these elements separated and independent allows their long-term access and allows further processing such as linking, tagging, and secondary elaborations to occur. We value this growing and ever-modifiable collection as what has become popular to describe as a second brain that can be searched and explored for new insights and the generation of products.
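
The extract, tag, and link stage described above can be illustrated with a minimal sketch. The `Note` structure, the `link` helper, and the tag lookup below are hypothetical constructs for illustration, not the format of any particular PKM tool:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """One extracted highlight or annotation, stored independently of its source."""
    id: str
    text: str
    source: str                                    # where the highlight came from
    tags: set[str] = field(default_factory=set)
    links: set[str] = field(default_factory=set)   # ids of related notes

def link(a: Note, b: Note) -> None:
    """Create a bidirectional link between two notes."""
    a.links.add(b.id)
    b.links.add(a.id)

def by_tag(notes: list[Note], tag: str) -> list[Note]:
    """Retrieve every note carrying a given tag."""
    return [n for n in notes if tag in n.tags]

# Example: two highlights from different (real) sources, linked and tagged
n1 = Note("n1", "Externalized thinking aids review.", "Flower & Hayes", {"writing"})
n2 = Note("n2", "Revision deepens understanding.", "Cohen et al.", {"writing", "revision"})
link(n1, n2)
```

The point of the sketch is only that once highlights become independent records, linking, tagging, and later retrieval are trivial operations, which is what makes the growing collection searchable and explorable.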

The tools we use are ever improving, and the skills involved in using them are constantly scrutinized in search of greater efficiency and effectiveness.

The tools are there and it is easy to find free options. There is long-term benefit in learning to use such tools as skills relevant to lifelong learning. Why not teach these techniques to students and use the potential side benefit of accountability?

Hypothes.is as a starting point – try it, you might like it

I first used Hypothes.is because I was interested in social note-taking with students. Simply put, this perspective argues that there may be benefits to a system that allows students to share notes. What did others find interesting or valuable in an assigned reading, and what might comments they made in response to what they highlighted as important reveal that others may not have considered?

This same tool could be applied such that an individual’s highlights and notes are available just to the instructor rather than the entire class. This covers “was it read?” and “were ideas I thought important identified?”

The process for exporting from Hypothes.is works like this:

How to Export Annotations

Activate Hypothesis: Go to the webpage or document you’ve annotated and open the Hypothesis sidebar.

Open Sharing Menu: Click the “Share” button.

Select Export: Choose the “Export” tab.

Select Annotations: Use the dropdown to choose which user’s annotations to export (your own, a specific group, etc.).

Choose Format: Select your desired file type (e.g. HTML, plain text).

Export: Click the “Export” button to download the file, or “Copy to clipboard”. 
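
For an instructor gathering annotations from many students, the same data is also available programmatically through the Hypothes.is web API. The sketch below only formats a search response that has already been fetched; the field names (`rows`, `uri`, `text`, and the `TextQuoteSelector` `exact` field) reflect my understanding of the API’s JSON and should be checked against the current API documentation:

```python
def format_annotations(payload: dict) -> list[str]:
    """Turn a Hypothes.is /api/search-style JSON payload into plain-text lines.

    Each line pairs the highlighted passage (if present) with the
    annotator's note, for review by an instructor or the student.
    """
    lines = []
    for row in payload.get("rows", []):
        quote = ""
        for target in row.get("target", []):
            for sel in target.get("selector", []):
                if sel.get("type") == "TextQuoteSelector":
                    quote = sel.get("exact", "")
        note = row.get("text", "")
        lines.append(f'"{quote}" -- {note} ({row.get("uri", "")})')
    return lines

# A miniature made-up payload shaped like a search response
sample = {"rows": [{
    "uri": "https://example.com/reading",
    "text": "This connects to writing to learn.",
    "target": [{"selector": [
        {"type": "TextQuoteSelector", "exact": "cognitive offloading"}]}],
}]}
```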

A screenshot of Hypothes.is in use. Hypothes.is is a browser extension so the content must be something online or something you can open in a browser (e.g., pdf). The content window on the left is where the reader highlights, annotates, and reads. The highlights and notes appear in the column on the right. 

Organize and Elaborate

At this point, I would bring individual elements into a tool such as Obsidian, which I would not hesitate to introduce to college students. This tool is designed to store a large collection of individual idea notes, tag them, create links among them, and extend individual notes by generating secondary notes (elaboration). I raise this tool as an opportunity, not because there are no other options. Perhaps this mention will spark the curiosity of those willing to go a little deeper.

There are other basic ways to do this. In the next stage before writing, you might open a document in any word processing tool and copy and paste individual notes or ideas from the notes or highlights into this document. As you proceed, you might cut and paste from this working document to better organize topics and integrate them into your final product. Even with the many personal knowledge management tools I use, I often take this simple approach when approaching the final stage of a project. I might cut and paste chunks of text and citations from the content I have accumulated into a common document. Often, this is not just about collecting ideas from a single source but bringing together ideas from multiple sources. I open this “collection” document in a separate word processing window and work from this narrowing of material into a draft of the product I am creating. 

Some writing tools even offer visible ways to do this. For example, Scrivener provides notes as note cards that can be moved around in a space to explore organizational options. Even if you do not intend to use a tool like this, visualizing the approach may be helpful. The “corkboard” option in Scrivener is shown below. Here you can see how individual project-related notes have been moved to this corkboard. The notes can be dragged around to create an optimal structure.

Summary  

This post focuses more on a concept for discouraging AI cheating than on a detailed tutorial for using the tools involved. The core idea is that tasks can be assigned that are both beneficial for applying subskills to the writing-to-learn process and useful for documenting students’ completion of these subskills. I have identified specific tools and tactics, but there are likely alternatives for everything I have used as an example.

Source

Gallant, T. & Rettinger, D. (2025). The Opposite of Cheating: Teaching for Integrity in the Age of AI (Vol. 4). University of Oklahoma Press.


RSS – Be Your Own Content Platform

Tim Wu, in his recent book, The Age of Extraction: How Tech Platforms Conquered the Economy and Threatened Our Future, examines the timeline of a variety of platforms and the manner in which they consistently morph from initially attractive, innovative, and genuinely helpful resources into systems that become confining, controlling, and ultimately draining for their users. Using examples that range from Facebook and Amazon to UnitedHealth, he argues that this transformation, from open utility to extractive gatekeeper, is not an accidental side effect, but rather a predictable, structural characteristic of platform business models as they achieve scale and market dominance.

In considering the examples from Wu’s book, it occurred to me that while he emphasized the major players a wide variety of people use, the same issues apply to smaller platforms. Those of us who write and use platforms to share our work (e.g., Substack, Reddit, Medium) have likely experienced the same timeline.

Many authors who write books with a similar message to The Age of Extraction do a great job of explaining the problem and its history but, even though they make an effort, offer little as a remedy. I have read many such books. I typically find myself contemplating but failing to generate suggestions to augment what the author was able to offer.

Like Wu, I bought into the original promise of the Internet as a leveling platform that would give content creators, sellers, and the “little guy” in general greater opportunities. In the early days (2002), I started a blog and did so from a server that was also my desktop computer (I worked at a university, and the dedicated IP address I needed was more of a challenge than the serving itself, given the ease with which any Mac could be used as a server). Things change. I now pay a hosting company a couple hundred dollars a year to allow me to run blog software and the related backend database and to register my domain name. Still, as a hobby, once I pay for the space, I can function independently.

I believe the way we create and share content has changed. You can still do it, but it seems you have fewer and fewer regular readers. I have noticed a change that suggests more and more of my posts are read through search rather than by readers who regularly view the blog. I track hits out of curiosity and find little immediate interest in most posts. I check, say, six months or a year later and find that some posts have been read hundreds of times. Logically, I interpret this to mean I have written something that people found through search. I shouldn’t complain about this, but the switch from pure search to the AI search now being developed by the big platforms means there will be far less attention to source material when an AI summary based on this homogenized and integrated material is made available. This is an emerging but, I think, obvious issue and a perfect example of what Wu means by platform extraction.

The big switch (another book) to extractive platforms has resulted from a) integrative platforms such as those I have already mentioned hosting multiple content creators and b) a related move away from the use of RSS readers by individual consumers. I certainly understand the benefits of single-stop platforms that provide a convenient way to reach a wide audience. My complaint is based on the history of these platforms. The pattern of extraction is evident: start by offering a service in which the platform and content creators share in the risk and the rewards, and once a critical mass for a network effect is achieved, reduce the benefits to the producers and to the consumers. Wu suggests Amazon offers a familiar example of this approach.

I do post my content to one of these community platforms and continue to post the same content to my own blog. Yes, this means I pay twice and I continue to be frustrated by this situation. One approach allows me to own my content and the other to reach a larger audience – for a price.

My solutions:

I do have suggestions for an alternative approach, but I understand that each requires an effort that most consumers are unwilling to invest. You can be your own platform with easy-to-use tools.

Use Google Alerts – Yes, Google is a big company, but it does offer some beneficial services. Google Alerts might be imagined as a periodic search process based on specific interests you specify. You provide a typical search request, then select how often you want to receive the results. Updates are sent to you in an email according to the time interval you request. I have multiple alerts that generate a weekly list of new content. In this approach, you are following a topic rather than specific content creators.

RSS is still around and modern readers make the process easy to implement. With RSS, you designate the sources (e.g., specific blogs) you want to follow, and an RSS reader accumulates new content generated by these sources. You check in to your reader when you have time and see what is new. Some contend that Google’s abandonment of its very popular Reader in 2013 signaled the end of this tool category, but more modern alternatives have since emerged. 
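
Part of why RSS readers are so durable is that a feed is just an XML list of items, which any program can fetch and parse. A minimal sketch using only the Python standard library (the two-item feed here is made up, standing in for a blog’s real feed URL):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Extract the title and link from each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return items

# A made-up two-item feed; a real reader would download this from a feed URL
feed = """<rss version="2.0"><channel><title>Example Blog</title>
<item><title>Post one</title><link>https://example.com/1</link></item>
<item><title>Post two</title><link>https://example.com/2</link></item>
</channel></rss>"""
```

A reader application simply does this on a schedule for every source you designate and shows you whatever items it has not shown you before.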

Yes, RSS readers do offer a paid subscription level, and any provider realistically has costs. While the pro level offers great features, most users will find the free level meets their needs.

My preference is for web-based readers – the service is accessed through a browser rather than standalone apps. Feedly is my recommendation. I like Inoreader and Reeder (Apple) as apps. 

I have written more detailed descriptions elsewhere (Feedly, Inoreader) and you could consult these sources if you need more information. 

Summary

I didn’t really intend this post as a book review, but Tim Wu’s book is interesting and informative. As I suggested, the book identifies the typical timeline of extraction consumers should recognize and use to guide their decision making. Again, solutions, should that be what you are seeking, are not easy to imagine.

I think we have a classic “chicken and egg” problem with platforms versus independent sources. Content creators will go where their content is more likely to be consumed. Tools for sharing will exist and be improved where there are content creators and content consumers. 

For the great majority of creators and consumers, the motivation of income is deceptive and a trap. Most writers would seem better off thinking of their goal as visibility rather than profit. Writing for a platform for the vast majority should be treated as a hobby, recognizing the reality of being trapped by the network effect. 


The Decline in Reading Activities – Does it matter?

Lately, I have encountered several essays that examine possible connections between the claims that “few people, including students, read books anymore” and “the declining percentage of individuals moving through our educational systems who are competent readers” (see links to several of these articles included at the end of this post). To attract a general audience, this decline in reading proficiency is often framed as a concern for future international economic competitiveness. What caught my attention in this unexpected batch of essays was the claim that researchers had failed to provide evidence that differences in time spent reading, and more specifically in reading lengthy works, resulted in improved reading skills and related intellectual benefits. In several of these articles, it was claimed that such relationships had not been proven. I had to move beyond these secondary sources to primary sources to understand the bases for these claims, and then to explore further on my own to determine whether I agreed. Is the lack of interest in books a serious problem?

The low rate of reading proficiency in the U.S.

Reading skills are declining and this trend has been ongoing for years. The decline is evident in multiple tests used to assess reading skills, with the most attention likely given to the NAEP (National Assessment of Educational Progress), which tracks the reading performance of U.S. students over time; international assessments, such as PISA, compare U.S. students with students from other nations. Implications for economic productivity and international competitiveness are sure to catch the attention of the general public and politicians.

The pattern of decline among U.S. students is notable: over the years, the less proficient have declined year by year, whereas the most skilled have not shown this trend and score at roughly the same level on the tests used to track reading skill. Among other issues, this means that the range of competence educators must address within a single grade level is increasing, thereby complicating instruction. Multiple factors potentially associated with this trend have been identified and such claims have provided ideas for remediation. Most of you have likely heard of the “science of reading” controversy, which in recent years has prompted greater emphasis on phonics. Screen time is often cited as a culprit in many areas, including reading. Recently, absenteeism has risen at an alarming rate and it seems logical that missing school for many days means students miss instruction. There are likely multiple causes for a general academic trend and these causes interact in ways that defy simple solutions. 

Recent changes in reading proficiency broken down by initial level of functioning.

Reading Skill and Reading Activity

Daniel Willingham, an educational expert I often cite, proposes a more straightforward explanation in claiming that the “main differentiating factor” between strong and weak readers is “the volume of reading they do,” because that’s how they build both reading skills and a rich background knowledge. The mention here of background knowledge should not be overlooked and requires a little more detail. Studies that differentiate background knowledge from reading skill demonstrate that background knowledge is the more critical predictor of differences in what readers understand from what they read. Put another way, if good and poor readers are also differentiated by what they already know about a topic (e.g., good readers with poor existing knowledge, poor readers with good existing knowledge), the difference in background knowledge is the better predictor of understanding and retention. So, what Willingham argues is that the amount of reading has a unique importance, possibly ignored in other discussions, for the development of not just reading skill but also reading’s more general impact as a component of learning.

It is not that time spent reading and its relationship to skill have been ignored (see Allington & McGill-Franzen). Attempts to track the amount of reading done by students and adults have been recorded with great diligence. How many minutes, how many books, whether the reading is on a digital device or paper, how many books are available in the home, how frequently an individual visits a library, whether a family subscribes to a newspaper, and similar variables have been quantified and related to reading skill and to potential moderating variables such as family income. These variables consistently relate to measures of reading skill and, because these measures of activity seem to be in decline, have been argued to be possible causes of poorer reading skills.

Causal Relationship Between Activity and Performance

Allington and his colleague, like other scholars I have recently read, raise an interesting point about why the amount of reading has not received more attention. Allington makes an observation that blames reading researchers, which I find interesting. He claims that researchers have difficulty explaining the observed relationship between time spent reading and reading skill because this relationship has seldom been tested experimentally. This position should be understood to mean that the observed relationships are correlational. How can it be known whether more reading alone builds reading skill when the data could also be interpreted to argue that more capable readers enjoy reading and read more? Why might this matter? Allington notes that elementary school reading instruction devotes far more time to skill development than to extended reading. Extended reading, often called free reading, allows students to use school time to read independently and would include basal reader time and library books.

Just to be clear, there are many topics of interest to scientists in other fields that involve limited “manipulative” research. When investigating factors we believe have negative consequences, scholars seldom create the unfavorable circumstances to determine whether those harmful consequences follow. Sometimes, potential causal relationships are analyzed through correlational techniques because the causes cannot be manipulated or would be extremely expensive to manipulate. 

Allington does provide two counterexamples to the claim that the amount of reading, and in these examples the reading of books, cannot be manipulated. The first example involves the gifting of books to randomly selected children through existing programs (e.g., the Dolly Parton Imagination Library) and the subsequent administration of follow-up assessments. The second employed a similar approach, focusing on the “summer slide” in reading performance among children from low-income families: participants were selected at random, gifted a home library for the summer, and compared with non-selected individuals on a follow-up assessment. In both cases, greater access to books was associated with higher reading proficiency scores. The random assignment of access to the gifted books allows for a more persuasive argument for causality.

Voluntary vs Required, Intensive vs Extensive

The topic I have presented here has a number of relevant subtopics, but examining these issues should probably be pursued in other posts. The voluntary vs. required distinction separates reading set in a formal education setting from reading pursued without direction outside the classroom. Voluntary reading at any age has consequences. The difference between intensive and extensive reading matters because reading for different purposes (research papers and technical manuals vs. books) differs in potential benefits. When it comes to the amount read, do a number of shorter pieces generate the same impact as a book, or are there unique skills involved when persistence is required and the pieces necessary for understanding are spaced over greater units of time? This issue is relevant to the different type of reading most of us might do online.

Resources

Academic sources:

Allington, R. & McGill-Franzen, A. (2021). Reading volume and reading achievement: A review of recent research. Reading Research Quarterly, 56, S231-S238.

Abdurakhmonova, Z., & Pardayeva, R. (2025). Exploring intensive and extensive reading. International Conference on Culture & History. 1(3), 34-38.

Gioia, D. (2008). To read or not to read: A question of national consequence. Diane Publishing.

Willingham, D. T. (2017). The reading mind: A cognitive approach to understanding how the mind reads. John Wiley & Sons.

News sources addressing the decline in the popularity of reading books

Kids Rarely Read Whole Books Anymore 

Whole books or excerpts: Which does the most to promote reading ability

Novels vs. excerpts: What to know about a big reading debate 


What does the role of revision in classroom note-taking research offer PKM advocates? 

Most of what I write about PKM and Second Brain focuses on relating the vast body of high-quality research on academic note-taking to what many of us do outside the classroom as independent learners. The nonclassroom use of notes and related strategies for making and using notes has been termed Personal Knowledge Management (PKM) by those offering self-improvement advice. The PKM area is driven mainly by common sense. I cannot find focused research to inform this transition, but I have been investigating classroom note-taking and studying for decades and now focus on synthesizing findings from one area to support or evaluate strategies in the other. 

The classroom research primarily focuses on the interrelated tasks of taking and reviewing notes and, more recently, on handwritten versus keyboarded notes. With PKM, there is a greater emphasis on continually revisiting stored notes. This topic has not been emphasized in the research with classroom notes, but some studies do exist, and it is this smaller area I want to explore in this post. Again, the advantage of the classroom research lies in its evaluation of efficacy and its identification of the cognitive processing enabled or encouraged when original notes are modified in various ways.

I now primarily focus on note-taking on a digital device. While studies have focused on whether handwriting is superior for initial learning, approaches that encourage a deeper look at your notes reveal a powerful consensus that transcends the medium: there is unique value not just in how notes are taken, but in how they are revised.

This insight provides a critical link between academic research on learning and the practical strategies of Personal Knowledge Management (PKM). If your goal is to move beyond simply collecting information to actively building a knowledge base, you must embrace the often-overlooked middle stage of note-taking: revision and restructuring.

The Three-Stage Model: Beyond Capture and Review

Traditional study advice often reduces note-taking to two phases: recording during a lecture or reading and reviewing before an exam. However, classroom-oriented research by Luo et al. (2016) and Flanigan et al. (2023) suggests a more productive, three-stage process: recording, revision, and review. This intermediate revision stage is where the magic happens – where passive information capture transforms into active knowledge construction.

The research on this topic is compelling. A study by Cohen et al. (2013) demonstrated the causal role of a note-restructuring intervention in improving student learning. Students who were required to restructure and reorganize their notes, summarize the main point, and elaborate on a detail performed significantly better on exams. The researchers concluded that this process was essential for students to “make information one’s own, by processing it, restructuring it, and then presenting it in a form so that it can be understood by others (or by oneself at a later point).” 

Sounds very similar to the pitch for PKM strategies. Revision isn’t just about neatness or completion; it’s about deepening understanding through elaboration, incorporating entirely missed ideas, and creating retrieval cues that activate deeper memory networks.

From Note-Taking to Note-Making

In the world of PKM, a distinction is often made between note-taking (the act of recording external information) and note-making (the act of processing that information into a new, personalized, and connected knowledge item). The revision stage is precisely where you transition from a passive note-taker to an active note-maker.

PKM methodologies, such as the Zettelkasten, emphasize that a permanent note should be able to stand alone, expressed in your own words, and contain enough context to be meaningful without referring back to the original source. This is a direct parallel to the restructuring intervention that required students to summarize the main point and elaborate on a detail.

When you revise a note in an academic setting, you are performing the cognitive work that drives learning: elaboration (connecting new ideas to what you already know), organization (clarifying underlying structure and identifying themes), and synthesis (cross-referencing the new idea with other sources). Without this deliberate revision, you risk falling into a common trap: mistaking familiarity for understanding. Most learners fail to organize their notes after class because they recognize the content and mistakenly assume they have mastered it. Active processing, often based on concepts such as generative processing, is the focus of much research. 

The Longhand Advantage in Revision

The handwriting versus keyboard comparison recurs in studies of revision. Some, but not all, studies contradict my assumptions about the advantages of a digital approach.  Flanigan et al. found that longhand note-takers added three times as many complete ideas to their notes during revision compared to computer note-takers, and twice as many partial ideas. These researchers argue that handwriting engages deeper cognitive processing during initial recording, making those notes more effective retrieval cues when revisited later.

However, the digital environment isn’t without its strengths. Research by Cojean and Grand (2024) found that students who take notes on a computer are more likely to reformat their notes during revision. The ease of manipulating text digitally encourages a strategy where transcription is prioritized during capture, and the deeper work of reformulation and organization is deferred until the revision stage.

In a modern PKM system, this deferred processing is not a weakness, but a feature. Digital tools make it effortless to refactor (break long notes into smaller, single-idea notes), link (create hypertext connections between related ideas), and organize (file processed notes into multiple collections). The digital environment transforms revision from a tedious manual task into a fluid, creative act of knowledge gardening.

Making Revision Your PKM Habit

Those offering practical advice for students seem to recommend a structured approach to the revision stage. Treat it as a non-negotiable part of your workflow, not an optional step before an exam. Here are three practical revision strategies:

The “Foot” and “Socks” Method: Immediately after capturing a new note, summarize the main point in a concise “foot” (like a title or summary field) and elaborate on a key detail in the “socks” (the body of the note). This forces immediate processing and mirrors the Cohen et al. intervention.

The Atomic Note Refactor: If your initial note is a long transcription, dedicate time to breaking it down into smaller, single-idea notes. Write each new note in your own words and link it to at least one other existing note in your system. This practice creates the interconnected knowledge web that makes PKM powerful.

The Cross-Reference Check: When revising, actively search your existing notes and collections for related concepts. Link your notes back to original sources to resolve ambiguities and provide context. Make an effort to relate lecture content to what appears in your textbook. This is the moment to create connections that integrate new information into your existing knowledge structure, moving beyond simple storage to true knowledge management.

Schedule dedicated revision sessions, ideally spaced throughout your learning timeline rather than clustered after completion. Consider handwriting your first draft or deeply processing material before digital capture to maximize the depth of your initial notes. Make note revision an ongoing habit, integrated into your learning cycles rather than a single end-of-lesson task.

Conclusion

By making revision a deliberate and structured part of your note-taking, you stop merely collecting information and start actively building a powerful, interconnected knowledge base that supports long-term learning and creative work. The research is clear: revision elevates note-taking from passive transcription to active knowledge building. It transforms fragmented jottings into complete, interconnected ideas ready for recall and application.

For anyone committed to lifelong learning and effective Personal Knowledge Management, understanding and embedding the practice of careful, thoughtful revision into your workflows will create richer, more useful knowledge bases – helping you learn smarter, not just harder. Classroom studies encourage a structured approach and often control such activities through assignments. Independent learners must take personal responsibility to produce similar results. The missing link in your note-taking isn’t the tool you use or the speed at which you capture—it’s the intentional work of revision that transforms information into true personal knowledge.

References

Cohen, D., Kim, E., Tan, J., & Winkelmes, M. (2013). A note-restructuring intervention increases students’ exam scores. College Teaching, 61(3), 95-99.

Cojean, S., & Grand, M. (2024). Note-taking by university students on paper or a computer: Strategies during initial note-taking and revision. British Journal of Educational Psychology, 94(2), 557-570.

Flanigan, A. E., Kiewra, K. A., Lu, J., & Dzhuraev, D. (2023). Computer versus longhand note taking: Influence of revision. Instructional Science, 51(2), 251-284.

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: how revision, pauses, and partners affect note-taking and achievement. Instructional Science, 44(1), 45-67.
