AI and the Writing Process

I have found that the Writing Process Model (Flower & Hayes, 1981; Hayes, 2012) helps me think about the development of writing skills and the specific application typically described as writing to learn. This model identifies the processes involved in writing and the interactions among them, and it has been used to guide both writing researchers and the development of instructional tactics.

The model provides researchers and instructional designers with a concrete framework to work with by identifying specific skills that can be studied as the source of individual differences in writing skills or targeted for development, assuming that greater proficiency with these skills will lead to more effective writing.

I first used this model to speculate about how specific technology tools could support writers. For example, technology offers powerful ways to take and review notes, a method for planning by creating an outline or “mind map”, and a way to record text that allows for easy manipulation and revision. Easy access to AI for writing in general, and in classrooms more specifically, has led me to think about the model in a different way. Given the general goals of learning to write and writing to learn, when do specific uses of AI facilitate, and when do they harm, the development of the writing subprocesses involved in each category of writing experience? Put another way, instead of thinking of AI as an all-or-nothing approach to creating written content, would it make more sense to evaluate how AI affects individual writing subprocesses and perhaps have students use AI more selectively?

The Writing Process Model

The model identifies three general components: a) planning, b) translation, and c) reviewing (see the following illustration). Planning involves setting a goal for the project, gathering information related to this goal (which we refer to as research), and organizing this information so that the product generated makes sense. The goal may be self-determined or the result of an assignment. Research may involve remembering what the author knows about a topic or acquiring new information. Research should also include identifying the characteristics of the audience. What do they already know? How should I explain things so that they will understand? Finally, the process of organization involves establishing a sequence of ideas, in memory or externally, to represent the intended flow of logic or ideas.

What many of us think of as writing is what Flower and Hayes describe as translation. Translation is the process of getting our ideas from the mind to the screen and this externalization process is typically expected to conform to conventions of expression such as spelling and grammar.

Finally, authors read what they have written and make adjustments. This review may occur at the end of a project or at the end of a sentence. Authors may also solicit advice from others rather than relying solely on their own review.

One additional aspect of the model that should not be overlooked is its iterative nature, illustrated in the figure by the arrows. Even after an initial examination of this model, we may be tempted to view writing as a mostly linear process – we think a bit and jot down a few ideas, we use these ideas to craft a draft, and we edit this draft to address grammatical issues. However, the path to a quality finished product is often more circuitous. We do more than make adjustments in spelling and grammar. As we translate our initial ideas, we may discover that we are vague on a point we thought we understood and need to conduct further research. We may decide that a different organizational scheme makes more sense. Interpreted through our tool metaphor, this reality suggests that within a given project we seldom can be certain we are finished with a given tool, and the opportunity to move back and forth among tools is quite valuable.

This description covers the processes identified by Flower and Hayes, but ignores two other components of their model. The first is the writing task, which consists of the assignment and any writing completed at a given moment. The other missing element is the long-term memory of the writer. Long-term memory, or existing knowledge, provides a source of information and strategies the writer can use without resorting to new research.

Hayes (2012) updated this model to incorporate additional research and comments from colleagues and from his own laboratory. Later work placed a strong emphasis on revising both the planning component and subsequent drafts, as well as on the role of working memory. The concept of working memory recognizes that cognitive capacity is limited, meaning activities must fit within the existing capacity or perhaps be ignored. Processes demand less capacity as a function of practice and aptitude, and this reality, among other topics, is vital in understanding changes that occur for learners across the grade levels. For example, Hayes noted that keyboarding is more difficult than handwriting for younger learners, and products produced on a computer would thus appear lower in quality. These cognitive demands change with keyboarding experience, with keyboarding eventually becoming less demanding than handwriting. Other factors can also affect cognitive demands; in one example, Hayes noted that studies requiring adults to write in all caps found reduced quality in the final product because transcription in this form is less practiced. As an aside, I could not help recognizing the tendency of some to write in all caps on social media and my perception of the quality of what is produced in that format. Anyway. Finally, Hayes recognized that writing is a motivated activity and that differences in motivation could affect the emphasis given to these processes.

One of the significant patterns in writing proficiency that Hayes and others (Bereiter & Scardamalia, 1987) associated with the limited capacity of working memory and the gradual development of proficiency in the cognitive writing subskills was the tendency to move from what they describe as knowledge telling to knowledge transforming. The first approach results in a dump of ideas from long-term memory triggered by the assignment and by information that has already been recalled. In other words, a product can be generated without changing what is stored in memory or how this content is organized. Most instructors have a feeling for how this works. They have asked students to reply to an essay question and received at least some responses that seem to contain everything the students knew about the topic rather than a specific answer to the question. It can be hard to know whether a student thinks they have answered the question or is simply employing a frequently productive ploy. It is the first situation that shares characteristics with the idea of knowledge telling.

In contrast, writing classified as knowledge transforming requires that previously stored information be reorganized, reinterpreted, or extended based on speculation or insights. The writing task sometimes determines the difference, but transforming is more demanding and, when expected, is more likely to be produced by more capable writers. Writing to learn could involve either type of output, but the greater manipulation of ideas in knowledge transforming offers the most significant benefit.

Klein (1999) offers a review organized around the writing processes hypothesized to be responsible for learning. A brief summary of some insights from this paper provides examples of tasks and skills that differentiate knowledge telling from knowledge transforming, also relating the distinction to the processes involved. 

  1. Point of utterance (no revision and limited planning). This explanation assumes that learning occurs in the attempt to generate comments on the topic. This category might be described as spontaneous writing – the learner is asked, or personally commits, to write on a topic. An activity fitting within this category would be the five-minute writing tasks some college lecturers assign at the end of a class.
  2. Genre-related – This explanation focuses on the benefits of transforming ideas to fit the structure demanded by a specific genre. For example, in a “position paper,” a writer is expected to take one of several possible positions and generate an argument supporting this selection. In selecting, organizing, and connecting knowledge to fit the demands of a formal writing task, the learner creates an understanding that would not exist without the imposed task.
  3. Backward search – This explanation assumes that skilled writers formulate complex goals (characteristics of the desired product, audience needs, etc.) and then rework existing knowledge in terms of these goals. This seems a more generalized version of the “genre-related” explanation, requiring more sophisticated and complex problem-solving.

My own writing is generally linear with backtracking, depending on my assessment of how well things are going. This impression is based on my own behavior and may not accurately reflect the ideal recursive approach. When I take on a project, I assess what I already know and have notes on, and then read and take notes on additional material. I generate a rough idea of how these ideas could be organized (some would create a formal outline) and begin developing a draft. Often, I realize I need to fill a hole in my mental outline or find a reference in support of what I am trying to accomplish, and I have to take a closer look at my large collection of digital notes or read and take notes on another source or two. Upon completion of this initial draft, I reread what I have written to take care of lower-level deficiencies (spelling, grammar) and often to add a little more material here or there so the document makes more sense or hangs together better. When writing for myself and without an external reviewer, I tend not to make major structural revisions. However, when I submit a paper for review, I do sometimes have to make larger adjustments even if the content is deemed useful.

Where does AI play a role? Where should AI play a role? I have worked through my thoughts on these questions based on my own circumstances and also on what should be the circumstances for those in more formal learning settings. I want to produce content that meets a reasonable standard of quality, but I am not that interested in becoming a more accomplished writer. I want what I write to be a reflection of my experiences and what I have learned, and I assume I can learn from the process of writing. I am willing to invest time in this objective. I propose that others generate a similar analysis for themselves based on personal goals or goals for others they may be responsible for educating. 

How do I use AI? I primarily use AI to facilitate the research and planning components of my writing process and to perform some of the revision tasks. These emphases are consistent with my desire to learn from writing and my lesser interest in improving my writing skills. I am not suggesting everyone apply my priorities; I am suggesting it is possible to identify priorities and use AI strategically and efficiently.

Here is how this works. I have developed a large collection of notes consistent with methods of personal knowledge management and a technology-enabled second brain. This is a multidecade-long process that involves reading widely in the educational technology and educational psychology literature. When I get an idea for a writing project, I use AI to query this body of content for ideas related to my intended project. Based on the content these queries return and on follow-up queries for related information (some generated by AI), I review this new content and supplement my notes on the topic. I then often use AI (typically NotebookLM) to propose a structure for the intended project based on my collection of notes. I consider this proposed approach when generating my initial draft.

I use Grammarly heavily when I write. I use the pro version, so I get constant feedback not only on spelling and basic grammar, but also more substantive recommended changes at the paragraph level. Grammarly identifies issues I should consider and offers suggestions. I suppose this could be a valuable learning opportunity, but I admit I just select the recommended changes that sound good. Grammarly has just pushed a major update that offers even more capabilities, but I have yet to explore which might be useful to me.

The use of AI for research, planning, and text revision does not limit my writing to learn. I think it is quite reasonable to associate AI capabilities with specific components of the writing process model, as applied to writing and writing to learn, in order to accomplish tasks that are not essential to developing writing skills or learning opportunities. The stickier problem presents itself when you must consider how to control the use of AI.
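To make this selective approach concrete, it can be represented as a simple lookup from writing subprocess to an AI-use decision. The mapping below is a hypothetical sketch reflecting the priorities described above, not a prescription; the subprocess labels loosely follow Flower and Hayes, and the names are my own invention.

```python
# Hypothetical policy table mapping writing subprocesses to AI-use decisions.
# The decisions mirror one writer's priorities and would differ for learners
# whose goal is developing writing skill rather than writing to learn.

ai_policy = {
    "research":    "use AI",      # querying notes, finding related sources
    "organizing":  "use AI",      # e.g., asking for a proposed structure
    "translation": "do myself",   # drafting preserves the writing-to-learn benefit
    "review":      "AI-assisted", # grammar/style suggestions, human judgment
}

def allowed(subprocess):
    """Look up the policy for a subprocess; default to doing it yourself."""
    return ai_policy.get(subprocess, "do myself")
```

A teacher could build a table like this with a class, making the decision about where AI belongs an explicit part of the assignment rather than an all-or-nothing rule.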

Sources

Bereiter, C., & Scardamalia, M. (1987). Two models of composing processes. In C. Bereiter & M. Scardamalia (Eds.), The psychology of written composition (pp. 1-30). Erlbaum.

Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365-387.

Hayes, J. R. (2012). Modeling and remodeling writing. Written Communication, 29(3), 369-388. https://doi.org/10.1177/0741088312451260

Klein, P. D. (1999). Reopening inquiry into cognitive processes in writing-to-learn. Educational Psychology Review, 11, 203-270.


NotebookLM Enhancements

I was excited when Google announced its investment in developing NotebookLM as a tool for learners. First, I encountered the description of its new program Learn Your Way, which promised to personalize textbooks in various ways, including explanations based on personal interests. Examples of what this would look like were provided, but to submit your own content for the “Learn Your Way” treatment, you first had to indicate your interest and then wait to be selected. I am still waiting. However, Google has also announced enhancements for the existing NotebookLM that make several proven study strategies more straightforward to implement, as well as a couple of unique ways to repurpose instructional text as a podcast or as something resembling a slide-based lecture. These updates, available to all, are what I will describe here.

Before anyone gets overly excited, what Google can presently make available requires that an instructor or a learner be able to load content into NotebookLM. There is no legal way to do what I will describe with an existing commercial textbook, whether paper or digital. Companies producing instructional materials may eventually collaborate with tech companies or develop similar capabilities, but they are not doing so at present. I wrote a couple of textbooks earlier in my career. When my collaboration with a textbook company ended, I was given complete control of the copyrights, and now I have full-length chapters I can use for demonstrations. This is what I am doing here. NotebookLM notebooks can be shared, and I will make the content on which this tutorial is based available for you to explore.

For those unfamiliar with the newest updates, NotebookLM presents as three vertical panels. The leftmost panel shows the content you have uploaded to NotebookLM and provides the opportunity to upload additional content and open individual files for viewing. The middle panel is where AI prompts are applied and the output from these prompts is displayed. The rightmost panel is where the older and newer AI tools are activated and where output from these tools can be accessed for later display.

As soon as NotebookLM is launched, the middle panel displays a summary of the source files that have been added. Loading source files does not include them in this summary unless they have been selected (see the check boxes). NotebookLM was developed as a RAG (retrieval-augmented generation) system, which means prompts are applied to the content you provide rather than to the massive corpus available to the model. The new “Discover” tool, available from the top of the sources panel, breaks with this approach and allows a user to request additional information related to the purpose of a particular notebook, which can then be reviewed and added as an additional source.
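The RAG idea can be sketched in a few lines: retrieve the user-supplied chunks most relevant to a query, then build a prompt that grounds the model's answer in only those chunks. This is a toy illustration using keyword overlap in place of real embedding search; none of the function names reflect NotebookLM's actual implementation.

```python
# Toy sketch of retrieval-augmented generation (RAG). A real system would use
# vector embeddings for retrieval; word overlap stands in for that here.

def score(query, chunk):
    """Count how many query words also appear in a source chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query, chunks, k=2):
    """Return the k source chunks most relevant to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query, chunks):
    """Ground the model's answer in user-supplied sources only."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

# Stand-ins for uploaded source material.
chunks = [
    "Planning involves goal setting, research, and organization.",
    "Translation moves ideas from mind to screen following conventions.",
    "Review involves rereading and revising the draft.",
]
prompt = build_prompt("What does planning involve?", chunks)
```

The key design point is in `build_prompt`: the model is instructed to answer from the retrieved context rather than from its general training corpus, which is what keeps NotebookLM's responses tied to your own sources.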

Before I continue, I want to identify one frustration I have with the way NotebookLM fits into my workflow. Developers have specific uses in mind when creating applications, and their vision may not include what an individual user prefers. My preference is to read, highlight, and annotate content before using an AI tool. While NotebookLM allows access to sources, it does not support highlighting or annotation, nor does it display such added layers from uploaded material. The workaround is to export personal additions and add this content to NotebookLM separately.

Audio Overview and Mindmap

New capabilities appear in the rightmost column. The audio overview (podcast) and Mindmap capabilities were already available and will not be covered here.

Quiz

The new tool that first caught my attention was the quiz. I have explored various ways to have LLMs ask questions about designated content, but having a built-in tool for doing so is unique. The tool generates a double-digit number of multiple-choice questions that are then stored as an associated resource. I have no idea why the number of items varies. When selected, the questions are displayed one at a time (see the following image), and no matter which alternative is selected, a comment is provided. The “Explain” button (see the red box in the following image) shows the content from the text associated with that question. If you want more questions, the existing set can be deleted and a new set generated.

I think it is valuable when using this tool to be aware of your reactions and to recognize that you are working with an AI tool. As someone who had to generate multiple-choice questions for actual student exams, and knowing what I know about student reactions, I would not use some of the AI-generated questions on an examination. I thought the stems of some AI questions were vague or misleading, but the explanation clarified the logic justifying a given answer. If you understand this and are using the questions as a study experience, the ambiguity is probably fine. As a final comment in this post, I will explain why I prefer to ask an AI tool to generate open-ended (essay) questions on a specific topic and then wait to evaluate my response. Interacting with content in this way is more helpful.

Flash Card Tool

Flashcards are a useful tool for implementing memory enhancement strategies such as retrieval practice and interleaving. While relying completely on flashcards would not be wise, the approach is effective for quick reviews spaced over time. Again, the “Explain” button (see the second of the following images) offers a connection to the original content, which provides opportunities to review beyond simply verifying that a response is correct. Note that NotebookLM generated 69 cards for the targeted chapter, so this tool covers the content in detail.
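The spacing logic flashcards support can be sketched as a simple Leitner system: a correct recall promotes a card to a box reviewed less often, while a miss demotes it back to daily review. This is a generic illustration of spaced retrieval practice, not NotebookLM's actual scheduling, which it does not expose.

```python
# Toy Leitner scheduler: box 1 is reviewed daily, box 2 every 2 days,
# box 3 every 4 days. A generic sketch of spaced retrieval practice.

INTERVALS = {1: 1, 2: 2, 3: 4}  # box number -> days until next review

def update(card, correct):
    """Promote a card on a correct recall; demote it to box 1 on a miss."""
    card["box"] = min(card["box"] + 1, 3) if correct else 1
    card["due_in_days"] = INTERVALS[card["box"]]
    return card

card = {"front": "What is translation?", "box": 1}
update(card, correct=True)  # promoted to box 2, due for review in 2 days
```

The point of the spacing is that each successful recall pushes the next retrieval attempt further out, which is when retrieval practice does its most useful work.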

Video Overview

The video overview seems like an extension of the existing audio option. It strikes me as the equivalent of a lecture based on a series of slides. Imagine a large lecture hall in which you see the screen on which an instructor is projecting her PowerPoint and you hear the audio without seeing the presenter. Having written this chapter, I found it an interesting experience to watch a lecture based on my own words. It is possible to download the video, so I have offered this download through YouTube if you want to see an example.

Just Use Chat

NotebookLM, before these built-in tools existed, was a way to interact with uploaded content using AI chat prompts. I find this capability more powerful than the tools that recreate standard study experiences. The familiar tools are probably a way to engage less experienced learners, but I would argue that approaches such as debating and tutoring offer opportunities that take advantage of the language capabilities of LLMs. The following is an example of a chat I had with the content.

[Prompt] Ask me a short-answer question about generative tasks and wait to evaluate my reply

According to the sources, what two specific categories of generative activities are highlighted for their value in improving understanding, retention, or transfer, and what general type of content processing is involved in these activities?

——————————————————————————–

I will now wait to evaluate your reply.

[Learner reply] I remember the chapter talked about responding to questions and I can’t remember another example from the book. How about writing to learn, but I don’t think that was in the chapter.

[AI response] That was a strong attempt, and your intuition about “writing to learn” was very close to one of the specified terms!

Here is the answer to the question, based on the specific terminology used in the sources:

The two specific categories of research emphasizing generative activities that improve understanding, retention, or transfer are labeled as:

1. “Authoring to learn”

2. “Teaching to learn”

These activities involve a “generative processing of content”.

Your suggestion of “writing to learn” is directly supported by the text, as “authoring to learn” is described as involving the preparation and implementation of writing tasks. This generative processing of content is suggested to be superior in many ways to studying only for personal use.

While responding to questions is certainly a critical activity discussed in the sources—often serving as an external strategy for evaluation and regulation that compensates for weak metacognition—it was not one of the two specific research categories given the label “generative activities” in this context.

[Added comment] What is not visible in my copy of the text from this interaction are the inserted numbered buttons that provide access to sections of the original text.


Notebooks can be shared. You are invited to explore the Notebook that was the basis for this prompt.
