How many AI tools?

It’s not that I don’t find AI to be useful. I generate a half dozen images a month to embellish my writing. I search for journal articles I then read to examine an educational issue I want to write about. I examine what I have written to identify errors in grammar or syntax or even identify my use of passive voice which I still can’t figure out. My issue is the monthly subscription fees for the multiple tools that best suit these and other uses. It is simply difficult to justify the $20 a month fee which seems to be the going rate for each of the services and the level of use I make of each service.

I regard my use of AI as both a benefit to personal productivity, but also a subject matter I explore and write about. Writing about the intersection of technology and education is a retirement hobby and I don’t need to do what I do on a budget. However, I don’t think this is true for everyone and I can set as a personal goal an exploration of the financial issues others might need to consider. AI tools differ in how flexible they are. Often, the less flexible tools are optimized to accomplish a specific set of tasks and this narrow range increases ease of use. Users can find themselves evaluating the cost-effectiveness of options based on ease of use versus total cost.

Grammarly as an example

As someone who spends a great deal of time writing, it is worth my time to consider how AI tools can be used to improve the productivity of the time I spend writing and the quality of what I write. Grammarly is a tool suited to such goals. I have relied on both the free and pro versions of Grammarly for a quick way to edit my writing. This is a great product I find helpful. Again, the issue is the Pro version adds another $12 to the total of my monthly subscriptions bill. The free version is great for basic errors, but the pro version is necessary for complete sentence rewrites. It is helpful to compare what I said with the suggested rewrite to decide whether I want to accept the alternate version. 

What I have been exploring lately is the use of more general-purpose AI tools (e.g., ChatGPT, Abacas.ai) as an alternative. I understood that many others were using such tools in this matter, but because I had already been using Grammarly I had not invested the time to explore how more flexible tools might be used to achieve similar goals. Here is the approach I have settled on. I use a multi-level prompt to specify what I want to learn from an evaluation of something I have written. I keep this prompt in a note app so that I can copy and paste the prompt at the beginning of the content I want to evaluate. The prompt appears below. The prompt is written to be flexible so that I can adapt it to different circumstances by adding or removing specific guidelines. For example, I realize I often write in passive voice and some tasks should not include text expressed in this way. The guideline “write in active voice” can be included when I think this approach is important. If writing for a specific age group might be a goal, a guideline related to this goal can be inserted. 

To demonstrate what this prompt accomplishes, I will take what I have written here to this point and apply the prompt.

Evaluate the following content using the following guidelines:

  • Correct spelling errors
  • Correct grammatical errors
  • Correct punctuation errors
  • Write in active voice
  • Suggest corrections
  • Only list the lines containing errors and the proposed corrections
  • Add an explanation for the need for a correction

[Insert content to be evaluated here]

Response from Abacus.ai.

1. **Original:** I examine what I have written to identify errors in grammar or syntax or even identify my use of passive voice which I still can’t figure out.  

   **Correction:** I examine what I have written to identify errors in grammar or syntax and even identify my use of passive voice, which I still can’t figure out.  

   **Explanation:** Added a comma before “which” to correctly punctuate the non-restrictive clause.

2. **Original:** It is simply difficult to justify the $20 a month fee which seems to be the going rate for each of the services and the level of use I make of each service.  

   **Correction:** It is simply difficult to justify the $20-a-month fee, which seems to be the going rate for each of the services and the level of use I make of each service.  

   **Explanation:** Added a hyphen in “$20-a-month” to correctly form a compound adjective and a comma before “which” to punctuate the non-restrictive clause.

3. **Original:** I regard my use of AI as both a benefit to personal productivity, but also a subject matter I explore and write about.  

   **Correction:** I regard my use of AI as both a benefit to personal productivity and a subject matter I explore and write about.  

   **Explanation:** Replaced “but also” with “and” to correct the parallel structure.

4. **Original:** Writing about the intersection of technology and education is a retirement hobby and I don’t need to do what I do on a budget.  

   **Correction:** Writing about the intersection of technology and education is a retirement hobby, and I don’t need to do what I do on a budget.  

   **Explanation:** Added a comma before “and” to separate two independent clauses.

5. **Original:** Grammarly as an example  

   **Correction:** Grammarly is an example.  

   **Explanation:** Added “is” to form a complete sentence.

6. **Original:** Again, the issue is the Pro version adds another $12 to the total of my monthly subscriptions bill.  

   **Correction:** Again, the issue is that the Pro version adds another $12 to the total of my monthly subscription bill.  

   **Explanation:** Added “that” for clarity and changed “subscriptions” to “subscription” for correct singular form.

7. **Original:** What I have been exploring lately is the use of more general-purpose AI tools (e.g., ChatGPT, Abacas.ai) as an alternative.  

   **Correction:** What I have been exploring lately is the use of more general-purpose AI tools (e.g., ChatGPT, Abacus.ai) as an alternative.  

   **Explanation:** Corrected the spelling of “Abacas.ai” to “Abacus.ai.”

8. **Original:** I understood that many others were using such tools in this matter, but because I had already been using Grammarly I had not invested the time to explore how more flexible tools might be used to achieve similar goals.  

   **Correction:** I understood that many others were using such tools in this manner, but because I had already been using Grammarly, I had not invested the time to explore how more flexible tools might be used to achieve similar goals.  

   **Explanation:** Changed “matter” to “manner” for correct word usage and added a comma after “Grammarly” to separate clauses.

The feedback I received would normally have resulted in my making several adjustments. I have not done so here so that you can compare the original with the suggestions. The feedback makes specific suggestions, and I either copy and paste the suggestions into the original source material or simply retype what needs to be fixed. This is clumsier than just clicking on the recommendation offered by Grammarly, and it often requires that I use the search function in a longer text to locate the offending sentence. When writing on my desktop machine, I simply open a second window and move between the window with the AI feedback and the original document to make adjustments.


Writing to learn in collaboration with an AI tutor

I have been working my way through a couple of new and popular books that consider the importance and perils of AI and that contain at least some significant commentary on AI in education. There is not a lot in these books that is based on the research literature I tend to find most influential, but the authors have sufficient experience and opportunities to offer some very credible insights. This is not a book report, but I want to credit a few ideas that encouraged my own exploration.

This time of the year, I often suggest some topics educators might explore over the summer while they have a little more time. With the attention AI has received in the past year and a half, I likely made a related recommendation last year at about this time. Reading these two books (citations at the end of this post) would be very useful if you spend time reading related to your profession. Perhaps you read in a different area. Hopefully, I can offer a few insights that will be sufficient to encourage your own exploration of AI tools.

Ethan Mollick’s book, Co-Intelligence, is different in that it focuses on applications and ways to think about AI capabilities. Mollick offers interesting ideas that sometimes run in opposition to traditional advice. For example, it is OK to interact with AI tools as if they were a person even though you know they are not. Asking questions and making requests as you would with another person is just a practical way to explore AI tools. Mollick also suggests that we stop looking for how-to-do-it techniques for AI. Instead, he suggests we explore. If you have the time, try to use AI whenever there seems to be some possible value and see what happens. In other words, once you get past the basics of how to use a given tool, explore. Value and issues will be different for all of us, so the only way to make decisions is to spend the time. Again, for educators, the summer seems a great time to explore. Finally, understand that your present experiences will be with AI tools that are the least powerful they will ever be. If you find something interesting, but flawed in some way, just wait until you see what will come next.

There were some other suggestions about prompts I found useful. Perhaps the most concrete example is what was described as chain-of-thought prompting. AI tools will try to provide what you ask for, but it may be helpful to provide the sequence you want the tool to follow if a given process seems useful.
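To illustrate (my own example, not one taken from Mollick’s book), a chain-of-thought prompt spells out the steps rather than just the end product:

Summarize the attached article. First, list the main claims the author makes. Second, identify the evidence offered for each claim. Finally, write a one-paragraph summary that connects the claims to the evidence.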

Sal Khan, the creator of Khan Academy, offers thoughts on how AI will be helpful in education in his new book “Brave New Words”. Khanmigo, the adaptation of AI as a tutor within the context of the other opportunities Khan and colleagues provide for learners and educators, received a good deal of attention. An interesting theme seemed to be how this AI tool was prepared to assist, but not do the work for you (my interpretation).

One example, which Khan uses to start his book, I found particularly interesting, and I have attempted to use it as the basis for the implementation I will describe in the comments that follow. The example describes a collaborative writing experience in which the AI tool and a student were assigned personas of two individuals writing collaboratively. The two personas took turns introducing portions of a story, with the other writer finishing the section of the story the first persona had initiated. Two collaborative writers, one controlled by AI and the other by a student.

My version

Several of my posts have considered AI as a tutor, and I have tried to demonstrate how existing AI tools can be used to implement various functions provided by a human tutor. This post makes some similar arguments. Here, I describe an effort to create something similar to what Khan described in his account of collaborative writing, but I am proposing an approach more focused on the strategy of writing to learn. The core idea of writing to learn is that the cognitive processes involved in explaining something by writing require beneficial cognitive activities that would be less likely to occur in more passive study strategies. So, I have been attempting to write a prompt in which I try to create a version of the Khanmigo collaborative story writing example applied as a writing-to-learn task.

It was just announced that Khanmigo will be a free AI tool for teachers, subsidized by Microsoft. Perhaps it would have made sense to use one of the Khanmigo tools to implement what I have been working on. I will leave it to others to make such efforts. I wanted to try to create a similar approach using the AI services I have already been working with and paying for.

What follows was developed using the paid version of ChatGPT and the most recently available model at the time – GPT-4o. I think the following prompt illustrates the chain-of-thought approach (Mollick) and is similar in approach to the Khan collaborative writing example.

Prompt: Obviously, this is longer and more detailed than what many would enter to request action from an AI tool. This prompt was entered into ChatGPT accompanied by a textbook chapter uploaded as a file.

Create an activity in which two students work together to write a document. You will represent a graduate student studying educational technology and I will represent an undergraduate student taking a course preparing teachers to use technology in K12 classrooms. This interaction will be based on information from the attached document. The graduate student will first identify and list significant topics from this document. Then the grad student will identify a topic for the collaborative writing project from this list and suggest an outline. The grad student will then write the introduction to this topic. The graduate student will next write the first couple of sentences about the first item from the outline and then ask the undergraduate to complete writing comments to complete the comments about this section of the outline. Do not include examples or applications in the introductory sentences and allow the undergraduate to provide comments on these topics. Wait for the undergraduate to add their additional content before continuing. After the undergrad submits their additions, offer feedback that evaluates what the undergraduate student has added. What the undergraduate has said should be identified and then how well it summarizes what was included in the material to be read should be presented. Limit feedback to what the undergraduate writes for this section without including information the outline suggested should be included in other sections. The graduate student should then ask the undergraduate student if they want to continue. Do not continue until a response of yes is submitted.  If the undergraduate student responds “yes”, the graduate student should move to the next item from the outline and continue the same sequence of interactions until all items from the outline have been completed.

AI response

Significant Topics from the Document

1. Cognitive System and Learning

2. Memory Stores (Short-Term and Long-Term Memory)

3. Working Memory Constraints

4. Cognitive Processes and Mental Tools

5. Metacognition and Executive Control

6. Higher Order Thinking Skills

7. Meaningful Learning and Constructivism

8. Reception vs. Discovery Learning

9. Modeling Cognitive Behaviors

10. Authentic Activities

11. Project-Based Learning

12. Role of Technology in Learning

Selected Topic for Collaborative Writing Project

Higher Order Thinking Skills

Suggested Outline

1. **Introduction**

   – Definition of higher-order thinking skills

   – Importance in the 21st-century education

2. **Problem-Solving**

   – Description and components

   – Importance in education

3. **Critical Thinking**

   – Definition and key elements

   – Role in evaluating information

4. **Creative Thinking**

   – Explanation and examples

   – Relevance in innovative problem-solving

5. **Integration and Application**

   – How these skills interrelate

   – Examples of application in real-world scenarios

 Introduction

Higher-order thinking skills (HOTS) encompass a range of cognitive processes that are essential for success in the 21st-century educational landscape. These skills go beyond basic memorization or recall of facts, engaging students in complex tasks such as analysis, evaluation, synthesis, and creation of new ideas. HOTS are critical in preparing students to navigate the vast amounts of information available today and to solve novel problems effectively.

Problem-Solving

Problem-solving is a foundational aspect of higher-order thinking skills. It involves identifying a problem, generating possible solutions, evaluating these solutions, and implementing the most effective one. In education, problem-solving skills enable students to tackle complex challenges and develop strategies that can be applied across various domains. 

Your turn: Please add your comments about the importance of problem-solving in education and provide examples of how it can be incorporated into classroom activities.

Me – entered as a prompt:

The issue of whether or not problem-solving is domain specific has received a great deal of attention. Some content areas might be more directly focused on problem solving – e.g., coding, mathematics. If problem-solving is domain specific then the problem-solving skills developed in one area would not transfer to other areas in a positive way. This would suggest teachers in all areas should offer opportunities for problem-solving challenges related to what is studied in these areas.

AI: Feedback

Your addition about the domain specificity of problem-solving is excellent. It addresses a critical aspect of the debate on how problem-solving skills are best developed and applied. Your point about certain content areas, like coding and mathematics, being more directly focused on problem-solving is particularly insightful. Highlighting the importance of teachers in all subject areas providing problem-solving opportunities helps underline the necessity for a holistic approach in education. This aligns well with the document’s emphasis on the importance of developing these skills across various domains.

Would you like to continue to the next section on Critical Thinking?

Me: entered as prompt

Yes

AI

Critical Thinking

Critical thinking is another crucial component of higher-order thinking skills. It involves the ability to analyze information objectively and make reasoned judgments. Critical thinking requires students to engage in reflective and independent thinking, assess the validity and relevance of information, and form well-supported conclusions. This skill is vital in today’s information-rich world, where individuals must navigate a vast array of sources and discern credible information from misinformation.

Your turn: Please add your comments about the key elements of critical thinking and its role in evaluating information, particularly in an educational setting.

ETC: 

This interaction continues according to the instructions in the prompt. ChatGPT has a method for sharing a chat with the public. Here is the link that should display the additional content AI and I generated (public access to chat).  

Related comments: Here are some observations related to my experience.

  1. This interaction is developed on the fly. Redo the entire process and the elements of the interaction may all be different. The topics, the outline, the prompts, and even the way the content is formatted on the screen will be different. The prompt is similar to a program in that it specifies a sequence of operations you want the AI to follow. However, unlike a coded program, here you are asking for a series of actions to be created rather than followed in a fixed manner. The process of generation (it is generative AI) will work a little differently each time it is implemented.
  2. I found the execution of the prompt varied with the AI tool I used. ChatGPT worked the best. I thought that NotebookLM would be an ideal way to implement this writing-to-learn task because of the way in which you can upload content to be the target of interactions with the AI. It is my opinion that the way NotebookLM works is easier to understand and explain. However, after an extended series of attempts to rework the prompt, I was unable to get the required sequence to work. NotebookLM would provide feedback in response to the first item from the outline and then stop.
  3. This post is not an attempt to promote the specific prompt I wrote. I certainly don’t mind if others try it with information sources of their choosing. This was an exploration for me, and it is my hope others may continue in a similar way using my initial effort as a guide.
  4. One final point I think is important. The approach I am describing here is using the interactive capabilities of AI to focus on an information source I trust. I am not asking AI to use its generic information base to provide the content to be learned. The nature of the interaction may not be perfect, but it primarily focuses on a vetted source and assumes learners have read this source.

Resources:

Khan, S. (2024). Brave new words: How AI will revolutionize education (and why that’s a good thing). Viking.

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.


Improving peer editing

Clearly, the teacher is likely to be the most important source of guidance in developing the processes necessary for effective writing. However, peers are also an important resource. The generation of individual feedback is time-consuming, and providing such feedback to a class of students multiple times would be extremely demanding. Our comments here reflect the proposal of writing experts who argue that peers as a group can likely respond more quickly than a teacher working alone and that the comments of peers can augment the feedback provided by the teacher. There is one more argument for involving peers (Bruning, Schraw & Norby, 2011, p. 300). Better writers at all ages appear to be better editors, with poor writers often seeing little of substance that can be changed in their original first drafts. Learning editing skills to provide a useful service to peers develops the same skills that can be applied to the student’s own work.

Peer editing has gained increased attention among researchers, with the research offering greater insight as more specific issues are investigated. For example, Wu and Schunn (2021) note that most previous research has focused on the value of peer feedback in improving the document for which feedback was provided. This is a different issue than whether giving and receiving feedback results in improved performance on future writing tasks. In their research, which involved secondary students enrolled in an AP composition course completing multiple writing tasks, the researchers investigated the impact of peer editing on both the present task and a future writing task. The study supported the positive impact of giving and receiving feedback on both document quality and future performance.

The proposed benefits of peer editing include (Pritchard & Honeycutt, 2007):

1. A non-threatening audience,

2. Increased opportunities to write and revise, and

3. Immediate feedback – if students are to write and revise a lot, the teacher cannot do it all.

Note the use of the qualifier “proposed”. You might argue that some students can be quite insensitive in interacting with peers and so question whether peers offer a “non-threatening audience”. Proposed here implies that these skills can be developed and offer advantages when a reasonable level of competence is present.

One should not assume that effective peer editing is a simple matter of having students exchange papers and offer the comments that come to mind. Without guidance and experience, student comments may be socially awkward and focused on the most shallow of writing skills (e.g., spelling errors).

Peers, by definition, are at approximately the same level of proficiency as those they are attempting to assist. In addition, they often lack the social skills and sensitivity to have their suggestions interpreted as helpful rather than mean. However, given some preparation, spending time responding to the written products of peers can be helpful to the peer writer and a way to develop the writing skills of the reviewer (Pritchard & Honeycutt, 2007).

Here is a brief summary of a series of activities proposed by Simmons (2003) as a process for developing the skills of peer editors. 

1) Teacher models editing. If possible, offer a document you (teacher) have written. Think aloud about what improvements might be made. Make the revisions and then compare the original and the revised documents.

2) Model how feedback should be communicated. Model praise. Model questioning – what was the author trying to do? Model how to offer suggestions.

3) Use peer pairs to initiate peer feedback experience.

4) Use examples from student work and student feedback with the class.

Those who have studied the development of peer editing skills want it to be understood that this is far from a one- or two-lesson process. Early efforts are often a struggle. Student editors develop skills gradually and typically begin with superficial recommendations (spelling, grammar), unmerited praise (not to be confused with encouragement), or insensitive criticism. Regarding teacher expectations, it makes sense that the priorities of review applied to the work of others would be similar to the changes, if any, developing writers would make with experience in their own work. Attention paid to the metacognitive processes of considering the audience and the communication effectiveness of a document as a whole is more abstract than the recognition of grammatical rule violations. Hence, purposeful demonstration, discussion, and practice are important in developing editing skills whether applied to a document developed by the student or a document developed by a peer.

Peer comments should begin with positive comments. What did you like? The targeted writing skills will change as the goals of writing change, either with experience or purpose.

A computer and classroom whiteboard or projector combination is a great way for the teacher to model and provide examples. Writing tools that save comments and recommendations and writing tools that allow a comparison of drafts offer the teacher an external representation of author or peer editor thinking and provide the teacher something tangible to address. What challenges were recognized and what changes were actually implemented? We provide some examples of such capabilities in our online resources.

One interesting model for upper-elementary students developed by Sarah Dennis-Shaw appears on the ReadWriteThink site. This model suggests that students offer peers compliments, suggestions, and corrections.

Compliments

e.g., My favorite part was ___ because ___

Suggestions

e.g., If you gave more details, I would be certain I can understand what you mean.

Corrections

e.g., I found this misspelled word – mispell

It is worth the effort to review the Dennis-Shaw lessons no matter what grade level you work at, as the online resources are quite specific in outlining the steps in the instructional process and also provide sample instructional materials. For example, what might a writing sample used in the training phase look like? We also recommend that you do an Internet search for rubrics or checklists that might be suited to your own instructional circumstances (e.g., the Simon Williams rubric).

References

Bruning, R. H., Schraw, G. J., & Norby, M. M. (2011). Cognitive psychology and instruction (5th ed.). Boston: Pearson.

Pritchard, R. J., & Honeycutt, R. L. (2007). Best practices in implementing a process approach to teaching writing. Best practices in writing instruction, 28-49.

Simmons, J. (2003). Responders are taught, not born. Journal of Adolescent and Adult Literacy, 46(8), 684-693.

Wu, Y., & Schunn, C. D. (2021). The Effects of Providing and Receiving Peer Feedback on Writing Performance and Learning of Secondary School Students. American Educational Research Journal, 58(3), 492-526.


AI, Tech Tools, and the Writing Process

Student access to AI has created a situation in which educators must consider when AI should and should not be used. I think about this question by considering what skill or skills are the focus of instruction and whether AI will replace a skill to improve the efficiency of the writing task or will support a specific skill in some way. It may also be useful to differentiate learning to write from writing to learn. My assumption is that unless specific skills are used by the learner, those skills will not be improved. Hence, when AI is simply used to complete an assignment, a learner learns little about writing but may learn something about using AI.

Writing Process Model

The writing process model (Flower & Hayes, 1981) is widely accepted as a way to describe the various component skills that combine to enable effective writing. This model has been used to guide both writing researchers and the development of instructional tactics. For researchers, the model is often used as a way to identify and evaluate the impact of the individual processes on the quality of the final project. For example, better writers appear to spend more time planning (e.g., Bereiter & Scardamalia, 1987). For educators and instructional designers, understanding the multiple processes that contribute to effective writing and how these processes interact is useful in focusing instruction. 

Here, the writing process model will be used primarily to identify the subskills to be developed as part of learning to write and writing to learn, and I will offer my own brief description of this model. It is worth noting that, aside from composing and revising products, uses of technology to improve writing and increase the frequency of writing experiences seldom receive a lot of attention (Gillespie, Graham, Kiuhara & Hebert, 2014).

The model

The model identifies three general components: a) planning, b) translation, and c) reviewing.

Planning involves subskills that include setting a goal for the project, gathering information related to this goal, which we will describe as research, and organizing this information so the product generated will make sense. The goal may be self-determined or the result of an assignment.

Research may involve remembering what the author knows about a topic or acquiring new information. Research should also include the identification of the characteristics of the audience. What do they already know? How should I explain things so that they will understand? Finally, the process of organization involves establishing a sequence of ideas in memory or externally to represent the intended flow of logic or ideas.

What many of us probably think of as writing is what Flower and Hayes describe as translation. Translation is the process of getting our ideas from the mind to the screen, and this externalization process is typically expected to conform to conventions of expression such as spelling and grammar.

Finally, authors read what they have written and make adjustments. This review may occur at the end of a project or at the end of a sentence. In practice, authors may also call on others to offer advice rather than relying on their own review.

One additional aspect of the model that must not be overlooked is the iterative nature of writing. This is depicted in the figure presenting the model by the use of arrows. We may be tempted, even after an initial examination of this model, to see writing as a mostly linear process – we think a bit and jot down a few ideas, we use these ideas to craft a draft, and we edit this draft to address grammatical problems. However, the path to a quality finished product is often more circuitous. We do more than make adjustments in spelling and grammar. As we translate our initial ideas, we may discover that we are vague on some points we thought we understood and need to do more research. We may decide that a different organizational scheme makes more sense. This reality, interpreted using our tool metaphor, would suggest that within a given project we can seldom be certain we have finished using a given tool, and the opportunity to move back and forth among tools is quite valuable.

Tech tools and writing

Before I get to my focus on AI tools, it might be helpful to note that technology tools used to facilitate writing subprocesses have existed for some time: for example, spelling and grammar checkers, outline and concept mapping tools, note-taking and note-storage apps, citation managers, online writing environments allowing collaboration and commenting, and probably many other tools that improve the efficiency and effectiveness of writing and learning to write. Even the use of a computer allows advantages such as storage of digital content in a form that can easily be modified, rather than the challenge of making improvements to content stored on paper. The digital alternative to paper changes how we go about the writing process. I have written about technology for maybe 20 years, and one of the textbooks offered the type of analysis I am offering here, not about AI tools, but about the advantages of writing on a computer and using various digital tools.

A tool can substitute for a human process, or a tool can supplement or augment a human process. This distinction is important when it comes to writing to learn and learning to write. When the process is what is to be learned, substitution is likely to be detrimental as it allows a learner to skip needed practice. In contrast, augmentation often does the opposite: a busywork activity or some incapability is taken care of, allowing more important skills to become the focus.

Here are the types of tools I see as supporting individual writing processes. 

Planning – Organization and Research

Prewriting involves developing a plan for what you want to get down on paper (or screen in this case). A writer can go about these two subprocesses in different ways. You can think or learn about a topic (research) and then organize these ideas in some way to present them. Or, you can generate a structure of your ideas (organize) and then research the topics to come up with the specifics to be included in a presentation. Again, these are likely iterative processes no matter which subskill goes first.

One thing AI does very well is to propose an outline if you are able to generate a prompt describing your goals. You could then simply ask the AI service to generate something based on this outline, but this would defeat the entire purpose of learning about the topic by doing the research to translate the outline into a product or developing writing skills by expanding the outline into a narrative yourself.

Since I am writing about how AI might perform some of the subskills identified by the writing process model, I asked ChatGPT to create an outline using the following prompt. 

“Write an outline for ways in which ai can be used in writing. Base this outline on the writing subprocesses of the writing process model and include examples of AI services for the recommended activity for each outline entry.”

The following shows part of the outline ChatGPT generated. I tend to trust ChatGPT when it comes to well-established content, and I found the outline, although a little different from the graphic I provided above, to be quite credible and to offer reasonable suggestions. As a guide for writing on the topic I described, it would work well.

I had read that AI services could generate concept maps, which would offer a somewhat different way to identify topics that might be included in a written product. I tried this several times using a variety of prompts with ChatGPT’s DALL-E. The service did generate a concept map, but despite my making several follow-up requests, which ChatGPT acknowledged, I could not get the map to contain intelligible concept labels. Not helpful.

Translation

Tools for improving the translation process have existed in some form for a long time. The newest versions are quite sophisticated in providing feedback beyond basic spelling and grammatical errors. I write in Google docs and make use of the Grammarly extension.

I should note that Grammarly is adding AI features that will generate text. Within the perspective I am taking here, I have some concerns about these additions. Since I am suggesting that writing subskills can be replaced or supported, student access to Grammarly could allow writing subskills the educator intended students to perform themselves to be performed to some degree by the AI.

If you have not tried Grammarly, the tool proposes different types of modifications the writer might consider (spelling, missing or incorrect punctuation, alternate wording, etc.) and will make these modifications if the writer accepts the suggestion. The different types of recommendations are color-coded (see following image).

Revision

I am differentiating changes made while translating (editing) from changes made after translating (revision). By this distinction, minor changes such as spelling and grammar would seem more frequently fixed as edits, and major modifications (addition of examples, restructuring of sections, deletion of sections, etc.) made while revising. Obviously, this is a simplistic differentiation, and both types of changes occur during both stages.

I don’t know if I can confidently recommend a role for AI for this stage. Pre-AI, one might recommend that a writer share their work with a colleague and ask for suggestions. The AI version of Grammarly seems to be moving toward such capabilities. Already, a writer can ask AI to do things like shorten a document or generate a different version of a document. I might explore such capabilities out of curiosity, and perhaps to see how the modifications differ from my original creations, but for work that is to be submitted for evaluation of writing skill, would that be something an educator would recommend?

I have also asked an AI tool to provide an outline, identify main ideas, or generate a summary of a document I have written just to see what it generates. Does the response to one of these requests surprise me in some way? Sometimes. I might add headings and subheadings to identify a structure that was evidently not as obvious as I had thought.

Conclusion:

My general point in this post was that questions of whether learners can use AI tools when assigned writing tasks should be considered in a more complex way. Rather than the answer being yes or no, I am recommending that learning to write and writing to learn are based on subprocesses, and the AI tool question should be answered by considering whether the learner was expected to be developing proficiency in executing a given subprocess. In addition, it might be important to suggest that learning how to use AI tools could be a secondary goal.

Subprocesses here were identified based on the Writing Process Model, and a couple of suggestions were provided to illustrate what I mean by using a tool to drastically reduce the demands of one of the subprocesses. There are plenty of tools out there not discussed, and my intention was to use these examples to get you thinking about this way of developing writing skills.

References:

Bereiter, C., & Scardamalia, M. (1987). An attainable version of high literacy: Approaches to teaching higher-order skills in reading and writing. Curriculum Inquiry, 17(1), 9-30.

Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32(4), 365-387.

Gillespie, A., Graham, S., Kiuhara, S., & Hebert, M. (2014). High school teachers’ use of writing to support students’ learning: A national survey. Reading and Writing, 27, 1043-1072.


Obsidian/Smart Connections Workflow

I use Obsidian plus the plugin Smart Connections to inform my blog writing activities. I write for educational practitioners and academics, so I try to carefully base my content on sources that I have read and in many cases intend to cite in the content I generate. With this goal, Obsidian represents an archive I have developed over several years to store and organize notes from hundreds of books, journal articles, and websites. I explore my collection in different ways, sometimes seeking notes on a specific article I want to emphasize and sometimes exploring to locate what I have read that is relevant to a topic I might want to include but perhaps do not recall at the time.

In some cases, I want to use an AI tool to support my writing. I seldom use AI to actually generate the final version of content I post, but I may explore the possible organization of material for something I want to write or I might use an AI tool to generate an example of how I might explain something based on the notes I have made available to the AI tool. 

The combination of Obsidian augmented by the Smart Connections plugin allows me to implement a workflow I have found useful and efficient. I have several specific expectations of this system:

  1. I have already read the source material and taken some notes or generated some highlights now stored in Obsidian. I want to write based on this content.
  2. I may not recall relevant sources I have stored in Obsidian because of the passage of time and the accumulation of a large amount of material. I want the AI system to understand my goals and locate relevant content. 
  3. I want the AI system to identify specific sources from the content I have reviewed rather than the large body used to train the LLM. I want the system to identify the specific source(s) from this material associated with specific suggestions so that I am aware of the source and can cite a source if necessary.
  4. When a specific source has been identified, I want to be able to go directly to the original document and the location within that document of the note or highlight that prompted the inclusion in the AI content, so that I can reread the context for that note or highlight.

Obsidian with the Smart Connections plugin does these things and is to some extent unique because all of the material (the original content) is stored locally (actually within iCloud, which functions as an external hard drive), allowing the maintenance of functioning links between the output from Smart Connections, the notes/highlights stored in Obsidian, and the original documents (PDFs of journal articles, Kindle books, web pages).

I do not know for certain that the Obsidian-based approach I describe is the only way to accomplish what I do. I am guessing my approach works in part because I am not relying on an online service and online storage. I also use Mem.ai because it allows me to focus on my own content, but linking back to source documents does not work with this service. Mem.ai does include the AI capabilities as part of the subscription fee, but I don’t know whether this might be an advantage. The Smart Connections plugin does require the use of an OpenAI API key (ChatGPT), and there is a fee for this access.

Example:

Here is an example of what working with the Obsidian/Smart Connections setup is like. I am working on a commentary on the advantages and disadvantages of K12 students having access to AI in learning to write and writing to learn. I propose that writing involves multiple subprocesses, and it is important to consider how AI might relate to each of these subprocesses. My list of subprocesses is based on the classic Flower and Hayes Writing Process Model. I had written a description of the Writing Process Model for a book I wrote, and this section of content was stored within Obsidian, as were notes from multiple sources on AI advantages and disadvantages in the development of writing skills. I have not read a combination of the writing process model with ideas about the advantages and disadvantages of AI, so this is the basis for what I think is an original contribution.

The following is a screenshot of Obsidian. The Smart Connection appears as a panel on the right side of the display. The left-hand panel provides a hierarchical organization of note titles and the middle panel provides access to an active note or a blank space for writing a new note. 

In the bottom textbox of the Smart Connections panel, I have entered the following prompt:

Using my notes, how might AI capabilities be used to improve writer functioning in the different processes identified by the writing process model. When using information from a specific note in your response, include a link to that note. 

Aside from the focus of the output, two other inclusions are important. First, there is the request to “use my notes”. This addition is recommended to ensure a RAG (retrieval-augmented generation) approach. In other words, it asks the AI service to use my notes rather than the general knowledge of the AI system as the basis for the output. The second supplemental inclusion is the request to include a link to each note, which is intended to do just what it says – add links I can use to see where ideas in the output came from.
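For readers curious about what RAG means in practice, here is a minimal sketch in Python of the general idea, written with the OpenAI Python library. This is my own illustration of the technique, not how Smart Connections is actually implemented; the note names and contents are hypothetical, and the sketch assumes an OpenAI API key is available in the environment.

```python
# A minimal sketch of retrieval-augmented generation (RAG).
# Illustration only -- not how the Smart Connections plugin is coded.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-ins for notes stored in an Obsidian vault.
notes = {
    "writing-process-model.md": "Flower and Hayes identify planning, translation, and reviewing...",
    "ai-and-writing.md": "AI tools can substitute for or augment writing subprocesses...",
}

def embed(text: str) -> np.ndarray:
    """Turn a piece of text into an embedding vector."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)

# Embed every note once; a real system would cache these vectors.
note_vectors = {name: embed(body) for name, body in notes.items()}

def top_notes(question: str, k: int = 2) -> list[str]:
    """Rank notes by cosine similarity to the question and keep the best k."""
    q = embed(question)
    scores = {
        name: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for name, v in note_vectors.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

question = "How might AI support the processes in the writing process model?"
context = "\n\n".join(f"[[{name}]]\n{notes[name]}" for name in top_notes(question))

# The retrieved notes are prepended to the prompt so the model answers
# from my material rather than from its general training data.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the notes provided. Cite notes by their [[name]]."},
        {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
    ],
)
print(completion.choices[0].message.content)
```

The key design point is that the retrieval step, not the language model, decides which of my notes are relevant; the model then writes from that retrieved material, which is why the output can point back to specific sources.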

The output from Smart Connections is in markdown. I copied this output into a new blank note and the links included are now active.

I purposefully selected a note that initially was part of a web page for this final display. I had originally used a tool that allowed the annotation of web pages and then the exporting of the annotated and highlighted content as a markdown file I added to Obsidian. This file included the link from the note file back to the online source. As you can see, the link from Obsidian brought up the web page and, with the assistance of the annotation service added as an extension to my browser, displayed what I had highlighted within this web page. Interesting and useful.

Conclusion:

We all have unique workflows and use digital tools in different ways because of differences in what we are trying to accomplish. What I describe in this post is an approach I have found useful and I have included related comments on why. I hope you find pieces of this you might apply yourself.


Writing to Learn Research – Messy

Writing to learn is one of those topics that keeps drawing my attention. I have an interest in what can be done to encourage learning and approach this interest by focusing on external tasks that have the potential to manipulate the internal cognitive (thinking) behavior of learners. I take this perspective as an educational psychologist with a cognitive orientation. I have a specific interest in areas such as study behavior, trying to understand what an educator or instructional designer can do to promote experiences that will help learners be more successful. The challenge seems obvious – you cannot learn for someone else, but you may be able to create tasks that, when added to exposure to sources of information, encourage productive “processing” of those experiences. We can ask questions to encourage thinking. We can engage students in discussions that generate thinking through interaction. We can assign tasks that require the use of information. Writing would be an example of such an assigned task.

Writing to Learn

Writing to learn fits with this position of an external task that would seem to encourage certain internal behaviors. To be clear, external tasks cannot control internal behavior. Only the individual learner can control what they think about and how they think about something, but for learners willing to engage with an external activity, that activity may change the likelihood that productive mental behaviors are activated.

I found the summary of the cognitive benefits of writing to learn useful and consistent with my own way of thinking about other learning strategies – external tasks that encourage productive internal behaviors. Writing based on content to be learned requires that the writer generate a personalized concrete representation at the “point of utterance”. I like this expression. To me, it is a clever way of saying that when you stare at the screen or the empty sheet of paper and must fill the void, you can no longer fool yourself – you either generate something or you don’t. You must use what you know and how you interpret the experiences that supposedly have changed what you know to produce an external representation.

To produce an external product, you must think about what you already know in a way that brings existing ideas into consciousness (working memory) by following the connections activated by the writing task and newly acquired information. This forces processing that may not have occurred without the external task. Connections between existing knowledge and new information are not necessarily made just because both exist in storage. Using knowledge to write or to perform other acts of application encourages making connections.

Such attempts at integration may or may not be successful. Having something external to consider offers the secondary benefit of forced metacognition. Does what I wrote really make sense? Do the ideas hang together or do I need to rethink what I have said? Does what I have proposed fit with the life experiences (episodic memory) I have had? 

Writing ends up as a generative process that potentially creates understanding and feeds the product of this understanding back into storage.

Graham, Kiuhara, and MacKay (2020)

In carefully evaluating and combining the results of many studies of writing to learn, these researchers intended not only to determine whether writing to learn had the intended general benefit but also to use the variability of writing tasks and outcomes across studies to deepen our understanding of how writing to learn encourages learning. Surely, some activities would be more beneficial than others because of the skills and existing knowledge of learners or the specifics of the assigned writing tasks. So, the meta-analysis is asking whether there is a general effect (is writing to learn effective?) and, secondarily, whether there are significant moderator variables that may help potential practitioners decide when, with whom, and how to structure writing-to-learn activities.

The research by Graham and colleagues focused only on K12 learners. Potential moderator variables included grade level, content area (science, social studies, mathematics), type of writing task (argumentation, informational writing, narrative), and some others. I have a specific interest in argumentation, which is relevant here as a variable differentiating the studies because it requires a deeper level of analysis than, say, a more basic summary of what has been learned.

Overall, the meta-analysis demonstrated a general benefit for writing to learn (effect size = .30). This level of impact is considered on the low end of a moderate effect. Graham and colleagues point out that the individual studies included in the analysis generated great variability. A number of the studies demonstrated negative outcomes, meaning that in those studies the control condition performed better than the group spending time on writing to learn. The authors propose that this variability is informative, as it cannot be assumed that any approach with this label will be productive. The variability also suggests that the moderator variables may reveal important insights.

Unfortunately, the moderator variables did not achieve the level of impact necessary to argue for useful insights as to how writing to learn works or who is most likely to be a priority group for this type of activity. Grade level was not significant. The topic area was not significant. The type of writing task was not significant. 

Part of the challenge here is having enough studies focused on a given approach with enough consistency of outcomes to allow statistical certainty in arguing for a clear conclusion. Studies that involved taking a position and supporting that position (e.g., argumentation) produced a much larger effect size, but the statistical method of meta-analysis did not reach the level at which a certain outcome could be claimed. 

One interesting observation from the study caught my attention. While writing to learn is used more frequently in social studies classrooms, the number of research studies associated with each content area was the smallest for social studies. Think about this. Why? I wonder if the preoccupation of researchers and funding organizations with STEM is responsible.

More research is needed. I know practitioners and the general public get tired of being told this, but what else can you recommend when confronted with the messiness of much educational research? When you take ideas out of carefully controlled laboratories and try to test them in applied settings, the results here are fairly typical. Humans left to their own devices as implementers of procedures and reactors to interventions are all over the place. Certainly, the basic carefully controlled research and the general outcome of the meta-analysis focused on writing to learn implementation are encouraging, but as the authors suggest, the variability in effectiveness means something, and further exploration is warranted.

Reference

Graham, S., Kiuhara, S. A., & MacKay, M. (2020). The effects of writing on learning in science, social studies, and mathematics: A meta-analysis. Review of Educational Research, 90(2), 179-226.


Follow bloggers for a deeper context

I have had at least one blog since 2002. Since that time, I have also had control of the server on which my blog and other content I created was stored. At the beginning of this period, I worked at a university and was able to run a server through the university network. This meant I had a dedicated IP address for my site, and once someone found my content and bookmarked the site, the site would always be available at that address. I can’t remember the exact address, but it did indicate my server was part of the more general university network.

When I began to generate free content intended to supplement the textbook my wife and I had written through what was originally Houghton-Mifflin, I decided that even though the content I was offering was free to any viewer, it might appear I was using university resources to benefit myself financially, so I began renting server space. I have continued to host my content through Bluehost since that time. This company provides services at multiple levels. My blogs make use of WordPress, but I have a general account because I use Bluehost for content other than blogs.

I spend about $200 a year for the server space and the cost of two domains (learningaloud and curmudgeonspeaks). I include this financial information because part of the issue of how you provide online content has a financial component. Among the financial issues are whether you want to make money and whether you want to minimize personal costs. None of the content on my server is behind a paywall and there are Google ads on some of my content. The income from ad clickthroughs is less than $25 a year. So, I must recognize that my site is a hobby and the inclusion of ads is pretty much a matter of personal curiosity. I follow the analytics my site generates as part of this hobby. The activity level the site generates is sufficient to maintain my interest, but has declined in this last decade. I attribute this decline to moving from having textbooks sold through a textbook company to self-publishing via Amazon. My motives for this transition have been documented in my blog posts and were related to my interest in investigating a different model for textbooks that combined a smaller and less expensive book with online resources. 

I am writing this post partly as an extension of a previous post that considered cross-posting my blog content to Medium and Substack. One way to look at the purpose of this post might be to explore the question of why, with the availability of services such as Medium and Substack (and other outlets), anyone would want to continue to pay to host personal content. I am not alone in asking this question. My take is related to, but not equivalent to, the concept of COPE (create once, publish everywhere), but focuses on different values and factors.

I find that a core belief I have about having a location where you store and host your own content has considerable overlap with my beliefs concerning the value of books. I believe book authors and content providers bring a perspective and context to their creative work that is not maintained in pieces of content experienced in isolation. You might argue that this is fine because as a consumer you will build your own understanding based on the elements of information you pick up from multiple authors. I agree you might and probably should do this. However, models of understanding are transferable and can be used to build on and contrast with personal efforts to develop understanding.

Yes, this sounds pretty abstract and vague. Think of what I describe as a model as a way of understanding – how you see things working and what causes what to happen. Ways of understanding (models) can be general or specific, and they can be complete or incomplete. Sometimes we have flawed ways of understanding that seem to work in some situations, but we may at some point find our way of understanding does not work in others. We can be convinced we have things figured out when this is really not the case, and it is only when we try our models in actual situations or compare them to the models of others with different and perhaps greater experience that we see a bigger picture.

I have generated thousands of posts over the 20+ years I have been involved in blogging. I am certain some of my posts are naive and wrong, and some may be inconsistent. Like a book, the collection does emphasize a limited set of ideas and provides connections among these ideas. These posts are tagged and organized so any interested party can explore related ideas and the broader context of my thinking. You just don’t get this with the selection of posts I add to Substack or Medium.
