AI has raised a significant number of ethical issues all of us should consider. I think about this topic in terms of my responsibilities as a content creator, and I also wonder about the responsibilities of the companies that offer AI services. There are other categories of participants as well. Perhaps we as consumers should consider how our consumption activities influence the direction of this emerging technology.
The issues at the level of AI companies are widely discussed and probably something I am not qualified to evaluate. One issue is whether these companies can take advantage of content we all can access to build their services without compensating content creators. Are these companies different from any of us as individuals, who can consume this content and then develop products and services based on these inputs? We are bound to obey copyright in producing new material, but for us, as for the AI companies, there is an important difference between "based on" and "copied."
There is also the issue of unknown future consequences. It is often argued that AI and digital technologies move faster than their impact can be evaluated and perhaps controlled through legislation. I see the cost required to create present-day LLMs as one limitation here. I tend to think of universities as playing an essential role in many advances: the focus on basic research questions and more careful evaluation without a profit motive serve an important public service. But universities simply do not have the resources to play this role in this situation.
As a content creator, I make use of AI. I tell myself my AI-based activities are partly exploratory: because I create content intended to inform educators, offering reasonable insights requires that I invest considerable time in understanding the issues associated with AI tools. My activities must involve actual use in addition to reviewing what others have to say. I could invest this time and isolate this exploration from any content I distribute, but I cannot resist. What I will do instead is offer the following descriptions to provide visibility into any content you might consume. Visibility seems a reasonable ethical expectation.
I will divide my visibility remarks into two categories: text and images. The words you see in my posts are nearly all entered through my keyboard and out of my own mind. I do not use a tool such as ChatGPT to formulate the text you read by crafting prompts that direct what is returned from its general knowledge base. What I do on some occasions is use a tool called Smart Connections to summarize notes I have taken from the sources I have read and then stored in Obsidian. Mostly, these are notes and highlights based on research articles and books I read. I have described this workflow in depth in a previous post. Perhaps I can claim that I do my own thinking, but I sometimes use AI to reduce the time required to translate the results of this thinking into words on the screen.
Images are different, and examining where the images I use come from is what prompted these comments on AI and personal ethics. I tend to use images in my posts as a way to activate the existing knowledge of readers. This is me as a cognitive psychologist talking: there is value in connecting new information with what a learner already knows, and an image can stimulate this process.
My problem with images is that while I can appreciate the goal of including relevant ones, I do not have the personal skills to create them. I can't draw, and I cannot always place myself in appropriate situations to capture images with a camera. Of course, there are people who are good at this sort of thing, and there are outlets you can use to secure appropriate images.
For a long time, I paid for a subscription to the Noun Project. The service provides a database of images and compensates artists based on how frequently their images are downloaded.
I now pay the monthly $20 subscription fee for ChatGPT and a couple of other AI services. ChatGPT includes DALL-E. I did not commit to these services to generate images, but since I pay the money for other reasons, I find DALL-E an easy way to secure images I can use. I can directly influence the content of such images, and the quality is clearly superior to the simple line drawings I could locate through the Noun Project.
Here is an example of what I mean. I recently wrote a post about whether the Presidential debates are actually debates, and I used an image from DALL-E when I could have used images from the Noun Project.
So, here is the issue. I can pay ChatGPT for images, or I can continue to compensate artists who create images. Part of my decision must be based on personal finances: DALL-E images are available to me at no additional cost, and these images are more focused and attractive. Yet ChatGPT provides these images without compensating the artists on whose work this technology was developed. What are the long-term consequences for content creators when I and thousands of others make such decisions? What are the long-term consequences when less and less original content is even being created?