Will capitalism offer an alternative to surveillance capitalism?

Surveillance capitalism, a term popularized by the scholar Shoshana Zuboff, describes the collection of information about people for the purpose of economic gain. Zuboff’s work focused on surveillance capitalism as practiced by social media companies, paying particular attention to a) users’ lack of understanding of the extent to which information about their online behavior was being collected and integrated and b) companies’ efforts to design online services so that users would become more committed to the service and provide more behavioral data in the process.

I have no training as an economist, and I likely misunderstand many of Zuboff’s interrelated arguments involving economics, the psychology of confirmation bias and behavior modification, legal issues associated with informed consent, and factors such as the “network effect”. I do have some experience offering online content complete with ads, and I have some experience applying both ad blockers and code that identifies ad blockers. I also have some experience with online services that are attempting to implement other approaches to collecting and sharing revenue for online services.

I would describe the question I am attempting to answer this way: Can capitalism offer an effective alternative to surveillance capitalism, or will the greed of some online companies eventually lead to government intervention? I suppose the answer could be something other than the two options I identify, but I am betting things will go one way or the other.

The use of an online social media service involves the interaction of three and perhaps four parties: you as the user, the content creator, the service (e.g., Google, Facebook, Blogger), and possibly the company offering an ad (actually a combination of the advertiser and the ad delivery service). Each party has costs and benefits in this interaction. In the most common present model, a user attempts to use a service – e.g., reads a blog post on Blogger. The user reads content for the cost of the personal information collected by the service (Google). This is the cost to the user. The service (Blogger, a service of Google) covers the cost of providing the infrastructure with the potential revenue from the personal information it collects and possibly from ad clicks. The ad company (the infrastructure company for ad delivery) is compensated by those who want ads to be viewed. The content creator is compensated for the work (their cost) of creating content when an ad associated with that content is clicked. This is a balanced system – or at least it is proposed to be.
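The exchange among these parties can be sketched as a simple cash ledger. The dollar amount and the revenue shares below are hypothetical illustrations, not actual figures from Google or any ad network; the point is only to show how the user's cost sits outside the cash flow entirely:

```python
# A minimal ledger sketch of the ad-funded exchange described above.
# All amounts and shares are hypothetical illustrations.

def ad_funded_exchange(ad_click_revenue, creator_share, service_share):
    """Split the revenue from one ad click among the four parties.

    The user 'pays' with personal data rather than money, so the
    user's entry in the cash ledger is zero.
    """
    return {
        "advertiser": -ad_click_revenue,              # pays for the click
        "service": ad_click_revenue * service_share,  # infrastructure (e.g., Blogger/Google)
        "creator": ad_click_revenue * creator_share,  # compensated for content
        "user": 0.0,                                  # cost is personal data, not cash
    }

# One $1.00 click, with a hypothetical 68/32 creator/service split.
ledger = ad_funded_exchange(ad_click_revenue=1.00, creator_share=0.68, service_share=0.32)
```

Note that the user's entry is zero: in this model the user pays with personal information rather than money, which is exactly why the service can be presented as “free.”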

Consider what happens to the present system when one of the parties finds a condition unfair or devious. This would be the case when users (viewers) of content object to surveillance capitalism because they find the cost of revealing their personal information out of balance with what they receive. They might deploy an ad and cookie blocker to reduce the personal information that is revealed. This prevents the infrastructure provider from receiving the benefit of the harvested information and any funds from ad clicks. The infrastructure company is then providing a free service while still paying the costs of the infrastructure. Likewise, the content provider continues to spend the time (a resource) involved in generating content and receives no compensation for this labor, and the ad company receives no income because no ads are viewed and clicked. In solving the problem of unwanted surveillance, the user has defunded all of the other parties involved, and in the long run this will likely have consequences. For example, the infrastructure company could block delivery of requested content to users who block the accompanying ads. Content creators could look for other outlets for their content or at least reduce the amount or quality of the content they produce.

Are there solutions existing or new companies can apply to bring these parties into some fair balance? I think some serious efforts are starting to emerge.

First, there is the subscription model that has been successful with streamed music. Some of the same companies now focused on music are expanding their model to include podcasts. To access podcasts affiliated with a service, users will have to pay for a subscription. In some cases, the subscription they pay for music would also include podcasts. Content creators would be paid a small amount each time their podcasts were served. This model could be extended to other content types – e.g., Medium for written material. What seems to be happening at present is that the more popular creators are compensated while those who receive less attention can offer content but are not compensated.

A second service I think has great potential is the Brave browser [https://brave.com/]. You can download this software for pretty much any device. The browser is really part of a service that involves three capabilities. First, the browser blocks cookies and scripts. The user can control this capability as a general approach (everything is blocked) or turn blocking on and off depending on the site. Second, the more general service associated with the browser allows users to commit an amount of money that is distributed to content providers/services in proportion to the time spent on the sites visited. Content creators must enroll (at no cost) to be compensated in this manner. Each month, the sites visited are listed for the user, and the user can drop any site from the compensation list. Money is not forwarded to sites until they register. Finally, the service is just rolling out a potential compensation opportunity for users. This approach substitutes ads served through Brave for the ads that normally accompany the content when it is viewed with other browsers. Users are paid to view these ads, but the data normally collected via cookies and scripts are not allowed to pass. This final component of the Brave model is just being tested at this time. Brave takes part of the revenue when users offer compensation for content/services and when companies offer ads through Brave. These funds compensate the infrastructure and the company.
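The contribution component just described amounts to a proportional, attention-weighted payout. Here is a minimal sketch of that idea, assuming a simple time-based split; the function name, the rounding, and the site names are my own inventions for illustration, not Brave's actual implementation:

```python
# A sketch of a proportional, attention-based contribution payout.
# Details (names, rounding, fee handling) are assumptions, not Brave's code.

def distribute_contribution(monthly_budget, minutes_per_site, registered_sites):
    """Split a user's monthly budget across visited sites in proportion
    to time spent; only enrolled (registered) creators are paid."""
    total_minutes = sum(minutes_per_site.values())
    payouts = {}
    for site, minutes in minutes_per_site.items():
        share = monthly_budget * minutes / total_minutes
        if site in registered_sites:
            payouts[site] = round(share, 2)
        # Unregistered sites earn nothing here; in the described model,
        # funds are held until the site enrolls.
    return payouts

payouts = distribute_contribution(
    monthly_budget=5.00,
    minutes_per_site={"blog-a.example": 90, "news-b.example": 30},
    registered_sites={"blog-a.example"},
)
# blog-a.example receives 90/120 of $5.00 = $3.75
```

A real system would also need to escrow the funds destined for unregistered sites and deduct the platform's cut, which this sketch omits.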

So, there are multiple opportunities here (as I understand the possible combinations). Users could use Brave simply to block ads. Users could contribute and block ads. Users could block ads and receive compensation for viewing Brave ads. Users could block ads, contribute, and receive compensation for ads viewed via Brave.

Again, it is possible to consider how the user, content creator, and infrastructure provider are compensated when these different combinations are implemented. If a user blocks ads and nothing else, the user receives content or uses a service at no cost, but the content creator and service provider are not compensated. If the user blocks ads and submits a voluntary contribution, the user receives access to the content or service and the provider is also compensated. If the user blocks ads but accepts ads served through Brave and offers no contribution, the user benefits in multiple ways (content and revenue) while viewing ads, and the providers receive some compensation. The difference between this final combination and the most common existing experience is that the user views ads but does not provide personal information that can be shared.
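The combinations just walked through can be captured in a small decision table. The labels are my own shorthand for the roles discussed above, not Brave terminology:

```python
# A sketch enumerating the user-choice combinations discussed above.
# Labels are informal shorthand, not Brave terminology.

def outcomes(blocks_ads, contributes, accepts_brave_ads):
    """Who pays and who is compensated under one combination of user choices."""
    return {
        "user_reveals_data": not blocks_ads,    # tracking happens only if ads/scripts pass
        "creator_compensated": contributes or accepts_brave_ads,
        "user_compensated": accepts_brave_ads,  # users share in Brave ad revenue
    }

# Blocking ads and doing nothing else defunds every other party:
only_block = outcomes(blocks_ads=True, contributes=False, accepts_brave_ads=False)
```

The interesting row is the last combination: with ads accepted through Brave, both user and creator are compensated while no personal data is revealed.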

I hope that I am understanding Brave’s long-term view appropriately in claiming that the content creator potentially receives revenue both from contributors and from ads shown by Brave. It would be possible to compensate users, but not content creators, when ads are displayed.

If Brave gets this approach off the ground and attracts sufficient users, I predict the model will change such that content contributors will make an exclusive commitment to have their content viewed through Brave, and users will have to accept viewing ads from these sources unless they are contributors. This would guarantee a revenue source for users and content creators. Note – I am guessing at a long-term model here. If this should happen, I would also predict that a similar model would be adopted by other sites such as Facebook and Twitter. Google would be most harmed by this model because it would then be in competition with Brave to make money by selling and displaying ads.

In summary, I see alternatives to surveillance capitalism should these competitive models take hold. The present surveillance capitalism model is still far more popular, but public awareness of the collection and sharing of personal information is encouraging ad/cookie blocking. Should ad and data-collection blocking become common, the lack of compensation opportunities for certain categories of content and service creators will eventually extinguish their willingness to work without pay.

If capitalism doesn’t take on surveillance capitalism, government intervention seems very likely.

 


Salman Khan at FETC

Salman Khan was the featured speaker at this year’s Future of Educational Technology Conference. I had hoped that his FETC presentation would be available, but I have not been able to locate it online (I did find this interview). The conference in Orlando was a direct flight from Grand Forks and probably the ed tech conference I found most informative. It was also in late January, which was a great time to escape from North Dakota for a few days.

I have been following Khan and his Khan Academy since I first saw his TED presentation. At some point during the process of developing his online resources, Khan began describing what he was doing as supportive of a mastery approach. I am fairly certain this realization came sometime after his work became popular, and I appreciated his association with the mastery learning research that guided my own early thinking about individualization in the 1980s. Too many innovators seem to want to give new names to older concepts. For me, technology provided a practical way to apply mastery concepts in classrooms. The work from the 70s-80s explored the potential of the core ideas, but this form of personalization was very difficult for educators to implement.

My own thinking about “personalization” assumes there are two issues that should be individualized – a) interests and b) existing knowledge and speed of learning. Neither variable can be addressed with a group-based approach. When the individualization of instruction to address differences in existing knowledge and speed of learning is implemented via a system such as the Khan Academy, what is happening is unfairly described as students being drilled by a computer. This perception misrepresents how a teacher’s time is intended to be applied.
Technology is applied to individualize information presentation and performance evaluation, providing data that allows educators to recognize where their mentorship and tutoring can most usefully be applied. This type of interaction does not happen often enough in most classrooms.


Keep it in Keep (iPad)

Google Keep offers an efficient and free way to archive content as you spend time on the Internet. I have described this service before but did not explain how it works on different devices. This post deals specifically with the iPad.

If you have Keep on your iPad, sending content to your Keep archive makes use of the “share” feature. The one tricky thing about sharing on the iPad is that you must activate “share” for specific apps. Here is the process.

The share icon (top red box) opens a display of the options. At the right-hand end of the existing options, an icon with three dots (see red box) offers the opportunity to activate other share possibilities.

The three-dot icon opens a list of the apps that can be coordinated with the active app. Use the slider associated with a given app to make it available. Once activated, the app will appear as an outlet for selected content when the share option is used.


Post-Truth for Educators

The following comments are based on a recent book by Lee McIntyre – Post-Truth. I suggest it as a useful read for any educator interested in information literacy, science denial, and similar topics. I think it supports my observation that teaching students to evaluate the credibility of the online resources they encounter is not enough to assure they develop an accurate representation of their world. A broader approach to online literacy is required. This book offers some suggestions, but it is possibly more useful in identifying the multiple factors that have created the information environment in which we presently function.

I will not attempt to write a summary of the multiple issues McIntyre’s book identifies. The resulting text would be far too long for a blog post. I will attempt to identify some of the key factors the book identifies as I interpret them. The result is more an outline of factors than a complete description. When I offer my own observations or comments, I will do so within brackets so my ideas can be differentiated from my summary of the author’s comments.

Factors creating the Post Truth environment:
  • Personal factors – Asch conformity [influence of group position], confirmation bias [The author seems to propose a brain predisposed to support existing personal perspectives – I see this as a constructivist model of learning.]
  • Decline of mainstream media due to falling ad revenue as readers/viewers go elsewhere; financial difficulties result in a decline in the number of actual reporters; decline in the shared information experience as readers/viewers no longer have a common set of sources.
  • Rise of 24/7 news; much more time spent on opinion relative to actual news; embrace of a perspective/bias; cultivation of interest/viewers through the promotion of conflict. Presenting both sides of a story when one side has little credibility confuses viewers and feeds personal biases – balance should not be confused with objectivity.
  • Social media – selection of friends, bias in what friends share, bias in what service feeds reader to suit their values
  • Weak approach of formal science – science is always questioning and suggesting more research would be helpful. This cautious approach misleads the public as to what is actually known.
  • Generation of purposeful falsehoods likely to be embraced by target group. Falsehoods given greater reach by sympathetic audience and bots.

Some ideas regarding what can be done

  • One remedy is heterogeneous group interaction – develop an appreciation for the scrutiny of others [This fits the value of argumentation]
  • Important to call out, rather than ignore, lies no matter how obvious the falsehood.
  • Invest in a quality news source. Free is not free and not suited to meaningful learning.


Teach highlighting and notetaking skills

Technology offers learners study skill opportunities that were not available until recently. A vast literature investigating highlighting and notetaking exists, but few K-12 educators have been trained to help their students use these study skills effectively. While some may offer advice on taking notes, highlighting has been largely ignored because marking up content intended for future use by other students was forbidden. Digital content eliminates this problem, but the opportunities of content in digital form have been largely ignored.

My own familiarity with highlighting and notetaking goes back to the late 1970s and 1980s. It is my impression that these study strategies were heavily investigated during that time frame because of the interest in generative strategies. Interest seemed to wane, but I sense a return of some of these ideas.

I recommend two recent sources:

Miyatsu, T., Nguyen, K., & McDaniel, M. A. (2018). Five popular study strategies: Their pitfalls and optimal implementations. Perspectives on Psychological Science, 13(3), 390–407.

Surma, T., Camp, G. & Kirschner, P. (translated) Less is more: Highlighting as learning strategy. [https://3starlearningexperiences.wordpress.com/2019/01/08/less-is-more-highlighting-as-learning-strategy/]

Miyatsu and colleagues make an interesting point about study strategy research. They suggest that researchers have focused on developing new study techniques, but these techniques have been largely ignored by students. Miyatsu recommends that greater attention be focused on the strategies students actually use and how these strategies might be optimized.

Highlighting and annotating (simplified notetaking) fit well with my interest in opportunities for the application of online layering opportunities.

Here is a quick perspective on the highlighting and notetaking research.

The potential benefits of both techniques are approached as potentially resulting from generative processing (activities while reading/listening) and external storage (improvement of review or studying). Of course, these are interrelated, as better highlighting and notetaking should improve later review (I will comment later on whether this relationship still holds). A quick summary might be that a) the benefit of notetaking appears to be in review and b) the benefit of highlighting appears to be in the generative act of highlighting itself. I cannot offer an explanation of why these strategies appear to work in different ways.

One further comment related to my reference to layering: highlights and annotations (simplified notes) can be provided rather than generated by students. Provided highlights and annotations can benefit review and may be a way to teach a better generative approach. One finding of these more recent reviews of the literature is that K-12 students do not benefit from highlighting opportunities while college students do. This could be because younger students have not practiced the technique and, when provided the opportunity, do not highlight in an effective way. They do benefit when important content is highlighted for them.

With notetaking, more generative strategies (paraphrasing vs. verbatim) improve the benefits of the notetaking process, but verbatim notes are more effective for external storage (review). I think this could be improved by apps that allow notetaking while recording presentations. The notes taken within such apps are timestamped, allowing review of the original recorded content when the notes seem confusing. Students using this approach could also just enter a marker, e.g., ???, in their notes when confused rather than overload working memory, and use this marker to return to the spot in the recording for more careful thought when studying. The notes could even be improved later using this same approach.
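The timestamped-notes idea can be made concrete with a small sketch. The `Note` structure and the `???` marker convention here are hypothetical, not taken from any particular app; the point is that stamping each entry with its position in the recording lets a student jump straight back to the confusing moment:

```python
# A sketch of timestamped notes with a confusion marker, as described above.
from dataclasses import dataclass

@dataclass
class Note:
    seconds: float  # position in the recorded presentation
    text: str

def confusion_points(notes, marker="???"):
    """Return the recording timestamps the student flagged as confusing."""
    return [n.seconds for n in notes if marker in n.text]

notes = [
    Note(12.0, "Generative processing = mental activity while listening"),
    Note(95.5, "??? external storage vs. encoding benefit"),
    Note(140.0, "Verbatim notes better for later review"),
]
# confusion_points(notes) -> [95.5]
```

During review, the student jumps to 95.5 seconds in the recording rather than trying to reconstruct the confusing point from memory.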

If you don’t have access to a college library, you may be unable to read the Miyatsu paper, but the second reference is online and offers some useful analysis.


Missing the point

I am now retired, but I still enjoy the beginning of a new semester. Every few days or so I check Amazon to see if anyone has purchased our textbook. Sales are nothing like earlier times, when we were selling an expensive paper version through a publishing company, but the present circumstances are more about feeling relevant.

Our transition from a big publishing company to self-publishing was originally motivated by our interest in a different model for the college textbook. The motive was partly to offer a less expensive textbook (I called it the $29 textbook project), but also to offer a different approach more suited to the content – technology in education. We thought it ironic that our original project sought to develop classroom technology integration skills in educators using a book. We wanted to differentiate the content sources by creating a primer for the more static information, with web content to improve the recency of information and to offer actual demonstrations and examples. We also thought it made more sense for authors to write continuously rather than intensely once every three years.

We went back and forth with our publishers for several years and could never get to the point of implementing our project. Book companies see efficiencies in using standardized tools and approaches. For example, they wanted to offer professionally shot and edited videos of a topic such as problem-based learning that they could use in several of their education books. These videos would not necessarily involve technology. We wanted to offer videos of the problem-based activities featuring the teachers who implemented them and whom we used as examples in our book. We wanted to create the opportunity for the educators who adopted our book to share among themselves. For example, we were promoting the idea of an “interactive syllabus” – just a web page serving as a course syllabus and linking to tasks and resources used to augment assigned readings. There is no reason to treat the book authors as the experts when profs and students have their own experiences and tasks that could be shared to our Primer.

Eventually, we agreed our priorities were not compatible, and after five successful editions we were given full control of the copyright to our book to do with as we wanted.

So, we became self-publishers and have tried to offer a scaled-down version of some of our ideas. We went from selling a $140 book and receiving royalties (12% of the wholesale value) to a $9 book receiving 70% minus a fee based on the download size of the ebook. The one frustration we have is that the number of instructors adopting our book is limited and, while we don’t know for sure, the activity associated with our online content seems unrelated to use of the book. For example, one would expect content associated with early chapters to be used early in a semester, and we see no indicators of this nature. We can only guess why this is the case because we don’t really have a way to ask the adopting profs, as would be possible for the book reps who contact profs for the commercial textbook companies. I still think the diversity of resources and a closer link between book and supplemental content are good ideas, but we have found over the years that it can take time for new ideas to be implemented.


A model of models

Attempting to explain how learning and thinking work is not easy. After doing this for a living, I am still uncertain what the best approach is. We all know we know things, and we all know we learn, but just how we did and do this is often not obvious. To help understand, I suggest we make use of abstractions that simplify but still have utility. To have value, a model or abstraction must be consistent with data (observations of how a process works) and be useful in suggesting testable practices.

I could use the jargon of my profession and my theoretical perspective on it, but I propose that the concepts of thinking and model both meet the conditions I have identified. Thinking implies that learning requires mental work (at least most kinds of academic learning). Easy-to-understand attributes of thinking are helpful. The learner is the one who thinks (or not), so learning ultimately depends on the activity of the learner no matter the circumstances external to the learner. However, other people and experiences outside ourselves can encourage us to think. We think more easily about things with which we are already familiar; hence, what we already know shapes how easily and how successfully we can think. The thinking we do that uses existing knowledge and skills can result in the modification and improvement of what we already know, and this is what most mean by learning. These are the basics.

What we accomplish by learning means several different things. We know things after learning that we did not know before. We can do things after learning that we did not do before. We acquire understanding and capabilities. We also acquire information which may or may not be part of understanding.

The idea of a model is helpful in thinking about both understanding and capabilities. The construction of models of understanding and models of action is a natural process. By natural I mean that we as individuals construct models continuously, whether to deal with the mysteries of daily life or because of formal education. To help students understand and think about the commitment we continuously make to models, I like to ask them what they mean by the concept of a theory. A theory is one type of model. Students often use the word theory in a condescending way, perhaps dismissing the information (models) acquired in a course as theoretical, implying that theories are not useful. To the contrary, I claim, theories are how we think, and if we do not acquire theories from formal instruction, we make them up based on personal experience. A theory, personal or formal, is the abstraction we use to understand, to predict, and to decide on action when we encounter a unique life experience – pretty much any new input from the world.

What is kind of cool (interesting) is that individuals are quite capable of storing contradictory, or at least inconsistent, personal and formal theories about the same phenomena. In other words, we may develop one interpretation of a certain kind of situation based on our life experiences that is different from the interpretation we are taught. How can this happen and what can be done about it? It can happen because an external experience activates one internal model or the other. This is one of the frustrations of education. We can help learners acquire a model of the way something works in the world. They can understand this model and use it effectively within the classroom setting, but they can still revert to their own primitive model of how something works when encountering circumstances in daily life to which the more formal theory should have been applied.

This was a very long introduction to get to the core issue. What are the models educators use to guide their work in helping students learn? This could be a question of whether educators activate formal models or personal models to guide their practice. Given what I have said about formal models and naive models (naive is the term applied to models built from field experience without the use of formal guidance), this could be a great topic to consider. I will have to save a discussion of this distinction for another time. Here, I want to share my personal bias when it comes to the utility of several formal theories.

Models of learning

Somewhere in the preparation of educators, most are exposed to multiple models of learning. At the least, they have probably been told about behaviorism, cognition, and constructivism. Recently, some preparation experiences may include some biology – brain structure and function. Certainly, biology and biochemistry have the potential to describe learning most accurately. I think an important issue is whether a more accurate description advances education or not. My personal opinion at this point is that our understanding of how the brain functions in learning is rather crude, and I am aware of very little that improves on what other models describe and explain. I have an undergraduate major in biology, but that was a long time ago, and what I know now is roughly the content about brain biology included in your average Introduction to Psychology textbook. The one exception I can think of to my claim that there is nothing unique about a biological perspective is neural plasticity – the finding that long-term experiences of a given type can restructure the brain to predispose individuals to different patterns of mental behavior. I believe this idea is helpful. However, the interpretation of this phenomenon within education has also been generalized in ways that I think are inaccurate and certainly not a basis for significant changes in practice.

Here is my short list of models (actually categories of models) of learning and a very brief comment on what I see as the core mechanism of each.

  • Behaviorism – emphasis on external events and consequences that increase and decrease the frequency of behaviors.
  • Cognitive – constructivist (macro) and information processing (micro) – mental activity under the control of the learner. Thinking develops internal models.
  • Biological – chemical and biological action and storage (internal). Learning results in changes in the brain (vague) and must be accomplished by a combination of chemical and electrochemical actions taking place in physical structures, some of which are specialized to accomplish certain things.

I find the concept of fidelity useful in understanding learning. Fidelity could be defined as the exactness of fit between a model and the actual thing or process. One might think that the more exact the fit, the better. We have learned from research on the use of simulations in learning that this is not the case. With simulations, in the early stages of learning, too much realism (match) can overload, confuse, and in some cases produce unproductive emotions. For example, the training of pilots does not begin by putting a novice in the pilot’s seat and letting him/her explore. The experience would be overwhelming and certainly terrifying. Typically, training makes use of simulators that simplify the experience to a limited number of actions and possible reactions. Using various techniques and equipment, more realism and more experiences are added until the more experienced individual can deal with the emotions and complexity of full application.

I see a similarity in the usefulness of models of learning. Behaviorism offers little insight into mental behavior (I think its supporters leave that to the biological researchers) and is really more a model of instruction (the manipulation of external events). I regard behavioral models as useful for understanding and investigating incentives. I see biological models as eventually having the potential for high fidelity, but at present these models are mostly descriptive. At best, the future might provide a level of understanding that encourages practices by finding out how to produce a given combination of chemical and anatomical conditions. I see the cognitive models as most useful, but differing in level of fidelity, with information processing models offering a more detailed level of process clarity. Constructivism offers a broad perspective, but may or may not be sufficient to propose useful interventions. Especially when what seems like a productive process is not, analysis based on information processing models is often useful.

Models of learning, models of instruction

Comparisons of approaches generated from models of how learning happens are important. Approaches may differ in the external events they create, but any event allows “thinking” by learners. The relative effectiveness of different approaches is what matters. Putting down books, lectures, worksheets, life experiences, or tasks of one format or another misses the point that all offer some type of input that learners will attempt to process. The capacity to point to idiosyncratic cases of students who learned from this or that experience is not really justification for much of anything. It is the relative productivity of one approach in contrast to another, within defined requirements for a common set of learning circumstances (group size, time allowed, variations in past experiences, etc.), that provides the basis for application.

Arguing that a model of instruction based on one model of learning is superior seems pointless without data allowing those who must evaluate these claims to do so. Models offer different ways to think about learning. These can be helpful in the design of learning experiences, but ultimately, it is the response of learners exposed to these experiences that matters.

[Image purchased through the Noun Project]
