Low use of purchased software?

A recent analysis of K-12 software usage, “Towards understanding app effectiveness and cost” by R. Baker and S. Gowda, paints a dismal picture of schools' actual commitment to purchased resources. Using data collected by the BrightBytes Learning outcomes module, the researchers were able to determine the amount and frequency of use for free and purchased apps. They were also able to track gains in some areas using standardized tests administered near the beginning and end of the study. The researchers offer a complete project description in addition to the summary provided online. I have requested the complete study, but given the holiday season the researchers may not yet have seen my request.

Highlights from this study have been circulating online, and those I have read feature some attention-grabbing data. The median activation rate for licenses sold was 30%, and only 3% of apps reached the level of activity defined as extensive use (10 hours or more over the duration of the study). The data were collected from 58 districts comprising 845 schools, so the sample was substantial.

Some of the most frequently used apps are listed below. The statistic used for comparison is the median number of days on which an app was used. I cannot speak to the exact methodology, but I assume these data require that a school make a specific app available in a given classroom. What the overview does not provide is the number of schools/classrooms, out of all possible schools/classrooms, that installed a given app, so comparisons across apps are difficult to interpret. Some apps on this list are free and serve a general function (e.g., Google Drive); others target a specific content area and require payment.

Cengage Learning DigitalAce – 31 days
Sherpath – 19
Spanish Lessons – 13
Big Universe – 10
Zern Math – 10
Tenmarks Math – 9
Carnegie Learning – 8
Google Drive – 8

Some apps showed significant correlations between amount of use and standardized test gains. Some did not.

The finding that seems to be generating the most reaction is the low level of overall use for these apps.

As I suggested, these data have drawn reactions from several bloggers.

Doug Johnson says that the district he represents also uses the BrightBytes tool to track software usage. He says that the level of usage within his district would be far higher, and that such data were collected in the first place to determine what software to keep and what to replace from year to year.

Thomas Arnett offered the reaction I have seen cited most often. He interpreted the results using his model of teacher “jobs”. He identifies three of these jobs and explains that teachers will make meaningful use of software only if it helps them perform what they see as one of these jobs. Arnett also speculates that the very low “extensive” use of nearly all apps reflects teachers making some, but minimal, use of apps as a way to meet administrator expectations.

I have my own opinion as to what is going on in these data. I have written on several occasions about the pricing models associated with many applications, and I wonder if some of what many might see as unexpectedly low activity is related to those models. Apps are often offered as a free “crippled” version, at a price per class, and at a price per school. Purchasing at the school level may seem the easiest to implement and the most cost-effective, but it may provide access to many educators who are not really committed to use. It would be interesting to know what the expectations of those making the purchases were. This pricing approach might create a situation in which educators had access but did not use an app and were not really required or expected to do so. It also seems a different usage picture would emerge if educators were given a budget and allowed to use it to purchase class-level access to apps.
