Arguing against big data: a convenient way to argue for an increased bottom line?

It seems to be Apple and Microsoft against Google. The “concern” expressed by this unlikely pairing is that Google collects personal data and, as I understand the concerns, shares those data for pay. Google, in response, argues it uses the information collected as a way to improve the search experience.

While this sounds like a disagreement over principle, the positions taken align with business interests. Google makes money from advertising. Apple makes money from hardware. Microsoft makes money from software. The income foci of these companies have evolved, and this may have something to do with the positions now taken on privacy. Google offers software and services for free partly to increase use of the web, which lets it serve more ads and collect more data. Google also offers services that decrease the importance of hardware; Chrome hurts the hardware sales of Apple.

What I think is important under these circumstances is clear public understanding of what data are being collected, how they are being used, and what the motives of the players involved are. It turns out we are all players as well, because blocking ads while still accepting services (a consequence of modifying our browsers) involves personal decisions about what constitutes ethical behavior.

Into this business struggle, and the way it has been spun, comes a recent “study” from Tim Wu and colleagues. Evaluation of the study is complicated by the funding source – Yelp. Yelp has long argued its results should appear higher in Google searches and suggests Google elevates the results of Google services instead. Clearly, you or I could go directly to Yelp when searching for local information, ignoring Google entirely (this is what I do when searching for restaurants), but Yelp wants more.

I have a very small stake in Google ads (making probably $3-4 a year), but I am more interested in the research methodology employed in this case. My own background as an educational researcher involved the reading and evaluation of many research studies. Experience as an educational researcher is relevant here because many educational studies are conducted in the field rather than the laboratory and this work does not allow the tight controls required for simple interpretation. We are used to evaluating “methods” and the capacity of methods to rule out alternative explanations. Sometimes, multiple interpretations are possible and it is important to recognize these cases.

Take a look at the “methods” section from the study linked above. It is a little difficult to follow, but it seems the study contrasts two sets of search results.

The Method and the data:
The method involved a comparison of “search results” consisting of either a) Google organic search results alone or b) Google organic search results plus Google local “OneBox” links (seven links for local services with additional information provided by Google). The “concern” here is that condition “b” contains results that benefit Google.

The results showed that condition b generated fewer clicks.
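To make the nature of the comparison concrete, here is a minimal sketch of how one might test whether two versions of a results page differ in click-through rate. All of the counts are invented for the illustration; the study’s actual numbers are not reproduced here.

```python
# A minimal sketch (with invented counts) of a two-proportion comparison of
# click-through rates for two versions of a search results page.
from math import sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z statistic for comparing click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se

# Condition a: organic results only; condition b: organic results plus the OneBox.
z = two_proportion_z(clicks_a=450, views_a=1000, clicks_b=380, views_b=1000)
print(f"z = {z:.2f}")  # |z| above roughly 1.96 suggests the difference is unlikely to be chance
```

Of course, even a reliable difference in clicks says nothing by itself about which version users found more useful – that interpretive question is taken up below.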

Here is a local search showing both the OneBox results (red box) and organic results from a Minneapolis search I conducted for pizza. What you see is what I could see on my Air; additional content could be reached by scrolling.

[Screenshot: Google results page for the Minneapolis pizza search, with the OneBox outlined in red above the organic results]

The conclusion:

The results demonstrate that consumers vastly prefer the second version of universal search. Stated differently, consumers prefer, in effect, competitive results, as scored by Google’s own search engine, than results chosen by Google. This leads to the conclusion that Google is degrading its own search results by excluding its competitors at the expense of its users. The fact that Google’s own algorithm would provide better results suggests that Google is making a strategic choice to display their own content, rather than choosing results that consumers would prefer.

Issues I see:

The limited range of searches in the study. While relevant to the Yelp complaint – Yelp’s business model focuses on local services – do the findings generalize to other types of search?

What does the difference in click frequency mean? Does the difference indicate, as the conclusion claims, that the search results provide an inferior experience for the user? Are there other interpretations? For example, Google’s “I’m feeling lucky” button and the general logic of Google search suggest that needing many clicks indicates an inferior algorithm. Is it possible that the position of the OneBox, rather than the information returned, is the issue? That might be a bias, but the quality of the organic search would not be the issue.

How would this method feed into resolution of the larger question (is the collection of personal information to be avoided)? This connection is unclear to me. Google could base search on data points that are not personal (PageRank). A comparison of search results based on PageRank alone vs. PageRank plus personal search history would be more useful, but that is not what we have here.

How would you conduct a study to evaluate the “quality” concern?

Wired

Search Engine Land

Fortune

Time

Rationale for a copper bracelet

I have been concerned with an issue for some time and have been attempting to generate an analogy I might use to communicate this issue to educators. Here is a scenario I would like you to consider.

Assume you are a patient and you have wrist pain that you suspect is an indication of arthritis. You read an ad in Golf Digest for a copper bracelet ($29) that the providers claim offers relief from the pain of arthritis. Not knowing whether this claim is valid, you decide to call your physician for advice. You trust your physician, who is about your age, and you have noticed that he wears a bracelet that you think is probably copper. What would you expect the physician to use as the basis for his response to your inquiry? Would you expect him to be aware of the research literature on treatments for arthritis and arthritis pain? Would you assume that, if he noted he wore a bracelet himself, he did so because the bracelet has scientifically proven value? How about a personal belief that “at least it can’t do any harm and I seem to feel better”?

As a retired educational psychologist and educational technologist, I spend considerable time writing to offer advice to practicing and future educators. I certainly write to influence their understanding of technology and instruction, but I attempt to make my ultimate goal the impact their practice has on their students. I would describe this as being an advocate for their students.

I spent most of the past 40 years engaged in a similar role both as a professor and as a researcher. I no longer consider myself a researcher, but the values that guided the initial commitment to research persist. I believe that understanding learning is best accomplished through the various methods of research. Certainly, practitioners and those who offer advice to practitioners do not have to be researchers, but they at least should rely on the best scientific thinking about practice.

I spend a great deal of time reading the popular books and online content intended to inform educator practice. I attend several conferences a year focused on the role of technology in supporting learning. I must say that I am discouraged by the disconnect between these two areas of my experience. I listen to the claims that it is time for educational reform and new ways of doing things. I recognize that older folks are sometimes described as saying “new ideas will not work” and accused of rejecting change just because they are unwilling to change. I certainly do not want to be branded as out of touch when I do not think I am – retired or not. Given my core philosophy that claims should be justified by scientific findings, I object to having any research-based position I take rejected out of hand simply because I argue new approaches lack demonstrated value. I say this because I believe it to be true, and I would welcome any data-supported contradiction others can bring to my attention.

A couple of observations follow. Please do not reject them without careful consideration unless you can verify that they are inaccurate.

1) Many concepts advanced as significant reforms and new ideas are historically not actually new. Student-centered learning, student choice, and project-based experiences, for example, have long histories. Those of us who went through teacher training programs in the 1960s encountered these ideas.

Mayer has written about this issue and, in frustration, calls it the “three strikes” problem. He asks how it is that “new” ideas that are actually old ideas keep resurfacing even though they proved largely unsuccessful in previous iterations.

It is almost as if the idea sounds good and advocates forget or never knew the previous history of these practices.

2) I am willing to say that some practices that seem largely unsuccessful as commonly applied (problem-based learning, project-based learning) have been successful in some carefully researched cases. Hence, I can advocate for such practices and reference what I believe to be quality examples. At the same time, relying on research, I can suggest that common implementations of these practices are less effective than what most of us would describe as traditional practices. It bothers me when advocates promote these approaches without acknowledging what I would argue is the complexity of the practices they promote. I see few references to the generally sub-par performance and no effort to contrast those many studies with the successful examples. It is almost as if the approach seeks not to confuse practitioners with the facts. You cannot really simplify complexity if hidden in that complexity is the difference between success and failure.

If we truly care for the collective body of those we call students, what should we regard as the basis for practice? Being open minded is not a function of age; it is a willingness to consider both sides of an issue based on the best evidence available. Are you one of those interested in investing in a copper bracelet?


Prof – No one is reading you

“No one is reading you” was the title of a recent article describing scholarly publications. My brief summary: the article claimed that most publications receive little attention even though some might offer useful information.

The article reminds me of a story told by my wife’s sister, who claims to have checked my dissertation out from the university library. At the time, books had a little card in front that was marked with the “due date,” and she said she was concerned there were no return dates on my masterpiece. I guess it makes a good story at family gatherings. I admit that I have never checked out a thesis or dissertation either. I did read many before students finished their work; I thought that was enough.

A couple of quotes from the linked article will give you the flavor:

Even debates among scholars do not seem to function properly. Up to 1.5 million peer-reviewed articles are published annually. However, many are ignored even within scientific communities – 82 per cent of articles published in humanities are not even cited once. No one ever refers to 32 per cent of the peer-reviewed articles in the social and 27 per cent in the natural sciences.

If a paper is cited, this does not imply it has actually been read. According to one estimate, only 20 per cent of papers cited have actually been read. We estimate that an average paper in a peer-reviewed journal is read completely by no more than 10 people. Hence, impacts of most peer-reviewed publications even within the scientific community are minuscule.

Note that the examples in this article do not include educational research. I also could not determine the source for the data provided, which prevented me from understanding the scope and method of the research. Citation frequency is easy enough to check. With access to Google Scholar you can now look up citation counts, and most of us are vain enough to know which of our articles have drawn the most attention. I do agree that many cited articles are not read. I think people sometimes cite what other researchers cite without actually reading the publication beyond the abstract.
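As a rough sketch of the kind of tally behind the quoted percentages, here is what checking your own record might look like, assuming you copy the citation counts by hand from a Google Scholar profile (the counts below are invented):

```python
# Citation counts copied by hand from a Google Scholar profile (invented here).
citation_counts = [0, 0, 3, 12, 0, 1, 45, 0, 7, 0]

uncited = sum(1 for count in citation_counts if count == 0)
share = uncited / len(citation_counts)
print(f"{uncited} of {len(citation_counts)} publications uncited ({share:.0%})")
```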

If few scholars read each other’s work (I think this statement is a serious exaggeration, but I have only my own experience to go on), the chance that such work influences practice seems slim. I am especially concerned about this issue as it applies to education. Clearly, from time to time, “trends” move through the educational community. These ideas must come from somewhere, and I would hope innovations had some basis in careful scholarship. My concern is that this is not the case.

I am reading a book by educational historian Jack Schneider – From the Ivory Tower to the Classroom – that addresses the transfer issue in education. Based on his analysis of several specific ideas, Schneider argues that there are key characteristics of ideas that make the transition from research to practice:

  1. Perceived significance – research offers a big picture approach rather than a piece of the puzzle.
  2. Philosophical compatibility – fits with the professional identity and values of teachers
  3. Occupational realism – fit within the professional constraints within which teachers operate – e.g., time
  4. Transportability – easy to communicate

Understand that the author is not attempting to identify the characteristics of the most meaningful research or of the ideas with the greatest potential. He is attempting to identify ideas that seem to have been accepted or considered rather than ignored. His argument proceeds through a kind of case study approach – here are some ideas that have been accepted, and here are some ideas that have been ignored. The approach assumes all are credible ideas and bases its arguments on an analysis of the factors that determine acceptance.

In a later post, I will provide a follow-up on two of his cases that particularly interest me – projects (accepted) and generative processing (ignored). Much of my writing on technology stems from a generative processing perspective. I see “writing to learn” as an extension of the generative position, and I have morphed “writing to learn” into “authoring to learn” as a way to justify many of the tactics I propose.

I think this is a very important issue. I do not expect practicing educators to read basic research, but I do wish they accepted the value of research and read a little more of the secondary literature based on it. Now retired, I no longer consider myself an active researcher, but I hope to spend some time reading the publications and writing to offer my perspective.

The lure of the shiny and the new


I attend several ed tech conferences a year and after a couple of decades I have come to some conclusions about the ed conference game.

Educators are attracted to shiny new things. I fall into this group. By new things, I actually mean new gadgets, new instructional philosophies, and new instructional strategies. Professional presenters develop an ever-changing schtick – making, coding, storytelling – in order to satisfy this demand.

I also attend at least one educational research conference a year. Certainly, there are trends in the topics that seem most dominant, but things move more slowly from year to year and ideas are openly challenged. The community as a whole looks at proposals and asks do they work, what does work mean, why does it work, what are the boundary conditions that define effectiveness, etc.

Don’t interpret this as criticism of any given shiny new thing, but learning is not magic. What are the first principles? How do learning experiences impact cognition? How do experiences modify motivation? The history of educational trends argues against limiting our perspective to whatever tends to impress us at any given point in time. The transition from one shiny thing to the next comes with overhead: it takes time and money to move from one thing to the next. There will always be those who both feed and take advantage of this constant overhead. They show educators the new things, explain how the new things are applied, and accept payment in return. These individuals should not be our only educational heroes.

I wish conferences offered some combination of proposed practical tactics and critical examination. I wonder how this could be accomplished. What if one of the keynotes was given by a relevant educational researcher? Researchers and practitioners now seem not only to work in different worlds, but to seek out conferences that further distance the two communities.

Something concrete enough to discuss

I find that future educators often glaze over when I mention research. When faced with this reaction, I often propose that researchers face a challenge many “experts” on educational topics are not required to address. Researchers must be very concrete about the topics they study. I did say concrete. The notion that academics are abstract is a characterization that, when true, applies to the explanations they offer, not to the techniques they use. Unlike other experts who can offer generalizations, researchers must conduct experiments. They are doers, not talkers. They have to define their interests in terms of specific situations and actions. I describe this as “operationalization,” which may not be a familiar term. It pretty much means researchers are required to explain how a hypothesis is turned into an investigation. They cannot hide behind vague terms – motivation, engagement, creativity, etc.

The skills of critical thinking and literacy make a good example. Both sound like areas in which we want to encourage achievement, but what do such skills look like in a specific setting? My own interest is in these skills as applied in online learning. Here is an example of a study developed by a researcher focused on these skills. I encourage educators to consider the “Methods” section.

The great thing about the specificity of research is that the methodology offers something concrete to discuss and debate.

PBL Challenge

It seems we point to the findings of science selectively. Most folks I know profess amazement that some ignore the overwhelming scientific evidence on climate change or evolution. They are concerned when the “scientific perspective” is not the basis for what students experience if these topics are considered.

Why, then, when selecting learning activities, do we act as if our best evidence should not guide practice? You will have to trust me on this (unless someone really wants to review my reference list), but direct instruction consistently results in better academic performance than project-based learning, problem-based learning, discovery learning, etc. How do PBL advocates rationalize this reality? I seriously want to know, because I find the PBL philosophy appealing as well. I just personally struggle with ignoring what the research findings suggest.

I try arguing with myself, seeking answers. I know many of the proposals: direct instruction works only when the dependent variable is simple, factual knowledge; direct instruction turns learners off because it is boring; direct instruction results in learning that fails when it comes to application or flexibility. However, whatever the counterargument, the position is only a hypothesis unless tested. Show me the data (or the money if you prefer). I am waiting to be convinced.

I am aware of what I consider successful PBL research. Success is possible. Here is what I think until shown otherwise. I am guessing that successful PBL takes far more skill to implement with classroom groups than direct instruction, and most PBL attempts probably do not meet an acceptable standard. I know this sounds harsh, but what is the goal here? In general, I think many students are simply lost or overwhelmed when self-directed. I do not think a substantial proportion of students are any more motivated by many PBL tasks. The outcome data simply do not support the argument that common implementations of PBL are as productive as more traditional methods.

So, at this point in my career, I no longer have the opportunity to conduct research studies. I do have great interest in this topic and continue to search the journals for interesting studies. Learning experiences should not be promoted on the basis of talk or novelty.

Web content evaluation – data for a change

I sometimes complain that pundits and keynoters receive too much blog attention and researchers too little. Since the researchers I follow seldom seem to blog, perhaps I should post in support of their activity.

So much attention has been focused on the quality of online resources, and on the skills necessary to critically evaluate these resources as a literacy component of 21st century functioning, that one might think this area would have generated considerable research activity. There seem to be plenty of recommendations for practice, but little formal assessment of skill or of the success of interventions.

The recent AERJ article by Wiley and colleagues (citation at the end of this post) describes an interesting study that I feel evaluates commonly suggested practices for evaluating web sites (e.g., identify the page author and possible motive for offering the information) in terms of both whether students (college students in this case) learn to apply such skills and whether the development of such skills influences how students then go about completing an online inquiry task. I thought the procedure used in the study was creative – basically, offer students a fabricated Google results page based on a given search phrase and have participants evaluate the various links. Social psychologists and other researchers often employ this kind of deception in their research. The research demonstrated that more specific guidance and a more active evaluation task resulted in improved performance on a second site evaluation task AND the use of higher quality information in an inquiry task.

This study needs to be replicated with younger learners.

BTW – the methodology is similar (evaluate a set of sites addressing a given topic) to that proposed on the Beck “Good, bad and ugly” site.

Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060-1106.
