What next?

I have been enjoying “end of” lists. We get a double dose this year – end of the year, end of the decade. Albums of the decade. Best apps of 2009. Best apps of 2009 for educators. I have purchased some music, but no apps.

It is also time for the brave among us to make predictions. I read Warlick’s post today. He found it easier to raise questions than to make predictions, but he did come up with a few. He predicts the rise of augmented reality and the end of high stakes testing.

Warlick does raise the Apple tablet as a possible wild card for the coming year. I have hopes for some device that will allow information consumption to be combined with the potential for information manipulation and production. Existing handhelds are either too small (iPod) to allow for efficient content production or are primarily intended for consumption only (e.g., Kindle). I think commercial educational resources suited to future interactive tablets will take on a format that includes a provided information framework, information options that allow for instructor and student preferences, and opportunities for students to contribute and to react to the contributions of their colleagues.



Knowing when to give it up

I hate thinking about “what I used to be able to do”.

I used to be able to dunk. Now, I have trouble getting out of a chair.

I used to be able to run for miles (more like a fast trot). Now, I pace myself walking up multiple flights of stairs.

So, the physical capabilities are slipping a bit.

Age is supposed to bring wisdom (can’t remember which, but fluid or crystallized intelligence takes quite a while to drop off).

I used to be able to run my own servers (Apache, PHP, MySQL, etc.) and thought it was great fun. I think I learned a lot doing so. Even if my time was not spent as efficiently as that of my colleagues who either let the pros do it or who did not bother to explore ahead of the pack, I still think it is worth understanding much of how technology does what it does. I sense this capacity beginning to slip as well. So many new concepts and so little time to explore. I hate having to accept it all as magic.

Today’s challenge is the following:

Fatal error: out of dynamic memory in yy_create_buffer() in /Library/WebServer/Documents/wiki/index.php on line 30

I am trying to set up MediaWiki to accommodate an activity I have in mind for my graduate educational psychology class, and I get this error intermittently. I hate intermittent errors. They make no sense. Things should work or not work. I doubt the students I expect to use this wiki will accept “works most of the time”.

I have started to transition from operating my own servers to purchasing space on a server. Far less expensive if my time is worth anything. Select an application and it is installed automatically. No fooling around for a weekend to see if you can get it to run. Where is the fun?

Still, if others depend on you, maybe you must take the easy way out.

I’ll give it a few more days.


A Wonderful Life

Cindy and I are spending the first part of our holiday in Duluth. We had originally intended to go to Iowa and then on to a cabin by a lake in Wisconsin to spend the holiday with the majority of our family. The storm changed that. We headed to Duluth to get closer to Wisconsin, and now it looks like we will be staying here a little longer than anticipated. We nearly have the place to ourselves.

I opened my present from Cindy a little early (a 7D) and we have been taking some pictures. A new camera and a good Internet connection. What a wonderful life.

So, I have had some time to explore, and I found a new server application hosted on Google Code (flogr) that offers a new way to display Flickr pictures. Code on a Google service supporting Yahoo!’s Flickr. I uploaded the files to my server and was able to get the application running (take a look). A hint – Previous and Recent tabs appear near the top of photos when moused over.
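For anyone curious about what a gallery like flogr does behind the scenes, here is a rough sketch of the kind of Flickr API request involved. This is not flogr’s actual code, just an illustration in Python, and the API key and user id are placeholders you would replace with your own.

# Minimal sketch: fetch a user's recent public Flickr photos via the REST API.
# The API key and user id are placeholders, not real credentials.
import requests

API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder
USER_ID = "12345678@N00"          # placeholder Flickr user id

resp = requests.get(
    "https://api.flickr.com/services/rest/",
    params={
        "method": "flickr.people.getPublicPhotos",
        "api_key": API_KEY,
        "user_id": USER_ID,
        "per_page": 10,
        "format": "json",
        "nojsoncallback": 1,
    },
)
resp.raise_for_status()

# Build a direct image URL for each photo and print it.
for p in resp.json()["photos"]["photo"]:
    url = f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}_m.jpg"
    print(p["title"], url)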

The 7D takes some great pictures even in the storm.

Happy holidays.


Final week and a comment on evaluation

So, it is final week and final week brings with it the intense focus on reading and grading that goes along with being a prof. I need a break.

I have been reading several books critical of the U.S. educational system, teachers, and how we are failing the business community, failing to prepare the younger generation to compete with the Chinese, Norwegians, or whomever, and I was starting to get depressed. I would like to think of myself as a serious scholar committed to preparing the next generation for their opportunity to make a living, have children, and think meaningful thoughts, but perhaps I have been deluding myself. By the way, I have started to dilute the depressing stuff with the book by Daniel Willingham, and I am now less concerned about my world view.

Did you ever notice how Andy Rooney begins many of his 60 Minutes segments by asking “did you ever notice”?

Did you ever notice how multiple choice tests are often named as the culprit in the sad state of education? I don’t mean MC tests as in high stakes testing and NCLB, but the claim that multiple choice tests push us to focus far too much on memorization (actually read Willingham and note that the importance of knowing stuff is not necessarily an endorsement of rote memorization). Just so this post does not get out of control and I can get back to grading, I will make two points.

1) MC test items are not equivalent to a focus on facts and knowing stuff (as if this were proven to be a bad thing).

One of my favorite things is to get into a discussion regarding cognitive skill hierarchies with someone (OK, so this may not be one of your favorite things). I love when someone attempts to educate me regarding such hierarchies because a) I can usually demonstrate that alternative evaluation devices such as essay exams are typically focused on factual recall, and b) yes, I have heard of Bloom’s taxonomy.

Regarding Bloom’s hierarchy – I happen to have one of his (with colleagues Hastings and Madaus) books on my shelf, and because I am old I have actually read what he wrote in the original. In fact, if you are concerned with cognitive skill hierarchies, you might explore this book and consider the examples provided. It appears Bloom felt multiple choice items were an acceptable way to exemplify the assessment of cognitive skills above the levels of knowledge and comprehension. Many MC items are offered to help the reader understand the differences among levels of the taxonomy. Hmm… who knew.

Point noted – what is possible is not necessarily what educators do. I agree. However, see my earlier comment regarding the types of skills that are tapped by the majority of essay items. Asking the student to list and explain the same three points you made in class is not higher level thinking just because the information has to be written in a blue book. I apologize for resorting to stereotypes – I know of no one who uses blue books.

2) Student reaction to different types of assessment and how instructors may adjust

I have been an administrator for more than a dozen years. I have been trying to think of any case in which a student came to me complaining about items on an instructor’s MC test. Honestly, I can think of no example. What I tend to field are complaints about the way papers, essay exams, and presentations have been evaluated. Despite rubrics, it seems there are concerns about fairness and consistency. Just what is it that I should do in such situations? Should I attempt to score the various products to see if there is some pattern of bias? I love the complaints about the evaluation of presentations. What I have are the marked rubric and the student’s claim. An impossible situation to evaluate.

My point? I am not sure. I am just noting that students are more likely to complain when instructors make the effort to do what some argue is the only acceptable thing.

BTW – I have a similar position on developing “higher level” MC items (which I do when I work with hundreds of students at a time). Pretty much by definition, you have to use items that involve information you did not provide when you attempt to evaluate something other than retention. I am puzzled by the complaint that sometimes results from this – “you did not tell us that ….”. I try to respond consistently – “you are correct, I did not tell you that …”. I DO NOT explain that “If I told you that, this would be a recall question and I am supposed to be assessing your ability to apply, think critically, etc.”. I actually do explain that this is my intent at the beginning of the course, but when students are complaining about how my approach is unfair after the fact I tend to just listen.

I am of the opinion that students would not feel cheated if you were to ask them some obscure fact. Why – you could show them where the obscure fact was in the book, and what could they say? That would be fair. Not particularly meaningful, but fair. There is some unfairness in asking questions that involve assumed background knowledge, awareness of the world, or awareness of what commonly happens in the work environment students are preparing for. This is not typically information provided in a course. I think of this as a sampling problem. I assume the general awareness that is necessary to respond to such “scenario” or application items evens out across items. I cannot deny that for any given item some students will be at a disadvantage. There are all kinds of sampling problems in education. What topics/skills did those three essay items that made up the final exam sample or miss? What skills did that oral presentation hit/miss?

So, assessment and the relationship of assessment to learning are complex issues and nothing at the surface level, in my opinion, guarantees success or failure. Much depends on the specifics of whatever assessment approach is taken.

Back to grading (essay exams)


Google Docs Backup

I started using Google Docs to back up documents I prepared on my desktop. I eventually found writing in Google Docs more convenient than writing on my own computers. I guess “computers” is the key here. I work from at least three machines daily, and it is just so easy to log in to one remote location and begin working.

Once I have invested several hours in a project, I begin to worry about backups. I trust Google with my files, but it is reassuring to have a second copy offsite (in this case, on my desktop hard drive). Google now lets you zip up to two gigabytes of content and download it in a single operation. Select the files (perhaps all of your files) and then the export option (available under more options). The process may take some time, but you can ask Google to notify you when the entire process is done.

I have prepared a simple demo if you need more information.
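If you would rather script this kind of offsite copy than use the export option by hand, here is a rough sketch against the current Google Drive API rather than the export feature described above. It is only an illustration: it assumes you already have OAuth credentials saved in a token.json file, and the folder name is a placeholder.

# Minimal sketch: export every native Google Docs file to a local folder.
# Assumes OAuth credentials already exist in token.json (placeholder name).
import pathlib
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")
drive = build("drive", "v3", credentials=creds)

backup_dir = pathlib.Path("docs_backup")   # placeholder output folder
backup_dir.mkdir(exist_ok=True)

# List documents, then export each one as OpenDocument text.
result = drive.files().list(
    q="mimeType='application/vnd.google-apps.document'",
    fields="files(id, name)",
    pageSize=100,
).execute()

for f in result.get("files", []):
    data = drive.files().export(
        fileId=f["id"],
        mimeType="application/vnd.oasis.opendocument.text",
    ).execute()
    (backup_dir / (f["name"] + ".odt")).write_bytes(data)
    print("backed up", f["name"])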


Late to the party

I have been thinking about the digital native / digital immigrant distinction and what about this dichotomy is assumed to be meaningful. As I understand the distinction, the difference is whether one has known “life without”. So if you have never known a world without Google, cell phones, or computers, you are a native. And, if you can remember punch cards or punch tape, coding in assembly, or waiting for your batch job to be run, you must be an immigrant. Actually, I am being a bit facetious. If you were alive and cognizant before the Internet, computers, cell phones, etc., you come to these devices and services unnaturally and are an immigrant.

It is kind of unclear whether this distinction is about positive or negative differences. Or perhaps the distinction proposes a combination of differences. For a distinction proposed as relevant to educators, how this difference is understood really matters. Unless we are assuming that all differences are positive, I reject the notion that educational practices MUST be adapted to suit the interests and aptitudes of natives. When a difference is negative, I think the appropriate response is to determine whether the capability can be developed or not and, if change is possible, to focus on improvement.

The simplistic distinction of “did you experience life without” misses many significant qualifications. If you have been there from the beginning, you have several important “context advantages”. You better understand how things work. Some think you do not have to know. For example, I do not have to understand fuel injection to drive a car. True enough. But, I do think having a broad background has advantages. I remember attempting to explain HTML tags to educators who seemed to be baffled by the notion of a “markup language”. I realized that my background offered a perspective they did not have. I did markup on a key punch machine when I wrote my dissertation. I was familiar with markup because I remembered word processing programs in which the markup would appear (bold, underline) as part of the text. Adding tags around text was not foreign in this new setting. It was part of using technology.

I think there is a more important aspect of context and that involves technology within culture. If one experiences the gradual integration of technology in various forms, one also can observe how we all have accommodated such tools into our lives. We have a sense of the inter-relationship between human behavior and technology. Often, we see trends that result in unanticipated negative consequences. I do not assume that sitting each evening staring at a big screen is the way life has to be. It does not seem rebellious to me that some young parents are concerned that their children need some alternatives. I understand that there are options. I understand such options as a native not as an immigrant. I do not have to learn about such alternatives to understand.

So, in keeping with this perspective, I offer some new phrases to describe the awareness of digital natives. If you have suggestions, I am collecting other descriptive phrases. My present list includes:

Late to the party
Better late than never
This is all magic to me.


Web content evaluation – data for a change

I sometimes complain that pundits and keynoters receive too much blog attention and researchers too little. Since the researchers I follow seldom seem to blog, perhaps I should post in support of their activity.

So much attention has been focused on the quality of online resources and the skills necessary to critically evaluate these resources as a component of 21st-century literacy that one might think this area would have generated considerable research activity. There seem to be plenty of recommendations for practice, but little formal assessment of skill or of the success of interventions.

The recent AERJ article by Wiley and colleagues (citation at end of post) describes an interesting study that evaluates commonly suggested practices for evaluating web sites (e.g., identify the page author and possible motive for offering the information), both in terms of whether students (college students in this case) learn to apply such skills and whether the development of such skills influences how students then go about completing an online inquiry task. I thought the procedure used in the study was creative – basically, offer students a fabricated Google results page based on a given search phrase and have participants evaluate the various links. Social psychologists and other researchers often employ this kind of deception in their research. The research demonstrated that more specific guidance and a more active evaluation task resulted in improved performance on a second site evaluation task AND the use of higher quality information in an inquiry task.

This study needs to be replicated with younger learners.

BTW – the methodology (evaluate a set of sites addressing a given topic) is similar to that proposed on the Beck “Good, bad and ugly” site.

Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060–1106.

