Rethinking Assessment

How does the ability to track stock prices in real time affect the incentives and behaviors of companies?  The common fixation on short-term profits suggests the answer: in a narrow and significant way.

Services like SchoolMAX and Edulink allow students and parents to follow the ups and downs of grades like stock tickers.  On the one hand, these portals make it easier to review grades, download homework assignments, and chat with teachers.  On the other hand, they raise the question of whether we are tracking and focusing on the right things, or simply perpetuating outdated indicators of achievement.

In the adult world, we aren’t graded on each email we write.  What matters is whether we (usually as part of a team) accomplished the end goal: did we sell the product, heal the patient, win the case, build the application, complete the compelling creative work?  The changing nature of work and the skills it requires has created the need to rethink traditional assessment: what, aside from attendance, completion of assignments, and exam performance, can be captured and tracked to motivate and improve learning in progress?

There is energy around the idea that games do not separate learning and assessment, offering the potential for “just in time,” constant feedback on one’s learning curve.  Professor James Gee has a knack for explaining in layman’s terms the potential for games as part of the solution: understanding “knowledge not just as facts, but knowledge as something you produce,” and transforming assessment from a stick into a carrot (watch a video interview here).

A more academic summary of “what we know about assessment in games” is available from UCLA CRESST here (with a helpful list of research papers in its References section).  Baker and Delacruz argue that games must be integrated into curriculum and training at the outset of design, rather than added on afterward, so that assessment is embedded in the transactions of the game and the underlying game engine.  Typical current approaches to game-based assessment, such as scoring mechanisms (e.g. number of obstacles conquered against time) and wrap-around assessments (added tasks or questions), are compared with embedded assessments that could use process data to help explain learning outcomes (e.g. student online clickstream behavior to support inferences about student understanding).  The paper also emphasizes the difference between measures of motivation and measures of cognitive or procedural skills.
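To make the embedded-assessment idea concrete, here is a minimal sketch in Python of how a game engine might turn process data (a clickstream of player events) into a rough inference about understanding.  The event names, rubric, and thresholds are hypothetical illustrations, not anything specified in the CRESST work:

```python
# Hypothetical sketch: the game engine emits process events (a clickstream),
# and the assessor turns them into a rough inference about understanding,
# rather than relying on a wrap-around quiz or a simple obstacle score.
from dataclasses import dataclass
from typing import List


@dataclass
class GameEvent:
    student_id: str
    puzzle_id: str
    action: str        # e.g. "attempt", "hint_requested", "solved"
    timestamp: float   # seconds since the session started


def infer_understanding(events: List[GameEvent]) -> str:
    """Classify a student's grasp of one puzzle from their event stream.

    The thresholds below are illustrative placeholders; a real embedded
    assessment would calibrate them against measured learning outcomes.
    """
    attempts = sum(1 for e in events if e.action == "attempt")
    hints = sum(1 for e in events if e.action == "hint_requested")
    solved = any(e.action == "solved" for e in events)

    if solved and attempts <= 2 and hints == 0:
        return "fluent"          # quick, unaided success
    if solved:
        return "developing"      # success after hints or retries
    return "needs_support"       # no success in this session


# Example usage with a fabricated event stream for one student and puzzle.
session = [
    GameEvent("s01", "p3", "attempt", 12.0),
    GameEvent("s01", "p3", "hint_requested", 30.5),
    GameEvent("s01", "p3", "attempt", 41.2),
    GameEvent("s01", "p3", "solved", 55.8),
]
print(infer_understanding(session))  # -> "developing"
```

The point of the sketch is the design choice: the evidence comes from how the student played, captured inside the game loop itself, rather than from questions bolted on afterward.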

Finally, the development of Teachable Agents (tools for learning by teaching) will be interesting to watch over the next few years.  Some research papers are available at The Teachable Agents Group.

