Thursday, 31 May 2012

Enhanced item rendering in QTIWorks

After another flurry of activity, I've just finished and released the 8th development snapshot of QTIWorks for folk to play around with:

https://www2.ph.ed.ac.uk/qtiworks

This iteration of work has focused mainly on the delivery of single assessment items to candidates, joining up various bits of pipework that control this process and adding in some new rendering features and options that we'll need later for test rendering (and are quite useful in their own right too).

I'm afraid that, for the time being, you'll still have to make do with playing around with the sample items that have already been loaded into the system. However, you'll probably be pleased to hear that the next work iteration will add in the first round of functionality for getting your own assessments into the system and doing fun things with them. (At last!) To make the wait a bit easier, I have gone through the existing samples and selected a reasonable set of options for how they should be delivered, which makes them much more fun to play with, as well as more illuminating to people who are less familiar with QTI.

Key changes in this release (1.0-DEV8)

  • Assessment item rendering now incorporates the following new features:
    1. Display of a model solution (via the QTI <correctResponse>; see the sketch at the end of this list).
    2. The ability to reset a session back to the state it was in immediately after the last run of template processing. This effectively clears all candidate input, but leaves randomly-generated values intact (also sketched below). The existing "reinit" functionality has been made a bit clearer.
    3. An explicit "closed" state, which is either entered explicitly by the candidate, or when an adaptive item becomes complete, or when the number of attempts hits your chosen value of maxAttempts. I think this will need a bit more work, as it was never implemented in QTIEngine or MathAssessEngine.
    4. A new "playback" feature lets the candidate step through every interaction they made with the item, which is a possible way of implementing the allowReview concept from test rendering.
  • A number of "knobs" are now available for controlling how you want to render a single item, including:
    • maxAttempts (as seen in tests)
    • author mode on/off (controls whether authoring hints are shown)
    • a simple prompt to be shown at the top of the question (similar to rubrics in tests, but simpler)
    • restrictions on what the candidate can do, including:
      • close a session explicitly when interacting
      • play a session back when closed
      • reinitialize a session when interacting or when closed
      • reset a session when interacting or when closed
      • see a model solution when interacting or when closed
      • see the result XML
      • see the item XML source
  • I have gone through all of the existing samples and set the above "knobs" to values that show each one in its best light. For example, some suit a very formative "try as many times as you like" approach with rich feedback, while others are more austere and are better delivered in a more rigid way. This should make the samples much more useful to people trying them out.
  • The debugging information shown in author mode has been improved: it now shows bad responses (e.g. a string submitted where a float is expected) and invalid responses (e.g. the wrong number of choices selected).
  • There's now a rich database structure underpinning all of this, which records everything the candidate does and the changing QTI state during this process. This is currently used to implement the "playback" functionality, and will prove invaluable for analysing result data when the system delivers real tests.
  • The HTTP calls that control the typical delivery of a "candidate session" via a browser are now as RESTful as you would pragmatically expect in this type of scenario. A more formal RESTful web service API would be trivial to build on top of this, but I'm going to hold off until someone actually needs it.
  • HTTP responses have been improved so that they always include a Content-Length header. Responses for content that doesn't change also send cache-friendly HTTP headers, such as ETag and Cache-Control.
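
For readers less familiar with QTI, here's a minimal sketch of an item that illustrates the first two rendering features listed above. The item content is invented purely for illustration, but the elements are standard QTI 2.1: the model solution display is driven by the item's <correctResponse>, while a "reset" keeps the values chosen during template processing (the random integer N below) and clears only the candidate's input:

    <assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
        identifier="sketch" title="Illustrative sketch only"
        adaptive="false" timeDependent="false">
      <!-- The "solution" rendering shows the value(s) declared here -->
      <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
        <correctResponse>
          <value>ChoiceA</value>
        </correctResponse>
      </responseDeclaration>
      <!-- Template variables are computed by template processing; a "reset"
           keeps their values, whereas a "reinit" re-runs this block -->
      <templateDeclaration identifier="N" cardinality="single" baseType="integer"/>
      <templateProcessing>
        <setTemplateValue identifier="N">
          <randomInteger min="1" max="10"/>
        </setTemplateValue>
      </templateProcessing>
      <itemBody>
        <choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="1">
          <prompt>Pick the correct answer.</prompt>
          <simpleChoice identifier="ChoiceA">Right</simpleChoice>
          <simpleChoice identifier="ChoiceB">Wrong</simpleChoice>
        </choiceInteraction>
      </itemBody>
      <responseProcessing template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
    </assessmentItem>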

Short term development roadmap

  • Next up is adding in the web-facing functionality for creating your own assessments in the system, uploading content into them, validating them, and trying them out (using the various "knobs" I listed above).
  • After that it's finally time to implement the entry and exit points for candidates who get sent to QTIWorks to do an assessment. This is where LTI will come in.
  • Then... tests. At last!
And relax.

Thursday, 3 May 2012

QTIWorks project area now on SourceForge

QTIWorks now has a presence on SourceForge:

https://sourceforge.net/projects/qtiworks/

It's fairly vestigial at the moment, so don't get too excited! If you're a geek, you'll quickly notice that there's no source code there yet. I'm presently developing QTIWorks within my own GitHub area, and plan to push things over to SourceForge once we get past a few more milestones. Until then, you can get the source code at:

https://github.com/davemckain/qtiworks

Please don't expect any API stability at this point. There's also no build documentation yet, which probably doesn't help...!

Wednesday, 2 May 2012

Showcasing QTI 2.1 in QTIWorks

As well as being a tool for managing, trying out and delivering your own QTI assessments, QTIWorks will also act as a showcase for QTI and the things it can do, much as the existing QTIEngine and MathAssessEngine already do.

With this in mind, I have started to feed Graham Smith's excellent collection of QTI examples into QTIWorks, and plan to continue this process over the next few months as the system takes shape.

These examples will be bundled into development snapshots of QTIWorks for you to play around with. So far, I have assimilated three sets of examples:
  • The IMS standard examples (and a few extra bits)
  • Examples demonstrating the MathAssess QTI extensions
  • A small set of example items from language testing (German & Russian)
You can play around with these at: https://www2.ph.ed.ac.uk/qtiworks

(Yes, it does look very spartan at the moment, but don't worry about that!)

This exercise is actually very useful for this project for a number of other reasons:
  1. It provides me with some sample data for running automated "integration tests" on the JQTI+ / QTIWorks software. (For example, I feed them all through the code that performs the various bits of QTI logic, such as reading in XML, validating the data models, writing out XML, running template processing, running response processing, etc. This is invaluable for finding and fixing bugs, and for making sure we can handle "real world" examples properly.)
  2. As well as being useful for integration testing, these examples also support so-called "regression testing", making sure that I don't inadvertently break things during development.
  3. These examples have been around for a few years now, so this process is a good way of doing some QA on them and making sure they're right up to date with the QTI specification. (One kind of fix is sketched below.)
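
To give a flavour of the kind of tidy-up involved: an item authored against the older QTI 2.0 schema first needs moving onto the QTI 2.1 namespace, along these lines (the fragment below is invented for illustration; only the root element is shown):

    <!-- Before: item declared against the QTI 2.0 schema -->
    <assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p0"
        identifier="example" title="Example"
        adaptive="false" timeDependent="false">

    <!-- After: the same item moved to QTI 2.1 (element-level
         changes may be needed too, depending on the item) -->
    <assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
        identifier="example" title="Example"
        adaptive="false" timeDependent="false">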
Enjoy!