This iteration of work has focused mainly on the delivery of single assessment items to candidates: joining up the various bits of pipework that control this process, and adding some new rendering features and options that we'll need later for test rendering (and which are quite useful in their own right).
I'm afraid you'll still have to make do with the sample items that have already been loaded into the system for the time being. However, you'll probably be pleased to hear that the next work iteration will add the first round of functionality for getting your own assessments into the system and doing fun things with them. (At last!) To make the wait a bit easier, I have gone through the existing samples and selected a sensible set of delivery options for each of them, which makes them much more fun to play with, and more illuminating for people who are less familiar with QTI.
Key changes in this release (1.0-DEV8)
- Assessment item rendering now incorporates the following new features:
  - Display of a model solution (via the QTI <correctResponse>).
  - The ability to reset a session back to the state it was in immediately after the last run of template processing. This effectively clears all candidate input but leaves randomly-chosen values intact. The existing "reinit" functionality has also been made a bit clearer.
  - An explicit "closed" state, which is entered either explicitly by the candidate, or when an adaptive item becomes complete, or when the number of attempts reaches your chosen value of maxAttempts. I think this will need a bit more work, as it was never implemented in QTIEngine or MathAssessEngine.
  - A new "playback" feature that lets the candidate step back through every interaction they made with the item. This is one possible way of implementing the allowReview concept from test rendering.
- A number of "knobs" are now available for controlling how a single item is rendered and delivered, including:
  - maxAttempts (as seen in tests)
  - author mode on/off (controls whether authoring hints are shown)
  - a simple prompt shown at the top of the question (similar to rubrics in tests, but simpler)
  - restrictions on what the candidate can do, including whether they may:
    - close a session explicitly while interacting
    - play a session back once it is closed
    - reinitialize a session while interacting or once closed
    - reset a session while interacting or once closed
    - see a model solution while interacting or once closed
    - see the result XML
    - see the item XML source
- I have gone through all of the existing samples and set the above "knobs" to values that show each one in its best light. For example, some samples suit a very formative "try as many times as you like" approach with rich feedback, while others are more austere and are better delivered in a more rigid way. This should make the samples much more useful for people trying them out.
- The authoring/debugging information has been improved: it now flags bad responses (e.g. a string submitted where a float was expected) and invalid responses (e.g. the wrong number of choices selected).
- There's now a rich database structure underpinning all of this, which records everything the candidate does and how the QTI state changes along the way. This currently drives the "playback" functionality, and will prove invaluable for analysing result data once the system delivers real tests.
- The HTTP calls that drive the typical delivery of a "candidate session" via a browser are now as RESTful as you'd pragmatically expect in this kind of scenario. A more formal RESTful web service API will be trivial to build from this, but I'm going to hold off until somebody actually needs it.
- HTTP responses now always include a Content-Length header. Responses for content that doesn't change also send cache-friendly HTTP headers such as ETag and Cache-Control.
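For anyone less familiar with QTI, the "model solution" mentioned in the rendering features above comes straight from the item's <correctResponse> declaration. A minimal illustrative fragment for a single-choice item (the identifiers here are made up) looks like this:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
  <correctResponse>
    <value>ChoiceA</value>
  </correctResponse>
</responseDeclaration>
```

Roughly speaking, the solution rendering substitutes these declared values in place of whatever the candidate has entered.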
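The "closed" state logic described above boils down to a small predicate. The sketch below is purely illustrative (the names are made up, not the real QTIWorks API); note that, as with QTI's itemSessionControl, a maxAttempts of 0 means unlimited attempts:

```java
// Illustrative sketch only; not the actual QTIWorks implementation.
enum SessionState { INTERACTING, CLOSED }

final class ItemSessionSketch {
    /**
     * Decides whether a session should transition to CLOSED after an attempt.
     *
     * @param candidateClosed  the candidate explicitly ended the session
     * @param adaptiveComplete an adaptive item has signalled completion
     * @param attemptCount     attempts made so far (including this one)
     * @param maxAttempts      configured limit; 0 means unlimited, as in QTI
     */
    static SessionState stateAfterAttempt(boolean candidateClosed,
                                          boolean adaptiveComplete,
                                          int attemptCount,
                                          int maxAttempts) {
        boolean attemptsExhausted = maxAttempts > 0 && attemptCount >= maxAttempts;
        return (candidateClosed || adaptiveComplete || attemptsExhausted)
                ? SessionState.CLOSED
                : SessionState.INTERACTING;
    }
}
```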
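Taken together, the "knobs" listed above amount to a per-delivery settings object. Here's a hypothetical sketch of what such an object might hold; the field names are illustrative and don't match the real QTIWorks classes:

```java
// Hypothetical settings holder; field names are illustrative only.
record ItemDeliverySettingsSketch(
        int maxAttempts,              // 0 = unlimited, as in QTI
        boolean authorMode,           // show authoring hints?
        String prompt,                // optional text shown above the question
        boolean allowClose,           // may end the session explicitly
        boolean allowPlayback,        // may replay interactions once closed
        boolean allowReinitWhenInteracting,
        boolean allowReinitWhenClosed,
        boolean allowResetWhenInteracting,
        boolean allowResetWhenClosed,
        boolean allowSolutionWhenInteracting,
        boolean allowSolutionWhenClosed,
        boolean allowResultView,      // may see the result XML
        boolean allowSourceView       // may see the item XML source
) {}
```

A formative sample might allow unlimited attempts with the solution always visible; a more austere one might allow a single attempt with everything else switched off.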
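The caching behaviour in the last bullet comes down to attaching a strong ETag to unchanging content and answering conditional requests with 304 Not Modified. A minimal sketch of that logic, independent of any web framework (names are assumed, not the actual QTIWorks code):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch of conditional-GET handling; not the real QTIWorks code.
final class CachingSketch {
    /** Strong ETag: the quoted hex SHA-1 digest of the response body. */
    static String etagFor(byte[] body) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(body);
            StringBuilder hex = new StringBuilder("\"");
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.append('"').toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-1 is a required JDK algorithm", e);
        }
    }

    /** Returns 304 Not Modified when the client's If-None-Match matches, else 200. */
    static int statusFor(String ifNoneMatchHeader, String currentEtag) {
        return currentEtag.equals(ifNoneMatchHeader) ? 304 : 200;
    }
}
```

Content-Length is simpler still: buffer the rendered response and set the header to the buffer's size before writing it out.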
Short term development roadmap
- Next up is the web-facing functionality for getting your own assessments into the system: creating them, uploading content into them, validating them, and trying them out (using the various "knobs" listed above).
- After that it's finally time to implement the entry and exit points for candidates who get sent to QTIWorks to do an assessment. This is where LTI will come in.
- Then... tests. At last!