Tuesday, 26 June 2012

Invitation to Pre-Conference Workshop at CAA2012

You are invited to a pre-conference workshop before the CAA 2012 Conference http://caaconference.co.uk/ at the De Vere Grand Harbour Hotel, Southampton on Monday 9th July 2012, 10:00 – 16:00. This is the announcement that we are circulating:


We have recently been funded by JISC to disseminate the results of a number of recent projects on standards-based assessment through the QTI-PET project, and as part of this activity we are holding a workshop on the day before the CAA 2012 conference (http://caaconference.co.uk/), at the same venue.

The workshop will include introductions to some new tools being developed under the JISC funded projects QTIDI and Uniqurate:

  • A user-friendly editor called Uniqurate, which produces questions conforming to the Question and Test Interoperability specification, QTIv2.1,
  • A way of connecting popular VLEs to assessment delivery applications which display QTIv2.1 questions and tests – this connector itself conforms to the Learning Tools Interoperability specification, LTI,
  • A simple renderer, which can deliver basic QTIv2.1 questions and tests,
  • An updated version of our comprehensive renderer, which can deliver QTIv2.1 questions and tests and also has the capability to handle mathematical expressions.

There will be an opportunity to discuss participants' assessment needs, to look at the ways these might be addressed using the applications we have available, and to consider potential developments which could form part of future projects.

We shall also demonstrate the features of the QTI Support site, created under the QTI-IPS project to help users to get started with QTI. This collection of tools, content and documentation is still growing, and we expect to add more features, prompted by the needs of our partners in the projects who are adopting the tools in their teaching.

Participants in the workshop are most welcome to join us as informal partners in QTI-PET.

Places are limited, so please register to attend the workshop by emailing Sue Milne sue.milne@e-learning-services.org.uk with your details as soon as possible.

Friday, 8 June 2012

Intermediate Mode for Uniqurate

I am pleased to announce the first release of the Intermediate mode of editing for Uniqurate.

The idea of a "halfway-house" mode goes back towards the start of the project, and came about after we considered what we could do with content that was authored in some other way (e.g. in another editor, or by hand). The difficulty is that QTI is essentially a programming language for electronic assessment, and there is always more than one way to skin the proverbial cat. For example, there are many ways that a multiple choice question could be implemented in QTI - Uniqurate does it one way, but there are many, many others. It would be impossible to map every single permutation of QTI that might represent an MCQ onto the appropriate UQ question component. Thus, at an early point it was decided that any content that was not created in UQ would have to be restricted to the XML-based Expert Mode editor.
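To make the mapping problem concrete, here is a deliberately naive sketch (plain JDK DOM, nothing to do with Uniqurate's actual code) that only recognises one very common MCQ encoding, a single choiceInteraction element; anything encoded differently falls straight through, which is exactly why non-UQ content ends up in Expert mode. The file path is supplied on the command line.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.File;

// Naive MCQ "sniffer": spots one common QTI 2.1 encoding of a multiple choice
// question (exactly one choiceInteraction). Items expressing the same question
// via other interactions or custom response processing are not recognised.
public class McqSniffer {
    public static void main(String[] args) throws Exception {
        Document item = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0])); // path to a QTI item XML file
        int interactions = item.getElementsByTagName("choiceInteraction").getLength();
        if (interactions == 1) {
            System.out.println("Might map onto a multiple choice component.");
        } else {
            System.out.println("Cannot safely map this item onto an MCQ component ("
                    + interactions + " choiceInteraction elements found).");
        }
    }
}
```

Multiply that fragility by every question type and every authoring tool out there, and the scale of the round-tripping problem becomes clear.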

Some time ago, Wilbert Kraan suggested a layer on top of the Expert mode that would hide certain aspects of the QTI XML, and supplement what was left with a few additional aids. Ultimately you'd still be editing the QTI directly, but it wouldn't seem so "frightening". We took to calling that the "halfway-house" mode.

With the launch of the QTI-PET project and the need to provide a means of adding new context to existing content, this became even more important. We've presented a number of papers and demos on this theme. The tl;dr version is that we've got lots of QTI content, but much of it is written from a generic point of view, and is too dry to be truly engaging. Our colleagues at (say) Harper Adams could use much of it, but their students would react much better if it had a few subject-specific hooks added, just to give it an appropriate context.

Hence the "halfway-house", or what we're now calling Intermediate mode. If you switch to Expert mode and load a question, you'll notice a little icon at the top right of the screen. Click this, and all of the XML will be hidden apart from the human-readable parts.


The overall "tree" of the question is preserved and delimited by the dotted red lines - so, in the example above where a multiple choice question is being edited in Intermediate mode, you can see where the distractors' boundaries are with respect to the question body itself.

The rich-text editor is also brought over from the "friendly" mode editor, so that you can modify the style as well as the text, along with any maths components (you can add new maths components, too).
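To give a rough feel for the "hide the XML, keep the human-readable parts" idea, here is a concept sketch (not how Uniqurate actually implements Intermediate mode) that walks the itemBody of a QTI 2.1 item and prints each non-blank text node along with the element it lives in, so the tree structure - question body versus individual simpleChoice distractors - is still visible.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.File;

// Concept sketch only: surface the human-readable text of a QTI item while
// keeping track of where each piece sits in the element tree.
public class HumanReadableBits {

    static void printTextNodes(Node node) {
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.TEXT_NODE && !child.getTextContent().isBlank()) {
                System.out.printf("[%s] %s%n",
                        child.getParentNode().getNodeName(),  // e.g. p, prompt, simpleChoice
                        child.getTextContent().trim());
            } else if (child.getNodeType() == Node.ELEMENT_NODE) {
                printTextNodes(child);                        // recurse in document order
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0])); // path to a QTI item XML file
        Element body = (Element) doc.getElementsByTagName("itemBody").item(0);
        if (body != null) {
            printTextNodes(body);
        }
    }
}
```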

Intermediate mode has been tested in the big three browsers - i.e. relatively recent Firefox and Chrome, and Internet Explorer 8 (don't get me started on the latter - had it not been for IE you could have had this yesterday! Is anyone still using IE?!).

As always, the latest version of Uniqurate can be found at


which will take you into "friendly" mode, so you will need to switch to Expert mode to find this new feature. This URL


will take you straight into expert mode.

Please give me as much feedback as you can! Reports on bugs, problems etc. are always "welcome" :) but of particular interest is the user experience. I am not convinced that a little button in Expert mode is the best place for Intermediate mode, and would welcome suggestions on where and how to place it.

Thursday, 31 May 2012

Enhanced item rendering in QTIWorks

After another flurry of activity, I've just finished and released the 8th development snapshot of QTIWorks for folk to play around with:

https://www2.ph.ed.ac.uk/qtiworks

This iteration of work has focused mainly on the delivery of single assessment items to candidates, joining up various bits of pipework that control this process and adding in some new rendering features and options that we'll need later for test rendering (and are quite useful in their own right too).

I'm afraid that, for the time being, you'll still have to make do with the sample items that have already been loaded into the system. However, you'll probably be pleased to hear that the next work iteration will add in the first round of functionality for getting your own assessments into the system and doing fun things with them. (At last!) To make the wait a bit easier, I have gone through the existing samples and selected a reasonable set of options for how they should be delivered, which makes them much more fun to play with, as well as more illuminating to people who are less familiar with QTI.

Key changes in this release (1.0-DEV8)

  • Assessment item rendering now incorporates the following new features:
    1. Display of a model solution (via the QTI <correctResponse>).
    2. The ability to reset a session back to the state it was in immediately after the last run of template processing, which effectively clears all candidate input back to the original state, but leaves randomly-chosen things intact. Existing "reinit" functionality has been made a bit clearer.
    3. An explicit "closed" state, which is entered either explicitly by the candidate, when an adaptive item becomes complete, or when the number of attempts hits your chosen value of maxAttempts. I think this will need a bit more work, as it was never implemented in QTIEngine or MathAssessEngine.
    4. A new "playback" feature lets the candidate step through every interaction they made with the item, which is a possible way of implementing the allowReview concept from test rendering.
  • A number of "knobs" are now available for controlling how you want to render a single item, including:
    • maxAttempts (as seen in tests)
    • author mode on/off (controls whether authoring hints are shown)
    • a simple prompt to be shown at the top of the question (similar to rubrics in tests, but simpler)
    • restrictions on what the candidate can do, including:
      • close a session explicitly when interacting
      • play a session back when closed
      • reinitialize a session when interacting or when closed
      • reset a session when interacting or when closed
      • see a model solution when interacting or when closed
      • see the result XML
      • see the item XML source
  • I have gone through all of the existing samples and set the above "knobs" to values that show each in their best light. For example, some suit a very formative "try as many times as you like" approach with rich feedback, others are more austere so are better displayed in a more rigid way. This should make the samples much more useful for people trying them out.
  • The debugging information shown in author mode has been improved, and now shows bad responses (e.g. a string submitted for a float) and invalid responses (e.g. the wrong number of choices made).
  • There's now a rich database structure underpinning all of this, which records everything the candidate does and the changing QTI state during this process. This is currently used to implement the "playback" functionality, and will prove invaluable for analysing result data when the system delivers real tests.
  • The HTTP calls that control the typical delivery of a "candidate session" via a browser are now as RESTful as you would pragmatically expect in this type of scenario. A more formal RESTful web service API will be trivial to do from this, but I'm going to hold off until anyone actually needs it.
  • HTTP responses have been improved so that they always include the Content-Length header. Responses that don't change also send cache-friendly HTTP headers such as ETag and Cache-Control.
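For anyone curious what that caching behaviour amounts to in practice, here is a minimal sketch using the JDK's built-in HttpServer rather than QTIWorks' real servlet stack; the port, path and payload are made up for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch: serve an unchanging resource with ETag and Cache-Control,
// and answer a matching If-None-Match request with a body-less 304.
public class CacheFriendlyServer {
    public static void main(String[] args) throws Exception {
        byte[] body = "<p>Static rendering resource</p>".getBytes(StandardCharsets.UTF_8);
        String etag = "\"" + Integer.toHexString(java.util.Arrays.hashCode(body)) + "\"";

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0); // placeholder port
        server.createContext("/resource", exchange -> {                        // placeholder path
            exchange.getResponseHeaders().set("ETag", etag);
            exchange.getResponseHeaders().set("Cache-Control", "max-age=3600");
            String ifNoneMatch = exchange.getRequestHeaders().getFirst("If-None-Match");
            if (etag.equals(ifNoneMatch)) {
                exchange.sendResponseHeaders(304, -1);       // not modified: no body sent
            } else {
                // sendResponseHeaders with a positive length sets Content-Length for us
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            }
            exchange.close();
        });
        server.start();
    }
}
```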

Short term development roadmap

  • Next up is adding in the web-facing functionality for creating your own assessments in the system, uploading content into them, validating them and trying them out (using the various "knobs" I listed above).
  • After that it's finally time to implement the entry and exit points for candidates who get sent to QTIWorks to do an assessment. This is where LTI will come in.
  • Then... tests. At last!
And relax.

Thursday, 3 May 2012

QTI Works project area now on SourceForge

QTI Works now has a presence on SourceForge:

https://sourceforge.net/projects/qtiworks/

It's fairly vestigial at the moment, so don't get too excited! If you're a geek, you'll quickly notice that there is no source code there yet. I'm presently developing QTI Works within my own GitHub area, and plan to push things over to SourceForge once we get past a few more milestones. Until then, you can get the source code at:

https://github.com/davemckain/qtiworks

Please don't expect any API stability at this point. There's also no build documentation yet, which probably doesn't help...!

Wednesday, 2 May 2012

Showcasing QTI 2.1 in QTI Works

As well as being a tool for managing, trying and delivering your own QTI assessments, QTI Works will also act as a showcase for QTI and the things it can do, much as the existing QTIEngine and MathAssessEngine already do.

With this in mind, I have started to feed Graham Smith's excellent collection of QTI examples into QTI Works, and plan to continue this process over the next few months while it takes shape.

These examples will be bundled into development snapshots of QTI Works for you to play around with. So far, I have assimilated 3 sets of examples:
  • The IMS standard examples (and a few extra bits)
  • Examples demonstrating the MathAssess QTI extensions
  • A small set of example items from language testing (German & Russian)
You can play around with these at: https://www2.ph.ed.ac.uk/qtiworks

(Yes, it does look very spartan at the moment, but don't worry about that!)

This exercise is actually very useful to the project for a number of other reasons:
  1. It provides me with some sample data for running automated "integration tests" on the JQTI+ / QTI Works software. (For example, I feed them through all of the code performing the various bits of QTI logic, such as reading in XML, validating the data models, writing out XML, running template processing, running response processing etc. This is invaluable for finding and fixing bugs, and for making sure we can handle "real world" examples properly; a simplified sketch of the idea follows the list below.) 
  2. As well as being useful for integration testing, they also help with so-called "regression testing", which helps make sure that I don't break things inadvertently during the development process.
  3. These examples have been around for a few years now, so this process is a good way of doing some QA on them and making sure they're right up to date with the QTI specification.
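As a very rough stand-in for what those integration tests do (and emphatically not the real suite, which drives the full JQTI+ pipeline of validation, template and response processing), here is a sketch that walks a placeholder "samples" directory and simply checks that each file parses and has a QTI root element.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

// Crude smoke test over a directory of sample QTI XML files: does each file
// parse, and is its root element assessmentItem or assessmentTest?
// (Assumes the usual prefix-free QTI root element.)
public class SampleSmokeTest {
    public static void main(String[] args) throws Exception {
        DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        try (var files = Files.walk(Paths.get("samples"))) {   // placeholder sample directory
            files.filter(p -> p.toString().endsWith(".xml")).forEach(p -> {
                try {
                    String root = parser.parse(p.toFile()).getDocumentElement().getNodeName();
                    boolean looksLikeQti = "assessmentItem".equals(root) || "assessmentTest".equals(root);
                    System.out.println((looksLikeQti ? "OK    " : "WARN  ") + p);
                } catch (Exception e) {
                    System.out.println("FAIL  " + p + " (" + e.getMessage() + ")");
                }
            });
        }
    }
}
```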
Enjoy!

Monday, 16 April 2012

HEA-STEM Conference April 12-13 2012

The conference was held at Imperial College London, in several different buildings. The parallel sessions were organised in subject strands. Our presentation was in the Maths, Stats and OR strand in the last set of papers.


A common theme in the conference was the need for maths to be set in context for each discipline. The problem is compounded by students' surprise, in many subject areas, at finding that their course includes maths. Speaker after speaker reported that students engage better with maths in context, and there was a lively discussion about the most effective way of supporting students: is it better to have a subject specialist teach the maths they need for their course, or should a mathematician teach the maths? The conclusion was that there should be several people contributing to this teaching, and that if the support is removed from the location where the problem was presented, the student is likely to be less embarrassed and may seek help more readily.


Our paper demonstrated the facilities in the QTI tools for contextualising questions, and also featured the first appearance of the LTI connector embedded in an institutional Moodle - the University of Glasgow's Learning and Teaching Moodle instance.

LTIQuizzes update

During our presentation at the HEA STEM Conference I demonstrated LTIQuizzes through the VLE at Glasgow University. LTIQuizzes was running on our Amazon EC2 virtual server, but to the audience it just looked as though I was setting up and using a normal Moodle module. For specialist software this is a great arrangement, because the tool can be part of the main VLE as far as staff and students are concerned, yet it is safely isolated so that it doesn't put core services at risk or require the same platform as the VLE. Moodle is a conventional LAMP stack application, while LTIQuizzes runs under Tomcat - they could run on the same server, but they use quite different technologies, so it's much nicer to keep them on separate machines.

LTIQuizzes isn't really intended for production use, but it does show what is possible. For now storage is to files rather than a database, and the QTI engine is APIS, which only supports a subset of QTI. Database storage, full LTI 1.1 support as well as the LTI 1.0 extensions for VLE persistence and course membership will all be supported very soon, and the LTI section of the code can easilly be reused by other applications. (I'm also developing PHP and C# vaersions of the LTI classes.)