Wednesday, 21 November 2012

A significant date, and some reflections on the Uniqurate project

Hi all,

I just wanted to post to mark a significant date - today represents my last officially contracted day on Uniqurate, and thus the formal end of the development phase of the project. I am still working on the technical documentation, and I'm sure I'll also contribute to the final reporting process, but as far as actually being employed to work on Uniqurate, to coin a phrase: that's all she wrote!

I wanted to take the opportunity to say what a pleasure it's been to work with the project team and the wider community. It's always a joy to work with the usual QTI-heads, but what really made this project for me was the wider involvement from not only our official partner institutions, but also the extended community that's begun to grow as a result of the efforts on QTI-PET. I think these fresh perspectives really have helped further the cause of open standards for e-assessment, and for QTI, and we've seen a greater maturity in the final application as a result.

So in this post I wanted to take time to reflect on Uniqurate - not the application itself, which I covered in my last blog entry; this time, I want to talk about the project itself - particularly the role of Agile methodologies.

There have been some interesting lessons learned from my perspective, particularly speaking as someone who is usually an advocate of Agile for managing software development projects. I have taught Agile approaches to students on Software Engineering courses, and used them to excellent effect in the past both in software development and for general project and time management. I am a fan of Agile. I agree with the Agile Manifesto and its principles. However, during Uniqurate something became clear to me: 

Agile is not always the best option for a software development project in HE.

Back in May 2011, when the original call for projects went out from JISC, the Technology Transfer strand specified that projects should proceed according to a "rapid, open or agile methodology". Previous QTI projects on which I had worked had used Agile approaches, with their eventual software artefacts distributed under open source licensing, so the established team members - me included - were very much in their comfort zone. As the person who took point on writing the response to the call for Uniqurate, I naturally detailed a process focused around iterative development cycles, with an ongoing, continual loop of feedback from end-users and response from the software development team - the response from the latter coming in the form of changes to the application.

We got underway with a flurry of activity and enthusiasm from all concerned. Once the teaching semester started, though, the realities of people's day jobs intruded. The level of interaction was inevitably reduced, particularly among project members who were confronted with teaching commitments and the need to support their students. With the project extending over a comparatively long period, it became relatively easy for a demographic who could reasonably be described as "harassed, overworked academics" to put it on the back burner.

In DSDM Atern, there is a pre-project questionnaire designed to ascertain the suitability of DSDM. Although one would need to translate the various DSDM-specific terms used, this document applies just as well to Agile approaches in general. Among other things, it stipulates that easy access to key representatives of the user community must be available to the developers throughout. It also stipulates that key members of the team "buy in" to the Agile approach.

Although I believe we had a successful development process, I am not sure that it was truly Agile; neither am I sure that, had we undertaken DSDM Atern's pre-project questionnaire at the outset, we would have been able to green-light an Agile approach. Past projects had involved people enthusiastic about pushing the boundaries of e-assessment and open standards, many of whom were not directly involved in teaching or had only limited teaching commitments. On this project, we were actively seeking involvement from those whose "day job" was teaching. Thus, there were long periods during which we had little or no engagement from the user community. This was nobody's fault, let me state that clearly for the record; it was simply the reality of an academic role in British HE. Naturally, when the semester breaks occurred and people emerged for air, the enthusiasm and engagement suddenly reappeared and the lines of communication were re-established.

In hindsight, I also wonder whether our project team truly understood the Agile process - another tickbox we would perhaps have failed to tick on the DSDM paperwork. This was revealed to me during some of the correspondence leading up to the recent Programme Meeting. One of the team revealed that she'd not felt comfortable exposing academic members of staff to the application in the early stages, as it was limited in nature and scope at that point, and she did not want to put them off. Belatedly, I realised this had also been a contributory factor in our not receiving the constant user feedback we'd have liked. I had perhaps been remiss in explaining one of the crucial tenets of Agile - that user involvement and feedback occur throughout, particularly in the early stages of development - think of it like a child's formative years!

Ultimately, we might say we did a two-phase "big design up front" project instead of Agile. The initial enthusiasm and availability of project members meant that when direct feedback dried up, I was able to extrapolate what I had into something approximating an overall specification for the application. I ended up developing this over two cycles: one leading up to CAA, then another leading up to when I actively finished development, around September (i.e. when my own teaching commitments started to intrude!).

I do wonder, however, what would have happened if we'd actually set out to do this. Given the early availability of project members, we could have created a formal specification as the very first task. In the event my extrapolations worked, but there were times when I felt I was making it up as I went along.

Armed with a good spec, we could then send academics back to the trenches while the software developers did their thing - perhaps having people resurface in the period between semesters, and again at the start of the summer, to nudge things back on track if they'd gone awry. Some might say this would still be Agile - it is still iterative development, after all - although I'd submit the sprints/timeboxes would be far too long to truly fit that description.

Whatever moniker one chooses to attach to such an approach, I think it has merit for future JISC projects that predominantly involve software development. I can't help but wonder what kind of application Uniqurate would have been in that alternate universe where JISC hadn't specified "thou shalt be Agile", and we had formally set out with a big design up front approach.

Tl;dr* - we need to be more agile about whether we choose Agile for managing software development on e-learning and academic-led projects.
------------
* For those who don't speak meme: Tl;dr - "too long; didn't read", or "the short version"

Friday, 21 September 2012

Uniqurate: new release... the last big push of development

Today's release of Uniqurate represents what will most likely be the last big push of software development - at least, development that will take place under the auspices of the Uniqurate project itself. Doubtless there will be bug fixes still to come, plus some little bits to come in QTI-PET down the line, and I'm sure there will be more development in the future on other projects and by others in the community. However, with the start of the teaching semester next week and development officially ending in November, it's the end of the big stuff for now.

Without further ado, here's what's new:

  • Multiple choice component:
    • No feedback is now a valid scenario. A response will turn green if both feedback and distractor (answer) text have been filled in, orange if only the distractor is filled in (which is now valid), and red when nothing has been filled in.
    • There is now an option to copy feedbacks. This means that if you don't want to have individual feedbacks for all your "wrong" answers, you can fill in one of them and use the Quick Feedback button.
    • There is a checkbox that will present a multiple choice component as a pull down list when delivered to the student. This might be useful for questions with lots of distractors!
  • Tests
    • You can now click on the edit icon next to a question to edit that question in situ within the test.
  • General
    • There are now options to copy a component, and to move/drag a component into a different position within the question. Dragging is also generally improved (e.g. it will scroll properly if you drag a component towards the bottom or top of the browser window).
    • The much requested option to put feedback immediately after the component (rather than at the end of the question) is now in place. The slider at the top of the component pane toggles between Feedback shown with components and Feedback shown at end of question. The first option means that feedback will appear immediately underneath the component to which it applies. The second option will display all feedback at the bottom of the question.
    • Far too many bug fixes to list - but includes workaround for the Firefox issue where scroll bars weren't appearing on the QTIWorks preview.
    • Probably many interesting and exciting new bugs introduced :-)
It has to be said that the recent demo/workshops have been invaluable in providing me with a distillation of end-users' needs and issues. A special shout-out to those who participated in the QTI-PET session in Oxford - much of what's gone into today's release came out of that session. Bug tracking and fixing in particular, while not a glamorous activity, absolutely depends on end users using the software and feeding back to the developer(s). Being physically there and part of the demo session gave that process an immediacy and dynamism that is lacking when it's reduced to emails bouncing back and forth.

So, with the bulk of software development done, and if you'll indulge me, I just want to take a brief moment to reflect on Uniqurate. 

Back at the end of last year, I had a very different vision for this application. The original intention was to create a loosely coupled set of mini-editors - i.e. one for a multiple choice, one for a text entry, and so on. I also bandied about the term "de-maths-ing QTI". The intent there was that any questions likely to involve anything vaguely arithmetic would be generalised, the edges hammered off, and used as a baseline to create yet another mini-editor. We spoke about identifying questions with cross-disciplinary utility, so as to avoid those proverbial edges being too sharp and thus not needing that much hammering in the first place!

The real quantum leap for Uniqurate came just after Christmas. A discussion with Sue M early in the development of what came to be known as "friendly mode" saw the mini-editors become "components". The crucial difference was the concept of having multiple components per question. 

Laying the foundations to support this took some time, and was frustrating from my perspective - early releases of UQ looked nice but didn't do very much! However, contrast what we now have with the original vision. You can create a question in UQ that tests the student's knowledge of a subject by having them engage in a variety of ways. The custom maths component alone offers the potential for some very rich feedback scenarios.

The change of approach meant that we took a bit of time to get rolling but in the end, I think we have a much better application for it. 

I am really pleased with Uniqurate, even if I do say so myself. Certainly it goes light-years beyond what I envisaged at the start of the project. The short version of the Uniqurate project proposal was that Aqurate was too simple and Mathqurate too complex; our challenge was to create something in between. I think we've succeeded in that aim, but along the way something unexpected has happened: Uniqurate is in some respects more complex than Mathqurate, at least in terms of the content that one can produce. I think that's a good, no, a great thing. It means that those new to QTI can dive in and create some "clever" content without ever seeing a scrap of XML. In the long run, it can only help to encourage the adoption of QTI outside of the existing community.

Friday, 14 September 2012

LTIQuizzes

Quite early on in the QTIDI project I decided to write a demonstration application that would allow us to test LTI concepts along with a quiz system. The result was LTIQuizzes, a very simple quiz application that allows entry-level QTI 2.1 quizzes to be played through an LTI connection from a VLE. I should really have blogged about this ages ago; however, summer is a very busy time for me, with conferences, software upgrades and feature development for the new academic year.

LTIQuizzes consists of an LTI connector (which is intended to be reusable software that can be linked to other systems), my original QTI 2.0 item player, APIS, and a new very basic QTI 2.1 assessment player which will eventually become part of my desktop Common Cartridge viewer, CCplayr. LTIQuizzes served a useful purpose for the QTIDI project, because it allowed us to demonstrate how QTI assessments can be linked into VLEs using LTI before our proper QTI delivery system, QTIWorks, was LTI enabled.

From the teacher's viewpoint LTIQuizzes is very straightforward to use. The teacher creates a new resource in the VLE, and clicks on the resource link, which takes them into LTIQuizzes. As they are a teacher in the VLE course they are given the LTI "instructor" role, and so are shown the teacher screen in LTIQuizzes. This allows them to upload a packaged QTI question or test, which will be displayed to any students who click on the VLE link into LTIQuizzes. Provided all the questions in the quiz are automatically marked (i.e. there are no essay or extended text questions), LTIQuizzes will return a score to the VLE (if it supports LTI version 1.1) when the student completes the assessment.
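For the technically curious, the score return uses the LTI 1.1 Basic Outcomes service: the tool POSTs a small "POX" (Plain Old XML) message back to the VLE's outcome service URL, signed with OAuth using the same key and secret as the original launch. A minimal sketch of a replaceResult message follows - the sourcedId and score values are illustrative, not ones LTIQuizzes actually sends:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>msg-0001</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <!-- the lis_result_sourcedid handed to the tool at launch time -->
        <sourcedGUID>
          <sourcedId>example-sourced-id</sourcedId>
        </sourcedGUID>
        <!-- the score must be a decimal between 0.0 and 1.0 -->
        <result>
          <resultScore>
            <language>en</language>
            <textString>0.85</textString>
          </resultScore>
        </result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>
```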

LTIQuizzes is intentionally extremely simple, and there is no way to create an activity without using an LTI link. David has taken a slightly different approach with the LTI connection in QTIWorks, where teachers set up the quiz after logging into QTIWorks directly, and then get the LTI information to configure an activity in Moodle, Blackboard or any other suitable VLE. This approach works very well with the LTI implementations that are now distributed as part of Blackboard 9 and Moodle 2.3, but is awkward with the Moodle 1.9 LTI plug-in, which assumes that administrators rather than teachers will put in most of the LTI information. When I was writing LTIQuizzes, the only LTI-enabled VLE that I had access to was Moodle 1.9 with the LTI plug-in, so naturally I used an approach which fitted closely with that taken by the developers of that version of the plug-in.

Now that QTIWorks is fully LTI enabled, LTIQuizzes is much less important for our project; however, I do intend to do some further work on it, as it is sometimes useful to have a very simple system available for experimenting with. I will be doing some refactoring of the APIS item player over the next few months to make it use generics properly (it's very old), and will probably use it for any experiments I do on ideas that might go into an eventual QTI 2.2. I will also be integrating the item and test player parts of LTIQuizzes into QTIPlayr so that it supports a variant of Common Cartridge that uses QTI 2.1 entry-level assessments. The LTI interface is part of an ongoing project to provide simple lightweight LTI code in Java, C# and PHP that tool providers can use to quickly add LTI support to their software.

Monday, 10 September 2012

QTI-PET Workshops and Demos

Over the last few weeks we have held training workshops for the QTI-PET partners. Two have been face-to-face and one, mostly for the JISC RSCs, online. We've also had a couple of opportunities to demo the tools and let folks try them out. One of the great lessons learnt in the process is just how difficult it is to get university folk together during the summer! We had three attempts at finding a good date for the South workshop, and ended up with a compromise which meant some people could attend, while the others have the link to the recording of the RSC session, which is at https://sas.elluminate.com/mr.jnlp?suid=M.6207BE476C2071F8C1579CFF32DDA3&sid=2009077.

So, in chronological order we had:

University of Glasgow Learning and Teaching Seminar

Niall presented the projects and their work at the Learning and Teaching Centre Seminar on 14th August. Although the delegates are close colleagues, this was the first time that they had seen these tools demonstrated, and there was some interesting discussion about possible ways of using them.

QTI-PET Workshop North

Held in the Jura computer lab in the University of Glasgow Library on 17th August.

We had four partners attending; Sue and Niall presented and helped with the hands-on sessions.

David Reimer from Edinburgh was using Uniqurate's Friendly Mode for language questions - in ancient Hebrew. Lesley Hamilton from the University of the West of Scotland was assembling multi-part questions for medicine and nursing in Friendly Mode. Shazia Ahmed from Maths Support at the University of Glasgow was adding to her collection of questions using Intermediate Mode. Sue demonstrated making small changes in Expert Mode to customise a question already authored in Friendly Mode.

We demonstrated the QTI Works and JAssess renderers and showed a question running in QTI Works within Moodle for the first time. We also had a look at the LTIQuizzes tool running a simple test in Moodle.

Several new Uniqurate components were suggested, including a medicine component which would enable people to author questions allowing nurses and other health professionals to practise drug dosage calculations. Participants would like to be able to use randomised graphics, including graphs and diagrams, but these features will need further development in both authoring and rendering.

There was a general feeling that the feedback should appear close to the input to which it refers. Making this change will reduce the time available for new components, so we have to decide which components are most essential.

There was some concern about terminology, since, although the attendees were all comfortable with technology, many of their colleagues are not used to using technology directly in teaching. People made the same comments at all three of the partners' workshops, and at the South Workshop, Roger Greenhalgh from Harper Adams University College volunteered to go through the terminology and find the problem areas. We are collecting instances of words and phrases that need to be changed - we try to avoid using QTI terms, but some other words are too obscure for new users, so we are looking for translations, and these edits will be made as soon as possible.

With QTI Works now linked from Uniqurate to show the question running from within the editor, it is now much easier to check that the question does what you expect. This works well in Chrome and Internet Explorer; however, some browsers, particularly Firefox, seem to have difficulty in displaying QTI Works.

The technology behaved well and the participants felt they had had a useful day and that they would use Uniqurate when back in their institutions. We asked them to let us know how they are getting on and to contact us with any difficulties.

QTI-PET Familiarisation Workshop for JISC RSCs

This session was held on 24th August and hosted by RSC Scotland; Sue did the presentation and Grainne Hamilton facilitated the session and collated the questions from attendees. The presentation and the question and answer session generally went well, although there were a few very brief fades in the audio and one would-be participant was unable to connect to the room. The authoring and delivery tools were well-behaved again and some participants were able to try out the tools during the demo.

There was a request for a pairing component, which would construct questions in which, for example, scientists are matched with their theories, or diseases with their symptoms. This is a QTI input type, which can be included in Uniqurate if time allows.

This session was recorded and the URL for the recording is https://sas.elluminate.com/mr.jnlp?suid=M.6207BE476C2071F8C1579CFF32DDA3&sid=2009077. This has been circulated to people who would have liked to go to a workshop but were unable to attend on the dates chosen.

eAssessment Scotland Online Demo

On 28th August we gave an online demonstration to the eAssessment Scotland Online Conference. There were participants from Australia and Asia as well as several from Europe and the UK, and indeed some in the University of Glasgow.

Sue presented and Niall collated questions from the chat stream and answered the more technical ones. We had a 45 minute time slot, which was just long enough to demonstrate all the tools and answer delegates' questions in reasonable detail.

We also had a poster displayed at the eAssessment Scotland Conference on 31st August in Dundee.

QTI-PET Workshop South

This workshop was held in Oxford on 7th September in the University of Oxford Medical Sciences Teaching Centre. We had a seminar room with wifi access, and delegates brought their laptops, so that they had the questions they had created on their hard drive. It was attended by partners from the University of Derby, the University of Oxford, Reaseheath College and Harper Adams University College.

Sue presented and, since Paul Neve was also there, attendees were able to feed back to him directly and in more detail about the Uniqurate design. Participants used the image facilities in the static text component to add pictures to multi-part questions built using the other components.

We were also able to try the new test construction facilities in Uniqurate, which worked well, with the questions we created during the earlier part of the session being assembled into tests later on.

Sue has been constructing a Moodle course for her class at the University of Glasgow, and we were able to see how the questions will look when used for formative assessment, and to go through the process of adding another question to the course. A copy of Sue's course will be used in future demonstrations to show how the setup process works, and a mock-up course is also available where users can try inserting questions for themselves.

Next Demo

This week we have a workshop at 12:00 on Wednesday 12th September at the ALT-C Conference in Manchester.

Friday, 31 August 2012

Testing times on Uniqurate

No, we're not having problems on the Uniqurate project, quite the opposite in fact - today's release adds test authoring. Sorry, I couldn't resist the headline :)


You will find that the button on the main menu to create a test is now enabled, and when used, you'll observe that the edit option changes to reflect that you have a test rather than a single question in memory.

Tests are divided into sections, and each section contains a number of questions. For the simplest use case, you can just leave all the default settings alone, click the Add question to section button a few times and select some questions. Give the test a title, possibly fill in a bit of explanatory text that the student will see, and then save it. The result should then run in any delivery system compatible with QTI 2.1 tests.

However, Uniqurate does go a little further than that. You can have multiple sections in a test, and select random questions from the section so that no two deliveries of the test are the same. You can repeat a question within a section - useful if you want students to try several times with different randomised values (of course, this assumes that the question has randomised values!). You can even deliver different sections to the student depending on how they've done on the previous sections.
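To give a flavour of what this maps onto in QTI 2.1 terms, here's a hand-written sketch (identifiers and filenames are made up, and it's not necessarily what Uniqurate emits verbatim): a test contains test parts, which contain sections, and a section's selection and ordering elements are what drive the random-question behaviour. Branching between sections based on earlier performance is handled by branchRule elements, omitted here for brevity.

```xml
<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
                identifier="exampleTest" title="An example test">
  <testPart identifier="part1" navigationMode="linear" submissionMode="individual">
    <assessmentSection identifier="section1" title="Section 1" visible="true">
      <!-- deliver 2 of the 3 questions below, chosen at random... -->
      <selection select="2"/>
      <!-- ...and shuffle the order in which they appear -->
      <ordering shuffle="true"/>
      <assessmentItemRef identifier="q1" href="question1.xml"/>
      <assessmentItemRef identifier="q2" href="question2.xml"/>
      <assessmentItemRef identifier="q3" href="question3.xml"/>
    </assessmentSection>
  </testPart>
</assessmentTest>
```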

The test screen is a little busier than the question authoring screen, so when time permits we'll try to put together some documentation that talks people through the various options. In the meantime, however, the various little help icons scattered around the screen will pop up help about the option nearby.

Unfortunately, QTIWorks does not yet have tests up and running. JAssess should run tests fine, however, and as long as you avoid the maths components of UQ, the venerable MathAssessEngine should also run them (although it does have a bit of a problem displaying the question feedback).

Meanwhile, people, as its author I hereby declare our old test authoring tool Spectatus officially obsolete! Awww. Please join me in a minute's silence for that worthy warhorse :-)

Tuesday, 21 August 2012

Uniqurate - the final thrust (ooer)

Following the recent RSC workshop, there was a release of Uniqurate that included quite a few bug fixes and a few feature requests. Most of the bugs should have been fixed (and I've probably introduced a whole bunch of new ones, but hey :-) and I've added some of the new features. However, a number of ideas for new components also emerged.

We've been fortunate in that from the CAA conference onwards I've been available to work on UQ more or less full time, and as a result we've made a lot of progress in July and August. The bad news, however, is that my involvement in Uniqurate ends in November, and once we get into September, teaching commitments will rear their ugly head and this quick-burn development will be curtailed. Short version: there is probably enough time remaining to get test authoring done and perhaps one other component.

So I'm going to put the possible other component to a vote. Feedback from the RSC workshop, plus components that have been mooted in the past, leads us to the following list - numbered for reference, but not in any particular order:
  1. An "Excel" component - essentially a grid of (random) numbers, some of which will be blank, and the student has to fill in the blanks. The blank values would be derived from the non-blank values.
  2. A Medicine Component for questions where you have a list of medicines and a list of "strengths" for each - the question would choose randomly a drug and a strength to set up a calculation (but I'd need a little more explanation here :)
  3. "Fill in the blanks" within text - "blanks" could be either a text field into which the student types, or a pull down menu of choices
  4. Diagram labelling - drag labels onto the right place of a diagram. Potentially, drag bits of a diagram into the right place of a diagram.
These aren't components, but feature requests, any one of which would probably use up the non-test development time (numbering continues from the list above):
  5. Option to place feedback alongside the input it refers to
  6. Confidence-based marking
  7. A random wrapper - i.e. you define a group of components, and at question run time it chooses a random component within this wrapper to show. Thus you could set up a variety of different components but with related content - e.g. several maths components with slightly different variations of a concept, MCQs with different but related distractors, etc.
To help you decide, here's some thoughts about these from my perspective:

3 and 4 are eminently doable. Although questions with graphical interactions seem to get good responses from students, the scope of the sister QTIDI project means that we're probably stuck with the legacy Java applet-based graphics for now at the renderer end of things. So if it came down to 3 or 4 my vote would be 3, to concentrate on the component that would result in the best experience when it was finally rendered.

I understand the utility of 5, but it would mean a fundamental change in how the "friendly" mode composes its content, so it would be very complex to do and would burn up a lot of development time for something that is basically aesthetic.

1 and 2 would need to be fleshed out a lot more, and I think there are others with greater cross-disciplinary utility - but I'm game if people vote them highest!

6 is often brought up as a popular choice - again, game if people vote it highest.

However, my own vote would be 7. I think it would be relatively easy to implement both from a UI and QTI perspective, yet add a new dimension of flexibility to questions. You could specify a whole raft of similar but subtly different components around a given subject, and really put a student through their paces (although, arguably, one could do this at test level).

Over to you, guys - what do you think?

Monday, 13 August 2012

Custom maths component for Uniqurate

I am very keen to see QTI used in disciplines other than maths, and I believe it's an important aspect of the current projects - I've specifically used the term "de-mathsing" in the past.

However, the reality is that even in non-maths disciplines people still need e-assessment resources that can handle some element of maths. We may not be talking about the complex, computer algebra system-enabled level seen in some of the MathAssess resources - ultimately, my feeling is that creating such rich content will most likely always be the purview of Expert Mode and those individuals who can craft QTI resources within it. Nevertheless, the ability to specify a simple expression and build a question around it spans many, if not most, subject areas, and it is not unreasonable of the user community to expect their e-assessment tool to support this.

In the early days of Uniqurate I'd expected much of this need to be served by individual quick maths components that we would identify across a range of disciplines - much as was the case with the quick maths triangle. However, most of the feedback emerging of late has been more generic - a need for "more maths" was being articulated without very much in the way of detail. I wanted to scratch this itch but avoid travelling down the same path as Mathqurate. Thus, a general "simple maths" component began to take shape in the back of my mind, and I'm pleased to announce that the release of Uniqurate that I deployed on Friday introduces this new functionality.

You will observe that the components are now divided into three tabs: Text, Maths and Other (the list of components on the left side was beginning to disappear off the bottom of my MacBook screen :-) In the Maths tab, alongside the existing "triangle" component, is our new maths component. When dragged onto a question canvas this results in a fairly simple component that just shows Question Text and Answer, but clicking on one of those fields will pull up a new window like the one below:


The screenshot above shows a good example of how this component may be used. Here, we have a question around the old chestnut πr². It might be argued that this is actually a trick question - we're specifying our radius in centimetres but actually want the area in millimetres! This lets us showcase the different feedback that can be set up. Note that there is a correct and a "non-correct" answer - the latter being what I might refer to as wrong, despite others out there hating such a pejorative term :-) However, you can also add custom feedbacks which will be triggered on alternative answers. Our example above shows only one, but you can have as many as you like.

The example shown involves only one variable (pi doesn't count - it's a constant, as is e) but you can have as many variables as you like - the table underneath the answer field will expand to accommodate them all. Our example also uses explicit values in the min and max fields - but you can use variables or even expressions there. The constraint field gives you further control by specifying conditions that must be met when the question runs - you might use this to avoid zero values in variables, for example, or to ensure that one variable's value makes sense in the context of another's.
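Mechanically, a component like this just needs to draw a value for each variable from its range, then re-draw until any constraint is satisfied. Here is a minimal Python sketch of that idea - the function and parameter names are mine for illustration, not Uniqurate's actual implementation:

```python
import random

def generate_values(variables, constraint, max_tries=1000):
    """Draw an integer for each variable within its inclusive (min, max)
    range, retrying until the constraint expression holds.

    `variables` maps a name to a (min, max) pair; `constraint` is an
    expression over those names, e.g. "r != 0". Illustrative only -
    not Uniqurate's API.
    """
    for _ in range(max_tries):
        values = {name: random.randint(lo, hi)
                  for name, (lo, hi) in variables.items()}
        if eval(constraint, {}, dict(values)):
            return values
    raise ValueError("constraint could not be satisfied")

# e.g. a radius between 1 and 20 that the constraint keeps below 15
vals = generate_values({"r": (1, 20)}, "r < 15")
```

The same retry loop also covers constraints relating two variables, such as `"a != b"`.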

Even if I do say so myself, I am very pleased with how this component came out. Initially, I was not expecting to achieve the level of flexibility available in the final version - certainly the use of expressions in the min and max fields was not in my mind at the beginning, and even the use of variables in those fields I considered problematic. The whole thing was opened up courtesy of a chunk of code I wrote a couple of weeks ago that converts a sensible, human-readable maths expression into QTI. So, if one pipes

(pi*r^2)*10

into this function, one gets


<product>
    <product>
        <mathConstant name="pi"/>
        <power>
            <variable identifier="r"/>
            <baseValue baseType="integer">2</baseValue>
        </power>
    </product>
    <baseValue baseType="integer">10</baseValue>
</product>

straight back. Regardless of the complexity of the expression - whether it is just a single, explicit value or a six-line, bracket-tastic epic - what you will get is a QTI excerpt that can be injected into a question wherever an expression is expected.

I won't go into more detail; this is perhaps a little more techie-oriented than is the norm on this blog (or certainly in posts I write here) but I wanted to mention it for the benefit of other developers who might be confronted with similar needs. There may be applications for this code not only for editor apps but for those who are converting resources in other standards into QTI. Uniqurate (like all of our projects to date) is open source and the code is available on SourceForge. Those of you so inclined, feel free to grab and hack - and be sure to let me know if you do (I'm looking at you, Mr. Pierce :-)
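For developers who want the flavour of it without digging through SourceForge, the core idea is an ordinary precedence-aware recursive descent parse that emits QTI operator elements instead of evaluating. This is not the Uniqurate code itself - just a minimal Python sketch of the same infix-to-QTI conversion, covering the operators, brackets, variables and constants seen in the example above:

```python
import re

# QTI 2.1 expression elements for each infix operator
OPS = {"+": "sum", "-": "subtract", "*": "product", "/": "divide", "^": "power"}

def to_qti(expr):
    """Convert a simple infix expression into a QTI expression tree.
    Handles + - * / ^, brackets, integer literals, variables and pi/e.
    A simplified sketch (no unary minus, no functions) - not the
    actual Uniqurate implementation."""
    tokens = re.findall(r"\d+|[A-Za-z_]\w*|[-+*/^()]", expr)
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def take():
        tok = tokens[pos[0]]
        pos[0] += 1
        return tok

    def atom():
        tok = take()
        if tok == "(":
            node = add()
            take()  # consume the closing ")"
            return node
        if tok.isdigit():
            return f'<baseValue baseType="integer">{tok}</baseValue>'
        if tok in ("pi", "e"):
            return f'<mathConstant name="{tok}"/>'
        return f'<variable identifier="{tok}"/>'

    def power():
        node = atom()
        if peek() == "^":          # right-associative exponent
            take()
            node = f"<power>{node}{power()}</power>"
        return node

    def mul():
        node = power()
        while peek() in ("*", "/"):
            op = take()
            node = f"<{OPS[op]}>{node}{power()}</{OPS[op]}>"
        return node

    def add():
        node = mul()
        while peek() in ("+", "-"):
            op = take()
            node = f"<{OPS[op]}>{node}{mul()}</{OPS[op]}>"
        return node

    return add()
```

Feeding `(pi*r^2)*10` through this sketch reproduces the nested `<product>`/`<power>` structure shown above.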

Meanwhile, for the rest of you - please go off and use it, break it, and (inevitably!) come screaming back to me. To test your questions you'll need to use JAssess for now, as support within QTIWorks for the QTI features this component depends on is still being developed. As ever, you'll find JAssess on the QTI Support site http://qti-support.gla.ac.uk/ along with a wealth of other useful resources. At the moment, you will also need to get in touch with Graham Smith to be able to run your own questions in JAssess, but you'll find his email address prominent on the initial JAssess landing page.

Wednesday, 1 August 2012

New Uniqurate release

A new release of Uniqurate was published yesterday which introduces a new component, the catchily titled "place items into correct order" component. When authoring the question you supply a number of items and specify a correct order for them. Then, when the question is delivered to the student, their task is to arrange them into this correct order.

From a user interface perspective, there are commonalities between this component and the existing multiple choice component - i.e. both of them require the ability to add a variable number of "answers" to the component. I could have used the existing multiple choice question component as a basis for the UI for the new one, but I decided to adopt a slightly different approach. Rather than having an "add new item" button next to each answer, I used a single "add" button, and introduced the ability to re-order the answers by dragging them. I think this approach is nicer than the existing one on the MCQ component, and if the community agrees I will port it over to the MCQ component.
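For what it's worth, the marking logic behind such a component boils down to a simple comparison of sequences. A hedged Python sketch - all-or-nothing by default, with a positional partial-credit variant that I'm adding purely for illustration (the real component generates QTI response processing, not Python):

```python
def score_ordering(candidate, correct, partial=False):
    """Score an 'arrange into the correct order' response.
    All-or-nothing by default; with partial=True, credit is
    proportional to the items in the right position.
    Illustrative only - not the component's generated QTI."""
    if not partial:
        return 1.0 if list(candidate) == list(correct) else 0.0
    hits = sum(c == k for c, k in zip(candidate, correct))
    return hits / len(correct)
```

Whether partial credit would ever be wanted here is exactly the kind of thing community feedback would settle.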

There are several potential avenues for the next thrust (ooer) of development, and I would be keen to get some thoughts from the community:

Maths Component
Would allow a user to enter a simple expression, e.g. A x B + C / D, possibly some functions too, specify the ranges for each variable, and possibly other parameters such as whether to use integers only. I would be open to suggestions on how to represent this in the user interface. My thinking at the moment is that when you drag this component onto the canvas, it fires up a wizard that leads you through entering the expression, then specifying the ranges for each variable and so on. Once complete, the wizard would close so that all you have in the canvas/preview is a placeholder represented by the expression, and an edit button. Clicking the edit button would bring the wizard back up.

Select correct answer from pull-down list
It would be easy to implement this as a component in its own right, that would appear as a block within a question. However, I suspect that most people would like to be able to place such components inline within a static block of text. This will be more complex. In terms of how I would represent that from a UI perspective, I would probably arrange it so that one could drag it into place within a static text area. However, that would break the UQ convention with respect to how components are placed onto the canvas. Again, I'm open to suggestions as to how best this should be represented.

Test functionality
I've so far avoided implementing the ability to compile a group of questions into a test, because the test capabilities of several of our renderers are still in flux. This will need doing at some point, though, and now makes a degree of sense, given that UQ supports a variety of question components - and there's plenty for you testing types to be getting on with in question-land in the meantime!

Any and all feedback will be gratefully received!

Monday, 23 July 2012

CETL-MSOR Conference / New Partners

Following on from CAA, we put in an appearance at the CETL-MSOR Conference in Sheffield. Although we were not presenting, it was useful to join the meeting and hear about delegates' experiences in assessing mathematics, statistics and operational research.

A recurring theme was the need for assessment resources in statistics in service teaching across a wide variety of disciplines and at all levels. A selection of resources which would begin to address this need will be rebuilt from the CALMAT question collection for a Glasgow University module this autumn.

We were able to add another informal partner to QTI-PET following a lunch-time discussion during the conference. So during the week we welcomed 5 new partners on board, bringing us to a total of 12 QTI-PET partners who will help us to pilot the tools and feed back to us about their experiences. We are arranging a series of training sessions to get everyone up to speed with the tools in time to get some material in front of students this coming semester.

Wednesday, 18 July 2012

QTI @ CAA 2012 / new Uniqurate release


Last week's participation in the 2012 International Computer Assisted Assessment (CAA) Conference was a great success for all of the QTI projects. QTI was clearly positioned front and centre throughout the conference, with stream A on Wednesday almost exclusively devoted to it. The closing speaker, Paul Bailey of JISC, went out of his way to stress the importance of QTI and open standards within the field of electronic assessment, and to talk about JISC's commitment to this area of research.

The conference was preceded by a QTI Workshop on the Monday, which was most productive, particularly from the perspective of Uniqurate in that it allowed us to get some new faces using the application - and exposed some new bugs! Consequently, a new version of Uniqurate has been released today that addresses the following bugs:

  • Adding an image into a question causes invalid XML to be generated
  • Using IE causes invalid XML to be generated (regardless of images!)
  • Inconsistent behaviour of watermarked input fields - sometimes the watermark remained when you tried to type into the field, meaning that what you got was a mishmash of what you wanted and the watermark text

The version of UQ released today addresses these issues and represents quite a significant change under the hood, which should hopefully resolve any remaining cross-browser inconsistencies in the XML that gets produced. As a bonus, while I had the image insertion code open, I also added the ability not only to add an image from a URL on the public internet, but also to push an image from your local hard disk and have it bundled as part of the eventual question content package. (This itself meant a big change under the hood with respect to the way the app handles content package manifests, but you end users don't care about that!)
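Bundling a local image means the content package's imsmanifest.xml has to list the new file against the question's resource. Stripped of the IMS namespaces that a real manifest carries, the change amounts to something like the following sketch (names and structure simplified for illustration - this is not the app's actual code):

```python
import xml.etree.ElementTree as ET

def add_file_to_resource(manifest_xml, resource_id, href):
    """Add a <file> entry for a newly bundled image to a
    content-package manifest. Simplified: a real imsmanifest.xml
    is namespaced and richer than this toy example."""
    root = ET.fromstring(manifest_xml)
    for resource in root.iter("resource"):
        if resource.get("identifier") == resource_id:
            ET.SubElement(resource, "file", href=href)
    return ET.tostring(root, encoding="unicode")

manifest = ('<manifest><resources>'
            '<resource identifier="q1" href="q1.xml">'
            '<file href="q1.xml"/></resource>'
            '</resources></manifest>')
updated = add_file_to_resource(manifest, "q1", "images/cat.png")
```

The image bytes themselves then just go into the package zip alongside the question XML, at the path the `href` promises.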


Upcoming work will probably be to add a component that gets the student to place/drag items into a correct order - this seemed to get some appreciative nods for a "next up" suggestion, but if you have any other ideas, particularly if they're cross disciplinary in nature, fire them at me!

Saturday, 7 July 2012

QTIWorks snapshot 10 has been released

I'm pleased to announce that the 10th snapshot of QTIWorks is now available for you to play with:

This release has been timed squarely to coincide with the CAA conference in Southampton next week and focuses on showcasing QTI, as well as taking the wraps off some of the functionality that people will use to actually deliver and integrate their assessments.

New look and feel

The first thing you'll notice is a new (albeit rather minimalist) look and feel. This is likely to evolve over time as I'm a terrible graphic designer, but I have to say I do like the font I've chosen. (Yay for Google Web Fonts and CSS @font-face!)

Public demo/showcase area

I have moved all of the functionality that doesn't require a full QTIWorks account into a new "Demos" section, which we'll develop over time as a nice public showcase of QTI. Currently you can do 3 things here:
  • Try out our bundled samples: I've bundled a selection of QTI 2.1 sample items into this area for people to try out... and this bundle now includes some nice examples from UPMC. I've made a bit of an effort to pick appropriate "delivery settings" for these so that they are showcased in a sensible way, though I have a wee bit more work to do in this respect. I think it looks quite nice, so go and have a play!
  • Upload and validate your own item or test: This is the validator from snapshot 1 with a lick of paint, and some minor improvements.
  • Upload and run your own item: This lets you upload your own QTI 2.1 assessment item for running, using a number of pre-defined "delivery settings". QTIWorks will automatically validate your item and divert to the validation results page if it finds any errors.

Logged-in area

Potential users of QTIWorks can now request a free account that will give them access to their own logged-in area (which I'm currently calling "the dashboard" but will probably change very soon). This is where people will eventually be able to use the full functionality provided by QTIWorks though, of course, we've still got some way to go in this respect. That said, there's already enough to cheer up a wet weekend in Skegness, including:
  • Assessment management: you can now upload and store new assessments in the system, list and view ones you've already uploaded, validate them, update their QTI data and try them out.
  • Delivery Settings management: Delivery Settings are a new concept in this snapshot. These allow you to specify exactly how an item (and later a test) should be delivered to candidates. For example, in formative assessment you might want to let the candidate have an unlimited number of tries, access a model solution and to be able to reset (and even re-randomise) the question freely whereas, in summative assessment, you probably want to prevent most of these things happening. Delivery settings provide a way of controlling all of these details. I have a feeling we'll refine this a bit more over the next few iterations, so feedback on the ones we've chosen already would be most welcome.
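As a mental model, a delivery settings object is just a named bundle of per-delivery options that you attach to an assessment. The field names below are mine for illustration, not QTIWorks' own, but the formative/summative contrast described above might be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class DeliverySettings:
    """Illustrative sketch of the 'delivery settings' concept:
    one reusable bundle of options controlling how an item is
    presented to candidates. Field names are hypothetical."""
    max_attempts: int = 0          # 0 = unlimited tries
    allow_solution: bool = True    # may view the model solution
    allow_reset: bool = True       # clear input, keep random values
    allow_reinit: bool = True      # re-run templates (re-randomise)

# Formative: generous defaults. Summative: lock everything down.
FORMATIVE = DeliverySettings()
SUMMATIVE = DeliverySettings(max_attempts=1, allow_solution=False,
                             allow_reset=False, allow_reinit=False)
```

The appeal of the idea is that one settings bundle can be reused across many items, rather than configuring each delivery by hand.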
So that's it for this snapshot! The public area should be pretty stable (and rather slick now) so please give it a go and report any issues. On the other hand, I'd recommend wearing a hard hat and goggles if you want to try out the "dashboard" area at the moment. Because of this, you'll need to email me if you want an account created for the time being so that I can give you a bit of a pep talk.

Monday, 2 July 2012

CAA release for Uniqurate

A new release of Uniqurate has just been issued, which (as always) can be found at http://uniqurate.kingston.ac.uk/demo.

One of the problems with recent releases was that UQ seemed to be turning into three discrete sub-applications:
  • The "Friendly" WYSIWYG drag-and-drop mode, targeted at those who just want to create e-assessment content and have no QTI expertise - or inclination to acquire it!
  • The "Intermediate" mode, for pre-existing content that couldn't readily be crammed into UQ's "Friendly" mode, enabling novice users to alter the human-readable aspects and make changes to a question's context;
  • The "Expert" mode, for the old hands at QTI who find the support mechanisms of the other two modes confining and obtrusive.
Given that it's a fairly quiet period, with little coming from the client institutions in terms of new components, I thought I'd use the time to address this splintering of the app. This is doubly important with CAA coming up, particularly with our pre-conference workshop where we'll have new victims (sorry, users) exposed to UQ. I believe it's important that the journey an application requires of a user is logical, and can be followed by those of all levels of expertise.

UQ now opens up with an initial landing/menu page, which invites the user to create or load new content:


Depending on the content, the user will be seamlessly routed to the best mode for them. If they're creating a new question or editing an existing UQ one, then clicking the Edit icon will take them to the "Friendly" mode. However, if the question was authored outside of UQ, it will offer them the "Intermediate" mode:


There have also been a lot of changes "under the hood". Most significant of these is the ability to support foreign characters, which will be invaluable for some of our colleagues on QTI-PET...


...although it's all Greek to me :-)

We've also got a new component: the simple text response (i.e. the student types some text, and it's either right or wrong). Simple is the operative word, but - bottom line - this is a component end users are going to expect, even if it's so basic that they don't think to articulate it as a requirement!

Friday, 29 June 2012

LTIQuizzes and updates to APIS

When I was prototyping our Java Learning Tools Interoperability (LTI 1.1) implementation for QTI-DI I needed a tool to connect it to so that I could try out features in a more meaningful way than a simple duplication of the IMS test suite would achieve. I used APIS, the QTI 2.0 item engine that I developed many years ago as part of JISC's first QTI 2.0 project. I also added on very simple assessment support (APIS only supported individual items) which I had developed for CCPlayr, my desktop Common Cartridge player. Although APIS was superseded by other JISC funded QTI players many years ago, I have continued to occasionally use it to experiment with ideas as we moved towards the final QTI 2.1 release, including the changes we made to ensure that the full functionality of the Common Cartridge was supported.

The combination of my LTI library code, the necessary code to link it to an assessment system and provide enough functionality to fully demonstrate how it could be used, my common cartridge assessment player and the updated APIS item player actually makes a fairly useful quiz system. I briefly demonstrated a development version of the system, LTIQuizzes, at the JISC/CETIS conference in Nottingham, and will be making it available as an extra deliverable from our current projects.

My initial plan for LTIQuizzes was that it would just be a QTI 2.1 Entry Level player - i.e. it would support the questions and tests from the QTI 2.1 version of Common Cartridge, but not much more. APIS does exceed the CC requirements by supporting multiple interactions in a single question, flexible response processing and feedback; however, I felt that Entry Level+ was a reasonable target. A burst of development activity on APIS over the last week has made me change my mind - I now plan to go well beyond QTI 2.1 Entry Level.

Until recently APIS did not support adaptive items, and had no concept of template items, which were not part of the first QTI 2.0 draft. Because LTIQuizzes is the only tool we have that integrates with VLEs using LTI at the moment, I have been trying to extend the range of items that we can demonstrate using it at workshops. I particularly wanted to support the items generated by Uniqurate. Over the last week I have finished support for adaptive items (basically a bug fix) and started to add template item support.

With these improvements to the APIS item player, I now think that, rather than labelling LTIQuizzes as an entry-level player, I should aim to make it an intermediate system capable of delivering any quiz that can be written with Uniqurate without invoking the expert mode. LTIQuizzes will probably never be able to deliver the full range of items that QTIWorks, JAssess and SToMP II support; however, it will be able to deliver a useful subset of them, and may well find a niche with users who do not need the advanced maths features of the other players, but are looking for a simple and robust system that integrates well with their VLE.

I'll be making LTIQuizzes available for download from the APIS sourceforge site within a few days, once I've integrated the APIS updates and added some installation documentation.

Tuesday, 26 June 2012

Invitation to Pre-Conference Workshop at CAA2012

You are invited to a pre-conference workshop before the CAA 2012 Conference http://caaconference.co.uk/ at the De Vere Grand Harbour Hotel, Southampton on Monday 9th July 2012, 10:00 – 16:00. This is the announcement that we are circulating:


We have recently been funded by JISC to disseminate the results of a number of recent projects on standards-based Assessment through the QTI-PET project, and as part of this activity we are holding a Workshop on the day before the CAA 2012 conference (http://caaconference.co.uk/), at the same venue.

The workshop will include introductions to some new tools being developed under the JISC funded projects QTIDI and Uniqurate:

  • A user-friendly editor called Uniqurate, which produces questions conforming to the Question and Test Interoperability specification, QTIv2.1,
  • A way of connecting popular VLEs to assessment delivery applications which display QTIv2.1 questions and tests – this connector itself conforms to the Learning Tools Interoperability specification, LTI,
  • A simple renderer, which can deliver basic QTIv2.1 questions and tests,
  • An updated version of our comprehensive renderer, which can deliver QTIv2.1 questions and tests and also has the capability to handle mathematical expressions.

There will be an opportunity to discuss participants’ assessment needs and to look at the ways these might be addressed using the applications we have available and potential developments which could be part of future projects.

We shall also demonstrate the features of the QTI Support site, created under the QTI-IPS project to help users to get started with QTI. This collection of tools, content and documentation is still growing, and we expect to add more features, prompted by the needs of our partners in the projects who are adopting the tools in their teaching.

Participants in the workshop are most welcome to join us as informal partners in QTI-PET.

Places are limited, so please register to attend the workshop by emailing Sue Milne sue.milne@e-learning-services.org.uk with your details as soon as possible.

Friday, 8 June 2012

Intermediate Mode for Uniqurate

I am pleased to announce the first release of the Intermediate mode of editing for Uniqurate.

The idea of a "halfway-house" mode dates back to the start of the project, and came about after consideration of what we could do with content that was authored in some other way (e.g. in another editor, or by hand). The difficulty is that QTI is essentially a programming language for electronic assessment, and there is always more than one way to skin the proverbial cat. For example, there are many ways that a multiple choice question could be implemented in QTI - Uniqurate does it one way, but there are many, many others. It would be impossible to map every single permutation of QTI that might represent an MCQ onto UQ's corresponding question component. Thus, at an early point it was decided that any content not created in UQ would have to be restricted to the XML-based Expert Mode editor.

Some time ago, Wilbert Kraan suggested a layer on top of the Expert mode that would hide certain aspects of the QTI XML, and supplement what was left with a few additional aids. Ultimately you'd still be editing the QTI directly, but it wouldn't seem so "frightening". We took to calling that the "halfway-house" mode.

With the launch of the QTI-PET project and the need to be able to provide a means of adding new context to existing content, this became even more important. We've presented a number of papers and demos on this theme. The tl;dr version is that we've got lots of QTI content, but much of it is written from a generic point of view, and is too dry to be truly engaging. Our colleagues at (say) Harper Adams could use much of it, but their students would react much better if it could have a few subject-specific hooks added just to give it an appropriate context.

Hence, the "halfway-house" - or what we're now calling Intermediate mode. If you switch to Expert mode and load a question, you'll notice a little icon at the top right of the screen. Click this, and all of the XML will be hidden apart from the human-readable parts.


The overall "tree" of the question is preserved and delimited by the dotted red lines - so, in the example above where a multiple choice question is being edited in Intermediate mode, you can see where the distractors' boundaries are with respect to the question body itself.

The rich-text editor is also brought over from the "friendly" mode editor, so that you can modify the style as well as the text, along with any maths components (you can add new maths components, too).

This has been tested in the big three browsers - i.e. relatively recent Firefox, Chrome and Internet Explorer 8 (don't get me started on the latter - had it not been for IE you could have had this yesterday! Is anyone still using IE?!).

As always, the latest version of Uniqurate can be found at


which will take you into "friendly" mode, so you will need to switch to Expert mode to find this new feature. This URL


will take you straight into expert mode.

Please give me as much feedback as you can! Reports on bugs, problems etc. are always "welcome" :) but of particular interest is the user experience. I am not convinced that a little button in Expert mode is the best place for Intermediate mode, and would welcome suggestions on where and how to place it.

Thursday, 31 May 2012

Enhanced item rendering in QTIWorks

After another flurry of activity, I've just finished and released the 8th development snapshot of QTIWorks for folk to play around with:

https://www2.ph.ed.ac.uk/qtiworks

This iteration of work has focused mainly on the delivery of single assessment items to candidates, joining up various bits of pipework that control this process and adding in some new rendering features and options that we'll need later for test rendering (and are quite useful in their own right too).

I'm afraid you'll still have to make do with playing around with the sample items that have already been loaded into the system for the time being. However, you'll probably be pleased to hear that the next work iteration will add in the first round of functionality for getting your own assessments into the system and doing fun things with them. (At last!) To make the wait a bit easier, I have gone through the existing samples and selected a reasonable set of options for how they should be delivered, which makes them much more fun to play with, as well as more illuminating to people who are less familiar with QTI.

Key changes in this release (1.0-DEV8)

  • Assessment item rendering now incorporates the following new features:
    1. Display of a model solution (via the QTI <correctResponse>).
    2. The ability to reset a session back to the state it was in immediately after the last run of template processing, which effectively clears all candidate input back to the original state, but leaves randomly-chosen things intact. Existing "reinit" functionality has been made a bit clearer.
    3. An explicit "closed" state, which is entered by the candidate, when an adaptive item becomes complete, or when the number of attempts hits your chosen value of maxAttempts. I think this will need a bit more work, as it was never implemented in QTIEngine or MathAssessEngine.
    4. A new "playback" feature lets the candidate step through every interaction they made with the item, which is a possible way of implementing the allowReview concept from test rendering.
  • A number of "knobs" are now available for controlling how you want to render a single item, including:
    • maxAttempts (as seen in tests)
    • author mode on/off (controls whether authoring hints are shown)
    • a simple prompt to be shown at the top of the question (similar to rubrics in tests, but simpler)
    • restrictions on what the candidate can do, including:
      • close a session explicitly when interacting
      • play a session back when closed
      • reinitialize a session when interacting or when closed
      • reset a session when interacting or when closed
      • see a model solution when interacting or when closed
      • see the result XML
      • see the item XML source
  • I have gone through all of the existing samples and set the above "knobs" to values that show each in their best light. For example, some suit a very formative "try as many times as you like" approach with rich feedback, others are more austere so are better displayed in a more rigid way. This should make the samples much more useful for people trying them out.
  • The authoring debugging information has been improved, and now shows bad responses (e.g. string submitted to a float), and invalid responses (e.g. wrong number of choices made).
  • There's now a rich database structure underpinning all of this, which records everything the candidate does and the changing QTI state during this process. This is currently used to implement the "playback" functionality, and will prove invaluable for analysing result data when the system delivers real tests.
  • The HTTP calls that control the typical delivery of a "candidate session" via a browser are now as RESTful as you would pragmatically expect in this type of scenario. A more formal RESTful web service API will be trivial to do from this, but I'm going to hold off until anyone actually needs it.
  • HTTP responses have been improved so that they always include the Content-Length headers. Responses that don't change also send cache-friendly HTTP headers such as ETag and Cache-Control.
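The ETag/Cache-Control behaviour mentioned in the last point is standard HTTP conditional-request fare: hash the unchanging body, advertise the hash, and answer 304 Not Modified when the client already holds it. Roughly, in generic Python terms (a sketch of the mechanism, not QTIWorks' actual Java code):

```python
import hashlib

def cache_headers(body: bytes):
    """Cache-friendly headers for a response whose body never changes:
    Content-Length plus an ETag derived from the bytes themselves."""
    return {
        "Content-Length": str(len(body)),
        "ETag": '"%s"' % hashlib.md5(body).hexdigest(),
        "Cache-Control": "max-age=3600",
    }

def respond(body, if_none_match=None):
    """Return (status, headers, body), answering 304 with an empty
    body when the client's cached copy is still current."""
    headers = cache_headers(body)
    if if_none_match == headers["ETag"]:
        return 304, headers, b""
    return 200, headers, body
```

For static rendering assets this saves the browser re-downloading unchanged content on every question attempt.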

Short term development roadmap

  • Next up is adding in the web-facing functionality for creating your own assessments in the system, uploading content into them, validating them and trying them out (using the various "knobs" I listed above).
  • After that it's finally time to implement the entry and exit points for candidates who get sent to QTIWorks to do an assessment. This is where LTI will come in.
  • Then... tests. At last!
And relax.

Thursday, 3 May 2012

QTI Works project area now on SourceForge

QTI Works now has a presence on SourceForge:

https://sourceforge.net/projects/qtiworks/

It's fairly vestigial at the moment, so don't get too excited! If you're a geek, you'll quickly notice that there is no source code there at the moment. I'm presently developing QTI Works within my own github area, and plan to push things over to SourceForge once we get past a few more milestones. Until then, you can get the source code at:

https://github.com/davemckain/qtiworks

Please don't expect any API stability at this point. There's also no build documentation yet, which probably doesn't help...!

Wednesday, 2 May 2012

Showcasing QTI 2.1 in QTI Works

As well as being a tool for managing, trying and delivering your own QTI assessments, QTI Works will also act as a showcase for QTI and the things it can do, much as the existing QTIEngine and MathAssessEngine already do.

With this in mind, I have started to feed Graham Smith's excellent collection of QTI examples into QTI Works, and plan to continue this process over the next few months while it takes shape.

These examples will be bundled into development snapshots of QTI Works for you to play around with. So far, I have assimilated three sets of examples:
  • The IMS standard examples (and a few extra bits)
  • Examples demonstrating the MathAssess QTI extensions
  • A small set of example items from language testing (German & Russian)
You can play around with these at: https://www2.ph.ed.ac.uk/qtiworks

(Yes, it does look very spartan at the moment, but don't worry about that!)

This exercise is actually very useful for this project for a number of other reasons:
  1. It provides me with some sample data for running automated "integration tests" on the JQTI+ / QTI Works software. (For example, I feed them through all of the code performing the various bits of QTI logic, such as reading in XML, validating the data models, writing out XML, running template processing, running response processing etc. This is invaluable for finding and fixing bugs, and for making sure we can handle "real world" examples properly.)
  2. As well as being useful for integration testing, they also help with so-called "regression testing", which helps make sure that I don't break things inadvertently during the development process.
  3. These examples have been around for a few years now, so this process is a good way of doing some QA on them and making sure they're right up to date with the QTI specification.
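Point 1 above amounts to a round-trip harness: read each sample in, push it through the QTI machinery, write it out again, and check nothing broke along the way. A toy stand-in for that kind of check (illustrative Python only - the real integration tests live in the JQTI+ / QTI Works codebase):

```python
import xml.etree.ElementTree as ET

def round_trips(xml_text):
    """Parse an item, serialise it, then re-parse: if both parses
    agree on the root element and child count, the read/write cycle
    is at least structurally lossless. A toy stand-in for the real
    integration tests described above."""
    first = ET.fromstring(xml_text)
    second = ET.fromstring(ET.tostring(first, encoding="unicode"))
    return first.tag == second.tag and len(first) == len(second)

sample = '<assessmentItem identifier="demo"><itemBody/></assessmentItem>'
```

Running every bundled sample through such a harness is what turns a showcase collection into a regression suite.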
Enjoy!

Monday, 16 April 2012

HEA-STEM Conference April 12-13 2012

The conference was held at Imperial College London, in several different buildings. The parallel sessions were organised in subject strands. Our presentation was in the Maths, Stats and OR strand in the last set of papers.


A common theme in the conference was the need for maths to be set in context for each discipline. The problem is compounded by students' surprise, in many subject areas, on finding that their course includes maths. Speaker after speaker reported that students engage better with maths in context, and there was a lively discussion about the most effective way of supporting students: is it better to have a subject specialist teach the maths they need for their course, or should a mathematician teach the maths? The conclusion was that there should be several people contributing to this teaching, and that if the support is removed from the location where the problem was presented, the student is likely to be less embarrassed and may seek help more readily.


Our paper demonstrated the facilities in the QTI tools for contextualising questions, and also featured the first appearance of the LTI connector embedded in an institutional Moodle - the University of Glasgow's Learning and Teaching Moodle instance.

LTIQuizzes update

During our presentation at the HEA STEM Conference I demonstrated LTIQuizzes through the VLE at Glasgow University. LTIQuizzes was running on our Amazon EC2 virtual server, but to the audience it looked as though I was simply setting up and using a normal Moodle module. For specialist software this is a great arrangement: as far as staff and students are concerned, it is part of the main VLE, yet it is safely isolated so that it doesn't put core services at risk, or require the same platform as the VLE. Moodle is a conventional LAMP stack application, while LTIQuizzes runs under Tomcat - they could run on the same server, but they use quite different technologies, so it is much nicer to keep them on separate machines.

LTIQuizzes isn't really intended for production use, but it does show what is possible. For now, storage is to files rather than a database, and the QTI engine is APIS, which supports only a subset of QTI. Database storage, full LTI 1.1 support, and the LTI 1.0 extensions for VLE persistence and course membership will all be supported very soon, and the LTI section of the code can easily be reused by other applications. (I'm also developing PHP and C# versions of the LTI classes.)
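At the heart of those reusable LTI classes is the OAuth 1.0 signing that secures an LTI 1.0 launch. The sketch below is not the actual LTIQuizzes code - the class name and parameter values are mine, and a real launch also carries oauth_timestamp, oauth_nonce and the user/course fields - but it shows how the signature base string is built and signed with HMAC-SHA1:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class LtiSignature {

    // OAuth 1.0 percent-encoding (RFC 3986): like URL encoding, but
    // spaces become %20, * is encoded, and ~ is left alone.
    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8)
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
    }

    // Build the signature base string and sign it with HMAC-SHA1.
    static String sign(String method, String url,
                       Map<String, String> params, String secret)
            throws Exception {
        TreeMap<String, String> sorted = new TreeMap<>(params);
        StringBuilder normalised = new StringBuilder();
        for (Map.Entry<String, String> e : sorted.entrySet()) {
            if (normalised.length() > 0) normalised.append('&');
            normalised.append(enc(e.getKey())).append('=')
                      .append(enc(e.getValue()));
        }
        String base = method.toUpperCase() + "&" + enc(url)
                + "&" + enc(normalised.toString());
        Mac mac = Mac.getInstance("HmacSHA1");
        // Key is consumer secret + "&" + token secret (empty for LTI).
        mac.init(new SecretKeySpec((enc(secret) + "&")
                .getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder().encodeToString(
                mac.doFinal(base.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> params = Map.of(
                "lti_message_type", "basic-lti-launch-request",
                "lti_version", "LTI-1p0",
                "resource_link_id", "rl-1",
                "oauth_consumer_key", "demo",
                "oauth_signature_method", "HMAC-SHA1");
        System.out.println(sign("POST",
                "http://example.org/lti/launch", params, "secret"));
    }
}
```

The tool provider repeats the same computation over the POSTed parameters and accepts the launch only if the signatures match - which is what makes the "safely isolated" deployment above trustworthy to the VLE.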

Thursday, 5 April 2012

Announcing QTI Works

Development work during the first few months of the QTIDI project has been progressing successfully, though you would certainly be forgiven for doubting this! As well as me being notoriously bad at finding time to write blog posts, the first development iterations on QTIDI have focused on "refactoring" all of the QTI-related software components we have been building and using on recent JISC projects so that they make a much better foundation for the technical goals of this project. (Rubbish analogy time: I've taken lots of random things, laid them on a carpet, and hit them with hammers until they break into little pieces. I'm now joining these pieces up to make something nice. Hopefully. The analogy is rubbish as I can't think of any examples where I've made something nice out of random bits on a carpet. Luckily, I'm better at doing this with software, so probably shouldn't have used this analogy in the first place.)

One of the key bits of work done so far has been a redesign of JQTI, which is the Java software library for "doing" QTI stuff that Southampton developed a few years ago. I've previously blogged about why I thought this is necessary (see http://davemckain.blogspot.co.uk/2011/06/refactoring-jqti-jqti.html) and the result of this is coming together under the not-very-original name of JQTI+. This refactoring work is now almost complete, with the exception of QTI tests, which JQTI never quite implemented fully and will be revisited in a few months.

On top of JQTI+, I'm building the replacement for MathAssessEngine that will become the main technical deliverable of this project. MathAssessEngine, which was based on the original QTIEngine, is also going to be torn apart and redesigned so that it can do all of the things it now needs to do, and do them all really well.
To reflect the scope of the work we're doing, we've decided to give the replacement for MathAssessEngine a completely new name and, after a couple of months of riotously bad naming attempts, we've decided to call it QTI Works.

I will deploy regular public development snapshots of QTI Works while it takes shape over the next few months, which you will be able to find at:

http://www2.ph.ed.ac.uk/qtiworks

If you remember to wear a hard hat, you can go in and try the first development snapshot now. This showcases the brand new (and vastly improved) QTI 2.1 validation functionality in JQTI+, as well as demonstrating the newly-refactored rendering and delivery of QTI 2.1 assessment items. (You'll have to make do with a selection of sample items for the time being... the functionality that will allow you to upload, validate and try out your own items is still written on the back of some envelopes. This will turn into real code that you can play with during the next development iteration. Hopefully!)

Tuesday, 20 March 2012

Demos at the CETIS Conference

QTIDI and its sister project Uniqurate were represented at the CETIS Conference in Nottingham in February 2012. We demonstrated:

  • The Learning Tools Interoperability (LTI) Connector running a Common Cartridge test;

  • The Uniqurate question authoring tool, which also has facilities for creating a content package containing a question and its associated media and/or stylesheets;

  • Interoperability between our tools and those of colleagues from Germany, France and Korea as well as the UK, using the latest version of MathAssessEngine, which is currently morphing into QTI Works.

QTIDI - Connecting QTI Assessment to VLEs

The QTIDI project, funded by JISC under the Assessment and Feedback programme, is preparing a package of software and documentation for transferring to institutions in the HE and FE sectors the QTI Works rendering and responding engine - a direct descendant of MathAssessEngine - together with software to link it to popular VLEs. The project will provide a documented and packaged version of the tools for distribution to Kingston University, Harper Adams University College and the University of Strathclyde. Should other institutions wish to join the project as receiving partners at a later date, they will be made welcome. Alongside it, we have the Uniqurate project, which is building on the Aqurate and Mathqurate editors to create a more intuitive, user-friendly authoring tool for QTI v2.1 questions. Eventually it will also incorporate the functionality of Spectatus to provide a one-stop editor for questions and tests.

Here's a quick roundup from Niall and David about what’s going on in QTIDI:

Niall: “I have been working on getting a good IMS LTI example working in Java. (The IMS example software is written in PHP.) A basic LTI 1.0 test with consumer (LMS) and tool components is almost complete. I'll be writing some developer documentation for LTI containing this code as well as C# and PHP example code. I have also (with help from Dave) set up a MathAssessEngine NetBeans project on my laptop. I have also been taking part in the IMS QTI working group meetings.”

David: “Most of the work so far has been continued refactoring of JQTI to make it more sensible, and an initial webapp framework for SonOf(QTI|MA)Engine. So, all this first iteration is going to do is provide a "validation service". This will let you upload a standalone item XML or item/test Content Package, which will then be fully validated and will generate a summary report. Once that's in place, next iteration will be refactoring the rendering of standalone items, as well as a first cut of the REST API for that, which will be the way in for LTI.”
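To give a flavour of what that validation service's summary report might contain, here is a minimal sketch. The class name is hypothetical and this uses plain JAXP well-formedness checking only - the real JQTI+ validation goes far beyond this, checking the QTI data model itself - but it shows the basic idea of turning parse problems into report lines:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.InputSource;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class ValidationSketch {

    // Parse the XML and reduce the outcome to a one-line report.
    static String validate(String xml) {
        try {
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            builder.setErrorHandler(new DefaultHandler()); // keep stderr quiet
            builder.parse(new InputSource(new StringReader(xml)));
            return "OK";
        } catch (SAXParseException e) {
            return "Error at line " + e.getLineNumber() + ": " + e.getMessage();
        } catch (Exception e) {
            return "Error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(validate("<assessmentItem identifier=\"q1\"/>"));
        System.out.println(validate("<assessmentItem>")); // unclosed tag
    }
}
```

In the real service the report would cover schema validity and the QTI data model, not just well-formedness, and would be generated for whole content packages as well as standalone items.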

Note that the original MathAssessEngine will be replaced during this project, so the appearance and the way things get uploaded will change. The test mode also needs improvement, so there will be some changes there too. This means, of course, that the most useful feedback from clients at the moment is "would like to see" suggestions (and the reasons for them), which can then be matched against the specification to make sure things work the way they should, rather than reports that amount to a debugging exercise.